CN117934317A - Multi-sensor-based underwater robot online positioning method - Google Patents

Info

Publication number
CN117934317A
Authority
CN
China
Prior art keywords
acquired image
image
index
current moment
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410089165.4A
Other languages
Chinese (zh)
Inventor
陈晓博
曹颖
冯翠芝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shihang Huayuan Technology Co ltd
Suzhou Shihang Intelligent Technology Co ltd
Beijing Shihang Intelligent Technology Co ltd
Original Assignee
Beijing Shihang Huayuan Technology Co ltd
Suzhou Shihang Intelligent Technology Co ltd
Beijing Shihang Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Shihang Huayuan Technology Co ltd, Suzhou Shihang Intelligent Technology Co ltd, Beijing Shihang Intelligent Technology Co ltd filed Critical Beijing Shihang Huayuan Technology Co ltd
Priority to CN202410089165.4A priority Critical patent/CN117934317A/en
Publication of CN117934317A publication Critical patent/CN117934317A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of underwater robot positioning, in particular to a multi-sensor-based online positioning method for an underwater robot. While the underwater robot travels to the target operation area, the method obtains a noise estimation value from the dispersion of the gray distribution of the acquired images' pixel points between acquisition moments and from the time-series change in the relation between the distribution of the target operation area and the positioning distance. Connected regions are obtained from the edge distribution of the target operation area, divided regions at the same positions are obtained based on pixel similarity, and an edge influence index is obtained from the differences between the divided regions and the connected regions at the same positions. A noise index is then obtained from the noise estimation value and the edge influence index, the acquired image is filtered accordingly, and the result assists robot positioning. The invention improves image quality by accounting for the degree of noise influence on the target operation area both during motion and locally, strengthens the representation of the area's position in the image, and improves the positioning accuracy and reliability of the multi-sensor underwater robot.

Description

Multi-sensor-based underwater robot online positioning method
Technical Field
The invention relates to the technical field of underwater robot positioning, in particular to an online underwater robot positioning method based on multiple sensors.
Background
With the development of technology, underwater robots are applied ever more widely, for example in ocean exploration, search-and-rescue tasks and underwater maintenance. Online positioning of an underwater robot is realized by multi-sensor fusion; typical sensors include sonar, an inertial measurement unit, a vision sensor and a magnetometer. Fusing the data of multiple sensors improves the positioning accuracy and robustness of the underwater robot, so that tasks can be completed more efficiently.
Due to the complexity and uncertainty of the underwater environment, sound transmission during sonar detection is influenced by the medium; for example, fish shoals introduce errors into the position information of the target area, so a vision sensor must be combined with the sonar to position the underwater robot accurately. However, multi-sensor fusion can introduce impulse noise into the images acquired by the vision sensor, lowering image quality. Conventional underwater filtering methods do not consider how the robot's motion changes as it travels to the operation area, so the operation area used to assist positioning is poorly enhanced and cannot be located accurately. The resulting low quality of the filtered images acquired by the vision sensor lowers the accuracy of multi-sensor collaborative positioning and thus reduces the positioning precision and reliability of the underwater robot.
Disclosure of Invention
In order to solve the technical problem in the prior art that the low quality of the filtered images acquired by the vision sensor lowers the accuracy of multi-sensor collaborative positioning and thereby reduces the positioning precision and reliability of the underwater robot, the invention aims to provide a multi-sensor-based online positioning method for an underwater robot, which adopts the following technical scheme:
The invention provides an on-line positioning method of an underwater robot based on multiple sensors, which comprises the following steps:
Acquiring an acquisition image and a positioning distance at each acquisition time in the process that the underwater robot moves to a target operation area;
Obtaining an image characteristic value of each acquired image according to the complexity of the gray distribution of the pixel points in the target operation area corresponding to each acquired image; obtaining a noise estimation value of the acquired image at the current moment according to the degree of dispersion of the image characteristic value changes among different acquired images and the change relation, over all acquisition moments, between the distribution of the target operation area in the acquired image and the positioning distance;
Obtaining connected regions in the acquired image according to the edge distribution in the target operation area in the acquired image at the current moment; dividing the target operation area again based on pixel point similarity, in combination with the position of each connected region in the acquired image at the current moment, to obtain a divided region corresponding to each connected region; obtaining an edge influence index of the acquired image at the current moment according to the differences in distribution size, gray level and salient-edge strength between the divided regions and the connected regions at the same positions in the acquired image at the current moment;
Acquiring a noise index of the acquired image at the current moment according to the noise estimated value and the edge influence index of the acquired image at the current moment; filtering the acquired image at the current moment according to the noise index to obtain an enhanced underwater image; and positioning the underwater robot according to the enhanced underwater image.
Further, the positioning distance is the distance between the underwater robot and the target operation area at each acquisition time.
Further, the method for acquiring the image characteristic value comprises the following steps:
for any one acquired image, acquiring the occurrence frequency of each gray level in a target operation area of the acquired image; calculating information entropy of all occurrence frequencies in a target operation area of the acquired image, and obtaining a pixel type distribution characteristic value of the acquired image;
Calculating the average value of all pixel gray values in a target operation area of the acquired image to obtain a pixel gray characteristic value of the acquired image;
Obtaining an image characteristic value of the collected image according to the pixel type distribution characteristic value and the pixel gray characteristic value of the collected image; the pixel type distribution characteristic value and the pixel gray characteristic value are positively correlated with the image characteristic value.
Further, the method for obtaining the noise estimation value comprises the following steps:
Counting the total number of the pixel points corresponding to the target operation area in each acquired image to obtain the distribution area of each acquired image; arranging the distribution areas at all acquisition moments according to a time sequence order to obtain an area sequence; arranging the positioning distances at all the acquisition moments according to a time sequence order to obtain a distance sequence;
Calculating covariance of the area sequence and the distance sequence to obtain variation correlation of the acquired image at the current moment; taking the product of the standard deviation of the area sequence and the standard deviation of the distance sequence as the change degree value of the acquired image at the current moment; carrying out normalization processing on the ratio of the change correlation to the change degree value to obtain a change correlation index of the acquired image at the current moment;
Under the time sequence, calculating the difference of the image characteristic values between every two adjacent acquired images to obtain the characteristic change difference; calculating variances of all feature variation differences to obtain variation discreteness indexes of the acquired images at the current moment;
Obtaining a noise estimation value of the acquired image at the current moment according to the change association index and the change discreteness index of the acquired image at the current moment; the change relevance index and the change discreteness index are positively correlated with the noise estimation value.
Further, the method for obtaining the divided regions includes:
Acquiring the centroid of each connected region in the acquired image at the current moment; performing superpixel segmentation on the target operation area of the acquired image based on the positions of the centroids, and taking the range of each superpixel obtained by segmentation as the divided region of the connected region corresponding to its centroid.
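As an illustrative sketch of this step (all names are hypothetical, and numpy is assumed), the centroid-seeded re-division can be approximated by assigning each target-area pixel to its nearest connected-region centroid, a Voronoi-style partition. The patent itself calls for superpixel segmentation (e.g. SLIC) seeded at the centroids, which would additionally weigh pixel-value similarity; this simplification keeps only the spatial part:

```python
import numpy as np

def divide_by_centroids(region_mask, centroids):
    """Re-divide the target-area pixels around connected-region centroids.
    Simplified stand-in: a nearest-centroid (Voronoi) partition of the
    masked pixels; the patent uses centroid-seeded superpixel segmentation,
    which additionally weighs pixel similarity."""
    ys, xs = np.nonzero(region_mask)             # coordinates inside target area
    pts = np.stack([ys, xs], axis=1).astype(np.float64)
    c = np.asarray(centroids, dtype=np.float64)  # one centroid per connected region
    # squared distance of every masked pixel to every centroid
    d2 = ((pts[:, None, :] - c[None, :, :]) ** 2).sum(axis=2)
    labels = np.full(region_mask.shape, -1, dtype=int)
    labels[ys, xs] = d2.argmin(axis=1)           # index of the nearest centroid
    return labels                                # -1 outside the target area
```

Each label value then identifies the divided region paired with the connected region whose centroid seeded it.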
Further, the method for obtaining the edge influence index comprises the following steps:
Taking each connected region and the corresponding divided region in the acquired image at the current moment as a region pair;
counting the number of pixel points of each region in any region pair in the acquired image at the current moment to obtain the distribution number of each region; taking the difference of the distribution quantity between two areas in the area pair as a distribution difference index of the area pair;
Calculating the gray average value of the pixel points of each region in the pair of regions to obtain the gray average value of each region; taking the difference of the gray average values between two areas in the pair of areas as a gray difference index of the pair of areas;
acquiring gradient values of edge points of each region in the region pair, and calculating average values of the gradient values of all the edge points in each region in the region pair to obtain gradient average values of each region; taking the difference of the gradient mean values between the two regions in the region pair as a gradient difference index of the region pair;
obtaining the difference index of the region pair according to the distribution difference index, the gray level difference index and the gradient difference index of the region pair; the distribution difference index, the gray level difference index and the gradient difference index are positively correlated with the difference index;
and calculating the average value of the difference indexes of all the region pairs in the acquired image at the current moment to obtain the edge influence index of the acquired image at the current moment.
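A minimal sketch of the region-pair comparison follows, assuming numpy and hypothetical names. Combining the three difference terms by addition is one simple choice satisfying the stated positive correlation, and averaging the gradient magnitude over all region pixels stands in for the patent's average over edge points:

```python
import numpy as np

def pair_difference_index(gray, conn_mask, div_mask):
    """Difference index of one region pair (connected region, divided
    region): absolute differences of pixel count, mean gray value and mean
    gradient magnitude, combined by addition (one simple positive
    combination; the patent only requires positive correlation)."""
    gy, gx = np.gradient(gray.astype(np.float64))
    grad = np.hypot(gy, gx)                      # gradient magnitude per pixel

    def region_stats(mask):
        # distribution number, gray mean, gradient mean of one region
        return mask.sum(), gray[mask].mean(), grad[mask].mean()

    n1, m1, g1 = region_stats(conn_mask)
    n2, m2, g2 = region_stats(div_mask)
    return abs(n1 - n2) + abs(m1 - m2) + abs(g1 - g2)

def edge_influence_index(gray, region_pairs):
    """Edge influence index: mean difference index over all region pairs."""
    return float(np.mean([pair_difference_index(gray, c, d)
                          for c, d in region_pairs]))
```

Identical connected and divided regions give a zero difference index, so the edge influence index grows only where the edge-based and similarity-based divisions disagree.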
Further, the method for obtaining the noise index comprises the following steps:
Calculating the two-norm (L2 norm) of the noise estimation value and the edge influence index of the acquired image at the current moment to obtain the noise index of the acquired image at the current moment.
Further, filtering the acquired image at the current moment according to the noise index to obtain an enhanced underwater image, including:
And taking the noise index as the filtering kernel size of the Gaussian filter, and then carrying out filtering operation on the acquired image at the current moment to obtain the enhanced underwater image.
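A hedged sketch of these two steps, assuming numpy and hypothetical names. How the (generally non-integer) noise index is discretized into a kernel size, and how sigma is derived from that size, are not specified in the patent, so the odd-rounding and the OpenCV-style size-to-sigma rule below are assumptions:

```python
import numpy as np

def noise_index(noise_est, edge_idx):
    """Noise index: two-norm of the noise estimation value and the
    edge influence index."""
    return float(np.hypot(noise_est, edge_idx))

def filter_with_noise_index(img, idx):
    """Gaussian-filter the current acquired image with a kernel size driven
    by the noise index (assumed discretization: nearest odd integer >= 3)."""
    k = max(3, int(round(idx)) | 1)              # odd kernel size >= 3
    sigma = 0.3 * ((k - 1) * 0.5 - 1) + 0.8      # OpenCV's default size->sigma mapping
    ax = np.arange(k) - k // 2
    g = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    g /= g.sum()                                 # normalized 1-D Gaussian
    pad = k // 2
    padded = np.pad(np.asarray(img, dtype=np.float64), pad, mode='edge')
    # separable convolution: rows first, then columns
    rows = np.apply_along_axis(lambda r: np.convolve(r, g, mode='valid'), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, g, mode='valid'), 0, rows)
```

A larger noise index thus yields a larger kernel and stronger smoothing, matching the intent that heavily noise-affected images are filtered more aggressively.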
Further, the positioning of the underwater robot according to the enhanced underwater image comprises:
And inputting the enhanced underwater image into the trained neural network, and outputting the relative position information of the underwater robot relative to the target operation area.
Further, the obtaining the connected regions in the acquired image according to the edge distribution in the target operation area of the acquired image at the current moment includes:
acquiring the edges in the acquired image at the current moment through an edge detection algorithm; within the target operation area, taking the regions enclosed by the edges as the connected regions of the acquired image.
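The edge detector is left unspecified here (Canny is typical); as a self-contained sketch under that caveat, the following treats pixels with large gradient magnitude as edges and groups the remaining pixels into 4-connected regions by breadth-first search. The threshold value and function name are assumptions:

```python
import numpy as np
from collections import deque

def connected_regions(gray, grad_thresh=30.0):
    """Connected regions delimited by detected edges. Sketch only: edges
    are pixels whose gradient magnitude exceeds grad_thresh (a hypothetical
    threshold standing in for a full edge detector such as Canny); non-edge
    pixels are grouped by 4-connectivity breadth-first search."""
    gy, gx = np.gradient(gray.astype(np.float64))
    edges = np.hypot(gy, gx) > grad_thresh
    labels = np.full(gray.shape, -1, dtype=int)  # -1 marks edge/unvisited pixels
    h, w = gray.shape
    count = 0
    for sy in range(h):
        for sx in range(w):
            if edges[sy, sx] or labels[sy, sx] != -1:
                continue
            labels[sy, sx] = count               # start a new region
            q = deque([(sy, sx)])
            while q:
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and not edges[ny, nx] and labels[ny, nx] == -1):
                        labels[ny, nx] = count
                        q.append((ny, nx))
            count += 1
    return labels, count
```

An image split by a sharp vertical intensity step, for instance, yields two connected regions separated by a band of edge pixels.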
The invention has the following beneficial effects:
The method considers the superimposed influence of noise while the underwater robot travels to the target operation area. The noise estimation value of the acquired image at the current moment is obtained from the dispersion of the gray distribution of the acquired images' pixel points between acquisition moments and from the time-series change in the relation between the distribution of the target operation area in the acquired image and the positioning distance, so that the noise influence on the analyzed area is identified from the change rule during motion and targeted at the degree to which the acquired target operation area is affected by noise. Further considering that the target operation area contains several sub-area details, the differences between regions are obtained through two divisions of the target operation area, edge division and pixel-similarity division, and the noise influence on edge acquisition is analyzed in detail; that is, the edge influence index of the acquired image at the current moment is obtained from the differences in distribution size, gray level and salient-edge strength between the divided regions and the connected regions at the same positions in the acquired image at the current moment. Finally, the global analysis and the local analysis, namely the noise estimation value and the edge influence index, are combined into the noise index, with which the acquired image at the current moment is better filtered, and the enhanced underwater image is obtained for robot positioning.
By analyzing the degree to which the change rule of the target operation area is affected during motion and the degree to which local edge detection is affected, the invention adjusts the filtering process comprehensively, obtains a higher-quality filtered image, locates the underwater operation area more accurately, improves the accuracy of multi-sensor collaborative positioning, and further improves the positioning precision and reliability of the underwater robot.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of an online positioning method of an underwater robot based on multiple sensors according to an embodiment of the present invention.
Detailed Description
In order to further explain the technical means and effects adopted by the invention to achieve the preset aim, the following is a detailed description of the specific implementation, structure, characteristics and effects of the multi-sensor-based on-line positioning method for the underwater robot according to the invention, which is provided by the invention, with reference to the accompanying drawings and the preferred embodiment. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The invention provides a multi-sensor-based underwater robot online positioning method.
Referring to fig. 1, a flowchart of an on-line positioning method for an underwater robot based on multiple sensors according to an embodiment of the present invention is shown, and the method includes the following steps:
S1: and acquiring an acquisition image and a positioning distance at each acquisition time in the process that the underwater robot moves to the target operation area.
The on-line positioning of the underwater robot is realized by a multi-sensor fusion mode, and mainly comprises a sonar sensor, a visual sensor, an inertial measurement element and the like, wherein the visual sensor captures image information of the underwater environment through a camera, can provide the position of the robot relative to the environment of a working area, promotes accurate positioning and navigation, and can compensate the limitations of other sensors, for example, when the robot approaches the working area, the underwater sonar is limited by resolution and environmental noise and can not position more detailed positions, and the visual sensor provides higher resolution and detailed information, so that the target working area can be positioned more accurately. The target operation area depends on the requirements and target characteristics of the work task, for example, in marine resource investigation, the target operation area may be a specific submarine deposit area, and the target operation area may be marked by sending sound waves by using sonar equipment and receiving the sound back before the operation is performed, so that the underwater robot can perform further tracking and exploration.
In the embodiment of the invention, a sonar coordinate system and a geodetic coordinate system can be mapped into the same coordinate system through a coordinate system conversion matrix to perform preliminary positioning on the underwater robot and descend to a position required by the target operation area at a uniform speed for operation, but due to interference of different media, position information in the coordinate system may have errors, and accurate positioning on the underwater robot is required by fusing data of the multiple sensors. When the underwater robot reaches the vicinity of the target area, the outline of the marked target operation area needs to be accurately identified by utilizing a visual sensor, so that the accurate positioning of the underwater robot is facilitated. It should be noted that, the positioning process by using the sonar coordinates is a technical means well known to those skilled in the art, and is not described in detail herein.
The marked target operation area can be directly acquired in the acquired image based on the positioning process, but because the prior marking is a relatively fuzzy marking process, the range of the specific target operation area needs to be acquired in the image by more accurate enhancement analysis. In the embodiment of the invention, the vision sensor suitable for the underwater environment is selected, such as an underwater camera or an underwater optical sensor, and the sensors are generally provided with a waterproof design, can stably operate in the underwater environment, are installed on the underwater robot, can normally perform necessary integrated work, and can cooperatively cooperate with other systems of the robot. Wherein parameters of the sensor, such as exposure time, frame rate, and focus, are configured to suit specific underwater environment and task requirements. The control system on the underwater robot can be connected with the sensor through a cable or wireless communication to realize real-time image transmission so as to acquire instant visual feedback when the underwater robot operates, and the image data is transmitted to a surface ship or a ground station through a wireless communication channel. It should be noted that the specific configuration and the implementation of the obtaining process may be adjusted according to the specific implementation, which is not limited herein.
Through the configured vision sensor, an acquired image and a positioning distance are obtained at each acquisition moment while the underwater robot moves to the target operation area. In one embodiment of the invention, the motion track of the robot is estimated by the inertial measurement unit on the underwater robot, and images of the marked area are acquired before the underwater robot enters the target operation area. The acquisition frequency is set to once per second; that is, during the motion an image is acquired every second and converted to grayscale, yielding the acquired image at each acquisition moment before the current moment. Because the distance between the underwater robot and the target operation area influences how the area appears in the acquired image (by the near-large, far-small characteristic, the closer the robot approaches the target operation area, the larger the portion of the acquired image the area occupies), the distance between the underwater robot and the target operation area at each acquisition moment is taken as the positioning distance at that moment. It should be noted that the distance may be obtained, for example, by a depth sensor, which is a technical means well known to those skilled in the art and is not described herein. In other embodiments of the invention, the distance the underwater robot has descended may be used as the positioning distance, which mainly reflects the degree of change of the movement distance.
So far, the image and the position information acquired by the underwater robot in the motion process are obtained.
S2: obtaining an image characteristic value of each acquired image according to the complex condition of the gray distribution of the pixel points in the target operation area corresponding to each acquired image; and under all acquisition time points, obtaining the noise estimation value of the acquisition image at the current time point according to the discrete degree of the image characteristic values among different acquisition images and the change relation between the distribution condition and the positioning distance of the target operation area in the acquisition image along with the acquisition time point.
First, the degree of noise influence on the images acquired by the underwater robot over the whole motion process is estimated. Since the subsequent image-assisted positioning is mainly based on the target operation area, the noise condition of the target operation area part must be focused on during image analysis. Therefore, the image characteristic value of each acquired image is first obtained according to the degree of disorder of the pixel point distribution and the gray distribution of the corresponding target operation area in each acquired image.
Preferably, for any one acquired image (all acquired images are analyzed in the same way), the occurrence frequency of each gray level in the target operation area of the acquired image is obtained, the information entropy of all occurrence frequencies in the target operation area is calculated, and the pixel type distribution characteristic value of the acquired image is obtained. The information entropy of the occurrence frequencies reflects how evenly the pixels are spread over the gray levels: the larger the pixel type distribution characteristic value, the more disordered the distribution of pixels over different gray levels in the target operation area. The pixel type distribution characteristic value thus characterizes the uniformity of the pixel distribution.
Further, the average of the gray values of all pixel points in the target operation area of the acquired image is calculated to obtain the pixel gray characteristic value of the acquired image. The gray mean reflects the overall trend of the gray values of the pixel points in the target area: the larger the pixel gray characteristic value, the higher the overall gray level of the pixel points in the target operation area. The pixel gray characteristic value thus characterizes the gray distribution trend of the pixel points.
Finally, according to the pixel type distribution characteristic value and the pixel gray characteristic value of the acquired image, the image characteristic value of the acquired image is obtained, and the image characteristic value reflecting the gray characteristic of the pixel points in the target area is obtained by combining the uniform distribution condition of the pixel points and the gray value distribution trend. The pixel type distribution characteristic value and the pixel gray characteristic value are positively correlated with the image characteristic value, and in the embodiment of the invention, the expression of the image characteristic value is as follows:
E_t = [ −Σ_{i=1}^{n} (g_i/H)·ln(g_i/H) ] × [ (1/H)·Σ_{r=1}^{H} I_r ]
wherein E_t is the image characteristic value of the t-th acquired image; n is the total number of gray levels of the pixel points in the target operation area corresponding to the t-th acquired image; H is the total number of pixel points in that target operation area; g_i is the number of pixel points at the i-th gray level in that target operation area; I_r is the gray value of the r-th pixel point in that target operation area; and ln is the natural logarithm.
Here, g_i/H is the occurrence frequency of the i-th gray level in the target operation area of the t-th acquired image; −Σ_{i=1}^{n} (g_i/H)·ln(g_i/H) is the pixel type distribution characteristic value of the t-th acquired image; and (1/H)·Σ_{r=1}^{H} I_r is its pixel gray characteristic value. The multiplication reflects that the pixel type distribution characteristic value and the pixel gray characteristic value are both positively correlated with the image characteristic value; in other embodiments of the invention, other basic mathematical operations, such as addition or a power operation, may be used to reflect this positive correlation, which is not limited herein.
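Under the definitions above, the image characteristic value can be computed directly; a minimal sketch in Python (function name assumed), with the entropy term and the gray-mean term combined by multiplication as in the formula:

```python
import numpy as np

def image_characteristic_value(region_gray):
    """Image characteristic value E_t of one acquired image's target
    operation area: the gray-level information entropy (pixel type
    distribution characteristic value) multiplied by the mean gray value
    (pixel gray characteristic value)."""
    region_gray = np.asarray(region_gray, dtype=np.float64).ravel()
    H = region_gray.size                        # total pixels in the target area
    _, counts = np.unique(region_gray, return_counts=True)
    freq = counts / H                           # occurrence frequency of each gray level
    entropy = -np.sum(freq * np.log(freq))      # pixel type distribution characteristic value
    mean_gray = region_gray.mean()              # pixel gray characteristic value
    return float(entropy * mean_gray)           # positive correlation via multiplication
```

A uniform region (a single gray level) has zero entropy and therefore a zero image characteristic value, while a region mixing many gray levels at a high mean gray scores high.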
After the image characteristic value representing the gray features of each acquired image is obtained, the analysis proceeds from the change of the image characteristic value between acquisition moments in time sequence. As the underwater robot continuously approaches the target operation area, the change of the image characteristic value between acquired images tends to become stable; moreover, owing to the near-large, far-small visual characteristic, the distribution area of the target operation area in the acquired image gradually increases as the distance decreases, i.e., the positioning distance and the area of the region have a strong linear relation. When the image characteristic value changes discretely and the linear relation between the positioning distance and the distribution of the target operation area is weak, the noise influence suffered in the process is large. Therefore, over all acquisition moments, the noise estimation value of the acquired image at the current moment is obtained from the degree of dispersion of the image characteristic values among the acquired images and the change relation between the distribution of the target operation area in the acquired image and the positioning distance over the acquisition moments.
Preferably, the total number of the corresponding pixel points of the target operation area in each collected image is counted, the distribution area of each collected image is obtained, and the distribution area condition of the target operation area is reflected through the number of the pixel points. The distribution areas under all the acquisition moments are arranged according to the time sequence to obtain an area sequence, the positioning distances under all the acquisition moments are arranged according to the time sequence to obtain a distance sequence, and the degree of change correlation between the areas and the distances is reflected through the time sequence.
Calculating the covariance of the area sequence and the distance sequence yields the change correlation of the acquired image at the current moment; the covariance characterizes the linear relation of the time-series changes, and owing to the near-large, far-small visual characteristic, area and distance are negatively correlated over time, i.e., the normalized covariance approaches minus one. The product of the standard deviation of the area sequence and the standard deviation of the distance sequence is taken as the change degree value of the acquired image at the current moment, and the ratio of the change correlation to the change degree value is normalized to obtain the change association index of the acquired image at the current moment; dividing by the change degree value removes the influence of the magnitude of change, so that the covariance value is not affected by the scale of the changes.
Under the time sequence, calculating the difference of image characteristic values between every two adjacent acquired images to obtain characteristic change difference, calculating the variance of all characteristic change differences in the process that the underwater robot is continuously close to the target operation area to obtain the change discreteness index of the acquired images at the current moment, wherein the larger the change discreteness index is, the more discrete the characteristic change is in time sequence, and the more noise interference factors possibly cause the disorder of normal characteristic change.
Finally, the noise estimation value of the acquired image at the current moment is obtained from the change association index and the change discreteness index of the acquired image at the current moment, i.e. the overall degree to which the acquired image is affected by noise during the motion up to the current moment is obtained from the relation between the feature change of the area and the positioning distance during the motion process. In the embodiment of the invention, the expression of the noise estimation value is:

$$B = \operatorname{Norm}\!\left(\frac{\operatorname{Cov}(p,q)}{\sigma_p \times \sigma_q}\right) \times \frac{1}{m}\sum_{l=1}^{m}\left(\Delta e_l - \overline{\Delta e}\right)^{2}$$

where $B$ is the noise estimation value of the acquired image at the current moment, $\Delta e_l$ is the $l$-th feature change difference, $m$ is the total number of feature change differences, $\overline{\Delta e}$ is the average of all feature change differences, $p$ is the area sequence, $q$ is the distance sequence, $\sigma_p$ is the standard deviation of the area sequence, $\sigma_q$ is the standard deviation of the distance sequence, $\operatorname{Cov}()$ is the covariance calculation, and $\operatorname{Norm}()$ is the normalization function.

Here $\operatorname{Cov}(p,q)$ is the covariance of the area sequence and the distance sequence, i.e. the change correlation; $\sigma_p \times \sigma_q$ is the change degree value; and the normalized ratio is the change association index. The change association index reflects the degree of negative correlation between area and distance: the larger the change correlation, the weaker the negative correlation, the more noise influence factors there are, and the larger the change association index. The term $\frac{1}{m}\sum_{l=1}^{m}(\Delta e_l - \overline{\Delta e})^{2}$ is the change discreteness index: the larger it is, the greater the influence of noise on the image and the more chaotic the image feature change, so the larger the noise estimation value. In other embodiments of the present invention, other basic mathematical operations, such as addition, may be used so long as the change association index and the change discreteness index are both positively correlated with the noise estimation value, which is not described here. It should be noted that, in the embodiment of the present invention, all values involved in the calculation are subjected to dimensionality-removal processing to eliminate the influence of different dimensions; dimensionality removal is a technical means well known to those skilled in the art and is not described in detail.
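Combining the two factors multiplicatively, as in the embodiment above (the text also permits addition), a hedged, self-contained sketch of the noise estimation value is:

```python
import numpy as np

def noise_estimate(feature_values, areas, distances):
    """B = (change association index) * (change discreteness index).

    The [0, 1] normalization of the correlation ratio is an assumption;
    the patent only states that a normalization is applied.
    """
    p = np.asarray(areas, dtype=float)
    q = np.asarray(distances, dtype=float)
    r = np.cov(p, q, bias=True)[0, 1] / (p.std() * q.std())
    association = (r + 1.0) / 2.0                   # assumed normalization
    diffs = np.diff(np.asarray(feature_values, dtype=float))
    discreteness = diffs.var()                      # variance of feature change differences
    return association * discreteness
```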
In other embodiments of the present invention, if the positioning distance is instead the distance the underwater robot has already descended, distance and distribution area are in a positive linear relationship: the larger the change correlation, the stronger the positive correlation, so the change correlation and the noise estimation value should then be negatively correlated in order to reflect the degree of noise influence.
At this point, the analysis of the overall change during the motion process is complete, and the intensity of the noise influence on the current motion of the underwater robot has been obtained.
S3: according to the edge distribution in the target operation area in the acquired image at the current moment, the connected areas in the acquired image are obtained; combining the position of each connected area in the acquired image at the current moment, the target operation area is divided again based on pixel point similarity to obtain a division area corresponding to each connected area; and the edge influence index of the acquired image at the current moment is obtained according to the differences in distribution size, gray level and edge saliency between the division areas and the connected areas at the same positions in the acquired image at the current moment.
After approaching the target operation area, accurate positioning must be performed from the target operation area in the acquired image, so the contours within the target operation area should be complete and clear; however, equipment limitations and the underwater environment blur the edge contours and degrade image quality. Meanwhile, different surface conditions, i.e. different areas, exist within the target operation area. To facilitate subsequent accurate control of the underwater robot, the degree to which noise affects the edges of these different areas must be analysed; therefore the edge acquisition within the target operation area of the acquired image at the current moment is further analysed locally to determine how the edges are affected by noise.
Firstly, edge analysis is carried out on the acquired image at the current moment to preliminarily obtain the different distribution areas, i.e. the connected areas in the acquired image are obtained according to the edge distribution in the target operation area of the acquired image at the current moment. In one embodiment of the invention, the edges in the acquired image at the current moment are acquired through an edge detection algorithm, and each range enclosed by the edges is taken as a connected area of the acquired image within the target operation area. It should be noted that edge detection algorithms, such as the Canny edge detection algorithm, are technical means well known to those skilled in the art and are not described here.
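Where the text names Canny, the sketch below substitutes a plain Sobel gradient-magnitude threshold so that it runs on NumPy/SciPy alone; the threshold value and the use of `scipy.ndimage.label` to label the enclosed ranges are assumptions, not details from the patent:

```python
import numpy as np
from scipy import ndimage

def connected_areas(gray_roi, edge_thresh=50.0):
    """Label the regions enclosed by edges inside the target operation area.

    Stand-in for the Canny-based step: pixels whose Sobel gradient magnitude
    exceeds `edge_thresh` (an assumed value) are treated as edges, and the
    remaining pixels are labelled into connected areas.
    """
    gx = ndimage.sobel(gray_roi.astype(float), axis=1)
    gy = ndimage.sobel(gray_roi.astype(float), axis=0)
    edges = np.hypot(gx, gy) > edge_thresh
    labels, num = ndimage.label(~edges)   # non-edge pixels form the enclosed ranges
    return num, labels                    # region count and label map
```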
Because edge detection is based on gray-level change while the pixel distributions of different areas are similar, the detected region boundaries become inaccurate when noise strongly blurs the edges. The regions are therefore delineated again based on pixel-point similarity: combining the position of each connected area in the acquired image at the current moment, the target operation area is divided again to obtain a division area corresponding to each connected area. In one embodiment of the invention, the centroid of each connected area in the acquired image at the current moment is acquired; the centroid represents the centre position of the connected area and serves as the seed for the similarity-based re-division. Specifically, superpixel segmentation of the target operation area is performed with the centroids as superpixel centre points, and the range of each resulting superpixel is taken as the division area of the connected area corresponding to its centroid, so that each connected area corresponds to one division area. It should be noted that superpixel segmentation is an algorithm based on pixel-similarity division, and both superpixel segmentation and centroid acquisition are technical means well known to those skilled in the art, which are not described here. In other embodiments of the present invention, the division areas may be obtained by other pixel-similarity methods, such as region growing from the centroid positions, which is not limited here.
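The region-growing alternative named above can be sketched as a breadth-first flood fill from the centroid; the similarity tolerance `tol` and the 4-neighbour connectivity are assumptions:

```python
import numpy as np
from collections import deque

def grow_region(gray, seed, tol=10.0):
    """Re-divide from a connected area's centroid by pixel similarity.

    Starting from `seed` (the centroid, as a (row, col) tuple), absorb
    4-neighbours whose gray value is within `tol` of the seed's value.
    The returned boolean mask is the division area for this centroid.
    """
    h, w = gray.shape
    seed_val = float(gray[seed])
    grown = np.zeros((h, w), dtype=bool)
    grown[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not grown[nr, nc] \
                    and abs(float(gray[nr, nc]) - seed_val) <= tol:
                grown[nr, nc] = True
                queue.append((nr, nc))
    return grown
```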
When the edge information in the acquired image is only slightly affected by noise, the distribution characteristics of each connected area and its corresponding division area should be consistent. The degree to which the edges are affected by noise is therefore obtained from the differences between the edge-based and similarity-based divisions: by comprehensively analysing the area, the gray level, and the saliency of the edge points of corresponding regions, the edge influence index of the acquired image at the current moment is obtained according to the differences in distribution size, gray level and edge saliency between the division areas and the connected areas at all the same positions in the acquired image at the current moment.
Preferably, each connected area and its corresponding division area in the acquired image at the current moment are taken as a region pair, so that corresponding regions are matched for analysis. The same difference analysis is carried out for every region pair. First, the number of pixel points of each region in the pair is counted to obtain the distribution number of each region, which reflects the area distribution of the connected area and the corresponding division area. The difference of the distribution numbers between the two regions in the pair is taken as the distribution difference index of the region pair: the larger the distribution difference index, the larger the area difference between the regions, and the more the edge is affected by noise.
Further, the gray average of the pixel points of each region in the pair is calculated to obtain the gray mean of each region, which reflects the gray consistency within the region. The difference of the gray means between the two regions in the pair is taken as the gray difference index of the region pair: the larger the difference in overall gray level between the two regions, the more pixel points with obviously different distributions exist after division, and the greater the influence of noise on the edge.
Further, the gradient values of the edge points of each region in the pair are acquired, and the average of the gradient values of all edge points in each region is calculated to obtain the gradient mean of each region, which reflects the edge saliency of the region. The difference of the gradient means between the two regions in the pair is taken as the gradient difference index of the region pair: the larger the difference in edge saliency, the larger the difference between the edges selected for the connected area and the division area, and the greater the influence of noise on the edge. It should be noted that obtaining the gradient value of a pixel point is a technical means well known to those skilled in the art and is not described here.
The difference index of the region pair is obtained from the distribution difference index, the gray difference index and the gradient difference index of the region pair; the three indexes together reflect the overall difference between the two regions, and each is positively correlated with the difference index. In the embodiment of the invention, the expression of the difference index is:

$$w_v = \left|S_{v1}-S_{v2}\right| + \left|\bar{g}_{v1}-\bar{g}_{v2}\right| + \left|\frac{1}{N_{v1}}\sum_{e=1}^{N_{v1}}T_e - \frac{1}{N_{v2}}\sum_{z=1}^{N_{v2}}T_z\right|$$

where $w_v$ is the difference index of the $v$-th region pair; $S_{v1}$ and $S_{v2}$ are the distribution numbers of the first and second regions in the $v$-th region pair; $\bar{g}_{v1}$ and $\bar{g}_{v2}$ are their gray means; $N_{v1}$ and $N_{v2}$ are the total numbers of edge points in the first and second regions; $T_e$ is the gradient value of the $e$-th edge point in the first region; $T_z$ is the gradient value of the $z$-th edge point in the second region; and $|\cdot|$ is the absolute value function.

Here $|S_{v1}-S_{v2}|$ is the distribution difference index of the $v$-th region pair, $|\bar{g}_{v1}-\bar{g}_{v2}|$ is its gray difference index, $\frac{1}{N_{v1}}\sum_{e=1}^{N_{v1}}T_e$ and $\frac{1}{N_{v2}}\sum_{z=1}^{N_{v2}}T_z$ are the gradient means of the first and second regions, and the absolute value of their difference is the gradient difference index. In other embodiments of the present invention, other basic mathematical operations, such as multiplication, may be used so long as the distribution difference index, the gray difference index and the gradient difference index are all positively correlated with the difference index, which is not limited here.
The average value of the difference indexes of all region pairs in the acquired image at the current moment is calculated to obtain the edge influence index of the acquired image at the current moment; combining the difference conditions of all region pairs reflects how accurately the edges characterize the target operation area, and hence the degree to which noise affects edge detection. In the embodiment of the invention, the expression of the edge influence index is:

$$A = \frac{1}{G}\sum_{v=1}^{G} w_v$$

where $A$ is the edge influence index of the acquired image at the current moment, $w_v$ is the difference index of the $v$-th region pair, and $G$ is the total number of region pairs in the acquired image at the current moment.
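The per-pair difference index and its average over all pairs can be sketched together as below; representing each region as a dict with `'pixels'` (gray values) and `'edge_gradients'` (gradient values of its edge points) is an assumed layout, not from the patent:

```python
import numpy as np

def difference_index(region1, region2):
    """w_v: additive combination of the three per-pair difference indexes."""
    s1, s2 = len(region1['pixels']), len(region2['pixels'])
    dist_diff = abs(s1 - s2)                                       # distribution difference index
    gray_diff = abs(np.mean(region1['pixels'])
                    - np.mean(region2['pixels']))                  # gray difference index
    grad_diff = abs(np.mean(region1['edge_gradients'])
                    - np.mean(region2['edge_gradients']))          # gradient difference index
    return dist_diff + gray_diff + grad_diff

def edge_influence_index(region_pairs):
    """A: mean of the difference indexes over all G region pairs."""
    return float(np.mean([difference_index(a, b) for a, b in region_pairs]))
```

An identical connected area and division area give a difference index of zero, so a noise-free edge map drives the edge influence index toward zero.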
By analyzing the accuracy of image edge detection and division in the acquired image at the current moment, the influence degree of noise on the edge in the target working area of the acquired image at the current moment is analyzed locally.
S4: acquiring a noise index of the acquired image at the current moment according to the noise estimated value and the edge influence index of the acquired image at the current moment; filtering the acquired image at the current moment according to the noise index to obtain an enhanced underwater image; and positioning the underwater robot according to the enhanced underwater image.
The overall analysis of the motion process and the local analysis of the single acquired image are combined to obtain the noise influence intensity of the acquired image at the current moment, i.e. the noise index of the acquired image at the current moment is obtained from its noise estimation value and edge influence index; the noise index reflects the degree of filtering the image requires.
Preferably, the two-norm of the noise estimation value and the edge influence index of the acquired image at the current moment is calculated to obtain the noise index of the acquired image at the current moment. In the embodiment of the invention, the expression of the noise index is:

$$\varphi = \sqrt{B^{2} + A^{2}}$$

where $\varphi$ is the noise index of the acquired image at the current moment, $B$ is the noise estimation value of the acquired image at the current moment, and $A$ is the edge influence index of the acquired image at the current moment. It should be noted that the two-norm is a calculation means well known to those skilled in the art, and the meaning of the specific formula is not described here.
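The two-norm combination is a one-liner:

```python
import math

def noise_index(noise_estimate, edge_influence):
    """Two-norm of the noise estimation value B and edge influence index A."""
    return math.hypot(noise_estimate, edge_influence)
```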
When the noise estimation value and the edge influence index are both larger, the acquired image at the current moment is more affected by noise over the whole motion and its local edge detection is more disturbed, so the noise index is larger and a stronger degree of filtering is required.
Therefore, the acquired image at the current moment is further filtered according to the noise index to obtain the enhanced underwater image. In one embodiment of the invention, the noise index is used as the filter kernel size of a Gaussian filter, and the acquired image at the current moment is then filtered to obtain the enhanced underwater image. Using the noise index in place of the traditional standard-deviation adjustment of the filtering strength yields a better filtering effect and a higher-quality, clearer acquired image.
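A sketch of noise-index-driven Gaussian filtering. Rounding the index up to an odd kernel size of at least 3, and deriving sigma from the kernel size with the `0.3*((k-1)*0.5 - 1) + 0.8` rule (borrowed from OpenCV's `getGaussianKernel` default) are assumptions not stated in the text:

```python
import numpy as np
from scipy import ndimage

def adaptive_gaussian_filter(image, noise_idx):
    """Gaussian filtering whose kernel size is driven by the noise index."""
    k = max(3, int(round(noise_idx)) | 1)        # force an odd size, at least 3
    sigma = 0.3 * ((k - 1) * 0.5 - 1) + 0.8      # assumed size-to-sigma rule
    radius = (k - 1) // 2
    # truncate the Gaussian so its support matches the k x k kernel
    return ndimage.gaussian_filter(image.astype(float), sigma=sigma,
                                   truncate=radius / sigma)
```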
When the underwater robot reaches the vicinity of the target area, the position coordinates in the control system may already indicate that it is very close to the specified position; at this point the acquired image of the vision sensor is needed to further locate and adjust the position of the underwater robot. In the embodiment of the invention, the enhanced underwater image is input into a trained neural network, which outputs the relative position information of the underwater robot with respect to the target operation area, realizing accurate positioning; the position can then be adjusted according to this relative position information. It should be noted that training of the neural network is a technical means well known to those skilled in the art and is not described here. In other embodiments of the present invention, the centroid position of the target area in the enhanced underwater image may be determined directly, and the image centre of the vision sensor of the underwater robot aligned with that centroid position to achieve position adjustment. By integrating multiple sensors such as a sonar sensor and an inertial measurement unit, the online positioning accuracy of the underwater robot and the stability of the system are improved, so that the robot is better adapted to complex underwater environments.
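The centroid-alignment alternative reduces to computing the offset between the image centre and the target area's centroid, which the robot then drives toward zero; the boolean-mask input is an assumed representation of the target area:

```python
import numpy as np

def centroid_offset(target_mask):
    """Offset (row, col) from the image centre to the target area's centroid.

    `target_mask` is a boolean array marking the target operation area in the
    enhanced underwater image; a zero offset means the vision sensor is
    centred on the target.
    """
    rows, cols = np.nonzero(target_mask)
    centroid = np.array([rows.mean(), cols.mean()])
    centre = (np.array(target_mask.shape) - 1) / 2.0
    return centroid - centre
```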
In summary, the invention considers the superimposed influence of noise while the underwater robot moves toward the target operation area. The noise estimation value of the acquired image at the current moment is obtained from the discreteness of the gray-distribution characteristics of the acquired images between acquisition moments and from the time-series change in the relation between the distribution of the target operation area in the image and the positioning distance; the noise influence is thus discovered from the change law of the motion process, with the aim of obtaining the degree to which the target operation area is affected by noise. Further considering that the target operation area contains multiple sub-areas, the differences between the two divisions of the target operation area (edge division and pixel-similarity division) are obtained, and the noise influence on edge acquisition is analysed in detail, i.e. the edge influence index of the acquired image at the current moment is obtained according to the differences in distribution size, gray level and edge saliency between the division areas and the connected areas at the same positions in the acquired image at the current moment. Finally, the overall and local analyses, i.e. the noise estimation value and the edge influence index, are combined to obtain the noise index, the acquired image at the current moment is better filtered accordingly, and the enhanced underwater image is obtained for robot positioning.
According to the invention, the filtering process is adjusted comprehensively by analysing both how the change law of the target operation area is affected during motion and how local edge detection is affected, so that a higher-quality filtered image is obtained, the position of the underwater operation area is determined more accurately, the accuracy of multi-sensor cooperative positioning is improved, and the positioning precision and reliability of the underwater robot are further improved.
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.

Claims (10)

1. An on-line positioning method of an underwater robot based on multiple sensors, which is characterized by comprising the following steps:
Acquiring an acquisition image and a positioning distance at each acquisition time in the process that the underwater robot moves to a target operation area;
Obtaining an image characteristic value of each acquired image according to the complex condition of the gray distribution of the pixel points in the target operation area corresponding to each acquired image; obtaining a noise estimated value of an acquired image at the current moment according to the discrete degree of the image characteristic value change among different acquired images and the change relation between the distribution condition of a target operation area in the acquired image and the positioning distance along with the acquisition moment at all acquisition moments;
According to the edge distribution in the target operation area in the acquired image at the current moment, obtaining connected areas in the acquired image; combining the position of each connected area in the acquired image at the current moment, dividing the target operation area again based on pixel point similarity to obtain a division area corresponding to each connected area; and obtaining the edge influence index of the acquired image at the current moment according to the differences in distribution size, gray level and edge saliency between the division areas and the connected areas at the same positions in the acquired image at the current moment;
Acquiring a noise index of the acquired image at the current moment according to the noise estimated value and the edge influence index of the acquired image at the current moment; filtering the acquired image at the current moment according to the noise index to obtain an enhanced underwater image; and positioning the underwater robot according to the enhanced underwater image.
2. The multi-sensor-based on-line positioning method of the underwater robot according to claim 1, wherein the positioning distance is the distance between the underwater robot and the target operation area at each acquisition time.
3. The multi-sensor-based online positioning method for the underwater robot according to claim 1, wherein the image characteristic value obtaining method comprises the following steps:
for any one acquired image, acquiring the occurrence frequency of each gray level in a target operation area of the acquired image; calculating information entropy of all occurrence frequencies in a target operation area of the acquired image, and obtaining a pixel type distribution characteristic value of the acquired image;
Calculating the average value of all pixel gray values in a target operation area of the acquired image to obtain a pixel gray characteristic value of the acquired image;
Obtaining an image characteristic value of the collected image according to the pixel type distribution characteristic value and the pixel gray characteristic value of the collected image; the pixel type distribution characteristic value and the pixel gray characteristic value are positively correlated with the image characteristic value.
4. The multi-sensor-based online positioning method for the underwater robot according to claim 2, wherein the method for obtaining the noise estimation value comprises the following steps:
Counting the total number of the pixel points corresponding to the target operation area in each acquired image to obtain the distribution area of each acquired image; arranging the distribution areas at all acquisition moments according to a time sequence order to obtain an area sequence; arranging the positioning distances at all the acquisition moments according to a time sequence order to obtain a distance sequence;
Calculating covariance of the area sequence and the distance sequence to obtain variation correlation of the acquired image at the current moment; taking the product of the standard deviation of the area sequence and the standard deviation of the distance sequence as the change degree value of the acquired image at the current moment; carrying out normalization processing on the ratio of the change correlation to the change degree value to obtain a change correlation index of the acquired image at the current moment;
Under the time sequence, calculating the difference of the image characteristic values between every two adjacent acquired images to obtain the characteristic change difference; calculating variances of all feature variation differences to obtain variation discreteness indexes of the acquired images at the current moment;
Obtaining a noise estimation value of the acquired image at the current moment according to the change association index and the change discreteness index of the acquired image at the current moment; the change relevance index and the change discreteness index are positively correlated with the noise estimation value.
5. The multi-sensor-based online positioning method for the underwater robot according to claim 1, wherein the obtaining method for the divided areas comprises the following steps:
Acquiring the centroid of each connected area in the acquired image at the current moment; and performing superpixel segmentation on the target operation area of the acquired image based on the positions of the centroids, and taking the range of each superpixel obtained by segmentation as the division area of the connected area corresponding to its centroid.
6. The multi-sensor-based online positioning method of the underwater robot according to claim 1, wherein the method for acquiring the edge influence index comprises the following steps:
Taking each connected region and the corresponding divided region in the acquired image at the current moment as a region pair;
counting the number of pixel points of each region in any region pair in the acquired image at the current moment to obtain the distribution number of each region; taking the difference of the distribution quantity between two areas in the area pair as a distribution difference index of the area pair;
Calculating the gray average value of the pixel points of each region in the pair of regions to obtain the gray average value of each region; taking the difference of the gray average values between two areas in the pair of areas as a gray difference index of the pair of areas;
acquiring gradient values of edge points of each region in the region pair, and calculating average values of the gradient values of all the edge points in each region in the region pair to obtain gradient average values of each region; taking the difference of the gradient mean values between the two regions in the region pair as a gradient difference index of the region pair;
obtaining the difference index of the region pair according to the distribution difference index, the gray level difference index and the gradient difference index of the region pair; the distribution difference index, the gray level difference index and the gradient difference index are positively correlated with the difference index;
and calculating the average value of the difference indexes of all the region pairs in the acquired image at the current moment to obtain the edge influence index of the acquired image at the current moment.
7. The multi-sensor-based online positioning method for the underwater robot according to claim 1, wherein the noise index obtaining method comprises the following steps:
And calculating the noise estimation value of the acquired image at the current moment and the two norms of the edge influence index to obtain the noise index of the acquired image at the current moment.
8. The multi-sensor-based online positioning method of the underwater robot according to claim 1, wherein the filtering the acquired image at the current moment according to the noise index to obtain the enhanced underwater image comprises:
And taking the noise index as the filtering kernel size of the Gaussian filter, and then carrying out filtering operation on the acquired image at the current moment to obtain the enhanced underwater image.
9. The multi-sensor based on-line positioning method of an underwater robot according to claim 1, wherein the positioning of the underwater robot based on the enhanced underwater image comprises:
And inputting the enhanced underwater image into the trained neural network, and outputting the relative position information of the underwater robot relative to the target operation area.
10. The method for online positioning of an underwater robot based on multiple sensors according to claim 1, wherein the obtaining the connected region in the acquired image according to the edge distribution in the target working region in the acquired image at the current time comprises:
Acquiring the edges in the acquired image at the current moment through an edge detection algorithm; and, within the target operation area, taking each range enclosed by the edges as a connected area of the acquired image.
CN202410089165.4A 2024-01-23 2024-01-23 Multi-sensor-based underwater robot online positioning method Pending CN117934317A (en)

Publications (1)

Publication Number Publication Date
CN117934317A true CN117934317A (en) 2024-04-26
