CN116243720B - AUV underwater object searching method and system based on 5G networking - Google Patents
AUV underwater object searching method and system based on 5G networking
- Publication number: CN116243720B (application CN202310458080.4A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/04—Control of altitude or depth
- G05D1/06—Rate of change of altitude or depth
- G05D1/0692—Rate of change of altitude or depth specially adapted for under-water vehicles
Abstract
The invention discloses an AUV underwater object searching method and system based on 5G networking. The method comprises the steps of: constructing an underwater three-dimensional map of the searched water area and establishing a base station database, wherein the base station database comprises a plurality of target object images to be searched; dividing the underwater three-dimensional map by a grid method combined with a neural network algorithm to obtain an underwater robot path planning area; marking the three-dimensional coordinate positions of a plurality of suspicious points in the underwater robot path planning area; acquiring a first suspicious object image at each suspicious point's three-dimensional coordinate position and performing image processing on it to obtain a second suspicious object image; and comparing the second suspicious object image with the target object images to be searched, where consistency between them indicates that the search has succeeded. The invention solves the problems of the conventional AUV object searching mode: low transmission rates for surface and underwater data, high AUV running power consumption, and low searching efficiency caused by excessive AUV positioning error.
Description
Technical Field
The invention relates to the technical field of underwater robots, in particular to an AUV underwater object searching method and system based on 5G networking.
Background
With the growing demand for underwater search tasks and the complexity of underwater environments, ever higher requirements are being placed on autonomous underwater vehicles (Autonomous Underwater Vehicle, AUV) and unmanned surface vessels (Unmanned Surface Vehicle, USV). The AUV used in existing underwater object searching tasks is mainly controlled manually through a remote controller: underwater information collected by the AUV is transmitted directly to the surface USV, the USV forwards the information to an onshore base station through a communication link such as MESH or LoRa, and the onshore base station transmits the information to an onshore data center through a satellite communication link. Most current underwater searching modes can be divided, according to the range of the search water area, into single-AUV and multi-AUV underwater searching. When a single AUV searches for a target, the data obtained by the AUV often cannot be transmitted to shore, resulting in inaccurate positioning and wasted AUV energy, and hence in failure of the search task.
For underwater object searching tasks, the clarity of the underwater images acquired by the AUV and the stability of underwater acoustic communication are decisive factors, so the requirements on AUV positioning, the accuracy of the underwater three-dimensional terrain, image clarity, the success rate of underwater acoustic communication, and the stability and speed of the whole information transmission process are all higher. In an unknown water area, the detection of underwater topography also needs to be faster and more accurate, and tasks requiring more precise control, such as underwater object searching in special underwater environments, demand more accurate AUV positioning. If complete data processing and video image analysis were performed on the AUV, the computing power of the AUV searching end would be heavily consumed, too many devices would have to be installed on the AUV, the cost would increase greatly, and the AUV's running power consumption would be high. When searching in an unknown water area, the obstacle avoidance capability of the AUV is also severely tested. If, instead, all the calculation were placed in onshore facilities, the AUV would need to communicate with an application server deployed on the public network through a chain such as AUV - underwater acoustic communication - surface receiving equipment - shore-based data center. The surface receiving equipment mostly communicates with the onshore base station using fourth-generation mobile communication technology (4G), which suffers from limited range and transmission rate. The conventional AUV object searching mode therefore causes the AUV to waste a great deal of energy, and because of the harsh underwater acoustic transmission environment, the AUV positioning error is too large for information to be received in real time, so the searching efficiency is reduced.
Disclosure of Invention
Aiming at the above defects, the invention provides an AUV underwater object searching method and system based on 5G networking, which aim to solve the problems of the conventional AUV object searching mode: low transmission rates for surface and underwater data, high AUV running power consumption, and low searching efficiency caused by excessive AUV positioning error.
To achieve the purpose, the invention adopts the following technical scheme:
an AUV underwater object searching method based on 5G networking comprises the following steps:
step S1: constructing an underwater three-dimensional map of a searched water area, and establishing a base station database, wherein the base station database comprises a plurality of target object images to be searched;
step S2: dividing the underwater three-dimensional map by combining a grid method with a neural network algorithm to obtain an underwater robot path planning area;
step S3: marking a plurality of suspicious point three-dimensional coordinate positions in the path planning area of the underwater robot;
step S4: acquiring a first suspicious object image at a suspicious point three-dimensional coordinate position, and performing image processing on the first suspicious object image to obtain a second suspicious object image;
step S5: comparing the second suspicious object image with the target object image to be searched, and if the second suspicious object image is consistent with the target object image to be searched, representing that the searching is successful; and if the second suspicious object image is inconsistent with the target object image to be searched, the searching is unsuccessful.
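The five-step flow above can be sketched as a simple control loop. The sketch below is illustrative only: the function names (search_for_target, capture, preprocess, refine, match) are hypothetical placeholders, not part of the disclosed system, and the comparison of step S5 is reduced to a toy set-membership test.

```python
# Illustrative sketch of the five-step search flow (steps S1 to S5).
# All function names are hypothetical placeholders, not the patented system.

def search_for_target(suspicious_points, target_images,
                      capture, preprocess, refine, match):
    """Visit each marked suspicious point (step S3); capture and process an
    image there (step S4); compare it with the database (step S5)."""
    for point in suspicious_points:
        first_image = capture(point)                     # first suspicious object image
        second_image = refine(preprocess(first_image))   # second suspicious object image
        if match(second_image, target_images):
            return point      # searching successful
    return None               # searching unsuccessful

# Toy run: the "image" at each 3-D coordinate is just a string label.
points = [(0, 0, 5), (3, 1, 7)]
found = search_for_target(
    points,
    target_images={"anchor"},
    capture=lambda p: "anchor" if p == (3, 1, 7) else "rock",
    preprocess=lambda img: img,
    refine=lambda img: img,
    match=lambda img, db: img in db,
)
```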
Preferably, in step S4, the following substeps are specifically included:
step S41: preprocessing the first suspicious object image to obtain a preprocessed first suspicious object image;
step S42: and carrying out refinement treatment on the preprocessed first suspicious object image to obtain a second suspicious object image.
Preferably, in step S41, the following substeps are specifically included:
step S411: performing gray level conversion on the first suspicious object image to obtain a first gray level image;
step S412: translating and stretching the histogram of the first gray level image within the clip-limit range using CLAHE-WT based on the Rayleigh distribution, so that the maximum point of the histogram curve of the first gray level image is translated to the middle gray level;
step S413: stretching the histogram curve of the first gray level image toward the low and high gray levels, so that the frequency of each gray level in the first gray level image is more balanced, obtaining a second gray level image, i.e. the preprocessed first suspicious object image.
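A minimal numpy sketch of the two preprocessing operations above: translating the histogram peak to mid-gray, then stretching toward the low and high gray levels. It stands in for the Rayleigh-distribution CLAHE-WT named in step S412, which is considerably more involved; the function name and the test image are illustrative assumptions.

```python
import numpy as np

def translate_and_stretch(gray):
    """Shift the histogram peak toward mid-gray (128), then linearly
    stretch toward the low and high ends to balance gray-level counts."""
    hist = np.bincount(gray.ravel(), minlength=256)
    peak = int(np.argmax(hist))                        # maximum point of the histogram
    shifted = gray.astype(np.int32) + (128 - peak)     # translate peak to mid-gray
    shifted = np.clip(shifted, 0, 255)
    lo, hi = shifted.min(), shifted.max()
    if hi > lo:                                        # stretch to cover low and high grays
        shifted = (shifted - lo) * 255 // (hi - lo)
    return shifted.astype(np.uint8)

# Toy 2x4 "underwater" image whose histogram peaks in the dark range.
img = np.array([[60, 60, 60, 60], [60, 80, 100, 120]], dtype=np.uint8)
out = translate_and_stretch(img)
```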
Preferably, in step S42, the following steps are specifically included:
step S421: initializing the iteration number F, the maximum length L_cmax of the first coefficient vector a_c, the maximum length L_tmax of the second coefficient vector a_t, the relaxation coefficient λ, and the first threshold δ_c and second threshold δ_t, where δ_c = λ·L_cmax and δ_t = λ·L_tmax; performing F iterations on the preprocessed first suspicious object image X;
step S422: keeping the image high-frequency signal X_t unchanged and updating the image low-frequency signal X_c: calculating the residual part X′_c after updating the image low-frequency signal, where X′_c = X − X_c;
calculating the first coefficient vector a_c = T_c·X′_c, i.e. the sparse representation of X′_c, where T_c is the sparse-representation transform of the structural information of X_c; determining the processed first coefficient â_c from a_c by soft thresholding, with the threshold set to δ_c; reconstructing X_c from â_c to obtain the reconstructed image low-frequency signal X″_c;
step S423: keeping the image low-frequency signal X_c unchanged and updating the image high-frequency signal X_t: calculating the residual part X′_t after updating the image high-frequency signal, where X′_t = X − X_t;
calculating the second coefficient vector a_t = T_t·X′_t, i.e. the sparse representation of X′_t, where T_t is the sparse-representation transform of the structural information of X_t; determining the processed second coefficient â_t from a_t by soft thresholding, with the threshold set to δ_t; reconstructing X_t from â_t to obtain the reconstructed image high-frequency signal X″_t;
step S424: updating the first threshold δ_c: let δ_c = δ_c − λ, and if δ_c < λ, the algorithm ends; or updating the second threshold δ_t: let δ_t = δ_t − λ, and if δ_t < λ, the algorithm ends;
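Steps S421 to S424 describe an alternating sparse decomposition with a shrinking soft threshold. The sketch below assumes the standard morphological-component form of that iteration on a 1-D signal, with an orthonormal DCT standing in for the transform T_c and the identity for T_t; all names and parameter choices are illustrative, not the patented implementation.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix; stands in for the transform T_c."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
    M[0] /= np.sqrt(2)
    return M

def soft(a, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

def mca_split(X, iters=50):
    """Alternately update the low-frequency part Xc (sparse under the DCT)
    and the high-frequency part Xt (sparse under the identity) while the
    threshold delta shrinks by the relaxation coefficient lam each pass,
    mirroring the iteration of steps S421 to S424."""
    n = len(X)
    Tc = dct_matrix(n)
    delta = float(np.abs(Tc @ X).max())   # initial threshold
    lam = delta / iters                   # relaxation coefficient
    Xc = np.zeros(n)
    Xt = np.zeros(n)
    while delta > lam:
        Xc = Tc.T @ soft(Tc @ (X - Xt), delta)   # low-frequency update
        Xt = soft(X - Xc, delta)                 # high-frequency update
        delta -= lam                             # threshold decay (step S424)
    return Xc, Xt

t = np.linspace(0.0, 1.0, 64)
smooth = np.cos(2.0 * np.pi * t)          # low-frequency structure
spikes = np.zeros(64)
spikes[10], spikes[40] = 3.0, -2.0        # high-frequency "texture"
Xc, Xt = mca_split(smooth + spikes)
```

On this toy signal the spikes end up in Xt and the cosine in Xc, with Xc + Xt close to the input once the threshold has decayed.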
step S425: performing single-scale Retinex enhancement on the gray image of the reconstructed image low-frequency signal X″_c: inputting the Gaussian surround scale c, converting the integration into a summation under discrete conditions, and determining the value of the scale parameter λ_R;
step S426: calculating X_r(x, y) by formula (1):
X_r(x, y) = log S(x, y) − log[F(x, y) * S(x, y)]   (1)
for a single scale, the input image is S(x, y), the luminance image is L(x, y) and the reflected image is R(x, y), with S(x, y) = L(x, y)·R(x, y);
where X_r(x, y) is the first output image, * is the convolution operator, F(x, y) = λ_R·exp(−(x² + y²)/c²) is the center-surround function, c is the Gaussian surround scale, and ∬F(x, y) dx dy = 1;
step S427: converting X_r(x, y) from the logarithmic domain to the real domain and performing linear stretching to obtain the second output image X_cR(x, y), i.e. the enhanced low-frequency structure image X_cR, which is stored in Double form;
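Steps S425 to S427 amount to single-scale Retinex followed by linear stretching. A numpy-only sketch under that assumption; the Gaussian surround is built by hand and the convolution is a naive loop, acceptable for small test images.

```python
import numpy as np

def gaussian_surround(c, radius):
    """Discrete center-surround function F(x, y) = lambda_R * exp(-(x^2 + y^2) / c^2),
    with lambda_R chosen so the summed (discretized integral) weights equal 1."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    F = np.exp(-(xx ** 2 + yy ** 2) / c ** 2)
    return F / F.sum()

def single_scale_retinex(S, c=3.0):
    """Formula (1): X_r = log S - log(F * S), then linear stretch to [0, 255]."""
    S = S.astype(np.float64) + 1.0          # avoid log(0)
    F = gaussian_surround(c, radius=int(3 * c))
    pad = F.shape[0] // 2
    Sp = np.pad(S, pad, mode="edge")
    L = np.empty_like(S)                    # illumination estimate F * S
    for i in range(S.shape[0]):             # naive 'same'-size convolution
        for j in range(S.shape[1]):
            L[i, j] = np.sum(Sp[i:i + F.shape[0], j:j + F.shape[1]] * F)
    Xr = np.log(S) - np.log(L)              # reflectance in the log domain
    lo, hi = Xr.min(), Xr.max()
    return (Xr - lo) / (hi - lo + 1e-12) * 255.0   # back to the real domain, stretched

img = np.tile(np.linspace(10.0, 200.0, 16), (16, 1))   # horizontal-gradient test image
out = single_scale_retinex(img)
```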
step S428: performing noise-reduction processing using the K-SVD algorithm: selecting the sliding factor s = 1 and the block scale √n × √n, collecting the image I, and obtaining the vector set Y = {y_i | i = 1, …, M} according to a block iteration strategy, where y_i is the i-th iteration block vector and M is the number of iteration block vectors, satisfying M = ((√N − √n)/s + 1)², in which N is the total number of pixels in the input image and n is the number of pixels in each block;
step S429: sampling the vector set Y by a random sampling method, and extracting the sampled block vectors into a training sample set Y′ = {y′_i | i = 1, …, M′}, where y′_i is the i-th training sample and M′ is the number of training samples, with 0 < M′ ≤ M;
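Steps S428 and S429 can be sketched directly: collect all overlapping √n × √n blocks with sliding factor s, then randomly sample M′ of them as the training set. Function names are illustrative; the 6 × 6 test image also checks the block-count relation M = ((√N − √n)/s + 1)².

```python
import numpy as np

def extract_blocks(I, n, s=1):
    """Collect all overlapping sqrt(n) x sqrt(n) blocks of image I with
    sliding factor s, each flattened to a length-n vector (the set Y)."""
    b = int(np.sqrt(n))
    H, W = I.shape
    return np.array([I[i:i + b, j:j + b].ravel()
                     for i in range(0, H - b + 1, s)
                     for j in range(0, W - b + 1, s)])

def sample_training_set(Y, m_prime, seed=0):
    """Randomly draw M' distinct block vectors from Y as the training set Y'."""
    rng = np.random.default_rng(seed)
    return Y[rng.choice(len(Y), size=m_prime, replace=False)]

I = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 image, N = 36 pixels
Y = extract_blocks(I, n=9)                     # 3x3 blocks, n = 9
Yp = sample_training_set(Y, m_prime=5)         # M' = 5 of M = 16 blocks
```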
step S4210: performing dictionary training based on the sparse K-SVD method on the training sample set Y′ to obtain a sparse dictionary Θ;
step S4211: substituting the sparse dictionary Θ into the formula D = A·Θ to obtain the training dictionary D, where A is a basic dictionary;
step S4212: for the obtained training dictionary D, performing sparse coding on the vector set Y of all overlapped blocks using the OMP-Cholesky algorithm to obtain the sparse coefficient matrix {â_ij}, with R_ij being the image overlapped-block extraction operator; the estimate of Y, i.e. the noise-reduced high-frequency texture image X̂_t, is obtained from formula (2):
X̂_t = (λ′·I + Σ_ij R_ij^T·R_ij)⁻¹ · (λ′·X″_t + Σ_ij R_ij^T·D·â_ij)   (2)
where λ′ is a regularization coefficient and I is the identity matrix;
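Step S4212 relies on OMP for the sparse coding. The patent names the OMP-Cholesky variant; the sketch below is plain OMP with a least-squares refit each step, which gives the same result on small problems. The toy dictionary is a hypothetical example chosen so the result can be checked by hand.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select up to k dictionary atoms
    (unit-norm columns of D) and re-fit the coefficients by least squares."""
    residual = y.astype(float)
    support = []
    coef = np.zeros(D.shape[1])
    a = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # atom most correlated with residual
        if j not in support:
            support.append(j)
        a, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ a
    coef[support] = a
    return coef

# Toy dictionary: two axis atoms and one diagonal atom (all unit columns).
D = np.array([[1.0, 0.0, 1.0 / np.sqrt(2)],
              [0.0, 1.0, 1.0 / np.sqrt(2)]])
y = np.array([2.0, 2.0])
a_hat = omp(D, y, k=1)   # the diagonal atom alone explains y exactly
```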
step S4213: performing an additive operation on the enhanced low-frequency structure image X_cR and the noise-reduced high-frequency texture image X̂_t to obtain the underwater image L(x, y), i.e. the second suspicious object image.
Preferably, in step S2, the change rule of neuron activity in the neuron network algorithm is represented as formula (3):
du_k/dt = −A·u_k + (B − u_k)·([I_k]⁺ + Σ_{l=1}^{26} w_kl·[u_l]⁺) − (D + u_k)·[I_k]⁻   (3)
where u_k is the activity value of neuron k in the neuron network; u_l is the activity value of a neuron l adjacent to neuron k; the parameters A, B and D are positive constants, −A reflecting the decay rate of the activity value u_k of neuron k; B and D are respectively the upper and lower limit values of u_k, i.e. u_k ∈ [−D, B]; I_k represents the external input signal of neuron k, where I_k > 0 represents an excitation signal and I_k < 0 represents an inhibition signal; [x]⁺ = max(x, 0) and [x]⁻ = max(−x, 0); the summation term indicates that the neurons capable of generating an excitation signal for neuron k are confined to the peripheral neuron area whose distance from neuron k does not exceed √3 grid units, namely the 26 surrounding neurons;
w_kl is the neuron connection weight coefficient between neurons k and l, represented as formula (4):
w_kl = μ/|kl| for 0 < |kl| ≤ √3, and w_kl = 0 otherwise   (4)
where |kl| represents the distance between neuron k and neuron l in the neural network, and μ is a constant coefficient.
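Equation (3) can be stepped with an explicit Euler scheme. The sketch below assumes the Grossberg-style shunting form reconstructed above, reduced to a 1-D chain of neurons (2 neighbors of distance 1 instead of the 26 neighbors of the 3-D grid); the step size dt and the parameter values are illustrative assumptions.

```python
import numpy as np

def step_activity(u, I, A=10.0, B=1.0, D=1.0, mu=1.0, dt=0.01):
    """One explicit-Euler step of equation (3) on a 1-D chain of neurons:
    du_k/dt = -A*u_k + (B - u_k)*([I_k]+ + sum_l w_kl*[u_l]+) - (D + u_k)*[I_k]-
    with w_kl = mu / distance(k, l), restricted here to the two chain
    neighbors (distance 1) in place of the 26 neighbors of the 3-D grid."""
    pos = np.maximum(u, 0.0)                 # [u_l]+
    exc = np.maximum(I, 0.0)                 # [I_k]+
    inh = np.maximum(-I, 0.0)                # [I_k]-
    exc[:-1] += mu * pos[1:]                 # excitation from the right neighbor
    exc[1:] += mu * pos[:-1]                 # excitation from the left neighbor
    du = -A * u + (B - u) * exc - (D + u) * inh
    return np.clip(u + dt * du, -D, B)       # activity stays in [-D, B]

u = np.zeros(5)
I = np.array([0.0, 0.0, 100.0, 0.0, -100.0])   # target: excitation; obstacle: inhibition
for _ in range(500):
    u = step_activity(u, I)
```

After the updates settle, the target neuron carries the largest activity value (the quantity that guides the sAUV forward) while the inhibited obstacle neuron is pinned near the lower bound.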
In another aspect, the application provides an AUV underwater object searching system based on 5G networking, which comprises:
the construction module is used for constructing an underwater three-dimensional map of the searched water area;
the establishment module is used for establishing a base station database, the base station database comprising a plurality of target object images to be searched;
the dividing module is used for dividing the underwater three-dimensional map by combining a grid method with a neural network algorithm to obtain an underwater robot path planning area;
the marking module is used for marking three-dimensional coordinate positions of a plurality of suspicious points in the path planning area of the underwater robot;
The acquisition module is used for acquiring a first suspicious object image at the three-dimensional coordinate position of the suspicious point;
the image processing module is used for carrying out image processing on the first suspicious object image to obtain a second suspicious object image;
the comparison judging module is used for comparing the second suspicious object image with the target object image to be searched, and if the second suspicious object image is consistent with the target object image to be searched, the search is successful; and if the second suspicious object image is inconsistent with the target object image to be searched, the searching is unsuccessful.
Preferably, the image processing module includes:
the image preprocessing sub-module is used for preprocessing the first suspicious object image to obtain a preprocessed first suspicious object image;
and the image refining processing sub-module is used for refining the preprocessed first suspicious object image to obtain a second suspicious object image.
Preferably, the image preprocessing sub-module includes:
the image gray level conversion subunit is used for performing gray level conversion on the first suspicious object image to obtain a first gray level image;
the first processing subunit is used for translating and stretching the histogram of the first gray level image within the clip-limit range using CLAHE-WT based on the Rayleigh distribution, so that the maximum point of the histogram curve of the first gray level image is translated to the middle gray level;
the second processing subunit is used for stretching the histogram curve of the first gray level image toward the low and high gray levels, so that the frequency of each gray level in the first gray level image is more balanced, obtaining a second gray level image, i.e. the preprocessed first suspicious object image.
Preferably, the image refinement processing sub-module includes:
an initialization subunit for initializing the iteration number F, a first coefficient vector a c Is the maximum length L of (2) cmax Second coefficient vector a t Is the maximum length l of (2) tmax A relaxation coefficient lambda, and a first threshold delta c And a second threshold delta t Wherein delta c =λ*L cmax ,δ t =λ*L tmax ;
An iteration subunit, configured to iterate the preprocessed first suspicious object image X for F times;
the reconstruction-image low-frequency-signal calculation subunit is used for keeping the image high-frequency signal X_t unchanged and updating the image low-frequency signal X_c: calculating the residual part X′_c after updating the image low-frequency signal, where X′_c = X − X_c;
calculating the first coefficient vector a_c = T_c·X′_c, i.e. the sparse representation of X′_c, where T_c is the sparse-representation transform of the structural information of X_c; determining the processed first coefficient â_c from a_c by soft thresholding, with the threshold set to δ_c; and reconstructing X_c from â_c to obtain the reconstructed image low-frequency signal X″_c;
the reconstruction-image high-frequency-signal calculation subunit is used for keeping the image low-frequency signal X_c unchanged and updating the image high-frequency signal X_t: calculating the residual part X′_t after updating the image high-frequency signal, where X′_t = X − X_t;
calculating the second coefficient vector a_t = T_t·X′_t, i.e. the sparse representation of X′_t, where T_t is the sparse-representation transform of the structural information of X_t; determining the processed second coefficient â_t from a_t by soft thresholding, with the threshold set to δ_t; and reconstructing X_t from â_t to obtain the reconstructed image high-frequency signal X″_t;
the updating subunit is used for updating the first threshold δ_c: let δ_c = δ_c − λ, and if δ_c < λ, the algorithm ends; or updating the second threshold δ_t: let δ_t = δ_t − λ, and if δ_t < λ, the algorithm ends;
the first operation subunit is used for performing single-scale Retinex enhancement on the gray image of the reconstructed image low-frequency signal X″_c: inputting the Gaussian surround scale c, converting the integration into a summation under discrete conditions, and determining the value of the scale parameter λ_R;
the second operation subunit is used for calculating X_r(x, y) by formula (1):
X_r(x, y) = log S(x, y) − log[F(x, y) * S(x, y)]   (1)
for a single scale, the input image is S(x, y), the luminance image is L(x, y) and the reflected image is R(x, y), with S(x, y) = L(x, y)·R(x, y);
where X_r(x, y) is the first output image, * is the convolution operator, F(x, y) = λ_R·exp(−(x² + y²)/c²) is the center-surround function, c is the Gaussian surround scale, and ∬F(x, y) dx dy = 1;
the enhanced low-frequency image acquisition subunit is used for converting X_r(x, y) from the logarithmic domain to the real domain and performing linear stretching to obtain the second output image X_cR(x, y), i.e. the enhanced low-frequency structure image X_cR, which is stored in Double form;
the vector set acquisition subunit is used for performing noise-reduction processing using the K-SVD algorithm: selecting the sliding factor s = 1 and the block scale √n × √n, collecting the image I, and obtaining the vector set Y = {y_i | i = 1, …, M} according to a block iteration strategy, where y_i is the i-th iteration block vector and M is the number of iteration block vectors, satisfying M = ((√N − √n)/s + 1)², in which N is the total number of pixels in the input image and n is the number of pixels in each block;
the vector set sampling subunit is used for sampling the vector set Y by a random sampling method and extracting the sampled block vectors into a training sample set Y′ = {y′_i | i = 1, …, M′}, where y′_i is the i-th training sample and M′ is the number of training samples, with 0 < M′ ≤ M;
the sample set training subunit is used for performing dictionary training based on the sparse K-SVD method on the training sample set Y′ to obtain a sparse dictionary Θ;
the training dictionary acquisition subunit is used for substituting the sparse dictionary Θ into the formula D = A·Θ to obtain the training dictionary D, where A is a basic dictionary;
the noise-reduced high-frequency image acquisition subunit is used for performing, with the obtained training dictionary D, sparse coding on the vector set Y of all overlapped blocks using the OMP-Cholesky algorithm to obtain the sparse coefficient matrix {â_ij}, with R_ij being the image overlapped-block extraction operator; the estimate of Y, i.e. the noise-reduced high-frequency texture image X̂_t, is obtained from formula (2):
X̂_t = (λ′·I + Σ_ij R_ij^T·R_ij)⁻¹ · (λ′·X″_t + Σ_ij R_ij^T·D·â_ij)   (2)
where λ′ is a regularization coefficient and I is the identity matrix;
the third operation subunit is used for performing an additive operation on the enhanced low-frequency structure image X_cR and the noise-reduced high-frequency texture image X̂_t to obtain the underwater image L(x, y), i.e. the second suspicious object image.
Preferably, in the dividing module, the underwater three-dimensional map is divided by the grid method in combination with the neuron network algorithm, and the change rule of neuron activity in the neuron network algorithm is represented as formula (3):
du_k/dt = −A·u_k + (B − u_k)·([I_k]⁺ + Σ_{l=1}^{26} w_kl·[u_l]⁺) − (D + u_k)·[I_k]⁻   (3)
where u_k is the activity value of neuron k in the neuron network; u_l is the activity value of a neuron l adjacent to neuron k; the parameters A, B and D are positive constants, −A reflecting the decay rate of the activity value u_k of neuron k; B and D are respectively the upper and lower limit values of u_k, i.e. u_k ∈ [−D, B]; I_k represents the external input signal of neuron k, where I_k > 0 represents an excitation signal and I_k < 0 represents an inhibition signal; [x]⁺ = max(x, 0) and [x]⁻ = max(−x, 0); the summation term indicates that the neurons capable of generating an excitation signal for neuron k are confined to the peripheral neuron area whose distance from neuron k does not exceed √3 grid units, namely the 26 surrounding neurons;
w_kl is the neuron connection weight coefficient between neurons k and l, represented as formula (4):
w_kl = μ/|kl| for 0 < |kl| ≤ √3, and w_kl = 0 otherwise   (4)
where |kl| represents the distance between neuron k and neuron l in the neural network, and μ is a constant coefficient.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
1. The scheme adopts a mode in which the search underwater robot (sAUV) and the transmission underwater robot (cAUV) cooperate to realize underwater and surface data transmission, thereby improving the transmission speed, range and stability of the underwater information obtained by the underwater robot (AUV).
2. In the scheme, the first suspicious object image is obtained by the search underwater robot (sAUV), preprocessed by the transmission underwater robot (cAUV), and then further refined by the unmanned surface vessel (USV), so that the two AUVs each take on part of the work, reducing the equipment that must be carried on each AUV and thereby reducing power consumption and equipment cost. The transmission underwater robot (cAUV) is positioned by the unmanned surface vessel (USV) on the water, and the search underwater robot (sAUV) is positioned by the transmission underwater robot (cAUV); during positioning, a piece of time-stamped information is sent upward at intervals, and the position is repeatedly adjusted according to the time difference of the received information so as to improve the information transmission rate. The positioning of the search underwater robot (sAUV) is therefore more accurate, and using the transmission underwater robot (cAUV) as an intermediate point of information transmission can reduce the packet loss rate of underwater data.
3. The scheme artificially divides the search water area into a plurality of grids and combines them with the neuron algorithm, setting a corresponding neuron activity value for each grid. This improves the target searching efficiency of the underwater robot (AUV), guarantees its positioning accuracy and communication efficiency, reduces the number of repeated searches, and makes maximal use of a single AUV's energy.
Drawings
Fig. 1 is a flow chart of steps of an AUV underwater searching method based on 5G networking.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for explaining the present invention and are not to be construed as limiting the present invention.
An AUV underwater object searching method based on 5G networking comprises the following steps:
step S1: constructing an underwater three-dimensional map of a searched water area, and establishing a base station database, wherein the base station database comprises a plurality of target object images to be searched;
Step S2: dividing the underwater three-dimensional map by combining a grid method with a neural network algorithm to obtain an underwater robot path planning area;
step S3: marking a plurality of suspicious point three-dimensional coordinate positions in the path planning area of the underwater robot;
step S4: acquiring a first suspicious object image at a suspicious point three-dimensional coordinate position, and performing image processing on the first suspicious object image to obtain a second suspicious object image;
step S5: comparing the second suspicious object image with the target object image to be searched, and if the second suspicious object image is consistent with the target object image to be searched, representing that the searching is successful; and if the second suspicious object image is inconsistent with the target object image to be searched, the searching is unsuccessful.
According to the AUV underwater object searching method based on 5G networking, as shown in fig. 1, the first step is to construct an underwater three-dimensional map of the searched water area and to establish a database comprising a plurality of target object images to be searched. Specifically, a multi-beam sounding system is carried on an unmanned surface vessel (USV), which navigates over the search water area on a large scale; by processing the beams emitted by the transmitting transducer and received by the receiving transducer of the multi-beam sounding system, the underwater three-dimensional map of the search water area can be constructed. In addition, before the searching task starts, a plurality of target object images to be searched are input into the onshore base station in advance, thereby building the base station database. The second step is to divide the underwater three-dimensional map by the grid method combined with the neural network algorithm to obtain the underwater robot path planning area. Specifically, the grid method divides the underwater three-dimensional map into regular, uniform grids whose information is stored in a matrix; each grid is regarded as a neuron, and the neuron activity value in the grid can guide the search underwater robot (sAUV) forward. The underwater robot path planning area in this embodiment is the area with the largest neuron activity value. The third step is to mark the three-dimensional coordinate positions of a plurality of suspicious points in the underwater robot path planning area. Specifically, the suspicious point positions are marked in the area by high-performance embedded equipment carried by the unmanned surface vessel (USV), and the search underwater robot (sAUV) is then dispatched to the suspicious points.
The fourth step is to acquire a first suspicious object image at the three-dimensional coordinate position of a suspicious point and perform image processing on it to obtain a second suspicious object image. Specifically, after the search underwater robot (sAUV) reaches a suspicious point, it preprocesses the acquired first suspicious object image, uploads it to a transmission underwater robot (cAUV) stationed near the water surface as a transmission relay, and moves on to the next suspicious point. The transmission underwater robot (cAUV) then uploads the preprocessed first suspicious object image to the unmanned surface vehicle (USV); the cAUV improves the underwater transmission rate of the image data, while the USV performs refinement processing on the preprocessed first suspicious object image to obtain the second suspicious object image and transmits it to the onshore base station through the 5G communication network. The fifth step is to compare the second suspicious object image with the target object image to be searched: if they are consistent, the search is successful; if they are inconsistent, the search is unsuccessful. Specifically, the onshore base station compares the second suspicious object image with the target object image to be searched and judges whether the object in the second suspicious object image is the target object; if so, the search task ends, but the search underwater robot (sAUV) does not stop searching until it receives the confirmation transmitted by the onshore base station that the target object has been found.
According to the scheme, underwater and surface data transmission is realized by the search underwater robot (sAUV) and the transmission underwater robot (cAUV) cooperating with each other, which improves the transmission speed, range and stability of the underwater information obtained by the underwater robots (AUVs). The first suspicious object image is obtained by the search underwater robot (sAUV), preprocessed by the transmission underwater robot (cAUV), and further refined by the unmanned surface vehicle (USV); the two underwater robots (AUVs) thus share the roles, reducing the equipment that each must carry and therefore its power consumption and cost. The unmanned surface vehicle (USV) positions the transmission underwater robot (cAUV), which in turn positions the search underwater robot (sAUV); during positioning, a time-stamped message is sent upward at intervals, and the positions are repeatedly adjusted according to the time difference of the received messages so as to improve the information transmission rate. The positioning of the search underwater robot (sAUV) is thereby more accurate, and using the transmission underwater robot (cAUV) as an intermediate relay reduces the packet loss rate of the underwater data.
According to the scheme, the search water area is divided into a plurality of grids that are combined with the neural network algorithm, and a corresponding neuron activity value is set for each grid. This improves the target search efficiency of the underwater robot (AUV), guarantees its positioning accuracy and communication efficiency, reduces the number of repeated searches, and makes maximal use of the energy of a single underwater robot (AUV).
Preferably, in step S4, the method specifically comprises the following substeps:
step S41: preprocessing the first suspicious object image to obtain a preprocessed first suspicious object image;
step S42: and carrying out refinement treatment on the preprocessed first suspicious object image to obtain a second suspicious object image.
In this embodiment, the first suspicious object image is obtained by the search underwater robot (sAUV) and then sent to the transmission underwater robot (cAUV) for preprocessing. Specifically, the scheme places the positioning and track-attitude tasks, which are highly real-time but computationally simple, on the search underwater robot (sAUV), and places the preprocessing of the video image on the transmission underwater robot (cAUV), which reduces the computational burden of both underwater robots (AUVs). Finally, the further refinement of the preprocessed first suspicious object image is completed by the unmanned surface vehicle (USV); specifically, the refinement of the video image is placed on the unmanned surface vehicle (USV), and 5G network slicing, with its large bandwidth and low latency, realizes the interactive transmission of the data.
Preferably, in step S41, the method specifically includes the following substeps:
step S411: performing gray level conversion on the first suspicious object image to obtain a first gray level image;
step S412: translating and stretching the histogram of the first gray image within the range of the limiting value by using the Rayleigh-distribution-based CLAHE-WT, so that the maximum point of the histogram parabola of the first gray image translates to the middle gray level;
step S413: stretching the histogram parabola of the first gray image towards the low and high gray levels, so that the frequency of each gray level appearing in the first gray image is more balanced, obtaining a second gray image, namely the preprocessed first suspicious object image.
In this embodiment, the first suspicious object image obtained by the transmission underwater robot (cAUV) is a color image. After gray conversion, the first gray image is obtained; histogram equalization is then performed on the first gray image using the Rayleigh-distribution-based CLAHE-WT, which enhances its contrast and finally yields the second gray image.
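The goal of steps S411–S413 — balance the gray-level frequencies so a low-contrast underwater frame uses the full gray range — can be illustrated with a minimal sketch. This is not the patent's Rayleigh-distribution CLAHE-WT; it is plain global histogram equalization in NumPy, shown only to make the idea concrete, and the function name and image sizes are illustrative.

```python
import numpy as np

def equalize_gray(img, levels=256):
    """Global histogram equalization: map gray levels through the
    normalized cumulative histogram so each level appears with a
    more balanced frequency."""
    hist, _ = np.histogram(img.flatten(), bins=levels, range=(0, levels))
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()          # CDF of the lowest level present
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min)
                           * (levels - 1)), 0, levels - 1).astype(np.uint8)
    return lut[img]

# low-contrast synthetic frame: values squeezed into [100, 140]
rng = np.random.default_rng(0)
img = rng.integers(100, 141, size=(64, 64)).astype(np.uint8)
eq = equalize_gray(img)                   # now spans the full [0, 255] range
```

CLAHE differs from this sketch in that it equalizes small tiles with a clip limit and interpolates between them, which avoids over-amplifying noise in flat underwater regions.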
Preferably, in step S42, the method specifically includes the following steps:
step S421: initializing the number of iterations F, the maximum length L_cmax of the first coefficient vector a_c, the maximum length L_tmax of the second coefficient vector a_t, the relaxation coefficient λ, and the first threshold δ_c and second threshold δ_t, wherein δ_c = λ·L_cmax and δ_t = λ·L_tmax; performing F iterations on the preprocessed first suspicious object image X;
step S422: keeping the image high-frequency signal X_t unchanged, updating the image low-frequency signal X_c, and calculating the residual after the update, X'_c, wherein X'_c = X − X_c;
calculating the first coefficient vector a_c, wherein a_c is the sparse coefficient of X'_c, and determining a_c by soft thresholding: with the threshold set to δ_c, a_c is processed to obtain the processed first coefficient â_c, and X_c is reconstructed from â_c to obtain the reconstructed image low-frequency signal X''_c, wherein T_c is the sparse representation of the structural information of X_c;
step S423: keeping the image low-frequency signal X_c unchanged, updating the image high-frequency signal X_t, and calculating the residual after the update, X'_t = X − X_t;
calculating the second coefficient vector a_t, wherein a_t is the sparse coefficient of X'_t, and determining a_t by soft thresholding: with the threshold set to δ_t, a_t is processed to obtain the processed second coefficient â_t, and X_t is reconstructed from â_t to obtain the reconstructed image high-frequency signal X''_t, wherein T_t is the sparse representation of the structural information of X_t;
step S424: updating the first threshold δ_c as δ_c = δ_c − λ; if δ_c < λ, the algorithm ends; or updating the second threshold δ_t as δ_t = δ_t − λ; if δ_t < λ, the algorithm ends;
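Steps S421–S424 describe an alternating soft-thresholding scheme that splits the image into a low-frequency (structural) component and a high-frequency (texture) component under a shrinking threshold. The following is a simplified 1-D sketch of the same idea, assuming a unit-norm Fourier basis for the smooth part and the identity basis for the spiky part; the patent's actual dictionaries, threshold schedule and stopping rule may differ.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator (works for real and complex values)."""
    a = np.abs(x)
    return x * (np.maximum(a - t, 0.0) / np.where(a == 0, 1.0, a))

def mca_split(X, n_iter=200):
    """Alternately estimate a smooth part Xc (sparse over unit-norm Fourier
    atoms) and a spiky part Xt (sparse over the identity basis) while the
    shared threshold shrinks linearly toward zero."""
    Xc = np.zeros_like(X)
    Xt = np.zeros_like(X)
    t0 = np.abs(np.fft.fft(X, norm='ortho')).max()   # largest coefficient
    for i in range(n_iter):
        t = t0 * (1 - i / n_iter)
        # low-frequency update: threshold Fourier coefficients of X - Xt
        c = soft(np.fft.fft(X - Xt, norm='ortho'), t)
        Xc = np.real(np.fft.ifft(c, norm='ortho'))
        # high-frequency update: threshold the pixel-domain residual
        Xt = soft(X - Xc, t)
    return Xc, Xt

N = 256
n = np.arange(N)
smooth = np.cos(2 * np.pi * 3 * n / N)   # low-frequency structure
spikes = np.zeros(N)
spikes[[40, 200]] = 3.0                  # sparse high-frequency "texture"
X = smooth + spikes
Xc, Xt = mca_split(X)                    # Xc ~ cosine, Xt ~ spikes
```

The shrinking threshold plays the role of the δ = δ − λ update in step S424: large early thresholds assign only the most dominant structure to each component, and later small thresholds refine the split.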
step S425: performing single-scale Retinex enhancement on the gray image of the reconstructed image low-frequency signal X''_c, inputting the Gaussian surround scale c; under discrete conditions the integration is converted into a summation, and the value of the scale parameter λ_R is determined;
step S426: x is calculated by the formula (1) r (x,y);
For a single scale, the input image is S (x, y), the luminance image is L (x, y), and the reflected image is R (x, y);
wherein X is r (x, y) is the first output image, is a convolution operator, F (x, y) is a center-around function,c is a gaussian surround scale and ≡c F (x, y) dxdy=1;
step S427: x is to be r (X, y) converting from logarithmic domain to real domain, and performing linear stretching to obtain a second output image X cR (X, y), i.e. enhanced low frequency structural image X cR And stored in Double form;
step S428: performing noise reduction using the K-SVD algorithm: selecting the sliding factor s = 1 and the block scale √n × √n, collecting the image I, and obtaining the vector set Y = {y_i}, i = 1, …, M, according to the block iteration strategy, wherein y_i is the i-th iterated block vector and M is the number of block vectors, satisfying M = N, N being the total number of pixel blocks in the input image;
step S429: sampling the vector set Y by random sampling and extracting the sampled block vectors into the training sample set Y′ = {y′_i}, i = 1, …, M′, wherein y′_i is the i-th training sample and M′ is the number of training samples, with 0 < M′ ≤ M;
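Steps S428–S429 collect all overlapping blocks with sliding factor s = 1 and then randomly subsample them into a training set for the dictionary. A small sketch of that block-collection and sampling step; the block size b = 8 and the image content are illustrative assumptions.

```python
import numpy as np

def extract_blocks(img, b=8, s=1):
    """Collect every overlapping b-by-b block (sliding factor s = 1) and
    flatten each into a column vector y_i, per the block-iteration strategy."""
    H, W = img.shape
    cols = [img[r:r + b, c:c + b].reshape(-1)
            for r in range(0, H - b + 1, s)
            for c in range(0, W - b + 1, s)]
    return np.stack(cols, axis=1)         # shape (b*b, M)

def sample_training_set(Y, m_prime, seed=0):
    """Randomly draw M' of the M block vectors as the training set Y'."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(Y.shape[1], size=m_prime, replace=False)
    return Y[:, idx]

img = np.arange(32 * 32, dtype=float).reshape(32, 32)
Y = extract_blocks(img)                   # M = (32 - 8 + 1)**2 = 625 blocks
Yp = sample_training_set(Y, m_prime=100)  # M' = 100 training samples
```

Training on a random subset rather than all M overlapping blocks keeps the K-SVD dictionary update tractable on embedded hardware, at little cost to dictionary quality.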
step S4210: dictionary training based on a sparse K-SVD method is carried out on the training sample set Y' to obtain a sparse dictionary
step S4211: substituting the obtained sparse dictionary into the formula to obtain the training dictionary, wherein A is the basic dictionary;
step S4212: using the obtained training dictionary, applying the OMP-Cholesky algorithm to sparse-code the vector set Y of all overlapping blocks to obtain a sparse matrix, wherein R_ij is the extraction operator for the overlapping image blocks; an estimate of Y is then obtained, and the noise-reduced high-frequency texture image X̂_t is obtained from formula (2):
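Step S4212 sparse-codes each block vector over the trained dictionary with orthogonal matching pursuit (the patent names the Cholesky-accelerated variant; plain OMP is shown here). A minimal sketch follows, using an orthonormal demo dictionary chosen so that greedy recovery is provably exact — real K-SVD dictionaries are overcomplete, and the atom count and sparsity level below are illustrative.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily add the atom most correlated
    with the residual, then re-fit the kept atoms by least squares."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# orthonormal demo dictionary (real trained dictionaries are overcomplete;
# orthonormality just guarantees exact greedy recovery for this demo)
rng = np.random.default_rng(1)
D, _ = np.linalg.qr(rng.normal(size=(40, 40)))
x_true = np.zeros(40)
x_true[[5, 17]] = [2.0, -1.5]             # a 2-sparse code
y = D @ x_true                            # the observed block vector
x_hat = omp(D, y, k=2)                    # recovers the sparse code
```

Denoising then follows because noise is not sparse over the learned atoms: reconstructing each block from only its few largest coefficients discards most of the noise energy, and the overlapping reconstructions are averaged back into the image.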
step S4213: performing an additive operation on the enhanced low-frequency structural image X_cR and the noise-reduced high-frequency texture image X̂_t to obtain the underwater image L(x, y), namely the second suspicious object image.
In this embodiment, the second suspicious object image is obtained by performing refinement processing on the preprocessed first suspicious object image X. Specifically, the enhanced low-frequency structural image X_cR and the noise-reduced high-frequency texture image X̂_t are combined by an additive operation, yielding the underwater image L(x, y) with both enhancement and noise-reduction effects, which the unmanned surface vehicle (USV) uploads to the onshore base station via 5G communication.
Preferably, in step S2, the change rule of the neuron activity in the neuron network algorithm is represented by formula (3):
wherein u_k is the activity value of neuron k in the neural network; u_l is the activity value of a neuron l adjacent to neuron k; the parameters A, B and D are positive constants, −A reflecting the decay rate of the activity value u_k of neuron k; B and D are respectively the upper and lower limit values of u_k, i.e. u_k ∈ [−D, B]; I_k represents the external input signal of neuron k, I_k > 0 representing an excitation signal and I_k < 0 an inhibition signal; the neurons capable of generating an excitation signal for neuron k are limited to the peripheral neuron region whose distance from its position does not exceed √3 grid units, namely the 26 surrounding neurons;
w_kl is the neuron connection weight coefficient between neurons k and l, represented as formula (4):
wherein ‖kl‖ represents the distance between neuron k and neuron l in the neural network, and μ is a constant coefficient.
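The bodies of formulas (3) and (4) are drawings in the source. Given the surrounding description (decay rate A, bounds B and D, excitatory and inhibitory inputs, 26 neighbours, weight μ over distance), a standard shunting-equation form consistent with that description — offered here only as a plausible reconstruction, not the patent's verbatim formula — would read:

```latex
% plausible reconstruction of (3) and (4); [x]^+ = \max(x,0),\ [x]^- = \max(-x,0)
\frac{\mathrm{d}u_k}{\mathrm{d}t}
  = -A\,u_k
  + \bigl(B - u_k\bigr)\!\left([I_k]^{+} + \sum_{l=1}^{26} w_{kl}\,[u_l]^{+}\right)
  - \bigl(D + u_k\bigr)\,[I_k]^{-},
\qquad
w_{kl} = \frac{\mu}{\lVert kl \rVert}
```

In this form the term −A·u_k drives decay toward zero, the excitatory term is bounded above by B and the inhibitory term below by −D, which matches the stated constraint u_k ∈ [−D, B].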
Specifically, the definition of the activity of the external signal corresponding to the neuron in the neural network is expressed as formula (5):
The neuron activities of visited and unvisited positions differ: after a point is visited, the neuron activity of that position is updated in real time by the unmanned surface vehicle (USV), and the updated value is transmitted to the search underwater robot (sAUV) so as to avoid repeated searching of the same area.
Substituting the value of the external signal into formula (3) yields the activity value of each neuron in the neural network. The search underwater robot (sAUV) selects the next navigation position P_n according to these activity values; its search planning path is expressed as formula (6):
where Path is the path set of the sAUV; P_p, P_c and P_n respectively denote the previous-moment position, the current position and the next-moment position of the search underwater robot (sAUV); u_k is the activity value of the current neuron k; u_l is the activity value of a peripheral neuron l; and ‖kl‖ is the Euclidean distance between the current neuron k and the peripheral neuron l.
According to formula (6), when the underwater robot (AUV) searches for the navigation path of the next moment, it compares the neuron activity values around the current neuron and takes the neuron with the largest activity value as the next position, repeating this process until a suspicious object is found.
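The next-position rule just described — move to the 26-neighbourhood cell with the largest activity value, then mark visited cells so they are not searched again — can be sketched as follows; the grid size and activity values are illustrative, not from the patent.

```python
import numpy as np
from itertools import product

def next_position(u, p_c):
    """Among the 26 neighbours of the current cell p_c, move to the
    one with the largest neuron activity value (formula (6) in spirit)."""
    best, best_u = p_c, -np.inf
    for d in product((-1, 0, 1), repeat=3):
        if d == (0, 0, 0):
            continue
        q = tuple(int(v) for v in np.add(p_c, d))
        inside = all(0 <= q[i] < u.shape[i] for i in range(3))
        if inside and u[q] > best_u:
            best, best_u = q, u[q]
    return best

u = np.zeros((5, 5, 5))          # neuron activity values on the 3-D grid
u[2, 3, 2] = 0.9                 # unsearched area: high activity attracts the sAUV
u[2, 1, 2] = -1.0                # obstacle / already-searched cell repels it
path = [(2, 2, 2)]               # current position of the search AUV
for _ in range(2):
    p = next_position(u, path[-1])
    u[p] = -1.0                  # mark visited to avoid repeated search
    path.append(p)               # first move goes to the high-activity cell
```

Setting visited cells to −1, as the text does for obstacles and inspected suspicious objects, is what prevents the greedy argmax rule from oscillating back to previously searched areas.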
When the search underwater robot (sAUV) reaches a suspicious object position, it transmits the captured, time-stamped underwater image to the device above it; the neuron activity corresponding to the external signal is then updated, the activity of the suspicious object being defined as −1, the same as an obstacle. The search underwater robot (sAUV) then proceeds to the next suspicious object position, until it receives from the onshore base station the information that the target object has been found.
In another aspect, the application provides an AUV underwater object searching system based on 5G networking, which comprises:
the construction module is used for constructing an underwater three-dimensional map of the searched water area;
the establishment module is used for establishing a base station database, wherein the base station database comprises a plurality of target object images to be searched;
the dividing module is used for dividing the underwater three-dimensional map by combining a grid method with a neural network algorithm to obtain an underwater robot path planning area;
the marking module is used for marking three-dimensional coordinate positions of a plurality of suspicious points in the path planning area of the underwater robot;
the acquisition module is used for acquiring a first suspicious object image at the three-dimensional coordinate position of the suspicious point;
the image processing module is used for carrying out image processing on the first suspicious object image to obtain a second suspicious object image;
the comparison judging module is used for comparing the second suspicious object image with the target object image to be searched, and if the second suspicious object image is consistent with the target object image to be searched, the search is successful; and if the second suspicious object image is inconsistent with the target object image to be searched, the searching is unsuccessful.
According to the AUV underwater object searching system based on the 5G networking, under the mutual cooperation of the construction module, the establishment module, the division module, the marking module, the acquisition module, the image processing module and the comparison judging module, the underwater robot can accurately and stably search suspicious objects underwater.
According to the scheme, underwater and surface data transmission is realized by the search underwater robot (sAUV) and the transmission underwater robot (cAUV) cooperating with each other, which improves the transmission speed, range and stability of the underwater information obtained by the underwater robots (AUVs). The first suspicious object image is obtained by the search underwater robot (sAUV), preprocessed by the transmission underwater robot (cAUV), and further refined by the unmanned surface vehicle (USV); the two underwater robots (AUVs) thus share the roles, reducing the equipment that each must carry and therefore its power consumption and cost. The unmanned surface vehicle (USV) positions the transmission underwater robot (cAUV), which in turn positions the search underwater robot (sAUV); during positioning, a time-stamped message is sent upward at intervals, and the positions are repeatedly adjusted according to the time difference of the received messages so as to improve the information transmission rate. The positioning of the search underwater robot (sAUV) is thereby more accurate, and using the transmission underwater robot (cAUV) as an intermediate relay reduces the packet loss rate of the underwater data.
According to the scheme, the search water area is divided into a plurality of grids that are combined with the neural network algorithm, and a corresponding neuron activity value is set for each grid. This improves the target search efficiency of the underwater robot (AUV), guarantees its positioning accuracy and communication efficiency, reduces the number of repeated searches, and makes maximal use of the energy of a single underwater robot (AUV).
Preferably, the image processing module includes:
the image preprocessing sub-module is used for preprocessing the first suspicious object image to obtain a preprocessed first suspicious object image;
and the image refining processing sub-module is used for refining the preprocessed first suspicious object image to obtain a second suspicious object image.
In this embodiment, in the image preprocessing sub-module, the first suspicious object image is obtained by the search underwater robot (sAUV) and then preprocessed by the transmission underwater robot (cAUV). Specifically, the scheme places the positioning and track-attitude tasks, which are highly real-time but computationally simple, on the search underwater robot (sAUV), and places the preprocessing of the video image on the transmission underwater robot (cAUV), which reduces the computational burden of both underwater robots (AUVs). In the image refinement processing sub-module, the unmanned surface vehicle (USV) completes the further refinement of the preprocessed first suspicious object image; specifically, the refinement of the video image is placed on the unmanned surface vehicle (USV), and 5G network slicing, with its large bandwidth and low latency, realizes the interactive transmission of the data.
Preferably, the image preprocessing sub-module includes:
The image gray level conversion subunit is used for performing gray level conversion on the first suspicious object image to obtain a first gray level image;
a first processing subunit, configured to translate and stretch the histogram of the first gray image within the range of the limiting value by using the Rayleigh-distribution-based CLAHE-WT, so that the maximum point of the histogram parabola of the first gray image translates to the middle gray level;
and a second processing subunit, configured to stretch the histogram parabola of the first gray image towards the low and high gray levels so that the frequency of each gray level appearing in the first gray image is more balanced, obtaining a second gray image, namely the preprocessed first suspicious object image.
In this embodiment, the image gray-scale conversion subunit converts the color first suspicious object image into a gray image. The first and second processing subunits enhance the image contrast of the first gray image, finally yielding the second gray image.
Preferably, the image refinement processing sub-module includes:
an initialization subunit, for initializing the number of iterations F, the maximum length L_cmax of the first coefficient vector a_c, the maximum length L_tmax of the second coefficient vector a_t, the relaxation coefficient λ, and the first threshold δ_c and second threshold δ_t, wherein δ_c = λ·L_cmax and δ_t = λ·L_tmax;
An iteration subunit, configured to iterate the preprocessed first suspicious object image X for F times;
a reconstructed-image low-frequency signal calculation subunit, for keeping the image high-frequency signal X_t unchanged, updating the image low-frequency signal X_c, and calculating the residual after the update, X'_c, wherein X'_c = X − X_c;
calculating the first coefficient vector a_c, wherein a_c is the sparse coefficient of X'_c, and determining a_c by soft thresholding: with the threshold set to δ_c, a_c is processed to obtain the processed first coefficient â_c, and X_c is reconstructed from â_c to obtain the reconstructed image low-frequency signal X''_c, wherein T_c is the sparse representation of the structural information of X_c;
a reconstructed-image high-frequency signal calculation subunit, for keeping the image low-frequency signal X_c unchanged, updating the image high-frequency signal X_t, and calculating the residual after the update, X'_t = X − X_t;
calculating the second coefficient vector a_t, wherein a_t is the sparse coefficient of X'_t, and determining a_t by soft thresholding: with the threshold set to δ_t, a_t is processed to obtain the processed second coefficient â_t, and X_t is reconstructed from â_t to obtain the reconstructed image high-frequency signal X''_t, wherein T_t is the sparse representation of the structural information of X_t;
an updating subunit, for updating the first threshold δ_c as δ_c = δ_c − λ, the algorithm ending if δ_c < λ; or updating the second threshold δ_t as δ_t = δ_t − λ, the algorithm ending if δ_t < λ;
a first operation subunit, for performing single-scale Retinex enhancement on the gray image of the reconstructed image low-frequency signal X''_c, inputting the Gaussian surround scale c; under discrete conditions the integration is converted into a summation, and the value of the scale parameter λ_R is determined;
a second operation subunit, for calculating X_r(x, y) by formula (1); for a single scale, the input image is S(x, y), the luminance image is L(x, y), and the reflectance image is R(x, y):

X_r(x, y) = log S(x, y) − log[F(x, y) ∗ S(x, y)]    (1)

wherein X_r(x, y) is the first output image, ∗ is the convolution operator, F(x, y) is the center-surround function, F(x, y) = K·exp(−(x² + y²)/c²) with K a normalization constant, c is the Gaussian surround scale, and ∬ F(x, y) dx dy = 1;
an enhanced low-frequency image acquisition subunit, for converting X_r(x, y) from the logarithmic domain to the real domain and performing linear stretching to obtain the second output image X_cR(x, y), i.e. the enhanced low-frequency structural image X_cR, stored in double-precision form;
a vector set acquisition subunit, for performing noise reduction using the K-SVD algorithm: selecting the sliding factor s = 1 and the block scale √n × √n, collecting the image I, and obtaining the vector set Y = {y_i}, i = 1, …, M, according to the block iteration strategy, wherein y_i is the i-th iterated block vector and M is the number of block vectors, satisfying M = N, N being the total number of pixel blocks in the input image;
a vector set sampling subunit, for sampling the vector set Y by random sampling and extracting the sampled block vectors into the training sample set Y′ = {y′_i}, i = 1, …, M′, wherein y′_i is the i-th training sample and M′ is the number of training samples, with 0 < M′ ≤ M;
the sample set training subunit is used for carrying out dictionary training on the training sample set Y' based on a sparse K-SVD method to obtain a sparse dictionary
a training dictionary acquisition subunit, for substituting the obtained sparse dictionary into the formula to obtain the training dictionary, wherein A is the basic dictionary;
a noise-reduced high-frequency image acquisition subunit, for using the obtained training dictionary and applying the OMP-Cholesky algorithm to sparse-code the vector set Y of all overlapping blocks to obtain a sparse matrix, wherein R_ij is the extraction operator for the overlapping image blocks; an estimate of Y is then obtained, and the noise-reduced high-frequency texture image X̂_t is obtained from formula (2):
a third operation subunit, for performing an additive operation on the enhanced low-frequency structural image X_cR and the noise-reduced high-frequency texture image X̂_t to obtain the underwater image L(x, y), namely the second suspicious object image.
In this embodiment, the enhanced low-frequency structural image and the noise-reduced high-frequency texture image are obtained through the cooperation of the initialization subunit, the iteration subunit, the reconstructed-image low-frequency signal calculation subunit, the reconstructed-image high-frequency signal calculation subunit, the updating subunit, the first operation subunit, the second operation subunit, the enhanced low-frequency image acquisition subunit, the vector set acquisition subunit, the vector set sampling subunit, the sample set training subunit, the training dictionary acquisition subunit and the noise-reduced high-frequency image acquisition subunit. In the third operation subunit, the enhanced low-frequency structural image and the noise-reduced high-frequency texture image are combined by an additive operation, yielding the underwater image with both enhancement and noise-reduction effects.
Preferably, in the dividing module, the underwater three-dimensional map is divided by combining a grid method with a neuron network algorithm, and a change rule of neuron activity in the neuron network algorithm is expressed as a formula (3):
wherein u_k is the activity value of neuron k in the neural network; u_l is the activity value of a neuron l adjacent to neuron k; the parameters A, B and D are positive constants, −A reflecting the decay rate of the activity value u_k of neuron k; B and D are respectively the upper and lower limit values of u_k, i.e. u_k ∈ [−D, B]; I_k represents the external input signal of neuron k, I_k > 0 representing an excitation signal and I_k < 0 an inhibition signal; the neurons capable of generating an excitation signal for neuron k are limited to the peripheral neuron region whose distance from its position does not exceed √3 grid units, namely the 26 surrounding neurons;
w_kl is the neuron connection weight coefficient between neurons k and l, represented as formula (4):
wherein ‖kl‖ represents the distance between neuron k and neuron l in the neural network, and μ is a constant coefficient.
Specifically, the definition of the activity of the external signal corresponding to the neuron in the neural network is expressed as formula (5):
The neuron activities of visited and unvisited positions differ: after a point is visited, the neuron activity of that position is updated in real time by the unmanned surface vehicle (USV), and the updated value is transmitted to the search underwater robot (sAUV) so as to avoid repeated searching of the same area.
Substituting the value of the external signal into formula (3) yields the activity value of each neuron in the neural network. The search underwater robot (sAUV) selects the next navigation position P_n according to these activity values; its search planning path is expressed as formula (6):
where Path is the path set of the sAUV; P_p, P_c and P_n respectively denote the previous-moment position, the current position and the next-moment position of the search underwater robot (sAUV); u_k is the activity value of the current neuron k; u_l is the activity value of a peripheral neuron l; and ‖kl‖ is the Euclidean distance between the current neuron k and the peripheral neuron l.
According to formula (6), when the underwater robot (AUV) searches for the navigation path of the next moment, it compares the neuron activity values around the current neuron and takes the neuron with the largest activity value as the next position, repeating this process until a suspicious object is found.
When the search underwater robot (sAUV) reaches a suspicious object position, it transmits the captured, time-stamped underwater image to the device above it; the neuron activity corresponding to the external signal is then updated, the activity of the suspicious object being defined as −1, the same as an obstacle. The search underwater robot (sAUV) then proceeds to the next suspicious object position, until it receives from the onshore base station the information that the target object has been found.
Furthermore, functional units in various embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the invention, and that variations, modifications, alternatives and variations of the above embodiments may be made by those skilled in the art within the scope of the invention.
Claims (7)
1. An AUV underwater object searching method based on 5G networking is characterized in that: the method comprises the following steps:
step S1: constructing an underwater three-dimensional map of a searched water area, and establishing a base station database, wherein the base station database comprises a plurality of target object images to be searched;
step S2: dividing the underwater three-dimensional map by combining a grid method with a neural network algorithm to obtain an underwater robot path planning area;
in step S2, the change rule of the neuron activity in the neuron network algorithm is represented by the following formula (3):
wherein u_k is the activity value of neuron k in the neural network; u_l is the activity value of a neuron l adjacent to neuron k; the parameters A, B and D are positive constants, −A reflecting the decay rate of the activity value u_k of neuron k; B and D are respectively the upper and lower limit values of u_k, i.e. u_k ∈ [−D, B]; I_k represents the external input signal of neuron k, I_k > 0 representing an excitation signal and I_k < 0 an inhibition signal; the neurons capable of generating an excitation signal for neuron k are limited to the peripheral neuron region whose distance from its position does not exceed √3 grid units, namely the 26 surrounding neurons;
w kl the neuron connection weight coefficient between neurons k and l is represented as formula (4):
wherein, kl represents the distance between neuron k and neuron l in the neural network, and mu is a constant coefficient;
step S3: marking a plurality of suspicious point three-dimensional coordinate positions in the path planning area of the underwater robot;
step S4: acquiring a first suspicious object image at a suspicious point three-dimensional coordinate position, and performing image processing on the first suspicious object image to obtain a second suspicious object image;
in step S4, the method specifically includes the following substeps:
step S41: preprocessing the first suspicious object image to obtain a preprocessed first suspicious object image;
step S42: carrying out refinement treatment on the preprocessed first suspicious object image to obtain a second suspicious object image;
in step S42, the method specifically includes the steps of:
step S421: initializing the iteration number F, the maximum length L_cmax of a first coefficient vector a_c, the maximum length L_tmax of a second coefficient vector a_t, a relaxation coefficient λ, a first threshold δ_c and a second threshold δ_t, wherein δ_c = λ·L_cmax and δ_t = λ·L_tmax; performing F iterations on the preprocessed first suspicious object image X;
step S422: keeping the image high-frequency signal X_t unchanged, updating the image low-frequency signal X_c, and calculating the residual part X′_c after updating the image low-frequency signal, wherein X′_c = X - X_c;

calculating a first coefficient vector a_c = T_c·X′_c, wherein T_c is a sparse representation of the structural information of X_c; determining the first coefficient vector a_c by soft thresholding, namely processing a_c with the threshold set to δ_c to obtain the processed first coefficient vector â_c; and reconstructing through the transformation X_c = T_c⁻¹·â_c to obtain the reconstructed image low-frequency signal X_c;
step S423: keeping the image low-frequency signal X_c unchanged, updating the image high-frequency signal X_t, and calculating the residual part X′_t after updating the image high-frequency signal, wherein X′_t = X - X_t;

calculating a second coefficient vector a_t = T_t·X′_t, wherein T_t is a sparse representation of the structural information of X_t; determining the second coefficient vector a_t by soft thresholding, namely processing a_t with the threshold set to δ_t to obtain the processed second coefficient vector â_t; and reconstructing through the transformation X_t = T_t⁻¹·â_t to obtain the reconstructed image high-frequency signal X_t;
step S424: updating the first threshold δ_c by letting δ_c = δ_c - λ, and ending the algorithm if δ_c < λ; or updating the second threshold δ_t by letting δ_t = δ_t - λ, and ending the algorithm if δ_t < λ;
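Steps S421-S424 amount to an alternating soft-thresholding separation of the image into low-frequency (structure) and high-frequency (texture) components with linearly decreasing thresholds. The sketch below is an assumption-laden illustration, not the patent's dictionaries: a 2D FFT stands in for the structure transform T_c, the identity for the texture transform T_t, and each component is updated from the residual left by the other, as in standard morphological component analysis.

```python
import numpy as np

def soft(a, t):
    """Soft-thresholding operator, safe for real or complex coefficients."""
    mag = np.abs(a)
    return a * (np.maximum(mag - t, 0.0) / np.maximum(mag, 1e-12))

def separate(X, F=20, lam=0.01):
    """Alternating structure/texture split with thresholds shrinking by lam each pass."""
    Xc = np.zeros_like(X, dtype=float)           # low-frequency (structure) component
    Xt = np.zeros_like(X, dtype=float)           # high-frequency (texture) component
    d_c = lam * np.abs(np.fft.fft2(X)).max()     # analogue of delta_c = lam * L_cmax
    d_t = lam * np.abs(X).max()                  # analogue of delta_t = lam * L_tmax
    for _ in range(F):
        # S422: hold Xt, update Xc from the residual via the structure transform
        ac = np.fft.fft2(X - Xt)                 # a_c = T_c applied to the residual
        Xc = np.real(np.fft.ifft2(soft(ac, d_c)))
        # S423: hold Xc, update Xt from the residual via the texture transform (identity)
        Xt = soft(X - Xc, d_t)
        # S424: shrink the thresholds; stop once either falls below lam
        d_c -= lam
        d_t -= lam
        if d_c < lam or d_t < lam:
            break
    return Xc, Xt
```

Any pair of sparsifying transforms could replace the FFT/identity choice; the alternating structure is what the claimed steps describe.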
step S425: performing single-scale Retinex enhancement on the gray image of the reconstructed image low-frequency signal X_c, inputting the Gaussian surround scale c, converting the integration into a summation under discrete conditions, and determining the value of the scale parameter λ_R;
step S426: calculating X_r(x, y) by formula (1);

for a single scale, the input image is S(x, y), the luminance image is L(x, y), and the reflected image is R(x, y), with S(x, y) = R(x, y)·L(x, y);

X_r(x, y) = log S(x, y) - log[F(x, y) ⊗ S(x, y)]   (1)

wherein X_r(x, y) is the first output image, ⊗ is the convolution operator, F(x, y) is the center-surround function, F(x, y) = λ_R·exp(-(x² + y²)/c²), c is the Gaussian surround scale, and ∬F(x, y) dx dy = 1;
step S427: converting X_r(x, y) from the logarithmic domain to the real domain and performing linear stretching to obtain a second output image X_cR(x, y), namely the enhanced low-frequency structure image X_cR, which is stored in Double form;
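Steps S425-S427 are a single-scale Retinex pass. Below is a minimal sketch, assuming a circular FFT convolution for F ⊗ S and illustrative kernel size and surround scale c; the Gaussian surround is normalised so its coefficients sum to 1, standing in for the constraint ∬F(x, y) dx dy = 1.

```python
import numpy as np

def gaussian_surround(size, c):
    """Centre-surround function F(x,y) = lam_R * exp(-(x^2+y^2)/c^2), normalised to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    F = np.exp(-(xx ** 2 + yy ** 2) / c ** 2)
    return F / F.sum()   # lam_R is absorbed by the normalisation

def single_scale_retinex(S, c=15.0, size=31):
    """Formula (1): X_r = log S - log(F ⊗ S), then linear stretch back to [0, 255] (S427)."""
    H, W = S.shape
    F = gaussian_surround(size, c)
    # place the kernel so its centre sits at (0, 0) for a circular FFT convolution
    Fp = np.zeros((H, W))
    Fp[:size, :size] = F
    Fp = np.roll(np.roll(Fp, -(size // 2), axis=0), -(size // 2), axis=1)
    L = np.real(np.fft.ifft2(np.fft.fft2(S) * np.fft.fft2(Fp)))   # F ⊗ S (wraps at borders)
    Xr = np.log1p(S) - np.log1p(np.maximum(L, 0.0))               # log domain; log1p avoids log 0
    lo, hi = Xr.min(), Xr.max()
    return (Xr - lo) / (hi - lo + 1e-12) * 255.0                  # linear stretch to real domain
```

The stretch in the last line is the S427 conversion; padding-free circular convolution is a simplification, not the claimed behaviour at image borders.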
step S428: performing noise reduction processing with a K-SVD algorithm: selecting a sliding factor s = 1 and a block scale of √n × √n, collecting blocks from an image I, and obtaining a vector set Y = {y_i}, i = 1, …, M, according to a block iteration strategy, wherein y_i is the i-th iteration block vector and M is the number of iteration block vectors, satisfying M ≤ N, N being the total number of pixel blocks in the input image;
step S429: sampling the vector set Y by a random sampling method, and extracting the sampled block vectors into a training sample set Y′ = {y′_i}, i = 1, …, M′, wherein y′_i is the i-th training sample and M′ is the number of training samples, with 0 < M′ ≤ M;
step S4210: performing dictionary training based on the sparse K-SVD method on the training sample set Y′ to obtain a sparse dictionary D_s;
step S4211: substituting the sparse dictionary D_s into the formula D = A·D_s to obtain a training dictionary D, wherein A is a basic dictionary;
step S4212: performing sparse coding on the vector set Y of all overlapped blocks with the obtained training dictionary D by using the OMP-Cholesky algorithm to obtain a sparse matrix α̂, wherein R_ij is the image overlapping-block extraction operator; obtaining an estimate Ŷ of Y through the formula Ŷ = D·α̂; the noise-reduced high-frequency texture image X̂_t is obtained from formula (2):

X̂_t = (Σ_ij R_ij^T·R_ij)⁻¹·(Σ_ij R_ij^T·D·α̂_ij)   (2)
step S4213: performing an additive operation on the enhanced low-frequency structure image X_cR and the noise-reduced high-frequency texture image X̂_t to obtain the underwater image L(x, y), namely the second suspicious object image;
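The denoising path of steps S428-S4212 can be sketched compactly: overlapping blocks are extracted with sliding factor s = 1, each block is sparse-coded, and the denoised image is rebuilt by averaging the overlapping reconstructions (the role of the R_ij^T sums in formula (2)). To keep the sketch short, a fixed overcomplete DCT dictionary stands in for the trained sparse K-SVD dictionary D = A·D_s, and plain OMP for OMP-Cholesky; block size and sparsity level are illustrative assumptions.

```python
import numpy as np

def dct_dictionary(n=8, k=11):
    """Overcomplete 2-D DCT dictionary of n*n-pixel atoms (stand-in for D = A·D_s)."""
    D1 = np.cos(np.outer(np.arange(n), np.arange(k) * np.pi / k))
    D1 -= D1.mean(axis=0, keepdims=True)
    D1[:, 0] = 1.0 / np.sqrt(n)                  # restore the DC atom
    D1 /= np.linalg.norm(D1, axis=0)
    return np.kron(D1, D1)                       # shape (n*n, k*k)

def omp(D, y, n_atoms=4):
    """Greedy orthogonal matching pursuit: sparse code of block y over dictionary D."""
    r, idx = y.copy(), []
    for _ in range(n_atoms):
        idx.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef
    a = np.zeros(D.shape[1])
    a[idx] = coef
    return a

def ksvd_style_denoise(I, n=8, n_atoms=4):
    """Sparse-code every overlapping block, then average the overlaps (formula (2) analogue)."""
    H, W = I.shape
    D = dct_dictionary(n)
    out = np.zeros_like(I)
    weight = np.zeros_like(I)
    for i in range(H - n + 1):                   # sliding factor s = 1
        for j in range(W - n + 1):
            y = I[i:i + n, j:j + n].reshape(-1)  # R_ij extracts block y_ij
            y_hat = D @ omp(D, y, n_atoms)       # Ŷ = D·α̂
            out[i:i + n, j:j + n] += y_hat.reshape(n, n)
            weight[i:i + n, j:j + n] += 1.0      # accumulates R_ij^T R_ij
    return out / weight
```

The random-subset training of S429-S4210 is omitted here since the dictionary is fixed; with a learned dictionary the coding and averaging stages are unchanged.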
step S5: comparing the second suspicious object image with the target object image to be searched, and if the second suspicious object image is consistent with the target object image to be searched, representing that the searching is successful; and if the second suspicious object image is inconsistent with the target object image to be searched, the searching is unsuccessful.
2. The AUV underwater object searching method based on 5G networking according to claim 1, characterized in that in step S41, the method specifically includes the following substeps:
step S411: performing gray level conversion on the first suspicious object image to obtain a first gray level image;
step S412: translating and stretching a histogram of the first gray level image within a limit value range by using CLAHE-WT based on Rayleigh distribution, so that a maximum point of a histogram parabola of the first gray level image translates to a middle gray level;
step S413: the histogram parabola of the first gray level image is stretched towards the low gray level and the high gray level, so that the times of each gray level in the first gray level image are more balanced, and a second gray level image, namely a preprocessed first suspicious object image, is obtained.
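The preprocessing of steps S411-S413 can be sketched as a gray conversion followed by a histogram remap that pushes the histogram peak toward mid-gray and stretches the tails. The snippet below uses global histogram matching toward a Rayleigh-shaped target (as in Rayleigh-based CLAHE variants) but omits the CLAHE-WT tiling and wavelet machinery; the Rayleigh parameter sigma is an illustrative assumption.

```python
import numpy as np

def to_gray(rgb):
    """S411: luminance-weighted gray conversion of an H x W x 3 image."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def rayleigh_remap(gray, sigma=0.45):
    """S412-S413: remap intensities toward a Rayleigh-shaped distribution."""
    g = gray / 255.0
    hist, _ = np.histogram(g, bins=256, range=(0.0, 1.0))
    cdf = np.cumsum(hist) / hist.sum()               # empirical CDF of the input
    # inverse Rayleigh CDF: x = sigma * sqrt(-2 ln(1 - p)); clip keeps the log finite
    target = sigma * np.sqrt(-2.0 * np.log(1.0 - np.clip(cdf, 0.0, 1.0 - 1e-6)))
    target /= target.max()                           # renormalise the mapping to [0, 1]
    bins = np.clip((g * 255).astype(int), 0, 255)
    return target[bins] * 255.0                      # peak shifted, tails stretched
```

A production CLAHE-WT pipeline would apply a clipped, tile-wise version of this remap and fuse the result in the wavelet domain.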
3. An AUV (autonomous underwater vehicle) underwater object searching system based on 5G networking, characterized in that it uses the AUV underwater object searching method based on 5G networking according to any one of claims 1-2, the system comprising:
the construction module is used for constructing an underwater three-dimensional map of the searched water area and establishing a base station database, wherein the base station database comprises a plurality of target object images to be searched;
The dividing module is used for dividing the underwater three-dimensional map by combining a grid method with a neural network algorithm to obtain an underwater robot path planning area;
the marking module is used for marking three-dimensional coordinate positions of a plurality of suspicious points in the path planning area of the underwater robot;
the acquisition module is used for acquiring a first suspicious object image at the three-dimensional coordinate position of the suspicious point;
the image processing module is used for carrying out image processing on the first suspicious object image to obtain a second suspicious object image;
the comparison judging module is used for comparing the second suspicious object image with the target object image to be searched, and if the second suspicious object image is consistent with the target object image to be searched, the search is successful; and if the second suspicious object image is inconsistent with the target object image to be searched, the searching is unsuccessful.
4. An AUV underwater searching system based on 5G networking according to claim 3, wherein: the image processing module includes:
the image preprocessing sub-module is used for preprocessing the first suspicious object image to obtain a preprocessed first suspicious object image;
and the image refining processing sub-module is used for refining the preprocessed first suspicious object image to obtain a second suspicious object image.
5. The 5G networking-based AUV underwater finder system of claim 4, wherein: the image preprocessing sub-module comprises:
the image gray level conversion subunit is used for performing gray level conversion on the first suspicious object image to obtain a first gray level image;
a first processing subunit, configured to translate and stretch a histogram of the first gray image within a range of a limiting value by using a CLAHE-WT based on rayleigh distribution, so that a maximum point of a histogram parabola of the first gray image translates to a middle gray level;
and the second processing subunit is used for stretching the histogram parabola of the first gray level image to low gray level and high gray level so that the times of each gray level appearing in the first gray level image are more balanced, and a second gray level image, namely a preprocessed first suspicious object image, is obtained.
6. The 5G networking-based AUV underwater finder system of claim 4, wherein: the image refinement processing sub-module comprises:
an initialization subunit, configured to initialize the iteration number F, the maximum length L_cmax of a first coefficient vector a_c, the maximum length L_tmax of a second coefficient vector a_t, a relaxation coefficient λ, a first threshold δ_c and a second threshold δ_t, wherein δ_c = λ·L_cmax and δ_t = λ·L_tmax;

an iteration subunit, configured to perform F iterations on the preprocessed first suspicious object image X;
a reconstructed-image low-frequency signal calculation subunit, configured to keep the image high-frequency signal X_t unchanged, update the image low-frequency signal X_c, and calculate the residual part X′_c after updating the image low-frequency signal, wherein X′_c = X - X_c;

calculate a first coefficient vector a_c = T_c·X′_c, wherein T_c is a sparse representation of the structural information of X_c; determine the first coefficient vector a_c by soft thresholding, namely process a_c with the threshold set to δ_c to obtain the processed first coefficient vector â_c; and reconstruct through the transformation X_c = T_c⁻¹·â_c to obtain the reconstructed image low-frequency signal X_c;
a reconstructed-image high-frequency signal calculation subunit, configured to keep the image low-frequency signal X_c unchanged, update the image high-frequency signal X_t, and calculate the residual part X′_t after updating the image high-frequency signal, wherein X′_t = X - X_t;

calculate a second coefficient vector a_t = T_t·X′_t, wherein T_t is a sparse representation of the structural information of X_t; determine the second coefficient vector a_t by soft thresholding, namely process a_t with the threshold set to δ_t to obtain the processed second coefficient vector â_t; and reconstruct through the transformation X_t = T_t⁻¹·â_t to obtain the reconstructed image high-frequency signal X_t;
an updating subunit, configured to update the first threshold δ_c by letting δ_c = δ_c - λ and end the algorithm if δ_c < λ, or update the second threshold δ_t by letting δ_t = δ_t - λ and end the algorithm if δ_t < λ;
a first operator subunit, configured to perform single-scale Retinex enhancement on the gray image of the reconstructed image low-frequency signal X_c, input the Gaussian surround scale c, convert the integration into a summation under discrete conditions, and determine the value of the scale parameter λ_R;
a second operator subunit, configured to calculate X_r(x, y) by formula (1);

for a single scale, the input image is S(x, y), the luminance image is L(x, y), and the reflected image is R(x, y), with S(x, y) = R(x, y)·L(x, y);

X_r(x, y) = log S(x, y) - log[F(x, y) ⊗ S(x, y)]   (1)

wherein X_r(x, y) is the first output image, ⊗ is the convolution operator, F(x, y) is the center-surround function, F(x, y) = λ_R·exp(-(x² + y²)/c²), c is the Gaussian surround scale, and ∬F(x, y) dx dy = 1;
an enhanced low-frequency image acquisition subunit, configured to convert X_r(x, y) from the logarithmic domain to the real domain and perform linear stretching to obtain a second output image X_cR(x, y), namely the enhanced low-frequency structure image X_cR, which is stored in Double form;
a vector set acquisition subunit, configured to perform noise reduction processing with a K-SVD algorithm: select a sliding factor s = 1 and a block scale of √n × √n, collect blocks from an image I, and obtain a vector set Y = {y_i}, i = 1, …, M, according to a block iteration strategy, wherein y_i is the i-th iteration block vector and M is the number of iteration block vectors, satisfying M ≤ N, N being the total number of pixel blocks in the input image;
a vector set sampling subunit, configured to sample the vector set Y by a random sampling method and extract the sampled block vectors into a training sample set Y′ = {y′_i}, i = 1, …, M′, wherein y′_i is the i-th training sample and M′ is the number of training samples, with 0 < M′ ≤ M;
a sample set training subunit, configured to perform dictionary training based on the sparse K-SVD method on the training sample set Y′ to obtain a sparse dictionary D_s;
a training dictionary acquisition subunit, configured to substitute the sparse dictionary D_s into the formula D = A·D_s to obtain a training dictionary D, wherein A is a basic dictionary;
a noise-reduced high-frequency image acquisition subunit, configured to perform sparse coding on the vector set Y of all overlapped blocks with the obtained training dictionary D by using the OMP-Cholesky algorithm to obtain a sparse matrix α̂, wherein R_ij is the image overlapping-block extraction operator; obtain an estimate Ŷ of Y through the formula Ŷ = D·α̂; and obtain the noise-reduced high-frequency texture image X̂_t from formula (2):

X̂_t = (Σ_ij R_ij^T·R_ij)⁻¹·(Σ_ij R_ij^T·D·α̂_ij)   (2)
a third operator subunit, configured to perform an additive operation on the enhanced low-frequency structure image X_cR and the noise-reduced high-frequency texture image X̂_t to obtain the underwater image L(x, y), namely the second suspicious object image.
7. An AUV underwater object searching system based on 5G networking according to claim 3, characterized in that: in the dividing module, the underwater three-dimensional map is divided by combining a grid method with a neural network algorithm, wherein the change rule of the neuron activity in the neural network algorithm is expressed as formula (3):

du_k/dt = -A·u_k + (B - u_k)·([I_k]+ + Σ_l w_kl·[u_l]+) - (D + u_k)·[I_k]-   (3)

wherein u_k is the activity value of neuron k in the neural network; u_l is the activity value of a neuron l adjacent to neuron k; [x]+ = max(x, 0) and [x]- = max(-x, 0); the parameters A, B and D are positive constants, where -A reflects the decay rate of the activity value u_k of neuron k, and B and D are respectively the upper and lower limit values of u_k, i.e. u_k ∈ [-D, B]; I_k represents the external input signal of neuron k: I_k > 0 represents an excitation signal and I_k < 0 represents an inhibition signal; the neurons that can send an excitation signal to neuron k are limited to the peripheral neuron area whose distance from neuron k does not exceed a given radius, namely the 26 peripheral neurons, over which the sum Σ_l runs;

w_kl is the neuron connection weight coefficient between neurons k and l, represented as formula (4):

w_kl = μ/|kl|   (4)

wherein |kl| represents the distance between neuron k and neuron l in the neural network, and μ is a constant coefficient.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310458080.4A CN116243720B (en) | 2023-04-25 | 2023-04-25 | AUV underwater object searching method and system based on 5G networking |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116243720A CN116243720A (en) | 2023-06-09 |
CN116243720B true CN116243720B (en) | 2023-08-22 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111045453A (en) * | 2019-12-25 | 2020-04-21 | 南京工程学院 | Cooperative control system and method based on unmanned ship and multi-underwater robot |
CN111413698A (en) * | 2020-03-04 | 2020-07-14 | 武汉理工大学 | Target positioning method for underwater robot searching and feeling |
CN114248893A (en) * | 2022-02-28 | 2022-03-29 | 中国农业大学 | Operation type underwater robot for sea cucumber fishing and control method thereof |
CN114675643A (en) * | 2022-03-21 | 2022-06-28 | 广州杰赛科技股份有限公司 | Information transmission path planning method and device of wireless sensor network |
CN114779801A (en) * | 2021-01-22 | 2022-07-22 | 中国科学院沈阳自动化研究所 | Autonomous remote control underwater robot path planning method for target detection |
CN115373383A (en) * | 2022-07-15 | 2022-11-22 | 广东工业大学 | Autonomous obstacle avoidance method and device for garbage recovery unmanned boat and related equipment |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10894676B2 (en) * | 2017-07-17 | 2021-01-19 | Symbolic Llc | Apparatus and method for building a pallet load |
US11874407B2 (en) * | 2020-02-19 | 2024-01-16 | Coda Octopus Group Inc. | Technologies for dynamic, real-time, four-dimensional volumetric multi-object underwater scene segmentation |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||