CN115047890A - Unmanned ship control method, unmanned ship control device and computer-readable storage medium


Info

Publication number
CN115047890A
Authority
CN
China
Prior art keywords
image
unmanned ship
target
water surface
calculating
Prior art date
Legal status
Granted
Application number
CN202210984091.1A
Other languages
Chinese (zh)
Other versions
CN115047890B (en)
Inventor
喻俊志
孔诗涵
孟岩
魏力夫
Current Assignee
Peking University
Original Assignee
Peking University
Priority date
Filing date
Publication date
Application filed by Peking University
Priority to CN202210984091.1A
Publication of CN115047890A
Application granted
Publication of CN115047890B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/0206: Control of position or course in two dimensions specially adapted to water vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an unmanned ship control method, an unmanned ship control device, and a computer-readable storage medium. The control method comprises the following steps: when violent shaking of the unmanned ship is detected during sailing, mechanically stabilizing a pan-tilt camera on the unmanned ship according to an input desired attitude; determining the translation vectors between adjacent frames of the images captured by the mechanically stabilized pan-tilt camera; constructing a target jitter curve from the translation vectors, calculating compensation values from the target jitter curve, and performing stability-augmentation compensation on the captured images according to the compensation values to obtain a target environment image; determining a water surface target in the target environment image, calculating the direction angle of the water surface target, calculating a heading angle from the direction angle, determining relative state information of the water surface target based on the heading angle, and controlling the operation of the unmanned ship according to the relative state information. The invention enables an unmanned ship to effectively acquire the relative state information of a water surface target in a communication-denied environment and prevents the unmanned ship from deviating from its course.

Description

Unmanned ship control method, unmanned ship control device and computer-readable storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method and an apparatus for controlling an unmanned ship, and a computer-readable storage medium.
Background
As is well known, real-time and reliable data communication is essential to the efficient cooperative control of multiple unmanned ships, and the failure of conventional communication means in a communication-denied environment poses a new technical challenge to multi-ship formation control. In such an environment, communication between an unmanned ship and the onshore command system, as well as among the unmanned ships themselves, is jammed; high-speed, reliable communication and data transmission are difficult to guarantee; and the relative states between the unmanned ships may even become unavailable, causing the ships to deviate from their courses and severely degrading formation performance.
Disclosure of Invention
The invention mainly aims to provide an unmanned ship control method, an unmanned ship control device, and a computer-readable storage medium, so as to solve the technical problem of how an unmanned ship can effectively acquire the relative state information of a water surface target in a communication-denied environment and avoid deviating from its course.
In order to achieve the above object, the present invention provides a method for controlling an unmanned ship, which is applied to the unmanned ship, and comprises the following steps:
when detecting that the unmanned ship shakes violently during sailing, mechanically stabilizing a pan-tilt camera on the unmanned ship according to an input desired attitude;
determining the captured images acquired by the pan-tilt camera after mechanical stabilization, determining an original video sequence from the captured images, and calculating the translation vector between adjacent frame images in the original video sequence;
constructing a target jitter curve according to the translation vector, calculating a compensation value according to the target jitter curve, and performing stability augmentation compensation on the acquired image according to the compensation value to obtain a target environment image;
determining a water surface target in the target environment image, and determining the direction angle of the water surface target in the imaging plane of the target environment image according to the contour structure features of the water surface target;
and calculating the heading angle of the water surface target according to the direction angle, determining the relative state information corresponding to the water surface target based on the heading angle, and controlling the unmanned ship to operate according to the relative state information.
Optionally, the step of calculating the translation vector between adjacent frame images in the original video sequence includes:
determining all frame images in the original video sequence, and determining, for each pair of adjacent frames, a previous frame image and a current frame image;
performing a warping transformation on the previous frame image to obtain a transformed image, extracting a first image feature from the transformed image, and extracting a second image feature from the current frame image;
and performing multi-scale cross-correlation splicing on the first image feature and the second image feature, and processing the result through a preset fully connected layer to obtain the translation vector between the current frame image and the previous frame image.
Optionally, the step of constructing the target jitter curve according to the translation vector and calculating the compensation value according to the target jitter curve includes:
constructing a historical jitter curve according to the translation vector, and carrying out normalization processing on the historical jitter curve;
inputting the historical jitter curve subjected to normalization processing into a preset recurrent neural network model for model training to obtain a predicted future jitter curve;
and calculating a compensation value corresponding to the adjacent inter-frame images according to the historical jitter curve and the predicted future jitter curve.
Optionally, the step of constructing a historical jitter curve according to the translation vector includes:
shifting the pixel points of the previous frame image according to the translation vector to obtain a translated image;
cropping the current frame image to obtain a first cropped image, and cropping the translated image to obtain a second cropped image;
and performing back propagation based on the first cropped image, the second cropped image and a preset structural similarity loss function to construct the historical jitter curve of the pan-tilt camera.
Optionally, the step of calculating the heading angle of the water surface target according to the direction angle includes:
constructing an image coordinate system in an imaging plane of the target environment image, and determining an imaging point of a central point of the water surface target in the image coordinate system;
calculating the end point of the imaging-plane direction vector of the water surface target according to the direction angle and the image coordinate system;
and determining a conversion relation between the image coordinate system and a preset world coordinate system, converting the imaging point into a first coordinate in the world coordinate system according to the conversion relation, converting the end point into a second coordinate in the world coordinate system according to the conversion relation, and calculating the heading angle of the water surface target according to the first coordinate and the second coordinate.
Optionally, the step of determining an orientation angle of the water surface target in an imaging plane of the target environment image according to the contour structure feature of the water surface target includes:
extracting the contour structure features of the water surface target, and inputting the contour structure features into a trained lightweight neural network model to obtain the direction angle of the water surface target in the imaging plane of the target environment image, wherein a preset training direction angle is represented by a point on a unit circle, a preset direction-angle loss function is differentiated according to that point, and a preset lightweight neural network model is trained according to the differentiation result to obtain the trained lightweight neural network model.
Optionally, the step of mechanically stabilizing the pan-tilt camera in the unmanned ship according to the input desired attitude includes:
calculating an attitude error according to the input desired attitude and the acquired actual attitude, calculating motor control values according to the attitude error, and mechanically stabilizing the pan-tilt camera in the unmanned ship based on the motor control values.
In addition, in order to achieve the above object, the present invention further provides an unmanned ship control method, which is applied to unmanned ship formation and comprises:
generating relative state information of a water surface target corresponding to the unmanned ship according to the unmanned ship control method described above, wherein the water surface target comprises the other unmanned ships in the unmanned ship formation;
determining a node for each unmanned ship in a preset directed topological graph according to the relative state information, and determining the leader and the followers in the unmanned ship formation according to the nodes in the directed topological graph;
calculating the position-state deviation of each follower relative to the leader, and constructing an adaptive distributed control protocol according to a preset Laplacian matrix, the position-state deviations and the relative state information;
and performing unmanned ship formation control according to the adaptive distributed control protocol.
In addition, in order to achieve the above object, the present invention further provides an unmanned ship control apparatus, which includes a memory, a processor, and an unmanned ship control program stored in the memory and operable on the processor, wherein the unmanned ship control program, when executed by the processor, implements the steps of the unmanned ship control method as described above.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium having an unmanned ship control program stored thereon, the unmanned ship control program, when executed by a processor, implementing the steps of the unmanned ship control method as described above.
According to the invention, when violent shaking of the unmanned ship is detected, the pan-tilt camera is first mechanically stabilized according to the desired attitude, and the captured images are then digitally stabilized. When the unmanned ship faces a complex sea environment with high waves and multiple disturbances, the strong disturbance rejection of mechanical stability augmentation is thus fused with the sensitive micro-jitter compensation of digital stability augmentation, enabling the unmanned ship to see its environment clearly. The translation vectors between adjacent frames of the original video sequence captured by the pan-tilt camera are calculated, a target jitter curve is constructed from the translation vectors, compensation values are calculated from the curve, and stability-augmentation compensation is applied to the captured images to obtain a target environment image; that is, every frame image is compensated so that the final target environment image is sharp and shake-free. The direction angle of the water surface target in the target environment image is then calculated, the heading angle is derived from the direction angle, the relative state information of the water surface target is obtained based on the heading angle, and the unmanned ship is controlled accordingly. The unmanned ship can therefore effectively and accurately acquire the relative state information of the water surface target and, by operating according to that information, avoid deviating from its course.
Drawings
Fig. 1 is a schematic diagram of a terminal/device structure of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a first embodiment of the unmanned ship control method according to the present invention;
FIG. 3 is a schematic flow chart of a third embodiment of the unmanned ship control method according to the present invention;
FIG. 4 is a schematic flow chart of mechanical stability augmentation and digital stability augmentation of the unmanned ship control method according to the invention;
FIG. 5 is a schematic overall flow chart of the unmanned ship control method according to the present invention;
FIG. 6 is a schematic diagram of the self-supervised twin learning network of the unmanned ship control method of the present invention;
FIG. 7 is a schematic diagram of the camera jitter prediction network of the unmanned ship control method according to the present invention;
FIG. 8 is a schematic view of a water surface target orientation vision measurement model of the unmanned ship control method of the present invention;
fig. 9 is a schematic flow chart of a target direction angle estimation network in the unmanned ship control method according to the present invention.
The objects, features and advantages of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention.
The terminal of the embodiment of the invention is unmanned ship control equipment.
As shown in fig. 1, the terminal may include: a processor 1001, such as a CPU; a network interface 1004; a user interface 1003; a memory 1005; and a communication bus 1002. The communication bus 1002 is used to realize connection and communication among these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Optionally, the terminal may further include a camera, a Radio Frequency (RF) circuit, an audio circuit, a WiFi module, and sensors such as light sensors and motion sensors. Specifically, the light sensors may include an ambient light sensor, which adjusts the brightness of the display screen according to the ambient light, and a proximity sensor, which turns off the display screen and/or the backlight when the terminal device is moved to the ear. Of course, the terminal device may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer and an infrared sensor, which are not described here again.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, the memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and an unmanned ship control program.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with it; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to invoke the unmanned ship control program stored in the memory 1005 and perform the following operations:
referring to fig. 2, the present invention provides an unmanned ship control method applied to an unmanned ship, in a first embodiment of the unmanned ship control method, the unmanned ship control method including the steps of:
step S10, when detecting that the unmanned ship shakes violently in the process of sailing, mechanically stabilizing a pan-tilt camera in the unmanned ship according to the input expected posture;
in a communication rejection environment, the traditional communication modes of the unmanned ship, the onshore command system and the unmanned ship are interfered, high-speed and reliable communication and data transmission are difficult to ensure, and even the relative state between the unmanned ships is invalid, so that the formation performance is seriously influenced. With the rapid development of artificial intelligence, robot vision becomes an important means for the robot to know the environment, and the common electromagnetic interference, satellite failure and the like cause the main factors of communication rejection, and the interference to vision perception is small. Therefore, a novel directed contact framework under a communication rejection environment is established based on the active visual perception capability of the unmanned ship, and the distributed visual formation control method under the condition of no global command information is explored, so that the method has important theoretical significance and application value.
Therefore, in the embodiment, a visual formation control framework of 'clear environment' → 'position for recognition' → 'well coordination' is constructed, and cloud deck camera cascade stability augmentation in a high sea wave environment, target course angle real-time estimation based on a lightweight deep learning network, multi-unmanned ship distributed formation control in a directed switching topology and the like are provided.
The ability of the onboard camera to continuously acquire stable images is the foundation of the new vision-based contact architecture in a communication-denied environment. Facing a complex marine environment with high waves and multiple disturbances, a cascade camera pan-tilt stabilization scheme is designed that combines the strong disturbance rejection of mechanical stability augmentation with the sensitive micro-jitter compensation of digital stability augmentation. Meanwhile, an inter-frame camera motion estimation method based on self-supervised twin learning is explored to fit the historical camera jitter curve, and a prediction algorithm for the rhythmic camera jitter curve in a sea-wave environment, based on a Long Short-Term Memory (LSTM) network, is proposed from the historical jitter data. Stability-augmentation compensation of the camera images is thereby achieved, yielding a sharp, shake-free video sequence and providing high-quality perception information for formation control based on the visual contact architecture.
Since global state information is essentially unavailable to an unmanned ship in a communication-denied environment, the relative states between the ship and its neighboring individuals are estimated using the ship's active vision, establishing effective connections between the individuals in the formation. To this end, a visual measurement model of water surface target orientation is constructed, and the mapping between the orientation of a target in the scene plane and its orientation in the imaging plane is analyzed; meanwhile, an imaging-plane target direction estimation method based on a lightweight neural network is proposed. Accurate, real-time target heading angle estimation is then achieved by means of the mapping relation, finally providing real-time, reliable relative state information between unmanned ships for distributed formation control.
To address the switching-topology problem caused by the directed nature of the new visual communication architecture and by interference under complex sea conditions in a communication-denied environment, an adaptive formation control protocol design method with time-varying gains under directed communication topology is proposed based on the relative state information between neighboring unmanned ships. Meanwhile, the essential conditions under which a multi-unmanned-ship system can maintain the desired formation shape and reach the desired velocity under a switching topology are explored, finally achieving stable, coordinated distributed formation control under complex sea conditions.
In this embodiment, the unmanned ship is often affected by sea waves during sailing and shakes violently. The camera system mounted on the unmanned ship shakes with the hull, so the camera images jitter severely and cannot provide effective perception information for the back-end pattern recognition and motion control modules, seriously compromising the safety and stability of multi-ship formation control. Therefore, when violent shaking of the unmanned ship is detected during sailing, cascade stabilization, comprising mechanical stability augmentation and digital stability augmentation, can be applied to the pan-tilt camera on the ship. Mechanical stability augmentation eliminates large-amplitude image jitter by controlling the camera to maintain a desired attitude in a disturbed environment. The cascade scheme in this embodiment comprises first-stage mechanical stability augmentation and second-stage digital stability augmentation. In the mechanical stage, an attitude error is calculated from the input desired attitude and fed into an adaptive controller with disturbance rejection capability, the control value for each axis motor is calculated, and mechanical stability augmentation of the pan-tilt camera is finally achieved. In the digital stage, inter-frame camera motion is estimated from the original video sequence, the rhythmic camera jitter curve in the sea-wave environment is then predicted, stability-augmentation compensation is applied to the input images based on the estimated inter-frame motion, and high-definition, shake-free image information is finally acquired. For example, as shown in fig. 4, in the first-stage mechanical stability augmentation link, the input desired attitude is first obtained and transmitted through an anti-interference sensor to the three-axis pan-tilt on the unmanned ship, on which the pan-tilt camera is mounted. Because the unmanned ship travels at sea under sea-wave interference, the actual attitude of the three-axis pan-tilt is collected by the attitude sensor, the attitude error is calculated and passed to the disturbance-rejection controller, and the control value for each axis motor is calculated to complete the mechanical stabilization of the three-axis pan-tilt. The second-stage digital stability augmentation link then begins: the original video sequence (i.e., the captured images) is obtained from the pan-tilt camera on the three-axis pan-tilt, inter-frame motion estimation is performed (i.e., the translation vectors between adjacent frame images are calculated), camera motion prediction is performed (i.e., the camera motion curve is predicted), and stability-augmentation compensation is applied (i.e., compensation values are calculated and each frame image is compensated), yielding the stabilized video sequence.
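By way of illustration, the following minimal sketch mirrors the two-stage cascade described above under strong simplifying assumptions: a proportional attitude loop stands in for the adaptive disturbance-rejection controller, digital compensation is reduced to a whole-pixel shift of the frame, and all names and gains are hypothetical, not taken from the patent.

```python
import numpy as np

def mechanical_step(desired_att, actual_att, kp=2.0):
    """Stage 1 sketch: per-axis motor commands from the attitude error (rad);
    a plain proportional law stands in for the adaptive controller."""
    return kp * (np.asarray(desired_att) - np.asarray(actual_att))

def digital_step(frame, accumulated_jitter):
    """Stage 2 sketch: shift the frame to cancel the accumulated
    translational jitter (rounded to whole pixels)."""
    dy, dx = np.round(-np.asarray(accumulated_jitter)).astype(int)
    return np.roll(frame, shift=(dy, dx), axis=(0, 1))

motors = mechanical_step([0.0, 0.0, 0.0], [0.05, -0.02, 0.01])
stable = digital_step(np.zeros((480, 640), np.uint8), [3.2, -1.5])
```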
Step S20, determining the captured images acquired by the pan-tilt camera after mechanical stabilization, determining the original video sequence in the captured images, and calculating the translation vector between adjacent frame images in the original video sequence;
the corresponding pixel positions of the same position point in two adjacent frames of the video sequence in the world coordinate system satisfy the following affine relationship, namely:
p t = αH 3x3 p t-1 +β;
wherein p is t And p t-1 Respectively representing world coordinatesIs the homogeneous pixel coordinate, H, of the same position point at t and t-1 3x3 Is a homography matrix with 8 degrees of freedom, and beta is a homogeneous translation vector. Since the time interval between two adjacent frames is very small and can be ignored, only the planar motion of the camera is considered, i.e., α = 0. Due to the identity matrix H 3x3 The method is obtained by resolving an Inertial Measurement Unit (IMU) sensor of a camera pan-tilt, so that only a translation vector beta between two adjacent frames needs to be determined.
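As a concrete illustration of this model, the sketch below (an assumption, not code from the patent) warps the previous frame with the IMU-derived homography using OpenCV, so that only the residual translation β remains to be estimated; a simple phase-correlation call stands in for the learned twin-network estimator introduced later.

```python
import cv2
import numpy as np

def residual_translation(prev_frame, cur_frame, H_imu):
    """Warp the grayscale previous frame by the IMU homography, then
    estimate the remaining inter-frame translation beta."""
    h, w = cur_frame.shape[:2]
    warped = cv2.warpPerspective(prev_frame, H_imu, (w, h))
    # Phase correlation expects same-size float32 single-channel images;
    # it stands in here for the self-supervised twin network.
    (dx, dy), _ = cv2.phaseCorrelate(np.float32(warped), np.float32(cur_frame))
    return np.array([dx, dy])  # beta: translation between adjacent frames
```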
In this embodiment, after the mechanical stability augmentation of the pan-tilt camera is completed, digital stability augmentation is performed, which requires the translation vector between adjacent frames. Therefore, the captured images acquired by the pan-tilt camera are obtained first; then the original video sequence in the captured images (i.e., the sequence of frame images) is determined, each frame image in the sequence is identified, and the translation vector between adjacent frame images is determined. This can be done by inter-frame camera motion estimation based on self-supervised twin learning.
Step S30, constructing the target jitter curve according to the translation vector, calculating the compensation value according to the target jitter curve, and performing stability-augmentation compensation on the captured images according to the compensation value to obtain the target environment image;
In the present embodiment, the target jitter curve may include a historical jitter curve and a predicted future jitter curve. After the translation vectors between adjacent frame images are obtained, the historical jitter curve of the camera can be fitted by accumulation. Since camera jitter in a sea-wave environment is rhythmic, the future jitter curve can be predicted from the historical jitter curve, and the compensation value is then calculated from the historical and predicted future jitter curves. In another scenario, the compensation value may also be calculated directly from the translation vector. Stability-augmentation compensation is then applied to the current frame image of the captured images according to the compensation value; after every frame image has been compensated, the result is taken as the target environment image, so that the unmanned ship can see the environment clearly.
Step S40, determining the water surface target in the target environment image, and determining the direction angle of the water surface target in the imaging plane of the target environment image according to the contour structure features of the water surface target;
Step S50, calculating the heading angle of the water surface target according to the direction angle, determining the relative state information corresponding to the water surface target based on the heading angle, and controlling the operation of the unmanned ship according to the relative state information.
In a communication-denied environment, the global states of the individuals in the formation cannot be acquired by satellite positioning or similar means, and sharing individual states through local communication is also difficult. Therefore, in this embodiment, besides enabling the unmanned ship to acquire a clear target environment image so as to see its surroundings clearly, the heading angle must also be estimated so that the ship can recognize its position. The active vision of the individual unmanned ship can thus be used to perceive the relative motion state of a water surface target (e.g., a neighboring unmanned ship), giving the ship the ability to "recognize its position", establishing effective connections between the unmanned ships, and guaranteeing the distributed formation control of the system. To this end, the position of the water surface target (e.g., a target unmanned ship) is measured from the target environment image, which can be done by binocular ranging; the velocity of the water surface target is measured by differentiating the position results. In addition, the heading angle of the water surface target is required. Parameters such as the heading angle, velocity and position serve as the relative state information corresponding to the water surface target, according to which the unmanned ship is then controlled. When calculating the heading angle, the direction angle of the water surface target in the imaging plane can be estimated by the lightweight-neural-network-based imaging-plane target direction method; that is, the contour structure features of the water surface target are fed into the trained lightweight neural network model to obtain the direction angle of the target in the imaging plane. The visual measurement model of the water surface target orientation is then evaluated to compute the heading angle of the water surface target.
To aid understanding of the principle of the unmanned ship control system in this embodiment, the following description is given.
For example, as shown in fig. 5, a vision-based formation control framework of "seeing the environment clearly" → "recognizing the position" → "achieving coordination" is constructed. In the seeing-clearly link, cascade stabilization of the pan-tilt camera (first-stage mechanical stability augmentation plus second-stage digital stability augmentation) is performed in the high-sea-state environment: a cascade pan-tilt stabilization scheme is designed, a camera motion estimation algorithm based on self-supervised twin learning is studied, and LSTM-based camera jitter curve prediction is investigated. The seeing-clearly link provides low-jitter visual information to the position-recognition link, in which the heading angle of the visual target is estimated in real time based on lightweight deep learning: a visual measurement model of water surface target orientation is constructed, and an imaging-plane target direction estimation method based on a lightweight deep neural network is studied. On the basis of these two links, a new directed, robust, real-time vision-based contact architecture for communication-denied environments is constructed, providing real-time, reliable relative state information to the coordination link. In the coordination link, distributed formation control of the multiple unmanned ships under directed switching topology is performed: an adaptive formation control protocol with time-varying gains is designed, and its effectiveness under switching topology is analyzed theoretically. System integration and experimental verification are supported by the three links, for example through indoor pond experiments, field water tests and large-scale cluster simulations, and the three links are optimized and improved based on the verification results.
In this embodiment, when violent shaking of the unmanned ship is detected, the pan-tilt camera is first mechanically stabilized according to the desired attitude, and the captured images are then digitally stabilized, so that when the unmanned ship faces a complex sea environment with high waves and multiple disturbances, the strong disturbance rejection of mechanical stability augmentation is combined with the sensitive micro-jitter compensation of digital stability augmentation and the unmanned ship can see the environment clearly. The translation vectors between adjacent frames of the original video sequence captured by the pan-tilt camera are calculated, the target jitter curve is constructed from the translation vectors, the compensation value is calculated from the curve, and stability-augmentation compensation is applied to the captured images to obtain the target environment image; that is, every frame image is compensated so that the final target environment image is sharp and shake-free. The direction angle of the water surface target in the target environment image is then calculated, the heading angle is derived from the direction angle, the relative state information of the water surface target is obtained based on the heading angle, and the unmanned ship is controlled accordingly, so that the unmanned ship effectively and accurately acquires the relative state information of the water surface target and avoids deviating from its course.
Further, based on the first embodiment of the present invention, a second embodiment of the unmanned ship control method is provided. In this embodiment, the step of calculating the translation vector between adjacent frame images in the original video sequence in step S20 of the above embodiment is refined, and includes:
step a, determining all frame images in the original video sequence, and determining, for each pair of adjacent frames, a previous frame image and a current frame image;
step b, performing a warping transformation on the previous frame image to obtain a transformed image, extracting a first image feature from the transformed image, and extracting a second image feature from the current frame image;
and step c, performing multi-scale cross-correlation splicing on the first image feature and the second image feature, and processing the result through a preset fully connected layer to obtain the translation vector between the current frame image and the previous frame image.
In this embodiment, after the captured images acquired by the pan-tilt camera are obtained, each frame image of the original video sequence in the captured images is determined, and the translation vector between each frame image and the adjacent previous frame image is calculated. The frame images can be traversed to determine the current frame image and the adjacent previous frame image, which are then passed through the pre-built self-supervised twin learning network to obtain the translation vector estimate, i.e., the translation vector of the current frame. Before the pair is fed to the self-supervised twin learning network, a warping transformation (i.e., a warp perspective transformation) must be applied to the previous frame image to obtain the transformed image; specifically, the warp uses the H_{3×3} matrix estimated from the IMU (Inertial Measurement Unit). The transformed image and the current frame image are then input into the self-supervised twin learning network to obtain the translation vector.
Specifically, the twin neural network (i.e., the self-supervised twin learning network) is a type of neural network architecture containing two or more identical sub-networks. The sub-networks share the same parameters and weights, and parameter updates are applied to them simultaneously. Because of this weight sharing, the number of parameters to optimize is small, the network can be trained successfully on little data, and overfitting is unlikely. In this embodiment, adjacent frame images are therefore compared to estimate their translation transformation vector, i.e., the translation vector, and the network inputs are similar images. The self-supervised twin learning network in this embodiment comprises a twin feature extraction network, a multi-scale cross-correlation splicing module, a fully connected layer, a translation sampler and a self-supervised learning mechanism. For example, as shown in fig. 6, the current frame image I_t and the previous frame image I_{t−1} are acquired, and the previous frame image I_{t−1} is warped to obtain the transformed image W_{t−1}. The current frame image I_t and the transformed image W_{t−1} are input into the twin feature extraction network, which consists of two sub-networks with identical architecture, i.e., CNNs (Convolutional Neural Networks), to extract the first image feature and the second image feature: I_t passes through its CNN branch with the 14×32 convolution layers and the 3×3 and 5×5 convolution kernels to produce one feature tensor, and W_{t−1} passes through the other branch with the same layers to produce the other. The two feature tensors are then spliced by multi-scale cross-correlation, passed through the 7×128 convolution layers and the fully connected layer, and the translation vector is obtained. The translation sampler then determines the translated image from the translation vector, i.e., each pixel point is offset according to the translation vector. Cropping is then applied to the translated image and the current frame image, and a Structural Similarity (SSIM) comparison is performed. Back propagation is then carried out with an SSIM-based loss function, achieving efficient optimization of the network parameters. Here, back propagation solves the gradients by the chain rule of the loss function, thereby training the parameters of the neural network.
Performing multi-scale cross-correlation splicing on the first image feature and the second image feature means performing multi-scale cross-correlation on their feature tensors. Multi-scale cross-correlation extracts image features at multiple scales through the receptive fields of convolution kernels of different sizes, and enriches feature diversity by cross-correlation splicing. Specifically, the multi-scale cross-correlation is:

A = [ f_{3×3}(X_1) ⋆ f_{3×3}(X_2), f_{5×5}(X_1) ⋆ f_{5×5}(X_2) ]

where X_1 and X_2 are the input feature maps, f_{3×3}(·) and f_{5×5}(·) are the feature tensors generated from the input feature maps by convolution kernels of different sizes, "⋆" is the cross-correlation operation, and A is the image feature after multi-scale cross-correlation.
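A sketch of what such a module could look like in PyTorch follows; the layer widths and the zero-offset (per-pixel inner product) form of the correlation are simplifying assumptions rather than the patent's exact architecture.

```python
import torch
import torch.nn as nn

class MultiScaleCrossCorrelation(nn.Module):
    """Correlate twin features at two receptive-field scales (3x3 and 5x5)
    and concatenate the responses; channel counts are illustrative."""
    def __init__(self, in_ch=32, out_ch=32):
        super().__init__()
        self.conv3 = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.conv5 = nn.Conv2d(in_ch, out_ch, 5, padding=2)

    @staticmethod
    def xcorr(a, b):
        # Zero-offset cross-correlation: per-pixel inner product over channels.
        return (a * b).sum(dim=1, keepdim=True)

    def forward(self, x1, x2):
        r3 = self.xcorr(self.conv3(x1), self.conv3(x2))
        r5 = self.xcorr(self.conv5(x1), self.conv5(x2))
        return torch.cat([r3, r5], dim=1)  # A: multi-scale correlated features

m = MultiScaleCrossCorrelation()
A = m(torch.randn(1, 32, 64, 64), torch.randn(1, 32, 64, 64))  # (1, 2, 64, 64)
```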
In this embodiment, the previous frame image and the current frame image of all frame pairs in the captured images are determined, the previous frame image is warped and the first image feature is extracted from it, the second image feature is extracted directly from the current frame image, the two image features are spliced by multi-scale cross-correlation, and the translation vector is obtained through the fully connected layer, thereby guaranteeing the accuracy and validity of the obtained translation vector.
Specifically, the step of constructing a target jitter curve according to the translation vector and calculating a compensation value according to the target jitter curve includes:
d, constructing a historical jitter curve according to the translation vector, and carrying out normalization processing on the historical jitter curve;
step e, inputting the historical jitter curve subjected to normalization processing into a preset recurrent neural network model for model training to obtain a predicted future jitter curve;
and f, calculating a compensation value corresponding to the adjacent inter-frame image according to the historical jitter curve and the predicted future jitter curve.
In the present embodiment, the target jitter curve includes the historical jitter curve and the predicted future jitter curve. After the translation vectors are obtained, the historical jitter curve of the camera can be fitted by accumulation; the historical jitter curve is then normalized and input into an LSTM (Long Short-Term Memory) recurrent neural network for training, and the normalized future jitter curve obtained is used as the predicted future jitter curve. To make the stability-augmentation compensation smoother, the compensation value of the current frame image is calculated from the following quantities: the last center point of the historical jitter curve; the next center point determined by the predicted future jitter curve; and the distances between the compensation point computed from the translation transformation vector of the current frame image and each of these two center points.
For example, as shown in fig. 7, the LSTM network in this embodiment can be trained on the normalized historical jitter curve, i.e., forward propagation through the LSTM yields the normalized predicted jitter curve, which is the predicted future jitter curve. The prediction is then optimized in reverse: according to the predicted future jitter curve and the ground-truth future jitter curve, Adam optimization is performed on a regularized squared-error loss function.
In this embodiment, the historical jitter curve is constructed from the translation vectors, the future jitter curve is predicted by the recurrent neural network model, and the compensation value for the adjacent frame images is calculated from the two curves, making the compensation of adjacent frame images smoother.
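The sketch below illustrates this step under stated assumptions: a small LSTM predicts the next normalized jitter point from a history window, and the compensation value is formed by a distance-weighted blend of the last historical center and the predicted next center. The window size, layer width and blend rule are illustrative stand-ins, since the patent text does not reproduce the exact formula.

```python
import torch
import torch.nn as nn

class JitterLSTM(nn.Module):
    """Predict the next normalized (x, y) jitter point from a history window."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, history):           # history: (batch, T, 2)
        out, _ = self.lstm(history)
        return self.head(out[:, -1])      # next jitter point

def compensation(point, c_hist, c_pred):
    """Distance-weighted blend of the two center points, minus the raw
    compensation point; an illustrative smoothing rule."""
    d_h = torch.linalg.norm(point - c_hist)
    d_p = torch.linalg.norm(point - c_pred)
    target = (d_p * c_hist + d_h * c_pred) / (d_h + d_p + 1e-8)
    return target - point

model = JitterLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
hist = torch.randn(8, 30, 2)              # normalized jitter windows
loss = nn.functional.mse_loss(model(hist), torch.randn(8, 2))  # vs. ground truth
opt.zero_grad(); loss.backward(); opt.step()
```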
Specifically, the step of constructing the historical jitter curve according to the translation vector includes:
step g, shifting the pixel points of the previous frame image according to the translation vector to obtain the translated image;
step h, cropping the current frame image to obtain a first cropped image, and cropping the translated image to obtain a second cropped image;
and step i, performing back propagation based on the first cropped image, the second cropped image and the preset structural similarity loss function to construct the historical jitter curve of the pan-tilt camera.
In this embodiment, after the translation vector of the current frame is obtained, the pixel points of the previous frame image can be shifted by the preset translation sampler, i.e., an offset determined by the translation vector is added to each pixel point, and the offset image is taken as the translated image. A 32×32 crop is then taken from the translated image to obtain the second cropped image, and the current frame image is cropped in the same way to obtain the first cropped image. The first and second cropped images are then evaluated with the preset Structural Similarity (SSIM) loss function, and back propagation is performed according to the result, so that enough translation vectors are obtained to construct the historical jitter curve of the pan-tilt camera.
In this embodiment, the translated image is obtained by shifting the pixel points of the previous frame image according to the translation vector; the translated image and the current frame image are then cropped, and back propagation with the structural similarity loss function is used to construct the historical jitter curve, guaranteeing the accuracy and validity of the constructed curve.
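A minimal sketch of this self-supervised signal, assuming grayscale tensors, a differentiable affine-grid translation sampler, and a simplified single-window SSIM in place of the standard sliding-window form:

```python
import torch
import torch.nn.functional as F

def translate(img, beta):
    """Shift an image tensor (B,1,H,W) by the predicted translation beta
    (B,2, in pixels) using a differentiable grid sample."""
    b, _, h, w = img.shape
    theta = torch.tensor([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]).repeat(b, 1, 1)
    theta[:, 0, 2] = -2.0 * beta[:, 0] / w   # normalize pixel shift to [-1, 1]
    theta[:, 1, 2] = -2.0 * beta[:, 1] / h
    grid = F.affine_grid(theta, img.shape, align_corners=False)
    return F.grid_sample(img, grid, align_corners=False)

def ssim_loss(a, b, c1=0.01**2, c2=0.03**2):
    """Simplified global SSIM (whole-crop statistics), as an assumption."""
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    ssim = ((2*mu_a*mu_b + c1) * (2*cov + c2)) / ((mu_a**2 + mu_b**2 + c1) * (va + vb + c2))
    return 1.0 - ssim

img = torch.rand(1, 1, 64, 64)
shifted = translate(img, torch.tensor([[2.0, -1.0]]))
loss = ssim_loss(img[..., 16:48, 16:48], shifted[..., 16:48, 16:48])  # crop, then compare
```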
Specifically, the step of calculating the heading angle of the water surface target according to the direction angle includes the following steps:
step j, constructing an image coordinate system in an imaging plane of the target environment image, and determining an imaging point of the central point of the water surface target in the image coordinate system;
step k, calculating the end point of the imaging-plane direction vector of the water surface target according to the direction angle and the image coordinate system;
step l, determining the conversion relation between the image coordinate system and the preset world coordinate system, converting the imaging point into a first coordinate in the world coordinate system according to the conversion relation, converting the end point into a second coordinate in the world coordinate system according to the conversion relation, and calculating the heading angle of the water surface target according to the first coordinate and the second coordinate.
In this embodiment, provided that the lens distortion of the pan-tilt camera is negligible and the optical axis of the camera is inclined to the scene plane, the orientation of the water surface target is measured visually. For example, as shown in fig. 8, the world coordinate system has origin O_w and axes X_w, Y_w and Z_w, and an image coordinate system (u, v) is constructed in the imaging plane of the target environment image. When the water surface target is an unmanned surface vessel, the imaging point of the vessel's center in the imaging plane is m = (u_m, v_m), the direction angle of the vessel in the imaging plane (the angle relative to the u-axis) is θ, and the length coefficient of the imaging-plane direction vector is λ, so the end point of the imaging-plane direction vector is

m_e = (u_m + λcosθ, v_m + λsinθ)

Because all points of the scene plane satisfy Z_w = 0 in the world coordinate system, the Faugeras calibration rule gives the plane-to-image mapping

s·(u, v, 1)ᵀ = m′·(X_w, Y_w, 1)ᵀ

where m′ is a 3×3 matrix determined by the camera parameters and s is a scale factor. Therefore m′ must be determined so that the conversion relation between the world coordinate system and the image coordinate system can be established; it can be found from 4 known points using a calibration plate. Once m′ is obtained, the two-dimensional world coordinates of a point in the scene plane are recovered by inverting the mapping:

(X_w, Y_w, 1)ᵀ ∝ m′⁻¹·(u, v, 1)ᵀ

In this way, the world coordinates of the vessel's center point, (X_m, Y_m), i.e., the first coordinate, and of the end point of the direction vector, (X_e, Y_e), i.e., the second coordinate, are obtained, and the heading angle ψ of the unmanned surface vessel, which is its orientation, follows as

ψ = arctan( (Y_e − Y_m) / (X_e − X_m) )
in this embodiment, an image coordinate system is first constructed in an imaging plane of a target environment image, then an imaging point of a central point of a water surface target in the image coordinate system is determined, a terminal point of a direction vector of the imaging plane of the water surface target is calculated, coordinate transformation is performed on the imaging point and the terminal point according to a transformation relation between the image coordinate system and a world coordinate system, and a course angle of the water surface target is calculated according to a first coordinate and a second coordinate after transformation, so that the accuracy and effectiveness of the calculated course angle are ensured.
Specifically, the step of determining the direction angle of the water surface target in the imaging plane of the target environment image according to the contour structure features of the water surface target comprises the following steps:
and m, extracting the contour structure characteristics of the water surface target, inputting the contour structure characteristics into a trained lightweight neural network model for model training to obtain the direction angle of the water surface target in the imaging plane of the target environment image, wherein the preset training direction angle is represented by a point position on a unit circle, a preset direction angle loss function is derived according to the point position, and the model training is carried out on the preset lightweight neural network model according to the derivation result to obtain the trained lightweight neural network model.
In this embodiment, to determine the direction angle of the water surface target, the contour structure features of the target can be extracted from the current frame image and input into the pre-trained lightweight neural network model to obtain the direction angle. That is, this embodiment provides a lightweight neural network based on MobileNetV2 to estimate the imaging-plane target direction angle in real time. By redesigning the convolutional network structure, MobileNetV2 splits conventional convolution into depthwise and pointwise convolutions, effectively reducing the computational complexity of the model, and introduces the inverted residual structure to extract more features and improve accuracy. The inverted residual structure evolved from the residual structure of the ResNet neural network; unlike the residual structure, which first reduces and then raises the dimensionality, the inverted residual structure first raises and then reduces the dimensionality, forming the basic backbone of the model. This design retains feature information effectively while greatly reducing the model's parameters and computation.
When training the preset lightweight neural network model, the training direction angle (set in advance) can be represented by a point on the unit circle; that is, by defining the unit circle, the direction angle θ of the target is represented by the point (cosθ, sinθ). The network output (x̂, ŷ) satisfies x̂² + ŷ² = 1. For back propagation, the standard Smooth-L2 norm loss can be used, or a loss function related to the direction angle can be defined on the unit-circle representation against the true value of the target direction angle, which also satisfies the unit-circle constraint. In addition, to facilitate the optimization of the network parameters, the direction-angle loss function is differentiated along the x-axis direction and along the y-axis direction.
As shown in fig. 9, the network architecture of the lightweight neural network model can perform target detection on the target environment image, extract the contour structure features, feed them into a Conv2d layer, pass them through the inverted residual network and another Conv2d layer, and output the direction angle of the water surface target through a regression layer. The inverted residual network comprises at least the structure (1×1 pointwise conv, 3×3 depthwise conv; 1×1 conv).
In this embodiment, the direction angle of the water surface target is obtained by inputting the contour structure features of the water surface target into the trained lightweight neural network model, which ensures the accuracy and effectiveness of the obtained direction angle.
Further, the step of mechanically stabilizing the pan-tilt camera in the unmanned ship according to the input expected posture includes:
and n, calculating an attitude error according to the input expected attitude and the acquired attitude, calculating a motor control value according to the attitude error, and mechanically stabilizing the pan-tilt camera in the unmanned ship based on the motor control value.
In this embodiment, when the unmanned ship is controlled and clearer pictures are to be taken, mechanical stabilization must be performed first. That is, the input expected attitude of the unmanned ship can be obtained and transmitted to the pan-tilt, the actual attitude is then acquired through the attitude sensor, and the attitude error is calculated from the actual attitude and the expected attitude. The attitude error is input into an adaptive controller with anti-interference capability, the motor control value corresponding to each axis motor is calculated, and each motor is controlled according to its motor control value, so that mechanical stabilization of the pan-tilt camera in the unmanned ship is achieved and the stability of the pan-tilt camera is improved.
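The attitude loop described above can be sketched as a per-axis control step; the PD law below merely stands in for the adaptive anti-interference controller, whose exact form is not given here, and all gains are hypothetical:

import numpy as np

def gimbal_stabilize_step(desired_att, measured_att, prev_err, dt,
                          kp=4.0, kd=0.5, u_max=1.0):
    # Attitude error per axis (roll, pitch, yaw), wrapped to [-pi, pi).
    err = np.asarray(desired_att) - np.asarray(measured_att)
    err = (err + np.pi) % (2.0 * np.pi) - np.pi
    # Motor control value for each axis motor (PD stand-in for the
    # adaptive controller with anti-interference capability).
    u = kp * err + kd * (err - prev_err) / dt
    return np.clip(u, -u_max, u_max), err  # control values, error for next call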
In the embodiment, the attitude error is calculated according to the expected attitude and the acquired attitude, then the motor control value is calculated according to the attitude error, and the pan-tilt camera in the unmanned ship is mechanically stabilized according to the motor control value, so that the large-amplitude image shake is eliminated, and the pan-tilt camera is in the expected attitude.
Further, referring to fig. 3, a third embodiment of the unmanned ship control method according to the present invention is proposed based on either of the first and second embodiments described above. In this embodiment of the present invention, the unmanned ship control method is applied to unmanned ship formation and includes the steps of:
step S100, generating, according to the unmanned ship control method of the above embodiments, relative state information of a water surface target corresponding to the unmanned ship, wherein the water surface target comprises other unmanned ships in the unmanned ship formation;
step S200, determining nodes of each unmanned ship in a preset directed topological graph according to the relative state information, and determining a pilot and a follower in the unmanned ship formation according to the nodes in the directed topological graph;
step S300, calculating the position state deviation of each follower relative to the pilot, and constructing an adaptive distributed control protocol according to a preset Laplacian matrix, the position state deviations and the relative state information;
and S400, performing unmanned ship formation control according to the adaptive distributed control protocol.
In this embodiment, each unmanned ship in the unmanned ship formation may adopt the unmanned ship control method of any one of the above embodiments, and the water surface target may be set as another unmanned ship in the unmanned ship formation; for example, when the unmanned ship is a follower, the water surface target may be set as the pilot.
Under the communication rejection environment, unmanned inter-ship communication established on the basis of visual perception is unidirectional, and in complex sea conditions wind and wave interference can cause the directed communication topology based on visual communication to switch continuously. Therefore, in this embodiment, based on a second-order multi-agent model, an adaptive distributed formation control method for a directed communication topology is provided to ensure that the multi-unmanned-ship system remains "well coordinated". Specifically, a multi-agent system composed of one leader agent (i.e., the pilot) and N follower agents (i.e., the followers) is considered, and a directed topology graph $\mathcal{G}$ is constructed; in $\mathcal{G}$, node $0$ represents the pilot and nodes $1, \ldots, N$ represent the followers. The dynamics of the ith agent (i.e., the ith unmanned ship) may be written as

$\dot{p}_i = v_i, \qquad \dot{v}_i = u_i$

wherein $p_i$, $v_i$ and $u_i$ respectively represent the position, velocity and control input of the ith agent. Meanwhile, according to the directed topology $\mathcal{G}$, the Laplacian matrix $L = [l_{ij}]$ of the multi-agent system is defined, and the Laplacian matrix satisfies the following relationship:

$l_{ij} = -a_{ij} \;\; (i \neq j), \qquad l_{ii} = \sum_{j \neq i} a_{ij}$

wherein $a_{ij} \geq 0$ denotes the weight of the directed edge from node $j$ to node $i$ in $\mathcal{G}$.
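This definition amounts to L = D − A for the adjacency matrix A of the graph; a one-function check in Python, assuming the convention that A[i, j] is the weight of the edge from node j to node i with zero diagonal:

import numpy as np

def laplacian(A):
    # l_ii = sum_{j != i} a_ij and l_ij = -a_ij for i != j, i.e. L = D - A.
    A = np.asarray(A, dtype=float)
    return np.diag(A.sum(axis=1)) - A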
then, constructing an adaptive distribution control protocol according to a preset Laplace matrix, the position state deviation and the relative state information, namely:
Figure 659898DEST_PATH_IMAGE035
Figure 662620DEST_PATH_IMAGE036
wherein,
Figure 562443DEST_PATH_IMAGE037
the deviation of the position state of the ith follower relative to the pilot is expressed as an expected formation parameter.
Figure 190871DEST_PATH_IMAGE038
The deviation of the position state of the jth follower relative to the pilot is represented as an expected formation parameter.
Figure 821702DEST_PATH_IMAGE039
Indicating a constant parameter set in advance.
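A minimal simulation step for a consensus protocol of this kind is sketched below; the gains alpha and beta, the Euler integration, and the zero input for the pilot are illustrative assumptions rather than the patented protocol:

import numpy as np

def formation_step(p, v, d, A, dt, alpha=1.0, beta=2.0):
    # p, v: (N+1, 2) positions and velocities; index 0 is the pilot.
    # d: (N+1, 2) expected formation offsets relative to the pilot (d[0] = 0).
    # A: (N+1, N+1) adjacency of the directed topology; A[i, j] > 0 means
    #    unmanned ship i visually observes ship j.
    u = np.zeros_like(p)
    for i in range(1, len(p)):           # followers only; the pilot is uncontrolled
        for j in range(len(p)):
            u[i] -= A[i, j] * (alpha * ((p[i] - d[i]) - (p[j] - d[j]))
                               + beta * (v[i] - v[j]))
    # Second-order dynamics: p_dot = v, v_dot = u (one Euler step).
    return p + dt * v, v + dt * u

Switching the adjacency A between calls mimics the continuously switching directed topology caused by wind and wave interference.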
After the adaptive distributed control protocol is constructed, unmanned ship formation control can be performed according to the protocol; for example, 3-5 intelligent unmanned ships can be used to search for persons in distress on the water surface and to deliver rescue supplies.
In addition, to meet the technical challenge of multi-unmanned-ship cooperative formation tasks in the communication rejection environment, this embodiment comprehensively applies knowledge and methods from multiple disciplines such as computer vision, artificial intelligence, control science, pattern recognition and robotics. By providing a camera pan-tilt cascade stabilization method for the sea wave environment and a visual target course-angle estimation method based on lightweight deep learning, a novel vision-based, robust and real-time contact framework is established for multi-unmanned-ship formation when conventional communication means fail; based on this framework, a multi-unmanned-ship distributed formation control method under directed switching topology is provided, finally realizing stable and robust distributed formation control of the multi-unmanned-ship system under the extreme condition in which communication rejection and complex sea conditions coexist.
In this embodiment, the relative state information of each unmanned ship in the unmanned ship formation is obtained first, the nodes in the directed topology graph are then determined according to the relative state information, the pilot and the followers are determined according to the nodes, the adaptive distributed control protocol is constructed according to the position state deviations of the followers relative to the pilot, the Laplacian matrix and the relative state information, and unmanned ship formation control is then performed according to the adaptive distributed control protocol. In this way the unmanned ship can effectively obtain the relative state information of the water surface target in the communication rejection environment, the formation can be controlled as a whole according to that information, and deviation of the unmanned ships from their course is avoided.
In addition, the present invention also provides an unmanned ship control apparatus including: a memory, a processor, and an unmanned ship control program stored on the memory; the processor is used for executing the unmanned ship control program to realize the steps of the unmanned ship control method.
The present invention also provides a computer readable storage medium storing one or more programs, which are also executable by one or more processors for implementing the steps of the above-described unmanned ship control method embodiments.
The specific implementation manner of the computer-readable storage medium of the present invention is substantially the same as that of the above-mentioned embodiments of the unmanned ship control method, and is not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. An unmanned ship control method is characterized by being applied to an unmanned ship and comprising the following steps:
when detecting that the unmanned ship shakes violently in the sailing process, mechanically stabilizing a holder camera in the unmanned ship according to an input expected posture;
determining a collected image collected by the holder camera after the mechanical stabilization, determining an original video sequence in the collected image, and calculating a translation vector of an image between adjacent frames in the original video sequence;
constructing a target jitter curve according to the translation vector, calculating a compensation value according to the target jitter curve, and performing stability-increasing compensation on the acquired image according to the compensation value to obtain a target environment image;
determining a water surface target in the target environment image, and determining a direction angle of the water surface target in an imaging plane of the target environment image according to the contour structure characteristics of the water surface target;
and calculating a course angle of the water surface target according to the direction angle, determining relative state information corresponding to the water surface target based on the course angle, and controlling the unmanned ship to operate according to the relative state information.
2. The unmanned ship control method of claim 1, wherein said step of calculating translation vectors for adjacent inter-frame images in said original video sequence comprises:
determining all inter-frame images in the original video sequence, and determining a previous frame image and a current frame image in each inter-frame image;
performing warping transformation on the previous frame image to obtain a transformed image, extracting a first image feature in the transformed image, and extracting a second image feature of the current frame image;
and performing multi-scale cross-correlation splicing on the first image characteristic and the second image characteristic, and processing through a preset full connecting layer to obtain translation vectors of the current frame image and the previous frame image.
3. The unmanned ship control method of claim 1, wherein the target jitter curve comprises a historical jitter curve and a predicted future jitter curve, and the step of constructing the target jitter curve according to the translation vector and calculating the compensation value according to the target jitter curve comprises:
constructing a historical jitter curve according to the translation vector, and carrying out normalization processing on the historical jitter curve;
inputting the historical jitter curve subjected to normalization processing into a preset recurrent neural network model for model training to obtain a predicted future jitter curve;
and calculating a compensation value corresponding to the adjacent inter-frame images according to the historical jitter curve and the predicted future jitter curve.
4. The unmanned ship control method of claim 3, wherein said step of constructing a historical jitter curve according to said translation vector comprises:
shifting pixel points of the previous frame of image according to the translation vector to obtain a translated image;
cutting the current frame image to obtain a first cut image, and cutting the translated image to obtain a second cut image;
and performing back propagation based on the first cut image, the second cut image and a preset structure similarity loss function to construct a historical jitter curve of the pan-tilt camera.
5. The unmanned ship control method of claim 1, wherein said step of calculating a course angle of said water surface target according to said direction angle comprises:
constructing an image coordinate system in an imaging plane of the target environment image, and determining an imaging point of a central point of the water surface target in the image coordinate system;
calculating a terminal point of an imaging plane direction vector of the water surface target according to the direction angle and the image coordinate system;
and determining a conversion relation between the image coordinate system and a preset world coordinate system, converting the imaging point into a first coordinate under the world coordinate system according to the conversion relation, converting the terminal point into a second coordinate under the world coordinate system according to the conversion relation, and calculating the course angle of the water surface target according to the first coordinate and the second coordinate.
6. The unmanned ship control method of claim 1, wherein said step of determining a direction angle of said water surface target in an imaging plane of said target environment image according to contour structure features of said water surface target comprises:
extracting the contour structure features of the water surface target, and inputting the contour structure features into a trained lightweight neural network model to obtain the direction angle of the water surface target in the imaging plane of the target environment image, wherein a preset training direction angle is represented by a point on a unit circle, a preset direction-angle loss function is differentiated at that point, and model training is performed on the preset lightweight neural network model according to the differentiation result to obtain the trained lightweight neural network model.
7. The unmanned ship control method of any one of claims 1 to 6, wherein the step of mechanically stabilizing the pan-tilt camera in the unmanned ship according to the input expected attitude comprises:
and calculating attitude errors according to the input expected attitude and the acquired attitude, calculating a motor control value according to the attitude errors, and mechanically stabilizing the pan-tilt camera in the unmanned ship based on the motor control value.
8. An unmanned ship control method is applied to unmanned ship formation and comprises the following steps:
generating, according to the unmanned ship control method of any one of claims 1 to 7, relative state information of a water surface target corresponding to the unmanned ship, wherein the water surface target comprises other unmanned ships in the unmanned ship formation;
determining nodes of each unmanned ship in a preset directed topological graph according to the relative state information, and determining a pilot and a follower in the unmanned ship formation according to the nodes in the directed topological graph;
calculating the position state deviation of each follower relative to the pilot, and constructing an adaptive distributed control protocol according to a preset Laplacian matrix, the position state deviation and the relative state information;
and carrying out unmanned ship formation control according to the adaptive distributed control protocol.
9. An unmanned ship control apparatus, characterized by comprising: a memory, a processor, and an unmanned ship control program stored on the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the unmanned ship control method according to any one of claims 1 to 7 or claim 8.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon an unmanned ship control program which, when executed by a processor, implements the steps of the unmanned ship control method according to any one of claims 1 to 7 or claim 8.
CN202210984091.1A 2022-08-17 2022-08-17 Unmanned ship control method, unmanned ship control device and computer-readable storage medium Active CN115047890B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210984091.1A CN115047890B (en) 2022-08-17 2022-08-17 Unmanned ship control method, unmanned ship control device and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN115047890A true CN115047890A (en) 2022-09-13
CN115047890B CN115047890B (en) 2022-11-01

Family

ID=83168335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210984091.1A Active CN115047890B (en) 2022-08-17 2022-08-17 Unmanned ship control method, unmanned ship control device and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN115047890B (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102073993A (en) * 2010-12-29 2011-05-25 清华大学 Camera self-calibration-based jittering video deblurring method and device
CN103295213A (en) * 2013-06-07 2013-09-11 广州大学 Image stability augmentation algorithm based on object tracking
WO2017140096A1 (en) * 2016-02-18 2017-08-24 北京臻迪科技股份有限公司 Unmanned ship and system
CN105681663A (en) * 2016-02-26 2016-06-15 北京理工大学 Video jitter detection method based on inter-frame motion geometric smoothness
CN110337668A (en) * 2018-04-27 2019-10-15 深圳市大疆创新科技有限公司 Image stability augmentation method and apparatus
CN110337622A (en) * 2018-08-31 2019-10-15 深圳市大疆创新科技有限公司 Vertical tranquilizer control method, vertical tranquilizer and image acquisition equipment
CN110291780A (en) * 2019-05-15 2019-09-27 深圳市大疆创新科技有限公司 Image stability augmentation control method, capture apparatus and moveable platform
CN113228619A (en) * 2020-08-25 2021-08-06 深圳市大疆创新科技有限公司 Shooting control method and device, movable platform and storage medium
CN112102369A (en) * 2020-09-11 2020-12-18 陕西欧卡电子智能科技有限公司 Autonomous inspection method, device and equipment for water surface floating target and storage medium
CN113994134A (en) * 2020-10-15 2022-01-28 深圳市大疆创新科技有限公司 Stability-increasing cradle head and movable platform
CN112631305A (en) * 2020-12-28 2021-04-09 大连海事大学 Anti-collision anti-interference control system for formation of multiple unmanned ships
CN113064434A (en) * 2021-03-27 2021-07-02 西北工业大学 Water surface target detection and tracking control method based on master-slave formation
CN114396945A (en) * 2022-03-24 2022-04-26 陕西欧卡电子智能科技有限公司 Unmanned ship edge cleaning path planning method, device, equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YU Ziquan et al., "Fractional-order intelligent fault-tolerant synchronized tracking control of unmanned aerial vehicle swarms", Scientia Sinica Technologica *
WU Wentao et al., "Distributed time-varying formation control of unmanned ship swarms guided by multiple leaders", Chinese Journal of Ship Research *
HU Jianzhang et al., "Research on unmanned surface vehicle swarm systems", Ship Science and Technology *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116578030A (en) * 2023-05-25 2023-08-11 广州市番高领航科技有限公司 Intelligent control method and system for water inflatable unmanned ship
CN116578030B (en) * 2023-05-25 2023-11-24 广州市番高领航科技有限公司 Intelligent control method and system for water inflatable unmanned ship
CN117528259A (en) * 2024-01-08 2024-02-06 深圳市浩瀚卓越科技有限公司 Intelligent shooting light supplementing method, device and equipment for cradle head and storage medium
CN117528259B (en) * 2024-01-08 2024-03-26 深圳市浩瀚卓越科技有限公司 Intelligent shooting light supplementing method, device and equipment for cradle head and storage medium

Also Published As

Publication number Publication date
CN115047890B (en) 2022-11-01

Similar Documents

Publication Publication Date Title
CN115047890B (en) Unmanned ship control method, unmanned ship control device and computer-readable storage medium
US10659768B2 (en) System and method for virtually-augmented visual simultaneous localization and mapping
US9639913B2 (en) Image processing device, image processing method, image processing program, and storage medium
US11906983B2 (en) System and method for tracking targets
CN110989639B (en) Underwater vehicle formation control method based on stress matrix
CN110517324B (en) Binocular VIO implementation method based on variational Bayesian adaptive algorithm
WO2018120350A1 (en) Method and device for positioning unmanned aerial vehicle
CN110533724B (en) Computing method of monocular vision odometer based on deep learning and attention mechanism
CN112991400B (en) Multi-sensor auxiliary positioning method for unmanned ship
Wang et al. Correlation flow: robust optical flow using kernel cross-correlators
CN108900775B (en) Real-time electronic image stabilization method for underwater robot
CN108444452B (en) Method and device for detecting longitude and latitude of target and three-dimensional space attitude of shooting device
CN110598370B (en) Robust attitude estimation of multi-rotor unmanned aerial vehicle based on SIP and EKF fusion
CN114217303B (en) Target positioning and tracking method and device, underwater robot and storage medium
WO2020004029A1 (en) Control device, method, and program
JP2018009918A (en) Self-position detection device, moving body device, and self-position detection method
CN111461008B (en) Unmanned aerial vehicle aerial photographing target detection method combined with scene perspective information
EP3859275B1 (en) Navigation apparatus, navigation parameter calculation method, and program
WO2018133074A1 (en) Intelligent wheelchair system based on big data and artificial intelligence
Kang et al. Development of a peripheral-central vision system for small UAS tracking
CN116485974A (en) Picture rendering, data prediction and training method, system, storage and server thereof
Indelman Navigation performance enhancement using online mosaicking
CN114494339B (en) Unmanned aerial vehicle target tracking method based on DAMDNet-EKF algorithm
Zhang et al. An integrated unmanned aerial vehicle system for vision based control
CN114358419A (en) Pose prediction method, pose prediction device, storage medium, and electronic apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
OL01 Intention to license declared