CN117690194B - Multi-source AI biodiversity observation method and acquisition system - Google Patents
- Publication number: CN117690194B (application CN202311681124.6A)
- Authority
- CN
- China
- Prior art keywords
- gps
- unmanned ship
- beidou
- imu
- fusion
- Prior art date: 2023-12-08
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/70—Multimodal biometrics, e.g. combining information from different biometric modalities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/05—Underwater scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/17—Terrestrial scenes taken from planes or by drones
Abstract
The application provides a multi-source AI biodiversity observation method and acquisition system, belonging to the fields of intelligent ecological technology and biodiversity observation. The acquisition system comprises an unmanned ship and an edge server. The unmanned ship in the system includes: a water-surface rotatable motion camera, underwater rotatable motion cameras, an audio acquisition device, a communication antenna, a central control unit, a Beidou/GPS positioning unit, an IMU unit and a storage unit; the edge server in the system performs AI identification of animals and plants. By combining artificial intelligence with high-precision positioning, the application obtains high-quality biological observation data, provides important parameters for evaluating biodiversity, and supplies a basis for scientific protection measures and policy formulation for biodiversity.
Description
Technical Field
The application relates to the field of intelligent ecological technology and the field of biological observation, and in particular to an observation method and acquisition system for biodiversity.
Background
As a branch of intelligent robotics, unmanned surface vessels generally integrate intelligent navigation, image acquisition and data transmission functions, and offer advantages such as small size, low cost, good maneuverability and the ability to carry different sensors, which makes the unmanned ship a highly feasible platform for observing and collecting biodiversity information in ecological environments. However, existing unmanned ships have several problems in observation and collection. First, the acquisition parameters are incomplete: the acquired data contain only the target video or audio, without environmental parameters such as real-time geographic position information or real-time time information, so necessary information is missing in subsequent data analysis. Second, positioning is inaccurate: inland unmanned ships often work in rivers, reservoirs, small lakes and similar areas, where the terrain is narrow and relatively more complex than the open sea; when close to the shore, Beidou or GPS signals are easily lost owing to shielding by buildings or trees, causing positioning to fail or become inaccurate.
Disclosure of Invention
The application aims to solve the above problems in the prior art by providing a multi-source AI biodiversity observation method and acquisition system, which acquire high-quality biodiversity data using artificial intelligence and high-precision positioning, provide important parameters for biodiversity evaluation, and supply a basis for scientific protection measures and policy formulation for biodiversity.
In order to achieve the above object, the present application provides an acquisition system for multi-source AI biodiversity observation, the acquisition system comprising an unmanned ship and an edge server. The unmanned ship in the system includes: a water-surface rotatable motion camera, underwater rotatable motion cameras, an audio acquisition device, a communication antenna, a central control unit, a Beidou/GPS positioning unit, an IMU unit and a storage unit. The communication antenna is fixedly connected to the upper surface of the unmanned ship, and the water-surface rotatable motion camera is connected above the antenna; a fisheye lens is fixedly installed on the motion camera, and its shooting direction is adjusted via the camera gimbal. An underwater rotatable motion camera is fixedly installed on each of the left and right sides of the cavity of the unmanned ship and protrudes out of the cavity; a fisheye lens is installed on each underwater rotatable motion camera and is turned downward by adjusting the camera. The audio acquisition device is installed behind the unmanned ship. The Beidou/GPS positioning unit and the IMU unit realize fusion positioning, and the central control unit is connected with the storage unit, the Beidou/GPS positioning unit, the IMU unit, the control system of the unmanned ship and the control systems of the motion cameras. The edge server in the system performs AI identification of animals and plants.
In some embodiments, a Beidou/GPS/IMU fusion positioning mode is adopted: the longitude and latitude coordinates (lat, lon) of the Beidou or GPS are converted into two-dimensional plane coordinates (x_Beidou/GPS, y_Beidou/GPS), and the pose data of the IMU are converted into two-dimensional plane coordinates (x_IMU, y_IMU).
In some embodiments, the fusion coordinates (x_fusion, y_fusion) of the Beidou/GPS/IMU are obtained by fusing the longitude and latitude coordinates of the Beidou or GPS with the pose data of the IMU, as in the following formulas:

x_fusion = α·x_Beidou/GPS + β·x_IMU (5)

y_fusion = α·y_Beidou/GPS + β·y_IMU (6)

where α + β = 1 (7)

where α and β are the weight of the longitude and latitude coordinates and the weight of the IMU pose data, respectively.
In some embodiments, when the working state of the Beidou/GPS is abnormal, i.e. the Beidou/GPS positioning fails, the system enters a positioning mode with the IMU as the primary source and the Beidou/GPS as the auxiliary source; the weights are then dynamically updated with the update period of the IMU, i.e. β_abn = 1/[1 + κ(t_current - t_IMU_refresh)], where t_current is the current time, t_IMU_refresh is the latest IMU refresh time and κ is a time-decay speed coefficient, and α_abn = 1 - β_abn. Namely the following formulas:

x_fusion = α_abn·x_Beidou/GPS + β_abn·x_IMU (8)

y_fusion = α_abn·y_Beidou/GPS + β_abn·y_IMU (9)

where

β_abn = 1/[1 + κ(t_current - t_IMU_refresh)] (10)

α_abn = 1 - β_abn (11).
In some embodiments, when the working state of the Beidou/GPS is normal, i.e. the Beidou/GPS positioning is effective, the system enters a positioning mode with the Beidou/GPS as the primary source and the IMU as the auxiliary source; the weights are then dynamically updated with the update period of the Beidou/GPS, i.e. α_norm = 1/[1 + κ(t_current - t_Beidou/GPS_refresh)], where t_current is the current time, t_Beidou/GPS_refresh is the latest Beidou/GPS refresh time and κ is the time-decay speed coefficient, and β_norm = 1 - α_norm. Namely the following formulas:

x_fusion = α_norm·x_Beidou/GPS + β_norm·x_IMU (12)

y_fusion = α_norm·y_Beidou/GPS + β_norm·y_IMU (13)

where

α_norm = 1/[1 + κ(t_current - t_Beidou/GPS_refresh)] (14)

β_norm = 1 - α_norm (15).
In some embodiments, the two-dimensional curved-surface longitude and latitude coordinates of the Beidou/GPS are converted into two-dimensional plane rectangular coordinates through the Mercator conversion, i.e. the conversion from the spherical coordinates latitude lat and longitude lon to the plane abscissa x and ordinate y is realized through formula (1) and formula (2):

x_Beidou/GPS = lat (1)

y_Beidou/GPS = ln(tan(π/4 + lon/2)) (2)
In some embodiments, the pose data output by the IMU are preprocessed with a sliding weighted filtering algorithm to obtain the pose data of the unmanned ship in the x and y directions, and a five-dimensional vector is constructed from them: linear acceleration and angular velocity in the x direction, linear acceleration and angular velocity in the y direction, and heading angle; this five-dimensional vector serves as the input of a neural network model. The trained neural network model then predicts the cumulative relative displacements (x_IMU, y_IMU) in the x and y directions as its output:

x_IMU = ΣΔx_i (3)

y_IMU = ΣΔy_i (4).
The application also provides a multi-source AI biodiversity acquisition method using the acquisition system according to any of claims 1 to 7, comprising: (1) data acquisition: video of animals and plants on the water surface is acquired through the water-surface rotatable motion camera and the audio acquisition device, and the fusion positioning information and time information of the unmanned ship are added to the acquired video images as attribute information; video of underwater animals and plants is acquired through the underwater rotatable motion cameras, with the fusion positioning information and time information of the unmanned ship likewise added as attribute information; (2) data transmission: the water-surface video, the underwater video data and their attribute information acquired in step (1) are transmitted through the communication antenna to an edge server in the cloud for storage; (3) animal and plant identification: the edge server performs animal and plant identification on the received video; (4) reporting the identification result: if the identification result matches a target object set by the shore-based control unit, the identification results for the target object and the corresponding positioning information of the unmanned ship are reported to a designated receiving device in a preset manner; (5) calculating the included angle: after the target object is identified, the edge server calculates the azimuth information of the target object and sends it to the central control unit of the unmanned ship; (6) adjusting the attitude of the unmanned ship: the central control unit calculates the included angle between the target object and the unmanned ship from the received azimuth information and the current positioning information of the unmanned ship, determines the heading of the unmanned ship accordingly, controls the differential steering of the unmanned ship through the lower computer and, after the attitude is adjusted, drives the unmanned ship toward the azimuth of the target object; (7) target object acquisition: after reaching the target position, target tracking and video alignment are performed, and the video images and audio data of the target object are collected and stored.
In some embodiments, after receiving the azimuth information of the target object and adjusting the heading attitude of the unmanned ship, the central control unit adjusts the gimbal of the water-surface rotatable motion camera so that the fisheye lens on the gimbal faces the azimuth of the target object.
The application has the following advantages:
(1) The application uses a Beidou/GPS/IMU fusion positioning mode, which can overcome problems such as object shielding and signal loss in the complex acquisition environments of the unmanned ship;
(2) To position the unmanned ship accurately, the application creatively provides a positioning fusion calculation method with dynamically adjusted weights, which comprehensively considers factors such as the refresh frequency and accuracy of the positioning signals and dynamically fuses the multi-source positioning information, effectively alleviating the large positioning errors caused by fixed weights;
(3) The application provides a scheme for adding positioning and time information to video attributes, effectively supporting the observation and analysis of, for example, the habits of rare target species in biodiversity research.
Drawings
FIG. 1 is a schematic view of the unmanned ship of the present application;
FIG. 2 shows the flow of the multi-source AI biodiversity observation and collection method of the application.
Detailed Description
The application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the drawings related to the present application are shown.
The unmanned ship used in the application has a mass of about 130 kg, a length of about 2.8 m, a width of about 1.5 m, an endurance of about 20 km, a speed adjustable within 0-1.5 m/s, and a communication distance exceeding 10 km. Its power supply uses 24 V and 12 V lithium battery packs to power the propulsion system and the main-control and sensor systems, respectively. The unmanned ship has autonomous cruising, remote-control and other modes; in autonomous cruising mode, it generally plans a reasonable navigation path according to its working area, target azimuth and operational requirements, or completes automatic cruising according to the expected course, GPS track and route sent by the shore-based control unit.
As shown in fig. 1, the acquisition system for multi-source AI biodiversity observation of the application comprises an unmanned ship and an edge server. The unmanned ship in the system includes: a water-surface rotatable motion camera (2), underwater rotatable motion cameras (5), an audio acquisition device (3), a communication antenna (1), a central control unit (4), a Beidou/GPS positioning unit, an IMU unit and a storage unit. The communication antenna (1) is fixedly connected to the upper surface of the unmanned ship, and the water-surface rotatable motion camera (2) is mounted above the antenna (1); a fisheye lens is fixedly installed on the water-surface rotatable motion camera (2), and its shooting direction is adjusted via the camera gimbal. An underwater rotatable motion camera (5) is fixedly installed on each of the left and right sides of the lower portion of the unmanned ship's hull cavity; a fisheye lens is installed on each underwater rotatable motion camera (5) and is turned downward by adjusting the camera's gimbal. The audio acquisition device (3) is installed at the rear of the unmanned ship. The Beidou/GPS positioning unit and the IMU unit perform fusion positioning, and the central control unit (4) is connected with the storage unit, the Beidou/GPS positioning unit, the IMU unit, the control system of the unmanned ship and the control systems of the motion cameras. The edge server in the system performs AI identification of animals and plants according to the received multi-source information collected by the unmanned ship.
The unmanned ship needs to acquire its own longitude and latitude information while driving toward a target, and this information currently comes from Beidou or GPS. GPS positioning accuracy is about 5 meters and Beidou positioning accuracy is nearly the same, so positioning accuracy is high. Although the unmanned ship can use GPS to learn the longitude and latitude of its current position while driving toward a target fish shoal, the refresh frequency of Beidou and GPS is low, about 1-10 Hz. Moreover, the unmanned ship mostly operates in inland rivers, reservoirs, small lakes and similar areas, where the terrain is complex, the space is narrow and shielding is common, so Beidou/GPS signals are easily lost; when the Beidou/GPS signal is lost, the IMU and other onboard sensors are used to complete the geographic position estimation. An IMU (inertial measurement unit), updated at frequencies up to 1 kHz, contains gyroscopes, accelerometers and magnetometers. The gyroscope measures the angular velocity of the unmanned ship about three axes, from which the hull attitude is calculated; the accelerometer measures the acceleration of the unmanned ship, from which its position can be estimated; the magnetometer (M-sensor), also called a geomagnetic or magnetic sensor, measures the strength and direction of the magnetic field at the unmanned ship, and the ship's position can likewise be calculated from these data. However, the gyroscope and accelerometer in the IMU accumulate error over time, so a single IMU is not suitable for long-duration use. The unmanned ship therefore completes self-positioning by multi-sensor fusion, i.e. accurate real-time positioning in a Beidou/GPS/IMU fusion positioning mode.
The GPS outputs longitude and latitude information (latitude, longitude) in the Earth coordinate system, and the satellite data obtained from the Beidou satellite navigation module, such as longitude, latitude, moving speed and altitude, are likewise based on the Earth's spherical coordinate system, whereas the coordinate system of the unmanned ship is a two-dimensional plane coordinate system (x, y). The longitude and latitude information is therefore not directly usable by the unmanned ship and needs to be converted into a two-dimensional plane rectangular coordinate system. The application converts the satellite longitude and latitude data obtained from the Beidou satellite navigation module, or the longitude and latitude data (lat, lon) obtained from the GPS global positioning system, into the two-dimensional plane rectangular coordinates (x_Beidou/GPS, y_Beidou/GPS) according to the Mercator conversion algorithm from the Earth's spherical coordinate system to a two-dimensional plane rectangular coordinate system. The main step of the coordinate conversion is the Mercator projection, i.e. the conversion from the spherical coordinates latitude lat and longitude lon to the plane abscissa x and ordinate y through formula (1) and formula (2).

x_Beidou/GPS = lat (1)

y_Beidou/GPS = ln(tan(π/4 + lon/2)) (2)
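For illustration, a minimal Python sketch of this conversion, transcribing the application's formulas (1) and (2) literally; the function name is ours, and angles are assumed to be in radians:

```python
import math

def beidou_gps_to_plane(lat: float, lon: float) -> tuple[float, float]:
    """Convert spherical coordinates (lat, lon), in radians, to plane
    rectangular coordinates per formulas (1) and (2) of the application.
    Assumes |lon| < pi/2 so the tangent stays positive."""
    x = lat                                        # formula (1): x = lat
    y = math.log(math.tan(math.pi / 4 + lon / 2))  # formula (2)
    return x, y
```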
The pose data output by the IMU unit include acceleration, angular velocity, heading angle and so on, and they also need to be converted into the two-dimensional rectangular coordinate representation. The application performs this conversion in an AI manner. First, the pose data output by the IMU are preprocessed with a sliding weighted filtering algorithm to obtain the pose data of the unmanned ship in the x and y directions, namely five-dimensional data consisting of the linear acceleration and angular velocity in the x direction, the linear acceleration and angular velocity in the y direction, and the heading angle; these five-dimensional data serve as the input vector of a neural network model. The trained neural network model then predicts the cumulative relative displacements (x_IMU, y_IMU) in the x and y directions, and the predicted result is taken as the output of the neural network model.

x_IMU = ΣΔx_i (3)

y_IMU = ΣΔy_i (4)
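A minimal sketch of this IMU branch, under stated assumptions: the application specifies neither the filter weights nor the network architecture, so the window weights below are arbitrary and `model` stands in for the trained neural network's inference call:

```python
import numpy as np

def sliding_weighted_filter(samples, weights):
    """Smooth one raw IMU channel with a sliding weighted window
    (weights are an assumed example, e.g. [1, 2, 3] favoring recent samples)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the filter preserves scale
    return np.convolve(np.asarray(samples, dtype=float), w[::-1], mode="valid")

def imu_displacement(model, feature_sequence):
    """Accumulate per-step displacements predicted from the 5-D inputs
    (ax, wx, ay, wy, heading): x_IMU = sum(dx_i), y_IMU = sum(dy_i),
    per formulas (3) and (4)."""
    x_imu = y_imu = 0.0
    for features in feature_sequence:
        dx_i, dy_i = model(features)  # trained network's inference call
        x_imu += dx_i
        y_imu += dy_i
    return x_imu, y_imu
```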
The Beidou/GPS positioning data and the IMU pose data are then weighted and fused to obtain positioning data in two-dimensional plane coordinates, denoted (x_fusion, y_fusion).

x_fusion = α·x_Beidou/GPS + β·x_IMU (5)

y_fusion = α·y_Beidou/GPS + β·y_IMU (6)

where α + β = 1 (7)
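Transcribed directly, the static fusion (5)-(7) is one line per axis; the helper below is our naming, with the constraint (7) checked explicitly:

```python
def fuse(x_gps: float, y_gps: float, x_imu: float, y_imu: float,
         alpha: float, beta: float) -> tuple[float, float]:
    """Weighted fusion of Beidou/GPS and IMU plane coordinates,
    formulas (5) and (6), under the constraint alpha + beta = 1 (7)."""
    assert abs(alpha + beta - 1.0) < 1e-9, "formula (7) requires alpha + beta = 1"
    return alpha * x_gps + beta * x_imu, alpha * y_gps + beta * y_imu
```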
Fixed weights, however, cannot reflect how stale the latest measurement from each source is, and they lead to larger positioning errors. To solve this, the application adopts a dynamic weight-updating method that attenuates the positioning confidence according to the time elapsed since the last refresh. Specifically, when the working state of the Beidou/GPS is abnormal, i.e. the Beidou/GPS positioning fails, a positioning mode with the IMU as the primary source and the Beidou/GPS as the auxiliary source is entered; the weights are then dynamically updated with the update period of the IMU, i.e. β_abn = 1/[1 + κ(t_current - t_IMU_refresh)], where t_current is the current time, t_IMU_refresh is the latest IMU refresh time and κ is a time-decay speed coefficient, giving α_abn = 1 - β_abn. Namely the following formulas:

x_fusion = α_abn·x_Beidou/GPS + β_abn·x_IMU (8)

y_fusion = α_abn·y_Beidou/GPS + β_abn·y_IMU (9)

where

β_abn = 1/[1 + κ(t_current - t_IMU_refresh)] (10)

α_abn = 1 - β_abn (11)
When the working state of the Beidou/GPS is normal, i.e. the Beidou/GPS positioning is effective, a positioning mode with the Beidou/GPS as the primary source and the IMU as the auxiliary source is entered; the weights are then dynamically updated with the update period of the Beidou/GPS, i.e. α_norm = 1/[1 + κ(t_current - t_Beidou/GPS_refresh)], where t_current is the current time, t_Beidou/GPS_refresh is the latest Beidou/GPS refresh time and κ is the time-decay speed coefficient, giving β_norm = 1 - α_norm. Namely the following formulas:

x_fusion = α_norm·x_Beidou/GPS + β_norm·x_IMU (12)

y_fusion = α_norm·y_Beidou/GPS + β_norm·y_IMU (13)

where

α_norm = 1/[1 + κ(t_current - t_Beidou/GPS_refresh)] (14)

β_norm = 1 - α_norm (15)
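Combining the two modes, a hedged sketch of the dynamic weight update (formulas (10)-(11) and (14)-(15)); the timestamp units (seconds) and the default κ are assumptions for illustration:

```python
import time

def dynamic_weights(gps_ok: bool, t_gps_refresh: float, t_imu_refresh: float,
                    kappa: float = 0.5, t_now: float | None = None) -> tuple[float, float]:
    """Return (alpha, beta): the weight of the Beidou/GPS coordinates and of
    the IMU pose data, decayed by the staleness of the primary source."""
    if t_now is None:
        t_now = time.time()
    if gps_ok:   # normal state: Beidou/GPS primary, IMU auxiliary
        alpha = 1.0 / (1.0 + kappa * (t_now - t_gps_refresh))  # formula (14)
        beta = 1.0 - alpha                                     # formula (15)
    else:        # abnormal state: IMU primary, Beidou/GPS auxiliary
        beta = 1.0 / (1.0 + kappa * (t_now - t_imu_refresh))   # formula (10)
        alpha = 1.0 - beta                                     # formula (11)
    return alpha, beta
```

The fused position then follows by passing these weights to the `fuse` helper sketched above.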
The method for collecting multi-source information based on the above acquisition system for multi-source AI biodiversity observation is shown in fig. 2 and specifically comprises the following steps:
(1) Video of animals and plants on the water surface is acquired through the water-surface rotatable motion camera (2) and the audio acquisition device (3), and the fusion positioning information of the unmanned ship and the time information at the moment of acquisition are added to the acquired video images as attribute information, facilitating subsequent research, analysis, data backtracking and other processing. Video of underwater animals and plants is acquired through the underwater rotatable motion cameras (5), and the fusion positioning information of the unmanned ship and the time information at the moment of acquisition are likewise added to the acquired video images as attribute information.
(2) At certain time intervals, the water-surface video and underwater video data collected in step (1), together with their attribute information such as positioning information, time information and file size, are transmitted through the communication antenna (1) to an edge server in the cloud for storage. The edge server also performs animal and plant identification on the received video; for the water-surface video, the video and audio can be separated and each passed through AI identification, and the two results can be used separately or integrated to obtain the animal and plant identification result.
(3) After receiving the data uploaded by the unmanned ship, the edge server verifies the integrity of the received data and sends confirmation information to the unmanned ship once verification succeeds; the unmanned ship then deletes the successfully uploaded files locally according to the confirmation information, preventing overflow of the storage space.
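A minimal ship-side sketch of this upload-acknowledge-delete exchange; the transport callables, the message format and the SHA-256 integrity check are our assumptions, since the application only specifies integrity verification followed by confirmed local deletion:

```python
import hashlib
from pathlib import Path

def upload_and_prune(path: Path, send, recv_ack) -> bool:
    """Upload one recording and delete it locally only after the edge
    server confirms an integrity-verified receipt."""
    data = path.read_bytes()
    digest = hashlib.sha256(data).hexdigest()
    send({"name": path.name, "sha256": digest, "payload": data})
    ack = recv_ack()  # edge server re-hashes the payload and replies
    if ack.get("name") == path.name and ack.get("ok"):
        path.unlink()  # free storage space on the unmanned ship
        return True
    return False
```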
(4) If the identification result contains a match with the target object set by the shore-based control unit, the results related to the identification of the target object, such as the image frames containing the target object together with the corresponding positioning information of the unmanned ship, are reported to a designated receiving device in a preset manner.
(5) After the target object is identified, the edge server calculates the azimuth information of the target object and sends it to the central control unit of the unmanned ship. The azimuth information is obtained by combining, at the edge server, the acoustic positioning derived from the audio with the real-time fusion positioning of the unmanned ship, realizing sound-ship cooperative calculation and sound-light cooperative verification, so that the azimuth information of the target object is more accurate.
(6) The central control unit (4) calculates the included angle between the target object and the unmanned ship from the received azimuth information of the target object and the current positioning information of the unmanned ship, determines the heading of the unmanned ship accordingly, controls the differential steering of the unmanned ship through the lower computer and, after the attitude of the unmanned ship is adjusted, drives the unmanned ship toward the azimuth of the target object while adjusting the gimbal of the water-surface rotatable motion camera so that the fisheye lens on it faces the azimuth of the target object.
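For step (6), a hedged sketch of the included-angle calculation in the fused plane coordinate frame; the use of atan2 and the angle convention are our assumptions, as the application does not spell out the exact geometry:

```python
import math

def heading_to_target(ship_xy: tuple[float, float],
                      target_xy: tuple[float, float],
                      ship_heading: float) -> float:
    """Signed turn angle (radians) from the ship's current heading to the
    bearing of the target, computed from fused plane coordinates."""
    bearing = math.atan2(target_xy[1] - ship_xy[1], target_xy[0] - ship_xy[0])
    turn = bearing - ship_heading
    return math.atan2(math.sin(turn), math.cos(turn))  # wrap to (-pi, pi]
```

Under this convention, the sign of the returned angle can directly drive the differential steering: positive for a turn to one side, negative for the other.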
(7) After reaching the target position, target tracking and video alignment are carried out, and video images and audio data of the target object are collected and stored.
The foregoing description is only a preferred embodiment of the present application and is not intended to limit it. Although the application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of their technical features. Any such modifications and equivalent substitutions are intended to fall within the scope of the present application and within its spirit and principles.
Claims (7)
1. An acquisition system for multi-source AI biodiversity observation, the system comprising: an unmanned ship and an edge server, the unmanned ship in the system comprising: a water-surface rotatable motion camera, underwater rotatable motion cameras, an audio acquisition device, a communication antenna, a central control unit, a Beidou/GPS positioning unit, an IMU unit and a storage unit; the communication antenna is fixedly connected to the upper surface of the unmanned ship, the water-surface rotatable motion camera is connected above the antenna, a fisheye lens is fixedly installed on the water-surface rotatable motion camera, and the shooting direction of the fisheye lens is adjusted by adjusting the gimbal of the water-surface rotatable motion camera; an underwater rotatable motion camera is fixedly installed on each of the left and right sides of the cavity of the unmanned ship, a fisheye lens is installed on each underwater rotatable motion camera, and the fisheye lens can be turned downward by adjusting the gimbal of the underwater rotatable motion camera; the audio acquisition device is installed behind the unmanned ship; the Beidou/GPS positioning unit and the IMU unit realize fusion positioning; the central control unit is connected with the storage unit, the Beidou/GPS positioning unit, the IMU unit, the control system of the unmanned ship and the control systems of the motion cameras; and the edge server in the system performs AI identification of animals and plants according to the received multi-source information collected by the unmanned ship;
a Beidou/GPS/IMU fusion positioning mode is adopted, converting the longitude and latitude coordinates (lat, lon) of the Beidou or GPS into two-dimensional plane coordinates (x_Beidou/GPS, y_Beidou/GPS) and the pose data of the IMU into two-dimensional plane coordinates (x_IMU, y_IMU);

the Beidou/GPS/IMU fusion coordinates (x_fusion, y_fusion) are obtained by weighted fusion of the Beidou or GPS longitude and latitude coordinates with the IMU pose data;
when the working state of the Beidou/GPS is abnormal, i.e. the Beidou/GPS positioning fails, a positioning mode with the IMU as the primary source and the Beidou/GPS as the auxiliary source is entered, and the weights are dynamically updated with the update period of the IMU, i.e. β_abn = 1/[1 + κ(t_current - t_IMU_refresh)], where t_current is the current time, t_IMU_refresh is the latest IMU refresh time and κ is a time-decay speed coefficient; then α_abn = 1 - β_abn, where α_abn and β_abn are respectively the weight of the longitude and latitude coordinates and the weight of the IMU pose data when the working state of the Beidou/GPS is abnormal, namely the following formulas:

x_fusion = α_abn·x_Beidou/GPS + β_abn·x_IMU (8)

y_fusion = α_abn·y_Beidou/GPS + β_abn·y_IMU (9)

where

β_abn = 1/[1 + κ(t_current - t_IMU_refresh)] (10)

α_abn = 1 - β_abn (11).
2. The acquisition system according to claim 1, wherein when the working state of the Beidou/GPS is normal, i.e. the Beidou/GPS positioning is effective, a positioning mode with the Beidou/GPS as the primary source and the IMU as the auxiliary source is entered, and the weights are dynamically updated with the update period of the Beidou/GPS, i.e. α_norm = 1/[1 + κ(t_current - t_Beidou/GPS_refresh)], where t_current is the current time, t_Beidou/GPS_refresh is the latest Beidou/GPS refresh time and κ is a time-decay speed coefficient; then β_norm = 1 - α_norm, where α_norm and β_norm are respectively the weight of the longitude and latitude coordinates and the weight of the IMU pose data when the working state of the Beidou/GPS is normal, namely the following formulas:

x_fusion = α_norm·x_Beidou/GPS + β_norm·x_IMU (12)

y_fusion = α_norm·y_Beidou/GPS + β_norm·y_IMU (13)

where

α_norm = 1/[1 + κ(t_current - t_Beidou/GPS_refresh)] (14)

β_norm = 1 - α_norm (15).
3. The acquisition system according to claim 1, wherein the two-dimensional curved-surface longitude and latitude coordinates of the Beidou/GPS are converted into two-dimensional plane rectangular coordinates through the Mercator conversion, i.e. the conversion from the spherical coordinates latitude lat and longitude lon to the plane abscissa x and ordinate y is realized through formula (1) and formula (2):

x_Beidou/GPS = lat (1)

y_Beidou/GPS = ln(tan(π/4 + lon/2)) (2).
4. The acquisition system according to claim 1, wherein the pose data output by the IMU are preprocessed with a sliding weighted filtering algorithm to obtain the pose data of the unmanned ship in the x and y directions, and a five-dimensional vector is constructed from them: linear acceleration and angular velocity in the x direction, linear acceleration and angular velocity in the y direction, and heading angle; this five-dimensional vector serves as the input of a neural network model, and the trained neural network model then predicts the cumulative relative displacements (x_IMU, y_IMU) in the x and y directions as its output.
5. A multi-source AI biodiversity acquisition method using an acquisition system according to any of claims 1-4, comprising: (1) data acquisition: video of animals and plants on the water surface is acquired through the water-surface rotatable motion camera and the audio acquisition device, and the fusion positioning information and time information of the unmanned ship are added to the acquired video images as attribute information; video of underwater animals and plants is acquired through the underwater rotatable motion cameras, with the fusion positioning information and time information of the unmanned ship likewise added to the acquired video images as attribute information; (2) data transmission: the water-surface video, the underwater video data and their attribute information acquired in step (1) are transmitted through the communication antenna to an edge server in the cloud for storage; (3) animal and plant identification: the edge server performs animal and plant identification on the received video; and (4) reporting the identification result: if the identification result matches a target object set by the shore-based control unit, the identification results for the target object and the corresponding positioning information of the unmanned ship are reported to a designated receiving device in a preset manner.
6. The acquisition method of claim 5, further comprising: (5) calculating the included angle: after the target object is identified, the edge server calculates the azimuth information of the target object and sends it to the central control unit of the unmanned ship; (6) adjusting the attitude of the unmanned ship: the central control unit calculates the included angle between the target object and the unmanned ship from the received azimuth information of the target object and the current positioning information of the unmanned ship, determines the heading of the unmanned ship accordingly, controls the differential steering of the unmanned ship through the lower computer and, after the attitude of the unmanned ship is adjusted, drives the unmanned ship toward the azimuth of the target object; (7) target object acquisition: after reaching the target position, target tracking and video alignment are performed, and the video images and audio data of the target object are collected and stored.
7. The method according to claim 5, wherein after receiving the azimuth information of the target object and adjusting the heading attitude of the unmanned ship, the central control unit adjusts the gimbal of the water-surface rotatable motion camera so that the fisheye lens on the gimbal faces the azimuth of the target object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311681124.6A CN117690194B (en) | 2023-12-08 | 2023-12-08 | Multi-source AI biodiversity observation method and acquisition system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117690194A CN117690194A (en) | 2024-03-12 |
CN117690194B true CN117690194B (en) | 2024-06-07 |
Family
ID=90131207
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311681124.6A (CN117690194B, active) | Multi-source AI biodiversity observation method and acquisition system | 2023-12-08 | 2023-12-08 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117690194B (en) |
Patent Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100456041B1 (en) * | 2004-07-28 | 2004-11-10 | (주)지앤제이 | Equipment for Collecting Global Positioning Information |
CN106873578A (en) * | 2017-04-27 | 2017-06-20 | 南通大学 | Unmanned operation intelligence boat equipment and control system |
CN108303988A (en) * | 2018-03-28 | 2018-07-20 | 大连海事大学 | A kind of the target identification tracing system and its working method of unmanned boat |
CN207908979U (en) * | 2018-03-28 | 2018-09-25 | 大连海事大学 | A kind of target identification tracing system of unmanned boat |
CN110243358A (en) * | 2019-04-29 | 2019-09-17 | 武汉理工大学 | The unmanned vehicle indoor and outdoor localization method and system of multi-source fusion |
CN111897350A (en) * | 2020-07-28 | 2020-11-06 | 谈斯聪 | Underwater robot device, and underwater regulation and control management optimization system and method |
CN113124864A (en) * | 2021-04-19 | 2021-07-16 | 江苏虹湾威鹏信息技术有限公司 | Water surface navigation method adopting machine vision and inertial navigation fusion |
CN113985419A (en) * | 2021-10-22 | 2022-01-28 | 中国科学院合肥物质科学研究院 | Water surface robot cooperative obstacle detection and avoidance method and system |
CN114046792A (en) * | 2022-01-07 | 2022-02-15 | 陕西欧卡电子智能科技有限公司 | Unmanned ship water surface positioning and mapping method, device and related components |
CN115236714A (en) * | 2022-05-24 | 2022-10-25 | 芯跳科技(广州)有限公司 | Multi-source data fusion positioning method, device and equipment and computer storage medium |
WO2023226155A1 (en) * | 2022-05-24 | 2023-11-30 | 芯跳科技(广州)有限公司 | Multi-source data fusion positioning method and apparatus, device, and computer storage medium |
CN116448100A (en) * | 2023-03-10 | 2023-07-18 | 华南理工大学 | Multi-sensor fusion type offshore unmanned ship SLAM method |
CN116540696A (en) * | 2023-04-13 | 2023-08-04 | 北京工业大学 | Multi-mode obstacle avoidance system based on multi-sensor fusion of unmanned ship of ROS |
CN116642468A (en) * | 2023-05-31 | 2023-08-25 | 交通运输部天津水运工程科学研究所 | Unmanned aerial vehicle aerial photography and unmanned ship based underwater integrated scanning method |
CN117075158A (en) * | 2023-08-23 | 2023-11-17 | 哈尔滨工业大学 | Pose estimation method and system of unmanned deformation motion platform based on laser radar |
Non-Patent Citations (1)
Title |
---|
Research on an intelligent patrol inspection and surveying-mapping system for forest regions based on the Beidou satellite navigation system; Zhao Yandong; Tu Jiayan; Transactions of the Chinese Society for Agricultural Machinery; 2018-05-23 (No. 07); full text *
Also Published As
Publication number | Publication date |
---|---|
CN117690194A (en) | 2024-03-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Bao et al. | Integrated navigation for autonomous underwater vehicles in aquaculture: A review | |
US10989537B2 (en) | Sonar sensor fusion and model based virtual and augmented reality systems and methods | |
Kinsey et al. | A survey of underwater vehicle navigation: Recent advances and new challenges | |
Whitcomb et al. | Advances in underwater robot vehicles for deep ocean exploration: Navigation, control, and survey operations | |
Chutia et al. | A review of underwater robotics, navigation, sensing techniques and applications | |
CN108227751A (en) | The landing method and system of a kind of unmanned plane | |
CN111966133A (en) | Visual servo control system of holder | |
CN111090283B (en) | Unmanned ship combined positioning and orientation method and system | |
US11396354B2 (en) | Covert underwater navigation via polarimetry | |
CN109813306A (en) | A kind of unmanned vehicle planned trajectory satellite location data confidence level calculation method | |
CN109144105A (en) | A kind of hull bottom intelligence underwater cleaning robot, control system and control method | |
US12111155B2 (en) | Systems and methods for measuring water capacity of polar lakes | |
CN114659496B (en) | Method for monitoring inclination of shipborne Beidou all-in-one machine | |
CN111857176A (en) | GPS unmanned aerial vehicle control method | |
CN101556154A (en) | Positioning and path map generation system and data acquisition analysis method thereof | |
CN114046777A (en) | Underwater optical imaging system and method suitable for large-range shallow sea coral reef drawing | |
CN117690194B (en) | Multi-source AI biodiversity observation method and acquisition system | |
CN110954097A (en) | Navigation positioning method for robot combination | |
KR102682319B1 (en) | Apparatus and method for controlling USV(Unmanned Surface Vehicle) for structural monitoring of offshore power plants | |
CN115479605A (en) | High-altitude long-endurance unmanned aerial vehicle autonomous navigation method based on space target directional observation | |
Romeo et al. | Navigation is key to AUV missions | |
Papalia et al. | SARA, an autonomous underwater vehicle for researches in Antarctica | |
Ballard | Mapping the mid-ocean ridge | |
RU2706434C2 (en) | Autonomous mobile object control system, mainly in difficult navigation conditions | |
Bachmann et al. | Design and evaluation of an integrated GPS/INS system for shallow-water AUV navigation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant |