CN112783180A - Multi-view camouflage type underwater biological recognition system and method - Google Patents


Info

Publication number
CN112783180A
CN112783180A
Authority
CN
China
Prior art keywords: information, image, underwater, robot, acquiring
Prior art date
Legal status
Granted
Application number
CN202011620877.2A
Other languages
Chinese (zh)
Other versions
CN112783180B (en)
Inventor
Sun Mingshuai (孙铭帅)
Chen Zuozhi (陈作志)
Current Assignee
South China Sea Fisheries Research Institute Chinese Academy Fishery Sciences
Southern Marine Science and Engineering Guangdong Laboratory Guangzhou
Original Assignee
South China Sea Fisheries Research Institute Chinese Academy Fishery Sciences
Southern Marine Science and Engineering Guangdong Laboratory Guangzhou
Priority date
Filing date
Publication date
Application filed by South China Sea Fisheries Research Institute Chinese Academy Fishery Sciences and Southern Marine Science and Engineering Guangdong Laboratory Guangzhou
Priority to CN202011620877.2A
Publication of CN112783180A
Application granted
Publication of CN112783180B
Legal status: Active

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/04: Control of altitude or depth
    • G05D1/06: Rate of change of altitude or depth
    • G05D1/0692: Rate of change of altitude or depth specially adapted for under-water vehicles
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B63: SHIPS OR OTHER WATERBORNE VESSELS; RELATED EQUIPMENT
    • B63C: LAUNCHING, HAULING-OUT, OR DRY-DOCKING OF VESSELS; LIFE-SAVING IN WATER; EQUIPMENT FOR DWELLING OR WORKING UNDER WATER; MEANS FOR SALVAGING OR SEARCHING FOR UNDERWATER OBJECTS
    • B63C11/00: Equipment for dwelling or working underwater; Means for searching for underwater objects
    • B63C11/52: Tools specially adapted for working underwater, not otherwise provided for

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Ocean & Marine Engineering (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to a multi-view camouflage type underwater biological recognition system and method, comprising the following steps: acquiring initial position information and underwater environment information of the robot; determining a moving path according to the initial position information and the underwater environment information to generate path information; judging whether the robot has reached a preset area and, if so, acquiring a target area image to generate image information; carrying out underwater biological recognition according to the image information to obtain result information; and transmitting the result information to the terminal in a preset mode.

Description

Multi-view camouflage type underwater biological recognition system and method
Technical Field
The invention relates to underwater biological recognition systems, and in particular to a multi-view camouflage type underwater biological recognition system and method.
Background
Marine biology is the science that studies the organisms inhabiting marine living spaces (the seas and oceans) and their life histories. Field investigation of marine life is carried out mainly by marine research teams, and fisheries also supply material for scientific research. Shallower sea areas can additionally be surveyed by diving, while deeper sea areas require submarines and remotely operated submersibles. The characteristic elements of marine living spaces are water and salinity, and individual biotopes differ from one another.
Through decades of research by marine science and technology workers, 20,279 marine species have been recorded in the territorial waters of China. They belong to 5 kingdoms and 44 phyla; the animal kingdom is the largest (12,794 species) and the prokaryote kingdom the smallest (229 species). Marine species in China account for about 10 percent of the total number of marine species worldwide, and their quantity accounts for 50 percent. According to their distribution, marine organisms in Chinese sea areas can be roughly divided into two categories: organisms of the water areas and organisms of the mudflats. Among the former, fish, cephalopods (for example the commonly eaten cuttlefish, also called inkfish) and shrimps and crabs are the most important; fish, with the greatest variety and the greatest numbers, form the main body. Species numbers decrease from south to north: the South China Sea has the most species, while the Yellow Sea and the Bohai Sea have fewer.
In order to achieve accurate identification of underwater creatures, a matching system needs to be developed to control the process: the initial position information and underwater environment information of the robot are acquired, path information is generated from them, images of the target area are collected when the robot reaches a preset area, underwater creature identification is performed on the image information, and the results are transmitted to a terminal in a preset mode. How to achieve accurate control of such an underwater creature identification system is a problem to be solved urgently.
Disclosure of Invention
The invention overcomes the defects of the prior art and provides a multi-view camouflage type underwater organism identification system and method.
In order to achieve the purpose, the invention adopts the technical scheme that: a multi-view camouflage type underwater biological identification method comprises the following steps:
acquiring initial position information and underwater environment information of the robot;
determining a moving path according to the initial position information of the robot and the underwater environment information to generate path information;
judging whether the robot has reached a predetermined area;
if so, acquiring a target area image to generate image information;
carrying out underwater biological recognition according to the image information to obtain result information;
and transmitting the result information to the terminal according to a preset mode.
In a preferred embodiment of the present invention, the underwater environment information includes one or more of a current flow rate, a current flow direction, a current pressure, a water depth, and underwater obstacle information.
In a preferred embodiment of the present invention, the determining the moving path according to the initial position information of the robot and the underwater environment information includes:
acquiring an initial position of the robot, and establishing a coordinate network;
dividing the coordinate network into a plurality of grid cells;
obtaining the position information of the obstacle, and calculating the position of the obstacle in the coordinate network;
removing the grid cells that coincide with the obstacle positions;
and searching the optimal path in the residual grid cells by using an optimization algorithm.
In a preferred embodiment of the present invention, acquiring an image of a target area and generating image information specifically includes:
the robot sends a trapping signal when reaching a target area, and attracts fishes to enter the target area;
setting sampling time, and collecting multi-angle images of a target area;
respectively extracting characteristic values from the multiple images, classifying the characteristic values,
calculating the difference value of any two characteristic values in different images, if the difference value is less than a preset threshold value,
then target location identification is performed.
In a preferred embodiment of the present invention, the method further comprises:
acquiring underwater environment information and generating a current control instruction;
acquiring current state information of the robot, and calculating a control action instruction of the next step according to the current control instruction and the current state information;
receiving an underwater environment evaluation feedback signal;
and adjusting the current control instruction according to a preset rule, and storing result information into a database.
In a preferred embodiment of the present invention, the method further comprises:
acquiring a moving instruction, activating a target point state, and generating target point coordinate information;
generating moving path information according to the coordinate information of the target point;
generating obstacle avoidance decisions according to the moving path information;
and carrying out robot dynamic collision avoidance according to the obstacle avoidance decision.
In a preferred embodiment of the present invention, acquiring an image of a target area, generating image information, further comprises:
acquiring a first angle image of a target area to generate first image information;
carrying out image point segmentation on the first image information, and extracting an image point characteristic value;
receiving a rotation instruction, rotating the camera to a preset angle, shooting the image information of the target area again, and generating second image information;
carrying out image point segmentation on the second image information, and extracting an image point characteristic value;
comparing the image point characteristic values in the first image information with those in the second image information to obtain a deviation rate;
and if the deviation rate is smaller than a preset threshold value, focusing image points and identifying underwater organisms.
The second aspect of the present invention also provides a multi-view camouflaged underwater biological recognition system, comprising a memory and a processor, wherein the memory stores a multi-view camouflage type underwater biological recognition method program which, when executed by the processor, implements the following steps:
acquiring initial position information and underwater environment information of the robot;
determining a moving path according to the initial position information of the robot and the underwater environment information to generate path information;
judging whether the robot has reached a predetermined area;
if so, acquiring a target area image to generate image information;
carrying out underwater biological recognition according to the image information to obtain result information;
and transmitting the result information to the terminal according to a preset mode.
In a preferred embodiment of the present invention, the determining the moving path according to the initial position information of the robot and the underwater environment information includes:
acquiring an initial position of the robot, and establishing a coordinate network;
dividing the coordinate network into a plurality of grid cells;
obtaining the position information of the obstacle, and calculating the position of the obstacle in the coordinate network;
removing the grid cells that coincide with the obstacle positions;
and searching the optimal path in the residual grid cells by using an optimization algorithm.
In a preferred embodiment of the present invention, acquiring an image of a target area, generating image information, further comprises:
acquiring a first angle image of a target area to generate first image information;
carrying out image point segmentation on the first image information, and extracting an image point characteristic value;
receiving a rotation instruction, rotating the camera to a preset angle, shooting the image information of the target area again, and generating second image information;
carrying out image point segmentation on the second image information, and extracting an image point characteristic value;
comparing the image point characteristic values in the first image information with those in the second image information to obtain a deviation rate;
and if the deviation rate is smaller than a preset threshold value, focusing image points and identifying underwater organisms.
The invention overcomes the defects in the prior art and has the following beneficial effects:
(1) The same target is intelligently positioned and focused by a multi-angle camera: the rotating camera collects image information at a plurality of different angles, image point characteristic values are compared, and image point focusing and underwater biological identification are performed only when the deviation rate is smaller than a preset threshold value. This avoids the errors of single-image identification and achieves high identification precision.
(2) By establishing a coordinate network and dividing it into grid cells, the robot can obtain the optimal path while traveling along the predetermined path, achieving accurate and rapid travel as well as intelligent obstacle avoidance along the way.
(3) Before the robot performs path planning, the target point state is activated first; only after the target point state has been activated can the target point coordinate information be released for path planning. This prevents misoperation, realizes dynamic collision avoidance while the robot moves, and improves safety.
(4) The underwater robot is camouflaged by its shape and colour, and a reflective coating on its exterior provides supplementary lighting while the camera shoots images. At the same time, luring bait, or bait with a certain smell, is hung on the outside of the robot to attract fish into the visual range of the camera and improve the identification precision.
Drawings
The invention is further illustrated with reference to the following figures and examples.
FIG. 1 shows a flow chart of a multi-view camouflaged underwater biometric identification method of the present invention;
FIG. 2 illustrates a flow chart of a method of searching for an optimal path;
FIG. 3 shows a flow chart of a target location identification method;
FIG. 4 illustrates a flow chart of a dynamic collision avoidance method;
FIG. 5 shows a flow chart of a multi-angle image acquisition method;
fig. 6 shows a block diagram of a multi-view camouflaged underwater biometric identification system.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and therefore the scope of the present invention is not limited by the specific embodiments disclosed below.
Fig. 1 shows a flow chart of a multi-view camouflaged underwater biometric identification method according to the invention.
As shown in fig. 1, a first aspect of the present invention provides a multi-view camouflaged underwater biometric identification method, including:
s102, acquiring initial position information and underwater environment information of the robot;
s104, determining a moving path according to the initial position information of the robot and the underwater environment information, and generating path information;
s106, judging whether the robot reaches a preset area or not,
s108, if the target area image arrives, acquiring the target area image to generate image information;
s110, performing underwater biological identification according to the image information to obtain result information;
and S112, transmitting the result information to the terminal according to a preset mode.
It should be noted that the underwater robot is camouflaged by its shape and colour, and a reflective coating on its exterior provides supplementary lighting while the camera shoots images. At the same time, luring bait, or bait with a certain smell, is hung on the outside of the robot to attract fish into the visual range of the camera and improve the recognition accuracy.
According to the embodiment of the invention, the underwater environment information comprises one or more of water flow velocity, water flow direction, water flow pressure, water depth and underwater obstacle information.
As shown in FIG. 2, the present invention discloses a flow chart of a method for searching an optimal path;
according to the embodiment of the invention, the moving path is determined according to the initial position information and the underwater environment information of the robot, and the method specifically comprises the following steps:
s202, acquiring an initial position of the robot, and establishing a coordinate network;
s204, the coordinate network is divided into a plurality of grid units,
s206, obtaining the position information of the obstacle, calculating the position of the obstacle in the coordinate network,
s208, grid units with the positions of the obstacles coinciding with the coordinate network are removed;
s210, searching for an optimal path in the residual grid cells by using an optimization algorithm.
It should be noted that, by establishing the coordinate network and dividing it into grid cells, the robot can obtain the optimal path while traveling along the predetermined path, achieving accurate and rapid travel as well as intelligent obstacle avoidance along the way.
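The grid search in steps S202-S210 can be sketched as follows. This is a minimal Python illustration, not the patent's implementation: the patent does not name its optimization algorithm, so A* search with a Manhattan-distance heuristic is assumed, and the grid representation and function names are hypothetical.

```python
import heapq

def plan_path(start, goal, size, obstacles):
    """A* search over a coordinate network whose obstacle cells have
    been removed (steps S206-S208).

    start/goal: (row, col) cells; size: (rows, cols); obstacles: set of
    occupied cells. Returns the list of cells from start to goal, or
    None when no path exists.
    """
    rows, cols = size
    # Remove grid cells that coincide with obstacle positions.
    free = {(r, c) for r in range(rows) for c in range(cols)} - set(obstacles)
    if start not in free or goal not in free:
        return None

    def h(p):  # admissible Manhattan heuristic toward the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (f, g, cell, path so far)
    seen = set()
    while frontier:
        _, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nb in free and nb not in seen:
                heapq.heappush(frontier, (g + 1 + h(nb), g + 1, nb, path + [nb]))
    return None
```

With an admissible heuristic the first path popped at the goal is optimal, which matches the "searching the optimal path in the residual grid cells" step.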
As shown in FIG. 3, the present invention discloses a flow chart of a target location identification method;
according to the embodiment of the invention, acquiring the target area image and generating the image information specifically comprise:
s302, when the robot reaches a target area, sending a trapping signal to attract fishes to enter the target area;
s304, setting sampling time and collecting multi-angle images of a target area;
s306, respectively extracting characteristic values of the plurality of images, classifying the characteristic values,
s308, calculating the difference value of any two characteristic values in different images,
and S310, if the difference is smaller than a preset threshold, performing target positioning identification.
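A minimal sketch of the consistency check in steps S308-S310, under the assumption that each image yields one scalar characteristic value. The function name and the pairwise-comparison form are illustrative; the patent does not specify how the per-pair differences are aggregated.

```python
from itertools import combinations

def consistent_target(features, threshold):
    """Return True when every pair of per-image characteristic values
    differs by less than `threshold`, i.e. the multi-angle images are
    judged to show the same target and positioning identification may
    proceed."""
    return all(abs(a - b) < threshold for a, b in combinations(features, 2))
```

For example, three views yielding values 0.51, 0.52 and 0.50 agree within a 0.05 threshold, while an outlier view would block identification.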
According to the embodiment of the invention, the method further comprises the following steps:
acquiring underwater environment information and generating a current control instruction;
acquiring current state information of the robot, and calculating a control action instruction of the next step according to the current control instruction and the current state information;
receiving an underwater environment evaluation feedback signal;
and adjusting the current control instruction according to a preset rule, and storing result information into a database.
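The closed loop described above (issue a control instruction, observe the robot state, receive feedback, adjust) can be illustrated with a hypothetical proportional rule for holding a target depth. The gain, the one-dimensional depth model, and the function names are assumptions; the patent does not specify the control law.

```python
def control_step(command, state, gain):
    """Compute the next control action from the current command and the
    current robot state; the environment-evaluation feedback is taken
    here to be the remaining error (an assumed proportional rule)."""
    error = command - state
    return gain * error

def hold_depth(target_depth, depth, steps=20, gain=0.5):
    """Run the loop: each cycle computes the next action, the robot
    responds, and the resulting state is stored (cf. storing result
    information into a database)."""
    log = []
    for _ in range(steps):
        action = control_step(target_depth, depth, gain)
        depth += action          # robot responds to the action
        log.append(depth)        # store the result information
    return depth, log
```

With a gain below 1 the remaining error shrinks each cycle, so the depth converges toward the commanded value.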
As shown in fig. 4, the present invention discloses a flow chart of a dynamic collision avoidance method;
according to the embodiment of the invention, the method further comprises the following steps:
s402, acquiring a moving instruction, activating a target point state, and generating target point coordinate information;
s404, generating moving path information according to the coordinate information of the target point;
s406, generating obstacle avoidance decisions according to the moving path information;
and S408, performing robot dynamic collision avoidance according to the obstacle avoidance decision.
It should be noted that, before the robot performs path planning, the target point state is activated first; only after the target point state has been activated can the target point coordinate information be released for path planning. This prevents misoperation, realizes dynamic collision avoidance during the movement of the robot, and improves safety.
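The activation gate can be sketched as a target point that refuses to release its coordinates until a movement instruction has activated its state. The class, property, and exception are hypothetical; the patent describes only the behavior.

```python
class TargetPoint:
    """Target point whose coordinates are released only after the
    state has been activated by a movement instruction (S402)."""

    def __init__(self, coords):
        self._coords = coords
        self.active = False

    def activate(self):
        """A movement instruction activates the target point state."""
        self.active = True

    @property
    def coords(self):
        """Coordinate information, gated on activation (S404)."""
        if not self.active:
            raise RuntimeError("target point not activated; path planning refused")
        return self._coords
```

Path-planning code that reads `coords` before activation fails immediately, which is the misoperation guard the note describes.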
As shown in FIG. 5, the present invention discloses a flow chart of a multi-angle image acquisition method;
according to the embodiment of the invention, acquiring the target area image and generating the image information further comprises:
s502, collecting a first angle image of a target area to generate first image information;
s504, carrying out image point segmentation on the first image information, and extracting an image point characteristic value;
s506, receiving a rotation instruction, rotating the camera to a preset angle, shooting the image information of the target area again, and generating second image information;
s508, carrying out image point segmentation on the second image information, and extracting an image point characteristic value;
s510, comparing image points in the first image information with image point characteristic values in the second image information to obtain a deviation ratio;
and S512, if the deviation rate is smaller than a preset threshold value, focusing image points and identifying underwater organisms.
It should be noted that the same target is intelligently positioned and focused by the multi-angle camera: the rotating camera collects image information at a plurality of different angles, the image point characteristic values are compared, and image point focusing and underwater biological identification are performed only when the deviation rate is smaller than the preset threshold value.
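A hedged sketch of the deviation-rate comparison in steps S510-S512, assuming each view yields a matched list of image point characteristic values. The mean relative deviation used here is one plausible definition of "deviation rate", which the patent leaves unspecified, and the function names are illustrative.

```python
def deviation_rate(first, second):
    """Mean relative deviation between matched image point characteristic
    values from the first-angle and second-angle images."""
    if len(first) != len(second) or not first:
        raise ValueError("feature lists must be non-empty and matched point for point")
    return sum(abs(a - b) / max(abs(a), abs(b), 1e-9)
               for a, b in zip(first, second)) / len(first)

def identify_if_consistent(first, second, threshold=0.1):
    """Focus the image points and proceed to underwater biological
    identification only when the two views agree (S512)."""
    return deviation_rate(first, second) < threshold
```

Two views that nearly agree pass the gate; a large disagreement between views suppresses identification, which is what protects against single-image errors.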
As shown in fig. 6, the present invention discloses a block diagram of a multi-view camouflaged underwater biological recognition system;
the second aspect of the present invention also provides a multi-view camouflaged underwater biometric identification system, which includes: the underwater biological recognition system comprises a memory and a processor, wherein the memory comprises a multi-view camouflage type underwater biological recognition method program, and the multi-view camouflage type underwater biological recognition method program realizes the following steps when being executed by the processor:
acquiring initial position information and underwater environment information of the robot;
determining a moving path according to the initial position information of the robot and the underwater environment information to generate path information;
judging whether the robot has reached a predetermined area;
if so, acquiring a target area image to generate image information;
carrying out underwater biological recognition according to the image information to obtain result information;
and transmitting the result information to the terminal according to a preset mode.
According to the embodiment of the invention, the moving path is determined according to the initial position information and the underwater environment information of the robot, and the method specifically comprises the following steps:
acquiring an initial position of the robot, and establishing a coordinate network;
dividing the coordinate network into a plurality of grid cells;
obtaining the position information of the obstacle, and calculating the position of the obstacle in the coordinate network;
removing the grid cells that coincide with the obstacle positions;
and searching the optimal path in the residual grid cells by using an optimization algorithm.
It should be noted that, by establishing the coordinate network and dividing it into grid cells, the robot can obtain the optimal path while traveling along the predetermined path, achieving accurate and rapid travel as well as intelligent obstacle avoidance along the way.
According to the embodiment of the invention, acquiring the target area image and generating the image information further comprises:
acquiring a first angle image of a target area to generate first image information;
carrying out image point segmentation on the first image information, and extracting an image point characteristic value;
receiving a rotation instruction, rotating the camera to a preset angle, shooting the image information of the target area again, and generating second image information;
carrying out image point segmentation on the second image information, and extracting an image point characteristic value;
comparing the image point characteristic values in the first image information with those in the second image information to obtain a deviation rate;
and if the deviation rate is smaller than a preset threshold value, focusing image points and identifying underwater organisms.
It should be noted that the same target is intelligently positioned and focused by the multi-angle camera: the rotating camera collects image information at a plurality of different angles, the image point characteristic values are compared, and image point focusing and underwater biological identification are performed only when the deviation rate is smaller than the preset threshold value.
According to the embodiment of the invention, acquiring the target area image and generating the image information specifically comprise:
the robot sends a trapping signal when reaching a target area, and attracts fishes to enter the target area;
setting sampling time, and collecting multi-angle images of a target area;
respectively extracting characteristic values from the multiple images, classifying the characteristic values,
calculating the difference value of any two characteristic values in different images, if the difference value is less than a preset threshold value,
then target location identification is performed.
According to the embodiment of the invention, the method further comprises the following steps:
acquiring underwater environment information and generating a current control instruction;
acquiring current state information of the robot, and calculating a control action instruction of the next step according to the current control instruction and the current state information;
receiving an underwater environment evaluation feedback signal;
and adjusting the current control instruction according to a preset rule, and storing result information into a database.
According to the embodiment of the invention, the method further comprises the following steps:
acquiring a moving instruction, activating a target point state, and generating target point coordinate information;
generating moving path information according to the coordinate information of the target point;
generating obstacle avoidance decisions according to the moving path information;
and carrying out robot dynamic collision avoidance according to the obstacle avoidance decision.
It should be noted that, before the robot performs path planning, the target point state is activated first; only after the target point state has been activated can the target point coordinate information be released for path planning. This prevents misoperation, realizes dynamic collision avoidance during the movement of the robot, and improves safety.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of a unit is only one logical function division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across a plurality of network nodes. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, each unit may stand alone as a separate unit, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Those of ordinary skill in the art will understand that all or part of the steps of the method embodiments may be implemented by program instructions executing on related hardware. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes removable storage devices, Read-Only Memory (ROM), Random Access Memory (RAM), magnetic disks, optical disks, and other media capable of storing program code.
Alternatively, if the integrated unit of the present invention is implemented in the form of a software functional module and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, the software product including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods of the embodiments of the present invention. The aforementioned storage medium includes removable storage devices, ROM, RAM, magnetic or optical disks, and other media that can store program code.
The above description covers only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Accordingly, the protection scope of the present invention shall be determined by the appended claims.

Claims (10)

1. A multi-view camouflage type underwater biological recognition method is characterized by comprising the following steps:
acquiring initial position information and underwater environment information of the robot;
determining a moving path according to the initial position information of the robot and the underwater environment information to generate path information;
determining whether the robot has reached a target area;
if the robot has reached the target area, acquiring an image of the target area to generate image information;
carrying out underwater biological recognition according to the image information to obtain result information;
and transmitting the result information to the terminal according to a preset mode.
2. The method according to claim 1, wherein the underwater environment information comprises one or more of water flow velocity, water flow direction, water flow pressure, water depth, and underwater obstacle information.
3. The multi-view camouflage type underwater biological recognition method according to claim 1, wherein determining a moving path according to the initial position information of the robot and the underwater environment information specifically comprises:
acquiring the initial position of the robot, and establishing a coordinate network;
dividing the coordinate network into a plurality of grid cells;
acquiring obstacle position information, and calculating the positions of the obstacles within the coordinate network;
removing the grid cells that coincide with the obstacle positions; and
searching for an optimal path among the remaining grid cells using an optimization algorithm.
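The grid decomposition of claim 3 can be sketched as follows. The claim does not name its "optimization algorithm", so a uniform-cost (Dijkstra-style) search over the remaining cells stands in for it; all function and parameter names here are illustrative, not the patent's own.

```python
from heapq import heappush, heappop

def plan_path(grid_size, start, goal, obstacles):
    """Search a shortest path on a coordinate network whose
    obstacle-occupied grid cells have been removed.

    A minimal sketch of the claimed steps: divide the network into
    cells, drop cells coinciding with obstacles, search the rest.
    """
    rows, cols = grid_size
    blocked = set(obstacles)          # grid cells removed from the network
    frontier = [(0, start, [start])]  # (cost so far, cell, path so far)
    seen = set()
    while frontier:
        cost, cell, path = heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and nxt not in blocked and nxt not in seen):
                heappush(frontier, (cost + 1, nxt, path + [nxt]))
    return None  # no path through the remaining cells
```

For example, `plan_path((5, 5), (0, 0), (4, 4), [(2, 1), (2, 2), (2, 3)])` routes around the blocked cells in row 2 and returns a 9-cell shortest path.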
4. The multi-view camouflage type underwater biological recognition method according to claim 1, wherein acquiring an image of the target area to generate image information specifically comprises:
the robot emitting a luring signal upon reaching the target area to attract fish into the target area;
setting a sampling time, and capturing multi-angle images of the target area;
extracting characteristic values from each of the images, and classifying the characteristic values;
calculating the difference between any two characteristic values from different images; and
if the difference is less than a preset threshold, performing target identification.
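The cross-view consistency check of claim 4 (the difference between any two characteristic values from different images against a threshold) might look like the sketch below. The scalar feature values and their class-wise alignment are assumptions, since the patent does not specify which features are extracted.

```python
import itertools

def consistent_across_views(feature_sets, threshold):
    """Check whether features extracted from multi-angle images agree.

    feature_sets holds one list of scalar feature values per captured
    view (a hypothetical stand-in for the patent's unspecified
    features). Identification proceeds only if every cross-view pair
    of same-class features differs by less than the threshold.
    """
    for view_a, view_b in itertools.combinations(feature_sets, 2):
        for fa, fb in zip(view_a, view_b):     # compare class-by-class
            if abs(fa - fb) >= threshold:      # views disagree on this one
                return False
    return True
```

Three views whose feature values all lie within the threshold pass the check; one outlier view is enough to reject and keep sampling.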
5. The multi-view camouflage type underwater biological recognition method according to claim 4, further comprising:
acquiring the underwater environment information, and generating a current control instruction;
acquiring current state information of the robot, and calculating the next control action instruction according to the current control instruction and the current state information;
receiving an underwater environment evaluation feedback signal; and
adjusting the current control instruction according to a preset rule, and storing the result information in a database.
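One iteration of the control loop in claim 5 could be sketched as below. The per-axis dictionaries and the proportional "preset rule" are hypothetical stand-ins: the patent leaves both the state representation and the adjustment rule unspecified.

```python
def control_step(command, state, feedback, gain=0.5):
    """One iteration of the claimed control loop (illustrative only).

    command / state / feedback are per-axis dicts. The next action is
    derived from the current command and the robot's state, and the
    command is then adjusted from the environment evaluation feedback
    using a proportional rule standing in for the "preset rule".
    """
    # next control action: command corrected by the current state error
    action = {k: command[k] - state.get(k, 0.0) for k in command}
    # adjusted command for the next iteration, from the feedback signal
    adjusted = {k: command[k] + gain * feedback.get(k, 0.0) for k in command}
    return action, adjusted
```

In a full loop this step would repeat, with `adjusted` becoming the next iteration's `command` and the pair logged to the claimed database.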
6. The multi-view camouflage type underwater biological recognition method according to claim 1, further comprising:
acquiring a movement instruction, activating a target point state, and generating target point coordinate information;
generating moving path information according to the target point coordinate information;
generating an obstacle avoidance decision according to the moving path information; and
performing dynamic collision avoidance of the robot according to the obstacle avoidance decision.
7. The multi-view camouflage type underwater biological recognition method according to claim 1, wherein acquiring an image of the target area to generate image information further comprises:
acquiring a first-angle image of the target area to generate first image information;
performing image point segmentation on the first image information, and extracting image point characteristic values;
receiving a rotation instruction, rotating the camera to a preset angle, capturing the target area again, and generating second image information;
performing image point segmentation on the second image information, and extracting image point characteristic values;
comparing the image point characteristic values of the first image information with those of the second image information to obtain a deviation rate; and
if the deviation rate is less than a preset threshold, focusing on the image points and performing underwater organism recognition.
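The deviation-rate comparison of claim 7 admits a simple sketch. Element-wise correspondence between the two views' image-point features is assumed here, since the patent does not state how points are matched across angles, and the mean-relative-deviation formula is an illustrative choice rather than the patent's definition.

```python
def deviation_rate(features_a, features_b):
    """Mean relative deviation between matched image-point features
    from the first-angle and second-angle images (a hedged sketch)."""
    if len(features_a) != len(features_b) or not features_a:
        raise ValueError("feature lists must be non-empty and matched")
    total = 0.0
    for fa, fb in zip(features_a, features_b):
        denom = max(abs(fa), abs(fb), 1e-9)  # guard against zero features
        total += abs(fa - fb) / denom
    return total / len(features_a)

def should_identify(features_a, features_b, threshold=0.1):
    """Proceed to focusing and organism recognition only when the two
    camera angles agree to within the preset threshold."""
    return deviation_rate(features_a, features_b) < threshold
```

Identical views give a rate of zero; a grossly mismatched point pushes the rate past the threshold and the system would re-shoot rather than identify.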
8. A multi-view camouflage type underwater biological recognition system, comprising a memory and a processor, wherein the memory stores a multi-view camouflage type underwater biological recognition method program which, when executed by the processor, implements the following steps:
acquiring initial position information and underwater environment information of the robot;
determining a moving path according to the initial position information of the robot and the underwater environment information to generate path information;
determining whether the robot has reached a target area;
if the robot has reached the target area, acquiring an image of the target area to generate image information;
performing underwater biological recognition according to the image information to obtain result information; and
transmitting the result information to the terminal in a preset manner.
9. The multi-view camouflage type underwater biological recognition system according to claim 8, wherein determining a moving path according to the initial position information of the robot and the underwater environment information specifically comprises:
acquiring the initial position of the robot, and establishing a coordinate network;
dividing the coordinate network into a plurality of grid cells;
acquiring obstacle position information, and calculating the positions of the obstacles within the coordinate network;
removing the grid cells that coincide with the obstacle positions; and
searching for an optimal path among the remaining grid cells using an optimization algorithm.
10. The multi-view camouflage type underwater biological recognition system according to claim 8, wherein acquiring an image of the target area to generate image information further comprises:
acquiring a first-angle image of the target area to generate first image information;
performing image point segmentation on the first image information, and extracting image point characteristic values;
receiving a rotation instruction, rotating the camera to a preset angle, capturing the target area again, and generating second image information;
performing image point segmentation on the second image information, and extracting image point characteristic values;
comparing the image point characteristic values of the first image information with those of the second image information to obtain a deviation rate; and
if the deviation rate is less than a preset threshold, focusing on the image points and performing underwater organism recognition.
CN202011620877.2A 2020-12-31 2020-12-31 Multi-view camouflage type underwater organism recognition system and method Active CN112783180B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011620877.2A CN112783180B (en) 2020-12-31 2020-12-31 Multi-view camouflage type underwater organism recognition system and method

Publications (2)

Publication Number Publication Date
CN112783180A true CN112783180A (en) 2021-05-11
CN112783180B CN112783180B (en) 2022-11-01

Family

ID=75754329

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011620877.2A Active CN112783180B (en) 2020-12-31 2020-12-31 Multi-view camouflage type underwater organism recognition system and method

Country Status (1)

Country Link
CN (1) CN112783180B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105407283A (en) * 2015-11-20 2016-03-16 成都因纳伟盛科技股份有限公司 Multi-target active recognition tracking and monitoring method
CN108446585A (en) * 2018-01-31 2018-08-24 深圳市阿西莫夫科技有限公司 Method for tracking target, device, computer equipment and storage medium
CN108536157A (en) * 2018-05-22 2018-09-14 上海迈陆海洋科技发展有限公司 A kind of Intelligent Underwater Robot and its system, object mark tracking
CN108871364A (en) * 2018-06-28 2018-11-23 南京信息工程大学 A kind of underwater robot paths planning method based on Node Algorithm
CN110745218A (en) * 2019-10-29 2020-02-04 西华大学 Bionic fish for inducing and monitoring fish school migration
CN110930436A (en) * 2019-11-27 2020-03-27 深圳市捷顺科技实业股份有限公司 Target tracking method and device
CN112149762A (en) * 2020-11-24 2020-12-29 北京沃东天骏信息技术有限公司 Target tracking method, target tracking apparatus, and computer-readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113341407A (en) * 2021-06-02 2021-09-03 中国水产科学研究院南海水产研究所 Fishing tracking system and method based on radar detection
CN113341407B (en) * 2021-06-02 2024-02-06 中国水产科学研究院南海水产研究所 Fishery fishing tracking system and method based on radar detection

Also Published As

Publication number Publication date
CN112783180B (en) 2022-11-01

Similar Documents

Publication Publication Date Title
Cappo et al. Counting and measuring fish with baited video techniques-an overview
CN112750140B (en) Information mining-based disguised target image segmentation method
Lopez‐Marcano et al. Automatic detection of fish and tracking of movement for ecology
Dujon et al. Machine learning to detect marine animals in UAV imagery: Effect of morphology, spacing, behaviour and habitat
Sun et al. Deep learning in aquaculture: A review
Levy et al. Automated analysis of marine video with limited data
Liao et al. Research on intelligent damage detection of far-sea cage based on machine vision and deep learning
CN111753594A (en) Danger identification method, device and system
CN112783180B (en) Multi-view camouflage type underwater organism recognition system and method
CN113536978B (en) Camouflage target detection method based on saliency
Grothues et al. High-frequency side-scan sonar fish reconnaissance by autonomous underwater vehicles
CN115797844A (en) Fish body fish disease detection method and system based on neural network
Musić et al. Detecting underwater sea litter using deep neural networks: an initial study
CN112859056A (en) Remote early warning system and method for large marine organisms
Jackett et al. A benthic substrate classification method for seabed images using deep learning: Application to management of deep‐sea coral reefs
CN116343018A (en) Intelligent fishery fishing identification method, system and medium based on image processing
CN115376023A (en) Cultivation area detection method based on deformation convolutional network, unmanned aerial vehicle and medium
Connor et al. Analysis of robotic fish using swarming rules with limited sensory input
WO2022075853A1 (en) Generating three-dimensional skeleton representations of aquatic animals using machine learning
CN109632590B (en) Deep-sea luminous plankton detection method
Zhang et al. Underwater autonomous grasping robot based on multi-stage Cascade DetNet
Kandimalla Deep learning approaches to classify and track at-risk fish species
Ho et al. Predicting coordinated group movements of sharks with limited observations using AUVs
Zuodong et al. Automatic Video Tracking of Chinese Mitten Crab Using Particle Filter Based on Multi Features
Tong et al. Automatic single fish detection with a commercial echosounder using YOLO v5 and its application for echosounder calibration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant