CN108536157A - Intelligent underwater robot, system thereof, and target tracking method - Google Patents

Intelligent underwater robot, system thereof, and target tracking method

Info

Publication number
CN108536157A
CN108536157A
Authority
CN
China
Prior art keywords
image
underwater
target
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810496705.5A
Other languages
Chinese (zh)
Inventor
彭锐
王维军
孙海欢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Mai Lu Marine Technology Development Co Ltd
Original Assignee
Shanghai Mai Lu Marine Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Mai Lu Marine Technology Development Co Ltd filed Critical Shanghai Mai Lu Marine Technology Development Co Ltd
Priority to CN201810496705.5A priority Critical patent/CN108536157A/en
Publication of CN108536157A publication Critical patent/CN108536157A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 — Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D 1/04 — Control of altitude or depth
    • G05D 1/06 — Rate of change of altitude or depth
    • G05D 1/0692 — Rate of change of altitude or depth specially adapted for under-water vehicles

Abstract

The present invention discloses an intelligent underwater robot, comprising: a communication module, which receives control commands from a remote control terminal; an image acquisition module, which obtains underwater environment images; a target recognition module, which identifies each underwater target and obstacle in the underwater environment images based on a first deep neural network model; a control module, which judges from the recognized underwater targets whether the target object is present and, if so, obtains the target object's pixel coordinates; a space construction module, which performs image processing on the underwater environment images based on a second deep neural network model, constructs a three-dimensional space of the underwater environment at the robot's current location, and determines the position of the target object; the control module further plans, according to a preset tracking mode and the constructed three-dimensional space, a path along which the robot tracks the target object; and an execution module, which moves the robot underwater along the planned path, thereby tracking the target object while avoiding obstacles.

Description

Intelligent underwater robot, system thereof, and target tracking method
Technical field
The present invention relates to the field of robotics, and in particular to an intelligent underwater robot, a system thereof, and a target tracking method.
Background technology
Underwater robots, also known as remotely operated vehicles, can replace human divers for long-duration underwater work in highly dangerous, contaminated, or zero-visibility waters. They are commonly equipped with sonar systems, video cameras, lights, manipulator arms, and similar devices, and can provide real-time video, sonar imagery, and other data. Underwater robots are widely used in safety search and rescue, pipeline inspection, research and teaching, underwater recreation, the energy industry, archaeology, fisheries, and other fields.
Current underwater robots can only recognize specific targets using image matching algorithms and the like, and cannot intelligently recognize different types of underwater targets. The underwater environment also differs from land: targets in the acquired images are less clear than on land, and the performance of machine-vision-based path planning and intelligent obstacle avoidance still needs improvement.
Therefore, a new underwater robot system is needed that builds, on deep neural network algorithms, models suited to the underwater environment, so that different underwater targets and organisms can be recognized intelligently. An image depth computation model built on deep neural network algorithms can compute target positions in images more accurately, reconstruct the underwater three-dimensional environment more precisely, and intelligently plan an optimal motion path in real time, ensuring that underwater operations proceed smoothly.
Summary of the invention
The present invention provides an intelligent underwater robot, a system thereof, and a target tracking method, overcoming the inability of current underwater robots to recognize different types of underwater targets and organisms. The invention optimizes image depth computation, increases the speed of three-dimensional reconstruction, and enables the robot to avoid obstacles accurately in real time and plan motion paths more intelligently. Specifically, the technical solution of the present invention is as follows:
In one aspect, the invention discloses an intelligent underwater robot, comprising: a communication module for receiving control commands from a remote control terminal and exchanging information with the remote control terminal; an image acquisition module for obtaining underwater environment images; a target recognition module for performing image processing on the underwater environment images based on a first deep neural network model and identifying each underwater target and obstacle in the images; a control module for judging, from the underwater targets identified by the target recognition module, whether the target object designated for tracking by the remote control terminal is present and, if so, obtaining the pixel coordinates of the target object in the images; a space construction module for performing image processing on the underwater environment images based on a second deep neural network model, constructing a three-dimensional space of the underwater environment at the robot's current location, and determining the position of the target object from the pixel coordinates obtained by the control module; the control module being further configured to plan, according to a preset tracking mode and the three-dimensional space built by the space construction module, a path along which the robot tracks the target object; and an execution module for moving the robot underwater along the path planned by the control module, thereby tracking the target object and avoiding obstacles.
Preferably, the image acquisition module comprises: an image capture submodule for capturing raw images of the underwater environment; and an image preprocessing submodule for preprocessing the raw images captured by the image capture submodule to obtain the underwater environment images. The image preprocessing includes denoising, image enhancement, and image compensation.
Preferably, the target recognition module comprises: an image segmentation submodule for performing binary segmentation on the underwater environment images; a feature extraction submodule for extracting features from each image segmented by the image segmentation submodule; and a feature matching submodule for matching the features extracted by the feature extraction submodule against pre-stored feature samples to identify each target and obstacle in the underwater environment images.
Preferably, the space construction module comprises: a depth computation submodule for computing depth from the captured underwater environment images based on the second deep neural network model; a three-dimensional construction submodule for constructing the three-dimensional space of the underwater environment around the robot from the underwater environment images and the depth information computed by the depth computation submodule; and a position determination submodule for determining the position of the target object from the three-dimensional space built by the three-dimensional construction submodule and the target object's pixel coordinates in the underwater environment images.
Preferably, the image acquisition module comprises at least one set of binocular cameras, and the depth computation submodule comprises: a feature extraction unit for performing 2D feature extraction separately on the left and right underwater environment images obtained by a binocular camera, based on the deep neural network algorithm, to obtain feature tensors; a matching cost creation unit for creating two matching cost volumes from the feature tensors, one for left-to-right matching and the other for right-to-left matching; a stereo matching unit for performing stereo matching with the two matching cost volumes created by the matching cost creation unit to obtain a disparity tensor map; and a depth computation unit for extracting the disparity of each pixel from the disparity tensor map obtained by the stereo matching unit and computing the depth information of the image.
Preferably, the three-dimensional construction submodule comprises: a target coordinate computation unit for obtaining, from the depth information of the underwater environment images computed by the depth computation submodule, the three-dimensional coordinates of each underwater target and obstacle identified by the target recognition module; and a space reconstruction unit for reconstructing the three-dimensional space around the robot from the three-dimensional coordinates obtained by the target coordinate computation unit.
Preferably, the tracking modes include a follow mode, a companion mode, a guide mode, and an orbit mode.
Preferably, the control module is further configured, when the target object is not among the targets identified by the target recognition module, to select the signal source of the remote control terminal as an interim target, until the target recognition module recognizes the target object designated for tracking by the remote control terminal.
In a second aspect, the invention discloses an intelligent underwater robot system, comprising: the intelligent underwater robot of the present invention, and a remote control terminal and a server each in communication connection with the intelligent underwater robot.
In a third aspect, the invention also discloses a target tracking method for an intelligent underwater robot, applied to the intelligent underwater robot of the present invention, the method comprising: S100, receiving a control command from the remote control terminal and obtaining information on the target object to be tracked; S200, obtaining underwater environment images; S300, performing image processing on the underwater environment images based on the first deep neural network model and identifying each underwater target and obstacle in the images; S400, judging, from the identified underwater targets, whether the target object designated for tracking by the remote control terminal is present and, if so, obtaining its pixel coordinates in the images; S500, performing image processing on the underwater environment images based on the second deep neural network model, constructing the three-dimensional space of the underwater environment at the robot's current location, and determining the position of the target object from the pixel coordinates obtained by the control module; S600, planning, from the constructed three-dimensional space and according to a preset tracking mode, a path along which the robot tracks the target object; and S700, moving the robot underwater along the planned path, thereby tracking the target object and avoiding obstacles.
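The per-frame decision in steps S400–S600 can be sketched as a single function; this is a minimal illustrative sketch, and every name and coordinate below is an invented placeholder, not from the patent.

```python
def track_step(command_target, detections):
    """One iteration of the S100-S700 loop.

    command_target: label of the target object requested by the remote
                    terminal (S100).
    detections:     {label: (u, v)} pixel coordinates of the underwater
                    targets and obstacles recognized in S300.
    Returns the pixel coordinate to plan a path toward (S400-S600),
    or None when the target object is not visible.
    """
    if command_target in detections:       # S400: is the target present?
        return detections[command_target]  # pixel coordinate for S500/S600
    return None                            # see the interim-target rule above

# S300 might report, say, a turtle and a rock:
dets = {"turtle": (320, 180), "rock": (100, 400)}
print(track_step("turtle", dets))  # (320, 180)
print(track_step("wreck", dets))   # None
```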
The present invention has at least one of the following technical effects:
(1) The invention uses a deep neural network (the first deep neural network model) to recognize each underwater target and obstacle. Compared with shallow neural networks, a deep neural network is more complex: it is not limited to recognizing a single target but can recognize different targets and obstacles with high recognition accuracy.
(2) Image information acquired underwater differs from that acquired on land, and contains more interference. The invention therefore uses another deep neural network (the second deep neural network model) to compute depth, optimizing image depth computation. The image depth computation model (the second deep neural network model) built on deep neural network algorithms can compute target positions in images more accurately, reconstruct the underwater three-dimensional environment precisely, and increase the speed of three-dimensional reconstruction, enabling the robot to avoid obstacles accurately in real time and plan motion paths more intelligently.
(3) The invention uses several cameras to obtain underwater environment images, in particular binocular cameras. Each underwater environment image obtained by a binocular camera comprises a left image and a right image, which benefits the subsequent depth computation and makes the matched depth information more accurate.
(4) The invention can preprocess the underwater environment images captured by the cameras, improving image quality and increasing the speed and accuracy of subsequent image processing.
(5) The robot supports a variety of tracking modes, including follow, companion, guide, and orbit modes; operators can select different tracking modes to track different targets according to actual needs.
(6) When target tracking is lost or no image signal can be obtained, the robot tracks the signal source of the remote terminal; once the robot reacquires the tracked target in the image information, it switches back to tracking the target object in the image. The robot can thus switch tracking mechanisms intelligently according to actual conditions, with a high degree of intelligence.
Description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a block diagram of an embodiment of an intelligent underwater robot of the present invention;
Fig. 2 is a block diagram of another embodiment of the intelligent underwater robot;
Fig. 3 is a schematic diagram of the depth computation principle based on the second deep neural network model;
Fig. 4 is a block diagram of an embodiment of an intelligent underwater robot system of the present invention;
Fig. 5 is a diagram of the algorithm principle of the first deep neural network model;
Fig. 6 is a flow diagram of building the first deep neural network model;
Fig. 7 is a schematic diagram of classifying the collected target image data;
Fig. 8 is a diagram of the algorithm architecture of the second deep neural network model;
Fig. 9 is a flowchart of an embodiment of the target tracking method for an intelligent underwater robot;
Fig. 10 is a flowchart of another embodiment of the target tracking method.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The invention discloses an intelligent underwater robot; an embodiment, shown in Fig. 1, comprises: a communication module 100 for receiving control commands from a remote control terminal and exchanging information with the remote control terminal; an image acquisition module 200 for obtaining underwater environment images; a target recognition module 300 for performing image processing on the underwater environment images based on a first deep neural network model and identifying each underwater target and obstacle in the images; a control module 400 for judging, from the underwater targets identified by the target recognition module 300, whether the target object designated for tracking by the remote control terminal is present and, if so, obtaining the pixel coordinates of the target object in the images; a space construction module 500 for performing image processing on the underwater environment images based on a second deep neural network model, constructing a three-dimensional space of the underwater environment at the robot's current location, and determining the position of the target object from the pixel coordinates obtained by the control module 400; the control module 400 being further configured to plan, according to a preset tracking mode and the three-dimensional space built by the space construction module 500, a path along which the robot tracks the target object; and an execution module 600 for moving the robot underwater along the path planned by the control module 400, thereby tracking the target object and avoiding obstacles.
In this embodiment, the communication module 100 mainly receives control instructions from the remote control terminal: operators can control the underwater robot through the remote control terminal, and the robot can likewise feed underwater information back to the terminal so that operators understand the underwater situation. The communication module 100 is the bridge for information exchange between the robot and the remote control terminal. The image acquisition module 200 mainly obtains underwater environment images, generally by photographing the underwater environment with cameras. After an underwater image is obtained, the target recognition module 300 uses the trained first deep neural network model to recognize the image information obtained by the image acquisition module 200 (e.g., a video camera), completing the intelligent recognition of different underwater targets. Various underwater organisms, divers, and the like are all underwater targets, and the trained first deep neural network model can identify them one by one. The space construction module 500, based on the second deep neural network model (a depth estimation algorithm using a deep neural network), can reconstruct the three-dimensional space around the robot more accurately: it performs three-dimensional reconstruction of the underwater environment from the image information obtained by the cameras and intelligently determines the tracked target and the underwater obstacles, while the control module 400 intelligently plans the path according to the remotely configured tracking mode.
In this embodiment, the first deep neural network model, built on deep neural network algorithms and suited to underwater target recognition, can intelligently recognize different underwater targets and obstacles. The second deep neural network model, built on deep neural network algorithms for image depth computation, can compute image depth information more accurately, locate target positions, reconstruct the underwater three-dimensional environment more precisely, and intelligently plan an optimal motion path in real time.
In another embodiment of the underwater robot of the present invention, based on the above embodiment and shown in Fig. 2, the image acquisition module 200 comprises: an image capture submodule 210 for capturing raw images of the underwater environment; and an image preprocessing submodule 220 for preprocessing the raw images captured by the image capture submodule 210 to obtain the underwater environment images. The image preprocessing includes denoising, image enhancement, and image compensation.
Several cameras mounted on the robot can be used to photograph the underwater environment. During image generation and transmission, noise and other interference and distortion are unavoidable due to many factors, degrading image quality and in turn affecting subsequent image processing. Therefore, after the underwater images are captured, a series of preprocessing steps can first be applied to improve image quality. The main preprocessing methods include denoising, image enhancement, and image compensation. Taking denoising as an example, many methods exist; smoothing is relatively common, and it can be performed in either the spatial domain or the frequency domain. Spatial-domain smoothing methods include image averaging, neighborhood averaging, adaptive filtering, and median filtering. After preprocessing, the quality of the underwater environment images is greatly improved, and subsequent image processing can proceed.
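Median filtering, one of the spatial-domain denoising options named above, can be illustrated with a minimal sketch; a real system would use an image library, while this version operates on a small grayscale grid purely for illustration.

```python
def median_filter(img, k=3):
    """Apply a k x k median filter to a 2D list of pixel values.

    Border pixels are left unchanged for simplicity.
    """
    h, w, r = len(img), len(img[0]), k // 2
    out = [row[:] for row in img]
    for y in range(r, h - r):
        for x in range(r, w - r):
            # Sort the k*k neighborhood and take its middle value.
            window = sorted(img[yy][xx]
                            for yy in range(y - r, y + r + 1)
                            for xx in range(x - r, x + r + 1))
            out[y][x] = window[len(window) // 2]
    return out

# A single noisy spike (255) in a dark region is suppressed:
noisy = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]
print(median_filter(noisy)[1][1])  # 10
```

Median filtering is well suited to the impulse-like noise common in underwater imagery, since outliers never reach the output.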
In terms of image processing, image recognition is first performed by the target recognition module 300, which comprises: an image segmentation submodule 310 for performing binary segmentation on the underwater environment images; a feature extraction submodule 320 for performing feature extraction, based on the first deep neural network algorithm, on each image segmented by the image segmentation submodule 310; and a feature matching submodule 330 for matching the features extracted by the feature extraction submodule 320 against pre-stored feature samples to identify each target and obstacle in the underwater environment images. The trained first deep neural network model stores feature samples of each underwater target and obstacle; therefore, after the underwater environment images undergo segmentation and feature extraction, each extracted feature can be matched against the pre-stored samples, so that each underwater target and obstacle in the images is intelligently recognized. The identified underwater target information can also be transmitted to the remote control terminal through the communication module 100, so that operators on the surface understand the underwater situation.
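The matching step above can be sketched as a nearest-neighbor comparison against the pre-stored per-class samples; the feature values, distance metric, and threshold below are illustrative assumptions, not details from the patent.

```python
def classify(feature, samples, max_dist=1.0):
    """Return the label of the closest pre-stored feature sample,
    or "unknown" if nothing is within max_dist (Euclidean)."""
    best_label, best_d = None, float("inf")
    for label, sample in samples.items():
        d = sum((a - b) ** 2 for a, b in zip(feature, sample)) ** 0.5
        if d < best_d:
            best_label, best_d = label, d
    return best_label if best_d <= max_dist else "unknown"

# Hypothetical 2-D feature samples for two underwater target classes:
samples = {"fish": [0.9, 0.1], "diver": [0.2, 0.8]}
print(classify([0.85, 0.15], samples))  # fish
print(classify([5.0, 5.0], samples))    # unknown
```

In practice the stored samples would be high-dimensional embeddings produced by the first deep neural network model, but the nearest-match logic is the same.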
Secondly, besides image recognition, the underwater environment images can also undergo space construction through the deep-neural-network depth algorithm (the second deep neural network model), reconstructing the three-dimensional underwater environment in which the robot is currently located. Specifically, the space construction module 500 comprises: a depth computation submodule 510 for computing depth from the captured underwater environment images based on the second deep neural network model; a three-dimensional construction submodule 520 for constructing the three-dimensional space of the underwater environment around the robot from the underwater environment images and the depth information computed by the depth computation submodule 510; and a position determination submodule 530 for determining the position of the target object from the three-dimensional space constructed by the three-dimensional construction submodule 520 and the target object's pixel coordinates in the underwater environment images.
The depth computation submodule 510 computes depth from the underwater environment images using the trained second deep neural network model; the three-dimensional construction submodule 520 then reconstructs the three-dimensional space around the robot from the acquired image and depth information; and the position determination submodule 530 can then compute the robot's own position in the three-dimensional space, the position of the target object, and so on. The control module 400 can thus compute the shortest path to the target object according to the configured follow mode, and the execution module 600 intelligently completes tracking of the moving target and obstacle avoidance. The intelligent underwater robot of this embodiment can recognize different types of underwater targets and organisms, compute image depth more accurately, avoid obstacles accurately in real time, and plan motion paths more intelligently.
Preferably, the image acquisition module 200 of the invention uses at least one set of binocular cameras to photograph the underwater environment; each set of binocular cameras captures a left image and a right image. The depth computation submodule 510 in the above embodiment comprises: a feature extraction unit 511 for performing 2D feature extraction separately on the left and right underwater environment images obtained by a binocular camera, based on the deep neural network algorithm, to obtain feature tensors; a matching cost creation unit 512 for creating two matching cost volumes from the feature tensors, one for left-to-right matching and the other for right-to-left matching; a stereo matching unit 513 for performing stereo matching with the two matching cost volumes created by the matching cost creation unit 512 to obtain a disparity tensor map; and a depth computation unit 514 for extracting the disparity of each pixel from the disparity tensor map obtained by the stereo matching unit 513 and computing the depth information of the image.
The depth computation submodule computes depth from the captured image information based on the deep neural network algorithm; a schematic is shown in Fig. 3. First, 2D feature extraction is performed on the left and right images (of size H x W x C, where C = 3 is the number of input channels) based on the deep neural network algorithm. Next, two matching cost volumes (cost-volumes) are created from the generated feature tensors (of size 1/2H x 1/2W x F, where F = 32 is the number of features), one for left-to-right matching and the other for right-to-left matching. At each corresponding pixel location, the left and right features are concatenated and copied into a 4D matching cost result (of size 1/2D x 1/2H x 1/2W x 2F, where D denotes the maximum disparity). Then, stereo matching is executed by comparing features across the two previously created cost volumes, producing left and right tensors (of dimension D x H x W x 1) of the matching costs of the left and right image pixels. Finally, disparities are extracted from the disparity tensor map and the depth information of the image is computed, yielding the depth information of the underwater environment image. Fig. 8 shows the algorithm architecture of depth computation based on the deep neural network algorithm, i.e., the architecture of the second deep neural network model.
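The final readout from a D x H x W cost tensor can be illustrated with a toy winner-take-all step: for each pixel, pick the disparity with the lowest matching cost. Deep stereo networks typically regress disparity smoothly rather than taking a hard arg-min, so this is only a simplified sketch of the last stage.

```python
def disparity_from_costs(cost_volume):
    """cost_volume[d][y][x] -> disparity map [y][x] via arg-min over d."""
    D = len(cost_volume)
    H, W = len(cost_volume[0]), len(cost_volume[0][0])
    return [[min(range(D), key=lambda d: cost_volume[d][y][x])
             for x in range(W)] for y in range(H)]

# 3 candidate disparities for a 1x2 image; pixel 0 matches best at d=2,
# pixel 1 at d=0:
costs = [[[5.0, 0.1]],
         [[3.0, 2.0]],
         [[0.5, 4.0]]]
print(disparity_from_costs(costs))  # [[2, 0]]
```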
Among the above, matching cost (cost-volume) computation is the basis of the entire stereo matching algorithm: it measures gray-level similarity under different disparities. Common measures include the squared intensity difference (SD) and the absolute intensity difference (AD). In addition, an upper limit can be set when computing the matching cost, to weaken the influence of mismatches during accumulation.
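The truncated AD cost described above amounts to capping the per-pixel difference; the cap value of 20 below is an arbitrary illustration.

```python
def truncated_ad(left_pix, right_pix, cap=20):
    """Truncated absolute intensity difference (AD) matching cost."""
    return min(abs(left_pix - right_pix), cap)

print(truncated_ad(100, 95))   # 5
print(truncated_ad(100, 200))  # 20  (a gross mismatch is capped)
```

Without the cap, a single gross mismatch would dominate the accumulated cost along a match; truncation bounds its contribution.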
Stereo matching establishes a one-to-one correspondence between the feature primitives of the two images by computing on the selected primitives, and thereby obtains the corresponding disparity image. Once the disparity image is obtained by stereo matching, the depth image can be determined and the three-dimensional space reconstructed. The intelligent underwater robot of the invention uses the second deep neural network model, that is, it computes the depth information of the image with a deep neural network algorithm, facilitating the subsequent construction of the three-dimensional space. Specifically, as shown in Fig. 2, the three-dimensional construction submodule for building the three-dimensional space comprises: a target coordinate computation unit for obtaining, from the depth information of the underwater environment images computed by the depth computation submodule, the three-dimensional coordinates of each underwater target and obstacle identified by the target recognition module 300; and a space reconstruction unit for reconstructing the three-dimensional space around the robot from the three-dimensional coordinates obtained by the target coordinate computation unit.
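The disparity-to-coordinate step uses the standard pinhole-stereo relations: depth Z = f * B / d, followed by back-projection of pixel (u, v) to (X, Y, Z). The focal length, baseline, and principal point below are made-up calibration numbers for illustration only.

```python
def backproject(u, v, disparity, f=500.0, B=0.12, cu=320.0, cv=240.0):
    """Back-project a pixel with known disparity to camera coordinates.

    f: focal length in pixels, B: stereo baseline in metres,
    (cu, cv): principal point. All values are assumed calibration data.
    """
    Z = f * B / disparity        # standard stereo depth relation
    X = (u - cu) * Z / f         # pinhole back-projection
    Y = (v - cv) * Z / f
    return (X, Y, Z)

X, Y, Z = backproject(420, 240, disparity=10.0)
print(round(Z, 3))  # 6.0 metres for a 0.12 m baseline
print(round(X, 3))  # 1.2
```

Applying this to every recognized target's pixel coordinate yields the three-dimensional coordinates that the space reconstruction unit assembles into the scene.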
The tracking mode in any of the above embodiments includes a follow mode, an accompany mode, a lead mode, and an orbit mode. In follow mode the robot keeps a certain distance from the target and can film the tracked target; parameters such as the distance can be set from the remote control terminal. In accompany mode the robot does not need to film the target; it only needs to move along with the target, staying within the target's vicinity. In lead mode the robot can predict the tracked target's motion and films the target from in front of it. In orbit mode the robot can circle the tracked target up and down and left and right, capturing images of the target from different angles.
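One way to realise these modes (purely illustrative; the patent does not specify this representation) is to map each mode to a desired position relative to the target:

```python
# Illustrative mapping from tracking mode to a desired offset (metres) from the
# target, expressed in the target's frame: +x ahead, +y left, +z up.
# All distances and the orbit radius are assumed values.
import math

MODES = {
    "follow":    lambda t: (-3.0, 0.0, 0.0),     # stay behind at a fixed distance
    "accompany": lambda t: (0.0, 2.0, 0.0),      # move alongside, no filming angle
    "lead":      lambda t: (3.0, 0.0, 0.0),      # ahead of the predicted track
    "orbit":     lambda t: (3.0 * math.cos(t),   # circle the target as t advances
                            3.0 * math.sin(t), 0.0),
}

def desired_offset(mode, t=0.0):
    return MODES[mode](t)

print(desired_offset("follow"))
print(desired_offset("orbit", math.pi / 2))
```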
In any of the above embodiments, the control module is further configured so that, when no designated target is present among the targets recognized by the target recognition module, the signal source of the remote control terminal is chosen as a temporary target, until the target recognition module recognizes the target that the remote control terminal has designated for tracking. For example, when the tracked target is lost or no image signal can be obtained, the robot tracks the signal source of the remote control terminal; once the underwater robot again detects the designated target in the image information, it switches back to tracking the target in the image.
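This fallback behaviour reduces to a small selection rule (an illustrative sketch; the function and argument names are assumptions, not identifiers from the patent):

```python
# Choose what to track: the designated target if it is among the recognized
# targets, otherwise fall back to the remote control terminal's signal source.

def select_tracking_goal(recognized_targets, designated_target, remote_signal_source):
    if designated_target in recognized_targets:
        return designated_target         # normal tracking (re)engages
    return remote_signal_source          # temporary target while the real one is lost

print(select_tracking_goal({"fish", "diver"}, "diver", "remote"))   # diver
print(select_tracking_goal({"fish"}, "diver", "remote"))            # remote
```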
In addition, the invention discloses an intelligent underwater robot system, comprising: the intelligent underwater robot of the present invention, and a remote control terminal and a server each in communication connection with the intelligent underwater robot.
Preferably, the first deep neural network model and the second deep neural network model used by the intelligent underwater robot are produced by learning and training on the server.
Most current underwater robots can only recognize a particular kind of target and cannot recognize and classify targets and organisms of different types. The present invention instead uses the first deep neural network model trained by the server. A deep neural network is considerably more complex than a shallow neural network, and the system hardware it depends on also differs. The server deploys the trained first deep neural network model to the underwater robot, so that different underwater targets and obstacles can be recognized. Likewise, to make the constructed three-dimensional space more accurate, the second deep neural network model for depth calculation is also trained on the server, so as to obtain more accurate depth information and achieve accurate three-dimensional reconstruction, which makes it easier to track the designated target and avoid obstacles in time.
In another embodiment of the intelligent underwater robot system of the present invention, as shown in Fig. 4, the system comprises a server 1, an intelligent underwater robot 2 and a remote control terminal 3, wherein the intelligent underwater robot 2 is composed of a camera device 21 made up of several cameras 211, a graphics processing device 22, a propulsion control device 23, a communication device 24, and so on.
In the intelligent underwater robot system of this embodiment, the server builds, based on a deep neural network algorithm, a first deep neural network model suitable for underwater target recognition, so that the underwater robot can intelligently recognize different underwater targets. The server also builds, based on a deep neural network algorithm, a second deep neural network model for image depth calculation, so that the underwater robot can calculate image depth information more accurately, locate the target's position, construct the underwater three-dimensional environment more accurately, and intelligently plan an optimal motion path in real time.
The server builds the first deep neural network model for underwater target recognition; the algorithmic principle of the first deep neural network model is shown in Fig. 5, and its specific construction flow is shown in Fig. 6. First, underwater image data are collected by the robot or other image acquisition devices. Then the collected image data are classified, and the classified data are transmitted to the server. Next, the server trains the deep neural network model on the classified underwater image data, generating a deep neural network model suitable for underwater target recognition. Finally, the robot applies the server-trained deep neural network model to recognize different underwater targets.
The underwater cameras are used to collect image information around the robot. The server uses a platform such as NVIDIA DIGITS to first classify the collected target image data (a classification schematic is shown in Fig. 7), then builds an underwater deep neural network model based on the deep neural network algorithm and trains it on image data of specific underwater targets, and finally generates the first deep neural network model, suitable for underwater targets and organisms;
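The classify-then-train workflow can be illustrated with a deliberately tiny stand-in (plain Python logistic regression on toy feature vectors; the patent's actual models are deep networks trained with platforms such as NVIDIA DIGITS, which is not reproduced here):

```python
# Toy stand-in for the server-side training step: fit a logistic classifier on
# labelled "image feature" vectors, then use the trained model for recognition.
# The data, learning rate and epoch count are illustrative assumptions.
import math

def train(samples, labels, lr=0.5, epochs=200):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            g = p - y                                   # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(model, x):
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Two separable "classes" of feature vectors (e.g. one target type vs. another).
xs = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
ys = [0, 0, 1, 1]
model = train(xs, ys)
print([predict(model, x) for x in xs])   # [0, 0, 1, 1]
```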
The cameras may be monocular or binocular cameras and obtain image information of the surroundings. Preferably, several binocular cameras shoot the underwater environment images, and the image information is then transmitted to the graphics processor. The graphics processing device uses a platform such as NVIDIA JETSON to process the images obtained by the cameras according to the deep neural network model trained by the server, so as to recognize the targets in the images. The image processor is further used to perform depth calculation on the collected image information based on the deep neural network algorithm (as shown in Fig. 3):
First, 2D feature extraction is performed on the left and right images (of size H×W×C, where C=3 is the number of input channels) based on the deep neural network algorithm. Then two matching cost volumes are created from the resulting feature tensors (of size 1/2H×1/2W×F, where F=32 is the number of features): one for left-to-right matching and one for right-to-left matching. Next, at each corresponding pixel position, the left and right features are concatenated and copied into a 4D cost volume (of size 1/2D×1/2H×1/2W×2F, where D denotes the maximum disparity). Then, stereo matching is performed by comparing features across the two previously created cost volumes, producing left and right tensors of matching costs for the pixels of the two images (of size D×H×W×1). Finally, the disparity is extracted from the disparity tensor map and the depth information in the image is computed, yielding the depth information of the images around the robot. Fig. 8 shows the algorithm framework for depth calculation based on the deep neural network algorithm, i.e., the architecture of the second deep neural network model. The target's three-dimensional spatial position is then computed from the depth information, so that the tracked target and obstacles can be intelligently distinguished, and a system such as ROS is used to compute, in the three-dimensional space, an optimal path to the designated position;
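The patent delegates the optimal-path computation to a system such as ROS; as a self-contained illustration of the idea, a shortest collision-free path on a small occupancy grid can be found with breadth-first search:

```python
# Shortest collision-free path on an occupancy grid via breadth-first search.
# 0 = free cell, 1 = obstacle. Illustrative stand-in for the planner (the
# patent delegates planning to a system such as ROS).
from collections import deque

def plan(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}                 # doubles as the visited set
    q = deque([start])
    while q:
        cell = q.popleft()
        if cell == goal:                 # reconstruct the path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                q.append((nr, nc))
    return None                          # no collision-free path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = plan(grid, (0, 0), (2, 0))
print(path)   # routes around the blocked middle row
```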
The propulsion control device uses a platform such as Pixhawk to control the rotation of the propellers according to the resolved optimal path and the robot's own position and attitude, so that the robot moves along the predetermined path;
The communication device can communicate with the remote control terminal in a wired or wireless manner; operators can configure and control the robot through the remote control terminal.
This system uses the server to train a deep neural network on image data of specific underwater targets and organisms, and the trained deep neural network model recognizes the image information obtained by the cameras, thereby achieving intelligent recognition of different underwater targets. The depth algorithm based on the deep neural network (the second deep neural network model) can reconstruct the three-dimensional space around the robot more accurately: the underwater environment is reconstructed in three dimensions from the image information obtained by the cameras, the tracked underwater target and obstacles are judged intelligently, and the path is planned intelligently according to the tracking mode set manually from the remote terminal.
Finally, the invention also discloses an intelligent underwater robot target tracking method, applied to the intelligent underwater robot of the present invention. As shown in Fig. 9, the target tracking method comprises:
S100: receiving a control command from the remote control terminal and obtaining information on the target to be tracked;
S200: obtaining an underwater environment image;
S300: performing image processing on the underwater environment image based on the first deep neural network model, and recognizing each underwater target and obstacle in the image;
S400: judging, from the recognized underwater targets, whether the remote control terminal has designated a target to track, and if so, obtaining the pixel coordinates of the designated target in the image;
S500: performing image processing on the underwater environment image based on the second deep neural network model, constructing the three-dimensional space of the underwater environment at the underwater robot's current position, and determining the designated target's position information from the pixel coordinates of the designated target in the image obtained by the control module;
S600: planning, according to the preset tracking mode and the constructed three-dimensional space, a movement path along which the underwater robot tracks the designated target;
S700: controlling the underwater robot to move underwater along the planned path, thereby tracking the designated target and avoiding obstacles.
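Steps S100 through S700 form a sense-plan-act loop, which can be sketched as follows (every helper function is a stub standing in for one of the patent's modules; none of these names come from the patent):

```python
# Skeleton of the S100-S700 tracking loop. Each helper is a stub standing in
# for a module of the patent (camera, recognizer, space constructor, planner).

def receive_command():   return "diver"                                  # S100
def capture_image():     return "frame"                                  # S200
def recognize(img):      return {"diver": (120, 80), "rock": (40, 60)}   # S300
def build_space(img):    return "3d-map"                                 # S500
def plan_path(space, pos, mode):  return [pos, "goal"]                   # S600
def execute(path):       return f"moving along {len(path)} waypoints"    # S700

def tracking_step(mode="follow"):
    target = receive_command()                       # S100
    img = capture_image()                            # S200
    detections = recognize(img)                      # S300
    if target not in detections:                     # S400: target not visible
        return "fall back to remote signal source"
    pixel = detections[target]                       # S400: pixel coordinates
    space = build_space(img)                         # S500: 3D reconstruction
    path = plan_path(space, pixel, mode)             # S600: path planning
    return execute(path)                             # S700: motion control

print(tracking_step())
```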
With the target tracking method of the present invention, underwater targets and organisms of different types can be recognized and image depth can be calculated more accurately, enabling the underwater robot to avoid obstacles accurately in real time and to plan its movement path more intelligently.
Another embodiment of the intelligent underwater robot target tracking method of the present invention, as shown in Fig. 10, comprises:
S100: receiving a control command from the remote control terminal and obtaining information on the target to be tracked;
S200: obtaining an underwater environment image;
S300: performing image processing on the underwater environment image based on the first deep neural network model, and recognizing each underwater target and obstacle in the image;
S400: judging, from the recognized underwater targets, whether the remote control terminal has designated a target to track, and if so, obtaining the pixel coordinates of the designated target in the image;
S511: performing 2D feature extraction, based on the deep neural network algorithm, on each of the left and right underwater environment images obtained by the binocular camera, and obtaining feature tensors;
S512: creating two matching cost volumes from the obtained feature tensors, one for left-to-right matching and the other for right-to-left matching;
S513: performing stereo matching with the two matching cost volumes respectively, and obtaining a disparity tensor map;
S514: extracting the disparity of each pixel from the disparity tensor map, and calculating the depth information in the image;
S520: constructing the three-dimensional space of the underwater environment in which the underwater robot is located, from the underwater environment image and the depth information;
S530: determining the designated target's position information from the three-dimensional space constructed by the three-dimensional construction submodule and the target's pixel coordinates in the underwater environment image;
S600: planning, according to the preset tracking mode and the constructed three-dimensional space, a movement path along which the underwater robot tracks the designated target;
S700: controlling the underwater robot to move underwater along the planned path, thereby tracking the designated target and avoiding obstacles.
This embodiment builds, based on the deep neural network algorithm, the first deep neural network model suitable for underwater target recognition, so that different underwater targets and obstacles can be recognized intelligently. With the second deep neural network model for image depth calculation, also built on the deep neural network algorithm, image depth information can be calculated more accurately, the target's position can be located, the underwater three-dimensional environment can be constructed more accurately, and an optimal motion path can be planned intelligently in real time.
The intelligent underwater robot target tracking method of the present invention corresponds to the intelligent underwater robot of the present invention; the technical details in the embodiments of the intelligent underwater robot apply equally to the embodiments of the target tracking method, and, to reduce repetition, are not repeated here.
Although preferred embodiments of the present invention have been described, once a person skilled in the art knows the basic inventive concept, additional changes and modifications may be made to these embodiments. The appended claims are therefore intended to be interpreted as covering the preferred embodiments and all changes and modifications falling within the scope of the invention.
Obviously, those skilled in the art can make various changes and modifications to the invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to encompass them.

Claims (10)

1. An intelligent underwater robot, characterized in that it comprises:
a communication module for receiving control commands from a remote control terminal and exchanging information with the remote control terminal;
an image acquisition module for obtaining underwater environment images;
a target recognition module for performing image processing on the underwater environment image based on a first deep neural network model, and recognizing each underwater target and obstacle in the image;
a control module for judging, from the underwater targets recognized by the target recognition module, whether the remote control terminal has designated a target to track, and if so, obtaining the pixel coordinates of the designated target in the image;
a space construction module for performing image processing on the underwater environment image based on a second deep neural network model, constructing the three-dimensional space of the underwater environment at the underwater robot's current position, and determining the designated target's position information from the pixel coordinates of the designated target in the image obtained by the control module;
the control module being further used to plan, according to a preset tracking mode and the three-dimensional space built by the space construction module, a movement path along which the underwater robot tracks the designated target; and
an execution module for controlling the underwater robot to move underwater along the path planned by the control module, thereby tracking the designated target and avoiding obstacles.
2. The intelligent underwater robot according to claim 1, characterized in that the image acquisition module comprises:
an image acquisition submodule for collecting original images of the underwater environment; and
an image preprocessing submodule for performing image preprocessing on the original images of the underwater environment collected by the image acquisition submodule to obtain the underwater environment images, the image preprocessing comprising: denoising, image enhancement and image compensation.
3. The intelligent underwater robot according to claim 1, characterized in that the target recognition module comprises:
an image segmentation submodule for performing binary image segmentation on the underwater environment image;
a feature extraction submodule for performing feature extraction on each image segmented by the image segmentation submodule; and
a feature matching submodule for performing, according to prestored feature samples, feature matching on the features extracted by the feature extraction submodule, and recognizing each target and obstacle in the underwater environment image.
4. The intelligent underwater robot according to claim 1, characterized in that the space construction module comprises:
a depth calculation submodule for performing depth calculation on the collected underwater environment image based on the second deep neural network model;
a three-dimensional construction submodule for constructing the three-dimensional space of the underwater environment in which the underwater robot is located, from the underwater environment image and the depth information calculated by the depth calculation submodule; and
a position determination submodule for determining the designated target's position information from the three-dimensional space constructed by the three-dimensional construction submodule and the target's pixel coordinates in the underwater environment image.
5. The intelligent underwater robot according to claim 4, characterized in that the image acquisition module comprises at least one pair of binocular cameras, and the depth calculation submodule comprises:
a feature extraction unit for performing 2D feature extraction, based on the deep neural network algorithm, on each of the left and right underwater environment images obtained by the binocular camera, and obtaining feature tensors;
a matching cost volume creation unit for creating two matching cost volumes from the obtained feature tensors, one for left-to-right matching and the other for right-to-left matching;
a stereo matching unit for performing stereo matching with the two matching cost volumes created by the matching cost volume creation unit respectively, and obtaining a disparity tensor map; and
a depth calculation unit for extracting the disparity of each pixel from the disparity tensor map obtained by the stereo matching unit, and calculating the depth information in the image.
6. The intelligent underwater robot according to claim 4, characterized in that the three-dimensional construction submodule comprises:
a target coordinate calculation unit for obtaining, from the depth information of the underwater environment image calculated by the depth calculation submodule, the three-dimensional coordinates of each underwater target and obstacle recognized by the target recognition module in the underwater environment image; and
a space reconstruction unit for reconstructing the three-dimensional space of the underwater robot's surroundings from the three-dimensional coordinates of each underwater target and obstacle obtained by the target coordinate calculation unit.
7. The intelligent underwater robot according to claim 1, characterized in that the tracking mode comprises a follow mode, an accompany mode, a lead mode, and an orbit mode.
8. The intelligent underwater robot according to any one of claims 1-7, characterized in that the control module is further used to choose, when no designated target is present among the targets recognized by the target recognition module, the signal source of the remote control terminal as a temporary target, until the target recognition module recognizes the target that the remote control terminal has designated for tracking.
9. An intelligent underwater robot system, characterized in that it comprises: the intelligent underwater robot according to any one of claims 1-8, and a remote control terminal and a server each in communication connection with the intelligent underwater robot.
10. An intelligent underwater robot target tracking method, characterized in that it is applied to the intelligent underwater robot according to any one of claims 1-8, the target tracking method comprising:
S100: receiving a control command from the remote control terminal and obtaining information on the target to be tracked;
S200: obtaining an underwater environment image;
S300: performing image processing on the underwater environment image based on the first deep neural network model, and recognizing each underwater target and obstacle in the image;
S400: judging, from the recognized underwater targets, whether the remote control terminal has designated a target to track, and if so, obtaining the pixel coordinates of the designated target in the image;
S500: performing image processing on the underwater environment image based on the second deep neural network model, constructing the three-dimensional space of the underwater environment at the underwater robot's current position, and determining the designated target's position information from the pixel coordinates of the designated target in the image obtained by the control module;
S600: planning, according to the preset tracking mode and the constructed three-dimensional space, a movement path along which the underwater robot tracks the designated target;
S700: controlling the underwater robot to move underwater along the planned path, thereby tracking the designated target and avoiding obstacles.
CN201810496705.5A 2018-05-22 2018-05-22 A kind of Intelligent Underwater Robot and its system, object mark tracking Pending CN108536157A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810496705.5A CN108536157A (en) 2018-05-22 2018-05-22 A kind of Intelligent Underwater Robot and its system, object mark tracking

Publications (1)

Publication Number Publication Date
CN108536157A true CN108536157A (en) 2018-09-14

Family

ID=63471691


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109194764A (en) * 2018-09-25 2019-01-11 杭州翼兔网络科技有限公司 A kind of diving apparatus operating condition analysis system
CN109359574A (en) * 2018-09-30 2019-02-19 宁波工程学院 Wide view field pedestrian detection method based on channel cascaded
CN109495732A (en) * 2018-10-31 2019-03-19 昆山睿力得软件技术有限公司 A kind of 3D imaging vision guidance system
CN109533235A (en) * 2018-12-09 2019-03-29 大连海事大学 A kind of under-water body detection robot and its working method
CN109978243A (en) * 2019-03-12 2019-07-05 北京百度网讯科技有限公司 Track of vehicle planing method, device, computer equipment, computer storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102042835A (en) * 2010-11-05 2011-05-04 中国海洋大学 Autonomous underwater vehicle combined navigation system
WO2014018800A1 (en) * 2012-07-27 2014-01-30 Brain Corporation Apparatus and methods for generalized state-dependent learning in spiking neuron networks
CN104268625A (en) * 2014-10-09 2015-01-07 哈尔滨工程大学 Autonomous underwater vehicle track predicating method based on marine environment information
CN105446821A (en) * 2015-11-11 2016-03-30 哈尔滨工程大学 Improved neural network based fault diagnosis method for intelligent underwater robot propeller
CN105787489A (en) * 2016-03-04 2016-07-20 哈尔滨工程大学 Matching navigation algorithm based on underwater landform
US20180012125A1 (en) * 2016-07-09 2018-01-11 Doxel, Inc. Monitoring construction of a structure
CN107656545A (en) * 2017-09-12 2018-02-02 武汉大学 A kind of automatic obstacle avoiding searched and rescued towards unmanned plane field and air navigation aid
CN108038459A (en) * 2017-12-20 2018-05-15 深圳先进技术研究院 A kind of detection recognition method of aquatic organism, terminal device and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Juhwan Kim, et al.: "Convolutional Neural Network-based Real-time ROV Detection Using Forward-looking Sonar Image", IEEE *
Tang Xudong, et al.: "Optical vision target recognition system for underwater robots", Robot *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination