CN112101158A - Ship navigation auxiliary system and method based on deep learning and visual SLAM - Google Patents


Info

Publication number
CN112101158A
CN112101158A (application CN202010919263.8A)
Authority
CN
China
Prior art keywords
ship
target
deep learning
tracking
analysis processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010919263.8A
Other languages
Chinese (zh)
Inventor
李鹭 (Li Lu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Zhihai Lian Technology Co ltd
Original Assignee
Sichuan Zhihai Lian Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Zhihai Lian Technology Co ltd filed Critical Sichuan Zhihai Lian Technology Co ltd
Priority to CN202010919263.8A priority Critical patent/CN112101158A/en
Publication of CN112101158A publication Critical patent/CN112101158A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01D: MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D21/00: Measuring or testing not otherwise provided for
    • G01D21/02: Measuring two or more variables by means not covered by a single other subclass
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133: Distances to prototypes
    • G06F18/24137: Distances to cluster centroids
    • G06F18/2414: Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/207: Analysis of motion for motion estimation over a hierarchy of resolutions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y: INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y40/00: IoT characterised by the purpose of the information processing
    • G16Y40/10: Detection; Monitoring
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y: INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y40/00: IoT characterised by the purpose of the information processing
    • G16Y40/20: Analytics; Diagnosis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07: Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Traffic Control Systems (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Biophysics (AREA)

Abstract

The invention discloses a ship navigation auxiliary system and method based on deep learning and visual SLAM. The system comprises an intelligent terminal, an Internet of Things platform, a streaming media server, a video analysis processing module, and a target identification tracking module. The streaming media server, the video analysis processing module, and the target identification tracking module are each interactively connected with the Internet of Things platform; the Internet of Things platform is interactively connected with the intelligent terminal; and a semi-automatic labeling module is embedded inside the intelligent terminal. The ship navigation auxiliary system and method combine active-sensor detection (e.g. radar, visible-light cameras, and thermal-imaging cameras) and passive-sensor detection (e.g. AIS and GPS) with highly integrated real-time situational awareness to perform real-time detection, dynamic tracking, intelligent early warning, and historical backtracking of targets in a specific area, thereby ensuring the safety of the ship and providing effective monitoring of the ship and the targets around it.

Description

Ship navigation auxiliary system and method based on deep learning and visual SLAM
Technical Field
The invention relates to the technical field of ship navigation, in particular to a ship navigation auxiliary system and method based on deep learning and visual SLAM.
Background
Shipping refers to the transport of people or goods by water. It offers large transport capacity, low energy consumption, low cost, and environmental friendliness, and plays an important role in international trade and national transportation. Developing intelligent shipping is significant both for improving transport capacity and for improving the safety of shipping.
The Internet of Things (IoT) is an intelligent information service system built on communication networks such as the internet and mobile networks. Oriented to the needs of different application fields, it uses smart objects with sensing, communication, and computing capabilities to automatically acquire information from the physical world, interconnects all independently addressable physical objects, and realizes comprehensive sensing, reliable transmission, and intelligent analysis and processing, thereby connecting people with objects and objects with each other.
In the ship domain, technical means such as the Global Positioning System (GPS), the Automatic Radar Plotting Aid (ARPA), closed-circuit television surveillance (CCTV), the Automatic Identification System (AIS), and the Electronic Chart Display and Information System (ECDIS) have driven ships rapidly toward informatization and intellectualization, but a gap remains to a truly intelligent ship capable of automatic sensing, autonomous analysis, and intelligent operation.
The emergence of the Internet of Things provides a new approach to intelligent ship development: IoT technology offers a new framework for intelligent ship services and improves the management efficiency of ships.
The Internet of Vessels is the specific form the Internet of Things takes in the shipping field. Aiming at refined shipping management, comprehensive industry services, and a humanized travel experience, it takes enterprises, crews, ships, and cargo as its objects; covers the core technologies of fairways, ship locks, bridges, ports, and docks together with converged networking; and, with data at its center, realizes an intelligent shipping information service network interconnecting people and ships, ships and cargo, and ships and shore.
at present, shipping information management and service levels have some disadvantages, wherein the shipping intelligent efficiency is low, and the cost for acquiring and sharing ship information is too high, which becomes a main bottleneck restricting the shipping development, so how to fully utilize technical means and comprehensively build a ship networking system is realized to solve the problems existing at present, promote business cooperation, ensure the safety of ship navigation, efficient transportation of goods and the like, and the like are more urgent, therefore, a ship navigation auxiliary system and a method based on deep learning and visual SLAM are provided, so as to solve the problems.
Disclosure of Invention
The invention aims to provide a ship navigation auxiliary system and method based on deep learning and visual SLAM, so as to overcome the shortcomings of current shipping information management and services described in the background art: low shipping intelligence and the excessive cost of acquiring and sharing ship information, which is the main bottleneck restricting shipping development.
To achieve this purpose, the invention provides the following technical scheme: a ship navigation auxiliary system based on deep learning and visual SLAM comprises an intelligent terminal, an Internet of Things platform, a streaming media server, a video analysis processing module, and a target identification tracking module. The streaming media server, the video analysis processing module, and the target identification tracking module are each interactively connected with the Internet of Things platform; the Internet of Things platform is interactively connected with the intelligent terminal; and a semi-automatic labeling module is embedded inside the intelligent terminal.
Preferably, the target identification tracking system includes a camera system, a radar system, an AIS system, a VHF system, a hydrological monitor, and an electronic compass; based on these, the target identification tracking system realizes ship classification detection, ship name detection and identification, electronic tracking, and video fusion and superposition. The camera system includes several high-definition zoom cameras and an integrated pan-tilt camera.
Preferably, the video analysis processing module performs video analysis, ship name recognition, and deep learning, assisting the target tracking system via the Internet of Things platform.
Preferably, the Internet of Things platform is hosted in a central machine room, and the intelligent terminal retrieves data from the streaming media server, the video analysis processing module, and the target identification tracking module through the Internet of Things platform.
The invention also provides a ship navigation auxiliary method based on deep learning and visual SLAM, which comprises the following steps:
Step one: install a camera system, a radar system, an AIS system, a VHF system, a hydrological monitor, and an electronic compass at the corresponding positions on the ship;
Step two: set up the network to guarantee network transmission of the corresponding data;
Step three: while the ship is sailing, detect targets through the camera system, radar system, AIS system, VHF system, hydrological monitor, and electronic compass;
Step four: use the video analysis processing module to assist the target recognition tracking system in capturing camera images from the camera system for analysis;
Step five: perform ship classification detection, ship name detection and identification, electronic tracking, and video fusion and superposition in real time through the video analysis processing module and the target recognition tracking system;
Step six: relay the pictures in real time through the streaming media server via the Internet of Things platform, forwarding them to the monitoring center and the intelligent terminal;
Step seven: on the intelligent terminal, label the ship using the internally embedded semi-automatic labeling module to assist navigation;
Step eight: the video analysis processing module receives the labeling information via the Internet of Things platform and uses it together with the picture information for deep learning.
Preferably, the target identification tracking system uses the camera system, radar system, AIS system, VHF system, hydrological monitor, and electronic compass to monitor and identify, in real time, the images around the sailing ship, the water quality, the geographic environment, and the surrounding physical environment, while also monitoring the ship's attitude and positioning information.
Preferably, the ship classification detection in step five refers to feature extraction for the ship: color feature extraction, shape feature extraction, motion feature extraction, feature matching, target position prediction, and the like.
Preferably, ship name detection and identification is based on transfer learning with a deep convolutional neural network: millions of samples are collected under varying factors such as ship type, ship name marking style, picture resolution, background, season, time of day, illumination, and viewing angle, and the samples are manually labeled on the intelligent terminal for deep learning.
Preferably, electronic tracking is performed on the video information: without prior knowledge of the target's motion, the target's motion state is estimated in real time from the source data, thereby tracking the target.
Preferably, video fusion and superposition fuses the various analyzed ship information onto the target in the video, enhancing target visualization and identification; the data can also be stored and retrieved through the streaming media server.
Compared with the prior art, the invention has the following beneficial effects: the ship navigation auxiliary system and method based on deep learning and visual SLAM combine active-sensor detection (e.g. radar, visible-light cameras, and thermal-imaging cameras) and passive-sensor detection (e.g. AIS and GPS) with highly integrated real-time situational awareness to perform real-time detection, dynamic tracking, intelligent early warning, and historical backtracking of targets in a specific area, thereby ensuring the safety of the ship and providing effective monitoring of the ship and the targets around it.
Drawings
FIG. 1 is a first schematic flow diagram of the system of the present invention;
FIG. 2 is a second schematic flow diagram of the system of the present invention;
FIG. 3 is a schematic diagram of a system architecture according to the present invention;
FIG. 4 is a layout of the apparatus of the present invention;
fig. 5 is a physical topology diagram of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to figs. 1-5, the present invention provides a technical solution: a ship navigation auxiliary system based on deep learning and visual SLAM comprises an intelligent terminal, an Internet of Things platform, a streaming media server, a video analysis processing module, and a target identification tracking module. The streaming media server, the video analysis processing module, and the target identification tracking module are each interactively connected with the Internet of Things platform; the Internet of Things platform is interactively connected with the intelligent terminal; and a semi-automatic labeling module is embedded inside the intelligent terminal.
The invention further provides a target identification tracking system comprising a camera system, a radar system, an AIS system, a VHF system, a hydrological monitor, and an electronic compass, on the basis of which it realizes ship classification detection, ship name detection and identification, electronic tracking, and video fusion and superposition; the camera system comprises several high-definition zoom cameras and an integrated pan-tilt camera. Targets are tracked and monitored by applying CCTV, radar, AIS, and similar systems singly or jointly: through AIS or radar tracking, a user can track and identify a target ship on the chart and obtain its latest dynamic information in detail; through CCTV tracking, a user can follow a target ship with a camera and visually learn the ship's appearance, navigation environment, and behavior; and the user can combine several tracking modes for linked tracking to grasp the information and movements of a problem ship more comprehensively and accurately.
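The linked tracking just described requires associating contacts from different sensors. A toy sketch of one common approach, nearest-neighbour association between a radar contact and AIS-reported vessels; the field names and distance threshold are illustrative assumptions, not the patent's method:

```python
# Associate a radar contact with the nearest AIS-reported vessel so that,
# for example, a CCTV camera can be pointed at a named ship.
# Coordinates are in arbitrary chart units; max_dist is an assumed gate.
import math

def associate(radar_contact, ais_targets, max_dist=0.5):
    """Return the AIS target closest to the radar contact, or None if none is within the gate."""
    best, best_d = None, max_dist
    for t in ais_targets:
        d = math.hypot(t["x"] - radar_contact["x"], t["y"] - radar_contact["y"])
        if d < best_d:
            best, best_d = t, d
    return best
```

In practice the gate would be set from sensor error models, and ambiguous cases resolved with a global assignment method rather than greedy nearest-neighbour.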
Further, the video analysis processing module performs video analysis, ship name recognition, and deep learning, assisting the target tracking system via the Internet of Things platform.
Furthermore, the Internet of Things platform is hosted in a central machine room, and the intelligent terminal retrieves data from the streaming media server, the video analysis processing module, and the target identification tracking module through the Internet of Things platform;
the video is structured, and the video content is organized into the technology of structured information which can be understood by computers and human beings according to the semantic relation by adopting the processing means of space-time segmentation, feature extraction, object identification and the like, so that the video which can only be seen and can not be called originally can be changed into the information which can be called.
The invention also provides a ship navigation auxiliary method based on deep learning and visual SLAM, which comprises the following steps:
Step one: install a camera system, a radar system, an AIS system, a VHF system, a hydrological monitor, and an electronic compass at the corresponding positions on the ship;
Step two: set up the network to guarantee network transmission of the corresponding data;
Step three: while the ship is sailing, detect targets through the camera system, radar system, AIS system, VHF system, hydrological monitor, and electronic compass;
Step four: use the video analysis processing module to assist the target recognition tracking system in capturing camera images from the camera system for analysis;
Step five: perform ship classification detection, ship name detection and identification, electronic tracking, and video fusion and superposition in real time through the video analysis processing module and the target recognition tracking system;
Step six: relay the pictures in real time through the streaming media server via the Internet of Things platform, forwarding them to the monitoring center and the intelligent terminal;
Step seven: on the intelligent terminal, label the ship using the internally embedded semi-automatic labeling module to assist navigation;
Step eight: the video analysis processing module receives the labeling information via the Internet of Things platform and uses it together with the picture information for deep learning.
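The eight steps above form a repeating acquisition, analysis, relay, and labeling loop. A minimal sketch with hypothetical component names, not the patent's actual implementation:

```python
# Sketch of the eight-step assistance flow. All class and method names
# here are hypothetical illustrations of the roles described in the text.

class NavigationAssistancePipeline:
    def __init__(self, sensors, analyzer, streamer, terminal):
        self.sensors = sensors      # camera/radar/AIS/VHF/hydrology/compass (steps 1-3)
        self.analyzer = analyzer    # video analysis + target recognition (steps 4-5)
        self.streamer = streamer    # streaming media server (step 6)
        self.terminal = terminal    # intelligent terminal with semi-automatic labeling (step 7)

    def run_cycle(self):
        frame, detections = self.sensors.acquire()          # step 3: detect targets
        results = self.analyzer.process(frame, detections)  # steps 4-5: classify, track, fuse
        self.streamer.relay(frame, results)                 # step 6: relay to center/terminal
        labels = self.terminal.annotate(results)            # step 7: semi-automatic labeling
        self.analyzer.learn(labels)                         # step 8: feed labels back for training
        return results
```

Each `run_cycle` call would correspond to one acquisition interval; the labeling feedback in step 8 is what lets the deep-learning models improve over time.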
Further, the target identification tracking system uses the camera system, radar system, AIS system, VHF system, hydrological monitor, and electronic compass to monitor and identify, in real time, the images around the sailing ship, the water quality, the geographic environment, and the surrounding physical environment, while also monitoring the ship's attitude and positioning information.
The invention further provides that the ship classification detection in step five refers to feature extraction for the ship: color feature extraction, shape feature extraction, motion feature extraction, feature matching, target position prediction, and the like.
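Of the features listed above, color feature extraction and feature matching can be illustrated with a toy coarse-histogram sketch; this is a stand-in for illustration, not the patent's detector:

```python
# Coarse RGB histogram as a color feature, plus histogram-intersection
# matching. An image is assumed to be a list of (r, g, b) pixel tuples.

def color_histogram(pixels, bins=4):
    """Return a normalized, concatenated per-channel histogram."""
    hist = [0.0] * (bins * 3)
    step = 256 // bins
    for r, g, b in pixels:
        hist[r // step] += 1
        hist[bins + g // step] += 1
        hist[2 * bins + b // step] += 1
    n = float(len(pixels)) or 1.0
    return [v / n for v in hist]

def match_score(feat_a, feat_b):
    """Histogram intersection: 3.0 means identical color distributions (one per channel)."""
    return sum(min(a, b) for a, b in zip(feat_a, feat_b))
```

A real system would combine this with shape and motion features and learned descriptors, but the extract-then-match structure is the same.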
Further, ship name detection and identification is based on transfer learning with a deep convolutional neural network: millions of samples are collected under varying factors such as ship type, ship name marking style, picture resolution, background, season, time of day, illumination, and viewing angle, and the samples are manually labeled on the intelligent terminal for deep learning.
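Transfer learning of this kind typically keeps a pretrained backbone fixed and trains only a new classification head on the labeled ship samples. The sketch below trains a softmax head on frozen feature vectors standing in for backbone outputs; it is a minimal illustration under that assumption, not the patent's network:

```python
# Train a linear softmax head on frozen "backbone" features via gradient
# descent on the cross-entropy loss. In real transfer learning the
# features would come from a pretrained CNN; here they are given directly.
import numpy as np

def train_head(features, labels, n_classes, lr=0.5, epochs=200):
    """Fit a linear softmax head; returns the weight matrix."""
    rng = np.random.default_rng(0)
    W = rng.normal(0, 0.01, (features.shape[1], n_classes))
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = features @ W
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        p = np.exp(logits); p /= p.sum(axis=1, keepdims=True)
        W -= lr * features.T @ (p - onehot) / len(labels)
    return W

def predict(features, W):
    return (features @ W).argmax(axis=1)
```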
The invention further provides that electronic tracking is completed on the basis of the video information: without prior knowledge of the target's motion, the target's motion state is estimated in real time from the source data, realizing target tracking; the corresponding positions of the same target in different frames of the image sequence are determined, the image sequence is divided into several moving objects in the spatial domain, and these objects are modeled, detected, and tracked in the time domain.
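Estimating a target's motion state in real time without prior motion information is commonly done with a recursive filter. Below is a minimal constant-velocity Kalman filter over position measurements; the model and noise values are illustrative assumptions, one common way to realize the estimation described above:

```python
# Constant-velocity Kalman filter: state [px, py, vx, vy] is estimated
# from position-only measurements. The large initial covariance encodes
# "no prior motion information".
import numpy as np

class ConstantVelocityTracker:
    def __init__(self, dt=1.0, meas_noise=1.0, proc_noise=0.01):
        self.x = np.zeros(4)                       # state estimate
        self.P = np.eye(4) * 1000.0                # high initial uncertainty
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt           # constant-velocity transition
        self.H = np.zeros((2, 4))
        self.H[0, 0] = self.H[1, 1] = 1.0          # we observe position only
        self.R = np.eye(2) * meas_noise
        self.Q = np.eye(4) * proc_noise

    def update(self, z):
        # Predict with the motion model
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Correct with the position measurement z = (px, py)
        y = np.asarray(z) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x
```

Fed a stream of detected positions, the filter converges to the target's velocity within a few frames, which is what allows prediction of the target's next position.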
The invention further provides that video fusion and superposition fuses the various analyzed ship information onto the target in the video, enhancing target visualization and identification, while the data can be stored and retrieved through the streaming media server. The ship attitude and position information obtained by the sensing equipment and the target spatial information (radar/AIS) are used together with the detected target's video data and depth-of-field data; a three-dimensional reconstruction technique computes the target's three-dimensional spatial coordinates and derives the target's video positioning information, achieving superposition of the sensed information onto the video.
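The final step implied above, mapping a target's known three-dimensional coordinates to a video position so sensed information can be superimposed, can be sketched with a pinhole camera model; the intrinsic parameter values are illustrative assumptions:

```python
# Project a 3D point in the camera frame (metres) to pixel coordinates
# with a pinhole model: u = fx*x/z + cx, v = fy*y/z + cy.
import numpy as np

def project(point_3d, fx=800.0, fy=800.0, cx=640.0, cy=360.0):
    """Return (u, v) pixel coordinates, or None if the point is behind the camera."""
    x, y, z = point_3d
    if z <= 0:
        return None
    return (fx * x / z + cx, fy * y / z + cy)
```

In a full system the point would first be transformed from the world/ship frame into the camera frame using the ship attitude from the compass and the camera's extrinsic calibration.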
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk. It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and all the changes or substitutions should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A ship navigation auxiliary system based on deep learning and visual SLAM, comprising an intelligent terminal, an Internet of Things platform, a streaming media server, a video analysis processing module, and a target identification tracking module, characterized in that: the streaming media server, the video analysis processing module, and the target identification tracking module are each interactively connected with the Internet of Things platform; the Internet of Things platform is interactively connected with the intelligent terminal; and a semi-automatic labeling module is embedded inside the intelligent terminal.
2. The ship navigation auxiliary system based on deep learning and visual SLAM of claim 1, characterized in that: the target identification tracking system comprises a camera system, a radar system, an AIS system, a VHF system, a hydrological monitor, and an electronic compass; based on these, the target identification tracking system realizes ship classification detection, ship name detection and identification, electronic tracking, and video fusion and superposition; and the camera system comprises several high-definition zoom cameras and an integrated pan-tilt camera.
3. The ship navigation auxiliary system based on deep learning and visual SLAM of claim 1, characterized in that: the video analysis processing module performs video analysis, ship name recognition, and deep learning, assisting the target tracking system via the Internet of Things platform.
4. The ship navigation auxiliary system based on deep learning and visual SLAM of claim 1, characterized in that: the Internet of Things platform is hosted in a central machine room, and the intelligent terminal retrieves data from the streaming media server, the video analysis processing module, and the target identification tracking module through the Internet of Things platform.
5. A ship navigation auxiliary method based on deep learning and visual SLAM, characterized in that the auxiliary method comprises the following steps:
the method comprises the following steps: correspondingly embedding and installing a camera system, a radar system, an AIS system, a VHF system, a hydrological detector and an electronic compass at corresponding positions of a ship;
step two: network erection is carried out, and network transmission of corresponding data information is guaranteed;
step three: when the ship sails, the target is detected through a camera system, a radar system, an AIS system, a VHF system, a hydrological detector and an electronic compass;
step four: the video analysis processing module is used for assisting the target recognition tracking system to capture a camera image of the camera system for analysis processing;
step five: carrying out ship classification detection, ship name detection and identification, electronic tracking and video fusion and superposition in real time through a video analysis processing module and a target identification tracking system;
step six: meanwhile, picture rebroadcasting is carried out in real time through the stream media server through the Internet of things platform, and pictures are forwarded to the monitoring center and the intelligent terminal;
step seven: the intelligent terminal marks the ship based on an internally embedded semi-automatic marking module to assist navigation;
step eight: the video analysis processing module receives the annotation information based on the Internet of things platform and assists the picture information to carry out deep learning.
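Steps 3 through 8 above form a capture-analyze-overlay-distribute-annotate loop. The stdlib-only skeleton below sketches that control flow with stubbed stages; every function name and return shape is an illustrative assumption, not the claimed implementation:

```python
def capture_frame(source):
    # Steps 3-4: grab a camera frame for analysis (stubbed)
    return {"pixels": source, "timestamp": 0.0}

def analyze_frame(frame):
    # Step 5: classification, name recognition, tracking (stubbed)
    return {"ships": [{"class": "cargo", "name": "UNKNOWN", "bbox": (0, 0, 10, 10)}]}

def overlay_results(frame, analysis):
    # Step 5: fuse the analysis results back onto the video frame
    out = dict(frame)
    out["overlay"] = analysis["ships"]
    return out

def forward_to_clients(frame, clients):
    # Step 6: relay annotated pictures to the monitoring center and terminals
    return {c: frame for c in clients}

def collect_annotations(frame):
    # Step 7: semi-automatic labelling on the intelligent terminal (stubbed)
    return [{"bbox": s["bbox"], "label": s["class"]} for s in frame["overlay"]]

def run_pipeline(source, clients):
    frame = capture_frame(source)
    analysis = analyze_frame(frame)
    annotated = overlay_results(frame, analysis)
    delivered = forward_to_clients(annotated, clients)
    labels = collect_annotations(annotated)   # Step 8: fed back for training
    return delivered, labels
```

In practice each stage would run as a separate service connected through the IoT platform rather than as in-process calls.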
6. The ship navigation assistance method based on deep learning and visual SLAM of claim 5, wherein: the target recognition and tracking system uses the camera system, radar system, AIS system, VHF system, hydrological monitor, and electronic compass to monitor and identify, in real time, the imagery, water quality, geographic environment, and surrounding physical environment along the ship's course, while also monitoring the ship's attitude and positioning information.
7. The ship navigation assistance method based on deep learning and visual SLAM of claim 5, wherein: the ship classification detection in Step 5 refers to feature extraction for ships, including color feature extraction, shape feature extraction, motion feature extraction, feature matching, and target position prediction.
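The feature-extraction chain recited in claim 7 can be illustrated with a minimal, stdlib-only sketch: a color histogram as the color feature, histogram intersection as one simple feature-matching score, and constant-velocity extrapolation as one simple form of target position prediction. All names and choices here are illustrative assumptions, not the claimed algorithm:

```python
def color_histogram(pixels, bins=4):
    # pixels: list of (r, g, b) tuples with components in 0..255
    hist = [0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    total = len(pixels) or 1
    return [h / total for h in hist]   # normalized color feature

def histogram_intersection(h1, h2):
    # feature matching: 1.0 = identical color distributions, 0.0 = disjoint
    return sum(min(a, b) for a, b in zip(h1, h2))

def predict_next_position(p_prev, p_curr):
    # target position prediction by constant-velocity extrapolation
    return (2 * p_curr[0] - p_prev[0], 2 * p_curr[1] - p_prev[1])
```

Shape and motion features (e.g. contour moments, optical flow) would follow the same pattern but are omitted for brevity.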
8. The ship navigation assistance method based on deep learning and visual SLAM of claim 5, wherein: the ship name detection and recognition is based on transfer learning with a deep convolutional neural network; million-scale samples are collected covering influencing factors such as different ship types, ship name lettering styles, image resolutions, backgrounds, seasons, times of day, illumination, and viewing angles, and the samples are manually annotated on the intelligent terminal for deep learning.
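The transfer-learning idea in claim 8 — keep a pretrained backbone frozen and train only a task-specific head on the annotated samples — can be shown in miniature without any deep-learning framework. Below, a fixed nonlinear mapping stands in for the frozen CNN backbone and a logistic-regression head is trained on top; this is a conceptual sketch under those assumptions, not the claimed network:

```python
import math

def pretrained_features(x):
    # stand-in for a frozen deep CNN backbone: a fixed nonlinear mapping
    return [math.tanh(x), math.tanh(2 * x), 1.0]   # last entry acts as bias

def train_head(samples, labels, lr=0.5, epochs=200):
    # transfer learning: only the head's weights are updated;
    # the backbone (pretrained_features) stays frozen throughout
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            f = pretrained_features(x)
            z = sum(wi * fi for wi, fi in zip(w, f))
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid output
            g = p - y                        # gradient of the log-loss
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
    return w

def predict(w, x):
    f = pretrained_features(x)
    return 1 if sum(wi * fi for wi, fi in zip(w, f)) > 0 else 0
```

With a real backbone (e.g. a torchvision model with `requires_grad=False` on its parameters) the structure is the same: frozen feature extractor, trainable head fitted to the manually annotated ship-name samples.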
9. The ship navigation assistance method based on deep learning and visual SLAM of claim 5, wherein: the electronic tracking is performed from video information; without prior knowledge of the target's motion, the target's motion state is estimated in real time from the source data so as to track the target, determining the positions of the same target across different frames of the image sequence; the image sequence is segmented into a plurality of moving objects in the spatial domain, and those objects are modeled, detected, and tracked in the temporal domain.
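One common way to estimate a target's motion state in real time without prior motion knowledge, as claim 9 describes, is a Kalman filter with a constant-velocity model. The 1-D, stdlib-only sketch below is an illustrative choice, not the claimed method; it fuses noisy position measurements into a smoothed position-and-velocity estimate:

```python
class ConstantVelocityKalman1D:
    """State [position, velocity]; observes noisy position only."""

    def __init__(self, q=1e-3, r=0.25):
        self.x = [0.0, 0.0]                    # state estimate
        self.P = [[1.0, 0.0], [0.0, 1.0]]      # estimate covariance
        self.q, self.r = q, r                  # process / measurement noise

    def step(self, z, dt=1.0):
        # predict: x <- F x, P <- F P F' + Q, with F = [[1, dt], [0, 1]]
        x0 = self.x[0] + dt * self.x[1]
        x1 = self.x[1]
        P = self.P
        P00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q
        P01 = P[0][1] + dt * P[1][1]
        P10 = P[1][0] + dt * P[1][1]
        P11 = P[1][1] + self.q
        # update with the position measurement z (H = [1, 0])
        S = P00 + self.r
        K0, K1 = P00 / S, P10 / S
        y = z - x0
        self.x = [x0 + K0 * y, x1 + K1 * y]
        self.P = [[(1 - K0) * P00, (1 - K0) * P01],
                  [P10 - K1 * P00, P11 - K1 * P01]]
        return self.x
```

A production tracker would run this per target in 2-D image or geographic coordinates and add data association to match detections to tracks across frames.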
10. The ship navigation assistance method based on deep learning and visual SLAM of claim 5, wherein: the video fusion and overlay fuses the analyzed and processed ship information onto the corresponding target in the video, enhancing target visualization and identification, while the data can also be stored in and retrieved from the streaming media server.
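The overlay operation of claim 10 amounts to drawing a bounding box around the target and attaching the fused ship information as a caption. The stdlib-only sketch below works on a frame modeled as a 2-D list; the function name, caption format, and pixel convention are illustrative assumptions:

```python
def overlay_target_info(frame, bbox, info):
    # frame: 2-D list of pixel values; bbox: (x0, y0, x1, y1), inclusive
    x0, y0, x1, y1 = bbox
    out = [row[:] for row in frame]          # do not mutate the source frame
    for x in range(x0, x1 + 1):              # horizontal box edges
        out[y0][x] = 1
        out[y1][x] = 1
    for y in range(y0, y1 + 1):              # vertical box edges
        out[y][x0] = 1
        out[y][x1] = 1
    # fused ship information rendered as a caption string
    caption = "{name} | {cls} | {speed:.1f} kn".format(**info)
    return out, caption
```

With a real video stack (e.g. OpenCV's `rectangle` and `putText`) the drawing calls change but the fusion step is the same: analysis results keyed to a target are rendered onto that target's pixels.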
CN202010919263.8A 2020-09-04 2020-09-04 Ship navigation auxiliary system and method based on deep learning and visual SLAM Pending CN112101158A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010919263.8A CN112101158A (en) 2020-09-04 2020-09-04 Ship navigation auxiliary system and method based on deep learning and visual SLAM


Publications (1)

Publication Number Publication Date
CN112101158A true CN112101158A (en) 2020-12-18

Family

ID=73757451

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010919263.8A Pending CN112101158A (en) 2020-09-04 2020-09-04 Ship navigation auxiliary system and method based on deep learning and visual SLAM

Country Status (1)

Country Link
CN (1) CN112101158A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112507965A (en) * 2020-12-23 2021-03-16 北京海兰信数据科技股份有限公司 Target identification method and system of electronic lookout system
CN112969049A (en) * 2021-01-29 2021-06-15 南京长江油运有限公司 Intelligent detection system for ship violation behaviors
CN113222961A (en) * 2021-05-27 2021-08-06 大连海事大学 Intelligent ship body detection system and method
CN113705502A (en) * 2021-09-02 2021-11-26 浙江索思科技有限公司 Ship target behavior understanding system integrating target detection and target tracking
CN113780127A (en) * 2021-08-30 2021-12-10 武汉理工大学 Ship positioning and monitoring system and method
CN117710923A (en) * 2023-12-14 2024-03-15 江苏镇扬汽渡有限公司 Auxiliary navigation method for transition under bad sight

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130093245A (en) * 2012-02-14 2013-08-22 (주)지엠티 Suspected smuggling vessel ais analysis system and it's analysis method on the basis of multi-sensors and sailing pattern analysis
CN106210484A (en) * 2016-08-31 2016-12-07 上海鹰觉科技有限公司 Waters surveillance polynary associating sensing device and cognitive method thereof
CN106199555A (en) * 2016-08-31 2016-12-07 上海鹰觉科技有限公司 A kind of unmanned boat navigation radar for collision avoidance detection method
CN106816038A (en) * 2017-03-17 2017-06-09 武汉理工大学 A kind of inland waters abnormal behaviour ship automatic identification system and method
CN108416361A (en) * 2018-01-18 2018-08-17 上海鹰觉科技有限公司 A kind of information fusion system and method based on sea survaillance
CN108873799A (en) * 2018-06-29 2018-11-23 南京海联智能科技有限公司 Boat-carrying intelligent driving assists terminal
CN109636921A (en) * 2018-12-17 2019-04-16 武汉理工大学 Intelligent vision ship sensory perceptual system and data processing method based on cloud platform
CN109709589A (en) * 2019-01-09 2019-05-03 深圳市芯鹏智能信息有限公司 A kind of air-sea region solid perceives prevention and control system
CN109725310A (en) * 2018-11-30 2019-05-07 中船(浙江)海洋科技有限公司 A kind of ship's fix supervisory systems based on YOLO algorithm and land-based radar system
CN109919113A (en) * 2019-03-12 2019-06-21 北京天合睿创科技有限公司 Ship monitoring method and system and harbour operation prediction technique and system
CN110060508A (en) * 2019-04-08 2019-07-26 武汉理工大学 A kind of ship automatic testing method for inland river bridge zone
CN110110964A (en) * 2019-04-04 2019-08-09 深圳市云恩科技有限公司 A kind of ship and ferry supervisory systems based on deep learning
CN110175186A (en) * 2019-05-15 2019-08-27 中国舰船研究设计中心 A kind of intelligent ship environmental threat target apperception system and method
CN110415224A (en) * 2019-07-22 2019-11-05 北京金交信息通信导航设计院 A kind of marine ships remote sense monitoring system and platform and method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
SINDRE FOSSEN et al.: "Extended Kalman Filter Design and Motion Prediction of Ships using Live Automatic Identification System (AIS) Data", 2018 2nd European Conference on Electrical Engineering and Computer Science (EECS), 25 November 2019 (2019-11-25), pages 464-470 *
FAN YUJIE et al.: "Feasibility of UAV Cruising and Electronic Cruising in the Taihu Lake Area", Navigation Management, 31 December 2017 (2017-12-31), pages 4-7 *
ZHENG TIANMING: "Research on Intelligent Ship Monitoring and Navigation Technology", China Masters' Theses Full-text Database, Information Science and Technology, no. 2013, 15 January 2013 (2013-01-15), pages 140-562 *
GAO BEILI et al.: "Implementation of an Integrated Safety Supervision Platform for Inland River Vessels", Port & Waterway Engineering, no. 9, 30 September 2012 (2012-09-30), pages 159-163 *


Similar Documents

Publication Publication Date Title
CN112101158A (en) Ship navigation auxiliary system and method based on deep learning and visual SLAM
Dilshad et al. Applications and challenges in video surveillance via drone: A brief survey
Shao et al. Saliency-aware convolution neural network for ship detection in surveillance video
CN109460740B (en) Ship identity recognition method based on AIS and video data fusion
Chen et al. AI-empowered speed extraction via port-like videos for vehicular trajectory analysis
Sharma et al. Fisher’s linear discriminant ratio based threshold for moving human detection in thermal video
CN103795976B (en) A kind of full-time empty 3 d visualization method
Geraldes et al. UAV-based situational awareness system using deep learning
Sebe et al. 3d video surveillance with augmented virtual environments
US9904852B2 (en) Real-time object detection, tracking and occlusion reasoning
Han et al. Aerial image change detection using dual regions of interest networks
CN114399606A (en) Interactive display system, method and equipment based on stereoscopic visualization
Park et al. Advanced wildfire detection using generative adversarial network-based augmented datasets and weakly supervised object localization
Seibert et al. SeeCoast port surveillance
Zhao et al. Deep learning-based object detection in maritime unmanned aerial vehicle imagery: Review and experimental comparisons
Delibasoglu et al. Motion detection in moving camera videos using background modeling and FlowNet
Ding et al. Individual surveillance around parked aircraft at nighttime: Thermal infrared vision-based human action recognition
Yadav et al. Challenging issues of video surveillance system using internet of things in cloud environment
Bloisi et al. Integrated visual information for maritime surveillance
Chen et al. Personnel Trajectory Extraction From Port-Like Videos Under Varied Rainy Interferences
Lu et al. Towards Generalizable Multi-Camera 3D Object Detection via Perspective Debiasing
Karishma et al. Artificial Intelligence in Video Surveillance
Schöller et al. Buoy Light Pattern Classification for Autonomous Ship Navigation Using Recurrent Neural Networks
Dašić et al. Some examples of video surveillance as a service applications
Cafaro et al. Towards Enhanced Support for Ship Sailing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination