CN112200101B - Video monitoring and analyzing method for maritime business based on artificial intelligence - Google Patents
- Publication number: CN112200101B
- Application number: CN202011102923.XA
- Authority
- CN
- China
- Prior art keywords
- frame
- identification
- tracking
- target object
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast (H: Electricity; H04N: Pictorial communication, e.g. television; H04N7/00: Television systems)
- G06F18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting (G06F: Electric digital data processing; G06F18/00: Pattern recognition; G06F18/21: Design or setup of recognition systems or techniques)
- G06N3/045: Combinations of networks (G06N: Computing arrangements based on specific computational models; G06N3/02: Neural networks; G06N3/04: Architecture, e.g. interconnection topology)
- G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items (G06V: Image or video recognition or understanding; G06V20/40: Scene-specific elements in video content)
Abstract
The invention discloses an artificial-intelligence-based video monitoring and analyzing method for maritime business, which comprises the following steps: 1, identifying the target objects in the identification area of each frame of a video data source with a recognition algorithm; 2, distinguishing and marking the target objects of the preceding and following frames of the video data source in a buffer area with a marking algorithm, so that the same target object is marked only once and the uniqueness of each identified object is guaranteed; 3, intercepting the identification area from each frame of the video, tracking each identified object inside the buffer area with a tracking algorithm, and following the target object's track out of the buffer area, ensuring that target objects are not confused by overlapping, occlusion and re-separation during tracking, thereby obtaining the identification results; and 4, recording the position where each tracked target object leaves the identification area, and using that position for behavior analysis and statistics on the tracked target object. The invention makes full use of the existing video monitoring equipment on inland waterways, thereby greatly saving equipment replacement costs.
Description
Technical Field
The invention relates to the field of inland waterway monitoring and management, and in particular to an artificial-intelligence-based video monitoring and analyzing method for maritime business.
Background
In recent years, with the growing numbers of tour ships, freight ships, ferries and river-resource development vessels on inland waterways, hundreds of water traffic safety accidents occur every year, causing hundreds of casualties and immeasurable property losses and posing great challenges to the supervision work of maritime departments.
To strengthen navigation control over inland navigable sections, maritime departments currently rely mainly on the Automatic Identification System (AIS) and the very high frequency (VHF) shore-ship data communication system at key docks, assisted by channel video means such as VTS radar and closed-circuit television (CCTV) monitoring, to supervise the safety of all kinds of ships. The AIS, working with the Global Positioning System (GPS), broadcasts over VHF channels dynamic information such as ship position, speed, rate of turn and course, combined with static information such as ship name, call sign, draught and dangerous cargo, to ships and shore stations in nearby waters. Neighboring ships and shore stations can thus keep track of the dynamic and static information of all ships on the nearby water surface in a timely manner, communicate and coordinate with each other at once, and take the necessary avoidance actions, which greatly helps ship safety.
The inland river VHF shore-ship data communication system works in the very high frequency (VHF) band and is one of the main communication means of inland river and offshore radio mobile services; it supports ship distress, urgency, safety and routine service communication, and is also an important communication tool for search and rescue operations, coordination and avoidance among ships, and vessel traffic service systems. However, AIS and VTS radar have exposed many drawbacks in the practical application of electronic cruising and cannot meet the business requirements of intelligent maritime supervision. For example, AIS has signal blind areas, many ships do not switch on or are not equipped with AIS for various reasons, and the information fusion between AIS and VTS radar is imperfect. The main shortcoming is that AIS lacks the ability to visually observe and control on-site conditions. Meanwhile, as the business requirements of maritime departments keep rising, the defects of early standard-definition video monitoring systems have also surfaced in application: when a ship is speeding, overloaded, turning around at will or overtaking, the original standard-definition cameras cannot provide effective image details, and in particular the ship's name cannot be read clearly, causing great inconvenience to maritime supervision and law enforcement personnel.
Disclosure of Invention
The invention aims to provide an artificial-intelligence-based video monitoring and analyzing method for maritime business that realizes ship tracking and monitoring primarily by video means and effectively compensates for the shortcomings of existing ship positioning equipment.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention relates to a maritime-business-oriented video monitoring and analyzing method based on artificial intelligence, which comprises the following steps:
step 1, identifying the target objects in the identification area of each frame of a video data source with a recognition algorithm, i.e. reading each frame of the video data source in sequence and setting an identification area on each frame;
step 2, distinguishing and marking the target objects of the preceding and following frames in a buffer area with a marking algorithm, completing the non-repeated marking of the same target object; the buffer area is set inside the identification area according to a set reduction ratio and coincides with its central point;
step 3, intercepting the identification area from each frame and tracking the identified objects with a tracking algorithm; the recognition step uses the YOLO algorithm, described below;
YOLO (You Only Look Once) is an existing object recognition and positioning algorithm based on a deep neural network; its biggest characteristic is its high running speed, which allows it to be used in real-time systems. The innovation of YOLO is to improve on region-proposal detection frameworks: the two stages of candidate-region generation and object recognition are merged into a single network, with a predefined grid of candidate regions. The whole image is divided into parts, each part is responsible for detecting targets centered on it, and the candidate boxes, localization confidences and class probability vectors of all targets it contains are predicted in one pass. Apart from removing the separate candidate-region stage, the structure of YOLO consists of convolutional and pooling layers followed by two fully connected layers; its main difference is that the final output layer uses a linear activation function to predict candidate-box positions and object probabilities. The YOLO target detection steps are as follows:
step 3.1, read the P-th frame image, call the Resize function (a function for adjusting picture size) to adjust the image size, and divide the image into S×S grid cells;
step 3.2, performing feature extraction on the image by using a convolutional neural network;
step 3.3, predicting the position and the type of the target: if the center of a target object falls in a grid cell, that cell is responsible for predicting the target object; each cell predicts B candidate boxes, the confidence of each box, and C class probabilities, giving an output tensor of size S×S×(B×5+C), where S×S is the number of grid cells, B is the number of boxes each cell is responsible for, and C is the number of classes; each cell corresponds to B bounding boxes whose width and height range over the full image, representing the positions of the bounding boxes searched with that cell as center; each bounding box corresponds to a score expressing whether an object is present at that position and how accurate the localization is: confidence = Pr(Object) × IOU(pred, truth); each cell also corresponds to C probability values, and the class with the maximum probability Pr(Class_i) is selected, the cell being considered to contain that object or a part of it; the (B×5+C)-dimensional vector of each cell contains the following information:
1. the C class probability values Pr(Class_i | Object), the conditional probability of each class given that an object appears in the cell;
2. the position information of each candidate box, comprising the center point x coordinate, y coordinate, candidate box width w and candidate box height h (Center_x, Center_y, width, height); the B candidate boxes together require 4×B values to indicate their positions;
3. the confidence Pr(Object) × IOU(pred, truth), which reflects how close the predicted candidate box is to the real target box;
4. traverse the scores, exclude objects with lower scores and higher overlap, and output the predicted objects (a decoding sketch is given below);
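As a concrete illustration of the output layout just described, the following sketch decodes an S×S×(B×5+C) tensor into scored boxes. This is a minimal reading of the scheme above, not the patented implementation; the values S=7, B=2, C=20 and the boxes-then-classes memory layout are the classic YOLO settings, used here only as an assumption.

```python
import numpy as np

S, B, C = 7, 2, 20  # grid size, boxes per cell, classes (illustrative values)

def decode_yolo(output, conf_thresh=0.2):
    """Decode an S x S x (B*5 + C) YOLO output tensor into scored boxes.

    Each cell stores B boxes of (cx, cy, w, h, confidence) followed by C
    conditional class probabilities; cx, cy are offsets within the cell,
    w, h are relative to the whole image.
    """
    boxes = []
    for row in range(S):
        for col in range(S):
            cell = output[row, col]
            class_probs = cell[B * 5:]
            for b in range(B):
                cx, cy, w, h, conf = cell[b * 5:b * 5 + 5]
                # class-specific score = Pr(Class_i | Object) * Pr(Object) * IOU
                scores = class_probs * conf
                cls = int(np.argmax(scores))
                if scores[cls] >= conf_thresh:
                    x = (col + cx) / S   # cell-relative center -> image-relative
                    y = (row + cy) / S
                    boxes.append((x, y, w, h, float(scores[cls]), cls))
    return boxes

# Example on a random tensor standing in for a real network output:
pred = np.random.rand(S, S, B * 5 + C)
print(len(decode_yolo(pred)))
```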
and step 4, recording the position where the tracked target object leaves the identification area, and using that position for behavior analysis and statistics on the tracked target object.
The invention relies on the existing video monitoring equipment in inland navigation waters, comprising shore-based video monitoring equipment mainly at ports and wharves and video monitoring equipment in ship cabins, and realizes automatic video monitoring and statistical analysis for maritime business by means of technologies such as big data, cloud computing, artificial intelligence and machine learning. Specific application fields include ship identification, ship tracking, ship trajectory monitoring, statistics of ships entering and leaving port, and analysis of crew behavior.
On the one hand, the invention solves the problems that existing video monitoring equipment only supports remote viewing and cannot complete the related maritime supervision business automatically, and that traditional manual means are time-consuming and labor-intensive and cannot handle sudden emergencies in time; on the other hand, it solves the problem of software being bound to hardware: with software and hardware separated, the existing video monitoring equipment of inland waterways can be fully utilized, greatly saving equipment replacement costs.
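For readability, a minimal end-to-end sketch of the monitoring loop described above is given below in Python with OpenCV. The detect/mark/track/classify callables are hypothetical stand-ins for steps 1 to 4, passed in by the caller; this is a structural sketch, not the patented implementation.

```python
import cv2  # pip install opencv-python

def monitor(video_source, recognition_roi, detect, mark, track, classify):
    """Skeleton of steps 1-4: detect inside the identification area, mark
    new objects, track them through the buffer area, and classify behavior
    when they leave."""
    cap = cv2.VideoCapture(video_source)
    trackers, stats = {}, []
    x, y, w, h = recognition_roi            # identification area of each frame
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        roi = frame[y:y + h, x:x + w]       # step 1: intercept identification area
        for box in detect(roi):             # step 1: recognition algorithm (YOLO)
            mark(box, trackers)             # step 2: non-repeated marking
        for obj_id, state in list(trackers.items()):
            left = track(frame, state)      # step 3: KCF-style tracking
            if left:                        # object has left the identification area
                stats.append(classify(state))  # step 4: behavior statistics
                del trackers[obj_id]
    cap.release()
    return stats
```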
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a flow chart of the YOLO algorithm described in this invention.
FIG. 3 is a flow chart of the KCF filtering algorithm of the present invention.
Fig. 4 is a schematic diagram of setting an identification area on each frame according to the present invention.
Fig. 5 is a schematic diagram of the present invention setting a buffer area in the identification area.
Fig. 6 is a schematic diagram of determining new and old objects in step 3.5 according to the embodiment of the present invention.
FIG. 7 is a diagram illustrating the object crossing the buffer at step 3.6 according to the embodiment of the present invention.
FIG. 8 is a diagram of the tracked object's position triggering the buffer area in step 3.7 according to the embodiment of the present invention.
FIG. 9 is a diagram illustrating the determination of the tracked object's behavior in step 4 according to the embodiment of the present invention.
Detailed Description
The following describes embodiments of the present invention in detail with reference to the accompanying drawings. The embodiments are implemented on the premise of the technical solution of the present invention and give detailed implementation manners and specific operation procedures, but the scope of the present invention is not limited to the following embodiments.
The invention relates to an artificial-intelligence-based video monitoring and analyzing method for maritime business, which realizes automatic video monitoring and statistical analysis for maritime business by relying on technologies such as big data, cloud computing, artificial intelligence and machine learning. As shown in fig. 1, the steps are as follows:
step 3.1, as shown in fig. 2, the steps of YOLO target detection are as follows:
step 3.1.1, read the P-th frame image, call the Resize function to adjust the image size, and divide the image into S×S grid cells;
step 3.1.2, performing feature extraction on the image by using a convolutional neural network;
step 3.1.3, predicting the position and the type of the target: if the center of a target object falls in a grid cell, that cell is responsible for predicting the target object; each cell predicts B candidate boxes, the confidence of each box, and C class probabilities, giving an output tensor of size S×S×(B×5+C), where S×S is the number of grid cells, B is the number of boxes each cell is responsible for, and C is the number of classes; each cell corresponds to B bounding boxes whose width-height range is the full image, representing the positions of the bounding boxes searched with that cell as center; each bounding box corresponds to a score expressing whether an object is present at that position and how accurate the localization is; each cell also corresponds to C probability values, and the class with the maximum probability Pr(Class_i) is selected, the cell being considered to contain that object or a part of it; the (B×5+C)-dimensional vector of each cell contains the following information:
1. the C class probability values Pr(Class_i | Object);
2. the position information of each candidate box, comprising the center point x coordinate, y coordinate, candidate box width w and candidate box height h (Center_x, Center_y, width, height); the B candidate boxes require 4×B values to indicate their positions;
3. the confidence Pr(Object) × IOU(pred, truth), reflecting how close the predicted candidate box is to the real target box;
step 3.1.4, traversing the scores, excluding objects with lower scores and higher overlapping degrees, and outputting predicted objects;
step 3.2, loss function of the YOLO algorithm:
the YOLO algorithm treats target detection as a regression problem and adopts a mean square error loss function with different weights for its different parts; it first distinguishes the localization error from the classification error, giving the localization error, i.e. the bounding-box coordinate prediction error, a larger weight $\lambda_{coord}$; it then distinguishes the confidence of bounding boxes that do not contain a target from those that do, giving the former a smaller weight $\lambda_{noobj}$, with all other weights set to 1; because a plain mean square error treats bounding boxes of different sizes equally, the network predicts the square roots of the bounding-box width and height rather than the width and height themselves, i.e. the predicted values become $\sqrt{w}$ and $\sqrt{h}$; the classification error is counted only for grid cells that contain an object; writing ground-truth quantities plainly and predicted quantities with a hat, the error formula is as follows:

$$\begin{aligned} L ={} & \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\right] \\ & + \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[\left(\sqrt{w_i}-\sqrt{\hat{w}_i}\right)^2+\left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^2\right] \\ & + \sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left(C_i-\hat{C}_i\right)^2 + \lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}\left(C_i-\hat{C}_i\right)^2 \\ & + \sum_{i=0}^{S^2}\mathbb{1}_{i}^{obj}\sum_{c\in classes}\left(p_i(c)-\hat{p}_i(c)\right)^2 \end{aligned}$$

where:
- $x_i, y_i$ are the candidate-box center-point coordinates and $\hat{x}_i, \hat{y}_i$ their predicted values;
- $\sqrt{w_i}, \sqrt{h_i}$ are the square roots of the candidate-box width and height and $\sqrt{\hat{w}_i}, \sqrt{\hat{h}_i}$ their predicted values;
- $\mathbb{1}_{ij}^{obj}$ indicates that the target object exists in the $j$-th prediction box of grid cell $i$; when the object exists, the squared differences between true and predicted coordinates are accumulated over each prediction box and each grid cell in turn and multiplied by the weight $\lambda_{coord}$, giving the coordinate prediction error, and the width $w$ and height $h$ errors are obtained in the same way;
- when the object exists, the squared differences between true and predicted confidences $C_i, \hat{C}_i$ are accumulated over each prediction box and each grid cell in turn, giving the confidence error of candidate boxes containing a target object; the term weighted by $\lambda_{noobj}$ is the confidence error of candidate boxes without a target object;
- $p_i(c)$ is the probability of class $c$ in grid cell $i$ and $\hat{p}_i(c)$ its predicted value; accumulating their squared differences gives the class prediction error (a numerical sketch of this loss is given below);
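The following NumPy sketch transcribes this loss directly, assuming the predictions have already been matched to ground truth; the tensor layout is an assumption made for illustration, and the default weights 5 and 0.5 are the values used in the original YOLO paper.

```python
import numpy as np

def yolo_loss(pred_boxes, true_boxes, pred_conf, true_conf, obj_mask,
              pred_cls, true_cls, cell_obj_mask,
              lambda_coord=5.0, lambda_noobj=0.5):
    """Mean-square-error YOLO loss over matched tensors.

    pred_boxes, true_boxes: (S*S, B, 4) arrays of (cx, cy, w, h)
    pred_conf,  true_conf : (S*S, B) box confidences
    obj_mask              : (S*S, B), 1 where box j of cell i holds an object
    pred_cls,  true_cls   : (S*S, C) class probability vectors
    cell_obj_mask         : (S*S,), 1 where the cell contains an object
    """
    noobj_mask = 1.0 - obj_mask
    # center-point coordinate error, weighted by lambda_coord
    xy_err = np.sum(obj_mask[..., None]
                    * (pred_boxes[..., :2] - true_boxes[..., :2]) ** 2)
    # width/height error on square roots, so large and small boxes
    # are not penalized equally
    wh_err = np.sum(obj_mask[..., None]
                    * (np.sqrt(pred_boxes[..., 2:])
                       - np.sqrt(true_boxes[..., 2:])) ** 2)
    # confidence error for boxes with and without a target object
    conf_obj = np.sum(obj_mask * (pred_conf - true_conf) ** 2)
    conf_noobj = np.sum(noobj_mask * (pred_conf - true_conf) ** 2)
    # class prediction error, counted only for cells containing an object
    cls_err = np.sum(cell_obj_mask[:, None] * (pred_cls - true_cls) ** 2)
    return (lambda_coord * (xy_err + wh_err)
            + conf_obj + lambda_noobj * conf_noobj + cls_err)
```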
step 3.3, training of the YOLO network:
before training, pre-training is first carried out on ImageNet (an image training set); the pre-trained classification model adopts the first 53 convolutional layers, to which 5 pooling layers and fully connected layers are added; when testing the network, the class-specific confidence score of each candidate box, Pr(Class_i | Object) × Pr(Object) × IOU = Pr(Class_i) × IOU, is calculated; a threshold is set, candidate boxes with low scores are filtered out, and NMS (non-maximum suppression) processing is applied to the retained candidate boxes to obtain the final detection result (a greedy NMS sketch follows);
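A minimal greedy NMS pass of the kind referred to here might look as follows; the score and overlap thresholds are illustrative, not values fixed by the patent.

```python
def box_iou(a, b):
    """IOU of two (x1, y1, x2, y2, score) boxes; the score field is ignored."""
    iw = min(a[2], b[2]) - max(a[0], b[0])
    ih = min(a[3], b[3]) - max(a[1], b[1])
    if iw <= 0 or ih <= 0:
        return 0.0
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, score_thresh=0.2, iou_thresh=0.5):
    """Drop low-scoring boxes, then greedily keep the best box of each
    overlapping group (non-maximum suppression)."""
    boxes = [b for b in boxes if b[4] >= score_thresh]
    boxes.sort(key=lambda b: b[4], reverse=True)
    kept = []
    for box in boxes:
        if all(box_iou(box, k) < iou_thresh for k in kept):
            kept.append(box)
    return kept
```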
step 3.4, the marking algorithm processes the identified-object list above and determines whether each object is new; the determination uses an optimized IOU (intersection over union, IOU = area(A∩B)/area(A∪B)) to calculate the overlap ratio of the new object's area with buffer area 2 and with the areas of objects already in the list: if the overlap ratio is greater than or equal to the threshold, the object is not new; if it is less than the threshold, the object is new; the threshold is obtained from the intersection ratio of buffer area 2 with the new and old objects, and a common threshold value is greater than 0.5; the specific algorithm steps are as follows:
suppose the coordinates of the top-left and bottom-right vertices of identification box A are (Ax1, Ay1) and (Ax2, Ay2), and those of identification box B are (Bx1, By1) and (Bx2, By2); for ease of understanding, the identification boxes are described in algorithmic language: the coordinates are converted into a matrix;
calculate the intersection extents (min(Ax2, Bx2) − max(Ax1, Bx1)) and (min(Ay2, By2) − max(Ay1, By1)); if either value in the matrix is less than 0, the identification boxes do not intersect; if both values are greater than 0, multiply them to obtain the intersection area;
the threshold, measured against the size of the identified objects, is set to 0.5 (see the IOU sketch below);
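In code, the intersection test and the new-object decision of these steps can be sketched as follows; the list-of-boxes data structure is an assumption, while the 0.5 threshold is the one set above.

```python
def iou(a, b):
    """Intersection over union of boxes given as (x1, y1, x2, y2).

    A negative intersection extent means the identification boxes do not
    intersect, exactly as in the matrix test above."""
    iw = min(a[2], b[2]) - max(a[0], b[0])   # intersection width
    ih = min(a[3], b[3]) - max(a[1], b[1])   # intersection height
    if iw <= 0 or ih <= 0:
        return 0.0
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def is_new_object(box, existing_boxes, threshold=0.5):
    """An object is 'not new' if it overlaps some existing object's box
    by at least the threshold."""
    return all(iou(box, old) < threshold for old in existing_boxes)

# Example: a box shifted by half its width overlaps by 1/3 < 0.5, so it is new.
print(is_new_object((0, 0, 10, 10), [(5, 0, 15, 10)]))  # True
```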
step 3.5, as shown in fig. 6, judging the new and old objects: if the object is a new object, marking, establishing a tracker, recording information such as an ID (identity) and an initial position; if the object is not a new object, updating the initial position in the existing object tracker;
step 3.6, tracking algorithm: as shown in fig. 7, when the object passes through buffer area 2, a tracking algorithm is used to record the position of object 3 at all times; a KCF (Kernel Correlation Filter) filtering algorithm is adopted, mainly to solve the problem of overlap in multi-object tracking;
KCF is a discriminative tracking method; methods of this kind generally train a target detector during tracking, use the detector to check whether the predicted position in the next frame contains the target, and then use the new detection result to update the training set and hence the detector; when training the target detector, the target region is usually chosen as the positive sample and the regions around the target as negative samples, with regions closer to the target more likely to be positive samples; as shown in fig. 3, the steps are as follows:
step 3.6.1, in frame t, sample near the target's current position p_t and train a regressor that can compute the response of a small sampling window;
step 3.6.2, in frame t+1, sample near the previous frame's position p_t and use the regressor to judge the response of each sample, taking the position with the strongest response as the target's new position;
The matrix algorithms involved include Fourier-space diagonalization of circulant matrices, ridge regression simplified by Fourier diagonalization, and kernel-space ridge regression; the circulant matrix Fourier diagonalization equation is:

$$X = F\,\operatorname{diag}(\hat{x})\,F^{H}$$

where $F$ is the discrete Fourier transform matrix, $\hat{x}$ is the discrete Fourier transform of the generating vector $x$, and the superscript $H$ represents the conjugate transpose; in other words, the circulant matrix $X$ is similar to a diagonal matrix;
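In practice the KCF tracker need not be re-implemented: OpenCV ships one (in the opencv-contrib-python package) that performs the sampling and response steps 3.6.1 to 3.6.2 above. A minimal usage sketch follows; the file name and initial box are hypothetical, and on newer OpenCV versions the constructor may live under cv2.legacy.

```python
import cv2  # pip install opencv-contrib-python

cap = cv2.VideoCapture("channel.mp4")   # hypothetical video source
ok, frame = cap.read()
bbox = (200, 150, 80, 60)               # (x, y, w, h) from the marking step

tracker = cv2.TrackerKCF_create()       # cv2.legacy.TrackerKCF_create on some versions
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)  # max-response position in the new frame
    if found:
        x, y, w, h = map(int, bbox)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```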
Step 3.7, as shown in fig. 8, when the position of the tracked object 4 triggers buffer area 2, the object 4 is monitored at all times, and its position is compared to determine whether it has left the identification area;
step 3.8, after the tracked object 4 leaves the buffer area 2, the tracker of the corresponding ID is destroyed; the behavior of the tracked object 4 is then judged from the azimuth relation between its starting area and its leaving area, classified and counted, output to the screen, and stored in the database;
suppose the coordinates of the top-left and bottom-right vertices of identification box A are (Ax1, Ay1) and (Ax2, Ay2), and those of identification box B are (Bx1, By1) and (Bx2, By2); for ease of understanding, the identification boxes are described in algorithmic logic:
from the coordinates of each identification box, the centroid position is calculated as Cx = (x1 + x2)/2, Cy = (y1 + y2)/2; the position of the dotted line 5 is the centerline of the identification area, whose position can likewise be calculated; the influence of the y coordinate is ignored, i.e. only the x position of the centroid relative to the centerline is judged;
step 4.1, if the starting centroid and the leaving centroid are both on the left side of the dotted line 5, the target object 6 has turned back while departing;
step 4.2, if the starting centroid is on the left side of the dotted line 5 and the leaving centroid is on the right side, it is a departing target object 7;
step 4.3, if the starting centroid and the leaving centroid are both on the right side of the dotted line 5, the target object 8 has turned back while entering;
step 4.4, if the starting centroid is on the right side of the dotted line 5 and the leaving centroid is on the left side, it is an entering target object 9 (a classification sketch is given below).
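These four cases reduce to comparing the x coordinates of the starting and leaving centroids against the centerline, as the sketch below shows; the box coordinates and centerline position in the example are illustrative.

```python
def centroid(box):
    """Centroid of an identification box given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def classify_behavior(start_box, leave_box, center_x):
    """Classify steps 4.1-4.4 from the start/leave centroids; only the x
    coordinate is compared against the centerline (dotted line 5)."""
    start_left = centroid(start_box)[0] < center_x
    leave_left = centroid(leave_box)[0] < center_x
    if start_left and leave_left:
        return "turn-back while departing"   # step 4.1, target object 6
    if start_left and not leave_left:
        return "departure"                   # step 4.2, target object 7
    if not start_left and not leave_left:
        return "turn-back while entering"    # step 4.3, target object 8
    return "entry"                           # step 4.4, target object 9

print(classify_behavior((0, 0, 10, 10), (90, 0, 100, 10), center_x=50.0))  # departure
```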
Claims (2)
1. A video monitoring and analyzing method for maritime business based on artificial intelligence, characterized in that the method comprises the following steps:
step 1, identifying the target objects in the identification area of each frame of a video data source with a recognition algorithm, which is applicable to irregular identification areas and completes the full identification of the target objects; namely: reading each frame of the video data source in sequence and setting an identification area on each frame;
step 2, distinguishing and marking the target objects of the preceding and following frames of the video data source in a buffer area with a marking algorithm, completing the non-repeated marking of the same target object, distinguishing new and old target objects and ensuring the uniqueness of each identified object; the buffer area is set according to a set reduction ratio within the identification area of each frame, arranged inside the identification area and coinciding with its central point;
step 3, intercepting the identification area from each frame of the video, tracking the identified objects inside the buffer area with a tracking algorithm, following the target object's track out of the buffer area, and ensuring that target objects are not confused by overlapping, occlusion and re-separation during tracking, thereby obtaining the identification results;
and step 4, recording the position where the tracked target object leaves the identification area, and performing behavior analysis and statistics on the tracked target object using that position.
2. The artificial-intelligence-based maritime-business-oriented video monitoring and analyzing method according to claim 1, wherein tracking the identified objects with a tracking algorithm in step 3 comprises the following steps:
step 3.1, reading the P-th frame image, calling the Resize function to adjust the image size, and dividing the image into S×S grid cells;
step 3.2, performing feature extraction on the image by using a convolutional neural network;
step 3.3, predicting the position and the type of the target: if the center of a target object falls in a certain grid cell, that cell is responsible for predicting the target object; each cell predicts B candidate boxes, the confidence of each box, and C class probabilities, giving an output tensor of size S×S×(B×5+C), where S×S is the number of grid cells, B is the number of boxes each cell is responsible for, and C is the number of classes; each cell corresponds to B bounding boxes whose width-height range is the full image, representing the positions of the bounding boxes searched with the cell as center; each bounding box corresponds to a score expressing whether an object is present at that position and how accurate the localization is;
each cell corresponds to C probability values, and the class corresponding to the maximum probability Pr(Class_i) is found, where Pr(Class_i | Object) is the probability of the i-th class under the condition that an object appears, the cell being considered to contain that object or a part of the object; the (B×5+C)-dimensional vector of each cell contains the following information:
1. the C class probability values Pr(Class_i | Object);
2. the position information of each candidate box, including the center point x coordinate, y coordinate, candidate box width w and candidate box height h; the B candidate boxes together require 4×B values to indicate their positions;
3. the confidence Pr(Object) × IOU(pred, truth), where Pr(Object) is the probability that an object exists within the candidate box; distinguished from the class probability, the confidence reflects the degree of proximity of the predicted candidate box to the actual target box;
4. and traversing all the scores, excluding the objects with lower scores and higher overlapping degrees, and outputting predicted objects.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011102923.XA CN112200101B (en) | 2020-10-15 | 2020-10-15 | Video monitoring and analyzing method for maritime business based on artificial intelligence |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112200101A CN112200101A (en) | 2021-01-08 |
CN112200101B true CN112200101B (en) | 2022-10-14 |
Family
ID=74009065
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011102923.XA Active CN112200101B (en) | 2020-10-15 | 2020-10-15 | Video monitoring and analyzing method for maritime business based on artificial intelligence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112200101B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7462113B2 (en) * | 2021-05-17 | 2024-04-04 | Eizo株式会社 | Information processing device, information processing method, and computer program |
CN113516093B (en) * | 2021-07-27 | 2024-09-10 | 浙江大华技术股份有限公司 | Labeling method and device of identification information, storage medium and electronic device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110991272B (en) * | 2019-11-18 | 2023-07-18 | 东北大学 | Multi-target vehicle track recognition method based on video tracking |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101515378A (en) * | 2009-03-17 | 2009-08-26 | 上海普适导航技术有限公司 | Informationization management method for vessel entering and leaving port |
CN104394507A (en) * | 2014-11-13 | 2015-03-04 | 厦门雅迅网络股份有限公司 | Method and system for solving alarm regional report omission through buffer zone |
CN104766064A (en) * | 2015-04-13 | 2015-07-08 | 郑州天迈科技股份有限公司 | Method for recognizing and positioning access station and access field through vehicle-mounted video DVR images |
CN105761490A (en) * | 2016-04-22 | 2016-07-13 | 北京国交信通科技发展有限公司 | Method of carrying out early warning on hazardous chemical substance transport vehicle parking in service area |
WO2018008893A1 (en) * | 2016-07-06 | 2018-01-11 | 주식회사 파킹패스 | Off-street parking management system using tracking of moving vehicle, and method therefor |
CN107067447A (en) * | 2017-01-26 | 2017-08-18 | 安徽天盛智能科技有限公司 | A kind of integration video frequency monitoring method in large space region |
CN109684996A (en) * | 2018-12-22 | 2019-04-26 | 北京工业大学 | Real-time vehicle based on video passes in and out recognition methods |
CN109785664A (en) * | 2019-03-05 | 2019-05-21 | 北京悦畅科技有限公司 | A kind of statistical method and device of the remaining parking stall quantity in parking lot |
Non-Patent Citations (3)
Title |
---|
Vessel Detection and Tracking Method Based on Video Surveillance; Natalia Wawrzyniak et al.; MDPI; 2019-11-28 *
Multi-ship target tracking and traffic statistics based on deep learning (in Chinese); Xian Yunting et al.; Microcomputer Applications; 2020-03-20 (No. 03) *
Design and implementation of an intelligent train arrival-departure operation system for conventional-speed stations (in Chinese); Wu Xinghua; Railway Computer Application; 2018-07-25 *
Also Published As
Publication number | Publication date |
---|---|
CN112200101A (en) | 2021-01-08 |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| CP03 | Change of name, title or address | Address after: 450046 No.9 Zeyu Street, Zhengdong New District, Zhengzhou City, Henan Province; Patentee after: Henan Zhonggong Design and Research Institute Group Co.,Ltd.; Country or region after: China; Address before: 450046 No.9 Zeyu Street, Zhengdong New District, Zhengzhou City, Henan Province; Patentee before: HENAN PROVINCIAL COMMUNICATIONS PLANNING & DESIGN INSTITUTE Co.,Ltd.; Country or region before: China |