CN112200101A - Video monitoring and analyzing method for maritime business based on artificial intelligence - Google Patents
Video monitoring and analyzing method for maritime business based on artificial intelligence
- Publication number
- CN112200101A (application number CN202011102923.XA)
- Authority
- CN
- China
- Prior art keywords
- frame
- identification
- tracking
- target
- target object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
Abstract
The invention discloses an artificial-intelligence-based video monitoring and analysis method for maritime business, which comprises the following steps: 1, identifying the target objects in the identification area of each frame of a video data source with a recognition algorithm; 2, distinguishing and marking the target objects of consecutive frames of the video data source in a buffer area with a marking algorithm, so that the same target object is marked only once and the uniqueness of each identified object is guaranteed; 3, intercepting the identification area from each frame of the video, tracking each identified object inside the buffer area with a tracking algorithm until its track leaves the buffer area, and ensuring that targets are not confused by overlapping, occlusion and re-separation during tracking, so as to obtain the tracking result of each identified object; and 4, recording the position where a tracked target object leaves the identification area and using it for behavior analysis and statistics of the tracked target object. The invention makes full use of the existing video monitoring equipment of inland waterways, thereby greatly saving equipment replacement costs.
Description
Technical Field
The invention relates to the field of inland waterway monitoring management, in particular to a video monitoring and analyzing method for maritime business based on artificial intelligence.
Background
In recent years, with the growing number of touring ships, freight ships, ferries and river-resource development vessels on inland waterways, hundreds of water traffic safety accidents occur every year, causing hundreds of casualties and immeasurable property losses and posing great challenges to the supervision work of maritime departments.
To strengthen navigation control of inland navigable sections, maritime departments currently rely mainly on the ship Automatic Identification System (AIS) and the inland-river very-high-frequency (VHF) shore-ship data communication system at key docks, supplemented by channel video means such as VTS radar and closed-circuit television (CCTV) monitoring, to supervise the safety of various vessels. The ship Automatic Identification System (AIS), working with the Global Positioning System (GPS), broadcasts dynamic ship information such as position, speed, rate of turn and course, combined with static ship information such as ship name, call sign, draft and dangerous cargo, to nearby ships and shore stations over a very-high-frequency (VHF) channel, so that they can promptly grasp the dynamic and static information of all ships on the surrounding water and immediately coordinate to take the necessary avoidance actions, which greatly helps ship safety.
The inland-river VHF shore-ship data communication system works in the very-high-frequency (VHF) band and is one of the principal communication means of inland-river and offshore radio mobile services; it supports distress, urgency, safety and routine service communication for ships, and is an important communication tool for search-and-rescue operations, coordination and avoidance between ships, and vessel traffic service systems. However, AIS and VTS radar have exposed many drawbacks in the practical application of electronic cruising and cannot meet the business requirements of intelligent maritime supervision. For example, AIS has signal blind areas, many ships do not switch on or are not equipped with AIS for various reasons, and the information fusion between AIS and VTS radar is imperfect. The main shortcoming is that AIS lacks the ability to visually observe and control field conditions; meanwhile, as the business requirements of maritime departments keep rising, the early standard-definition video monitoring systems show defects in application: when a ship is speeding, overloaded, turning around at will or overtaking, the original standard-definition cameras cannot provide effective image details, in particular the ship name cannot be read, which causes great inconvenience to maritime supervision and law-enforcement personnel.
Disclosure of Invention
The invention aims to provide an artificial-intelligence-based video monitoring and analysis method for maritime business that realizes ship tracking and monitoring mainly by video means and effectively makes up for the shortcomings of existing ship positioning equipment.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention relates to an artificial-intelligence-based, maritime-business-oriented video monitoring and analysis method, which comprises the following steps:
step 1, identifying the target objects in the identification area of each frame of the video data source with a recognition algorithm; namely: reading each frame of the video data source in sequence and setting an identification area on each frame;
step 2, distinguishing and marking the target objects of consecutive frames in a buffer area with a marking algorithm, completing non-repeated marking of the same target object; namely: setting, inside the identification area of each frame, a buffer area scaled down by a set reduction ratio and sharing the center point of the identification area;
step 3, intercepting the identification area from each frame of the video and tracking the identified objects with a tracking algorithm; the recognition relies on YOLO. YOLO (You Only Look Once) is an existing object recognition and positioning algorithm based on a deep neural network, whose biggest characteristic is its high running speed, which allows use in real-time systems; YOLO's innovation is an improvement of the region-proposal detection framework: the two stages of candidate-region generation and object recognition are merged into one, with predefined candidate boxes; the whole image is divided into grid cells, each cell being responsible for detecting targets centered on it, and the candidate boxes, localization confidences and class probability vectors of all targets it contains are predicted in a single pass; with the region-proposal stage removed, the structure of YOLO comprises convolutional and pooling layers plus two final fully-connected layers; the greatest difference is that the final output layer uses a linear activation function to predict both the candidate-box positions and the object probabilities; the YOLO target detection steps are as follows:
step 3.1, reading the P-th frame image, calling the Resize function (a function for adjusting picture size) to adjust the image size, and dividing the image into S×S grid cells;
step 3.2, performing feature extraction on the image by using a convolutional neural network;
step 3.3, predicting the position and the type of the target: if the center of a target object falls in a grid cell, that cell is responsible for predicting the target object; each grid cell predicts B candidate boxes, the confidence of each box and C class probabilities, giving an output tensor of size S×S×(B×5+C), where S×S is the number of grid cells, B is the number of boxes each cell is responsible for, and C is the number of categories; each cell thus corresponds to B bounding boxes whose width and height may range over the full image and which represent the positions of boxes for finding objects centered on that cell; each bounding box corresponds to a score that expresses both whether an object exists at that position and the localization accuracy: Confidence = Pr(Object) × IOU(pred, truth); each cell also corresponds to C probability values, and the category with the maximum probability is taken, the object (or a portion of it) being considered to be contained in that cell; the (B×5+C)-dimensional vector of each cell contains the following information:
1. the C class probability values;
2. the position information of each candidate box, consisting of the center-point x coordinate, y coordinate, candidate-box width w and candidate-box height h (Center_x, Center_y, width, height), so each of the B candidate boxes requires 4 values to express its position;
3. the confidence of each candidate box, which expresses how closely the predicted candidate box matches the real target box;
4. traversing the scores, excluding objects with lower scores or excessive overlap, and outputting the predicted objects;
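For illustration, the decoding of the S×S×(B×5+C) output tensor described above can be sketched in Python as follows; the grid size, box count, class count, tensor layout and all function names are assumptions for exposition, not the patented implementation:

```python
import numpy as np

S, B, C = 7, 2, 3          # assumed grid size, boxes per cell, class count

def decode_yolo_output(pred, conf_threshold=0.3):
    """pred: array of shape (S, S, B*5 + C) -> list of (score, class, box)."""
    detections = []
    for row in range(S):
        for col in range(S):
            cell = pred[row, col]
            class_probs = cell[B * 5:]            # C conditional class probabilities
            best_class = int(np.argmax(class_probs))
            for b in range(B):
                cx, cy, w, h, conf = cell[b * 5: b * 5 + 5]
                score = conf * class_probs[best_class]   # confidence x class probability
                if score >= conf_threshold:
                    # box center is an offset within cell (row, col); w, h span the image
                    detections.append((float(score), best_class,
                                       ((col + cx) / S, (row + cy) / S, w, h)))
    return detections

# usage: detections = decode_yolo_output(np.random.rand(S, S, B * 5 + C))
```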
and step 4, recording the position where the tracked target object leaves the identification area, and using this position for behavior analysis and statistics of the tracked target object.
The video monitoring equipment in inland navigable waters comprises shore-based video monitoring equipment, mainly at ports and docks, and on-board video monitoring equipment in cabins; relying on technologies such as big data, cloud computing, artificial intelligence and machine learning, it realizes automatic video monitoring and statistical analysis for maritime business. Specific application fields include ship identification, ship tracking, ship running-track monitoring, statistics of ships entering and leaving port, and analysis of crew behavior.
On the one hand, the invention solves the problems that existing video monitoring equipment only supports remote viewing and cannot complete related maritime supervision services automatically, while traditional manual means are time-consuming and labor-intensive and cannot handle sudden emergencies in time; on the other hand, it solves the problem of software-hardware binding: with software and hardware separated, the existing video monitoring equipment of inland waterways can be fully utilized, greatly saving equipment replacement costs.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a flow chart of the YOLO algorithm described in the invention.
FIG. 3 is a flow chart of the KCF filtering algorithm of the present invention.
Fig. 4 is a schematic diagram of setting an identification area on each frame according to the present invention.
Fig. 5 is a schematic diagram of the present invention for setting a buffer in the identification area.
Fig. 6 is a schematic diagram of determining new and old objects in step 3.5 according to the embodiment of the present invention.
FIG. 7 is a diagram illustrating the object crossing the buffer at step 3.6 according to the embodiment of the present invention.
FIG. 8 is a diagram of the tracked object's position triggering the buffer area in step 3.7 according to the embodiment of the present invention.
FIG. 9 is a diagram illustrating the behavior of determining the tracking object in step 4 according to the embodiment of the present invention.
Detailed Description
The following describes embodiments of the present invention in detail with reference to the drawings, which are implemented on the premise of the technical solution of the present invention, and detailed embodiments and specific operation procedures are provided, but the scope of the present invention is not limited to the following embodiments.
The invention relates to an artificial-intelligence-based video monitoring and analysis method for maritime business, which realizes automatic video monitoring and statistical analysis for maritime business by relying on technologies such as big data, cloud computing, artificial intelligence and machine learning. As shown in fig. 1, the steps are as follows:
step 3.1, as shown in fig. 2, the YOLO target detection steps:
step 3.1.1, reading the P-th frame image, calling the Resize function to adjust the image size, and dividing the image into S×S grid cells;
step 3.1.2, performing feature extraction on the image by using a convolutional neural network;
step 3.1.3, predicting the position and the type of the target: if the center of a target object falls in a grid cell, that cell is responsible for predicting the target object; each grid cell predicts B candidate boxes, the confidence of each box and C class probabilities, giving an output tensor of size S×S×(B×5+C), where S×S is the number of grid cells, B is the number of boxes each cell is responsible for, and C is the number of categories; each cell corresponds to B bounding boxes whose width and height may range over the full image and which represent the positions of boxes for finding objects centered on that cell; each bounding box corresponds to a score expressing both whether an object exists at that position and the localization accuracy: Confidence = Pr(Object) × IOU(pred, truth); each cell also corresponds to C probability values, and the category with the maximum probability is found, the object (or a portion of it) being considered to be contained in that cell; the (B×5+C)-dimensional vector of each cell contains the following information:
1. the C class probability values;
2. the position information of each candidate box: the center-point x coordinate, y coordinate, candidate-box width w and candidate-box height h (Center_x, Center_y, width, height), so each of the B candidate boxes requires 4 values to express its position;
3. the confidence of each candidate box, which expresses how closely the predicted candidate box matches the real target box;
step 3.1.4, traversing the scores, excluding objects with lower scores or excessive overlap, and outputting the predicted objects;
step 3.2, loss function of the YOLO algorithm:
the YOLO algorithm treats target detection as a regression problem and uses a sum-of-squared-error loss, but with different weights for its different parts; first, the localization error, i.e. the bounding-box coordinate prediction error, is distinguished from the classification error and given a larger weight λ_coord; then the confidence of bounding boxes that do not contain a target is distinguished from that of boxes that do, the former being given a smaller weight λ_noobj, with all other weights set to 1; furthermore, because squared error treats large and small boxes equally, the network predicts the square roots of the bounding-box width and height rather than w and h directly, i.e. the predicted values become √w and √h; the classification error is counted only for grid cells that contain an object; the error formula is as follows:

L = λ_coord Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} [ (x_i − x̂_i)² + (y_i − ŷ_i)² ]
  + λ_coord Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} [ (√w_i − √ŵ_i)² + (√h_i − √ĥ_i)² ]
  + Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} (C_i − Ĉ_i)²
  + λ_noobj Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{noobj} (C_i − Ĉ_i)²
  + Σ_{i=0}^{S²} 1_i^{obj} Σ_{c∈classes} (p_i(c) − p̂_i(c))²

where x_i is the x coordinate of the center of a candidate box and x̂_i is its predicted value; y_i is the y coordinate of the center of a candidate box and ŷ_i is its predicted value; √w_i is the square root of the candidate-box width and √ŵ_i the square root of its predicted value; √h_i is the square root of the candidate-box height and √ĥ_i the square root of its predicted value; 1_{ij}^{obj} indicates that the j-th predicted box of grid cell i is responsible for an object: when an object exists, the squared differences between the coordinates and their predicted values are accumulated in sequence over every predicted box and every grid cell and multiplied by the weight to obtain the coordinate prediction error, i.e. the x, y error, and the width w and height h errors are obtained in the same way; when an object exists, the squared differences between the confidence C_i and its predicted value Ĉ_i are accumulated in sequence over every predicted box and every grid cell to obtain the confidence error of candidate boxes containing a target; the confidence error of candidate boxes without a target is accumulated likewise over the boxes indicated by 1_{ij}^{noobj} and weighted by λ_noobj; p_i(c) is the probability of the c-th class in cell i and p̂_i(c) is its predicted value, and the class prediction error is obtained by accumulating the squared differences between p_i(c) and p̂_i(c);
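A compact sketch of this composite loss is given below, assuming predictions and targets are already laid out one row per responsible box; the array layout, mask convention and the weight values λ_coord = 5 and λ_noobj = 0.5 (the patent only says "larger" and "smaller") are illustrative assumptions:

```python
import numpy as np

def yolo_loss(pred, target, obj_mask, lambda_coord=5.0, lambda_noobj=0.5):
    """pred/target: (N, 5 + C) rows of [x, y, w, h, conf, class probs...];
    obj_mask: (N,) booleans marking boxes responsible for an object."""
    obj, noobj = obj_mask, ~obj_mask
    # coordinate error: squared difference of box centers
    coord = np.sum((pred[obj, 0:2] - target[obj, 0:2]) ** 2)
    # size error on square roots, so large and small boxes are treated more evenly
    size = np.sum((np.sqrt(pred[obj, 2:4]) - np.sqrt(target[obj, 2:4])) ** 2)
    # confidence error for boxes with and without a target
    conf_obj = np.sum((pred[obj, 4] - target[obj, 4]) ** 2)
    conf_noobj = np.sum((pred[noobj, 4] - target[noobj, 4]) ** 2)
    # class prediction error, counted only for boxes that contain an object
    cls = np.sum((pred[obj, 5:] - target[obj, 5:]) ** 2)
    return (lambda_coord * (coord + size) + conf_obj
            + lambda_noobj * conf_noobj + cls)
```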
step 3.3, training of a YOLO network:
before training, pre-training is first carried out on ImageNet (an image training set); the pre-trained classification model adopts the first 53 convolutional layers, to which 5 pooling layers and fully-connected layers are added; when testing the network, the class-specific confidence score of each candidate box is computed as Pr(Class_i|Object) × Pr(Object) × IOU = Pr(Class_i) × IOU; a threshold is set, candidate boxes with low scores are filtered out, and non-maximum suppression (NMS) is applied to the remaining candidate boxes to obtain the final detection result;
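The score filtering and non-maximum suppression step might be sketched as follows; the threshold values and helper names are assumptions, not the patent's parameters:

```python
def iou(a, b):
    """a, b: boxes as (x1, y1, x2, y2); returns intersection over union."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, score_thr=0.25, iou_thr=0.5):
    """Keep high-scoring boxes, suppressing overlaps above iou_thr."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if scores[i] < score_thr:
            continue          # filter out candidate boxes with low scores
        if all(iou(boxes[i], boxes[j]) <= iou_thr for j in keep):
            keep.append(i)    # keep boxes that do not overlap a kept box too much
    return keep
```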
step 3.4, the marking algorithm: the identified object list above is processed to determine whether each object is new; the determination uses an optimized IOU (intersection over union, IOU = intersection area / union area) to calculate the overlap ratio between the new object's region, buffer area 2 and the regions in the existing object list; if the overlap ratio is greater than or equal to the threshold it is not a new object, and if it is less than the threshold it is a new object; the threshold is obtained from the intersection-over-union of buffer area 2 with the new and old objects, and a common threshold value is greater than 0.5; the specific algorithm steps are as follows:
suppose the coordinates of the upper-left and lower-right vertices of recognition box A are (x_A1, y_A1) and (x_A2, y_A2), and the coordinates of the upper-left and lower-right vertices of recognition box B are (x_B1, y_B1) and (x_B2, y_B2); for ease of understanding, the recognition boxes are described in algorithmic language: the coordinates are converted into a matrix;
calculate the intersection matrix [min(x_A2, x_B2) − max(x_A1, x_B1), min(y_A2, y_B2) − max(y_A1, y_B1)] (integer values); if any value in the matrix is less than 0, the recognition boxes do not intersect; if the values in the matrix are greater than 0, the matrix entries are multiplied to obtain the intersection area;
the threshold, measured against the size of the identified objects, is set to 0.5;
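Under these assumptions, the new-object decision of the marking algorithm reduces to an IOU test against the existing object list; the following sketch reuses the iou helper from the NMS sketch above, with the 0.5 threshold taken from the text and everything else illustrative:

```python
def is_new_object(candidate_box, existing_boxes, threshold=0.5):
    """candidate_box: (x1, y1, x2, y2); existing_boxes: boxes of tracked objects.
    True when the candidate overlaps no existing object enough to count
    as the same target, so a new ID and tracker should be created."""
    return all(iou(candidate_box, box) < threshold for box in existing_boxes)

# usage: if is_new_object(det, [t.box for t in trackers]): create a tracker / new ID
```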
step 3.5, as shown in fig. 6, judging new and old objects: if the object is new, mark it, create a tracker, and record information such as its ID (identity) and initial position; if it is not new, update the position in the existing object's tracker;
step 3.6, tracking algorithm: as shown in fig. 7, when an object passes through buffer area 2, a tracking algorithm is used to constantly record the position of object 3; the KCF (Kernelized Correlation Filter) algorithm mainly solves the problem of overlap in multi-object tracking;
KCF is a discriminative tracking method: a target detector is trained during tracking, used to detect whether the predicted position in the next frame contains the target, and the new detection result is then used to update the training set and hence the target detector; when training the target detector, the target region is generally taken as the positive sample and the regions around the target as negative samples, with regions closer to the target more likely to be positive; as shown in fig. 3, the steps are as follows:
step 3.6.1, in frame t, sample near the current position p_t and train a regressor that can compute the response of each small sampling window;
step 3.6.2, in frame t+1, sample near the previous frame's position p_t and judge the response of each sample with the trained regressor; the sample with the strongest response gives the new target position;
the matrix algorithm comprises Fourier-space diagonalization of circulant matrices, ridge regression simplified by Fourier diagonalization, and kernel-space ridge regression; the circulant-matrix Fourier diagonalization equation is X = F diag(x̂) F^H, where x̂ is the discrete Fourier transform of the generating vector x, F is the DFT matrix, and the superscript H denotes the conjugate transpose; in other words, the circulant matrix X is similar to a diagonal matrix in Fourier space;
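As a minimal sketch of how this diagonalization makes the ridge regression cheap, the following one-dimensional, linear-kernel version trains and applies the correlation filter entirely with FFTs; the variable names and regularization value are assumptions, and the patent's kernelized form generalizes this:

```python
import numpy as np

def train_filter(x, y, lam=1e-4):
    """Ridge regression over all cyclic shifts of x, solved per Fourier bin:
    alpha_hat = y_hat / (k_hat_xx + lam), with linear kernel
    k_hat_xx = x_hat * conj(x_hat)."""
    x_hat, y_hat = np.fft.fft(x), np.fft.fft(y)
    return y_hat / (x_hat * np.conj(x_hat) + lam)

def detect(alpha_hat, x_train, z):
    """Response of every cyclic shift of candidate patch z at once
    (up to constant scaling); the strongest response is the new position."""
    k_hat_xz = np.fft.fft(z) * np.conj(np.fft.fft(x_train))
    response = np.real(np.fft.ifft(alpha_hat * k_hat_xz))
    return int(np.argmax(response))

# usage sketch: x is a 1-D sample around the target, y a Gaussian label
# vector peaked at the target position.
x = np.random.rand(64)
y = np.exp(-0.5 * ((np.arange(64) - 32) / 2.0) ** 2)
alpha_hat = train_filter(x, y)
shift = detect(alpha_hat, x, np.roll(x, 5))
```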
step 3.7, as shown in fig. 8, when the position of tracked object 4 triggers buffer area 2, object 4 is monitored at all times and its position is compared to check whether it has left the identification area;
step 3.8, after tracked object 4 leaves buffer area 2, the tracker of the corresponding ID is destroyed, the behavior of tracked object 4 is further judged from the positional relation between its starting area and leaving area, classified and counted, output to the screen and stored in a database;
suppose the coordinates of the upper-left and lower-right vertices of recognition box A are (x_A1, y_A1) and (x_A2, y_A2), and the coordinates of the upper-left and lower-right vertices of recognition box B are (x_B1, y_B1) and (x_B2, y_B2); for ease of understanding, the judgment is described in algorithmic logic:
the centroid of each recognition box is calculated from its coordinates: centroid = ((x1 + x2) / 2, (y1 + y2) / 2); the dotted line is the center line of the identification area, whose position can be calculated in the same way; the influence of the y coordinate is ignored, i.e. only the x position of the centroid relative to the center line is judged;
step 4.1, if the starting centroid and the leaving centroid are both on the left side of dotted line 5, target object 6 is a turned-back departing object;
step 4.2, if the starting centroid is on the left side of dotted line 5 and the leaving centroid is on the right side, target object 7 is a departing (outbound) object;
step 4.3, if the starting centroid and the leaving centroid are both on the right side of dotted line 5, target object 8 is a turned-back entering object;
step 4.4, if the starting centroid is on the right side of dotted line 5 and the leaving centroid is on the left side, target object 9 is an entering (inbound) object.
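The four-way judgment of steps 4.1 to 4.4 reduces to comparing the x coordinates of the starting and leaving centroids against the center line; a minimal sketch, with the label strings and function names as illustrative assumptions:

```python
def classify_behavior(start_centroid_x, leave_centroid_x, center_line_x):
    """Classify the tracked object's behavior from the x positions of its
    starting and leaving centroids relative to the identification-area
    center line (the y coordinate is ignored, as described above)."""
    start_left = start_centroid_x < center_line_x
    leave_left = leave_centroid_x < center_line_x
    if start_left and leave_left:
        return "turn-back departure"   # step 4.1
    if start_left and not leave_left:
        return "outbound"              # step 4.2
    if not start_left and not leave_left:
        return "turn-back entry"       # step 4.3
    return "inbound"                   # step 4.4
```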
Claims (2)
1. A video monitoring and analyzing method for maritime affairs based on artificial intelligence is characterized in that: the method comprises the following steps:
step 1, identifying the target objects in the identification area of each frame of a video data source with a recognition algorithm, adapted to irregular identification areas so as to achieve complete identification of the target objects; namely: reading each frame of the video data source in sequence and setting an identification area on each frame;
step 2, distinguishing and marking the target objects of consecutive frames of the video data source in a buffer area through a marking algorithm, completing non-repeated marking of the same target object, distinguishing new and old target objects and ensuring the uniqueness of each identified object; namely: setting, inside the identification area of each frame, a buffer area scaled down by a set reduction ratio and sharing the center point of the identification area;
step 3, intercepting the identification area from each frame of the video, tracking the identified objects within the buffer area using a tracking algorithm until the target object's track leaves the buffer area, and ensuring that targets are tracked without confusion under the influence of overlapping, occlusion and re-separation during tracking, so as to obtain the result for each identified object;
and step 4, recording the position where the tracked target object leaves the identification area, and using this position for behavior analysis and statistics of the tracked target object.
2. The artificial intelligence based maritime service oriented video monitoring and analysis method according to claim 1, wherein: in step 3, tracking the identified object by using a tracking algorithm, comprising the following steps:
step 3.1, reading the P-th frame image, calling the Resize function to adjust the image size, and dividing the image into S×S grid cells;
step 3.2, performing feature extraction on the image by using a convolutional neural network;
step 3.3, predicting the position and the type of the target: if the center of a target object falls in a grid cell, that cell is responsible for predicting the target object; each grid cell predicts B candidate boxes, the confidence of each box and C class probabilities, giving an output tensor of size S×S×(B×5+C), where S×S is the number of grid cells, B is the number of boxes each cell is responsible for, and C is the number of categories; each cell corresponds to B bounding boxes whose width and height may range over the full image and which represent the positions of boxes for finding objects centered on that cell; each bounding box corresponds to a score expressing both whether an object exists at that position and the localization accuracy;
each cell also corresponds to C probability values Pr(Class_i|Object), the probability that the object belongs to class i given that an object is present; the category with the maximum probability is found, and the object or a portion of the object is considered to be contained in that cell; the (B×5+C)-dimensional vector of each cell contains the following information:
1. the C class probability values;
2. the position information of each candidate box: the center-point x coordinate, y coordinate, candidate-box width w and candidate-box height h, so each of the B candidate boxes requires 4 values to express its position;
3. the confidence Pr(Object) of each candidate box, i.e. the probability that an object exists within the candidate box, as distinguished from Pr(Class_i|Object); the confidence expresses how closely the predicted candidate box matches the real target box;
4. traversing all the scores, excluding objects with lower scores or excessive overlap, and outputting the predicted objects.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011102923.XA CN112200101B (en) | 2020-10-15 | 2020-10-15 | Video monitoring and analyzing method for maritime business based on artificial intelligence |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011102923.XA CN112200101B (en) | 2020-10-15 | 2020-10-15 | Video monitoring and analyzing method for maritime business based on artificial intelligence |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112200101A true CN112200101A (en) | 2021-01-08 |
CN112200101B CN112200101B (en) | 2022-10-14 |
Family
ID=74009065
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011102923.XA Active CN112200101B (en) | 2020-10-15 | 2020-10-15 | Video monitoring and analyzing method for maritime business based on artificial intelligence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112200101B (en) |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101515378A (en) * | 2009-03-17 | 2009-08-26 | 上海普适导航技术有限公司 | Informationization management method for vessel entering and leaving port |
CN104394507A (en) * | 2014-11-13 | 2015-03-04 | 厦门雅迅网络股份有限公司 | Method and system for solving alarm regional report omission through buffer zone |
CN104766064A (en) * | 2015-04-13 | 2015-07-08 | 郑州天迈科技股份有限公司 | Method for recognizing and positioning access station and access field through vehicle-mounted video DVR images |
CN105761490A (en) * | 2016-04-22 | 2016-07-13 | 北京国交信通科技发展有限公司 | Method of carrying out early warning on hazardous chemical substance transport vehicle parking in service area |
WO2018008893A1 (en) * | 2016-07-06 | 2018-01-11 | 주식회사 파킹패스 | Off-street parking management system using tracking of moving vehicle, and method therefor |
CN107067447A (en) * | 2017-01-26 | 2017-08-18 | 安徽天盛智能科技有限公司 | A kind of integration video frequency monitoring method in large space region |
CN109684996A (en) * | 2018-12-22 | 2019-04-26 | 北京工业大学 | Real-time vehicle based on video passes in and out recognition methods |
CN109785664A (en) * | 2019-03-05 | 2019-05-21 | 北京悦畅科技有限公司 | A kind of statistical method and device of the remaining parking stall quantity in parking lot |
CN110991272A (en) * | 2019-11-18 | 2020-04-10 | 东北大学 | Multi-target vehicle track identification method based on video tracking |
Non-Patent Citations (3)
Title |
---|
NATALIA WAWRZYNIAK et al.: "Vessel Detection and Tracking Method Based on Video Surveillance", MDPI *
XIAN Yunting et al.: "Multi-ship target tracking and traffic statistics based on deep learning" (基于深度学习的多船舶目标跟踪与流量统计), Microcomputer Applications (微型电脑应用) *
WU Xinghua: "Design and implementation of an intelligent arrival-departure operation system for conventional-speed stations" (面向普速车站的智能到发作业系统设计与实现), Railway Computer Application (铁路计算机应用) *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPWO2022244062A1 (en) * | 2021-05-17 | 2022-11-24 | ||
WO2022244062A1 (en) * | 2021-05-17 | 2022-11-24 | Eizo株式会社 | Information processing device, information processing method, and computer program |
JP7462113B2 (en) | 2021-05-17 | 2024-04-04 | Eizo株式会社 | Information processing device, information processing method, and computer program |
CN113516093A (en) * | 2021-07-27 | 2021-10-19 | 浙江大华技术股份有限公司 | Marking method and device of identification information, storage medium and electronic device |
Also Published As
Publication number | Publication date |
---|---|
CN112200101B (en) | 2022-10-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105389567B (en) | Group abnormality detection method based on dense optical flow histogram | |
CN110097568A | Video object detection and segmentation method based on spatio-temporal dual-branch networks | |
CN106878674A (en) | A kind of parking detection method and device based on monitor video | |
CN106778540B | Parking event detection method based on a double-layer background for accurate parking detection | |
CN110060508B (en) | Automatic ship detection method for inland river bridge area | |
CN109657541A (en) | A kind of ship detecting method in unmanned plane image based on deep learning | |
CN104318258A (en) | Time domain fuzzy and kalman filter-based lane detection method | |
CN105184271A (en) | Automatic vehicle detection method based on deep learning | |
CN112200101B (en) | Video monitoring and analyzing method for maritime business based on artificial intelligence | |
CN109977897A | Ship feature re-identification method, application method and system based on deep learning | |
CN104881643B (en) | A kind of quick remnant object detection method and system | |
CN110458160A | Waterborne target recognition algorithm for unmanned boats based on a depth-compressed neural network | |
CN110334703B (en) | Ship detection and identification method in day and night image | |
Bloisi et al. | Camera based target recognition for maritime awareness | |
CN113743260B (en) | Pedestrian tracking method under condition of dense pedestrian flow of subway platform | |
CN112819068A (en) | Deep learning-based real-time detection method for ship operation violation behaviors | |
Wu et al. | A new multi-sensor fusion approach for integrated ship motion perception in inland waterways | |
CN113763427B (en) | Multi-target tracking method based on coarse-to-fine shielding processing | |
CN116434159A (en) | Traffic flow statistics method based on improved YOLO V7 and Deep-Sort | |
Zhang et al. | A warning framework for avoiding vessel‐bridge and vessel‐vessel collisions based on generative adversarial and dual‐task networks | |
CN113989487A (en) | Fault defect detection method and system for live-action scheduling | |
CN114565824A (en) | Single-stage rotating ship detection method based on full convolution network | |
CN112861762B (en) | Railway crossing abnormal event detection method and system based on generation countermeasure network | |
CN110188607A (en) | A kind of the traffic video object detection method and device of multithreads computing | |
Bloisi et al. | Integrated visual information for maritime surveillance |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CP03 | Change of name, title or address |
Address after: 450046 No.9 Zeyu Street, Zhengdong New District, Zhengzhou City, Henan Province
Patentee after: Henan Zhonggong Design and Research Institute Group Co.,Ltd.
Country or region after: China
Address before: 450046 No.9 Zeyu Street, Zhengdong New District, Zhengzhou City, Henan Province
Patentee before: HENAN PROVINCIAL COMMUNICATIONS PLANNING & DESIGN INSTITUTE Co.,Ltd.
Country or region before: China
CP03 | Change of name, title or address |