CN112001320A - Gate detection method based on video - Google Patents

Gate detection method based on video

Info

Publication number
CN112001320A
CN112001320A
Authority
CN
China
Prior art keywords
gate
top line
video
detected
detection method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010864191.1A
Other languages
Chinese (zh)
Other versions
CN112001320B (en)
Inventor
薛超
高旭麟
刘琰
陈澎祥
孙雅彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tiandy Technologies Co Ltd
Original Assignee
Tiandy Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tiandy Technologies Co Ltd filed Critical Tiandy Technologies Co Ltd
Priority to CN202010864191.1A priority Critical patent/CN112001320B/en
Publication of CN112001320A publication Critical patent/CN112001320A/en
Application granted granted Critical
Publication of CN112001320B publication Critical patent/CN112001320B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a video-based gate detection method, characterized by comprising the following steps: S1, establishing and training a deep learning model for gate detection; S2, setting a gate detection rule; S3, preprocessing the image to be detected; and S4, inputting the preprocessed image into the deep learning model for gate detection. The method detects the state of the gate by analyzing video images, makes gate-state detection real-time and automatic, saves a large amount of labor and time cost, effectively improves recognition accuracy, reduces the false-alarm rate, and is applicable to a variety of scenes.

Description

Gate detection method based on video
Technical Field
The invention belongs to the technical field of video monitoring, and particularly relates to a video-based gate detection method.
Background
With the wide application of video monitoring systems in the water conservancy industry, such systems play an increasingly important role in the daily management of water conservancy departments: staff can monitor each water conservancy facility in real time from a monitoring center or a work computer, observe the current state of water resources, and thereby greatly improve working efficiency. Hydraulic engineering projects use many kinds of gates. Because a gate bears large forces during opening and closing, it is easily damaged, and faults such as incomplete opening or closing and jamming often occur. One existing solution uses an open/close limit switch on the gate, but a gate sometimes has to operate in a partly closed state, which such a switch cannot detect.
Disclosure of Invention
In view of this, the present invention provides a video-based gate detection method to solve the problem that the gate state is difficult to detect during opening and closing. The method analyzes the video stream transmitted by an existing camera and achieves good detection results for various gates and for videos of differing clarity.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
a gate detection method based on video comprises the following steps:
s1, establishing and training a deep learning model for gate detection;
s2, setting a gate detection rule;
s3, preprocessing an image to be detected;
and S4, inputting the preprocessed image into the deep learning model for gate detection to perform detection.
Further, gate detection by the deep learning model in step S4 includes: judging the gate state, judging the gate movement direction, and calculating the gate opening percentage.
Further, in step S1, the deep learning model is trained by repeated iteration using stochastic gradient descent, where each iteration reduces the loss function; the loss function used is:

$$
\begin{aligned}
L ={}& \lambda_{coord}\sum_{i=0}^{s^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\right] \\
&+ \lambda_{coord}\sum_{i=0}^{s^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[\left(\sqrt{\omega_i}-\sqrt{\hat{\omega}_i}\right)^2+\left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^2\right] \\
&+ \sum_{i=0}^{s^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left(C_i-\hat{C}_i\right)^2 + \lambda_{noobj}\sum_{i=0}^{s^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}\left(C_i-\hat{C}_i\right)^2
\end{aligned}
$$

where $i$ indexes the $i$-th grid cell and $j$ the $j$-th bbox, and $\mathbb{1}_{ij}^{obj}$ indicates that the $j$-th bbox of the $i$-th grid cell is responsible for a gate. The first two terms are the coordinate prediction: $(\hat{x}_i,\hat{y}_i)$ are the predicted coordinates of the gate centre point, $(x_i,y_i)$ the labeled centre point, $\omega_i$ and $h_i$ the width and height of the labeled gate box, and $\hat{\omega}_i$ and $\hat{h}_i$ the predicted width and height. $C_i$ is the confidence of the gate box and $\hat{C}_i$ the predicted confidence; the third term is the confidence of boxes containing a gate, and the fourth term covers boxes containing no gate. $\lambda_{coord}$ and $\lambda_{noobj}$ are weight coefficients, $B$ is the number of anchor boxes, and $s^2$ is the total number of cells on the feature map, i.e., the number of grid cells.
Further, the rule setting in step S2 mainly includes setting the top-line position for the open gate and the top-line position for the closed gate. The open top line is the position of the upper edge of the gate when it is fully open, marked by drawing a line segment; the closed top line is the position of the upper edge of the gate when it is fully closed, likewise marked with a line segment. The current state of the gate can then be judged from the positional relationship between the detected upper edge of the gate and these two top lines.
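As an illustration, the two rule lines can be represented by their y coordinates in image space; the class name and method are assumptions for this sketch, not taken from the patent. Note that image y grows downward, so "above" means a smaller y value.

```python
class GateRule:
    """Detection rule: two horizontal top lines given by their image y coordinates."""

    def __init__(self, open_top_y, closed_top_y):
        # y of the gate's upper edge when the gate is fully open (higher in the image)
        self.open_top_y = open_top_y
        # y of the gate's upper edge when the gate is fully closed (lower in the image)
        self.closed_top_y = closed_top_y

    def is_open(self, detected_top_y):
        # The gate counts as open when its detected upper edge lies above
        # (has a smaller y than) the configured closed-position top line.
        return detected_top_y < self.closed_top_y
```

For example, with the open line at y = 100 and the closed line at y = 300, a gate whose upper edge is detected at y = 150 would be judged open.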
Further, in step S3 the image is preprocessed with Gaussian filtering to reduce image noise.
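A Gaussian filter of this kind can be sketched with a separable kernel as below; the kernel size and sigma are illustrative defaults, since the patent does not specify them.

```python
import numpy as np

def gaussian_kernel_1d(size=5, sigma=1.0):
    """Normalized 1-D Gaussian kernel."""
    x = np.arange(size) - (size - 1) / 2.0
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(image, size=5, sigma=1.0):
    """Separable Gaussian smoothing of a 2-D grayscale image (edge-padded)."""
    k = gaussian_kernel_1d(size, sigma)
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    # Convolve rows, then columns: a 2-D Gaussian kernel is separable.
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)
```

In practice an optimized library routine (e.g. OpenCV's `GaussianBlur`) would be used instead; the sketch only shows the operation being applied.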
Further, in step S4, the deep learning model detects the gate in the image obtained in step S3 and obtains a score for each detected gate position, calculated as:

$$h_\theta(x)=\frac{1}{1+e^{-\theta^{T}x}}$$

where $\theta$ is the parameter vector and, for a given input $x$, $h_\theta(x)$ is the probability that the corresponding class label is a positive example, i.e., the score used herein.
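The score formula above is the standard logistic (sigmoid) function; a minimal sketch, with an illustrative function name:

```python
import math

def score(theta, x):
    """Logistic score h_theta(x) = 1 / (1 + exp(-theta . x)): the probability
    that the detection is a positive (gate) example, between 0 and 1."""
    z = sum(t * v for t, v in zip(theta, x))
    return 1.0 / (1.0 + math.exp(-z))
```

A zero activation maps to a score of 0.5; large positive activations approach 1, matching the 0-to-1 score range described later in the embodiment.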
Further, the gate state is judged as follows:
if no gate target is detected in step S4 for several consecutive frames, the gate is considered closed; if a gate target is detected in step S4, the positional relationship between the top line of the detected gate position and the configured fully-closed top line is analyzed: if the top line of the current gate position is above the configured closed top line, the gate is considered open; otherwise it is considered closed.
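The state rule can be sketched as a single function; the "several consecutive frames" threshold (`miss_limit`) is an assumed illustrative value, not specified by the patent.

```python
def judge_gate_state(detected_top_y, closed_top_y, miss_count, miss_limit=10):
    """Return the gate state per the rule in the text (image y grows downward):
    - no detection for `miss_limit` consecutive frames -> 'closed';
    - detected top line above the fully-closed top line -> 'open';
    - otherwise -> 'closed'.
    `detected_top_y` is None when no gate target was found in the frame."""
    if detected_top_y is None:
        return "closed" if miss_count >= miss_limit else "unknown"
    return "open" if detected_top_y < closed_top_y else "closed"
```

The caller would maintain `miss_count` across frames, resetting it whenever a valid detection appears.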
Further, the gate movement direction is judged as follows:
the top line of the gate position detected in step S4 is stored for two consecutive frames. If the top line of the current frame is above that of the previous frame, the gate is considered to be moving up; if the two top lines coincide, the gate is considered stationary; and if the top line of the current frame is below that of the previous frame, the gate is considered to be moving down.
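The two-frame comparison amounts to a three-way test on the y coordinates (y grows downward in image space); a minimal sketch:

```python
def gate_direction(prev_top_y, curr_top_y):
    """Compare the top-line y of two consecutive frames:
    smaller y than before -> moving up; equal -> stationary; larger -> moving down."""
    if curr_top_y < prev_top_y:
        return "up"
    if curr_top_y == prev_top_y:
        return "still"
    return "down"
```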
Further, the gate opening percentage is calculated as follows: compute the distance d between the fully-open top line and the fully-closed top line, then compute the distance Δd between the top line of the gate position detected in step S4 and the fully-closed top line; Δd/d is the opening percentage of the gate.
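The Δd/d calculation is a simple ratio of vertical distances; a sketch, with illustrative argument names:

```python
def opening_percent(open_top_y, closed_top_y, detected_top_y):
    """Opening percentage = delta_d / d, where d is the distance between the
    fully-open and fully-closed top lines and delta_d the distance from the
    fully-closed top line to the detected top line."""
    d = abs(closed_top_y - open_top_y)            # full travel of the upper edge
    delta_d = abs(closed_top_y - detected_top_y)  # travel covered so far
    return delta_d / d
```

With the open line at y = 100 and the closed line at y = 300, an edge detected at y = 200 gives an opening of 50%.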
Further, the detection in step S4 of the image obtained in step S3 proceeds as follows: the whole image is detected with the trained YOLO model, and the position and score of each detected target are recorded; if a target's score is greater than 0.8, it is considered a valid target and its movement direction is judged; if no target scores above 0.8, the gate is in the closed state.
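The 0.8 threshold filtering can be sketched as below; the detection tuple layout is an assumption for illustration, not a format defined by the patent.

```python
def filter_detections(detections, threshold=0.8):
    """Keep only detections whose score exceeds the threshold (0.8 in the text).
    Each detection is assumed to be a (box, score) pair, where `box` is
    whatever position representation the detector emits."""
    return [(box, s) for box, s in detections if s > threshold]
```

An empty result after filtering corresponds to the "gate closed" branch of the rule above.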
Compared with the prior art, the gate detection method based on the video has the following advantages:
(1) according to the gate detection method based on the video, the gate state is detected by analyzing the video image, so that the real-time performance and the automation of the gate state detection are realized;
(2) according to the gate detection method based on the video, not only are a large amount of labor cost and time cost saved, but also the identification accuracy is effectively improved, and the false alarm rate is reduced;
(3) the gate detection method based on the video has applicability to various scenes.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a flowchart of the deep learning model training process of the video-based gate detection method according to an embodiment of the present invention;
fig. 2 is a gate detection flowchart of a gate detection method based on video according to an embodiment of the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used only for convenience in describing the present invention and for simplicity in description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention. Furthermore, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first," "second," etc. may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless otherwise specified.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art through specific situations.
The present invention will be described in detail below with reference to the embodiments and the attached drawings.
Referring to fig. 1 and fig. 2, the present invention provides a technical solution: a gate detection method based on video comprises the following steps:
and step S1, deep learning model training. The method specifically comprises sample collection, data enhancement, sample marking and model training under a DarkNet framework. The collection of the sample needs to cover various possible styles of the target in the application scene, and the sample should include a scene without the gate but with the gate easily detected by mistake so as to reduce the false detection rate; after the sample is collected, the image of the sample is enhanced, the information such as the brightness, the angle, the contrast and the like of the image is adjusted, the diversity of the sample is increased, and the robustness of the model can be improved; marking the gate in the sample after the data enhancement is finished, wherein the marking of the sample requires the accuracy of the marked target position; after labeling is complete, the YOLO model is trained under the DarkNet framework.
Step S2: detection rule setting. The rule setting mainly includes setting the top-line position for the open gate and the top-line position for the closed gate. The open top line is the position of the upper edge of the gate when it is fully open, marked by drawing a line segment; the closed top line is the position of the upper edge of the gate when it is fully closed, likewise marked with a line segment. The current state of the gate can then be judged from the positional relationship between the detected upper edge of the gate and these two top lines.
Step S3: image preprocessing. Before detection, the image to be processed is smoothed and denoised to obtain better detection results. The method preprocesses the image with Gaussian filtering to reduce image noise.
Step S4: gate detection by the deep learning model. The whole image is detected with the YOLO model trained in step S1, and the positions and scores of detected targets are recorded; a target whose score is greater than 0.8 is considered a valid target.
Judging the gate state: since in some scenes the gate cannot be seen once fully closed, if no gate target is detected in step S4 for several consecutive frames, the gate is considered closed. If a gate target is detected in step S4, the positional relationship between the top line of the detected gate position and the configured fully-closed top line is analyzed: if the top line of the current gate position is above the configured closed top line, the gate is considered open; otherwise it is considered closed.
Judging the gate movement direction: the top line of the gate position detected in step S4 is stored for two consecutive frames. If the top line of the current frame is above that of the previous frame, the gate is considered to be moving up; if the two top lines coincide, the gate is considered stationary; and if the top line of the current frame is below that of the previous frame, the gate is considered to be moving down.
Calculating the gate opening percentage: compute the distance d between the fully-open top line and the fully-closed top line, then compute the distance Δd between the top line of the gate position detected in step S4 and the fully-closed top line; Δd/d is the opening percentage of the gate.
The working process of the embodiment is as follows:
The method requires a detection model trained in advance, so samples of various gates in various scenes are collected first and labeled, i.e., the true position coordinates are marked according to the position of the gate in each image. The model is then trained by repeated iteration with stochastic gradient descent, each iteration reducing the loss function; the loss function used is:

$$
\begin{aligned}
L ={}& \lambda_{coord}\sum_{i=0}^{s^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\right] \\
&+ \lambda_{coord}\sum_{i=0}^{s^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[\left(\sqrt{\omega_i}-\sqrt{\hat{\omega}_i}\right)^2+\left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^2\right] \\
&+ \sum_{i=0}^{s^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left(C_i-\hat{C}_i\right)^2 + \lambda_{noobj}\sum_{i=0}^{s^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}\left(C_i-\hat{C}_i\right)^2
\end{aligned}
$$

where $i$ indexes the $i$-th grid cell and $j$ the $j$-th bbox, and $\mathbb{1}_{ij}^{obj}$ indicates that the $j$-th bbox of the $i$-th grid cell is responsible for a gate. The first two terms are the coordinate prediction: $(\hat{x}_i,\hat{y}_i)$ are the predicted coordinates of the gate centre point, $(x_i,y_i)$ the labeled centre point, $\omega_i$ and $h_i$ the width and height of the labeled gate box, and $\hat{\omega}_i$ and $\hat{h}_i$ the predicted width and height. $C_i$ is the confidence of the gate box and $\hat{C}_i$ the predicted confidence; the third term is the confidence of boxes containing a gate, and the fourth term covers boxes containing no gate. $\lambda_{coord}$ and $\lambda_{noobj}$ are weight coefficients, $B$ is the number of anchor boxes, and $s^2$ is the total number of cells on the feature map, i.e., the number of grid cells.
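The loss above can be sketched in NumPy as follows; the array shapes, the function name, and the default weights (λ_coord = 5, λ_noobj = 0.5, the values used in the original YOLO papers) are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def yolo_loss(pred, target, obj_mask, lam_coord=5.0, lam_noobj=0.5):
    """Sum-squared YOLO-style loss over s^2 grid cells x B anchor boxes.
    pred, target: arrays of shape (s2, B, 5) holding (x, y, w, h, C);
    obj_mask: (s2, B) booleans, True where box j of cell i is responsible
    for a gate. A sketch of the loss in the text, not the patented code."""
    obj = obj_mask.astype(float)
    noobj = 1.0 - obj
    # centre-coordinate term
    coord = ((pred[..., 0] - target[..., 0]) ** 2 +
             (pred[..., 1] - target[..., 1]) ** 2)
    # width/height term (square roots soften the effect of box size)
    wh = ((np.sqrt(pred[..., 2]) - np.sqrt(target[..., 2])) ** 2 +
          (np.sqrt(pred[..., 3]) - np.sqrt(target[..., 3])) ** 2)
    # confidence error, weighted differently for object / no-object boxes
    conf = (pred[..., 4] - target[..., 4]) ** 2
    return (lam_coord * np.sum(obj * (coord + wh)) +
            np.sum(obj * conf) +
            lam_noobj * np.sum(noobj * conf))
```

A perfect prediction yields zero loss, and a spurious confidence on a background box is penalized only by the down-weighted λ_noobj term.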
Continued iteration makes the box error smaller and smaller and the prediction more and more accurate.
Finally, the YOLO model with the best detection performance is used to determine the specific position of the gate in the image.
Starting detection;
Step S2: set the top-line position for the fully open gate and the top line for the fully closed gate;
Step S3: preprocess the video image to remove image noise; Gaussian filtering effectively suppresses noise and smooths the image;
Step S4: detect the image obtained in step S3 with the trained deep learning YOLO model, obtaining a score for each detected gate position, calculated as:

$$h_\theta(x)=\frac{1}{1+e^{-\theta^{T}x}}$$

where $\theta$ is the parameter vector and, for a given input $x$, $h_\theta(x)$ is the probability that the corresponding class label is a positive example, i.e., the score used herein. The lowest score is 0 and the highest is 1; results scoring below 0.8 are filtered out, leaving correct detection results.
the position of the top line of the effective gate target of each frame of video image obtained in step S4 is compared with the set position of the top line when the gate is completely opened and the set position of the top line when the gate is completely closed, so as to determine the current state, the movement direction and the opening percentage of the gate.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A gate detection method based on video is characterized by comprising the following steps:
s1, establishing and training a deep learning model for gate detection;
s2, setting a gate detection rule;
s3, preprocessing an image to be detected;
and S4, inputting the preprocessed image into a deep learning model for gate detection and detecting.
2. The video-based gate detection method according to claim 1, wherein: gate detection by the deep learning model in step S4 includes: judging the gate state, judging the gate movement direction, and calculating the gate opening percentage.
3. The video-based gate detection method according to claim 1, wherein:
in step S1, the deep learning model is trained by repeated iteration using stochastic gradient descent; the loss function used is:

$$
\begin{aligned}
L ={}& \lambda_{coord}\sum_{i=0}^{s^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\right] \\
&+ \lambda_{coord}\sum_{i=0}^{s^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[\left(\sqrt{\omega_i}-\sqrt{\hat{\omega}_i}\right)^2+\left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^2\right] \\
&+ \sum_{i=0}^{s^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left(C_i-\hat{C}_i\right)^2 + \lambda_{noobj}\sum_{i=0}^{s^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}\left(C_i-\hat{C}_i\right)^2
\end{aligned}
$$

where $i$ indexes the $i$-th grid cell and $j$ the $j$-th bbox, and $\mathbb{1}_{ij}^{obj}$ indicates that the $j$-th bbox of the $i$-th grid cell is responsible for a gate. The first two terms are the coordinate prediction: $(\hat{x}_i,\hat{y}_i)$ are the predicted coordinates of the gate centre point, $(x_i,y_i)$ the labeled centre point, $\omega_i$ and $h_i$ the width and height of the labeled gate box, and $\hat{\omega}_i$ and $\hat{h}_i$ the predicted width and height. $C_i$ is the confidence of the gate box and $\hat{C}_i$ the predicted confidence; the third term is the confidence of boxes containing a gate, and the fourth term covers boxes containing no gate. $\lambda_{coord}$ and $\lambda_{noobj}$ are weight coefficients, $B$ is the number of anchor boxes, and $s^2$ is the total number of cells on the feature map, i.e., the number of grid cells.
4. The video-based gate detection method according to claim 1, wherein: the detection rule setting in step S2 mainly includes setting the top-line position for the open gate and the top-line position for the closed gate.
5. The video-based gate detection method according to claim 1, wherein: in step S3, the image is preprocessed by using a gaussian filtering method to reduce the noise of the image.
6. The video-based gate detection method according to claim 1, wherein: in step S4, the deep learning model detection gate detects the image obtained in step S2, and obtains a score corresponding to the position of the detected gate, where the score is calculated according to the following formula:
Figure FDA0002649185560000021
where θ is a vector of parameters, for a given input x, hθ(x) The probability that the corresponding class label belongs to the positive example, namely the score in the text, is shown.
7. The video-based gate detection method according to claim 2, wherein: the gate state judging process is as follows:
if no gate target is detected in step S4 for several consecutive frames, the gate is considered closed; if a gate target is detected in step S4, the positional relationship between the top line of the detected gate position and the configured fully-closed top line is analyzed: if the top line of the current gate position is above the configured closed top line, the gate is considered open; otherwise it is considered closed.
8. The video-based gate detection method according to claim 2, wherein: the process of judging the movement direction of the gate is as follows:
in judging the gate movement direction, the top line of the gate position detected in step S4 is stored for two consecutive frames. If the top line of the current frame is above that of the previous frame, the gate is considered to be moving up; if the two top lines coincide, the gate is considered stationary; and if the top line of the current frame is below that of the previous frame, the gate is considered to be moving down.
9. The video-based gate detection method according to claim 2, wherein: the gate opening percentage is calculated as follows: compute the distance d between the fully-open top line and the fully-closed top line, then compute the distance Δd between the top line of the gate position detected in step S4 and the fully-closed top line; Δd/d is the opening percentage of the gate.
10. The video-based gate detection method according to claim 5, wherein: the detection in step S4 of the image obtained in step S3 proceeds as follows: the whole image is detected with the trained YOLO model, and the position and score of each detected target are recorded; if a target's score is greater than 0.8, it is considered a valid target and its movement direction is judged; if no target scores above 0.8, the gate is in the closed state.
CN202010864191.1A 2020-08-25 2020-08-25 Gate detection method based on video Active CN112001320B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010864191.1A CN112001320B (en) 2020-08-25 2020-08-25 Gate detection method based on video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010864191.1A CN112001320B (en) 2020-08-25 2020-08-25 Gate detection method based on video

Publications (2)

Publication Number Publication Date
CN112001320A (en) 2020-11-27
CN112001320B CN112001320B (en) 2024-04-23

Family

ID=73471492

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010864191.1A Active CN112001320B (en) 2020-08-25 2020-08-25 Gate detection method based on video

Country Status (1)

Country Link
CN (1) CN112001320B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113158842A (en) * 2021-03-31 2021-07-23 中国工商银行股份有限公司 Identification method, system, device and medium
WO2024098681A1 (en) * 2022-11-08 2024-05-16 中国长江电力股份有限公司 Hydropower-station bulkhead gate opening and closing method based on yolo automatic recognition

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030209893A1 (en) * 1992-05-05 2003-11-13 Breed David S. Occupant sensing system
CN103973999A (en) * 2013-02-01 2014-08-06 佳能株式会社 Imaging apparatus and control method therefor
CN204010121U (en) * 2014-07-11 2014-12-10 山东新北洋信息技术股份有限公司 The position detecting mechanism of gate and paper money processing machine
CN108318581A (en) * 2017-12-08 2018-07-24 中国兵器科学研究院宁波分院 A kind of arc surface workpiece ultrasonic C-scanning automatic testing method without Set and Positioning
CN109405808A (en) * 2018-10-19 2019-03-01 天津英田视讯科技有限公司 A kind of hydrological monitoring spherical camera
CN110516615A (en) * 2019-08-29 2019-11-29 广西师范大学 Human and vehicle shunting control method based on convolutional neural networks
US20200005468A1 (en) * 2019-09-09 2020-01-02 Intel Corporation Method and system of event-driven object segmentation for image processing
CN110999765A (en) * 2018-10-08 2020-04-14 台湾积体电路制造股份有限公司 Irrigation system for irrigating land
CN111209811A (en) * 2019-12-26 2020-05-29 的卢技术有限公司 Method and system for detecting eyeball attention position in real time


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JOSEPH REDMON et al.: "YOLOv3: An Incremental Improvement", arXiv, pages 1-6
涂从刚: "Design of an ARM-based embedded intelligent gate measurement and control instrument", China Masters' Theses Full-text Database, Engineering Science and Technology II, no. 07, pages 030-5
肖志怀: "Gate maintenance automation for water control projects: research on fault diagnosis technology", China Doctoral Dissertations Full-text Database, Engineering Science and Technology II, no. 02, pages 037-32


Also Published As

Publication number Publication date
CN112001320B (en) 2024-04-23

Similar Documents

Publication Publication Date Title
US11410002B2 (en) Ship identity recognition method based on fusion of AIS data and video data
CN109644255B (en) Method and apparatus for annotating a video stream comprising a set of frames
CN108898085B (en) Intelligent road disease detection method based on mobile phone video
CN103164706B (en) Object counting method and device based on video signal analysis
US8243990B2 (en) Method for tracking moving object
CN109858367B (en) Visual automatic detection method and system for worker through supporting unsafe behaviors
CN104966304A (en) Kalman filtering and nonparametric background model-based multi-target detection tracking method
EP2927871A1 (en) Method and device for calculating number of pedestrians and crowd movement directions
CN110674680B (en) Living body identification method, living body identification device and storage medium
CN112001320A (en) Gate detection method based on video
CN111582358B (en) Training method and device for house type recognition model, and house type weight judging method and device
CN111652035B (en) Pedestrian re-identification method and system based on ST-SSCA-Net
CN112597928B (en) Event detection method and related device
CN111598928A (en) Abrupt change moving target tracking method based on semantic evaluation and region suggestion
CN111582182A (en) Ship name identification method, system, computer equipment and storage medium
CN115995056A (en) Automatic bridge disease identification method based on deep learning
CN106127798B (en) Dense space-time contextual target tracking based on adaptive model
CN112561885B (en) YOLOv 4-tiny-based gate valve opening detection method
CN116385485B (en) Video tracking method and system for long-strip-shaped tower crane object
CN113657151A (en) Water traffic violation detection method based on YOLO target detection algorithm
CN113392726A (en) Method, system, terminal and medium for identifying and detecting human head in outdoor monitoring scene
CN113673534B (en) RGB-D image fruit detection method based on FASTER RCNN
CN114927236A (en) Detection method and system for multiple target images
CN114492657A (en) Plant disease classification method and device, electronic equipment and storage medium
CN105957093A (en) ATM retention detection method of texture discrimination optimization HOG operator

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant