CN114399458A - Crossing fence detection method and system based on deep learning target detection - Google Patents


Publication number
CN114399458A
CN114399458A
Authority
CN
China
Prior art keywords
fence
target
image
area monitoring
crossing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111438815.4A
Other languages
Chinese (zh)
Other versions
CN114399458B (en)
Inventor
张善秀
王国伟
李宁
孙佳媛
李鹂鹏
魏丽
聂芸
Current Assignee
CETC 15 Research Institute
Original Assignee
CETC 15 Research Institute
Application filed by CETC 15 Research Institute
Priority to CN202111438815.4A
Publication of CN114399458A
Application granted
Publication of CN114399458B
Legal status: Active

Classifications

    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T5/40 Image enhancement or restoration by the use of histogram techniques
    • G06T5/70
    • G08B21/02 Alarms for ensuring the safety of persons
    • H04N23/69 Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • H04N25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
    • G06T2207/10024 Color image
    • G06T2207/20081 Training; Learning

Abstract

The invention provides a crossing fence detection method and system based on deep learning target detection, comprising the following steps: determining the position of a monitoring camera and adjusting its viewing angle; determining the position of the fence boundary line in the fence area monitoring image; sequentially performing denoising, color image histogram equalization and image enhancement on each historical fence area monitoring image in a historical fence area monitoring image set shot by the monitoring camera; inputting the processed historical fence area monitoring image set into a YOLO5 model for training; acquiring the current fence area monitoring image of the monitoring camera in real time, sequentially performing denoising and color image histogram equalization, and inputting the image into the trained YOLO5 model for target recognition; and judging whether the target is crossing the fence. The invention can reliably detect the behavior of crossing a highway fence under various weather conditions, thereby reducing the occurrence rate of traffic accidents.

Description

Crossing fence detection method and system based on deep learning target detection
Technical Field
The invention belongs to the technical field of safety, and particularly relates to a crossing fence detection method and system based on deep learning target detection.
Background
Border crossing detection plays an important role in factory production safety, public safety and similar areas. Existing border crossing detection methods include pedestrian crossing detection based on Gaussian mixture modeling, crossing detection based on target detection, and image processing methods. When a person attempts to climb over the fence of a highway or motor-vehicle lane, this crossing behavior is dangerous.
Among the border crossing detection products on the market, background extraction methods, deep learning target detection methods, moving target detection and Gaussian background modeling methods are mostly used. The background extraction method mainly comprises the following processes: acquiring a region-of-interest image, comprising a rectangular area image and a quadrilateral area template; establishing a gray-level background image for the rectangular area by a Gaussian mixture modeling method; performing background difference and binary image morphology processing; and counting, by area ratio, the duration for which the foreground proportion in the quadrilateral template exceeds a preset value. If the foreground proportion in the quadrilateral template of the current frame is greater than a set threshold and the duration is greater than a set value, an out-of-range warning is issued; the quadrilateral area outside the rectangular area is selected where potential safety hazards are likely. However, the background extraction method cannot be used with a moving camera; it cannot identify objects that are stationary or moving slowly; when the surface of a moving object has a large area of similar gray values, holes appear in the difference image; the method adapts poorly to environmental changes (for example, chromaticity changes due to illumination changes); camera shake causes image jitter; and ghost areas appear (a ghost area often arises with the inter-frame difference method: when an originally static object starts to move, the method detects the area formerly covered by the object as moving, and this falsely detected area is called a ghost).
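For illustration, the background-difference step of the prior-art pipeline described above can be sketched as follows. This is a minimal sketch under our own assumptions: the function name, threshold value and ROI handling are not specified in the text, and the Gaussian background model is stood in for by a single background frame.

```python
import numpy as np

def foreground_ratio(frame_gray, background_gray, mask, thresh=30):
    """Background difference + binarization, then the foreground
    proportion inside a region-of-interest mask (the 'quadrilateral
    template' of the prior-art method).

    frame_gray, background_gray: uint8 (H, W) grayscale images.
    mask: bool (H, W) array marking the ROI template.
    thresh: assumed binarization threshold (not given in the text).
    """
    diff = np.abs(frame_gray.astype(np.int16) - background_gray.astype(np.int16))
    fg = diff > thresh                      # binary foreground map
    return fg[mask].mean() if mask.any() else 0.0
```

In the prior-art method, an out-of-range warning is issued when this ratio stays above a set value for longer than a set duration.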
Disclosure of Invention
One objective of the present invention is to provide a crossing fence detection method based on deep learning target detection, which can reliably detect highway fence-crossing behavior under various weather conditions and reduce the occurrence rate of traffic accidents caused by violating regulations.
The second objective of the present invention is to provide a crossing fence detection system based on deep learning target detection.
In order to achieve one of the purposes, the invention adopts the following technical scheme:
a crossing fence detection method based on deep learning target detection comprises the following steps:
step one, determining the position of a monitoring camera and adjusting the visual angle of the monitoring camera;
secondly, determining the position of a fence boundary line in the fence area monitoring image according to the adjusted visual angle of the monitoring camera;
sequentially carrying out denoising processing, color image histogram equalization processing and image enhancement processing on each historical fence area monitoring image in the historical fence area monitoring image set shot by the monitoring camera to obtain a processed historical fence area monitoring image set;
inputting the processed historical fence area monitoring image set into a YOLO5 model for training to obtain a trained YOLO5 model;
acquiring a current fence area monitoring image of the monitoring camera in real time, sequentially performing denoising processing and color image histogram equalization processing, and inputting the image into a trained YOLO5 model for target recognition to obtain target position information in the current fence area monitoring image;
and step six, judging whether the target crosses the fence or not according to the target position information and the position of the fence boundary.
Further, the monitoring camera is arranged on the left side or the right side of the fence; or the monitoring camera and the fence are positioned on the same line.
Further, in step three, the specific process of the image enhancement processing is as follows:
performing target extraction on the image subjected to color image histogram equalization processing;
and according to the target extraction result, performing target splicing, target random scaling, target cutting, target overturning or target rotation on the image subjected to the color image histogram equalization processing.
Further, the specific implementation process of the step six includes:
step 61, determining each rectangular target frame in the target image according to the position information of the target image;
step 62, acquiring a first intersection point and a second intersection point of the extension lines of the two horizontal parallel lines in each rectangular target frame and the fence boundary line;
step 63, calculating coordinates of the first intersection point and the second intersection point according to coordinates of an upper left corner point and a lower right corner point of each rectangular target frame and fence boundary lines;
step 64, judging whether the ordinate of the first intersection point is larger than the ordinate of the upper left corner point and whether the ordinate of the second intersection point is smaller than the ordinate of the lower right corner point, if so, the target crosses the fence, and finishing; if not, the target does not cross the fence and the operation is finished.
Further, the specific implementation process of step six further includes:
and step 65, when the target crosses the fence behavior, performing acousto-optic early warning.
Further, the fence boundary line is a straight line.
In order to achieve the second purpose, the invention adopts the following technical scheme:
a crossing fence detection system based on deep learning target detection, the crossing fence detection system comprising:
the first determining module is used for determining the position of the monitoring camera and adjusting the visual angle of the monitoring camera;
the second determining module is used for determining the position of the fence boundary line in the fence area monitoring image according to the adjusted viewing angle of the monitoring camera;
the preprocessing module is used for sequentially carrying out denoising processing, color image histogram equalization processing and image enhancement processing on each historical fence area monitoring image in the historical fence area monitoring image set shot by the monitoring camera to obtain a processed historical fence area monitoring image set;
the training module is used for inputting the processed historical fence area monitoring image set into a YOLO5 model for training to obtain a trained YOLO5 model;
the target identification module is used for acquiring a current fence area monitoring image of the monitoring camera in real time, sequentially performing denoising processing and color image histogram equalization processing, and inputting the image into a trained YOLO5 model for target identification to obtain target image position information in the current fence area monitoring image;
and the judgment processing module is used for judging whether the target crosses the fence or not according to the position information of the target image and the position of the fence boundary line.
Further, the monitoring camera is arranged on the left or right side of the fence; or the monitoring camera and the fence are positioned on the same line.
Further, the judgment processing module includes:
the determining submodule is used for determining each rectangular target frame in the target image according to the position information of the target image;
the acquisition submodule is used for acquiring a first intersection point and a second intersection point of the extension lines of two parallel horizontal lines in each rectangular target frame and the fence boundary line;
the calculation submodule is used for calculating the coordinates of the first intersection point and the second intersection point according to the coordinates of the upper left corner point and the lower right corner point of each rectangular target frame and the fence boundary;
the judgment submodule is used for judging whether the ordinate of the first intersection point is larger than the ordinate of the upper left corner point and whether the ordinate of the second intersection point is smaller than the ordinate of the lower right corner point; if so, the target crosses the fence and the process is finished; if not, the target does not cross the fence and the process is finished.
Further, the judging and processing module further includes:
and the acousto-optic early warning sub-module is used for carrying out acousto-optic early warning when the target crosses the fence behavior.
The invention has the beneficial effects that:
the position of the fence boundary in the fence area monitoring image is determined through the position of the monitoring camera, the position of the fence boundary does not need to be determined every time of identification, and the boundary can be determined only by installing the monitoring camera for calibration for the first time and adjusting a better visual angle as much as possible; the method comprises the steps of sequentially carrying out denoising processing, color image histogram equalization processing and image enhancement on historical fence area monitoring images, and meanwhile training a YOLO5 model through the historical fence area monitoring images subjected to denoising processing, color image histogram equalization processing and image enhancement processing, so that the YOLO5 model is guaranteed to be a fence area monitoring image under night environment or a fence area monitoring image under bright-sun and high-light daytime environment, and position information of a target (such as a person) in the fence area monitoring image can be accurately and quickly extracted; whether the target crosses the fence or not is judged according to the target position information and the position of the fence boundary, so that the behavior of crossing the fence of the highway is reliably detected under various weather conditions, and the occurrence rate of traffic accidents is reduced; the method is simple and easy to understand, the YOLO5 model can be applied to different actual scenes only by training once, the algorithm detection speed is high and can reach 45 frames/second, the accuracy rate is more than 95%, and the border crossing behavior can be reliably detected under various weather conditions.
Drawings
FIG. 1 is a schematic flow chart of a crossing fence detection method based on deep learning target detection according to the present invention;
FIG. 2 is a schematic diagram of a first out-of-range condition;
FIG. 3 is a schematic diagram of a second out-of-range condition;
FIG. 4 is a diagram illustrating a first border crossing behavior;
FIG. 5 is a diagram illustrating a second border crossing behavior.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings.
The embodiment provides a crossing fence detection method based on deep learning target detection, and with reference to fig. 1, the crossing fence detection method includes the following steps:
step one, determining the position of a monitoring camera and adjusting the visual angle of the monitoring camera.
And step two, determining the position of the fence boundary line in the fence area monitoring image according to the adjusted viewing angle of the monitoring camera.
And determining the position of the boundary in the picture according to the position of the monitoring camera. The position of the boundary line does not need to be determined every time of identification, and the boundary line can be determined only by installing a monitoring camera for calibration for the first time and adjusting a better visual angle as much as possible.
The monitoring camera in this embodiment may be arranged on a post on the left or right side of the fence, see fig. 2, or mounted on a post in the same line as the fence, see fig. 3. The monitoring camera should be installed on a post as close to the fence as possible, with as wide a viewing angle as possible; the position of the boundary crossing line is then determined from the installation position of the field camera and the position of the fence. When someone tries to cross the fence, the behavior can be detected and an audible and visual alarm given.
In the present embodiment, the fence boundary line is a straight line, and its position is determined according to the picture size and its location within the camera's viewing angle. As shown in fig. 4, when the boundary crossing line (fence boundary line) approximately follows a straight line, it is expressed in the image, following standard digital image processing conventions, by the straight-line equation y = Kx + b, and the equation of the boundary crossing line can be determined from two coordinate points on the fence.
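As a sketch of the one-time calibration described above, the parameters K and b of the boundary line y = Kx + b can be computed from two points marked on the fence when the camera is first installed. The function name and the example coordinates are illustrative assumptions, not part of the patent:

```python
def fit_boundary_line(p1, p2):
    """Fit y = K*x + b through two calibration points on the fence.

    p1, p2: (x, y) pixel coordinates of two points on the fence
    boundary line, picked once when the camera is installed.
    """
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2:
        raise ValueError("vertical boundary line: slope undefined for y = K*x + b")
    K = (y2 - y1) / (x2 - x1)
    b = y1 - K * x1
    return K, b
```

For example, `fit_boundary_line((100, 400), (900, 250))` gives K = -0.1875 and b = 418.75, which can then be reused for every frame without re-calibration.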
And thirdly, sequentially carrying out denoising processing, color image histogram equalization processing and image enhancement processing on each historical fence area monitoring image in the historical fence area monitoring image set shot by the monitoring camera to obtain a processed historical fence area monitoring image set.
The historical fence area monitoring images and the current fence area monitoring images of this embodiment are generally in COCO or VOC format. The COCO dataset is sponsored by Microsoft; its image annotations carry not only category and position information but also semantic text descriptions of the images. The open-sourcing of COCO has driven great progress in image segmentation and semantic understanding in recent years, and COCO has almost become the 'standard' dataset for evaluating the performance of image semantic understanding algorithms. Google's open-source show-and-tell generative model was tested on this dataset.
The embodiment carries out denoising processing and color image histogram equalization processing on the historical fence area monitoring image, so that the position of a person can be successfully detected no matter in a night environment or under the condition of bright sun and high illumination. The specific process of the image enhancement processing in this embodiment is as follows:
performing target extraction on the image subjected to color image histogram equalization processing;
and according to the target extraction result, performing target splicing, target random scaling, target cutting, target overturning or target rotation on the image subjected to the color image histogram equalization processing.
And step four, inputting the processed historical fence area monitoring image set into a YOLO5 model for training to obtain a trained YOLO5 model.
The historical fence area monitoring images in this embodiment are subjected to image enhancement processing (stitching, random scaling, cropping, flipping and/or rotation) to obtain 30,000 images, of which 3,000 are used as the test set, 2,000 as the validation set and 25,000 as the training set. The number of training iterations is 300 and the batch size is set to 32. Finally, the trained YOLO5 model, i.e. the person recognition model, is output and downloaded onto an edge device that can be connected to a camera. The YOLO5 model in this embodiment is the YOLO5 model commonly used in the art, and its structure is not described again here.
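The dataset split described above (30,000 augmented images into 25,000 training, 2,000 validation and 3,000 test images) can be sketched as follows; the function name and file names are illustrative assumptions:

```python
import random

def split_dataset(image_paths, n_test=3000, n_val=2000, seed=0):
    """Shuffle and split the augmented image set into
    train / validation / test subsets, as in the embodiment:
    30,000 images -> 25,000 train, 2,000 validation, 3,000 test.
    """
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)  # deterministic shuffle for reproducibility
    test = paths[:n_test]
    val = paths[n_test:n_test + n_val]
    train = paths[n_test + n_val:]
    return train, val, test
```

The training itself (300 iterations, batch size 32) would then be run with the stock YOLO5 training script on these three subsets.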
And step five, acquiring a current fence area monitoring image of the monitoring camera in real time, sequentially performing denoising processing and color image histogram equalization processing, and inputting the image into a trained YOLO5 model for target recognition to obtain target position information in the current fence area monitoring image.
And step six, judging whether the target crosses the fence or not according to the target position information and the position of the fence boundary.
In this characteristic scene, the fence boundary line position is determined in advance according to factors such as the relative position of the camera and the fence. If a person is judged to be out of range, an audible and visual alarm is triggered to remind the person attempting to cross to abandon the behavior; if the person is judged not to cross the border, no action is taken. When there are many people in one image, an attempt to cross the border by even a single one of them can be detected.
The specific implementation process of the step comprises the following steps:
step 61, determining each rectangular target frame in the current fence area monitoring image according to the target position information;
step 62, acquiring a first intersection point and a second intersection point of the extension lines of the two parallel horizontal lines in each rectangular target frame and the fence boundary line;
step 63, calculating coordinates of the first intersection point and the second intersection point according to coordinates of an upper left corner point and a lower right corner point of each rectangular target frame and fence boundary lines;
as shown in fig. 4 and 5, the coordinate positions of the upper left corner and the lower right corner of the rectangular target frame are (X1, Y1) and (X2, Y2), respectively, the upper left corner is used as the origin, the line in the horizontal direction is the Y axis, and the line in the vertical direction is the X axis. The first intersection point where the horizontal line intersects the barrier boundary is made by the point (x1, y1), and can be calculated by the mathematical expression of the boundary in the picture: yn — Kx1+ b. Similarly, the lower right corner is taken as a horizontal line to obtain a first intersection point with the fence boundary line being ym ═ Kx2+ b.
Step 64, judging whether the ordinate of the first intersection point is larger than the ordinate of the upper left corner point and whether the ordinate of the second intersection point is smaller than the ordinate of the lower right corner point; if so, the target crosses the fence, and step 65 is entered; if not, the target does not cross the fence and the operation is finished.
When yn > y1 and ym < y2, it can be determined that border crossing occurs.
The monitoring camera will inevitably capture people inside vehicles or motorcyclists travelling normally on the highway, and whether such people are out of range can still be determined successfully: since yn > y1 and ym < y2 are not satisfied at the same time, no out-of-range behavior is reported.
And step 65, when the target crosses the fence behavior, performing acousto-optic early warning.
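Steps 61 to 64 above can be sketched as a single judgment function. The function name and example line parameters are our assumptions; image coordinates with y increasing downward are assumed:

```python
def crosses_fence(box, K, b):
    """Decide whether a detected target crosses the fence boundary line.

    box: (x1, y1, x2, y2), the upper-left and lower-right corners of
         the rectangular target frame in image coordinates.
    K, b: parameters of the fence boundary line y = K*x + b.

    Following step 64: compute the boundary line's ordinate at the two
    corner abscissae; yn > y1 together with ym < y2 means the frame
    straddles the line, i.e. the target is crossing the fence.
    """
    x1, y1, x2, y2 = box
    yn = K * x1 + b  # boundary ordinate at the upper-left abscissa
    ym = K * x2 + b  # boundary ordinate at the lower-right abscissa
    return yn > y1 and ym < y2
```

A frame entirely on one side of the line fails one of the two inequalities, which is why normal drivers captured by the camera do not trigger the alarm.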
In this embodiment, the position of the fence boundary line in the fence area monitoring image is determined from the position of the monitoring camera; the boundary position does not need to be determined at every recognition, but only once, by installing and calibrating the monitoring camera for the first time and adjusting it to as good a viewing angle as possible. The historical fence area monitoring images are sequentially subjected to denoising, color image histogram equalization and image enhancement, and the YOLO5 model is trained on the images so processed, which ensures that, whether a fence area monitoring image is captured in a night environment or in a bright, strongly lit daytime environment, the YOLO5 model can accurately and quickly extract the position information of a target (such as a person) in it. Whether the target crosses the fence is judged from the target position information and the position of the fence boundary line, so that highway fence-crossing behavior is reliably detected under various weather conditions and the occurrence rate of traffic accidents is reduced. The method is simple and easy to understand; the YOLO5 model needs to be trained only once to be applied to different actual scenes; the detection speed of the algorithm is high, reaching 45 frames per second with an accuracy above 95%; and border crossing behavior can be reliably detected under various weather conditions.
The present embodiment can be implemented by using the crossing fence detection system based on deep learning target detection in the following embodiments:
another embodiment provides a crossing fence detection system based on deep learning target detection, the crossing fence detection system including:
the first determining module is used for determining the position of the monitoring camera and adjusting the visual angle of the monitoring camera;
and the second determining module is used for determining the position of the fence boundary line in the fence area monitoring image according to the adjusted viewing angle of the monitoring camera. The monitoring camera is arranged on the left or right side of the fence; or the monitoring camera and the fence are positioned on the same line.
The preprocessing module is used for sequentially carrying out denoising processing and image enhancement processing on each historical fence area monitoring image in the historical fence area monitoring image set of the monitoring camera to obtain a processed historical fence area monitoring image set;
the training module is used for inputting the processed historical fence area monitoring image set into a YOLO5 model for training to obtain a trained YOLO5 model;
the target identification module is used for acquiring a current fence area monitoring image of the monitoring camera in real time, sequentially performing denoising processing and enhancement processing, inputting the image into a trained YOLO5 model for target identification, and obtaining target position information in the current fence area monitoring image;
and the judgment processing module is used for judging whether the target crosses the fence or not according to the target position information and the fence boundary. The judgment processing module comprises:
the determining submodule is used for determining each rectangular target frame in the current fence area monitoring image according to the target position information;
the acquisition submodule is used for acquiring a first intersection point and a second intersection point of the extension lines of two parallel horizontal lines in each rectangular target frame and the fence boundary line;
the calculation submodule is used for calculating the coordinates of the first intersection point and the second intersection point according to the coordinates of the upper left corner point and the lower right corner point of each rectangular target frame and the fence boundary;
the judgment submodule is used for judging whether the ordinate of the first intersection point is larger than the ordinate of the upper left corner point and whether the ordinate of the second intersection point is smaller than the ordinate of the lower right corner point; if so, the target crosses the fence and the process is finished; if not, the target does not cross the fence and the process is finished.
The judgment processing module of this embodiment further includes:
and the acousto-optic early warning sub-module is used for carrying out acousto-optic early warning when the target crosses the fence behavior.
Although the embodiments of the present invention have been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the spirit and scope of the embodiments of the present invention.

Claims (10)

1. A crossing fence detection method based on deep learning target detection is characterized by comprising the following steps:
step one, determining the position of a monitoring camera and adjusting the visual angle of the monitoring camera;
secondly, determining the position of a fence boundary line in the fence area monitoring image according to the adjusted visual angle of the monitoring camera;
sequentially carrying out denoising processing, color image histogram equalization processing and image enhancement processing on each historical fence area monitoring image in the historical fence area monitoring image set shot by the monitoring camera to obtain a processed historical fence area monitoring image set;
inputting the processed historical fence area monitoring image set into a YOLO5 model for training to obtain a trained YOLO5 model;
acquiring a current fence area monitoring image of the monitoring camera in real time, sequentially performing denoising processing and color image histogram equalization processing, and inputting the image into a trained YOLO5 model for target recognition to obtain target position information in the current fence area monitoring image;
and step six, judging whether the target crosses the fence or not according to the target position information and the position of the fence boundary.
2. The crossing fence detection method of claim 1, wherein the monitoring camera is disposed on the left or right side of the fence; or the monitoring camera and the fence are positioned on the same line.
3. The crossing fence detection method of claim 2, wherein in step three, the specific process of the image enhancement processing is as follows:
performing target extraction on the image subjected to color image histogram equalization processing;
and according to the target extraction result, performing target splicing, random target scaling, target cropping, target flipping or target rotation on the image subjected to the color image histogram equalization processing.
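The enhancement operations of claim 3 can be sketched as below. The flip probability, 90-degree rotation steps, and 80% crop ratio are illustrative assumptions (the claim names the operations but not their parameters), and target splicing (combining several extracted targets into one training image) is omitted for brevity.

```python
import numpy as np

def augment(img, rng):
    """Random flip, 90-degree rotation, and crop of an image array.

    Illustrative stand-ins for the target flipping / rotation / cropping
    of claim 3; the 0.5 flip probability and 80% crop ratio are assumed.
    """
    if rng.random() < 0.5:
        img = img[:, ::-1]                           # horizontal flip
    img = np.rot90(img, k=int(rng.integers(0, 4)))   # 0/90/180/270 degrees
    h, w = img.shape[:2]
    ch, cw = int(h * 0.8), int(w * 0.8)              # crop to 80% of each side
    y = int(rng.integers(0, h - ch + 1))
    x = int(rng.integers(0, w - cw + 1))
    return img[y:y + ch, x:x + cw]
```

Each call yields a different geometric variant of the same frame, which is the usual way such enhancement multiplies a small surveillance dataset before detector training.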
4. The crossing fence detection method according to any one of claims 1 to 3, wherein the specific implementation of step six comprises the following steps:
step 61, determining each rectangular target frame in the current fence area monitoring image according to the target position information;
step 62, acquiring a first intersection point and a second intersection point of the extension lines of the two parallel horizontal edges of each rectangular target frame with the fence boundary line;
step 63, calculating coordinates of the first intersection point and the second intersection point according to the coordinates of the upper left corner point and the lower right corner point of each rectangular target frame and the fence boundary line;
step 64, judging whether the abscissa of the first intersection point is larger than the abscissa of the upper left corner point, or whether the abscissa of the second intersection point is smaller than the abscissa of the lower right corner point; if so, determining that the target crosses the fence, and ending; if not, determining that the target does not cross the fence, and ending.
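Steps 61 to 64 reduce to a pair of coordinate comparisons once the straight fence boundary line (claim 6) is parameterized. The sketch below assumes the line is given as x = a·y + b, which the patent does not specify, and applies claim 4's comparison condition literally.

```python
def crosses_fence(box, fence):
    """Fence-crossing test per steps 61-64 of claim 4.

    box:   (x1, y1, x2, y2) -- upper left and lower right corner points
           of a rectangular target frame.
    fence: (a, b) for a straight fence boundary line x = a*y + b
           (this parameterization is an assumption; claim 6 only
           requires the boundary to be a straight line).
    """
    x1, y1, x2, y2 = box
    a, b = fence
    xi1 = a * y1 + b   # first intersection: fence line at the top edge's y
    xi2 = a * y2 + b   # second intersection: fence line at the bottom edge's y
    # Claim 4's stated condition, applied literally
    return xi1 > x1 or xi2 < x2
```

For a slanted line x = 2y + 5, a frame whose upper left corner lies to the right of the first intersection and whose lower right corner lies to the left of the second intersection is judged as not crossing; moving the frame's left edge past the first intersection flips the decision.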
5. The crossing fence detection method of claim 4, wherein the specific implementation of step six further comprises:
step 65, issuing an acousto-optic early warning when the target exhibits fence-crossing behavior.
6. The crossing fence detection method of claim 4 wherein said fence boundary is a straight line.
7. A crossing fence detection system based on deep learning target detection, the crossing fence detection system comprising:
the first determining module is used for determining the position of the monitoring camera and adjusting the visual angle of the monitoring camera;
the second determining module is used for determining the position of the fence boundary line in the fence area monitoring image according to the adjusted viewing angle of the monitoring camera;
the preprocessing module is used for sequentially carrying out denoising processing, color image histogram equalization processing and image enhancement processing on each historical fence area monitoring image in the historical fence area monitoring image set of the monitoring camera to obtain a processed historical fence area monitoring image set;
the training module is used for inputting the processed historical fence area monitoring image set into a YOLO5 model for training to obtain a trained YOLO5 model;
the target identification module is used for acquiring a current fence area monitoring image of the monitoring camera in real time, sequentially performing denoising processing and color image histogram equalization processing, inputting the image into a trained YOLO5 model for target identification, and obtaining target position information in the current fence area monitoring image;
and the judgment processing module is used for judging whether the target crosses the fence or not according to the target position information and the position of the fence boundary.
8. The crossing fence detection system of claim 7, wherein said monitoring camera is disposed on the left or right side of a fence; or the monitoring camera and the fence are positioned on the same straight line.
9. The crossing fence detection system of claim 7 or 8, wherein said determination processing module comprises:
the determining submodule is used for determining each rectangular target frame in the current fence area monitoring image according to the target position information;
the acquisition submodule is used for acquiring a first intersection point and a second intersection point of the extension lines of the two parallel horizontal edges of each rectangular target frame with the fence boundary line;
the calculation submodule is used for calculating the coordinates of the first intersection point and the second intersection point according to the coordinates of the upper left corner point and the lower right corner point of each rectangular target frame and the fence boundary;
the judgment submodule is used for judging whether the abscissa of the first intersection point is larger than the abscissa of the upper left corner point, or whether the abscissa of the second intersection point is smaller than the abscissa of the lower right corner point; if so, the target crosses the fence and the process is finished; if not, the target does not cross the fence and the process is finished.
10. The crossing fence detection system of claim 9 wherein said decision processing module further comprises:
and the acousto-optic early warning submodule is used for issuing an acousto-optic early warning when the target exhibits fence-crossing behavior.
CN202111438815.4A 2021-11-30 2021-11-30 Crossing fence detection method and system based on deep learning target detection Active CN114399458B (en)

Publications (2)

Publication Number Publication Date
CN114399458A true CN114399458A (en) 2022-04-26
CN114399458B CN114399458B (en) 2023-02-10


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112329569A (en) * 2020-10-27 2021-02-05 武汉理工大学 Freight vehicle state real-time identification method based on image deep learning system
CN113139427A (en) * 2021-03-12 2021-07-20 浙江智慧视频安防创新中心有限公司 Steam pipe network intelligent monitoring method, system and equipment based on deep learning
CN113435278A (en) * 2021-06-17 2021-09-24 华东师范大学 Crane safety detection method and system based on YOLO



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant