CN115410136A - Laser explosive disposal system emergency safety control method based on convolutional neural network
- Publication number
- CN115410136A (application CN202211353043.9A)
- Authority
- CN
- China
- Prior art keywords
- target
- target detection
- laser
- area
- targets
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F42—AMMUNITION; BLASTING
- F42D—BLASTING
- F42D5/00—Safety arrangements
- F42D5/04—Rendering explosive charges harmless, e.g. destroying ammunition; Rendering detonation of explosive charges harmless
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
The invention provides an emergency safety control method for a laser explosive disposal system based on a convolutional neural network, belonging to the technical field of laser explosive disposal. The method comprises the following steps: a video image is extracted from the real-time video and segmented into target detection areas and a laser beam area; the horizontally divided target detection areas are fed into a target detection algorithm in order from the two sides toward the middle laser beam area; each input target detection area is divided vertically to obtain predicted bounding boxes, their probabilities and the corresponding object categories; difference verification is performed on the detected targets, using the gray-level features of two targets to compare the two frames of images and determine whether targets detected in different target detection areas are the same target; and under a purpose-designed safety control strategy, the laser explosive disposal control system controls the laser explosive disposal system according to the target detection results. The invention provides a new technical approach for improving the operating safety of the laser and reducing the probability of accidental injury.
Description
Technical Field
The invention relates to an emergency safety control method for a laser explosive disposal system based on a convolutional neural network, and belongs to the technical field of laser explosive disposal.
Background
Laser explosive disposal is a relatively new technical means of destroying waste ammunition. A laser explosive disposal system can be set up and controlled remotely, requires no special protective equipment, and keeps disposal personnel away from the ammunition, which gives it a clear safety advantage in actual operation. The system can also take on aerial obstacle-clearing tasks, such as removing kites and plastic bags wound around high-voltage lines, or bird nests on towers that threaten communication or power transmission safety.
At present, laser explosive disposal systems mostly adopt an integrated laser-plus-camera scheme, remotely controlled from a tablet computer: the operator watches the video image, remotely starts and stops the light, and observes how the target is being hit. Because the disposal site is usually open and the operator is far from the target, a person or animal that strays into the laser path while the laser is emitting can suffer severe burns, or a fire or other accident may result. This is especially true during system debugging and testing, when few people can be spared to secure the site, the surroundings cannot be fully cordoned off, and the accidental entry of bystanders or pets such as dogs and cats cannot be prevented. The operator's attention is usually concentrated on observing the light spot at the point of impact, so when an intrusion occurs, the light-stop operation takes some reaction time to complete; this delay increases the probability of accidental injury to people or animals while the system is working and reduces the safety of the system.
Disclosure of Invention
The invention aims to provide an emergency safety control method for a laser explosive disposal system based on a convolutional neural network, which improves the operating safety of the laser and offers a new technical approach to reducing the probability of accidental injury.
In order to achieve the purpose, the invention is realized by the following technical scheme:
Step 1: extract a video image from the real-time video and segment it into target detection areas and a laser beam area, wherein the laser beam area lies in the middle of the video image with target detection areas on both sides;
the specific segmentation method comprises the following steps:
the image is divided horizontally into 2n target detection areas and one laser beam area, where n is a positive integer with n ≥ 1; the width W of each target detection area obtained by the division is calculated as follows:

W = TRUNC(X / (2n + 1))

in the formula: the TRUNC function rounds the value toward zero, that is, the fractional part is discarded without carrying, whether the bracketed value is positive or negative; X is the video image width.
Step 2: and sequentially inputting the laser beam areas of the target detection areas divided in the horizontal direction from two sides to the middle into a target detection algorithm.
Step 3: each input target detection area is divided in the vertical direction, taking the target detection area width W as the grid height, so that the image is cut into grids along the vertical direction; a divided region whose vertical height falls short of W is padded. An identification category is defined for each grid, and each grid corresponds to an m-dimensional vector:

y = [p_c, b_x, b_y, b_h, b_w, c1, ..., c(m-5)]^T

The meanings of the symbols in the vector are as follows: p_c denotes the probability that a target exists; b_x and b_y are the x and y positions of the centre of the detected target bounding box; b_h is the height of the bounding box; b_w is the width of the bounding box; c1 ... c(m-5) represent the target category.
Step 4: image classification and localization are applied to each grid to obtain the predicted bounding boxes, their probabilities and the corresponding categories.

Step 5: difference verification is performed on the detected targets to determine whether targets detected in different target detection areas are the same target, comparing the two frames of images by means of the gray-level features of the two targets.

Step 6: the laser explosive disposal control system scans the target detection areas on the left and right sides of the video image, the left areas in left-to-right order and the right areas in right-to-left order, which completes one traversal cycle; the laser explosive disposal system emits light according to the target detection result.
Preferably, when there are multiple detection targets, 2 different anchor box designs are added to the recognition network of the target detection method: one anchor box has the target detection area width W as its width and the actual image height as its height; the other anchor box has both width and height equal to the target detection area width W.
Preferably, W is also calculated in a second way, which chooses the width so that the image height divides into an integer number of square grids:

W2 = TRUNC(Y / (TRUNC(Y / W) + 1))

in the formula: Y is the video image height and W is the width obtained in step 1; the widths calculated in the two ways are compared and the smaller value is selected as the width of the target detection area.
Preferably, the specific steps for comparing the two frames of images by means of their gray-level features are as follows:

step 5-1: convert each pixel of the identified target to gray scale and form a gray histogram, calculating the gray value as follows:

Gray = R*0.299 + G*0.587 + B*0.114

in the formula: R, G, B are the RGB values of a pixel, and the Gray value ranges from 0 to 255;
step 5-2: draw the gray curves of the gray histograms in the same coordinate system according to the calculated gray values;

step 5-3: normalize the gray histogram to a whole-image percentage form according to the size of the identified target, namely: letting the number of points with a certain gray value be A, the conversion relationship is:

N = A / (m × n) × 100%

in the formula: A represents the number of points with that gray value, m × n is the size of the identified target frame, and N represents the percentage of those points among all the pixels of the image;

step 5-4: for a certain gray value in the different target areas, calculate its proportion in the target detection area; from these proportions calculate the average of the sum of the absolute differences of the percentages of each gray value between the earlier and later images, and compare the average with a set threshold: below the threshold the difference is considered small and the objects identified in the two images belong to the same object; above the threshold the difference is considered large and they do not belong to the same object.
preferably, the control method comprises two security control strategies: a red emergency response mode and a yellow emergency response mode;
when the red emergency response mode system detects a target, the system immediately sends out an emergency light-stopping signal and automatically stops light;
in any traversal period, if the existence of a target is detected in a detection area, the center position, the frame size, the object type and the target detection area sequence number of each target in the target detection area are recorded, and the position coordinates are converted into the coordinates of the whole frame of image; the specific contents are as follows:
1) In the first traversal cycle, whenever a target detection area detects a valid target, the coordinate position, frame size, category and detection time of the target are recorded;

1-1) if no target is detected, the normal state is maintained;

1-2) if a target is detected in either target detection area adjacent to the laser beam area, the light is stopped;

1-3) if a single target is detected in a target detection area other than the two adjacent to the laser beam area, only alarm information is sent and the light is not stopped;

1-4) if multiple targets are detected in target detection areas other than the two adjacent to the laser beam area, difference detection is invoked to determine whether the results are the same target: if they are the same target, the position coordinates are used to confirm whether the target is moving laterally from outside inward; approaching the laser area is dangerous behaviour and the light is stopped immediately, while moving away from the laser area only keeps the alarm state without stopping the light; if the multiple targets are different targets, only alarm information is sent to remind the operator that people or animals are in the danger area, and the light is not stopped;
2) In the second traversal cycle:

2-1) if no target is detected, whatever the first traversal cycle showed, the normal state is kept for this cycle and the alarm or emergency control state is cleared;

2-2) if a target is detected in either target detection area adjacent to the laser beam area, the light is stopped immediately;

2-3) if a target is detected in a target detection area other than the two adjacent to the laser beam area:

2-3-1) if a single target is detected, it is compared with the first cycle: if nothing was detected in the first cycle, only an alarm is raised and the light is not stopped;

if a target was detected in the first cycle, it is judged whether the target category is the same as in the first cycle; if it is the same, the movement directions of the target coordinates in the two cycles are compared, the light being stopped immediately for inward movement but not for outward movement; if the target category differs from that of the first cycle, only an alarm is raised and the light is not stopped;

2-3-2) if multiple targets are detected, they are first judged within this cycle: if they are the same target, it is confirmed whether the target moves laterally from outside inward; approaching the laser area is dangerous behaviour and the light is stopped immediately; moving away from the laser area keeps only the alarm without stopping the light; staying in the original area keeps only the alarm without stopping the light;

if the multiple targets are different targets, each is compared with the first-cycle target categories: the same category moving laterally from outside inward across the two traversal cycles triggers an emergency light stop; the same category moving from inside outward, or not moving laterally, triggers alarm information reminding the operator that people or animals are in the danger area; different categories trigger only alarm information reminding the operator that people or animals are in the danger area.
3) The third cycle:

from the third traversal cycle on, the current cycle is treated as the second traversal cycle and the original second traversal cycle becomes the first, starting a new control cycle; whenever no target is detected in a cycle, all states are cleared and the next cycle starts from the initial state.
The invention has the following advantages: it reduces the image size fed to target detection, reducing the amount of computation; the segmented images have an ordinal relationship, which helps determine the direction of object movement and judge the degree of intrusion danger more finely. Using the image detection results and the positions of the segmented images, a danger judgment strategy is designed that produces early-warning information or laser control signals at different levels, so that the laser is stopped automatically, accidental injury is avoided, and the autonomous safety control capability of the system in emergencies is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention.
FIG. 1 is a schematic view of video image segmentation according to the present invention.
FIG. 2 is a gray scale graph of two target detection area images according to the present invention.
FIG. 3 is a gray scale plot of two target detection area images in percentage form according to the present invention.
FIG. 4 is a schematic diagram of a target detection algorithm according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
A laser explosive disposal system emergency safety control method based on a convolutional neural network comprises the following steps:
image segmentation: based on the characteristics of the camera video image and working scene of the laser explosive disposal system, together with the characteristics of people and animals, an image segmentation scheme is designed that simplifies the computational structure of the neural network, reduces the number of its input elements, lowers the computation of the recognition algorithm, and improves real-time applicability;

target detection: a convolutional neural network (CNN) performs target detection on each segmented image using the YOLO method, which has real-time detection capability; anchor boxes suited to the image characteristics of the laser explosive disposal system are designed for the YOLO detection according to those characteristics, further improving recognition and computation efficiency;

difference verification: based on the output of the neural network target detection, the detections from the earlier and later images are difference-verified to determine whether they are the same target, on which basis safety control strategies of different levels are further designed;

safety control strategy: using the image detection results and the positions of the segmented images, a danger judgment strategy is designed that generates early-warning information or laser control signals at different levels, so that the laser is stopped automatically, accidental injury is avoided, and the autonomous safety control capability of the system in emergencies is improved.

The method is applicable under the following conditions: the system integrates a camera and can provide real-time video images; for night work, the camera must support night vision and provide night-vision video images.
Specifically, the method comprises the following steps:
Step 1: extract a video image from the real-time video and segment it into target detection areas and a laser beam area, wherein the laser beam area lies in the middle of the video image with target detection areas on both sides;
the specific segmentation method comprises the following steps:
the image is divided horizontally into 2n target detection areas and one laser beam area, and the width W of each target detection area can be calculated in either of the following two ways:

W1 = TRUNC(X / (2n + 1))

in the formula: the TRUNC function rounds the value toward zero, that is, the fractional part is discarded without carrying, whether the bracketed value is positive or negative; X is the video image width;

W2 = TRUNC(Y / (TRUNC(Y / W1) + 1))

in the formula: Y is the video image height; this second way chooses the width so that the image height divides into an integer number of square grids.
Specifically, if the camera resolution is X × Y = 1920 × 1080 and the whole picture is divided into 7 parts, the first method gives:

W1 = TRUNC(1920 / 7) = 274

that is, after the whole image is divided into 7 parts, each image to be recognized is 274 × 1080. Apart from the 6 target detection image blocks, the remaining laser beam area is 1920 − 274 × 6 = 276 pixels wide. The second method gives:

W2 = TRUNC(1080 / (TRUNC(1080 / 274) + 1)) = TRUNC(1080 / 4) = 270

that is, with the whole image divided into 7 parts, each image to be recognized is 270 × 1080.
The width of the middle laser beam region calculated by either method does not match that of the other regions, but this has no effect, because the middle laser beam region takes no part in the subsequent computation. In practice, therefore, the few redundant pixels produced by the division or rounding can all be assigned to the laser beam area.
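A compact sketch of the two width calculations follows, reproducing the 1920 × 1080 worked example; the function name region_widths is illustrative only, and the second formula is the square-grid reconstruction given above, not a formula confirmed by the original figures:

```python
import math

def region_widths(x: int, y: int, n: int) -> tuple[int, int]:
    """Candidate widths for the 2n target detection areas (illustrative helper).

    Method 1 splits the frame width evenly into 2n + 1 parts and truncates.
    Method 2 shrinks the width until the frame height divides into an integer
    number of square grids (the reconstruction assumed in the text above).
    """
    w1 = math.trunc(x / (2 * n + 1))    # TRUNC(1920 / 7) = 274
    k = y // w1 + (1 if y % w1 else 0)  # vertical grid count, here 4
    w2 = math.trunc(y / k)              # TRUNC(1080 / 4) = 270
    return w1, w2

w1, w2 = region_widths(1920, 1080, 3)
width = min(w1, w2)   # the smaller value is selected, per the text
print(w1, w2, width)  # 274 270 270
```

With the 270-pixel width, the remaining 1920 − 6 × 270 = 300 pixels fall to the middle laser beam area, which takes no part in detection.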
Step 2: and sequentially inputting the laser beam areas of the target detection area divided in the horizontal direction from two sides to the middle part into a target detection algorithm YOLO.
Step 3: each input target detection area is divided in the vertical direction, taking the target detection area width W as the grid height, so that the image is cut into grids along the vertical direction; a divided region whose vertical height falls short of W is padded.
If each target detection area is 274 pixels wide in the horizontal direction, it is sliced layer by layer into 274-pixel heights in the vertical direction, and the final division yields

TRUNC(1080 / 274) + 1 = 4

regions: the first 3 regions have height 274, while the topmost region has height 1080 − 3 × 274 = 258, slightly less than the first 3. This inconsistency requires either adding a padding step in the neural network training that fills the region up to the same size as the others, adjusting the corresponding training-set labels, or setting the anchor box specially where the YOLO Anchor box is involved.
If each target detection area is 270 pixels wide in the horizontal direction, it is sliced layer by layer into 270-pixel heights in the vertical direction, and the final division yields exactly

1080 / 270 = 4

regions, each small region being an exact 270 × 270 square.
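A minimal sketch of the vertical slicing follows; zero-padding the final slice is one plausible way to realise the "filling" step mentioned above, and the top-down slicing order is an assumption:

```python
import numpy as np

def split_vertical(area: np.ndarray, w: int) -> list[np.ndarray]:
    """Cut one w-wide detection area (H x w x 3) into w x w grids,
    zero-padding a final slice whose height falls short of w."""
    grids = []
    for top in range(0, area.shape[0], w):
        g = area[top:top + w]
        if g.shape[0] < w:  # e.g. 258 < 274
            pad = np.zeros((w - g.shape[0],) + g.shape[1:], dtype=g.dtype)
            g = np.concatenate([g, pad], axis=0)
        grids.append(g)
    return grids

area_274 = np.zeros((1080, 274, 3), dtype=np.uint8)
print(len(split_vertical(area_274, 274)))  # 4 grids, the last padded from 258
area_270 = np.zeros((1080, 270, 3), dtype=np.uint8)
print(len(split_vertical(area_270, 270)))  # 4 exact 270 x 270 squares
```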
Segmenting the video image in this way has two advantages:

first, it reduces the image size for target detection, so the full X × Y image data need not be fed into the CNN network, reducing the amount of computation;

second, the segmented images have an ordinal relationship, which helps determine the direction of object movement and judge the degree of intrusion danger more finely.

One frame extracted from the camera video is divided into left and right target detection areas of identical size. Target detection proceeds in a left group and a right group simultaneously, but within each group there is an order: the left part enters the CNN in the order left 1, left 2, ..., and the right part in the order right 1, right 2, ..., denoted L1, L2 ... Ln and R1, R2 ... Rn. Because the target detection images are all the same size, the same trained neural network can handle every input and output.
Each input image is divided into 4 × 1 grids, each grid an exact square. Each grid is responsible for recognizing 4 categories: person (c1), dog (c2), cat (c3) and other (c4). The label corresponding to each grid is therefore a 9-dimensional vector:

y = [p_c, b_x, b_y, b_h, b_w, c1, c2, c3, c4]^T

The 9 elements of the vector mean the following: p_c indicates whether an object of some category is present in the grid, i.e. the probability that a target exists; b_x and b_y are the x and y positions of the centre of the detected target bounding box; b_h is the height of the bounding box and b_w its width; c1 indicates that the detected target is a person, c2 a dog, c3 a cat, and c4 another category. If the target category output by YOLO is a person, then c1 = 1 and c2 = c3 = c4 = 0; if the output simultaneously contains a person, a dog and a cat, then c1 = c2 = c3 = 1.
The target categories can be extended as needed, for example by adding vehicles or birds. When the number of recognized categories is increased, note that the number of label dimensions corresponding to each grid increases as well.
The model is trained as shown in FIG. 4.
With the above 270 × 1080 × 3 input image (a colour image has three RGB channels), the output is 4 × 1 × 9, i.e. 36 elements, representing in order the 9-dimensional vectors of grids 1 to 4. The CNN is a convolutional neural network for detecting people or animals, built according to the specific situation, with 270 × 1080 × 3 input feature elements.
Since each target detection area is divided into 4 grids and each grid has its own label of 9 elements, the 4 grid labels combine into the label of the whole image, whose dimensions are 4 × 1 × 9.
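The label layout can be made concrete with a short sketch; the helper name and the sample box values are illustrative only:

```python
import numpy as np

# Per-grid 9-dimensional label: [p_c, b_x, b_y, b_h, b_w, c1, c2, c3, c4]
# with c1 = person, c2 = dog, c3 = cat, c4 = other.
CLASS_INDEX = {"person": 5, "dog": 6, "cat": 7, "other": 8}

def grid_label(present: bool, bx=0.0, by=0.0, bh=0.0, bw=0.0, cls="other"):
    y = np.zeros(9, dtype=np.float32)
    if present:
        y[0] = 1.0                 # p_c: a target exists in this grid
        y[1:5] = [bx, by, bh, bw]  # bounding-box centre and size
        y[CLASS_INDEX[cls]] = 1.0  # one-hot class entry
    return y

# One detection area is 4 grids high and 1 wide: a 4 x 1 x 9 label tensor.
labels = np.stack([
    grid_label(False),
    grid_label(True, bx=0.5, by=0.4, bh=0.9, bw=0.3, cls="person"),
    grid_label(False),
    grid_label(False),
]).reshape(4, 1, 9)
print(labels.size)  # 36 elements, as stated in the text
```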
Step 4: image classification and localization are applied to each grid to obtain the predicted bounding boxes, their probabilities and the corresponding categories.
Step 5: difference verification is performed on the detected targets to determine whether targets detected in different target detection areas are the same target, comparing the two frames of images by means of the gray-level features of the two targets.

Target detection yields the target category and the related position information in the image; the detected targets must then be difference-verified to determine whether targets detected in different target detection areas are the same target, providing the precondition for analysing the behaviour patterns of the targets.
The processing speed of YOLO can reach 45 frames per second; compared with video at 24 to 25 frames per second, this leaves ample computing resources for the subsequent difference verification, direction determination and so on. To determine whether the targets identified in two consecutive frames are the same target, the difference verification algorithm is designed as follows:
First, comparing the images of the earlier and later target detection areas has the following characteristics:

(1) the two images are formed only about 30 to 40 ms apart, so the ambient brightness and contrast are essentially the same;

(2) the object moves little between the two images, although the bounding box sizes produced by the YOLO detection algorithm may differ slightly.

Because the bounding boxes identified in the two frames differ in size, the two images cannot be compared directly. Stretching the smaller image to match the larger one introduces new information or noise; compressing the larger image to match the smaller one loses information; either way an error is introduced. The difference between the two images is therefore compared through their gray-level features.
Let the size of an identified target frame be m × n. Each pixel is converted to gray scale, and the gray histogram is formed, as follows:

Gray = R*0.299 + G*0.587 + B*0.114

where R, G, B are the RGB values of a pixel and Gray ranges from 0 to 255. Because the gray value range is fixed, images of different sizes can be drawn as gray curves in the same coordinate system, which makes them comparable. From the gray histogram of the image, the number of occurrences of each gray value across the whole image is counted and plotted as a curve; an example is shown in FIG. 2.
The gray histogram is then normalized to a whole-image percentage form according to the size of the identified target: letting the number of points with gray value g be A_g, the conversion is

N_g = A_g / (m × n) × 100%

which expresses the number of points with that gray value as a percentage of all pixels of the image. After this processing, the gray curves become as shown in FIG. 3.
Let the pixels with gray value g in the target frame identified in the previous image occupy the proportion N_g(1) of that frame's image, and let the corresponding proportion in the target frame identified in the following image be N_g(2). The difference between the targets identified in the two frames is then calculated as

D = (1/256) × Σ_g |N_g(1) − N_g(2)|, g = 0, 1, ..., 255

i.e. the average of the sum of the absolute differences of the percentages of each gray value between the earlier and later images. The threshold against which D is compared is tuned for the specific system: below the threshold the difference is considered small and the objects identified in the two images belong to the same object; above it the difference is considered large and they do not belong to the same object.
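A sketch of this difference verification under the definitions above; the threshold value is an assumption to be tuned per system, as the text notes:

```python
import numpy as np

def gray_histogram_pct(rgb_patch: np.ndarray) -> np.ndarray:
    """Percentage of pixels at each of the 256 gray values in a target patch."""
    r, g, b = rgb_patch[..., 0], rgb_patch[..., 1], rgb_patch[..., 2]
    gray = (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    return hist / gray.size * 100.0  # N_g = A_g / (m*n) * 100%

def histogram_difference(prev_patch: np.ndarray, curr_patch: np.ndarray) -> float:
    """D = mean over all 256 gray values of |N_g(1) - N_g(2)|; the patches may
    differ in size, which is the point of the percentage normalization."""
    d = np.abs(gray_histogram_pct(prev_patch) - gray_histogram_pct(curr_patch))
    return float(d.mean())

SAME_OBJECT_THRESHOLD = 0.05  # assumed value; tune for the specific system
prev = np.random.randint(0, 256, (60, 40, 3), dtype=np.uint8)
curr = np.random.randint(0, 256, (55, 38, 3), dtype=np.uint8)  # different size
print(histogram_difference(prev, curr) < SAME_OBJECT_THRESHOLD)
```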
Step 6: the laser explosive disposal control system scans the target detection areas on the left and right sides of the video image, the left areas in left-to-right order and the right areas in right-to-left order, which completes one traversal cycle; the laser explosive disposal system emits light according to the target detection result.
Before the laser emits light, the operator is assumed to have confirmed that no people or animals have entered. Once the laser emits light, the target detection functions are activated and the system enters the emergency safety control state. The safety control strategies of this state are of 2 kinds: the red emergency response mode and the yellow emergency response mode, selected during operation according to the actual situation or individual requirements.
Red emergency response mode
The red emergency response mode has the highest level and the simplest strategy, but a slightly higher false-alarm rate.
If the red response mode is used, the system scans L1 to Ln and R1 to Rn; whichever target detection area detects a person or animal, the system immediately issues an emergency light-stop signal and automatically stops the light, ensuring safety.
Yellow emergency response mode
The yellow emergency response mode is slightly lower in level, more complex in strategy and finer in control.
In any traversal cycle, if a target is detected in a detection area, the centre position, frame size, object category and target detection area number of each target in that area are recorded, and the position coordinates are converted to the coordinates of the whole frame of image.
In the 1st traversal cycle:

whenever any detection area on the left or the right detects a valid target, the coordinate position, frame size, category and detection time of the target are recorded.
1) In the 1st traversal cycle, if no target is detected, the normal state is kept;

2) In the 1st traversal cycle, in any case, as soon as detection area Ln or Rn detects a target, whether person or animal, the light is stopped immediately;

3) In the 1st traversal cycle, if a target is detected in Ln-1 or Rn-1 or an earlier detection area, only alarm information is sent and the light is not stopped;

4) In the 1st traversal cycle, if multiple targets are detected in Ln-1 or Rn-1 or an earlier detection area, difference detection is invoked to determine whether the results are the same target:

4.1) if they are the same target, the position coordinates are used to confirm whether the target is moving laterally from outside inward; approaching the laser area is dangerous behaviour and the light is stopped immediately; moving away from the laser area only keeps the alarm state without stopping the light;

4.2) if the multiple targets are different targets, only an alarm message is sent to remind the operator that people or animals are in the danger area, and the light is not stopped.
In the 2nd traversal cycle:

5) In the 2nd traversal cycle, if no target is detected, whatever the 1st traversal cycle showed, the normal state is kept for this cycle and the alarm or emergency control state is cleared;

6) In the 2nd traversal cycle, in any case, as soon as detection area Ln or Rn detects a target, whether person or animal, the light is stopped immediately;

7) In the 2nd traversal cycle, if a target is detected in Ln-1 or Rn-1 or an earlier detection area:

7.1) if a single target is detected, it is compared with the 1st cycle:

7.1.1) if nothing was detected in the 1st cycle, only an alarm is raised and the light is not stopped;

7.1.2) if a target was detected in the 1st cycle, it is judged whether the target category is the same as in the 1st cycle; if it is the same, the movement directions of the target coordinates in the two cycles are compared, the light being stopped immediately for inward movement but not for outward movement; if the target category differs from that of the 1st cycle, only an alarm is raised and the light is not stopped;

7.2) if multiple targets are detected, they are judged within this cycle first:

7.2.1) if they are the same target, it is confirmed whether the target moves laterally from outside inward; approaching the laser area is dangerous behaviour and the light is stopped immediately; moving away from the laser area keeps only the alarm without stopping the light; staying in the original area keeps only the alarm without stopping the light.

7.2.2) if they are different targets, each is compared with the 1st-cycle target categories: the same category moving laterally from outside inward across the two traversal cycles triggers an emergency light stop; the same category moving from inside outward, or not moving laterally, triggers alarm information reminding the operator that people or animals are in the danger area; different categories trigger only alarm information reminding the operator that people or animals are in the danger area.
The 3rd cycle:

8) From the 3rd traversal cycle on, the current cycle is treated as the 2nd traversal cycle;

9) the original 2nd traversal cycle becomes the 1st traversal cycle;

10) return to step 1) and start a new control cycle.

Whenever no target is detected in a cycle, all states are cleared and the next cycle starts from the initial state.
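A heavily condensed sketch of one yellow-mode traversal decision follows; it keeps only the core rules (stop for the areas beside the beam, stop for a same object moving inward, otherwise alarm), and the data fields, centre coordinate and all names are assumptions rather than the patent's full strategy:

```python
from dataclasses import dataclass

FRAME_CENTRE_X = 960.0  # assumed: half of a 1920-pixel-wide frame

@dataclass
class Detection:
    area: int    # detection-area index, 1 (outermost) .. n (beside the beam)
    cls: str     # "person", "dog", "cat" or "other"
    x: float     # target centre x in whole-frame coordinates

def moved_inward(prev: Detection, curr: Detection) -> bool:
    """Lateral movement toward the frame centre, where the laser beam lies."""
    return abs(curr.x - FRAME_CENTRE_X) < abs(prev.x - FRAME_CENTRE_X)

def yellow_mode_action(n, current, previous, same_object) -> str:
    """One traversal cycle of the yellow response mode, condensed; the full
    strategy in the text also records frame size, detection time and the
    cycle rollover from the 3rd cycle onward. Returns 'stop', 'alarm' or
    'normal'."""
    if not current:
        return "normal"                    # clears any earlier alarm state
    if any(d.area == n for d in current):  # L_n or R_n, adjacent to the beam
        return "stop"
    for d in current:                      # outer areas only from here on
        for p in previous:
            if d.cls == p.cls and same_object(p, d) and moved_inward(p, d):
                return "stop"              # same object closing in on the beam
    return "alarm"                         # targets present, beam kept on

# Usage sketch: same_object could wrap the histogram check above.
prev_cycle = [Detection(area=1, cls="dog", x=100.0)]
curr_cycle = [Detection(area=2, cls="dog", x=400.0)]
print(yellow_mode_action(3, curr_cycle, prev_cycle, lambda p, d: True))  # stop
```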
In particular, when there are multiple detection targets, 2 different Anchor box designs are added to the recognition network of the target detection method: Anchor box 1 is 270 × 1080 and Anchor box 2 is 270 × 270, responsible respectively for the detection of people and of dogs or cats. Both Anchor boxes are assigned to every grid. The sizes of the two Anchor boxes can be adjusted to factors such as the image size of the specific system and the size of the target detection areas after segmentation, with the training-set labels adjusted at the same time.
With the Anchor boxes added, the network output label per grid becomes the stack of two 9-dimensional vectors:

y = [p_c, b_x, b_y, b_h, b_w, c1, c2, c3, c4 | p_c, b_x, b_y, b_h, b_w, c1, c2, c3, c4]^T

The first 9 rows belong to Anchor box 1 and the remaining 9 rows to Anchor box 2. The shape of Anchor box 1 resembles a person's bounding box, so the first 9 elements are assigned to output the detection result for a person and the last 9 elements the detection result for a dog or cat. The output in this case is 4 × 1 × 18. With the two Anchor boxes added, a person and an animal can be detected in the same grid at the same time; the absolute position of the object in the input image is obtained through coordinate conversion, and a further conversion gives its absolute position in the complete frame of image.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (5)
1. A laser explosive disposal system emergency safety control method based on a convolutional neural network is characterized by comprising the following steps:
step 1: extracting a video image from a real-time video and segmenting it into target detection areas and a laser beam area, wherein the laser beam area lies in the middle of the video image with target detection areas on both sides;

the specific segmentation method comprises the following steps:

dividing the image horizontally into 2n target detection areas and one laser beam area, n being a positive integer with n ≥ 1, the width W of each target detection area obtained by the division being calculated as:

W = TRUNC(X / (2n + 1))

in the formula: the TRUNC function rounds the value toward zero, that is, the fractional part is discarded without carrying, whether the bracketed value is positive or negative; X is the video image width;

step 2: inputting the horizontally divided target detection areas into a target detection algorithm in order from the two sides toward the middle laser beam area;

step 3: dividing each input target detection area in the vertical direction, taking the target detection area width W as the grid height, so that the image is cut into grids along the vertical direction, a divided region whose vertical height falls short of W being padded; defining an identification category for each grid, each grid corresponding to an m-dimensional vector:

y = [p_c, b_x, b_y, b_h, b_w, c1, ..., c(m-5)]^T

in the formula: p_c denotes the probability that a target exists; b_x and b_y are the x and y positions of the centre of the detected target bounding box; b_h is the height of the bounding box; b_w is the width of the bounding box; c1 ... c(m-5) represent the target category;

step 4: applying image classification and localization to each grid to obtain the predicted bounding boxes, their probabilities and the corresponding categories;

step 5: performing difference verification on the detected targets to determine whether targets detected in different target detection areas are the same target, comparing the two frames of images by means of the gray-level features of the two targets;

step 6: the laser explosive disposal control system scanning the target detection areas on the left and right sides of the video image, the left areas in left-to-right order and the right areas in right-to-left order, completing one traversal cycle; the laser explosive disposal system emitting light according to the target detection result.
2. The emergency safety control method for a laser explosive disposal system based on a convolutional neural network according to claim 1, wherein when there are multiple detection targets, 2 different anchor box designs are added to the recognition network of the target detection method: one anchor box has the target detection area width W as its width and the actual image height as its height; the other anchor box has both width and height equal to the target detection area width W.
3. The emergency safety control method for a laser explosive disposal system based on a convolutional neural network according to claim 1, wherein W is also calculated as follows:

W2 = TRUNC(Y / (TRUNC(Y / W) + 1))

in the formula: Y is the video image height and W is the width obtained in step 1, the width being chosen so that the image height divides into an integer number of square grids.
4. The emergency safety control method for a laser explosive disposal system based on a convolutional neural network according to claim 1, wherein the specific steps for comparing the two frames of images by means of their gray-level features are as follows:

step 5-1: converting each pixel of the identified target to gray scale and forming a gray histogram, the gray value being calculated as:

Gray = R*0.299 + G*0.587 + B*0.114

in the formula: R, G, B are the RGB values of a pixel, and Gray takes values from 0 to 255;

step 5-2: drawing the gray curves of the gray histograms in the same coordinate system according to the calculated gray values;

step 5-3: normalizing the gray histogram to a whole-image percentage form according to the size of the identified target, namely: letting the number of points with a certain gray value be A, the conversion relationship is:

N = A / (m × n) × 100%

in the formula: A represents the number of points with that gray value, m × n is the size of the identified target frame, and N represents the percentage of those points among all the pixels of the image;

step 5-4: for a certain gray value in the different target areas, calculating its proportion in the target detection area; from these proportions calculating the average of the sum of the absolute differences of the percentages of each gray value between the earlier and later images, and comparing the average with a set threshold: below the threshold the difference is considered small and the objects identified in the two images belong to the same object; above the threshold the difference is considered large and they do not belong to the same object.
5. The emergency safety control method for a laser explosive disposal system based on a convolutional neural network according to claim 1, wherein the control method comprises two safety control strategies: a red emergency response mode and a yellow emergency response mode;

in the red emergency response mode, as soon as the system detects a target it immediately issues an emergency light-stop signal and automatically stops the light;

in the yellow emergency response mode, in any traversal cycle, if a target is detected in a detection area, the centre position, frame size, object category and target detection area number of each target in that area are recorded, and the position coordinates are converted to the coordinates of the whole frame of image; the specific content is as follows:

1) in the first traversal cycle, whenever a target detection area detects a valid target, the coordinate position, frame size, category and detection time of the target are recorded;

1-1) if no target is detected, the normal state is maintained;

1-2) if a target is detected in either target detection area adjacent to the laser beam area, the light is stopped;

1-3) if a single target is detected in a target detection area other than the two adjacent to the laser beam area, only alarm information is sent and the light is not stopped;

1-4) if multiple targets are detected in target detection areas other than the two adjacent to the laser beam area, difference detection is invoked to determine whether the results are the same target: if they are the same target, the position coordinates are used to confirm whether the target is moving laterally from outside inward; approaching the laser area is dangerous behaviour and the light is stopped immediately, while moving away from the laser area only keeps the alarm state without stopping the light; if the multiple targets are different targets, only alarm information is sent to remind the operator that people or animals are in the danger area, and the light is not stopped;

2) in the second traversal cycle:

2-1) if no target is detected, whatever the first traversal cycle showed, the normal state is kept for this cycle and the alarm or emergency control state is cleared;

2-2) if a target is detected in either target detection area adjacent to the laser beam area, the light is stopped immediately;

2-3) if a target is detected in a target detection area other than the two adjacent to the laser beam area:

2-3-1) if a single target is detected, it is compared with the first cycle: if nothing was detected in the first cycle, only an alarm is raised and the light is not stopped;

if a target was detected in the first cycle, it is judged whether the target category is the same as in the first cycle; if it is the same, the movement directions of the target coordinates in the two cycles are compared, the light being stopped immediately for inward movement but not for outward movement; if the target category differs from that of the first cycle, only an alarm is raised and the light is not stopped;

2-3-2) if multiple targets are detected, they are first judged within this cycle: if they are the same target, it is confirmed whether the target moves laterally from outside inward; approaching the laser area is dangerous behaviour and the light is stopped immediately; moving away from the laser area keeps only the alarm without stopping the light; staying in the original area keeps only the alarm without stopping the light;

if they are different targets, each is compared with the first-cycle target categories: the same category moving laterally from outside inward across the two traversal cycles triggers an emergency light stop; the same category moving from inside outward, or not moving laterally, triggers alarm information reminding the operator that people or animals are in the danger area; different categories trigger only alarm information reminding the operator that people or animals are in the danger area;

3) the third cycle:

from the third traversal cycle on, the current cycle is treated as the second traversal cycle and the original second traversal cycle becomes the first, starting a new control cycle; whenever no target is detected in a cycle, all states are cleared and the next cycle starts from the initial state.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211353043.9A CN115410136B (en) | 2022-11-01 | 2022-11-01 | Laser explosive disposal system emergency safety control method based on convolutional neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115410136A true CN115410136A (en) | 2022-11-29 |
CN115410136B CN115410136B (en) | 2023-01-13 |
Family
ID=84168708
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211353043.9A Active CN115410136B (en) | 2022-11-01 | 2022-11-01 | Laser explosive disposal system emergency safety control method based on convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115410136B (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103271784A (en) * | 2013-06-06 | 2013-09-04 | 山东科技大学 | Man-machine interactive manipulator control system and method based on binocular vision |
WO2018130016A1 (en) * | 2017-01-10 | 2018-07-19 | 哈尔滨工业大学深圳研究生院 | Parking detection method and device based on monitoring video |
CN109241913A (en) * | 2018-09-10 | 2019-01-18 | 武汉大学 | In conjunction with the ship detection method and system of conspicuousness detection and deep learning |
CN109412689A (en) * | 2018-10-19 | 2019-03-01 | 苏州融萃特种机器人有限公司 | A kind of robotic laser communication system and its method based on image procossing |
CN109697424A (en) * | 2018-12-19 | 2019-04-30 | 浙江大学 | A kind of high-speed railway impurity intrusion detection device and method based on FPGA and deep learning |
CN110348312A (en) * | 2019-06-14 | 2019-10-18 | 武汉大学 | A kind of area video human action behavior real-time identification method |
CN111612002A (en) * | 2020-06-04 | 2020-09-01 | 广州市锲致智能技术有限公司 | Multi-target object motion tracking method based on neural network |
CN111695482A (en) * | 2020-06-04 | 2020-09-22 | 华油钢管有限公司 | Pipeline defect identification method |
Non-Patent Citations (1)
Title |
---|
LI Huanhuan et al.: "Design of a measurement and control system for an explosive-disposal robot based on Ethernet", Measurement & Control Technology (《测控技术》) *
Also Published As
Publication number | Publication date |
---|---|
CN115410136B (en) | 2023-01-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108053427B (en) | Improved multi-target tracking method, system and device based on KCF and Kalman | |
CN110414400B (en) | Automatic detection method and system for wearing of safety helmet on construction site | |
CN112861635B (en) | Fire disaster and smoke real-time detection method based on deep learning | |
CN114140503A (en) | Power distribution network dangerous area identification device and method based on deep learning | |
CN111062373A (en) | Hoisting process danger identification method and system based on deep learning | |
CN111626170A (en) | Image identification method for railway slope rockfall invasion limit detection | |
CN103945197B (en) | Electric power facility external force damage prevention early warning scheme based on Video Motion Detection technology | |
CN112330915A (en) | Unmanned aerial vehicle forest fire prevention early warning method and system, electronic equipment and storage medium | |
CN116311081B (en) | Medical laboratory monitoring image analysis method and system based on image recognition | |
CN116052082A (en) | Power distribution station room anomaly detection method and device based on deep learning algorithm | |
CN110532937A (en) | Method for distinguishing is known to targeting accuracy with before disaggregated model progress train based on identification model | |
CN117765480B (en) | Method and system for early warning migration of wild animals along road | |
Zhang et al. | Transmission line abnormal target detection based on machine learning yolo v3 | |
CN113191273A (en) | Oil field well site video target detection and identification method and system based on neural network | |
WO2023104557A1 (en) | Machine-learning for safety rule violation determination | |
CN117523437B (en) | Real-time risk identification method for substation near-electricity operation site | |
CN115410136B (en) | Laser explosive disposal system emergency safety control method based on convolutional neural network | |
Raj et al. | Wild Animals Intrusion Detection for Safe Commuting in Forest Corridors using AI Techniques | |
KR102585665B1 (en) | Risk Situation Analysis and Hazard Object Detection System | |
CN117649741A (en) | Real-time monitoring system and method for high-risk animals around transformer substation based on deep learning | |
Tschürtz et al. | System of systems safety analysis and evaluation in ZalaZONE | |
EP4287147A1 (en) | Training method, use, software program and system for the detection of unknown objects | |
CN116030404A (en) | Artificial intelligence-based construction and safety monitoring method for electronic warning fence of operation area | |
CN114758326A (en) | Real-time traffic post working behavior state detection system | |
Addai et al. | Power and Telecommunication Lines Detection and Avoidance for Drones |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |