CN112270253A - High-altitude parabolic detection method and device


Info

Publication number
CN112270253A
Authority
CN
China
Prior art keywords
region
candidate
target object
object region
acquiring
Prior art date
Legal status
Pending
Application number
CN202011159027.7A
Other languages
Chinese (zh)
Inventor
王维治
Current Assignee
Shenzhen Infineon Information Co.,Ltd.
Original Assignee
Shenzhen Infinova Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Infinova Ltd filed Critical Shenzhen Infinova Ltd
Priority to CN202011159027.7A
Publication of CN112270253A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]

Abstract

The application provides a high-altitude parabolic detection method and device, relates to the technical field of intelligent security monitoring, and can effectively improve the accuracy of high-altitude parabolic detection. The method comprises the following steps: acquiring multiple surveillance video frames of the same wall surface of a building; acquiring a plurality of candidate object regions from the surveillance video frames; acquiring an object feature vector corresponding to each candidate object region; acquiring target object regions from the candidate object regions according to the object feature vectors, wherein the similarity between the object feature vectors corresponding to any two of the target object regions is greater than a preset threshold; and detecting a high-altitude parabola according to the target object regions.

Description

High-altitude parabolic detection method and device
Technical Field
The application relates to the technical field of intelligent security monitoring, and in particular to a method and device for detecting high-altitude parabolas, i.e., objects thrown or dropped from a height of a building.
Background
In recent years, casualties caused by high-altitude parabolic behavior have occurred frequently in urban communities, posing a safety threat to residents. High-altitude parabolic behavior is transient, and the interference factors in real scenes are complex. In the prior art, detecting high-altitude parabolas with motion detection technology suffers from a high false alarm rate.
Disclosure of Invention
The embodiment of the application provides a high-altitude parabolic detection method and device, which can improve the high-altitude parabolic detection accuracy.
In a first aspect, the present application provides a method for detecting a high-altitude parabola, including: acquiring multiple surveillance video frames of the same wall surface of a building; acquiring a plurality of candidate object regions from the surveillance video frames; acquiring an object feature vector corresponding to each candidate object region; acquiring target object regions from the candidate object regions according to the object feature vectors, wherein the similarity between the object feature vectors corresponding to any two of the target object regions is greater than a preset threshold; and detecting a high-altitude parabola according to the target object regions.
Optionally, the acquiring an object feature vector corresponding to each candidate object region includes: acquiring, through a neural network recognition model, the feature codes corresponding to the object feature points in each candidate object region; and generating the object feature vector corresponding to each candidate object region according to the feature codes corresponding to the object feature points.
Optionally, the method further comprises: identifying the object in each candidate object region through a neural network detection model; and filtering out, according to the identified objects, a first candidate object region from the candidate object regions to obtain filtered candidate object regions; the object in the first candidate object region belongs to a preset false-alarm object;
the obtaining of the object feature vector corresponding to each candidate object region includes: obtaining object feature vectors corresponding to the filtered candidate object regions;
correspondingly, the obtaining a target object region from the candidate object region according to the object feature vector includes: and acquiring a target object region from the filtered candidate object region according to the object feature vector corresponding to the filtered candidate object region.
Optionally, the detecting a high altitude parabola according to the target object region includes: acquiring a horizontal movement distance and a vertical movement distance between two adjacent target object areas; and under the condition that the object in the target object region is determined to meet a preset high altitude parabolic condition according to the horizontal movement distance and the vertical movement distance, determining that the object in the target object region is the high altitude parabolic object.
Optionally, the preset high-altitude parabolic condition includes: determining, according to the vertical movement distance, that the object in the target object region moves toward the ground in the vertical direction with acceleration; and determining, according to the horizontal movement distance, that the object in the target object region moves away from the wall surface in the horizontal direction with deceleration.
Optionally, the method further comprises: marking a high-altitude parabolic mark in a monitoring video frame where the high-altitude parabolic is located to obtain a processed monitoring video frame; and generating a monitoring mark video according to the processed monitoring video frame.
Optionally, the method further comprises: and sending the monitoring mark video to an early warning platform, wherein the early warning platform is used for displaying the monitoring mark video to monitoring personnel.
By adopting the above high-altitude parabolic detection method, candidate object regions can be acquired from surveillance video frames of the same wall surface. Considering that different objects may appear across the candidate object regions, the candidate object regions are filtered to obtain the target object regions, between any two of which the object-feature similarity is high, so that the objects in the target object regions are the same object. Detecting the high-altitude parabola based on the target object regions therefore improves detection accuracy and solves the prior-art problem of a high false alarm rate when detecting high-altitude parabolas through motion detection technology.
In a second aspect, the present application provides a high altitude parabola detection device, comprising:
the acquisition module is used for acquiring multiple frames of monitoring video frames of the same wall of a building body;
acquiring a plurality of candidate object areas from the plurality of monitoring video frames;
acquiring an object feature vector corresponding to each candidate object region; and,
acquiring a target object region from the candidate object region according to the object feature vector; similarity between object feature vectors corresponding to any two object regions in the target object region is greater than a preset threshold value;
and the detection module is used for detecting the high-altitude object throwing according to the target object area.
In a third aspect, the present application provides a high-altitude parabolic detection apparatus comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method according to the first aspect or any alternative of the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer readable storage medium storing a computer program which, when executed by a processor, implements a method according to the first aspect or any of the alternatives of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product, which, when run on a detection apparatus for a high altitude parabola, causes the detection apparatus for a high altitude parabola to perform the steps of the method of the first aspect or any alternative of the first aspect.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of a high altitude parabola detection method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating a comparison between multiple surveillance video frames provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a differential image provided by an embodiment of the present application;
fig. 4 is a schematic diagram illustrating comparison between detection results of a background image and a plurality of surveillance video frames according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram illustrating a comparison between multiple surveillance video frames provided by an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating a comparison between multiple surveillance video frames provided by an embodiment of the present application;
fig. 7 is a schematic flow chart of another high altitude parabola detection method provided in the embodiments of the present application;
fig. 8 is a schematic flow chart of another high altitude parabola detection method provided in the embodiments of the present application;
FIG. 9 is a schematic diagram illustrating a comparison between multiple surveillance video frames provided by an embodiment of the present application;
fig. 10 is a schematic structural diagram of a high altitude parabola detection device provided by an embodiment of the application;
fig. 11 is a schematic structural diagram of a high altitude parabolic detection apparatus according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items. Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
It should also be appreciated that reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The detection method of the high altitude parabola provided by the present application is exemplarily described below by specific embodiments.
Referring to fig. 1, fig. 1 is a schematic flow chart of a high-altitude parabola detection method provided by the present application. The execution subject of the method in this embodiment may be a high-altitude parabolic detection device, i.e., an intelligent device with a camera; the camera may be mounted on the detection device or arranged separately from it, in which case it communicates with a processor in the detection device. Of course, the execution subject may also be a camera or similar device with data processing capability. The following embodiments take an execution subject that includes a camera as an example.
As shown in fig. 1, the high altitude parabola detection method may include:
s101, obtaining multiple frames of monitoring video frames of the same wall of the building.
In the embodiment of the application, the camera captures surveillance video of the same wall surface of a building and acquires multiple video frames from it. The frames are ordered according to the camera's capture sequence.
It can be understood that different wall surfaces of a building look different. If surveillance video frames of different wall surfaces were mixed, the wall portions in those frames would differ and could not serve as a common background image, so high-altitude parabolas could not be detected. The application therefore performs high-altitude parabolic detection against a single wall surface.
It should also be understood that if a wall surface has no openings such as windows or balconies from which an object could be thrown, no high-altitude parabolic behavior will occur on that wall surface. The camera in this application therefore needs to monitor wall surfaces that contain windows, balconies, and the like; that is, the same wall surface described above has such openings.
In one embodiment, the application can perform frame extraction on the surveillance video to obtain the multiple video frames. Specifically, a frame rate for extraction is determined first, and frames are then extracted from the video at that rate. For example, at a frame rate of 10 frames per second, 10 frames are extracted from each second of the surveillance video.
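As a minimal sketch of this frame-extraction step, assuming OpenCV is available (the function name and default rate are illustrative, not taken from the patent):

```python
import cv2

def extract_frames(video_path: str, target_fps: float = 10.0) -> list:
    """Sample a surveillance video at roughly target_fps frames per second."""
    cap = cv2.VideoCapture(video_path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or target_fps  # fall back if fps is unknown
    stride = max(1, round(native_fps / target_fps))       # keep every stride-th frame
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % stride == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames
```

Setting `target_fps` to the camera's native frame rate reproduces the full-frame-rate variant described next.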
In another embodiment, it is considered that a high-altitude parabola moves quickly, and the closer it is to the ground, the faster it moves. To avoid missed detections, the frame rate can be set to the camera's full frame rate, so that every frame of the surveillance video is used as one of the multiple surveillance video frames in the application.
It will also be appreciated that the present application is applicable to high altitude parabolic scenes, and therefore requires cameras to be mounted in position to monitor the corresponding building.
In one scenario, the application can install a camera at a certain distance from the wall surface of the building, for example at a position 10 m from the wall surface.
In another scenario, the field of view of one camera is limited, and if the wall surface of the building is large (in height and width), a single camera may not capture the entire wall. The application can therefore provide a corresponding camera for each range of floors on each wall surface of the building. In that case, the multiple surveillance video frames may include frames of the same wall surface over different floor ranges.
It will be appreciated that the different floor ranges described above include a succession of floors; and, there may be overlapping floors, or no overlapping floors, of different floor ranges.
Illustratively, one camera can typically monitor 8 to 12 floors. If a building has 32 floors, a first division of the floor ranges is: floors 1-8, 8-15, 15-22, and 22-32; a second division is: floors 1-8, 9-16, 17-24, and 25-32. One camera is arranged to monitor each floor range, so each wall of a 32-floor building requires 4 cameras for full coverage. For the first division, the collected surveillance video frames may include frames corresponding to floors 1-8, 8-15, 15-22, and 22-32 respectively.
The cameras corresponding to different floor ranges of one wall surface can be installed at the same vertical height, for example, if a 32-floor building is monitored, the vertical height of each camera can be 3 m. Or, the camera can be arranged at different vertical heights according to different floor ranges, so that the floor ranges can be monitored.
Further, considering that a single wall surface may be wide, a single camera may not be able to photograph the whole width. The application can therefore arrange multiple cameras for the same floor range of the same wall surface: the wider the wall, the more cameras; the narrower the wall, the fewer. In this way, an appropriate number of cameras can be matched to the size of a specific wall, avoiding incomplete coverage.
Alternatively, buildings of three stories or fewer are generally shielded by green belts and the like, and objects falling from them rarely cause casualties. Therefore, in one embodiment, cameras can be installed to monitor only the wall surfaces above the third floor, without performing high-altitude parabolic detection for the third floor and below. Choosing a suitable number of cameras for the specific application scene avoids redundant installation and reduces cost.
Alternatively, it is considered that a high-altitude parabola appears only at the windows, balconies, and nearby areas of a wall surface, and not elsewhere. Therefore, the mounting position and field of view of the camera can be chosen to cover the windows, balconies, and nearby areas of the monitored wall surface.
In another scenario, considering that the camera might expose the privacy of users in the building, the mounting elevation angle and position of the camera need to be adjusted so that the camera cannot see indoors through the windows or balconies on the wall surface. Illustratively, the mounting elevation angle may be between 45° and 60°. Alternatively, the privacy regions in the surveillance video frames can be blurred to obtain blurred frames, and the candidate object regions in the subsequent steps are then acquired from the blurred frames.
It can be understood that if the cameras of different floor ranges are arranged at the same vertical height, the higher the highest floor in the floor range, the larger the installation elevation angle of the corresponding camera, and conversely, the lower the highest floor in the floor range, the smaller the installation elevation angle of the corresponding camera. For example, the range of different floors includes: 1-8 layers, 8-15 layers, 15-22 layers and 22-32 layers, the installation elevation angle corresponding to the 1-8 layers is 45 degrees, the installation elevation angle corresponding to the 8-15 layers is 50 degrees, the installation elevation angle corresponding to the 15-22 layers is 55 degrees, and the installation elevation angle corresponding to the 22-32 layers is 63 degrees. In this way, monitoring of the range of each floor is achieved. The above examples are merely illustrative, and the present application is not limited thereto.
It should be noted that the camera's field of view and adjusted focal length should correspond to the scene of the monitored building, and its resolution should exceed a certain value so that objects as small as a cigarette butt can be detected, avoiding missed detections of high-altitude parabolas.
Illustratively, fig. 2 shows a comparison between multiple surveillance video frames, taking as an example a building of 10 floors with one monitored wall surface. Figs. 2(a), 2(b), 2(c), and 2(d) show surveillance video frames acquired at a first, second, third, and fourth time respectively, with the times ordered from earliest to latest.
S102, obtaining a plurality of candidate object areas from the plurality of monitoring video frames.
In the embodiment of the application, the plurality of candidate object regions are acquired from the multiple surveillance video frames through a preset motion detection algorithm.
It should be understood that the motion detection algorithm may include, but is not limited to, frame differencing, background differencing, and the like.
For example, with the frame difference method, the camera can subtract the pixel values at the same pixel positions in two adjacent surveillance video frames to obtain a first difference image corresponding to those two frames; a first motion region can then be obtained from the first difference image; and a second motion region is marked, according to the position information of the first motion region, in the later of the two adjacent frames, with the pixels of the second motion region in one-to-one correspondence with the first motion region. If a candidate object region of the earlier frame lies within the second motion region, it is removed, and the candidate object region of the later frame is obtained.
The camera can binarize the first difference image to obtain a binarized image: pixels whose value in the first difference image is below a certain threshold are set to a first pixel value, and pixels whose value is at or above the threshold are set to a second pixel value. The pixels with the second pixel value then constitute the first motion region.
Illustratively, the two adjacent surveillance video frames are figs. 2(a) and 2(b). Their first difference image may be as shown in fig. 3, so the first motion region includes region 2A and region 2B. The second motion region (i.e., regions 2A and 2B) can then be marked at the same positions in fig. 2(b). Since fig. 2(a) already contains candidate object region 2A, region 2A is removed from fig. 2(b), leaving candidate object region 2B.
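A compact sketch of this frame-difference step, assuming OpenCV; the threshold value and the use of contour bounding boxes as motion regions are illustrative choices not specified by the patent:

```python
import cv2

def motion_regions(prev_frame, next_frame, threshold: int = 25) -> list:
    """First difference image -> binarized image -> bounding boxes of motion regions."""
    gray_prev = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray_next = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray_prev, gray_next)           # first difference image
    _, binary = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]     # (x, y, w, h) per region
```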
As another example, with the background difference method, the application can acquire a background image of the same wall surface of the building in advance, and difference each surveillance video frame against the background image to obtain a second difference image. If the second difference image shows that a motion region exists in a given frame, that motion region can be taken as a candidate object region.
Exemplarily, fig. 4 (a) illustrates a background image. In this way, the motion detection is performed on each of the surveillance video frames in fig. 2 by using the background image shown in fig. 4 (a), and the candidate object region in each of the surveillance video frames is obtained. As shown in fig. 4, the graph (b) in fig. 4 is the detection result of the graph (a) in fig. 2, that is, the candidate object region includes the region 2A; fig. 4 (c) is a detection result of fig. 2 (B), that is, the object candidate region includes a region 2B; fig. 4 (d) is a detection result of fig. 2 (C), that is, the object candidate region includes a region 2C; fig. 4 (e) is a detection result of fig. 2 (D), that is, the object candidate region includes a region 2D.
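The background-difference variant described above can be sketched analogously, again with OpenCV and an illustrative threshold; the pre-acquired background image is assumed to be a frame of the empty wall:

```python
import cv2

def candidate_regions(frame, background, threshold: int = 25) -> list:
    """Second difference image against a fixed background -> candidate object regions."""
    gray_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray_bg = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray_frame, gray_bg)
    _, binary = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]
```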
It should be noted that each region in fig. 2 is represented by a dashed box, drawn larger than the object for readability. In practice a region may be exactly the pixels occupied by the object, or a rectangular region determined by the object's maximum width and height; this application does not particularly limit it.
In an alternative embodiment of the present application, the multiple surveillance video frames may include frames of the same wall surface over different floor ranges. Since the frames of each floor range are acquired in capture order, the candidate object regions of each floor range can be acquired from those frames; together, the candidate object regions of the respective floor ranges constitute the plurality of candidate object regions in the present application, and they can be sorted according to the capture order of their corresponding frames.
Illustratively, the floor ranges are floors 1-8, 8-15, 15-22, and 22-32, and surveillance video frames are captured in each floor range at different times. As shown in fig. 5, no candidate object region is detected for floors 22-32. For floors 15-22, region 5A is detected at time t2 and region 5B at time t3; for floors 8-15, region 5C is detected at time t4 and region 5D at time t5; and for floors 1-8, the object lands within this range under gravity and air resistance, so a corresponding candidate object region (not shown in fig. 5) can also be detected there. Across these floor ranges, the plurality of candidate object regions may thus include regions 5A, 5B, 5C, and 5D. The above example is merely illustrative and does not limit the present application.
And S103, acquiring object feature vectors corresponding to each candidate object region.
In the embodiment of the application, the feature codes corresponding to the object feature points in each candidate object region can be acquired through a neural network recognition model, and the object feature vector corresponding to each candidate object region is then generated from those feature codes.
Optionally, the method includes the steps of firstly acquiring key object feature points in a candidate object region through a neural network identification model; and then, taking the key object feature points as a reference, acquiring the surrounding object feature points, and so on to acquire all object feature points in the candidate object region, thereby acquiring the feature codes corresponding to the object feature points.
Optionally, the method and the device can also directly acquire all object feature points in the candidate object region through the neural network identification model, so as to acquire the feature codes corresponding to the object feature points. The specific acquisition process of the object feature points is not particularly limited.
For example, if there are 128 object feature points in the candidate object region and the feature code of each object feature point is 4 bytes, the candidate object region may have a feature code of 512 bytes. Thus, 512 bytes of feature codes can be used as the object feature vectors of the candidate object regions.
In an alternative embodiment of the present application, high-altitude parabolic image samples and the parabolic-type label vectors corresponding to them are collected in advance with multiple cameras. Considering that a deep neural network can autonomously extract deep features of an image, the application can input a high-altitude parabolic image sample into a preset deep learning network to obtain a label result vector; an error function is then calculated from the label result vector and the parabolic-type label vector; and the parameters of the network are adjusted through the error function. Repeating these steps adjusts the network parameters, achieving model training and yielding the neural network recognition model.
It can be understood that a plurality of network layers exist in the neural network deep learning network, so that the feature vectors output by the specified network layers in the neural network deep learning network need to be used as the object feature vectors corresponding to the candidate object regions in the present application.
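As an illustrative sketch only (the patent does not disclose the network architecture, layer sizes, or loss function; all of those below are assumptions): a small PyTorch classifier is trained on parabolic-type labels, and its 128-dimensional penultimate layer plays the role of the designated network layer whose output serves as the object feature vector.

```python
import torch
import torch.nn as nn

class ObjectNet(nn.Module):
    """Hypothetical recognition network: a classifier head trained on
    parabolic-type labels; the 128-dim penultimate layer stands in for
    the 'designated network layer' that yields the object feature vector."""
    def __init__(self, num_types: int = 10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 128),                     # feature-vector layer
        )
        self.head = nn.Linear(128, num_types)       # label result vector

    def forward(self, x):
        feature = self.backbone(x)                  # object feature vector
        return self.head(feature), feature

model = ObjectNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(4, 3, 64, 64)                  # placeholder image samples
labels = torch.randint(0, 10, (4,))                 # placeholder type labels

# One training step: the error function compares the label result vector
# with the parabolic-type label, and its gradient adjusts the parameters.
logits, _ = model(images)
loss = nn.functional.cross_entropy(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```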
S104, acquiring a target object region from the candidate object region according to the object feature vector; and the similarity between the object feature vectors corresponding to any two object regions in the target object region is greater than a preset threshold value.
It can be understood that if a bird appears in one surveillance video frame and a beverage bottle in another, the two frames contain different objects, so the application cannot use high-altitude parabolic features to further determine whether a high-altitude parabola exists across the two frames. Based on this, the application first needs to acquire target object regions in which the same object exists.
Further, if the similarity between the object feature vectors corresponding to two candidate object regions is greater than a preset threshold, the objects contained in those two regions can be determined to be the same object; if the similarity is less than or equal to the preset threshold, the objects are determined to be different. Therefore, by computing the similarity between the object feature vectors of any two candidate object regions, the target object regions belonging to the same object can be accurately acquired, and candidate object regions not belonging to that object can be filtered out. In this way, detection of non-high-altitude parabolas is avoided.
It should be understood that the candidate object regions may be sorted according to the collecting sequence of the monitoring video frames. In this way, it is determined whether the object objects included in the two object regions in each two adjacent candidate object regions are the same object according to the result of the ranking of the candidate object regions.
Illustratively, fig. 6 shows a comparison between multiple surveillance video frames, again taking a building of 10 floors with one monitored wall surface as an example. Figs. 6(a), 6(b), 6(c), and 6(d) show surveillance video frames acquired at a first, second, third, and fourth time respectively, with the times ordered from earliest to latest.
The candidate object regions of fig. 6(a) include region 6A in which object 6A exists; those of fig. 6(b) include region 6B in which object 6B exists; those of fig. 6(c) include region 6C in which object 6C exists; and those of fig. 6(d) include region 6D in which object 6D exists. Since the object in fig. 6(c) is a bird while the objects in figs. 6(a), 6(b), and 6(d) are beverage bottles, the application needs to filter out candidate object region 6C shown in fig. 6(c) and take the corresponding regions in figs. 6(a), 6(b), and 6(d) (i.e., regions 6A, 6B, and 6D) as the target object regions.
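A minimal sketch of this similarity filtering, under the assumptions that cosine similarity is the measure (the patent does not name one) and that each region is compared against the last kept region in capture order:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def target_object_regions(regions: list, features: list, threshold: float = 0.8) -> list:
    """Keep regions whose feature vector matches the previously kept
    region's vector above the threshold, i.e. likely the same object."""
    if not regions:
        return []
    kept = [0]                                   # indices of target object regions
    for i in range(1, len(regions)):
        if cosine_similarity(features[kept[-1]], features[i]) > threshold:
            kept.append(i)
    return [regions[i] for i in kept]
```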
And S105, detecting a high-altitude parabola according to the target object region.
In the embodiment of the application, the camera can acquire the horizontal movement distance and the vertical movement distance between two adjacent target object regions, and when the object in the target object regions is determined, from these distances, to satisfy a preset high-altitude parabolic condition, the object is determined to be a high-altitude parabola.
It can be understood that since step S104 acquires the target object regions by filtering the candidate object regions, "two adjacent target object regions" means the candidate object regions in two adjacent frames of the first target surveillance video frames, where the first target surveillance video frames are the frames that contain a target object region, ordered by the camera's capture sequence.
It should also be understood that if all candidate object regions are target object regions, no filtering is needed; in that case, the two adjacent target object regions are the candidate object regions in adjacent frames of the second target surveillance video frames, where the second target surveillance video frames are the frames that contain a candidate object region, likewise ordered by capture sequence.
The preset high altitude parabolic condition may include: determining that the object in the target object area moves towards the ground in the vertical direction and is accelerated motion according to the vertical moving distance; and determining that the object in the target object region is moving away from the wall surface in the horizontal direction and is moving at a deceleration according to the horizontal moving distance.
It can be understood that the camera acquires the vertical position information of the same feature point of the object in two adjacent target object regions and calculates the difference between the first and second vertical position information to obtain the vertical movement distance between them; the first vertical position information is that of the feature point in the earlier of the two regions, and the second is that of the feature point in the later region.
The movement direction of the object in the vertical direction can then be determined from the vertical movement distance: if the vertical movement distance between two adjacent target object regions is positive, the object in those regions is moving toward the ground; if it is negative, the object is moving away from the ground. Applying this to every pair of adjacent target object regions gives the vertical movement direction across all target object regions: for example, if the object moves toward the ground in every adjacent pair, the object in the target object regions is determined to move toward the ground in the vertical direction.
If the two adjacent target object regions have a regular shape, for example a rectangle, the top or bottom edge of the region can be used in place of the same feature point of the object.
Optionally, the present application may further calculate a difference between the second vertical position information and the first vertical position information, so as to obtain a vertical movement distance between two adjacent target object regions. In this way, if the vertical movement distance between two adjacent target object regions is negative, it can be determined that the object objects in the two adjacent target object regions move towards the ground in the vertical direction; in contrast, if the vertical movement distance between two adjacent target object regions is a positive number, it can be determined that the object in the two adjacent target object regions is moving away from the ground in the vertical direction.
It should also be understood that the motion type of the object in the vertical direction can be determined from the vertical movement distances by comparing a first vertical movement distance with a second vertical movement distance. The first vertical movement distance is the absolute value of the vertical movement distance of a first group of regions, and the second is that of a second group of regions; each group consists of two adjacent target object regions, and the earlier region of the first group is the same region as the later region of the second group. If the first vertical movement distance is greater than the second, the object is accelerating in the vertical direction over the two groups; if it is smaller, the object is decelerating. Applying this to all groups gives the vertical motion type across all target object regions: for example, if the object accelerates over every two consecutive groups, the object in the target object regions is determined to be accelerating in the vertical direction.
Optionally, the present application may further obtain a difference between the first vertical movement distance and the second vertical movement distance, and if the difference between the first vertical movement distance and the second vertical movement distance is a positive number, it may be determined that the object objects in the two groups of object regions are in acceleration motion in the vertical direction; if the difference between the first vertical movement distance and the second vertical movement distance is negative, it can be determined that the object objects in the two sets of object regions move with deceleration in the vertical direction.
Similarly, the camera acquires the horizontal position information of the same feature point of the object in two adjacent target object regions and calculates the difference between the first and second horizontal position information to obtain the horizontal movement distance between them; the first horizontal position information is that of the feature point in the earlier of the two regions, and the second is that of the feature point in the later region.
The movement direction of the object in the horizontal direction can then be determined from the horizontal movement distance: if the horizontal movement distance between two adjacent target object regions is negative, the object in those regions is moving away from the wall surface; if it is positive, the object is moving closer to the wall surface. Applying this to every pair of adjacent target object regions gives the horizontal movement direction across all target object regions: for example, if the object moves away from the wall surface in every adjacent pair, the object in the target object regions is determined to move away from the wall surface in the horizontal direction.
It should be noted that, if two adjacent target object regions are regular shapes, for example, rectangles, the same feature point of the object may be replaced by a left frame or a right frame in the two adjacent target object regions.
Optionally, the present application may further calculate a difference between the second horizontal position information and the first horizontal position information, so as to obtain a horizontal movement distance between two adjacent target object regions. Thus, if the horizontal movement distance between two adjacent target object regions is a positive number, it can be determined that the object objects in the two adjacent target object regions move away from the wall surface in the horizontal direction; on the contrary, if the horizontal movement distance between two adjacent target object regions is negative, it can be determined that the object objects in the two adjacent target object regions move close to the wall surface in the horizontal direction.
It should also be understood that the motion type of the object in the horizontal direction can be determined from the horizontal movement distances by comparing a first horizontal movement distance with a second horizontal movement distance, which are the absolute values of the horizontal movement distances of the first and second groups of regions defined above. If the first horizontal movement distance is greater than the second, the object is accelerating in the horizontal direction; if it is smaller, the object is decelerating. Applying this to all groups gives the horizontal motion type across all target object regions: for example, if the object decelerates over every two consecutive groups, the object in the target object regions is determined to be decelerating in the horizontal direction.
Optionally, the present application may further obtain a difference between the first horizontal movement distance and the second horizontal movement distance, and if the difference between the first horizontal movement distance and the second horizontal movement distance is a positive number, it may be determined that the object objects in the two groups of object regions are in acceleration motion in the horizontal direction; if the difference between the first horizontal movement distance and the second horizontal movement distance is negative, it can be determined that the object objects in the two sets of object regions move in the horizontal direction at a deceleration.
For example, taking the multiple surveillance video frames in fig. 2 as an example, with rectangular target object regions: figs. 2(a) and 2(b) are two adjacent frames, so target object region 2A in fig. 2(a) and target object region 2B in fig. 2(b) are two adjacent target object regions. The first vertical position information corresponding to the bottom edge of region 2A and the second vertical position information corresponding to the bottom edge of region 2B can be acquired, and their difference gives the vertical movement distance, denoted h in fig. 2(b). Since h is positive, it can be determined, combining figs. 2(a) and 2(b), that the object in the two adjacent target object regions moves toward the ground in the vertical direction.
Similarly, the first horizontal position information corresponding to the left edge of region 2A and the second horizontal position information corresponding to the left edge of region 2B can be acquired, and their difference gives the horizontal movement distance, denoted w in fig. 2(b). Combining figs. 2(a) and 2(b), w is negative, so it can be determined that the object in the two adjacent target object regions moves away from the wall surface in the horizontal direction.
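Putting the direction and motion-type checks together, a sketch under stated assumptions: boxes are (x, y, w, h) ordered by capture time, the image y-axis points down, and the wall sits at the left of the frame so that moving away from it means x increasing (flip the sign for the mirrored layout):

```python
def satisfies_parabolic_condition(boxes: list) -> bool:
    """Check the preset high-altitude parabolic condition on a sequence
    of adjacent target object boxes (x, y, w, h)."""
    if len(boxes) < 3:                 # need two displacements to compare
        return False
    dys = [boxes[i + 1][1] - boxes[i][1] for i in range(len(boxes) - 1)]
    dxs = [boxes[i + 1][0] - boxes[i][0] for i in range(len(boxes) - 1)]
    toward_ground = all(dy > 0 for dy in dys)
    accelerating = all(dys[i + 1] > dys[i] for i in range(len(dys) - 1))
    away_from_wall = all(dx > 0 for dx in dxs)
    decelerating = all(abs(dxs[i + 1]) <= abs(dxs[i]) for i in range(len(dxs) - 1))
    return toward_ground and accelerating and away_from_wall and decelerating
```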
By adopting the method provided by the embodiment of the application, candidate object regions can be acquired from surveillance video frames of the same wall surface, and, considering that different objects may appear across them, the candidate object regions are filtered to obtain the target object regions, between any two of which the object-feature similarity is high, so that the objects in the target object regions are the same object. Detecting the high-altitude parabola based on the target object regions therefore improves detection accuracy and solves the prior-art problem of a high false alarm rate when detecting high-altitude parabolas through motion detection technology.
Referring to fig. 7, fig. 7 is a schematic flow chart of a high altitude parabola detection method provided by the present application. As shown in fig. 7, the high altitude parabola detection method may include:
s701, obtaining multiple frames of monitoring video frames of the same wall of the building.
S702, acquiring a plurality of candidate object areas from the plurality of monitoring video frames.
S703, identifying the object in each candidate object area through a neural network detection model.
It can be understood that existing motion detection technology may mistake a false-alarm object for a high-altitude parabola: for example, leaves, clouds, raindrops, and the like may trigger the detection of a candidate object region even though they are not high-altitude parabolas.
Based on this, the application can acquire false-alarm image samples, which may include leaves, clouds, raindrops, and the like. The false-alarm objects in the samples are marked, and model training and tuning of a deep neural network are then performed with the marked samples to obtain the neural network detection model.
Optionally, a single trained detection model may not fit all false-alarm scenarios, so the application can also train a separate neural network detection model for each type of false-alarm object, e.g., one model for falling leaves, one for clouds, and one for raindrops. Because each false-alarm scene is then simpler, the trained models have higher detection accuracy, and whether the object in a candidate object region is a false-alarm object can be checked with the model corresponding to each false-alarm type.
S704, filtering a first candidate object region in the candidate object regions according to the object to obtain a filtered candidate object region; the object in the first candidate object region belongs to a preset false alarm object.
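A minimal sketch of steps S703-S704; the class names and the `classify` callable standing in for the neural network detection model are hypothetical:

```python
FALSE_ALARM_CLASSES = {"leaf", "cloud", "raindrop"}   # preset false-alarm objects

def filter_false_alarms(regions: list, classify) -> list:
    """Drop every candidate object region whose identified object
    belongs to a preset false-alarm object."""
    return [region for region in regions
            if classify(region) not in FALSE_ALARM_CLASSES]
```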
S705, obtaining object characteristic vectors corresponding to the filtered candidate object regions.
S706, according to the object feature vector corresponding to the filtered candidate object region, acquiring a target object region from the filtered candidate object region.
And S707, detecting a high altitude parabola according to the target object area.
The specific contents of S701, S702, and S705 to S707 may refer to the descriptions in S101 to S105, respectively, and are not described herein again.
It should be noted that the present application does not limit the order of the filtering steps S703-S704; for example, they may also be executed after S706, in which case the application filters the target object regions to obtain final object regions and then detects the high-altitude parabola from the final object regions.
In summary, the present application considers that a false-alarm object may trigger the detection of a candidate object region under motion detection technology, and that directly treating such an object as a high-altitude parabola makes the detection accuracy low. Therefore, the method and the device filter out the candidate object regions caused by false-alarm objects, avoid treating a false-alarm object as a high-altitude parabola, and improve the detection accuracy of the high-altitude parabola.
In conjunction with the embodiment shown in fig. 1, as shown in fig. 8, in an alternative embodiment of the present application, after S105, the following steps may be further included:
S106, marking a high-altitude parabolic sign in the monitoring video frame where the high-altitude parabola is located, to obtain a processed monitoring video frame.
It will be appreciated that the high-altitude parabolic sign may be a dashed rectangular box as shown in fig. 2, or may be a designated icon such as "!". In this way, monitoring personnel can quickly recognize the monitoring video frames in which a high-altitude parabola exists.
Illustratively, fig. 9 shows a schematic comparison of multiple monitoring video frames. As shown in fig. 9, a high-altitude parabola exists in diagrams (a), (b) and (d) of fig. 9, so an "!" high-altitude parabolic sign is provided near the parabola in each of these diagrams. No high-altitude parabola exists in diagram (c) of fig. 9, so no high-altitude parabolic sign is provided in diagram (c).
It should also be understood that, in the embodiment of the present application, a starting-position identifier may further be marked at the starting position of the high-altitude parabola, so as to facilitate quickly finding the suspected floor from which it was thrown.
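A minimal OpenCV sketch of the marking in S106, covering the rectangle, the "!" sign, and the optional starting-position identifier; the colors, font, and pixel offsets are illustrative choices:

```python
# A minimal OpenCV sketch of S106: a rectangle plus an "!" sign near the
# falling object, and an optional starting-position marker. Colors, font,
# and offsets are illustrative choices, not patent specifics.
import cv2

def mark_frame(frame, box, start_point=None):
    x, y, w, h = box
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.putText(frame, "!", (x + w + 5, y + 15),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)
    if start_point is not None:                 # starting-position identifier
        cv2.drawMarker(frame, start_point, (255, 0, 0),
                       markerType=cv2.MARKER_CROSS, markerSize=12, thickness=2)
    return frame
```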
S107, generating a monitoring mark video according to the processed monitoring video frames.
It will be appreciated that a single monitoring video frame cannot show the trajectory of a high-altitude parabola, so the offender may not be accurately traced from it. Therefore, in the embodiment of the application, the monitoring video frames marked with high-altitude parabolic signs can be combined, in order of acquisition time, into a monitoring mark video for monitoring personnel to review afterwards.
Optionally, in order to ensure the fluency of the video, the application may further generate the monitoring mark video from both the processed monitoring video frames and the unprocessed monitoring video frames, the unprocessed frames being those of the multiple monitoring video frames other than the processed ones. This avoids the poor viewing effect caused by missing frames when monitoring personnel review the video; a minimal assembly sketch follows.
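A hedged sketch of assembling the monitoring mark video, assuming OpenCV; the codec, frame rate, and output path are illustrative assumptions:

```python
# A hedged sketch of S107, assuming OpenCV: write processed and unprocessed
# frames back out in acquisition order so playback stays fluent. The codec,
# frame rate, and output path are illustrative assumptions.
import cv2

def write_marked_video(frames, path="marked.mp4", fps=25.0):
    """`frames`: acquisition-time-ordered list of BGR ndarrays (same size)."""
    h, w = frames[0].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for frame in frames:
        writer.write(frame)
    writer.release()
```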
Optionally, the embodiment of the present application may further include: sending the monitoring mark video to an early warning platform, where the early warning platform is used to display the monitoring mark video to monitoring personnel. In this way, an alarm situation can be discovered on the early warning platform and traced to its source in time.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Based on the high-altitude parabola detection method provided by the above embodiments, an embodiment of the present application further provides an apparatus embodiment implementing the above method embodiments.
Referring to fig. 10, fig. 10 is a schematic view of a high-altitude parabola detection apparatus provided in an embodiment of the present application. The modules included are used to perform the steps in the embodiments corresponding to fig. 1, fig. 7 or fig. 8; for details, refer to the related descriptions of those embodiments. For convenience of explanation, only the portions related to the present embodiment are shown. Referring to fig. 10, the high-altitude parabola detection apparatus 10 includes:
the acquisition module 101 is used for acquiring multiple frames of monitoring video frames of the same wall of a building;
acquiring a plurality of candidate object areas from the plurality of monitoring video frames;
acquiring an object feature vector corresponding to each candidate object region; and
acquiring a target object region from the candidate object regions according to the object feature vectors; the similarity between the object feature vectors corresponding to any two of the target object regions is greater than a preset threshold value;
and the detection module 102 is configured to detect a high-altitude parabola according to the target object region.
Optionally, the obtaining module 101 is further configured to obtain, through a neural network recognition model, a feature code corresponding to each object feature point in each candidate object region, and to generate an object feature vector corresponding to each candidate object region according to the feature codes corresponding to the object feature points; an illustrative stand-in for such a model is sketched below.
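The application does not name a concrete neural network recognition model; purely as a stand-in, a ResNet-18 backbone with its classification head removed can produce a fixed-length object feature vector per region crop:

```python
# A hedged stand-in for the unspecified recognition model: a ResNet-18
# backbone with its classification head removed yields a 512-d object
# feature vector per region crop. Input size and weights are assumptions.
import torch
from torchvision import models, transforms

backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()        # expose the 512-d penultimate features
backbone.eval()

prep = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

@torch.no_grad()
def feature_vector(crop):
    """`crop`: PIL image of one candidate object region -> (512,) tensor."""
    return backbone(prep(crop).unsqueeze(0)).squeeze(0)
```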
Optionally, the obtaining module 101 is further configured to identify an object in each candidate object region through a neural network detection model; according to the object, filtering a first candidate object region in the candidate object regions to obtain a filtered candidate object region; the object in the first candidate object region belongs to a preset false alarm object;
The obtaining module 101 is further configured to obtain the object feature vectors corresponding to the filtered candidate object regions, and to acquire a target object region from the filtered candidate object regions according to those object feature vectors.
Optionally, the detection module 102 is further configured to obtain a horizontal movement distance and a vertical movement distance between two adjacent target object regions; and under the condition that the object in the target object region is determined to meet a preset high altitude parabolic condition according to the horizontal movement distance and the vertical movement distance, determining that the object in the target object region is the high altitude parabolic object.
Optionally, the preset high-altitude parabolic condition includes: determining, according to the vertical movement distance, that the object in the target object region moves toward the ground in the vertical direction with acceleration; and determining, according to the horizontal movement distance, that the object in the target object region moves away from the wall surface in the horizontal direction with deceleration. A minimal check of this condition is sketched below.
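A minimal sketch of checking this preset condition from the time-ordered centers of the target object regions; image coordinates with y increasing toward the ground are assumed, and the direction away from the wall is taken as +x purely for illustration:

```python
# A minimal sketch of the preset high-altitude parabolic condition, computed
# from time-ordered centers of the target object regions. Image y grows
# toward the ground; "away from the wall" is assumed to be +x here.
def satisfies_parabolic_condition(centers):
    """`centers`: list of (x, y) region centers in acquisition order."""
    if len(centers) < 3:
        return False
    dxs = [b[0] - a[0] for a, b in zip(centers, centers[1:])]
    dys = [b[1] - a[1] for a, b in zip(centers, centers[1:])]
    falling = all(dy > 0 for dy in dys)                        # toward the ground
    accelerating = all(d2 > d1 for d1, d2 in zip(dys, dys[1:]))
    away = all(dx > 0 for dx in dxs)                           # away from the wall
    decelerating = all(d2 < d1 for d1, d2 in zip(dxs, dxs[1:]))
    return falling and accelerating and away and decelerating
```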
Optionally, the high altitude parabola detection device further comprises:
the processing module is used for marking a high-altitude parabolic mark in a monitoring video frame where the high-altitude parabolic is located to obtain a processed monitoring video frame; and generating a monitoring mark video according to the processed monitoring video frame.
Optionally, the high altitude parabola detection device further comprises:
the sending module is used for sending the monitoring mark video to an early warning platform, where the early warning platform displays the monitoring mark video to monitoring personnel.
It should be noted that, because the contents of information interaction, execution process, and the like between the modules are based on the same concept as that of the embodiment of the method of the present application, specific functions and technical effects thereof may be specifically referred to a part of the embodiment of the method, and details are not described here.
Fig. 11 is a schematic diagram of a high-altitude parabola detection apparatus provided in an embodiment of the present application. As shown in fig. 11, the high-altitude parabola detection apparatus 11 of this embodiment includes: a processor 110, a memory 111, and a computer program 112, such as a high-altitude parabola detection program, stored in the memory 111 and executable on the processor 110. When executing the computer program 112, the processor 110 implements the steps in the above embodiments of the high-altitude parabola detection method, such as S101 to S105 shown in fig. 1. Alternatively, when executing the computer program 112, the processor 110 implements the functions of the modules/units in the above device embodiments, such as the functions of the acquisition module 101 and the detection module 102 shown in fig. 10.
Illustratively, the computer program 112 may be partitioned into one or more modules/units that are stored in the memory 111 and executed by the processor 110 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions for describing the execution of the computer program 112 in the high altitude parabolic detection apparatus 11. For example, the computer program 112 may be divided into an obtaining module and a detecting module, and specific functions of each module are described in the embodiment corresponding to fig. 1, which is not described herein again.
The high-altitude parabola detection device may include, but is not limited to, the processor 110 and the memory 111. Those skilled in the art will appreciate that fig. 11 is merely an example of the high-altitude parabola detection apparatus 11 and does not constitute a limitation on it; the apparatus may include more or fewer components than those shown, combine certain components, or use different components. For example, it may also include an input/output device, a network access device, a bus, and the like.
The processor 110 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 111 may be an internal storage unit of the high altitude parabolic detection device 11, such as a hard disk or an internal memory of the high altitude parabolic detection device 11. The memory 111 may also be an external storage device of the high altitude parabolic detection device 11, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, provided on the high altitude parabolic detection device 11. Further, the memory 111 may also include both an internal storage unit and an external storage device of the high altitude parabolic detection device 11. The memory 111 is used for storing the computer program and other programs and data required by the detection device of the high altitude parabola. The memory 111 may also be used to temporarily store data that has been output or is to be output.
The embodiment of the application also provides a computer readable storage medium, which stores a computer program, and the computer program can realize the detection method of the high altitude parabola when being executed by a processor.
An embodiment of the application further provides a computer program product which, when run on a high-altitude parabola detection device, enables the device to implement the above high-altitude parabola detection method.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method for detecting a high altitude parabola is characterized by comprising the following steps:
acquiring multiple frames of monitoring video frames of the same wall of a building body;
acquiring a plurality of candidate object areas from the plurality of monitoring video frames;
acquiring an object feature vector corresponding to each candidate object region;
acquiring a target object region from the candidate object regions according to the object feature vectors; the similarity between the object feature vectors corresponding to any two of the target object regions is greater than a preset threshold value;
and detecting a high altitude parabola according to the target object area.
2. The method according to claim 1, wherein the obtaining of the object feature vector corresponding to each candidate object region comprises:
acquiring feature codes corresponding to the object feature points in each candidate object region through a neural network identification model;
and generating an object feature vector corresponding to each candidate object region according to the feature codes corresponding to the object feature points.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
identifying the object in each candidate object region through a neural network detection model;
according to the object, filtering a first candidate object region in the candidate object regions to obtain a filtered candidate object region; the object in the first candidate object region belongs to a preset false alarm object;
the obtaining of the object feature vector corresponding to each candidate object region includes:
obtaining object feature vectors corresponding to the filtered candidate object regions;
correspondingly, the obtaining a target object region from the candidate object region according to the object feature vector includes:
and acquiring a target object region from the filtered candidate object region according to the object feature vector corresponding to the filtered candidate object region.
4. The method of claim 1 or 2, wherein the detecting of a high-altitude parabola according to the target object region comprises:
acquiring a horizontal movement distance and a vertical movement distance between two adjacent target object areas;
and under the condition that the object in the target object region is determined to meet a preset high altitude parabolic condition according to the horizontal movement distance and the vertical movement distance, determining that the object in the target object region is the high altitude parabolic object.
5. The method of claim 4, wherein the preset high altitude parabolic condition comprises:
determining, according to the vertical movement distance, that the object in the target object region moves toward the ground in the vertical direction with acceleration; and
determining, according to the horizontal movement distance, that the object in the target object region moves away from the wall surface in the horizontal direction with deceleration.
6. The method according to claim 1 or 2, characterized in that the method further comprises:
marking a high-altitude parabolic mark in a monitoring video frame where the high-altitude parabolic is located to obtain a processed monitoring video frame;
and generating a monitoring mark video according to the processed monitoring video frame.
7. The method of claim 6, further comprising:
and sending the monitoring mark video to an early warning platform, wherein the early warning platform is used for displaying the monitoring mark video to monitoring personnel.
8. A detection device for a high altitude parabola is characterized by comprising:
the acquisition module is used for acquiring multiple frames of monitoring video frames of the same wall of a building body;
acquiring a plurality of candidate object areas from the plurality of monitoring video frames;
acquiring an object feature vector corresponding to each candidate object region; and
acquiring a target object region from the candidate object regions according to the object feature vectors; the similarity between the object feature vectors corresponding to any two of the target object regions is greater than a preset threshold value;
and the detection module is used for detecting a high-altitude parabola according to the target object region.
9. A high-altitude parabola detection device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.