CN112800846A - High-altitude parabolic monitoring method and device, electronic equipment and storage medium - Google Patents

High-altitude parabolic monitoring method and device, electronic equipment and storage medium

Info

Publication number
CN112800846A
Authority
CN
China
Prior art keywords
mask
motion
image
value
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011627329.2A
Other languages
Chinese (zh)
Inventor
王杉杉
胡文泽
王孝宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Shenzhen Intellifusion Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd filed Critical Shenzhen Intellifusion Technologies Co Ltd
Priority to CN202011627329.2A priority Critical patent/CN112800846A/en
Publication of CN112800846A publication Critical patent/CN112800846A/en
Priority to PCT/CN2021/114803 priority patent/WO2022142414A1/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/254 - Analysis of motion involving subtraction of images
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/14 - Picture signal circuitry for video frequency region
    • H04N5/144 - Movement detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/44 - Event detection

Abstract

The embodiment of the invention provides a high-altitude parabolic monitoring method, a high-altitude parabolic monitoring device, electronic equipment and a storage medium, wherein the method comprises the following steps: acquiring a frame image sequence of a current monitoring scene; calculating a first motion mask image between adjacent frame images according to the difference between the frame images; constructing a current background image according to the first motion mask image and the frame images corresponding to it; calculating a second motion mask image between the current background image and the current frame image according to the difference between them; and monitoring the high-altitude parabola based on the second motion mask map. The method improves the real-time monitoring effect; at the same time, because the high-altitude parabola is monitored through the second mask image obtained from the current background image and the current frame image, computational complexity is reduced and calculation speed is increased, so whether a high-altitude parabola occurs can be judged in real time, further improving the monitoring effect.

Description

High-altitude parabolic monitoring method and device, electronic equipment and storage medium
Technical Field
The invention relates to the field of artificial intelligence, in particular to a high-altitude parabolic monitoring method and device, electronic equipment and a storage medium.
Background
With the development of real estate, the floors of newly built residential areas are becoming higher and higher, and the problem of objects being thrown from height is increasingly prominent. Most residential areas now install surveillance cameras to monitor conditions in the community. When a high-altitude parabolic event occurs, the relevant personnel can retrieve the recorded surveillance video of the throwing behavior, but locating the specific event requires manual frame-by-frame inspection or slow playback, which is not only labor-intensive but also prone to omissions; moreover, the event cannot be discovered in a timely manner. Furthermore, due to the installation position and resolution of the camera, the thrown object is sometimes small and not easily spotted by the naked eye in the surveillance video. Therefore, existing monitoring of high-altitude parabolic events is not effective.
Disclosure of Invention
The embodiment of the invention provides a high-altitude parabolic monitoring method which can improve the monitoring effect of high-altitude parabolic behavior.
In a first aspect, an embodiment of the present invention provides a method for monitoring a high altitude parabola, where the method includes:
acquiring a frame image sequence of a current monitoring scene;
calculating to obtain a first motion mask image between adjacent frame images according to the difference between the frame images;
constructing a current background image according to the first motion mask image and the frame image corresponding to the first motion mask image;
calculating a second motion mask image between the current background image and the current frame image according to the difference value between the current background image and the current frame image;
monitoring the high altitude parabola based on the second motion mask map.
Optionally, the calculating the first motion mask map between adjacent frame images according to the difference between the frame images includes:
calculating the difference value of the corresponding pixel point pair between the adjacent frame images, and judging whether the difference value of the corresponding pixel point pair between the adjacent frame images is larger than or equal to a preset first difference value threshold value or not;
if the difference value of the corresponding pixel point pair between the adjacent frame images is larger than or equal to a preset first difference value threshold, assigning the corresponding first mask pixel point as a motion mask value;
if the difference value of the corresponding pixel point pair between the adjacent frame images is smaller than a preset first difference value threshold, assigning the corresponding first mask pixel point as a static mask value;
and obtaining a first motion mask image based on the assigned first mask pixel points.
Optionally, the constructing a current background map according to the first motion mask map and the frame image corresponding to the first motion mask map includes:
partitioning the first motion mask image to obtain a plurality of mask areas;
constructing a plurality of background areas with the same shape as the mask area according to the mask area and the frame image corresponding to the first motion mask image;
and constructing a current background image based on the background area.
Optionally, the mask region includes a motion mask value and a still mask value, and the constructing, according to the mask region and the frame image corresponding to the first motion mask map, a plurality of background regions having the same shape as the mask region includes:
calculating a motion state of the mask region according to the motion mask value and the static mask value of the mask region;
and selecting an image area corresponding to the first motion mask image in the frame image as a background area according to the motion state of the mask area.
Optionally, the number of the first motion mask map is n frames, and the number of frame images corresponding to the first motion mask map is also n frames, where n is a positive integer greater than 0, and the calculating the motion state of the mask region according to the motion mask value and the static mask value of the mask region includes:
extracting a mask value sequence of each mask area in the first motion mask image according to the n frames of the first motion mask image, wherein the dimension of the mask value sequence is n;
extracting a corresponding target image area index according to a mask value sequence corresponding to the mask area;
and constructing a current background image based on the frame image corresponding to the first motion mask image and the target image area index.
Optionally, the calculating a second motion mask map between the current background map and the current frame image according to the difference between the current background map and the current frame image includes:
calculating the difference value of the corresponding pixel point pair of the current background image and the current frame image, and judging whether the difference value of the pixel point pair is greater than or equal to a preset second difference value threshold value or not;
if the difference value of the corresponding pixel point pair of the current background image and the current frame image is larger than or equal to a preset second difference value threshold, assigning the corresponding second mask pixel point as a motion mask value;
if the difference value of the corresponding pixel point pair of the current background image and the current frame image is smaller than a preset second difference value threshold, assigning the corresponding second mask pixel point as a static mask value;
and obtaining a second motion mask image based on the assigned second mask pixel points.
Optionally, the monitoring the high altitude parabola based on the second motion mask map includes:
monitoring whether a first motion area with the area larger than a preset area threshold appears in a second motion mask image corresponding to the current frame image, wherein the first motion area is composed of motion mask values;
if the first motion region exists, monitoring whether a second motion region with the area larger than a preset area threshold value exists in a second motion mask image corresponding to the next frame image, and under the condition that the second motion region exists, calculating the similarity between a frame image region corresponding to the first motion region and a frame image region corresponding to the second motion region, wherein the second motion region is formed by motion mask values;
if the similarity between the frame image area corresponding to the first motion area and the frame image area corresponding to the second motion area is greater than a preset similarity threshold, calculating the motion trail of the first motion area and the second motion area;
and judging whether the motion track meets the condition of high-altitude parabolic motion.
Optionally, the monitoring the high altitude parabola based on the second motion mask map includes:
removing interference information in the second motion mask image through morphological open operation to obtain a third motion mask image, wherein the third motion mask image comprises a motion mask value and a static mask value;
monitoring the high altitude parabola based on the third motion mask map.
Optionally, the method further includes:
when the high-altitude parabolic phenomenon is monitored, a preset number of front and rear frame images are obtained and used as evidence obtaining frame image sequences.
In a second aspect, an embodiment of the present invention further provides a device for monitoring a high altitude parabola, where the device includes:
the first acquisition module is used for acquiring a frame image sequence of a current monitoring scene;
the first calculation module is used for calculating to obtain a first motion mask image between adjacent frame images according to the difference value between the frame images;
the background construction module is used for constructing a current background image according to the first motion mask image and the frame image corresponding to the first motion mask image;
the second calculation module is used for calculating a second motion mask image between the current background image and the current frame image according to the difference value between the current background image and the current frame image;
and the monitoring module is used for monitoring the high altitude parabola based on the second motion mask map.
In a third aspect, an embodiment of the present invention provides an electronic device, including: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps in the high-altitude parabolic monitoring method provided by the embodiment of the invention.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the steps in the method for monitoring a high altitude parabola provided by the embodiment of the present invention.
In the embodiment of the invention, a frame image sequence of a current monitoring scene is obtained; a first motion mask image between adjacent frame images is calculated according to the difference between the frame images; a current background image is constructed according to the first motion mask image and the frame images corresponding to it; a second motion mask image between the current background image and the current frame image is calculated according to the difference between them; and the high-altitude parabola is monitored based on the second motion mask map. Because the high-altitude parabola is monitored through the second mask image obtained from the current background image and the current frame image, computational complexity is reduced and calculation speed is increased, so whether a high-altitude parabola occurs can be judged in real time, further improving the monitoring effect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a method for monitoring a high altitude parabola according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for constructing a background diagram according to an embodiment of the present invention;
FIG. 3 is a block diagram of a first motion mask map provided by an embodiment of the present invention;
FIG. 4 is a flow chart of another method for monitoring high altitude parabolas provided by an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a high altitude parabolic monitoring device according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a first computing module according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a background building block according to an embodiment of the present invention;
FIG. 8 is a block diagram of a first building submodule according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of a computing unit according to an embodiment of the present invention;
FIG. 10 is a block diagram of a second computing module according to an embodiment of the present invention;
FIG. 11 is a schematic structural diagram of a monitoring module according to an embodiment of the present invention;
FIG. 12 is a schematic structural diagram of another monitoring module provided in the embodiments of the present invention;
fig. 13 is a schematic structural diagram of another high altitude parabolic monitoring device provided in an embodiment of the present invention;
fig. 14 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a method for monitoring a high altitude parabola according to an embodiment of the present invention, as shown in fig. 1, the method is used for monitoring the high altitude parabola regularly or in real time, and includes the following steps:
101. acquiring a frame image sequence of a current monitoring scene.
In the embodiment of the present invention, the current monitoring scene may be a building scene monitored by a camera, such as a residential building, a commercial building, or an office building. The monitoring range of the camera may cover all floors of a building or only floors above a certain level, for example above the 4th floor; when the camera is installed, its monitoring range can be determined as needed and its shooting angle adjusted so that it covers the floors in the corresponding range.
The preset frame image sequence may be an image frame sequence of a preset number of frames in the video stream captured by the current camera, arranged in time order; the video stream may be a real-time video stream or a video stream from a certain period of time.
102. And calculating to obtain a first motion mask image between adjacent frame images according to the difference between the frame images.
In the embodiment of the present invention, the difference between the frame images may be understood as the difference between pixel values of corresponding pixels of two frame images. Because every frame image in a video stream shot by the same camera has the same resolution and size, the frame image of the Nth frame can be subtracted from the frame image of the (N+1)th frame to obtain a difference map, in which the value of each difference pixel is the difference of the corresponding pixel pair.
For example, assume two adjacent frame images K1 and K2, each a 3 × 3 image, as shown in the following matrices:
[Figures: example 3 × 3 matrices K1 and K2]
the difference map K = K2 - K1 between the adjacent frame images is:
[Figure: the example difference map K]
In the difference map K, the larger the absolute value of a difference, the greater the change of the pixel value at that point; for pixels that serve as background, the change is generally small. Therefore, the corresponding pixel can be determined to be in a moving or static state according to the change of its pixel value between the adjacent frame images.
Specifically, the difference value of the corresponding pixel point pair between the adjacent frame images may be calculated, and whether the difference value of the corresponding pixel point pair between the adjacent frame images is greater than or equal to a preset first difference threshold value is judged; if the difference value of the corresponding pixel point pair between the adjacent frame images is larger than or equal to a preset first difference threshold value, assigning the corresponding first mask pixel point as a motion mask value; if the difference value of the corresponding pixel point pair between the adjacent frame images is smaller than a preset first difference value threshold, assigning the corresponding first mask pixel point as a static mask value; and obtaining a first motion mask image based on the assigned first mask pixel points.
Further, the first motion mask map between adjacent frame images can be calculated from the absolute value of the difference between the frame images. The first motion mask map may be a binary mask map, in which the motion state is represented by two preset mask values: one represents the motion state (the motion mask value) and the other represents the static state (the static mask value). Specifically, the following formula can be used:
motion_mask(i, j) = b, if frame_diff(i, j) ≥ motion_thr1
motion_mask(i, j) = c, if frame_diff(i, j) < motion_thr1
In the above formula, motion_mask(i, j) represents the mask value of the first mask pixel (i, j), frame_diff(i, j) represents the absolute value of the difference at pixel (i, j) between the adjacent frame images, motion_thr1 represents the preset first difference threshold, b represents the motion mask value, and c represents the static mask value. The first motion mask map motion_mask1 records the motion change of each corresponding pixel between the adjacent frame images. For example, assuming the first difference threshold motion_thr1 is set to 30 and the first motion mask map motion_mask1 = motion_mask(K), the first motion mask map motion_mask1 is as follows:
[Figure: the first motion mask map motion_mask1 obtained from the example difference map K with motion_thr1 = 30]
in one possible embodiment, to increase the visibility of the first motion mask value, the motion mask value may be 255, the visual result of outputting the motion mask value 255 in the form of pixels 255 may be white, the still mask value may be 0, and the visual result of outputting the still mask value 0 in the form of pixels 0 may be black.
In one possible embodiment, to increase the computation speed between frame images, the frame image sequence may be preprocessed by scaling the frame images to a preset size, such as 720 × 480 or 480 × 320 pixels.
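As a rough illustration (not the patent's reference implementation), step 102 could be sketched with OpenCV and NumPy as below; the grayscale conversion and BGR input are assumptions, while the threshold 30, the 480 × 320 size and the 255/0 mask values follow the examples above.

```python
import cv2
import numpy as np

def first_motion_mask(prev_frame, cur_frame, motion_thr1=30, size=(480, 320)):
    # Scale both frames to a preset size to speed up the per-pixel difference
    # (the optional preprocessing described above); convert to grayscale first.
    prev = cv2.resize(cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY), size)
    cur = cv2.resize(cv2.cvtColor(cur_frame, cv2.COLOR_BGR2GRAY), size)
    # Absolute difference of the corresponding pixel pairs between adjacent frames.
    frame_diff = cv2.absdiff(cur, prev)
    # Pixels whose difference reaches the first difference threshold get the motion
    # mask value (255); the remaining pixels get the static mask value (0).
    return np.where(frame_diff >= motion_thr1, 255, 0).astype(np.uint8)
```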
103. And constructing a current background image according to the first motion mask image and the frame image corresponding to the first motion mask image.
In this embodiment of the present invention, the mask value in the first motion mask map may represent the motion state of the corresponding pixel. For example, the mask may be a binary mask, that is, the different motion states of each pixel are represented by two different mask values, a motion mask value and a static mask value; the motion state may be moving or static, corresponding to the motion mask value and the static mask value respectively.
A pixel whose motion state is static can be used directly as a background pixel. For a pixel whose motion state is moving, a frame image in the frame image sequence in which that pixel is static can be searched for, and the pixel in that frame image is taken as the background pixel.
Further, the number of the first motion mask images is n frames, and the number of the frame images corresponding to the first motion mask images is also n frames. And judging the motion state of the corresponding pixel point in the frame image through the pixel point in the first motion mask image, thereby determining which pixel point in which frame image can be used as a background pixel point.
When the corresponding motion mask maps are obtained from the preset image sequence, a first data set may be maintained to hold, in order, the latest frame images sampled from the preset image sequence; for example, when the (t+5)th frame is sampled, it is added to the first data set. A second data set is maintained to hold the corresponding motion mask maps; for example, after the frame image of the (t+5)th frame is added to the first data set, the motion mask map corresponding to that frame is added to the second data set. The holding capacity of both data sets is set to n frames; once n frames are exceeded, the frame image or motion mask map added earliest is removed. For example, if n is 11, when a data set holds more than 11 entries, the earliest entry is removed so that no more than 11 are kept. The removed frame image or motion mask map can be stored separately for data reuse or deleted directly. The frame images of the first data set correspond one-to-one with the motion mask maps of the second data set.
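A minimal sketch of the two rolling data sets described above, assuming Python's collections.deque; the variable names and n = 11 merely mirror the description and are not mandated by the patent.

```python
from collections import deque

n = 11  # holding capacity of each data set, in frames

first_data_set = deque(maxlen=n)   # latest n frame images
second_data_set = deque(maxlen=n)  # corresponding first motion mask maps

def push_sample(frame, motion_mask):
    # deque(maxlen=n) silently drops the earliest entry once n is exceeded,
    # matching the "remove the frame added first" behaviour described above.
    first_data_set.append(frame)
    second_data_set.append(motion_mask)
```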
Pixels whose motion state is static in the corresponding n frame images can be selected as background pixels according to the motion states of all pixels in the n frames of the first motion mask map. When a background image with the same size as the frame images (or the motion mask maps) has been obtained, construction of the current background map is complete.
104. And calculating a second motion mask image between the current background image and the current frame image according to the difference value of the current background image and the current frame image.
In the embodiment of the present invention, the difference between the current background image and the current frame image may be understood as the difference between the pixel values of the corresponding pixels between the current background image and the current frame image. Because the current background image is constructed in real time and the constructed current background image and the corresponding frame image have the same resolution and size, the current background image can be subtracted from the current frame image to obtain a corresponding difference image, and in the difference image, the value of each difference pixel point is the difference value of the corresponding pixel point pair in the current background image and the current frame image.
In this difference map, the larger the absolute value of a difference, the greater the change of the pixel value at that point; for pixels that serve as background, the change is generally small. Therefore, the change of pixel values of corresponding pixel pairs between the current background image and the current frame image can be used to determine whether each pixel in the current frame image is in a moving or static state.
Specifically, the difference between the corresponding pixel point pair of the current background image and the current frame image may be calculated, and whether the difference of the pixel point pair is greater than or equal to a preset second difference threshold is determined; if the difference value of the corresponding pixel point pair of the current background image and the current frame image is larger than or equal to the preset second difference threshold, the corresponding second mask pixel point is assigned the motion mask value; if the difference value of the corresponding pixel point pair of the current background image and the current frame image is smaller than the preset second difference threshold, the corresponding second mask pixel point is assigned the static mask value; and the second motion mask image is obtained based on the assigned second mask pixel points.
Similarly to step 102, a second motion mask map between the current background map and the current frame image can be calculated by the absolute value of the difference between the current background map and the current frame image. The second motion mask map may be a binary mask map. In a binary mask map, the motion mask map is represented by two preset mask values, one of which may represent a motion state, i.e., a motion mask value, and the other one may represent a still state, i.e., a still mask value. Specifically, the following formula can be used:
motion_mask(i, j) = b, if frame_diff(i, j) ≥ motion_thr2
motion_mask(i, j) = c, if frame_diff(i, j) < motion_thr2
In the above formula, motion_mask(i, j) represents the mask value of the second mask pixel (i, j), frame_diff(i, j) represents the absolute value of the difference at pixel (i, j) between the current background image and the current frame image, motion_thr2 represents the preset second difference threshold, b represents the motion mask value, and c represents the static mask value. The second motion mask map motion_mask2 records the motion change of each corresponding pixel between the current background map and the current frame image. For example, assuming the second difference threshold motion_thr2 is set to 30, the second motion mask map motion_mask2 = motion_mask(K2), where K2 is the difference map between the current background map and the current frame image, as follows:
[Figure: the example difference map K2 between the current background map and the current frame image]
the second motion mask map motion_mask2 is then as follows:
[Figure: the second motion mask map motion_mask2 obtained from the example difference map K2 with motion_thr2 = 30]
In one possible embodiment, to increase the visibility of the second motion mask map, the motion mask value may be 255 and the static mask value may be 0; when the mask is output as an image, motion pixels (value 255) appear white and static pixels (value 0) appear black.
105. And monitoring the high altitude parabola based on the second motion mask image.
In the embodiment of the present invention, the second motion mask map includes a motion mask value and a static mask value. It can be understood that, when there is no high-altitude parabola, only the static mask value exists in the second motion mask map; when a motion-mask region of a certain area appears in the second motion mask map, it indicates that there is a moving object in the current frame image and possible high-altitude parabolic behavior.
Meanwhile, because the second motion mask image includes a motion mask value and a static mask value, visualizing it makes a moving object easy for a worker to see, which facilitates monitoring of the monitored area. For example, if a white object is thrown from a white building, it is difficult to observe in the surveillance video with the naked eye; in the second motion mask image, however, the white object is separated from the white-building background by the mask values, increasing its visibility. For example, if the motion mask value is 255 and the static mask value is 0, motion pixels are rendered white and static pixels black; the white building, being a static background, appears black after the second mask image is visualized, while the white object, being a moving object, appears white.
In the second motion mask image corresponding to the current frame image, if a motion-mask region of large area is detected, a moving object exists; motion analysis can then be performed on the moving object to judge its motion direction, and if the motion direction is downward, a high-altitude parabolic behavior can be considered detected. Automatically detecting the appearance of motion mask values in the second motion mask image automatically monitors the monitored area, which reduces the monitoring workload and labor cost. In addition, the second motion mask image is updated with the current frame image, so it is real-time and meets the real-time requirement of video surveillance.
In the embodiment of the invention, a frame image sequence of a current monitoring scene is obtained; a first motion mask image between adjacent frame images is calculated according to the difference between the frame images; a current background image is constructed according to the first motion mask image and the frame images corresponding to it; a second motion mask image between the current background image and the current frame image is calculated according to the difference between them; and the high-altitude parabola is monitored based on the second motion mask map. Because the high-altitude parabola is monitored through the second mask image obtained from the current background image and the current frame image, computational complexity is reduced and calculation speed is increased, so whether a high-altitude parabola occurs can be judged in real time, further improving the monitoring effect.
Optionally, referring to fig. 2, fig. 2 is a flowchart of a background diagram constructing method provided by the embodiment of the present invention, and as shown in fig. 2, the method includes the following steps:
201. and partitioning the first motion mask image to obtain a plurality of mask areas.
In the embodiment of the present invention, the first motion mask map may be partitioned according to a preset rule, or the first motion mask map may be partitioned according to image recognition.
Specifically, the preset rule may be equal division: the first motion mask map is divided into M × N blocks of the same size, each block corresponding to one mask region, yielding M × N mask regions. The size of each mask region is (img_width/M, img_height/N), where img_width is the total width of the first motion mask map and img_height is its total height.
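Under the equal-division rule above, the blocking step could be sketched as follows; split_mask_regions is an illustrative name and the mask dimensions are assumed to be divisible by M and N.

```python
def split_mask_regions(motion_mask, M, N):
    # motion_mask: first motion mask map of shape (img_height, img_width)
    img_height, img_width = motion_mask.shape
    region_h, region_w = img_height // N, img_width // M
    regions = {}
    for pj in range(N):        # block row index
        for pi in range(M):    # block column index
            regions[(pi, pj)] = motion_mask[pj * region_h:(pj + 1) * region_h,
                                            pi * region_w:(pi + 1) * region_w]
    return regions
```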
202. And constructing a plurality of background areas with the same shape as the mask area according to the mask area and the frame image corresponding to the first motion mask image.
In the embodiment of the present invention, the motion state of the mask region may be calculated from the mask value of the mask region; and selecting the image area corresponding to the first motion mask image in the frame image as a background area according to the motion state of the mask area.
Specifically, the mask area includes a motion mask value and a still mask value, and a motion state of one mask area may be determined based on the motion mask value and the still mask value, and when the motion state of one mask area is determined to be still, an image area corresponding to the mask area may be taken out from the frame image corresponding to the first motion mask image as a background area. And when the motion state of the mask area is judged to be motion, judging the motion state of the mask area corresponding to the first motion mask image of the next frame. The motion state of the mask area may be determined based on an average value of the mask area motion mask value and the still mask value, or may be determined based on the number or ratio of the mask area motion mask values.
Further, in this embodiment of the present invention, the number of the first motion mask maps is n frames, and the number of the frame images corresponding to the first motion mask maps is also n frames, where n is a positive integer greater than 0, and a mask value sequence of each mask region in the first motion mask map may be extracted according to the n frames of the first motion mask maps, where a dimension of the mask value sequence is n; extracting a corresponding target image area index according to a mask value sequence corresponding to the mask area; and constructing a corresponding background area according to the frame image corresponding to the first motion mask image and the target image area index.
Furthermore, if the frame image sequence includes n+1 frame images, n first motion mask maps can be obtained by calculation, where the nth first motion mask map corresponds to the (n+1)th frame image, and each first motion mask map includes M × N mask regions. The latest n first motion mask maps may be placed into one array motion_value_array(n, M) and the latest n frame images into another array frame_patch_array(n, height, width). Each array holds at most n entries; if n is exceeded, the first motion mask map or frame image added to the array earliest is deleted.
The motion state of the mask region may be calculated by:
motion_value(pi, pj) = (M × N / (img_width × img_height)) × Σ_{(i, j) ∈ region (pi, pj)} patch(pi, pj)(i, j)
where motion_value(pi, pj) represents the motion state of the mask region (pi, pj), M × N represents the number of mask regions, img_width represents the total width of the first motion mask map, img_height represents its total height, img_width × img_height represents the pixel area of the first motion mask map, (i, j) denotes a first mask pixel in the mask region (pi, pj), and patch(pi, pj)(i, j) represents the mask value of the first mask pixel (i, j) in the mask region (pi, pj).
For the motion state motion_value of each mask region, there is an n-dimensional array representing the motion state of that block over the latest n frames. The index min_index with the smallest value in this array is taken as the target image area index, and the image area at the min_index position in the array frame_patch_array(n, height, width) of the latest n frame images is taken as the background region. For example, when M is 2 and N is 2, the first motion mask map includes 4 mask regions: mask region 1, mask region 2, mask region 3 and mask region 4, as shown in fig. 3. If the value of mask region 1 of the 2nd-frame first motion mask map in the array motion_value_array(n, M) is the smallest, mask region 1 of the 2nd-frame first motion mask map is judged to be static, the index min_index is obtained as (2nd frame, mask region 1), the 2nd frame image is looked up in the frame image array frame_patch_array(n, height, width) according to that index, and the area corresponding to mask region 1 in the 2nd frame image is taken as background region 1. Likewise, if the value of mask region 2 of the 4th-frame first motion mask map is the smallest, mask region 2 of the 4th-frame first motion mask map is judged to be static, the index min_index is obtained as (4th frame, mask region 2), the 4th frame image is looked up in frame_patch_array(n, height, width) according to that index, and the area corresponding to mask region 2 in the 4th frame image is taken as background region 2. Background region 3 and background region 4 can be found in the same way.
203. And constructing a current background image based on the background area.
In the embodiment of the invention, each background region obtained from a mask region has a corresponding positional relationship with that mask region; splicing the background regions according to the positions of the mask regions in the first motion mask map yields the current background map.
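Putting steps 201 to 203 together, the sketch below assumes that the motion state of a region is its mean mask value over the latest n first motion mask maps (consistent with the formula above, up to a constant factor) and that the frame whose region value is smallest supplies that background patch; all names are illustrative.

```python
import numpy as np

def build_background(frame_array, mask_array, M, N):
    # frame_array: the latest n frame images (grayscale, identical size)
    # mask_array:  the latest n first motion mask maps (values 255 / 0)
    img_height, img_width = mask_array[0].shape
    region_h, region_w = img_height // N, img_width // M
    background = np.zeros_like(frame_array[0])
    for pj in range(N):
        for pi in range(M):
            ys = slice(pj * region_h, (pj + 1) * region_h)
            xs = slice(pi * region_w, (pi + 1) * region_w)
            # n-dimensional sequence of motion states for this mask region
            motion_values = [mask[ys, xs].mean() for mask in mask_array]
            # target image area index: the frame in which this region is most static
            min_index = int(np.argmin(motion_values))
            background[ys, xs] = frame_array[min_index][ys, xs]
    return background
```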
In a possible embodiment, the frame image sequence participates in the real-time construction of the current background map; to speed up this construction, the number n of frame images in the frame image sequence may be limited, for example set to a smaller number such as 20 or 30. For a common camera, the video stream is captured at about 30 frames per second, so n can be set to a positive integer less than 30, which increases the speed of constructing the current background map and further improves the real-time performance of monitoring.
In the embodiment of the invention, the first motion mask image is partitioned and the current background image is constructed block by block, so that reconstruction does not need to be carried out pixel by pixel, which further improves the construction speed of the real-time background image.
Optionally, referring to fig. 4, fig. 4 is a flowchart of another high-altitude parabolic monitoring method according to an embodiment of the present invention, and based on the embodiment of fig. 1, the monitoring of the high-altitude parabolic is implemented by detecting a second motion mask map, as shown in fig. 4, including the following steps:
401. and monitoring whether a first motion region with the area larger than a preset area threshold appears in a second motion mask image corresponding to the current frame image.
In an embodiment of the present invention, the first motion region is formed of motion mask values. Because its area is larger than the preset area threshold, the first motion region corresponds to a detectable object, and the probability that it is merely noise interference is small.
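A sketch of this area check, assuming OpenCV 4's contour API applied to the binary second motion mask (motion value 255); the area threshold of 50 pixels is only an illustrative value.

```python
import cv2

def find_motion_regions(second_motion_mask, area_thr=50):
    # Extract connected motion regions from the binary mask and keep only those
    # whose contour area exceeds the preset area threshold.
    contours, _ = cv2.findContours(second_motion_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= area_thr]
```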
402. And if so, monitoring whether a second motion region with the area larger than a preset area threshold value appears in a second motion mask image corresponding to the next frame image, and calculating the similarity between the frame image region corresponding to the first motion region and the frame image region corresponding to the second motion region under the condition that the second motion region appears.
In an embodiment of the present invention, the second motion region is formed of motion mask values. Because its area is larger than the preset area threshold, the second motion region also corresponds to a detectable object, and the probability that it is merely noise interference is small.
Because the first motion region and the second motion region come from consecutive frames, whether they are motion regions corresponding to the same object can be judged through similarity calculation.
In a possible embodiment, the first motion region and the second motion region may be box-shaped regions obtained by contour fitting based on the motion mask values. If there are multiple first motion regions and multiple second motion regions, the similarity is calculated pairwise between the frame image regions corresponding to the first motion regions and those corresponding to the second motion regions. Using structural similarity (SSIM) as the similarity measure, when the SSIM between the image region corresponding to a first motion region and the image region corresponding to a second motion region is greater than a preset similarity threshold, for example greater than 0.8, the two can be judged to be motion regions of the same object. It should be noted that the frame image region corresponding to the first motion region is an image region in the current frame image, and the frame image region corresponding to the second motion region is an image region in the next frame image.
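The pairwise similarity test could be sketched as below, assuming grayscale crops and scikit-image's structural_similarity; resizing the second crop to the first crop's size is an added assumption, and the 0.8 threshold follows the example above.

```python
import cv2
from skimage.metrics import structural_similarity

def regions_match(cur_region_crop, next_region_crop, sim_thr=0.8):
    # Bring both crops to the same size, then compare their structural similarity.
    next_region_crop = cv2.resize(next_region_crop,
                                  (cur_region_crop.shape[1], cur_region_crop.shape[0]))
    ssim = structural_similarity(cur_region_crop, next_region_crop)
    return ssim > sim_thr
```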
403. And if the similarity between the frame image area corresponding to the first motion area and the frame image area corresponding to the second motion area is greater than a preset similarity threshold, calculating the motion tracks of the first motion area and the second motion area.
In the embodiment of the present invention, if the similarity between the frame image area corresponding to the first motion area and the frame image area corresponding to the second motion area is greater than the preset similarity threshold, it indicates that the object in the first motion area and the object in the second motion area are the same object, and the object is in a motion state.
Specifically, a vector distance between the center position of the first motion region and the center position of the second motion region may be calculated as the motion trajectory.
404. And judging whether the motion track meets the condition of high-altitude parabolic motion.
In the embodiment of the present invention, if the direction of the motion trajectory (which may be a vector distance) is downward, and the numerical value of the motion trajectory (which may be a vector distance) is greater than a preset numerical value, it may be determined that a high altitude parabola is generated, and otherwise, it may be determined that no high altitude parabola is generated.
In some possible embodiments, the difference obtained by subtracting the vertical coordinate cur_roi_y of the center of the first motion region from the vertical coordinate next_roi_y of the center of the second motion region may also be calculated; if the difference is positive and greater than a preset value, a high-altitude parabola can be determined to have occurred, and otherwise not. Alternatively, it can be judged whether next_roi_y is larger than a × cur_roi_y, where a is greater than 1, for example whether next_roi_y is larger than 1.2 × cur_roi_y; if so, a high-altitude parabola can be determined, and otherwise no high-altitude parabola has occurred.
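A sketch of these trajectory conditions, assuming each motion region is represented by the centre (cx, cy) of its bounding box with the y axis growing downward in image coordinates; min_drop and a = 1.2 are illustrative values.

```python
def is_high_altitude_parabola(cur_center, next_center, min_drop=5.0, a=1.2):
    cur_roi_y = cur_center[1]    # vertical coordinate of the first motion region
    next_roi_y = next_center[1]  # vertical coordinate of the second motion region
    drop = next_roi_y - cur_roi_y   # positive when the object moves downward
    # Absolute test: the region moved downward by more than a preset value ...
    if drop > 0 and drop > min_drop:
        return True
    # ... or relative test: next_roi_y larger than a * cur_roi_y with a > 1.
    return next_roi_y > a * cur_roi_y
```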
Optionally, before step 401 and step 402, the second motion mask image may be preprocessed to eliminate noise interference in the second motion mask image, so as to obtain a more accurate and clear second motion mask image as the third motion mask image. In this way, the high altitude parabola can be monitored based on the third motion mask map, so that the detection of the high altitude parabola is more accurate. Monitoring the high altitude parabola based on the third motion mask map may refer to steps 401 and 402.
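The morphological opening mentioned above could be sketched with OpenCV as follows; the 3 × 3 kernel size is an illustrative choice.

```python
import cv2
import numpy as np

def remove_noise(second_motion_mask, kernel_size=3):
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    # Opening (erosion followed by dilation) removes small isolated motion pixels,
    # yielding the third motion mask map used for the subsequent monitoring steps.
    return cv2.morphologyEx(second_motion_mask, cv2.MORPH_OPEN, kernel)
```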
Optionally, when the occurrence of a high-altitude parabola is detected, a preset number of preceding and following frame images can be acquired as the forensics frame image sequence. For example, the 100 frames before and the 100 frames after the occurrence time are retained as an evidence image sequence; manual confirmation is then performed to find which resident threw the object, and subsequent processing such as warning, punishment, or pursuit of liability is carried out.
Optionally, when the high-altitude parabolic phenomenon is monitored, a high-altitude parabolic prompting alarm can be automatically sent to the current monitoring scene and/or a management department.
The current monitoring scene refers to a scene of a location where the corresponding camera is deployed, such as a residential area a, a residential area B, and a unit C. When the high-altitude object throwing is detected by the camera of the C unit in the residential area A, a danger alarm is sent out at the C unit in the residential area A so as to prompt people nearby the C unit in the residential area A.
The management department may be a property management department, a city management department, or another organization with management authority, such as an owners' committee or a park management committee. In a possible implementation, the prompt alarm sent to the management department also includes video information of the current monitoring scene, which contains the continuous frame images in which the high-altitude parabola occurs; the alarm information can be sent to the management department or the contact terminals of relevant personnel through various channels, such as e-mail, a mobile phone APP, or WeChat official account push.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a high altitude parabola monitoring device according to an embodiment of the present invention, as shown in fig. 5, the device includes:
a first obtaining module 501, configured to obtain a frame image sequence of a current monitoring scene;
a first calculating module 502, configured to calculate a first motion mask map between adjacent frame images according to a difference between the frame images;
a background construction module 503, configured to construct a current background image according to the first motion mask image and the frame image corresponding to the first motion mask image;
a second calculating module 504, configured to calculate a second motion mask map between the current background map and the current frame image according to a difference between the current background map and the current frame image;
a monitoring module 505, configured to monitor the high altitude parabola based on the second motion mask map.
Optionally, as shown in fig. 6, the first motion mask map includes a motion mask value and a static mask value, and the first calculating module 502 includes:
the first calculating submodule 5021 is used for calculating the difference value of the corresponding pixel point pair between the adjacent frame images and judging whether the difference value of the corresponding pixel point pair between the adjacent frame images is larger than or equal to a preset first difference value threshold value or not;
the first mask submodule 5022 is used for assigning a corresponding first mask pixel point to be a motion mask value if the difference value of the corresponding pixel point pair between the adjacent frame images is greater than or equal to a preset first difference value threshold;
the second mask submodule 5023 is used for assigning the corresponding first mask pixel point as a static mask value if the difference value of the corresponding pixel point pair between the adjacent frame images is smaller than a preset first difference value threshold;
the first processing submodule 5024 is used for obtaining a first motion mask image based on the assigned first mask pixel points.
Optionally, as shown in fig. 7, the background constructing module 503 includes:
a blocking sub-module 5031, configured to block the first motion mask map to obtain a plurality of mask regions;
a first constructing sub-module 5032 configured to construct, according to the mask regions and the frame image corresponding to the first motion mask map, a plurality of background regions having the same shape as the mask regions;
the second construction sub-module 5033, configured to construct a current background map based on the background regions.
Optionally, as shown in fig. 8, the mask region includes a motion mask value and a static mask value, and the first building sub-module 5032 includes:
a calculating unit 50321, configured to calculate a motion state of the mask region according to the motion mask value and the static mask value of the mask region;
a selecting unit 50322, configured to select, according to the motion state of the mask region, a corresponding image region in the frame image corresponding to the first motion mask image as a background region.
Optionally, as shown in fig. 9, the number of the first motion mask maps is n frames, and the number of frame images corresponding to the first motion mask maps is also n frames, where n is a positive integer greater than 0, and the calculating unit 50321 includes:
a first extraction subunit 503211, configured to extract, according to the n frames of the first motion mask map, a mask value sequence of each mask region in the first motion mask map, where a dimension of the mask value sequence is n;
a second extraction subunit 503212, configured to extract a corresponding target image area index according to the mask value sequence corresponding to the mask area;
a constructing subunit 503213, configured to construct a current background map based on the frame image corresponding to the first motion mask map and the target image area index.
Optionally, as shown in fig. 10, the second calculating module 504 includes:
the second calculating submodule 5041 is configured to calculate a difference value between a corresponding pixel point pair of the current background image and the current frame image, and determine whether the difference value of the pixel point pair is greater than or equal to a preset second difference threshold;
a third mask submodule 5042, configured to assign a corresponding second mask pixel point to a motion mask value if a difference between a corresponding pixel point pair of the current background image and the current frame image is greater than or equal to a preset second difference threshold;
a fourth mask submodule 5043, configured to assign a corresponding second mask pixel point to a static mask value if a difference between a corresponding pixel point pair of the current background image and the current frame image is smaller than a preset second difference threshold;
the second processing sub-module 5044 is configured to obtain a second motion mask map based on the assigned second mask pixel points.
Optionally, as shown in fig. 11, the monitoring module 505 includes:
the first monitoring submodule 5051 is configured to monitor whether a first motion region having an area larger than a preset area threshold appears in a second motion mask image corresponding to a current frame image, where the first motion region is formed by motion mask values;
a third calculating submodule 5052, configured to monitor whether a second motion region with an area larger than a preset area threshold appears in a second motion mask image corresponding to a next frame of image if the first motion region exists, and calculate a similarity between a frame image region corresponding to the first motion region and a frame image region corresponding to the second motion region when the second motion region appears, where the second motion region is formed by motion mask values;
a fourth calculating submodule 5053, configured to calculate a motion trajectory of the first motion region and the second motion region if a similarity between the frame image region corresponding to the first motion region and the frame image region corresponding to the second motion region is greater than a preset similarity threshold;
the judgment sub-module 5054 is configured to judge whether the motion trajectory meets a high-altitude parabolic condition.
Optionally, as shown in fig. 12, the monitoring module 505 further includes:
the preprocessing submodule 5055 is configured to remove, through morphological open operation, interference information in the second motion mask map to obtain a third motion mask map, where the third motion mask map includes a motion mask value and a static mask value;
a second monitoring submodule 5056 is configured to monitor the high altitude parabola based on the third motion mask map.
Optionally, as shown in fig. 13, the apparatus further includes:
a second obtaining module 506, configured to obtain a preset number of previous and subsequent frame images as a forensics frame image sequence when it is detected that the high-altitude parabola occurs.
It should be noted that the high-altitude parabolic monitoring device provided by the embodiment of the present invention may be applied to a mobile phone, a monitor, a computer, a server, and other devices that can monitor a high-altitude parabolic object.
The high-altitude parabolic monitoring device provided by the embodiment of the invention can realize each process realized by the high-altitude parabolic monitoring method in the method embodiment, and can achieve the same beneficial effects. To avoid repetition, further description is omitted here.
Referring to fig. 14, fig. 14 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, as shown in fig. 14, including: a memory 1402, a processor 1401, and a computer program stored on the memory 1402 and executable on the processor 1401, wherein:
the processor 1401 is used for calling the computer program stored in the memory 1402, and executing the following steps:
acquiring a frame image sequence of a current monitoring scene;
calculating to obtain a first motion mask image between adjacent frame images according to the difference between the frame images;
constructing a current background image according to the first motion mask image and the frame image corresponding to the first motion mask image;
calculating a second motion mask image between the current background image and the current frame image according to the difference value between the current background image and the current frame image;
monitoring the high altitude parabola based on the second motion mask map.
Optionally, the first motion mask map includes a motion mask value and a static mask value, and the calculating, by the processor 1401, the first motion mask map between adjacent frame images by using the difference between the frame images includes:
calculating the difference value of the corresponding pixel point pair between the adjacent frame images, and judging whether the difference value of the corresponding pixel point pair between the adjacent frame images is larger than or equal to a preset first difference value threshold value or not;
if the difference value of the corresponding pixel point pair between the adjacent frame images is larger than or equal to a preset first difference value threshold, assigning the corresponding first mask pixel point as a motion mask value;
if the difference value of the corresponding pixel point pair between the adjacent frame images is smaller than a preset first difference value threshold, assigning the corresponding first mask pixel point as a static mask value;
and obtaining a first motion mask image based on the assigned first mask pixel points.
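For illustration only, this thresholding step can be sketched with NumPy roughly as follows; the concrete mask values (255 for motion, 0 for static), the grayscale-frame assumption, and the threshold of 25 are assumptions of the sketch, since the embodiment does not fix numeric values:

```python
import numpy as np

MOTION_VALUE = 255  # assumed motion mask value; the embodiment does not fix a number
STATIC_VALUE = 0    # assumed static mask value

def first_motion_mask(prev_frame: np.ndarray, curr_frame: np.ndarray,
                      first_diff_threshold: int = 25) -> np.ndarray:
    """Threshold the per-pixel difference of two adjacent grayscale frames."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = np.where(diff >= first_diff_threshold, MOTION_VALUE, STATIC_VALUE)
    return mask.astype(np.uint8)
```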
Optionally, the constructing, by the processor 1401, a current background map according to the first motion mask map and the frame image corresponding to the first motion mask map includes:
partitioning the first motion mask image to obtain a plurality of mask areas;
constructing a plurality of background areas with the same shape as the mask area according to the mask area and the frame image corresponding to the first motion mask image;
and constructing a current background image based on the background area.
Optionally, the constructing, by the processor 1401, a plurality of background regions having the same shape as the mask region according to the mask region and the frame image corresponding to the first motion mask image includes:
calculating a motion state of the mask region according to a mask value of the mask region;
and selecting an image area corresponding to the first motion mask image in the frame image as a background area according to the motion state of the mask area.
Optionally, there are n frames of the first motion mask map and n corresponding frame images, where n is a positive integer, and the calculating, by the processor 1401, the motion state of the mask region according to the motion mask value and the static mask value of the mask region includes:
extracting a mask value sequence of each mask area in the first motion mask image according to the n frames of the first motion mask image, wherein the dimension of the mask value sequence is n;
extracting a corresponding target image area index according to a mask value sequence corresponding to the mask area;
and constructing a current background image based on the frame image corresponding to the first motion mask image and the target image area index.
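One plausible reading of this block-wise background construction is sketched below in NumPy; the block size, the use of a per-block motion ratio as the mask value sequence, and the choice of the least-moving frame as the target image area index are assumptions of the sketch rather than requirements of the embodiment:

```python
import numpy as np

def build_current_background(masks: np.ndarray, frames: np.ndarray,
                             block: int = 32) -> np.ndarray:
    """masks: (n, H, W) first motion mask maps; frames: (n, H, W) matching grayscale frames.
    For each mask region, pick the frame in which that region contains the fewest motion
    mask values and copy the corresponding image area into the current background."""
    n, h, w = masks.shape
    background = np.zeros((h, w), dtype=frames.dtype)
    for y in range(0, h, block):
        for x in range(0, w, block):
            region = masks[:, y:y + block, x:x + block]
            motion_ratio = (region > 0).mean(axis=(1, 2))  # mask value sequence of dimension n
            target_index = int(np.argmin(motion_ratio))    # target image area index
            background[y:y + block, x:x + block] = \
                frames[target_index, y:y + block, x:x + block]
    return background
```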
Optionally, the calculating, by the processor 1401, a second motion mask map between the current background map and the current frame image according to a difference between the current background map and the current frame image includes:
calculating the difference value of the corresponding pixel point pair of the current background image and the current frame image, and judging whether the difference value of the pixel point pair is greater than or equal to a preset second difference value threshold value or not;
if the difference value of the corresponding pixel point pair of the current background image and the current frame image is larger than or equal to the preset second difference value threshold, assigning the corresponding second mask pixel point as a motion mask value;
if the difference value of the corresponding pixel point pair of the current background image and the current frame image is smaller than a preset second difference value threshold, assigning the corresponding second mask pixel point as a static mask value;
and obtaining a second motion mask image based on the assigned second mask pixel points.
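The second mask follows the same thresholding pattern, only against the reconstructed background rather than the previous frame; a minimal sketch, with the mask values and threshold again assumed:

```python
import numpy as np

MOTION_VALUE, STATIC_VALUE = 255, 0  # assumed mask values

def second_motion_mask(background: np.ndarray, curr_frame: np.ndarray,
                       second_diff_threshold: int = 25) -> np.ndarray:
    """Threshold the per-pixel difference between the current background and the current frame."""
    diff = np.abs(curr_frame.astype(np.int16) - background.astype(np.int16))
    return np.where(diff >= second_diff_threshold,
                    MOTION_VALUE, STATIC_VALUE).astype(np.uint8)
```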
Optionally, the monitoring of the high altitude parabola based on the second motion mask map performed by processor 1401 includes:
monitoring whether a first motion area with the area larger than a preset area threshold appears in a second motion mask image corresponding to the current frame image, wherein the first motion area is composed of motion mask values;
if the first motion region appears, monitoring whether a second motion region with the area larger than the preset area threshold appears in the second motion mask image corresponding to the next frame image, and under the condition that the second motion region appears, calculating the similarity between the frame image region corresponding to the first motion region and the frame image region corresponding to the second motion region, wherein the second motion region is formed by motion mask values;
if the similarity between the frame image area corresponding to the first motion area and the frame image area corresponding to the second motion area is greater than a preset similarity threshold, calculating the motion trail of the first motion area and the second motion area;
and judging whether the motion track meets the condition of high-altitude parabolic motion.
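An illustrative OpenCV sketch of this region tracking follows; the connected-component analysis, the histogram correlation used as the similarity measure, and a simple downward-displacement test as the high-altitude parabolic condition are all assumptions of the sketch, not the only way to realize these steps:

```python
import cv2
import numpy as np

def largest_motion_region(mask: np.ndarray, area_threshold: int = 50):
    """Return (bbox, centroid) of the largest motion region above the area threshold, or None."""
    num, _, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
    best = None
    for i in range(1, num):  # label 0 is the static background
        area = int(stats[i, cv2.CC_STAT_AREA])
        if area >= area_threshold and (best is None or area > best[2]):
            box = (int(stats[i, cv2.CC_STAT_LEFT]), int(stats[i, cv2.CC_STAT_TOP]),
                   int(stats[i, cv2.CC_STAT_WIDTH]), int(stats[i, cv2.CC_STAT_HEIGHT]))
            best = (box, tuple(centroids[i]), area)
    return None if best is None else best[:2]

def region_similarity(frame_a, box_a, frame_b, box_b) -> float:
    """Grayscale-histogram correlation between the two frame image regions."""
    def hist(frame, box):
        x, y, w, h = box
        h32 = cv2.calcHist([frame[y:y + h, x:x + w]], [0], None, [32], [0, 256])
        return cv2.normalize(h32, h32).flatten()
    return float(cv2.compareHist(hist(frame_a, box_a), hist(frame_b, box_b),
                                 cv2.HISTCMP_CORREL))

def meets_falling_condition(c1, c2, min_drop: float = 2.0) -> bool:
    """Assumed high-altitude parabolic condition: the region centroid moves downward
    between consecutive frames by at least min_drop pixels."""
    return (c2[1] - c1[1]) >= min_drop
```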
Optionally, the monitoring of the high altitude parabola based on the second motion mask map performed by processor 1401 includes:
removing interference information in the second motion mask image through morphological open operation to obtain a third motion mask image, wherein the third motion mask image comprises a motion mask value and a static mask value;
monitoring the high altitude parabola based on the third motion mask map.
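A minimal sketch of the opening step with OpenCV; the 3x3 rectangular structuring element is an assumed choice:

```python
import cv2
import numpy as np

def denoise_second_mask(second_mask: np.ndarray, kernel_size: int = 3) -> np.ndarray:
    """Remove small, isolated motion pixels (rain, leaves, sensor noise) with a
    morphological opening, yielding the third motion mask map."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    return cv2.morphologyEx(second_mask, cv2.MORPH_OPEN, kernel)
```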
Optionally, the processor 1401 further performs a process including:
when the high-altitude parabola is detected, a preset number of preceding and subsequent frame images are obtained as a forensics frame image sequence.
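One way to keep such a forensics sequence is a small ring buffer of recent frames; the buffer sizes and the class name below are illustrative assumptions:

```python
from collections import deque

class ForensicsBuffer:
    """Keep recent frames so that, when a falling object is detected, the preset number
    of preceding frames plus the following frames can be saved as evidence."""

    def __init__(self, before: int = 15, after: int = 15):
        self.after = after
        self.history = deque(maxlen=before)  # frames before a potential event
        self.remaining = 0                   # frames still to collect after an event
        self.evidence = []

    def push(self, frame, event_detected: bool) -> list:
        if event_detected and self.remaining == 0:
            self.evidence = list(self.history) + [frame]  # preceding frames + event frame
            self.remaining = self.after
        elif self.remaining > 0:
            self.evidence.append(frame)                   # subsequent frames
            self.remaining -= 1
        self.history.append(frame)
        return self.evidence
```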
The electronic device may be a mobile phone, a monitor, a computer, a server, or any other device that can be applied to monitoring the high-altitude parabola.
The electronic device provided by the embodiment of the invention can realize each process realized by the high-altitude parabolic monitoring method in the method embodiment, can achieve the same beneficial effects, and is not repeated here to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the high altitude parabola monitoring method provided in the embodiment of the present invention, and can achieve the same technical effect, and is not described here again to avoid repetition.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above disclosure is only a preferred embodiment of the present invention and is not intended to limit the scope of the invention, which is defined by the appended claims.

Claims (12)

1. A high altitude parabola monitoring method is characterized by comprising the following steps:
acquiring a frame image sequence of a current monitoring scene;
calculating to obtain a first motion mask image between adjacent frame images according to the difference between the frame images;
constructing a current background image according to the first motion mask image and the frame image corresponding to the first motion mask image;
calculating a second motion mask image between the current background image and the current frame image according to the difference value between the current background image and the current frame image;
monitoring the high altitude parabola based on the second motion mask map.
2. The method of claim 1, wherein the first motion mask map comprises a motion mask value and a static mask value, and wherein calculating the first motion mask map between adjacent frame images by a difference between the frame images comprises:
calculating the difference value of the corresponding pixel point pair between the adjacent frame images, and judging whether the difference value of the corresponding pixel point pair between the adjacent frame images is larger than or equal to a preset first difference value threshold value or not;
if the difference value of the corresponding pixel point pair between the adjacent frame images is larger than or equal to a preset first difference value threshold, assigning the corresponding first mask pixel point as a motion mask value;
if the difference value of the corresponding pixel point pair between the adjacent frame images is smaller than a preset first difference value threshold, assigning the corresponding first mask pixel point as a static mask value;
and obtaining a first motion mask image based on the assigned first mask pixel points.
3. The method of claim 1, wherein constructing a current background map from the first motion mask map and a frame image corresponding to the first motion mask map comprises:
partitioning the first motion mask image to obtain a plurality of mask areas;
constructing a plurality of background areas with the same shape as the mask area according to the mask area and the frame image corresponding to the first motion mask image;
and constructing a current background image based on the background area.
4. The method of claim 3, wherein the mask region includes a motion mask value and a static mask value, and the constructing the plurality of background regions having the same shape as the mask region from the mask region and the frame image corresponding to the first motion mask map includes:
calculating a motion state of the mask region according to the motion mask value and the static mask value of the mask region;
and selecting an image area corresponding to the first motion mask image in the frame image as a background area according to the motion state of the mask area.
5. The method of claim 4, wherein the number of the first motion mask maps is n frames, and the number of frame images corresponding to the first motion mask maps is also n frames, where n is a positive integer, and wherein calculating the motion state of the mask region from the motion mask value and the static mask value of the mask region comprises:
extracting a mask value sequence of each mask area in the first motion mask image according to the n frames of the first motion mask image, wherein the dimension of the mask value sequence is n;
extracting a corresponding target image area index according to a mask value sequence corresponding to the mask area;
and constructing a current background image based on the frame image corresponding to the first motion mask image and the target image area index.
6. The method of claim 1, wherein said calculating a second motion mask map between the current background map and the current frame image from a difference between the current background map and the current frame image comprises:
calculating the difference value of the corresponding pixel point pair of the current background image and the current frame image, and judging whether the difference value of the pixel point pair is greater than or equal to a preset second difference value threshold value or not;
if the difference value of the corresponding pixel point pair of the current background image and the current frame image is larger than or equal to the preset second difference value threshold, assigning the corresponding second mask pixel point as a motion mask value;
if the difference value of the corresponding pixel point pair of the current background image and the current frame image is smaller than a preset second difference value threshold, assigning the corresponding second mask pixel point as a static mask value;
and obtaining a second motion mask image based on the assigned second mask pixel points.
7. The method of claim 1, wherein the monitoring the high altitude parabola based on the second motion mask map comprises:
monitoring whether a first motion area with the area larger than a preset area threshold appears in a second motion mask image corresponding to the current frame image, wherein the first motion area is composed of motion mask values;
if the first motion region appears, monitoring whether a second motion region with the area larger than the preset area threshold appears in the second motion mask image corresponding to the next frame image, and under the condition that the second motion region appears, calculating the similarity between the frame image region corresponding to the first motion region and the frame image region corresponding to the second motion region, wherein the second motion region is formed by motion mask values;
if the similarity between the frame image area corresponding to the first motion area and the frame image area corresponding to the second motion area is greater than a preset similarity threshold, calculating the motion trail of the first motion area and the second motion area;
and judging whether the motion track meets the condition of high-altitude parabolic motion.
8. The method of claim 1, wherein the monitoring the high altitude parabola based on the second motion mask map comprises:
removing interference information in the second motion mask image through morphological open operation to obtain a third motion mask image, wherein the third motion mask image comprises a motion mask value and a static mask value;
monitoring the high altitude parabola based on the third motion mask map.
9. The method of any of claims 1 to 8, further comprising:
when the high-altitude parabola is detected, obtaining a preset number of preceding and subsequent frame images as a forensics frame image sequence.
10. A device for monitoring an aerial object, the device comprising:
the first acquisition module is used for acquiring a frame image sequence of a current monitoring scene;
the first calculation module is used for calculating to obtain a first motion mask image between adjacent frame images according to the difference value between the frame images;
the background construction module is used for constructing a current background image according to the first motion mask image and the frame image corresponding to the first motion mask image;
the second calculation module is used for calculating a second motion mask image between the current background image and the current frame image according to the difference value between the current background image and the current frame image;
and the monitoring module is used for monitoring the high altitude parabola based on the second motion mask map.
11. An electronic device, comprising: memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps in the method of monitoring a high altitude parabola according to any one of claims 1 to 9 when executing the computer program.
12. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, carries out the steps of the method for monitoring a high altitude parabola according to any one of claims 1 to 9.
CN202011627329.2A 2020-12-30 2020-12-30 High-altitude parabolic monitoring method and device, electronic equipment and storage medium Pending CN112800846A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011627329.2A CN112800846A (en) 2020-12-30 2020-12-30 High-altitude parabolic monitoring method and device, electronic equipment and storage medium
PCT/CN2021/114803 WO2022142414A1 (en) 2020-12-30 2021-08-26 High-rise littering monitoring method and apparatus, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011627329.2A CN112800846A (en) 2020-12-30 2020-12-30 High-altitude parabolic monitoring method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112800846A true CN112800846A (en) 2021-05-14

Family

ID=75807876

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011627329.2A Pending CN112800846A (en) 2020-12-30 2020-12-30 High-altitude parabolic monitoring method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN112800846A (en)
WO (1) WO2022142414A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113297949A (en) * 2021-05-20 2021-08-24 科大讯飞股份有限公司 High-altitude parabolic detection method and device, computer equipment and storage medium
WO2022142414A1 (en) * 2020-12-30 2022-07-07 深圳云天励飞技术股份有限公司 High-rise littering monitoring method and apparatus, electronic device, and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117423058A (en) * 2023-11-02 2024-01-19 江苏三棱智慧物联发展股份有限公司 High-altitude parabolic detection system and method based on urban safety eyes

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107330916A (en) * 2017-06-15 2017-11-07 精伦电子股份有限公司 A kind of Mobile object detection method and system
CN111079663A (en) * 2019-12-19 2020-04-28 深圳云天励飞技术有限公司 High-altitude parabolic monitoring method and device, electronic equipment and storage medium
CN111260684A (en) * 2020-03-02 2020-06-09 成都信息工程大学 Foreground pixel extraction method and system based on combination of frame difference method and background difference method
CN112016414A (en) * 2020-08-14 2020-12-01 熵康(深圳)科技有限公司 Method and device for detecting high-altitude parabolic event and intelligent floor monitoring system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0519698D0 (en) * 2005-09-28 2005-11-02 Univ Dundee Apparatus and method for movement analysis
CN110647822A (en) * 2019-08-30 2020-01-03 重庆博拉智略科技有限公司 High-altitude parabolic behavior identification method and device, storage medium and electronic equipment
CN111476163B (en) * 2020-04-07 2022-02-18 浙江大华技术股份有限公司 High-altitude parabolic monitoring method and device and computer storage medium
CN111723654B (en) * 2020-05-12 2023-04-07 中国电子系统技术有限公司 High-altitude parabolic detection method and device based on background modeling, YOLOv3 and self-optimization
CN112800846A (en) * 2020-12-30 2021-05-14 深圳云天励飞技术股份有限公司 High-altitude parabolic monitoring method and device, electronic equipment and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107330916A (en) * 2017-06-15 2017-11-07 精伦电子股份有限公司 A kind of Mobile object detection method and system
CN111079663A (en) * 2019-12-19 2020-04-28 深圳云天励飞技术有限公司 High-altitude parabolic monitoring method and device, electronic equipment and storage medium
CN111260684A (en) * 2020-03-02 2020-06-09 成都信息工程大学 Foreground pixel extraction method and system based on combination of frame difference method and background difference method
CN112016414A (en) * 2020-08-14 2020-12-01 熵康(深圳)科技有限公司 Method and device for detecting high-altitude parabolic event and intelligent floor monitoring system

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022142414A1 (en) * 2020-12-30 2022-07-07 深圳云天励飞技术股份有限公司 High-rise littering monitoring method and apparatus, electronic device, and storage medium
CN113297949A (en) * 2021-05-20 2021-08-24 科大讯飞股份有限公司 High-altitude parabolic detection method and device, computer equipment and storage medium
CN113297949B (en) * 2021-05-20 2024-02-20 科大讯飞股份有限公司 High-altitude parabolic detection method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
WO2022142414A1 (en) 2022-07-07

Similar Documents

Publication Publication Date Title
CN111079663B (en) High-altitude parabolic monitoring method and device, electronic equipment and storage medium
CN112800846A (en) High-altitude parabolic monitoring method and device, electronic equipment and storage medium
WO2018176624A1 (en) Methods and systems for fire detection
CN112016414A (en) Method and device for detecting high-altitude parabolic event and intelligent floor monitoring system
US6774905B2 (en) Image data processing
Albiol et al. Detection of parked vehicles using spatiotemporal maps
CN109409238B (en) Obstacle detection method and device and terminal equipment
CN107437318B (en) Visible light intelligent recognition algorithm
CN103106766A (en) Forest fire identification method and forest fire identification system
WO2022078182A1 (en) Throwing position acquisition method and apparatus, computer device and storage medium
WO2018026427A1 (en) Methods and systems of performing adaptive morphology operations in video analytics
CN112270253A (en) High-altitude parabolic detection method and device
KR20120035734A (en) A method for detecting fire or smoke
CN108830161B (en) Smog identification method based on video stream data
JPH0844874A (en) Image change detector
EP1266525B1 (en) Image data processing
CN109841022B (en) Target moving track detecting and alarming method, system and storage medium
US20130027550A1 (en) Method and device for video surveillance
KR20220000226A (en) A system for providing a security surveillance service based on edge computing
CN114699702B (en) Fire fighting equipment detection method and related device
CN115861236A (en) Method and device for determining dripping event, storage medium and electronic device
KR20210008574A (en) A Real-Time Object Detection Method for Multiple Camera Images Using Frame Segmentation and Intelligent Detection POOL
CN114898279A (en) Object detection method and device, computer equipment and storage medium
JP2016103246A (en) Image monitoring device
CN115731247A (en) Target counting method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination