WO2023109016A1 - Video image interference detection method, system, device, and medium - Google Patents

Video image interference detection method, system, device, and medium

Info

Publication number
WO2023109016A1
Authority
WO
WIPO (PCT)
Prior art keywords
current frame
frame image
image
threshold
edge
Prior art date
Application number
PCT/CN2022/095383
Other languages
English (en)
French (fr)
Inventor
张永兴
吴睿振
陈静静
Original Assignee
苏州浪潮智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏州浪潮智能科技有限公司
Publication of WO2023109016A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Definitions

  • the present application relates to a video image interference detection method, system, device and storage medium.
  • the interference mainly comes from three aspects: the first is noise generated by the video signal itself in the monitoring system during collection and transmission, or abnormal interference caused by failures of the monitoring system; the second is interference caused by changes in the objective environment of the scene monitored by the camera, such as weather and lighting conditions; the third is deliberate destruction or interference by criminals in order to achieve ulterior purposes, which results in the monitoring system not working normally and losing its monitoring function.
  • the main interference detection methods include the frame difference method and the background subtraction method.
  • the frame difference method relies on the fact that, when there is no interference in the surveillance video, the content of consecutive frames does not change much, and the difference between them is relatively small and relatively stable; when interference occurs, the difference changes more obviously.
  • the inter-frame difference method uses this principle to detect whether interference occurs.
  • the inter-frame difference method is of relatively low complexity and relatively fast operation speed, can quickly detect interference, and has good real-time performance, but it is sensitive to short-term incidental changes in the surveillance video and is prone to false alarms.
  • the background subtraction method works as follows: the monitoring scene of each camera is fixed, and the image content of each frame of the monitoring video can be divided into two parts: a changing part and an unchanging part.
  • the unchanged part is called the background
  • the changing part is called the foreground or target.
  • the proportion of the foreground or target part in the surveillance image is relatively small, so the background image in the surveillance video has a high degree of similarity to the image containing the foreground target.
  • when the monitoring system is normal, the difference between the background image and the current frame image is small and the image content is similar; when there is interference in the monitoring system, there are obvious differences between the background image and the current frame image, and the image content changes drastically.
  • Background subtraction uses a certain method to obtain the background image, and then detects whether interference occurs by extracting appropriate image features and comparing the difference between the current frame image and the background image.
  • the key of background subtraction is the establishment of background model and background update.
  • An appropriate background model can obtain high-quality background images in scenes with a complex objective environment; the update process allows the background image to adapt to various objective changes and disturbances in the monitoring scene, such as changes in external lighting and weather, so that the detection is more accurate.
  • the inventor realizes that the establishment of the background model and the background update of the background subtraction are generally more complicated, the calculation amount is relatively large, and the operation speed is relatively slow. How to establish a suitable background model, choose an appropriate background update method, reduce the amount of calculation, and improve the calculation speed are the difficulties of background subtraction.
  • a video image interference detection method including the following steps:
  • determining whether the current frame image is disturbed according to the ratio of the first pixel number to the second pixel number includes: in response to the ratio being greater than a first threshold, determining that the current frame image is not disturbed; and in response to the ratio being not greater than the first threshold, determining that the current frame image is disturbed.
  • determining that the current frame image is disturbed includes: in response to the ratio being not greater than the first threshold and greater than a second threshold, determining that the interference suffered by the frame image is occlusion.
  • determining that the current frame image is disturbed includes: in response to the ratio being not greater than the second threshold and greater than a third threshold, determining that the interference suffered by the frame image is out-of-focus.
  • determining that the current frame image is disturbed includes: in response to the ratio being not greater than the third threshold, determining that the interference suffered by the frame image is steering.
  • Different sizes of the first threshold, the second threshold and the third threshold are set according to different scenarios.
  • constructing the edge maps of the background image and the current frame image respectively including:
  • the edge map of the background image and the edge map of the current frame image are constructed by an edge detection algorithm using the same operator and parameters.
  • an embodiment of the present application also provides a video image interference detection system, including:
  • the acquisition module is configured to acquire the background image and the current frame image
  • a construction module configured to construct edge maps of the background image and the current frame image respectively
  • An extraction module configured to intersect the edge map of the background image and the edge map of the current frame image to obtain the same boundary map between the edge map of the background image and the edge map of the current frame image;
  • a statistical module configured to count the number of first pixels in the edge map of the background image and the number of second pixels in the same boundary map
  • a judging module configured to determine whether the current frame image is disturbed according to the ratio of the first pixel number to the second pixel number.
  • an embodiment of the present application also provides a computer device, including a memory and one or more processors, where computer-readable instructions are stored in the memory; when the computer-readable instructions are executed by the one or more processors, the one or more processors are caused to execute the steps of any one of the video image interference detection methods described above.
  • the embodiments of the present application further provide one or more non-volatile computer-readable storage media storing computer-readable instructions; when the computer-readable instructions are executed by one or more processors, the one or more processors are caused to execute the steps of any one of the video image interference detection methods described above.
  • FIG. 1 is a schematic flowchart of a video image interference detection method provided by an embodiment of the present application.
  • Figure 2 is an edge map extracted for a background image.
  • FIG. 3 is an edge map extracted for an image after occlusion interference occurs.
  • FIG. 4 is an edge map extracted for an image with steering interference.
  • FIG. 5 is an edge map extracted for an image with out-of-focus interference.
  • Fig. 6 is a schematic diagram of a background image.
  • FIG. 7 is a schematic diagram of an image after occlusion interference occurs in the background image shown in FIG. 6 .
  • FIG. 8 is a schematic diagram of the background image shown in FIG. 6 after steering interference occurs.
  • FIG. 9 is a schematic diagram of an image after the background image shown in FIG. 6 becomes out of focus.
  • FIG. 10 is a schematic structural diagram of a video image interference detection system provided by an embodiment of the present application.
  • Fig. 11 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of a computer-readable storage medium provided by an embodiment of the present application.
  • an embodiment of the present application proposes a video image interference detection method, as shown in FIG. 1, which may include steps:
  • S1. Acquire a background image and a current frame image; S2. Construct edge maps of the background image and the current frame image respectively; S3. Intersect the edge map of the background image with the edge map of the current frame image to obtain the same boundary map between them; S4. Count the number of first pixels in the edge map of the background image and the number of second pixels in the same boundary map; and S5. Determine whether the current frame image is disturbed according to a ratio of the first pixel number to the second pixel number.
  • the solution proposed in this application can detect the interference behavior in the video surveillance system, can quickly identify the interference behavior in the video surveillance, and alarm the interference behavior in time.
  • in step S1, when obtaining the background image, specifically, the background can be calculated using an average background model, and the background is updated with each frame of the surveillance video; assuming that the background image is Background and the current frame image is Current_frame, the updated background is calculated as Background = α * Current_frame + (1 - α) * Background.
  • α in the above formula is an update coefficient, and the value of α is between 0 and 1.
  • step S2 constructing the edge maps of the background image and the current frame image respectively, includes:
  • the edge map of the background image and the edge map of the current frame image are constructed by an edge detection algorithm using the same operator and parameters.
  • the image edge refers to the boundary of the object in the image.
  • the edge is an important structural feature in the image.
  • the edge often exists between the target and the background, and between different regions, so it can be used as an important basis for image segmentation.
  • There are sharp grayscale changes at the pixels on the edge of an object in an image, as in the extracted edge maps shown in Figure 2 to Figure 5; the grayscale profile of an edge region can be regarded as a step, that is, the grayscale of the image changes within a very small region into another region with a very obvious difference.
  • In a conventional edge-information image, the pixel value at an edge position is "1".
  • Common edge algorithms are implemented by calculating the gradient of pixel values in the image, such as the common Roberts, Prewitt, Sobel, Laplacian, and Canny operators.
  • when extracting the edge map (Frame-Edge) of the current frame image and the edge map (BG-Edge) of the background image, the same operator needs to be used with consistent parameters, to ensure that no strong-boundary or self-boundary situations arise that would affect the subsequent comparison of the same boundary.
  • in step S3, the edge map of the background image and the edge map of the current frame image are intersected to obtain the same boundary map between the edge map of the background image and the edge map of the current frame image.
  • the same boundary (Static-Edge) can be calculated from Current-Edge and BG-Edge, that is, the same boundary is determined by checking whether there is a pixel at the position corresponding to the same coordinate in both edge maps; after traversing all coordinates, the same boundary map is obtained.
  • in step S4, the number of first pixels in the edge map of the background image and the number of second pixels in the same boundary map are counted; specifically, this is done after the same boundary map is obtained.
  • Each coordinate in the same boundary map can be traversed to count the number of pixels in the same boundary map, and the number of pixels in the edge map of the background image can be obtained in the same way.
  • S5. Determining whether the current frame image is disturbed according to the ratio of the first pixel number to the second pixel number includes:
  • after calculating the ratio P of the number of pixels in the same boundary map to the number of pixels in the edge map of the background image, the interference behavior can be judged and identified according to the value range of the ratio P.
  • disturbances may include steering, occlusion, and loss of focus.
  • occlusion means that the surface of the camera lens is covered by foreign objects due to the objective environment or human factors, such as long-term dust accumulation, being sprayed with paint, or being deliberately blocked, causing the scene information in the surveillance video to decrease sharply or disappear entirely; for example, for the background image shown in FIG. 6,
  • the image shown in FIG. 7 is the image after occlusion. Occlusion here refers to partial occlusion, and the occluded area is usually no greater than 30% of the picture.
  • In this case the picture can be divided into a normal region and an occluded region. The normal region is indistinguishable from the background, and the edges of the occluded region have almost no intersection with the background. Overall, the value of P in this case is roughly around 50.
  • Steering means that the camera has been turned by a certain angle due to deliberate sabotage or other reasons and has deviated from its normal monitoring position, so that the monitored scene is wrong and scene information of the monitored site is missed; for example, for the background image shown in FIG. 6,
  • the image shown in FIG. 8 is the image after steering. For a camera after steering, although most of the picture may be the same, the objects in the picture are uniformly displaced, so the number of common edges in the two edge maps is limited. At this time, the value of P is close to 0.
  • Out of focus means that the focal length of the camera has changed for various reasons, causing inaccurate focusing, so that the quality of the surveillance video decreases and the picture becomes blurred; for example, for the background image shown in FIG. 6,
  • the image shown in FIG. 9 is the out-of-focus image. After the camera loses focus, the image content of the picture is relatively blurred.
  • the imaged content of the out-of-focus picture is the same as that of the background picture, but the image details in the out-of-focus picture are much fewer than in the background picture.
  • the edge map reflects the details of the picture, so the number of edges in the out-of-focus picture is smaller than in the background picture, and the remaining edges are basically consistent with the edges of the background picture; therefore, the P value lies between those of steering and occlusion.
  • determining that the current frame image is disturbed includes the following threshold comparison.
  • To quantify the boundaries among the three interference types, the thresholds among them are set to Th1, Th2 and Th3 respectively, satisfying the relationship: P_steering ≤ Th3 < P_out-of-focus ≤ Th2 < P_occlusion ≤ Th1 < P_normal.
  • If the value of P is greater than Th1, it can be determined that there is no interference behavior in the current frame image, and no alarm is required at this time. If the value of P is less than Th1, interference exists and the interference behavior needs to be judged according to P. If the value of P is between Th2 and Th1, the interference behavior can be determined to be occlusion. If the value of P is between Th3 and Th2, the interference behavior can be determined to be out-of-focus. If the value of P is between 0 and Th3, the interference behavior can be determined to be steering.
  • the solution proposed in this application can be used to detect interference behaviors in a video surveillance system by using an edge comparison algorithm to identify interference behaviors, and can quickly identify interference behaviors in video surveillance, thereby timely alarming interference behaviors.
  • an embodiment of the present application also provides a video image interference detection system 400, as shown in FIG. 10 , including:
  • a construction module 402 configured to construct edge maps of the background image and the current frame image respectively
  • the judging module 405 is configured to determine whether the current frame image is disturbed according to the ratio of the first pixel number to the second pixel number.
  • the judging module 405 is further configured to:
  • the judging module 405 is further configured to:
  • the judging module 405 is further configured to:
  • a threshold setting module is also included, and the threshold setting module is configured to:
  • Different sizes of the first threshold, the second threshold and the third threshold are set according to different scenarios.
  • the edge map of the background image and the edge map of the current frame image are constructed by an edge detection algorithm using the same operator and parameters.
  • the solution proposed in this application can be used to detect interference behaviors in a video surveillance system by using an edge comparison algorithm to identify interference behaviors, and can quickly identify interference behaviors in video surveillance, thereby timely alarming interference behaviors.
  • the embodiment of the present application also provides a computer device 501, including: one or more processors 520 and a memory 510, where the memory 510 stores computer-readable instructions 511 that can run on the processors; when the computer-readable instructions 511 are executed by the one or more processors, the one or more processors 520 perform the steps of the video image interference detection method of any of the embodiments described above.
  • the solution proposed in this application can be used to detect interference behaviors in a video surveillance system by using an edge comparison algorithm to identify interference behaviors, and can quickly identify interference behaviors in video surveillance, thereby timely alarming interference behaviors.
  • the embodiment of the present application also provides one or more non-volatile computer-readable storage media 601 storing computer-readable instructions 610.
  • When the computer-readable instructions 610 are executed by a processor, the steps of the video image interference detection method in any of the foregoing embodiments are executed.
  • the solution proposed in this application can be used to detect interference behaviors in a video surveillance system by using an edge comparison algorithm to identify interference behaviors, and can quickly identify interference behaviors in video surveillance, thereby timely alarming interference behaviors.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

A video image interference detection method, system, computer device, and medium. The method includes the following steps: acquiring a background image and a current frame image; constructing edge maps of the background image and the current frame image respectively; intersecting the edge map of the background image with the edge map of the current frame image to obtain the same boundary map between the edge map of the background image and the edge map of the current frame image; counting the number of first pixels in the edge map of the background image and the number of second pixels in the same boundary map; and determining, according to the ratio of the first pixel number to the second pixel number, whether the current frame image is disturbed.

Description

Video image interference detection method, system, device, and medium
Cross-Reference to Related Applications
This application claims priority to the Chinese patent application filed with the China Patent Office on December 14, 2021 with application number 202111524219.8 and entitled "Video image interference detection method, system, device, and medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to a video image interference detection method, system, device, and storage medium.
Background
In recent years, with the continuously rising requirements for video surveillance, the demand for intelligent video surveillance has also grown, the concept of intelligent video surveillance has been proposed, and intelligent video surveillance technology has gradually developed. However, as intelligent video surveillance technology keeps advancing and application demands keep increasing, the challenges faced by intelligent video surveillance systems are also increasing, and many new practical problems have been raised and studied one after another; the interference detection problem is one of them.
In practical applications, a large amount of varied interference appears in surveillance systems due to the objective environment, human factors, or other reasons. Such interference makes the video collected by the surveillance system abnormal and may even render the whole surveillance system useless. In large surveillance systems in particular, the number of cameras is large and the amount of collected video data is also very large; when severe interference appears in the video collected by one or more cameras, it is difficult for staff to notice it in time.
Generally speaking, when interference appears in a surveillance system, the monitored picture changes drastically and the change lasts for a certain period of time, which affects the function of the surveillance system. Extensive research has found that interference mainly comes from three sources: first, noise generated by the video signal itself during collection and transmission in the surveillance system, or abnormal interference caused by failures of the surveillance system; second, interference caused by changes in the objective environment of the scene monitored by the camera, such as weather and lighting conditions; and third, deliberate sabotage or interference by criminals for ulterior purposes, which prevents the surveillance system from working normally and deprives it of its monitoring function.
In an intelligent video surveillance system, except when staff actively control camera movement, the position and orientation of a camera are generally fixed; in other words, the monitored scene of each camera is fixed and the video content it collects is quite similar over time, whereas the video content changes drastically when interference occurs. Based on this characteristic, the main interference detection methods include the frame difference method and the background subtraction method.
The frame difference method relies on the fact that, when there is no interference in the surveillance video, the content of consecutive frames changes little, and the differences between them are relatively small and stable; when interference occurs, the difference changes noticeably. The inter-frame difference method uses this principle to detect whether interference has occurred. The principle of the inter-frame difference method is of low complexity, its computation is relatively fast, it can detect interference quickly, and it has good real-time performance, but it is sensitive to brief incidental changes in the surveillance video and is prone to false alarms.
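As an illustration of the frame difference idea described above (this sketch is not part of the original application text), the following Python example flags possible interference when the mean absolute difference between two consecutive grayscale frames exceeds a threshold; the function name and the threshold value of 20.0 are assumptions chosen for the example.

```python
import cv2
import numpy as np

def frame_difference_alarm(prev_frame, curr_frame, diff_threshold=20.0):
    """Flag possible interference when consecutive frames differ too much.

    prev_frame and curr_frame are BGR images of the same size; diff_threshold
    is a hypothetical value that would be tuned per scene.
    """
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    # Mean absolute per-pixel difference between the two consecutive frames.
    mean_diff = float(np.mean(cv2.absdiff(prev_gray, curr_gray)))
    return mean_diff > diff_threshold
```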
The background subtraction method works as follows: the monitored scene of each camera is fixed, and the content of each frame of the surveillance video can be divided into a changing part and an unchanging part. The unchanging part is usually called the background, and the changing part is called the foreground or target. Generally, the foreground or target occupies a relatively small proportion of the surveillance image, so the background image in the surveillance video has a high degree of similarity to images containing foreground targets. When the surveillance system is normal, the difference between the background image and the current frame image is small and the image content is similar; when interference exists in the surveillance system, there are obvious differences between the background image and the current frame image, and the image content changes drastically. Background subtraction obtains the background image by a certain method, and then detects whether interference has occurred by extracting suitable image features and comparing the difference between the current frame image and the background image. The key to background subtraction lies in establishing the background model and updating the background. An appropriate background model can obtain high-quality background images in scenes with a complex objective environment; the update process allows the background image to adapt to various objective changes and disturbances in the monitored scene, such as changes in external lighting and weather, making the detection more accurate. The inventors realized that establishing the background model and updating the background in background subtraction are generally rather complicated, the amount of computation is relatively large, and the computation speed is relatively slow. How to establish a suitable background model, choose an appropriate background update method, reduce the amount of computation, and improve the computation speed is the difficulty of background subtraction.
Summary
According to various embodiments disclosed in the present application, a video image interference detection method is proposed, including the following steps:
acquiring a background image and a current frame image;
constructing edge maps of the background image and the current frame image respectively;
intersecting the edge map of the background image with the edge map of the current frame image to obtain the same boundary map between the edge map of the background image and the edge map of the current frame image;
counting the number of first pixels in the edge map of the background image and the number of second pixels in the same boundary map; and
determining, according to the ratio of the first pixel number to the second pixel number, whether the current frame image is disturbed.
In one or more embodiments, determining whether the current frame image is disturbed according to the ratio of the first pixel number to the second pixel number includes:
in response to the ratio being greater than a first threshold, determining that the current frame image is not disturbed; and
in response to the ratio being not greater than the first threshold, determining that the current frame image is disturbed.
In one or more embodiments, in response to the ratio being not greater than the first threshold, determining that the current frame image is disturbed includes:
in response to the ratio being not greater than the first threshold and greater than a second threshold, determining that the interference suffered by the frame image is occlusion.
In one or more embodiments, in response to the ratio being not greater than the first threshold, determining that the current frame image is disturbed includes:
in response to the ratio being not greater than the second threshold and greater than a third threshold, determining that the interference suffered by the frame image is out-of-focus.
In one or more embodiments, in response to the ratio being not greater than the first threshold, determining that the current frame image is disturbed includes:
in response to the ratio being not greater than the third threshold, determining that the interference suffered by the frame image is steering.
In one or more embodiments, the method further includes:
setting different magnitudes of the first threshold, the second threshold, and the third threshold according to different scenes.
In one or more embodiments, constructing the edge maps of the background image and the current frame image respectively includes:
constructing the edge map of the background image and the edge map of the current frame image through an edge detection algorithm using the same operator and the same parameters.
Based on the same inventive concept, according to another aspect of the present application, an embodiment of the present application further provides a video image interference detection system, including:
an acquisition module configured to acquire a background image and a current frame image;
a construction module configured to construct edge maps of the background image and the current frame image respectively;
an extraction module configured to intersect the edge map of the background image with the edge map of the current frame image to obtain the same boundary map between the edge map of the background image and the edge map of the current frame image;
a statistics module configured to count the number of first pixels in the edge map of the background image and the number of second pixels in the same boundary map; and
a judging module configured to determine, according to the ratio of the first pixel number to the second pixel number, whether the current frame image is disturbed.
Based on the same inventive concept, according to another aspect of the present application, an embodiment of the present application further provides a computer device, including a memory and one or more processors, where computer-readable instructions are stored in the memory, and when the computer-readable instructions are executed by the one or more processors, the one or more processors are caused to execute the steps of any one of the video image interference detection methods described above.
Based on the same inventive concept, according to another aspect of the present application, embodiments of the present application further provide one or more non-volatile computer-readable storage media storing computer-readable instructions, where, when the computer-readable instructions are executed by one or more processors, the one or more processors are caused to execute the steps of any one of the video image interference detection methods described above.
The details of one or more embodiments of the present application are set forth in the accompanying drawings and the description below. Other features and advantages of the present application will become apparent from the specification, the drawings, and the claims.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely one or more embodiments of the present application, and those of ordinary skill in the art can obtain other embodiments from these drawings without creative effort.
FIG. 1 is a schematic flowchart of a video image interference detection method provided by an embodiment of the present application.
FIG. 2 is an edge map extracted from a background image.
FIG. 3 is an edge map extracted from an image after occlusion interference has occurred.
FIG. 4 is an edge map extracted from an image after steering interference has occurred.
FIG. 5 is an edge map extracted from an image after out-of-focus interference has occurred.
FIG. 6 is a schematic diagram of a background image.
FIG. 7 is a schematic diagram of an image after occlusion interference has occurred in the background image shown in FIG. 6.
FIG. 8 is a schematic diagram of an image after steering interference has occurred in the background image shown in FIG. 6.
FIG. 9 is a schematic diagram of an image after the background image shown in FIG. 6 has become out of focus.
FIG. 10 is a schematic structural diagram of a video image interference detection system provided by an embodiment of the present application.
FIG. 11 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
FIG. 12 is a schematic structural diagram of a computer-readable storage medium provided by an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the embodiments of the present application are described in further detail below with reference to specific embodiments and the accompanying drawings.
It should be noted that all expressions using "first" and "second" in the embodiments of the present application are intended to distinguish two non-identical entities or non-identical parameters having the same name. It can be seen that "first" and "second" are used only for convenience of expression and should not be understood as limiting the embodiments of the present application; subsequent embodiments will not explain this again one by one.
According to one aspect of the present application, an embodiment of the present application proposes a video image interference detection method, as shown in FIG. 1, which may include the following steps:
S1, acquiring a background image and a current frame image;
S2, constructing edge maps of the background image and the current frame image respectively;
S3, intersecting the edge map of the background image with the edge map of the current frame image to obtain the same boundary map between the edge map of the background image and the edge map of the current frame image;
S4, counting the number of first pixels in the edge map of the background image and the number of second pixels in the same boundary map; and
S5, determining, according to the ratio of the first pixel number to the second pixel number, whether the current frame image is disturbed.
The solution proposed in the present application can detect interference behavior in a video surveillance system, can quickly identify interference behavior in video surveillance, and can report the interference behavior in time.
In one or more embodiments, in step S1 of acquiring the background image, specifically, the background can be calculated using an average background model, and the background is updated with every frame of the surveillance video. Assuming that the background image is Background and the current frame image is Current_frame, the formula for the updated background is:
Background = α * Current_frame + (1 - α) * Background
In the above formula, α is an update coefficient, and the value of α is between 0 and 1.
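A minimal sketch of this running-average background update (assuming NumPy image arrays and an arbitrarily chosen α of 0.05; neither is prescribed by the application):

```python
import numpy as np

def update_background(background, current_frame, alpha=0.05):
    """Average background model: Background = α * Current_frame + (1 - α) * Background."""
    background = background.astype(np.float32)
    current_frame = current_frame.astype(np.float32)
    return alpha * current_frame + (1.0 - alpha) * background
```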
In one or more embodiments, step S2 of constructing the edge maps of the background image and the current frame image respectively includes:
constructing the edge map of the background image and the edge map of the current frame image through an edge detection algorithm using the same operator and the same parameters.
Specifically, an image edge refers to the boundary of an object in the image. Edges are important structural features of an image; they often exist between the target and the background and between different regions, so they can serve as an important basis for image segmentation. The pixels on the edge of an object in an image exhibit sharp grayscale changes, as in the extracted edge maps shown in FIG. 2 to FIG. 5. The grayscale profile of an edge region can be regarded as a step, that is, the grayscale of the image changes within a very small region into another region with a very obvious difference. In an edge-information image of an ordinary picture, the pixel value at an edge position is "1". Common edge algorithms are implemented by computing the gradient of pixel values in the image, for example the common Roberts, Prewitt, Sobel, Laplacian, and Canny operators.
When using an edge detection algorithm to extract the edge map (Frame-Edge) of the current frame image (Current-frame) and the edge map (BG-Edge) of the background image (Background), the same operator and consistent parameters must be used, so as to ensure that no strong-boundary or self-boundary situations arise that would affect the subsequent comparison of the same boundary.
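The application lists several operators without prescribing one; the sketch below uses the Canny operator from that list, with assumed threshold parameters. The point it illustrates is that the same function, with identical parameters, produces both BG-Edge and Frame-Edge as binary 0/1 maps.

```python
import cv2
import numpy as np

def edge_map(gray_image, low_threshold=50, high_threshold=150):
    """Binary edge map (0/1) from a grayscale image using the Canny operator."""
    edges = cv2.Canny(gray_image, low_threshold, high_threshold)
    return (edges > 0).astype(np.uint8)

# The same operator and parameters are applied to both images, e.g.:
# bg_edge = edge_map(background_gray)
# frame_edge = edge_map(current_frame_gray)
```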
In one or more embodiments, in step S3, the edge map of the background image and the edge map of the current frame image are intersected to obtain the same boundary map between the edge map of the background image and the edge map of the current frame image. Specifically, let the edge map of the current frame image be Current-Edge and the edge map of the background image be BG-Edge; the same boundary (Static-Edge) can then be calculated from Current-Edge and BG-Edge, that is, the same boundary is determined by checking whether there is a pixel at the position corresponding to the same coordinate in both edge maps, and after traversing all coordinates, the same boundary map is obtained.
In one or more embodiments, in step S4, the number of first pixels in the edge map of the background image and the number of second pixels in the same boundary map are counted. Specifically, after the same boundary map is obtained, every coordinate of the same boundary map can be traversed to count the number of pixels in the same boundary map, and the number of pixels in the edge map of the background image can be obtained in the same way.
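A sketch of steps S3 and S4, together with the ratio P used in step S5, assuming the binary edge maps produced above; expressing P as a percentage matches the example thresholds given later in the description, but that scaling is an assumption of this sketch.

```python
import numpy as np

def same_boundary_map(bg_edge, frame_edge):
    """Static-Edge: positions that carry an edge pixel in both edge maps."""
    return np.logical_and(bg_edge > 0, frame_edge > 0).astype(np.uint8)

def edge_ratio(bg_edge, frame_edge):
    """P = pixels in the same boundary map / pixels in the background edge map, as a percentage."""
    static_edge = same_boundary_map(bg_edge, frame_edge)
    first_pixel_count = int(np.count_nonzero(bg_edge))        # edge pixels in the background edge map
    second_pixel_count = int(np.count_nonzero(static_edge))   # edge pixels in the same boundary map
    if first_pixel_count == 0:
        return 0.0
    return 100.0 * second_pixel_count / first_pixel_count
```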
In one or more embodiments, S5, determining whether the current frame image is disturbed according to the ratio of the first pixel number to the second pixel number, includes:
in response to the ratio being greater than a first threshold, determining that the current frame image is not disturbed; and
in response to the ratio being not greater than the first threshold, determining that the current frame image is disturbed.
Specifically, after the ratio P of the number of pixels in the same boundary map to the number of pixels in the edge map of the background image is calculated, interference behavior can be judged and identified according to the value range of the ratio P.
It can be seen from the formula for the ratio that the larger the ratio, the more similar the current frame image is to the background image, which also means the lower the probability that interference has occurred.
In one or more embodiments, interference may include steering, occlusion, and out-of-focus.
Occlusion means that the surface of the camera lens is covered by foreign objects due to the objective environment or human factors, such as long-term dust accumulation, being sprayed with paint, or being deliberately blocked, causing the scene information in the surveillance video to decrease sharply or disappear entirely. For example, for the background image shown in FIG. 6, the image shown in FIG. 7 is the image after occlusion. Occlusion here refers to partial occlusion, and the occluded area is usually no greater than 30% of the picture. For the behavior of occluding the camera, the picture can be divided into a normal region and an occluded region: the normal region is indistinguishable from the background, while the edges of the occluded region have almost no intersection with the background. Overall, the value of P in this case is roughly around 50.
Steering means that the camera has been turned by a certain angle due to deliberate sabotage or other reasons and has deviated from its normal monitoring position, so that the monitored scene is wrong and scene information of the monitored site is missed. For example, for the background image shown in FIG. 6, the image shown in FIG. 8 is the image after steering. For a camera after steering, although most of the picture may be the same, the objects in the picture are uniformly displaced, so the number of common edges between the two edge maps is limited. In this case, the value of P is close to 0.
Out of focus means that the focal length of the camera has changed for various reasons, causing inaccurate focusing, so that the quality of the surveillance video decreases and the picture becomes blurred. For example, for the background image shown in FIG. 6, the image shown in FIG. 9 is the out-of-focus image. After the camera loses focus, the content of the imaged picture is relatively blurred. The imaged content of the out-of-focus picture is the same as that of the background picture, but the image details in the out-of-focus picture are much fewer than in the background picture. The edge map reflects the details of the picture, so the number of edges in the out-of-focus picture is smaller than in the background picture, and the remaining edges are basically consistent with the edges of the background picture. Therefore, the P value lies between those of steering and occlusion.
In one or more embodiments, in response to the ratio being not greater than the first threshold, determining that the current frame image is disturbed includes:
in response to the ratio being not greater than the first threshold and greater than a second threshold, determining that the interference suffered by the frame image is occlusion.
In one or more embodiments, in response to the ratio being not greater than the first threshold, determining that the current frame image is disturbed includes:
in response to the ratio being not greater than the second threshold and greater than a third threshold, determining that the interference suffered by the frame image is out-of-focus.
In one or more embodiments, in response to the ratio being not greater than the first threshold, determining that the current frame image is disturbed includes:
in response to the ratio being not greater than the third threshold, determining that the interference suffered by the frame image is steering.
Specifically, since the three interference behaviors of steering, occlusion, and out-of-focus all cause the edges of the monitored picture to change, and the proportion of the same boundary differs among the three behaviors, interference behavior can be judged and identified by comparing the number of pixels in the same boundary between the current frame image and the background image. The proportion of the same boundary shared by the picture under the three interference behaviors and the background picture satisfies the relationship: P_steering < P_out-of-focus < P_occlusion < P_normal.
Therefore, in order to quantify the boundaries among the three, the thresholds among them are set to Th1, Th2, and Th3 respectively, satisfying the relationship: P_steering ≤ Th3 < P_out-of-focus ≤ Th2 < P_occlusion ≤ Th1 < P_normal.
Thus, when the value of P is greater than Th1, it can be determined that there is no interference behavior in the current frame image, and no alarm is needed. If the value of P is less than Th1, interference exists and the interference behavior needs to be judged according to P. If the value of P is between Th2 and Th1, the interference behavior can be determined to be occlusion. If the value of P is between Th3 and Th2, the interference behavior can be determined to be out-of-focus. If the value of P is between 0 and Th3, the interference behavior can be determined to be steering.
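A sketch of this threshold comparison, assuming P is the percentage ratio computed in the previous sketch; the returned label strings are illustrative only, not terminology fixed by the application.

```python
def classify_interference(p, th1, th2, th3):
    """Map the same-boundary ratio P onto the four cases described above."""
    if p > th1:
        return "normal"        # no interference, no alarm needed
    if p > th2:
        return "occlusion"     # Th2 < P <= Th1
    if p > th3:
        return "out_of_focus"  # Th3 < P <= Th2
    return "steering"          # 0 <= P <= Th3
```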
In one or more embodiments, the method further includes:
setting different magnitudes of the first threshold, the second threshold, and the third threshold according to different scenes.
Specifically, different first, second, and third thresholds can be set for different scenes. For example, statistics can be collected on the values of the same-edge proportion P for hundreds of typical interference videos in an interference-video training set for a given scene, so as to determine a fairly effective set of thresholds: Th1 = 45, Th2 = 25, Th3 = 13.
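Putting the sketches above together with these example thresholds gives a hypothetical end-to-end check; the file names and the use of a stored background image (rather than the running average) are assumptions made for brevity.

```python
import cv2

# Relies on edge_map, edge_ratio and classify_interference from the earlier sketches.
background_gray = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)
current_gray = cv2.imread("current_frame.png", cv2.IMREAD_GRAYSCALE)

bg_edge = edge_map(background_gray)
frame_edge = edge_map(current_gray)
p = edge_ratio(bg_edge, frame_edge)
state = classify_interference(p, th1=45, th2=25, th3=13)
print(f"P = {p:.1f}, detected state: {state}")
```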
By using an edge comparison algorithm to identify interference behavior, the solution proposed in the present application can be used for detecting interference behavior in a video surveillance system, can quickly identify interference behavior in video surveillance, and can thus report the interference behavior in time.
It should be understood that, although the steps in the flowchart of FIG. 1 are shown sequentially as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be executed in other orders. Moreover, at least some of the steps in FIG. 1 may include multiple sub-steps or multiple stages; these sub-steps or stages are not necessarily completed at the same moment but may be executed at different moments, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Based on the same inventive concept, according to another aspect of the present application, an embodiment of the present application further provides a video image interference detection system 400, as shown in FIG. 10, including:
an acquisition module 401 configured to acquire a background image and a current frame image;
a construction module 402 configured to construct edge maps of the background image and the current frame image respectively;
an extraction module 403 configured to intersect the edge map of the background image with the edge map of the current frame image to obtain the same boundary map between the edge map of the background image and the edge map of the current frame image;
a statistics module 404 configured to count the number of first pixels in the edge map of the background image and the number of second pixels in the same boundary map; and
a judging module 405 configured to determine, according to the ratio of the first pixel number to the second pixel number, whether the current frame image is disturbed.
In one or more embodiments, the judging module 405 is further configured to:
in response to the ratio being greater than a first threshold, determine that the current frame image is not disturbed; and
in response to the ratio being not greater than the first threshold, determine that the current frame image is disturbed.
In one or more embodiments, the judging module 405 is further configured to:
in response to the ratio being not greater than the first threshold and greater than a second threshold, determine that the interference suffered by the frame image is occlusion.
In one or more embodiments, the judging module 405 is further configured to:
in response to the ratio being not greater than the second threshold and greater than a third threshold, determine that the interference suffered by the frame image is out-of-focus.
In one or more embodiments, the judging module 405 is further configured to:
in response to the ratio being not greater than the third threshold, determine that the interference suffered by the frame image is steering.
In one or more embodiments, a threshold setting module is further included, and the threshold setting module is configured to:
set different magnitudes of the first threshold, the second threshold, and the third threshold according to different scenes.
In one or more embodiments, the construction module 402 is further configured to:
construct the edge map of the background image and the edge map of the current frame image through an edge detection algorithm using the same operator and the same parameters.
By using an edge comparison algorithm to identify interference behavior, the solution proposed in the present application can be used for detecting interference behavior in a video surveillance system, can quickly identify interference behavior in video surveillance, and can thus report the interference behavior in time.
Based on the same inventive concept, according to another aspect of the present application, as shown in FIG. 11, an embodiment of the present application further provides a computer device 501, including: one or more processors 520 and a memory 510, where the memory 510 stores computer-readable instructions 511 that can run on the processors; when the computer-readable instructions 511 are executed by the one or more processors, the one or more processors 520 execute the steps of the video image interference detection method of any of the embodiments described above.
By using an edge comparison algorithm to identify interference behavior, the solution proposed in the present application can be used for detecting interference behavior in a video surveillance system, can quickly identify interference behavior in video surveillance, and can thus report the interference behavior in time.
Based on the same inventive concept, according to another aspect of the present application, as shown in FIG. 12, embodiments of the present application further provide one or more non-volatile computer-readable storage media 601, where the computer-readable storage medium 601 stores computer-readable instructions 610, and when the computer-readable instructions 610 are executed by a processor, the steps of the video image interference detection method of any of the embodiments described above are executed.
By using an edge comparison algorithm to identify interference behavior, the solution proposed in the present application can be used for detecting interference behavior in a video surveillance system, can quickly identify interference behavior in video surveillance, and can thus report the interference behavior in time.
Finally, it should be noted that those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by computer-readable instructions instructing relevant hardware. The computer-readable instructions can be stored in one or more non-volatile computer-readable storage media, and when executed by one or more processors, the computer-readable instructions can implement the steps of the embodiments of the above methods. Any reference to memory, storage, database, or other media used in the embodiments provided in the present application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Those skilled in the art will also understand that the various exemplary logic blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein can be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the functions of various illustrative components, blocks, modules, circuits, and steps have been described above in general terms. Whether such functions are implemented as software or hardware depends on the specific application and the design constraints imposed on the overall system. Those skilled in the art may implement the functions in various ways for each specific application, but such implementation decisions should not be interpreted as causing a departure from the scope disclosed by the embodiments of the present application.
The above are exemplary embodiments disclosed by the present application, but it should be noted that various changes and modifications may be made without departing from the scope disclosed by the embodiments of the present application as defined by the claims. The functions, steps, and/or actions of the method claims according to the disclosed embodiments described herein need not be performed in any particular order. In addition, although the elements disclosed in the embodiments of the present application may be described or claimed in the singular, they may also be understood as plural unless explicitly limited to the singular.
It should be understood that, as used herein, the singular form "a/an" is intended to also include the plural form unless the context clearly supports an exception. It should also be understood that "and/or" as used herein refers to any and all possible combinations including one or more of the associated listed items.
The serial numbers of the embodiments disclosed above are for description only and do not represent the superiority or inferiority of the embodiments.
Those of ordinary skill in the art can understand that all or part of the steps for implementing the above embodiments can be completed by hardware or by a program instructing relevant hardware; the program can be stored in a computer-readable storage medium, and the above-mentioned storage medium can be a read-only memory, a magnetic disk, an optical disk, or the like.
Those of ordinary skill in the art should understand that the discussion of any of the above embodiments is merely exemplary and is not intended to imply that the scope disclosed by the embodiments of the present application (including the claims) is limited to these examples. Under the idea of the embodiments of the present application, the technical features in the above embodiments or in different embodiments may also be combined, and there are many other variations of the different aspects of the embodiments of the present application as described above, which are not provided in detail for the sake of brevity. Therefore, any omissions, modifications, equivalent replacements, improvements, and the like made within the spirit and principles of the embodiments of the present application shall be included within the protection scope of the embodiments of the present application.

Claims (10)

  1. A video image interference detection method, comprising:
    acquiring a background image and a current frame image;
    constructing edge maps of the background image and the current frame image respectively;
    intersecting the edge map of the background image with the edge map of the current frame image to obtain the same boundary map between the edge map of the background image and the edge map of the current frame image;
    counting the number of first pixels in the edge map of the background image and the number of second pixels in the same boundary map; and
    determining, according to the ratio of the first pixel number to the second pixel number, whether the current frame image is disturbed.
  2. The method according to claim 1, wherein determining whether the current frame image is disturbed according to the ratio of the first pixel number to the second pixel number comprises:
    in response to the ratio being greater than a first threshold, determining that the current frame image is not disturbed; and
    in response to the ratio being not greater than the first threshold, determining that the current frame image is disturbed.
  3. The method according to claim 2, wherein, in response to the ratio being not greater than the first threshold, determining that the current frame image is disturbed comprises:
    in response to the ratio being not greater than the first threshold and greater than a second threshold, determining that the interference suffered by the frame image is occlusion.
  4. The method according to claim 3, wherein, in response to the ratio being not greater than the first threshold, determining that the current frame image is disturbed comprises:
    in response to the ratio being not greater than the second threshold and greater than a third threshold, determining that the interference suffered by the frame image is out-of-focus.
  5. The method according to claim 4, wherein, in response to the ratio being not greater than the first threshold, determining that the current frame image is disturbed comprises:
    in response to the ratio being not greater than the third threshold, determining that the interference suffered by the frame image is steering.
  6. The method according to claim 5, further comprising:
    setting different magnitudes of the first threshold, the second threshold, and the third threshold according to different scenes.
  7. The method according to claim 1, wherein constructing the edge maps of the background image and the current frame image respectively comprises:
    constructing the edge map of the background image and the edge map of the current frame image through an edge detection algorithm using the same operator and the same parameters.
  8. A video image interference detection system, comprising:
    an acquisition module configured to acquire a background image and a current frame image;
    a construction module configured to construct edge maps of the background image and the current frame image respectively;
    an extraction module configured to intersect the edge map of the background image with the edge map of the current frame image to obtain the same boundary map between the edge map of the background image and the edge map of the current frame image;
    a statistics module configured to count the number of first pixels in the edge map of the background image and the number of second pixels in the same boundary map; and
    a judging module configured to determine, according to the ratio of the first pixel number to the second pixel number, whether the current frame image is disturbed.
  9. A computer device, comprising a memory and one or more processors, wherein computer-readable instructions are stored in the memory, and when the computer-readable instructions are executed by the one or more processors, the one or more processors are caused to execute the steps of the method according to any one of claims 1-7.
  10. One or more non-volatile computer-readable storage media storing computer-readable instructions, wherein, when the computer-readable instructions are executed by one or more processors, the one or more processors are caused to execute the steps of the method according to any one of claims 1-7.
PCT/CN2022/095383 2021-12-14 2022-05-26 Video image interference detection method, system, device, and medium WO2023109016A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111524219.8 2021-12-14
CN202111524219.8A CN113936242B (zh) 2021-12-14 2021-12-14 Video image interference detection method, system, device, and medium

Publications (1)

Publication Number Publication Date
WO2023109016A1 true WO2023109016A1 (zh) 2023-06-22

Family

ID=79288936

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/095383 WO2023109016A1 (zh) 2021-12-14 2022-05-26 Video image interference detection method, system, device, and medium

Country Status (2)

Country Link
CN (1) CN113936242B (zh)
WO (1) WO2023109016A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113936242B (zh) 2021-12-14 2022-03-11 苏州浪潮智能科技有限公司 Video image interference detection method, system, device, and medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030202698A1 * 2002-04-25 2003-10-30 Simard Patrice Y. Block retouching
CN101599175A * 2009-06-11 2009-12-09 北京中星微电子有限公司 Detection method for determining change of shooting background, and image processing device
CN111598906A * 2020-03-31 2020-08-28 广州杰赛科技股份有限公司 Vehicle detection method, system, device and storage medium
CN111898486A * 2020-07-14 2020-11-06 浙江大华技术股份有限公司 Detection method and device for monitoring picture abnormality, and storage medium
CN113936242A * 2021-12-14 2022-01-14 苏州浪潮智能科技有限公司 Video image interference detection method, system, device and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
姜伟 等 (JIANG, WEI ET AL.): "基于边缘背景差法和Hough变换的公交乘客头部检测方法研究 (Bus Passengers' Head Detecting Based on Edge-Based Background Subtraction and Hough Transform)", 青岛大学学报(工程技术版) (JOURNAL OF QINGDAO UNIVERSITY(ENGINEERING & TECHNOLOGY EDITION)), no. No. 02, 15 June 2010 (2010-06-15), XP093071895 *

Also Published As

Publication number Publication date
CN113936242B (zh) 2022-03-11
CN113936242A (zh) 2022-01-14

Similar Documents

Publication Publication Date Title
US8611593B2 (en) Foreground object detection system and method
US10713798B2 (en) Low-complexity motion detection based on image edges
CN103729858B (zh) 一种视频监控系统中遗留物品的检测方法
US20170161905A1 (en) System and method for background and foreground segmentation
CN111260693B (zh) 一种高空抛物的检测方法
WO2020094088A1 (zh) 一种图像抓拍方法、监控相机及监控系统
CN108198208B (zh) 一种基于目标跟踪的移动侦测方法
JP5762250B2 (ja) 画像信号処理装置および画像信号処理方法
US20120155764A1 (en) Image processing device, image processing method and program
Xu et al. Segmentation and tracking of multiple moving objects for intelligent video analysis
US10692225B2 (en) System and method for detecting moving object in an image
WO2023109016A1 (zh) 一种视频图像干扰检测方法、系统、设备以及介质
KR20090043416A (ko) 카메라 이동 영향을 검출하고 억제하는 감시 카메라 장치및 그 제어 방법
Gruenwedel et al. An edge-based approach for robust foreground detection
CN111741186A (zh) 一种视频抖动检测方法、装置以及系统
Verma et al. Analysis of moving object detection and tracking in video surveillance system
US20240048672A1 (en) Adjustment of shutter value of surveillance camera via ai-based object recognition
Miao et al. Size and angle filter based rain removal in video for outdoor surveillance systems
KR20190026625A (ko) 영상 표시 방법 및 컴퓨터 프로그램, 그 기록매체
CN116866711A (zh) 节能型监控方法、系统、设备及存储介质
Tsesmelis et al. Tamper detection for active surveillance systems
Kim et al. Background subtraction using generalised Gaussian family model
CN117152453A (zh) 道路病害的检测方法、装置、电子设备和存储介质
US10194072B2 (en) Method and apparatus for remote detection of focus hunt
US20190088108A1 (en) Camera tampering detection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22905784

Country of ref document: EP

Kind code of ref document: A1