WO2013152625A1 - Method and system for removing attached noise - Google Patents

Method and system for removing attached noise

Info

Publication number
WO2013152625A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
dimensional
attached noise
perspective
transformed
Prior art date
Application number
PCT/CN2013/000430
Other languages
English (en)
Chinese (zh)
Inventor
王寰宇
谭志明
Original Assignee
富士通株式会社
Priority date
Filing date
Publication date
Application filed by 富士通株式会社
Publication of WO2013152625A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/10: Geometric effects
    • G06T 15/20: Perspective computation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/70: Denoising; Smoothing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/254: Analysis of motion involving subtraction of images

Definitions

  • the present invention relates to the field of image processing, and more particularly to a method and system for removing attached noise.
  • One way to remove attached noise is to clean the camera's protective cover.
  • However, the protective covers of most surveillance systems are not cleaned automatically and are difficult to clean manually.
  • Another common approach is to prevent the attached noise from being generated in the first place.
  • For example, professional cameras have a lens protector or a special anti-adhesion oil. However, this does not completely prevent the occurrence of attached noise. Therefore, it is necessary to handle the attached noise with digital image processing technology, and the attached noise must of course be detected before it can be removed.
  • The existing noise detection methods are not applicable to the above situations for the following reasons: 1) Most attached noise detection methods target noise with a fixed shape or texture, whereas the attached noise caused by complex outdoor environments has no fixed shape or texture. 2) Some attached noise detection methods are specific to falling rain or snow and assume that the noise is in constant motion, but in fact the attached noise may be stationary relative to the camera. 3) Some attached noise detection methods assume that the motion of the camera is precisely known, or detect the attached noise only under specified conditions on the camera motion and the level of the imaging plane; in practice, the motion of the camera is usually complex and unknown.

Summary of the Invention
  • In view of this, the present invention provides a method and system for removing attached noise.
  • An attached noise detecting method is configured to perform attached noise detection on a video to be detected, wherein all frames in the video to be detected are arranged in chronological order to obtain a three-dimensional spatiotemporal image I(x, y, t).
  • The attached noise detecting method includes: selecting any one frame of the three-dimensional spatiotemporal image I(x, y, t) as a reference frame, and performing perspective transformation on the other frames in the three-dimensional spatiotemporal image I(x, y, t) to obtain a transformed three-dimensional spatiotemporal image I'(x, y, t); modeling a static background image using the transformed three-dimensional spatiotemporal image I'(x, y, t), and subtracting the modeled static background image from the transformed three-dimensional spatiotemporal image I'(x, y, t) to obtain a three-dimensional difference image I_d(x, y, t); binarizing the three-dimensional difference image I_d(x, y, t) to obtain a binarized three-dimensional difference image I_d'(x, y, t) in which the modeling error has been removed; and performing inverse perspective transformation on the binarized three-dimensional difference image I_d'(x, y, t) to remove the influence of moving targets, thereby detecting the attached noise in the video to be detected.
  • An attached noise detecting system is configured to perform attached noise detection on a video to be detected, wherein all frames in the video to be detected are arranged in chronological order to obtain a three-dimensional spatiotemporal image I(x, y, t).
  • The attached noise detecting system comprises: a perspective transform unit configured to select any one frame of the three-dimensional spatiotemporal image I(x, y, t) as a reference frame and to perform perspective transformation on the other frames in the three-dimensional spatiotemporal image I(x, y, t) to obtain a transformed three-dimensional spatiotemporal image I'(x, y, t); a static background modeling unit configured to model a static background image using the transformed three-dimensional spatiotemporal image I'(x, y, t) and to subtract the modeled static background image from the transformed three-dimensional spatiotemporal image I'(x, y, t) to obtain a three-dimensional difference image I_d(x, y, t); a modeling error removing unit configured to binarize the three-dimensional difference image I_d(x, y, t) to obtain a binarized three-dimensional difference image I_d'(x, y, t) in which the modeling error has been removed; and a moving target removing unit configured to perform inverse perspective transformation on the binarized three-dimensional difference image I_d'(x, y, t) to remove the influence of moving targets, thereby detecting the attached noise in the video to be detected.
  • the attached noise detecting method and system according to an embodiment of the present invention can automatically detect noise attached to a camera under irregular camera motion conditions, and is highly suitable for an outdoor monitoring system.
  • FIG. 1 is a block diagram showing an attached noise detecting system according to an embodiment of the present invention.
  • FIG. 2 is a flow chart showing an attached noise detecting method according to an embodiment of the present invention.

Detailed Description
  • The position of the attached noise in the image does not change when the orientation of the camera changes, because the noise is attached to the surface of the camera's protective cover and moves with the camera.
  • the position of the static background and the position of the moving target change.
  • the attached noise detecting system and method according to an embodiment of the present invention attempts to detect the attached noise by using the above characteristics of the attached noise, the static background, and the moving target.
  • FIG. 1 shows a block diagram of an attached noise detection system in accordance with an embodiment of the present invention.
  • Fig. 2 shows a flow chart of an attached noise detecting method according to an embodiment of the present invention.
  • An attached noise detecting system and method according to an embodiment of the present invention will be described in detail below with reference to FIGS. 1 and 2.
  • First, each frame F(x, y) of the video to be detected needs to be arranged in time order to obtain a three-dimensional spatiotemporal image I(x, y, t), which serves as the input to the attached noise detection method and system in accordance with an embodiment of the present invention.
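For illustration only (not part of the original disclosure), a minimal Python sketch of building the spatiotemporal image from a video file, assuming OpenCV and NumPy and grayscale frames; the function name load_spatiotemporal_image is hypothetical:

```python
import cv2
import numpy as np

def load_spatiotemporal_image(video_path):
    """Arrange all frames of the video in chronological order into I(x, y, t)."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Grayscale is sufficient for the detection pipeline sketched here.
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    cap.release()
    # Resulting shape is (height, width, T); the last axis is the time axis t.
    return np.stack(frames, axis=-1)
```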
  • an attached noise detecting system includes a perspective transform unit 102, a static background modeling unit 104, a modeling error removing unit 106, and a moving target removing unit 108.
  • The perspective transform unit 102 selects any one frame of the three-dimensional spatiotemporal image I(x, y, t) as a reference frame, and performs perspective transformation on the other frames in the three-dimensional spatiotemporal image I(x, y, t) to obtain the transformed three-dimensional spatiotemporal image I'(x, y, t) (i.e., step S202 is performed).
  • The static background modeling unit 104 models the static background image using the transformed three-dimensional spatiotemporal image I'(x, y, t), and subtracts the modeled static background image from the transformed three-dimensional spatiotemporal image I'(x, y, t) to obtain a three-dimensional difference image I_d(x, y, t) (i.e., step S204 is performed).
  • The modeling error removing unit 106 performs binarization processing on the three-dimensional difference image I_d(x, y, t) to obtain a binarized three-dimensional difference image I_d'(x, y, t), wherein the modeling error in the binarized three-dimensional difference image I_d'(x, y, t) is removed (i.e., step S206 is performed).
  • The moving target removal unit 108 removes the influence of moving targets by performing inverse perspective transformation on the binarized three-dimensional difference image I_d'(x, y, t), thereby detecting the attached noise in the video to be detected (i.e., step S208 is performed).
  • any one of the frames to be detected is selected as a reference frame (hereinafter also referred to as an R frame), and the imaging plane of the frame is used as a reference imaging plane.
  • The other frames in the video to be detected are then perspective transformed so as to project them onto the reference imaging plane.
  • The perspective transformation may be achieved by multiplying the original coordinates of the target frame by the perspective projection matrix between the target frame and the reference frame.
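As an illustration (not from the disclosure), a minimal sketch of applying such a matrix, assuming OpenCV and NumPy; project_point shows the multiplication on a single homogeneous coordinate, and warp_to_reference_plane applies it to a whole frame:

```python
import cv2
import numpy as np

def project_point(H, x, y):
    # Multiply the homogeneous pixel coordinate by the perspective projection matrix H.
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

def warp_to_reference_plane(frame, H):
    # Apply the same mapping to every pixel, projecting the target frame
    # onto the reference imaging plane.
    h, w = frame.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))
```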
  • the perspective projection matrix between the target frame and the reference frame can be estimated by an automatic image correction method.
  • The specific steps are as follows: First, the static points in the target frame and the reference frame are found by Speeded Up Robust Features (SURF); these static points are matched by the K-Nearest Neighbor (KNN) matching algorithm (that is, the matching point pairs among the static points are found), and the matches are refined by the RANdom SAmple Consensus algorithm (RANSAC); then the perspective projection matrix between the target frame and the reference frame is obtained by optimizing the backward projection error.
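For illustration, a minimal sketch of this estimation step using OpenCV (not part of the original disclosure). SURF requires an opencv-contrib build, so SIFT is used as a fallback, and Lowe's ratio test stands in for the selection of reliable static-point matches described above:

```python
import cv2
import numpy as np

def estimate_perspective_matrix(target_gray, reference_gray):
    # SURF keypoints/descriptors (falls back to SIFT if contrib is unavailable).
    try:
        detector = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    except AttributeError:
        detector = cv2.SIFT_create()
    kp_t, des_t = detector.detectAndCompute(target_gray, None)
    kp_r, des_r = detector.detectAndCompute(reference_gray, None)

    # K-nearest-neighbour matching; the ratio test keeps the reliable pairs.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_t, des_r, k=2)
    good = [m for m, n in pairs if m.distance < 0.7 * n.distance]

    src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_r[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC rejects mismatches while fitting the perspective projection matrix;
    # the inlier fit minimises the reprojection (backward projection) error.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=5.0)
    return H
```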
  • An indirect perspective projection matrix estimation method is therefore proposed here: a global perspective projection matrix (the perspective projection matrix between any two frames) is obtained from partial perspective projection matrices (the perspective projection matrices between two temporally adjacent frames).
  • Here, I denotes the identity matrix, the global perspective projection matrix maps the target frame i onto the reference frame R, and the partial perspective projection matrices are those between temporally adjacent frames in the three-dimensional spatiotemporal image I(x, y, t). The global matrix between frame i and frame R is the product of the partial matrices of all adjacent frame pairs lying between them, and reduces to the identity matrix I when the target frame is the reference frame itself.
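A sketch of this composition (illustrative only) under the assumed convention that local_H[j] maps frame j onto frame j + 1; the function name and argument layout are hypothetical:

```python
import numpy as np

def global_perspective_matrix(local_H, i, R):
    """Compose adjacent-frame (partial) perspective projection matrices into the
    global matrix that maps frame i onto the reference frame R."""
    H = np.eye(3)                      # identity when the target frame is R itself
    if i < R:
        for j in range(i, R):
            H = local_H[j] @ H         # x_R ~ H_{R-1,R} ... H_{i,i+1} x_i
    else:
        for j in range(R, i):
            H = H @ np.linalg.inv(local_H[j])   # walk backwards when i follows R
    return H
```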
  • In this way, the transformed three-dimensional spatiotemporal image I'(x, y, t) can be obtained.
  • The attached noise can then be detected based on the difference between the true static background and the transformed three-dimensional spatiotemporal image I'(x, y, t).
  • The modeling process of the static background is as follows: First, the transformed three-dimensional spatiotemporal image I'(x, y, t) is regarded as consisting of a series of pixel sequences along the time axis. These sequences can be divided into two categories: single-mode sequences and multi-modal sequences.
  • A single-mode sequence is one in which the gray value of the pixel does not change much, for example sky or ground.
  • A multi-modal sequence is one in which the gray value of the pixel changes sharply and frequently, for example a region through which a moving target passes or tree crowns.
  • The unsupervised K-means clustering method can be used to classify the pixel sequences of the transformed three-dimensional spatiotemporal image I'(x, y, t) into these two categories.
  • For a single-mode sequence, the median of the gray values can be taken as the true static background value; a multi-modal sequence can be modeled by a background modeling method based on a mixture-of-Gaussians model.
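A simplified sketch of this two-branch background model (illustrative only, not the patented implementation): it clusters the pixel sequences by their temporal spread, which is one possible way to realise the unsupervised K-means split, and uses scikit-learn's GaussianMixture for the multi-modal branch:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def model_static_background(I_t):
    """I_t: transformed spatiotemporal image I'(x, y, t), shape (H, W, T)."""
    H, W, T = I_t.shape
    series = I_t.reshape(-1, T).astype(np.float32)

    # Unsupervised split into single-mode / multi-mode pixel sequences,
    # here clustered on the temporal spread of each sequence (a simplification).
    spread = series.std(axis=1, keepdims=True)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(spread)
    multi = labels == labels[int(np.argmax(spread))]   # cluster with larger spread

    # Single-mode sequences: temporal median is taken as the static background.
    background = np.median(series, axis=1)

    # Multi-modal sequences: mixture-of-Gaussians model, dominant component mean.
    for idx in np.flatnonzero(multi):
        gmm = GaussianMixture(n_components=3, random_state=0)
        gmm.fit(series[idx].reshape(-1, 1))
        background[idx] = gmm.means_[int(np.argmax(gmm.weights_)), 0]

    return background.reshape(H, W)
```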
  • The three-dimensional difference image I_d(x, y, t) can then be obtained by subtracting the static background image from the transformed three-dimensional spatiotemporal image I'(x, y, t).
  • These differences are caused by modeling errors, moving targets, and attached noise. In order to detect the attached noise, the differences caused by modeling errors and by moving targets must be separated out.
  • The modeling error can be removed by binarizing the three-dimensional difference image I_d(x, y, t).
  • The specific operations are as follows: First, the three-dimensional difference image I_d(x, y, t) is regarded as a multi-frame image arranged along the time axis; then, adaptive-threshold binarization is performed on each frame.
  • The adaptive threshold is based on the assumption that the area of moving targets and attached noise in each frame is less than 15% of the entire frame area. After the modeling error is removed, the remaining area becomes the potential attached noise area.
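A minimal sketch of this per-frame adaptive binarization (illustrative; using a quantile threshold is one way, assumed here, to enforce the 15% area assumption):

```python
import numpy as np

def binarize_difference(I_d, max_foreground=0.15):
    """Per-frame adaptive threshold on |I_d(x, y, t)|, assuming moving targets
    plus attached noise cover less than 15% of each frame."""
    I_b = np.zeros(I_d.shape, dtype=np.uint8)
    for t in range(I_d.shape[-1]):
        frame = np.abs(I_d[..., t])
        # Everything below the (1 - 0.15) quantile is treated as modeling error.
        threshold = np.quantile(frame, 1.0 - max_foreground)
        I_b[..., t] = (frame > threshold).astype(np.uint8)
    return I_b
```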
  • Next, the binarized three-dimensional difference image I_d'(x, y, t) is inverse perspective transformed, that is, each frame of the image is projected back onto its original imaging plane. At this point, all of the attached noise is aligned along the time axis. Then, by voting along the time axis on the probability of potential attached noise, the region where the noise is attached can be obtained.
  • The voting is based on the assumption that the area of the attached noise is less than 10% of the frame area.
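A sketch of the inverse transformation and voting step (illustrative only; homographies[t] is assumed to hold the global matrix that projected frame t onto the reference plane, and the 0.5 minimum vote fraction is an added assumption, not stated in the text):

```python
import cv2
import numpy as np

def vote_attached_noise(I_b, homographies, max_noise_area=0.10):
    """Project each binarized difference frame back onto its original imaging
    plane, then vote along the time axis; attached noise stays aligned while
    moving targets do not."""
    H, W, T = I_b.shape
    votes = np.zeros((H, W), dtype=np.float32)
    for t in range(T):
        H_inv = np.linalg.inv(homographies[t])          # reference -> original plane
        votes += cv2.warpPerspective(I_b[..., t].astype(np.float32), H_inv, (W, H))
    votes /= T

    # Keep at most the ~10% most frequently flagged pixels, and (an assumption
    # added for this sketch) require a pixel to be flagged in half the frames.
    threshold = max(np.quantile(votes, 1.0 - max_noise_area), 0.5)
    return votes >= threshold
```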
  • the attached noise detecting system and method according to an embodiment of the present invention can automatically detect noise attached to a camera under the condition of irregular camera motion, and is very suitable for an outdoor monitoring system.
  • Embodiments of the invention may be implemented using programmed general-purpose digital computers, ASICs, programmable logic devices, field-programmable gate arrays, or optical, chemical, biological, quantum or nano-engineered systems, components and mechanisms.
  • the functionality of the present invention can be implemented by any means known in the art. Distributed or networked systems, components, and circuits can be used. The communication or transmission of data can be wired, wireless or by any other means.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A method and system for removing attached noise. The attached noise detection method comprises: selecting any frame of a three-dimensional spatiotemporal image I(x, y, t) as a reference frame, and applying a perspective transformation to the other frames of the three-dimensional spatiotemporal image I(x, y, t) to obtain a transformed three-dimensional spatiotemporal image I'(x, y, t); using the transformed three-dimensional spatiotemporal image I'(x, y, t) to model a static background image, and subtracting the modeled static background image from the transformed three-dimensional spatiotemporal image I'(x, y, t) to obtain a three-dimensional difference image I_d(x, y, t); binarizing the three-dimensional difference image I_d(x, y, t) to obtain a binarized three-dimensional difference image I_d'(x, y, t), in which the modeling error has been removed; and applying an inverse perspective transformation to the binarized three-dimensional difference image I_d'(x, y, t) to remove the effect of moving objects, thereby detecting the attached noise in a video to be detected.
PCT/CN2013/000430 2012-04-13 2013-04-12 Method and system for removing attached noise WO2013152625A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201210115279.9 2012-04-13
CN201210115279.9A CN103377472B (zh) 2012-04-13 2012-04-13 Method and system for removing attached noise

Publications (1)

Publication Number Publication Date
WO2013152625A1 (fr) 2013-10-17

Family

ID=49327058

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/000430 WO2013152625A1 (fr) 2013-04-12 Method and system for removing attached noise

Country Status (2)

Country Link
CN (1) CN103377472B (fr)
WO (1) WO2013152625A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117911630A (zh) * 2024-03-18 2024-04-19 之江实验室 Three-dimensional human body modeling method and apparatus, storage medium, and electronic device

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106254864B (zh) * 2016-09-30 2017-12-15 杭州电子科技大学 Method for detecting snowflake and speckle noise in surveillance video
CN109479120A (zh) * 2016-10-14 2019-03-15 富士通株式会社 Background model extraction apparatus, and traffic congestion state detection method and apparatus
CN109389563A (zh) * 2018-10-08 2019-02-26 天津工业大学 Adaptive random noise detection and correction method based on an sCMOS camera

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101256630A (zh) * 2007-02-26 2008-09-03 富士通株式会社 De-noising apparatus and method for improving document image binarization performance

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5162485B2 (ja) * 2009-02-02 2013-03-13 公益財団法人鉄道総合技術研究所 Method and apparatus for confirming the visibility of railway signals
CN102201058B (zh) * 2011-05-13 2013-06-05 北京航空航天大学 "Cat's eye" effect target recognition algorithm for a common-aperture active-passive imaging system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101256630A (zh) * 2007-02-26 2008-09-03 富士通株式会社 De-noising apparatus and method for improving document image binarization performance

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIN, MEIYU: "Research and Implementation of Image Segmentation Based on Inverse Perspective Mapping", CHINA MASTER'S THESES FULL-TEXT DATABASE, 17 October 2008 (2008-10-17) *
ZHAO, WEI: "The Detection Techniques of Motion Regions in Time-Differenced Image", CHINA MASTER'S THESES FULL-TEXT DATABASE, 16 December 2006 (2006-12-16) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117911630A (zh) * 2024-03-18 2024-04-19 之江实验室 Three-dimensional human body modeling method and apparatus, storage medium, and electronic device
CN117911630B (zh) * 2024-03-18 2024-05-14 之江实验室 Three-dimensional human body modeling method and apparatus, storage medium, and electronic device

Also Published As

Publication number Publication date
CN103377472A (zh) 2013-10-30
CN103377472B (zh) 2016-12-14

Similar Documents

Publication Publication Date Title
JP7164417B2 (ja) Determination of clean or dirty captured images
Maddalena et al. Towards benchmarking scene background initialization
KR101624210B1 (ko) Super-resolution image restoration method, and illegal parking and stopping enforcement system using the same
AU2011253910B2 (en) Method, apparatus and system for tracking an object in a sequence of images
EP3373248A1 (fr) Method, control device and system for tracking and photographing a target
US20120019728A1 (en) Dynamic Illumination Compensation For Background Subtraction
CN109949347B (zh) 人体跟踪方法、装置、系统、电子设备和存储介质
JP2011123887A (ja) Method and system for extracting pixels from a set of images
JP6788619B2 (ja) Static soiling detection and correction
Vidas et al. Hand-held monocular slam in thermal-infrared
WO2013152625A1 (fr) Method and system for removing attached noise
Panicker et al. Detection of moving cast shadows using edge information
US20130027550A1 (en) Method and device for video surveillance
Jeong et al. Probabilistic method to determine human subjects for low-resolution thermal imaging sensor
Yamashita et al. Removal of adherent noises from image sequences by spatio-temporal image processing
Yamashita et al. Noises removal from image sequences acquired with moving camera by estimating camera motion from spatio-temporal information
WO2018050644A1 (fr) Method, computer system and computer program product for detecting video surveillance camera tampering
Rout et al. Video object detection using inter-frame correlation based background subtraction
Amri et al. Unsupervised background reconstruction based on iterative median blending and spatial segmentation
EP3701492B1 (fr) Image restoration method
US20180268228A1 (en) Obstacle detection device
Zhou et al. Improving video segmentation by fusing depth cues and the ViBe algorithm
CN109840911B (zh) Method, system and computer-readable storage medium for determining a clean or dirty captured image
Zhang et al. Flicker parameters estimation in old film sequences containing moving objects
Mejia et al. Automatic moving object detection using motion and color features and bi-modal Gaussian approximation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13775117

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13775117

Country of ref document: EP

Kind code of ref document: A1