CN110889866A - Background updating method for depth map - Google Patents


Info

Publication number
CN110889866A
CN110889866A (application CN201911230183.5A)
Authority
CN
China
Prior art keywords
parallax
matrix
background
jump
pixel point
Prior art date
Legal status
Pending
Application number
CN201911230183.5A
Other languages
Chinese (zh)
Inventor
李行
李杨
丁鹏飞
霍志坤
蒋桐
古峥
Current Assignee
Nanjing Meijisen Information Technology Co Ltd
Original Assignee
Nanjing Meijisen Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Meijisen Information Technology Co Ltd filed Critical Nanjing Meijisen Information Technology Co Ltd
Priority to CN201911230183.5A priority Critical patent/CN110889866A/en
Publication of CN110889866A publication Critical patent/CN110889866A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/254 Analysis of motion involving subtraction of images
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20224 Image subtraction

Abstract

The invention provides a background updating method for a depth map. Each time a frame of the depth map is obtained, a parallax distribution matrix is updated through a moving-average algorithm, so that the resulting matrix effectively reflects the parallax distribution of each pixel point over a period of time. To handle parallax jumps, whether the parallax of a pixel point is stable is judged either by calculating the variance of its parallax distribution over a period of time or by counting the proportion of excessive parallax jumps over that period; this judgment determines whether the pixel's parallax is reliable and serves as a basis for foreground extraction. In addition, an optional second parallax is extracted from the parallax distribution matrix to avoid spurious foreground caused by parallax jumps. The method requires little computation, adapts quickly to environmental changes, achieves high accuracy, and effectively solves the parallax-jump problem of depth maps.

Description

Background updating method for depth map
Technical Field
The invention relates to a background updating method of a depth map.
Background
In the field of computer vision, a common method for target tracking is background subtraction: the current frame image is differenced against an extracted background to separate the foreground from the background.
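As a minimal sketch of the frame-differencing idea described above (the array values, function name and threshold here are illustrative, not taken from the patent):

```python
import numpy as np

def background_subtraction(frame, background, threshold):
    """Mark a pixel as foreground (1) when it differs from the
    background model by more than `threshold`, else background (0)."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return (diff > threshold).astype(np.uint8)

frame = np.array([[10, 10], [10, 50]], dtype=np.uint8)
background = np.array([[10, 10], [10, 10]], dtype=np.uint8)
mask = background_subtraction(frame, background, threshold=5)
# mask → [[0, 0], [0, 1]]: only the pixel that changed is foreground
```

The method's accuracy therefore depends entirely on the quality of the background model, which is the problem the invention addresses.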
An important problem of background subtraction is how to obtain an accurate background. If the background is inaccurate, it may cause (1) foreground targets to be treated as background, resulting in missed targets, or (2) background to be treated as foreground targets, resulting in false detections.
for binocular stereo vision, there are also specific cases that need to be dealt with: (1) due to the limitation of the parallax map algorithm, the background under different light conditions changes and needs to be updated; (2) due to algorithm limitation, the disparity map can be calculated incorrectly in certain specific areas, so that the disparity value jumps between a real value and a specific error value; (3) the binocular stereoscopic vision image is simple and can be regarded as a special gray image, and the gray value fluctuates in a narrow range, so that the requirement on the accuracy of the background is high;
the current background establishment method mainly comprises the following steps: (1) a statistical histogram method; (2) a median method of statistics; (3) multi-needle image averaging, etc.;
the methods mainly aim at the background calculation of a color or gray level image, have low accuracy, and cannot solve the requirements of parallax jump and high accuracy.
Disclosure of Invention
To overcome the above defects of the prior art, the present invention provides an algorithm that updates the background from real-time disparity maps. The algorithm comprises two parts: (1) updating the background each time a disparity map is received; (2) judging the foreground of the received disparity map using a background matrix and a reliability matrix. The two parts are described in detail below.
First, the background updating step
The flow of this step is shown in figure 1 of the specification.
Before performing the background update, the following matrix is established:
(1) Let the disparity map have width W and length H, with disparity values ranging from 0 to (D-1). A three-dimensional matrix is established with width W, length H and height D, and all cells are initialized to 0. A point P in the matrix is denoted by its position (w, h, d). This three-dimensional matrix preserves the distribution of parallax over a period of time and is called the disparity distribution matrix.
(2) A two-dimensional matrix is created, with width W and length H. The two-dimensional matrix is used for storing the average variance of the parallax distribution of a certain pixel point. This two-dimensional matrix is called a variance matrix.
(3) A two-dimensional matrix is created, with width W and length H. The two-dimensional matrix is used for storing the hopping proportion of a certain pixel point on the time sequence. This two-dimensional matrix is called a transition ratio matrix.
(4) A two-dimensional matrix is created, with width W and length H. The two-dimensional matrix is used for storing whether a certain pixel point is reliable or not. This two-dimensional matrix is called the reliability matrix.
(5) A three-dimensional matrix is created, with width W, length H, and height 2. This three-dimensional matrix serves to hold the final calculated background. This three-dimensional matrix is called the background matrix.
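The five matrices above can be allocated as follows. This is a sketch using NumPy with the 320 × 240 × 16 dimensions from the embodiment; the -1 sentinel standing in for NULL is an assumption of this illustration:

```python
import numpy as np

W, H, D = 320, 240, 16                # width, length, disparity range

hist = np.zeros((H, W, D))            # disparity distribution matrix
var = np.zeros((H, W))                # variance matrix
jump = np.zeros((H, W))               # jump (transition) ratio matrix
stable = np.ones((H, W), dtype=bool)  # reliability matrix
background = np.full((H, W, 2), -1)   # background matrix; -1 plays NULL
```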
For each frame of the input disparity map, the intermediate matrices and the background matrix are updated through the following steps.
(1) Traverse the disparity map calculated in real time; for each pixel point (w, h) with disparity value d, update the values in the disparity distribution matrix by the following formulas:
Update formula when K ≤ N:
Hist(w, h, d) = Hist(w, h, d) + 1;
Update formulas when K > N, applied to every disparity bin i from 0 to D-1:
Hist(w, h, i) = Hist(w, h, i) × (N-1)/N, for i ≠ d;
Hist(w, h, d) = Hist(w, h, d) × (N-1)/N + 1;
wherein Hist is the parallax distribution matrix, w is the width position, h is the length position, and d is the height position; K is the index of the current frame, and N is a modifiable integer parameter.
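A sketch of this moving-average update, under the reading that when K > N every disparity bin decays by (N-1)/N before the observed bin is incremented (function and variable names are illustrative):

```python
import numpy as np

def update_hist(hist, disparity, k, n):
    """Update the (H, W, D) disparity distribution matrix with one
    (H, W) disparity map; k is the 1-based frame index, n the window N."""
    H, W, _ = hist.shape
    rows, cols = np.indices((H, W))
    if k > n:
        hist *= (n - 1) / n               # decay every bin
    hist[rows, cols, disparity] += 1      # count the observed disparity
    return hist

hist = np.zeros((1, 2, 4))
d_map = np.array([[1, 3]])                # a constant scene
for k in range(1, 4):
    update_hist(hist, d_map, k, n=2)
# after frames 1-3 with N = 2 the observed bins hold 2.0 each
```

Because of the decay, the count in each bin stays bounded near N, so the histogram tracks the recent distribution rather than the full history.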
(2) Calculate the variance of the distribution histogram of each pixel point and store it in the variance matrix. The calculation formula is:
Var(w, h) = Σ_{d=0}^{D-1} Hist(w, h, d) · (d - mean_d)² / Σ_{d=0}^{D-1} Hist(w, h, d);
where mean_d is the weighted average disparity, and Var(w, h) denotes the value at width position w and length position h.
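The weighted variance can be sketched as follows; this is a reconstruction under the assumption that the histogram counts act as weights (names are illustrative):

```python
import numpy as np

def disparity_variance(hist):
    """Per-pixel weighted variance of the (H, W, D) disparity histogram."""
    d = np.arange(hist.shape[2])
    total = hist.sum(axis=2)
    total = np.where(total == 0, 1, total)           # guard empty histograms
    mean_d = (hist * d).sum(axis=2) / total          # weighted average disparity
    return (hist * (d - mean_d[..., None]) ** 2).sum(axis=2) / total

hist = np.zeros((1, 1, 4))
hist[0, 0, 2] = 10        # all mass on one disparity: variance is 0
v = disparity_variance(hist)
# v[0, 0] → 0.0
```

A pixel whose disparity concentrates in one bin gets variance near 0; a pixel that jumps between bins gets a large variance.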
(3) (Optional substitute for step (2).) For each pixel point, calculate the difference Ddiff between its disparity and the disparity at the same position in the previous disparity map, and update the jump proportion matrix by the following formulas:
Jump(w, h) = Jump(w, h) × (K-1)/K, when Ddiff ≤ Dthreshold;
Jump(w, h) = Jump(w, h) × (K-1)/K + 1/K, when Ddiff > Dthreshold;
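For a single pixel, the jump-ratio update can be sketched as follows (parameter names follow the formulas above; the numeric values are illustrative):

```python
def update_jump(jump_ratio, d_diff, k, d_threshold):
    """Running ratio of frames whose disparity jumped by more than
    d_threshold; k plays the role of the window parameter K."""
    decayed = jump_ratio * (k - 1) / k
    return decayed + 1 / k if d_diff > d_threshold else decayed

r = 0.0
r = update_jump(r, d_diff=5, k=10, d_threshold=2)   # a jump: rises to 0.1
r = update_jump(r, d_diff=0, k=10, d_threshold=2)   # no jump: decays to 0.09
```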
(4) For a certain pixel point, judge whether its parallax is reliable by either of the following two methods; the two methods may substitute for each other.
Judgment method 1:
Stable(w,h) = (Var(w,h) < var_th);
Judgment method 2:
Stable(w,h) = (Jump(w,h) < jump_th);
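Both judgment methods reduce to a threshold test; a sketch follows (the variance threshold 0.3 comes from the embodiment below, while the jump threshold value is an assumption of this illustration):

```python
import numpy as np

def reliability(var, jump_ratio, var_th=0.3, jump_th=0.1, use_variance=True):
    """Reliability matrix via method 1 (variance) or method 2 (jump ratio)."""
    return var < var_th if use_variance else jump_ratio < jump_th

var = np.array([[0.1, 0.5]])
stable = reliability(var, None)
# stable → [[True, False]]: only the low-variance pixel is reliable
```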
(5) For pixel point (w, h), periodically extract the background map by the following formulas:
Background(w, h, 0) = argmax_d Hist(w, h, d);
Background(w, h, 1) = d, if Hist(w, h, d) is the second-largest count and Hist(w, h, d)/N > k, where k is a variable parameter ranging from 0 to 0.5;
Background(w, h, 1) = NULL, otherwise.
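A sketch of the extraction step, under the reading that Background(w, h, 0) stores the most frequent disparity, and the second-most-frequent disparity qualifies as Background(w, h, 1) only when its share of the histogram exceeds k (-1 stands in for NULL):

```python
import numpy as np

def extract_background(hist, n, k=0.2, null=-1):
    """Extract primary and optional second background disparities
    from the (H, W, D) histogram; n is the window parameter N."""
    order = np.argsort(hist, axis=2)
    primary = order[..., -1]                          # most frequent disparity
    second = order[..., -2]                           # runner-up disparity
    count = np.take_along_axis(hist, second[..., None], axis=2)[..., 0]
    second = np.where(count / n > k, second, null)    # keep only if share > k
    return np.stack([primary, second], axis=2)

hist = np.zeros((1, 1, 4))
hist[0, 0, 1] = 7         # dominant disparity
hist[0, 0, 3] = 3         # second mode with share 0.3 > k
bg = extract_background(hist, n=10)
# bg[0, 0] → [1, 3]
```

The second background value is what lets a pixel whose disparity oscillates between two stable values still be matched as background.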
Second, the foreground judgment step
The flow of this step is shown in figure 2 of the specification.
Let the current frame image be Cur and the foreground image be Foreground; both have width W and length H. For a certain pixel (w, h), let the current frame parallax be d = Cur(w, h).
(1) First check whether this position is reliable. If Stable(w, h) = false, then Foreground(w, h) = 0 and the process ends; otherwise continue.
(2) Judge whether the pixel point matches the primary background: if abs(d - Background(w, h, 0)) < Background_th, it is judged as background, Foreground(w, h) = 0, and the process ends; otherwise continue.
(3) Judge whether Background(w, h, 1) exists; if so, judge whether the pixel matches it: if abs(d - Background(w, h, 1)) < Background_th, it is judged as background, Foreground(w, h) = 0, and the process ends; otherwise continue.
(4) Set the value of the foreground image: Foreground(w, h) = d.
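The per-pixel decision chain above can be sketched as a single function (-1 stands in for a missing second background; the names are illustrative):

```python
def classify_pixel(d, stable, bg0, bg1, bg_th, null=-1):
    """Return the foreground value for one pixel: 0 for background or
    an unreliable disparity, otherwise the disparity itself."""
    if not stable:
        return 0                                 # step (1): unreliable
    if abs(d - bg0) < bg_th:
        return 0                                 # step (2): primary background
    if bg1 != null and abs(d - bg1) < bg_th:
        return 0                                 # step (3): second background
    return d                                     # step (4): foreground

assert classify_pixel(9, True, bg0=2, bg1=-1, bg_th=2) == 9   # foreground
assert classify_pixel(3, True, bg0=2, bg1=-1, bg_th=2) == 0   # near background
assert classify_pixel(9, True, bg0=2, bg1=9, bg_th=2) == 0    # second background
```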
Drawings
Fig. 1 is an update background flow diagram.
Fig. 2 is a flow chart of extracting a foreground.
Detailed Description
In order that those skilled in the art may better understand the present invention, a detailed description is given below in conjunction with the accompanying drawings and a specific embodiment.
The algorithm mainly carries out background updating and foreground judgment on a scene under a fixed visual angle.
For an input disparity map of size 320 × 240 with a disparity calculation range of 0 to 15, the processing is as follows:
Establish a three-dimensional parallax distribution matrix of width 320, length 240 and height 16, with all initial values 0. Establish a two-dimensional variance matrix of width 320 and length 240, with initial value 0.
Establish a two-dimensional reliability matrix of width 320 and length 240, with initial value 1.
Establish a three-dimensional background matrix of width 320, length 240 and height 2, with initial values NULL.
Traverse each newly input disparity map and update the matrices. The reliability matrix is obtained from the variance matrix with a variance threshold of 0.3. When the background map is extracted, the parameter k for the second background takes the value 0.2.

Claims (4)

1. A background updating method for a depth map, comprising the steps of: establishing a parallax distribution matrix, updating the parallax distribution matrix, updating a background matrix, and extracting the background matrix; characterized in that each pixel point in the background matrix has a selectable second background value.
2. A background update method according to claim 1, further comprising generating a reliability matrix for determining whether the pixel disparity is reliable.
3. A background update method as claimed in claim 2, wherein the reliability matrix is obtained by calculating and updating variance.
4. A background update method according to claim 2, wherein the reliability matrix is obtained by calculating and updating a jump ratio.
CN201911230183.5A 2019-12-04 2019-12-04 Background updating method for depth map Pending CN110889866A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911230183.5A CN110889866A (en) 2019-12-04 2019-12-04 Background updating method for depth map


Publications (1)

Publication Number Publication Date
CN110889866A true CN110889866A (en) 2020-03-17

Family

ID=69750506


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120307023A1 (en) * 2010-03-05 2012-12-06 Sony Corporation Disparity distribution estimation for 3d tv
CN104778721A (en) * 2015-05-08 2015-07-15 哈尔滨工业大学 Distance measuring method of significant target in binocular image
CN105809715A (en) * 2016-03-07 2016-07-27 南京航空航天大学 Inter-frame cumulative variation matrix-based visual moving object detection method
CN107274477A (en) * 2017-05-03 2017-10-20 西安电子科技大学 A kind of background modeling method based on three dimensions top layer
CN107301642A (en) * 2017-06-01 2017-10-27 中国人民解放军国防科学技术大学 A kind of full-automatic prospect background segregation method based on binocular vision


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
郝志成 and 吴川: "Moving target detection in dynamic images based on a stability matrix", Acta Optica Sinica (《光学学报》) *
颜卓: "Research on a moving object detection system based on binocular vision", China Master's Theses Full-text Database, Information Science and Technology series (《中国优秀硕士学位论文全文数据库 信息科技辑》) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200317