WO2016165064A1 - Robust foreground detection method based on multi-view learning - Google Patents

Robust foreground detection method based on multi-view learning

Info

Publication number
WO2016165064A1
WO2016165064A1 (PCT/CN2015/076533)
Authority
WO
WIPO (PCT)
Prior art keywords
foreground
background
feature
pixel
probability
Prior art date
Application number
PCT/CN2015/076533
Other languages
French (fr)
Chinese (zh)
Inventor
王坤峰
王飞跃
刘玉强
苟超
Original Assignee
中国科学院自动化研究所 (Institute of Automation, Chinese Academy of Sciences)
Priority date
Filing date
Publication date
Application filed by 中国科学院自动化研究所 (Institute of Automation, Chinese Academy of Sciences)
Priority to PCT/CN2015/076533
Publication of WO2016165064A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features


Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

Provided is a robust foreground detection method based on multi-view learning, comprising: acquiring a reference background image from an input video by temporal median filtering, and performing iterative search and multi-scale fusion on the current image and the reference background image to acquire heterogeneous features; calculating the conditional probability density of the foreground class and the conditional probability density of the background class using the conditional independence of the heterogeneous features, and calculating the posterior probability of the foreground and the posterior probability of the background with the Bayes rule from the foreground likelihood, the background likelihood, and the prior probability; and constructing an energy function of a Markov random field model from the posterior probability of the foreground, the posterior probability of the background, and a spatiotemporal consistency constraint, and minimizing the energy function with a belief propagation algorithm to obtain the segmentation of the foreground and the background. The present invention achieves robust foreground detection in complex and challenging environments.

Description

Robust foreground detection method based on multi-view learning
Technical field
The invention relates to intelligent video surveillance technology, and in particular to a robust foreground detection method based on multi-view learning.
Background art
Intelligent video surveillance is an important means of information collection, and foreground detection (also called background subtraction) is a challenging low-level problem in intelligent video surveillance research. On the basis of foreground detection, further applications such as target tracking, recognition, and anomaly detection can be realized. The basic principle of foreground detection is to compare the current image of the video scene with a background model and to detect regions with significant differences. Although seemingly simple, foreground detection often encounters three challenges in practice: motion shadows, illumination changes, and image noise. A motion shadow is caused by a foreground target occluding the light source; it is a hard shadow on a sunny day and a soft shadow on a cloudy day. In either form, a motion shadow is easily detected as foreground and interferes with extracting the size and shape of the segmented foreground target. Illumination changes are common in traffic scenes: as the sun moves across the sky, the illumination changes slowly; as the sun moves into or out of clouds, the illumination may change rapidly. In addition, noise is inevitably introduced during image acquisition, compression, and transmission; if the signal-to-noise ratio is too low, it is difficult to distinguish foreground targets from the background scene.
Foreground detection techniques can be divided into sparse models, parametric models, nonparametric models, machine learning models, and so on. Sparse models mainly use variants of principal component analysis and matrix decomposition to model the background as a low-rank representation and the foreground as sparse outliers; however, their computational complexity is high, and they have difficulty detecting foreground whose color is similar to the background. Parametric models describe the background with a chosen probability distribution. Nonparametric models offer greater flexibility in probability density estimation. Machine learning models use methods such as support vector machines and neural networks to classify foreground and background.
The prior art has the following problems. First, it uses only the brightness feature, which is sensitive to illumination changes and motion shadows. Second, it builds only a background model and identifies foreground pixels as outliers, which makes it difficult to distinguish foreground whose color is similar to the background. Third, it does not exploit the spatiotemporal consistency constraints in the video sequence.
Summary of the invention
The robust foreground detection method based on multi-view learning provided by the invention can accurately segment the foreground from the background.
According to an aspect of the present invention, a robust foreground detection method based on multi-view learning is provided, including:
obtaining a reference background image from the input video by temporal median filtering, and performing iterative search and multi-scale fusion on the current image and the reference background image to obtain heterogeneous features;
calculating the conditional probability density of the foreground class and the conditional probability density of the background class using the conditional independence of the heterogeneous features, and calculating the posterior probability of the foreground and the posterior probability of the background with the Bayes rule from the foreground likelihood, the background likelihood, and the prior probability;
constructing the energy function of a Markov random field model from the posterior probability of the foreground, the posterior probability of the background, and a spatiotemporal consistency constraint, and minimizing the energy function with a belief propagation algorithm to obtain the segmentation of foreground and background.
The method provided by the embodiments of the present invention calculates the posterior probabilities of foreground and background with the Bayes rule from the foreground likelihood, the background likelihood, and the prior probability, and constructs the energy function of a Markov random field model from these posterior probabilities and a spatiotemporal consistency constraint, so that the segmentation of foreground and background can be realized accurately with a belief propagation algorithm.
Brief description of the drawings
FIG. 1 is a flowchart of the robust foreground detection method based on multi-view learning according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an input video image and a reference background image according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the pyramid search templates according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the texture variation feature based on iterative search and multi-scale fusion according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the RGB color model according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the brightness variation feature and the chromaticity variation feature according to an embodiment of the present invention;
FIG. 7 is a flowchart of the candidate background acquisition method according to an embodiment of the present invention;
FIG. 8 shows heterogeneous feature frequency histograms according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of image labeling results according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of foreground and background segmentation results according to an embodiment of the present invention.
Detailed description
The robust foreground detection method based on multi-view learning provided by the embodiments of the present invention is described in detail below with reference to the accompanying drawings.
FIG. 1 is a flowchart of the method.
Referring to FIG. 1, in step S101, a reference background image is obtained from the input video by temporal median filtering, and iterative search and multi-scale fusion are performed on the current image and the reference background image to obtain heterogeneous features.
In step S102, the conditional probability density of the foreground class and the conditional probability density of the background class are calculated using the conditional independence of the heterogeneous features, and the posterior probabilities of foreground and background are calculated with the Bayes rule from the foreground likelihood, the background likelihood, and the prior probability.
In step S103, the energy function of a Markov random field model is constructed from the posterior probability of the foreground, the posterior probability of the background, and a spatiotemporal consistency constraint, and the energy function is minimized with a belief propagation algorithm to obtain the segmentation of foreground and background.
Further, obtaining the reference background image from the input video by temporal median filtering includes:
reading each frame of the input video;
computing, for each pixel, the median value over a threshold time window;
obtaining the reference background image from the per-pixel median values.
Here, the threshold time window is the duration of 500 frames; see the input video image and reference background image shown in FIG. 2, where (a) is an input video image and (b) is the reference background image.
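As an illustration, a minimal sketch of this step in Python (the function name and the list-of-frames input format are assumptions of this illustration, not part of the patent):

```python
import numpy as np

def reference_background(frames):
    """Per-pixel temporal median over a window of frames (500 frames in
    the embodiment); the result is the reference background image."""
    stack = np.stack(frames, axis=0)                 # (T, H, W, 3) uint8
    return np.median(stack, axis=0).astype(np.uint8)
```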
Further, when the heterogeneous feature is the texture variation feature, performing iterative search and multi-scale fusion on the current image and the reference background image to obtain the heterogeneous feature includes:
calculating the texture variation feature according to formula (1). [Formula (1) is rendered as an image in the source.]
In formula (1), TV_i is the texture variation feature, i is the current pixel, [I_R(i), I_G(i), I_B(i)] are the RGB color values of the current pixel, j is the background pixel corresponding to the current pixel, [E_R(j), E_G(j), E_B(j)] are the RGB color values of the background pixel, m ∈ N(i) ranges over the spatial neighborhood of the current pixel, and n ∈ N(j) ranges over the spatial neighborhood of the background pixel.
Here, for any pixel i of the current image, its spatial neighborhood N(i) is taken to be the 8-neighborhood.
The texture variation feature is robust to motion shadows and illumination changes, but it is sensitive to dynamic backgrounds: if not handled properly, a swaying textured background region can produce large texture variation. To address this, the pixel i in the current image is matched to a pixel j in the reference background image by an iterative search and multi-scale fusion strategy.
FIG. 3 shows the pyramid search templates: (a) is the large pyramid search template and (b) is the small pyramid search template. The search proceeds as follows. First, a coarse search is performed with the large pyramid template: before the first iteration, the center of the search template is initialized at pixel i; each iteration examines at most 9 positions, and the optimal position (the one minimizing TV_i) becomes the center of the next iteration; this is repeated until the optimal position is exactly the template center. Second, a fine search is performed with the small pyramid template, which examines only 5 positions, and the position minimizing TV_i is taken as the optimum. Finally, the pixel j in the reference background image corresponding to pixel i of the current image is obtained.
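For concreteness, a sketch of the coarse-to-fine search. The template offsets below follow the standard large/small diamond-search patterns with 9 and 5 positions, which matches the position counts in the text but is otherwise an assumption; `tv_cost(i, j)`, which evaluates formula (1) for a candidate pixel pair, is assumed to be supplied:

```python
LARGE = [(0, 0), (-2, 0), (2, 0), (0, -2), (0, 2),
         (-1, -1), (-1, 1), (1, -1), (1, 1)]        # 9 positions
SMALL = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]  # 5 positions

def pyramid_search(tv_cost, i):
    """Coarse search with the large template until its centre is optimal,
    then one fine pass with the small template; returns the background
    pixel j matched to the current pixel i (both are (row, col) tuples)."""
    j = i
    while True:
        cand = [(j[0] + dy, j[1] + dx) for dy, dx in LARGE]
        best = min(cand, key=lambda p: tv_cost(i, p))
        if best == j:        # the optimum is the template centre: stop
            break
        j = best
    cand = [(j[0] + dy, j[1] + dx) for dy, dx in SMALL]
    return min(cand, key=lambda p: tv_cost(i, p))
```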
To reduce the risk of the iterative search becoming trapped in a local minimum, the invention exploits the complementary information contained in multi-scale images for feature extraction; see the schematic diagram of the texture variation feature based on iterative search and multi-scale fusion in FIG. 4.
First, the current image and the reference background image are scaled to 1/2 and 1/4 of the original size, and feature extraction is performed on the original-size image as well as on the scaled images. Second, the features of the three scales are fused at the original scale, the fusion operator being the median.
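A sketch of the fusion step, assuming OpenCV for resizing and a caller-supplied `extract(image, background)` returning a per-pixel feature map (both are assumptions of this illustration):

```python
import numpy as np
import cv2

def multiscale_feature(img, bg, extract):
    """Extract the feature at full, 1/2 and 1/4 scale, bring each map
    back to the original size, and fuse the three maps with the median."""
    h, w = img.shape[:2]
    maps = []
    for s in (1.0, 0.5, 0.25):
        im = img if s == 1.0 else cv2.resize(img, None, fx=s, fy=s)
        bk = bg if s == 1.0 else cv2.resize(bg, None, fx=s, fy=s)
        f = extract(im, bk).astype(np.float32)
        maps.append(cv2.resize(f, (w, h)))       # back to original scale
    return np.median(np.stack(maps, axis=0), axis=0)
```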
Further, when the heterogeneous feature is the brightness variation feature, performing iterative search and multi-scale fusion on the current image and the reference background image to obtain the heterogeneous feature includes:
calculating the brightness variation feature according to formula (2):
BV_i = (α_i - 1) ||OE_j||    (2)
where BV_i is the brightness variation feature, α_i is the ratio of the brightness of the current pixel to the brightness of the background pixel, E_j is the RGB color value of the background pixel, and ||OE_j|| is the Euclidean distance from the origin O to E_j.
Here, the difference between the current pixel and the reference background pixel in RGB space is decomposed into the brightness variation feature BV and the chromaticity variation feature CV; see the RGB color model shown in FIG. 5. For a pixel i ∈ I in the current image I, the brightness change of I_i relative to the reference background pixel value E_j is computed. Let [I_R(i), I_G(i), I_B(i)] denote the RGB color values of the current pixel i and [E_R(j), E_G(j), E_B(j)] those of the corresponding background pixel j. The procedure is: first, compute the ratio α_i of the current pixel brightness to the background brightness, given by formula (3); second, the brightness variation feature BV_i of pixel i is the signed distance of α_i E_j relative to E_j, given by formula (2). Formula (3) is:
α_i = (I_R(i) E_R(j) + I_G(i) E_G(j) + I_B(i) E_B(j)) / (E_R(j)² + E_G(j)² + E_B(j)²)    (3)
From formula (2), ||OE_j|| denotes the Euclidean distance from the origin O to E_j. If the current pixel brightness equals the background brightness, BV_i = 0; if it is smaller than the background brightness, BV_i < 0; if it is larger, BV_i > 0. The brightness variation BV_i therefore reflects the difference in brightness between the current pixel and the corresponding background pixel.
Further, when the heterogeneous feature is the chromaticity variation feature, performing iterative search and multi-scale fusion on the current image and the reference background image to obtain the heterogeneous feature includes:
calculating the chromaticity variation feature according to formula (4):
CV_i = ||I_i - α_i E_j|| = √( (I_R(i) - α_i E_R(j))² + (I_G(i) - α_i E_G(j))² + (I_B(i) - α_i E_B(j))² )    (4)
where CV_i is the chromaticity variation feature, α_i is the ratio of the brightness of the current pixel to the brightness of the background pixel, [I_R(i), I_G(i), I_B(i)] are the RGB color values of the current pixel, and [E_R(j), E_G(j), E_B(j)] are the RGB color values of the background pixel.
Here, the procedure for the brightness variation feature and the chromaticity variation feature based on iterative search and multi-scale fusion is as follows: first, the current image and the reference background image are scaled to 1/2 and 1/4 of the original size, and feature extraction is performed on the original-size image as well as on the scaled images; second, the features of the three scales are fused at the original scale to obtain the final brightness variation feature and chromaticity variation feature. See the schematic diagram in FIG. 6.
Both BV_i and CV_i are distances in the RGB color space and share the same unit of measurement. The invention quantizes the values of these two features directly to integers, which enables highly efficient kernel density estimation.
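A vectorized sketch of the decomposition. It takes α_i to be the least-squares projection coefficient of I_i onto the direction of E_j, which is our geometric reading of FIG. 5 and of formulas (2) to (4), with BV the signed distance along OE_j and CV the perpendicular distance:

```python
import numpy as np

def brightness_chromaticity(I, E):
    """BV (formula (2)) and CV (formula (4)) from matched current and
    background RGB values; I and E are (..., 3) arrays."""
    I = I.astype(np.float64)
    E = E.astype(np.float64)
    norm_E = np.linalg.norm(E, axis=-1)                         # ||OE_j||
    alpha = (I * E).sum(axis=-1) / np.maximum(norm_E**2, 1e-9)  # formula (3)
    BV = (alpha - 1.0) * norm_E
    CV = np.linalg.norm(I - alpha[..., None] * E, axis=-1)
    # integer quantization for efficient kernel density estimation
    return np.rint(BV).astype(int), np.rint(CV).astype(int)
```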
Since the brightness variation feature, the chromaticity variation feature, and the texture variation feature reflect different aspects of the image, the probability distributions of the three features are conditionally independent given the pixel class label C, as expressed by formula (5):
p(BV, CV, TV | C) = p(BV | C) p(CV | C) p(TV | C)    (5)
where the class label C may be the foreground class or the background class.
Further, calculating the conditional probability density of the foreground class and the conditional probability density of the background class using the conditional independence of the heterogeneous features includes:
calculating the conditional probability density of the foreground class according to formula (6):
p(BV | FG) = p(BV | CV > τ_CV or TV > τ_TV),
p(CV | FG) = p(CV | BV > τ_BV or TV > τ_TV),    (6)
p(TV | FG) = p(TV | BV > τ_BV or CV > τ_CV),
where FG is the foreground class; p(BV|FG), p(CV|FG), and p(TV|FG) are the probability densities of the brightness, chromaticity, and texture variation features conditioned on the foreground class; and τ_BV, τ_CV, and τ_TV are the thresholds of the brightness, chromaticity, and texture variation features, respectively.
Here, trusted foreground pixels are selected in the current image using the brightness, chromaticity, and texture variation features; the frequency histograms of the three features are accumulated and continuously updated; and the conditional probability density of the foreground class is estimated by the multi-view learning method.
It follows from formula (6) that if any one of the brightness, chromaticity, or texture variation features takes a sufficiently large value, the pixel is a trusted foreground pixel and can be added to the frequency histograms used to estimate the foreground-class conditional probability densities of the other features. In this embodiment, τ_CV = 20 and τ_TV = 3.6, while τ_BV is set relative to the median of BV over the whole image, which compensates for global brightness changes of the image. [The exact expression for τ_BV is rendered as an image in the source.]
Further, calculating the conditional probability density of the foreground class and the conditional probability density of the background class using the conditional independence of the heterogeneous features further includes:
obtaining a trusted foreground region from the current image;
dilating the trusted foreground region to obtain an expanded trusted foreground region;
taking, in the current image, the region outside the expanded trusted foreground region as the candidate background region, and calculating the conditional probability density of the background class from the candidate background region.
Here, the candidate background acquisition method is shown in the flowchart of FIG. 7. If the features of certain pixels in the current image satisfy BV > τ_BV or CV > τ_CV or TV > τ_TV, those pixels belong to the trusted foreground region.
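A sketch of this step (the dilation kernel size is an assumption; the text does not specify it):

```python
import numpy as np
import cv2

def candidate_background(BV, CV, TV, tau_bv, tau_cv, tau_tv, ksize=15):
    """Pixels with BV > tau_bv or CV > tau_cv or TV > tau_tv form the
    trusted foreground; after dilating that region, its complement is
    the candidate background used for the background-class histograms."""
    fg = ((BV > tau_bv) | (CV > tau_cv) | (TV > tau_tv)).astype(np.uint8)
    dilated = cv2.dilate(fg, np.ones((ksize, ksize), np.uint8))
    return dilated == 0        # boolean candidate-background mask
```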
FIG. 8 shows the heterogeneous feature frequency histograms. Panels (a) and (d) show the brightness variation feature, panels (b) and (e) the chromaticity variation feature, and panels (c) and (f) the texture variation feature. Panels (a), (b), and (c) are feature frequency histograms based on ground truth; panels (d), (e), and (f) are feature frequency histograms based on multi-view learning.
Here, kernel density estimation is used to model the foreground-class and background-class conditional probability densities: the brightness and chromaticity variation values are quantized to integers, the texture variation values are quantized to bins of 0.1, and a Gaussian kernel is used, with the kernel widths of the three features set to σ_BV = 2.0, σ_CV = 2.0, and σ_TV = 0.2.
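A sketch of the quantized kernel density estimate: a frequency histogram is accumulated over the quantized feature values and then smoothed with the Gaussian kernel (the function shape is an assumption; the bin widths and kernel widths are those stated above):

```python
import numpy as np

def kde_from_histogram(hist, bin_width, sigma):
    """Turn a frequency histogram over quantized bins into a conditional
    probability density, e.g. bin_width=1, sigma=2.0 for BV and CV, or
    bin_width=0.1, sigma=0.2 for TV."""
    centers = np.arange(len(hist)) * bin_width
    K = np.exp(-0.5 * ((centers[:, None] - centers[None, :]) / sigma) ** 2)
    pdf = K @ np.asarray(hist, dtype=float)
    return pdf / pdf.sum()
```

Accumulating counts first and smoothing once per frame keeps each update cheap, which is the stated motivation for quantizing the feature values.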
Further, calculating the posterior probability of the foreground and the posterior probability of the background with the Bayes rule from the foreground likelihood, the background likelihood, and the prior probability includes:
calculating the posterior probability of the foreground according to formula (7):
P_i(FG | x) = p(x | FG) P_i(FG) / ( p(x | FG) P_i(FG) + p(x | BG) P_i(BG) )    (7)
where P_i(FG|x) is the posterior probability of the foreground, p(x|C) is the foreground likelihood or the background likelihood, and P_i(C) is the prior probability of the foreground or of the background.
Further, calculating the posterior probability of the foreground and the posterior probability of the background with the Bayes rule from the foreground likelihood, the background likelihood, and the prior probability also includes:
calculating the posterior probability of the background according to formula (8):
P_i(BG | x) = 1 - P_i(FG | x)    (8)
where P_i(FG|x) is the posterior probability of the foreground and P_i(BG|x) is the posterior probability of the background.
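Formulas (7) and (8) as a sketch, with the likelihoods obtained from the naive-Bayes factorization of formula (5):

```python
def posterior_foreground(p_x_fg, p_x_bg, prior_fg):
    """Bayes rule: P(FG|x) from the class likelihoods and the foreground
    prior; the background posterior is its complement (formula (8))."""
    num = p_x_fg * prior_fg
    den = num + p_x_bg * (1.0 - prior_fg)
    return num / den if den > 0 else prior_fg  # guard against zero likelihoods
```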
Here, the prior probability can vary spatially: compared with the tree, building, and sky regions of the scene, the road region should have a larger foreground prior probability. The prior probability can also vary over time: if a pixel has recently been labeled as foreground more frequently than before, its foreground prior probability increases; otherwise it decreases. The invention therefore constructs a dynamic prior model from the labeling results of previous images, given by formula (9):
P_{i,t+1}(FG) = (1 - ρ) P_{i,t}(FG) + ρ L_{i,t}    (9)
where P_{i,t+1}(FG) is the foreground prior probability of pixel i at time t+1, P_{i,t}(FG) is the foreground prior probability of pixel i at time t, L_{i,t} is the label of pixel i at time t, and ρ is the learning rate parameter.
If pixel i is labeled as foreground at time t, then L_{i,t} = 1; if it is labeled as background, then L_{i,t} = 0. The learning rate ρ is set to 0.001, and P_{i,t}(FG) is initialized to 0.2 at system startup.
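Formula (9) as a one-line update, vectorizable over the whole prior map:

```python
def update_prior(prior_fg, label, rho=0.001):
    """Dynamic prior of formula (9); label is 1 where the pixel was
    marked foreground at time t and 0 otherwise."""
    return (1.0 - rho) * prior_fg + rho * label
```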
FIG. 9 shows image labeling results: panel (a) is the foreground prior probability of the pixels, and panel (b) is the foreground posterior probability. Panel (a) shows that the road region has a larger foreground prior probability than the tree region; panel (b) shows that the true foreground target region has a larger foreground posterior probability than the other regions.
Further, constructing the energy function of the Markov random field model from the posterior probability of the foreground, the posterior probability of the background, and the spatiotemporal consistency constraint includes:
calculating the energy function according to formula (10):
E(f) = Σ_{i∈I} D_i(f_i) + Σ_{(i,u)∈N} W(f_i, f_u)    (10)
where f is the labeling, E(f) is the energy function, D_i(f_i) is the data term, and W(f_i, f_u) is the smoothing term.
Here, let I be the set of pixels in the current image and L the label set. A label is an estimate for each pixel, with the foreground estimate labeled 1 and the background estimate labeled 0; the labeling f assigns a label f_i ∈ L to each pixel i ∈ I. Under the Markov random field framework, labels vary slowly in image space, but at some locations, such as target boundaries, they can change rapidly; the quality of the labeling depends on the energy function E(f).
In formula (10), N denotes the edge set of the graphical model; the data term D_i(f_i) measures the cost of assigning label f_i to pixel i; the smoothing term W(f_i, f_u) measures the cost of assigning labels f_i and f_u to two spatially adjacent pixels i and u. The labeling that minimizes the energy function corresponds to the maximum a posteriori estimate of the Markov random field.
The data term D_i(f_i) consists of two parts. The first part, D_i^(1)(f_i), is determined by the posterior probability that each pixel belongs to the foreground and the posterior probability that it belongs to the background, as given by formula (11). [Formula (11) is rendered as an image in the source.]
The data term D_i(f_i) imposes a constraint on each pixel, encouraging the label to be consistent with the pixel observation.
The second part, D_i^(2)(f_i), applies a temporal consistency constraint to the labels, under the assumption that a pair of associated pixels in consecutive images should carry the same label. When computing optical flow, the current image (the image at time t) is mapped backward to the previous frame (the image at time t-1), associating each current pixel i ∈ I with a pixel v in the previous frame. Since the label f_v is known, D_i^(2)(f_i) is given by formula (12):
D_i^(2)(f_i) = γ if f_i ≠ f_v, and D_i^(2)(f_i) = 0 otherwise    (12)
where γ > 0 is a weight parameter. To account for noise, large motion, boundary effects, and the like, γ is set to 0.5.
Combining the two parts, the data term becomes D_i(f_i) = D_i^(1)(f_i) + D_i^(2)(f_i). Note, however, that if the frame rate of the video is very low, the temporal consistency constraint is unavailable, in which case the data term reduces to D_i(f_i) = D_i^(1)(f_i).
The smoothing term W(f_i, f_u) encourages spatial consistency of the labels: if two spatially adjacent pixels carry different labels, a cost is incurred, as given by formula (13):
W(f_i, f_u) = φ · Z(I_i, I_u) if f_i ≠ f_u, and W(f_i, f_u) = 0 otherwise    (13)
where φ = 5.0 is a weight parameter and Z(I_i, I_u) is a decreasing function controlled by the brightness difference between pixels i and u, given by formula (14):
Z(I_i, I_u) = exp( -(I_i - I_u)² / (2 σ_I) )    (14)
where σ_I is the variance parameter, set to 400.
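To make the objective concrete, a sketch that evaluates the energy of formula (10) for a binary label image. `D` is assumed to be a precomputed (H, W, 2) data-cost array combining formulas (11) and (12), and `gray` the grayscale current image; the text minimizes this energy with loopy belief propagation, which is not reproduced here:

```python
import numpy as np

def mrf_energy(labels, D, gray, phi=5.0, sigma_I=400.0):
    """Sum of per-pixel data costs plus the pairwise smoothing term of
    formulas (13)-(14) over 4-connected neighbours."""
    H, W = labels.shape
    g = gray.astype(np.float64)
    e = D[np.arange(H)[:, None], np.arange(W)[None, :], labels].sum()
    for dy, dx in ((0, 1), (1, 0)):             # right and down neighbours
        a, b = labels[:H - dy, :W - dx], labels[dy:, dx:]
        diff = (g[:H - dy, :W - dx] - g[dy:, dx:]) ** 2
        Z = np.exp(-diff / (2.0 * sigma_I))     # formula (14)
        e += (phi * Z * (a != b)).sum()         # formula (13)
    return e
```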
图10为本发明实施例提供的前景和背景的分割结果示意图。FIG. 10 is a schematic diagram of segmentation results of foreground and background according to an embodiment of the present invention.
如图10所示,第一列为实施例编号,第二列为原始图像,第三列为前景检测结果,第四列为ground-truth。根据定量分析,本发明的平均召回率(recall)为0.8271,平均精度(precision)为0.8316,平均F-measure为0.8252。As shown in FIG. 10, the first column is the embodiment number, the second column is the original image, the third column is the foreground detection result, and the fourth column is the ground-truth. According to the quantitative analysis, the average recall rate (recall) of the present invention was 0.8271, the average precision was 0.8316, and the average F-measure was 0.8252.
The scenes contain motion shadows, illumination changes, image noise, and other disturbances. The multi-view-learning-based robust foreground detection method proposed by the present invention is strongly robust against such disturbances and obtains accurate foreground detection results.
The above are merely specific embodiments of the present invention, but the scope of protection of the present invention is not limited thereto. Any change or substitution readily conceivable by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the scope of protection of the present invention. Therefore, the scope of protection of the present invention shall be subject to the scope of the appended claims.

Claims (10)

  1. A robust foreground detection method based on multi-view learning, characterized in that the method comprises:
    obtaining a reference background image from an input video by temporal median filtering, and performing iterative search and multi-scale fusion on a current image and the reference background image to obtain heterogeneous features;
    computing a conditional probability density of the foreground class and a conditional probability density of the background class using the conditional independence of the heterogeneous features, and computing a posterior probability of the foreground and a posterior probability of the background by Bayes' rule from the foreground likelihood, the background likelihood, and the prior probabilities;
    constructing an energy function of a Markov random field model from the posterior probability of the foreground, the posterior probability of the background, and spatio-temporal consistency constraints, and minimizing the energy function by a belief propagation algorithm to obtain the foreground/background segmentation result.
  2. The method according to claim 1, wherein obtaining the reference background image from the input video by temporal median filtering comprises:
    reading each frame of the input video;
    computing, by temporal median filtering, the median of each pixel within a threshold time window;
    obtaining the reference background image from the medians of the respective pixels.
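For illustration only, and not as part of the claims, the temporal median filtering recited in claim 2 might be sketched as follows; the stacked-frame layout and window length are assumptions:

    import numpy as np

    def median_background(frames):
        """Per-pixel temporal median over a threshold time window.

        frames : T x H x W (or T x H x W x C) array of the frames
                 falling inside the window.
        Returns the reference background image.
        """
        return np.median(frames, axis=0).astype(frames.dtype)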
  3. The method according to claim 1, wherein the heterogeneous feature is a texture variation feature, and performing iterative search and multi-scale fusion on the current image and the reference background image to obtain the heterogeneous feature comprises:
    computing the texture variation feature according to the following formula:
    [formula for TVi: equation image not reproduced in the source text]
    wherein TVi is the texture variation feature, i is the current pixel, [IR(i), IG(i), IB(i)] are the color values of the RGB color model of the current pixel, j is the background pixel corresponding to the current pixel, [ER(j), EG(j), EB(j)] are the RGB color values of the background pixel, m∈N(i) is the spatial neighborhood of the current pixel, and n∈N(j) is the spatial neighborhood of the background pixel.
  4. The method according to claim 3, wherein the heterogeneous feature is a brightness variation feature, and performing iterative search and multi-scale fusion on the current image and the reference background image to obtain the heterogeneous feature comprises:
    computing the brightness variation feature according to the following formula:
    BVi = (αi - 1)·||OEj||
    wherein BVi is the brightness variation feature, αi is the ratio of the brightness of the current pixel to the brightness of the background pixel, Ej is the RGB color vector of the background pixel, and ||OEj|| is the straight-line distance from the origin O to Ej.
  5. The method according to claim 4, wherein the heterogeneous feature is a chromaticity variation feature, and performing iterative search and multi-scale fusion on the current image and the reference background image to obtain the heterogeneous feature comprises:
    computing the chromaticity variation feature according to the following formula:
    CVi = sqrt( (IR(i) - αi·ER(j))² + (IG(i) - αi·EG(j))² + (IB(i) - αi·EB(j))² )
    wherein CVi is the chromaticity variation feature, αi is the ratio of the brightness of the current pixel to the brightness of the background pixel, [IR(i), IG(i), IB(i)] are the RGB color values of the current pixel, and [ER(j), EG(j), EB(j)] are the RGB color values of the background pixel.
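For illustration only, the brightness and chromaticity variation features of claims 4 and 5 might be computed as below. Taking the RGB vector norm as the pixel brightness is an assumption of the sketch, as is reading the chromaticity variation as the distance between the observed color and the brightness-scaled background color:

    import numpy as np

    def bv_cv_features(I, E, eps=1e-6):
        """Brightness (BV) and chromaticity (CV) variation per pixel.

        I, E : HxWx3 float arrays, current image and background image.
        """
        norm_E = np.linalg.norm(E, axis=-1) + eps   # ||OEj||
        norm_I = np.linalg.norm(I, axis=-1)
        alpha = norm_I / norm_E                     # brightness ratio (assumed: vector norms)
        bv = (alpha - 1.0) * norm_E                 # claim 4: BVi = (alphai - 1)||OEj||
        cv = np.linalg.norm(I - alpha[..., None] * E, axis=-1)  # assumed claim 5 form
        return bv, cv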
  6. The method according to claim 1, wherein computing the conditional probability density of the foreground class and the conditional probability density of the background class using the conditional independence of the heterogeneous features comprises:
    computing the conditional probability density of the foreground class according to the following formulas:
    p(BV|FG) = p(BV | CV>τCV or TV>τTV),
    p(CV|FG) = p(CV | BV>τBV or TV>τTV),
    p(TV|FG) = p(TV | BV>τBV or CV>τCV),
    wherein FG is the foreground class; p(BV|FG), p(CV|FG), and p(TV|FG) are the probability densities of the brightness, chromaticity, and texture variation features, respectively, conditioned on the foreground class; and τBV, τCV, and τTV are the thresholds of the brightness, chromaticity, and texture variation features, respectively.
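One possible realization of this conditional density estimation, using a histogram estimator; the estimator choice and bin count are assumptions, while the conditioning follows the formulas above (the other two densities are obtained symmetrically):

    import numpy as np

    def density_bv_given_fg(bv, cv, tv, tau_cv, tau_tv, bins=64):
        """Histogram estimate of p(BV|FG).

        Pixels where another feature already exceeds its threshold
        (CV > tau_cv or TV > tau_tv) supply the foreground sample
        from which the BV density is estimated.
        """
        sel = (cv > tau_cv) | (tv > tau_tv)     # conditioning from the claim
        if not sel.any():                       # no foreground evidence yet
            return np.zeros(bins), np.linspace(0.0, 1.0, bins + 1)
        hist, edges = np.histogram(bv[sel], bins=bins, density=True)
        return hist, edges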
  7. The method according to claim 6, wherein computing the conditional probability density of the foreground class and the conditional probability density of the background class using the conditional independence of the heterogeneous features further comprises:
    obtaining a trusted foreground region from the current image;
    dilating the trusted foreground region to obtain a dilated trusted foreground region;
    taking, from the current image, the region lying outside the dilated trusted foreground region as a candidate background region, and computing the conditional probability density of the background class from the candidate background region.
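A sketch of the dilation step, using SciPy's binary dilation; the structuring element and the iteration count are assumed parameters not fixed by the claim:

    import numpy as np
    from scipy.ndimage import binary_dilation

    def candidate_background_mask(trusted_fg, iterations=5):
        """Candidate background region of claim 7.

        trusted_fg : boolean HxW mask of the trusted foreground.
        Dilates the trusted foreground and returns its complement,
        i.e., the region outside the dilated trusted foreground.
        """
        dilated = binary_dilation(trusted_fg, iterations=iterations)
        return ~dilated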
  8. The method according to claim 1, wherein computing the posterior probability of the foreground and the posterior probability of the background by Bayes' rule from the foreground likelihood, the background likelihood, and the prior probabilities comprises:
    computing the posterior probability of the foreground according to the following formula:
    Pi(FG|x) = p(x|FG)·Pi(FG) / [ p(x|FG)·Pi(FG) + p(x|BG)·Pi(BG) ]
    wherein Pi(FG|x) is the posterior probability of the foreground, p(x|C) is the foreground likelihood or the background likelihood, and Pi(C) is the prior probability of the foreground or of the background.
  9. The method according to claim 8, wherein computing the posterior probability of the foreground and the posterior probability of the background by Bayes' rule from the foreground likelihood, the background likelihood, and the prior probabilities comprises:
    computing the posterior probability of the background according to the following formula:
    Pi(BG|x) = 1 - Pi(FG|x)
    wherein Pi(FG|x) is the posterior probability of the foreground and Pi(BG|x) is the posterior probability of the background.
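A compact illustration of claims 8 and 9, with the per-feature likelihoods assumed to be pre-multiplied under the method's conditional-independence assumption; names are illustrative:

    import numpy as np

    def posteriors(lik_fg, lik_bg, prior_fg, eps=1e-12):
        """Foreground posterior by Bayes' rule; background as complement.

        lik_fg, lik_bg : per-pixel likelihoods p(x|FG) and p(x|BG).
        prior_fg       : per-pixel (or scalar) prior Pi(FG).
        """
        num = lik_fg * prior_fg
        den = num + lik_bg * (1.0 - prior_fg) + eps   # normalizing constant
        post_fg = num / den                           # Pi(FG|x)
        return post_fg, 1.0 - post_fg                 # Pi(BG|x) = 1 - Pi(FG|x)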
  10. The method according to claim 1, wherein constructing the energy function of the Markov random field model from the posterior probability of the foreground, the posterior probability of the background, and the spatio-temporal consistency constraints comprises:
    computing the energy function according to the following formula:
    E(f) = Σi∈I Di(fi) + Σ(i,u)∈N W(fi,fu)
    wherein f is the labeling process, E(f) is the energy function, Di(fi) is the data term, and W(fi,fu) is the smoothness term.
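To make the energy of claim 10 concrete, the sketch below evaluates E(f) for a given labeling on a 4-connected grid, using the smoothness weights of Equations (13) and (14); the belief propagation minimizer itself is not reproduced, and all names are illustrative:

    import numpy as np

    def energy(labels, data_cost, I, phi=5.0, sigma_I=400.0):
        """Total MRF energy E(f) of a labeling.

        labels    : HxW integer labeling f (0=BG, 1=FG).
        data_cost : HxWx2 data term Di(fi).
        I         : HxW intensity image for the smoothness weights.
        """
        h, w = labels.shape
        # Data term: pick the cost of each pixel's assigned label.
        e = data_cost[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
        # Smoothness term over right and down neighbor pairs (edge set N).
        for dy, dx in ((0, 1), (1, 0)):
            a, b = labels[: h - dy, : w - dx], labels[dy:, dx:]
            diff = I[: h - dy, : w - dx] - I[dy:, dx:]
            z = np.exp(-(diff ** 2) / sigma_I)
            e += (phi * z * (a != b)).sum()
        return e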
PCT/CN2015/076533 2015-04-14 2015-04-14 Robust foreground detection method based on multi-view learning WO2016165064A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/076533 WO2016165064A1 (en) 2015-04-14 2015-04-14 Robust foreground detection method based on multi-view learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/076533 WO2016165064A1 (en) 2015-04-14 2015-04-14 Robust foreground detection method based on multi-view learning

Publications (1)

Publication Number Publication Date
WO2016165064A1 true WO2016165064A1 (en) 2016-10-20

Family

ID=57125468

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/076533 WO2016165064A1 (en) 2015-04-14 2015-04-14 Robust foreground detection method based on multi-view learning

Country Status (1)

Country Link
WO (1) WO2016165064A1 (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060126933A1 (en) * 2004-12-15 2006-06-15 Porikli Fatih M Foreground detection using intrinsic images
CN102222214A (en) * 2011-05-09 2011-10-19 苏州易斯康信息科技有限公司 Fast object recognition algorithm
CN102509105A (en) * 2011-09-30 2012-06-20 北京航空航天大学 Hierarchical processing method of image scene based on Bayesian inference

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SONG, XIAOFENG ET AL.: "SAR Image Segmentation Using Markov Random Field Based on Regions and Bayes Belief Propagation", CHINESE JOURNAL OF ELECTRONICS, vol. 12, no. 38, 31 December 2010 (2010-12-31) *
WANG, ZHILING ET AL.: "Analysis of Robust Background Modeling Techniques for Different Information Levels", PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, vol. 2, no. 22, 30 April 2009 (2009-04-30), XP055321059 *
ZHU, YIPING ET AL.: "Video Foreground and Shadow Automatic Segmentation Based on Discriminative Model", PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, vol. 6, no. 21, 31 December 2008 (2008-12-31), XP055321061 *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108269273B (en) * 2018-02-12 2021-07-27 福州大学 Belief propagation method for polar line matching in panoramic longitudinal roaming
CN108269273A (en) * 2018-02-12 2018-07-10 福州大学 A kind of matched belief propagation method of polar curve during panorama longitudinally roams
CN111091540A (en) * 2019-12-11 2020-05-01 西安科技大学 Active suspension control method based on Markov random field
CN111091540B (en) * 2019-12-11 2023-04-07 西安科技大学 Active suspension control method based on Markov random field
CN111208568A (en) * 2020-01-16 2020-05-29 中国科学院地质与地球物理研究所 Time domain multi-scale full waveform inversion method and system
CN111368914A (en) * 2020-03-04 2020-07-03 西安电子科技大学 Polarimetric synthetic aperture radar change detection method based on total probability collaborative segmentation
CN111461011B (en) * 2020-04-01 2023-03-24 西安电子科技大学 Weak and small target detection method based on probabilistic pipeline filtering
CN111461011A (en) * 2020-04-01 2020-07-28 西安电子科技大学 Weak and small target detection method based on probabilistic pipeline filtering
CN113160098A (en) * 2021-04-16 2021-07-23 浙江大学 Processing method of dense particle image under condition of uneven illumination
CN113947569A (en) * 2021-09-30 2022-01-18 西安交通大学 Beam-type structure multi-scale weak damage positioning method based on computer vision
CN113947569B (en) * 2021-09-30 2023-10-27 西安交通大学 Multi-scale weak damage positioning method for beam structure based on computer vision
CN114155425A (en) * 2021-12-13 2022-03-08 中国科学院光电技术研究所 Weak and small target detection method based on Gaussian Markov random field motion direction estimation
CN114155425B (en) * 2021-12-13 2023-04-07 中国科学院光电技术研究所 Weak and small target detection method based on Gaussian Markov random field motion direction estimation
CN115082507A (en) * 2022-07-22 2022-09-20 聊城扬帆田一机械有限公司 Intelligent regulation and control system of pavement cutting machine
CN115082507B (en) * 2022-07-22 2022-11-18 聊城扬帆田一机械有限公司 Intelligent regulation and control system of pavement cutting machine

Similar Documents

Publication Publication Date Title
WO2016165064A1 (en) Robust foreground detection method based on multi-view learning
CN104766065B (en) Robustness foreground detection method based on various visual angles study
US10198823B1 (en) Segmentation of object image data from background image data
CN110119728B (en) Remote sensing image cloud detection method based on multi-scale fusion semantic segmentation network
CN109241913B (en) Ship detection method and system combining significance detection and deep learning
US10497126B2 (en) Producing a segmented image using markov random field optimization
Feng et al. Local background enclosure for RGB-D salient object detection
Ju et al. Depth-aware salient object detection using anisotropic center-surround difference
CN110111338B (en) Visual tracking method based on superpixel space-time saliency segmentation
CN102542571B (en) Moving target detecting method and device
KR20230084486A (en) Segmentation for Image Effects
CN109919053A (en) A kind of deep learning vehicle parking detection method based on monitor video
CN107506792B (en) Semi-supervised salient object detection method
CN112465021B (en) Pose track estimation method based on image frame interpolation method
CN110443228B (en) Pedestrian matching method and device, electronic equipment and storage medium
CN108647605B (en) Human eye gaze point extraction method combining global color and local structural features
EP3343504B1 (en) Producing a segmented image using markov random field optimization
CN107704864B (en) Salient object detection method based on image object semantic detection
CN112037230B (en) Forest image segmentation method based on superpixels and hyper-metric profile map
Schulz et al. Object-class segmentation using deep convolutional neural networks
CN115601834A (en) Fall detection method based on WiFi channel state information
Feng et al. HOSO: Histogram of surface orientation for RGB-D salient object detection
Lin et al. Foreground object detection in highly dynamic scenes using saliency
Wong et al. Development of a refined illumination and reflectance approach for optimal construction site interior image enhancement
Rahimi et al. Single image ground plane estimation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15888769

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15888769

Country of ref document: EP

Kind code of ref document: A1