CN106803952B - Cross-validation depth map quality evaluation method combined with a JND model - Google Patents
Cross-validation depth map quality evaluation method combined with a JND model
- Publication number
- CN106803952B CN106803952B CN201710041375.6A CN201710041375A CN106803952B CN 106803952 B CN106803952 B CN 106803952B CN 201710041375 A CN201710041375 A CN 201710041375A CN 106803952 B CN106803952 B CN 106803952B
- Authority
- CN
- China
- Prior art keywords
- tar
- pixel
- ref
- coordinate position
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 43
- 238000002790 cross-validation Methods 0.000 title claims abstract description 15
- 238000013441 quality evaluation Methods 0.000 claims abstract description 19
- 230000000007 visual effect Effects 0.000 claims abstract description 17
- 239000003550 marker Substances 0.000 claims abstract description 5
- 230000008569 process Effects 0.000 claims description 16
- 230000000873 masking effect Effects 0.000 claims description 15
- 238000013507 mapping Methods 0.000 claims description 9
- 238000011156 evaluation Methods 0.000 abstract description 12
- 230000008901 benefit Effects 0.000 abstract description 3
- 238000009877 rendering Methods 0.000 description 7
- 238000010586 diagram Methods 0.000 description 6
- 230000006870 function Effects 0.000 description 6
- 238000012417 linear regression Methods 0.000 description 6
- 238000003384 imaging method Methods 0.000 description 5
- 230000000694 effects Effects 0.000 description 3
- 238000001303 quality assessment method Methods 0.000 description 3
- 238000012360 testing method Methods 0.000 description 3
- 230000008859 change Effects 0.000 description 2
- 230000015572 biosynthetic process Effects 0.000 description 1
- 230000006835 compression Effects 0.000 description 1
- 238000007906 compression Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000004438 eyesight Effects 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 230000008447 perception Effects 0.000 description 1
- 230000035479 physiological effects, processes and functions Effects 0.000 description 1
- 238000012545 processing Methods 0.000 description 1
- 238000013442 quality metrics Methods 0.000 description 1
- 238000003786 synthesis reaction Methods 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
- 230000016776 visual perception Effects 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- General Health & Medical Sciences (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a cross-validation depth map quality evaluation method combined with a JND model. The method obtains a difference map from the color map corresponding to the depth map and the color map at an auxiliary viewpoint; it obtains an occlusion mask from the number of pixels of the depth map's corresponding color map that are mapped, via 3D warping, onto each coordinate of the auxiliary-viewpoint color map; the occlusion mask is then used to remove the occluded pixels from the difference map, giving a de-occluded difference map. Next, the auxiliary-viewpoint color map is divided into flat, edge and texture regions to obtain a region marker map; a JND model is then introduced and, combined with the region marker map, the error visibility threshold of every pixel of the auxiliary-viewpoint color map is obtained. Finally, a depth error map is obtained from the de-occluded difference map and the error visibility thresholds, and the ratio of erroneous pixels in the depth map is taken as the quality evaluation value. The advantage is that the method effectively improves the consistency between the evaluation result and the quality of the rendered virtual viewpoint.
Description
Technical Field
The present invention relates to an image quality evaluation method, and in particular to a cross-validation depth map quality evaluation method combined with a JND (Just-Noticeable-Distortion) model.
Background
In recent years video technology has developed rapidly and many new applications have emerged, such as 3D video and free viewpoint video (FVV). Compared with traditional two-dimensional video, 3D video provides depth information and delivers a more realistic visual experience. Depth maps play a fundamental role in many 3D video applications; for example, a depth map can be used to generate images at arbitrary new viewpoints by interpolating or extrapolating the images at available viewpoints, and high-quality depth maps also help solve challenging problems in computer vision. The performance of many 3D video applications benefits from the estimation or acquisition of accurate, high-quality depth maps, which can be obtained by matching rectified color images or by using depth cameras. Stereo matching often produces inaccurate depth maps because of occlusion and large uniform regions; although the inherent difficulties of stereo matching algorithms can be avoided by using depth cameras, sensor noise is unavoidable and degrades both the depth accuracy and the shape of objects.
A major development direction of 3D video technology is the free viewpoint video system based on color plus depth, whose basic framework comprises acquisition, preprocessing, encoding, transmission, decoding, virtual viewpoint rendering and display. Such a system lets users freely choose any viewpoint for viewing and thus enhances human-computer interaction. A key technology for realizing a free viewpoint video system is virtual viewpoint generation; its main purpose is to overcome the limited number of real viewpoints a camera can capture and to synthesize a virtual viewpoint at an arbitrary position. Two factors mainly affect the quality of the virtual viewpoint: the quality of the depth map and the corresponding color image, and the virtual viewpoint rendering algorithm. At present, depth image based rendering (DIBR) is the most widely used virtual viewpoint generation technology in the industry. In DIBR, depth information is the key to generating high-quality virtual viewpoints: depth errors lead to disparity errors, which cause pixel position offsets and object distortion in the virtual viewpoint and impair user perception. Depth information represents the distance from the scene to the camera imaging plane, with the actual distance quantized into the range [0, 255]. Because depth cameras are expensive, most depth maps currently used for testing are obtained with depth estimation software. To popularize applications and reduce cost, the depth information used for virtual viewpoint rendering is not suited to being estimated at the receiving end; it needs to be acquired or estimated at the sending end and then encoded and transmitted to the receiver. Consequently, the limitations of depth map acquisition algorithms and depth map coding lead to inaccurate depth estimation and depth compression distortion.
The core idea of depth image based rendering is to project the pixels of a reference image to the target virtual viewpoint using the depth information and the camera parameters. This is generally done in two steps: first, the pixels of the original reference viewpoint are re-projected to their corresponding three-dimensional positions using their depth values; then, according to the position of the virtual viewpoint (camera translation, rotation parameters, etc.), these 3D points are projected onto the virtual camera plane to form the pixels of the virtual viewpoint. During rendering, depth must be converted into disparity, from which the position of each reference pixel in the virtual viewpoint is obtained; the depth value thus determines how far a reference pixel is shifted. If the depth values of adjacent pixels change sharply, a hole appears between the two pixels, and the sharper the change, the larger the hole. Since depth values change strongly at foreground-background boundaries, holes generally appear there. Holes appear in the virtual image when background regions that were occluded by foreground objects in the reference image become visible in the virtual image; conversely, occlusion occurs when background regions that were not occluded by foreground objects in the reference image become invisible in the virtual image.
Virtual viewpoint distortion mostly consists of pixel position offsets and object distortion, and not every detected distorted region is readily perceived by the human eye. An image consists of edge, texture and flat regions, and distortions of the same magnitude in different regions affect visual quality differently: regions with high texture complexity or similar texture features can usually tolerate more distortion, while changes near edges are the most perceptible. Studies of visual physiology and psychology have found that the characteristics of the human visual system and masking effects play a very important role in image processing: when the image distortion is below a certain range, the human eye cannot perceive it. On this basis the just-noticeable-distortion (JND) model was proposed. Common masking effects include: 1) luminance masking, where the human eye judges the absolute luminance of an observed object poorly but judges relative luminance differences well, and is more sensitive to noise added to bright regions; 2) texture masking, where the human visual system is far more sensitive in smooth regions than in textured regions, so regions with higher texture complexity can tolerate more distortion.
Because depth maps are so widely used, depth map quality assessment has become crucial and can benefit many practical applications. For example, in a free viewpoint video system, detecting depth distortion helps depth enhancement, which can further improve the quality of the virtual viewpoint so that viewers enjoy a better viewing experience. A simple approach to depth map quality assessment is to compare the depth map under test with an undistorted reference depth map; this corresponds to a full-reference depth quality metric and can measure depth accuracy precisely. In most practical applications, however, depth map errors are unavoidable and an undistorted reference depth map is usually not available, so a no-reference evaluation method is more reasonable. The no-reference depth map quality assessment scheme proposed by Xiang et al. detects errors by matching the edges of the color image and the depth map and evaluates depth map quality by the bad-pixel rate; its results agree well with the quality of the rendered virtual image. However, that scheme only considers errors near edges and ignores the other, smooth regions, so it detects only part of the erroneous pixels, and its performance is strongly affected by scene properties and error distribution. A depth map is not viewed directly but is used as auxiliary information for rendering virtual viewpoints, so its quality needs to be evaluated from the application point of view.
Summary of the Invention
The technical problem to be solved by the present invention is to provide a cross-validation depth map quality evaluation method combined with a JND model, which does not require an undistorted reference depth map and can effectively improve the consistency between the evaluation result and the quality of the rendered virtual viewpoint.
The technical solution adopted by the present invention to solve the above technical problem is a cross-validation depth map quality evaluation method combined with a JND model, characterized by comprising the following steps:
① Denote the depth map to be evaluated as Dtar and the color map corresponding to Dtar as Ttar; define another known viewpoint, other than the one where Dtar and Ttar are located, as the auxiliary viewpoint and denote the color map at the auxiliary viewpoint as Tref. Then convert the pixel values of all pixels in Dtar into disparity values and map all pixels of Ttar into Tref by 3D warping. Here the total number of pixels in the vertical direction of Dtar, Ttar and Tref is M, and the total number of pixels in the horizontal direction of Dtar, Ttar and Tref is N.
② Let Etar denote a difference map of the same size as Dtar, and denote the pixel value of the pixel at coordinate (x, y) in Etar as Etar(x, y). When the auxiliary viewpoint is to the left of the viewpoint of Dtar and Ttar, judge whether y + dtar,p(x, y) is greater than N; if so, set Etar(x, y) = 0; otherwise, with u = x and v = y + dtar,p(x, y), set Etar(x, y) = |Itar(x, y) − Iref(u, v)|. When the auxiliary viewpoint is to the right of the viewpoint of Dtar and Ttar, judge whether y − dtar,p(x, y) is less than 1; if so, set Etar(x, y) = 0; otherwise, with u = x and v = y − dtar,p(x, y), set Etar(x, y) = |Itar(x, y) − Iref(u, v)|. Here 1 ≤ x ≤ M, 1 ≤ y ≤ N, 1 ≤ u ≤ M, 1 ≤ v ≤ N; dtar,p(x, y) is the disparity value converted from the pixel value of the pixel at coordinate (x, y) in Dtar; "| |" denotes the absolute value; Itar(x, y) is the luminance component of the pixel at coordinate (x, y) in Ttar; and Iref(u, v) is the luminance component of the pixel at coordinate (u, v) in Tref.
③ Let C denote an occlusion mask image of the same size as Dtar, denote the pixel value of the pixel at coordinate (x, y) in C as C(x, y), and initialize the pixel value of every pixel in C to 0. Denote the total number of pixels of Ttar that are mapped by 3D warping to the coordinate (u, v) in Tref as N(u,v). When N(u,v) = 1, set C(x, y) = 0; when N(u,v) > 1, the pixel among them whose depth value equals the maximum of the N(u,v) competing depth values keeps C(x, y) = 0 and the remaining pixels are set to C(x, y) = 1. Here N(u,v) takes the value 0, 1 or greater than 1; Dtar(x, y) denotes the pixel value of the pixel at coordinate (x, y) in Dtar; max() is the maximum function; 1 ≤ x(u,v),i ≤ M, 1 ≤ y(u,v),i ≤ N; (x(u,v),i, y(u,v),i) denotes the coordinate in Ttar of the i-th pixel among the N(u,v) pixels mapped by 3D warping to coordinate (u, v) in Tref; and Dtar(x(u,v),i, y(u,v),i) denotes the pixel value of the pixel at coordinate (x(u,v),i, y(u,v),i) in Dtar.
④ Use C to remove the occluded pixels from Etar to obtain the de-occluded difference map, denoted E'tar. Denote the pixel value of the pixel at coordinate (x, y) in E'tar as E'tar(x, y); then E'tar(x, y) = Etar(x, y) × (1 − C(x, y)).
⑤ Compute the texture judgment factor of every pixel in Tref, and denote the texture judgment factor of the pixel at coordinate (u, v) in Tref as z(u, v). Here 1 ≤ u ≤ M, 1 ≤ v ≤ N; zh(u, v) denotes the horizontal texture judgment factor of the pixel at coordinate (u, v) in Tref, whose value is 1 or 0: zh(u, v) = 1 means the pixel at (u, v) in Tref is a texture pixel in the horizontal direction and zh(u, v) = 0 means it is a non-texture pixel in the horizontal direction. Likewise, zv(u, v) denotes the vertical texture judgment factor of the pixel at coordinate (u, v) in Tref, whose value is 1 or 0: zv(u, v) = 1 means the pixel at (u, v) in Tref is a texture pixel in the vertical direction and zv(u, v) = 0 means it is a non-texture pixel in the vertical direction.
⑥ Let T denote a region marker map of the same size as Tref, denote the pixel value of the pixel at coordinate (u, v) in T as T(u, v), and initialize the pixel value of every pixel in T to 0. Detect the edge regions of Tref with the Canny operator; if the pixel at coordinate (u, v) in Tref belongs to an edge region, set T(u, v) = 1. If the texture judgment factor of the pixel at coordinate (u, v) in Tref satisfies z(u, v) = 1, then when T(u, v) = 0 the pixel at coordinate (u, v) in Tref is determined to belong to a texture region and T(u, v) is reset to 2. Here T(u, v) takes the value 0, 1 or 2: T(u, v) = 0 means the pixel at (u, v) in Tref belongs to a flat region, T(u, v) = 1 means it belongs to an edge region, and T(u, v) = 2 means it belongs to a texture region.
⑦ Introduce a JND model based on luminance masking and texture masking effects. Using the JND model and the region to which each pixel of Tref belongs, compute the error visibility threshold of every pixel in Tref, and denote the error visibility threshold of the pixel at coordinate (u, v) in Tref as Th(u, v). Here max() is the maximum function and min() is the minimum function; bg(u, v) denotes the average background luminance of the pixel at coordinate (u, v) in Tref; mg(u, v) denotes the maximum weighted average of the luminance around the pixel at coordinate (u, v) in Tref; LA(u, v) denotes the luminance masking effect of the pixel at coordinate (u, v) in Tref; f(bg(u, v), mg(u, v)) = mg(u, v) × α(bg(u, v)) + β(bg(u, v)), with α(bg(u, v)) = bg(u, v) × 0.0001 + 0.115 and β(bg(u, v)) = 0.5 − bg(u, v) × 0.01.
⑧ Let E denote a depth error map of the same size as Dtar, and denote the pixel value of the pixel at coordinate (x, y) in E as E(x, y). When E'tar(x, y) = 0, E(x, y) = 0; when E'tar(x, y) ≠ 0, E(x, y) = 1 if E'tar(x, y) is not smaller than the error visibility threshold Th(u, v) at the mapped position, and E(x, y) = 0 otherwise. Here V(x,y) = (u, v) denotes the mapping process, where (x, y) is the coordinate of a pixel in Ttar and (u, v) the coordinate of a pixel in Tref: when the viewpoint of Tref is to the left of the viewpoint of Ttar, u = x and v = y + dtar,p(x, y); when the viewpoint of Tref is to the right of the viewpoint of Ttar, u = x and v = y − dtar,p(x, y).
⑨ Count the total number of pixels whose value is 1 in E and denote it numE; then compute the ratio of erroneous pixels in Dtar as the quality evaluation value of Dtar, denoted EPR, i.e. EPR = numE / (M × N).
In step ①, the specific process of converting the pixel values of all pixels in Dtar into disparity values is: for the pixel at coordinate (x, y) in Dtar, denote the disparity value converted from its pixel value as dtar,p(x, y), where 1 ≤ x ≤ M, 1 ≤ y ≤ N, b denotes the baseline distance between the cameras, f denotes the focal length of the camera, Znear is the nearest actual depth of field, Zfar is the farthest actual depth of field, and Dtar(x, y) denotes the pixel value of the pixel at coordinate (x, y) in Dtar.
In step ⑤, zh(u, v) and zv(u, v) are obtained as follows:
⑤_1. Compute the horizontal differential signal of every pixel in Tref, and denote the horizontal differential signal of the pixel at coordinate (u, v) in Tref as dh(u, v), where Iref(u, v+1) denotes the luminance component of the pixel at coordinate (u, v+1) in Tref.
⑤_2. Compute the characteristic sign of the horizontal differential signal of every pixel in Tref, and denote the characteristic sign of dh(u, v) as symdh(u, v).
⑤_3. Compute zh(u, v), where dhsym(u, v) is an intermediate variable and symdh(u, v+1) denotes the characteristic sign of the horizontal differential signal of the pixel at coordinate (u, v+1) in Tref.
⑤_4. Compute the vertical differential signal of every pixel in Tref, and denote the vertical differential signal of the pixel at coordinate (u, v) in Tref as dv(u, v), where Iref(u+1, v) denotes the luminance component of the pixel at coordinate (u+1, v) in Tref.
⑤_5. Compute the characteristic sign of the vertical differential signal of every pixel in Tref, and denote the characteristic sign of dv(u, v) as symdv(u, v).
⑤_6. Compute zv(u, v), where dvsym(u, v) is an intermediate variable and symdv(u+1, v) denotes the characteristic sign of the vertical differential signal of the pixel at coordinate (u+1, v) in Tref.
Compared with the prior art, the advantages of the present invention are:
1) The method fully considers the role of the depth map in virtual viewpoint rendering. The depth map is not viewed directly but provides pixel position offset information, so it is more reasonable to mark the depth distortion regions by the virtual viewpoint distortion that the depth distortion causes.
2) The method thoroughly explores how depth map distortion affects virtual viewpoint quality. Depth distortion causes pixel position offsets and object distortion in the virtual viewpoint rendered from that depth information, so the luminance values of the corresponding pixels are wrong; in general, the more severe the depth distortion, the larger the luminance error of the virtual viewpoint pixel. The luminance error of a virtual viewpoint pixel can therefore be used as the error mark of the corresponding depth pixel, yielding the difference map.
3) The method fully considers the occlusion of boundary pixels during virtual viewpoint rendering. After the pixels of the color image are mapped onto the auxiliary viewpoint by 3D warping, pixels near object boundaries that are closer to the imaging plane may block pixels that are farther from it. Since the distortion of occluded pixels has no influence on the quality of the final virtual viewpoint, these occluded pixels can be marked to obtain an occlusion mask, and removing the error marks of occluded pixels from the difference map makes the depth map quality evaluation result more consistent with the objective quality of the virtual viewpoint.
4) The method fully considers the characteristics of human vision. The color image at the auxiliary viewpoint is divided into edge, texture and flat regions, and the JND model based on luminance masking and texture masking effects gives an error visibility threshold for every pixel in each region. Error marks in the de-occluded difference map that fall below the corresponding error visibility threshold after mapping are removed to obtain the final depth error map, so that the depth map quality evaluation result better matches the characteristics of the human eye.
Brief Description of the Drawings
Fig. 1 is the overall implementation block diagram of the method of the present invention;
Fig. 2 is a schematic diagram of the cross-validation process;
Fig. 3 is a schematic diagram of occlusion;
Fig. 4a is the depth map of viewpoint 2 of the Cones sequence estimated by the AdaptBP method;
Fig. 4b is the color map corresponding to the depth map shown in Fig. 4a;
Fig. 4c is the color map of viewpoint 3 of the Cones sequence;
Fig. 4d is the difference map of the depth map shown in Fig. 4a obtained after cross-validation;
Fig. 5a is the occlusion mask image obtained by mapping the pixel values of the depth map shown in Fig. 4a to viewpoint 3 of the Cones sequence;
Fig. 5b is the de-occluded difference map corresponding to the depth map shown in Fig. 4a;
Fig. 5c is the depth error map corresponding to the depth map shown in Fig. 4a.
Detailed Description
The present invention is described in further detail below with reference to the accompanying drawings and embodiments.
The overall implementation block diagram of the cross-validation depth map quality evaluation method combined with a JND model proposed by the present invention is shown in Fig. 1. The method comprises the following steps:
① Denote the depth map to be evaluated as Dtar and the color map corresponding to Dtar as Ttar; define another known viewpoint, other than the one where Dtar and Ttar are located, as the auxiliary viewpoint and denote the color map at the auxiliary viewpoint as Tref. Then convert the pixel values of all pixels in Dtar into disparity values and map all pixels of Ttar into Tref by 3D warping. Here the total number of pixels in the vertical direction of Dtar, Ttar and Tref is M, and the total number of pixels in the horizontal direction of Dtar, Ttar and Tref is N.
In this embodiment, the specific process of converting the pixel values of all pixels in Dtar into disparity values in step ① is: for the pixel at coordinate (x, y) in Dtar, denote the disparity value converted from its pixel value as dtar,p(x, y), where 1 ≤ x ≤ M, 1 ≤ y ≤ N, b denotes the baseline distance between the cameras, f denotes the focal length of the camera, Znear is the nearest actual depth of field, Zfar is the farthest actual depth of field, and Dtar(x, y) denotes the pixel value of the pixel at coordinate (x, y) in Dtar.
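The conversion formula itself appears only as an image in the published text. The sketch below uses the inverse-depth quantization commonly applied to Middlebury/MPEG-style depth maps, d = f·b·(Dtar/255·(1/Znear − 1/Zfar) + 1/Zfar); it matches the variables named above (b, f, Znear, Zfar) but should be read as an assumption rather than the patent's exact expression, and the function name is illustrative.

```python
import numpy as np

def depth_to_disparity(D_tar, b, f, z_near, z_far):
    """Convert an 8-bit depth map D_tar into per-pixel disparities d_tar,p.

    Assumes the usual inverse-depth quantization: depth value 255 corresponds
    to z_near (closest), 0 to z_far (farthest).
    """
    D = D_tar.astype(np.float64)
    inv_z = (D / 255.0) * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far  # 1/Z per pixel
    return f * b * inv_z  # disparity in pixels for a horizontal camera arrangement
```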
② Let Etar denote a difference map of the same size as Dtar, and denote the pixel value of the pixel at coordinate (x, y) in Etar as Etar(x, y). When the auxiliary viewpoint is to the left of the viewpoint of Dtar and Ttar, judge whether y + dtar,p(x, y) is greater than N; if so, set Etar(x, y) = 0; otherwise, with u = x and v = y + dtar,p(x, y), set Etar(x, y) = |Itar(x, y) − Iref(u, v)|. When the auxiliary viewpoint is to the right of the viewpoint of Dtar and Ttar, judge whether y − dtar,p(x, y) is less than 1; if so, set Etar(x, y) = 0; otherwise, with u = x and v = y − dtar,p(x, y), set Etar(x, y) = |Itar(x, y) − Iref(u, v)|. Here 1 ≤ x ≤ M, 1 ≤ y ≤ N, 1 ≤ u ≤ M, 1 ≤ v ≤ N; dtar,p(x, y) is the disparity value converted from the pixel value of the pixel at coordinate (x, y) in Dtar; "| |" denotes the absolute value; Itar(x, y) is the luminance component of the pixel at coordinate (x, y) in Ttar; and Iref(u, v) is the luminance component of the pixel at coordinate (u, v) in Tref.
The mapping of all pixels of Ttar into Tref by 3D warping in step ① together with step ② constitutes the cross-validation process, a schematic diagram of which is given in Fig. 2, where Tl, Tr, Dl and Dr denote the left-viewpoint color map, the right-viewpoint color map, the left-viewpoint depth map and the right-viewpoint depth map, respectively. To obtain the difference map corresponding to the left-viewpoint depth map, the right-viewpoint color map is used as auxiliary information for cross-validation. The pixel at coordinate (xl, yl) in Tl has luminance value Il1; using the depth information in Dl, it is mapped onto the right-viewpoint color map Tr by 3D warping. If it falls outside the image, the pixel at coordinate (xl, yl) in Ld is assigned 0; if it is mapped to the pixel at coordinate (xlr, yl) in Tr, whose luminance value is Ir1, then the difference between the two luminance values, |Il1 − Ir1|, is assigned to the pixel at coordinate (xl, yl) in Ld. Ld is then the difference map corresponding to the left-viewpoint depth map. Likewise, to obtain the difference map corresponding to the right-viewpoint depth map, the left-viewpoint color map Tl is used as auxiliary information for cross-validation, giving the difference map Rd corresponding to the right-viewpoint depth map.
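The following sketch illustrates the cross-validation of steps ① and ② for a rectified, horizontally aligned pair; the rounding of disparities to integer pixel positions and the array and function names are illustrative assumptions.

```python
import numpy as np

def cross_validation_difference(I_tar, I_ref, d_tar, ref_is_left=False):
    """Compute the difference map E_tar (step 2).

    I_tar, I_ref : luminance images of T_tar and T_ref, shape (M, N)
    d_tar        : per-pixel disparity converted from D_tar, shape (M, N)
    ref_is_left  : True if the auxiliary viewpoint lies to the left of T_tar
    """
    M, N = I_tar.shape
    E_tar = np.zeros((M, N), dtype=np.float64)
    d = np.rint(d_tar).astype(np.int64)          # integer-pixel warping (assumption)
    for x in range(M):
        for y in range(N):
            v = y + d[x, y] if ref_is_left else y - d[x, y]
            if 0 <= v < N:                       # outside the image -> E_tar stays 0
                E_tar[x, y] = abs(float(I_tar[x, y]) - float(I_ref[x, v]))
    return E_tar
```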
The depth map of viewpoint 2 of the Cones sequence estimated by the AdaptBP method is used as the depth map to be evaluated, as shown in Fig. 4a; Fig. 4b is the color map corresponding to the depth map shown in Fig. 4a; the color map of viewpoint 3 of the Cones sequence is used as the color map at the auxiliary viewpoint, as shown in Fig. 4c. The difference map obtained after cross-validation is shown in Fig. 4d.
③ Let C denote an occlusion mask image of the same size as Dtar, denote the pixel value of the pixel at coordinate (x, y) in C as C(x, y), and initialize the pixel value of every pixel in C to 0. Denote the total number of pixels of Ttar that are mapped by 3D warping to the coordinate (u, v) in Tref as N(u,v). When N(u,v) = 1, set C(x, y) = 0; when N(u,v) > 1, the pixel among them whose depth value equals the maximum of the N(u,v) competing depth values keeps C(x, y) = 0 and the remaining pixels are set to C(x, y) = 1. Here N(u,v) takes the value 0, 1 or greater than 1; Dtar(x, y) denotes the pixel value of the pixel at coordinate (x, y) in Dtar; max() is the maximum function; 1 ≤ x(u,v),i ≤ M, 1 ≤ y(u,v),i ≤ N; (x(u,v),i, y(u,v),i) denotes the coordinate in Ttar of the i-th pixel among the N(u,v) pixels mapped by 3D warping to coordinate (u, v) in Tref; and Dtar(x(u,v),i, y(u,v),i) denotes the pixel value of the pixel at coordinate (x(u,v),i, y(u,v),i) in Dtar.
Fig. 3 gives a schematic diagram of occlusion. The foreground and background boundary points of the left reference viewpoint are labeled from left to right, as are the foreground and background boundary points of the right reference viewpoint. During 3D warping, the boundary points of the left reference viewpoint are mapped to corresponding positions in the virtual viewpoint rendered from it, and likewise the boundary points of the right reference viewpoint are mapped to corresponding positions in the virtual viewpoint rendered from it. In the virtual view rendered from the left reference, there is an interval into which both foreground pixels and background pixels of the left reference image are mapped; similarly, in the virtual view rendered from the right reference, there is an interval into which both foreground pixels and background pixels of the right reference image are mapped. In these intervals foreground pixels occlude background pixels. The occluded background pixels are also marked in the difference map obtained by cross-validation, but such marks are not caused by erroneous depth values, so these background pixels need to be removed. After 3D warping, for pixels mapped to the same position, their depth values are compared: the pixel with the largest depth value is kept and the remaining pixels are marked, giving the occlusion mask image.
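A sketch of steps ③ and ④ under the same assumptions as above (integer-pixel horizontal warping): pixels of Ttar competing for the same position in Tref are resolved by keeping the one with the largest depth value, the losers are marked in C, and the mask is then used to clear the occluded entries of the difference map. Names and the tie-breaking rule are illustrative.

```python
import numpy as np

def occlusion_mask(D_tar, d_tar, ref_is_left=False):
    """Build the occlusion mask C (step 3): C(x, y) = 1 for pixels of T_tar that
    lose the depth competition at their warped position in T_ref, else 0."""
    M, N = D_tar.shape
    d = np.rint(d_tar).astype(np.int64)
    C = np.zeros((M, N), dtype=np.uint8)
    best_depth = np.full((M, N), -1, dtype=np.int64)   # best depth seen at (x, v)
    best_src = np.full((M, N), -1, dtype=np.int64)     # source column of the winner
    for x in range(M):
        for y in range(N):
            v = y + d[x, y] if ref_is_left else y - d[x, y]
            if not (0 <= v < N):
                continue
            if D_tar[x, y] > best_depth[x, v]:
                if best_src[x, v] >= 0:                # previous winner becomes occluded
                    C[x, best_src[x, v]] = 1
                best_depth[x, v] = D_tar[x, y]
                best_src[x, v] = y
            else:                                      # current pixel is occluded
                C[x, y] = 1
    return C

def deocclude(E_tar, C):
    """Step 4: remove occluded pixels, E'_tar = E_tar * (1 - C)."""
    return E_tar * (1 - C.astype(E_tar.dtype))
```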
Using the depth map shown in Fig. 4a as the depth map to be evaluated and the color map shown in Fig. 4c as the color map at the auxiliary viewpoint, the obtained occlusion mask image is shown in Fig. 5a.
④ Use C to remove the occluded pixels from Etar to obtain the de-occluded difference map, denoted E'tar. Denote the pixel value of the pixel at coordinate (x, y) in E'tar as E'tar(x, y); then E'tar(x, y) = Etar(x, y) × (1 − C(x, y)).
Fig. 5b is the difference map obtained after removing from the difference map shown in Fig. 4d the pixels marked as occluded in the occlusion mask image shown in Fig. 5a.
⑤ Compute the texture judgment factor of every pixel in Tref, and denote the texture judgment factor of the pixel at coordinate (u, v) in Tref as z(u, v). Here 1 ≤ u ≤ M, 1 ≤ v ≤ N; zh(u, v) denotes the horizontal texture judgment factor of the pixel at coordinate (u, v) in Tref, whose value is 1 or 0: zh(u, v) = 1 means the pixel at (u, v) in Tref is a texture pixel in the horizontal direction and zh(u, v) = 0 means it is a non-texture pixel in the horizontal direction. Likewise, zv(u, v) denotes the vertical texture judgment factor of the pixel at coordinate (u, v) in Tref, whose value is 1 or 0: zv(u, v) = 1 means the pixel at (u, v) in Tref is a texture pixel in the vertical direction and zv(u, v) = 0 means it is a non-texture pixel in the vertical direction.
In this embodiment, zh(u, v) and zv(u, v) in step ⑤ are obtained as follows:
⑤_1. Compute the horizontal differential signal of every pixel in Tref, and denote the horizontal differential signal of the pixel at coordinate (u, v) in Tref as dh(u, v), where Iref(u, v+1) denotes the luminance component of the pixel at coordinate (u, v+1) in Tref.
⑤_2. Compute the characteristic sign of the horizontal differential signal of every pixel in Tref, and denote the characteristic sign of dh(u, v) as symdh(u, v).
⑤_3. Compute zh(u, v), where dhsym(u, v) is an intermediate variable and symdh(u, v+1) denotes the characteristic sign of the horizontal differential signal of the pixel at coordinate (u, v+1) in Tref.
⑤_4. Compute the vertical differential signal of every pixel in Tref, and denote the vertical differential signal of the pixel at coordinate (u, v) in Tref as dv(u, v), where Iref(u+1, v) denotes the luminance component of the pixel at coordinate (u+1, v) in Tref.
⑤_5. Compute the characteristic sign of the vertical differential signal of every pixel in Tref, and denote the characteristic sign of dv(u, v) as symdv(u, v).
⑤_6. Compute zv(u, v), where dvsym(u, v) is an intermediate variable and symdv(u+1, v) denotes the characteristic sign of the vertical differential signal of the pixel at coordinate (u+1, v) in Tref.
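The expressions for the characteristic signs symdh, symdv, the intermediate variables dhsym, dvsym and the rule combining zh and zv into z appear only as formula images in the published text. The sketch below therefore implements one plausible reading, in which a pixel counts as a texture pixel in a direction when the sign of the differential signal alternates at that position and z is the logical OR of zh and zv; it should be treated as an assumption, not the patent's exact definition.

```python
import numpy as np

def texture_factors(I_ref):
    """Texture judgment factors of step 5 under an assumed sign-alternation rule."""
    I = I_ref.astype(np.float64)
    M, N = I.shape

    # horizontal differential signal d_h(u, v) ~ I(u, v+1) - I(u, v) and its sign
    d_h = np.zeros((M, N))
    d_h[:, :-1] = I[:, 1:] - I[:, :-1]
    sym_h = np.sign(d_h)
    z_h = np.zeros((M, N), dtype=np.uint8)
    z_h[:, :-1] = (sym_h[:, :-1] * sym_h[:, 1:] == -1).astype(np.uint8)

    # vertical differential signal d_v(u, v) ~ I(u+1, v) - I(u, v) and its sign
    d_v = np.zeros((M, N))
    d_v[:-1, :] = I[1:, :] - I[:-1, :]
    sym_v = np.sign(d_v)
    z_v = np.zeros((M, N), dtype=np.uint8)
    z_v[:-1, :] = (sym_v[:-1, :] * sym_v[1:, :] == -1).astype(np.uint8)

    # combined factor z: assumed to flag a pixel if either direction flags it
    z = np.maximum(z_h, z_v)
    return z, z_h, z_v
```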
⑥ Let T denote a region marker map of the same size as Tref, denote the pixel value of the pixel at coordinate (u, v) in T as T(u, v), and initialize the pixel value of every pixel in T to 0. Detect the edge regions of Tref with the Canny operator; if the pixel at coordinate (u, v) in Tref belongs to an edge region, set T(u, v) = 1. If the texture judgment factor of the pixel at coordinate (u, v) in Tref satisfies z(u, v) = 1, then when T(u, v) = 0 the pixel at coordinate (u, v) in Tref is determined to belong to a texture region and T(u, v) is reset to 2. Here T(u, v) takes the value 0, 1 or 2: T(u, v) = 0 means the pixel at (u, v) in Tref belongs to a flat region, T(u, v) = 1 means it belongs to an edge region, and T(u, v) = 2 means it belongs to a texture region.
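A sketch of step ⑥ using OpenCV's Canny detector; the Canny thresholds are not specified in the patent and are illustrative assumptions.

```python
import cv2
import numpy as np

def region_marker_map(I_ref, z, canny_low=50, canny_high=150):
    """Step 6: label each pixel of T_ref as flat (0), edge (1) or texture (2).

    I_ref : 8-bit luminance image of T_ref
    z     : combined texture judgment factor from step 5 (0/1 per pixel)
    """
    T = np.zeros(I_ref.shape, dtype=np.uint8)
    edges = cv2.Canny(I_ref, canny_low, canny_high)   # 255 on edge pixels
    T[edges > 0] = 1                                  # edge region
    T[(T == 0) & (z == 1)] = 2                        # texture region (non-edge)
    return T                                          # remaining zeros: flat region
```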
⑦ Introduce a JND model based on luminance masking and texture masking effects. Using the JND model and the region to which each pixel of Tref belongs, compute the error visibility threshold of every pixel in Tref, and denote the error visibility threshold of the pixel at coordinate (u, v) in Tref as Th(u, v). Here max() is the maximum function and min() is the minimum function; bg(u, v) denotes the average background luminance of the pixel at coordinate (u, v) in Tref and is computed with a weighted low-pass operator; mg(u, v) denotes the maximum weighted average of the luminance around the pixel at coordinate (u, v) in Tref; LA(u, v) denotes the luminance masking effect of the pixel at coordinate (u, v) in Tref; f(bg(u, v), mg(u, v)) = mg(u, v) × α(bg(u, v)) + β(bg(u, v)), with α(bg(u, v)) = bg(u, v) × 0.0001 + 0.115 and β(bg(u, v)) = 0.5 − bg(u, v) × 0.01.
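The formulas for Th(u, v), bg(u, v), mg(u, v) and LA(u, v) appear only as images in the published text. The sketch below follows the classic Chou-Li spatial JND model (a weighted 5×5 low-pass operator for bg, four directional operators for mg, the usual luminance-masking curve for LA) combined with the α and β expressions quoted above, plus an assumed per-region weight derived from the region marker map T; everything beyond α and β is an assumption.

```python
import numpy as np
from scipy.ndimage import convolve

# 5x5 low-pass operator for bg(u, v) and four directional operators for mg(u, v),
# taken from the classic Chou-Li JND model (assumption: the patent's operators
# are shown only as images).
B = np.array([[1, 1, 1, 1, 1],
              [1, 2, 2, 2, 1],
              [1, 2, 0, 2, 1],
              [1, 2, 2, 2, 1],
              [1, 1, 1, 1, 1]], dtype=np.float64) / 32.0

G = [np.array(g, dtype=np.float64) for g in (
    [[0, 0, 0, 0, 0], [1, 3, 8, 3, 1], [0, 0, 0, 0, 0], [-1, -3, -8, -3, -1], [0, 0, 0, 0, 0]],
    [[0, 0, 1, 0, 0], [0, 8, 3, 0, 0], [1, 3, 0, -3, -1], [0, 0, -3, -8, 0], [0, 0, -1, 0, 0]],
    [[0, 0, 1, 0, 0], [0, 0, 3, 8, 0], [-1, -3, 0, 3, 1], [0, -8, -3, 0, 0], [0, 0, -1, 0, 0]],
    [[0, 1, 0, -1, 0], [0, 3, 0, -3, 0], [0, 8, 0, -8, 0], [0, 3, 0, -3, 0], [0, 1, 0, -1, 0]])]

def jnd_threshold(I_ref, T, region_weight=(1.0, 1.0, 2.0)):
    """Per-pixel error visibility threshold Th(u, v) of step 7.

    Uses f(bg, mg) = mg*alpha(bg) + beta(bg) with alpha and beta as quoted in the
    text; LA, the max() combination and the per-region weighting (flat, edge,
    texture) are assumptions.
    """
    I = I_ref.astype(np.float64)
    bg = convolve(I, B, mode='nearest')                       # average background luminance
    mg = np.max([np.abs(convolve(I, g / 16.0, mode='nearest')) for g in G], axis=0)

    alpha = bg * 0.0001 + 0.115
    beta = 0.5 - bg * 0.01
    f_tex = mg * alpha + beta                                 # texture masking term

    LA = np.where(bg <= 127,                                  # luminance masking term
                  17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0,
                  3.0 / 128.0 * (bg - 127.0) + 3.0)

    Th = np.maximum(f_tex, LA)                                # base JND threshold
    w = np.asarray(region_weight, dtype=np.float64)[T]        # assumed region weighting
    return Th * w
```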
⑧ Let E denote a depth error map of the same size as Dtar, and denote the pixel value of the pixel at coordinate (x, y) in E as E(x, y). When E'tar(x, y) = 0, E(x, y) = 0; when E'tar(x, y) ≠ 0, E(x, y) = 1 if E'tar(x, y) is not smaller than the error visibility threshold Th(u, v) at the mapped position, and E(x, y) = 0 otherwise. Here V(x,y) = (u, v) denotes the mapping process, where (x, y) is the coordinate of a pixel in Ttar and (u, v) the coordinate of a pixel in Tref: when the viewpoint of Tref is to the left of the viewpoint of Ttar, u = x and v = y + dtar,p(x, y); when the viewpoint of Tref is to the right of the viewpoint of Ttar, u = x and v = y − dtar,p(x, y).
Fig. 5c is the depth error map of Fig. 4a obtained after removing from Fig. 5b the pixels whose values are smaller than the corresponding error visibility threshold.
⑨ Count the total number of pixels whose value is 1 in E and denote it numE; then compute the ratio of erroneous pixels in Dtar as the quality evaluation value of Dtar, denoted EPR, i.e. EPR = numE / (M × N).
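A sketch of steps ⑧ and ⑨ under the same warping assumptions as the earlier snippets: a de-occluded difference value is counted as a depth error only if it reaches the error visibility threshold at the position it maps to, and EPR is the fraction of such pixels over the M×N image.

```python
import numpy as np

def depth_error_map_and_epr(E_deoccluded, Th, d_tar, ref_is_left=False):
    """Steps 8-9: binary depth error map E and error-pixel ratio EPR.

    E_deoccluded : de-occluded difference map E'_tar, shape (M, N)
    Th           : per-pixel error visibility thresholds of T_ref, shape (M, N)
    d_tar        : per-pixel disparity converted from D_tar
    """
    M, N = E_deoccluded.shape
    d = np.rint(d_tar).astype(np.int64)
    E = np.zeros((M, N), dtype=np.uint8)
    for x in range(M):
        for y in range(N):
            if E_deoccluded[x, y] == 0:
                continue
            v = y + d[x, y] if ref_is_left else y - d[x, y]
            if 0 <= v < N and E_deoccluded[x, y] >= Th[x, v]:
                E[x, y] = 1                      # visible depth-induced error
    epr = E.sum() / float(M * N)                 # ratio of erroneous pixels
    return E, epr
```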
To test the performance of the method of the present invention, depth maps estimated by a variety of algorithms provided in the Middlebury database were tested. Four scenes were selected: "Tsukuba", "Venus", "Teddy" and "Cones". For each scene, the depth maps of viewpoint 2 estimated by nine different stereo matching algorithms were used, so that 36 depth maps in total constitute the evaluation database. The nine stereo matching algorithms are: AdaptBP, WarpMat, P-LinearS, VSW, BPcompressed, Layered, SNCC, ReliabilityDP and Infection.
Table 1 gives the values of the full-reference objective quality metric PBMP (Percentage of Bad Matching Pixels) for "Tsukuba", "Venus", "Teddy" and "Cones" in the evaluation database. PBMP computes the error by comparing the estimated depth map with the undistorted reference depth map: if the disparity error of a pixel is larger than one pixel width, it is regarded as an erroneous pixel. Since an undistorted depth map is used as the reference, PBMP is an accurate and reliable full-reference metric.
Table 1. PBMP values (%) of the different depth maps in the evaluation database
Table 2 gives the quality evaluation values of "Tsukuba", "Venus", "Teddy" and "Cones" in the evaluation database obtained by the method of the present invention. Table 3 gives the correlation coefficients between the evaluation results of the method of the present invention and the full-reference metric PBMP; the correlation coefficient measures the degree of consistency between the two, and for both the Pearson coefficient and the linear regression coefficient, values closer to 1 are better. Table 3 shows that the results obtained by the method of the present invention agree well with PBMP, indicating that the method can accurately detect depth errors and evaluate depth map quality.
Table 2. Quality evaluation value EPR (%) of the different depth maps in the evaluation database
Table 3. Correlation between the quality evaluation value EPR and PBMP
Table 4 gives the correlation coefficients between the evaluation results of the method of the present invention and the virtual viewpoint quality, where the virtual viewpoint quality is measured by the objective metric mean squared error (MSE). Because virtual viewpoint synthesis is based on the depth map, poorer depth quality leads to more errors in the virtual view, which means that MSE should increase as the quality evaluation value EPR increases; the linear regression coefficient between MSE and EPR therefore indicates the accuracy of the metric. For "Tsukuba", "Venus", "Teddy" and "Cones", the linear regression coefficients between EPR and MSE all exceed 0.75; in particular, for "Tsukuba" the linear regression coefficient exceeds 0.92. This shows that the quality evaluation value EPR agrees well with the quality of the virtual viewpoint.
Table 4. Correlation between the quality evaluation value EPR and the virtual viewpoint quality
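The consistency figures reported in Tables 3 and 4 can be reproduced with a short script once the per-depth-map EPR values and the reference values (PBMP, or MSE of the rendered views) are available; the snippet below is a sketch using SciPy, the fitting tool is not specified in the patent, and the input arrays in the comment are hypothetical.

```python
import numpy as np
from scipy.stats import pearsonr, linregress

def consistency(epr, reference_metric):
    """Pearson correlation and linear-regression R^2 between the EPR scores and a
    reference metric (PBMP of the depth maps, or MSE of the rendered views)."""
    epr = np.asarray(epr, dtype=np.float64)
    ref = np.asarray(reference_metric, dtype=np.float64)
    r, _ = pearsonr(epr, ref)
    fit = linregress(epr, ref)
    return r, fit.rvalue ** 2          # Pearson coefficient, linear regression R^2

# example with hypothetical scores for one scene's nine depth maps:
# r, r2 = consistency([1.2, 3.4, 2.1], [0.9, 4.0, 2.5])
```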
Claims (3)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710041375.6A CN106803952B (en) | 2017-01-20 | 2017-01-20 | In conjunction with the cross validation depth map quality evaluating method of JND model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710041375.6A CN106803952B (en) | 2017-01-20 | 2017-01-20 | In conjunction with the cross validation depth map quality evaluating method of JND model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106803952A CN106803952A (en) | 2017-06-06 |
CN106803952B true CN106803952B (en) | 2018-09-14 |
Family
ID=58987216
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710041375.6A Active CN106803952B (en) | 2017-01-20 | 2017-01-20 | In conjunction with the cross validation depth map quality evaluating method of JND model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106803952B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110544233B (en) * | 2019-07-30 | 2022-03-08 | 北京的卢深视科技有限公司 | Depth image quality evaluation method based on face recognition application |
CN110691228A (en) * | 2019-10-17 | 2020-01-14 | 北京迈格威科技有限公司 | Three-dimensional transformation-based depth image noise marking method and device and storage medium |
CN111402152B (en) * | 2020-03-10 | 2023-10-24 | 北京迈格威科技有限公司 | Processing method and device of disparity map, computer equipment and storage medium |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7006568B1 (en) * | 1999-05-27 | 2006-02-28 | University Of Maryland, College Park | 3D wavelet based video codec with human perceptual model |
BRPI0906767A2 (en) * | 2008-01-18 | 2015-07-14 | Thomson Licensing | Method for Perceptual Quality Assessment |
CN103002306B (en) * | 2012-11-27 | 2015-03-18 | 宁波大学 | Depth image coding method |
CN103426173B (en) * | 2013-08-12 | 2017-05-10 | 浪潮电子信息产业股份有限公司 | Objective evaluation method for stereo image quality |
CN103957401A (en) * | 2014-05-12 | 2014-07-30 | 武汉大学 | Three-dimensional mixed minimum perceivable distortion model based on depth image rendering |
TW201601522A (en) * | 2014-06-23 | 2016-01-01 | 國立臺灣大學 | Perceptual video coding method based on just-noticeable- distortion model |
CN104754320B (en) * | 2015-03-27 | 2017-05-31 | 同济大学 | A kind of 3D JND threshold values computational methods |
CN104954778B (en) * | 2015-06-04 | 2017-05-24 | 宁波大学 | An Objective Evaluation Method of Stereo Image Quality Based on Perceptual Feature Set |
CN105828061B (en) * | 2016-05-11 | 2017-09-29 | 宁波大学 | A kind of virtual view quality evaluating method of view-based access control model masking effect |
- 2017-01-20: Application CN201710041375.6A filed; granted as patent CN106803952B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN106803952A (en) | 2017-06-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102026013B (en) | Three-dimensional video matching method based on affine transformation | |
CN102509343B (en) | Binocular image and object contour-based virtual and actual sheltering treatment method | |
JP5442111B2 (en) | A method for high-speed 3D construction from images | |
CN101271578B (en) | A Depth Sequence Generation Method in Plane Video to Stereo Video Technology | |
CN101996407B (en) | A multi-camera color calibration method | |
CN102663747B (en) | Stereo image objectivity quality evaluation method based on visual perception | |
TWI531212B (en) | System and method of rendering stereoscopic images | |
CN109255811A (en) | A kind of solid matching method based on the optimization of confidence level figure parallax | |
CN102665086A (en) | Method for obtaining parallax by using region-based local stereo matching | |
CN104954778B (en) | An Objective Evaluation Method of Stereo Image Quality Based on Perceptual Feature Set | |
CN104065947B (en) | The depth map acquisition methods of a kind of integration imaging system | |
CN101771893A (en) | Video frequency sequence background modeling based virtual viewpoint rendering method | |
CN102982535A (en) | Stereo image quality evaluation method based on peak signal to noise ratio (PSNR) and structural similarity (SSIM) | |
CN111385554B (en) | A High Image Quality Virtual Viewpoint Rendering Method for Free Viewpoint Video | |
CN109345502B (en) | Stereo image quality evaluation method based on disparity map stereo structure information extraction | |
CN101610425A (en) | A method and device for evaluating the quality of stereoscopic images | |
CN102831601A (en) | Three-dimensional matching method based on union similarity measure and self-adaptive support weighting | |
CN107071383A (en) | The virtual visual point synthesizing method split based on image local | |
CN106803952B (en) | In conjunction with the cross validation depth map quality evaluating method of JND model | |
CN110853027A (en) | Three-dimensional synthetic image no-reference quality evaluation method based on local variation and global variation | |
CN105976351A (en) | Central offset based three-dimensional image quality evaluation method | |
CN102333234B (en) | A monitoring method and device for binocular stereoscopic video state information | |
CN101662695B (en) | Method and device for acquiring virtual viewport | |
CN102542541A (en) | Deep image post-processing method | |
CN107105214A (en) | A kind of 3 d video images method for relocating |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
- TR01 | Transfer of patent right |
Effective date of registration: 2024-07-03. Patentee before: Ningbo University, 818 Fenghua Road, Jiangbei District, Ningbo, Zhejiang 315211, China. Patentee after: Shenzhen lizhuan Technology Transfer Center Co.,Ltd., 509 Kangrui Times Square, Keyuan Business Building, 39 Huarong Road, Gaofeng Community, Dalang Street, Longhua District, Shenzhen, Guangdong Province 518000, China.
Effective date of registration: 2024-07-04. Patentee before: Shenzhen lizhuan Technology Transfer Center Co.,Ltd. (address as above). Patentee after: Shenzhen Yiqi Culture Co.,Ltd., 1407, Phase II, Qianhai Shimao Financial Center, No. 3040 Xinghai Avenue, Nanshan Street, Qianhai Shenzhen-Hong Kong Cooperation Zone, Shenzhen, Guangdong 518000, China.
|
TR01 | Transfer of patent right |