WO2020140397A1 - Method for restoring downhole images based on ray reverse tracing technology - Google Patents

Method for restoring downhole images based on ray reverse tracing technology

Info

Publication number
WO2020140397A1
WO2020140397A1 (PCT/CN2019/091631)
Authority
WO
WIPO (PCT)
Prior art keywords
light
refracted
reflected
viewing plane
camera
Prior art date
Application number
PCT/CN2019/091631
Other languages
English (en)
French (fr)
Inventor
王忠宾
吴越
谭超
司垒
路绪良
刘朋
周红亚
刘博文
吴虹霖
李小玉
Original Assignee
中国矿业大学
徐州金枫液压技术开发有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国矿业大学, 徐州金枫液压技术开发有限公司 filed Critical 中国矿业大学
Priority to AU2019395238A priority Critical patent/AU2019395238B2/en
Priority to CA3079552A priority patent/CA3079552C/en
Priority to RU2020115096A priority patent/RU2742814C9/ru
Publication of WO2020140397A1 publication Critical patent/WO2020140397A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/81Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60Noise processing, e.g. detecting, correcting, reducing or removing noise
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the invention belongs to the field of downhole image restoration, and particularly relates to a method for restoring downhole images based on ray inverse tracing technology.
  • Ray tracing technology is a method of rendering three-dimensional (3D) images on a two-dimensional (2D) screen.
  • It is widely applied in current games and computer graphics and can bring more realistic effects to people.
  • Assuming the light source is a point source, thousands of light rays are emitted randomly in all directions; these rays are reflected, refracted, absorbed (attenuated), or cause fluorescence when they touch different objects.
  • Ray tracing is a general technique from geometric optics. It traces the rays interacting with the optical surface to obtain a model of the path of the rays.
  • the underground video has the characteristics of low image illumination and uneven illumination distribution.
  • This special situation results in low quality and poor resolution of the collected video.
  • When a strong light source such as a safety miner's lamp appears in the field of view of the mining camera, glare appears in the collected image, significantly degrading the quality of the video image and potentially leading to safety accidents. It is therefore of great significance to apply ray reverse tracing technology to the restoration of downhole images to improve their readability.
  • To this end, the present invention proposes a method for restoring downhole images based on ray reverse tracing technology.
  • A suddenly appearing strong light source interferes with the original camera picture, making the black-and-white contrast of the monitoring picture too large to recognize the information in it.
  • Using ray reverse tracing, the pixel values of the strong light source within the viewing plane are eliminated, removing the interference of the strong light source with the original camera picture.
  • the technical solution adopted by the present invention is: a method for restoring downhole images based on ray reverse tracing technology, which includes the following steps:
  • Step 1 Assume the downhole camera is the light source emission point, that is, the viewpoint, and emit light into the downhole scene;
  • Step 2 Record all the intersection points of all rays and the underground objects, and calculate the intersection point closest to the viewpoint among the intersection points;
  • Step 3 According to the light, object material and normal direction, calculate the reflected light intensity or the refracted light intensity at the closest intersection determined in step two.
  • Step 4 Calculate the direction of the newly generated light after being reflected and refracted by the object at the intersection;
  • Step 5 Track the newly generated light in Step 4 and determine whether the third reflected and/or refracted ray is incident on the viewing plane directly in front of the safety miner's lamp; if so, calculate the third reflected and/or refracted light intensity; otherwise return to Step 2 to re-determine the nearest intersection, and repeat Steps 3 to 5;
  • Step 6 The light intensity in step 5 is converted into pixel values by the CCD photosensitive element of the camera.
  • The camera's light, after its third reflection and/or refraction, is incident on the viewing plane and is imaged on the viewing plane;
  • Step 7 In the image finally presented on the viewing plane, eliminate the pixel value of the strong light emitted by the camera to obtain the image after removing the influence of the strong light source.
  • In Step 3, the reflected or refracted light intensity at the nearest intersection determined in Step 2 is calculated as follows:
  • I r represents the reflected light intensity
  • I a K a represents the influence value of ambient light at the intersection
  • I i represents the incident light intensity
  • K d represents the specular reflectance coefficient
  • K s represents the diffuse reflectance coefficient
  • R d represents specular reflectance
  • R s represents diffuse reflectance
  • N, L, and dω respectively represent the object-surface normal vector, the unit vector of the light direction, and the solid angle
  • I t represents the intensity of the refracted light
  • ⁇ 1 and ⁇ 2 are the incident angle and the refraction angle.
  • In Step 5, the ray newly generated in Step 4 is tracked as follows:
  • If none of the reflected and refracted rays generated by the initial ray is incident on the viewing plane directly in front of the safety miner's lamp, determine the second-closest intersection between the initial ray and the object and repeat step (1). If the second-closest intersection does not satisfy the above condition, evaluate the next-closest intersections in turn until an intersection satisfying the condition is found.
  • In Step 7, in the image finally presented on the viewing plane, the pixel values of the strong light emitted by the camera are eliminated to obtain the image after removing the influence of the strong light source, as follows:
  • Underground, in addition to the light emitted by the camera to simulate the safety miner's lamp (the light source A), there are other artificial lights (the light source B) and ambient light (the non-artificial light source C).
  • the image on the viewing plane can be expressed as follows:
  • P (x, y) represents the final image on the viewing plane
  • R (x, y) represents the image on the viewing plane when the camera does not emit light, that is, the light source B and the light source C are superimposed on the viewing plane
  • S(x,y) represents the imaging on the viewing plane when only the camera emits light
  • L(x,y) represents the imaging of ambient light, that is, the light source C, on the viewing plane
  • the ambient light L(x, y) can be expressed as follows through the Gaussian kernel convolution of P(x, y) and Gaussian function G(x, y):
  • C represents the Gaussian surround scale
  • S′(x,y) is the image after eliminating the influence of strong light source.
  • the present invention uses ray reverse tracing to change the traditional idea of image processing.
  • For a suddenly appearing strong light source, traditional methods mostly adopt linear transformation, gamma correction, histogram equalization, unsharp masking, homomorphic filtering, tone mapping, dark-channel algorithms, and similar techniques, and the processing effect is not obvious.
  • Ray reverse tracing technology can effectively eliminate the interference of strong light sources, restore the original downhole images, and ensure the smooth progress of downhole work and the safety of operators' lives.
  • Figure 1 is a schematic diagram of the solid angle subtended toward the light source by a unit area;
  • FIG. 2 is a schematic diagram of reflection and refraction reception in the ray reverse tracing of the present invention;
  • FIG. 3 shows the process by which the ray reverse tracing of the present invention eliminates interference from strong light sources.
  • The method for restoring downhole images based on ray reverse tracing technology addresses mine conditions of low illumination, heavy dust, and high humidity, in which a suddenly appearing strong light source interferes with the original camera picture, making the black-and-white contrast of the monitoring picture too large to recognize the information in it.
  • Ray reverse tracing is used to eliminate the pixel values of the strong light source within the viewing plane, thereby removing the interference of the strong light source with the original camera picture.
  • The process by which the ray reverse tracing of the present invention eliminates strong-light-source interference specifically includes the following steps:
  • Step 1 Assume that the downhole camera is the light source emission point, that is, the viewpoint, and emits light into the downhole scene.
  • the intensity of this light is equal to the intensity of the light emitted by the safety miner's lamp.
  • Step 2 Record all intersection points of all rays and downhole objects, and calculate the intersection point closest to the viewpoint among the intersection points.
  • Step 3 According to the light, object material and normal direction, calculate the reflected light intensity or the refracted light intensity at the closest intersection determined in step two.
  • I r represents the reflected light intensity
  • I a K a represents the influence value of ambient light at the intersection
  • I i represents the incident light intensity
  • K d represents the specular reflectance coefficient
  • K s represents the diffuse reflectance coefficient
  • R d represents specular reflectance
  • R s represents diffuse reflectance
  • N, L, and dω respectively represent the object-surface normal vector, the unit vector of the light direction, and the solid angle; as shown in Figure 1, the horizontal axis represents the object surface and the vertical axis represents the direction of the object-surface normal vector.
  • The solid angle is defined as follows: with the camera as the observation point, a sphere is formed around it; the solid angle is the angle subtended at the observation point by the projected area of the downhole object on that sphere.
  • I t represents the intensity of the refracted light
  • ⁇ 1 and ⁇ 2 are the incident angle and the refraction angle.
  • The light-and-dark effect is determined solely by the normal direction of the first intersected object surface, the material, the viewpoint, the illumination direction, and the light intensity.
  • Ray casting does not consider the second and deeper layers of rays, so it produces no shadow, reflection, refraction, or fluorescence effects.
  • Step 4 Calculate the direction of the newly generated light after it is reflected and refracted by the object at the intersection.
  • the direction of the newly generated light is determined by the direction of the incident light, the surface normal of the object, and the medium.
  • Step 5 Track the newly generated light in Step 4 and determine whether the third reflected and/or refracted ray is incident on the viewing plane directly in front of the safety miner's lamp; if so, calculate the third reflected and/or refracted light intensity; otherwise, return to Step 2, re-determine the nearest intersection, and repeat Steps 3 to 5.
  • After a ray is emitted from the camera, it is traced as follows: within the scene it intersects a transparent object, a non-transparent object, or no object at all.
  • (1) If the ray does not intersect any object, abandon tracking it. If the intersection lies on a non-transparent object, only the intensity of the reflected ray is calculated; if it lies on a transparent object, the intensities of both the reflected and refracted rays are calculated, and the rays derived from the initial ray through three reflections or refractions are tracked. If a ray reflected or refracted three times from the initial ray enters the viewing plane directly in front of the safety miner's lamp, its intensity is calculated; if not, tracking is abandoned and step (2) is entered.
  • (2) If none of the reflected and refracted rays generated by the initial ray is incident on the viewing plane directly in front of the safety miner's lamp, determine the second-closest intersection between the initial ray and the object and repeat step (1). If the second-closest intersection does not satisfy the above condition, evaluate the next-closest intersections in turn until an intersection satisfying the condition is found.
  • Assume that in the downhole scene a camera is located at the viewpoint, light is emitted by the camera, and there are a transparent body O1 and an opaque body O2.
  • An initial ray E is emitted from the viewpoint, intersects O1 at P1, and generates a reflected ray R1 and a refracted ray T1.
  • The intensity of R1 is calculated by formula (1); because R1 no longer intersects any other object, its tracking ends.
  • The intensity of T1 is I_t1 = (cosθ2/cosθ1)(I_i − I_r1); T1 intersects P2 inside O1, producing a reflected ray R2 and a refracted ray T2. The intensity of R2 is calculated by formula (1),
  • and the intensity of T2 is I_t2 = (cosθ4/cosθ3)(I_t1 − I_r2).
  • T2 intersects O3 at P3; since O3 is opaque, only a reflected ray R3 is generated, whose intensity is calculated by formula (1). R3 finally enters the viewing plane.
  • ⁇ 1 and ⁇ 2 are the angle of incidence and reflection at P 1
  • ⁇ 3 and ⁇ 4 are the angle of incidence and reflection at P 2
  • I i represents the light intensity of the light E, that is, the incident light intensity of the initial light
  • Represents the diffuse reflectance at P 1 , P 2 , P 3 , N 1 , N 2 , N 3 represent the normal vector of the object surface at P 1 , P 2 , P 3 , L 1 , L 2 , L 3
  • Step 7 In the final image presented on the viewing plane, eliminate the pixel values of the strong light emitted by the camera to obtain the image after removing the influence of the strong light source.
  • the method is as follows:
  • Underground, in addition to the light emitted by the camera to simulate the safety miner's lamp (the light source A), there are other artificial lights (the light source B) and ambient light (the non-artificial light source C).
  • the image on the viewing plane can be expressed as follows:
  • P (x, y) represents the final image on the viewing plane
  • R (x, y) represents the image on the viewing plane when the camera does not emit light, that is, the light source B and the light source C are superimposed on the viewing plane
  • S(x, y) represents the imaging on the viewing plane when only the camera emits light
  • L(x, y) represents the imaging of ambient light, that is, the light source C, on the viewing plane.
  • the ambient light L(x, y) can be expressed as follows through the Gaussian kernel convolution of P(x, y) and Gaussian function G(x, y):
  • C represents the Gaussian surround scale
  • S′(x,y) is the image after eliminating the influence of strong light source.
  • While greatly reducing the computational load of ray tracing, the ray reverse tracing used by the invention effectively reduces the glare that a strong light source casts on the low-illumination downhole camera picture, thereby restoring the camera picture.
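Step 6 above converts the traced light intensity into pixel values through the camera's CCD. The patent does not specify the CCD response, so the linear scaling, the normalizing intensity `i_max`, and the function name below are illustrative assumptions only:

```python
def intensity_to_pixel(intensity, i_max, bits=8):
    """Map a light intensity reaching the viewing plane to a CCD pixel value.

    Assumes a linear CCD response normalized by i_max (the intensity that
    saturates the sensor); the real response curve is not given in the patent.
    """
    levels = 2 ** bits - 1
    level = round(intensity / i_max * levels)
    return max(0, min(levels, level))  # clip to the valid pixel range

# A ray carrying 40% of the saturating intensity on an 8-bit sensor:
pixel = intensity_to_pixel(40.0, 100.0)  # -> 102
```

Intensities above `i_max` simply saturate to the maximum pixel value, which is consistent with how a strong light source blows out the captured picture.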

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Generation (AREA)
  • Geophysics And Detection Of Objects (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present invention discloses a method for restoring downhole images based on ray reverse tracing technology, comprising the following steps: take the downhole camera as the light-source emission point and emit rays into the downhole scene; record all intersection points between the rays and downhole objects, and find the intersection closest to the viewpoint; calculate the directions of the new rays produced when a ray is reflected and refracted by an object at an intersection; track each of the newly generated rays; record the rays from the strong light source at the camera that reach the viewing plane after three reflections or refractions, and calculate their intensity; convert the intensity into pixel values through the camera's CCD sensor; and, in the image finally presented on the viewing plane, eliminate the pixel values of the strong light emitted by the camera, obtaining an image free of the strong light source's influence. The invention can effectively eliminate the interference of strong light sources, restore downhole images, and ensure the smooth progress of downhole work and the safety of operators.

Description

A method for restoring downhole images based on ray reverse tracing technology

Technical field

The invention belongs to the field of downhole image restoration, and particularly relates to a method for restoring downhole images based on ray reverse tracing technology.

Background

Ray tracing is a technique for presenting three-dimensional (3D) images on a two-dimensional (2D) screen. It is widely applied in current games and computer graphics and can bring more realistic effects. Assuming the light source is a point source, thousands of rays are emitted randomly in all directions; these rays are reflected, refracted, absorbed (attenuated), or cause fluorescence after touching different objects. Ray tracing is a general technique from geometrical optics: by tracing the rays that interact with optical surfaces, a model of the paths the rays travel is obtained. However, there are countless rays, and the rays produced after reflection, refraction, absorption, and fluorescence are more numerous still, so forward ray tracing is computationally very expensive; the method of ray reverse tracing has therefore gradually come into view. Taking the camera as the emission point of the light and computing only the portion of rays that enter the viewing plane greatly reduces the amount of computation.

Since most explosion-proof cameras currently used underground are black-and-white cameras, and since the coal-mine environment is special (round-the-clock artificial lighting, together with dust and humidity), downhole video is characterized by low image illumination and uneven illumination distribution. This special situation results in low quality and poor resolution of the collected video. When a strong light source such as a safety miner's lamp appears in the field of view of a mining camera, glare appears in the collected images, sharply degrading the quality of the video and possibly leading to safety accidents. Applying ray reverse tracing technology to downhole image restoration to improve image readability is therefore of great significance.
Summary of the invention

Object of the invention: In view of the above problems, the present invention proposes a method for restoring downhole images based on ray reverse tracing technology. Under mine conditions of low illumination and heavy dust, a suddenly appearing strong light source interferes with the original camera picture, making the black-and-white contrast of the monitoring picture too large to recognize the information in it. Using ray reverse tracing, the pixel values of the strong light source within the viewing plane are eliminated, thereby removing the strong light source's interference with the original camera picture.

Technical solution: To achieve the object of the invention, the adopted technical solution is a method for restoring downhole images based on ray reverse tracing technology, comprising the following steps:

Step 1: Take the downhole camera as the light-source emission point, i.e., the viewpoint, and emit rays into the downhole scene;

Step 2: Record all intersection points between all rays and downhole objects, and find the intersection closest to the viewpoint;

Step 3: According to the illumination, object material, and normal direction, calculate the reflected or refracted light intensity at the closest intersection determined in Step 2;

Step 4: Calculate the directions of the new rays generated when the ray is reflected and refracted by the object at the intersection;

Step 5: Track the rays newly generated in Step 4 and determine whether the third reflected and/or refracted ray is incident on the viewing plane directly in front of the safety miner's lamp; if so, calculate the third reflected and/or refracted light intensity; otherwise return to Step 2, re-determine the nearest intersection, and repeat Steps 3 to 5;

Step 6: Convert the light intensity from Step 5 into pixel values through the camera's CCD sensor; the rays emitted by the camera that have been reflected and/or refracted a third time are incident on the viewing plane and imaged there;

Step 7: In the image finally presented on the viewing plane, eliminate the pixel values of the strong light emitted by the camera, obtaining an image free of the strong light source's influence.
In Step 3, the reflected or refracted light intensity at the closest intersection determined in Step 2 is calculated as follows.

The reflected light intensity at the intersection is calculated by formula (1):

[Formula (1) appeared here as an image and could not be extracted; it expresses the reflected light intensity I_r in terms of the quantities defined below.]

where I_r denotes the reflected light intensity; I_aK_a the influence value of ambient light at the intersection; I_i the incident light intensity; K_d the specular reflectance coefficient; K_s the diffuse reflectance coefficient; R_d the specular reflectance; R_s the diffuse reflectance; and N, L, and dω the object-surface normal vector, the unit vector of the ray direction, and the solid angle, respectively.

Alternatively, the refracted light intensity at the intersection is calculated by formula (2):

I_t = (cosθ2/cosθ1)(I_i − I_r)        (2)

where I_t denotes the refracted light intensity, and θ1 and θ2 are the angle of incidence and the angle of refraction.
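Formula (2) links the refracted intensity to the incident and reflected intensities through the incidence and refraction angles. A small sketch, assuming Snell's law supplies θ2 from θ1 and assumed refractive indices n1, n2 (the patent does not state the media):

```python
import math

def refracted_intensity(i_incident, i_reflected, theta1, n1=1.0, n2=1.5):
    """Formula (2): I_t = (cos(theta2) / cos(theta1)) * (I_i - I_r).

    theta2 is derived from Snell's law n1*sin(theta1) = n2*sin(theta2);
    n1 and n2 are assumed refractive indices (e.g. air into a denser medium).
    """
    sin_theta2 = n1 * math.sin(theta1) / n2
    if abs(sin_theta2) > 1.0:
        return 0.0  # total internal reflection: no refracted ray exists
    theta2 = math.asin(sin_theta2)
    return (math.cos(theta2) / math.cos(theta1)) * (i_incident - i_reflected)

# Incident intensity 100, reflected part 20, incidence angle 30 degrees:
i_t = refracted_intensity(100.0, 20.0, math.radians(30.0))
```

The total-internal-reflection branch matters when a traced ray tries to leave a transparent body at a shallow angle, in which case only the reflected ray continues.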
In Step 5, the rays newly generated in Step 4 are tracked as follows:

(1) If a ray does not intersect any object, abandon tracking it. If the intersection lies on a non-transparent object, only the intensity of the reflected ray is calculated; if it lies on a transparent object, the intensities of both the reflected ray and the refracted ray are calculated, and the rays derived from the initial ray through three reflections or refractions are tracked. If a ray reflected or refracted three times from the initial ray enters the viewing plane directly in front of the safety miner's lamp, its intensity is calculated; if not, tracking is abandoned and step (2) is entered;

(2) If none of the reflected and refracted rays generated by the initial ray enters the viewing plane directly in front of the safety miner's lamp, determine the second-closest intersection between the initial ray and the objects and repeat step (1). If the second-closest intersection does not satisfy the above condition, evaluate the next-closest intersections in turn until an intersection satisfying the condition is found.
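The stopping rule in steps (1) and (2) can be sketched abstractly. The representation of a ray path as a list of surface interactions, each recording whether its outgoing ray lands on the viewing plane, is an illustrative assumption, not the patent's data structure:

```python
def reaches_plane(path, bounce_limit=3):
    """Apply the three-bounce rule to one candidate ray path.

    path: successive surface interactions of a traced ray, each a dict with
    'on_plane' telling whether the outgoing ray lands on the viewing plane
    directly in front of the safety miner's lamp. Only the ray produced by
    the third reflection/refraction counts; shorter paths are abandoned.
    """
    if len(path) < bounce_limit:
        return False  # ray left the scene before the third bounce
    return bool(path[bounce_limit - 1].get('on_plane', False))

# The Figure 2 example: refraction at P1, refraction at P2, and a
# reflection at P3 that lands on the viewing plane.
example_path = [{'on_plane': False}, {'on_plane': False}, {'on_plane': True}]
```

A path that ends after one or two interactions is rejected, which is what sends the algorithm back to Step 2 to try the next-closest intersection.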
In Step 7, in the image finally presented on the viewing plane, the pixel values of the strong light emitted by the camera are eliminated to obtain the image after removing the influence of the strong light source, as follows.

Underground, in addition to the light emitted by the camera to simulate the safety miner's lamp (light source A), there are other artificial lights (light source B) as well as ambient light (the non-artificial light source C).

When the third reflected and/or refracted rays strike the viewing plane, the image on the viewing plane can be expressed as:

P(x,y)=R(x,y)·S(x,y)·L(x,y)        (3)

where P(x,y) represents the image finally presented on the viewing plane; R(x,y) represents the image on the viewing plane when the camera emits no light, i.e., the superposition of light sources B and C on the viewing plane; S(x,y) represents the image on the viewing plane when only the camera emits light; and L(x,y) represents the image of the ambient light (light source C) on the viewing plane.

Let I(x,y)=R(x,y)·S(x,y)                  (4)

Taking logarithms of both sides gives ln P(x,y)=ln I(x,y)+ln L(x,y)    (5)

The ambient light L(x,y) can be expressed through the Gaussian-kernel convolution of P(x,y) with the Gaussian function G(x,y):

L(x,y)=P(x,y)*G(x,y)            (6)

where

G(x,y)=λ·exp(−(x²+y²)/C²)

C represents the Gaussian surround scale, and λ is a scale factor that makes ∫∫G(x,y)dxdy=1 always hold.

From formulas (4), (5), and (6):

ln R(x,y)=ln P(x,y)−ln(P(x,y)*G(x,y))−ln S(x,y)

Let S′(x,y)=e^(ln R(x,y));

S′(x,y) is the image after eliminating the influence of the strong light source.
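Formulas (3)–(6) amount to a single-scale-Retinex-style correction: estimate the ambient component by Gaussian blurring, subtract it and the camera-light component in the log domain, and exponentiate. A minimal numerical sketch in pure Python on a small grayscale array; the kernel size, the surround scale, and the assumption that S(x,y) is known are illustrative choices:

```python
import math

def gaussian_kernel(size, c):
    """Discrete surround G(x,y) = lam * exp(-(x^2+y^2)/c^2), normalized so
    the kernel sums to 1 (the discrete analogue of the integral constraint)."""
    half = size // 2
    k = [[math.exp(-(x * x + y * y) / (c * c))
          for x in range(-half, half + 1)]
         for y in range(-half, half + 1)]
    s = sum(map(sum, k))
    return [[v / s for v in row] for row in k]

def convolve(img, kernel):
    """Same-size 2D convolution with edge clamping."""
    h, w, half = len(img), len(img[0]), len(kernel) // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for di in range(-half, half + 1):
                for dj in range(-half, half + 1):
                    ii = min(max(i + di, 0), h - 1)
                    jj = min(max(j + dj, 0), w - 1)
                    acc += img[ii][jj] * kernel[di + half][dj + half]
            out[i][j] = acc
    return out

def remove_strong_light(P, S, c=2.0, ksize=5):
    """ln R = ln P - ln(P * G) - ln S; S' = exp(ln R)  (formulas (3)-(6))."""
    L = convolve(P, gaussian_kernel(ksize, c))
    return [[math.exp(math.log(P[i][j]) - math.log(L[i][j])
                      - math.log(S[i][j]))
             for j in range(len(P[0]))]
            for i in range(len(P))]
```

For a uniform scene with S ≡ 1 the blur reproduces P exactly, so S′ ≡ 1 everywhere: only the multiplicative strong-light component is removed, not the scene content.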
Beneficial effects: Compared with the prior art, the technical solution of the present invention has the following beneficial effects.

The invention uses ray reverse tracing to change the traditional thinking about image processing. For a suddenly appearing strong light source, traditional methods mostly adopt linear transformation, gamma correction, histogram equalization, unsharp masking, homomorphic filtering, tone mapping, dark-channel algorithms, and the like, and the processing effect is not obvious. Ray reverse tracing technology can effectively eliminate the interference of strong light sources, restore the original downhole images, and ensure the smooth progress of downhole work and the safety of operators' lives.

Brief description of the drawings

Figure 1 is a schematic diagram of the solid angle subtended toward the light source by a unit area;

Figure 2 is a schematic diagram of reflection and refraction reception in the ray reverse tracing of the present invention;

Figure 3 shows the process by which the ray reverse tracing of the present invention eliminates strong-light-source interference.
Detailed description of the embodiments

The technical solution of the present invention is further described below with reference to the drawings and embodiments.

The method for restoring downhole images based on ray reverse tracing technology addresses mine conditions of low illumination, heavy dust, and high humidity, in which a suddenly appearing strong light source interferes with the original camera picture, making the black-and-white contrast of the monitoring picture too large to recognize the information in it. Ray reverse tracing is used to eliminate the pixel values of the strong light source within the viewing plane, thereby removing the interference. As shown in Figure 3, the process of eliminating strong-light-source interference by ray reverse tracing comprises the following steps.

Step 1: Take the downhole camera as the light-source emission point, i.e., the viewpoint, and emit rays into the downhole scene; the intensity of these rays equals the intensity of the light emitted by the safety miner's lamp.

Step 2: Record all intersection points between all rays and downhole objects, and find the intersection closest to the viewpoint.

Step 3: According to the illumination, object material, and normal direction, calculate the reflected or refracted light intensity at the closest intersection determined in Step 2.

The reflected light intensity at the intersection is calculated by formula (1):

[Formula (1) appeared here as an image and could not be extracted; it expresses the reflected light intensity I_r in terms of the quantities defined below.]

where I_r denotes the reflected light intensity; I_aK_a the influence value of ambient light at the intersection; I_i the incident light intensity; K_d the specular reflectance coefficient; K_s the diffuse reflectance coefficient; R_d the specular reflectance; R_s the diffuse reflectance; and N, L, and dω the object-surface normal vector, the unit vector of the ray direction, and the solid angle. As shown in Figure 1, the horizontal axis represents the object surface and the vertical axis represents the direction of the object-surface normal vector. The solid angle is defined as follows: with the camera as the observation point, a sphere is formed around it; the solid angle is the angle subtended at the observation point by the projected area of the downhole object on that sphere.

Alternatively, the refracted light intensity at the intersection is calculated by formula (2):

I_t = (cosθ2/cosθ1)(I_i − I_r)        (2)

where I_t denotes the refracted light intensity, and θ1 and θ2 are the angle of incidence and the angle of refraction.

The light-and-dark effect is determined solely by the normal direction of the first intersected object surface, the material, the viewpoint, the illumination direction, and the light intensity; ray casting does not consider the second and deeper layers of rays, so it produces no shadow, reflection, refraction, or fluorescence effects.

Step 4: Calculate the directions of the new rays generated when the ray is reflected and refracted by the object at the intersection. The direction of a newly generated ray is determined jointly by the direction of the incident ray, the surface normal of the object, and the medium.

Step 5: Track the rays newly generated in Step 4 and determine whether the third reflected and/or refracted ray is incident on the viewing plane directly in front of the safety miner's lamp; if so, calculate the third reflected and/or refracted light intensity; otherwise return to Step 2, re-determine the nearest intersection, and repeat Steps 3 to 5.

After a ray is emitted from the camera, it is traced as follows: within the scene it intersects a transparent object, a non-transparent object, or no object at all.

(1) If the ray does not intersect any object, abandon tracking it. If the intersection lies on a non-transparent object, only the intensity of the reflected ray is calculated; if it lies on a transparent object, the intensities of both the reflected and refracted rays are calculated, and the rays derived from the initial ray through three reflections or refractions are tracked. If a ray reflected or refracted three times from the initial ray enters the viewing plane directly in front of the safety miner's lamp, its intensity is calculated; if not, tracking is abandoned and step (2) is entered.

(2) If none of the reflected and refracted rays generated by the initial ray enters the viewing plane directly in front of the safety miner's lamp, determine the second-closest intersection between the initial ray and the objects and repeat step (1). If the second-closest intersection does not satisfy the above condition, evaluate the next-closest intersections in turn until an intersection satisfying the condition is found.
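The solid-angle definition above (projection of the object onto a sphere centered on the camera) reduces to Ω = A/r² for a projected area A on a sphere of radius r; the function name below is an illustrative assumption:

```python
import math

def solid_angle(projected_area, radius):
    """Solid angle in steradians subtended at the observation point by an
    object whose projection onto the surrounding sphere has the given area:
    omega = A / r**2."""
    return projected_area / radius ** 2

# Sanity check: the full sphere of radius r subtends 4*pi steradians.
omega_full = solid_angle(4 * math.pi * 3.0 ** 2, 3.0)
```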
As shown in Figure 2, an example of calculating reflected and refracted light intensity is given as follows.

Assume that in the downhole scene a camera is located at the viewpoint, light is emitted by the camera, and there are a transparent body O1 and a non-transparent body O2. First an initial ray E is emitted from the viewpoint; it intersects O1 at P1, producing a reflected ray R1 and a refracted ray T1. The intensity of R1 is calculated by formula (1); because R1 no longer intersects any other object, its tracking ends. The intensity of T1 is I_t1=(cosθ2/cosθ1)(I_i−I_r1); T1 intersects P2 inside O1, producing a reflected ray R2 and a refracted ray T2. The intensity of R2 is calculated by formula (1), and the intensity of T2 is I_t2=(cosθ4/cosθ3)(I_t1−I_r2). The recursion can continue, tracking R2 and T2. For example, T2 intersects O3 at P3; since O3 is non-transparent, only a reflected ray R3 is produced, whose intensity is calculated by formula (1). R3 finally enters the viewing plane.

Here θ1 and θ2 are the angles of incidence and refraction at P1, and θ3 and θ4 are the angles of incidence and refraction at P2. The influence values of ambient light are taken at P1, P2, and P3. I_i represents the light intensity of the ray E, i.e., the incident light intensity of the initial ray. The specular and diffuse reflectance coefficients and the specular and diffuse reflectances are taken at P1, P2, and P3. N1, N2, N3 represent the object-surface normal vectors at P1, P2, P3; L1, L2, L3 represent the unit direction vectors of the initial ray E, the refracted ray T1, and the refracted ray T2; and dω1, dω2, dω3 represent the solid angles produced at P1, P2, P3.
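The intensity chain in the Figure 2 example can be followed numerically with formula (2). All numbers below (the initial intensity, the reflected portions, and the 30°/19.47° angle pair, consistent with a refractive index of about 1.5) are made-up values for illustration, not from the patent:

```python
import math

def refract_intensity(i_in, i_reflected, theta_in, theta_out):
    """Formula (2): I_t = (cos(theta_out) / cos(theta_in)) * (I_in - I_r)."""
    return (math.cos(theta_out) / math.cos(theta_in)) * (i_in - i_reflected)

I_i = 100.0  # assumed intensity of the initial ray E

# At P1 the ray enters O1: incidence 30 deg refracts to about 19.47 deg.
I_t1 = refract_intensity(I_i, 10.0, math.radians(30.0), math.radians(19.47))

# At P2 the ray heads back out toward P3: the angle pair reverses on exit.
I_t2 = refract_intensity(I_t1, 5.0, math.radians(19.47), math.radians(30.0))

# I_t2 is the intensity T2 carries to P3, where only the reflected ray R3
# is produced and finally enters the viewing plane.
```

Each refraction subtracts the locally reflected portion before rescaling by the cosine ratio, so intensity decreases monotonically along the chain, as expected physically.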
Step 6: Convert the light intensity from Step 5 into pixel values through the camera's CCD sensor; the rays emitted by the camera that have been reflected and/or refracted a third time are incident on the viewing plane and imaged on the viewing plane.

Step 7: In the image finally presented on the viewing plane, eliminate the pixel values of the strong light emitted by the camera to obtain the image after removing the influence of the strong light source, as follows.

Underground, in addition to the light emitted by the camera to simulate the safety miner's lamp (light source A), there are other artificial lights (light source B) as well as ambient light (the non-artificial light source C).

When the third reflected and/or refracted rays strike the viewing plane, the image on the viewing plane can be expressed as:

P(x,y)=R(x,y)·S(x,y)·L(x,y)        (3)

where P(x,y) represents the image finally presented on the viewing plane; R(x,y) represents the image on the viewing plane when the camera emits no light, i.e., the superposition of light sources B and C on the viewing plane; S(x,y) represents the image on the viewing plane when only the camera emits light; and L(x,y) represents the image of the ambient light (light source C) on the viewing plane.

Let I(x,y)=R(x,y)·S(x,y)                  (4)

Taking logarithms of both sides gives ln P(x,y)=ln I(x,y)+ln L(x,y)   (5)

The ambient light L(x,y) can be expressed through the Gaussian-kernel convolution of P(x,y) with the Gaussian function G(x,y):

L(x,y)=P(x,y)*G(x,y)                    (6)

where

G(x,y)=λ·exp(−(x²+y²)/C²)

C represents the Gaussian surround scale, and λ is a scale factor that makes ∫∫G(x,y)dxdy=1 always hold.

From formulas (4), (5), and (6):

ln R(x,y)=ln P(x,y)−ln(P(x,y)*G(x,y))−ln S(x,y)

Let S′(x,y)=e^(ln R(x,y));

S′(x,y) is the image after eliminating the influence of the strong light source.

Using ray reverse tracing, the present invention, while greatly reducing the computational load of ray tracing, effectively reduces the glare that a strong light source casts on the low-illumination downhole camera picture, thereby restoring the camera picture.

Claims (4)

  1. A method for restoring downhole images based on ray reverse tracing technology, characterized in that the method comprises the following steps:
    Step 1: Take the downhole camera as the light-source emission point, i.e., the viewpoint, and emit rays into the downhole scene;
    Step 2: Record all intersection points between all rays and downhole objects, and find the intersection closest to the viewpoint;
    Step 3: According to the illumination, object material, and normal direction, calculate the reflected or refracted light intensity at the closest intersection determined in Step 2;
    Step 4: Calculate the directions of the new rays generated when the ray is reflected and refracted by the object at the intersection;
    Step 5: Track the rays newly generated in Step 4 and determine whether the third reflected and/or refracted ray is incident on the viewing plane directly in front of the safety miner's lamp; if so, calculate the third reflected and/or refracted light intensity; otherwise return to Step 2, re-determine the nearest intersection, and repeat Steps 3 to 5;
    Step 6: Convert the light intensity from Step 5 into pixel values through the camera's CCD sensor; the rays emitted by the camera that have been reflected and/or refracted a third time are incident on the viewing plane and imaged there;
    Step 7: In the image finally presented on the viewing plane, eliminate the pixel values of the strong light emitted by the camera to obtain the image after removing the influence of the strong light source.
  2. The method for restoring downhole images based on ray reverse tracing technology according to claim 1, characterized in that in Step 3 the reflected or refracted light intensity at the closest intersection determined in Step 2 is calculated as follows:
    the reflected light intensity at the intersection is calculated by formula (1):
    [Formula (1) appeared here as an image and could not be extracted; it expresses the reflected light intensity I_r in terms of the quantities defined below.]
    where I_r denotes the reflected light intensity; I_aK_a the influence value of ambient light at the intersection; I_i the incident light intensity; K_d the specular reflectance coefficient; K_s the diffuse reflectance coefficient; R_d the specular reflectance; R_s the diffuse reflectance; and N, L, and dω the object-surface normal vector, the unit vector of the ray direction, and the solid angle, respectively;
    alternatively, the refracted light intensity at the intersection is calculated by formula (2):
    I_t=(cosθ2/cosθ1)(I_i−I_r)  (2)
    where I_t denotes the refracted light intensity, and θ1 and θ2 are the angle of incidence and the angle of refraction.
  3. The method for restoring downhole images based on ray reverse tracing technology according to claim 1 or 2, characterized in that in Step 5 the rays newly generated in Step 4 are tracked as follows:
    (1) if a ray does not intersect any object, abandon tracking it; if the intersection lies on a non-transparent object, only the intensity of the reflected ray is calculated; if it lies on a transparent object, the intensities of both the reflected ray and the refracted ray are calculated, and the rays derived from the initial ray through three reflections or refractions are tracked; if a ray reflected or refracted three times from the initial ray enters the viewing plane directly in front of the safety miner's lamp, its intensity is calculated; if not, tracking is abandoned and step (2) is entered;
    (2) if none of the reflected and refracted rays generated by the initial ray enters the viewing plane directly in front of the safety miner's lamp, determine the second-closest intersection between the initial ray and the objects and repeat step (1); if the second-closest intersection does not satisfy the above condition, evaluate the next-closest intersections in turn until an intersection satisfying the condition is found.
  4. The method for restoring downhole images based on ray reverse tracing technology according to claim 1 or 2, characterized in that in Step 7, in the image finally presented on the viewing plane, the pixel values of the strong light emitted by the camera are eliminated to obtain the image after removing the influence of the strong light source, as follows:
    when the third reflected and/or refracted rays strike the viewing plane, the image on the viewing plane can be expressed as:
    P(x,y)=R(x,y)·S(x,y)·L(x,y)  (3)
    where P(x,y) represents the image finally presented on the viewing plane, R(x,y) represents the image on the viewing plane when the camera emits no light, S(x,y) represents the image on the viewing plane when only the camera emits light, and L(x,y) represents the image of the ambient light on the viewing plane;
    let I(x,y)=R(x,y)·S(x,y)  (4)
    taking logarithms of both sides gives ln P(x,y)=ln I(x,y)+ln L(x,y)  (5)
    the ambient light L(x,y) can be expressed through the Gaussian-kernel convolution of P(x,y) with the Gaussian function G(x,y):
    L(x,y)=P(x,y)*G(x,y)  (6)
    where
    G(x,y)=λ·exp(−(x²+y²)/C²)
    C denotes the Gaussian surround scale and λ a scale factor; from formulas (4), (5), and (6):
    ln R(x,y)=ln P(x,y)−ln(P(x,y)*G(x,y))−ln S(x,y)
    let S′(x,y)=e^(ln R(x,y));
    S′(x,y) is the image after eliminating the influence of the strong light source.
PCT/CN2019/091631 2019-01-04 2019-06-18 Method for restoring downhole images based on ray reverse tracing technology WO2020140397A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
AU2019395238A AU2019395238B2 (en) 2019-01-04 2019-06-18 Method for restoring underground image on basis of ray reverse tracing technology
CA3079552A CA3079552C (en) 2019-01-04 2019-06-18 Method for restoring underground image on basis of ray reverse tracing technology
RU2020115096A RU2742814C9 (ru) 2019-01-04 2019-06-18 Способ восстановления подземного изображения на основе технологии обратной трассировки лучей

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910006766.3A CN109862209B (zh) Method for restoring downhole images based on ray reverse tracing technology
CN201910006766.3 2019-01-04

Publications (1)

Publication Number Publication Date
WO2020140397A1 true WO2020140397A1 (zh) 2020-07-09

Family

ID=66893940

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/091631 WO2020140397A1 (zh) 2019-01-04 2019-06-18 Method for restoring downhole images based on ray reverse tracing technology

Country Status (5)

Country Link
CN (1) CN109862209B (zh)
AU (1) AU2019395238B2 (zh)
CA (1) CA3079552C (zh)
RU (1) RU2742814C9 (zh)
WO (1) WO2020140397A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114286375A (zh) * 2021-12-16 2022-04-05 北京邮电大学 Method for locating interference in a mobile communication network

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
CN109862209B (zh) 2019-01-04 2021-02-26 中国矿业大学 Method for restoring downhole images based on ray reverse tracing technology
CN116051450B (zh) * 2022-08-15 2023-11-24 荣耀终端有限公司 Glare information acquisition method and apparatus, chip, electronic device, and medium
CN116681814B (zh) * 2022-09-19 2024-05-24 荣耀终端有限公司 Image rendering method and electronic device

Citations (4)

Publication number Priority date Publication date Assignee Title
CN106231286A (zh) * 2016-07-11 2016-12-14 北京邮电大学 Three-dimensional image generation method and device
US9558580B2 (en) * 2014-01-10 2017-01-31 Canon Kabushiki Kaisha Image processing apparatus and method therefor
CN109118531A (zh) * 2018-07-26 2019-01-01 深圳大学 Three-dimensional reconstruction method and apparatus for transparent objects, computer device, and storage medium
CN109862209A (zh) * 2019-01-04 2019-06-07 中国矿业大学 Method for restoring downhole images based on ray reverse tracing technology

Family Cites Families (15)

Publication number Priority date Publication date Assignee Title
US6005916A (en) * 1992-10-14 1999-12-21 Techniscan, Inc. Apparatus and method for imaging with wavefields using inverse scattering techniques
US7389041B2 (en) * 2005-02-01 2008-06-17 Eastman Kodak Company Determining scene distance in digital camera images
US7764230B2 (en) * 2007-03-13 2010-07-27 Alcatel-Lucent Usa Inc. Methods for locating transmitters using backward ray tracing
CA2711858C (en) * 2008-01-11 2015-04-28 Opdi Technologies A/S A touch-sensitive device
WO2011066275A2 (en) * 2009-11-25 2011-06-03 Massachusetts Institute Of Technology Actively addressable aperture light field camera
KR101395255B1 (ko) * 2010-09-09 2014-05-15 한국전자통신연구원 전파 시스템에서 전파 프로파게이션 분석 장치 및 방법
RU125335U1 (ru) * 2012-11-07 2013-02-27 Общество с ограниченной ответственностью "Артек Венчурз" Устройство контроля линейных размеров трехмерных объектов
US9041914B2 (en) * 2013-03-15 2015-05-26 Faro Technologies, Inc. Three-dimensional coordinate scanner and method of operation
KR101716928B1 (ko) * 2013-08-22 2017-03-15 주식회사 만도 차량 카메라의 영상 처리 방법 및 이를 이용하는 영상 처리 장치
US9311565B2 (en) * 2014-06-16 2016-04-12 Sony Corporation 3D scanning with depth cameras using mesh sculpting
US10255664B2 (en) * 2014-07-04 2019-04-09 Sony Corporation Image processing device and method
US9977644B2 (en) * 2014-07-29 2018-05-22 The University Of North Carolina At Chapel Hill Methods, systems, and computer readable media for conducting interactive sound propagation and rendering for a plurality of sound sources in a virtual environment scene
KR101592793B1 (ko) * 2014-12-10 2016-02-12 현대자동차주식회사 영상 왜곡 보정 장치 및 방법
KR20160071774A (ko) * 2014-12-12 2016-06-22 삼성전자주식회사 영상 처리를 위한 영상 처리 장치, 방법 및 기록 매체
CA3214444A1 (en) * 2016-04-12 2017-10-19 Quidient, Llc Quotidian scene reconstruction engine

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
US9558580B2 (en) * 2014-01-10 2017-01-31 Canon Kabushiki Kaisha Image processing apparatus and method therefor
CN106231286A (zh) * 2016-07-11 2016-12-14 北京邮电大学 Three-dimensional image generation method and device
CN109118531A (zh) * 2018-07-26 2019-01-01 深圳大学 Three-dimensional reconstruction method and apparatus for transparent objects, computer device, and storage medium
CN109862209A (zh) * 2019-01-04 2019-06-07 中国矿业大学 Method for restoring downhole images based on ray reverse tracing technology

Non-Patent Citations (1)

Title
ZHANG, LU: "Application and Improvement of Ray Tracing Algorithm in Realistic Virtual Scene", MASTER THESIS, 1 March 2008 (2008-03-01), CN, pages 1 - 78, XP009521913 *

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN114286375A (zh) * 2021-12-16 2022-04-05 北京邮电大学 Method for locating interference in a mobile communication network
CN114286375B (zh) * 2021-12-16 2023-08-18 北京邮电大学 Method for locating interference in a mobile communication network

Also Published As

Publication number Publication date
AU2019395238B2 (en) 2021-11-25
AU2019395238A1 (en) 2020-07-23
RU2742814C1 (ru) 2021-02-11
CA3079552A1 (en) 2020-07-04
RU2742814C9 (ru) 2021-04-20
CN109862209B (zh) 2021-02-26
CA3079552C (en) 2021-03-16
CN109862209A (zh) 2019-06-07

Similar Documents

Publication Publication Date Title
WO2020140397A1 (zh) Method for restoring downhole images based on ray reverse tracing technology
US11671717B2 (en) Camera systems for motion capture
JP6824279B2 (ja) Head-mounted display for virtual reality and mixed reality with inside-out position, user-body, and environment tracking
US9392262B2 (en) System and method for 3D reconstruction using multiple multi-channel cameras
CN111968215B (zh) Volumetric light rendering method and apparatus, electronic device, and storage medium
JP5133626B2 (ja) Surface reflection characteristic measuring device
WO2018027530A1 (zh) Method and device for adjusting the brightness of an infrared light source, and optical motion-capture camera
TW201415863A Techniques for generating robust stereo images
CN101617271A (zh) Enhanced input using flashing electromagnetic radiation
KR102291162B1 (ko) Apparatus and method for generating virtual data for artificial intelligence training
CN106651870A (zh) Segmentation method for out-of-focus blurred regions in multi-view three-dimensional reconstruction
US11657908B2 (en) Methods and systems for handling virtual 3D object surface interaction
US20230245396A1 (en) System and method for three-dimensional scene reconstruction and understanding in extended reality (xr) applications
CN113971682A (zh) Method for generating a real-time variable green screen based on depth information, and matting method
CN115375581A (zh) Method for evaluating the denoising effect on dynamic vision event streams based on spatiotemporal event synchronization
Xu et al. Hybrid mesh-neural representation for 3D transparent object reconstruction
Asano et al. Depth sensing by near-infrared light absorption in water
US20130194254A1 (en) Image processing apparatus, image processing method and program
Einabadi et al. Discrete Light Source Estimation from Light Probes for Photorealistic Rendering.
CN109427089B (zh) Mixed-reality object presentation based on ambient lighting conditions
Rantoson et al. 3D reconstruction of transparent objects exploiting surface fluorescence caused by UV irradiation
JP5441752B2 (ja) Method and apparatus for estimating the 3D pose of a 3D object in an environment
CN117255964A (zh) External illumination with reduced detectability
TW201917347A Depth-sensing camera system
CN111696146B (zh) Face model reconstruction method and system, image processing system, and storage medium

Legal Events

Date Code Title Description
ENP Entry into the national phase: Ref document number 3079552; Country of ref document: CA; Kind code of ref document: A
ENP Entry into the national phase: Ref document number 2019395238; Country of ref document: AU; Date of ref document: 20190618; Kind code of ref document: A
121 Ep: the epo has been informed by wipo that ep was designated in this application: Ref document number 19907615; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase: Ref country code DE
122 Ep: pct application non-entry in european phase: Ref document number 19907615; Country of ref document: EP; Kind code of ref document: A1