WO2019014846A1 - A spatial position recognition method for light field restoration - Google Patents

A spatial position recognition method for light field restoration Download PDF

Info

Publication number
WO2019014846A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
image
image information
identification method
space
Prior art date
Application number
PCT/CN2017/093349
Other languages
English (en)
French (fr)
Inventor
李乔
Original Assignee
辛特科技有限公司
李乔
Priority date
Filing date
Publication date
Application filed by 辛特科技有限公司, 李乔 filed Critical 辛特科技有限公司
Priority to PCT/CN2017/093349 priority Critical patent/WO2019014846A1/zh
Priority to CN201780093190.8A priority patent/CN111183637A/zh
Publication of WO2019014846A1 publication Critical patent/WO2019014846A1/zh

Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B35/00Stereoscopic photography
    • G03B35/02Stereoscopic photography by sequential recording
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators

Definitions

  • The present invention relates to the field of virtual reality light field localization technology, and in particular to a spatial position recognition method for light field restoration.
  • There are many techniques and methods for spatial position recognition. Most are realized by reflection: the basic principle is to emit a beam of sound waves or electromagnetic waves and determine spatial position from the direction and time of the reflected waves. However, such spatial recognition methods cannot identify planar graphics that have no spatial depth differences.
  • Vision-based picture recognition and plane recognition are already very mature.
  • Computer planar visual recognition is usually intelligent recognition based on learning from large numbers of samples. Since current still and video cameras fundamentally lack a method of recording space, research on spatial position recognition has long proceeded via plane recognition plus computer image processing. Traditional computer vision for spatial position is therefore built on training relatively stable models from large amounts of data and samples. Its limitation is equally obvious: if an object appears that has not been learned before, recognition fails.
  • the camera group sends the collected image information to an image processing computer, and the image processing computer processes the collected image information to remove the unfocused portion;
  • the position of the image in the space in the step c) is located by:
  • The camera group having different focal lengths is formed by an array of multiple cameras; multiple camera groups are arrayed into a camera wall, and the multiple cameras within the same camera group have different focal lengths.
  • The multiple camera groups of the camera wall collect image information from different viewing angles, and the multiple cameras within the same camera group collect image information of different spatial depths within the same viewing-angle spatial region.
  • the plurality of cameras in the same camera group are in a tight array, and each camera can acquire complete image information of the same viewing space area.
  • the camera wall is arrayed by a plurality of the camera groups on a planar or spherical base.
  • the fast time-division zoom camera captures pictures by changing the focal length, and acquires image information having different depths.
  • the fast time-sharing zoom camera changes the focal length in a time-sharing cycle.
  • the fast time-division zoom camera changes the focal length by at least 24 times per second.
  • the fast time sharing zoom camera is a single camera or a camera set.
  • The invention provides a spatial position recognition method for light field restoration, which records the lens focal length and the object distance of the image to determine the position of the image in space, effectively solving the difficulty of recognizing spatial depth during spatial recognition while also resolving machine learning's dependence on data and samples.
  • FIG. 1 is a flow block diagram schematically showing the spatial position recognition method of the present invention;
  • FIGS. 2a-2b are schematic views showing a camera wall according to an embodiment of the present invention;
  • FIG. 3 is a schematic view showing the camera wall of the present invention collecting images from different viewing angles;
  • FIG. 4 is a schematic view showing the same camera group of the present invention collecting images of different spatial depths;
  • FIG. 5 is a schematic view showing the fast zoom camera of the present invention capturing images of different spatial depths;
  • FIG. 6 is a diagram showing the positioning of an image in space according to the present invention.
  • a flow block diagram of a spatial position recognition method according to the present invention includes:
  • image information having different depths is acquired by a fast time-division zoom camera and/or a camera group having different focal lengths.
  • In some embodiments, image information having different depths is acquired by a fast time-division zoom camera and/or camera groups having different image distances.
  • The camera group having different focal lengths is formed by an array of multiple cameras, multiple camera groups are arrayed into a camera wall, and the multiple cameras within the same camera group have different focal lengths.
  • In this embodiment, the camera wall is formed by arraying multiple camera groups on the outer side of a convex spherical base.
  • In some embodiments the camera wall may be arrayed on a convex curved base; in other embodiments the camera wall may also be arrayed on a single planar base.
  • FIGS. 2a-2b are schematic views of a camera wall according to an embodiment of the present invention.
  • A camera wall 200 is mounted on the outer side of a convex spherical base.
  • Mounting the camera wall 200 on the outer side 200b of the spherical base enables the camera wall 200 to collect image information of the spatial region in all directions.
  • The camera wall 200 includes an array of camera groups 210; each camera group 210 includes an array of cameras 211. The cameras 211 within the same camera group 210 are closely arranged so that each camera can capture the complete image information of the same viewing-angle spatial region, and the cameras 211 within the same camera group 210 have different focal lengths.
  • the camera 211 realizes data transmission with the image processing computer on the inner side 200a of the spherical base.
  • the specific data transmission may be a wired connection for data transmission or a wireless data transmission.
  • The multiple camera groups 210 of the camera wall 200 are used to collect image information from different viewing angles, and the multiple cameras 211 of the same camera group 210 are used to collect image information of different spatial depths at the same viewing angle.
  • FIG. 3 shows the camera wall of the present invention collecting images from different viewing angles.
  • the camera wall 200 is mounted on the outer side of the spherical base, and different camera groups 210 collect image information of different viewing angles.
  • the adjacent three camera groups respectively collect image information of the space area A, the space area B, and the space area C.
  • the spatial area acquired between adjacent camera sets should have overlapping portions to ensure the integrity of the acquired spatial image information.
  • a plurality of cameras in the same camera group are closely arranged, and each camera can acquire complete image information of the same viewing space area.
  • FIG. 4 shows the same camera group of the present invention collecting images of different spatial depths.
  • the plurality of cameras in the same camera group simultaneously collect image information of different spatial depths.
  • The multiple cameras in the same camera group collect image information of different spatial depths using different focal lengths and the same image distance.
  • In some embodiments, the multiple cameras in the same camera group may instead use different image distances and the same focal length to capture image information from different spatial depths.
  • Taking the spatial region A corresponding to the m-th camera group as an example, the image information of different spatial depths collected in spatial region A includes a first image (puppy) 201, a second image (tree) 202 and a third image (sun) 203; the first image (puppy) 201 is closest to the camera wall, the second image (tree) 202 is next, and the third image (sun) 203 is farthest from the camera wall.
  • The multiple cameras of the m-th camera group array respectively collect image information of different spatial depths. Since the cameras of the same camera group use different focal lengths, image information at any spatial depth is always focused and imaged by some camera.
  • The 1st camera in the m-th camera group focuses the first image (puppy) 201: in the image information it captures, the first image (puppy) 201 is sharply imaged
  • while the second image (tree) 202 and the third image (sun) 203 are blurred.
  • In the image information captured by the 2nd camera, the second image (tree) 202 is sharply imaged while the first image (puppy) 201 and the third image (sun) 203 are blurred;
  • in the image information captured by the n-th camera, the third image (sun) 203 is sharply imaged while the first image (puppy) 201 and the second image (tree) 202 are blurred.
  • Each image in the embodiment itself spans different spatial depths, and the different spatial depths of the same image are captured by multiple cameras respectively.
  • For example, in the first image (puppy) 201 the puppy's eyes are closer to the camera wall and its tail is farther away; cameras with different focal lengths respectively collect the spatial-depth image information of the first image (puppy) 201.
  • After collection by multiple cameras, the complete spatial-depth image information of spatial region A is obtained.
  • FIG. 5 shows the fast zoom camera of the present invention collecting images of different spatial depths.
  • the fast time-division zoom camera of the embodiment captures pictures by changing the focal length, and acquires image information having different depths.
  • In this embodiment a single fast time-division zoom camera is used to take photos.
  • In some embodiments, multiple fast time-division zoom cameras may be arrayed into a camera group to take photos.
  • As shown in FIG. 5, the fast time-division zoom camera 301 photographs an object located in front of the camera by time-division zooming, thereby acquiring image information of the object at different depths.
  • Two depth planes of the object are taken as an example, namely the first depth plane 302 and the second depth plane 303. The object distance from the fast time-division zoom camera 301 to the first depth plane 302 is u1, and the object distance from the fast time-division zoom camera 301 to the second depth plane 303 is u2.
  • The time-division zoom shooting process is as follows:
  • At time t1, the fast time-division zoom camera 301 adjusts its focal length to f1 and takes a photo, collecting the image information of the first depth plane 302 of the object at object distance u1; the image information of the first depth plane 302 is sharp while the image information of the remaining depth planes is blurred.
  • At the next moment t2, the fast time-division zoom camera 301 adjusts its focal length to f2 and takes a photo, collecting the image information of the second depth plane 303 of the object at object distance u2; the image information of the second depth plane 303 is sharp while the image information of the remaining depth planes is blurred.
  • And so on: the fast time-division zoom camera 301 keeps changing its focal length until the image information of all the different depths of the object has been collected.
  • The fast time-division zoom camera 301 changes its focal length cyclically: at time tn it adjusts the focal length to fn and completes the collection of the image information of the object's depth plane at object distance un, and after one cycle is complete it repeats the focal-length changes for cyclic collection.
  • In the above process of collecting image information at different depths, the focal length is changed at least 24 times per second.
  • S102: The processing computer processes the collected image information.
  • the collected image information is sent to the image processing computer, and the image processing computer performs denoising processing on the collected image information to remove the unfocused portion.
  • The fast time-division zoom camera and/or the camera wall sends the collected image information to the image processing computer. Since each camera focuses only on image information at a certain spatial depth, the collected image information has one and only one focus point;
  • the remaining unfocused portions are subjected to denoising, which removes the unfocused portions from the collected image information.
  • The denoising may follow any prior-art method that would occur to those skilled in the art; preferably, denoising is performed by image matting (cutout).
  • S103: Image information verification. The denoised image information is verified to ensure that the collected image information at each depth has one and only one focus point.
  • S104: Locating the position of the image in space. FIG. 6 is a schematic diagram showing the position of the image in space according to the present invention; the image is positioned as follows:
  • x = f·X_L1/(V-f)
  • y = f·Y_L1/(V-f)
  • z = f·V/(f-V)
  • The location of a certain point is taken as an example for description.
  • The coordinates of the collected image information of the point are V_L(X_L1, Y_L1, -V), and the coordinates in space of the point (point V_L) after light field restoration are set as P(x, y, z).
  • All points of the collected image information with different depths are located as described above, completing the positioning of the entire collected image information, and the light field of the image is restored from the positioned coordinates.
  • The invention provides a spatial position recognition method for light field restoration, which records the lens focal length and the object distance of the image to determine the position of the image in space, effectively solving the difficulty of recognizing spatial depth during spatial recognition while also resolving machine learning's dependence on data and samples.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

A spatial position recognition method for light field restoration based on image processing, comprising: a) collecting image information with different depths via a fast time-division zoom camera (301) and/or camera groups (210) with different focal lengths (f) (S101); b) sending the collected image information to an image processing computer, which processes the collected image information to remove the unfocused portions (S102); c) locating the position of the image in space from the camera lens center position, the image coordinates and the lens focal length (f) (S104). By computing with the lens focal length (f) and the object distance (u) of the image, the position of the image in space is determined, effectively solving the problem of longitudinal (depth) spatial recognition from natural light signals while also resolving machine learning's dependence on data and samples.

Description

A spatial position recognition method for light field restoration
Technical Field
The present invention relates to the field of virtual reality light field localization technology, and in particular to a spatial position recognition method for light field restoration.
Background Art
To date, almost all 3D imaging technology has been developed on the basis of the same principle. In 1839, the British scientist Wheatstone noticed a remarkable phenomenon: a person's eyes are about 5 cm apart (the European average), so when viewing any object the two eyes' lines of sight do not coincide, i.e. there are two viewing angles. This subtle difference in viewing angle, conveyed to the brain via the retinas, allows the viewer to distinguish how near or far objects are and produces a strong sense of depth. This is the principle of binocular parallax, and almost all 3D imaging technology to date has been developed from it.
There are many techniques and methods for spatial position recognition. Most are realized by reflection: the basic principle is to emit a beam of sound waves or electromagnetic waves and determine spatial position from the direction and time of the reflected waves. However, such spatial recognition methods cannot identify planar graphics that have no spatial depth differences.
Vision-based picture recognition and plane recognition are already very mature. Computer planar visual recognition is usually intelligent recognition based on learning from large numbers of samples. Since current still and video cameras fundamentally lack a method of recording space, research on spatial position recognition has long proceeded via plane recognition plus computer image processing. Traditional computer vision for spatial position is therefore built on training relatively stable models from large amounts of data and samples. Its limitation is equally obvious: if an object appears that has not been learned before, recognition fails.
Therefore, a spatial position recognition method for light field restoration is needed that can recognize spatial depth and locate the spatial depth position of photographed objects.
Summary of the Invention
The object of the present invention is to provide a spatial position recognition method for light field restoration, the method comprising:
a) collecting image information with different depths via a fast time-division zoom camera and/or camera groups with different focal lengths, or via a fast time-division zoom camera and/or camera groups with different image distances;
b) the camera group sending the collected image information to an image processing computer, the image processing computer processing the collected image information to remove the unfocused portions;
c) locating the position of the image in space from the camera lens center position, the image coordinates and the lens focal length.
Preferably, in step c) the position of the image in space is located as follows:
set the coordinates of the image to be located in space as (x, y, z) and the lens center position as (0, 0, 0);
from the coordinates (X_L1, Y_L1, -V) of the collected image information and the lens focal length f, solve the equations x/X_L1 = y/Y_L1 = z/(-V) and z = Vf/(V-f) to obtain the coordinates of the image in space:
x = f·X_L1/(V-f), y = f·Y_L1/(V-f), z = f·V/(f-V).
Preferably, the camera groups with different focal lengths are each formed by an array of multiple cameras, multiple camera groups are arrayed into a camera wall, and the cameras within the same camera group have different focal lengths.
Preferably, the multiple camera groups of the camera wall collect image information from different viewing angles, and the multiple cameras within the same camera group collect image information of different spatial depths within the same viewing-angle spatial region.
Preferably, the multiple cameras within the same camera group are closely arranged so that each camera can capture the complete image information of the same viewing-angle spatial region.
Preferably, the camera wall is formed by arraying multiple camera groups on a planar or spherical base.
Preferably, the fast time-division zoom camera takes pictures by changing its focal length, collecting image information with different depths.
Preferably, the fast time-division zoom camera changes its focal length cyclically in a time-division manner.
Preferably, the fast time-division zoom camera changes its focal length at least 24 times per second.
Preferably, the fast time-division zoom camera is a single camera or a camera group.
The spatial position recognition method for light field restoration provided by the present invention records the lens focal length and the object distance of the image to determine the position of the image in space, effectively solving the difficulty of recognizing spatial depth during spatial recognition while also resolving machine learning's dependence on data and samples.
It should be understood that the foregoing general description and the following detailed description are both exemplary illustrations and explanations, and should not be taken as limiting the claimed content of the present invention.
Brief Description of the Drawings
With reference to the accompanying drawings, further objects, functions and advantages of the present invention will be clarified by the following description of embodiments of the invention, in which:
FIG. 1 schematically shows a flow block diagram of the spatial position recognition method of the present invention;
FIGS. 2a-2b show schematic views of a camera wall according to an embodiment of the present invention;
FIG. 3 shows a schematic view of the camera wall of the present invention collecting images from different viewing angles;
FIG. 4 shows a schematic view of the same camera group of the present invention collecting images of different spatial depths;
FIG. 5 shows a schematic view of the fast zoom camera of the present invention collecting images of different spatial depths;
FIG. 6 shows a schematic view of locating the position of an image in space according to the present invention.
Detailed Description of the Embodiments
The objects and functions of the present invention, and the methods for achieving them, will be clarified by reference to exemplary embodiments. However, the invention is not limited to the exemplary embodiments disclosed below and may be implemented in different forms. The substance of the description is merely to help those skilled in the relevant art gain a comprehensive understanding of the specific details of the invention.
The content of the present invention is described below with reference to specific embodiments. FIG. 1 shows a flow block diagram of the spatial position recognition method of the present invention. A spatial position recognition method for light field restoration according to the present invention comprises:
S101: Collecting image information with different depths. In this embodiment, image information with different depths is collected by a fast time-division zoom camera and/or camera groups with different focal lengths. In some embodiments, image information with different depths is collected by a fast time-division zoom camera and/or camera groups with different image distances. In this embodiment, a camera group with different focal lengths is formed by an array of multiple cameras, multiple camera groups are arrayed into a camera wall, and the cameras within the same camera group have different focal lengths. In this embodiment, the camera wall is formed by arraying multiple camera groups on the outer side of a convex spherical base. In some embodiments the camera wall may be arrayed on a convex curved base; in other embodiments the camera wall may also be arrayed on a single planar base. FIGS. 2a-2b are schematic views of a camera wall according to an embodiment of the present invention: a camera wall 200 is mounted on the outer side of a convex spherical base, and mounting the camera wall 200 on the outer side 200b of the spherical base enables the camera wall 200 to collect image information of the spatial region in all directions.
The camera wall 200 includes an array of camera groups 210, and each camera group 210 includes an array of cameras 211. The cameras 211 within the same camera group 210 are closely arranged so that each camera can capture the complete image information of the same viewing-angle spatial region, and the cameras 211 within the same camera group 210 have different focal lengths. The cameras 211 transmit data to the image processing computer on the inner side 200a of the spherical base; the data transmission may be over a wired connection or wireless.
The multiple camera groups 210 of the camera wall 200 are used to collect image information from different viewing angles, and the multiple cameras 211 within the same camera group 210 are used to collect image information of different spatial depths at the same viewing angle.
FIG. 3 is a schematic view of the camera wall of the present invention collecting images from different viewing angles: the camera wall 200 is mounted on the outer side of the spherical base, and different camera groups 210 collect image information from different viewing angles. In this embodiment, three adjacent camera groups collect image information of spatial region A, spatial region B and spatial region C respectively. The spatial regions captured by adjacent camera groups should overlap to ensure the completeness of the collected spatial image information.
The cameras within the same camera group are closely arranged so that each camera can capture the complete image information of the same viewing-angle spatial region. FIG. 4 is a schematic view of the same camera group of the present invention collecting images of different spatial depths. The cameras within the same camera group simultaneously collect image information of different spatial depths; in this embodiment, the cameras within the same camera group use different focal lengths with the same image distance. In some embodiments, the cameras within the same camera group may instead use different image distances with the same focal length to collect image information of different spatial depths.
This embodiment takes spatial region A, corresponding to the m-th camera group, as an example. The image information of different spatial depths collected in spatial region A includes a first image (puppy) 201, a second image (tree) 202 and a third image (sun) 203: the first image (puppy) 201 is closest to the camera wall, the second image (tree) 202 is next, and the third image (sun) 203 is farthest from the camera wall. The multiple cameras of the m-th camera group array respectively collect image information of different spatial depths. Since the cameras of the same camera group use different focal lengths, image information at any spatial depth is always focused and imaged by some camera.
By way of illustration, the 1st camera in the m-th camera group focuses the first image (puppy) 201: in the image information it captures, the first image (puppy) 201 is sharply imaged while the second image (tree) 202 and the third image (sun) 203 are blurred. Likewise, in the image information captured by the 2nd camera, the second image (tree) 202 is sharp while the first image (puppy) 201 and the third image (sun) 203 are blurred; in the image information captured by the n-th camera, the third image (sun) 203 is sharp while the first image (puppy) 201 and the second image (tree) 202 are blurred. It should be understood that each image in the embodiment itself spans different spatial depths, and the different spatial depths of the same image are captured by multiple cameras respectively. For example, in the first image (puppy) 201 the puppy's eyes are closer to the camera wall and its tail is farther away, and cameras with different focal lengths respectively collect the spatial-depth image information of the first image (puppy) 201. After collection by the multiple cameras, the complete spatial-depth image information of spatial region A is obtained.
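As an illustrative sketch (not part of the patent; the image distance, focal lengths and object distances below are assumed values), the thin-lens relation 1/u + 1/v = 1/f explains why each scene element is sharp in exactly one camera of the group: with a fixed image distance v, each focal length f focuses a different object distance, and an element is sharpest in the camera whose focus distance is nearest its own distance.

```python
# Hypothetical camera group: fixed image distance v, different focal lengths.
# Thin lens: 1/u + 1/v = 1/f  =>  in-focus object distance u = f*v/(v - f).

def sharpest_camera(u_obj, focal_lengths, v):
    """Index of the camera whose in-focus distance is closest to u_obj."""
    def u_focus(f):
        return f * v / (v - f)
    return min(range(len(focal_lengths)),
               key=lambda i: abs(u_focus(focal_lengths[i]) - u_obj))

v = 0.05                        # image distance in metres (assumed)
fs = [0.0455, 0.0480, 0.0495]   # assumed focal lengths: focus ~0.51 m, 1.2 m, ~4.95 m
# puppy (near), tree (middle), sun (far, effectively at infinity)
objects = {"puppy": 0.5, "tree": 1.2, "sun": 1e6}
assignment = {name: sharpest_camera(u, fs, v) for name, u in objects.items()}
```

Under these assumed numbers, the near puppy falls to the shortest-focal-length camera and the distant sun to the longest, mirroring how every depth is sharp in some camera of the group.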
The process of collecting image information with different depths by a fast time-division zoom camera is described below. FIG. 5 is a schematic view of the fast zoom camera of the present invention collecting images of different spatial depths. According to the present invention, the fast time-division zoom camera of this embodiment takes pictures by changing its focal length, collecting image information with different depths. This embodiment uses a single fast time-division zoom camera to take photos; in some embodiments, multiple fast time-division zoom cameras may be arrayed into a camera group. As shown in FIG. 5, the fast time-division zoom camera 301 photographs an object located in front of it by time-division zooming, thereby collecting image information of the object at different depths. This embodiment takes two depth planes of the object as an example, namely a first depth plane 302 and a second depth plane 303; the object distance from the fast time-division zoom camera 301 to the first depth plane 302 is u1, and the object distance from the camera 301 to the second depth plane 303 is u2. The time-division zoom shooting process is as follows:
At time t1, the fast time-division zoom camera 301 adjusts its focal length to f1 and takes a photo, collecting the image information of the first depth plane 302 of the object at object distance u1; the image information of the first depth plane 302 is sharp while the image information of the remaining depth planes is blurred. At the next moment t2, the fast time-division zoom camera 301 adjusts its focal length to f2 and takes a photo, collecting the image information of the second depth plane 303 of the object at object distance u2; the image information of the second depth plane 303 is sharp while the image information of the remaining depth planes is blurred. And so on: the fast time-division zoom camera 301 keeps changing its focal length until the image information of all the different depths of the object has been collected. According to the present invention, the fast time-division zoom camera 301 changes its focal length cyclically in a time-division manner: at time tn it adjusts the focal length to fn and completes the collection of the image information of the object's depth plane at object distance un, and after one cycle is complete it repeats the focal-length changes for cyclic collection.
In the above process of collecting image information at different depths, the fast time-division zoom camera 301 changes its focal length at least 24 times per second.
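The cyclic time-division schedule above can be sketched as follows (the rate and focal-length values are assumed for illustration; the text only requires at least 24 focal-length changes per second):

```python
# Sketch of a cyclic time-division zoom schedule (assumed parameters):
# focal length number floor(t * rate) mod n is active at time t.

def focal_at(t, focal_lengths, rate=24.0):
    """Focal length active at time t (seconds) in a cyclic time-division sweep."""
    k = int(t * rate) % len(focal_lengths)
    return focal_lengths[k]

fs = [0.040, 0.045, 0.050]   # hypothetical focal lengths f1, f2, f3 (metres)
# Sampling at a few instants shows the sweep advancing and then wrapping around.
schedule = [focal_at(t, fs) for t in (0.00, 0.05, 0.09, 0.13)]
```

With three focal lengths at 24 changes per second, one full depth sweep completes every 1/8 of a second before the cycle repeats.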
S102: The processing computer processes the collected image information. The collected image information is sent to the image processing computer, which denoises it to remove the unfocused portions. The fast time-division zoom camera and/or the camera wall sends the collected image information to the image processing computer. Since each camera focuses only on image information at one spatial depth, the collected image information has one and only one focus point; the remaining unfocused portions are removed by denoising. The denoising may follow any prior-art method that would occur to those skilled in the art; preferably, denoising is performed by image matting (cutout).
S103: Image information verification. The denoised image information is verified to ensure that the collected image information at each depth has one and only one focus point.
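The patent does not specify the separation algorithm beyond matting; as a hedged illustration of one standard way to tell focused regions from unfocused ones, a local sharpness measure such as the Laplacian response can be thresholded (the function name, kernel and threshold below are assumptions, not the patent's method):

```python
# Illustrative focus mask: the discrete Laplacian responds strongly at sharp
# edges (in-focus content) and weakly in flat or defocused-looking areas.
import numpy as np

def focus_mask(img, threshold):
    """Boolean mask of pixels whose local Laplacian magnitude exceeds threshold."""
    lap = np.zeros_like(img, dtype=float)
    lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] -
                       4.0 * img[1:-1, 1:-1])
    return np.abs(lap) > threshold

# A sharp step edge is flagged as focused; uniform regions are not.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mask = focus_mask(img, threshold=0.5)
```

In a real pipeline the mask would be used to keep only the in-focus pixels of each capture before the verification step.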
S104: Locating the position of the image in space, using the camera lens center position, the image coordinates and the lens focal length. FIG. 6 is a schematic view of locating the position of an image in space according to the present invention. According to the present invention, the position of the image in space is located as follows:
Set the coordinates of the image to be located in space as (x, y, z) and the lens center position as (0, 0, 0).
From the coordinates (X_L1, Y_L1, -V) of the collected image information and the lens focal length f, solve the equations x/X_L1 = y/Y_L1 = z/(-V) and z = Vf/(V-f) to obtain the coordinates of the image in space:
x = f·X_L1/(V-f), y = f·Y_L1/(V-f), z = f·V/(f-V). This embodiment takes the location of a single point as an example. As shown in FIG. 6, the coordinates of the collected image information of the point are V_L(X_L1, Y_L1, -V), and the coordinates in space of the point (point V_L) after light field restoration are set as P(x, y, z). To locate point P: point P in space, the lens center of the camera, and point V_L in the image information lie on the same spatial straight line and satisfy the relation 1/u + 1/v = 1/f, where v is the perpendicular distance from point V_L to the lens and u is the distance between the plane containing point P and the plane containing the lens.
With the lens plane at z = 0, from the coordinates V_L(X_L1, Y_L1, -V) of the collected point (point V_L) and the lens focal length f, solve the equations x/X_L1 = y/Y_L1 = z/(-V) and z = Vf/(V-f) to obtain the coordinates of point P: x = f·X_L1/(V-f), y = f·Y_L1/(V-f), z = f·V/(f-V).
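The point-location formulas can be written out and sanity-checked numerically (the numeric values below are assumed for illustration; symbols follow the text: lens center at the origin, collected image point V_L = (X_L1, Y_L1, -V), lens focal length f):

```python
# Sketch of the positioning formulas as stated in the text.

def restore_point(x_l1, y_l1, v_img, f):
    """Spatial point P(x, y, z) whose image is at (x_l1, y_l1, -v_img)."""
    x = f * x_l1 / (v_img - f)
    y = f * y_l1 / (v_img - f)
    z = f * v_img / (f - v_img)
    return x, y, z

# Assumed example: image point (2, 3, -0.06) with focal length 0.05.
x, y, z = restore_point(2.0, 3.0, 0.06, 0.05)

# Consistency checks from the derivation: P lies on the line through the lens
# center and V_L (x/X_L1 = y/Y_L1 = z/(-V)), and the plane distances satisfy
# the thin-lens relation 1/u + 1/v = 1/f with u = |z| and v = V.
on_line = (abs(x / 2.0 - z / (-0.06)) < 1e-9 and
           abs(y / 3.0 - z / (-0.06)) < 1e-9)
lens_ok = abs(1.0 / abs(z) + 1.0 / 0.06 - 1.0 / 0.05) < 1e-6
```

Repeating this computation for every collected point, as the next paragraph describes, yields the positioned coordinates from which the light field is restored.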
All the points of the collected image information with different depths are located as described above, completing the positioning of the entire collected image information, and the light field of the image is restored from the positioned coordinates.
The spatial position recognition method for light field restoration provided by the present invention records the lens focal length and the object distance of the image to determine the position of the image in space, effectively solving the difficulty of recognizing spatial depth during spatial recognition while also resolving machine learning's dependence on data and samples.
Other embodiments of the invention will be readily apparent to and understood by those skilled in the art from the description and practice of the invention disclosed herein. The description and embodiments are to be regarded as exemplary only, and the true scope and spirit of the invention are defined by the claims.

Claims (10)

  1. A spatial position recognition method for light field restoration, characterized in that the method comprises:
    a) collecting image information with different depths via a fast time-division zoom camera and/or camera groups with different focal lengths, or via a fast time-division zoom camera and/or camera groups with different image distances;
    b) the camera group sending the collected image information to an image processing computer, the image processing computer processing the collected image information to remove the unfocused portions;
    c) locating the position of the image in space from the camera lens center position, the image coordinates and the lens focal length.
  2. The recognition method according to claim 1, characterized in that in step c) the position of the image in space is located as follows:
    setting the coordinates of the image to be located in space as (x, y, z) and the lens center position as (0, 0, 0);
    solving, from the coordinates (X_L1, Y_L1, -V) of the collected image information and the lens focal length f, the equations x/X_L1 = y/Y_L1 = z/(-V) and z = Vf/(V-f) to obtain the coordinates of the image in space:
    x = f·X_L1/(V-f), y = f·Y_L1/(V-f), z = f·V/(f-V).
  3. The recognition method according to claim 1, characterized in that the camera groups with different focal lengths are each formed by an array of multiple cameras, multiple camera groups are arrayed into a camera wall, and the cameras within the same camera group have different focal lengths.
  4. The recognition method according to claim 3, characterized in that the multiple camera groups of the camera wall collect image information from different viewing angles, and the multiple cameras within the same camera group collect image information of different spatial depths at the same viewing angle.
  5. The recognition method according to claim 3, characterized in that the cameras within the same camera group are closely arranged so that each camera can capture the complete image information of the same viewing-angle spatial region.
  6. The recognition method according to claim 3 or 4, characterized in that the camera wall is formed by arraying multiple camera groups on a planar or spherical base.
  7. The recognition method according to claim 1, characterized in that the fast time-division zoom camera takes pictures by changing its focal length, collecting image information with different depths.
  8. The recognition method according to claim 7, characterized in that the fast time-division zoom camera changes its focal length cyclically in a time-division manner.
  9. The recognition method according to claim 7 or 8, characterized in that the fast time-division zoom camera changes its focal length at least 24 times per second.
  10. The recognition method according to claim 1 or 7, characterized in that the fast time-division zoom camera is a single camera or a camera group.
PCT/CN2017/093349 2017-07-18 2017-07-18 A spatial position recognition method for light field restoration WO2019014846A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/093349 WO2019014846A1 (zh) 2017-07-18 2017-07-18 A spatial position recognition method for light field restoration
CN201780093190.8A CN111183637A (zh) 2017-07-18 2017-07-18 A spatial position recognition method for light field restoration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/093349 WO2019014846A1 (zh) 2017-07-18 2017-07-18 A spatial position recognition method for light field restoration

Publications (1)

Publication Number Publication Date
WO2019014846A1 true WO2019014846A1 (zh) 2019-01-24

Family

ID=65014942

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/093349 WO2019014846A1 (zh) 2017-07-18 2017-07-18 A spatial position recognition method for light field restoration

Country Status (2)

Country Link
CN (1) CN111183637A (zh)
WO (1) WO2019014846A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130011044A1 (en) * 2011-07-07 2013-01-10 Era Optoelectronics Inc. Object contour detection device and method
CN103606181A (zh) * 2013-10-16 2014-02-26 北京航空航天大学 一种显微三维重构方法
CN105025219A (zh) * 2014-04-30 2015-11-04 齐发光电股份有限公司 图像获取方法
CN105827922A (zh) * 2016-05-25 2016-08-03 京东方科技集团股份有限公司 一种摄像装置及其拍摄方法
CN106162149A (zh) * 2016-09-29 2016-11-23 宇龙计算机通信科技(深圳)有限公司 一种拍摄3d照片的方法及移动终端
CN106657968A (zh) * 2015-11-04 2017-05-10 澧达科技股份有限公司 三维特征信息感测系统及感测方法

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104780315A (zh) * 2015-04-08 2015-07-15 广东欧珀移动通信有限公司 摄像装置拍摄的方法和系统


Also Published As

Publication number Publication date
CN111183637A (zh) 2020-05-19


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17918435

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17918435

Country of ref document: EP

Kind code of ref document: A1