WO2022199260A1 - Stereoscopic display method and apparatus for static objects, medium, and electronic device - Google Patents

Stereoscopic display method and apparatus for static objects, medium, and electronic device

Info

Publication number
WO2022199260A1
WO2022199260A1 PCT/CN2022/075458 CN2022075458W
Authority
WO
WIPO (PCT)
Prior art keywords
dual
image sequence
human eye
stereoscopic display
viewpoint image
Prior art date
Application number
PCT/CN2022/075458
Other languages
English (en)
French (fr)
Inventor
廖鑫
杨民
董旭升
Original Assignee
纵深视觉科技(南京)有限责任公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 纵深视觉科技(南京)有限责任公司
Publication of WO2022199260A1

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/302Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/366Image reproducers using viewer tracking
    • H04N13/383Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/398Synchronisation thereof; Control thereof

Definitions

  • The embodiments of the present application relate to the field of naked-eye 3D technologies, for example, to a stereoscopic display method, apparatus, medium, and electronic device for static objects.
  • Embodiments of the present application provide a stereoscopic display method, apparatus, medium, and electronic device for a static object.
  • An embodiment of the present application provides a stereoscopic display method for a static object, the method comprising: acquiring a dual-viewpoint image sequence of a target static object captured by a camera; performing eye tracking on the user through a human eye tracking module to obtain human eye position data; and, according to the dual-viewpoint image sequence and the human eye position data, automatically selecting from the dual-viewpoint image sequence and performing 3D rendering of the corresponding viewing-angle images for stereoscopic display.
  • an embodiment of the present application provides a stereoscopic display device for a static object, the device comprising:
  • the dual-viewpoint image sequence acquisition unit is configured to acquire the dual-viewpoint image sequence of the target static object captured by the camera;
  • the human eye position data determination unit is configured to perform eye tracking on the user through the human eye tracking module to obtain the human eye position data;
  • the stereoscopic display unit is configured to automatically select the dual-viewpoint image sequence according to the dual-viewpoint image sequence and the human eye position data, and perform 3D rendering of the corresponding viewing angle images for stereoscopic display.
  • an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, implements the stereoscopic display method for a static object according to the embodiment of the present application.
  • An embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the computer program, implements the stereoscopic display method for static objects described in the embodiments of the present application.
  • FIG. 1 is a flowchart of a stereoscopic display method for a static object provided by an embodiment of the present application
  • FIG. 2 is a schematic diagram of a dual-viewpoint image sequence acquisition scene of a target static object provided by an embodiment of the present application
  • FIG. 3 is a schematic diagram of a stereoscopic display process of a target static object provided by an embodiment of the present application
  • FIG. 4 is a schematic structural diagram of a stereoscopic display device for static objects provided by an embodiment of the present application
  • FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 1 is a flowchart of a method for stereoscopic display of static objects provided by an embodiment of the present application.
  • This embodiment can perform stereoscopic display of static objects, and the method can be executed by the stereoscopic display apparatus for static objects provided by the embodiments of the present application; the apparatus can be implemented in software and/or hardware and can be integrated into an electronic device.
  • In addition, the electronic device may be a remote control device or a robot.
  • the stereoscopic display method of the static object includes:
  • the target static object may be a person or an object, for example, a water cup, food or other objects.
  • The dual-viewpoint camera can be arranged separately from the naked-eye 3D display device, or connected to it. If connected, the 3D images of the static object can be displayed in real time.
  • The dual-viewpoint image sequence may be obtained through the dual-viewpoint camera by shooting at predetermined positions and labeling the images according to pre-designed position numbers.
  • For example, the image captured directly in front is numbered 0001, the image captured 5 degrees to the front-left is numbered 0002, the image captured 5 degrees to the front-right is numbered 0003, and so on.
  • In this way, an image sequence of the static object can be obtained, that is, the dual-viewpoint image sequence.
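As a small illustration of the numbering scheme above, the following Python sketch maps shooting angles to four-digit image numbers. The alternating left/right ordering beyond the first three numbers is an assumption for illustration; the source only gives 0001–0003.

```python
def viewpoint_id(angle_deg, step_deg=5):
    """Map a shooting angle to a four-digit image number.

    Assumed convention (only 0001-0003 are given in the text):
    0 degrees (directly in front) -> "0001",
    -5 degrees (front-left)       -> "0002",
    +5 degrees (front-right)      -> "0003",
    alternating left/right as the angle grows.
    """
    if angle_deg % step_deg != 0:
        raise ValueError("angle must be a multiple of the step")
    steps = abs(angle_deg) // step_deg
    if steps == 0:
        n = 1
    elif angle_deg < 0:          # front-left angles take even numbers
        n = 2 * steps
    else:                        # front-right angles take odd numbers
        n = 2 * steps + 1
    return f"{n:04d}"
```

With this convention, `viewpoint_id(0)` gives `"0001"` and `viewpoint_id(-5)` gives `"0002"`, matching the examples in the text.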
  • Exemplarily, a dual-viewpoint image sequence of the target static object may be acquired in advance through a camera.
  • Since the target static object needs to be displayed stereoscopically, the dual-viewpoint image sequence of the target item can be stored in the naked-eye 3D display device.
  • the process of determining the target static object to be displayed in this solution may be a process in which a user selects a target item through a naked-eye 3D display device.
  • For example, the naked-eye 3D display device simultaneously stores images of a water cup, a porcelain vase, and a fan; the user can then select the image of the item he or she wants to view as needed, for example by clicking to confirm on the screen of the naked-eye 3D display device, after which that item is taken as the target item for naked-eye 3D display.
  • Alternatively, on one naked-eye 3D display device, each time period can be set to display a certain item, and when another time period is reached, the target item can be switched according to a predetermined display sequence.
  • The dual-viewpoint image sequence of the target static object may be pre-stored in the naked-eye 3D display device, or stored in a database or server connected to the naked-eye 3D display device. After the target static object is determined, the corresponding dual-viewpoint image sequence can be retrieved according to the ID (identity) of the item.
  • The dual-viewpoint image sequence may be an image sequence composed of multiple images, captured by a dual-viewpoint camera device, that switch from one angle to another. For example, on a horizontal plane, if one dual-viewpoint image is captured every 1 degree within 360 degrees, the dual-viewpoint image sequence contains a total of 360 dual-viewpoint images captured from different angles.
  • the dual-viewpoint image sequence of the target static object is obtained based on the dual-viewpoint camera taking ring shots of the target static object on a turntable according to a preset sampling frequency.
  • FIG. 2 is a schematic diagram of a dual-viewpoint image sequence acquisition scene of a target static object provided by an embodiment of the present application.
  • As shown in FIG. 2, the item may be a kettle placed at the center of a rotating tray, with a binocular camera arranged on one side outside the tray.
  • The binocular camera can sample according to certain rules. For example, while the tray rotates at a constant speed, the sampling frequency of the binocular camera can be set, say, to 20 images per second; if the tray rotates 2 degrees per second, then 10 images will be collected evenly within every 1-degree angle.
  • With such an arrangement, when the naked-eye 3D display device performs display, this solution can select images of different viewing angles from the uniform dual-viewpoint image sequence for switching, so that, from the user's point of view, target static object images with uniformly changing angles are obtained, improving the 3D display effect.
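The sampling arithmetic above (frames per second divided by turntable speed) can be sketched as follows; the function names are illustrative, not part of the described system:

```python
def images_per_degree(fps, deg_per_second):
    """Sampling density of the turntable ring shot.

    With the example figures from the text (20 frames per second,
    turntable speed of 2 degrees per second) this gives 10 images
    evenly spread over every 1-degree arc.
    """
    return fps / deg_per_second

def total_images(fps, deg_per_second, arc_deg=360):
    """Number of dual-viewpoint images captured over a full ring shot."""
    return int(images_per_degree(fps, deg_per_second) * arc_deg)
```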
  • The human eye tracking module may be arranged above the naked-eye 3D display device, or may be a module carried by the naked-eye 3D display device itself; the module has the functions of image acquisition and human eye tracking calculation. For example, after acquiring the image in front of the current naked-eye 3D display device, it determines the position of the user's eyes and performs real-time tracking to obtain the human eye position data. It can be understood that this solution is currently aimed at the situation where a single user is watching the stereoscopic display image of a naked-eye 3D display device.
  • The human eye position data can be converted into distance data and direction data relative to the center of the screen of the naked-eye 3D display device, which is helpful for subsequent calculations.
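A minimal sketch of converting tracked eye coordinates into the distance and direction data mentioned above, assuming a screen-centered coordinate convention that the source does not specify:

```python
import math

def eye_position_to_view_angle(eye_x, eye_z):
    """Convert tracked eye coordinates to distance and viewing angle.

    Assumed convention (for illustration only): the origin is the center
    of the naked-eye 3D display screen, x grows to the viewer's right,
    and z is the distance from the screen plane.  Negative angles mean
    the viewer is to the left of the screen's perpendicular, matching
    the -45 degree example in the text.
    """
    distance = math.hypot(eye_x, eye_z)
    angle = math.degrees(math.atan2(eye_x, eye_z))
    return distance, angle
```

For example, an eye position one unit to the left and one unit in front of the screen yields a viewing angle of -45 degrees.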
  • The images that the user should see at various angles can be determined according to the dual-viewpoint image sequence, and the user's actual angle can then be determined according to the human eye position data, so as to determine which image to display and perform 3D rendering.
  • the stereoscopic display of the target static object can be performed in the following three ways:
  • In the first way, automatically selecting from the dual-viewpoint image sequence according to the dual-viewpoint image sequence and the human eye position data, and performing 3D rendering of the corresponding viewing-angle images for stereoscopic display, includes: acquiring an initial viewing angle of the dual-viewpoint image sequence of the target static object and an initial-viewing-angle image associated with the initial viewing angle; displaying the initial-viewing-angle image, and determining the user's viewing angle according to the human eye position data; determining a target image from the dual-viewpoint image sequence according to the included angle between the viewing angle and the initial viewing angle; and performing 3D rendering on the target image for stereoscopic display.
  • The initial viewing angle may be a front-facing viewing angle determined based on the characteristics of the target static object, such as the viewing angle corresponding to the front of a car model; the image associated with the initial viewing angle is then the dual-viewpoint image captured from directly in front of the car model.
  • This front direction may correspond to the direction directly in front of the screen of the naked-eye 3D display device. After the initial-viewing-angle image is obtained, it may be displayed in the naked-eye 3D display device first, and the human eye position data may then be analyzed for angle.
  • For example, the user's viewing angle may be determined according to the human eye position data. For example, if it is determined from the human eye position data that the user's eyes are on the left side of the perpendicular to the screen of the naked-eye 3D display device, with an included angle of 45 degrees, the user's viewing angle is -45 degrees.
  • A target image may then be determined from the dual-viewpoint image sequence according to the included angle between the viewing angle and the initial viewing angle. That is, what the user should actually see at this moment is the dual-viewpoint image captured 45 degrees to the front-left of the vehicle, namely the target image. The target image is then rendered in 3D for stereoscopic display.
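The angle-to-image selection in this first way might be sketched as follows, assuming the sequence is ordered by shooting angle with one image per degree (the 360-image example given earlier); the list-based representation is an assumption for illustration:

```python
def select_target_image(sequence, view_angle_deg, step_deg=1.0):
    """Pick the dual-viewpoint image matching the viewer's angle.

    `sequence` is assumed to be ordered by shooting angle, with index 0
    at the initial (front) view and one image per `step_deg` degrees
    over a full circle; negative angles (front-left) wrap around.
    """
    index = round(view_angle_deg / step_deg) % len(sequence)
    return sequence[index]
```

With a 360-entry sequence, a viewing angle of -45 degrees selects the image shot 45 degrees to the front-left (index 315 under this wrap-around convention).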
  • In the second way, automatically selecting from the dual-viewpoint image sequence according to the dual-viewpoint image sequence and the human eye position data, and performing 3D rendering of the corresponding viewing-angle images for stereoscopic display, includes: constructing a mapping relationship between preset viewing angles and the dual-viewpoint image sequence of the target static object; determining the user's viewing angle according to the human eye position data; determining, according to the mapping relationship between the dual-viewpoint image sequence and the preset viewing angles, the target image corresponding to the viewing angle from the dual-viewpoint image sequence; and performing 3D rendering on the target image for stereoscopic display.
  • After the dual-viewpoint image sequence is obtained, a mapping relationship between viewing angles and the dual-viewpoint image sequence of the target static object may be constructed, where a viewing angle corresponds to a shooting angle; for example, a viewing angle of +30 degrees maps to the dual-viewpoint image shot from 30 degrees to the front-right of the static object.
  • In this way, after the human eye position data is obtained, once the user's viewing angle is determined, the target image to be displayed in the dual-viewpoint image sequence can be determined and rendered in 3D, completing the stereoscopic display of the item.
  • The advantage of this arrangement is that, according to the predetermined mapping relationship, the target image can be determined quickly after the user's viewing angle is calculated, and the item can be displayed stereoscopically.
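The precomputed mapping of this second way could look like the following minimal sketch; the dictionary-based representation and function names are assumptions for illustration:

```python
def build_view_mapping(sequence, step_deg=1):
    """Precompute the viewing-angle -> image mapping described above.

    Viewing angles are assumed to coincide with shooting angles, e.g.
    +30 maps to the image shot 30 degrees to the front-right of the
    static object.
    """
    return {i * step_deg: img for i, img in enumerate(sequence)}

def lookup_target_image(mapping, view_angle_deg, step_deg=1):
    """Fast lookup once the viewer's angle is known (wraps negatives)."""
    key = (round(view_angle_deg / step_deg) * step_deg) % (len(mapping) * step_deg)
    return mapping[key]
```

The design trade-off mirrors the text: the first way computes the target image directly from the included angle, while this way spends memory on a precomputed mapping to make each lookup a constant-time dictionary access.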
  • this solution also provides an effect of 3D rendering through multiple views.
  • here is the third option:
  • In the third way, automatically selecting from the dual-viewpoint image sequence according to the dual-viewpoint image sequence and the human eye position data, and performing 3D rendering of the corresponding viewing-angle images for stereoscopic display, includes: generating, according to the dual-viewpoint image sequence, at least one multi-view corresponding to a current viewing-angle range; and, in response to the human eye position data corresponding to a target viewing-angle range, performing 3D rendering on the multi-view corresponding to the target viewing-angle range for stereoscopic display.
  • The multi-view may be an image composed of at least two images; after the multiple images are compressed and spliced, one multi-view is obtained. The multi-view may correspond to the naked-eye 3D display device. For example, a grating is added to the display screen of the naked-eye 3D display device; through the directional arrangement of the grating, the user sees different parts of the image at different angles in front of the screen. For example, directly in front, the image seen is exactly the middle image before splicing; when moving to the left, the user sees in turn the first, second, and third images to the left of the middle one; when moving to the right, the user sees in turn the first, second, and third images to the right of the middle one; and so on. With such an arrangement, in cooperation with the grating, the user sees different images at different angles, thereby achieving stereoscopic display of the target static object.
  • In this solution, for example, the multi-view is obtained by splicing at least two adjacent images in the dual-viewpoint image sequence.
  • For example, the target viewing-angle range can be determined according to the human eye position data, and at least two adjacent images corresponding to the target viewing-angle range can be spliced to obtain a multi-view, which can then be displayed. In this solution, the splicing operation can be made to correspond to the target viewing-angle range in advance. For example, it is pre-determined that the dual-viewpoint images within every 3 degrees are spliced, obtaining multiple multi-views; the viewing angle of the human eye can then be determined according to the human eye position to determine the target viewing-angle range, and the corresponding multi-view is retrieved and displayed on the naked-eye 3D display device.
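The splicing of adjacent images into per-range multi-views might be sketched as below. Real splicing would compress and interleave pixel columns for the grating; that step is simplified here to grouping, and one image per degree is assumed:

```python
def build_multiviews(sequence, views_per_range=3):
    """Splice adjacent images into multi-views, one per angle range.

    Assumes one image per degree; with the 3-degree ranges from the
    text, images 0-2 form the first multi-view, images 3-5 the second,
    and so on.  Splicing is modeled as simple grouping here; a real
    renderer would compress and interleave the pixels for the grating.
    """
    return [tuple(sequence[i:i + views_per_range])
            for i in range(0, len(sequence), views_per_range)]

def multiview_for_angle(multiviews, view_angle_deg, range_deg=3):
    """Retrieve the multi-view covering the viewer's angle range."""
    index = int(view_angle_deg % (len(multiviews) * range_deg)) // range_deg
    return multiviews[index]
```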
  • In the technical solution provided by the embodiments of the present application, a dual-viewpoint image sequence of a target static object is obtained through a dual-viewpoint camera; the user's eyes are tracked by a human eye tracking module to obtain human eye position data; and, according to the dual-viewpoint image sequence and the human eye position data, images are automatically selected from the dual-viewpoint image sequence and 3D rendering of the automatically selected corresponding viewing-angle images is performed for stereoscopic display.
  • By executing this technical solution, after the dual-viewpoint image sequence of the target static object is obtained, and combined with the positioning result of eye tracking, the images can be displayed by the naked-eye 3D device according to the position of the human eye, so as to obtain a realistic naked-eye 3D display effect.
  • In one embodiment, after performing 3D rendering on the target image for stereoscopic display, the method further includes: in response to detecting a viewing-angle switching event, performing eye tracking on the user through the eye tracking module to obtain human eye position update data; determining the user's updated viewing angle according to the human eye position update data; and determining an updated target image from the dual-viewpoint image sequence according to the updated viewing angle, and performing 3D rendering on the updated target image for dynamic display.
  • It can be understood that the update process can be similar to the display process shown above. With such an arrangement, the image of the target static object can be displayed dynamically, thereby achieving the effect of stereoscopic display of images adapted to the position of the human eye.
  • FIG. 3 is a schematic diagram of a stereoscopic display process of a target static object provided by an embodiment of the present application.
  • As shown in FIG. 3, a naked-eye 3D display is used for hardware, and a matching 3D renderer program is used for software. A stereo camera, such as a binocular camera, performs turntable shooting to generate a sequence of dual-viewpoint view files (covering multiple angles around the static object). After the image sequence is obtained, it can be stored in the storage device of the host.
  • During the process, the binocular camera can be controlled by the shooting control software of the host, and the turntable can be controlled at the same time, for example, its rotation speed.
  • For example, when ring-shooting a static object on the turntable, the more stereoscopic pictures are taken in one circle, i.e., the larger the number of samples and the more continuous the sampling, the more coherent the viewing effect and the smoother the angle switching.
  • the image sequence can be retrieved from the storage device on the host side for processing by the item presentation software.
  • During processing, the position information of the human eye can be acquired through the eye tracking device of the 3D display device, that is, the eye tracking module. The eye tracking module may be, for example: a single-camera or dual-camera eye tracking module that uses image technology to determine the position of the human eye, an eye tracking module that determines the position of the human eye using a distance sensor, or an eye tracking module that utilizes gaze-point tracking technology. The rendered image is calculated based on this information and sent to the naked-eye 3D display for display.
  • During playback, according to the human eye position obtained by the eye tracking system, the dual-viewpoint pictures of the corresponding angle are taken for naked-eye 3D rendering. The effect is as if the static object were in real space: viewers at different angles can see different sides of the static object, with a naked-eye 3D stereoscopic realistic effect.
  • With such an arrangement, a sufficient 3D picture display sequence can be obtained through turntable ring shooting, so as to obtain 3D multi-views. During playback, eye tracking technology is combined with naked-eye 3D display technology: when the human eye is tracked to different positions, the dual-viewpoint views of the corresponding angle are called for 3D rendering, simulating the realistic stereoscopic display effect of displaying a physical object. The stereoscopic pictures of the ring shot are combined into multi-views for multi-viewpoint naked-eye 3D stereoscopic display.
  • FIG. 4 is a schematic structural diagram of a stereoscopic display device for static objects provided by an embodiment of the present application. As shown in FIG. 4 , the device may include:
  • the dual-viewpoint image sequence acquisition unit 410 is configured to acquire the dual-viewpoint image sequence of the target static object captured by the camera;
  • the human eye position data determination unit 420 is configured to perform eye tracking on the user through the human eye tracking module to obtain the human eye position data;
  • the stereoscopic display unit 430 is configured to automatically select the dual-viewpoint image sequence according to the dual-viewpoint image sequence and the human eye position data, and perform 3D rendering of the corresponding viewing angle images for stereoscopic display.
  • the stereoscopic display device for static objects provided by the embodiments of the present application can execute the stereoscopic display method for static objects provided in the embodiments of the present application, and has functional modules and beneficial effects corresponding to the stereoscopic display method for static objects.
  • Embodiments of the present application also provide a storage medium containing computer-executable instructions, the computer-executable instructions, when executed by a computer processor, being used for executing a stereoscopic display method for static objects, the method comprising: acquiring a dual-viewpoint image sequence of a target static object captured by a camera; performing eye tracking on the user through a human eye tracking module to obtain human eye position data; and, according to the dual-viewpoint image sequence and the human eye position data, automatically selecting from the dual-viewpoint image sequence and performing 3D rendering of the corresponding viewing-angle images for stereoscopic display.
  • a storage medium refers to any of various types of memory electronics or storage electronics.
  • the term "storage medium” is intended to include: installation media, such as CD-ROMs, floppy disks, or tape devices; computer system memory or random access memory, such as DRAM (Dynamic Random Access Memory), DDR RAM (Double Data Rate RAM, double rate random access memory), SRAM (Static Random Access Memory, static random access memory), EDO RAM (Extended Data Output RAM, extended data output memory), Rambus (Rambus) RAM, etc.; Non-volatile memory, such as flash memory, magnetic media (eg hard disk or optical storage); registers or other similar types of memory elements, etc.
  • the storage medium may also include other types of memory or combinations thereof.
  • the storage medium may be located in the computer system in which the program is executed, or may be located in a different second computer system connected to the computer system through a network such as the Internet.
  • the second computer system may provide program instructions to the computer for execution.
  • the term "storage medium" may include two or more storage media residing in different locations (eg, in different computer systems connected by a network).
  • the storage medium may store program instructions (eg, embodied as a computer program) executable by one or more processors.
  • In the storage medium containing computer-executable instructions provided by the embodiments of the present application, the computer-executable instructions are not limited to the operations of the stereoscopic display method for static objects described above, and can also execute related operations in the stereoscopic display method for static objects provided by any embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present application. As shown in FIG. 5, this embodiment provides an electronic device 500, which includes: one or more processors 520; and a storage device 510 for storing one or more programs which, when executed by the one or more processors 520, cause the one or more processors 520 to implement the stereoscopic display method for static objects provided by the embodiments of the present application, the method comprising: acquiring a dual-viewpoint image sequence of a target static object captured by a camera; performing eye tracking on the user through a human eye tracking module to obtain human eye position data; and, according to the dual-viewpoint image sequence and the human eye position data, automatically selecting from the dual-viewpoint image sequence and performing 3D rendering of the corresponding viewing-angle images for stereoscopic display.
  • the electronic device 500 shown in FIG. 5 is only an example, and should not impose any limitations on the functions and scope of use of the embodiments of the present application.
  • the electronic device 500 includes a processor 520 , a storage device 510 , an input device 530 and an output device 540 ; the number of processors 520 in the electronic device may be one or more, and one processor 520 is used in FIG. 5 .
  • the processor 520 , the storage device 510 , the input device 530 and the output device 540 in the electronic device may be connected by a bus or other means, and the connection by the bus 550 is taken as an example in FIG. 5 .
  • the storage device 510 can be used to store software programs, computer-executable programs, and module units, such as program instructions corresponding to the stereoscopic display method for static objects in the embodiments of the present application.
  • the storage device 510 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Additionally, storage device 510 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some instances, storage device 510 may further include memory located remotely from processor 520, the remote memory may be connected through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • the input device 530 may be used to receive input numbers, character information or voice information, and generate key signal input related to user settings and function control of the electronic device.
  • the output device 540 may include electronic devices such as a display screen, a speaker, and the like.
  • With the above arrangement, the naked-eye 3D device can display images according to the position of the human eye, so as to obtain a realistic naked-eye 3D display effect.
  • The stereoscopic display apparatus, medium, and electronic device for static objects provided in the above embodiments can execute the stereoscopic display method for static objects provided by any embodiment of the present application, and have corresponding functional modules and beneficial effects for executing the method.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A stereoscopic display method and apparatus for static objects, a medium, and an electronic device are provided. The method includes: acquiring a dual-viewpoint image sequence of a target static object captured by a camera (S110); performing eye tracking on a user through a human eye tracking module to obtain human eye position data (S120); and, according to the dual-viewpoint image sequence and the human eye position data, performing 3D rendering of the automatically selected dual-viewpoint images of the corresponding viewing angle for stereoscopic display (S130). The method displays images through a naked-eye 3D device according to the position of the human eye, so as to obtain a realistic naked-eye 3D display effect.

Description

Stereoscopic display method and apparatus for static objects, medium, and electronic device
This application claims priority to Chinese patent application No. 202110313668.1, filed with the China Patent Office on March 24, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of the present application relate to the field of naked-eye 3D technologies, for example, to a stereoscopic display method and apparatus for static objects, a medium, and an electronic device.
Background
With the rapid development of technology, stereoscopic display of items has become a basic requirement in many fields. However, because a true 3D (three-dimensional) display effect cannot be achieved during stereoscopic display, the user's viewing experience is affected. For example, with single-viewpoint shooting, turntable shooting around a static item generates a 2D (two-dimensional) picture sequence, and pictures are then displayed according to different viewing angles; although this approach achieves an all-round display of the item, it cannot achieve a stereoscopic display effect. When shooting with dual viewpoints and playing back with VR (Virtual Reality) technology, the user needs to wear supporting equipment such as a helmet or glasses, which is inconvenient to use.
Summary
Embodiments of the present application provide a stereoscopic display method and apparatus for static objects, a medium, and an electronic device.
In a first aspect, an embodiment of the present application provides a stereoscopic display method for a static object, the method comprising:
acquiring a dual-viewpoint image sequence of a target static object captured by a camera;
performing eye tracking on a user through a human eye tracking module to obtain human eye position data;
according to the dual-viewpoint image sequence and the human eye position data, automatically selecting from the dual-viewpoint image sequence and performing 3D rendering of the corresponding viewing-angle images for stereoscopic display.
In a second aspect, an embodiment of the present application provides a stereoscopic display apparatus for a static object, the apparatus comprising:
a dual-viewpoint image sequence acquisition unit, configured to acquire a dual-viewpoint image sequence of a target static object captured by a camera;
a human eye position data determination unit, configured to perform eye tracking on a user through a human eye tracking module to obtain human eye position data;
a stereoscopic display unit, configured to, according to the dual-viewpoint image sequence and the human eye position data, automatically select from the dual-viewpoint image sequence and perform 3D rendering of the corresponding viewing-angle images for stereoscopic display.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the stereoscopic display method for static objects described in the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor, when executing the computer program, implementing the stereoscopic display method for static objects described in the embodiments of the present application.
Brief Description of the Drawings
FIG. 1 is a flowchart of a stereoscopic display method for static objects provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a dual-viewpoint image sequence acquisition scene of a target static object provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a stereoscopic display process of a target static object provided by an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a stereoscopic display apparatus for static objects provided by an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
Detailed Description
The present application is described below with reference to the drawings and embodiments. It can be understood that the specific embodiments described here are only used to explain the present application, rather than to limit it. In addition, it should be noted that, for ease of description, only some rather than all of the structures related to the present application are shown in the drawings.
Before discussing the exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart describes the steps as sequential processing, many of the steps may be implemented in parallel, concurrently, or simultaneously. In addition, the order of the steps may be rearranged. The processing may be terminated when its operations are completed, but may also have additional steps not included in the drawings. The processing may correspond to a method, a function, a procedure, a subroutine, a subprogram, and the like.
FIG. 1 is a flowchart of a stereoscopic display method for static objects provided by an embodiment of the present application. This embodiment can perform stereoscopic display of static objects, and the method can be executed by the stereoscopic display apparatus for static objects provided by the embodiments of the present application; the apparatus can be implemented in software and/or hardware and can be integrated into an electronic device.
In addition, the electronic device may be a remote control device or a robot.
As shown in FIG. 1, the stereoscopic display method for static objects includes:
S110: acquiring a dual-viewpoint image sequence of a target static object captured by a camera.
The target static object may be a person or an object, for example, a water cup, food, or another item. The dual-viewpoint camera can be arranged separately from the naked-eye 3D display device, or connected to it; if connected, the 3D images of the static object can be displayed in real time. The dual-viewpoint image sequence may be obtained through the dual-viewpoint camera by shooting at predetermined positions and labeling the images according to pre-designed position numbers: for example, the image directly in front is numbered 0001, the image 5 degrees to the front-left is numbered 0002, the image 5 degrees to the front-right is numbered 0003, and so on, so that an image sequence of the static object, namely the dual-viewpoint image sequence, is obtained.
Exemplarily, the dual-viewpoint image sequence of the target static object may be acquired in advance through a camera.
Since the target static object needs to be displayed stereoscopically, the dual-viewpoint image sequence of the target item can be stored in the naked-eye 3D display device.
In another embodiment, the process of determining the target static object to be displayed in this solution may be a process in which a user selects a target item through the naked-eye 3D display device. For example, if the naked-eye 3D display device simultaneously stores images of a water cup, a porcelain vase, and a fan, the user can select the image of the item he or she wants to view as needed, for example by clicking to confirm on the screen of the naked-eye 3D display device, after which that item is taken as the target item for naked-eye 3D display. Alternatively, on one naked-eye 3D display device, each time period can be set to display a certain item, and when another time period is reached, the target item can be switched according to a predetermined display sequence.
In this solution, the dual-viewpoint image sequence of the target static object may be pre-stored in the naked-eye 3D display device, or stored in a database or server connected to the naked-eye 3D display device. After the target static object is determined, the corresponding dual-viewpoint image sequence can be retrieved according to the ID (identity) of the item.
The dual-viewpoint image sequence may be an image sequence composed of multiple images, captured by a dual-viewpoint camera device, that switch from one angle to another. For example, on a horizontal plane, if one dual-viewpoint image is captured every 1 degree within 360 degrees, the dual-viewpoint image sequence contains a total of 360 dual-viewpoint images captured from different angles.
In this solution, optionally, the dual-viewpoint image sequence of the target static object is obtained by ring-shooting the target static object on a turntable with a dual-viewpoint camera at a preset sampling frequency.
FIG. 2 is a schematic diagram of a dual-viewpoint image sequence acquisition scene of a target static object provided by an embodiment of the present application. As shown in FIG. 2, the item may be a kettle placed at the center of a rotating tray, with a binocular camera arranged on one side outside the tray. The binocular camera can sample according to certain rules. For example, while the tray rotates at a constant speed, the sampling frequency of the binocular camera can be set, say, to 20 images per second; if the tray rotates 2 degrees per second, then 10 images will be collected evenly within every 1-degree angle.
With such an arrangement, when the naked-eye 3D display device performs display, this solution can select images of different viewing angles from the uniform dual-viewpoint image sequence for switching, so that, from the user's point of view, target static object images with uniformly changing angles are obtained, improving the 3D display effect.
S120: performing eye tracking on the user through a human eye tracking module to obtain human eye position data.
The human eye tracking module may be arranged above the naked-eye 3D display device, or may be a module carried by the naked-eye 3D display device itself; the module has the functions of image acquisition and human eye tracking calculation. For example, after acquiring the image in front of the current naked-eye 3D display device, it determines the position of the user's eyes and performs real-time tracking to obtain the human eye position data. It can be understood that this solution is currently aimed at the situation where a single user is watching the stereoscopic display image of a naked-eye 3D display device.
The human eye position data can be converted into distance data and direction data relative to the center of the screen of the naked-eye 3D display device, which is helpful for subsequent calculations.
S130: according to the dual-viewpoint image sequence and the human eye position data, automatically selecting from the dual-viewpoint image sequence and performing 3D rendering of the corresponding viewing-angle images for stereoscopic display.
In this solution, the images that the user should see at various angles can be determined according to the dual-viewpoint image sequence, and the user's actual angle can then be determined according to the human eye position data, so as to determine which image to display and perform 3D rendering.
In this solution, for example, the stereoscopic display of the target static object can be performed in the following three ways:
In the first way, automatically selecting from the dual-viewpoint image sequence according to the dual-viewpoint image sequence and the human eye position data, and performing 3D rendering of the corresponding viewing-angle images for stereoscopic display, includes:
acquiring an initial viewing angle of the dual-viewpoint image sequence of the target static object and an initial-viewing-angle image associated with the initial viewing angle;
displaying the initial-viewing-angle image, and determining the user's viewing angle according to the human eye position data;
determining a target image from the dual-viewpoint image sequence according to the included angle between the viewing angle and the initial viewing angle;
performing 3D rendering on the target image for stereoscopic display.
The initial viewing angle may be a front-facing viewing angle determined based on the characteristics of the target static object, such as the viewing angle corresponding to the front of a car model; the image associated with the initial viewing angle is then the dual-viewpoint image captured from directly in front of the car model.
This front direction may correspond to the direction directly in front of the screen of the naked-eye 3D display device. After the initial-viewing-angle image is obtained, it may be displayed in the naked-eye 3D display device first, and the human eye position data may then be analyzed for angle.
For example, the user's viewing angle may be determined according to the human eye position data. For example, if it is determined from the human eye position data that the user's eyes are on the left side of the perpendicular to the screen of the naked-eye 3D display device, with an included angle of 45 degrees, the user's viewing angle is -45 degrees. After this viewing angle is determined, a target image may be determined from the dual-viewpoint image sequence according to the included angle between the viewing angle and the initial viewing angle; that is, what the user should actually see at this moment is the dual-viewpoint image captured 45 degrees to the front-left of the vehicle, namely the target image. The target image is then rendered in 3D for stereoscopic display.
With such an arrangement, this solution can determine, according to the human eye position data, which shooting angle the user's actual viewing angle corresponds to, and retrieve the dual-viewpoint image obtained from that shooting angle. The advantage of this arrangement is that stereoscopic display of the target static object can be performed according to the user's position without binding any mapping relationship; the target image that actually needs to be rendered can be obtained by direct calculation.
In the second way, automatically selecting from the dual-viewpoint image sequence according to the dual-viewpoint image sequence and the human eye position data, and performing 3D rendering of the corresponding viewing-angle images for stereoscopic display, includes:
constructing a mapping relationship between preset viewing angles and the dual-viewpoint image sequence of the target static object;
determining the user's viewing angle according to the human eye position data;
determining, according to the mapping relationship between the dual-viewpoint image sequence and the preset viewing angles, the target image corresponding to the viewing angle from the dual-viewpoint image sequence;
performing 3D rendering on the target image for stereoscopic display.
After the dual-viewpoint image sequence is obtained, a mapping relationship between viewing angles and the dual-viewpoint image sequence of the target static object may be constructed, where a viewing angle corresponds to a shooting angle; for example, a viewing angle of +30 degrees maps to the dual-viewpoint image shot from 30 degrees to the front-right of the static object.
In this way, after the human eye position data is obtained, once the user's viewing angle is determined, the target image to be displayed in the dual-viewpoint image sequence can be determined and rendered in 3D, completing the stereoscopic display of the item.
The advantage of this arrangement is that, according to the predetermined mapping relationship, the target image can be determined quickly after the user's viewing angle is calculated, and the item can be displayed stereoscopically.
In addition to the above two ways, this solution also provides an effect of 3D rendering through multi-views. For example, the third way is as follows:
automatically selecting from the dual-viewpoint image sequence according to the dual-viewpoint image sequence and the human eye position data, and performing 3D rendering of the corresponding viewing-angle images for stereoscopic display, includes:
generating, according to the dual-viewpoint image sequence, at least one multi-view corresponding to a current viewing-angle range;
in response to the human eye position data corresponding to a target viewing-angle range, performing 3D rendering on the multi-view corresponding to the target viewing-angle range for stereoscopic display.
The multi-view may be an image composed of at least two images; after the multiple images are compressed and spliced, one multi-view is obtained. The multi-view may correspond to the naked-eye 3D display device. For example, a grating is added to the display screen of the naked-eye 3D display device; through the directional arrangement of the grating, the user sees different parts of the image at different angles in front of the screen. For example, directly in front, the image seen is exactly the middle image before splicing; when moving to the left, the user sees in turn the first, second, and third images to the left of the middle one; when moving to the right, the user sees in turn the first, second, and third images to the right of the middle one; and so on. With such an arrangement, in cooperation with the grating, the user sees different images at different angles, thereby achieving stereoscopic display of the target static object.
In this solution, for example, the multi-view is obtained by splicing at least two adjacent images in the dual-viewpoint image sequence.
For example, the target viewing-angle range can be determined according to the human eye position data, and at least two adjacent images corresponding to the target viewing-angle range can be spliced to obtain a multi-view, which can then be displayed. In this solution, the splicing operation can be made to correspond to the target viewing-angle range in advance. For example, it is pre-determined that the dual-viewpoint images within every 3 degrees are spliced, obtaining multiple multi-views; the viewing angle of the human eye can then be determined according to the human eye position to determine the target viewing-angle range, and the corresponding multi-view is retrieved and displayed on the naked-eye 3D display device.
In the technical solution provided by the embodiments of the present application, a dual-viewpoint image sequence of a target static object is obtained through a dual-viewpoint camera; the user's eyes are tracked by a human eye tracking module to obtain human eye position data; and, according to the dual-viewpoint image sequence and the human eye position data, images are automatically selected from the dual-viewpoint image sequence and 3D rendering of the automatically selected corresponding viewing-angle images is performed for stereoscopic display. By executing this technical solution, after the dual-viewpoint image sequence of the target static object is obtained, and combined with the positioning result of eye tracking, the images can be displayed by the naked-eye 3D device according to the position of the human eye, so as to obtain a realistic naked-eye 3D display effect.
In one embodiment, after performing 3D rendering on the target image for stereoscopic display, the method further includes:
in response to detecting a viewing-angle switching event, performing eye tracking on the user through the human eye tracking module to obtain human eye position update data;
determining the user's updated viewing angle according to the human eye position update data;
determining an updated target image from the dual-viewpoint image sequence according to the updated viewing angle, and performing 3D rendering on the updated target image for dynamic display.
It can be understood that the update process can be similar to the display process shown above. With such an arrangement, the image of the target static object can be displayed dynamically, thereby achieving the effect of stereoscopic display of images adapted to the position of the human eye.
FIG. 3 is a schematic diagram of the stereoscopic display process for a target static object provided by an embodiment of this application. As shown in FIG. 3, the hardware is a glasses-free 3D display and the software is its companion 3D renderer program. A stereo camera, such as a binocular camera, shoots on a turntable to generate a sequence of dual-viewpoint view files covering multiple angles around the static object. After the image sequence is obtained, it may be stored in the storage device of the host. During this process, the binocular camera may be controlled by the shooting-control software on the host; while controlling the binocular camera, the turntable may also be controlled, for example its rotation speed.
For example, when shooting the static object in a circle on the turntable, the more stereo pictures captured within one revolution, i.e., the larger the sampling count and the more continuous the samples, the more coherent the viewing effect and the smoother the switching between angles.
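This sampling remark can be made concrete: at a preset sampling frequency and turntable speed, the angular step between adjacent stereo pairs is 360 degrees divided by the number of samples per revolution, so more samples mean a smaller step and smoother angle switching. A sketch under assumed parameter names:

```python
def angular_step_deg(samples_per_rev):
    """Angle between adjacent dual-viewpoint pairs in one 360-degree orbit."""
    return 360.0 / samples_per_rev

def samples_per_revolution(turntable_deg_per_s, sampling_hz):
    """Pairs captured in one revolution at a preset sampling frequency."""
    return int(round(sampling_hz * 360.0 / turntable_deg_per_s))

# Doubling the sample count halves the angular jump seen when switching views.
coarse = angular_step_deg(36)  # 10-degree steps between pairs
fine = angular_step_deg(72)    # 5-degree steps between pairs
```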
On the host side, the image sequence may be retrieved from the storage device for processing by the object-display software. During processing, eye-position information may be acquired by the eye-tracking device of the 3D display device (i.e., the eye-tracking module, which may be, for example, a single-camera or dual-camera eye-tracking module that determines the eye position using image techniques, an eye-tracking module that determines the eye position using a distance sensor, or an eye-tracking module using gaze-point tracking), and the rendered image is computed on this basis. The rendered image is sent to the glasses-free 3D display for display by the 3D display device.
During playback, according to the eye position obtained by the eye-tracking system, the dual-viewpoint picture of the corresponding angle is retrieved for glasses-free 3D rendering. The effect is as if the static object were present in real space: viewers at different angles see different sides of the static object, with a glasses-free 3D stereoscopic display effect.
With this arrangement, a sufficient sequence of 3D pictures is obtained by shooting around the turntable, from which 3D multiviews are derived; during playback, eye-tracking technology is combined with glasses-free 3D display technology, and when the eyes are tracked to different positions, the dual-viewpoint view of the corresponding angle is called up for 3D rendering, simulating the realistic stereoscopic display of a physical object. The stereo pictures shot around the object are combined into multiviews for multi-viewpoint glasses-free 3D stereoscopic display.
FIG. 4 is a schematic structural diagram of the stereoscopic display apparatus for a static object provided by an embodiment of this application. As shown in FIG. 4, the apparatus may include:
a dual-viewpoint image sequence acquisition unit 410, configured to acquire a dual-viewpoint image sequence of a target static object captured by a camera;
an eye-position data determination unit 420, configured to perform eye tracking on a user through an eye-tracking module to obtain eye-position data; and
a stereoscopic display unit 430, configured to automatically select from the dual-viewpoint image sequence according to the dual-viewpoint image sequence and the eye-position data, and perform 3D rendering of the image of the corresponding angle of view for stereoscopic display.
The stereoscopic display apparatus for a static object provided by the embodiments of this application can execute the stereoscopic display method for a static object provided by the embodiments of this application, and has the corresponding functional modules and beneficial effects.
The embodiments of this application further provide a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a stereoscopic display method for a static object, the method including:
acquiring a dual-viewpoint image sequence of a target static object captured by a camera;
performing eye tracking on a user through an eye-tracking module to obtain eye-position data; and
automatically selecting from the dual-viewpoint image sequence according to the dual-viewpoint image sequence and the eye-position data, and performing 3D rendering of the image of the corresponding angle of view for stereoscopic display.
A storage medium refers to any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media such as CD-ROMs, floppy disks, or tape devices; computer-system memory or random access memory such as DRAM (Dynamic Random Access Memory), DDR RAM (Double Data Rate RAM), SRAM (Static Random Access Memory), EDO RAM (Extended Data Output RAM), Rambus RAM, etc.; non-volatile memory such as flash memory or magnetic media (e.g., hard disks or optical storage); and registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in the computer system in which the program is executed, or in a different, second computer system connected to the first computer system through a network such as the Internet; the second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media residing in different locations, for example in different computer systems connected through a network. The storage medium may store program instructions executable by one or more processors (e.g., embodied as a computer program).
In the storage medium containing computer-executable instructions provided by the embodiments of this application, the computer-executable instructions are not limited to the operations of the stereoscopic display method described above, and may also perform related operations in the stereoscopic display method for a static object provided by any embodiment of this application.
An embodiment of this application provides an electronic device into which the stereoscopic display apparatus for a static object provided by the embodiments of this application may be integrated; the electronic device may be deployed within a system, or may be a device that executes some or all of the functions within the system. FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of this application. As shown in FIG. 5, this embodiment provides an electronic device 500, which includes: one or more processors 520; and a storage apparatus 510 for storing one or more programs which, when executed by the one or more processors 520, cause the one or more processors 520 to implement the stereoscopic display method for a static object provided by the embodiments of this application, the method including:
acquiring a dual-viewpoint image sequence of a target static object captured by a camera;
performing eye tracking on a user through an eye-tracking module to obtain eye-position data; and
automatically selecting from the dual-viewpoint image sequence according to the dual-viewpoint image sequence and the eye-position data, and performing 3D rendering of the image of the corresponding angle of view for stereoscopic display.
The electronic device 500 shown in FIG. 5 is merely an example and should not impose any limitation on the functions or scope of use of the embodiments of this application.
As shown in FIG. 5, the electronic device 500 includes a processor 520, a storage apparatus 510, an input apparatus 530, and an output apparatus 540. The number of processors 520 in the electronic device may be one or more, with one processor 520 taken as an example in FIG. 5; the processor 520, storage apparatus 510, input apparatus 530, and output apparatus 540 in the electronic device may be connected by a bus or by other means, with connection via a bus 550 taken as an example in FIG. 5.
As a computer-readable storage medium, the storage apparatus 510 may be used to store software programs, computer-executable programs, and module units, such as the program instructions corresponding to the stereoscopic display method for a static object in the embodiments of this application.
The storage apparatus 510 may mainly include a program storage area and a data storage area, where the program storage area may store the operating system and the applications required by at least one function, and the data storage area may store data created according to the use of the terminal, etc. In addition, the storage apparatus 510 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some instances, the storage apparatus 510 may further include memory arranged remotely from the processor 520, and such remote memory may be connected through a network. Instances of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input apparatus 530 may be used to receive input numeric, character, or voice information, and to generate key-signal inputs related to the user settings and function control of the electronic device. The output apparatus 540 may include devices such as a display screen and a speaker.
The electronic device provided by the embodiments of this application can, after obtaining the dual-viewpoint image sequence of the target static object and in combination with the eye-tracking positioning result, display images through a glasses-free 3D device according to the eye position, achieving a realistic glasses-free 3D display effect.
The stereoscopic display apparatus, medium, and electronic device for a static object provided in the above embodiments can execute the stereoscopic display method for a static object provided by any embodiment of this application, and have the corresponding functional modules and beneficial effects. For technical details not described exhaustively in the above embodiments, reference may be made to the stereoscopic display method for a static object provided by any embodiment of this application.
Those skilled in the art will understand that this application is not limited to the particular embodiments described herein, and that various obvious changes, readjustments, and substitutions can be made by those skilled in the art without departing from the protection scope of this application. Therefore, although this application has been described in some detail through the above embodiments, it is not limited to those embodiments and, without departing from the inventive concept, may include more other equivalent embodiments, the scope of this application being determined by the scope of the appended claims.

Claims (11)

  1. A stereoscopic display method for a static object, the method comprising:
    acquiring a dual-viewpoint image sequence of a target static object captured by a camera;
    performing eye tracking on a user through an eye-tracking module to obtain eye-position data; and
    automatically selecting from the dual-viewpoint image sequence according to the dual-viewpoint image sequence and the eye-position data, and performing 3D rendering of an image of the corresponding angle of view for stereoscopic display.
  2. The method according to claim 1, wherein acquiring the dual-viewpoint image sequence of the target static object captured by a camera comprises: acquiring the dual-viewpoint image sequence of the target static object captured by a dual-viewpoint camera.
  3. The method according to claim 1 or 2, wherein automatically selecting from the dual-viewpoint image sequence according to the dual-viewpoint image sequence and the eye-position data, and performing 3D rendering of the image of the corresponding angle of view for stereoscopic display, comprises:
    acquiring an initial angle of view of the dual-viewpoint image sequence of the target static object and an initial-angle-of-view image associated with the initial angle of view;
    displaying the initial-angle-of-view image, and determining the user's viewing angle according to the eye-position data;
    determining a target image from the dual-viewpoint image sequence according to the included angle between the viewing angle and the initial angle of view; and
    performing 3D rendering of the target image for stereoscopic display.
  4. The method according to claim 1 or 2, wherein automatically selecting from the dual-viewpoint image sequence according to the dual-viewpoint image sequence and the eye-position data, and performing 3D rendering of the image of the corresponding angle of view for stereoscopic display, comprises:
    constructing a mapping relationship between preset viewing angles and the dual-viewpoint image sequence of the target static object;
    determining the user's viewing angle according to the eye-position data;
    determining, from the dual-viewpoint image sequence, a target image corresponding to the viewing angle according to the mapping relationship between the dual-viewpoint image sequence and the preset viewing angles; and
    performing 3D rendering of the target image for stereoscopic display.
  5. The method according to claim 3 or 4, further comprising, after performing 3D rendering of the target image for stereoscopic display:
    in response to detecting an angle-of-view switch event, performing eye tracking on the user through the eye-tracking module to obtain updated eye-position data;
    determining the user's updated viewing angle according to the updated eye-position data; and
    determining an updated target image from the dual-viewpoint image sequence according to the updated viewing angle, and performing 3D rendering of the updated target image for dynamic display.
  6. The method according to claim 1 or 2, wherein the dual-viewpoint image sequence of the target static object is obtained by shooting the target static object all the way around on a turntable with a dual-viewpoint camera at a preset sampling frequency.
  7. The method according to claim 1 or 2, wherein automatically selecting from the dual-viewpoint image sequence according to the dual-viewpoint image sequence and the eye-position data, and performing 3D rendering of the image of the corresponding angle of view for stereoscopic display, comprises:
    generating, from the dual-viewpoint image sequence, at least one multiview corresponding to a current angle-of-view range, organized by angle-of-view range; and
    in response to the eye-position data corresponding to a target angle-of-view range, performing 3D rendering of the multiview corresponding to the target angle-of-view range for stereoscopic display.
  8. The method according to claim 7, wherein the multiview is obtained by stitching at least two adjacent images in the dual-viewpoint image sequence.
  9. A stereoscopic display apparatus for a static object, the apparatus comprising:
    a dual-viewpoint image sequence acquisition unit, configured to acquire a dual-viewpoint image sequence of a target static object captured by a camera;
    an eye-position data determination unit, configured to perform eye tracking on a user through an eye-tracking module to obtain eye-position data; and
    a stereoscopic display unit, configured to automatically select from the dual-viewpoint image sequence according to the dual-viewpoint image sequence and the eye-position data, and perform 3D rendering of an image of the corresponding angle of view for stereoscopic display.
  10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the stereoscopic display method for a static object according to any one of claims 1-8.
  11. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the stereoscopic display method for a static object according to any one of claims 1-8.
PCT/CN2022/075458 2021-03-24 2022-02-08 Stereoscopic display method and apparatus for a static object, medium, and electronic device WO2022199260A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110313668.1 2021-03-24
CN202110313668.1A CN113079364A (zh) 2021-03-24 Stereoscopic display method and apparatus for a static object, medium, and electronic device

Publications (1)

Publication Number Publication Date
WO2022199260A1 true WO2022199260A1 (zh) 2022-09-29

Family

ID=76613695

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/075458 WO2022199260A1 (zh) 2021-03-24 2022-02-08 静态对象的立体显示方法、装置、介质及电子设备

Country Status (2)

Country Link
CN (1) CN113079364A (zh)
WO (1) WO2022199260A1 (zh)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113079364A (zh) * 2021-03-24 2021-07-06 纵深视觉科技(南京)有限责任公司 Stereoscopic display method and apparatus for a static object, medium, and electronic device
CN113660476A (zh) * 2021-08-16 2021-11-16 纵深视觉科技(南京)有限责任公司 Web-page-based stereoscopic display system and method
CN113689551A (zh) * 2021-08-20 2021-11-23 纵深视觉科技(南京)有限责任公司 Three-dimensional content presentation method and apparatus, medium, and electronic device
CN114049432A (zh) * 2021-11-02 2022-02-15 百果园技术(新加坡)有限公司 Human-body measurement method and apparatus, electronic device, and storage medium
CN114928739A (zh) * 2022-02-11 2022-08-19 广东未来科技有限公司 3D display method and apparatus, and storage medium

Citations (6)

Publication number Priority date Publication date Assignee Title
US6445807B1 (en) * 1996-03-22 2002-09-03 Canon Kabushiki Kaisha Image processing method and apparatus
US20040032407A1 (en) * 1998-01-30 2004-02-19 Koichi Ejiri Method and system for simulating stereographic vision
CN104603717A * 2012-09-05 2015-05-06 Nec卡西欧移动通信株式会社 Display device, display method, and program
US20170366803A1 (en) * 2016-06-17 2017-12-21 Dustin Kerstein System and method for capturing and viewing panoramic images having motion parallax depth perception without image stitching
CN111683238A * 2020-06-17 2020-09-18 宁波视睿迪光电有限公司 Observation-tracking-based 3D image fusion method and apparatus
CN113079364A * 2021-03-24 2021-07-06 纵深视觉科技(南京)有限责任公司 Stereoscopic display method and apparatus for a static object, medium, and electronic device

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US9549174B1 (en) * 2015-10-14 2017-01-17 Zspace, Inc. Head tracked stereoscopic display system that uses light field type data
CN107885325B (zh) * 2017-10-23 2020-12-08 张家港康得新光电材料有限公司 Eye-tracking-based glasses-free 3D display method and control system


Cited By (2)

Publication number Priority date Publication date Assignee Title
CN116229583A (zh) * 2023-05-06 2023-06-06 北京百度网讯科技有限公司 Driving-information generation and driving method, apparatus, electronic device, and storage medium
CN116229583B (zh) * 2023-05-06 2023-08-04 北京百度网讯科技有限公司 Driving-information generation and driving method, apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN113079364A (zh) 2021-07-06


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22773922; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: GM 9030/2022; Country of ref document: AT)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 22773922; Country of ref document: EP; Kind code of ref document: A1)