WO2018214431A1 - Method and apparatus for presenting scene using virtual reality device, and virtual reality apparatus - Google Patents

Method and apparatus for presenting scene using virtual reality device, and virtual reality apparatus Download PDF

Info

Publication number
WO2018214431A1
WO2018214431A1 (application PCT/CN2017/112383)
Authority
WO
WIPO (PCT)
Prior art keywords
lens
eye
center
image
imaging device
Prior art date
Application number
PCT/CN2017/112383
Other languages
French (fr)
Chinese (zh)
Inventor
王鹏 (Wang Peng)
Original Assignee
歌尔科技有限公司 (Goertek Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 歌尔科技有限公司 (Goertek Technology Co., Ltd.)
Publication of WO2018214431A1 publication Critical patent/WO2018214431A1/en

Links

Images

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20: Image signal generators
    • H04N 13/204: Image signal generators using stereoscopic image cameras
    • H04N 13/239: Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H04N 13/30: Image reproducers
    • H04N 13/332: Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N 13/344: Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays

Definitions

  • the present invention relates to the field of virtual reality technologies, and in particular, to a method, a device, and a virtual reality device for presenting a scene through a virtual reality device.
  • virtual reality technology has developed rapidly, not only for presenting virtual scenes to provide users with near-real immersion, but also for presenting real-life scenes to provide users with a sense of realism in the real world. Therefore, virtual reality devices such as virtual reality helmets, virtual reality glasses, and the like that present scenes based on virtual reality technology are also attracting more and more users' attention.
  • Current virtual reality devices typically present scenes using a dual-camera arrangement consisting of a left-eye camera and a right-eye camera.
  • Unlike a common single camera, the dual camera of a virtual reality device can increase the depth of field of the corresponding scene image when the scene is presented, so that the presented image has a stereoscopic effect and the realism is enhanced when the user views the real world.
  • When a virtual reality device uses a dual camera to present a scene, it has been necessary to introduce a third-party open-source library, such as the FFmpeg library, to perform a merge algorithm on each frame previewed by the left-eye camera and the right-eye camera in order to achieve a real-world simulation effect.
  • In one embodiment, a method of presenting a scene by a virtual reality device is provided, the virtual reality device comprising a left-eye imaging device and a right-eye imaging device, wherein the left-eye imaging device is sequentially provided with a left-eye lens, a left-eye camera, a left-eye screen, and a left-eye eyepiece, and the right-eye imaging device is sequentially provided with a right-eye lens, a right-eye camera, a right-eye screen, and a right-eye eyepiece;
  • the method includes:
  • respectively acquiring an imaging parameter of the left-eye imaging device and an imaging parameter of the right-eye imaging device, where the imaging parameter includes at least the center vertical distance from the lens to the camera in the corresponding imaging device and the refractive angle of the lens;
  • acquiring a first offset distance of the left-eye imaging device according to the imaging parameter of the left-eye imaging device, and a second offset distance of the right-eye imaging device according to the imaging parameter of the right-eye imaging device;
  • adjusting the left-eye lens, the left-eye camera, the right-eye lens, and the right-eye camera according to the first offset distance, the second offset distance, and a preset human interpupillary distance, so that the center horizontal distance between the left-eye lens and the right-eye lens is not greater than the human interpupillary distance, the left-eye lens is horizontally offset from the left-eye camera center by the first offset distance, and the right-eye lens is horizontally offset from the right-eye camera center by the second offset distance; and
  • rendering the first image previewed by the left-eye camera on the left-eye screen, and correspondingly rendering the second image simultaneously previewed by the right-eye camera on the right-eye screen, to present the corresponding scene to the user through the corresponding eyepiece.
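For illustration only, the claimed steps can be sketched in Python. All names and numeric values below are assumptions for the sketch, not part of the disclosure; the offset formula w = h · tan θ follows the derivation given later in the specification.

```python
import math
from dataclasses import dataclass

@dataclass
class ImagingParams:
    h: float      # center vertical distance from lens to camera (assumed mm)
    theta: float  # refractive angle of the lens, in radians

def offset_distance(p: ImagingParams) -> float:
    """Step 2: derive the horizontal offset distance from the lens geometry."""
    return p.h * math.tan(p.theta)

def adjust_components(lens_center_distance: float, ipd: float) -> bool:
    """Step 3: the lens spacing must not exceed the interpupillary distance."""
    return lens_center_distance <= ipd

def render(left_frame, right_frame):
    """Step 4: render each camera's preview on its own screen (placeholder)."""
    return {"left_screen": left_frame, "right_screen": right_frame}

# Step 1: acquire imaging parameters (hypothetical example values).
left = ImagingParams(h=10.0, theta=math.radians(30.0))
right = ImagingParams(h=10.0, theta=math.radians(30.0))
w1, w2 = offset_distance(left), offset_distance(right)  # first/second offsets
ok = adjust_components(lens_center_distance=60.0, ipd=63.0)
screens = render("left preview frame", "right preview frame")
```

The point of the sketch is the ordering of the four steps; the actual rendering is done on the device screens, not in Python.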
  • In one embodiment, a scene presentation device is provided, which is disposed on the virtual reality device side.
  • the virtual reality device includes a left-eye imaging device and a right-eye imaging device; the left-eye imaging device is sequentially provided with a left-eye lens, a left-eye camera, a left-eye screen, and a left-eye eyepiece, and the right-eye imaging device is sequentially provided with a right-eye lens, a right-eye camera, a right-eye screen, and a right-eye eyepiece;
  • the scene presentation device includes:
  • a parameter obtaining unit configured to respectively acquire an imaging parameter of the left-eye imaging device and an imaging parameter of the right-eye imaging device, where the imaging parameter includes at least a central vertical distance of the lens to the camera in the corresponding imaging device and a refractive angle of the lens;
  • An offset distance acquisition unit configured to acquire a first offset distance of the left-eye imaging device according to an imaging parameter of the left-eye imaging device, and a second offset distance of the right-eye imaging device according to an imaging parameter of the right-eye imaging device;
  • a component adjustment unit configured to adjust the left-eye lens, the left-eye camera, the right-eye lens, and the right-eye camera according to the first offset distance, the second offset distance, and a preset human interpupillary distance, so that the center horizontal distance between the left-eye lens and the right-eye lens is not greater than the human interpupillary distance, the left-eye lens is horizontally offset from the left-eye camera center by the first offset distance, and the right-eye lens is horizontally offset from the right-eye camera center by the second offset distance; and
  • an image rendering unit configured to render the first image previewed by the left-eye camera on the left-eye screen, and to correspondingly render the second image simultaneously previewed by the right-eye camera on the right-eye screen, so as to present the corresponding scene to the user through the corresponding eyepiece.
  • a virtual reality device comprising:
  • a left-eye imaging device, which is sequentially provided with a left-eye lens, a left-eye camera, a left-eye screen, and a left-eye eyepiece;
  • a right-eye imaging device, which is sequentially provided with a right-eye lens, a right-eye camera, a right-eye screen, and a right-eye eyepiece; and
  • the scene presentation device provided above.
  • With the method, apparatus, and virtual reality device provided for presenting a scene through a virtual reality device, the scene can be presented without introducing a third-party library to run a merge algorithm on each frame previewed by the left-eye and right-eye cameras.
  • This reduces the implementation difficulty of the virtual reality device, reduces the program size, and avoids the problems of low operating efficiency and high power consumption.
  • The image refresh speed during scene presentation is improved and the realism of the presented scene is enhanced.
  • The comfort of using the virtual reality device is also improved. The approach is especially suitable for presenting real scenes of the real world.
  • FIG. 1 is a block diagram showing an example of a hardware configuration of an electronic device that can be used to implement an embodiment of the present invention.
  • FIG. 2 shows a flow diagram of a method of presenting a scene by a virtual reality device in accordance with an embodiment of the present invention.
  • FIG. 3 illustrates a schematic diagram of an example of a method of presenting a scene by a virtual reality device according to an embodiment of the present invention.
  • FIG. 4 shows a schematic block diagram of a scene presenting apparatus of an embodiment of the present invention.
  • FIG. 5 shows a schematic block diagram of a virtual reality device of an embodiment of the present invention.
  • FIG. 1 is a block diagram showing a hardware configuration of an electronic device 1000 in which an embodiment of the present invention can be implemented.
  • the electronic device 1000 can be a virtual reality helmet or virtual reality glasses or the like.
  • the electronic device 1000 may include a processor 1100, a memory 1200, an interface device 1300, a communication device 1400, a display device 1500, an input device 1600, a speaker 1700, a microphone 1800, and the like.
  • The processor 1100 may be, for example, a central processing unit (CPU), a microcontroller (MCU), or the like.
  • The memory 1200 includes, for example, a ROM (read-only memory), a RAM (random access memory), and a nonvolatile memory such as a hard disk.
  • The interface device 1300 includes, for example, a USB interface, a headphone jack, and the like.
  • The communication device 1400 can perform wired or wireless communication, including, for example, WiFi communication, Bluetooth communication, 2G/3G/4G/5G communication, and the like.
  • The display device 1500 is, for example, an LCD screen, a touch screen, or the like.
  • The input device 1600 can include, for example, a touch screen, a keyboard, a somatosensory input, and the like. The user can input/output voice information through the speaker 1700 and the microphone 1800.
  • In this embodiment, the memory 1200 of the electronic device 1000 is configured to store instructions for controlling the processor 1100 to operate so as to perform the method of presenting a scene through a virtual reality device provided by any embodiment of the present invention.
  • Although a plurality of devices are shown for the electronic device 1000, the present invention may relate to only some of them; for example, the electronic device 1000 may relate only to the processor 1100 and the memory 1200.
  • A technician can design the instructions according to the solutions disclosed in the present invention. How the instructions control the processor to operate is well known in the art and will not be described in detail here.
  • In this embodiment, a method for presenting a scene by a virtual reality device is implemented on a virtual reality device, where the virtual reality device includes a left-eye imaging device and a right-eye imaging device; the left-eye imaging device is sequentially provided with a left-eye lens, a left-eye camera, a left-eye screen, and a left-eye eyepiece, and the right-eye imaging device is sequentially provided with a right-eye lens, a right-eye camera, a right-eye screen, and a right-eye eyepiece.
  • the left-eye imaging device and the right-eye imaging device in the virtual reality device may be physically separated physical devices, or may be physically or partially integrated and logically separated.
  • the virtual reality device can be a virtual reality glasses or a virtual reality helmet.
  • the method for presenting a scene by using a virtual reality device includes steps S2100 to S2400.
  • step S2100 an imaging parameter of the left-eye imaging device and an imaging parameter of the right-eye imaging device are respectively acquired, the imaging parameter including at least a central vertical distance of the lens to the camera in the corresponding imaging device and a refractive angle of the lens;
  • The imaging parameters may be provided by the manufacturer or vendor of the corresponding virtual reality device and pre-stored in a storage area of the virtual reality device, with an interface provided for acquiring them; alternatively, they may be obtained as product parameters of the virtual reality device from the manual or the product's official website and provided for download, etc. The possibilities are not enumerated here.
  • step S2200 a first offset distance of the left-eye imaging device is acquired according to an imaging parameter of the left-eye imaging device, and a second offset distance of the right-eye imaging device is acquired according to an imaging parameter of the right-eye imaging device.
  • The imaging parameters of the left-eye imaging device acquired in step S2100 include the lens-to-camera center vertical distance h1 of the left-eye imaging device and the refractive angle θ1 of the lens, and the imaging parameters of the right-eye imaging device include the lens-to-camera center vertical distance h2 of the right-eye imaging device and the refractive angle θ2 of the lens.
  • The steps of obtaining the first offset distance and the second offset distance include: calculating the first offset distance as w1 = h1 · tan θ1, and calculating the second offset distance as w2 = h2 · tan θ2.
  • In step S2300, the left-eye lens, the left-eye camera, the right-eye lens, and the right-eye camera are adjusted according to the first offset distance, the second offset distance, and a preset human interpupillary distance, so that the center horizontal distance between the left-eye lens and the right-eye lens is not greater than the human interpupillary distance, the left-eye lens is horizontally offset from the left-eye camera center by the first offset distance, and the right-eye lens is horizontally offset from the right-eye camera center by the second offset distance.
  • Because the left-eye lens is horizontally offset from the left-eye camera center by the first offset distance and the right-eye lens is horizontally offset from the right-eye camera center by the second offset distance, image deviation caused by the refraction of the lenses can be avoided when the scene is presented by the virtual reality device.
  • Because the center horizontal distance between the left-eye lens and the right-eye lens is not greater than the human interpupillary distance, the visual fatigue that would be caused by a lens spacing greater than the interpupillary distance is avoided, further improving the comfort of use.
  • For example, suppose the center vertical distance from the left-eye lens to the left-eye camera is h1 and the refractive angle of the left-eye lens is θ1, while the center vertical distance from the right-eye lens to the right-eye camera is h2 and the refractive angle of the right-eye lens is θ2.
  • The first offset distance obtained in step S2200 is then w1 = h1 · tan θ1, and the second offset distance is w2 = h2 · tan θ2.
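As an illustrative sketch (the numeric values are assumed, not from the patent), the first and second offset distances follow from w = h · tan θ, where h is the lens-to-camera center vertical distance and θ is the refractive angle of the lens:

```python
import math

def first_and_second_offsets(h1, theta1, h2, theta2):
    """Return (w1, w2) with w1 = h1 * tan(theta1) and w2 = h2 * tan(theta2).

    h1, h2: lens-to-camera center vertical distances;
    theta1, theta2: refractive angles of the lenses, in radians.
    """
    return h1 * math.tan(theta1), h2 * math.tan(theta2)

# Assumed example values: h1 = h2 = 8 mm, theta1 = theta2 = 20 degrees.
w1, w2 = first_and_second_offsets(8.0, math.radians(20.0),
                                  8.0, math.radians(20.0))
```

With symmetric left and right optics the two offsets coincide; in general they are computed independently from each eye's own parameters.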
  • The corresponding components are set as shown in FIG. 3. In FIG. 3, the center horizontal distance between the left-eye lens and the right-eye lens equals the human interpupillary distance; this is merely illustrative. In other examples, the center horizontal distance between the left-eye lens and the right-eye lens may be a value close to, but not greater than, the human interpupillary distance.
  • The human interpupillary distance may be an average value for typical users selected according to engineering experience or experimental simulation, or may be obtained through an interface provided by the virtual reality device for user operation or input, so that the user actually wearing the device can set the interpupillary distance according to personal needs or application scenarios. This personalized setting further improves the comfort of using the virtual reality device.
  • In step S2400, the first image previewed by the left-eye camera is rendered on the left-eye screen, and the second image simultaneously previewed by the right-eye camera is correspondingly rendered on the right-eye screen, to present the corresponding scene to the user through the corresponding eyepiece.
  • the scene may be a virtual scene to provide a near-real immersion of the user, or a real scene in the real world, to provide the user with a sense of realism in the real world.
  • In step S2400, the images obtained by the simultaneous previews of the left-eye and right-eye cameras are rendered on the respective left-eye and right-eye screens, and the corresponding scene is presented to the user directly.
  • There is therefore no need to introduce a third-party library to run a merge algorithm on each preview frame of the left-eye and right-eye cameras, which reduces the implementation difficulty of the virtual reality device, reduces the program size, and avoids low operating efficiency and high power consumption.
  • The image refresh speed during scene presentation is improved and the realism of the presented scene is enhanced. The method is particularly suitable for presenting real scenes of the real world, providing a solid sense of reality.
  • the rendering operation can be implemented by the rendering function provided by Unity3D.
  • Unity3D is a fully integrated professional game engine developed by Unity Technologies, which can be generally applied to the development of virtual reality device rendering scenarios, and will not be described here.
  • each frame image obtained by simultaneously previewing the left and right eye cameras respectively can be respectively rendered on the corresponding left and right eye screens by the rendering function provided by Unity3D to present corresponding scenes.
  • In one example, before the rendering operation is performed, the image center of each frame previewed by the left-eye and right-eye cameras may be offset to enhance the depth-of-field effect of the corresponding image, making the presented scene (especially a real scene of the real world) more realistic.
  • the method for presenting a scene by using a virtual reality device provided in this example further includes:
  • before the rendering operation, horizontally offsetting the image center of the first image by a distance d1 to obtain a new image center, and then performing the rendering operation on the first image; and
  • before the rendering operation, horizontally offsetting the image center of the second image by a distance d2 to obtain a new image center, and then performing the rendering operation on the second image;
  • where h1 is the center vertical distance from the lens of the left-eye imaging device to the camera, w1 is the first offset distance, and H1 is the image height of the first image; h2 is the center vertical distance from the lens of the right-eye imaging device to the camera, w2 is the second offset distance, and H2 is the image height of the second image; and
  • d1 = (w1 / h1) × (H1 / 2)
  • d2 = (w2 / h2) × (H2 / 2).
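A minimal sketch of this step, with invented pixel values: the offset d = (w / h) × (H / 2) is computed from the offset distance w, the lens-to-camera distance h, and the image height H, and the image center is then shifted horizontally by d. The tiny list-of-lists image is a stand-in for a real preview frame.

```python
def center_offset(w, h, image_height):
    # d = (w / h) * (H / 2)
    return (w / h) * (image_height / 2.0)

def shift_horizontal(image, d):
    # Shift each row of the image right by d pixels (positive d),
    # padding with zeros; equivalent to moving the image center by d.
    d = int(round(d))
    if d == 0:
        return [row[:] for row in image]
    if d > 0:
        return [[0] * d + row[:-d] for row in image]
    return [row[-d:] + [0] * (-d) for row in image]

# Assumed example: w1 = 2, h1 = 8, first image 4 pixels high and wide.
d1 = center_offset(2.0, 8.0, 4)   # (2/8) * (4/2) = 0.5
image = [[1, 2, 3, 4]] * 4
shifted = shift_horizontal(image, 1)
```

In a real device the shift would be applied to the preview texture before rendering; here the rounding to whole pixels is a simplification.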
  • In another example, the objects outside the contour of the center object of each frame previewed by the left-eye and right-eye cameras may be blurred or faded before the rendering operation is performed, to enhance the depth-of-field effect of the corresponding image and make the presented scene (especially a real scene of the real world) more realistic.
  • the method for presenting a scene by using a virtual reality device provided in this example further includes:
  • after determining a center object of the first image based on the image center point of the first image, performing contour detection on the center object to obtain a center-object contour, blurring or fading the objects included in the first image outside the center-object contour, and then performing the rendering operation; and
  • after determining a center object of the second image based on the image center point of the second image, performing contour detection on the center object to obtain a center-object contour, blurring or fading the objects included in the second image outside the center-object contour, and then performing the rendering operation.
  • The contour detection can be implemented by the findContours() function provided in OpenCV, an open-source cross-platform computer vision library, and will not be described here.
  • In a further example, the image center of each frame previewed by the left-eye and right-eye cameras may be offset, and the objects outside the contour of the center object may be blurred or faded, before the rendering operation is performed, so as to better enhance the depth-of-field effect of the corresponding image and make the presented scene more realistic.
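The blur-outside-contour step can be sketched without OpenCV. Here a fixed rectangular mask stands in for the contour that findContours() would return, and a naive box blur stands in for a production blur; the 4×4 frame and mask are invented for illustration.

```python
def box_blur(image, radius=1):
    # Simple box blur over a 2-D grayscale image (list of lists of numbers).
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def blur_outside_contour(image, contour_mask, radius=1):
    # Keep pixels inside the center-object contour sharp; blur the rest.
    blurred = box_blur(image, radius)
    return [[image[y][x] if contour_mask[y][x] else blurred[y][x]
             for x in range(len(image[0]))]
            for y in range(len(image))]

# Assumed example: the "center object" is the bright middle 2x2 block.
frame = [[10, 10, 10, 10],
         [10, 99, 99, 10],
         [10, 99, 99, 10],
         [10, 10, 10, 10]]
mask = [[False, False, False, False],
        [False, True,  True,  False],
        [False, True,  True,  False],
        [False, False, False, False]]
result = blur_outside_contour(frame, mask)
```

In an OpenCV-based implementation the mask would instead be produced by drawing the contour returned by findContours() into a binary image.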
  • In this embodiment, a scene presentation device 3000 is further provided, as shown in FIG. 4, disposed on the virtual reality device 4000 side. The scene presentation device 3000 includes a parameter acquisition unit 3100, an offset distance acquisition unit 3200, a component adjustment unit 3300, and an image rendering unit 3400, and optionally further includes an image center offset unit 3500 and a contour processing unit 3600, for implementing the method for presenting a scene by a virtual reality device provided in this embodiment; details are not repeated here.
  • The virtual reality device 4000 includes a left-eye imaging device 4100 and a right-eye imaging device 4200; the left-eye imaging device 4100 is sequentially provided with a left-eye lens 4101, a left-eye camera 4102, a left-eye screen 4103, and a left-eye eyepiece 4104.
  • The right-eye imaging device 4200 is sequentially provided with a right-eye lens 4201, a right-eye camera 4202, a right-eye screen 4203, and a right-eye eyepiece 4204.
  • The scene presentation device 3000 includes:
  • the parameter acquisition unit 3100, configured to respectively acquire the imaging parameters of the left-eye imaging device and the imaging parameters of the right-eye imaging device, where the imaging parameters include at least the center vertical distance from the lens to the camera in the corresponding imaging device and the refractive angle of the lens;
  • the offset distance acquiring unit 3200 is configured to acquire a first offset distance of the left eye imaging device according to the imaging parameter of the left eye imaging device, and acquire a second offset distance of the right eye imaging device according to the imaging parameter of the right eye imaging device;
  • the component adjustment unit 3300, configured to adjust the left-eye lens, the left-eye camera, the right-eye lens, and the right-eye camera according to the first offset distance, the second offset distance, and a preset human interpupillary distance, so that the center horizontal distance between the left-eye lens and the right-eye lens is not greater than the human interpupillary distance, the left-eye lens is horizontally offset from the left-eye camera center by the first offset distance, and the right-eye lens is horizontally offset from the right-eye camera center by the second offset distance; and
  • the image rendering unit 3400, configured to render the first image previewed by the left-eye camera on the left-eye screen, and to correspondingly render the second image simultaneously previewed by the right-eye camera on the right-eye screen, so as to present the corresponding scene to the user through the corresponding eyepiece.
  • the offset distance obtaining unit 3200 includes:
  • means for calculating the first offset distance w1 = h1 · tan θ1 according to the center vertical distance h1 from the lens of the left-eye imaging device to the camera and the refractive angle θ1 of the lens; and means for calculating the second offset distance w2 = h2 · tan θ2 according to the center vertical distance h2 from the lens of the right-eye imaging device to the camera and the refractive angle θ2 of the lens.
  • the scene presentation device 3000 further includes an image center offset unit 3500, configured to:
  • before the rendering operation, horizontally offset the image center of the first image by a distance d1 to obtain a new image center, and then perform the rendering operation on the first image; and
  • before the rendering operation, horizontally offset the image center of the second image by a distance d2 to obtain a new image center, and then perform the rendering operation on the second image.
  • the scene presentation device 3000 further includes a contour processing unit 3600, configured to:
  • after determining a center object of the first image based on the image center point of the first image, perform contour detection on the center object to obtain a center-object contour, blur or fade the objects included in the first image outside the center-object contour, and then perform the rendering operation; and,
  • after determining a center object of the second image based on the image center point of the second image, perform contour detection on the center object to obtain a center-object contour, blur or fade the objects included in the second image outside the center-object contour, and then perform the rendering operation.
  • the connection relationship is merely illustrative and not a specific limitation.
  • The scene presentation device 3000 may be disposed in or integrated with the virtual reality device 4000, or may be connected to the virtual reality device 4000 through a wireless or wired connection and execute the method remotely; the possibilities are not enumerated here.
  • The left-eye imaging device 4100 and the right-eye imaging device 4200 included in the virtual reality device 4000 illustrated in FIG. 4 are merely illustrative and need not be two physically separated devices; they may instead be logically separated.
  • For example, in a specific implementation, the left-eye lens 4101, left-eye camera 4102, left-eye screen 4103, and left-eye eyepiece 4104 corresponding to the left-eye imaging device 4100, and the right-eye lens 4201, right-eye camera 4202, right-eye screen 4203, and right-eye eyepiece 4204 corresponding to the right-eye imaging device 4200, may be integrated or built into the same physical device, without physically separating the components into a distinct left-eye imaging device 4100 and right-eye imaging device 4200.
  • the hardware configuration of the scene rendering device 3000 may be the electronic device 1000 as shown in FIG. 1.
  • the scene rendering device 3000 can be implemented in a variety of ways.
  • the scene rendering device 3000 can be implemented by an instruction configuration processor.
  • the instructions can be stored in the ROM, and when the device is booted, the instructions are read from the ROM into the programmable device to implement the scene rendering device 3000.
  • For example, the scene presentation device 3000 can be implemented in a dedicated hardware device (e.g., an ASIC).
  • the scene rendering device 3000 can be divided into mutually independent units, or they can be implemented together.
  • the scene rendering device 3000 may be implemented by one of the various implementations described above, or may be implemented by a combination of two or more of the various implementations described above.
  • a virtual reality device 5000 is further provided, as shown in FIG. 5, including:
  • a left-eye imaging device 5100, which is sequentially provided with a left-eye lens 5101, a left-eye camera 5102, a left-eye screen 5103, and a left-eye eyepiece 5104;
  • a right-eye imaging device 5200, which is sequentially provided with a right-eye lens 5201, a right-eye camera 5202, a right-eye screen 5203, and a right-eye eyepiece 5204; and
  • the scene presentation device 3000 provided in this embodiment.
  • the virtual reality device 5000 may be a virtual reality helmet or virtual reality glasses or the like.
  • the virtual reality device 5000 can be the electronic device 1000 as shown in FIG. 1.
  • The left-eye imaging device 5100 and the right-eye imaging device 5200 included in the virtual reality device 5000 illustrated in FIG. 5 are merely illustrative and need not be two physically separated devices; they may instead be logically separated.
  • For example, the virtual reality device 5000 may include the sequentially disposed left-eye lens 5101, left-eye camera 5102, left-eye screen 5103, and left-eye eyepiece 5104, and the sequentially disposed right-eye lens 5201, right-eye camera 5202, right-eye screen 5203, and right-eye eyepiece 5204, without physically separating the components into a distinct left-eye imaging device 5100 and right-eye imaging device 5200.
  • In summary, embodiments of the present invention provide a method, an apparatus, and a virtual reality device for presenting a scene through a virtual reality device. After the adjustment operation is performed on the components of the left-eye and right-eye imaging devices, the images previewed by the left-eye and right-eye cameras are correspondingly rendered on the respective left-eye and right-eye screens, so that no third-party library needs to be introduced to run a merge algorithm on each preview frame of the left-eye and right-eye cameras.
  • This reduces the implementation difficulty of the virtual reality device, reduces the program size, and avoids the problems of low operating efficiency and high power consumption.
  • The image refresh speed during scene presentation is improved and the realism of the presented scene is enhanced.
  • the invention can be a system, method and/or computer program product.
  • the computer program product can comprise a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement various aspects of the present invention.
  • the computer readable storage medium can be a tangible device that can hold and store the instructions used by the instruction execution device.
  • the computer readable storage medium can be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • A non-exhaustive list of more specific examples includes: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a groove structure in which instructions are stored, and any suitable combination of the foregoing.
  • A computer readable storage medium, as used herein, is not to be construed as a transient signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
  • the computer readable program instructions described herein can be downloaded from a computer readable storage medium to various computing/processing devices or downloaded to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in each computing/processing device .
  • Computer program instructions for carrying out operations of the present invention may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages.
  • The computer readable program instructions can execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • The remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection can be made to an external computer (for example, through the Internet using an Internet service provider).
  • A customized electronic circuit, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized by utilizing state information of the computer readable program instructions, and the electronic circuit can execute the computer readable program instructions to implement various aspects of the present invention.
  • The computer readable program instructions can be provided to a processor of a general purpose computer, a special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • The computer readable program instructions can also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to operate in a particular manner, such that the computer readable medium storing the instructions comprises an article of manufacture including instructions that implement various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • The computer readable program instructions can also be loaded onto a computer, other programmable data processing apparatus, or other device, causing a series of operational steps to be performed on the computer, other programmable data processing apparatus, or other device, to produce a computer-implemented process.
  • The instructions executed on the computer, other programmable data processing apparatus, or other device thereby implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Each block in the flowcharts or block diagrams can represent a module, a program segment, or a portion of instructions that comprises one or more executable instructions for implementing the specified logical functions.
  • In some alternative implementations, the functions noted in the blocks can also occur in an order different from that shown in the drawings. For example, two consecutive blocks may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • Each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or acts, or by a combination of dedicated hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are equivalent.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention discloses a method and an apparatus for presenting a scene using a virtual reality device, and a virtual reality apparatus. The method comprises: obtaining imaging parameters of a left-eye imaging device and of a right-eye imaging device, respectively; obtaining a first offset distance of the left-eye imaging device and a second offset distance of the right-eye imaging device on the basis of the respective imaging parameters; adjusting a left-eye lens, a left-eye camera, a right-eye lens, and a right-eye camera on the basis of the first offset distance, the second offset distance, and a predetermined interpupillary distance; and rendering a first image previewed at the left-eye camera and a second image simultaneously previewed at the right-eye camera on a left-eye screen and a right-eye screen, respectively.

Description

Method and Apparatus for Presenting a Scene Through a Virtual Reality Device, and Virtual Reality Device
Technical Field
The present invention relates to the field of virtual reality technologies, and in particular, to a method and an apparatus for presenting a scene through a virtual reality device, and a virtual reality device.
Background Art
In recent years, virtual reality technology has developed rapidly. It is used not only to present virtual scenes that give the user a near-real sense of immersion, but also to present real-world scenes that give the user a realistic view of the real world. Accordingly, virtual reality devices that present scenes based on virtual reality technology, such as virtual reality helmets and virtual reality glasses, are attracting the attention of more and more users.
Current virtual reality devices typically present scenes using a dual-camera arrangement consisting of a left-eye camera and a right-eye camera. Unlike an ordinary single camera, the dual cameras of a virtual reality device can increase the depth of field of the corresponding scene image, so that the presented scene image has a stereoscopic effect, enhancing the sense of realism when the user views the real world.
When a virtual reality device presents a scene using dual cameras, a third-party open-source library such as the FFmpeg library has to be introduced to run a merging algorithm on every frame previewed by the left-eye and right-eye cameras in order to approximate the real world, before the result is presented to the user through the virtual reality device.
Summary of the Invention
According to a first aspect of the present invention, there is provided a method of presenting a scene by a virtual reality device, implemented on a virtual reality device, the virtual reality device comprising a left-eye imaging device and a right-eye imaging device, the left-eye imaging device being provided, in sequence, with a left-eye lens, a left-eye camera, a left-eye screen, and a left-eye eyepiece, and the right-eye imaging device being provided, in sequence, with a right-eye lens, a right-eye camera, a right-eye screen, and a right-eye eyepiece.
The method includes:
obtaining, respectively, imaging parameters of the left-eye imaging device and imaging parameters of the right-eye imaging device, the imaging parameters including at least the center vertical distance from the eyepiece to the camera in the corresponding imaging device and the refraction angle of the eyepiece;
obtaining a first offset distance of the left-eye imaging device according to the imaging parameters of the left-eye imaging device, and obtaining a second offset distance of the right-eye imaging device according to the imaging parameters of the right-eye imaging device;
adjusting the left-eye lens, the left-eye camera, the right-eye lens, and the right-eye camera according to the first offset distance, the second offset distance, and a preset interpupillary distance, such that the center horizontal distance between the left-eye lens and the right-eye lens is not greater than the interpupillary distance, the left-eye lens is horizontally offset from the center of the left-eye camera by the first offset distance, and the right-eye lens is horizontally offset from the center of the right-eye camera by the second offset distance; and
rendering a first image previewed by the left-eye camera on the left-eye screen, and correspondingly rendering a second image simultaneously previewed by the right-eye camera on the right-eye screen, so as to present the corresponding scene to the user through the corresponding eyepieces.
According to a second aspect of the present invention, there is provided a scene presenting device, disposed on the virtual reality device side.
The virtual reality device comprises a left-eye imaging device and a right-eye imaging device, the left-eye imaging device being provided, in sequence, with a left-eye lens, a left-eye camera, a left-eye screen, and a left-eye eyepiece, and the right-eye imaging device being provided, in sequence, with a right-eye lens, a right-eye camera, a right-eye screen, and a right-eye eyepiece.
The scene presenting device comprises:
a parameter obtaining unit, configured to respectively obtain imaging parameters of the left-eye imaging device and imaging parameters of the right-eye imaging device, the imaging parameters including at least the center vertical distance from the eyepiece to the camera in the corresponding imaging device and the refraction angle of the eyepiece;
an offset distance obtaining unit, configured to obtain a first offset distance of the left-eye imaging device according to the imaging parameters of the left-eye imaging device, and a second offset distance of the right-eye imaging device according to the imaging parameters of the right-eye imaging device;
a component adjusting unit, configured to adjust the left-eye lens, the left-eye camera, the right-eye lens, and the right-eye camera according to the first offset distance, the second offset distance, and a preset interpupillary distance, such that the center horizontal distance between the left-eye lens and the right-eye lens is not greater than the interpupillary distance, the left-eye lens is horizontally offset from the center of the left-eye camera by the first offset distance, and the right-eye lens is horizontally offset from the center of the right-eye camera by the second offset distance; and
an image rendering unit, configured to render a first image previewed by the left-eye camera on the left-eye screen, and correspondingly render a second image simultaneously previewed by the right-eye camera on the right-eye screen, so as to present the corresponding scene to the user through the corresponding eyepieces.
According to a third aspect of the present invention, there is provided a virtual reality device, comprising:
a left-eye imaging device, which is provided, in sequence, with a left-eye lens, a left-eye camera, a left-eye screen, and a left-eye eyepiece;
a right-eye imaging device, which is provided, in sequence, with a right-eye lens, a right-eye camera, a right-eye screen, and a right-eye eyepiece; and
any one of the scene presenting devices provided by the second aspect of the present invention.
With the method, device, and virtual reality device for presenting a scene through a virtual reality device according to an embodiment of the present invention, there is no need to introduce a third-party library to run a merging algorithm on every frame previewed by the left-eye and right-eye cameras, which reduces the implementation difficulty of the virtual reality device, reduces program size, and avoids problems of low operating efficiency and high power consumption. Moreover, the image refresh rate when presenting a scene is improved, enhancing the realism of the presentation. At the same time, duplication or overlapping of images when the scene is presented through the virtual reality device can be avoided, strengthening the depth-of-field effect of the corresponding images and further enhancing realism. In addition, the user's comfort in using the virtual reality device is improved. The approach is especially suitable for presenting real scenes of the real world. Other features and advantages of the present invention will become apparent from the following detailed description of exemplary embodiments of the present invention with reference to the accompanying drawings.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the present invention and, together with the description, serve to explain the principles of the present invention.
FIG. 1 is a block diagram showing an example of a hardware configuration of an electronic device that can be used to implement an embodiment of the present invention.
FIG. 2 shows a flowchart of a method of presenting a scene by a virtual reality device according to an embodiment of the present invention.
FIG. 3 shows a schematic diagram of an example of a method of presenting a scene by a virtual reality device according to an embodiment of the present invention.
FIG. 4 shows a schematic block diagram of a scene presenting device according to an embodiment of the present invention.
FIG. 5 shows a schematic block diagram of a virtual reality device according to an embodiment of the present invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that, unless specifically stated otherwise, the relative arrangement of components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the present invention.
The following description of at least one exemplary embodiment is merely illustrative and is in no way intended as a limitation of the present invention or of its application or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail, but, where appropriate, such techniques, methods, and apparatus should be considered part of the specification.
In all of the examples shown and discussed herein, any specific value should be interpreted as merely illustrative and not as a limitation. Therefore, other examples of the exemplary embodiments may have different values.
It should be noted that similar reference numerals and letters refer to similar items in the following figures; therefore, once an item has been defined in one figure, it need not be discussed further in subsequent figures.
<Hardware Configuration>
FIG. 1 is a block diagram showing a hardware configuration of an electronic device 1000 in which an embodiment of the present invention can be implemented.
In one example, the electronic device 1000 can be a virtual reality helmet, virtual reality glasses, or the like. As shown in FIG. 1, the electronic device 1000 may include a processor 1100, a memory 1200, an interface device 1300, a communication device 1400, a display device 1500, an input device 1600, a speaker 1700, a microphone 1800, and so on. The processor 1100 may be a central processing unit (CPU), a microcontroller unit (MCU), or the like. The memory 1200 includes, for example, a ROM (read-only memory), a RAM (random access memory), and a nonvolatile memory such as a hard disk. The interface device 1300 includes, for example, a USB interface and a headphone jack. The communication device 1400 can, for example, perform wired or wireless communication, and specifically can include Wi-Fi communication, Bluetooth communication, 2G/3G/4G/5G communication, and the like. The display device 1500 is, for example, a liquid crystal display or a touch display. The input device 1600 can include, for example, a touch screen, a keyboard, and somatosensory input. The user can input and output voice information through the speaker 1700 and the microphone 1800.
The electronic device shown in FIG. 1 is merely illustrative and is in no way meant to limit the present invention, its application, or uses. As applied in embodiments of the present invention, the memory 1200 of the electronic device 1000 is configured to store instructions for controlling the processor 1100 to operate so as to perform any of the methods of presenting a scene through a virtual reality device provided by the embodiments of the present invention. Those skilled in the art should understand that although a plurality of devices are shown for the electronic device 1000 in FIG. 1, the present invention may involve only some of them; for example, the electronic device 1000 may involve only the processor 1100 and the memory 1200. A technician can design the instructions according to the solution disclosed herein. How instructions control the processor to operate is well known in the art and is therefore not described in detail here.
<Embodiment>
In this embodiment, there is provided a method of presenting a scene by a virtual reality device, implemented on a virtual reality device. The virtual reality device includes a left-eye imaging device and a right-eye imaging device; the left-eye imaging device is provided, in sequence, with a left-eye lens, a left-eye camera, a left-eye screen, and a left-eye eyepiece, and the right-eye imaging device is provided, in sequence, with a right-eye lens, a right-eye camera, a right-eye screen, and a right-eye eyepiece.
Specifically, the left-eye imaging device and the right-eye imaging device in the virtual reality device may be two physically separate entities, or they may be physically integrated in part or in whole while remaining logically separate.
In one example, the virtual reality device may be virtual reality glasses or a virtual reality helmet.
The method of presenting a scene by a virtual reality device, as shown in FIG. 2, includes steps S2100 to S2400.
In step S2100, imaging parameters of the left-eye imaging device and imaging parameters of the right-eye imaging device are respectively obtained, the imaging parameters including at least the center vertical distance from the eyepiece to the camera in the corresponding imaging device and the refraction angle of the eyepiece.
The imaging parameters may be provided by the maker or manufacturer of the corresponding virtual reality device, pre-stored in a storage area of the virtual reality device, and exposed through an interface for retrieval; alternatively, they may be supplied as product parameters of the virtual reality device, for example in a manual or on the product's official website, and made available for download. The possibilities are not enumerated one by one here.
In step S2200, a first offset distance of the left-eye imaging device is obtained according to the imaging parameters of the left-eye imaging device, and a second offset distance of the right-eye imaging device is obtained according to the imaging parameters of the right-eye imaging device.
For example, the imaging parameters of the left-eye imaging device obtained in step S2100 include the center vertical distance h1 from the eyepiece to the camera of the left-eye imaging device and the refraction angle β1 of the eyepiece, and the imaging parameters of the right-eye imaging device include the center vertical distance h2 from the eyepiece to the camera of the right-eye imaging device and the refraction angle β2 of the eyepiece. Specifically, the steps of obtaining the first offset distance and the second offset distance include:
calculating the first offset distance w1 = h1 × tan β1 according to the center vertical distance h1 from the eyepiece to the camera of the left-eye imaging device and the refraction angle β1 of the eyepiece; and
calculating the second offset distance w2 = h2 × tan β2 according to the center vertical distance h2 from the eyepiece to the camera of the right-eye imaging device and the refraction angle β2 of the eyepiece.
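The two offset-distance calculations can be sketched in a few lines of Python. This is only an illustration: the helper name and the sample values of h and β are invented here, since the text fixes no concrete numbers.

```python
import math

def offset_distance(h, beta_deg):
    """Offset distance w = h * tan(beta), where h is the center vertical
    distance from the eyepiece to the camera and beta is the refraction
    angle of the eyepiece (taken in degrees here for readability)."""
    return h * math.tan(math.radians(beta_deg))

# Hypothetical imaging parameters for the two imaging devices:
w1 = offset_distance(40.0, 10.0)  # first offset distance (left eye)
w2 = offset_distance(42.0, 12.0)  # second offset distance (right eye)
```

The same helper serves both eyes; only the per-device parameters h and β differ.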
In step S2300, the left-eye lens, the left-eye camera, the right-eye lens, and the right-eye camera are adjusted according to the first offset distance, the second offset distance, and the preset interpupillary distance, such that the center horizontal distance between the left-eye lens and the right-eye lens is not greater than the interpupillary distance, the left-eye lens is horizontally offset from the center of the left-eye camera by the first offset distance, and the right-eye lens is horizontally offset from the center of the right-eye camera by the second offset distance.
Through the above adjustment, with the left-eye lens horizontally offset from the center of the left-eye camera by the first offset distance and the right-eye lens horizontally offset from the center of the right-eye camera by the second offset distance, duplication or overlapping of images when the scene is presented through the virtual reality device can be avoided, the depth-of-field effect of the images subsequently previewed by the left and right cameras is strengthened, and reduced comfort and even visual fatigue for the user are avoided. Further, keeping the center horizontal distance between the left-eye lens and the right-eye lens not greater than the interpupillary distance avoids the visual fatigue that a lens separation greater than the interpupillary distance would cause, further improving comfort of use.
For example, suppose the center vertical distance from the right-eye eyepiece to the right-eye camera is h1 and the refraction angle of the right-eye eyepiece is β1, and the center vertical distance from the left-eye eyepiece to the left-eye camera is h2 and the refraction angle of the left-eye eyepiece is β2; the first offset distance obtained in step S2200 is w1 and the second offset distance is w2. After the adjustment of step S2300 is performed, the components are arranged as shown in FIG. 3. Note that FIG. 3 shows the center horizontal distance between the left-eye lens and the right-eye lens equal to the interpupillary distance; this is merely illustrative. In other examples, the center horizontal distance between the left-eye lens and the right-eye lens may instead be a value close to the interpupillary distance.
The interpupillary distance may be an average interpupillary distance of typical users selected according to engineering experience or experimental simulation. Alternatively, it may be a value obtained through an interface provided by the virtual reality device involved in this embodiment for the user to operate or enter, so that a user actually using the virtual reality device can set the interpupillary distance according to his or her own needs or application scenario, achieving a personalized setting and further improving the user's comfort in using the virtual reality device.
In step S2400, the first image previewed by the left-eye camera is rendered on the left-eye screen, and correspondingly the second image simultaneously previewed by the right-eye camera is rendered on the right-eye screen, so as to present the corresponding scene to the user through the corresponding eyepieces.
Specifically, the scene may be a virtual scene, providing the user with a near-real sense of immersion, or a real scene of the real world, providing the user with a realistic view of the real world.
After the left-eye and right-eye cameras have been arranged through step S2300, in step S2400 the images simultaneously previewed by the left and right cameras are rendered on the corresponding left-eye and right-eye screens respectively, directly presenting the corresponding scene to the user. There is thus no need to introduce a third-party library to run a merging algorithm on every frame previewed by the left and right cameras, which reduces the implementation difficulty of the virtual reality device, reduces program size, and avoids problems of low operating efficiency and high power consumption. Moreover, the image refresh rate when presenting the scene is improved, enhancing realism. This is particularly suitable for presenting real scenes of the real world with a stereoscopic sense of reality.
Specifically, the rendering operation can be implemented with the rendering functionality provided by Unity3D, a fully integrated professional game engine developed by Unity Technologies that is commonly used to develop the scenes presented by virtual reality devices; it is not described further here.
In this embodiment, the rendering functionality provided by Unity3D can thus be used to render each frame simultaneously previewed by the left-eye and right-eye cameras onto the corresponding left-eye and right-eye screens respectively, so as to present the corresponding scene.
In one example, the image center of each frame simultaneously previewed by the left and right cameras may additionally be offset before the rendering operation is performed, to enhance the depth-of-field effect of the corresponding images, so that the correspondingly presented scene (especially a real scene of the real world) is more realistic. Correspondingly, the method of presenting a scene through a virtual reality device provided in this example further includes:
obtaining a first horizontal offset distance based on the center vertical distance from the eyepiece to the camera of the left-eye imaging device, the first offset distance, and the image height of the first image, and horizontally offsetting the center of the first image by the first horizontal offset distance to obtain a new image center before performing the rendering operation;
and
obtaining a second horizontal offset distance based on the center vertical distance from the eyepiece to the camera of the right-eye imaging device, the second offset distance, and the image height of the second image, and horizontally offsetting the center of the second image by the second horizontal offset distance to obtain a new image center before performing the rendering operation.
More specifically, let h1 be the center vertical distance from the eyepiece to the camera of the left-eye imaging device, w1 the first offset distance, and H1 the image height of the first image; and let h2 be the center vertical distance from the eyepiece to the camera of the right-eye imaging device, w2 the second offset distance, and H2 the image height of the second image. Correspondingly, the first horizontal offset distance is d1 = (w1/h1) × (H1/2), and the second horizontal offset distance is d2 = (w2/h2) × (H2/2).
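The horizontal image-center offset d = (w/h) × (H/2) can be sketched as a single helper. This is a hedged illustration only; the sample numbers are invented, as the text gives none.

```python
def horizontal_center_offset(h, w, image_height):
    """d = (w / h) * (H / 2): the horizontal distance, in the image's own
    units, by which the preview image's center is shifted before the
    frame is rendered onto its screen."""
    return (w / h) * (image_height / 2.0)

# Hypothetical example: eyepiece-to-camera distance 40, offset distance 8,
# image height 1080 -> the image center moves (8/40) * 540 = 108 pixels.
d1 = horizontal_center_offset(40.0, 8.0, 1080)
```

Because w = h × tan β, the ratio w/h reduces to tan β, so the shift scales linearly with image height.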
In another example, the objects outside the contour of the center object of each frame simultaneously previewed by the left and right cameras may be defocused or blurred before the rendering operation is performed, to enhance the depth-of-field effect of the corresponding images, so that the correspondingly presented scene (especially a real scene of the real world) is more realistic. Correspondingly, the method of presenting a scene through a virtual reality device provided in this example further includes:
after determining the center object of the first image based on the image center point of the first image, performing contour detection on the center object to obtain the corresponding center-object contour, and defocusing or blurring the objects of the first image outside the center-object contour before performing the rendering operation; and
after determining the center object of the second image based on the image center point of the second image, performing contour detection on the center object to obtain the corresponding center-object contour, and defocusing or blurring the objects of the second image outside the center-object contour before performing the rendering operation.
The contour detection can be implemented with the function findContours() provided by OpenCV, an open-source cross-platform computer vision library; it is not described further here.
此外,应当理解的是,在具体应用中,还可以对左、右眼摄像头预览的每一帧图像的图像中心进行偏移后,对图像中心对象的轮廓之外的对象进行虚化或者模糊处理,再执行所述渲染操作,以更好地增强对应的图像的景深效果,使得对应呈现对应的场景更具有真实感。In addition, it should be understood that, in a specific application, the image center of each frame image previewed by the left- and right-eye cameras may first be shifted, objects outside the contour of the image-center object may then be defocused or blurred, and the rendering operation may then be performed, so as to better enhance the depth-of-field effect of the corresponding image and make the presented scene more realistic.
在本实施例中,还提供一种场景呈现设备3000,如图3所示,设置于虚拟现实设备4000侧,所述场景呈现设备3000包括参数获取单元3100、偏移距离获取单元3200、元件调整单元3300以及图像渲染单元3400,可选地,还包括图像中心偏移单元3500以及轮廓处理单元3600,用于实施本实施例中提供的通过虚拟现实设备呈现场景的方法,在此不再赘述。In this embodiment, a scene presentation device 3000 is further provided. As shown in FIG. 3, it is disposed on the virtual reality device 4000 side. The scene presentation device 3000 includes a parameter acquisition unit 3100, an offset distance acquisition unit 3200, a component adjustment unit 3300, and an image rendering unit 3400, and optionally further includes an image center offset unit 3500 and a contour processing unit 3600, for implementing the method for presenting a scene through a virtual reality device provided in this embodiment; details are not repeated here.
具体地,所述虚拟现实设备4000包括左眼成像设备4100以及右眼成像设备4200,所述左眼成像设备4100依次设置有左眼透镜4101、左眼摄像头4102、左眼屏幕4103以及左眼镜头4104,所述右眼成像设备4200依次设置有右眼透镜4201、右眼摄像头4202、右眼屏幕4203以及右眼镜头4204。Specifically, the virtual reality device 4000 includes a left-eye imaging device 4100 and a right-eye imaging device 4200. The left-eye imaging device 4100 is sequentially provided with a left-eye lens 4101, a left-eye camera 4102, a left-eye screen 4103, and a left-eye lens head 4104; the right-eye imaging device 4200 is sequentially provided with a right-eye lens 4201, a right-eye camera 4202, a right-eye screen 4203, and a right-eye lens head 4204.
所述场景呈现设备3000包括:The scene rendering device 3000 includes:
参数获取单元3100,用于分别获取左眼成像设备的成像参数以及右眼成像设备的成像参数,所述成像参数至少包括对应的成像设备中镜头至摄像头的中心垂直距离以及镜头的折射夹角;The parameter obtaining unit 3100 is configured to respectively acquire imaging parameters of the left-eye imaging device and imaging parameters of the right-eye imaging device, where the imaging parameters include at least a central vertical distance of the lens from the corresponding imaging device to the camera and a refractive angle of the lens;
偏移距离获取单元3200,用于根据左眼成像设备的成像参数获取左眼成像设备的第一偏移距离,以及根据右眼成像设备的成像参数获取右眼成像设备的第二偏移距离;The offset distance acquiring unit 3200 is configured to acquire a first offset distance of the left eye imaging device according to the imaging parameter of the left eye imaging device, and acquire a second offset distance of the right eye imaging device according to the imaging parameter of the right eye imaging device;
元件调整单元3300,用于根据所述第一偏移距离、第二偏移距离以及预设的人眼瞳距,调整所述左眼透镜、左眼摄像头、右眼透镜和右眼摄像头,使得所述左眼透镜和右眼透镜的中心水平距离不大于所述人眼瞳距、所述左眼透镜与左眼摄像头中心水平偏移第一偏移距离且所述右眼透镜与右眼摄像头中心水平偏移第二偏移距离;The component adjustment unit 3300 is configured to adjust the left-eye lens, the left-eye camera, the right-eye lens, and the right-eye camera according to the first offset distance, the second offset distance, and a preset human-eye pupil distance, such that the center horizontal distance between the left-eye lens and the right-eye lens is not greater than the pupil distance, the left-eye lens is horizontally offset from the center of the left-eye camera by the first offset distance, and the right-eye lens is horizontally offset from the center of the right-eye camera by the second offset distance;
图像渲染单元3400,用于将左眼摄像头预览的第一图像渲染在左眼屏幕上,以及对应将右眼摄像头同时预览的第二图像渲染在右眼屏幕上,以通过对应的镜头向用户呈现对应的场景。The image rendering unit 3400 is configured to render a first image previewed by the left-eye camera on the left-eye screen, and correspondingly render a second image simultaneously previewed by the right-eye camera on the right-eye screen, so as to present the corresponding scene to the user through the corresponding lens heads.
可选地,所述偏移距离获取单元3200包括:Optionally, the offset distance obtaining unit 3200 includes:
用于根据左眼成像设备的镜头至摄像头的中心垂直距离h1以及镜头的折射夹角β1,计算第一偏移距离w1=h1×tgβ1的装置;以及Means for calculating the first offset distance w1=h1×tgβ1 according to the center vertical distance h1 from the lens of the left-eye imaging device to the camera and the refractive angle β1 of the lens; and

用于根据右眼成像设备的镜头至摄像头的中心垂直距离h2以及镜头的折射夹角β2,计算第二偏移距离w2=h2×tgβ2的装置。Means for calculating the second offset distance w2=h2×tgβ2 according to the center vertical distance h2 from the lens of the right-eye imaging device to the camera and the refractive angle β2 of the lens.
可选地,所述场景呈现设备3000还包括图像中心偏移单元3500,用于:Optionally, the scene presentation device 3000 further includes an image center offset unit 3500, configured to:
基于左眼成像设备的镜头至摄像头的中心垂直距离、第一偏移距离以及所述第一图像的图像高度获取第一水平偏移距离,基于所述第一水平偏移距离将所述第一图像的中心进行水平偏移,得到新的图像中心,再执行所述渲染操作;Obtaining a first horizontal offset distance according to the center vertical distance from the lens of the left-eye imaging device to the camera, the first offset distance, and the image height of the first image; horizontally shifting the center of the first image based on the first horizontal offset distance to obtain a new image center; and then performing the rendering operation;
以及,as well as,
根据右眼成像设备的镜头至摄像头的中心垂直距离、第二偏移距离以及第二图像的图像高度获取第二水平偏移距离,基于所述第二水平偏移距离将所述第二图像的中心进行水平偏移,得到新的图像中心,再执行所述渲染操作。Obtaining a second horizontal offset distance according to the center vertical distance from the lens of the right-eye imaging device to the camera, the second offset distance, and the image height of the second image; horizontally shifting the center of the second image based on the second horizontal offset distance to obtain a new image center; and then performing the rendering operation.
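以下为将图像中心水平偏移 d 个像素的一个简化示意(偏移方向约定及边缘补零而非裁剪均为假设,本文未作限定):A simplified sketch of shifting an image's center horizontally by d pixels; the sign convention of the shift and the use of zero padding rather than cropping are assumptions, as the disclosure does not specify them:

```python
import numpy as np

def shift_image_center(image, d):
    """Translate the image horizontally by d pixels, zero-padding the
    vacated edge. Positive d moves the image content leftward, so the
    effective image center moves by d pixels."""
    h, w = image.shape[:2]
    d = int(round(d))
    if d == 0:
        return image.copy()
    shifted = np.zeros_like(image)
    if d > 0:
        shifted[:, :w - d] = image[:, d:]
    else:
        shifted[:, -d:] = image[:, :w + d]
    return shifted
```

实际实现中也可以改用 cv2.warpAffine 等仿射平移接口。A production implementation could equally use an affine-translation API such as cv2.warpAffine.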
可选地,所述场景呈现设备3000还包括轮廓处理单元3600,用于:Optionally, the scene presentation device 3000 further includes a contour processing unit 3600, configured to:
基于所述第一图像的图像中心点确定所述第一图像的中心对象后,对所述中心对象进行轮廓检测以得到对应的中心对象轮廓,并对所述中心对象轮廓之外的所述第一图像包括的对象进行虚化或者模糊处理后,再执行所述渲染操作;以及,After determining the center object of the first image based on the image center point of the first image, performing contour detection on the center object to obtain a corresponding center-object contour, applying defocus or blur processing to the objects of the first image outside the center-object contour, and then performing the rendering operation; and,

基于所述第二图像的图像中心点确定所述第二图像的中心对象后,对所述中心对象进行轮廓检测以得到对应的中心对象轮廓,并对所述中心对象轮廓之外的所述第二图像包括的对象进行虚化或者模糊处理后,再执行所述渲染操作。After determining the center object of the second image based on the image center point of the second image, performing contour detection on the center object to obtain a corresponding center-object contour, applying defocus or blur processing to the objects of the second image outside the center-object contour, and then performing the rendering operation.
应当理解的,图3所示的场景呈现设备3000与虚拟现实设备4000的连接关系仅是示意性的,并不是具体的限制。场景呈现设备3000可以设置于或者集成于虚拟现实设备4000中,也可以独立于虚拟现实设备4000之外,通过无线或者有线连接形式与虚拟现实设备4000连接,配合执行本实施例的场景呈现方法,在此不一一列举。It should be understood that the connection relationship between the scene presentation device 3000 and the virtual reality device 4000 shown in FIG. 3 is merely illustrative and not a specific limitation. The scene presentation device 3000 may be disposed in or integrated into the virtual reality device 4000, or may be independent of the virtual reality device 4000 and connected to it in a wireless or wired manner, cooperating with it to perform the scene presentation method of this embodiment; the possibilities are not enumerated one by one here.
此外,图3所示的虚拟现实设备4000所包含的左眼成像设备4100以及右眼成像设备4200仅是示意性的,并不必然意味着左眼成像设备4100以及右眼成像设备4200是物理分离的两个实体设备,所述左眼成像设备4100以及右眼成像设备4200可以是逻辑分离的,例如,在具体实施时,与左眼成像设备4100对应的左眼透镜4101、左眼摄像头4102、左眼屏幕4103、左眼镜头4104以及与右眼呈现设备4200对应的右眼透镜4201、右眼摄像头4202、右眼屏幕4203以及右眼镜头4204,可以是集成或内置在同一个实体设备内的元件,并不以物理的分隔来设置对应的左眼成像设备4100以及右眼成像设备4200。In addition, the left-eye imaging device 4100 and the right-eye imaging device 4200 included in the virtual reality device 4000 shown in FIG. 3 are merely illustrative and do not necessarily mean two physically separate entities; they may be logically separated. For example, in a specific implementation, the left-eye lens 4101, left-eye camera 4102, left-eye screen 4103, and left-eye lens head 4104 corresponding to the left-eye imaging device 4100, and the right-eye lens 4201, right-eye camera 4202, right-eye screen 4203, and right-eye lens head 4204 corresponding to the right-eye presentation device 4200, may be components integrated or built into one and the same physical device, without physically separating the corresponding left-eye imaging device 4100 and right-eye imaging device 4200.
在一个具体的例子中,所述场景呈现设备3000的硬件配置可以如图1所示的电子设备1000。In a specific example, the hardware configuration of the scene rendering device 3000 may be the electronic device 1000 as shown in FIG. 1.
本领域技术人员也应当明白,可以通过各种方式来实现场景呈现设备3000。例如,可以通过指令配置处理器来实现场景呈现设备3000。例如,可以将指令存储在ROM中,并且当启动设备时,将指令从ROM读取到可编程器件中来实现场景呈现设备3000。例如,可以将场景呈现设备3000固化到专用器件(例如ASIC)中。可以将场景呈现设备3000分成相互独立的单元,或者可以将它们合并在一起实现。场景呈现设备3000可以通过上述各种实现方式中的一种来实现,或者可以通过上述各种实现方式中的两种或更多种方式的组合来实现。Those skilled in the art should also understand that the scene presentation device 3000 can be implemented in various ways. For example, the scene presentation device 3000 can be implemented by configuring a processor with instructions. For example, the instructions can be stored in a ROM, and when the device is started, the instructions are read from the ROM into a programmable device to implement the scene presentation device 3000. For example, the scene presentation device 3000 can be burned into a dedicated device (e.g., an ASIC). The scene presentation device 3000 can be divided into mutually independent units, or these units can be combined and implemented together. The scene presentation device 3000 may be implemented by one of the above implementations, or by a combination of two or more of them.
<虚拟现实设备><virtual reality device>
在本实施例中,还提供一种虚拟现实设备5000,如图5所示,包括:In this embodiment, a virtual reality device 5000 is further provided, as shown in FIG. 5, including:
左眼成像设备5100,所述左眼成像设备5100依次设置有左眼透镜5101、左眼摄像头5102、左眼屏幕5103以及左眼镜头5104;a left eye imaging device 5100, the left eye imaging device 5100 is sequentially provided with a left eye lens 5101, a left eye camera 5102, a left eye screen 5103, and a left lens 5104;
右眼成像设备5200,所述右眼成像设备5200依次设置有右眼透镜5201、右眼摄像头5202、右眼屏幕5203以及右眼镜头5204;以及 a right-eye imaging device 5200, which is sequentially provided with a right-eye lens 5201, a right-eye camera 5202, a right-eye screen 5203, and a right-eye lens 5204;
本实施例中提供的虚拟现实场景呈现设备3000。The virtual-reality scene presentation device 3000 provided in this embodiment.
具体地,虚拟现实设备5000可以是虚拟现实头盔或者虚拟现实眼镜等。在一个具体例子中,所述虚拟现实设备5000可以如图1所示电子设备1000。Specifically, the virtual reality device 5000 may be a virtual reality helmet or virtual reality glasses or the like. In a specific example, the virtual reality device 5000 can be the electronic device 1000 as shown in FIG.
此外,图5所示的虚拟现实设备5000所包含的左眼成像设备5100以及右眼成像设备5200仅是示意性的,并不必然意味着左眼成像设备5100以及右眼成像设备5200是物理分离的两个实体设备,所述左眼成像设备5100以及右眼成像设备5200可以是逻辑分离的,例如,在具体实施时,所述虚拟现实设备5000可以包含依次设置的左眼透镜5101、左眼摄像头5102、左眼屏幕5103、左眼镜头5104,以及对应依次设置的右眼透镜5201、右眼摄像头5202、右眼屏幕5203以及右眼镜头5204,并不以物理的分隔来设置对应的左眼成像设备5100以及右眼成像设备5200。In addition, the left-eye imaging device 5100 and the right-eye imaging device 5200 included in the virtual reality device 5000 shown in FIG. 5 are merely illustrative and do not necessarily mean two physically separate entities; they may be logically separated. For example, in a specific implementation, the virtual reality device 5000 may include a sequentially arranged left-eye lens 5101, left-eye camera 5102, left-eye screen 5103, and left-eye lens head 5104, and a correspondingly arranged right-eye lens 5201, right-eye camera 5202, right-eye screen 5203, and right-eye lens head 5204, without physically separating the corresponding left-eye imaging device 5100 and right-eye imaging device 5200.
以上已结合附图说明本实施例,本实施例中提供一种通过虚拟现实设备呈现场景的方法、设备及虚拟现实设备,对虚拟现实设备中包含左、右眼摄像头执行调整操作后再分别将左、右眼摄像头同时预览的图像对应渲染在对应的左、右眼镜头上,使得无需引入第三方程序库对左、右眼摄像头预览的每一帧图像进行合并算法处理,从而降低虚拟现实设备的实现难度,减小程序大小,避免带来运行效率低和功耗大的问题。并且,在呈现场景时提高图像刷新速度,增强场景呈现时的真实感。同时,可以避免通过所述虚拟现实设备呈现场景时图像出现重复或重叠,强化对应图像的景深效果,增强场景呈现时的真实感。此外,还能提升用户使用虚拟现实设备的舒适度。尤其适用于呈现现实世界的真实场景。The present embodiment has been described above with reference to the accompanying drawings. This embodiment provides a method, a device, and a virtual reality device for presenting a scene through a virtual reality device. After an adjustment operation is performed on the left- and right-eye cameras included in the virtual reality device, the images simultaneously previewed by the left- and right-eye cameras are respectively rendered onto the corresponding left- and right-eye lens heads, so that no third-party library needs to be introduced to run a merging algorithm on each previewed frame, thereby reducing the implementation difficulty of the virtual reality device, reducing program size, and avoiding problems of low operating efficiency and high power consumption. Moreover, the image refresh speed is improved when a scene is presented, enhancing the realism of the presentation. At the same time, duplication or overlap of images when a scene is presented through the virtual reality device can be avoided, the depth-of-field effect of the corresponding images is strengthened, and the realism of the scene presentation is enhanced. In addition, the comfort of users of the virtual reality device is improved. The method is especially suitable for presenting real scenes of the real world.
本发明可以是系统、方法和/或计算机程序产品。计算机程序产品可以包括计算机可读存储介质,其上载有用于使处理器实现本发明的各个方面的计算机可读程序指令。The invention can be a system, method and/or computer program product. The computer program product can comprise a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement various aspects of the present invention.
计算机可读存储介质可以是可以保持和存储由指令执行设备使用的指令的有形设备。计算机可读存储介质例如可以是――但不限于――电存储设备、磁存储设备、光存储设备、电磁存储设备、半导体存储设备或者上述的任意合适的组合。计算机可读存储介质的更具体的例子(非穷举的列表)包括:便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、静态随机存取存储器(SRAM)、便携式压缩盘只读存储器(CD-ROM)、数字多功能盘(DVD)、记忆棒、软盘、机械编码设备、例如其上存储有指令的打孔卡或凹槽内凸起结构、以及上述的任意合适的组合。这里所使用的计算机可读存储介质不被解释为瞬时信号本身,诸如无线电波或者其他自由传播的电磁波、通过波导或其他传输媒介传播的电磁波(例如,通过光纤电缆的光脉冲)、或者通过电线传输的电信号。The computer readable storage medium may be a tangible device that can hold and store instructions used by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of computer readable storage media include: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile disc (DVD), memory stick, floppy disk, mechanical encoding device such as a punch card or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as a transient signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., a light pulse through a fiber optic cable), or an electrical signal transmitted through a wire.
这里所描述的计算机可读程序指令可以从计算机可读存储介质下载到各个计算/处理设备,或者通过网络、例如因特网、局域网、广域网和/或无线网下载到外部计算机或外部存储设备。网络可以包括铜传输电缆、光纤传输、无线传输、路由器、防火墙、交换机、网关计算机和/或边缘服务器。每个计算/处理设备中的网络适配卡或者网络接口从网络接收计算机可读程序指令,并转发该计算机可读程序指令,以供存储在各个计算/处理设备中的计算机可读存储介质中。The computer readable program instructions described herein can be downloaded from a computer readable storage medium to various computing/processing devices or downloaded to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in each computing/processing device .
用于执行本发明操作的计算机程序指令可以是汇编指令、指令集架构(ISA)指令、机器指令、机器相关指令、微代码、固件指令、状态设置数据、或者以一种或多种编程语言的任意组合编写的源代码或目标代码,所述编程语言包括面向对象的编程语言—诸如Smalltalk、C++等,以及常规的过程式编程语言—诸如“C”语言或类似的编程语言。计算机可读程序指令可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络—包括局域网(LAN)或广域网(WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。在一些实施例中,通过利用计算机可读程序指令的状态信息来个性化定制电子电路,例如可编程逻辑电路、现场可编程门阵列(FPGA)或可编程逻辑阵列(PLA),该电子电路可以执行计算机可读程序指令,从而实现本发明的各个方面。Computer program instructions for performing the operations of the present invention may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider). In some embodiments, an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), may be personalized by utilizing the state information of the computer readable program instructions, and the electronic circuit may execute the computer readable program instructions to implement various aspects of the present invention.
这里参照根据本发明实施例的方法、装置(系统)和计算机程序产品的流程图和/或框图描述了本发明的各个方面。应当理解,流程图和/或框图的每个方框以及流程图和/或框图中各方框的组合,都可以由计算机可读程序指令实现。Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus, and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams can be implemented by computer readable program instructions.
这些计算机可读程序指令可以提供给通用计算机、专用计算机或其它可编程数据处理装置的处理器,从而生产出一种机器,使得这些指令在通过计算机或其它可编程数据处理装置的处理器执行时,产生了实现流程图和/或框图中的一个或多个方框中规定的功能/动作的装置。也可以把这些计算机可读程序指令存储在计算机可读存储介质中,这些指令使得计算机、可编程数据处理装置和/或其他设备以特定方式工作,从而,存储有指令的计算机可读介质则包括一个制造品,其包括实现流程图和/或框图中的一个或多个方框中规定的功能/动作的各个方面的指令。These computer readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams. These computer readable program instructions may also be stored in a computer readable storage medium; these instructions cause a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein comprises an article of manufacture including instructions that implement aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
也可以把计算机可读程序指令加载到计算机、其它可编程数据处理装置、或其它设备上,使得在计算机、其它可编程数据处理装置或其它设备上执行一系列操作步骤,以产生计算机实现的过程,从而使得在计算机、其它可编程数据处理装置、或其它设备上执行的指令实现流程图和/或框图中的一个或多个方框中规定的功能/动作。The computer readable program instructions can also be loaded onto a computer, other programmable data processing device, or other device to perform a series of operational steps on a computer, other programmable data processing device or other device to produce a computer-implemented process. Thus, instructions executed on a computer, other programmable data processing apparatus, or other device implement the functions/acts recited in one or more of the flowcharts and/or block diagrams.
附图中的流程图和框图显示了根据本发明的多个实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段或指令的一部分,所述模块、程序段或指令的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个连续的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或动作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。对于本领域技术人员来说公知的是,通过硬件方式实现、通过软件方式实现以及通过软件和硬件结合的方式实现都是等价的。The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowcharts or block diagrams may represent a module, program segment, or portion of an instruction, which contains one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the figures. For example, two consecutive blocks may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or acts, or by a combination of dedicated hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are all equivalent.
以上已经描述了本发明的各实施例,上述说明是示例性的,并非穷尽性的,并且也不限于所披露的各实施例。在不偏离所说明的各实施例的范围和精神的情况下,对于本技术领域的普通技术人员来说许多修改和变更都是显而易见的。本文中所用术语的选择,旨在最好地解释各实施例的原理、实际应用或对市场中的技术改进,或者使本技术领域的其它普通技术人员能理解本文披露的各实施例。本发明的范围由所附权利要求来限定。 The embodiments of the present invention have been described above, and the foregoing description is illustrative, not limiting, and not limited to the disclosed embodiments. Numerous modifications and changes will be apparent to those skilled in the art without departing from the scope of the invention. The choice of terms used herein is intended to best explain the principles, practical applications, or technical improvements in the various embodiments of the embodiments, or to enable those of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (10)

  1. 一种通过虚拟现实设备呈现场景的方法,实施于虚拟现实设备上,其特征在于,A method for presenting a scene by using a virtual reality device, implemented on a virtual reality device, wherein
所述虚拟现实设备包括左眼成像设备以及右眼成像设备,所述左眼成像设备依次设置有左眼透镜、左眼摄像头、左眼屏幕以及左眼镜头,所述右眼成像设备依次设置有右眼透镜、右眼摄像头、右眼屏幕以及右眼镜头;the virtual reality device includes a left-eye imaging device and a right-eye imaging device, the left-eye imaging device is sequentially provided with a left-eye lens, a left-eye camera, a left-eye screen, and a left-eye lens head, and the right-eye imaging device is sequentially provided with a right-eye lens, a right-eye camera, a right-eye screen, and a right-eye lens head;
    所述方法包括:The method includes:
    分别获取左眼成像设备的成像参数以及右眼成像设备的成像参数,所述成像参数至少包括对应的成像设备中镜头至摄像头的中心垂直距离以及镜头的折射夹角;Obtaining an imaging parameter of the left-eye imaging device and an imaging parameter of the right-eye imaging device, where the imaging parameter includes at least a central vertical distance of the lens to the camera in the corresponding imaging device and a refractive angle of the lens;
    根据左眼成像设备的成像参数获取左眼成像设备的第一偏移距离,以及根据右眼成像设备的成像参数获取右眼成像设备的第二偏移距离;Obtaining a first offset distance of the left-eye imaging device according to imaging parameters of the left-eye imaging device, and acquiring a second offset distance of the right-eye imaging device according to imaging parameters of the right-eye imaging device;
根据所述第一偏移距离、第二偏移距离以及预设的人眼瞳距,调整所述左眼透镜、左眼摄像头、右眼透镜和右眼摄像头,使得所述左眼透镜和右眼透镜的中心水平距离不大于所述人眼瞳距、所述左眼透镜与左眼摄像头中心水平偏移第一偏移距离且所述右眼透镜与右眼摄像头中心水平偏移第二偏移距离;adjusting the left-eye lens, left-eye camera, right-eye lens, and right-eye camera according to the first offset distance, the second offset distance, and a preset human-eye pupil distance, such that the center horizontal distance between the left-eye lens and the right-eye lens is not greater than the pupil distance, the left-eye lens is horizontally offset from the center of the left-eye camera by the first offset distance, and the right-eye lens is horizontally offset from the center of the right-eye camera by the second offset distance;
    将左眼摄像头预览的第一图像渲染在左眼屏幕上,以及对应将右眼摄像头同时预览的第二图像渲染在右眼屏幕上,以通过对应的镜头向用户呈现对应的场景。The first image previewed by the left eye camera is rendered on the left eye screen, and the second image corresponding to the simultaneous preview of the right eye camera is rendered on the right eye screen to present the corresponding scene to the user through the corresponding lens.
  2. 根据权利要求1所述的方法,其特征在于,所述获取第一偏移距离和第二偏移距离的步骤包括:The method according to claim 1, wherein the step of acquiring the first offset distance and the second offset distance comprises:
    根据左眼成像设备的镜头至摄像头的中心垂直距离h1以及镜头的折射夹角β1,计算第一偏移距离w1=h1×tgβ1;以及,Calculating a first offset distance w 1 =h 1 ×tgβ 1 according to a center vertical distance h 1 of the lens of the left eye imaging device to the camera and a refractive angle β 1 of the lens;
    根据右眼成像设备的镜头至摄像头的中心垂直距离h2以及镜头的折射夹角β2,计算第二偏移距离w2=h2×tgβ2The second offset distance w 2 =h 2 ×tgβ 2 is calculated according to the center vertical distance h 2 of the lens of the right eye imaging device to the camera and the refractive angle β 2 of the lens.
  3. 根据权利要求1或2所述的方法,其特征在于,还包括:The method according to claim 1 or 2, further comprising:
基于左眼成像设备的镜头至摄像头的中心垂直距离、第一偏移距离以及所述第一图像的图像高度获取第一水平偏移距离,基于所述第一水平偏移距离将所述第一图像的中心进行水平偏移,得到新的图像中心,再执行所述渲染操作;obtaining a first horizontal offset distance according to the center vertical distance from the lens of the left-eye imaging device to the camera, the first offset distance, and the image height of the first image; horizontally shifting the center of the first image based on the first horizontal offset distance to obtain a new image center; and then performing the rendering operation;
    以及,as well as,
根据右眼成像设备的镜头至摄像头的中心垂直距离、第二偏移距离以及第二图像的图像高度获取第二水平偏移距离,基于所述第二水平偏移距离将所述第二图像的中心进行水平偏移,得到新的图像中心,再执行所述渲染操作。obtaining a second horizontal offset distance according to the center vertical distance from the lens of the right-eye imaging device to the camera, the second offset distance, and the image height of the second image; horizontally shifting the center of the second image based on the second horizontal offset distance to obtain a new image center; and then performing the rendering operation.
  4. 根据权利要求1-3中任一项所述的方法,其特征在于,A method according to any one of claims 1 to 3, characterized in that
    所述第一水平偏移距离为d1=(w1/h1)×(H1/2),The first horizontal offset distance is d 1 = (w 1 /h 1 )×(H 1 /2),
    其中,h1为左眼成像设备的镜头至摄像头的中心垂直距离、w1为第一偏移距离以及H1为第一图像的图像高度;Wherein h 1 is the center vertical distance from the lens of the left eye imaging device to the camera, w 1 is the first offset distance, and H 1 is the image height of the first image;
    所述第二水平偏移距离为d2=(w2/h2)×(H2/2),The second horizontal offset distance is d 2 = (w 2 /h 2 ) × (H 2 /2),
其中,h2为右眼成像设备的镜头至摄像头的中心垂直距离、w2为第二偏移距离以及H2为第二图像的图像高度。where h2 is the center vertical distance from the lens of the right-eye imaging device to the camera, w2 is the second offset distance, and H2 is the image height of the second image.
  5. 根据权利要求1-4中任一项所述的方法,其特征在于,还包括:The method according to any one of claims 1 to 4, further comprising:
基于所述第一图像的图像中心点确定所述第一图像的中心对象后,对所述中心对象进行轮廓检测以得到对应的中心对象轮廓,并对所述中心对象轮廓之外的所述第一图像包括的对象进行虚化或者模糊处理后,再执行所述渲染操作;以及,after determining the center object of the first image based on the image center point of the first image, performing contour detection on the center object to obtain a corresponding center-object contour, applying defocus or blur processing to the objects of the first image outside the center-object contour, and then performing the rendering operation; and,

基于所述第二图像的图像中心点确定所述第二图像的中心对象后,对所述中心对象进行轮廓检测以得到对应的中心对象轮廓,并对所述中心对象轮廓之外的所述第二图像包括的对象进行虚化或者模糊处理后,再执行所述渲染操作。after determining the center object of the second image based on the image center point of the second image, performing contour detection on the center object to obtain a corresponding center-object contour, applying defocus or blur processing to the objects of the second image outside the center-object contour, and then performing the rendering operation.
  6. 一种场景呈现设备,其特征在于,设置于虚拟现实设备侧,A scene presenting device, which is disposed on a virtual reality device side,
所述虚拟现实设备包括左眼成像设备以及右眼成像设备,所述左眼成像设备依次设置有左眼透镜、左眼摄像头、左眼屏幕以及左眼镜头,所述右眼成像设备依次设置有右眼透镜、右眼摄像头、右眼屏幕以及右眼镜头;the virtual reality device includes a left-eye imaging device and a right-eye imaging device, the left-eye imaging device is sequentially provided with a left-eye lens, a left-eye camera, a left-eye screen, and a left-eye lens head, and the right-eye imaging device is sequentially provided with a right-eye lens, a right-eye camera, a right-eye screen, and a right-eye lens head;
    所述场景呈现设备包括:The scene rendering device includes:
    参数获取单元,用于分别获取左眼成像设备的成像参数以及右眼成像设备的成像参数,所述成像参数至少包括对应的成像设备中镜头至摄像头的中心垂直距离以及镜头的折射夹角;a parameter obtaining unit, configured to respectively acquire an imaging parameter of the left-eye imaging device and an imaging parameter of the right-eye imaging device, where the imaging parameter includes at least a central vertical distance of the lens to the camera in the corresponding imaging device and a refractive angle of the lens;
    偏移距离获取单元,用于根据左眼成像设备的成像参数获取左眼成像设备的第一偏移距离,以及根据右眼成像设备的成像参数获取右眼成像设备的第二偏移距离;An offset distance acquisition unit configured to acquire a first offset distance of the left-eye imaging device according to an imaging parameter of the left-eye imaging device, and acquire a second offset distance of the right-eye imaging device according to an imaging parameter of the right-eye imaging device;
元件调整单元,用于根据所述第一偏移距离、第二偏移距离以及预设的人眼瞳距,调整所述左眼透镜、左眼摄像头、右眼透镜和右眼摄像头,使得所述左眼透镜和右眼透镜的中心水平距离不大于所述人眼瞳距、所述左眼透镜与左眼摄像头中心水平偏移第一偏移距离且所述右眼透镜与右眼摄像头中心水平偏移第二偏移距离;a component adjustment unit, configured to adjust the left-eye lens, the left-eye camera, the right-eye lens, and the right-eye camera according to the first offset distance, the second offset distance, and a preset human-eye pupil distance, such that the center horizontal distance between the left-eye lens and the right-eye lens is not greater than the pupil distance, the left-eye lens is horizontally offset from the center of the left-eye camera by the first offset distance, and the right-eye lens is horizontally offset from the center of the right-eye camera by the second offset distance;
    图像渲染单元,用于将左眼摄像头预览的第一图像渲染在左眼屏幕上,以及对应将右眼摄像头同时预览的第二图像渲染在右眼屏幕上,以通过对应的镜头向用户呈现对应的场景。An image rendering unit, configured to render a first image previewed by the left eye camera on the left eye screen, and a second image corresponding to simultaneously previewing the right eye camera on the right eye screen to present a corresponding to the user through the corresponding lens Scene.
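The geometric constraint in claim 6 (lens centers no farther apart than the interpupillary distance, each camera center horizontally offset from its lens by the corresponding offset distance) can be sketched as follows. The coordinate convention, the millimeter units, and the direction of each offset (toward the midline) are assumptions for illustration only; the patent does not fix them in this claim:

```python
def adjust_lens_positions(ipd_mm: float, w1_mm: float, w2_mm: float):
    """Place the left/right eye lenses symmetrically about the midline
    (x = 0) so their center-to-center horizontal distance equals, and thus
    does not exceed, the interpupillary distance, then offset each camera
    center horizontally from its lens by the corresponding offset distance.
    Returns (left_lens_x, right_lens_x, left_cam_x, right_cam_x) in mm."""
    lens_sep = ipd_mm            # "not greater than" the IPD, per the claim
    left_lens_x = -lens_sep / 2
    right_lens_x = lens_sep / 2
    left_cam_x = left_lens_x + w1_mm    # assumed offset toward the midline
    right_cam_x = right_lens_x - w2_mm
    return left_lens_x, right_lens_x, left_cam_x, right_cam_x
```

For a hypothetical 64 mm IPD and 2 mm offsets, the lenses sit at ±32 mm and the camera centers at ±30 mm.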
  7. The device according to claim 6, wherein the offset distance acquisition unit comprises:
    means for calculating the first offset distance w1 = h1 × tan β1 according to the center-to-center vertical distance h1 from the camera lens to the camera of the left-eye imaging device and the refraction angle β1 of the camera lens; and
    means for calculating the second offset distance w2 = h2 × tan β2 according to the center-to-center vertical distance h2 from the camera lens to the camera of the right-eye imaging device and the refraction angle β2 of the camera lens.
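The two calculations in claim 7 share a single formula, w = h × tan β (the claim writes the tangent as "tg"). A minimal sketch; the function name, units, and the sample parameter values are hypothetical:

```python
import math

def offset_distance(h_mm: float, beta_deg: float) -> float:
    """Offset distance w = h * tan(beta), where h is the center-to-center
    vertical distance from the camera lens to the camera and beta is the
    refraction angle of the camera lens."""
    return h_mm * math.tan(math.radians(beta_deg))

# Hypothetical imaging parameters for each eye's device
w1 = offset_distance(40.0, 5.0)  # first offset distance (left eye)
w2 = offset_distance(40.0, 5.0)  # second offset distance (right eye)
```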
  8. The device according to claim 6 or 7, further comprising an image center offset unit, configured to:
    obtain a first horizontal offset distance based on the center-to-center vertical distance from the camera lens to the camera of the left-eye imaging device, the first offset distance, and the image height of the first image; horizontally shift the center of the first image by the first horizontal offset distance to obtain a new image center; and then perform the rendering operation;
    and,
    obtain a second horizontal offset distance based on the center-to-center vertical distance from the camera lens to the camera of the right-eye imaging device, the second offset distance, and the image height of the second image; horizontally shift the center of the second image by the second horizontal offset distance to obtain a new image center; and then perform the rendering operation.
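The exact mapping from (center-to-center vertical distance, offset distance, image height) to the horizontal offset distance lives in the description rather than in this claim, so the sketch below shows only the shifting step, taking the already-computed pixel offset as an input. Function name and the zero-fill edge handling are assumptions:

```python
import numpy as np

def shift_image_center(img: np.ndarray, dx_px: int) -> np.ndarray:
    """Horizontally shift a 2-D (grayscale) image so its effective center
    moves by dx_px pixels (positive = rightward); vacated columns are
    zero-filled. Applied before the rendering operation, per claim 8."""
    out = np.zeros_like(img)
    if dx_px > 0:
        out[:, dx_px:] = img[:, :-dx_px]
    elif dx_px < 0:
        out[:, :dx_px] = img[:, -dx_px:]
    else:
        out = img.copy()
    return out
```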
  9. The device according to any one of claims 6-8, further comprising a contour processing unit, configured to:
    after determining the center object of the first image based on the image center point of the first image, perform contour detection on the center object to obtain a corresponding center object contour, apply defocus or blur processing to the objects of the first image outside the center object contour, and then perform the rendering operation; and
    after determining the center object of the second image based on the image center point of the second image, perform contour detection on the center object to obtain a corresponding center object contour, apply defocus or blur processing to the objects of the second image outside the center object contour, and then perform the rendering operation.
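Claim 9 keeps the center object sharp and blurs everything outside its contour. A minimal NumPy-only sketch, assuming the detected contour has already been converted to a boolean mask (contour detection itself — e.g. with OpenCV's findContours — is omitted); the box-blur kernel is a stand-in for whatever defocus filter the implementation actually uses:

```python
import numpy as np

def blur_outside_center_object(img: np.ndarray, mask: np.ndarray,
                               k: int = 5) -> np.ndarray:
    """Return a copy of a 2-D grayscale image where pixels inside the
    center-object mask (True) keep their original values and pixels
    outside it are replaced by a k-by-k box blur (edge-padded)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    blurred = np.zeros((h, w), dtype=float)
    for dy in range(k):                      # accumulate the k*k window
        for dx in range(k):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= k * k
    return np.where(mask, img.astype(float), blurred)
```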
  10. A virtual reality device, comprising:
    a left-eye imaging device, provided in sequence with a left-eye lens, a left-eye camera, a left-eye screen, and a left camera lens;
    a right-eye imaging device, provided in sequence with a right-eye lens, a right-eye camera, a right-eye screen, and a right camera lens;
    and the scene presentation device according to any one of claims 6-9.
PCT/CN2017/112383 2017-05-22 2017-11-22 Method and apparatus for presenting scene using virtual reality device, and virtual reality apparatus WO2018214431A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710364277.6 2017-05-22
CN201710364277.6A CN107302694B (en) 2017-05-22 2017-05-22 Method and device for presenting a scene using a virtual reality device, and virtual reality device

Publications (1)

Publication Number Publication Date
WO2018214431A1 true WO2018214431A1 (en) 2018-11-29

Family

ID=60137596

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/112383 WO2018214431A1 (en) 2017-05-22 2017-11-22 Method and apparatus for presenting scene using virtual reality device, and virtual reality apparatus

Country Status (2)

Country Link
CN (1) CN107302694B (en)
WO (1) WO2018214431A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107302694B (en) * 2017-05-22 2019-01-18 歌尔科技有限公司 Method and device for presenting a scene using a virtual reality device, and virtual reality device
CN108986225B (en) * 2018-05-29 2022-10-18 歌尔光学科技有限公司 Processing method, device and equipment for displaying scene by virtual reality equipment
CN108830943B (en) * 2018-06-29 2022-05-31 歌尔光学科技有限公司 Image processing method and virtual reality equipment
CN109002164B (en) * 2018-07-10 2021-08-24 歌尔光学科技有限公司 Display method and device of head-mounted display equipment and head-mounted display equipment
CN111193919B (en) * 2018-11-15 2023-01-13 中兴通讯股份有限公司 3D display method, device, equipment and computer readable medium
CN112235562B (en) * 2020-10-12 2023-09-15 聚好看科技股份有限公司 3D display terminal, controller and image processing method
CN115079826A (en) * 2022-06-24 2022-09-20 平安银行股份有限公司 Virtual reality implementation method, electronic equipment and storage medium
CN115880999B (en) * 2022-12-30 2024-06-21 视涯科技股份有限公司 Display device and near-to-eye display equipment

Citations (8)

Publication number Priority date Publication date Assignee Title
US20080062537A1 (en) * 2006-09-08 2008-03-13 Asia Optical Co., Inc. Compact imaging lens system
CN102782562A (en) * 2010-04-30 2012-11-14 北京理工大学 Wide angle and high resolution tiled head-mounted display device
CN102928979A (en) * 2011-08-30 2013-02-13 微软公司 Adjustment of a mixed reality display for inter-pupillary distance alignment
CN105103032A (en) * 2013-03-26 2015-11-25 鲁索空间工程项目有限公司 Display device
CN106461943A (en) * 2014-05-23 2017-02-22 高通股份有限公司 Method and apparatus for see-through near eye display
CN106445167A (en) * 2016-10-20 2017-02-22 网易(杭州)网络有限公司 Monocular vision field self-adaptive adjustment method and device and wearable visual device
CN106646892A (en) * 2017-03-21 2017-05-10 上海乐蜗信息科技有限公司 Optical system and head-mounted virtual reality device
CN107302694A (en) * 2017-05-22 2017-10-27 歌尔科技有限公司 Method, equipment and the virtual reality device of scene are presented by virtual reality device

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
WO2016176309A1 (en) * 2015-04-30 2016-11-03 Google Inc. Virtual eyeglass set for viewing actual scene that corrects for different location of lenses than eyes
CN105892053A (en) * 2015-12-30 2016-08-24 乐视致新电子科技(天津)有限公司 Virtual helmet lens interval adjusting method and device
CN105954875A (en) * 2016-05-19 2016-09-21 华为技术有限公司 VR (Virtual Reality) glasses and adjustment method thereof
CN106019588A (en) * 2016-06-23 2016-10-12 深圳市虚拟现实科技有限公司 Near-to-eye display device capable of automatically measuring interpupillary distance and method
CN106598250B (en) * 2016-12-19 2019-10-25 北京星辰美豆文化传播有限公司 VR display method, device and electronic device


Also Published As

Publication number Publication date
CN107302694B (en) 2019-01-18
CN107302694A (en) 2017-10-27

Similar Documents

Publication Publication Date Title
WO2018214431A1 (en) Method and apparatus for presenting scene using virtual reality device, and virtual reality apparatus
US10013796B2 (en) Rendering glasses shadows
JP2016018560A (en) Device and method to display object with visual effect
WO2018000629A1 (en) Brightness adjustment method and apparatus
US9965898B2 (en) Overlay display
CN112105983B (en) Enhanced visual ability
KR20210082242A (en) Creation and modification of representations of objects in an augmented-reality or virtual-reality scene
KR102204212B1 (en) Apparatus and method for providing realistic contents
KR20200043432A (en) Technology for providing virtual lighting adjustments to image data
CN112074800A (en) Techniques for switching between immersion levels
CN112272296B (en) Video illumination using depth and virtual light
US11237413B1 (en) Multi-focal display based on polarization switches and geometric phase lenses
US11543655B1 (en) Rendering for multi-focus display systems
JP2023505235A (en) Virtual, Augmented, and Mixed Reality Systems and Methods
CN111866492A (en) Image processing method, device and equipment based on head-mounted display equipment
KR20220106076A (en) Systems and methods for reconstruction of dense depth maps
CN111612915A (en) Rendering objects to match camera noise
US10757401B2 (en) Display system and method for display control of a video based on different view positions
KR102480916B1 (en) Game engine selection device and content generation system including the same
US11983819B2 (en) Methods and systems for deforming a 3D body model based on a 2D image of an adorned subject
US11823343B1 (en) Method and device for modifying content according to various simulation characteristics
KR102396060B1 (en) Changing Camera View in Electronic Games
US12002132B1 (en) Rendering using analytic signed distance fields
US11282171B1 (en) Generating a computer graphic for a video frame
US11989404B1 (en) Time-based visualization of content anchored in time

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17910794

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17910794

Country of ref document: EP

Kind code of ref document: A1