WO2021172950A1 - Electronic device and method for depth map re-projection on an electronic device


Info

Publication number
WO2021172950A1
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
sequence
image frames
feature points
data
Application number
PCT/KR2021/002485
Other languages
English (en)
Inventor
Christopher Anthony Peri
Yingen Xiong
Lu LUO
Original Assignee
Samsung Electronics Co., Ltd.
Priority claimed from US 16/942,627 (US11107290B1)
Application filed by Samsung Electronics Co., Ltd.
Priority to EP21759994.3A (EP4091128A4)
Publication of WO2021172950A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/111 Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H04N 13/117 Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation, the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N 13/344 Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/366 Image reproducers using viewer tracking
    • H04N 13/38 Image reproducers using viewer tracking for tracking vertical translational head movements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N 21/44245 Monitoring the upstream path of the transmission network, e.g. its availability, bandwidth
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/81 Monomedia components thereof
    • H04N 21/816 Monomedia components thereof involving special video data, e.g. 3D video
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/16 Indexing scheme for image data processing or generation, in general, involving adaptation to the client's capabilities

Definitions

  • This disclosure relates generally to depth maps, and, more particularly, to the re-projection of depth maps on user electronic devices.
  • An extended reality (XR) system may generally include a computer-generated environment and/or a real-world environment that includes at least some XR artifacts.
  • Such an XR system or world and associated XR artifacts typically include various applications (e.g., video games), which may allow users to utilize these XR artifacts by manipulating their presence in the form of a computer-generated representation (e.g., avatar).
  • image data may be rendered on, for example, a lightweight, head-mounted display (HMD) that may be coupled through a physical wired connection to a base graphics generation device responsible for generating the image data.
  • it may be desirable to couple the HMD to the base graphics generation device via a wireless network connection.
  • certain wireless network connections may suffer reliability issues, causing the user's XR experience to cease abruptly and without any preceding indication. It may thus be useful to provide techniques to improve XR systems.
  • a method for depth map re-projection on an electronic device may include: rendering, on one or more displays of the electronic device, a first sequence of image frames based on image data received from an external electronic device; detecting an interruption to the image data received from the external electronic device associated with the electronic device; accessing a plurality of feature points from a depth map corresponding to the first sequence of image frames, wherein the plurality of feature points comprises movement and position information of one or more objects within the first sequence of image frames; performing a re-warping to at least partially re-render the one or more objects based at least in part on the plurality of feature points and spatiotemporal data; and rendering, on the one or more displays of the electronic device, a second sequence of image frames corresponding to the partial re-rendering of the one or more objects.
  • an electronic device may include: a transceiver; one or more displays; one or more non-transitory computer-readable storage media including instructions; and one or more processors coupled to the storage media, the one or more processors configured to execute the instructions to: render, on the one or more displays, a first sequence of image frames based on image data received from an external electronic device associated with the electronic device; detect an interruption to the image data received from the external electronic device; access a plurality of feature points from a depth map corresponding to the first sequence of image frames, wherein the plurality of feature points comprises movement and position information of one or more objects within the first sequence of image frames; perform a re-warping to at least partially re-render the one or more objects based at least in part on the plurality of feature points and spatiotemporal data; and render, on the one or more displays, a second sequence of image frames corresponding to the partial re-rendering of the one or more objects.
  • Non-transitory computer-readable medium in which instructions are stored, the instructions being configured to, when executed by one or more processors of an electronic device, cause the one or more processors to: render, on one or more displays of the electronic device, a first sequence of image frames based on image data received from an external electronic device associated with the electronic device; detect an interruption to the image data received from the external electronic device associated with the electronic device; access a plurality of feature points from a depth map corresponding to the first sequence of image frames, wherein the plurality of feature points comprises movement and position information of one or more objects within the first sequence of image frames; perform a re-warping to at least partially re-render the one or more objects based at least in part on the plurality of feature points and spatiotemporal data; and render, on the one or more displays of the electronic device, a second sequence of image frames corresponding to the partial re-rendering of the one or more objects.
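  • As a rough, non-authoritative illustration of how such a device-side fallback might be organized, the following Python sketch is offered under stated assumptions: the class, method, and helper names (for example, DepthMapReprojectionClient, try_receive_frame, rewarp_from_feature_points) are hypothetical and not taken from the patent; only the overall flow of rendering received frames, detecting an interruption, and re-warping from cached feature points follows the description above.

        # Illustrative sketch only; names and interfaces below are assumptions, not the patent's API.
        import time

        def rewarp_from_feature_points(feature_points, pose_data):
            # Placeholder for the device-side re-warping step: a real implementation would
            # re-project the cached depth-map feature points using the latest head/object poses.
            return feature_points

        class DepthMapReprojectionClient:
            def __init__(self, link, renderer, fallback_seconds=2.0):
                self.link = link                    # wireless link to the external device (assumed interface)
                self.renderer = renderer            # display renderer (assumed interface)
                self.fallback_seconds = fallback_seconds
                self.cached_feature_points = None   # feature points received as a background process
                self.cached_pose_data = None        # current/predicted head and object pose data

            def on_background_update(self, feature_points, pose_data):
                # Store depth-map feature points and spatiotemporal pose data while the link is healthy.
                self.cached_feature_points = feature_points
                self.cached_pose_data = pose_data

            def run(self):
                while True:
                    frame = self.link.try_receive_frame()      # image data from the external device
                    if frame is not None:
                        self.renderer.render(frame)            # first sequence of image frames
                        continue
                    # Interruption detected: re-warp locally for a limited time.
                    deadline = time.monotonic() + self.fallback_seconds
                    while time.monotonic() < deadline:
                        frame = self.link.try_receive_frame()
                        if frame is not None:                  # second image data received again
                            break
                        warped = rewarp_from_feature_points(self.cached_feature_points,
                                                            self.cached_pose_data)
                        self.renderer.render(warped)           # second sequence of image frames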
  • this may prevent a user's XR experience from ceasing abruptly and without any preceding indication. It may be useful to provide techniques to improve XR systems.
  • FIG. 1 illustrates an example extended reality (XR) system.
  • FIG. 2A illustrates a detailed embodiment of an extended reality (XR) system with an available network connection.
  • FIG. 2B illustrates a detailed embodiment of an extended reality (XR) device with an unavailable network connection.
  • FIG. 2C illustrates another detailed embodiment of an extended reality (XR) system with an unavailable network connection.
  • FIG. 2D illustrates another detailed embodiment of an extended reality (XR) device with an unavailable network connection.
  • FIG. 3 illustrates a flow diagram of a method for re-projecting depth maps on user electronic devices.
  • FIG. 4 illustrates a flow diagram of a method for providing depth map feature points for re-projecting depth maps on user electronic devices.
  • FIGs. 5A and 5B illustrate a workflow diagram for determining one or more current color frames and a frame extrapolation diagram, respectively.
  • FIGs. 6A, 6B, and 6C illustrate a workflow diagram for reducing feature points, a workflow diagram for determining feature points, and a workflow diagram for determining key point and depth sequence extrapolation, respectively.
  • FIGs. 7A, 7B, and 7C illustrate workflow diagrams for determining and estimating head poses and object poses in the most recent image frames and/or derived image frames, respectively.
  • FIGs. 7D and 7E illustrate workflow diagrams for determining object pose estimation and performing a 2D image warping and re-projection, respectively.
  • FIG. 8 illustrates an example computer system.
  • an extended reality (XR) electronic device may render, on one or more displays of the XR electronic device, a first sequence of image frames based on image data received from a computing platform associated with the XR electronic device. The XR electronic device may then detect an interruption to the image data received from the computing platform associated with the XR electronic device.
  • the XR electronic device and the computing platform may be communicatively connected to each other via a wireless connection, in which the interruption is an interruption to the wireless connection.
  • the computing platform may access a number of feature points from a depth map corresponding to the first sequence of image frames, in which the number of feature points may include movement and position information of one or more objects within the first sequence of image frames.
  • the XR electronic device may receive the number of feature points corresponding to the first sequence of image frames from the computing platform as a background process. In particular embodiments, the XR electronic device may then store, to a memory of the XR electronic device, the number of feature points corresponding to the first sequence of image frames. In particular embodiments, the XR electronic device may perform a re-warping process to at least partially re-render the one or more objects based at least in part on the plurality of feature points and spatiotemporal data.
  • the XR electronic device may access current head pose data and predicted head pose data, in which the current head pose data and the predicted head pose data may be associated with the number of feature points.
  • the XR electronic device may also access current object pose data and predicted object pose data, in which the current object pose data and the predicted object pose data may be associated with the number of feature points.
  • the XR electronic device may perform the re-warping process by determining one or more current color frames corresponding to the first sequence of image frames, and generating, based on the one or more current color frames, one or more updated color frames corresponding to the first sequence of image frames.
  • the XR electronic device may render, on the one or more displays of the XR electronic device, a second sequence of image frames corresponding to the partial re-rendering of the one or more objects. In this way, if a wireless network connection becomes temporarily unavailable, the second sequence of image frames may be rendered for a predetermined period of time thereafter, or until second image data is received again from the computing platform.
  • the user's XR experience may not cease abruptly, and, instead, may gradually and gracefully cease rendering in the case of network unavailability.
  • a computing platform may generate image data corresponding to a first sequence of image frames.
  • the computing platform may also access a depth map corresponding to the first sequence of image frames.
  • the depth map may include depth information for one or more most recent image frames of the first sequence of image frames.
  • the computing platform may also determine a number of feature points from the depth map corresponding to the first sequence of image frames based at least in part on a parametric data reduction (PDR) process, in which the number of feature points may include movement and position information of one or more objects within the first sequence of image frames.
  • the computing platform may determine the number of feature points from the depth map by selecting a subset of feature points of a total set of feature points included in the depth map. In particular embodiments, the computing platform may also determine the number of feature points from the depth map by determining a plurality of feature points within a predetermined viewing area. In particular embodiments, the computing platform may determine the number of feature points within the predetermined viewing area by determining a number of feature points within a predefined fovea display area. In particular embodiments, the number of feature points within the predefined fovea display area may include a grouping of feature points based at least in part on a nearest-neighbor interpolation.
  • the grouping of feature points may include a subgrouping of feature points grouped based at least in part on a depth calculation.
  • the computing platform may determine the number of feature points by determining a pixel region corresponding to the one or more objects within the first sequence of image frames, and dividing the pixel region corresponding to the one or more objects into N pixel subregions.
  • the computing platform may then extract a number of feature points from the N pixel subregions, in which each of the number of feature points is extracted from a respective one of the N pixel subregions based on a confidence threshold.
  • the computing platform may determine a position and an optical flow for each of the plurality of feature points.
  • the computing platform may then send the image data and the number of feature points to an XR electronic device that is external to the electronic device.
  • the computing platform and the XR display device may be communicatively connected to each other via a wireless connection.
  • the computing platform may also provide current head pose data and predicted head pose data to the XR electronic device, in which the current head pose data and the predicted head pose data may be associated with the number of feature points.
  • the computing platform may also provide current object pose data and predicted object pose data to the XR electronic device, in which the current object pose data and the predicted object pose data are associated with the number of feature points.
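  • For concreteness, one way the per-frame payload sent from the computing platform to the XR electronic device might be organized is sketched below in Python; the structure and field names are illustrative assumptions and are not specified by the patent, which only describes the image data, the feature points, and the current/predicted head and object pose data being provided together.

        # Illustrative sketch; type and field names are assumptions, not taken from the patent.
        from dataclasses import dataclass, field
        from typing import Dict, List, Tuple

        @dataclass
        class FeaturePoint:
            position: Tuple[float, float, float]     # 3D position derived from the depth map
            optical_flow: Tuple[float, float]        # image-space motion of the point

        @dataclass
        class ReprojectionPayload:
            color_frame_id: int                      # identifies the rendered color frame
            feature_points: List[FeaturePoint] = field(default_factory=list)
            current_head_pose: Tuple[float, ...] = ()      # e.g. position plus orientation
            predicted_head_pose: Tuple[float, ...] = ()
            current_object_poses: Dict[int, Tuple[float, ...]] = field(default_factory=dict)
            predicted_object_poses: Dict[int, Tuple[float, ...]] = field(default_factory=dict)

        # The XR device would cache this payload as a background process so that it can
        # re-warp locally if the wireless connection is interrupted.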
  • extended reality may refer to a form of electronic-based reality that has been manipulated in some manner before presentation to a user, including, for example, virtual reality (VR), augmented reality (AR), mixed reality (MR), hybrid reality, simulated reality, immersive reality, holography, or any combination thereof.
  • extended reality content may include completely computer-generated content or partially computer-generated content combined with captured content (e.g., real-world images).
  • the "extended reality” content may also include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional (3D) effect to the viewer).
  • extended reality may be associated with applications, products, accessories, services, or a combination thereof, that, for example, may be utilized to create content in extended reality and/or utilized in (e.g., perform activities) in extended reality.
  • extended reality content may be implemented on various platforms, including a head-mounted device (HMD) connected to a host computer system, a standalone HMD, a mobile device or computing system, or any other hardware platform capable of providing extended reality content to one or more viewers.
  • FIG. 1 illustrates an example extended reality (XR) system 100, in accordance with presently disclosed embodiments.
  • the XR system 100 may include an XR electronic device 102, an input device 104, and a computing platform 106.
  • a user may wear the XR electronic device 102 that may display visual extended reality content to the user.
  • the XR electronic device 102 may include an audio device that may provide audio extended reality content to the user.
  • the XR electronic device 102 may include one or more cameras which can capture images and videos of environments.
  • the XR electronic device 102 may include an eye tracking system to determine the vergence distance of the user.
  • the XR electronic device 102 may include a head-mounted display (HMD).
  • the input device 104 may include, for example, a trackpad and one or more buttons.
  • the input device 104 may receive inputs from the user and relay the inputs to the computing platform 106 and/or the XR electronic device 102.
  • the XR electronic device 102 may be coupled to the computing platform 106 via one or more wireless networks 108.
  • the computing platform 106 may include, for example, a standalone host computing system, an on-board computer system integrated with the XR electronic device 102, a mobile device, or any other hardware platform that may be capable of providing extended reality content to and receiving inputs from the input device 104.
  • the computing platform 106 may include, for example, a cloud-based computing architecture (including one or more servers 110 and data stores 112) suitable for hosting and servicing XR applications or experiences executing on the XR electronic device 102.
  • the computing platform 106 may include a Platform as a Service (PaaS) architecture, a Software as a Service (SaaS) architecture, and an Infrastructure as a Service (IaaS), or other similar cloud-based computing architecture.
  • FIG. 2A illustrates a detailed embodiment of an extended reality (XR) system 200A for performing a 3D re-projection warping process, in accordance with presently disclosed embodiments.
  • the computing platform 106A may include a head pose tracking functional block 202A, a rendering engine 204A, a 3D re-projection warping functional block 206A, a key feature point and depth extraction functional block 208A, a head pose functional block 210A, and an object pose prediction functional block 212A.
  • the computing platform 106 may generate image data corresponding to a first sequence of image frames via the rendering engine 204A.
  • the computing platform 106A may also access a depth map corresponding to the first sequence of image frames.
  • the depth map may include depth information for one or more most recent image frames of the first sequence of image frames.
  • the computing platform 106A may also determine, by the key feature point and depth extraction functional block 208A, a number of feature points from the depth map corresponding to the first sequence of image frames based on a PDR process.
  • the number of feature points may include movement and position information (e.g., head pose data and head pose prediction data calculated by the head pose functional block 210A, object pose data and object pose prediction data by the object pose prediction functional block 212A) of one or more objects within the first sequence of image frames.
  • the computing platform 106A may then send the image data and a number of feature points to the XR electronic device 102A.
  • the XR electronic device 102A may include the storage functional block 214A, a latest inertial measurement unit (IMU) functional block 216, a latest IMU functional block 218, a 3D re-projection warping functional block 220A, a data store 222, and a final re-projection and display functional block 224A.
  • the latest IMU functional block 216 may include IMU data captured at the time the head pose data and/or object pose data is stored or calculated at the storage functional block 214A, such that the head pose data and/or object pose data may be implied based on the IMU data from the latest IMU functional block 216.
  • the latest IMU functional block 218 may include real-time or near real-time IMU data that may be recalculated (e.g., updated) before the first sequence of image frames are provided by the final re-projection and display functional block 224A (e.g., when the network connection is still available) for rendering.
  • the storage functional block 214A may be utilized to receive and store the number of feature points corresponding to the first sequence of image frames from the computing platform 106A as a background process.
  • the XR electronic device 102A may also receive current head pose data and predicted head pose data from the computing platform 106A.
  • the XR electronic device 102A may also receive current object pose data and predicted object pose data from the computing platform 106A.
  • the current head pose data, predicted head pose data, current object pose data, and predicted object pose data may be associated spatiotemporally with the number of feature points received from the computing platform 106A.
  • the XR electronic device 102A may then render, on one or more displays of the XR display device 102A, the first sequence of image frames by the final re-projection and display functional block 224A.
  • the 3D re-projection warping functional block 206A may provide the first sequence of image frames (e.g., 3D images) to the latest IMU functional block 218 to associate the first sequence of image frames with the latest user head pose data and object pose data, for example, and re-project and display the first sequence of image frames on the one or more displays of the XR display device 102A.
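  • A minimal sketch of that late-stage correction, assuming for illustration only that head poses are represented as a rotation matrix plus a translation vector and that a pinhole intrinsic matrix K is available on the device (both representation choices are assumptions, not requirements of the patent), could look as follows:

        # Illustrative sketch; the pose representation and helper names are assumptions.
        import numpy as np

        def delta_pose(stored_R, stored_t, latest_R, latest_t):
            # Relative transform from the head pose stored with the rendered frame to the
            # most recent IMU-derived head pose (latest IMU functional block 218).
            R_delta = latest_R @ stored_R.T
            t_delta = latest_t - R_delta @ stored_t
            return R_delta, t_delta

        def reproject_points(points_3d, R_delta, t_delta, K):
            # Re-project 3D feature points for display using the delta pose and intrinsics K.
            p = (R_delta @ points_3d.T).T + t_delta        # apply the pose correction
            uvw = (K @ p.T).T
            return uvw[:, :2] / uvw[:, 2:3]                # perspective divide to pixel coordinates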
  • the wireless network connection may, in some instances, become temporarily unavailable, and thus the first sequence of image frames may cease being sent from the computing platform 106A to the XR display device 102A.
  • FIG. 2B illustrates a detailed embodiment of an extended reality (XR) device with an unavailable network connection for performing a 3D re-projection warping process once a network connection becomes unavailable, in accordance with presently disclosed embodiments.
  • the XR electronic device 102A may then access the number of feature points from a depth map corresponding to the first sequence of image frames. For example, prior to detecting the interruption to the image data received from computing platform 106A, the XR electronic device 102A may receive and store the number of feature points corresponding to the first sequence of image frames.
  • the feature points may be further processed and curated via the head pose sequence functional block 226A, reference frame color depths functional block 228A, an object key feature point and depth sequence extrapolation functional block 230A, and an object pose sequence extrapolation functional block 232A.
  • the XR electronic device 102A may then perform, based on the number of feature points, a 3D re-warping process by the 3D re-projection warping functional block 220A to render, by the final re-projection and display functional block 224A, at least a partial re-rendering of one or more objects included in the first sequence of image frames in accordance with movement and position information (e.g., head pose data, head pose prediction data, object pose data, and object pose prediction data stored at functional block 214A and the latest user IMU data as provided by the latest IMU functional block 218).
  • the 3D re-projection warping functional block 220A may utilize the number of feature points to perform image re-projection warping (e.g., transforming the number of feature points into a 3D object or 2.5D object) into at least a partial re-rendering of the one or more objects included in the first sequence of image frames (e.g., along with the latest user head pose data and object pose data).
  • in this way, if a wireless network connection becomes temporarily unavailable, a second sequence of image frames may be rendered for a predetermined period of time thereafter, or until second image data is received again from the computing platform.
  • the user's XR experience may not cease abruptly, and, instead, may gradually and gracefully cease rendering in the case of network unavailability.
  • FIG. 2C illustrates another detailed embodiment of an extended reality (XR) system 200C for performing a 2D re-projection warping process, in accordance with presently disclosed embodiments.
  • the XR system 200C may differ from the XR system 200A in that the XR electronic device 102B may perform a 2D re-warping via a 2D re-warping function 236A and image distortion correction via the distortion correction functional block 238A.
  • the computing platform 106B may include the head pose tracking functional block 202B, the rendering engine 204B, the key feature point and depth extraction functional block 208B, head pose functional block 210B, object pose estimation functional block 234, and the object pose prediction functional block 212B.
  • the computing platform 106B may generate image data (e.g., color frames) corresponding to a first sequence of image frames via the rendering engine 204B.
  • the rendering engine 204B may also provide image data (e.g., color frames) to the key feature point and depth extraction functional block 208B.
  • the key feature point and depth extraction functional block 208B may then provide key feature point and depth extraction data to the object pose estimation functional block 234.
  • the object pose estimation functional block 234 may be provided to estimate object poses from the key feature point and depth extraction data.
  • the object pose prediction functional block 212B may then receive the estimated object poses and generate object pose prediction data based thereon.
  • the computing platform 106B may store the number of feature points and the object pose prediction data to the object key feature points and depth map data storage functional block 214C of the XR electronic device 102B. Similarly, the computing platform 106B may store head pose data and head pose prediction data to the storage functional block 214B of the XR electronic device 102B. As further depicted, while the wireless network connection remains available, the number of feature points and the head pose prediction data and the object pose prediction data, as well as the image data (e.g., color frames) from the rendering engine 204B may all be provided to the 3D re-projection warping functional block 220B.
  • the 3D re-projection warping functional block 220B may then provide output rendering data to the 2D warping functional block 236A.
  • the color frames corresponding to a first sequence of image frames may be provided to the display 224B for rendering to a user.
  • FIG. 2D illustrates another detailed embodiment of an extended reality (XR) system 200D for performing a 2D re-projection warping process with an unavailable network connection, in accordance with presently disclosed embodiments.
  • the XR electronic device 102B may receive head pose sequence data from the head pose extrapolation functional block 226B, reference frame color and depth data from the reference frame color and depth functional block 228B, key feature point and depth sequence data from the key feature point and depth functional block 230B, and object pose sequence data from the object pose extrapolation functional block 232B.
  • the head pose sequence data, the reference frame color and depth data, the key feature point and depth sequence data, and the object pose sequence data may all be provided to the 3D re-projection warping functional block 220B.
  • the 3D re-projection warping functional block 220B may then provide output rendering data to the 2D warping functional block 236B.
  • the color frames corresponding to a second sequence of image frames may be provided to the display 224B for rendering to a user.
  • a second sequence of image frames may be rendered for a predetermined period of time thereafter, or until second image data is received again from the computing platform.
  • the user's XR experience may not cease abruptly, and, instead, may gradually and gracefully cease rendering in the case of network unavailability.
  • FIG. 3 illustrates a flow diagram of a method 300 for re-projecting depth maps on user electronic devices.
  • the method 300 may be performed utilizing one or more processing devices (e.g., XR electronic device 102) that may include hardware (e.g., a general purpose processor, a graphic processing unit (GPU), an application-specific integrated circuit (ASIC), a system-on-chip (SoC), a microcontroller, a field-programmable gate array (FPGA), a central processing unit (CPU), an application processor (AP), a visual processing unit (VPU), a neural processing unit (NPU), a neural decision processor (NDP), or any other processing device(s) that may be suitable for processing image data), software (e.g., instructions running/executing on one or more processors), firmware (e.g., microcode), or some combination thereof.
  • the method 300 may begin at block 302 with the one or more processing devices (e.g., XR electronic device 102) rendering, on one or more displays of an XR electronic device, a first sequence of image frames based on image data received from an external electronic device associated with the XR electronic device.
  • the method 300 may then continue at block 304 with the one or more processing devices (e.g., XR electronic device 102) detecting an interruption to the image data received from the external electronic device associated with the XR display device.
  • the XR electronic device and the external electronic device may be communicatively connected to each other via a wireless connection, in which the interruption is an interruption to the wireless connection.
  • the method 300 may then continue at block 306 with the one or more processing devices (e.g., XR electronic device 102) accessing a number of feature points from a depth map corresponding to the first sequence of image frames, in which the number of feature points includes movement and position information of one or more objects within the first sequence of image frames.
  • the XR electronic device may receive the number of feature points corresponding to the first sequence of image frames from the external electronic device as a background process. In particular embodiments, the XR electronic device may then store, to a memory of the XR electronic device, the number of feature points corresponding to the first sequence of image frames. The method 300 may then continue at block 308 with the one or more processing devices (e.g., XR electronic device 102) performing a re-warping to at least partially re-render the one or more objects based at least in part on the plurality of feature points and spatiotemporal data.
  • the XR electronic device may access current head pose data and predicted head pose data, in which the current head pose data and the predicted head pose data may be associated with the number of feature points.
  • the XR electronic device may also access current object pose data and predicted object pose data, in which the current object pose data and the predicted object pose data may be associated with the number of feature points.
  • the XR electronic device may perform the re-warping process by determining one or more current color frames corresponding to the first sequence of image frames, and generating, based on the one or more current color frames, one or more updated color frames corresponding to the first sequence of image frames.
  • the method 300 may then conclude at block 310 with the one or more processing devices (e.g., XR electronic device 102) rendering, on the one or more displays of the XR electronic device, a second sequence of image frames corresponding to the partial re-rendering of the one or more objects.
  • the second sequence of image frames may be rendered for a predetermined period of time thereafter, or until second image data is received again from the computing platform.
  • the user's XR experience may not cease abruptly, and, instead, may gradually and gracefully cease rendering in the case of network unavailability.
  • FIG. 4 illustrates a flow diagram of a method 400 for providing depth map feature points for re-projecting depth maps on user electronic devices, in accordance with the presently disclosed embodiments.
  • the method 400 may be performed utilizing one or more processing devices (e.g., computing platform 106) that may include hardware (e.g., a general purpose processor, a graphic processing unit (GPU), an application-specific integrated circuit (ASIC), a system-on-chip (SoC), a microcontroller, a field-programmable gate array (FPGA), a central processing unit (CPU), an application processor (AP), a visual processing unit (VPU), a neural processing unit (NPU), a neural decision processor (NDP), or any other processing device(s) that may be suitable for processing image data), software (e.g., instructions running/executing on one or more processors), firmware (e.g., microcode), or some combination thereof.
  • the method 400 may begin at block 402 with the one or more processing devices (e.g., computing platform 106) generating image data corresponding to a first sequence of image frames.
  • the method 400 may then continue at block 404 with the one or more processing devices (e.g., computing platform 106) accessing a depth map corresponding to the first sequence of image frames.
  • the depth map may include depth information for one or more most recent image frames of the first sequence of image frames.
  • the method 400 may then continue at block 406 with the one or more processing devices (e.g., computing platform 106) determining a number of feature points from the depth map corresponding to the first sequence of image frames based at least in part on a parametric data reduction (PDR) process, in which the number of feature points includes movement and position information of one or more objects within the first sequence of image frames.
  • the computing platform may determine the number of feature points from the depth map by selecting a subset of feature points of a total set of feature points included in the depth map. In particular embodiments, the computing platform may also determine the number of feature points from the depth map by determining a plurality of feature points within a predetermined viewing area. In particular embodiments, the computing platform may determine the number of feature points within the predetermined viewing area by determining a number of feature points within a predefined fovea display area. In particular embodiments, the number of feature points within the predefined fovea display area may include a grouping of feature points based at least in part on a nearest-neighbor interpolation.
  • the grouping of feature points may include a subgrouping of feature points grouped based at least in part on a depth calculation.
  • the computing platform may determine the number of feature points by determining a pixel region corresponding to the one or more objects within the first sequence of image frames, and dividing the pixel region corresponding to the one or more objects into N pixel subregions.
  • the computing platform may then extract a number of feature points from the N pixel subregions, in which each of the number of feature points is extracted from a respective one of the N pixel subregions based on a confidence threshold.
  • the computing platform may determine a position and an optical flow for each of the plurality of feature points.
  • the method 400 may then conclude at block 408 with the one or more processing devices (e.g., computing platform 106) sending the image data and the number of feature points to an XR electronic device that is external to the electronic device.
  • the computing platform and the XR display device may be communicatively connected to each other via a wireless connection.
  • the computing platform may also provide current head pose data and predicted head pose data to the XR electronic device, in which the current head pose data and the predicted head pose data may be associated with the number of feature points.
  • the computing platform may also provide current object pose data and predicted object pose data to the XR electronic device, in which the current object pose data and the predicted object pose data are associated with the number of feature points.
  • the second sequence of image frames may be rendered for a predetermined period of time thereafter, or until second image data is received again from the computing platform.
  • the user's XR experience may not cease abruptly, and, instead, may gradually and gracefully cease rendering in the case of network unavailability.
  • FIGs. 5A and 5B illustrate workflow diagram 500A for determining one or more current color frames and a frame extrapolation diagram 500B for determining a number of image frames to be extrapolated, respectively, in accordance with the presently disclosed embodiments.
  • the workflow diagram 500A may be performed, for example, by the XR electronic device 102.
  • the workflow diagram 500A may commence at block 502 with the XR electronic device 102 obtaining predicted head poses and object poses.
  • the XR electronic device 102 may access current head pose data, predicted head pose data, current object pose data, and predicted object pose data that may be associated with the number of feature points received from the computing platform 106.
  • the workflow diagram 500A may then continue at block 504 with the XR electronic device 102 re-projecting the current color image frames.
  • the re-projection of the current color image frames may be expressed as vectors u2 and v2, as set forth below:
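  • A standard pinhole-camera re-projection of this general shape is given here purely as an illustrative assumption for exposition, not as the patent's own expressions: a pixel (u1, v1) with depth d is mapped through the intrinsics K (with focal lengths fx, fy and principal point cx, cy) and a relative pose (R, t) to the re-projected coordinates (u2, v2):

        \begin{aligned}
        \begin{bmatrix} x \\ y \\ z \end{bmatrix}
          &= R \, d \, K^{-1} \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} + t, \\
        u_2 &= f_x \frac{x}{z} + c_x, \qquad v_2 = f_y \frac{y}{z} + c_y .
        \end{aligned}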
  • the workflow diagram 500A may then conclude at block 506 with the XR electronic device 102 obtaining updated color image frames.
  • the XR electronic device 102 may, for example, update a current frame by the 3D re-projection warping functional block 220 (e.g., 3D re-projection warping process) to create a new color frame, which may be utilized, for example, during one or more delays between head poses and/or object poses or changes thereto.
  • the new color frame may be correlated with changes to head poses and/or object poses based on, for example, a 2D rotation and translation.
  • the XR electronic device 102 may utilize the 3D re-projection warping functional block 220 (e.g., 3D re-projection warping process) to create new image frame sequence based on, for example, the number of feature points corresponding to the first sequence of image frames and received from the computing platform 106, and the current head pose data, predicted head pose data, current object pose data, and predicted object pose data that may be associated with the number of feature points received from the computing platform 106.
  • FIG. 5B illustrates a manner in which a sequence of image frames 508, 510, 512, 514, and 516 (e.g., color image frames) may be extrapolated based on, for example, one or more of the most recent image frames of the sequence of image frames 508, 510, 512, 514, and 516, as well as indicating the determined number of image frames to be extrapolated based on, for example, the most recent 3 image frames for 3D extrapolation and the most recent 2 image frames for 2D extrapolation to perform gradual and graceful ceasing of image rendering in the case of network unavailability.
  • the 3 most recent image frames of the sequence of image frames 508, 510, 512, 514, and 516 may be extrapolated for rendering 3D image frames (e.g., depth maps), and the 2 most recent image frames of the sequence of image frames 508, 510, 512, 514, and 516 may be extrapolated for rendering 2D image frames.
  • the number of most recent image frames of the sequence of image frames 508, 510, 512, 514, and 516 may also depend on, for example, factors including frame frequency and duration, scene complexity, distances between objects, and so forth.
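  • As a simple illustration of why two frames suffice for 2D extrapolation and three for 3D, a constant-velocity (linear, two-frame) and constant-acceleration (quadratic, three-frame) extrapolation could be written as follows; the array shapes and function names are assumptions for exposition only.

        # Illustrative sketch; ties the number of most recent frames kept to the extrapolation order.
        import numpy as np

        def extrapolate_2d(frame_minus1, frame_curr):
            # Linear (two-frame) extrapolation, e.g. for 2D color frames.
            return 2.0 * frame_curr - frame_minus1

        def extrapolate_3d(frame_minus2, frame_minus1, frame_curr):
            # Quadratic (three-frame) extrapolation, e.g. for depth maps.
            return 3.0 * frame_curr - 3.0 * frame_minus1 + frame_minus2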
  • FIGs. 6A, 6B, and 6C illustrate a workflow diagram 600A for selecting key feature points based on a parametric data reduction (PDR) process, a workflow diagram 600B for determining and reducing feature points, and a workflow diagram 600C for determining key point and depth sequence extrapolation, respectively, in accordance with the presently disclosed embodiments.
  • the workflow diagrams 600A and 600B may each be performed, for example, by the key feature point and depth extraction functional block 208A, 208B of the computing platform 106A, 106B.
  • the workflow diagram 600A may be provided to reduce feature points selected from a number of frames.
  • the workflow diagram 600A may commence at block 602 with the computing platform 106 determining a new image frame, a display refresh, or fovea image update.
  • the workflow diagram 600A may then continue at block 604 with the computing platform 106 subsampling the depth map corresponding to the first sequence of image frames.
  • the workflow diagram 600A may then continue at block 606 with the computing platform 106 identifying feature points within a predetermined viewing area.
  • the workflow diagram 600A may then continue at block 608 with the computing platform 106 reducing the feature points within the viewing area based on a predefined fovea display area.
  • the workflow diagram 600A may then continue at block 610 with the computing platform 106 grouping the feature points based on a nearest-neighbor interpolation.
  • the workflow diagram 600A may then continue at block 612 with the computing platform 106 grouping the feature points based on a depth calculation.
  • the workflow diagram 600A may then continue at block 614 with the computing platform 106 running a pixel regional subset if the reduced feature points are determined to be greater than, for example, a predetermined preset value.
  • the workflow diagram 600A may then conclude at block 616 with the computing platform 106 storing the reduced dataset of feature points (e.g., to be provided to the XR electronic device 102).
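  • One possible reading of this reduction pipeline, written as a Python sketch, is given below; the subsampling step, grouping granularity, point cap, and mask names are assumptions rather than values taken from the patent, and the grouping is only a stand-in approximation of the nearest-neighbor and depth-based grouping described above.

        # Illustrative sketch of a parametric-data-reduction style pipeline; thresholds are assumed.
        import numpy as np

        def reduce_feature_points(depth_map, viewport_mask, fovea_mask,
                                  subsample_step=4, max_points=512):
            h, w = depth_map.shape
            # Block 604: subsample the dense depth map on a regular grid.
            ys, xs = np.mgrid[0:h:subsample_step, 0:w:subsample_step]
            ys, xs = ys.ravel(), xs.ravel()
            # Block 606: keep only points inside the predetermined viewing area.
            keep = viewport_mask[ys, xs]
            ys, xs = ys[keep], xs[keep]
            # Block 608: further reduce to the predefined fovea display area.
            keep = fovea_mask[ys, xs]
            ys, xs = ys[keep], xs[keep]
            d = depth_map[ys, xs]
            # Blocks 610/612: group nearby points and points at similar depth, approximated here
            # by quantising coordinates and depth and keeping one representative per group.
            groups = {}
            for y, x, depth in zip(ys, xs, d):
                key = (y // 8, x // 8, int(depth * 10))
                groups.setdefault(key, (x, y, depth))
            points = np.array(list(groups.values()))
            # Block 614: if still more points than the preset cap, thin them out
            # (a stand-in for the pixel regional subset step).
            if len(points) > max_points:
                points = points[np.linspace(0, len(points) - 1, max_points).astype(int)]
            return points  # block 616: reduced dataset to be stored/sent to the XR device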
  • the workflow diagram 600B may be provided to determine feature points selected from a number of frames 618.
  • the workflow diagram 600B may be provided to determine one or more pixel regions corresponding to one or more objects within, for example, the first sequence of image frames being provided to the XR electronic device 102 for rendering while the wireless network connection is available.
  • the workflow diagram 600B may commence at block 620 with the computing platform 106A, 106B dividing the one or more pixel regions corresponding to the one or more objects into N pixel subregions (e.g., 6 pixel subregions as illustrated in FIG. 6B).
  • the workflow diagram 600B may then continue at block 622 with the computing platform 106A, 106B extracting a number of feature points 626A, 626B, 626C, 626D, 626E, and 626F from the N pixel subregions, in which each of the number of feature points 626A, 626B, 626C, 626D, 626E, and 626F may be extracted from a respective one of the N pixel subregions based on a confidence threshold.
  • the workflow diagram 600B may then continue at block 624 with the computing platform 106A, 106B selecting the number of feature points 626A, 626B, 626C, 626D, 626E, and 626F from the N pixel subregions corresponding to the confidence threshold for each of the N pixel subregions.
  • the workflow diagram 600B may then conclude at block 628 with the computing platform 106A, 106B determining a position and an optical flow for each of the number of feature points 626A, 626B, 626C, 626D, 626E, and 626F.
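  • A sketch of that per-subregion selection follows, using OpenCV corner detection and pyramidal Lucas-Kanade flow as stand-ins for the unspecified feature detector, confidence measure, and flow estimator; these tool choices, the subregion grid, and the quality threshold are all assumptions.

        # Illustrative sketch; the detector, flow method, and thresholds are assumptions.
        import cv2
        import numpy as np

        def extract_subregion_features(prev_gray, curr_gray, region, n_cols=3, n_rows=2,
                                       quality=0.05):
            x0, y0, x1, y1 = region                          # pixel region around the object (block 618/620)
            points = []
            for r in range(n_rows):                          # divide the region into N subregions
                for c in range(n_cols):
                    sx0 = x0 + c * (x1 - x0) // n_cols
                    sy0 = y0 + r * (y1 - y0) // n_rows
                    sx1 = x0 + (c + 1) * (x1 - x0) // n_cols
                    sy1 = y0 + (r + 1) * (y1 - y0) // n_rows
                    patch = np.ascontiguousarray(curr_gray[sy0:sy1, sx0:sx1])
                    # Blocks 622/624: one best corner per subregion; "quality" plays the role
                    # of the confidence threshold.
                    corner = cv2.goodFeaturesToTrack(patch, maxCorners=1,
                                                     qualityLevel=quality, minDistance=3)
                    if corner is not None:
                        points.append(corner[0, 0] + np.float32([sx0, sy0]))
            if not points:
                return np.empty((0, 2), np.float32), np.empty((0, 2), np.float32)
            pts = np.array(points, np.float32).reshape(-1, 1, 2)
            # Block 628: position plus optical flow of each selected feature point.
            nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
            flow = (nxt - pts).reshape(-1, 2)
            return pts.reshape(-1, 2), flow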
  • FIGs. 7A, 7B, and 7C illustrate workflow diagrams 700A, 700B, and 700C for determining and estimating head poses and object poses in the most recent image frames and/or derived image frames, respectively, in accordance with the presently disclosed embodiments.
  • the workflow diagram 700A may be performed, for example, by the head pose sequence functional block 226A, 226B of the XR electronic device 102A, 102B.
  • the workflow diagram 700A may commence at block 702 with the XR electronic device 102A, 102B obtaining stored head poses of the previous two frames.
  • the workflow diagram 700A may continue at block 704 with the XR electronic device 102A, 102B extrapolating a 3D head pose sequence for derived image frames.
  • the workflow diagram 700A may continue at block 706 with the XR electronic device 102A, 102B obtaining the head pose sequence for the derived image frames.
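  • Continuing the constant-velocity idea above, a minimal sketch of producing a sequence of derived head poses from the two stored poses is shown below; representing a pose as a rotation matrix plus a translation vector, and repeating the most recent pose step, are illustrative assumptions rather than the patent's prescription.

        # Illustrative sketch; derives head poses by repeating the last observed pose step.
        import numpy as np

        def head_pose_sequence(R_prev, t_prev, R_curr, t_curr, num_derived_frames):
            R_step = R_curr @ R_prev.T          # relative rotation between the two stored poses (block 702)
            t_step = t_curr - t_prev            # relative translation between the two stored poses
            poses, R, t = [], R_curr, t_curr
            for _ in range(num_derived_frames):
                R = R_step @ R                  # block 704: extrapolate the 3D head pose
                t = t + t_step
                poses.append((R, t))            # block 706: pose sequence for the derived frames
            return poses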
  • the workflow diagram 700B may be performed, for example, by the object pose prediction functional block 212A, 212B of the computing platform 106A, 106B.
  • the workflow diagram 700B may commence at block 708 with the computing platform 106 obtaining estimated object poses of the previous two frames.
  • the workflow diagram 700B may continue at block 710 with the computing platform 106A, 106B extrapolating poses of all of the objects in the next frame.
  • the workflow diagram 700B may continue at block 712 with the computing platform 106A, 106B obtaining 3D poses of all objects in the next frame.
  • the workflow diagram 700C may be performed, for example, by the object pose sequence extrapolation functional block 232A, 232B of the XR electronic device 102A, 102B.
  • the workflow diagram 700C may commence at block 714 with the XR electronic device 102A, 102B obtaining key feature points sequences of derived image frames.
  • the workflow diagram 700C may continue at block 716 with the XR electronic device 102A, 102B estimating poses for each object in each derived image frame.
  • the workflow diagram 700C may continue at block 718 with the XR electronic device 102A, 102B obtaining object pose sequences of all derived image frames.
  • FIGs. 7D and 7E illustrate workflow diagrams 700D and 700E for determining object pose estimation and performing a 2D image warping and re-projection, respectively, in accordance with the presently disclosed embodiments.
  • the workflow diagram 700D may be performed, for example, by the object pose estimation functional block 234 of the computing platform 106B.
  • the workflow diagram 700D may commence at block 720 with the computing platform 106B determining corresponding 3D feature point sets A in a reference image frame and 3D feature point sets B in a current image frame.
  • the workflow diagram 700D may continue at block 722 with the computing platform 106B computing an error function with respect to the current frame and the reference image frame.
  • the workflow diagram 700D may continue at block 724 with the computing platform 106B computing one or more centroid values with respect to 3D feature point sets A in the reference image frame and 3D feature point sets B in the current image frame.
  • the workflow diagram 700D may continue at block 726 with the computing platform 106B creating a criterion function based on the one or more centroid values.
  • the workflow diagram 700D may continue at block 728 with the computing platform 106B performing a singular value decomposition (SVD) with respect to the reference frame and the current image frame.
  • the workflow diagram 700D may continue at block 730 with the computing platform 106B computing a rotation matrix based on the SVD.
  • the workflow diagram 700D may then conclude at block 732 with the computing platform 106B computing a translation matrix based on the rotation obtained from the SVD.
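  • Blocks 720 through 732 read like the classical SVD-based rigid alignment of two corresponding 3D point sets (often called the Kabsch method); the sketch below is offered as an interpretation under that assumption rather than a verbatim reproduction of the patent's steps.

        # Illustrative sketch of SVD-based rigid alignment between corresponding point sets.
        import numpy as np

        def estimate_object_pose(points_ref, points_cur):
            # points_ref, points_cur: (N, 3) arrays of corresponding 3D feature points
            # (set A in the reference image frame, set B in the current image frame, block 720).
            centroid_ref = points_ref.mean(axis=0)        # block 724: centroid values
            centroid_cur = points_cur.mean(axis=0)
            A = points_ref - centroid_ref                 # centered point sets feeding the
            B = points_cur - centroid_cur                 # least-squares criterion (blocks 722/726)
            H = A.T @ B                                   # cross-covariance matrix
            U, _, Vt = np.linalg.svd(H)                   # block 728: singular value decomposition
            R = Vt.T @ U.T                                # block 730: rotation from the SVD
            if np.linalg.det(R) < 0:                      # guard against a reflection solution
                Vt[-1, :] *= -1
                R = Vt.T @ U.T
            t = centroid_cur - R @ centroid_ref           # block 732: translation from the rotation
            return R, t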
  • the workflow diagram 700E may be performed, for example, by the 2D re-warping function 236A, 236B of the XR electronic device 102A, 102B.
  • the workflow diagram 700E may commence at block 734 with the XR electronic device 102A, 102B obtaining predicted head poses and object poses.
  • the workflow diagram 700E may continue at block 736 with the XR electronic device 102A, 102B warping a current color image frame utilizing 2D rotation and translation.
  • the workflow diagram 700E may then conclude at block 738 with the XR electronic device 102A, 102B obtaining updated color image frames.
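  • A sketch of such a 2D warp is shown below, expressing the rotation and translation derived from the predicted head and object poses as a single affine transform; the use of OpenCV's warpAffine and the rotation-about-center parameterization are implementation assumptions, not details given by the patent.

        # Illustrative sketch; warps the current color frame by a 2D rotation and translation.
        import cv2
        import numpy as np

        def warp_color_frame(color_frame, angle_rad, tx, ty):
            h, w = color_frame.shape[:2]
            cos_a, sin_a = np.cos(angle_rad), np.sin(angle_rad)
            cx, cy = w / 2.0, h / 2.0
            # 2x3 affine matrix: rotate about the image center, then translate (block 736).
            M = np.float32([
                [cos_a, -sin_a, (1 - cos_a) * cx + sin_a * cy + tx],
                [sin_a,  cos_a, -sin_a * cx + (1 - cos_a) * cy + ty],
            ])
            return cv2.warpAffine(color_frame, M, (w, h))   # block 738: updated color image frame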
  • FIG. 8 illustrates an example computer system 800 that may be utilized for re-projecting depth maps on user electronic devices, in accordance with the presently disclosed embodiments.
  • one or more computer systems 800 perform one or more steps of one or more methods described or illustrated herein.
  • one or more computer systems 800 provide functionality described or illustrated herein.
  • software running on one or more computer systems 800 performs one or more steps of one or more methods described or illustrated herein or provides functionality described or illustrated herein.
  • Particular embodiments include one or more portions of one or more computer systems 800.
  • reference to a computer system may encompass a computing device, and vice versa, where appropriate.
  • reference to a computer system may encompass one or more computer systems, where appropriate.
  • computer system 800 may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (e.g., a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, a tablet computer system, an augmented/virtual reality device, or a combination of two or more of these.
  • one or more computer systems 800 may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated herein.
  • one or more computer systems 800 may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein.
  • One or more computer systems 800 may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate.
  • computer system 800 includes a processor 802, memory 804, storage 806, an input/output (I/O) interface 808, a communication interface 810, and a bus 812.
  • Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement.
  • processor 802 includes hardware for executing instructions, such as those making up a computer program.
  • processor 802 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory 804, or storage 806; decode and execute them; and then write one or more results to an internal register, an internal cache, memory 804, or storage 806.
  • processor 802 may include one or more internal caches for data, instructions, or addresses. This disclosure contemplates processor 802 including any suitable number of any suitable internal caches, where appropriate.
  • processor 802 may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in memory 804 or storage 806, and the instruction caches may speed up retrieval of those instructions by processor 802.
  • Data in the data caches may be copies of data in memory 804 or storage 806 for instructions executing at processor 802 to operate on; the results of previous instructions executed at processor 802 for access by subsequent instructions executing at processor 802 or for writing to memory 804 or storage 806; or other suitable data.
  • the data caches may speed up read or write operations by processor 802.
  • the TLBs may speed up virtual-address translation for processor 802.
  • processor 802 may include one or more internal registers for data, instructions, or addresses. This disclosure contemplates processor 802 including any suitable number of any suitable internal registers, where appropriate.
  • processor 802 may include one or more arithmetic logic units (ALUs); be a multi-core processor; or include one or more processors 802.
  • memory 804 includes main memory for storing instructions for processor 802 to execute or data for processor 802 to operate on.
  • computer system 800 may load instructions from storage 806 or another source (such as, for example, another computer system 800) to memory 804.
  • Processor 802 may then load the instructions from memory 804 to an internal register or internal cache.
  • processor 802 may retrieve the instructions from the internal register or internal cache and decode them.
  • processor 802 may write one or more results (which may be intermediate or final results) to the internal register or internal cache.
  • Processor 802 may then write one or more of those results to memory 804 (see the illustrative sketch after this list).
  • processor 802 executes only instructions in one or more internal registers or internal caches or in memory 804 (as opposed to storage 806 or elsewhere) and operates only on data in one or more internal registers or internal caches or in memory 804 (as opposed to storage 806 or elsewhere).
  • One or more memory buses may couple processor 802 to memory 804.
  • Bus 812 may include one or more memory buses, as described below.
  • one or more memory management units reside between processor 802 and memory 804 and facilitate accesses to memory 804 requested by processor 802.
  • memory 804 includes random access memory (RAM).
  • This RAM may be volatile memory, where appropriate. Where appropriate, this RAM may be dynamic RAM (DRAM) or static RAM (SRAM). Moreover, where appropriate, this RAM may be single-ported or multi-ported RAM.
  • Memory 804 may include one or more memories 804, where appropriate.
  • storage 806 includes mass storage for data or instructions.
  • storage 806 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive or a combination of two or more of these.
  • Storage 806 may include removable or non-removable (or fixed) media, where appropriate.
  • Storage 806 may be internal or external to computer system 800, where appropriate.
  • storage 806 is non-volatile, solid-state memory.
  • storage 806 includes read-only memory (ROM).
  • this ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory or a combination of two or more of these.
  • This disclosure contemplates mass storage 806 taking any suitable physical form.
  • Storage 806 may include one or more storage control units facilitating communication between processor 802 and storage 806, where appropriate. Where appropriate, storage 806 may include one or more storages 806. Although this disclosure describes and illustrates particular storage, this disclosure contemplates any suitable storage.
  • I/O interface 808 includes hardware, software, or both, providing one or more interfaces for communication between computer system 800 and one or more I/O devices.
  • Computer system 800 may include one or more of these I/O devices, where appropriate.
  • One or more of these I/O devices may enable communication between a person and computer system 800.
  • an I/O device may include a keyboard, keypad, microphone, monitor, mouse, printer, scanner, speaker, still camera, stylus, tablet, touch screen, trackball, video camera, another suitable I/O device or a combination of two or more of these.
  • An I/O device may include one or more sensors. This disclosure contemplates any suitable I/O devices and any suitable I/O interfaces 808 for them.
  • I/O interface 808 may include one or more device or software drivers enabling processor 802 to drive one or more of these I/O devices.
  • I/O interface 808 may include one or more I/O interfaces 808, where appropriate.
  • communication interface 810 includes hardware, software, or both providing one or more interfaces for communication (such as, for example, packet-based communication) between computer system 800 and one or more other computer systems 800 or one or more networks.
  • communication interface 810 may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network.
  • computer system 800 may communicate with an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), or one or more portions of the Internet or a combination of two or more of these.
  • computer system 800 may communicate with a wireless PAN (WPAN) (such as, for example, a BLUETOOTH WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or other suitable wireless network or a combination of two or more of these.
  • Computer system 800 may include any suitable communication interface 810 for any of these networks, where appropriate.
  • Communication interface 810 may include one or more communication interfaces 810, where appropriate.
  • a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other integrated circuits (ICs) (such as, for example, field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate.
  • references in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompass that apparatus, system, or component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative. Additionally, although this disclosure describes or illustrates particular embodiments as providing particular advantages, particular embodiments may provide none, some, or all of these advantages.
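As a concrete illustration of the load-execute-write-back flow described in the list above (instructions loaded from storage 806 into memory 804, fetched into processor 802's internal registers, decoded, executed, and the results written back to memory 804), the following toy Python sketch mimics those steps. The Storage, Memory, and Processor classes, the instruction format, and the sample program are all invented for this illustration and do not correspond to any interface of computer system 800.

    # Toy model only: mimics the storage -> memory -> processor -> memory flow
    # described above; nothing here is part of the disclosed computer system 800.
    class Storage:
        """Stands in for mass storage 806; holds a named program."""
        def __init__(self):
            self.programs = {
                "add_demo": [("LOAD", "r0", 2), ("LOAD", "r1", 3),
                             ("ADD", "r2", "r0", "r1"), ("STORE", "r2")],
            }

    class Memory:
        """Stands in for main memory 804; instructions and results live here."""
        def __init__(self):
            self.instructions = []
            self.data = []

    class Processor:
        """Stands in for processor 802; the dict plays the role of internal registers."""
        def __init__(self):
            self.registers = {}

        def run(self, memory):
            for instr in memory.instructions:      # fetch from memory 804
                op, *operands = instr              # decode
                if op == "LOAD":                   # execute: load an immediate value
                    reg, value = operands
                    self.registers[reg] = value
                elif op == "ADD":                  # execute: add two registers
                    dst, a, b = operands
                    self.registers[dst] = self.registers[a] + self.registers[b]
                elif op == "STORE":                # write the result back to memory 804
                    memory.data.append(self.registers[operands[0]])

    storage, memory, cpu = Storage(), Memory(), Processor()
    memory.instructions = storage.programs["add_demo"]  # load instructions from storage into memory
    cpu.run(memory)
    print(memory.data)  # [5]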

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method includes rendering, on displays of an extended reality (XR) display device, a first sequence of image frames based on image data received from an external electronic device associated with the XR display device. The method further includes detecting an interruption of the image data received from the external electronic device, and accessing a plurality of feature points from a depth map corresponding to the first sequence of image frames. The plurality of feature points includes movement and position information of one or more objects within the first sequence of image frames. The method further includes performing a re-warping to at least partially re-render the one or more objects based at least in part on the plurality of feature points and spatio-temporal data, and rendering a second sequence of image frames corresponding to the partial re-rendering of the one or more objects.
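To make the flow described in the abstract concrete, the following is a minimal, hypothetical Python sketch of this kind of depth-map-based re-warping. The FeaturePoint fields, the rewarp helper, its inverse-depth parallax heuristic, and all numeric values are assumptions introduced only for illustration; they are not taken from the claims or the described embodiments.

    # Illustrative sketch only: extrapolate feature points from the last good frame
    # so a substitute frame can be shown while the image stream is interrupted.
    from dataclasses import dataclass

    @dataclass
    class FeaturePoint:
        x: float      # image-space position of a tracked object point
        y: float
        depth: float  # taken from the depth map of the first sequence
        vx: float     # motion information (pixels per second)
        vy: float

    def rewarp(points, dt, head_delta=(0.0, 0.0)):
        """Shift each point by its own motion plus a head-motion term scaled by
        inverse depth (closer points move more), approximating a re-projection."""
        warped = []
        for p in points:
            parallax = 1.0 / max(p.depth, 1e-3)
            warped.append(FeaturePoint(
                x=p.x + p.vx * dt + head_delta[0] * parallax,
                y=p.y + p.vy * dt + head_delta[1] * parallax,
                depth=p.depth, vx=p.vx, vy=p.vy))
        return warped

    # Feature points captured with the last frame received before the interruption;
    # render a stand-in frame one display refresh (dt seconds) later.
    last_points = [FeaturePoint(100.0, 80.0, 2.0, 30.0, 0.0),
                   FeaturePoint(40.0, 60.0, 8.0, -10.0, 5.0)]
    stand_in = rewarp(last_points, dt=1 / 60, head_delta=(2.0, -1.0))
    for p in stand_in:
        print(round(p.x, 2), round(p.y, 2))

The inverse-depth scaling simply reflects that nearer points shift more under a viewpoint change; an actual implementation would combine the feature points with the device's full spatio-temporal (pose and timing) data, as the abstract states.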
PCT/KR2021/002485 2020-02-27 2021-02-26 Electronic device and method for depth map re-projection on an electronic device WO2021172950A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP21759994.3A EP4091128A4 (fr) 2020-02-27 2021-02-26 Electronic device and method for depth map re-projection on an electronic device

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US202062982570P 2020-02-27 2020-02-27
US62/982,570 2020-02-27
US16/942,627 US11107290B1 (en) 2020-02-27 2020-07-29 Depth map re-projection on user electronic devices
US16/942,627 2020-07-29
KR10-2020-0149309 2020-11-10
KR1020200149309A KR20210110164A (ko) 2020-02-27 2020-11-10 Electronic device and method for re-projection of a depth map in an electronic device

Publications (1)

Publication Number Publication Date
WO2021172950A1 true WO2021172950A1 (fr) 2021-09-02

Family

ID=77490278

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/002485 WO2021172950A1 (fr) 2020-02-27 2021-02-26 Electronic device and method for depth map re-projection on an electronic device

Country Status (2)

Country Link
EP (1) EP4091128A4 (fr)
WO (1) WO2021172950A1 (fr)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0955606A2 (fr) * 1998-05-08 1999-11-10 Mixed Reality Systems Laboratory Inc. Measurement of image depth taking a delay into account
KR20110112143A (ko) * 2010-04-06 2011-10-12 (주)리얼디스퀘어 Method for converting a 2D video into a 3D video with reference to an LDI-technique depth map
US20140118482A1 (en) * 2012-10-26 2014-05-01 Korea Advanced Institute Of Science And Technology Method and apparatus for 2d to 3d conversion using panorama image
US20170155889A1 (en) * 2015-11-30 2017-06-01 Altek Semiconductor Corp. Image capturing device, depth information generation method and auto-calibration method thereof
US20180053284A1 (en) 2016-08-22 2018-02-22 Magic Leap, Inc. Virtual, augmented, and mixed reality systems and methods
KR20180060868A (ko) * 2016-11-28 2018-06-07 숭실대학교산학협력단 3D surveillance system and method using real-time warping of surveillance video

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TAI, S.C.: "An efficient full frame algorithm for object-based concealment in 3D depth-based video", Multimedia Tools and Applications, vol. 75, 2016, pages 9927-9947, XP036037300, DOI: 10.1007/s11042-015-2899-4

Also Published As

Publication number Publication date
EP4091128A4 (fr) 2023-06-21
EP4091128A1 (fr) 2022-11-23

Similar Documents

Publication Publication Date Title
US11719933B2 (en) Hand-locked rendering of virtual objects in artificial reality
US11625862B2 (en) Mirror reconstruction
EP3912141A1 (fr) Identifying planes in artificial reality systems
WO2023075973A1 (fr) Tracking a handheld device
US11182647B2 (en) Distributed sensor module for tracking
US11688073B2 (en) Method and system for depth map reconstruction
US8891857B2 (en) Concave surface modeling in image-based visual hull
US11704877B2 (en) Depth map re-projection on user electronic devices
WO2021172950A1 (fr) Electronic device and method for depth map re-projection on an electronic device
US11217011B2 (en) Providing semantic-augmented artificial-reality experience
US11715272B2 (en) 3D reconstruction of a moving object
US11615594B2 (en) Systems and methods for reconstruction of dense depth maps
CN117546472A (zh) Asset reusability of light field or holographic media
US20220201271A1 (en) Temporal foveated rendering
WO2022191373A1 (fr) Electronic device and method for controlling an electronic device
WO2023146372A1 (fr) Reconstruction of a three-dimensional scene
WO2021201638A1 (fr) Electronic device and method for identifying an object using a paired electronic device
WO2022158890A1 (fr) Systems and methods for reconstruction of dense depth maps
US20240119672A1 (en) Systems, methods, and media for generating visualization of physical environment in artificial reality
US20230136662A1 (en) Parallax Asynchronous Spacewarp for Multiple Frame Extrapolation
WO2023279868A1 (fr) Simultaneous localization and mapping initialization method and apparatus, and storage medium
WO2024111783A1 (fr) Mesh transformation with efficient depth reconstruction and filtering in passthrough augmented reality (AR) systems
WO2024081260A1 (fr) Systems, methods, and media for generating a visualization of a physical environment in artificial reality
CN115937284A (zh) Image generation method and device, storage medium, and program product
EP4233011A1 (fr) Distortion-corrected rasterization

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 21759994

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021759994

Country of ref document: EP

Effective date: 20220819

NENP Non-entry into the national phase

Ref country code: DE