WO2017142334A1 - Method and apparatus for generating omni media texture mapping metadata - Google Patents

Method and apparatus for generating omni media texture mapping metadata

Info

Publication number
WO2017142334A1
Authority
WO
WIPO (PCT)
Prior art keywords
frame
geometric
interest
area
planar
Prior art date
Application number
PCT/KR2017/001734
Other languages
French (fr)
Inventor
Young-Kwon Lim
Madhukar Budagavi
Original Assignee
Samsung Electronics Co., Ltd.
Application filed by Samsung Electronics Co., Ltd. filed Critical Samsung Electronics Co., Ltd.
Priority to KR1020187026553A priority Critical patent/KR20180107271A/en
Priority to EP17753495.5A priority patent/EP3403244A4/en
Publication of WO2017142334A1 publication Critical patent/WO2017142334A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/003Navigation within 3D models or images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • G06T3/12
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/236Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
    • H04N21/23614Multiplexing of additional data and video streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4728End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/816Monomedia components thereof involving special video data, e.g 3D video
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Generation (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

A device for generating omnidirectional media texture mapping metadata is provided. The device includes a transceiver, a memory and a processor connected to the memory. The transceiver is configured to receive, from an electronic device, a signal indicating a shape of a geometric frame for a video and an area of interest on a planar frame. The processor is configured to map the area of interest on the planar frame to a region of interest on the geometric frame based on the shape of the geometric frame. The processor is further configured to generate the geometric frame with the region of interest.

Description

METHOD AND APPARATUS FOR GENERATING OMNI MEDIA TEXTURE MAPPING METADATA
This disclosure relates generally to virtual reality mapping. More specifically, this disclosure relates to a method and apparatus for generating omni (omnidirectional) media texture mapping metadata.
VR media today uses a single rectangular video as a source of texture data to be mapped onto various 3D geometry types. To ensure interoperable rendering of VR contents on a player regardless of camera configuration and stitching methods, standardized metadata needs to be defined.
This disclosure provides a method and apparatus for generating omnidirectional media texture mapping metadata for virtual reality images.
In a first embodiment, an electronic device for generating omni (omnidirectional) media texture mapping metadata is provided. The electronic device includes a transceiver, a memory and a processor connected to the memory. The processor is configured to identify a shape of a geometric frame for a video and identify a region of interest on the geometric frame. The processor is also configured to map the geometric frame to a planar frame with the region of interest from the geometric frame indicated as an area of interest on the planar frame. The processor is further configured to generate a signal indicating the shape and the area of interest. The transceiver is configured to transmit, to a device, the signal.
In a second embodiment, a device for generating omni media texture mapping metadata is provided. The device includes a transceiver, a memory and a processor connected to the memory. The transceiver is configured to receive, from an electronic device, a signal indicating a shape of a geometric frame for a video and an area of interest on a planar frame. The processor is configured to map the area of interest on the planar frame to a region of interest on the geometric frame based on the shape of the geometric frame. The processor is further configured to generate the geometric frame with the region of interest.
In a third embodiment, a method for an electronic device for generating omnidirectional media texture mapping metadata is provided. The method includes identifying a shape of a geometric frame for a video and identifying a region of interest on the geometric frame. The method also includes mapping the geometric frame to a planar frame with the region of interest from the geometric frame indicated as an area of interest on the planar frame. The method further includes generating a signal indicating the shape and the area of interest, and transmitting, to a device, the signal.
In a fourth embodiment, a method for a device for generating omni media texture mapping metadata is provided. The method includes receiving, from an electronic device, a signal indicating a shape of a geometric frame for a video and an area of interest on a planar frame. The method also includes mapping the area of interest on the planar frame to a region of interest on the geometric frame based on the shape of the geometric frame. The method further includes generating the geometric frame with the region of interest.
Other technical features may be readily apparent to one skilled in the art from the following figures, descriptions, and claims.
For a more complete understanding of this disclosure and its advantages, reference is now made to the following description, taken in conjunction with the accompanying drawings, in which:
FIGURE 1 illustrates an example computing system according to various embodiments of the present disclosure;
FIGURES 2 and 3 illustrate example devices in a computing system according to various embodiments of the present disclosure;
FIGURE 4 illustrates an example squished sphere frame according to an embodiment of this disclosure;
FIGURE 5 illustrates an example mapping of a planar frame to a cylindrical frame according to an embodiment of this disclosure;
FIGURES 6A and 6B illustrate an example mapping of a planar frame to a cubical frame according to an embodiment of this disclosure;
FIGURES 7A and 7B illustrate an example mapping of a planar frame to a pyramidal frame according to an embodiment of this disclosure;
FIGURE 8 illustrates an example mapping of an area of a planar frame to a region of a spherical frame according to an embodiment of this disclosure;
FIGURE 9 illustrates an example mapping of an area of a planar frame to a region of a cylindrical frame according to an embodiment of this disclosure;
FIGURE 10 illustrates an example mapping of an area of a planar frame to a region of a cubical frame according to an embodiment of this disclosure;
FIGURES 11A and 11B illustrate an example projection of a spherical frame onto a cubical frame surrounding the spherical frame according to an embodiment of this disclosure;
FIGURES 12A and 12B illustrate an example projection of a spherical frame onto a cubical frame surrounded by the spherical frame according to an embodiment of this disclosure;
FIGURE 13 illustrates an example process for omni media texture mapping metadata in a video processor according to an embodiment of this disclosure; and
FIGURE 14 illustrates an example process for omni media texture mapping metadata in a video player according to an embodiment of this disclosure.
It may be advantageous to set forth definitions of certain words and phrases used throughout this patent document. The term "couple" and its derivatives refer to any direct or indirect communication between two or more elements, whether or not those elements are in physical contact with one another. The terms "transmit," "receive," and "communicate," as well as derivatives thereof, encompass both direct and indirect communication. The terms "include" and "comprise," as well as derivatives thereof, mean inclusion without limitation. The term "or" is inclusive, meaning and/or. The phrase "associated with," as well as derivatives thereof, means to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, have a relationship to or with, or the like. The term "controller" means any device, system or part thereof that controls at least one operation. Such a controller may be implemented in hardware or a combination of hardware and software and/or firmware. The functionality associated with any particular controller may be centralized or distributed, whether locally or remotely. The phrase "at least one of," when used with a list of items, means that different combinations of one or more of the listed items may be used, and only one item in the list may be needed. For example, "at least one of: A, B, and C" includes any of the following combinations: A, B, C, A and B, A and C, B and C, and A and B and C.
Moreover, various functions described below can be implemented or supported by one or more computer programs, each of which is formed from computer readable program code and embodied in a computer readable medium. The terms "application" and "program" refer to one or more computer programs, software components, sets of instructions, procedures, functions, objects, classes, instances, related data, or a portion thereof adapted for implementation in a suitable computer readable program code. The phrase "computer readable program code" includes any type of computer code, including source code, object code, and executable code. The phrase "computer readable medium" includes any type of medium capable of being accessed by a computer, such as read only memory (ROM), random access memory (RAM), a hard disk drive, a compact disc (CD), a digital video disc (DVD), or any other type of memory. A "non-transitory" computer readable medium excludes wired, wireless, optical, or other communication links that transport transitory electrical or other signals. A non-transitory computer readable medium includes media where data can be permanently stored and media where data can be stored and later overwritten, such as a rewritable optical disc or an erasable memory device.
Definitions for other certain words and phrases are provided throughout this patent document. Those of ordinary skill in the art should understand that in many if not most instances, such definitions apply to prior as well as future uses of such defined words and phrases.
FIGURES 1 through 14, discussed below, and the various embodiments used to describe the principles of this disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of this disclosure may be implemented in any suitably arranged wireless communication system.
FIGURE 1 illustrates an example computing system 100 according to this disclosure. The embodiment of the computing system 100 shown in FIGURE 1 is for illustration only. Other embodiments of the computing system 100 could be used without departing from the scope of this disclosure.
As shown in FIGURE 1, the system 100 includes a network 102, which facilitates communication between various components in the system 100. For example, the network 102 may communicate internet protocol (IP) packets, frame relay frames, asynchronous transfer mode (ATM) cells, or other information between network addresses. The network 102 may include one or more local area networks (LANs), metropolitan area networks (MANs), wide area networks (WANs), all or a portion of a global network such as the Internet, or any other communication system or systems at one or more locations.
The network 102 facilitates communications between at least one server 104 and various client devices 106-114. Each server 104, which is an electronic device, includes any suitable computing device or processing device that can provide computing services for one or more client devices. Each server 104 could, for example, include one or more processing devices, one or more memories storing instructions and data, and one or more network interfaces facilitating communication over the network 102.
Each client device 106-114 represents any suitable computing or processing device that interacts with at least one server or other computing device(s) over the network 102. In this example, the client devices 106-114 include a desktop computer 106, a mobile telephone or smartphone 108, a personal digital assistant (PDA) 110, a laptop computer 112, and a tablet computer 114. However, any other or additional client devices could be used in the computing system 100.
In this example, some client devices 108-114 communicate indirectly with the network 102. For example, the client devices 108-110 communicate via one or more base stations 116, such as cellular base stations or eNodeBs. Also, the client devices 112-114 communicate via one or more wireless access points 118, such as IEEE 802.11 wireless access points. Note that these are for illustration only and that each client device could communicate directly with the network 102 or indirectly with the network 102 via any suitable intermediate device(s) or network(s).
In this illustrative embodiment, computing system 100 provides for generating omni media texture mapping metadata. For example, server 104 may represent a video processor that generates and provides a signal indicating a shape of a geometric frame and an area of interest of a planar frame and smartphone 108 may represent a video player that receives the signal in order to map a planar frame with an area of interest to a corresponding geometric frame with a region of interest.
Although FIGURE 1 illustrates one example of a computing system 100, various changes may be made to FIGURE 1. For example, the system 100 could include any number of each component in any suitable arrangement. In general, computing and communication systems come in a wide variety of configurations, and FIGURE 1 does not limit the scope of this disclosure to any particular configuration. While FIGURE 1 illustrates one operational environment in which various features disclosed in this patent document can be used, these features could be used in any other suitable system.
FIGURES 2 and 3 illustrate example devices in a computing system according to this disclosure. In particular, FIGURE 2 illustrates an example server 200, and FIGURE 3 illustrates an example client device 300. The server 200 could represent the server 104 in FIGURE 1, and the client device 300 could represent one or more of the client devices 106-114 in FIGURE 1.
As shown in FIGURE 2, the server 200 includes a bus system 205, which supports communication between one or more processors 210, at least one storage device 215, at least one communications unit 220, and at least one input/output (I/O) unit 225. A controller may be implemented by the one or more processors 210; references to a controller in FIGURE 2 can be read as references to the one or more processors 210.
The processor(s) 210 execute instructions that may be loaded into a memory 230, such as instructions for generating omni media texture mapping metadata. The processor(s) may include any suitable number(s) and type(s) of processors or other devices in any suitable arrangement. Example types of processor(s) include microprocessors, microcontrollers, digital signal processors, field programmable gate arrays, application specific integrated circuits, and discrete circuitry.
The memory 230 and a persistent storage 235 are examples of storage devices 215, which represent any structure(s) capable of storing and facilitating retrieval of information (such as data, program code, and/or other suitable information on a temporary or permanent basis). The memory 230 may represent a random access memory or any other suitable volatile or non-volatile storage device(s). The persistent storage 235 may contain one or more components or devices supporting longer-term storage of data, such as a read only memory, hard drive, Flash memory, or optical disc.
The communications unit 220 supports communications with other systems or devices. For example, the communications unit 220 could include a network interface card or a wireless transceiver facilitating communications over the network 102. The communications unit 220 may support communications through any suitable physical or wireless communication link(s).
The I/O unit 225 allows for input and output of data. For example, the I/O unit 225 may provide a connection for user input through a keyboard, mouse, keypad, touchscreen, or other suitable input device. The I/O unit 225 may also send output to a display, printer, or other suitable output device.
In this illustrative embodiment, server 200 may implement an apparatus that provides for generating omni media texture mapping metadata, as will be discussed in greater detail below. Note that while FIGURE 2 is described as representing the server 104 of FIGURE 1, the same or similar structure could be used in one or more of the client devices 106-114. For example, a laptop or desktop computer could have the same or similar structure as that shown in FIGURE 2.
As shown in FIGURE 3, the client device 300 includes an antenna 305, a radio frequency (RF) transceiver 310, transmit (TX) processing circuitry 315, a microphone 320, and receive (RX) processing circuitry 325. The client device 300 also includes a speaker 330, one or more processors 340, an input/output (I/O) interface (IF) 345, a touchscreen 350, a display 355, and a memory 360. The memory 360 includes a basic operating system (OS) program 361 and one or more applications 362.
The RF transceiver 310 receives, from the antenna 305, an incoming RF signal transmitted by another component in a system. The RF transceiver 310 down-converts the incoming RF signal to generate an intermediate frequency (IF) or baseband signal. The IF or baseband signal is sent to the RX processing circuitry 325, which generates a processed baseband signal by filtering, decoding, and/or digitizing the baseband or IF signal. The RX processing circuitry 325 transmits the processed baseband signal to the speaker 330 (such as for voice data) or to the processor(s) 340 for further processing (such as for web browsing data).
The TX processing circuitry 315 receives analog or digital voice data from the microphone 320 or other outgoing baseband data (such as web data, e-mail, or interactive video game data) from the processor(s) 340. The TX processing circuitry 315 encodes, multiplexes, and/or digitizes the outgoing baseband data to generate a processed baseband or IF signal. The RF transceiver 310 receives the outgoing processed baseband or IF signal from the TX processing circuitry 315 and up-converts the baseband or IF signal to an RF signal that is transmitted via the antenna 305.
The processor(s) 340 can include one or more processors or other processing devices and execute the basic OS program 361 stored in the memory 360 in order to control the overall operation of the client device 300. For example, the processor(s) 340 could control the reception of forward channel signals and the transmission of reverse channel signals by the RF transceiver 310, the RX processing circuitry 325, and the TX processing circuitry 315 in accordance with well-known principles. In some embodiments, the processor(s) 340 includes at least one microprocessor or microcontroller.
The processor(s) 340 is also capable of executing other processes and programs resident in the memory 360. The processor(s) 340 can move data into or out of the memory 360 as required by an executing process. In some embodiments, the processor(s) 340 is configured to execute the applications 362 based on the OS program 361 or in response to signals received from external devices or an operator. The processor(s) 340 is also coupled to the I/O interface 345, which provides the client device 300 with the ability to connect to other devices such as laptop computers and handheld computers. The I/O interface 345 is the communication path between these accessories and the processor(s) 340.
The processor(s) 340 is also coupled to the touchscreen 350 and the display unit 355. The operator of the client device 300 can use the touchscreen 350 to enter data into the client device 300. The display 355 may be a liquid crystal display or other display capable of rendering text and/or at least limited graphics, such as from web sites.
The memory 360 is coupled to the processor(s) 340. Part of the memory 360 could include a random access memory (RAM), and another part of the memory 360 could include a flash memory or other read-only memory (ROM).
As will be discussed in greater detail below, in this illustrative embodiment, client device 300 receives a signal indicating a shape of a geometric frame and an area of interest in a planar frame. Although FIGURES 2 and 3 illustrate examples of devices in a computing system, various changes may be made to FIGURES 2 and 3. For example, various components in FIGURES 2 and 3 could be combined, further subdivided, or omitted and additional components could be added according to particular needs. As a particular example, the processor(s) 340 could be divided into multiple processors, such as one or more central processing units (CPUs) and one or more graphics processing units (GPUs). Also, while FIGURE 3 illustrates the client device 300 configured as a mobile telephone or smartphone, client devices could be configured to operate as other types of mobile or stationary devices. In addition, as with computing and communication networks, client devices and servers can come in a wide variety of configurations, and FIGURES 2 and 3 do not limit this disclosure to any particular client device or server.
FIGURE 4 illustrates an example normal spherical frame 400 and a squished spherical frame 405 according to an embodiment of this disclosure. The embodiment of the spherical frames 400 and 405 for omni media texture mapping metadata shown in FIGURE 4 is for illustration only. Other embodiments of the spherical frames 400 and 405 for omni media texture mapping metadata may be used without departing from the scope of this disclosure.
The spherical frame 400 is defined with a center point 410. The center point 410 represents the point of view of the spherical frame 400. When viewing the spherical frame 400, certain regions have more detail than others. To optimize an equirectangular projection, the regions with less detail are condensed, producing the "squished" spherical frame 405.
A portion of the spherical frame 400 comprising a top region 415 and a bottom region 420 is squished, while the rest of the sphere is unchanged. As drawn in the figure, the top height 425 of the top region 415 in the spherical frame 400 starts at a top angle 435 from the equator 445 indicated by squish_start_pitch_top. The top region 415 is squished into the squished top region 450 to the top squished height 460 by the ratio given by squish_ratio_top, where the value of the squish ratio is normalized to 255.
The bottom height 430 of the bottom region 420 in the spherical frame 400 starts at a bottom angle 440 from the equator 445 indicated by squish_start_pitch_bottom. The bottom region 420 is squished into the squished bottom region 455 to the bottom squished height 465 by the ratio given by squish_ratio_bottom, where the value of the squish ratio is normalized to 255.
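As a rough illustration of the squish operation, the following Python sketch computes the squished heights of the top and bottom regions; the interpretation of the ratio as a factor normalized to 255 that scales the original region heights is an assumption for illustration only.

import math

def squished_heights(radius, squish_start_pitch_top, squish_start_pitch_bottom,
                     squish_ratio_top, squish_ratio_bottom):
    # Sketch only: pitch angles are in degrees measured from the equator, and
    # the squish ratios are assumed to be normalized to 255 as described above.
    top_height = radius - radius * math.sin(math.radians(squish_start_pitch_top))
    bottom_height = radius - radius * math.sin(math.radians(squish_start_pitch_bottom))
    top_squished = top_height * (squish_ratio_top / 255.0)
    bottom_squished = bottom_height * (squish_ratio_bottom / 255.0)
    return top_squished, bottom_squished

# Example: squish everything above and below 60 degrees to roughly half height.
print(squished_heights(1.0, 60, 60, 128, 128))
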
Although FIGURE 4 illustrates one example of the spherical frame 400 and squished spherical frame 405 for omni media texture mapping metadata, various changes may be made to FIGURE 4. For example, various components in FIGURE 4 may be combined, further subdivided, or omitted and additional components may be added according to particular needs.
FIGURE 5 illustrates an example mapping 500 of a planar frame 505 to a cylindrical frame 510 according to an embodiment of this disclosure. The embodiment of the example mapping 500 for omni media texture mapping metadata shown in FIGURE 5 is for illustration only. Other embodiments of the example mapping 500 for omni media texture mapping metadata may be used without departing from the scope of this disclosure.
A planar frame 505 to be mapped to a cylindrical frame 510 includes a rectangular area 515 on one side of the planar frame 505 and a top circular area 520 and a bottom circular area 525 on the other side of the planar frame 505. The rectangular area 515 is defined by a rectangular width 530 and a rectangular height 535. The top circular area 520 and the bottom circular area 525 are defined by a radius 540 that is the value of the radius field. The planar frame 505 also includes extra space 545 that is not used for the mapping of the cylindrical frame 510, but could be used for other purposes.
The rectangular area 515 of the planar frame 505 is mapped to the center portion of the cylindrical frame 510 such that its width wraps around to form the circular cross section of the cylinder. The top circular area 520 of the planar frame 505 is mapped to the top of the cylindrical frame 510 and the bottom circular area 525 of the planar frame 505 is mapped to the bottom of the cylindrical frame 510.
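As an illustration of this mapping, the following sketch maps a pixel of the rectangular area 515 to the side of the cylinder and a pixel of a circular area to one of the caps; the assumptions that the rectangular width wraps a full 360 degrees of yaw and that the rectangular height spans the full cylinder height are for illustration only.

import math

def side_pixel_to_cylinder(u, v, rect_width, rect_height, radius, cyl_height):
    # Sketch only: (u, v) is a pixel inside the rectangular area.
    yaw = (u / rect_width) * 2.0 * math.pi     # angle around the cylinder axis
    z = (v / rect_height) * cyl_height         # height along the axis
    x = radius * math.cos(yaw)
    y = radius * math.sin(yaw)
    return x, y, z

def cap_pixel_to_cylinder(u, v, center_u, center_v, radius, z_cap):
    # Sketch only: (u, v) is a pixel inside a circular cap area centered at
    # (center_u, center_v); pixels outside the circle are not mapped.
    x = u - center_u
    y = v - center_v
    if x * x + y * y > radius * radius:
        return None
    return x, y, z_cap
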
Although FIGURE 5 illustrates one example of example mapping 500, various changes may be made to FIGURE 5. For example, various components in FIGURE 5 may be combined, further subdivided, or omitted and additional components may be added according to particular needs.
FIGURES 6A and 6B illustrate an example mapping of a planar frame 600 to a cubical frame 605 according to an embodiment of this disclosure. The embodiment of the planar frame 600 and cubical frame 605 for omni media texture mapping metadata shown in FIGURES 6A AND 6B is for illustration only. Other embodiments of the planar frame 600 and cubical frame 605 for omni media texture mapping metadata may be used without departing from the scope of this disclosure.
A planar frame 600 to be mapped to a cubical frame 605 includes six rectangular areas (a front area 610, a top area 615, a bottom area 620, a left area 625, a right area 630, and a back area 635) and an extra area 640. When mapping the planar frame 600 to a cubical frame 605, the front area 610 is mapped to the front region 645, the top area 615 is mapped to the top region 650, the bottom area 620 is mapped to the bottom region 655, the left area 625 is mapped to the left region 660, the right area 630 is mapped to the right region 665, and the back area 635 is mapped to the back region 670. The extra space 640 is not used for the mapping of the cubical frame 605, but could be used for other purposes. The location and size of each region of the cubical frame 605 are indicated in an OmniMediaTextureMappingMetadataSampleBox. For example, the OmniMediaTextureMappingMetadataSampleBox could be written as:
Figure PCTKR2017001734-appb-I000001
The parameters "center_x" and "center_y" indicate, respectively, the horizontal and vertical coordinates of the pixel to be rendered at the center of the geometrical frame. The pixel data at this location of the planar frame in the referenced track is rendered at the point described in the following table 1 below according to the type of the geometrical frame. A geometrical frame is a three dimensional frame that corresponds to a specific geometry, such as cubical, cylindrical, pyramidal, etc.
Figure PCTKR2017001734-appb-T000001
The parameters "number_regions" indicates the number of regions to divide the planar frame in the referenced track. The planar frame in the referenced track is divided into the number of non-overlapping areas as given by the value of this field and each area are separately mapped to the specific regions of the geometrical frame.
The parameters "region_top_left_x" and "region_top_left_y" indicate respectively the horizontal and vertical coordinate of the top-left corner of the area of the planar frame in the referenced track in rectangular frame.
The parameters "region_width" and "region_height" indicate respectively the width and height of an area of the planar frame in the referenced track in a rectangular frame.
The parameters "pitch_start" and "pitch_end" indicate respectively the starting and ending pitch angles of the specific region of the geometrical frame.
The parameters "yaw_start" and "yaw_end" indicate respectively the starting and ending yaw angles of the specific region of the geometrical frame.
The parameters "start_height" and "end_height" indicate respectively the normalized starting and ending height of the specific area of the geometry in cylinder frame.
The parameter "surface_id" indicates the identifier of the region of either cubical frame or pyramidal frame.
The parameter "virtual_sphere_type" in the following table 2 indicates the relationship between the 3D geometry to render the planar frame and the spherical frame to represent the 3D regions. A 3D geometry to render 2D video is defined as a list of combination of triangular faces. Each triangular face on a 3D surface defined by three vertices on a 3D surface has one and only one corresponding triangle on a 2D video defined by three 2D texture coordinates.
Figure PCTKR2017001734-appb-T000002
Although FIGURES 6A and 6B illustrate one example of the planar frame 600 and cubical frame 605, various changes may be made to FIGURES 6A and 6B. For example, various components in FIGURES 6A and 6B may be combined, further subdivided, or omitted and additional components may be added according to particular needs.
FIGURES 7A and 7B illustrate an example mapping of a planar frame 700 to a pyramidal frame 705 according to an embodiment of this disclosure. The embodiment of the planar frame 700 and pyramidal frame 705 for omni media texture mapping metadata shown in FIGURES 7A AND 7B is for illustration only. Other embodiments of the planar frame 700 and pyramidal frame 705 for omni media texture mapping metadata may be used without departing from the scope of this disclosure.
A planar frame 700 to be mapped to a pyramidal frame 705 includes a number of areas that depends on the shape of the front region 735. For example, an octagonal front region 735 yields a total of nine regions: a front region 735 and eight triangular regions, one from each side of the front region. In the illustrated embodiment, the front region 735 is rectangular and the pyramidal frame 705 has a total of five regions: a front region 735, a top region 740, a bottom region 745, a left region 750, and a right region 755. The planar frame 700 corresponding to the pyramidal frame 705 includes a front area 710, a top area 715, a bottom area 720, a left area 725, and a right area 730. When mapping the planar frame 700 to the pyramidal frame 705, the front area 710 is mapped to the front region 735, the top area 715 is mapped to the top region 740, the bottom area 720 is mapped to the bottom region 745, the left area 725 is mapped to the left region 750, and the right area 730 is mapped to the right region 755. The location and size of the areas of the planar frame 700 to be mapped to the regions of the pyramidal frame 705 are indicated by the OmniMediaTextureMappingMetadataSampleBox and are defined in the following table 3:
Figure PCTKR2017001734-appb-T000003
where wv is the width of the planar frame 700, w1 is the width of the front area 710, hv is the height of the planar frame 700, and h1 is the height of the front area 710.
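The exact placement of each area is given by table 3. Purely as an illustration, and assuming the front area is centered within the planar frame (an assumption for this sketch, not the normative layout), the bounds of the front area could be computed as follows.

def front_area_bounds(wv, hv, w1, h1):
    # Sketch only: assumes the front area is centered in the planar frame of
    # size (wv, hv); the normative placement is defined by table 3.
    left = (wv - w1) / 2.0
    top = (hv - h1) / 2.0
    return left, top, left + w1, top + h1

# Example: a 4000x2000 planar frame with a 2000x1000 front area.
print(front_area_bounds(4000, 2000, 2000, 1000))   # (1000.0, 500.0, 3000.0, 1500.0)
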
Although FIGURES 7A and 7B illustrate one example of the planar frame 700 and pyramidal frame 705, various changes may be made to FIGURES 7A and 7B. For example, various components in FIGURES 7A and 7B may be combined, further subdivided, or omitted and additional components may be added according to particular needs.
FIGURE 8 illustrates an example mapping 800 of an area 815 of a planar frame 805 to a region 820 of a spherical frame 810 according to an embodiment of this disclosure. The embodiment of the mapping 800 for omni media texture mapping metadata shown in FIGURE 8 is for illustration only. Other embodiments of the mapping 800 for omni media texture mapping metadata may be used without departing from the scope of this disclosure.
A location and dimensions are required for the mapping 800 of an area 815 of a planar frame 805 to a region 820 of a spherical frame 810. The area 815 corresponds to a portion of the planar frame 805 that may require more or less definition than the other portion 825 of the planar frame 805. The planar frame 805 can include multiple areas 815, where each area 815 can require more or less definition than other areas 815.
Because the spherical frame 810 is not flat, portions of the area 815 of the planar frame 805 are squished or stretched. For the spherical frame 810 or squished sphere, the area 815 is identified at location point 830 with a width 835 and height 840. The area 815, defined by a portion of the planar frame 805 starting from (top_left_x, top_left_y) and ending at (top_left_x+width, top_left_y+height), is mapped to the region 820 on a spherical frame 810 where the yaw and pitch values of the vertices are between variables "yaw_start" 845 and "yaw_end" 850, and between variables "pitch_start" 855 and "pitch_end" 860, respectively. The "yaw_start" 845 and "yaw_end" 850 identify the sides of the region 820 on the spherical frame 810 and the "pitch_start" 855 and "pitch_end" 860 identify the top and bottom of the region 820 on the spherical frame 810.
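As a rough sketch of this mapping, a pixel inside the area 815 could be assigned yaw and pitch values by interpolating between the region boundaries; the simple linear interpolation below is an assumption for illustration, while the normative mapping follows the projection of the chosen geometry.

def area_pixel_to_sphere(px, py, top_left_x, top_left_y, width, height,
                         yaw_start, yaw_end, pitch_start, pitch_end):
    # Sketch only: (px, py) is a pixel inside the planar area.
    u = (px - top_left_x) / float(width)     # 0.0 at the left edge, 1.0 at the right
    v = (py - top_left_y) / float(height)    # 0.0 at the top edge, 1.0 at the bottom
    yaw = yaw_start + u * (yaw_end - yaw_start)
    pitch = pitch_start + v * (pitch_end - pitch_start)
    return yaw, pitch
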
Although FIGURE 8 illustrates one example of mapping 800, various changes may be made to FIGURE 8. For example, various components in FIGURE 8 may be combined, further subdivided, or omitted and additional components may be added according to particular needs.
FIGURE 9 illustrates an example mapping 900 of an area 915 of a planar frame 905 to a region 920 of a cylindrical frame 910 according to an embodiment of this disclosure. The embodiment of the mapping 900 for omni media texture mapping metadata shown in FIGURE 9 is for illustration only. Other embodiments of the mapping 900 for omni media texture mapping metadata may be used without departing from the scope of this disclosure.
A location and dimensions are required for the mapping 900 of an area 915 of a planar frame 905 to a region 920 of a cylindrical frame 910. The area 915 corresponds to a portion of the planar frame 905 that may require more or less definition than the other portion 925 of the planar frame 905. The planar frame 905 can include multiple areas 915, where each area 915 can require more or less definition than other areas 915.
Because the cylindrical frame 910 is not flat, portions of the area 915 of the planar frame 905 are squished or stretched. For the cylindrical frame 910, the area 915 is identified at location point 930 with a width 935 and height 940. The area 915, defined by a portion of the planar frame 905 starting from (top_left_x, top_left_y) and ending at (top_left_x+width, top_left_y+height), is mapped to the region 920 on a cylindrical frame 910 where the yaw and height values of the vertices are between variables "yaw_start" 945 and "yaw_end" 950, and between variables "height_start" 955 and "height_end" 960, respectively. The "yaw_start" 945 and "yaw_end" 950 identify the sides of the region 920 on the cylindrical frame 910 and the "height_start" 955 and "height_end" 960 identify the top and bottom of the region 920 on the cylindrical frame 910.
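A corresponding sketch for the cylindrical case follows; again, the linear interpolation and the vertical orientation convention (which edge of the area maps to height_start) are assumptions for illustration.

def area_pixel_to_cylinder(px, py, top_left_x, top_left_y, width, height,
                           yaw_start, yaw_end, height_start, height_end):
    # Sketch only: (px, py) is a pixel inside the planar area; yaw is in
    # degrees and the height values are normalized as described above.
    u = (px - top_left_x) / float(width)
    v = (py - top_left_y) / float(height)
    yaw = yaw_start + u * (yaw_end - yaw_start)
    h = height_start + v * (height_end - height_start)
    return yaw, h
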
Although FIGURE 9 illustrates one example of mapping 900, various changes may be made to FIGURE 9. For example, various components in FIGURE 9 may be combined, further subdivided, or omitted and additional components may be added according to particular needs.
FIGURE 10 illustrates an example mapping 1000 of an area 1015 of a planar frame 1005 to a region 1020 of a cubical frame 1010 according to an embodiment of this disclosure. The embodiment of the mapping 1000 for omni media texture mapping metadata shown in FIGURE 10 is for illustration only. Other embodiments of the mapping 1000 for omni media texture mapping metadata may be used without departing from the scope of this disclosure.
A location and dimensions are required for the mapping 1000 of an area 1015 of a planar frame 1005 to a region 1020 of a cubical frame 1010. The area 1015 corresponds to a portion of the planar frame 1005 that may require more or less definition than the other portion 1025 of the planar frame 1005. The planar frame 1005 can include multiple areas 1015, where each area 1015 can require more or less definition than other areas 1015.
For the cubical frame 1010, the area 1015 is identified at location point 1030 with a width 1035 and height 1040. The area 1015, defined by a portion of the planar frame 1005 starting from (top_left_x, top_left_y) and ending at (top_left_x+width, top_left_y+height), is mapped to the region 1020 on the cubical frame 1010, where the region 1020 starts at the location point 1045 defined by (area_top_left_x, area_top_left_y) and ends at (area_top_left_x+area_width, area_top_left_y+area_height). The variable "width" 1050 identifies the sides of the region 1020 on the cubical frame 1010 and the variable "height" 1055 identifies the top and bottom of the region 1020 on the cubical frame 1010. For a pyramidal frame, the regional mapping is applied to the front surface.
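A corresponding sketch for the cubical case maps a pixel of the planar area to a point of the rectangular region on the selected face (the face itself being indicated by surface_id); the linear interpolation is an assumption for illustration.

def area_pixel_to_cube_face(px, py, top_left_x, top_left_y, width, height,
                            area_top_left_x, area_top_left_y, area_width, area_height):
    # Sketch only: returns the point on the selected cube face that the
    # planar-area pixel (px, py) maps to.
    u = (px - top_left_x) / float(width)
    v = (py - top_left_y) / float(height)
    face_x = area_top_left_x + u * area_width
    face_y = area_top_left_y + v * area_height
    return face_x, face_y
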
Although FIGURE 10 illustrates one example of mapping 1000, various changes may be made to FIGURE 10. For example, various components in FIGURE 10 may be combined, further subdivided, or omitted and additional components may be added according to particular needs.
FIGURES 11A and 11B illustrate an example projection 1100 of a spherical frame onto a cubical frame surrounding the spherical frame according to an embodiment of this disclosure. The embodiments of the projection 1100 for omni media texture mapping metadata shown in FIGURES 11A and 11B are for illustration only. Other embodiments of the projection 1100 for omni media texture mapping metadata may be used without departing from the scope of this disclosure. FIGURE 11B is a cross section 1105 of the projection 1100 of FIGURE 11A. The rendering geometry, illustrated as the cubical frame 1115, can take any geometric form, such as pyramidal.
The spherical frame 1110 is located inside the cubical frame 1115 as the largest sphere that fits within the cubical frame 1115, touching at least one surface of the cubical frame 1115. At the center of the spherical frame 1110 is a viewpoint 1120 from which a viewer is meant to view the video. To project a region 1130 of the spherical frame 1110 onto a corresponding region 1130 on the cubical frame 1115, the pixels are mapped based on the viewpoint 1120. A pixel's location on the cubical frame 1115 is determined by an imaginary straight line from the viewpoint 1120 to the cubical frame 1115; the point on the cubical frame 1115 corresponds to the point where the imaginary line crosses the spherical frame 1110.
For the case where the rendering geometry is defined as a list of parameters, the area starting from (top_left_x, top_left_y) and ending at (top_left_x+width, top_left_y+height) is mapped to the region on the rendering geometry corresponding to the region on the spherical frame where the yaw and pitch values of the vertices of the virtual sphere are between yaw_start and yaw_end, and between pitch_start and pitch_end, respectively. The spherical frame is defined as a sphere whose volume is smaller than that of the rendering geometry when the value of virtual_sphere_type is equal to 0x02. In this case, the spherical frame is the largest sphere fully contained in the rendering geometry. When the value of virtual_sphere_type is equal to 0x02, the region on the rendering geometry is bounded by the four points where four lines, each starting from the center of the sphere, passing through one of the four points (yaw_start, pitch_start), (yaw_start, pitch_end), (yaw_end, pitch_start), and (yaw_end, pitch_end) on the virtual sphere, and extended outward, intersect the rendering geometry.
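As an illustration of this construction for a cubical rendering geometry, the following sketch extends a ray from the center through a (yaw, pitch) point on the inscribed virtual sphere until it reaches the cube surface; the axis-aligned, origin-centered cube and the angle conventions (yaw about the vertical axis, pitch measured from the equator) are assumptions for illustration.

import math

def ray_to_cube(yaw_deg, pitch_deg, half_size):
    # Sketch only: intersect the ray through the (yaw, pitch) point on the
    # inscribed virtual sphere with a cube whose faces lie at +/- half_size.
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    dx = math.cos(pitch) * math.cos(yaw)
    dy = math.cos(pitch) * math.sin(yaw)
    dz = math.sin(pitch)
    t = half_size / max(abs(dx), abs(dy), abs(dz))   # scale to reach the cube surface
    return dx * t, dy * t, dz * t

# The four corner points of the region on the cube come from the four
# (yaw, pitch) combinations named above; the angle values here are examples.
corners = [ray_to_cube(y, p, 1.0) for y in (30, 60) for p in (-15, 15)]
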
Although FIGURES 11A and 11B illustrate one example of projection 1100, various changes may be made to FIGURES 11A and 11B. For example, various components in FIGURES 11A and 11B may be combined, further subdivided, or omitted and additional components may be added according to particular needs.
FIGURES 12A and 12B illustrate an example projection 1200 of a spherical frame 1210 onto a cubical frame 1215 surrounded by the spherical frame according to an embodiment of this disclosure. The embodiments of the projection 1200 for omni media texture mapping metadata shown in FIGURES 12A and 12B are for illustration only. Other embodiments of the projection 1200 for omni media texture mapping metadata may be used without departing from the scope of this disclosure. FIGURE 12B is a cross section 1205 of the projection 1200 of FIGURE 12A. The rendering geometry, illustrated as the cubical frame 1215, can take any geometric form, such as pyramidal.
The spherical frame 1210 is located outside the cubical frame 1215 as the smallest sphere that contains the cubical frame 1215, contacting it at the vertices. At the center of the spherical frame 1210 is a viewpoint 1220 from which a viewer is meant to view the video. To project a region 1230 of the spherical frame 1210 onto a corresponding region 1230 on the cubical frame 1215, the pixels are mapped based on the viewpoint 1220. A pixel's location on the cubical frame 1215 is determined by an imaginary straight line from the viewpoint 1220 to the spherical frame 1210; the point on the cubical frame 1215 corresponds to the point where the imaginary line crosses the cubical frame 1215.
Although FIGURES 12A and 12B illustrate one example of projection 1200, various changes may be made to FIGURES 12A and 12B. For example, various components in FIGURES 12A and 12B may be combined, further subdivided, or omitted and additional components may be added according to particular needs.
For the case where the rendering geometry is defined as a list of parameters, the area starting from (top_left_x, top_left_y) and ending at (top_left_x+width, top_left_y+height) is mapped to the region on the rendering geometry corresponding to the region on the spherical frame where the yaw and pitch values of the vertices of the virtual sphere are between yaw_start and yaw_end, and between pitch_start and pitch_end, respectively. The spherical frame is defined as a sphere whose volume is bigger than the volume of the rendering geometry when the value of virtual_sphere_type is equal to 0x01. In this case, all vertices of the rendering geometry that have the largest distance from the center of the rendering geometry lie on the spherical frame. When the value of virtual_sphere_type is equal to 0x01, the region on the rendering geometry is bounded by the four points where four lines, each drawn from the center of the sphere to one of the four points (yaw_start, pitch_start), (yaw_start, pitch_end), (yaw_end, pitch_start), and (yaw_end, pitch_end) on the virtual sphere, intersect the rendering geometry.
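For this circumscribed case, the virtual sphere radius equals the largest vertex distance of the rendering geometry, as in the following sketch; the cube example and the origin-centered assumption are for illustration only, and the region corners are then found with the same ray construction as above.

import math

def circumscribed_sphere_radius(vertices):
    # Sketch only: radius of the virtual sphere for virtual_sphere_type 0x01,
    # taken as the largest distance from the geometry center (assumed at the
    # origin) to any vertex of the rendering geometry.
    return max(math.sqrt(x * x + y * y + z * z) for x, y, z in vertices)

# For a cube with half-size s the eight vertices lie at (+/- s, +/- s, +/- s),
# so the circumscribed sphere radius is s * sqrt(3).
s = 1.0
cube_vertices = [(sx * s, sy * s, sz * s) for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
assert abs(circumscribed_sphere_radius(cube_vertices) - s * math.sqrt(3)) < 1e-9
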
FIGURE 13 illustrates an example process for omni media texture mapping metadata in a video processor according to an embodiment of this disclosure. For example, the process depicted in FIGURE 13 may be performed by the video processor in FIGURE 2.
In operation 1305, the video processor identifies a shape of a geometric frame for a video. Examples of shapes of the geometric frame include spherical frames, squished spherical frames, cylindrical frames, pyramidal frames, cubical frames, etc. The shape can be stored along with the video in the video processor, or the video processor can determine the shape based on the surfaces of the video.
In operation 1310, the video processor identifies a region of interest on the geometric frame. The region of interest is identified as containing an above or below average amount of detail. Regions of interest with more detail are assigned a higher resolution and regions of interest with less detail are assigned a lower resolution. The region of interest is located based on geometric parameters.
In operation 1315, the video processor maps the geometric frame to a planar frame with the region of interest from the geometric frame indicated as an area of interest on the planar frame. The area of interest has a different resolution than other portions of the planar frame. The area of interest is located by planar parameters, including a location point, a width, and a height. The planar parameters correspond to the geometric parameters. When the shape is a spherical frame, the geometric parameters are pitch start, pitch end, yaw start and yaw end. When the shape is a cylindrical frame, the geometric parameters include yaw start, yaw end, height start and height end. When the shape is a cubical frame, the geometric parameters include a geometric location point, a geometric width, and a geometric height.
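As a small sketch of how these per-shape parameter sets could be organized, the following dispatch selects the geometric parameters that locate the region of interest; the shape names and dictionary layout are illustrative assumptions.

def geometric_parameters(shape, params):
    # Sketch only: pick the geometric parameters that locate the region of
    # interest for a given shape; shape names are illustrative.
    if shape == "sphere":
        keys = ("pitch_start", "pitch_end", "yaw_start", "yaw_end")
    elif shape == "cylinder":
        keys = ("yaw_start", "yaw_end", "height_start", "height_end")
    elif shape == "cube":
        keys = ("area_top_left_x", "area_top_left_y", "area_width", "area_height")
    else:
        raise ValueError(f"unsupported shape: {shape}")
    return {k: params[k] for k in keys}
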
In operation 1320, the video processor generates and transmits a signal indicating the shape and the area of interest to a video player. The signal also includes the resolution of the area of interest, as well as the planar parameters and geometric parameters. Other parameters can include the following:
The parameter "geometry_type" in the following table 4 indicates the type of geometry for rendering of omnidirectional media. Mathematical representation of each geometry type.
Figure PCTKR2017001734-appb-T000004
The parameter "projection_type" in the following table 5 indicates the method to be used for mapping of texture in the video in the referenced track onto the geometry for rendering of omnidirectional media. Mathematical representation of each projection method for specific geometry type.
Figure PCTKR2017001734-appb-T000005
The parameter "stereoscopic" indicates whether stereoscopic media rendering is used or not. If the value of this field is '1', the video in the referenced track shall be divided into two parts to provide different texture data for left eye and right eye separately according to the composition type specified by stereoscopic_type.
The parameter "multiple_regions" indicates whether video is divided into multiple regions, where each region may have different resolutions or squish factors. If the value of this field '1', the video in the referenced track shall be divided into multiple non-overlapping regions, where each region shall provide texture data for specific area of geometry.
The parameter "entire_volume" indicates whether video covers entire volume of geometry. If the value of this field is '1', entire volume of the geometry is rendered with the video in the referenced track. If the value of this field is '0', texture of some area of the geometry is provided by the mean other than the video in the referenced track.
The parameter "static" indicates whether the texture mapping is changed over time. If the value of this filed is '1', mapping is not changed for duration of entire video in the referenced track. If the value of this filed is '0', mapping is changed over time.
The parameter "static_top" indicates whether the texture data other than video in the referenced track is provided. If the value of this field is '1', the image data to be used as texture for top region of the geometry shall be provided.
The parameter "static_bottom" indicates whether the texture data other than video in the referenced track is provided. If the value of this field is '1', the image data to be used as texture for bottom region of the geometry is provided.
The parameter "radius" indicates the radius of circular shaped area for top and bottom regions of cylindrical frame. The area for the texture of top region is located at the top right corner of the planar frame in circular shape with the radius indicated by this field. The area for the texture of bottom surface is located at the bottom right corner of the video in circular shape with the radius indicated by this field.
The parameter "stereoscopic_type" in the following table 6 indicates the type of composition for the stereoscopic video in the referenced track.
Figure PCTKR2017001734-appb-T000006
The parameters "squish_start_pitch_top" and "squish_start_pitch_bottom" indicate, respectively, the pitch angle of top and the bottom of the spherical frame where the squishing applied. The top and bottom portion of sphere indicated by these fields are squished with the ratio given by the value of the field squish_ratio.
The parameter "squish_ratio" indicates the ratio of squishing for the squished spherical frame.
Although FIGURE 13 illustrates an example process for omni media texture mapping metadata in a video processor, various changes may be made to FIGURE 13. For example, while shown as a series of steps, various steps may overlap, occur in parallel, occur in a different order, occur multiple times, or not be performed in certain embodiments.
FIGURE 14 illustrates an example process 1400 for omni media texture mapping metadata in a video player according to an embodiment of this disclosure. For example, the process depicted in FIGURE 14 may be performed by the video player in FIGURE 3.
In operation 1405, the video player receives a signal indicating a shape of a geometric frame for a video and an area of interest on a planar frame. The area of interest has a different resolution than the other portion of the planar frame. The signal can include multiple areas of interest for the planar frame. The area of interest is located by planar parameters including a location point, a width, and a height.
In operation 1410, the video player maps the area of interest on the planar frame to a region of interest on the geometric frame based on the shape of the geometric frame. The region of interest is located based on geometric parameters corresponding to the planar parameters of the planar frame. When the shape is a spherical frame, the geometric parameters are pitch start, pitch end, yaw start and yaw end. When the shape is a cylindrical frame, the geometric parameters include yaw start, yaw end, height start and height end. When the shape is a cubical frame, the geometric parameters include a geometric location point, a geometric width, and a geometric height.
In operation 1415, the video player generates the geometric frame with the region of interest.
Although FIGURE 14 illustrates an example process 1400 for omni media texture mapping metadata in a video player, various changes may be made to FIGURE 14. For example, while shown as a series of steps, various steps may overlap, occur in parallel, occur in a different order, occur multiple times, or not be performed in certain embodiments.
None of the description in this application should be read as implying that any particular element, step, or function is an essential element that must be included in the claim scope. The scope of patented subject matter is defined only by the claims. Use of any other term, including without limitation 'mechanism,' 'module,' 'device,' 'unit,' 'component,' 'element,' 'member,' 'apparatus,' 'machine,' 'system,' 'processor,' or 'controller,' within a claim is understood by the applicants to refer to structures known to those skilled in the relevant art.
Although the present disclosure has been described with an exemplary embodiment, various changes and modifications may be suggested to one skilled in the art. It is intended that the present disclosure encompass such changes and modifications as fall within the scope of the appended claims.

Claims (15)

  1. An electronic device for generating omnidirectional media texture mapping metadata, the electronic device comprising:
    a memory;
    a processor operably connected to the memory, the processor configured to:
    identify a shape of a geometric frame for a video;
    identify a region of interest on the geometric frame;
    map the geometric frame to a planar frame with the region of interest from the geometric frame indicated as an area of interest on the planar frame; and
    generate a signal indicating the shape and the area of interest; and
    a transceiver configured to transmit, to a device, the signal.
  2. The electronic device of Claim 1, wherein:
    the area of interest has a different resolution than other portions of the planar frame; and
    the signal includes the different resolution for the area of interest.
  3. The electronic device of Claim 1, wherein:
    the area of interest is located by planar parameters on the planar frame, and
    the planar parameters include a location point, a width, and a height.
  4. The electronic device of Claim 3, wherein the region of interest is located based on geometric parameters corresponding to the planar parameters of the planar frame.
  5. The electronic device of Claim 4, wherein, when the shape is a spherical frame, the geometric parameters include a pitch start, a pitch end, a yaw start and a yaw end, and
    when the shape is a cylindrical frame, the geometric parameters include a yaw start, a yaw end, a height start, and a height end, and
    when the shape is a cubical frame, the geometric parameters include a geometric location point, a geometric width, and a geometric height.
  6. The electronic device of Claim 4, wherein, when the shape is a parameterized frame, the geometric parameters include a pitch start, a pitch end, a yaw start and a yaw end, of either a largest sphere fully contained in the parameterized frame or a smallest sphere containing the parameterized frame.
  7. A device for generating omnidirectional media texture mapping metadata, the device comprising:
    a transceiver configured to receive, from an electronic device, a signal indicating a shape of a geometric frame for a video and an area of interest on a planar frame;
    a memory; and
    a processor operably connected to the memory, the processor configured to:
    map the area of interest on the planar frame to a region of interest on the geometric frame based on the shape of the geometric frame; and
    generate the geometric frame with the region of interest.
  8. The device of Claim 7, wherein:
    the area of interest has a different resolution than other portions of the planar frame; and
    the signal includes the different resolution for the area of interest.
  9. The device of Claim 7, wherein:
    the area of interest is located by planar parameters on the planar frame, and
    the planar parameters include a location point, a width, and a height.
  10. The device of Claim 9, wherein the region of interest is located based on geometric parameters corresponding to the planar parameters of the planar frame.
  11. The device of Claim 10, wherein, when the shape is a spherical frame, the geometric parameters include a pitch start, a pitch end, a yaw start and a yaw end,
    when the shape is a cylindrical frame, the geometric parameters include a yaw start, a yaw end, a height start, and a height end, and
    when the shape is a cubical frame, the geometric parameters include a geometric location point, a geometric width, and a geometric height.
  12. The device of Claim 10, wherein, when the shape is a parameterized frame, the geometric parameters include a pitch start, a pitch end, a yaw start and a yaw end, of either a largest sphere fully contained in the parameterized frame or a smallest sphere containing the parameterized frame.
  13. A method for generating omnidirectional media texture mapping metadata in an electronic device, the method comprising:
    identifying a shape of a geometric frame for a video;
    identifying a region of interest on the geometric frame;
    mapping the geometric frame to a planar frame with the region of interest from the geometric frame indicated as an area of interest on the planar frame;
    generating a signal indicating the shape and the area of interest; and
    transmitting, to a device, the signal.
  14. A method for generating omnidirectional media texture mapping metadata in a device, the method comprising:
    receiving, from an electronic device, a signal indicating a shape of a geometric frame for a video and an area of interest on a planar frame;
    mapping the area of interest on the planar frame to a region of interest on the geometric frame based on the shape of the geometric frame; and
    generating the geometric frame with the region of interest.
  15. The method of Claim 14, wherein:
    the area of interest has a different resolution than other portions of the planar frame; and
    the signal includes the different resolution for the area of interest.
PCT/KR2017/001734 2016-02-16 2017-02-16 Method and apparatus for generating omni media texture mapping metadata WO2017142334A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020187026553A KR20180107271A (en) 2016-02-16 2017-02-16 Method and apparatus for generating omni media texture mapping metadata
EP17753495.5A EP3403244A4 (en) 2016-02-16 2017-02-16 Method and apparatus for generating omni media texture mapping metadata

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201662295823P 2016-02-16 2016-02-16
US62/295,823 2016-02-16
US15/431,587 US10147224B2 (en) 2016-02-16 2017-02-13 Method and apparatus for generating omni media texture mapping metadata
US15/431,587 2017-02-13

Publications (1)

Publication Number Publication Date
WO2017142334A1 true WO2017142334A1 (en) 2017-08-24

Family ID=59560348

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/001734 WO2017142334A1 (en) 2016-02-16 2017-02-16 Method and apparatus for generating omni media texture mapping metadata

Country Status (4)

Country Link
US (1) US10147224B2 (en)
EP (1) EP3403244A4 (en)
KR (1) KR20180107271A (en)
WO (1) WO2017142334A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180059210A (en) * 2016-11-25 2018-06-04 삼성전자주식회사 Image processing apparatus and method for image processing thereof
US10999602B2 (en) 2016-12-23 2021-05-04 Apple Inc. Sphere projected motion estimation/compensation and mode decision
JPWO2018135321A1 (en) * 2017-01-19 2019-11-07 ソニー株式会社 Image processing apparatus and method
US11259046B2 (en) 2017-02-15 2022-02-22 Apple Inc. Processing of equirectangular object data to compensate for distortion by spherical projections
US10924747B2 (en) 2017-02-27 2021-02-16 Apple Inc. Video coding techniques for multi-view video
US11093752B2 (en) 2017-06-02 2021-08-17 Apple Inc. Object tracking in multi-view video
US10754242B2 (en) 2017-06-30 2020-08-25 Apple Inc. Adaptive resolution and projection format in multi-direction video
US10664947B2 (en) * 2017-06-30 2020-05-26 Canon Kabushiki Kaisha Image processing apparatus and image processing method to represent part of spherical image in planar image using equidistant cylindrical projection
US20190005709A1 (en) * 2017-06-30 2019-01-03 Apple Inc. Techniques for Correction of Visual Artifacts in Multi-View Images
KR102442089B1 (en) 2017-12-20 2022-09-13 삼성전자주식회사 Image processing apparatus and method for image processing thereof

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6734855B2 (en) * 2000-07-11 2004-05-11 Sony Corporation Image editing system and method, image processing system and method, and recording media therefor
KR100450823B1 (en) 2001-11-27 2004-10-01 삼성전자주식회사 Node structure for representing 3-dimensional objects using depth image
US8207964B1 (en) * 2008-02-22 2012-06-26 Meadow William D Methods and apparatus for generating three-dimensional image data models
JP4740723B2 (en) * 2005-11-28 2011-08-03 富士通株式会社 Image analysis program, recording medium storing the program, image analysis apparatus, and image analysis method
US7990394B2 (en) * 2007-05-25 2011-08-02 Google Inc. Viewing and navigating within panoramic images, and applications thereof
US8493408B2 (en) * 2008-11-19 2013-07-23 Apple Inc. Techniques for manipulating panoramas
JP5521750B2 (en) * 2010-05-10 2014-06-18 富士通株式会社 Simulation program, simulation apparatus, and simulation method
US20120092348A1 (en) * 2010-10-14 2012-04-19 Immersive Media Company Semi-automatic navigation with an immersive image
CN103493105B (en) * 2011-04-25 2017-04-05 林光雄 Omnidirectional images edit routine and omnidirectional images editing device
JP6075066B2 (en) * 2012-12-28 2017-02-08 株式会社リコー Image management system, image management method, and program

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060268360A1 (en) * 2005-05-12 2006-11-30 Jones Peter W J Methods of creating a virtual window
US20150113581A1 (en) * 2011-01-10 2015-04-23 Dropbox, Inc. System and method for sharing virtual and augmented reality scenes between users and viewers
US20120307001A1 (en) * 2011-06-03 2012-12-06 Nintendo Co., Ltd. Information processing system, information processing device, storage medium storing information processing program, and moving image reproduction control method
US20140132788A1 (en) * 2012-11-09 2014-05-15 Sean Geoffrey Ramsay Systems and Methods for Generating Spherical Images
US20140218354A1 (en) * 2013-02-06 2014-08-07 Electronics And Telecommunications Research Institute View image providing device and method using omnidirectional image and 3-dimensional data
JP2015173424A (en) 2014-03-12 2015-10-01 株式会社セック Video distribution system and video display device
US20160012855A1 (en) 2014-07-14 2016-01-14 Sony Computer Entertainment Inc. System and method for use in playing back panorama video content

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
C. GRUNHEIT ET AL.: "Efficient representation and interactive streaming of high-resolution panoramic views", INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP, vol. 1, 1 January 2002 (2002-01-01), pages III-209 - III-212, XP055534111, DOI: doi:10.1109/ICIP.2002.1038942
S. HEYMANN ET AL.: "Representation, Coding and Interactive Rendering of High-Resolution Panoramic Images and Video using MPEG-4", PROC. PANORAMIC PHOTOGRAMMETRY WORKSHOP (PPW, 28 February 2005 (2005-02-28)
See also references of EP3403244A4

Also Published As

Publication number Publication date
EP3403244A4 (en) 2019-01-23
US10147224B2 (en) 2018-12-04
EP3403244A1 (en) 2018-11-21
KR20180107271A (en) 2018-10-01
US20170236323A1 (en) 2017-08-17

Similar Documents

Publication Publication Date Title
WO2017142334A1 (en) Method and apparatus for generating omni media texture mapping metadata
CN111242881B (en) Method, device, storage medium and electronic equipment for displaying special effects
WO2019045473A1 (en) Method and apparatus for point-cloud streaming
WO2019013430A1 (en) Point cloud and mesh compression using image/video codecs
WO2018070754A1 (en) System and method to prevent boundary artifacts
WO2018128472A1 (en) Virtual reality experience sharing
WO2018070803A1 (en) Method and apparatus for session control support for field of view virtual reality streaming
WO2018182321A1 (en) Method and apparatus for rendering timed text and graphics in virtual reality video
WO2018048221A1 (en) Three hundred sixty degree video stitching
CN112933599A (en) Three-dimensional model rendering method, device, equipment and storage medium
WO2020071811A1 (en) Method and apparatus for carriage of pcc in isobmff for flexible combination
WO2019199083A1 (en) Method and apparatus for compressing and decompressing point clouds
WO2017138728A1 (en) Method and apparatus for creating, streaming, and rendering hdr images
WO2020122604A1 (en) Electronic device and method for displaying web content in augmented reality mode
WO2020171599A1 (en) Electronic device and method of displaying content thereon
EP4085624A1 (en) A device and a method for storage of video decoder configuration information
WO2022045779A1 (en) Restoration of the fov of images for stereoscopic rendering
WO2019216572A1 (en) Image providing method for portable terminal, and apparatus using same
WO2020085570A1 (en) Device and method for acquiring 360-degree vr image in game by using multiple virtual cameras
WO2018110839A1 (en) Method for transmitting data relating to three-dimensional image
CN114257755A (en) Image processing method, device, equipment and storage medium
WO2022225301A1 (en) Operation of video decoding engine for evc
WO2024014814A1 (en) Evc decoding complexity metrics
WO2022220553A1 (en) Mpeg media transport (mmt) signaling of visual volumetric video-based coding (v3c) content
WO2024075953A1 (en) System and method for acquiring mobile robot map image and object position based on multiple cameras

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17753495; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 2017753495; Country of ref document: EP)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2017753495; Country of ref document: EP; Effective date: 20180814)
ENP Entry into the national phase (Ref document number: 20187026553; Country of ref document: KR; Kind code of ref document: A)
WWE Wipo information: entry into national phase (Ref document number: 1020187026553; Country of ref document: KR)