EP3833029A1 - Image processing apparatus and method (Dispositif et procédé de traitement d'image) - Google Patents

Image processing apparatus and method

Info

Publication number
EP3833029A1
EP3833029A1
Authority
EP
European Patent Office
Prior art keywords
data
image
dimensional
bitstream
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP19843809.5A
Other languages
German (de)
English (en)
Other versions
EP3833029A4 (fr)
Inventor
Ohji Nakagami
Koji Yano
Tsuyoshi Kato
Satoru Kuma
Current Assignee
Sony Group Corp
Original Assignee
Sony Corp
Priority date
Application filed by Sony Corp filed Critical Sony Corp

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/46: Embedding additional information in the video signal during the compression process
    • H04N 19/50: Using predictive coding
    • H04N 19/597: Predictive coding specially adapted for multi-view video sequence encoding
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106: Processing image signals
    • H04N 13/172: Processing image signals comprising non-image signal components, e.g. headers or format information
    • H04N 19/10: Using adaptive coding
    • H04N 19/134: Adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/167: Position within a video image, e.g. region of interest [ROI]
    • H04N 19/169: Adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/184: Adaptive coding where the coding unit is bits, e.g. of the compressed video stream
    • H04N 19/30: Using hierarchical techniques, e.g. scalability
    • H04N 19/70: Characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • the present disclosure relates to an image processing apparatus and a method, and more particularly to an image processing apparatus and a method that allow for easier reproduction of a two-dimensional image.
  • As an encoding method for 3D data representing an object having a three-dimensional shape, such as a point cloud, there has conventionally been encoding using voxels such as Octree (see, for example, Non-Patent Document 1).
  • the 3D data encoded as described above is, for example, transmitted as a bitstream and decoded. Then, the object having a three-dimensional shape is reproduced as a two-dimensional image just like an image captured with a camera at an optional position and orientation.
  • the present disclosure has been made in view of such circumstances, and is intended to allow for easier reproduction of a two-dimensional image.
  • An image processing apparatus includes a generation unit that generates a video frame that includes a patch obtained by projecting, onto a two-dimensional plane, a point cloud that represents an object having a three-dimensional shape as a group of points, and a two-dimensional image different from the patch, and a coding unit that encodes the video frame generated by the generation unit to generate a bitstream.
  • An image processing method includes generating a video frame that includes a patch obtained by projecting, onto a two-dimensional plane, a point cloud that represents an object having a three-dimensional shape as a group of points, and a two-dimensional image different from the patch, and encoding the generated video frame to generate a bitstream.
  • An image processing apparatus includes an extraction unit that extracts, from a bitstream that includes coded data of a video frame that includes a patch obtained by projecting, onto a two-dimensional plane, a point cloud that represents an object having a three-dimensional shape as a group of points, and a two-dimensional image different from the patch, the coded data, and a two-dimensional decoding unit that decodes the coded data extracted from the bitstream by the extraction unit to restore the two-dimensional image.
  • An image processing method includes extracting, from a bitstream that includes coded data of a video frame that includes a patch obtained by projecting, onto a two-dimensional plane, a point cloud that represents an object having a three-dimensional shape as a group of points, and a two-dimensional image different from the patch, the coded data, and decoding the coded data extracted from the bitstream to restore the two-dimensional image.
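The generation and extraction described above can be illustrated with a small Python sketch. Everything here is hypothetical: frames are plain 2D lists rather than real video frames, and the function names are invented for illustration, not taken from the patent.

```python
# Illustrative sketch: a video frame carrying both a projected patch and a
# separate two-dimensional image, and a decoder-side helper that restores
# only the two-dimensional image without touching the patch data.

def make_video_frame(width, height, patch, patch_pos, image, image_pos):
    """Pack a patch and a separate 2D image into one video frame (a 2D grid)."""
    frame = [[0] * width for _ in range(height)]
    for region, (x0, y0) in ((patch, patch_pos), (image, image_pos)):
        for dy, row in enumerate(region):
            for dx, v in enumerate(row):
                frame[y0 + dy][x0 + dx] = v
    return frame

def extract_image(frame, image_pos, image_w, image_h):
    """Restore the 2D image from the frame using its known position and size."""
    x0, y0 = image_pos
    return [row[x0:x0 + image_w] for row in frame[y0:y0 + image_h]]

patch = [[1, 1], [1, 1]]   # stands in for a projected point-cloud patch
image = [[7, 8], [9, 6]]   # the two-dimensional image different from the patch
frame = make_video_frame(8, 8, patch, (0, 0), image, (4, 4))
assert extract_image(frame, (4, 4), 2, 2) == image
```

In a real codec the frame would then be encoded by a 2D video encoder; the point is only that the image can be recovered by a 2D crop, with no 3D reconstruction involved.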
  • According to the present disclosure, a video frame is generated that includes a patch obtained by projecting, onto a two-dimensional plane, a point cloud that represents an object having a three-dimensional shape as a group of points, and a two-dimensional image different from the patch, and the generated video frame is encoded to generate a bitstream.
  • Furthermore, the coded data is extracted from the bitstream, and the extracted coded data is decoded to restore the two-dimensional image.
  • According to the present disclosure, images can be processed. In particular, a two-dimensional image can be reproduced more easily.
  • The contents described in the non-patent documents described above (e.g., Non-Patent Document 6) are also the basis for determining support requirements.
  • QTBT: quad tree plus binary tree
  • Examples of 3D data include a point cloud, which represents an object having a three-dimensional shape on the basis of position information, attribute information, and the like of a group of points, and a mesh, which is constituted by vertices, edges, and faces and defines an object having a three-dimensional shape using a polygonal representation.
  • In the case of a point cloud, for example, a three-dimensional structure (an object having a three-dimensional shape) is represented as a set of a large number of points (a group of points). That is, point cloud data is constituted by position information and attribute information (e.g., color) of each point in this group of points. Consequently, the data has a relatively simple structure, and any three-dimensional structure can be represented with sufficient accuracy with use of a sufficiently large number of points.
  • a video-based approach has been proposed, in which a two-dimensional image is formed by projecting each of position information and color information of such a point cloud onto a two-dimensional plane for each subregion, and the two-dimensional image is encoded by an encoding method for two-dimensional images.
  • an input point cloud is divided into a plurality of segmentations (also referred to as regions), and each region is projected onto a two-dimensional plane.
  • The data projected for each position of the point cloud (i.e., for each point) consists of the position information and the attribute information (texture).
  • Each segmentation (also referred to as a patch) projected onto the two-dimensional plane is arranged to form a two-dimensional image, and is encoded by an encoding method for two-dimensional plane images such as Advanced Video Coding (AVC) or High Efficiency Video Coding (HEVC).
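The arrangement of patches into a two-dimensional image can be sketched with a deliberately naive "shelf" packer. Real video-based point cloud packers are far more elaborate, and the subsequent AVC/HEVC encoding step is omitted entirely; the function and variable names are illustrative only.

```python
# Naive packing sketch: each patch (a 2D list) is placed side by side on the
# frame, left to right, and its position is recorded so a decoder can later
# map pixels back to patches.

def pack_patches(patches, frame_w, frame_h):
    frame = [[0] * frame_w for _ in range(frame_h)]
    positions = []
    x = 0
    for patch in patches:
        h, w = len(patch), len(patch[0])
        if x + w > frame_w:
            raise ValueError("frame too small for this naive packer")
        for dy in range(h):
            for dx in range(w):
                frame[dy][x + dx] = patch[dy][dx]
        positions.append((x, 0))
        x += w
    return frame, positions

patches = [[[1, 2]], [[3], [4]]]           # two tiny example patches
frame, pos = pack_patches(patches, 4, 2)
assert pos == [(0, 0), (2, 0)]
assert frame[0] == [1, 2, 3, 0] and frame[1] == [0, 0, 4, 0]
```

The packed frame is an ordinary two-dimensional image, which is why an ordinary 2D video codec can encode it.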
  • The 3D data encoded as described above is, for example, transmitted as a bitstream to a transmission destination, where the 3D data is decoded and then reproduced.
  • an object having a three-dimensional shape indicated by decoded and reconstructed 3D data is rendered just like capturing an image with a camera at an optional position and orientation, and displayed on the 2D display as a two-dimensional image (also referred to as a rendered image).
  • a two-dimensional image (rendered image) obtained by rendering an object as described above is different from a two-dimensional image (two-dimensional image in which patches are arranged) at the time of encoding.
  • a two-dimensional image in which patches are arranged is a format for transmitting 3D data, and is not an image intended for display. That is, even if this two-dimensional image in which the patches are arranged is displayed, the displayed image cannot be understood by a user who views it (the image does not serve as content).
  • a rendered image is an image that represents an object having a three-dimensional shape in two dimensions. Consequently, the image is displayed as an image that can be understood by a user who views it (the image serves as content).
  • For example, a recommended camera work (the position, direction, or the like of a camera used for rendering) may be specified on the encoding side so that a rendered image obtained by rendering the object with the recommended camera work is displayed on a decoding side. In that case, however, it is necessary to render the object on the decoding side, and there has been a possibility that the time required to display the rendered image increases.
  • the load of rendering is heavy, and there has been a possibility that only higher-performance devices can be equipped with a bitstream decoding/reproduction function. That is to say, there has been a possibility that the number of devices that cannot be equipped with the bitstream decoding/reproduction function increases (there has been a possibility that the number of devices that can be equipped with the bitstream decoding/reproduction function reduces).
  • Therefore, in the present disclosure, a two-dimensional image can be displayed (2D data included in a bitstream can be reproduced) without rendering of an object having a three-dimensional shape.
  • a 3D data decoder 32 decodes a bitstream of the 3D data and reconstructs the 3D data (e.g., a point cloud). Then, the 3D display 35 displays the 3D data.
  • the 3D data decoder 32 decodes a bitstream of the 3D data and reconstructs the 3D data. Then, a renderer 34 renders the 3D data to generate a rendered image (two-dimensional image), and the 2D display 36 displays the rendered image. That is, in this case, rendering processing is required, and there has been a possibility that the load increases.
  • a demultiplexer 31 extracts coded data of the 2D data from the bitstream, a 2D video decoder 33 decodes the coded data to generate a two-dimensional image, and the 2D display 36 can thus display the two-dimensional image. That is, the rendering processing on the decoding side can be skipped (omitted).
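The two decoding paths just described can be sketched as follows. The dictionary-based "bitstream" and all callables are stand-ins invented for illustration; only the control flow (2D fast path versus 3D decode-and-render path) reflects the description of Fig. 2.

```python
# Sketch of the two reproduction paths: if 2D data is present and a 2D display
# is targeted, only the embedded two-dimensional image is decoded; otherwise
# the 3D data is reconstructed and rendered.

def reproduce(bitstream, want_2d_display, decode_2d, decode_3d, render):
    if want_2d_display and bitstream.get("has_2d_data"):
        # fast path: skip point cloud reconstruction and rendering entirely
        return decode_2d(bitstream["2d_coded_data"])
    cloud = decode_3d(bitstream["3d_coded_data"])
    return render(cloud)

bitstream = {"has_2d_data": True, "2d_coded_data": "img", "3d_coded_data": "pc"}
out = reproduce(bitstream, True,
                decode_2d=lambda d: ("2d-image", d),
                decode_3d=lambda d: ("cloud", d),
                render=lambda c: ("rendered", c))
assert out == ("2d-image", "img")
```

This is why even a device without a renderer can still reproduce the bitstream's 2D content.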
  • a two-dimensional image can be displayed more easily. Consequently, for example, a two-dimensional image indicating contents of a bitstream can be included in the bitstream so that the two-dimensional image can be displayed without rendering of an object having a three-dimensional shape on the decoding side. Consequently, the contents of the bitstream can be checked more quickly. Furthermore, for example, a rendered image obtained with a recommended camera work can be added as 2D data to a bitstream, so that the rendered image can be displayed without rendering of an object having a three-dimensional shape on the decoding side. Consequently, the recommended camera work can be checked more quickly.
  • a two-dimensional image can be displayed without rendering of an object having a three-dimensional shape, which requires a heavy processing load, and this allows even lower-performance devices to reproduce 2D data included in a bitstream. Consequently, it is possible to suppress a reduction in the number of devices that cannot be equipped with the bitstream decoding/reproduction function (increase the number of devices that can be equipped with the bitstream decoding/reproduction function).
  • contents of the 2D data added to the bitstream of the 3D data are optional as long as the contents are different from the patches of the 3D data.
  • the contents may be a rendered image of an object having a three-dimensional shape indicated by the 3D data.
  • the contents may be a rendered image obtained by rendering the 3D data just like imaging the object having a three-dimensional shape indicated by the 3D data with a predetermined camera work (position, direction, or the like of the rendering camera).
  • On the encoding side in Fig. 2, a 3D data encoder 21 encodes 3D data (a point cloud).
  • the 3D data encoder 21 may also encode a rendered image of the 3D data to be encoded to generate coded data, and the coded data of the rendered image may be added to a bitstream that includes coded data of the 3D data. That is, the rendered image may be added to the bitstream of the 3D data.
  • the demultiplexer 31 extracts coded data of a rendered image from a bitstream, and the 2D video decoder decodes the coded data, and thus the rendered image can be obtained. That is, the rendering processing can be skipped (omitted).
  • this rendered image may be an image obtained by rendering just like imaging an object having a three-dimensional shape indicated by 3D data from a recommended camera position and direction. That is, this rendered image may be an image obtained by rendering with a recommended camera work.
  • the 3D data encoder 21 may also encode a rendered image obtained by rendering an object in the 3D data to be encoded with a recommended camera work to generate coded data, and the coded data of the rendered image may be added to a bitstream that includes coded data of the 3D data. That is, the rendered image obtained by rendering with the recommended camera work may be added to the bitstream of the 3D data.
  • the demultiplexer 31 extracts coded data of a rendered image from a bitstream, and the 2D video decoder decodes the coded data, and thus the rendered image obtained with a recommended camera work specified on the encoding side can be obtained. That is, the rendering processing can be skipped (omitted).
  • this rendered image may be generated on the encoding side.
  • a renderer 22 may render an object in 3D data to be encoded to generate a rendered image, and the 3D data encoder 21 may encode and add the rendered image to a bitstream of the 3D data.
  • the demultiplexer 31 extracts coded data of a rendered image from a bitstream, and the 2D video decoder decodes the coded data, and thus the rendered image generated by the renderer 22 can be obtained. That is, the rendering processing on the decoding side can be skipped (omitted).
  • the 2D data is not limited to the example described above.
  • This 2D data may not be a rendered image.
  • the 2D data may be an image including information (characters, numbers, symbols, figures, patterns, or the like) regarding contents of 3D data included in a bitstream.
  • Such 2D data may be added to the bitstream so that the information regarding the contents of the 3D data can be more easily displayed on the decoding side. That is, a user on the decoding side can grasp the contents of the bitstream more quickly. Furthermore, the user can grasp the contents of the bitstream on a wider variety of devices.
  • the 2D data may be an image with contents independent of the 3D data included in the bitstream (an irrelevant image).
  • the 2D data may be a rendered image of an object different from the object indicated by the 3D data included in the bitstream, or may be an image including information (characters, numbers, symbols, figures, patterns, or the like) unrelated to the contents of the 3D data included in the bitstream.
  • Such 2D data may be added to the bitstream so that a wider variety of information can be more easily displayed on the decoding side. That is, a user on the decoding side can obtain a wider variety of information more quickly. Furthermore, the user can obtain a wider variety of information on a wider variety of devices.
  • each of the 2D data and the 3D data may be a moving image or a still image.
  • the length of reproduction time of the 2D data and that of the 3D data may be the same as each other or may be different from each other.
  • Such 2D data may be added to the bitstream so that the 2D data can be more easily displayed on the decoding side, regardless of whether the 2D data is a moving image or a still image. That is, a user on the decoding side can start viewing the 2D data more quickly, regardless of whether the 2D data is a moving image or a still image. Furthermore, the user can view the 2D data on a wider variety of devices, regardless of whether the 2D data is a moving image or a still image.
  • a plurality of pieces of 2D data may be added to the bitstream of the 3D data.
  • the lengths of reproduction time of the plurality of pieces of 2D data may be the same as each other or may be different from each other.
  • the plurality of pieces of 2D data may be added to the bitstream in a state in which each of them is reproduced in sequence.
  • a plurality of pieces of 2D data may be added to a bitstream in a state in which each of them is reproduced in sequence, so that the plurality of pieces of 2D data can be more easily reproduced in sequence on the decoding side. That is, a user on the decoding side can start viewing the plurality of pieces of 2D data more quickly. Furthermore, the user can view the plurality of pieces of 2D data on a wider variety of devices.
  • the same moving image may be added to the bitstream a plurality of times as the plurality of pieces of 2D data.
  • the moving image can be reproduced a plurality of times more easily on the decoding side. That is, a user on the decoding side can start viewing the moving image reproduced the plurality of times more quickly. Furthermore, the user can view the moving image reproduced the plurality of times on a wider variety of devices.
  • For example, a plurality of pieces of 2D data, such as moving images with contents that are different from each other, may be added to the bitstream in a state in which each of them is reproduced in sequence.
  • a plurality of rendered images obtained by rendering with camera works (camera position, direction, or the like) that are different from each other may be added to the bitstream.
  • In this way, rendered images from a plurality of viewpoints by a plurality of camera works can be reproduced in sequence on the decoding side. That is, a user on the decoding side can start viewing the rendered images from the corresponding viewpoints by the corresponding camera works more quickly. Furthermore, the user can view the rendered images from the plurality of viewpoints on a wider variety of devices.
  • 2D data may be added to any location in a bitstream.
  • the 2D data may be added to a video frame.
  • a point cloud (3D data) is constituted by position information and attribute information of a group of points.
  • position information and attribute information of a point cloud are projected onto a two-dimensional plane for each segmentation and packed in a video frame as patches.
  • the 2D data described above may be added to such a video frame.
  • the 3D data encoder 21 encodes a packed video frame by an encoding method for two-dimensional plane images such as AVC or HEVC, thereby encoding 3D data and 2D data. That is, 2D data can be encoded more easily.
  • 2D data can be decoded more easily on the decoding side.
  • the 2D video decoder 33 can generate 2D data by decoding coded data by a decoding method for two-dimensional plane images such as AVC or HEVC.
  • a bitstream 40 of 3D data includes a stream header 41, a group of frames (GOF) stream 42-1, a GOF stream 42-2,..., a GOF stream 42-n-1, and a GOF stream 42-n (n is an optional natural number).
  • the stream header 41 is header information of the bitstream 40, where various types of information regarding the bitstream 40 are stored.
  • Each of the GOF stream 42-1 to the GOF stream 42-n is created by packing frames having correlations in the time direction into a unit of random access. That is, they are bitstreams for a predetermined length of time. In a case where it is not necessary to distinguish the GOF stream 42-1 to the GOF stream 42-n from each other in the description, they are referred to as GOF streams 42.
  • a GOF stream 42 includes a GOF header 51, a GOF geometry video stream 52, GOF auxiliary info & occupancy maps 53, and a GOF texture video stream 54.
  • the GOF header 51 includes parameters 61 for the corresponding GOF stream 42.
  • The parameters 61 include parameters such as information regarding a frame width (frameWidth), information regarding a frame height (frameHeight), and information regarding a resolution of an occupancy map (occupancyResolution), for example.
  • the GOF geometry video stream 52 is coded data (bitstream) obtained by encoding, by an encoding method for two-dimensional plane images such as AVC or HEVC, a geometry video frame 62 in which position information patches of a point cloud are packed, for example.
  • the GOF auxiliary info & occupancy maps 53 are coded data (bitstream) in which auxiliary info and an occupancy map 64 are encoded by a predetermined encoding method.
  • the occupancy map 64 is map information that indicates whether or not there are position information and attribute information at each position on a two-dimensional plane.
  • the GOF texture video stream 54 is coded data (bitstream) obtained by encoding a color video frame 65 by an encoding method for two-dimensional plane images such as AVC or HEVC, for example.
  • This color video frame 65 may have 2D data 72 added.
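The layout of Fig. 3 can be summarized as plain data structures. This is not a real bitstream parser; the field names follow the figure, while the types and everything else are assumptions for illustration.

```python
# Plain-data sketch of the Fig. 3 layout: a bitstream holds a stream header
# and n GOF streams, each with a GOF header, a geometry video stream,
# auxiliary info & occupancy maps, and a texture video stream (which may
# carry the added 2D data).

from dataclasses import dataclass, field

@dataclass
class GOFStream:
    header_params: dict            # e.g. frameWidth, frameHeight, occupancyResolution
    geometry_video_stream: bytes   # coded geometry video frames
    aux_info_and_occupancy_maps: bytes
    texture_video_stream: bytes    # coded color video frames, possibly with 2D data

@dataclass
class Bitstream:
    stream_header: dict            # e.g. the 2D control syntax
    gof_streams: list = field(default_factory=list)

bs = Bitstream(stream_header={"thumbnail_available_flag": 1})
bs.gof_streams.append(GOFStream(
    header_params={"frameWidth": 1280, "frameHeight": 1280, "occupancyResolution": 4},
    geometry_video_stream=b"", aux_info_and_occupancy_maps=b"",
    texture_video_stream=b""))
assert bs.gof_streams[0].header_params["frameWidth"] == 1280
```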
  • 2D data can be encoded together with 3D data.
  • the 3D data encoder 21 encodes a packed color video frame by an encoding method for two-dimensional plane images such as AVC or HEVC, thereby encoding not only attribute information of a point cloud but also 2D data. That is, 2D data can be encoded more easily.
  • 2D data can be decoded more easily on the decoding side.
  • the demultiplexer 31 extracts coded data of a color video frame (the GOF texture video stream 54 in the case of the example in Fig. 3 ) from a bitstream, and the 2D video decoder 33 decodes the extracted coded data (the GOF texture video stream 54) by a decoding method for two-dimensional plane images such as AVC or HEVC, and thus 2D data (the 2D data 72 in the case of the example in Fig. 3 ) can be generated.
  • the 2D data 72 is information different from a point cloud, and the 2D data 72 is not reflected in the occupancy map 64. Consequently, for example, in a case where the 3D data decoder 32 ( Fig. 2 ) decodes the bitstream 40 of the 3D data, the 2D data 72 is ignored. That is, the 3D data decoder 32 can decode the bitstream 40 in a similar manner to a case of decoding a bitstream of 3D data to which 2D data is not added. That is, 3D data can be easily decoded.
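The reason the 3D decoder can ignore the 2D data can be made concrete with a small sketch: only positions whose occupancy map entry is 1 are turned back into points, so pixels belonging to the 2D image (occupancy 0) never enter the reconstructed cloud. The names, shapes, and the simplified one-value-per-pixel geometry are all illustrative assumptions.

```python
# Occupancy-driven reconstruction sketch: 2D-data pixels are skipped because
# the occupancy map marks them as unoccupied.

def reconstruct_points(geometry_frame, occupancy_map):
    points = []
    for y, row in enumerate(occupancy_map):
        for x, occupied in enumerate(row):
            if occupied:                       # patch pixel -> a real point
                points.append((x, y, geometry_frame[y][x]))
    return points                              # 2D-data pixels never appear

geometry = [[5, 9], [3, 7]]
occupancy = [[1, 0],                           # position (1, 0) holds 2D data
             [1, 1]]
assert reconstruct_points(geometry, occupancy) == [(0, 0, 5), (0, 1, 3), (1, 1, 7)]
```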
  • 2D data may be added to all color video frames, or may be added to some of the video frames.
  • 2D data may be added to video frames of all layers to be encoded, or the 2D data may be added to video frames of some of the layers to be encoded.
  • a predetermined number of two-dimensional images may be generated from one point cloud frame so that a patch depth can be represented.
  • a plurality of patches can be generated in a depth direction for one point cloud frame.
  • a packed video frame can be hierarchically encoded in the time direction, and a layer to be encoded can be assigned to each position in the patch depth direction (each patch can be arranged in a video frame of a layer corresponding to the depth direction).
  • 2D data may be added only to color video frames of some of the layers to be encoded in the layered structure described above, and the color video frames of all the layers to be encoded may be encoded in accordance with the layered structure, and then the 2D data may be reproduced by decoding only coded data of the color video frames of the layers to be encoded to which the 2D data has been added.
  • 2D data may be added to a color video frame of one layer to be encoded, and the color video frames of all the layers to be encoded may be encoded in accordance with the layered structure, and then the 2D data may be reproduced by decoding only coded data of the color video frame of the layer to be encoded to which the 2D data has been added.
  • In this way, the 2D data can be extracted from all the decoded color video frames, and this prevents the noise image (a frame in which only patches are arranged, which does not serve as content) from being displayed.
  • the 2D data may be added to the color video frames of all the layers to be encoded in the layered structure described above.
  • the 2D data may be added to all the color video frames in the layered structure to be encoded, in order from the first frame.
  • the same 2D data may be added repeatedly.
  • the images of the 2D data may be added again to the subsequent color video frames from the first frame.
  • each of rendered images obtained by rendering by a predetermined camera work may be added to color video frames of all layers in order from the first frame, and after the last rendered image has been added, the rendered images once added may be added to the remaining color video frames in order from the first rendered image.
  • In this way, one rendered image, which is a moving image, can be repeatedly displayed on the decoding side.
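The repeated insertion described above amounts to cycling a short 2D moving image over a longer run of color video frames. The following sketch (with an invented helper name) computes which 2D frame each color video frame would carry:

```python
# "Repeat" insertion sketch: after the last frame of the 2D moving image has
# been added, insertion starts again from its first frame.

def assign_2d_frames(num_color_frames, num_2d_frames):
    """For each color video frame, the index of the 2D frame it carries."""
    return [i % num_2d_frames for i in range(num_color_frames)]

# 8 color video frames carrying a 3-frame 2D moving image, repeated:
assert assign_2d_frames(8, 3) == [0, 1, 2, 0, 1, 2, 0, 1]
```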
  • new 2D data may be added. For example, when an image of the last frame of a two-dimensional moving image has been added to a certain color video frame, images of new 2D data may be added to the subsequent color video frames from the first frame.
  • In this case as well, the 2D data can be extracted from all the decoded color video frames, and this prevents the noise image (a frame in which only patches are arranged) from being displayed.
  • each of rendered images obtained by rendering by a predetermined camera work may be added to color video frames of all layers in order from the first frame, and after the last rendered image has been added, each of rendered images obtained by rendering by a new camera work may be added to the remaining color video frames in order.
  • In this way, a plurality of rendered images, which are moving images, can be displayed in sequence on the decoding side.
  • 2D data may be added to video frames of all layers, or the 2D data may be added to video frames of some of the layers.
  • the 2D data may be added to other than color video frames.
  • the 2D data may be added to geometry video frames.
  • Information regarding 2D data to be added to a bitstream of 3D data as described above may be further included in the bitstream.
  • This information regarding the 2D data may be any information.
  • the information regarding the 2D data may be added to any position in the bitstream.
  • the information regarding the 2D data may be added to a header of the bitstream as metadata.
  • the information regarding the 2D data may be added as 2D control syntax 71 to the stream header 41 of the bitstream 40.
  • information regarding a two-dimensional image may include two-dimensional image presence/absence identification information that indicates whether or not a bitstream includes two-dimensional image data.
  • thumbnail_available_flag may be transmitted as the 2D control syntax 71.
  • the thumbnail_available_flag is flag information (i.e., two-dimensional image presence/absence identification information) that indicates whether or not there is 2D data in the bitstream (whether or not 2D data has been added). If this flag information is true (e.g., "1"), it indicates that there is 2D data in the bitstream. Furthermore, if this flag information is false (e.g., "0"), it indicates that there is no 2D data in the bitstream.
  • the information regarding the two-dimensional image may include two-dimensional image reproduction assisting information for assisting reproduction of the two-dimensional image.
  • For example, in a case where thumbnail_available_flag is true (i.e., inside the if (thumbnail_available_flag) { ... } block), the following syntax elements may be transmitted.
  • num_rendering_view, InsertionMethod, SeparationID, and IndependentDecodeflag may be transmitted as the 2D control syntax 71.
  • These syntaxes are two-dimensional image reproduction assisting information for assisting reproduction of a two-dimensional image.
  • the num_rendering_view is information indicating the number of rendered viewpoints (the number of camera works).
  • the InsertionMethod is information indicating whether 2D data has been added with layers divided by LayerID or TemporalID, or the 2D data has been added by repeat or the like (whether the 2D data has been added to all the layers). Note that, in a case where the 2D data has been added with the layers divided by LayerID or TemporalID, it is necessary to change an operation of an AVC or HEVC decoder. That is, the operation of the decoder can be changed on the basis of this information.
  • the SeparationID is information indicating a break of LayerID or TemporalID. This information may be passed to the AVC or HEVC decoder so that only a specific layer can be displayed.
  • the IndependentDecodeflag is flag information that indicates whether or not a 2D data portion can be independently decoded by a tile or the like. If this flag information is true (e.g., "1"), it indicates that the 2D data can be independently decoded. Furthermore, if this flag information is false (e.g., "0"), it indicates that the 2D data cannot be independently decoded.
  • MCTS_ID may be transmitted as the 2D control syntax 71.
  • the MCTS_ID is information for identifying a tile to be specified for decoding of a specific tile portion defined separately in motion-constrained tile sets supplemental enhancement information (MCTS SEI).
  • the syntaxes illustrated in Fig. 4 are examples, and the 2D control syntax 71 may include any syntax.
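The 2D control syntax described above can be pictured as a small parser. The sketch below is illustrative only: the field names follow the text, but the dictionary-based input, the default values, and the conditional layout details are assumptions, not the actual bitstream format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TwoDControlSyntax:
    # Field names follow the text; widths and defaults are assumptions.
    thumbnail_available_flag: bool          # 2D data present in the bitstream?
    num_rendering_view: int = 0             # number of rendered viewpoints (camera works)
    insertion_method: int = 0               # e.g., split by LayerID/TemporalID vs. added to all layers
    separation_id: int = 0                  # break of LayerID or TemporalID
    independent_decode_flag: bool = False   # 2D portion independently decodable (tile etc.)?
    mcts_id: Optional[int] = None           # tile id specified via MCTS SEI, when applicable

def parse_2d_control_syntax(fields: dict) -> TwoDControlSyntax:
    """Mirror the conditional layout: the remaining syntaxes are only
    transmitted when thumbnail_available_flag is true."""
    if not fields.get("thumbnail_available_flag"):
        return TwoDControlSyntax(thumbnail_available_flag=False)
    return TwoDControlSyntax(
        thumbnail_available_flag=True,
        num_rendering_view=fields["num_rendering_view"],
        insertion_method=fields["InsertionMethod"],
        separation_id=fields["SeparationID"],
        independent_decode_flag=bool(fields["IndependentDecodeflag"]),
        mcts_id=fields.get("MCTS_ID"),
    )
```

A decoder would branch on these parsed values, e.g. skipping the 2D extraction path entirely when thumbnail_available_flag is false.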
  • the information regarding the two-dimensional image may include two-dimensional image spatial position management information for managing a position in a spatial direction to which the two-dimensional image has been added.
  • def_disp_win_left_offset, def_disp_win_right_offset, def_disp_win_top_offset, and def_disp_win_bottom_offset may be transmitted. These syntaxes are two-dimensional image spatial position management information for managing the position in the spatial direction to which the two-dimensional image has been added.
  • the def_disp_win_left_offset is information indicating an offset of the left edge of the 2D data 72 using, as a reference, the left edge of the color video frame 65.
  • the def_disp_win_right_offset is information indicating an offset of the right edge of the 2D data 72 using, as a reference, the left edge of the color video frame 65.
  • the def_disp_win_top_offset is information indicating an offset of the top edge of the 2D data 72 using, as a reference, the top edge of the color video frame 65.
  • the def_disp_win_bottom_offset is information indicating an offset of the bottom edge of the 2D data 72 using, as a reference, the top edge of the color video frame 65.
  • the position of the added 2D data can be identified from these pieces of information. That is, a decoder can more easily extract the 2D data added to the color video frame on the basis of these pieces of information. That is, 2D data can be reproduced more easily.
  • these pieces of information are specified in HEVC.
  • these pieces of information are transmitted as syntaxes as illustrated in A of Fig. 6 . That is, 2D data can be reproduced more easily by using an HEVC decoder.
  • Cropping_offset specified in AVC may be used instead of these pieces of information.
  • the Cropping_offset may be transmitted as syntax as illustrated in B of Fig. 6 .
  • 2D data can be reproduced more easily by using an AVC decoder.
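Under the offset semantics given above (left and right offsets measured from the left edge of the color video frame, top and bottom offsets from its top edge), extracting the added 2D data amounts to a simple crop. The sketch below assumes pixel units and a list-of-rows frame for illustration; note that HEVC actually signals the default display window offsets in chroma sample units.

```python
def extract_2d_data(frame, left_offset, right_offset, top_offset, bottom_offset):
    """Crop the added 2D data out of a decoded color video frame.

    frame is a list of pixel rows. Per the text, the right/bottom offsets
    are also measured from the left/top edge, so the crop is a direct
    slice. Pixel units are an illustrative assumption.
    """
    return [row[left_offset:right_offset] for row in frame[top_offset:bottom_offset]]
```

For the AVC case, Cropping_offset would play the same role of delimiting the rectangle to keep.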
  • the information regarding the two-dimensional image may include two-dimensional image temporal position management information for managing a position in a time direction to which the two-dimensional image has been added.
  • the TemporalID may be used to add 2D data only to color video frames of some of the layers. That is, this TemporalID is two-dimensional image temporal position management information for managing the position in the time direction to which the two-dimensional image has been added.
  • a case is assumed in which color video frames are two-layered as illustrated in Fig. 7 .
  • a decoder can decode only video frames in a layer to which 2D data has been added (in the case of the example in Fig. 7 , the video frames in the Video 0 layer). Consequently, reproduction of a noise image can be prevented more easily.
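The layered arrangement can be emulated by filtering decoded access units on TemporalID. The dict-based frame representation below, and the rule that the 2D data lives in layers at or below SeparationID, are illustrative assumptions:

```python
def frames_with_2d_data(access_units, separation_id):
    """Keep only frames of the layers that carry the added 2D data
    (assumed here to be the layers with TemporalID <= SeparationID)."""
    return [au for au in access_units if au["temporal_id"] <= separation_id]
```

Passing SeparationID to the decoder in this way lets it display only the layer containing the 2D data, which is how reproduction of a noise image is avoided.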
  • LayerID may be used instead of the TemporalID.
  • two-dimensional image spatial position management information and two-dimensional image temporal position management information may be included in, for example, a GOF texture video stream.
  • a decoder can easily reproduce the 2D data by decoding the GOF texture video stream.
  • the demultiplexer 31 extracts coded data (GOF texture video stream) of a color video frame from a bitstream on the basis of metadata (e.g., the 2D control syntax 71), and supplies the coded data to the 2D video decoder 33.
  • the 2D video decoder 33 can obtain a rendered image by decoding the GOF texture video stream and extracting 2D data.
  • 2D data added to a color video frame may be encoded independently only for that portion (independently of patches and the like). That is, the 2D data added to the color video frame may be able to be decoded independently only for that portion (independently of the patches and the like). In other words, the 2D data may be added to the video frame as a unit of data that can be independently encoded/decoded, such as a tile, a slice, or a picture.
  • by doing so, a decoder (e.g., the 2D video decoder 33) can decode only the 2D data portion, without decoding the entire color video frame.
  • a coding parameter for the 2D data (a coding parameter set independently of other regions) may be used to encode/decode the 2D data.
  • a coding parameter more suitable for the 2D data can be used, and it is therefore possible to suppress a reduction in coding efficiency (improve the coding efficiency).
  • the number of pieces of 2D data that can be added into one video frame is optional. For example, as shown in the 14th row from the top of Table 10 in Fig. 1 , a plurality of pieces of 2D data may be added into one video frame.
  • FIG. 8 is a block diagram illustrating an example of a configuration of a coding device, which is an aspect of an image processing apparatus to which the present technology is applied.
  • a coding device 100 illustrated in Fig. 8 is a device (a coding device to which a video-based approach is applied) that projects 3D data such as a point cloud onto a two-dimensional plane and performs encoding by an encoding method for two-dimensional images.
  • note that Fig. 8 illustrates the main processing units, data flows, and the like, and does not necessarily illustrate all of them. That is, the coding device 100 may include a processing unit that is not illustrated as a block in Fig. 8, or may involve a flow of processing or data that is not illustrated as an arrow or the like in Fig. 8. This also applies to other drawings for describing processing units and the like in the coding device 100.
  • the coding device 100 includes a patch decomposition unit 111, a packing unit 112, an auxiliary patch information compression unit 113, a video coding unit 114, a video coding unit 115, an OMap coding unit 116, a 2D data generation unit 117, and a multiplexer 118.
  • the patch decomposition unit 111 performs processing related to decomposition of 3D data. For example, the patch decomposition unit 111 acquires 3D data (e.g., point cloud) representing a three-dimensional structure input to the coding device 100. Furthermore, the patch decomposition unit 111 decomposes the acquired point cloud into a plurality of segmentations, projects the point cloud onto a two-dimensional plane for each segmentation, and generates position information patches and attribute information patches. The patch decomposition unit 111 supplies information regarding each generated patch to the packing unit 112. Furthermore, the patch decomposition unit 111 supplies auxiliary patch information, which is information regarding the decomposition, to the auxiliary patch information compression unit 113.
  • the packing unit 112 performs processing related to data packing. For example, the packing unit 112 acquires, from the patch decomposition unit 111, information regarding a patch of position information (geometry) indicating a position of a point and information regarding a patch of attribute information (texture) such as color information added to the position information.
  • the packing unit 112 arranges each of the acquired patches on a two-dimensional image and packs them as a video frame.
  • the packing unit 112 arranges position information patches on a two-dimensional image and packs them as a position information video frame (also referred to as a geometry video frame).
  • the packing unit 112 arranges attribute information patches on a two-dimensional image and packs them as an attribute information video frame (also referred to as a color video frame).
  • the packing unit 112 is controlled by the 2D data generation unit 117 to add, to a predetermined position in a color video frame, 2D data supplied from the 2D data generation unit 117 (e.g., a rendered image of an object having a three-dimensional shape represented by a point cloud input to the coding device 100) by a method as described above in <1. Addition of 2D data>.
  • the packing unit 112 generates an occupancy map corresponding to these video frames. Moreover, the packing unit 112 performs dilation processing on a color video frame.
  • the packing unit 112 supplies the geometry video frame generated as described above to the video coding unit 114. Furthermore, the packing unit 112 supplies the color video frame generated as described above to the video coding unit 115. Moreover, the packing unit 112 supplies the occupancy map generated as described above to the OMap coding unit 116. Furthermore, the packing unit 112 supplies such control information regarding packing to the multiplexer 118.
  • the auxiliary patch information compression unit 113 performs processing related to compression of auxiliary patch information. For example, the auxiliary patch information compression unit 113 acquires data supplied from the patch decomposition unit 111. The auxiliary patch information compression unit 113 encodes (compresses) auxiliary patch information included in the acquired data. The auxiliary patch information compression unit 113 supplies coded data of the obtained auxiliary patch information to the multiplexer 118.
  • the video coding unit 114 performs processing related to encoding of a video frame of position information (geometry). For example, the video coding unit 114 acquires a geometry video frame supplied from the packing unit 112. Furthermore, the video coding unit 114 encodes the acquired geometry video frame by an optional encoding method for two-dimensional images such as AVC or HEVC. The video coding unit 114 supplies coded data (coded data of the geometry video frame) obtained by the encoding to the multiplexer 118.
  • the video coding unit 115 performs processing related to encoding of a video frame of attribute information (texture). For example, the video coding unit 115 acquires a color video frame supplied from the packing unit 112. Furthermore, the video coding unit 115 encodes the acquired color video frame (e.g., a color video frame to which 2D data has been added) by an optional encoding method for two-dimensional images such as AVC or HEVC.
  • the video coding unit 115 encodes the color video frame under the control of the 2D data generation unit 117 by a method as described above in <1. Addition of 2D data>. Furthermore, the video coding unit 115 adds, to coded data (bitstream) of the color video frame, metadata such as syntax supplied from the 2D data generation unit 117 as described above in <1. Addition of 2D data>. The video coding unit 115 supplies the coded data (the coded data of the color video frame) obtained by the encoding to the multiplexer 118.
  • the OMap coding unit 116 performs processing related to encoding of an occupancy map. For example, the OMap coding unit 116 acquires an occupancy map supplied from the packing unit 112. Furthermore, the OMap coding unit 116 encodes the acquired occupancy map by an optional encoding method such as arithmetic coding. The OMap coding unit 116 supplies coded data (coded data of the occupancy map) obtained by the encoding to the multiplexer 118.
  • the 2D data generation unit 117 performs processing related to 2D data generation as described above in <1. Addition of 2D data>. For example, the 2D data generation unit 117 acquires a point cloud (3D data) input to the coding device 100. The 2D data generation unit 117 renders an object having a three-dimensional shape represented by the point cloud and generates a rendered image (2D data). Furthermore, the 2D data generation unit 117 also generates information regarding the 2D data.
  • the 2D data generation unit 117 supplies the generated 2D data to the packing unit 112, and controls arrangement of the 2D data. Furthermore, the 2D data generation unit 117 supplies information regarding the generated 2D data (syntax or the like) to the video coding unit 115, and controls encoding of a color video frame. Moreover, the 2D data generation unit 117 supplies the information regarding the generated 2D data as metadata to the multiplexer 118.
  • the multiplexer 118 performs processing related to bitstream generation (information multiplexing). For example, the multiplexer 118 acquires coded data of auxiliary patch information supplied from the auxiliary patch information compression unit 113. Furthermore, the multiplexer 118 acquires control information regarding packing supplied from the packing unit 112. Moreover, the multiplexer 118 acquires coded data of a geometry video frame supplied from the video coding unit 114. Furthermore, the multiplexer 118 acquires coded data of a color video frame supplied from the video coding unit 115. Moreover, the multiplexer 118 acquires coded data of an occupancy map supplied from the OMap coding unit 116. Furthermore, the multiplexer 118 acquires metadata supplied from the 2D data generation unit 117.
  • the multiplexer 118 multiplexes the acquired information to generate a bitstream. That is, the multiplexer 118 generates a bitstream that includes coded data of 3D data and 2D data, and information regarding the 2D data. The multiplexer 118 outputs the bitstream to outside of the coding device 100 (transmits it to the decoding side).
  • the coding device 100 adds 2D data, which is different from the 3D data, to the bitstream. Therefore, as described above in <1. Addition of 2D data>, a two-dimensional image can be displayed (2D data included in a bitstream can be reproduced) without rendering of an object having a three-dimensional shape on the decoding side. That is, a two-dimensional image can be reproduced more easily.
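The multiplexing performed by the multiplexer 118 can be pictured as bundling its six inputs into one container. The section tags and the list-of-tuples container below are purely illustrative, not the actual stream layout:

```python
def multiplex(aux_patch_info, packing_control, geometry, color_with_2d, occupancy_map, metadata):
    """Bundle the coded sub-streams plus the 2D metadata into one
    'bitstream' (here just an ordered list of tagged sections)."""
    return [
        ("metadata", metadata),              # includes the 2D control syntax
        ("aux_patch_info", aux_patch_info),
        ("packing_control", packing_control),
        ("geometry", geometry),
        ("color", color_with_2d),            # texture stream carrying the added 2D data
        ("occupancy", occupancy_map),
    ]
```

The point is that the color section already contains the 2D data, so a receiver that only wants the two-dimensional image never needs the geometry or occupancy sections.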
  • Fig. 9 is a block diagram illustrating an example of a main configuration of the 2D data generation unit 117 in Fig. 8 .
  • the 2D data generation unit 117 includes a control unit 131, a rendering unit 132, an arrangement control unit 133, a syntax generation unit 134, a coding control unit 135, and a metadata generation unit 136.
  • the control unit 131 performs processing related to rendering control. For example, the control unit 131 receives information regarding rendering control (e.g., a control command) input from outside such as a user, and controls the rendering unit 132 in accordance with the information.
  • the rendering unit 132 performs processing related to rendering. For example, the rendering unit 132 acquires a point cloud (3D data) input to the coding device 100. Furthermore, the rendering unit 132 renders, under the control of the control unit 131, an object having a three-dimensional shape represented by the point cloud, and generates a rendered image (2D data). The rendering unit 132 supplies the generated rendered image to the arrangement control unit 133.
  • the arrangement control unit 133 performs processing related to controlling of arrangement of a rendered image. For example, the arrangement control unit 133 acquires a rendered image supplied by the rendering unit 132. Furthermore, the arrangement control unit 133 supplies the rendered image to the packing unit 112. Moreover, the arrangement control unit 133 controls the packing unit 112 to arrange the rendered image at a predetermined position in a color video frame. The arrangement control unit 133 supplies the syntax generation unit 134 and the metadata generation unit 136 with arrangement information that indicates the spatial and temporal position where the rendered image (2D data) has been arranged.
  • the syntax generation unit 134 performs processing related to syntax generation. For example, the syntax generation unit 134 generates syntax on the basis of arrangement information supplied from the arrangement control unit 133. For example, the syntax generation unit 134 generates syntax that includes two-dimensional image spatial position management information, two-dimensional image temporal position management information, or the like. The syntax generation unit 134 supplies the generated syntax to the coding control unit 135.
  • the coding control unit 135 performs processing related to controlling of encoding of a color video frame. For example, the coding control unit 135 acquires syntax supplied from the syntax generation unit 134. Furthermore, the coding control unit 135 controls the video coding unit 115 to encode a color video frame with desired specifications. For example, the coding control unit 135 controls the video coding unit 115 to encode 2D data added to the color video frame as an independently decodable coding unit (e.g., a tile, a slice, or a picture).
  • the coding control unit 135 supplies an acquired syntax (two-dimensional image spatial position management information, two-dimensional image temporal position management information, or the like) to the video coding unit 115, and the acquired syntax is added to a bitstream of a color video frame.
  • the metadata generation unit 136 performs processing related to metadata generation. For example, the metadata generation unit 136 generates metadata on the basis of arrangement information supplied from the arrangement control unit 133. For example, the metadata generation unit 136 generates metadata that includes two-dimensional image presence/absence identification information, two-dimensional image reproduction assisting information, or the like. The metadata generation unit 136 supplies the generated metadata to the multiplexer 118.
  • the 2D data generation unit 117 of the coding device 100 executes 2D data generation processing to generate 2D data in step S101.
  • in step S102, the patch decomposition unit 111 projects an input point cloud onto a two-dimensional plane, and decomposes the point cloud into patches. Furthermore, the patch decomposition unit 111 generates auxiliary patch information for the decomposition.
  • in step S103, the auxiliary patch information compression unit 113 compresses (encodes) the auxiliary patch information generated in step S102.
  • in step S104, the packing unit 112 arranges each patch of position information and attribute information generated in step S102 on a two-dimensional image, and packs the patches as a video frame. Furthermore, the packing unit 112 generates an occupancy map. Moreover, the packing unit 112 performs dilation processing on a color video frame. Furthermore, the packing unit 112 generates control information regarding such packing.
  • in step S105, the packing unit 112 is controlled by the 2D data generation unit 117 to embed (add) the 2D data generated in step S101 into the color video frame generated in step S104.
  • in step S106, the video coding unit 114 encodes the geometry video frame generated in step S104, by an encoding method for two-dimensional images.
  • in step S107, the video coding unit 115 encodes the color video frame generated in step S104 (including the color video frame to which the 2D data has been added in step S105), by the encoding method for two-dimensional images.
  • in step S108, the OMap coding unit 116 encodes the occupancy map (or auxiliary info) generated in step S104, by a predetermined encoding method.
  • in step S109, the multiplexer 118 multiplexes the coded data generated in each of step S106 to step S108, and generates a bitstream that includes them (a bitstream of 3D data to which the 2D data has been added).
  • in step S110, the multiplexer 118 adds, to the bitstream generated in step S109, metadata that includes information regarding the 2D data generated in step S101.
  • in step S111, the multiplexer 118 outputs the bitstream generated in step S110 to the outside of the coding device 100.
  • when the processing of step S111 ends, the point cloud coding processing ends.
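Steps S101 to S111 above can be summarized as a pipeline. In the sketch below the callables stand in for the processing units of Fig. 8, so every signature is an illustrative assumption:

```python
def point_cloud_coding(point_cloud, render, decompose, pack, encode, mux):
    """Run the point cloud coding processing with pluggable stages
    (signatures are hypothetical stand-ins for the units in Fig. 8)."""
    rendered = render(point_cloud)                        # S101: 2D data generation
    patches, aux_info = decompose(point_cloud)            # S102: project and decompose
    coded_aux = encode("aux", aux_info)                   # S103: compress auxiliary patch info
    frames = pack(patches, rendered)                      # S104-S105: pack patches, embed 2D data
    coded = {
        "geometry": encode("video", frames["geometry"]),  # S106
        "color": encode("video", frames["color"]),        # S107 (2D data included)
        "omap": encode("omap", frames["omap"]),           # S108
    }
    return mux(coded_aux, coded)                          # S109-S111: multiplex, add metadata, output
```

Because the 2D data is embedded before step S107, it travels through the ordinary color video encoding path with no extra sub-stream.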
  • the control unit 131 of the 2D data generation unit 117 receives rendering control information, which is information regarding rendering control, in step S131.
  • in step S132, the rendering unit 132 generates a rendered image by rendering an object having a three-dimensional shape represented by a point cloud input to the coding device 100 on the basis of the rendering control information received in step S131.
  • in step S133, the arrangement control unit 133 supplies the rendered image generated in step S132 to the packing unit 112, and controls the packing unit 112 to arrange the rendered image at a desired position in a color video frame. This processing is executed corresponding to the processing of step S104 in Fig. 10.
  • in step S134, the syntax generation unit 134 generates desired syntax on the basis of arrangement information that indicates the position where the rendered image has been arranged in step S133.
  • in step S135, the coding control unit 135 controls the video coding unit 115 on the basis of the arrangement information, and controls encoding of the color video frame. That is, the coding control unit 135 encodes the color video frame with desired specifications and generates a bitstream.
  • in step S136, the coding control unit 135 controls the video coding unit 115 to add the syntax generated in step S134 to the bitstream of the color video frame generated in step S135.
  • the processing of step S135 and step S136 is executed corresponding to the processing of step S107 in Fig. 10.
  • in step S137, the metadata generation unit 136 generates desired metadata on the basis of the arrangement information that indicates the position where the rendered image has been arranged in step S133.
  • in step S138, the metadata generation unit 136 supplies the metadata generated in step S137 to the multiplexer 118, and the metadata is added to the bitstream generated in step S109. Note that this processing is executed corresponding to the processing of step S110 in Fig. 10.
  • when the processing of step S138 ends, the 2D data generation processing ends, and the processing returns to Fig. 10.
  • the coding device 100 can generate a bitstream of 3D data to which 2D data has been added. Consequently, on the decoding side, the coding device 100 can display a two-dimensional image (reproduce 2D data included in a bitstream) without rendering an object having a three-dimensional shape, as described above in <1. Addition of 2D data>. That is, a two-dimensional image can be reproduced more easily.
  • Fig. 12 is a block diagram illustrating an example of a configuration of a decoding device, which is an aspect of an image processing apparatus to which the present technology is applied.
  • a decoding device 200 illustrated in Fig. 12 is a device (a decoding device to which a video-based approach is applied) that decodes, by a decoding method for two-dimensional images, coded data obtained by projecting 3D data such as a point cloud onto a two-dimensional plane and encoding the 3D data, and reconstructs the 3D data.
  • the decoding device 200 decodes a bitstream generated by encoding of 3D data by the coding device 100 ( Fig. 8 ) (a bitstream of the 3D data to which 2D data has been added), and reconstructs the 3D data.
  • the decoding device 200 decodes coded data of the 2D data included in the bitstream, and generates 2D data without performing rendering.
  • note that Fig. 12 illustrates the main processing units, data flows, and the like, and does not necessarily illustrate all of them. That is, the decoding device 200 may include a processing unit that is not illustrated as a block in Fig. 12, or may involve a flow of processing or data that is not illustrated as an arrow or the like in Fig. 12.
  • the decoding device 200 includes a demultiplexer 211, an auxiliary patch information decoding unit 212, a video decoding unit 213, a video decoding unit 214, an OMap decoding unit 215, an unpacking unit 216, a 3D reconstruction unit 217, and a video decoding unit 218.
  • the demultiplexer 211 performs processing related to data demultiplexing. For example, the demultiplexer 211 acquires a bitstream input to the decoding device 200. This bitstream is supplied from, for example, the coding device 100. The demultiplexer 211 demultiplexes this bitstream, extracts coded data of auxiliary patch information, and supplies it to the auxiliary patch information decoding unit 212. Furthermore, the demultiplexer 211 extracts coded data of a geometry video frame (e.g., a GOF geometry video stream 52) from a bitstream by demultiplexing, and supplies it to the video decoding unit 213.
  • the demultiplexer 211 extracts coded data of a color video frame (e.g., a GOF texture video stream 54) from a bitstream by demultiplexing, and supplies it to the video decoding unit 214. Furthermore, the demultiplexer 211 extracts coded data of an occupancy map or the like (e.g., GOF auxiliary info & occupancy maps 53) from a bitstream by demultiplexing, and supplies it to the OMap decoding unit 215.
  • the demultiplexer 211 extracts control information regarding packing from a bitstream by demultiplexing, and supplies it to the unpacking unit 216 (not illustrated).
  • the demultiplexer 211 extracts, from a bitstream, a bitstream of a color video frame including 2D data (e.g., the GOF texture video stream 54), and supplies the extracted bitstream to the video decoding unit 218.
  • the auxiliary patch information decoding unit 212 performs processing related to decoding of coded data of auxiliary patch information. For example, the auxiliary patch information decoding unit 212 acquires coded data of auxiliary patch information supplied from the demultiplexer 211. Furthermore, the auxiliary patch information decoding unit 212 decodes (decompresses) the coded data of the auxiliary patch information included in the acquired data. The auxiliary patch information decoding unit 212 supplies the auxiliary patch information obtained by decoding to the 3D reconstruction unit 217.
  • the video decoding unit 213 performs processing related to decoding of coded data of a geometry video frame. For example, the video decoding unit 213 acquires coded data of a geometry video frame supplied from the demultiplexer 211. The video decoding unit 213 decodes the coded data of the geometry video frame by an optional decoding method for two-dimensional images such as AVC or HEVC. The video decoding unit 213 supplies the geometry video frame obtained by the decoding to the unpacking unit 216.
  • the video decoding unit 214 performs processing related to decoding of coded data of a color video frame. For example, the video decoding unit 214 acquires coded data of a color video frame supplied from the demultiplexer 211. The video decoding unit 214 decodes the coded data of the color video frame by an optional decoding method for two-dimensional images such as AVC or HEVC. The video decoding unit 214 supplies the color video frame obtained by the decoding to the unpacking unit 216.
  • the OMap decoding unit 215 performs processing related to decoding of coded data of an occupancy map or the like. For example, the OMap decoding unit 215 acquires coded data of an occupancy map or the like supplied from the demultiplexer 211. The OMap decoding unit 215 decodes the coded data of the occupancy map or the like by an optional decoding method corresponding to the encoding method used for the coded data. The OMap decoding unit 215 supplies information such as the occupancy map obtained by the decoding to the unpacking unit 216.
  • the unpacking unit 216 performs processing related to unpacking. For example, the unpacking unit 216 acquires a geometry video frame from the video decoding unit 213, acquires a color video frame from the video decoding unit 214, and acquires information such as an occupancy map from the OMap decoding unit 215. Furthermore, the unpacking unit 216 unpacks a geometry video frame or a color video frame on the basis of control information regarding packing or information such as an occupancy map, and extracts a patch (geometry patch) of position information (geometry), a patch (texture patch) of attribute information (texture), or the like from the video frame.
  • the occupancy map does not include information regarding 2D data, and the unpacking unit 216 can therefore ignore 2D data included in the color video frame and extract only the texture patch from the color video frame. That is, the unpacking unit 216 can easily perform unpacking as in a case of a bitstream to which 2D data has not been added.
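Because the occupancy map never marks the region carrying the 2D data, a mask-based extraction naturally skips it. The nested-list frame and map representation below is an illustrative simplification of the actual unpacking:

```python
def extract_texture_patches(color_frame, occupancy_map):
    """Keep only pixels the occupancy map flags as patch data; the added
    2D data region (occupancy 0) is ignored, exactly as in a bitstream
    to which no 2D data has been added."""
    return [
        [px if occ else None for px, occ in zip(row, occ_row)]
        for row, occ_row in zip(color_frame, occupancy_map)
    ]
```

This is why the unpacking unit 216 needs no special handling for streams with added 2D data.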
  • the unpacking unit 216 supplies the geometry patch, the texture patch, the occupancy map, and the like obtained by unpacking as described above to the 3D reconstruction unit 217.
  • the 3D reconstruction unit 217 performs processing related to reconstruction of a point cloud. For example, the 3D reconstruction unit 217 reconstructs a point cloud on the basis of auxiliary patch information supplied from the auxiliary patch information decoding unit 212, and information such as a geometry patch, a texture patch, and an occupancy map supplied from the unpacking unit 216. The 3D reconstruction unit 217 outputs the reconstructed point cloud to outside of the decoding device 200 (e.g., the 3D display 35).
  • the video decoding unit 218 performs processing related to decoding of coded data of 2D data included in coded data of a color video frame. For example, the video decoding unit 218 acquires coded data of a color video frame supplied from the demultiplexer 211. The video decoding unit 218 decodes coded data of 2D data included in the coded data (e.g., the GOF texture video stream 54) of the color video frame by an optional decoding method for two-dimensional images such as AVC or HEVC. The video decoding unit 218 outputs 2D data (e.g., a rendered image) obtained by the decoding to the outside of the decoding device 200 (e.g., the 2D display 36).
  • the decoding device 200 can display a two-dimensional image (reproduce 2D data included in a bitstream) without rendering an object having a three-dimensional shape, as described above in <1. Addition of 2D data>. That is, a two-dimensional image can be reproduced more easily.
  • the demultiplexer 211 demultiplexes a bitstream input to the decoding device 200 in step S201.
  • In step S202, the demultiplexer 211 determines whether or not there is 2D data in the bitstream on the basis of the 2D control syntax. For example, if thumbnail_available_flag of the 2D control syntax is true, it is determined that 2D data has been added, and the processing proceeds to step S203.
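The determination in step S202 might be sketched as a simple flag test. The dict-based representation of the 2D control syntax is an assumption for illustration only, not the actual bitstream syntax:

```python
def has_2d_data(control_syntax):
    """Return True if the bitstream signals added 2D data.
    'thumbnail_available_flag' is taken from the 2D control syntax;
    the dict layout here is an illustrative assumption."""
    return bool(control_syntax.get("thumbnail_available_flag", 0))

print(has_2d_data({"thumbnail_available_flag": 1}))  # → True
print(has_2d_data({}))                               # → False
```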
  • In step S203, the demultiplexer 211 extracts the coded data of the color video frame (GOF texture video stream) from the bitstream input to the decoding device 200.
  • In step S204, the video decoding unit 218 decodes the coded data of the 2D data (2D coded data) included in the coded data of the color video frame (GOF texture video stream) extracted in step S203.
  • the video decoding unit 218 decodes only the portion where the 2D data is included.
  • For example, when the 2D data is contained in an independently decodable coding unit, the video decoding unit 218 decodes only that coding unit.
  • In step S205, the video decoding unit 218 outputs the 2D data generated by the decoding described above to the outside of the decoding device 200.
  • When the processing of step S205 ends, the processing proceeds to step S206. Furthermore, if it is determined in step S202 that 2D data has not been added, the processing also proceeds to step S206.
  • In step S206, the auxiliary patch information decoding unit 212 decodes the auxiliary patch information extracted from the bitstream in step S201.
  • In step S207, the video decoding unit 213 decodes the coded data of the geometry video frame (position information video frame) extracted from the bitstream in step S201.
  • In step S208, the video decoding unit 214 decodes the coded data of the color video frame (attribute information video frame) extracted from the bitstream in step S201.
  • In step S209, the OMap decoding unit 215 decodes the coded data of the occupancy map and the like extracted from the bitstream in step S201.
  • In step S210, the unpacking unit 216 performs unpacking. For example, the unpacking unit 216 unpacks the geometry video frame obtained by decoding of the coded data in step S207 on the basis of information such as the occupancy map obtained by decoding of the coded data in step S209, and generates a geometry patch. Furthermore, the unpacking unit 216 unpacks the color video frame obtained by decoding of the coded data in step S208 on the basis of information such as the occupancy map obtained by decoding of the coded data in step S209, and generates a texture patch.
  • In step S211, the 3D reconstruction unit 217 reconstructs a point cloud (an object having a three-dimensional shape) on the basis of the auxiliary patch information obtained in step S206, and the geometry patch, the texture patch, the occupancy map, and the like obtained in step S210.
  • In step S212, the 3D reconstruction unit 217 outputs the reconstructed point cloud to the outside of the decoding device 200.
  • When the processing of step S212 ends, the decoding processing ends.
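The overall order of steps S201 to S212 can be summarized with the following skeleton, in which every step is a stub that merely records its name. All names are placeholders standing in for the processing units described above, not real codec logic:

```python
def decode_bitstream(streams):
    """Trace the decoding flow of steps S201-S212; 'streams' stands in for
    the demultiplexed bitstream (step S201). Step S202 is the flag test."""
    trace = ["S201:demux"]
    if streams.get("thumbnail_available_flag"):        # S202
        trace.append("S203:extract texture stream")
        trace.append("S204:decode 2D coded data")
        trace.append("S205:output 2D image")
    trace.append("S206:decode aux patch info")
    trace.append("S207:decode geometry frame")
    trace.append("S208:decode color frame")
    trace.append("S209:decode occupancy map")
    trace.append("S210:unpack")
    trace.append("S211:reconstruct point cloud")
    trace.append("S212:output point cloud")
    return trace

print(decode_bitstream({"thumbnail_available_flag": 1}))
```

Note that when the flag is false, the three 2D-related steps (S203 to S205) are skipped and the flow continues directly at step S206, matching the branch described above.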
  • the decoding device 200 can display a two-dimensional image (reproduce 2D data included in a bitstream) without rendering an object having a three-dimensional shape, as described above in <1. Addition of 2D data>. That is, a two-dimensional image can be reproduced more easily.
  • the decoding device 200 has been described as having the video decoding unit 218 in addition to the video decoding unit 214.
  • Both the video decoding unit 214 and the video decoding unit 218 are processing units that decode coded data of a color video frame. That is, they are decoding units having functions similar to each other. Consequently, the processing to be performed by the video decoding unit 214 and the processing to be performed by the video decoding unit 218 may be performed by one video decoding unit.
  • Fig. 14 is a block diagram illustrating an example of a main configuration of the decoding device 200 in that case.
  • the decoding device 200 basically has a configuration similar to that in the case of Fig. 12 , but includes, unlike the case of Fig. 12 , a video decoding unit 221 instead of the video decoding unit 214 and the video decoding unit 218.
  • the video decoding unit 221 performs both the processing to be performed by the video decoding unit 214 and the processing to be performed by the video decoding unit 218.
  • the video decoding unit 221 acquires coded data of a color video frame supplied from the demultiplexer 211, decodes the coded data of the color video frame by an optional decoding method for two-dimensional images such as AVC or HEVC, and supplies the color video frame obtained by the decoding to the unpacking unit 216.
  • the video decoding unit 221 decodes coded data of 2D data included in the acquired coded data of the color video frame by an optional decoding method for two-dimensional images such as AVC or HEVC, and outputs 2D data obtained by the decoding (e.g., a rendered image) to outside of the decoding device 200 (e.g., the 2D display 36).
  • the configuration of the decoding device 200 can be simplified as compared with that in the case of Fig. 12 . That is, it is possible to suppress an increase in circuit scale of the decoding device 200.
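The merged video decoding unit 221 could be sketched as a single routine that decodes the color video stream once and routes its two outputs. The identity "decoding" pass and the thumb_region metadata field are assumptions for illustration, not the described device's actual interface:

```python
def decode_color_stream(coded_frame, thumb_region=None):
    """Stand-in for the video decoding unit 221: one decode pass serves
    both the unpacking unit 216 (full frame) and the 2D display (crop).
    'thumb_region' = (x0, y0, width, height) is hypothetical metadata."""
    frame = [row[:] for row in coded_frame]   # placeholder for AVC/HEVC decoding
    rendered = None
    if thumb_region is not None:
        x0, y0, w, h = thumb_region
        rendered = [row[x0:x0 + w] for row in frame[y0:y0 + h]]
    return frame, rendered  # frame → unpacking, rendered → 2D display

full, thumb = decode_color_stream([[1, 2], [3, 4]], thumb_region=(1, 0, 1, 2))
print(thumb)  # → [[2], [4]]
```

Because both outputs come from a single decode of the same stream, no second decoding unit is needed, which is the circuit-scale saving noted above.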
  • the series of pieces of processing described above can be executed not only by hardware but also by software.
  • a program constituting the software is installed on a computer.
  • the computer includes a computer incorporated in dedicated hardware, or a general-purpose personal computer capable of executing various functions with various programs installed therein, for example.
  • Fig. 15 is a block diagram illustrating a configuration example of hardware of a computer that executes the series of pieces of processing described above in accordance with a program.
  • a central processing unit (CPU) 901, a read only memory (ROM) 902, and a random access memory (RAM) 903 are connected to each other via a bus 904.
  • An input/output interface 910 is also connected to the bus 904.
  • An input unit 911, an output unit 912, a storage unit 913, a communication unit 914, and a drive 915 are connected to the input/output interface 910.
  • the input unit 911 includes, for example, a keyboard, a mouse, a microphone, a touch panel, an input terminal, or the like.
  • the output unit 912 includes, for example, a display, a speaker, an output terminal, or the like.
  • the storage unit 913 includes, for example, a hard disk, a RAM disk, a nonvolatile memory, or the like.
  • the communication unit 914 includes, for example, a network interface.
  • the drive 915 drives a removable medium 921 such as a magnetic disk, an optical disk, a magnetooptical disk, or a semiconductor memory.
  • the computer configured as described above causes the CPU 901 to, for example, load a program stored in the storage unit 913 into the RAM 903 via the input/output interface 910 and the bus 904 and then execute the program.
  • the RAM 903 also stores, as appropriate, data or the like necessary for the CPU 901 to execute various types of processing.
  • the program to be executed by the computer can be provided by, for example, being recorded on the removable medium 921 as a package medium or the like. In that case, inserting the removable medium 921 into the drive 915 allows the program to be installed into the storage unit 913 via the input/output interface 910.
  • the program can also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
  • the program can be received by the communication unit 914 and installed into the storage unit 913.
  • the program can also be installed in advance in the ROM 902 or the storage unit 913.
  • the present technology is not limited to these examples, and can be applied to encoding/decoding of 3D data of any standard. That is, the various types of processing such as encoding/decoding methods and the various types of data such as 3D data and metadata may have any specifications, as long as the specifications do not contradict the present technology described above. Furthermore, some of the pieces of processing and specifications described above may be omitted as long as the omission does not contradict the present technology.
  • the present technology can be applied to any configuration.
  • the present technology can be applied to a variety of electronic devices such as a transmitter or a receiver (e.g., a television receiver or a mobile phone) for satellite broadcasting, wired broadcasting such as cable TV, distribution on the Internet, distribution to a terminal by cellular communication, or the like, or a device (e.g., a hard disk recorder or a camera) that records an image on a medium such as an optical disk, a magnetic disk, or a flash memory, and reproduces an image from such a storage medium.
  • the present technology can also be carried out as a configuration of a part of a device such as a processor (e.g., a video processor) as a system large scale integration (LSI) or the like, a module (e.g., a video module) using a plurality of processors or the like, a unit (e.g., a video unit) using a plurality of modules or the like, or a set (e.g., a video set) in which other functions have been added to a unit.
  • the present technology can also be applied to a network system constituted by a plurality of devices.
  • the present technology may be carried out as cloud computing in which a plurality of devices shares and jointly performs processing via a network.
  • the present technology may be carried out in a cloud service that provides services related to images (moving images) to an optional terminal such as a computer, an audio visual (AV) device, a portable information processing terminal, or an Internet of Things (IoT) device.
  • a system means a set of a plurality of components (devices, modules (parts), and the like), and it does not matter whether or not all components are in the same housing.
  • a plurality of devices housed in separate housings and connected via a network, and one device having a plurality of modules housed in one housing are both systems.
  • Systems, devices, processing units, and the like to which the present technology is applied can be used in any field such as transportation, medical care, crime prevention, agriculture, livestock industry, mining, beauty, factories, home appliances, weather, or nature monitoring. Furthermore, they can be used for any intended use.
  • a "flag” is information for identifying a plurality of situations, and includes not only information used for identifying two situations, true (1) and false (0), but also information that enables identification of three or more situations. Consequently, the number of values that this "flag” can take may be two such as “1" and "0", or may be three or more. That is to say, the number of bits constituting this "flag” is optional, and may be one bit or may be a plurality of bits. Furthermore, assumption of identification information (including a flag) includes not only a case where the identification information is included in a bitstream but also a case where difference information between the identification information and a certain piece of information serving as a reference is included in a bitstream. Thus, in the present specification, a “flag” and “identification information” include not only the information but also difference information between the information and a piece of information serving as a reference.
  • Furthermore, various types of information regarding coded data may be transmitted or recorded in any form as long as they are associated with the coded data.
  • the term "associated with” means, for example, allowing one piece of data to be used (linked) when another piece of data is processed. That is, pieces of data associated with each other may be combined as one piece of data, or may be treated as separate pieces of data.
  • information associated with coded data (image) may be transmitted via a transmission path different from that of the coded data (image).
  • information associated with coded data (image) may be recorded on a recording medium different from that where the coded data (image) is recorded (or in a different recording area in the same recording medium).
  • this "associated with” may indicate association with not the entire data but a part of the data.
  • an image and information corresponding to the image may be associated with each other by any unit such as a plurality of frames, one frame, or a part of a frame.
  • embodiments of the present technology are not limited to the embodiments described above but can be modified in various ways within a scope of the present technology.
  • a configuration described as one device may be divided and configured as a plurality of devices (or processing units).
  • configurations described above as a plurality of devices (or processing units) may be combined and configured as one device (or processing unit).
  • a configuration other than those described above may be added to the configurations of the devices (or the processing units).
  • a part of the configuration of a certain device (or processing unit) may be included in the configuration of another device (or another processing unit).
  • the program described above may be executed by any device.
  • the device is only required to have necessary functions (functional blocks and the like) and be able to obtain necessary information.
  • the steps of one flowchart may be executed by one device, or may be shared and executed by a plurality of devices.
  • the plurality of pieces of processing may be executed by one device, or may be shared and executed by a plurality of devices.
  • a plurality of pieces of processing included in one step may be processed as a plurality of steps.
  • processing described as a plurality of steps may be collectively executed as one step.
  • the program to be executed by the computer may be configured so that the steps described are processed in chronological order as described in the present specification, or the steps are processed in parallel or processed individually when needed, for example, when a call is made. That is, as long as no contradiction arises, the steps may be processed in an order different from the order described above.
  • the program may be configured so that processing of the steps described is executed in parallel with processing of another program, or executed in combination with processing of another program.
  • a plurality of technologies related to the present technology can each be carried out independently and individually as long as no contradiction arises.
  • any two or more technologies related to the present technology may be used together and carried out.
  • some or all of the technologies related to the present technology described in any one of the embodiments may be carried out in combination with some or all of the technologies related to the present technology described in another embodiment.
  • some or all of any of the technologies related to the present technology described above may be carried out in combination with another technology that is not described above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Generation (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
EP19843809.5A 2018-08-02 2019-07-19 Dispositif et procédé de traitement d'image Withdrawn EP3833029A4 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018145658 2018-08-02
PCT/JP2019/028420 WO2020026846A1 (fr) 2018-08-02 2019-07-19 Dispositif et procédé de traitement d'image

Publications (2)

Publication Number Publication Date
EP3833029A1 true EP3833029A1 (fr) 2021-06-09
EP3833029A4 EP3833029A4 (fr) 2021-09-01

Family

ID=69231745

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19843809.5A Withdrawn EP3833029A4 (fr) 2018-08-02 2019-07-19 Dispositif et procédé de traitement d'image

Country Status (5)

Country Link
US (1) US11405644B2 (fr)
EP (1) EP3833029A4 (fr)
JP (1) JP7331852B2 (fr)
CN (1) CN112514396A (fr)
WO (1) WO2020026846A1 (fr)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7490555B2 (ja) * 2018-08-08 2024-05-27 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置、及び三次元データ復号装置
US11107249B2 (en) * 2019-03-18 2021-08-31 Sony Group Corporation Point cloud global tetris packing
US12047604B2 (en) * 2019-03-19 2024-07-23 Nokia Technologies Oy Apparatus, a method and a computer program for volumetric video
CN114501032A (zh) * 2020-10-23 2022-05-13 富士通株式会社 视频编码装置及方法、视频解码装置及方法和编解码系统
CN114915794B (zh) * 2021-02-08 2023-11-14 荣耀终端有限公司 基于二维规则化平面投影的点云编解码方法及装置
US11606556B2 (en) 2021-07-20 2023-03-14 Tencent America LLC Fast patch generation for video based point cloud coding
CN113852829A (zh) * 2021-09-01 2021-12-28 腾讯科技(深圳)有限公司 点云媒体文件的封装与解封装方法、装置及存储介质

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2810159C (fr) * 2010-09-19 2017-04-18 Lg Electronics Inc. Procede et appareil pour traiter un signal de telediffusion pour un service de diffusion 3d (en 3 dimensions)
CN102724520A (zh) * 2011-03-29 2012-10-10 青岛海信电器股份有限公司 视频处理方法和系统
US9351028B2 (en) * 2011-07-14 2016-05-24 Qualcomm Incorporated Wireless 3D streaming server
US9984504B2 (en) * 2012-10-01 2018-05-29 Nvidia Corporation System and method for improving video encoding using content information
US10318543B1 (en) * 2014-03-20 2019-06-11 Google Llc Obtaining and enhancing metadata for content items
US10334221B2 (en) * 2014-09-15 2019-06-25 Mantisvision Ltd. Methods circuits devices systems and associated computer executable code for rendering a hybrid image frame
CN105187841B (zh) * 2015-10-16 2017-12-15 中国工程物理研究院机械制造工艺研究所 一种带误差反馈的三维数据编解码方法
CN106131535B (zh) * 2016-07-29 2018-03-02 传线网络科技(上海)有限公司 视频采集方法及装置、视频生成方法及装置
US20180053324A1 (en) * 2016-08-19 2018-02-22 Mitsubishi Electric Research Laboratories, Inc. Method for Predictive Coding of Point Cloud Geometries
EP3349182A1 (fr) * 2017-01-13 2018-07-18 Thomson Licensing Procédé, appareil et flux de format vidéo immersif
US10909725B2 (en) * 2017-09-18 2021-02-02 Apple Inc. Point cloud compression
US11611774B2 (en) 2018-01-17 2023-03-21 Sony Corporation Image processing apparatus and image processing method for 3D data compression
WO2019197708A1 (fr) * 2018-04-09 2019-10-17 Nokia Technologies Oy Appareil, procédé et programme d'ordinateur pour vidéo volumétrique
KR20210020028A (ko) * 2018-07-12 2021-02-23 삼성전자주식회사 3차원 영상을 부호화 하는 방법 및 장치, 및 3차원 영상을 복호화 하는 방법 및 장치

Also Published As

Publication number Publication date
JP7331852B2 (ja) 2023-08-23
US11405644B2 (en) 2022-08-02
WO2020026846A1 (fr) 2020-02-06
CN112514396A (zh) 2021-03-16
US20210297696A1 (en) 2021-09-23
EP3833029A4 (fr) 2021-09-01
JPWO2020026846A1 (ja) 2021-08-26

Similar Documents

Publication Publication Date Title
US11405644B2 (en) Image processing apparatus and method
US11611774B2 (en) Image processing apparatus and image processing method for 3D data compression
US11741575B2 (en) Image processing apparatus and image processing method
US11721048B2 (en) Image processing apparatus and method
US11699248B2 (en) Image processing apparatus and method
US11399189B2 (en) Image processing apparatus and method
US11356690B2 (en) Image processing apparatus and method
KR20200140256A (ko) 화상 처리 장치 및 방법
EP4167573A1 (fr) Dispositif et procédé de traitement d'informations
US11915390B2 (en) Image processing device and method
EP4102841A1 (fr) Dispositif de traitement d'informations et procédé de traitement d'informations
US20230370636A1 (en) Image processing device and method
US20230222693A1 (en) Information processing apparatus and method
US20220303578A1 (en) Image processing apparatus and method

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20210118

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

RAP3 Party data changed (applicant data changed or rights of an application transferred)

Owner name: SONY GROUP CORPORATION

A4 Supplementary search report drawn up and despatched

Effective date: 20210802

RIC1 Information provided on ipc code assigned before grant

Ipc: H04N 19/597 20140101AFI20210727BHEP

Ipc: H04N 19/46 20140101ALI20210727BHEP

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

17Q First examination report despatched

Effective date: 20231201

18W Application withdrawn

Effective date: 20231211