WO2014073873A1 - Method and apparatus for processing a video signal - Google Patents
Method and apparatus for processing a video signal
- Publication number
- WO2014073873A1 PCT/KR2013/010056 KR2013010056W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- block
- motion vector
- inter-view motion
- neighboring block
- inter-view
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/146—Data rate or code amount at the encoder output
- H04N19/147—Data rate or code amount at the encoder output according to rate distortion criteria
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/56—Motion estimation with initialisation of the vector search, e.g. estimating a good candidate to initiate a search
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
- H04N19/517—Processing of motion vectors by encoding
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
Definitions
- The present invention relates to a method and apparatus for coding a video signal.
- Compression coding refers to a series of signal processing techniques for transmitting digitized information through a communication line or storing it in a form suitable for a storage medium.
- Targets of compression encoding include audio, video, text, and the like; in particular, a technique of performing compression encoding on an image is called video image compression.
- A general feature of multi-view video images is that they have spatial redundancy, temporal redundancy, and inter-view redundancy.
- An object of the present invention is to improve the coding efficiency of a video signal.
- The present invention is characterized by deriving the inter-view motion vector of the current block by searching at least one of the inter-view motion vectors of the neighboring blocks according to a predetermined priority.
- The present invention is also characterized by generating an inter-view motion vector candidate list including at least one of the inter-view motion vectors of the neighboring blocks in order to derive the inter-view motion vector of the current block.
- The present invention is further characterized by deriving the inter-view motion vector of the current block using index information corresponding to an inter-view motion vector candidate.
- Inter-view inter prediction using an inter-view motion vector can increase the accuracy of video data prediction by exploiting the correlation between views.
- An accurate inter-view motion vector can thus be derived, increasing the accuracy of inter-view inter prediction and reducing the coding amount by decreasing the amount of residual data transmitted.
- FIG. 1 is an embodiment to which the present invention is applied and shows an internal block diagram of a broadcast receiver to which depth coding is applied.
- FIG. 2 is a schematic block diagram of a video decoder according to an embodiment to which the present invention is applied.
- FIG. 3 illustrates an example of a neighboring block of the current block as an embodiment to which the present invention is applied.
- FIG. 4 is a flowchart illustrating a first embodiment of obtaining an inter-view motion vector of a current block from an inter-view motion vector used for coding a neighboring block as an embodiment to which the present invention is applied.
- FIG. 5 illustrates an example of a spatial neighboring block coded using an inter-view motion vector as an embodiment to which the present invention is applied.
- FIG. 6 illustrates an example of a temporal neighboring block coded using an inter-view motion vector as an embodiment to which the present invention is applied.
- FIG. 7 is a flowchart illustrating a second embodiment of obtaining an inter-view motion vector of a current block from an inter-view motion vector used for coding a neighboring block in a decoder according to an embodiment to which the present invention is applied.
- FIG. 8 illustrates an example of a flowchart of generating index information in an encoder and transmitting it to a decoder according to an embodiment to which the present invention is applied.
- FIG. 9 illustrates an example of an inter-view motion vector candidate list as an embodiment to which the present invention is applied.
- The present invention provides a video signal processing method and apparatus that obtain a first inter-view motion vector from one of the left neighboring block and the lower left neighboring block of the current block, obtain a second inter-view motion vector from one of the upper neighboring block, the upper right neighboring block, and the upper left neighboring block of the current block, obtain inter-view motion vector flag information, and determine one of the first inter-view motion vector and the second inter-view motion vector as the inter-view motion vector of the current block based on the inter-view motion vector flag information.
- In the present invention, if the left neighboring block is not a block coded using an inter-view motion vector, it is determined whether the lower left neighboring block is a block coded using an inter-view motion vector.
- Likewise, it is determined whether the upper neighboring block is a block coded using an inter-view motion vector; if the upper neighboring block is not such a block, it is determined whether the upper right neighboring block is a block coded using an inter-view motion vector; and if the upper right neighboring block is not such a block, it is determined whether the upper left neighboring block is a block coded using an inter-view motion vector.
- The present invention also provides a video signal processing method and apparatus in which a third inter-view motion vector is obtained from a temporal neighboring block, and the inter-view motion vector of the current block is determined from among the first, second, and third inter-view motion vectors.
- The temporal neighboring block includes at least one of a co-located block and a coding block including a neighboring block adjacent to the co-located block.
- The present invention also provides a video signal processing method and apparatus that search for a block coded using an inter-view motion vector among the spatial neighboring blocks of the current block, generate an inter-view motion vector candidate list using the inter-view motion vectors of the searched spatial neighboring blocks, and derive the inter-view motion vector of the current block from the candidate list using index information.
- The spatial neighboring blocks may include a left neighboring block, an upper neighboring block, an upper right neighboring block, a lower left neighboring block, and an upper left neighboring block, and a block coded using an inter-view motion vector is searched for in the order of the left neighboring block, the upper neighboring block, the upper right neighboring block, the lower left neighboring block, and the upper left neighboring block.
- The present invention also provides a video signal processing method and apparatus that search for a block coded using an inter-view motion vector among the temporal neighboring blocks of the current block and store the inter-view motion vector of the searched temporal neighboring block in the inter-view motion vector candidate list.
- The temporal neighboring block may include at least one of a co-located block and a coding block including a neighboring block adjacent to the co-located block.
- Techniques for compression encoding or decoding multi-view video signal data take into account spatial redundancy, temporal redundancy, and redundancy existing between viewpoints.
- a multiview texture image photographed from two or more viewpoints may be coded to implement a 3D image.
- depth data corresponding to a multiview texture image may be further coded as necessary.
- compression coding may be performed in consideration of spatial redundancy, temporal redundancy, or inter-view redundancy.
- the depth data represents distance information between the camera and the corresponding pixel, and within the present specification, the depth data may be flexibly interpreted as information related to depth, such as depth information, a depth image, a depth picture, a depth sequence, and a depth bitstream.
- the term coding in this specification may include both the concepts of encoding and decoding, and may be flexibly interpreted according to the technical spirit and technical scope of the present invention.
- FIG. 1 is an embodiment to which the present invention is applied and shows an internal block diagram of a broadcast receiver to which depth coding is applied.
- the broadcast receiver is for receiving an over-the-air broadcast signal and playing back an image.
- the broadcast receiver may generate 3D content using the received depth related information.
- The broadcast receiver includes a tuner 100, a demodulation/channel decoder 102, a transport demultiplexer 104, a packet release unit 106, an audio decoder 108, a video decoder 110, a PSI/PSIP processing unit 114, a 3D renderer 116, a formatter 120, and a display 122.
- the tuner 100 selects and outputs a broadcast signal of one channel selected by a user from among a plurality of broadcast signals input through an antenna (not shown).
- the demodulation / channel decoder 102 demodulates the broadcast signal from the tuner 100 and performs error correction decoding on the demodulated signal to output the transport stream TS.
- the transport demultiplexer 104 demultiplexes the transport stream, separates the video PES and the audio PES, and extracts PSI / PSIP information.
- The packet release unit 106 depacketizes the video PES and the audio PES to restore the video ES and the audio ES.
- the audio decoder 108 decodes the audio ES and outputs an audio bitstream.
- the audio bitstream is converted into an analog voice signal by a digital-to-analog converter (not shown), amplified by an amplifier (not shown), and then output through a speaker (not shown).
- the video decoder 110 decodes the video ES to reconstruct the original image.
- the decoding process of the audio decoder 108 and the video decoder 110 may be performed based on a packet ID (PID) confirmed by the PSI / PSIP processing unit 114.
- the video decoder 110 may extract depth information.
- Additional information required to generate an image of a virtual camera viewpoint, for example camera information, or information for estimating occlusion hidden by a relatively forward object (e.g., geometric information such as an object outline, object transparency information, and color information), may be extracted and provided to the 3D renderer 116.
- the depth information and / or additional information may be separated by the transport demultiplexer 104.
- the PSI / PSIP processing unit 114 receives the PSI / PSIP information from the transport demultiplexing unit 104, parses it, and stores it in a memory (not shown) or a register so that the broadcast is reproduced based on the stored information.
- the 3D renderer 116 may generate color information, depth information, and the like at the virtual camera position by using the reconstructed image, depth information, additional information, and camera parameters.
- the 3D renderer 116 generates a virtual image at a virtual camera position by performing 3D warping using the reconstructed image and depth information of the reconstructed image.
- Although the 3D rendering unit 116 is described as a block separate from the video decoder 110, this is only an example; the 3D rendering unit 116 may be included in the video decoder 110.
- The formatter 120 formats the images reconstructed in the decoding process, that is, the images captured by the actual cameras and the virtual images generated by the 3D rendering unit 116, according to the display method of the corresponding receiver, and the display unit 122 displays the 3D image.
- the synthesis of the depth information and the virtual image at the virtual camera position by the 3D rendering unit 116 and the image formatting by the formatter 120 may be selectively performed in response to a user's command. That is, the viewer may operate the remote controller (not shown) so that the synthesized image is not displayed or may specify a time point at which the image synthesis is to be performed.
- the depth information is used by the 3D rendering unit 116 to generate a 3D image, but may be used by the video decoder 110 as another embodiment.
- various embodiments of using depth information in the video decoder 110 will be described.
- FIG. 2 is a schematic block diagram of a video decoder according to an embodiment to which the present invention is applied.
- The video decoder 110 may include an entropy decoding unit 210, an inverse quantization unit 220, an inverse transform unit 230, a deblocking filter unit 240, a decoded picture buffer unit 250, an inter prediction unit 260, and an intra prediction unit 270.
- the solid line means the flow of color picture data
- the dotted line means the flow of depth picture data.
- Although the color picture data and the depth picture data are shown separately in FIG. 2, this may mean separate bitstreams or may be regarded as merely the flow of data within one bitstream. That is, the color picture data and the depth picture data may be transmitted in one bitstream or in separate bitstreams.
- In addition, decoding of the color picture data and the depth picture data is not necessarily all performed in one decoder.
- the slice header area, the slice header extension area, the slice data area, or the macro block area may include various attribute information related to the depth.
- Depth coding may use a separate codec, but for compatibility with existing codecs, it may be more efficient to add the various depth-related attribute information only in the case of a depth bitstream.
- depth identification information for identifying whether a bit stream is a depth bitstream may be added in the sequence header area (eg, a sequence parameter set) or an extension area of the sequence header.
- property information regarding a depth sequence may be added only when the input bitstream is a depth coded bitstream.
- the parsed depth bitstream 200 is entropy decoded through the entropy decoding unit 210, and coefficients, motion vectors, and the like of each macroblock are extracted.
- The inverse quantizer 220 multiplies the received quantized values by a predetermined constant (the quantization step size) to obtain transformed coefficient values,
- the inverse transform unit 230 inversely transforms the coefficient value to restore depth information of the depth picture.
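The rescaling step above can be sketched as follows. This is a minimal illustration, not the codec's actual dequantization: real decoders derive the step size from a quantization parameter table, and the function name and list-based representation here are assumptions for clarity.

```python
def dequantize(levels, qstep):
    """Rescale entropy-decoded quantized levels back to transform
    coefficients by multiplying each level by the quantization step."""
    return [level * qstep for level in levels]

# The inverse transform unit would then map these rescaled coefficients
# back to residual samples (omitted here for brevity).
coeffs = dequantize([2, -1, 0, 3], qstep=10)
```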
- the intra predictor 270 performs intra prediction using the reconstructed depth information of the current depth picture.
- the deblocking filter unit 240 applies deblocking filtering to each coded macroblock in order to reduce block distortion.
- the filter smoothes the edges of the block to improve the quality of the decoded frame. The choice of filtering process depends on the boundary strength and the gradient of the image samples around the boundary.
- the filtered depth pictures are output or stored in the decoded picture buffer unit 250 for use as a reference picture.
- The decoded picture buffer unit 250 stores or releases previously coded depth pictures in order to perform inter prediction.
- To do so, the frame_num and POC (Picture Order Count) of each picture are used. In depth coding, some of the previously coded pictures may be depth pictures at a view different from that of the current depth picture, so depth view information identifying the view of a depth picture may be used together with frame_num and POC.
- the decoded picture buffer unit 250 may use information about the depth view in order to generate a reference picture list for inter-view prediction of the depth picture.
- depth-view reference information may be used.
- Depth-view reference information refers to information used to indicate the dependency between the views of depth pictures. For example, it may include the total number of depth views, a depth view identification number, the number of depth-view reference pictures, and the depth view identification numbers of the depth-view reference pictures.
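The items listed above could be grouped in a structure like the following. The field names are hypothetical (the text does not give bitstream syntax element names); this is only a sketch of what depth-view reference information carries.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DepthViewReferenceInfo:
    """Illustrative container for depth-view reference information;
    field names are assumptions, not standardized syntax elements."""
    num_depth_views: int                  # total number of depth views
    depth_view_id: int                    # identification number of this depth view
    num_ref_pics: int                     # number of depth-view reference pictures
    ref_depth_view_ids: List[int] = field(default_factory=list)  # their view ids

# Example: view 1 depends on depth pictures from views 0 and 2.
info = DepthViewReferenceInfo(num_depth_views=3, depth_view_id=1,
                              num_ref_pics=2, ref_depth_view_ids=[0, 2])
```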
- The decoded picture buffer unit 250 manages reference pictures in order to implement inter prediction more flexibly. For example, an adaptive memory management control method and a sliding window method may be used. This unifies the memory for reference pictures and non-reference pictures into one memory so that they can be managed efficiently with a small memory.
- The depth pictures may be marked with a separate indication in the decoded picture buffer unit to distinguish them from color pictures, and information for identifying each depth picture may be used in the marking process. Reference pictures managed through this process may be used for depth coding in the inter prediction unit 260.
- The inter prediction unit 260 may include a motion compensation unit 261, a virtual view synthesizing unit 262, and a depth picture generating unit 263.
- the motion compensator 261 compensates for the motion of the current block by using the information transmitted from the entropy decoder 210.
- a motion vector of blocks neighboring the current block is extracted from the video signal, and a motion vector prediction value of the current block is obtained.
- the motion of the current block is compensated by using the motion vector prediction value and the difference vector extracted from the video signal.
- motion compensation may be performed using one reference picture or may be performed using a plurality of pictures.
- In depth coding, when the current depth picture refers to a depth picture at a different view, motion compensation may be performed using information about the reference picture list for inter-view prediction of depth pictures stored in the decoded picture buffer unit 250.
- motion compensation may be performed using depth view information identifying a view of the depth picture.
- the virtual view synthesizing unit 262 synthesizes the color picture of the virtual view using the color picture of the view neighboring to the view of the current color picture.
- viewpoint identification information indicating a viewpoint of the color picture may be used.
- flag information indicating whether to generate the color picture of the virtual view may be defined.
- the color picture of the virtual view may be generated using the view identification information.
- the color picture of the virtual view obtained through the virtual view synthesizing unit 262 may be used as a reference picture.
- the view identification information may also be assigned to the color picture of the virtual view.
- the virtual view synthesizing unit 262 may synthesize a depth picture of a virtual view using a depth picture located at a view adjacent to the view of the current depth picture.
- depth view identification information may be used to indicate the view point of the depth picture.
- the depth view identification information may be derived from view identification information of a corresponding color picture.
- the corresponding color picture may have the same picture output order information and the same view identification information as the current depth picture.
- the depth picture generator 263 may generate a current depth picture by using the depth coding information.
- the depth coding information may include a distance variable (eg, a Z coordinate value in a camera coordinate system) indicating a distance between a camera and an object, macroblock type information for depth coding, boundary identification information in a depth picture, and data in an RBSP.
- the current depth picture may be predicted using the depth coding information. That is, inter prediction using depth pictures neighboring the current depth picture may be performed, and intra prediction using depth information decoded in the current depth picture may be possible.
- FIG. 3 illustrates an example of a neighboring block of the current block as an embodiment to which the present invention is applied.
- the neighboring block is a block adjacent to the current block 300 and may be divided into a spatial neighboring block and a temporal neighboring block.
- The spatial neighboring blocks are blocks in the current picture located around the current block 300 and, as shown in FIG. 3, may include the lower left neighboring block 310, the left neighboring block 320, the upper right neighboring block 330, the upper neighboring block 340, and the upper left neighboring block 350.
- The temporal neighboring block is a block included in, or located around, the co-located block 360, and may include the center block 370 of the co-located block and the lower right neighboring block 380 of the co-located block.
- The co-located block 360 may be defined as a block including the pixel position (Xr, Yr) of the reference picture corresponding to the position (Xc, Yc) of the center pixel of the current block 300.
- Alternatively, the co-located block may be defined as a block including the position in the reference picture corresponding to the position of the upper left pixel of the current block 300.
- Here, the reference picture may mean a picture located at the same view as, but a different time from, the current picture including the current block 300.
- The center block 370 of the co-located block is a block including the center pixel of the co-located block 360.
- The lower right neighboring block 380 of the co-located block is a block including the lower right pixel of the co-located block 360.
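The positions used to locate these temporal neighbors can be sketched as below. This is an assumption-laden illustration: exact rounding and clipping rules are codec-specific, and the function and variable names are invented for clarity.

```python
def colocated_positions(xc, yc, block_w, block_h):
    """Given the upper-left corner (xc, yc) of the current block, return
    the two pixel positions reused in the reference picture (same view,
    different time): the center pixel, which identifies the co-located
    center block (col-CTR), and the pixel below and to the right of the
    block, which identifies the lower right neighbor (col-RB)."""
    center = (xc + block_w // 2, yc + block_h // 2)   # -> col-CTR
    below_right = (xc + block_w, yc + block_h)        # -> col-RB
    return center, below_right

center, below_right = colocated_positions(64, 32, 16, 16)
```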
- The present invention proposes methods of deriving the inter-view motion vector of a current block using the inter-view motion vectors used for coding the neighboring blocks described above.
- Hereinafter, such methods of deriving the inter-view motion vector of the current block will be described.
- FIG. 4 is a flowchart illustrating a first embodiment of obtaining an inter-view motion vector of a current block from an inter-view motion vector used for coding a neighboring block as an embodiment to which the present invention is applied.
- A first inter-view motion vector may be obtained from one of the left neighboring block and the lower left neighboring block (S400).
- The first inter-view motion vector may be obtained from a block coded using an inter-view motion vector by checking, in a predetermined order, whether the left neighboring block and the lower left neighboring block are blocks coded using an inter-view motion vector.
- Here, a block coded using an inter-view motion vector may mean an inter-view prediction block or a reference view temporal prediction block.
- An inter-view prediction block is a block coded using a reference block of another view obtained through an inter-view motion vector.
- A reference view temporal prediction block is a block coded using a temporal motion vector that was obtained using an inter-view motion vector. Examples of the inter-view prediction block and the reference view temporal prediction block are described in detail with reference to FIGS. 5 and 6.
- For example, when the left neighboring block is a block coded using an inter-view motion vector, the inter-view motion vector used for coding the left neighboring block may be stored as the first inter-view motion vector. If the left neighboring block is not such a block, it may be checked whether the lower left neighboring block is coded using an inter-view motion vector; when it is, the inter-view motion vector used for coding the lower left neighboring block may be stored as the first inter-view motion vector.
- A second inter-view motion vector may be obtained from one of the upper neighboring block, the upper right neighboring block, and the upper left neighboring block (S410).
- The second inter-view motion vector may be obtained from a block coded using an inter-view motion vector by checking, in a predetermined order, whether the upper neighboring block, the upper right neighboring block, and the upper left neighboring block are blocks coded using an inter-view motion vector. For example, when the upper neighboring block is a block coded using an inter-view motion vector, the inter-view motion vector used for coding the upper neighboring block may be stored as the second inter-view motion vector.
- If the upper neighboring block is not such a block, it may be checked whether the upper right neighboring block is coded using an inter-view motion vector; when it is, the inter-view motion vector used for coding the upper right neighboring block may be stored as the second inter-view motion vector. Otherwise, it may be checked whether the upper left neighboring block is coded using an inter-view motion vector; when it is, the inter-view motion vector used for coding the upper left neighboring block may be stored as the second inter-view motion vector.
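The two priority searches described above amount to taking the first available inter-view motion vector in a fixed order. The sketch below assumes a simple map from position names to either an (x, y) inter-view motion vector or None; the names and representation are illustrative, not part of the specification.

```python
def first_available_ivmv(blocks, order):
    """Return the inter-view motion vector of the first block in `order`
    that was coded using one, or None if no such block exists."""
    for name in order:
        ivmv = blocks.get(name)
        if ivmv is not None:
            return ivmv
    return None

neighbours = {"left": None, "below_left": (-4, 0),
              "above": None, "above_right": (-6, 1), "above_left": (0, 0)}
# First candidate (S400): check the left, then the lower left neighbor.
first = first_available_ivmv(neighbours, ["left", "below_left"])
# Second candidate (S410): upper, then upper right, then upper left.
second = first_available_ivmv(neighbours, ["above", "above_right", "above_left"])
```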
- Inter-view motion vector flag information may be obtained (S420).
- The inter-view motion vector flag information may include information for determining the optimal inter-view motion vector of the current block from among the first inter-view motion vector and the second inter-view motion vector.
- The optimal inter-view motion vector of the current block may be determined from among the first inter-view motion vector and the second inter-view motion vector using the inter-view motion vector flag information (S430).
- The current block may then be decoded using the determined inter-view motion vector of the current block.
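Steps S420-S430 reduce to a binary choice. The sketch below assumes flag value 0 selects the first candidate and 1 the second; the text only says the flag identifies which candidate to use, so that mapping is an assumption.

```python
def select_ivmv(first, second, flag):
    """Choose the current block's inter-view motion vector between the
    two candidates using the signalled flag (assumed: 0 -> first,
    1 -> second)."""
    return second if flag else first

chosen = select_ivmv((-4, 0), (-6, 1), flag=1)
```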
- FIG. 5 illustrates an example of a spatial neighboring block coded using an interview motion vector as an embodiment to which the present invention is applied.
- the left neighboring block 500 of the current block is an interview prediction block and is a block coded through interview prediction (or inter-view inter prediction).
- the interview prediction is inter prediction using pixel values of a reference block in a reference picture located at a different time point than the current block to obtain a prediction value of the current block.
- the reference block 510 of the left neighboring block may be specified using the interview motion vector 520 of the left neighboring block.
- the interview motion vector 520 of the left neighboring block is a vector pointing to the top left pixel of the reference block 510 for the left neighboring block in the upper left pixel of the left neighboring block 500.
- the upper neighboring block 530 of the current block is a reference view temporal inter prediction block and is a block coded through reference view temporal inter prediction.
- the reference view temporal inter prediction is temporal inter prediction, which is inter prediction using a temporal motion vector obtained using an interview motion vector.
- the corresponding block 540 of the upper neighboring block is a block located at a different time point than the current block and coded using temporal inter prediction.
- the upper neighboring block 530 may obtain the reference block 560 of the upper neighboring block by using the temporal motion vector 550 of the corresponding block.
- the vector pointing from the top-left pixel of the upper neighboring block 530 to the top-left pixel of the corresponding block 540 is the interview motion vector 570 of the upper neighboring block.
- FIG. 6 illustrates an example of a temporal neighboring block coded using an interview motion vector as an embodiment to which the present invention is applied.
- the lower right neighboring block (col-RB) 600 of the block co-located with the current block in the reference picture is an interview prediction block, that is, a block coded through interview prediction.
- the reference block 610 of the lower right neighboring block may be specified using the interview motion vector 620 of the lower right neighboring block.
- the interview motion vector 620 of the lower right neighboring block is a vector pointing from the upper-left pixel of the lower right neighboring block 600 to the upper-left pixel of its reference block 610.
- the co-located center block (col-CTR) 630 is a reference view temporal inter prediction block, that is, a block coded through reference view temporal inter prediction.
- the corresponding block 640 of the co-located center block is a block located in a different view from the current block and coded using temporal inter prediction.
- the reference block 660 of the co-located center block 630 may be obtained using the temporal motion vector 650 of the corresponding block.
- the vector pointing from the upper-left pixel of the co-located center block 630 to the upper-left pixel of the corresponding block 640 is the interview motion vector 670 of the co-located center block.
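The two temporal neighboring block positions named above (col-RB and col-CTR) can be sketched as follows. The exact sampling offsets, the function name, and the coordinate convention are illustrative assumptions for this sketch, not quoted from the specification:

```python
def temporal_neighbor_positions(x, y, width, height):
    """Return pixel positions for the two temporal neighboring blocks of a
    current block whose top-left pixel is (x, y): the lower right neighboring
    block (col-RB), sampled just outside the bottom-right corner of the
    co-located block, and the co-located center block (col-CTR), sampled at
    its center.  The offsets are assumptions for illustration."""
    col_rb = (x + width, y + height)             # col-RB sampling position
    col_ctr = (x + width // 2, y + height // 2)  # col-CTR sampling position
    return col_rb, col_ctr
```

Under these assumed offsets, a 16x16 block at (32, 48) would yield col-RB at (48, 64) and col-CTR at (40, 56).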
- FIG. 7 is a flowchart illustrating a second embodiment of obtaining an interview motion vector of a current block from an interview motion vector used for coding a neighboring block in a decoder according to an embodiment to which the present invention is applied.
- the interview motion vector candidate list may be generated by sequentially storing the interview motion vectors obtained from the neighboring blocks (S700).
- the interview motion vector candidate list is a list capable of storing the interview motion vectors used for coding neighboring blocks.
- the interview motion vector stored in the interview motion vector candidate list may be a candidate for being the interview motion vector of the current block.
- An example of the interview motion vector candidate list will be described with reference to FIG. 9.
- the priority for searching neighboring blocks may be configured in various ways. For example, temporal neighboring blocks may be searched before spatial neighboring blocks, or spatial neighboring blocks may be searched before temporal neighboring blocks.
- for example, the spatial neighboring blocks may be searched in the order of the left neighboring block, the upper neighboring block, the upper right neighboring block, the lower left neighboring block, and the upper left neighboring block.
- only a part of the spatial neighboring blocks may be searched.
- for example, only the left neighboring block and the upper neighboring block may be searched.
- Examples of priorities for searching for neighboring blocks may be variously configured and are not limited only to the above-mentioned examples.
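The search-priority variants described above can be sketched as a small candidate-list builder. The block labels, the dictionary representation of neighboring blocks, and the function name are illustrative assumptions, not taken from the specification:

```python
# Hypothetical neighbor labels; the dictionary maps each neighboring block
# to the interview motion vector it was coded with, or None when the block
# was not coded using an interview motion vector.
SPATIAL_ORDER = ["left", "upper", "upper_right", "lower_left", "upper_left"]
TEMPORAL_ORDER = ["col_rb", "col_ctr"]

def build_candidate_list(neighbor_ivmv, spatial_first=True):
    """Search neighboring blocks in the chosen priority and sequentially
    store the interview motion vectors that are found (step S700)."""
    order = (SPATIAL_ORDER + TEMPORAL_ORDER if spatial_first
             else TEMPORAL_ORDER + SPATIAL_ORDER)
    candidates = []
    for name in order:
        ivmv = neighbor_ivmv.get(name)
        if ivmv is not None:  # block coded using an interview motion vector
            candidates.append((name, ivmv))
    return candidates
```

With only the left neighboring block and col-RB coded using interview motion vectors, a spatial-first search stores the left neighbor's vector first, while a temporal-first search reverses the order, matching the configurable priority described above.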
- Index information for acquiring the interview motion vector of the current block may be obtained (S710).
- the index information may be information indicating an interview motion vector in the interview motion vector candidate list used to obtain the interview motion vector of the current block.
- Index information may be transmitted from the encoder and obtained from the bitstream. An example of generating index information in the encoder and transmitting the index information to the decoder will be described in detail later with reference to FIG. 8.
- the motion compensation unit of the decoder may obtain an optimal interview motion vector from among the interview motion vector candidates in the interview motion vector candidate list.
- the interview motion vector candidate list and the index information may be used to determine the interview motion vector of the current block (S720).
- the interview motion vector indicated by the index information in the interview motion vector candidate list may be determined as the interview motion vector of the current block.
- the interview motion vector used for coding the neighboring block indicated by the index information may be determined as the interview motion vector of the current block.
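The decoder-side determination in S720 amounts to a list lookup by index. A minimal sketch, in which the (block name, vector) pair representation of a candidate and the function name are illustrative assumptions:

```python
def interview_mv_of_current_block(candidate_list, index_info):
    """Step S720: the interview motion vector indicated by the index
    information in the interview motion vector candidate list is determined
    as the interview motion vector of the current block.  Candidates are
    assumed to be (block name, vector) pairs for illustration."""
    _name, ivmv = candidate_list[index_info]
    return ivmv
```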
- FIG. 8 is a flowchart illustrating an example of generating index information in an encoder and transmitting it to a decoder according to an embodiment to which the present invention is applied.
- the interview motion vector candidate list may be generated by sequentially storing the obtained interview motion vector (S800).
- An interview motion vector candidate list may be generated in the same manner as in operation S700.
- Index information corresponding to the interview motion vector used to obtain the interview motion vector of the current block among the interview motion vector candidates may be generated (S810).
- the encoder may determine an optimal interview motion vector for coding the current block from among the interview motion vector candidates in the interview motion vector candidate list. For example, each interview motion vector candidate in the list is used to obtain a prediction value of the current block, the prediction value is compared with the pixel values of the original image, and the interview motion vector candidate yielding the smallest difference may be determined as the interview motion vector of the current block.
- the encoder may transmit the generated index information to the decoder through the bitstream (S820).
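The encoder-side selection described above (S810) can be sketched with a sum-of-absolute-differences comparison. SAD is used here as an illustrative stand-in for the encoder's actual cost measure, and the flat pixel-list representation and function name are assumptions of this sketch:

```python
def select_optimal_index(original_pixels, candidate_predictions):
    """Steps S810-S820 on the encoder side: compare the prediction value
    produced by each interview motion vector candidate with the original
    image and pick the candidate with the smallest difference.  Pixels are
    flat lists of sample values for simplicity."""
    def sad(prediction):
        # sum of absolute differences between original and predicted samples
        return sum(abs(o - p) for o, p in zip(original_pixels, prediction))
    costs = [sad(pred) for pred in candidate_predictions]
    return costs.index(min(costs))  # index information sent in the bitstream
```

For instance, with an original block [10, 20, 30, 40] and two candidate predictions, the candidate with the smaller SAD is the one whose index is transmitted.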
- FIG. 9 illustrates an example of an interview motion vector candidate list as an embodiment to which the present invention is applied.
- the interview motion vector candidate list is a list capable of storing the interview motion vectors used for coding neighboring blocks.
- the interview motion vector stored in the interview motion vector candidate list may be a candidate for being the interview motion vector of the current block.
- the interview motion vector candidate list may store only the horizontal component of each interview motion vector, or both the horizontal and vertical components together.
- information about a neighboring block from which the interview motion vector is obtained may be stored in the interview motion vector candidate list.
- index information 0 may represent a left neighboring block
- index information 1 may represent an upper neighboring block
- index information 2 may represent an upper left neighboring block.
- the encoder may generate an interview motion vector candidate list in the manner described above and search the stored interview motion vectors for the optimal interview motion vector.
- the index information corresponding to the neighboring block coded using the optimal interview motion vector may be transmitted to the decoder.
- the decoder may generate the interview motion vector candidate list in the same manner as the encoder.
- the decoder may obtain the index information from the encoder and use it to obtain the optimal interview motion vector from among the interview motion vectors in the interview motion vector candidate list.
- the decoding/encoding device to which the present invention is applied may be included in a multimedia broadcasting transmission/reception device such as a DMB (Digital Multimedia Broadcasting) device and may be used to decode video signals and data signals.
- the multimedia broadcasting transmission / reception apparatus may include a mobile communication terminal.
- the decoding/encoding method to which the present invention is applied may be produced as a program for execution on a computer and stored in a computer-readable recording medium, and multimedia data having a data structure according to the present invention may also be stored in a computer-readable recording medium.
- the computer-readable recording medium includes all kinds of storage devices in which data readable by a computer system is stored. Examples of computer-readable recording media include ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices, and also include media implemented in the form of a carrier wave (e.g., transmission over the Internet).
- the bitstream generated by the encoding method may be stored in a computer-readable recording medium or transmitted over a wired/wireless communication network.
- the present invention can be used to code a video signal.
Claims (18)
- A video signal processing method, comprising: obtaining a first interview motion vector from any one of a left neighboring block and a lower left neighboring block of a current block; obtaining a second interview motion vector from any one of an upper block, an upper right block, and an upper left block of the current block; obtaining interview motion vector flag information; and determining any one of the first interview motion vector and the second interview motion vector as an interview motion vector of the current block based on the interview motion vector flag information.
- The method of claim 1, further comprising: determining whether the left neighboring block is a block coded using an interview motion vector; and, when the left neighboring block is not a block coded using an interview motion vector, determining whether the lower left neighboring block is a block coded using an interview motion vector.
- The method of claim 1, further comprising: determining whether the upper neighboring block is a block coded using an interview motion vector; when the upper neighboring block is not a block coded using an interview motion vector, determining whether the upper right neighboring block is a block coded using an interview motion vector; and, when the upper right neighboring block is not a block coded using an interview motion vector, determining whether the upper left neighboring block is a block coded using an interview motion vector.
- The method of claim 1, further comprising: when at least one of temporal neighboring blocks of the current block is a block coded using an interview motion vector, obtaining a third interview motion vector from the temporal neighboring block, wherein the determining of the interview motion vector of the current block obtains the interview motion vector of the current block using any one of the first interview motion vector, the second interview motion vector, and the third interview motion vector.
- The method of claim 4, wherein the temporal neighboring block includes at least one of a co-located block and a coding block including a neighboring block adjacent to the co-located block.
- A video signal processing method, comprising: searching spatial neighboring blocks of a current block for a block coded using an interview motion vector; generating an interview motion vector candidate list using the interview motion vector of the searched spatial neighboring block; obtaining index information for obtaining an interview motion vector of the current block; and obtaining the interview motion vector of the current block using the interview motion vector candidate list and the index information.
- The method of claim 6, wherein the spatial neighboring blocks include a left neighboring block, an upper neighboring block, an upper right neighboring block, a lower left neighboring block, and an upper left neighboring block, and the searching for a block coded using the interview motion vector searches the left neighboring block, the upper neighboring block, the upper right neighboring block, the lower left neighboring block, and the upper left neighboring block, in that order.
- The method of claim 6, further comprising: searching temporal neighboring blocks of the current block for a block coded using an interview motion vector; and storing the interview motion vector of the searched temporal neighboring block in the interview motion vector candidate list.
- The method of claim 8, wherein the temporal neighboring block includes at least one of a co-located block and a coding block including a neighboring block adjacent to the co-located block.
- A video decoder apparatus comprising an inter prediction unit configured to obtain a first interview motion vector from any one of a left neighboring block and a lower left neighboring block of a current block, obtain a second interview motion vector from any one of an upper block, an upper right block, and an upper left block of the current block, obtain interview motion vector flag information, and determine any one of the first interview motion vector and the second interview motion vector as an interview motion vector of the current block based on the interview motion vector flag information.
- The apparatus of claim 10, wherein the inter prediction unit determines whether the left neighboring block is a block coded using an interview motion vector and, when the left neighboring block is not a block coded using an interview motion vector, determines whether the lower left neighboring block is a block coded using an interview motion vector.
- The apparatus of claim 10, wherein the inter prediction unit determines whether the upper neighboring block is a block coded using an interview motion vector; when the upper neighboring block is not a block coded using an interview motion vector, determines whether the upper right neighboring block is a block coded using an interview motion vector; and, when the upper right neighboring block is not a block coded using an interview motion vector, determines whether the upper left neighboring block is a block coded using an interview motion vector.
- The apparatus of claim 10, wherein, when at least one of temporal neighboring blocks of the current block is a block coded using an interview motion vector, the inter prediction unit obtains a third interview motion vector from the temporal neighboring block and obtains the interview motion vector of the current block using any one of the first interview motion vector, the second interview motion vector, and the third interview motion vector.
- The apparatus of claim 13, wherein the temporal neighboring block includes at least one of a co-located block and a coding block including a neighboring block adjacent to the co-located block.
- A video decoding apparatus comprising an inter prediction unit configured to search spatial neighboring blocks of a current block for a block coded using an interview motion vector, generate an interview motion vector candidate list using the interview motion vector of the searched spatial neighboring block, obtain index information for obtaining an interview motion vector of the current block, and obtain the interview motion vector of the current block using the interview motion vector candidate list and the index information.
- The apparatus of claim 15, wherein the spatial neighboring blocks include a left neighboring block, an upper neighboring block, an upper right neighboring block, a lower left neighboring block, and an upper left neighboring block, and the inter prediction unit searches the left neighboring block, the upper neighboring block, the upper right neighboring block, the lower left neighboring block, and the upper left neighboring block, in that order, for a block coded using the interview motion vector.
- The apparatus of claim 15, wherein the inter prediction unit searches temporal neighboring blocks of the current block for a block coded using an interview motion vector and stores the interview motion vector of the searched temporal neighboring block in the interview motion vector candidate list.
- The apparatus of claim 17, wherein the temporal neighboring block includes at least one of a co-located block and a coding block including a neighboring block adjacent to the co-located block.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP13854106.5A EP2919464A4 (en) | 2012-11-07 | 2013-11-07 | DEVICE AND METHOD FOR PROCESSING VIDEO SIGNALS |
JP2015541685A JP2015534416A (ja) | 2012-11-07 | 2013-11-07 | Video signal processing method and apparatus |
US14/441,038 US20150304676A1 (en) | 2012-11-07 | 2013-11-07 | Method and apparatus for processing video signals |
KR1020157014424A KR101753592B1 (ko) | 2012-11-07 | 2013-11-07 | Video signal processing method and apparatus |
CN201380064767.4A CN104871541A (zh) | 2012-11-07 | 2013-11-07 | Method and device for processing video signals |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261723332P | 2012-11-07 | 2012-11-07 | |
US61/723,332 | 2012-11-07 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014073873A1 true WO2014073873A1 (ko) | 2014-05-15 |
Family
ID=50684906
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2013/010056 WO2014073873A1 (ko) | Video signal processing method and apparatus | 2012-11-07 | 2013-11-07 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20150304676A1 (ko) |
EP (1) | EP2919464A4 (ko) |
JP (1) | JP2015534416A (ko) |
KR (1) | KR101753592B1 (ko) |
CN (1) | CN104871541A (ko) |
WO (1) | WO2014073873A1 (ko) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10123036B2 (en) * | 2014-06-27 | 2018-11-06 | Microsoft Technology Licensing, Llc | Motion vector selection for video encoding |
CN105468360B (zh) * | 2015-11-17 | 2018-09-21 | Nanjing University of Posts and Telecommunications | Data interception method for a third-party audio player |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20080007069A (ko) * | 2006-07-14 | 2008-01-17 | LG Electronics Inc. | Video signal decoding method and apparatus |
KR20100030625A (ko) * | 2007-06-28 | 2010-03-18 | Thomson Licensing | Single loop decoding of multi-view coded video |
US20110216833A1 (en) * | 2008-10-17 | 2011-09-08 | Nokia Corporation | Sharing of motion vector in 3d video coding |
KR20110129805A (ko) * | 2010-05-26 | 2011-12-02 | Industry-University Cooperation Foundation, Hanyang University | Method of predicting a motion vector in a video codec allowing multiple references, and motion vector encoding/decoding apparatus using the same |
JP2012520025A (ja) * | 2009-03-05 | 2012-08-30 | Qualcomm Incorporated | System and method for processing motion vectors of video data |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1258925C (zh) * | 2003-06-27 | 2006-06-07 | Institute of Computing Technology, Chinese Academy of Sciences | Multi-view video coding and decoding prediction compensation method and apparatus |
US7978769B2 (en) * | 2003-06-30 | 2011-07-12 | Ntt Docomo, Inc. | Method and apparatus for coding motion information |
US7430335B2 (en) * | 2003-08-13 | 2008-09-30 | Apple Inc | Pre-processing method and system for data reduction of video sequences and bit rate reduction of compressed video sequences using spatial filtering |
KR101370919B1 (ko) * | 2006-07-12 | 2014-03-11 | LG Electronics Inc. | Signal processing method and apparatus |
US8494046B2 (en) * | 2007-03-23 | 2013-07-23 | Lg Electronics Inc. | Method and an apparatus for decoding/encoding a video signal by performing illumination compensation |
EP2149262A4 (en) * | 2007-04-25 | 2010-09-01 | Lg Electronics Inc | METHOD AND DEVICE FOR DECODING / CODING A VIDEO SIGNAL |
EP2174506A1 (en) * | 2007-08-06 | 2010-04-14 | Thomson Licensing | Methods and apparatus for motion skip mode with multiple inter-view reference pictures |
US20130188719A1 (en) * | 2012-01-20 | 2013-07-25 | Qualcomm Incorporated | Motion prediction in svc using motion vector for intra-coded block |
- 2013-11-07 EP EP13854106.5A patent/EP2919464A4/en not_active Withdrawn
- 2013-11-07 KR KR1020157014424A patent/KR101753592B1/ko active IP Right Grant
- 2013-11-07 WO PCT/KR2013/010056 patent/WO2014073873A1/ko active Application Filing
- 2013-11-07 US US14/441,038 patent/US20150304676A1/en not_active Abandoned
- 2013-11-07 CN CN201380064767.4A patent/CN104871541A/zh active Pending
- 2013-11-07 JP JP2015541685A patent/JP2015534416A/ja active Pending
Non-Patent Citations (1)
Title |
---|
See also references of EP2919464A4 * |
Also Published As
Publication number | Publication date |
---|---|
US20150304676A1 (en) | 2015-10-22 |
KR101753592B1 (ko) | 2017-07-19 |
KR20150087257A (ko) | 2015-07-29 |
CN104871541A (zh) | 2015-08-26 |
JP2015534416A (ja) | 2015-11-26 |
EP2919464A1 (en) | 2015-09-16 |
EP2919464A4 (en) | 2016-05-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101619448B1 (ko) | Image signal processing method and apparatus | |
WO2010087589A2 (ko) | Video signal processing method and apparatus using boundary intra coding | |
WO2014107083A1 (ko) | Video signal processing method and apparatus | |
WO2014168443A1 (ko) | Video signal processing method and apparatus | |
WO2014010935A1 (ko) | Video signal processing method and apparatus | |
WO2014107029A1 (ko) | Video signal processing method and apparatus | |
WO2013176485A1 (ko) | Video signal processing method and apparatus | |
WO2014010918A1 (ko) | Video signal processing method and apparatus | |
WO2013191436A1 (ko) | Video signal processing method and apparatus | |
WO2015009098A1 (ko) | Video signal processing method and apparatus | |
WO2014054897A1 (ko) | Video signal processing method and apparatus | |
WO2014054896A1 (ko) | Video signal processing method and apparatus | |
WO2015009091A1 (ko) | Video signal processing method and apparatus | |
WO2014109547A1 (ko) | Video signal processing method and apparatus | |
WO2014109563A1 (ko) | Video signal processing method and apparatus | |
WO2014073873A1 (ko) | Video signal processing method and apparatus | |
WO2014077573A2 (ko) | Video signal processing method and apparatus | |
WO2014042459A1 (ko) | Video signal processing method and apparatus | |
WO2014168445A1 (ko) | Video signal processing method and apparatus | |
WO2015009092A1 (ko) | Video signal processing method and apparatus | |
WO2014168411A1 (ko) | Video signal processing method and apparatus | |
WO2014168444A1 (ko) | Video signal processing method and apparatus | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13854106 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14441038 Country of ref document: US |
|
ENP | Entry into the national phase |
Ref document number: 2015541685 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2013854106 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 20157014424 Country of ref document: KR Kind code of ref document: A |