WO2012131752A1 - Device and method for updating depth information, and device for generating stereoscopic video - Google Patents


Info

Publication number
WO2012131752A1
Authority
WO
WIPO (PCT)
Prior art keywords
depth
depth information
stereoscopic
value
stereoscopic video
Prior art date
Application number
PCT/JP2011/001795
Other languages
English (en)
Japanese (ja)
Inventor
山本 純也
仁尾 寛
晴子 寺井
大輔 加瀬
Original Assignee
Panasonic Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic Corporation
Priority to PCT/JP2011/001795 priority Critical patent/WO2012131752A1/fr
Publication of WO2012131752A1 publication Critical patent/WO2012131752A1/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/271Image signal generators wherein the generated image signals comprise depth maps or disparity maps

Definitions

  • the present invention relates to a depth information update device, a stereoscopic video generation device, and a depth information update method for updating depth information for generating a stereoscopic video.
  • conventionally, video display devices using liquid crystal panels or the like have been used to display two-dimensional video.
  • in recent years, development of three-dimensional video display devices has progressed, in which a viewer perceives three-dimensional video by alternately viewing two two-dimensional videos having parallax through active shutter glasses.
  • Patent Document 1 discloses a technique that adjusts the depth values of a depth image for generating a stereoscopic video, and generates the stereoscopic video using the adjusted depth values.
  • in the prior art, the depth value is set to a value between the maximum projection amount and the maximum recession amount in order to generate a stereoscopic video with less discomfort.
  • however, the setting of the depth value gives no particular consideration to improving the fine stereoscopic effect. Therefore, in the prior art, the fine three-dimensional effect across a plurality of adjacent pixels is not improved.
  • the present invention has been made to solve the above-described problems, and an object thereof is to provide a depth information update device and the like capable of generating depth information for producing a stereoscopic video with improved fine stereoscopic expression.
  • a depth information update device according to one aspect of the present invention is a depth information update device that performs processing using depth values corresponding to the respective pixels constituting a stereoscopic video, and includes a depth information acquisition unit that acquires depth information indicating a plurality of depth values for generating the stereoscopic video, and a depth information update unit that updates the depth information by performing an emphasizing process, which is a process of emphasizing a component of a specific band, on a depth image composed of the plurality of depth values indicated by the depth information.
  • in other words, the depth information update device includes a depth information acquisition unit that acquires depth information indicating a plurality of depth values, and a depth information update unit that updates the depth information by performing an emphasizing process that emphasizes a component of a specific band on the depth image composed of the plurality of depth values indicated by the depth information.
  • the depth information update unit performs the emphasizing process so as to increase, in at least a part of the depth image, the absolute value of the difference between the depth values of two adjacent pixels when that absolute value is less than a threshold.
  • the depth information update unit includes a calculation unit that performs, for all pixels constituting the depth image, a process of calculating a difference depth value, which is the absolute value of the difference between the depth value of the processing target pixel and the depth value of a pixel adjacent to the processing target pixel, and a processing execution unit that performs the enhancement processing on each processing target pixel whose calculated difference depth value is less than a threshold.
  • the depth information update unit performs the enhancement process that is a filter process.
  • the depth information update unit further updates the updated depth information by changing the plurality of depth values indicated by the updated depth information such that their absolute values decrease at substantially the same rate.
  • the depth information update unit updates the updated depth information by multiplying the absolute values of the plurality of depth values indicated by the updated depth information by a coefficient smaller than one.
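The scaling in the bullet above can be sketched as follows, assuming depth values represented as signed offsets from the parallax zero plane; the function name and the default coefficient 0.8 are illustrative assumptions, not taken from the disclosure.

```python
def compress_depth(depth_values, coefficient=0.8):
    """Shrink the absolute value of every depth value at the same rate
    by multiplying by a coefficient smaller than one.

    Depth values are assumed here to be signed offsets from the
    zero-parallax plane (negative = in front of the screen,
    positive = behind it); coefficient=0.8 is a hypothetical choice.
    """
    if not 0 < coefficient < 1:
        raise ValueError("coefficient must be smaller than one")
    return [coefficient * v for v in depth_values]
```

Because every value is multiplied by the same coefficient, all absolute depths decrease at the same rate, which matches the preceding bullet.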
  • a stereoscopic video generation device includes the depth information update device and a stereoscopic video generation unit that generates a new stereoscopic video using the plurality of depth values indicated by the updated depth information.
  • the depth information update method is a method performed by a depth information update apparatus that performs processing using depth values corresponding to the respective pixels constituting a stereoscopic video.
  • the depth information updating method includes a depth information acquisition step of acquiring depth information indicating a plurality of depth values for generating a stereoscopic video, and a depth image including the plurality of depth values indicated by the depth information. And a depth information updating step of updating the depth information by performing an emphasizing process which is a process of emphasizing a component of a specific band.
  • the present invention may realize all or part of the plurality of components constituting such a depth information update apparatus as a system LSI (Large Scale Integration: large scale integrated circuit).
  • the present invention may also be implemented as a program that causes a computer to execute the steps included in the depth information update method.
  • the present invention may be realized as a computer readable recording medium storing such a program.
  • the program may be distributed via a transmission medium such as the Internet.
  • FIG. 1 is a diagram showing an example of the configuration of a stereoscopic video viewing system according to the first embodiment.
  • FIG. 2 is a diagram for explaining a stereoscopic video.
  • FIG. 3 is a block diagram showing an example of the configuration of a stereoscopic video reproduction apparatus.
  • FIG. 4 is a diagram showing a depth table as depth information.
  • FIG. 5 is a diagram showing a relationship between a stereoscopic target object in a stereoscopic video and a depth value.
  • FIG. 6 is a view showing an example of a stereoscopic video showing a stereoscopic target object.
  • FIG. 7 is a flowchart of stereoscopic video generation processing.
  • FIG. 8 is a diagram for explaining changes in depth value.
  • FIG. 9 is a flowchart of stereoscopic video generation processing A.
  • FIG. 10 is a diagram showing a relationship between a stereoscopic target object in a stereoscopic video and a depth value.
  • FIG. 11 is a block diagram showing an example of the configuration of the depth information update unit.
  • FIG. 12 is a flowchart of stereoscopic video generation processing B.
  • FIG. 13 is a diagram for explaining a processing target pixel and a pixel close to the processing target pixel.
  • FIG. 1 is a diagram showing an example of the configuration of a stereoscopic video viewing system 1000 according to the first embodiment.
  • the X, Y, and Z directions are orthogonal to one another; the same holds for the X, Y, Z directions shown in each of the following figures.
  • a stereoscopic video viewing system 1000 includes a stereoscopic video reproduction device 100, a stereoscopic video display device 200, and active shutter glasses 300.
  • the stereoscopic video display device 200 is, for example, a plasma display, a liquid crystal display, an organic EL display, or the like. Note that the stereoscopic video display device 200 is not limited to the above display, and may be a display of another type. For example, the stereoscopic video display device 200 may be a volume display type display in which a plurality of liquid crystal panels are arranged in the depth direction.
  • the video and the stereoscopic video may be either a moving image or a still image.
  • the stereoscopic video display device 200 includes a display surface 210 for displaying a video.
  • the display surface 210 is assumed to be parallel to the XY plane. As an example, it is assumed that the display surface 210 can display an image composed of a plurality of pixels arranged in m (natural number) rows and n (natural number) columns.
  • m and n are assumed to be 1080 and 1920, respectively. That is, it is assumed that the display surface 210 can display an image having a size of 1920 × 1080 pixels (hereinafter, also referred to as full HD size).
  • an image of a size that can be displayed on the display surface 210 is also referred to as a displayable image.
  • the size of the displayable video is not limited to the full HD size.
  • the size of the displayable image may be, for example, 1280 × 720 pixels.
  • the stereoscopic video display device 200 is, for example, a device that displays a stereoscopic video by a frame sequential method.
  • the size of the stereoscopic video displayed by the display surface 210 is equal to the size of the displayable video.
  • the display method of the stereoscopic video of the stereoscopic video display device 200 is not limited to the frame sequential method.
  • the stereoscopic video display method of the stereoscopic video display device 200 may be, for example, a lenticular method.
  • the size of the stereoscopic video displayed by the display surface 210 is smaller than the size of the displayable video.
  • the stereoscopic video reproduction device 100 is connected to the stereoscopic video display device 200 by the signal cable 10.
  • the signal cable 10 is a high-definition multimedia interface (HDMI) cable.
  • the signal cable 10 is not limited to the HDMI cable, and may be, for example, a cable for D terminal, a coaxial cable, or the like. Further, the communication between the stereoscopic video reproduction device 100 and the stereoscopic video display device 200 is not limited to communication using a wire, and may be wireless communication.
  • the stereoscopic video reproduction device 100 transmits a signal indicating the stereoscopic video 30 as a three-dimensional video to the stereoscopic video display device 200 via the signal cable 10.
  • the stereoscopic video 30 is configured of a video 31 for the left eye and a video 32 for the right eye.
  • the left-eye video 31 is a video to be shown to the left eye (hereinafter also referred to as the first viewpoint) of the viewer (user).
  • the right-eye video 32 is a video to be shown to the right eye (hereinafter also referred to as the second viewpoint) of the viewer.
  • the left-eye video 31 and the right-eye video 32 are two-dimensional video images having parallax.
  • the stereoscopic video display device 200 alternately displays the left-eye video 31 and the right-eye video 32 on the display surface 210.
  • when the left-eye video 31 is displayed on the display surface 210, the active shutter glasses 300 shield the right eye of the viewer. Further, when the right-eye video 32 is displayed on the display surface 210, the active shutter glasses 300 shield the left eye of the viewer.
  • a viewer who wears the active shutter glasses 300 having such a configuration can view the left-eye video 31 with the left eye, and can view the right-eye video 32 with the right eye. Thereby, the viewer can feel the stereoscopic video 30 stereoscopically. That is, the stereoscopic video 30 is expressed using the first viewpoint video (left-eye video 31) of the first viewpoint and the second viewpoint video (right-eye video 32) of the second viewpoint.
  • the display method of the stereoscopic video is not limited to the frame sequential method using the active shutter glasses 300.
  • the display method of the stereoscopic video may be a method using polarized glasses.
  • a method of displaying a stereoscopic image may be a method using a parallax barrier, a lenticular sheet or the like.
  • the number of viewpoints required for the stereoscopic video display device 200 to display the stereoscopic video 30 is not limited to two, and may be three or more.
  • the stereoscopic image 30 is an image generated using a plurality of depth values.
  • the depth value corresponds to the amount of parallax between the left-eye video and the right-eye video.
  • FIG. 2 is a diagram for explaining a stereoscopic video.
  • a stereoscopic video is composed of a plurality of pixels.
  • each pixel forming a stereoscopic video is referred to as a stereoscopic display pixel.
  • a plurality of depth values are respectively associated with a plurality of stereoscopic display pixels constituting a stereoscopic video.
  • the display surface 210 is a parallax zero surface.
  • the parallax zero plane is a plane on which pixels at the same position in the left-eye video and the right-eye video are displayed with zero parallax between them.
  • the depth value indicates a value larger than a predetermined parallax zero reference value when the stereoscopic display pixel in the stereoscopic video is arranged (displayed) behind the display surface 210. That is, when the recession amount of the stereoscopic display pixel from the display surface 210 is a positive value, the depth value indicates a value larger than the parallax zero reference value.
  • the parallax zero reference value is a value for arranging a stereoscopic display pixel at the position of the parallax zero surface (display surface 210).
  • the depth value indicates a value less than the parallax zero reference value when the stereoscopic display pixel in the stereoscopic video is arranged (displayed) in front of the display surface 210. That is, when the projection amount of the stereoscopic display pixel from the display surface 210 is a positive value, the depth value indicates a value less than the parallax zero reference value.
  • each depth value thus indicates the projection amount or recession amount, from the parallax zero surface (display surface 210), of the stereoscopic display pixel corresponding to that depth value.
  • in the following, an example will be described in which the depth value is represented in the range of 0 to 255, increasing with distance from the position of the viewer 40, and the parallax zero reference value is 123.
  • the depth value is not limited to the range of 0 to 255, and may be represented, for example, in the range of 0 to 511. Also, if the disparity zero reference value is 0, the depth value may be represented by a positive value and a negative value.
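Under the 0-to-255 representation with parallax zero reference value 123 described above, a small helper (hypothetical name, not from the disclosure) makes the sign convention explicit: values below 123 correspond to pixels in front of the screen, values above 123 to pixels behind it.

```python
# Zero-parallax reference value for the 0-255 representation in the text.
ZERO_PARALLAX_REF = 123

def signed_depth(depth_value: int) -> int:
    """Offset of an 8-bit depth value from the zero-parallax plane.

    Negative result: pixel appears in front of the display surface
    (projection); positive result: pixel appears behind it (recession);
    zero: pixel lies on the display surface itself.
    """
    if not 0 <= depth_value <= 255:
        raise ValueError("depth value must be in [0, 255]")
    return depth_value - ZERO_PARALLAX_REF
```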
  • the position where the depth value is the parallax zero reference value (123) corresponds to the position of the display surface 210 (parallax zero surface) in the Z direction.
  • the stereoscopic display pixels are displayed at, for example, a position P11 in the left-eye video and a position P12 in the right-eye video.
  • the viewer 40 looks at the stereoscopic display pixels arranged at the position P11 with the left eye 41 by the operation of the active shutter glasses 300.
  • the viewer 40 looks at the pixel for stereoscopic display disposed at the position P12 with the right eye 42 by the operation of the active shutter glasses 300.
  • in this case, the viewer 40 perceives the stereoscopic display pixel as if it were arranged at the position P10, in front of the display surface 210 by the distance d1, in the stereoscopic video.
  • the depth value of the stereoscopic display pixel indicates a value less than the parallax zero reference value (123).
  • the horizontal distance between the position P12 and the position P11 corresponds to the amount of parallax.
  • the parallax amount also corresponds to the projection amount d1 of the stereoscopic display pixel in the depth value.
  • the depth value is the parallax zero reference value
  • the viewer 40 looks as if stereoscopic display pixels are arranged on the display surface 210 (parallax zero surface).
  • conversion between the parallax amount and the depth value of the stereoscopic display pixel may be performed by a predetermined conversion formula.
  • for example, let the parallax amount be d, the depth value of the stereoscopic display pixel be L, the depth value of the parallax zero plane be L b, and the distance between the left eye 41 and the right eye 42 of the viewer 40 be e. At this time, the depth value L is proportional to the difference in distance along the Z axis between the viewer 40 and P10.
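The text names the quantities d, L, L b, and e but leaves the conversion formula itself unspecified. As an illustrative sketch only, a common similar-triangles relation can be written by additionally assuming a viewing distance from the screen and a protrusion z of the pixel in front of the zero-parallax plane (both assumptions, not from the disclosure):

```python
def parallax_amount(z: float, e: float = 0.065, view_distance: float = 3.0) -> float:
    """Screen parallax d for a point protruding by z (metres) in front
    of the zero-parallax plane (the display surface).

    Derived from similar triangles between the eye baseline e and the
    screen: d / e = z / (view_distance - z). A negative z (point behind
    the screen) yields a negative d, i.e. parallax in the opposite
    direction. The parameter values are illustrative assumptions.
    """
    return e * z / (view_distance - z)
```

For a pixel on the zero-parallax plane (z = 0) the parallax is zero, matching the display-surface case described in the text.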
  • the stereoscopic display pixels are displayed at, for example, a position P21 in the left-eye video and a position P22 in the right-eye video.
  • the viewer 40 looks at the stereoscopic display pixels arranged at the position P21 with the left eye 41 by the operation of the active shutter glasses 300. Further, in this case, the viewer 40 looks at the stereoscopic display pixel arranged at the position P22 with the right eye 42 by the operation of the active shutter glasses 300.
  • in this case, the viewer 40 perceives the stereoscopic display pixel as if it were arranged at the position P20, behind the display surface 210 by the distance d2, in the stereoscopic video.
  • the depth value of the stereoscopic display pixel indicates a value larger than the parallax zero reference value (123).
  • likewise, conversion between the parallax amount and the depth value of the stereoscopic display pixel may be performed by a predetermined conversion formula.
  • for example, with the parallax amount d, the depth value L of the stereoscopic display pixel, the depth value L b of the parallax zero plane, and the distance e between the left eye 41 and the right eye 42 of the viewer 40, the depth value L is proportional to the difference in distance along the Z axis between the viewer 40 and P20.
  • FIG. 3 is a block diagram showing an example of the configuration of the stereoscopic video reproduction device 100.
  • the stereoscopic video reproduction device 100 includes a stereoscopic video generation device 110.
  • the stereoscopic video reproduction device 100 also includes a processing unit and the like (not shown).
  • the stereoscopic video generation device 110 is a device for generating a stereoscopic video. Although details will be described later, the stereoscopic video generation device 110 acquires depth information of the stereoscopic video to be processed and updates it. Then, using the updated depth information, it generates a new stereoscopic video.
  • the depth information is information indicating a plurality of depth values respectively corresponding to a plurality of stereoscopic display pixels constituting a stereoscopic video to be processed.
  • the stereoscopic video generation device 110 receives the stereoscopic video signal SG1.
  • the stereoscopic video signal SG1 is a signal indicating stereoscopic video data and depth information corresponding to the stereoscopic video.
  • the stereoscopic video data is data of a left-eye video and a right-eye video.
  • the stereoscopic video signal SG1 may be, for example, a signal indicating data of a two-dimensional video and depth information for converting the two-dimensional video into a stereoscopic video.
  • the stereoscopic video signal SG1 is, for example, a signal indicating data read from the recording medium.
  • the recording medium is, for example, a BD (Blu-ray Disc (registered trademark)).
  • the above-described processing unit included in the stereoscopic video reproduction device 100 performs processing of reading stereoscopic video data and depth information from the recording medium.
  • the stereoscopic video signal SG1 is not limited to the signals described above.
  • the stereoscopic video signal SG1 may be, for example, a signal acquired from a broadcast wave.
  • the above-described processing unit included in the stereoscopic video reproduction device 100 has, for example, the functions of a tuner and a demodulation circuit.
  • depth information will be described.
  • the depth information is expressed, for example, as the following depth table T100.
  • FIG. 4 is a diagram showing a depth table T100 as depth information.
  • FIG. 4 shows the depth table T100 when the stereoscopic video is one frame of a moving image or a still image.
  • the depth table T100 indicates a plurality of depth values respectively corresponding to a plurality of stereoscopic display pixels constituting a stereoscopic video.
  • the plurality of depth values indicated by the depth table T100 are values for generating a stereoscopic video.
  • the plurality of depth values indicated by the depth table T100 are arranged in a matrix like the plurality of stereoscopic display pixels constituting the stereoscopic video.
  • each of the plurality of depth values indicated by the depth table T100 is regarded as a pixel value.
  • the depth table T100 as depth information indicates an image composed of a plurality of depth values (hereinafter, also referred to as a depth image).
  • the depth image corresponds to a depth map (disparity map).
  • the value of each pixel constituting the depth image is a depth value.
  • Cmn indicates the depth value of a pixel corresponding to m rows and n columns in a stereoscopic video.
  • C12 indicates a depth value of a pixel corresponding to one row and two columns in a stereoscopic video.
  • each depth value indicated by the depth table T100 is not limited to the depth value corresponding to one pixel.
  • Each depth value indicated by the depth table T100 may be, for example, a depth value corresponding to four pixels.
  • the number of depth values indicated by the depth table T100 is one fourth of the number of stereoscopic display pixels constituting the stereoscopic video.
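The two table layouts described above (one depth value per stereoscopic display pixel, or one per four pixels) can be sketched with a NumPy array; the sizes mirror the full HD example in the text, and the variable names are hypothetical:

```python
import numpy as np

# Full-resolution depth table: one depth value C_mn per stereoscopic
# display pixel, m = 1080 rows and n = 1920 columns, all initialised
# to the parallax zero reference value (123).
depth_table = np.full((1080, 1920), 123, dtype=np.uint8)

# C12, the depth value for row 1, column 2 (1-based indexing in the text):
c12 = depth_table[0, 1]

# Quarter-count variant: one depth value per 2x2 block of pixels, so the
# table holds one fourth as many entries as the video has pixels.
quarter_table = depth_table[::2, ::2]
```

Viewing each entry as a pixel value turns this array directly into the gray-scale depth image (depth map) described above.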
  • FIG. 5 is a diagram showing a relationship between a stereoscopic target object in a stereoscopic video and a depth value.
  • a stereoscopic target object is an object to be displayed stereoscopically in a stereoscopic video.
  • FIG. 5A is a perspective view showing the positional relationship between the stereoscopic target object 50 in the stereoscopic video and the display surface 210.
  • FIG. 5B is a graph for explaining the depth value of the stereoscopic target object 50.
  • the vertical axis indicates the depth value (Z direction).
  • the horizontal axis indicates the X direction (horizontal direction of stereoscopic video).
  • FIG. 6 is a view showing an example of a stereoscopic video G100 showing the stereoscopic target object 50, drawn parallel to the XY plane. Note that in FIG. 6, in order to explain the depth value, a depth line L10 that does not actually appear in the stereoscopic video G100 is drawn.
  • the depth line L10 is a line indicating n depth values respectively corresponding to n stereoscopic display pixels arranged in one line in the stereoscopic video G100.
  • the stereoscopic target object 50 is disposed, for example, on the front side of the display surface 210.
  • the depth line L10 is shown on the stereoscopic target object 50 and the display surface 210.
  • FIG. 5B is a graph showing the depth line L10.
  • the depth line L10 in the region R10 corresponds to the portion of the depth line L10 shown on the stereoscopic target object 50. From the depth line L10 in the region R10 of FIG. 5B, it can be seen that the surface of the stereoscopic target object 50 has unevenness.
  • when the depth table T100 is a table corresponding to the stereoscopic video G100, the depth image indicated by the depth table T100 represents the stereoscopic video G100 expressed in gray scale.
  • the stereoscopic video generation device 110 includes a depth information updating device 101 and a stereoscopic video generation unit 113.
  • the depth information updating apparatus 101 performs processing using depth values corresponding to respective pixels constituting a stereoscopic video.
  • the depth information update device 101 includes a depth information acquisition unit 111 and a depth information update unit 112.
  • the depth information acquisition unit 111 and the stereoscopic video generation unit 113 receive the stereoscopic video signal SG1.
  • the depth information acquisition unit 111 acquires depth information indicated by the received stereoscopic video signal SG1.
  • the depth information is, for example, the depth table T100 of FIG.
  • the depth information acquisition unit 111 transmits the acquired depth information to the depth information update unit 112.
  • alternatively, the depth information acquisition unit 111 may acquire depth information by performing stereo matching, which is a known technique.
  • alternatively, values obtained by applying a specific conversion formula to the depth information may be acquired as the depth information.
  • the depth information update unit 112 updates the received depth information, the details of which will be described later. Then, the depth information update unit 112 transmits the updated depth information to the stereoscopic video generation unit 113.
  • updated depth information transmitted by the depth information update unit 112 to the stereoscopic video generation unit 113 is referred to as updated depth information.
  • the stereoscopic video generation unit 113 generates a new stereoscopic video using stereoscopic video data indicated by the received stereoscopic video signal SG1 and the received updated depth information.
  • the stereoscopic video generation unit 113 transmits, to the stereoscopic video display device 200, the stereoscopic video signal SG2 indicating the generated new stereoscopic video (video for left eye and video for right eye).
  • the stereoscopic video display device 200 alternately displays the video for the left eye and the video for the right eye indicated by the received stereoscopic video signal SG2 on the display surface 210 for each frame.
  • FIG. 7 is a flowchart of stereoscopic video generation processing.
  • in step S110, the depth information acquisition unit 111 acquires the depth information indicated by the stereoscopic video signal SG1.
  • the depth information is, for example, the depth table T100 of FIG.
  • the depth information acquisition unit 111 transmits the acquired depth information to the depth information update unit 112.
  • in step S120, depth information update processing is performed: the depth information update unit 112 updates the depth information by performing a process of emphasizing a component of a specific band (hereinafter, emphasis processing) on the depth image composed of the plurality of depth values indicated by the depth information (depth table T100).
  • the specific band is a specific frequency band.
  • a stereoscopic video to be processed is referred to as a processing stereoscopic video.
  • the processing target stereoscopic video is a still image.
  • here, the stereoscopic video G100 is described as an example of the processing target stereoscopic video.
  • the depth image processed in the depth information update process corresponds to the stereoscopic video G100.
  • in step S125, depth enhancement processing is performed: the depth information update unit 112 performs filter processing that emphasizes (increases) changes in depth value in a specific band on the depth image composed of the plurality of depth values indicated by the depth table T100 as depth information. That is, the depth information update unit 112 performs an emphasizing process, which is a process of emphasizing a component of a specific band, on that depth image.
  • the enhancement process is, for example, a process of increasing, in at least part of the depth image, the absolute value of the difference between the depth values of two adjacent pixels when that absolute value is less than a threshold.
  • the threshold is a threshold used in step S122 described later.
  • the threshold is, for example, 20.
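As a minimal one-dimensional sketch of this rule: differences between adjacent depth values are amplified only when their absolute value is below the threshold, while larger steps (typically object boundaries) are carried over unchanged. The gain value 1.5 and the function name are illustrative assumptions, not taken from the disclosure.

```python
import numpy as np

def enhance_row(row, threshold=20, gain=1.5):
    """Amplify differences between adjacent depth values in one row of
    a depth image, but only when the absolute difference is below the
    threshold (fine surface detail). Larger steps are kept as-is so
    that object boundaries are not exaggerated. gain is hypothetical."""
    row = np.asarray(row, dtype=np.float64)
    out = row.copy()
    for i in range(1, len(row)):
        diff = row[i] - row[i - 1]
        if abs(diff) < threshold:
            out[i] = out[i - 1] + gain * diff   # widen the small step
        else:
            out[i] = out[i - 1] + diff          # keep the large step
    return np.clip(out, 0, 255)                 # stay in the 0-255 range
```

With threshold 20, the small 10-level steps inside an object grow while a 90-level jump at its edge is preserved.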
  • emphasizing high-frequency changes in the depth value can be considered equivalent to sharpening in image processing. That is, emphasizing a change in depth value in a specific band can be performed by, for example, an FIR filter, as with filters in image processing.
  • the depth information update unit 112 performs filter processing using a two-dimensional FIR (Finite Impulse Response) filter on the depth image indicated by the depth table T100.
  • the filter process of step S125 is performed with one pixel among the plurality of pixels constituting the depth image as the pixel of interest. Each time the process of step S125 is performed, a different pixel becomes the pixel of interest.
  • the depth table T100 as depth information is updated.
  • the depth table T100 is updated each time the process of step S125 is performed.
  • the FIR filter used for the above-mentioned filter processing is, for example, a sharpening filter using an unsharp mask.
  • the filter used for the filtering process is not limited to the sharpening filter, and may be another filter as long as it is a filter that emphasizes a change in depth value in a specific band.
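One concrete way to realize such a sharpening filter is an unsharp mask built from a 3×3 box blur, sketched below in NumPy; the kernel choice and the amount parameter are illustrative assumptions rather than the filter actually used by the device.

```python
import numpy as np

def unsharp_mask(depth, amount=1.0):
    """Sharpen a depth image with an unsharp mask: compute a 3x3 box
    blur (the low-pass 'unsharp' version), then add the high-frequency
    residue (depth - blur) back, scaled by amount. Edge pixels reuse
    their nearest neighbours via edge padding."""
    d = depth.astype(np.float64)
    padded = np.pad(d, 1, mode="edge")
    # Sum the nine shifted 3x3 windows and average them: a box blur.
    blur = sum(padded[i:i + d.shape[0], j:j + d.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return np.clip(d + amount * (d - blur), 0, 255)
```

A flat region passes through unchanged, while a narrow ridge in the depth map is accentuated, which is the "emphasis of a specific band" described above.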
  • when the process of step S125 has been performed on all the pixels constituting the depth image, the depth information update unit 112 transmits the updated depth table T100 to the stereoscopic video generation unit 113 as updated depth information. The depth information update process in step S120 then ends, and the process proceeds to step S130.
  • through the depth information update process of step S120, the amplitude of the depth line L10 in the region R10 shown in FIG. 8A increases, as shown in FIG. 8B. That is, the depth (stereoscopic effect) of the stereoscopic target object 50 is emphasized by the process of step S125.
  • in step S130, the stereoscopic video generation unit 113 generates a new stereoscopic video using the stereoscopic video data indicated by the received stereoscopic video signal SG1 and the received updated depth information (the updated depth table T100).
  • the stereoscopic video data indicated by the stereoscopic video signal SG1 is data of a left-eye video and a right-eye video.
  • the stereoscopic video generation unit 113 calculates the parallax amount of the left-eye video and the right-eye video using the plurality of depth values indicated by the updated depth information. Then, using the calculated amount of parallax, the stereoscopic video generation unit 113 updates one or both of the left-eye video and the right-eye video indicated by the stereoscopic video signal SG1. As a result, a new three-dimensional video composed of a new left-eye video and a right-eye video is generated.
  • the process of generating a stereoscopic video using depth information is, for example, a process according to the MVC (Multiview Video Coding) standard. Detailed description of the process of calculating the parallax amount, the process of updating the left-eye video and the right-eye video, and so on is therefore omitted.
  • the stereoscopic video generation unit 113 determines a plurality of depths indicated by the depth information. The amount of parallax is calculated using the value. The three-dimensional video generation unit 113 generates a new left-eye video and a right-eye video from the two-dimensional video using the calculated parallax amount. As a result, a new three-dimensional video composed of a new left-eye video and a right-eye video is generated.
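The mapping from depth value to parallax amount is not specified here (the text defers to MVC-style processing), but a toy sketch of the idea — a linear depth-to-shift mapping and a naive horizontal pixel shift, with occlusion handling and hole filling omitted — might look like this. The function names, the linear mapping, and `max_shift` are all assumptions:

```python
# Toy sketch: derive a per-pixel parallax (horizontal pixel shift) from an
# 8-bit depth value, then synthesize a right-eye row by shifting a 2D row.
# Real MVC-style rendering also handles occlusions and holes, omitted here.

def parallax_from_depth(d, max_shift=8, zero_plane=128):
    # depths in front of the zero-parallax plane shift one way,
    # depths behind it the other way; 128 sits on the display surface
    return round((d - zero_plane) * max_shift / 128)

def synthesize_right_row(row, depth_row, max_shift=8):
    out = row[:]  # start from the source row (naive hole handling)
    for x, (pix, d) in enumerate(zip(row, depth_row)):
        nx = x + parallax_from_depth(d, max_shift)
        if 0 <= nx < len(row):
            out[nx] = pix
    return out
```

With every depth value on the zero-parallax plane, the synthesized row equals the source row, i.e. zero parallax.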
  • the stereoscopic video generation unit 113 transmits, to the stereoscopic video display device 200, the stereoscopic video signal SG2 indicating the generated new stereoscopic video (video for left eye and video for right eye).
  • the stereoscopic video generation process described above applies when the processing target stereoscopic video is a still image; when it is a moving image, the process is repeated for each frame constituting the moving image.
  • as described above, the depth information update unit 112 applies, to the depth image composed of the plurality of depth values indicated by the depth information (depth table T100), a process that emphasizes a component of a specific band (a change in depth value in that band).
  • a stereoscopic image is generated using such depth information. Therefore, according to the present embodiment, it is possible to generate a three-dimensional video in which the expression of fine three-dimensional effect is improved.
  • the cardboard-cutout stereoscopic effect is, for example, a stereoscopic effect in which each of a plurality of planes representing object images is disposed at a different depth position, while the stereoscopic effect of each object image on its own is hardly expressed.
  • the 3D image viewing system in the first modification of the first embodiment is the 3D image viewing system 1000 of FIG. That is, the three-dimensional video reproduction apparatus in the first modification of the first embodiment is the three-dimensional video reproduction apparatus 100 of FIG. Therefore, detailed description of the configuration of stereoscopic video reproduction device 100 will not be repeated.
  • the stereoscopic video generation device in the first modification of the first embodiment is the stereoscopic video generation device 110 of FIG. 3.
  • stereoscopic video generation processing A processing for generating a stereoscopic video (hereinafter also referred to as stereoscopic video generation processing A) in the first modification of the present embodiment will be described.
  • FIG. 9 is a flowchart of stereoscopic video generation processing A.
  • processes with the same step numbers as in FIG. 7 are performed in the same manner as described in the first embodiment, and therefore their detailed description will not be repeated.
  • differences from the first embodiment will be mainly described.
  • the depth information acquisition unit 111 acquires depth information indicated by the stereoscopic video signal SG1 (S110).
  • then, in step S120A, depth information update processing A is performed.
  • the depth information update process A differs from the depth information update process of FIG. 7 in that the process of step S125A is performed instead of step S125 and the process of step S126 is further performed.
  • the other processing of depth information update processing A is the same processing as the depth information update processing, and therefore detailed description will not be repeated.
  • in step S125A, depth enhancement processing A is performed.
  • the depth information update unit 112 determines the depth in a specific band with respect to the depth image including a plurality of depth values indicated by the depth information (depth table T100). Apply a filter process that emphasizes (increases) changes in value. That is, the depth information updating unit 112 performs an emphasizing process, which is a process of emphasizing a component of a specific band, on a depth image composed of a plurality of depth values indicated by the depth information.
  • the filter process of step S125A is performed with one of the plurality of pixels constituting the depth image indicated by the depth information (depth table T100) as the pixel of interest. Note that each time the process of step S125A is performed, a different pixel is used as the pixel of interest.
  • when the process of step S125A has been performed on all the pixels constituting the depth image, depth enhancement processing A ends, and the process proceeds to step S126.
  • the depth table T100 as depth information is updated by the depth enhancement process A in step S125A.
  • the depth information updated by the depth enhancement processing A is referred to as first updated depth information.
  • the first updated depth information is the updated depth table T100.
  • the updated depth table T100 is also referred to as a first updated depth table.
  • by depth enhancement processing A, the amplitude of the depth line L10 in the region R10 shown in FIG. 8A is increased as shown in FIG. 8B. That is, the depth (stereoscopic effect) of the stereoscopic target object 50 is emphasized by the process of step S125A.
  • in step S126, depth compression processing is performed.
  • specifically, the depth information update unit 112 updates the updated depth information by changing the plurality of depth values it indicates such that their absolute values decrease at substantially the same rate.
  • the depth information updating unit 112 updates the first updated depth information by multiplying the absolute value of the plurality of depth values indicated by the first updated depth information by a coefficient less than one.
  • a coefficient less than 1 is, for example, a value in the range of 0.4 to 0.7.
  • the process of updating the first updated depth information is not limited to the above.
  • the process of updating the first updated depth information may be a process of subtracting a predetermined value from the absolute values of the plurality of depth values indicated by the first updated depth information.
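A minimal sketch of the depth compression of step S126, assuming depth values are stored as signed numbers with 0 on the zero-parallax (display) surface; the function name and the signed convention are assumptions:

```python
# Sketch of depth compression (step S126): shrink every depth value toward
# the zero-parallax plane by a fixed ratio. The text gives 0.4-0.7 as an
# example range for the coefficient.

def compress_depth(depth_values, coeff=0.5):
    """Multiply the magnitude of each signed depth value by coeff (< 1)."""
    if not 0 < coeff < 1:
        raise ValueError("coeff must be between 0 and 1")
    return [int(round(d * coeff)) for d in depth_values]
```

Because every value is scaled by the same coefficient, the absolute values decrease at the same rate, pulling emphasized objects closer to the display surface without flattening their relative depth ordering.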
  • when the processing target stereoscopic video is the stereoscopic video G100, the amplitude of the depth line L10 shown in FIG. 8B is reduced overall, as shown in FIG. 10A, by the process of step S126. As a result, the distance between the stereoscopic target object 50 whose stereoscopic effect has been enhanced and the display surface 210 (zero-parallax surface) is reduced.
  • the greater the distance between the stereoscopic target object 50 and the display surface 210 (parallax zero surface) the harder it is for the viewer to identify the stereoscopic effect of the stereoscopic target object 50.
  • the first updated depth information after this update, which reduces the distance between the stereoscopic target object 50 whose stereoscopic effect has been emphasized and the display surface 210 (zero-parallax surface), is information for generating a stereoscopic video in which the cardboard-cutout stereoscopic effect is eliminated.
  • the first updated depth information after updating is referred to as second updated depth information.
  • the second updated depth information is the first updated depth table after the update.
  • the depth information update unit 112 transmits the second updated depth information as the updated depth information to the stereoscopic video generation unit 113.
  • step S130 is performed.
  • the stereoscopic video generation processing A described above applies when the processing target stereoscopic video is a still image; when it is a moving image, stereoscopic video generation processing A is repeated for each frame constituting the moving image.
  • as described above, the depth information update unit 112 updates the first updated depth information by changing the plurality of depth values it indicates such that their absolute values decrease at substantially the same rate.
  • the first updated depth information is information generated by the process of step S125A similar to the process of step S125 of the first embodiment.
  • a stereoscopic video is generated using the first updated depth information updated in this way. Therefore, according to this modification of the present embodiment, it is possible to improve the expression of fine stereoscopic effect and to generate a stereoscopic video in which the cardboard-cutout stereoscopic effect is eliminated.
  • in addition, eye fatigue caused by an excessively large parallax amount when the viewer views the stereoscopic video generated according to this modification can be reduced.
  • the 3D image viewing system in the second modification of the first embodiment is the 3D image viewing system 1000 of FIG. That is, the three-dimensional video reproduction apparatus in the modification 2 of the first embodiment is the three-dimensional video reproduction apparatus 100 of FIG. Therefore, detailed description of the configuration of stereoscopic video reproduction device 100 will not be repeated.
  • the stereoscopic video generation device in the modification 2 of the first embodiment is the stereoscopic video generation device 110 of FIG. 3.
  • FIG. 11 is a block diagram showing an example of the configuration of the depth information update unit 112.
  • the depth information update unit 112 includes a calculation unit 121 and a processing execution unit 123.
  • although the details will be described later, the calculation unit 121 calculates the difference between the depth value of the processing target pixel and the depth value of a pixel near the processing target pixel in the depth image. This calculation is performed on all the pixels constituting the depth image.
  • the processing execution unit 123 updates the depth information by performing the enhancement process on each processing target pixel whose difference depth value is less than the threshold.
  • the process for generating a stereoscopic video in the second modification of the present embodiment (hereinafter also referred to as stereoscopic video generation processing B) will be described.
  • FIG. 12 is a flowchart of stereoscopic video generation processing B.
  • processes with the same step numbers as in FIG. 7 are performed in the same manner as described in the first embodiment, and therefore their detailed description will not be repeated.
  • differences from the first embodiment will be mainly described.
  • the depth information acquisition unit 111 acquires the depth information indicated by the stereoscopic video signal SG1 and transmits it to the depth information update unit 112 (S110).
  • then, in step S120B, depth information update processing B is performed.
  • the depth information update process B differs from the depth information update process of FIG. 7 in that the process of step S125B is performed instead of step S125 and the process of steps S121 and S122 is further performed.
  • the other processing of depth information update processing B is the same processing as the depth information update processing, and therefore detailed description will not be repeated.
  • the depth information update processing B will be described in detail below.
  • hereinafter, the stereoscopic video to be processed is referred to as the processing target stereoscopic video. As an example, the processing target stereoscopic video is assumed to be a still image, namely the stereoscopic video G100.
  • the depth image processed in the depth information update process B corresponds to the stereoscopic video G100.
  • in step S121, the calculation unit 121 calculates the absolute value of the difference between the depth values of two adjacent pixels among the plurality of pixels forming the depth image indicated by the depth information (depth table T100) of the stereoscopic video signal SG1 (hereinafter, the difference depth value).
  • the depth image corresponds to the processing target stereoscopic video.
  • the calculation unit 121 sets one pixel of the plurality of pixels forming the depth image as a processing target pixel.
  • FIG. 13 is a diagram for explaining a processing target pixel and a pixel close to the processing target pixel.
  • the pixels P31, P32, P33, and P34 are pixels adjacent to the processing target pixel P30.
  • the pixel P31 is a pixel that is close to the processing target pixel P30 in the vertical direction.
  • the pixel P32 is a pixel close to the processing target pixel P30 in the left-right direction.
  • the pixel P33 is a pixel close to the processing target pixel P30 in the left-right direction.
  • the pixel P34 is a pixel which is close to the processing target pixel P30 in the vertical direction.
  • each of the pixels P31, P32, P33, and P34 is referred to as a processing target proximity pixel.
  • when the processing target pixel is the pixel in the first row and first column (upper left end) of the processing target stereoscopic video, the processing target proximity pixels are only the pixels P33 and P34. Further, when the processing target pixel is the pixel in the first row and n-th column (upper right end), the processing target proximity pixels are only the pixels P32 and P34.
  • when the processing target pixel is the pixel in the m-th row and first column (lower left end), the processing target proximity pixels are only the pixels P31 and P33.
  • when the processing target pixel is the pixel in the m-th row and n-th column (lower right end), the processing target proximity pixels are only the pixels P31 and P32.
  • a processing target proximity pixel may also be a pixel close to the processing target pixel P30 in an oblique direction, such as the pixels P35, P36, P37, and P38.
  • a pixel separated from the processing target pixel by two or more pixels, such as the pixel P41, may also be set as a processing target proximity pixel.
  • similarly, the pixel P42 may be set as a processing target proximity pixel instead of the pixel P32, and the pixel P44 may likewise be set as a processing target proximity pixel.
  • in step S121, the calculation unit 121 calculates the absolute value of the difference between the depth value of the processing target pixel P30 and the depth value of each processing target proximity pixel (hereinafter referred to as the difference depth value).
  • instead of the depth value of the pixel P30 itself, a value obtained by adding the depth values of the pixel P30 and its peripheral pixels at a constant ratio may be used. Likewise, for the pixel P41, the depth values of the pixel P41 and its peripheral pixels may be added at a constant ratio.
  • for example, the depth values of the pixels P31, P30, and P32 may be added at a ratio of 1:2:1, the depth values of the pixels P41A, P41, and P41B may be added at a ratio of 1:2:1 instead of the depth value of the pixel P41, and the difference between these two weighted values may be used as the difference depth value.
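The 1:2:1 weighted variant of the difference depth value described above can be sketched as follows; the function name and the tuple interface are illustrative:

```python
# Sketch of the weighted difference depth value: each pixel's depth is first
# replaced by a 1:2:1 weighted sum with its two neighbours, then the absolute
# difference of the two weighted values is taken.

def weighted_difference_depth(a_triple, b_triple):
    """1:2:1-weighted depth of each pixel group, then the absolute difference."""
    wa = (a_triple[0] + 2 * a_triple[1] + a_triple[2]) / 4.0
    wb = (b_triple[0] + 2 * b_triple[1] + b_triple[2]) / 4.0
    return abs(wa - wb)
```

The weighting acts as a small low-pass step before differencing, so isolated noisy depth samples are less likely to be mistaken for object boundaries.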
  • in step S122, the calculation unit 121 determines whether at least one of the plurality of difference depth values calculated in step S121 is equal to or greater than a predetermined threshold.
  • the threshold is, for example, a value of 5 to 20% of the difference between the maximum value and the minimum value set for the depth value.
  • the threshold is, for example, 20 when the depth value is represented in the range of 0 to 255.
  • if NO in step S122, the process proceeds to step S125B. That is, when all of the calculated difference depth values are less than the threshold, the process proceeds to step S125B. On the other hand, if YES in step S122, the process of step S121 is performed again on the next pixel.
  • in step S125B, depth enhancement processing B is performed.
  • as in the first embodiment, the processing execution unit 123 performs, on the depth image composed of the plurality of depth values indicated by the depth table T100 as depth information, filter processing that emphasizes (increases) the change in depth value in a specific band, with the processing target pixel as the pixel of interest. That is, the processing execution unit 123 performs an emphasizing process, which emphasizes a component of a specific band, on the depth image.
  • step S125B is executed only in the case of NO at step S122. That is, when the calculated difference depth values are less than the threshold, the processing execution unit 123 performs the enhancement process on the processing target pixel corresponding to those difference depth values to update the depth information.
  • steps S121, S122, and S125B are repeated for all the pixels forming the depth image. Note that, each time the process of step S121 is performed, a different pixel is set as the processing target pixel among the plurality of pixels forming the depth image.
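Putting steps S121, S122, and S125B together, a sketch of depth information update processing B might look like the following. The 4-neighbour difference, the threshold of 20 (for 0-255 depth values), and the selective emphasis are taken from the text; the 3x3 box blur, the gain, and all names are assumptions for illustration:

```python
# Sketch of depth information update processing B: a pixel is sharpened only
# when every difference depth value against its up/left/right/down neighbours
# stays below the threshold, so depth discontinuities at object boundaries
# are left unemphasized.

def enhance_inside_objects(depth, threshold=20, amount=1.0):
    h, w = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # S121: difference depth values against the four neighbours
            diffs = [abs(depth[y][x] - depth[y + dy][x + dx])
                     for dy, dx in ((-1, 0), (0, -1), (0, 1), (1, 0))]
            # S122: skip pixels sitting on a depth discontinuity
            if max(diffs) >= threshold:
                continue
            # S125B: unsharp-style emphasis for pixels inside one object
            blur = sum(depth[y + dy][x + dx]
                       for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
            val = depth[y][x] + amount * (depth[y][x] - blur)
            out[y][x] = max(0, min(255, int(round(val))))
    return out
```

On a depth map with a large object/background step and a small ripple inside the object, the ripple is amplified while the boundary pixels pass through unchanged.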
  • the calculation unit 121 calculates a difference depth value which is an absolute value of a difference between the depth value of the processing target pixel which is the processing target pixel and the depth value of the pixel close to the processing target pixel in the depth image.
  • the calculation process is performed on all the pixels constituting the depth image.
  • depth information is updated as in the first embodiment.
  • depth emphasis processing B is executed only in the case of NO at step S122.
  • as a result, changes in depth within the same object are emphasized, but changes in depth at the boundary between two objects at different depths are not emphasized.
  • in some cases, the change in depth within the same object is hardly expressed. That is, when the change in depth value at the boundary between an object and another object is much larger than the change in depth value within the range regarded as the same object, the change in depth within that object is hardly expressed at all. In this case, the viewer 40 perceives a cardboard-cutout stereoscopic effect.
  • here, the enhancement process is performed only on processing target pixels whose difference depth values are less than the threshold. This makes it possible to emphasize changes in depth within the same object without emphasizing changes in depth between two objects at different depths. In other words, it is possible to eliminate the cardboard-cutout stereoscopic effect, in which the change in depth within the same object is hardly expressed.
  • step S130 is performed.
  • the depth compression processing of step S126 may also be performed, as in the first modification of the first embodiment.
  • the process of updating depth information may be performed on frequency-converted values.
  • the stereoscopic video generation device 110 may receive a 2D video and generate depth information for generating a stereoscopic video from the 2D video.
  • in that case, the depth information acquisition unit 111 of the stereoscopic video generation device 110 calculates the depth values so that the image of the subject in focus in the 2D video appears on the near side of the zero-parallax surface in the stereoscopic video. Then, the depth information acquisition unit 111 generates information indicating the calculated depth values as the depth information.
  • the depth information acquisition unit 111 transmits the depth information acquired by the generation to the depth information update unit 112.
  • two adjacent pixels are not limited to two pixels in contact with each other.
  • the two adjacent pixels may be, for example, pixels arranged so as to sandwich one or more pixels between them.
  • all or part of the plurality of components constituting the depth information update device 101 described above may be configured by hardware. Further, all or part of these components may be a module of a program executed by a central processing unit (CPU) or the like.
  • all or part of the plurality of components constituting the stereoscopic image generation device 110 or the depth information update device 101 described above may be configured from one system LSI (Large Scale Integration: large scale integrated circuit).
  • the system LSI is a super-multifunctional LSI manufactured by integrating a plurality of components on one chip, and is specifically a computer system configured to include a microprocessor, a read-only memory (ROM), a random access memory (RAM), and so on.
  • the stereoscopic video generation device 110 or the depth information update device 101 may be configured from one system LSI (integrated circuit). That is, the stereoscopic video generation device 110 or the depth information update device 101 may be an integrated circuit.
  • the present invention may be realized as a depth information updating method in which the operation of a characteristic configuration unit included in the stereoscopic video generating device 110 or the depth information updating device 101 is a step.
  • the present invention may also be implemented as a program that causes a computer to execute the steps included in such a depth information update method.
  • the present invention may be realized as a computer readable recording medium storing such a program.
  • the program may be distributed via a transmission medium such as the Internet.
  • the configuration of the depth information update device 101 described above is an example for specifically explaining the present invention; the device need not include all of the components shown. The depth information update device 101 according to the present invention only needs to have the minimum configuration that can realize the effects of the present invention. For example, if the processing execution unit 123 also performs the process performed by the calculation unit 121, the depth information update unit 112 of the depth information update device 101 need not include the calculation unit 121.
  • the depth information updating method according to the present invention corresponds to the depth information updating process of FIG. 7, the depth information updating process A of FIG. 9, or the depth information updating process B of FIG.
  • the depth information updating method according to the present invention does not necessarily include all the corresponding steps in FIG. 7, FIG. 9 or FIG. That is, the depth information updating method according to the present invention may include only the minimum steps capable of realizing the effects of the present invention.
  • the order in which the steps in the depth information update method are performed is an example for specifically explaining the present invention, and may be an order other than the above. Also, some of the steps in the depth information update method and other steps may be performed in parallel independently of each other.
  • the present invention can be used as a depth information updating device capable of generating depth information for generating a stereoscopic video with an improved representation of fine stereoscopic effect.
  • 50 Stereoscopic target object; 100 Stereoscopic video reproduction device; 101 Depth information update device; 110 Stereoscopic video generation device; 111 Depth information acquisition unit; 112 Depth information update unit; 113 Stereoscopic video generation unit; 121 Calculation unit; 123 Processing execution unit; 200 Stereoscopic video display device; 210 Display surface; 300 Active shutter glasses; 1000 Stereoscopic video viewing system

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

A depth information update device (101) includes: a depth information acquisition unit (111) that acquires depth information indicating a plurality of depth values; and a depth information update unit (112) that updates the depth information by subjecting a depth image composed of the plurality of depth values indicated by the depth information to an enhancement process for emphasizing components in a specific band.
PCT/JP2011/001795 2011-03-25 2011-03-25 Dispositif et procédé de mise à jour d'informations de profondeur et dispositif de production de vidéo stéréoscopique WO2012131752A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2011/001795 WO2012131752A1 (fr) 2011-03-25 2011-03-25 Dispositif et procédé de mise à jour d'informations de profondeur et dispositif de production de vidéo stéréoscopique

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2011/001795 WO2012131752A1 (fr) 2011-03-25 2011-03-25 Dispositif et procédé de mise à jour d'informations de profondeur et dispositif de production de vidéo stéréoscopique

Publications (1)

Publication Number Publication Date
WO2012131752A1 true WO2012131752A1 (fr) 2012-10-04

Family

ID=46929607

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/001795 WO2012131752A1 (fr) 2011-03-25 2011-03-25 Dispositif et procédé de mise à jour d'informations de profondeur et dispositif de production de vidéo stéréoscopique

Country Status (1)

Country Link
WO (1) WO2012131752A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005065162A (ja) * 2003-08-20 2005-03-10 Matsushita Electric Ind Co Ltd 表示装置、送信装置、送受信システム、送受信方法、表示方法、送信方法、およびリモコン
JP2007110360A (ja) * 2005-10-13 2007-04-26 Ntt Comware Corp 立体画像処理装置およびプログラム
JP2008141666A (ja) * 2006-12-05 2008-06-19 Fujifilm Corp 立体視画像作成装置、立体視画像出力装置及び立体視画像作成方法
JP2010206774A (ja) * 2009-02-05 2010-09-16 Fujifilm Corp 3次元画像出力装置及び方法


Similar Documents

Publication Publication Date Title
EP2648414B1 (fr) Appareil d'affichage 3D et procédé de traitement d'image utilisant celui-ci
JP6147275B2 (ja) 立体画像処理装置、立体画像処理方法、及びプログラム
JP6308513B2 (ja) 立体画像表示装置、画像処理装置及び立体画像処理方法
EP3350989B1 (fr) Appareil d'affichage en 3d et procédé de commande de ce dernier
US20120293489A1 (en) Nonlinear depth remapping system and method thereof
CN102404592B (zh) 图像处理设备和方法以及立体图像显示设备
CN103444193B (zh) 图像处理设备和图像处理方法
CN102905145B (zh) 立体影像系统、影像产生方法、影像调整装置及其方法
US10694173B2 (en) Multiview image display apparatus and control method thereof
JP6033625B2 (ja) 多視点画像生成装置、画像生成方法、表示装置、プログラム、及び、記録媒体
US20140063206A1 (en) System and method of viewer centric depth adjustment
WO2014038476A1 (fr) Dispositif de traitement d'images stéréoscopiques, procédé de traitement d'images stéréoscopiques et programme
JP6377155B2 (ja) 多視点映像処理装置及びその映像処理方法
WO2012131752A1 (fr) Dispositif et procédé de mise à jour d'informations de profondeur et dispositif de production de vidéo stéréoscopique
JP2014053782A (ja) 立体画像データ処理装置、および、立体画像データ処理方法
CN102970498A (zh) 菜单立体显示的显示方法及显示装置
KR101912242B1 (ko) 3d 디스플레이 장치 및 그 영상 처리 방법
JP2011223126A (ja) 立体映像表示装置および立体映像表示方法
Li et al. On adjustment of stereo parameters in multiview synthesis for planar 3D displays
US20130114884A1 (en) Three-dimension image processing method and a three-dimension image display apparatus applying the same
Kim et al. Crosstalk Reduction of Glasses-free 3D Displays using Multiview Image Processing
US9547933B2 (en) Display apparatus and display method thereof
Masia et al. Perceptually-optimized content remapping for automultiscopic displays
US20140055579A1 (en) Parallax adjustment device, three-dimensional image generation device, and method of adjusting parallax amount
TW201325202A (zh) 三維影像處理方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11862537

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11862537

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP