WO2014136140A1 - Dispositif et procédé de traitement vidéo - Google Patents

Dispositif et procédé de traitement vidéo

Info

Publication number
WO2014136140A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
luminance
luminance value
control unit
pixels
Prior art date
Application number
PCT/JP2013/001346
Other languages
English (en)
Japanese (ja)
Inventor
進吾 宮内
Original Assignee
パナソニック株式会社
Priority date
Filing date
Publication date
Application filed by パナソニック株式会社 filed Critical パナソニック株式会社
Priority to PCT/JP2013/001346 priority Critical patent/WO2014136140A1/fr
Publication of WO2014136140A1 publication Critical patent/WO2014136140A1/fr

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 - Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 - Processing image signals
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 - Image reproducers

Definitions

  • The present invention relates to a video processing apparatus and a video processing method, and more particularly to a video processing apparatus and a video processing method for mixed display of three-dimensional (3D) video and two-dimensional (2D) video.
  • 3D video display methods include, for example, the parallax barrier method.
  • In the parallax barrier method, right-eye images and left-eye images are alternately displayed on the same screen, and a barrier composed of striped slits is used so that the right eye sees the right-eye image and the left eye sees the left-eye image.
  • With the parallax barrier method, it is possible to view a three-dimensional image with the naked eye, without dedicated glasses or the like (see, for example, Patent Document 1 and Patent Document 2).
  • This disclosure provides a video processing apparatus and a video processing method capable of suppressing a difference in luminance value in 3D2D mixed display.
  • The video processing apparatus according to the present disclosure is a video processing apparatus that displays a 3D video and a 2D video mixed on the same screen, and includes a 3D signal level control unit that performs a luminance increase process for increasing the luminance value of the 3D video when a first difference, which is the difference between the luminance value of the 3D video and the luminance value of the 2D video, is outside a predetermined first range.
  • With the video processing device and the video processing method of the present disclosure, it is possible to suppress a difference in luminance value in 3D2D mixed display.
  • FIG. 1 is a block diagram illustrating a configuration of a video processing apparatus according to an embodiment.
  • FIG. 2 is a diagram illustrating a configuration of a display area of the display device.
  • FIG. 3 is a flowchart illustrating a processing procedure of the video processing method according to the embodiment.
  • FIG. 4A is a diagram illustrating an example of a distribution of luminance values of 3D video.
  • FIG. 4B is a diagram illustrating an example of a distribution of luminance values of 3D video.
  • FIG. 4C is a diagram illustrating an example of a luminance value distribution of a 3D video.
  • FIG. 5A is a diagram illustrating an example of a distribution of luminance values of 2D video.
  • FIG. 5B is a diagram illustrating an example of a distribution of luminance values of 2D video.
  • FIG. 5C is a diagram illustrating an example of a distribution of luminance values of 2D video.
  • FIG. 6 is a diagram illustrating luminance values of 3D video and 2D video before and after the luminance increasing process.
  • FIG. 7 is a diagram illustrating luminance values of 3D video and 2D video before and after the luminance increasing process and the luminance decreasing process.
  • FIG. 8A is a diagram illustrating luminance values of 3D video and 2D video before and after luminance increase processing, luminance reduction processing, and boundary processing.
  • FIG. 8B is a diagram illustrating the first offset amount and the second offset amount in consideration of the luminance increasing process, the luminance decreasing process, and the boundary process.
  • FIG. 9 is a flowchart showing a processing procedure of a video processing method according to another embodiment.
  • FIG. 10 is a perspective view showing an example of a naked-eye 3D display.
  • The video processing apparatus 100 performs display control of a display device that displays a three-dimensional video (hereinafter referred to as "3D video" as appropriate) and a two-dimensional video (hereinafter referred to as "2D video" as appropriate) mixed on the same screen.
  • The video processing apparatus 100 is mounted on, for example, a set-top box (STB: Set Top Box) that displays video on the naked-eye 3D display (an example of a display device) illustrated in FIG. 10.
  • the display device is a naked-eye 3D display of a parallax barrier system that displays a naked-eye 3D image.
  • The display device is configured to be capable of displaying only 3D video, displaying only 2D video, or performing 3D2D mixed display.
  • In 3D2D mixed display, in which 3D video and 2D video are displayed on the same screen, the display device realizes the mixed display by, for example, turning the barrier on for the 3D region that displays the 3D video and turning the barrier off for the 2D region that displays the 2D video.
  • The apparent brightness of the 3D video is attenuated according to the transmittance of the barrier, whereas the 2D video is attenuated only slightly because the barrier is turned off. For this reason, when the input video is displayed as it is, the 3D region 201 appears darker than the 2D region 202.
  • Therefore, the video processing apparatus corrects the difference in apparent brightness caused by the difference in barrier attenuation.
  • FIG. 1 is a block diagram illustrating a configuration of a video processing apparatus 100 according to the present embodiment.
  • The video processing apparatus 100 includes a window control unit 101, a 3D video acquisition unit 102, a 3D feature amount extraction unit 103, a 3D signal level control unit 104, a 2D video acquisition unit 105, a 2D feature amount extraction unit 106, a 2D signal level control unit 107, a window composition unit 108, a display control unit 109, and a backlight control unit 110.
  • the window control unit 101 determines the size and position of the 3D area 201 for displaying 3D video and the 2D area 202 for displaying 2D video in the display area 200.
  • FIG. 2 is a diagram illustrating an example of the display area 200.
  • The 3D region 201 displays a main video with dynamic motion, such as a movie or a television broadcast.
  • The 2D region 202 displays information about the content shown in the 3D region 201, or video with relatively little movement.
  • the 3D video acquisition unit 102 acquires 3D video input from the outside and outputs the 3D video to the 3D feature amount extraction unit 103 and the 3D signal level control unit 104.
  • the 2D video acquisition unit 105 acquires 2D video input from the outside, and outputs the 2D video to the 2D feature amount extraction unit 106 and the 2D signal level control unit 107.
  • the 3D feature amount extraction unit 103 acquires the feature amount of the 3D video input from the 3D video acquisition unit 102.
  • the 3D feature amount extraction unit 103 obtains an average value of luminance as a first feature amount for obtaining an increase amount (first offset amount) of the luminance value in the luminance increase process.
  • The first feature amount need not be the average value; it may instead be the median value or the luminance value at which the distribution peaks.
  • The 3D feature amount extraction unit 103 also obtains the distribution of luminance values of the 3D video as a second feature amount for determining whether or not it is appropriate to increase the luminance value of the 3D video.
  • the 2D feature amount extraction unit 106 acquires the feature amount of the 2D video input from the 2D video acquisition unit 105.
  • the 2D feature amount extraction unit 106 according to the present embodiment obtains an average value of luminance as a third feature amount for obtaining a reduction amount (second offset amount) of the luminance value in the luminance reduction process.
  • The third feature amount is the same type of feature amount as the first feature amount. Like the first feature amount, it need not be the average value; it may instead be the median value or the luminance value at which the distribution peaks.
  • The 2D feature amount extraction unit 106 also obtains the distribution of luminance values of the 2D video as a fourth feature amount for determining whether or not it is appropriate to reduce the luminance value of the 2D video.
  • The 3D signal level control unit 104 determines whether or not it is appropriate to increase the luminance value of the 3D video, using the second feature amount (luminance value distribution) of the 3D video obtained by the 3D feature amount extraction unit 103. If the 3D signal level control unit 104 determines that it is appropriate to increase the luminance value of the 3D video, it calculates a first difference, which is the difference between the luminance value of the 3D video and the luminance value of the 2D video, and determines a first offset amount (corresponding to the increase amount) according to the first difference. The first difference is obtained as the difference between the first feature amount of the 3D video and the third feature amount of the 2D video. If the 3D signal level control unit 104 determines that it is not appropriate to increase the luminance value of the 3D video, it sets the first offset amount to 0.
  • Furthermore, the 3D signal level control unit 104 performs boundary processing that changes the first offset amount so that the change in luminance value becomes gradual at the boundary between the 3D region 201 and the 2D region 202.
  • the 3D signal level control unit 104 changes the luminance value of the 3D video according to the determined first offset amount.
  • The 2D signal level control unit 107 determines whether or not it is appropriate to reduce the luminance value of the 2D video, based on the information from the 3D signal level control unit 104 and the fourth feature amount (luminance value distribution) of the 2D video obtained by the 2D feature amount extraction unit 106. If the 2D signal level control unit 107 determines that it is appropriate to reduce the luminance value of the 2D video, it calculates a second difference, which is the difference between the luminance value of the 3D video after the luminance increase and the luminance value of the 2D video. The second difference is obtained as (first feature amount of the 3D video + first offset amount) - (third feature amount of the 2D video).
  • the 2D signal level control unit 107 determines a second offset amount (corresponding to a luminance value reduction amount in the luminance reduction processing) according to the second difference. If the 2D signal level control unit 107 determines that it is not appropriate to reduce the luminance value of the 2D video, the 2D signal level control unit 107 determines the second offset amount to be 0.
  • Furthermore, the 2D signal level control unit 107 performs boundary processing that changes the second offset amount so that the change in luminance value becomes gradual at the boundary between the 3D region 201 and the 2D region 202.
  • The window composition unit 108 generates a display video by compositing the 3D video output from the 3D signal level control unit 104 into the 3D region 201 set by the window control unit 101 and the 2D video output from the 2D signal level control unit 107 into the 2D region 202 set by the window control unit 101.
  • the window composition unit 108 outputs the display video to the display control unit 109.
  • the display control unit 109 displays the display video output from the window composition unit 108 on the screen of the display device (on the display area 200).
  • the backlight control unit 110 adjusts the brightness of the backlight based on the control signal from the window control unit 101.
  • the brightness of the backlight is constant throughout the display area 200.
  • FIG. 3 is a flowchart showing a processing procedure of the video processing method of the present embodiment.
  • the video processing method shown in FIG. 3 is executed for each frame. Further, a case where the luminance values of 3D video and 2D video are 0 to 255 will be described as an example.
  • the video processing apparatus 100 starts the 3D2D mixed display when a mixed display of 3D2D video is instructed by an input from a remote controller, for example.
  • the 3D feature amount extraction unit 103 acquires the feature amount of the 3D video input from the 3D video acquisition unit 102 (S101).
  • the 3D feature quantity extraction unit 103 obtains the average value of the luminance values of the 3D video (first feature quantity) and the distribution of the luminance values of the 3D video (second feature quantity) as the feature quantities.
  • FIG. 4A is a diagram illustrating an example of a luminance value distribution (second feature amount) of a 3D video.
  • the luminance values 0 to 255 are divided into eight ranges A to H, and the number of pixels is obtained for each range.
  • the 2D feature amount extraction unit 106 acquires the feature amount of the 2D video input from the 2D video acquisition unit 105 (S102).
  • the 2D feature quantity extraction unit 106 obtains, as feature quantities, an average value of brightness values of the 2D video (third feature quantity) and a distribution of brightness values of the 2D video (fourth feature quantity). .
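  • As a concrete reading of steps S101 and S102, the following is a minimal sketch of how the feature amounts described above could be computed for one frame: the average luminance and the distribution of luminance values over the eight ranges A to H. The use of Python with NumPy, the function name, and the data layout are illustrative assumptions and are not part of the disclosure.

```python
import numpy as np

def extract_features(luma: np.ndarray):
    """Compute the feature amounts used in steps S101/S102 for one frame.

    `luma` is assumed to be an 8-bit luminance image (values 0 to 255).
    Returns the average luminance (first/third feature amount) and the
    distribution of luminance values over the eight ranges A to H
    (second/fourth feature amount). Names and types are illustrative.
    """
    average = float(luma.mean())
    # Eight ranges of 32 levels each: A = 0-31, B = 32-63, ..., H = 224-255.
    distribution, _ = np.histogram(luma, bins=8, range=(0, 256))
    return average, distribution
```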
  • The 3D signal level control unit 104 determines whether the first difference, which is the difference between the average luminance value of the 3D video (first feature amount) and the average luminance value of the 2D video (third feature amount), is within a predetermined first range (S103). The first range is set to a luminance difference that does not make the viewer feel uncomfortable when the 3D video and the 2D video are mixedly displayed.
  • If the 3D signal level control unit 104 determines that the first difference is within the first range ("within the first range" in S103), it determines that there is no need to increase the luminance value of the 3D video and no need to reduce the luminance value of the 2D video.
  • In this case, the video processing apparatus 100 sets the first offset amount and the second offset amount to "0" and proceeds to step S109.
  • If the 3D signal level control unit 104 determines that the first difference is outside the first range ("outside the first range" in S103), it determines whether or not it is appropriate to increase the luminance value of the 3D video, using the second feature amount (luminance value distribution) obtained by the 3D feature amount extraction unit 103 (S104).
  • The 3D signal level control unit 104 determines that it is appropriate to increase the luminance value of the 3D video when the number of pixels whose luminance values fall within a predetermined range including the maximum value is not more than 10% of the total number of pixels of the 3D video.
  • Here, the range of luminance values used for the determination is the region surrounded by the broken line, that is, the luminance range G (luminance values 192 to 223) and the luminance range H (luminance values 224 to 255, which includes the maximum value 255).
  • That is, the 3D signal level control unit 104 determines that it is appropriate to increase the luminance value if the sum of the number of pixels in the luminance range G and the number of pixels in the luminance range H is not more than 10% of the total number of pixels of the 3D video. In contrast, as shown in FIG. 4C, the 3D signal level control unit 104 determines that it is not appropriate to increase the luminance value when the sum of the number of pixels in the luminance range G and the number of pixels in the luminance range H is larger than 10% of the total number of pixels of the 3D video.
  • a 3D video in which the total of the number of pixels in the luminance range G and the number of pixels in the luminance range H is more than 10% of the total number of pixels in the 3D video is considered to be an image of a scene with high overall brightness.
  • If the luminance value of such a video is increased, there is a high possibility of degrading the image quality, for example through white clipping or loss of gradation.
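  • The decision of step S104 (and its mirror image at the dark end in step S107, described later) can be sketched as a simple test on the distribution obtained above: the luminance value is raised only when the brightest ranges G and H together hold at most 10% of the pixels, and lowered only when the darkest ranges A and B together hold at most 10% of the pixels. The helper names below are assumptions, and the 10% threshold follows the example in the text.

```python
def may_increase_luminance(distribution, total_pixels, ratio=0.10):
    """Step S104: allow the luminance increase only when the brightest ranges
    G (192-223) and H (224-255) hold at most `ratio` of all pixels."""
    return distribution[6] + distribution[7] <= ratio * total_pixels

def may_decrease_luminance(distribution, total_pixels, ratio=0.10):
    """Step S107: allow the luminance reduction only when the darkest ranges
    A (0-31) and B (32-63) hold at most `ratio` of all pixels."""
    return distribution[0] + distribution[1] <= ratio * total_pixels
```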
  • If the 3D signal level control unit 104 determines that it is not appropriate to increase the luminance value ("not appropriate" in S104), it sets the first offset amount to "0" and the process proceeds to step S106.
  • If the 3D signal level control unit 104 determines that it is appropriate to increase the luminance value ("appropriate" in S104), it determines the amount by which the luminance value of the 3D video is increased in the luminance increase process (the first offset amount) according to the first difference (S105).
  • The setting range of the first offset amount is limited by an upper limit value; if the amount determined from the first difference exceeds the upper limit value, the first offset amount is set to the upper limit value.
  • The first offset amount is set to be larger as the first difference is larger; for example, the first offset amount is set to a predetermined coefficient multiplied by the first difference.
  • FIG. 4B illustrates a case where the first offset amount is one luminance range (32).
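  • Step S105 can therefore be sketched as follows: the first offset amount grows with the first difference and is clipped to an upper limit. The concrete values of the first range, the coefficient and the upper limit below are assumptions chosen only for illustration; the disclosure does not fix them.

```python
def first_offset(avg_3d, avg_2d, first_range=16, coeff=0.5, upper_limit=32):
    """Step S105: increase amount applied to the 3D video (0 if no increase is needed).

    `first_difference` here is how much brighter the 2D video is on average than
    the 3D video; within the first range no adjustment is made. All constants
    are illustrative assumptions.
    """
    first_difference = avg_2d - avg_3d
    if abs(first_difference) <= first_range:
        return 0
    return int(max(0, min(coeff * first_difference, upper_limit)))
```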
  • Next, the 2D signal level control unit 107 determines whether the second difference, which is the difference between the average luminance value of the 3D video after the luminance increase process (the predicted value of the average luminance value of the 3D video) and the average luminance value of the 2D video, is within the first range (S106). That is, the 2D signal level control unit 107 determines whether it is necessary to decrease the luminance value of the 2D video because the luminance increase process by the 3D signal level control unit 104 is not sufficient by itself. The predicted value of the average luminance value of the 3D video is obtained as the average luminance value of the 3D video before the luminance increase process plus the first offset amount.
  • If the 2D signal level control unit 107 determines that the second difference is within the first range ("within the first range" in S106), it determines that it is not necessary to decrease the luminance value of the 2D video.
  • In this case, the 2D signal level control unit 107 sets the second offset amount to "0" and the process proceeds to step S109.
  • If the 2D signal level control unit 107 determines that the second difference is outside the first range ("outside the first range" in S106), it determines whether or not it is appropriate to reduce the luminance value of the 2D video, using the fourth feature amount (luminance value distribution) of the 2D video obtained by the 2D feature amount extraction unit 106 (S107).
  • The 2D signal level control unit 107 determines that it is appropriate to reduce the luminance value of the 2D video when the number of pixels whose luminance values fall within a predetermined range including the minimum value is not more than 10% of the total number of pixels of the 2D video.
  • Here, the range of luminance values used for the determination is the region surrounded by the broken line, that is, the luminance range A (luminance values 0 to 31, which includes the minimum value 0) and the luminance range B (luminance values 32 to 63).
  • The 2D signal level control unit 107 determines that it is not appropriate to reduce the luminance value of the 2D video when the sum of the number of pixels in the luminance range A and the number of pixels in the luminance range B is larger than 10% of the total number of pixels of the 2D video. This is because, in such a case, there is a possibility of degrading the image quality, for example by crushing dark portions of the image to black.
  • If the 2D signal level control unit 107 determines that it is not appropriate to reduce the luminance value of the 2D video ("not appropriate" in S107 of FIG. 3), it sets the second offset amount to "0" and the process proceeds to step S109.
  • If the 2D signal level control unit 107 determines that it is appropriate to reduce the luminance value of the 2D video ("appropriate" in S107), it determines the amount by which the luminance value of the 2D video is reduced in the luminance reduction process (the second offset amount) according to the second difference (S108).
  • As with the first offset amount, the setting range of the second offset amount is limited by an upper limit value; if the amount determined from the second difference exceeds the upper limit value, the second offset amount is set to the upper limit value.
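  • Step S108 can be sketched in the same way as step S105: the second difference is the gap that remains between the predicted 3D average (the average before the increase plus the first offset amount) and the 2D average, and the second offset amount is derived from it and clipped to its own upper limit. Treating the offset as proportional to the second difference, and all constants below, are assumptions for illustration.

```python
def second_offset(avg_3d, avg_2d, first_offset_amount,
                  first_range=16, coeff=0.5, upper_limit=32):
    """Step S108: reduction amount applied to the 2D video (0 if no reduction is needed).

    `second_difference` here is how much brighter the 2D video still is, on
    average, than the predicted 3D video after the luminance increase. All
    constants are illustrative assumptions.
    """
    predicted_3d = avg_3d + first_offset_amount
    second_difference = avg_2d - predicted_3d
    if abs(second_difference) <= first_range:
        return 0
    return int(max(0, min(coeff * second_difference, upper_limit)))
```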
  • the 3D signal level control unit 104 and the 2D signal level control unit 107 perform boundary processing (S109, S110).
  • the boundary processing is performed on a pixel basis.
  • The 3D signal level control unit 104 determines whether the difference between the luminance value of a boundary pixel of the 3D region 201 and the luminance value of a boundary pixel of the 2D region 202 is outside a predetermined second range (step S109). Specifically, it determines whether a third difference, which is the difference between the luminance value of a pixel in the 3D region 201 adjacent to the boundary (a boundary pixel on the 3D region 201 side) and the luminance value of the adjacent pixel in the 2D region 202 (a boundary pixel on the 2D region 202 side), is outside the second range. Whether or not the third difference is outside the second range is determined pixel by pixel.
  • For pixels for which the 3D signal level control unit 104 determines that the third difference is within the second range ("within the second range" in S109), the 3D signal level control unit 104 and the 2D signal level control unit 107 do not change the first offset amount and the second offset amount.
  • For pixels for which the 3D signal level control unit 104 determines that the third difference is outside the second range ("outside the second range" in S109), the 3D signal level control unit 104 and the 2D signal level control unit 107 perform boundary processing that reduces the difference in luminance value at the boundary between the 3D region 201 and the 2D region 202.
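  • The per-pixel decision of step S109 can be sketched as follows: the third difference is computed between the column of 3D pixels adjacent to the boundary and the neighbouring column of 2D pixels, and boundary processing is applied only to the rows where it falls outside the second range. The vertical-boundary layout and the concrete value of the second range are assumptions.

```python
import numpy as np

def needs_boundary_processing(boundary_col_3d, boundary_col_2d, second_range=32):
    """Step S109: per-pixel test of the third difference across the boundary.

    `boundary_col_3d` and `boundary_col_2d` are the 8-bit luminance columns
    adjacent to the boundary on the 3D side and the 2D side. Returns one
    boolean per row; `second_range` is an illustrative assumption.
    """
    third_difference = boundary_col_2d.astype(np.int16) - boundary_col_3d.astype(np.int16)
    return np.abs(third_difference) > second_range
```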
  • As the boundary processing, for example, a process can be considered in which the first offset amount is changed so as to increase as the 2D region 202 is approached and the second offset amount is changed so as to increase as the 3D region 201 is approached, so that the gradient of the luminance value becomes constant.
  • In this case, the number of pixels whose luminance values are changed differs depending on the gradient of the luminance value and on the composition of the 3D video and the 2D video.
  • Alternatively, the first offset amount and the second offset amount may be changed so that the range of pixels whose luminance values are changed is constant.
  • For example, the first offset amount in the region from the boundary between the 2D region 202 and the 3D region 201 to about 10 pixels on the 3D region 201 side is changed so as to increase as the 2D region 202 is approached.
  • Likewise, the second offset amount in the region from the boundary between the 2D region 202 and the 3D region 201 to about 10 pixels on the 2D region 202 side is changed so as to become smaller as the 3D region 201 is approached.
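  • The second example above can be sketched per column for a vertical boundary with the 3D region 201 to the left of the 2D region 202: within roughly ten pixels on the 3D side the first offset amount is raised toward the boundary, and within roughly ten pixels on the 2D side the second offset amount is made smaller toward the boundary. The band width, the linear ramps, the boost factor, and the omission of the per-pixel third-difference gate from the previous sketch are all simplifying assumptions.

```python
import numpy as np

def boundary_offsets(width_3d, width_2d, offset1, offset2, band=10, boost=1.5):
    """Per-column offsets after the boundary process (3D region on the left).

    Outside the boundary band the nominal offsets apply. Near the boundary the
    first offset amount increases toward the 2D region and the second offset
    amount decreases toward the 3D region, following the second example above.
    Ramp shape and `boost` are illustrative assumptions.
    """
    off1 = np.full(width_3d, float(offset1))
    off2 = np.full(width_2d, float(offset2))
    band_3d = min(band, width_3d)
    band_2d = min(band, width_2d)
    # 3D side: ramp up to boost * offset1 at the column adjacent to the boundary.
    off1[width_3d - band_3d:] = np.linspace(offset1, boost * offset1, band_3d)
    # 2D side: start from 0 at the boundary and return to the nominal offset.
    off2[:band_2d] = np.linspace(0.0, offset2, band_2d)
    return off1, off2
```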
  • Then, the 3D signal level control unit 104 adds the first offset amount to the luminance values of the pixels constituting the 3D video, and the 2D signal level control unit 107 subtracts the second offset amount from the luminance values of the pixels constituting the 2D video (S111). The luminance increase process, the luminance reduction process, and the boundary process are thereby executed.
  • If it is determined in step S103 that the first difference is within the first range, or in step S104 that the luminance increase process should not be performed, the first offset amount is set to "0", so the result is substantially the same as when the luminance increase process is not performed. Likewise, if it is determined in step S106 that the second difference is within the first range, or in step S107 that the luminance reduction process should not be performed, the second offset amount is set to "0", so the result is substantially the same as when the luminance reduction process is not performed.
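  • Step S111 can be sketched as follows: the first offset amount is added to every pixel of the 3D video and the second offset amount is subtracted from every pixel of the 2D video. The sketch assumes 8-bit luminance frames and per-column offset arrays such as those from the previous sketch; clipping to the 0 to 255 range is an assumption needed to keep the values representable.

```python
import numpy as np

def apply_offsets(luma_3d, luma_2d, off1, off2):
    """Step S111: add the first offset to the 3D pixels and subtract the second
    offset from the 2D pixels, keeping the result inside the 0-255 range."""
    out_3d = np.clip(luma_3d.astype(np.int16) + np.rint(off1).astype(np.int16), 0, 255)
    out_2d = np.clip(luma_2d.astype(np.int16) - np.rint(off2).astype(np.int16), 0, 255)
    return out_3d.astype(np.uint8), out_2d.astype(np.uint8)
```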
  • FIG. 6 shows the luminance values before and after executing the luminance increase process: FIG. 6A shows the luminance values before execution, and FIG. 6B shows the luminance values after execution.
  • FIG. 7 shows the luminance values before and after executing the luminance increase process and the luminance reduction process: FIG. 7A shows the luminance values before execution, and FIG. 7B shows the luminance values after execution. Here, the change of the offset amounts by the boundary process is not considered.
  • FIG. 8A shows the luminance values before and after execution of the boundary process, and FIG. 8B shows the corresponding first offset amount and second offset amount.
  • FIGS. 8A and 8B show the case where, in the boundary process, the first offset amount and the second offset amount are changed so that the range of pixels whose luminance values are changed is constant (the second example of the boundary process described above).
  • the apparent brightness of the 3D video attenuates according to the transmittance of the barrier.
  • In contrast, because the barrier is turned off in the 2D region 202, the apparent brightness of the 2D video is attenuated only slightly. For this reason, when the input video is displayed as it is, even if the 3D video and the 2D video have the same luminance values, the 3D region 201 may appear darker than the 2D region 202.
  • Brightness adjustment using the backlight is generally performed in units of predetermined rectangular areas (for example, 16 × 16 pixels) or over the entire surface. For this reason, it is difficult to increase the apparent brightness of only the arbitrarily set 3D region 201.
  • To adjust the backlight in units of pixels, a configuration capable of controlling the amount of light per pixel would be necessary, but such a configuration is very expensive and difficult to realize.
  • In contrast, with the method of adjusting the brightness of the video signal itself, it is possible to adjust the brightness in units of pixels while suppressing an increase in cost.
  • In the above-described embodiment, the luminance increase process for increasing the luminance value of the 3D video and the luminance reduction process for decreasing the luminance value of the 2D video are executed in an appropriate combination. It is therefore possible to reduce the sense of incongruity caused by the difference in brightness between the 3D region 201 and the 2D region 202, that is, the impression that the 3D video looks dark. Furthermore, since it is the brightness of the video signal that is adjusted, the brightness of the arbitrarily set 3D region 201 can be adjusted while suppressing an increase in cost.
  • In addition, since the first offset amount is determined according to the distribution of luminance values of the 3D video, the sense of incongruity can be reduced while maintaining the video quality (luminance gradation and the like).
  • Similarly, since the second offset amount is determined according to the distribution of luminance values of the 2D video, the sense of incongruity can be reduced while maintaining the video quality (luminance gradation and the like).
  • In the above-described embodiment, the first offset amount and the second offset amount are set in consideration of all of the luminance increase process, the luminance reduction process, and the boundary process, and the luminance values of the 3D video and the 2D video are then changed collectively; however, the present invention is not limited to this. The luminance values of the 2D video and the 3D video may be changed directly, without setting offset amounts.
  • FIG. 9 is a flowchart showing the processing procedure of the video processing method when only the luminance increase processing is executed.
  • The boundary process need not always be performed; it may be configured to be performed only when the luminance values are not sufficiently adjusted by the luminance increase process and the luminance reduction process alone.
  • Here, the case where the luminance values are not sufficiently adjusted is the case where the difference between the average luminance value of the 3D video after the luminance increase and the average luminance value of the 2D video after the luminance reduction is outside the first range.
  • In the above-described embodiment, it is determined that it is appropriate to increase the luminance value of the 3D video when the number of pixels whose luminance values fall within a predetermined range including the maximum value is not more than 10% of the total number of pixels of the 3D video, but the present invention is not limited to this.
  • the range (predetermined range) of pixel values used for determination can be arbitrarily set.
  • the ratio of the number of pixels is not limited to 10%, and can be arbitrarily set, for example, 5%.
  • Similarly, in the above-described embodiment, it is determined that it is appropriate to reduce the luminance value of the 2D video when the number of pixels whose luminance values fall within a predetermined range including the minimum value is not more than 10% of the total number of pixels of the 2D video, but the present invention is not limited to this.
  • the range of pixel values (predetermined range) used for the determination and the ratio of the number of pixels can be arbitrarily set.
  • In the above-described embodiment, the first offset amount is set according to the first difference and the second offset amount is set according to the second difference; however, the first offset amount and the second offset amount may be fixed values.
  • This disclosure is applicable to a video processing apparatus that performs 3D2D mixed display.
  • the present disclosure is applicable to a liquid crystal television, an organic EL television, a game machine, etc. that perform 3D2D mixed display.
  • the present disclosure can be applied as a video processing method in a video processing apparatus that performs 3D2D mixed display.

Abstract

The invention relates to a video processing device (100) that mixes and displays a three-dimensional video and a two-dimensional video on the same screen, said video processing device being provided with a 3D signal level control unit (104) that performs a luminance increase process in which the luminance value of the three-dimensional video is increased when a first difference, which is the difference between the luminance value of the three-dimensional video and the luminance value of the two-dimensional video, is outside a prescribed first range.
PCT/JP2013/001346 2013-03-05 2013-03-05 Dispositif et procédé de traitement vidéo WO2014136140A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2013/001346 WO2014136140A1 (fr) 2013-03-05 2013-03-05 Dispositif et procédé de traitement vidéo

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2013/001346 WO2014136140A1 (fr) 2013-03-05 2013-03-05 Dispositif et procédé de traitement vidéo

Publications (1)

Publication Number Publication Date
WO2014136140A1 true WO2014136140A1 (fr) 2014-09-12

Family

ID=51490712

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/001346 WO2014136140A1 (fr) 2013-03-05 2013-03-05 Dispositif et procédé de traitement vidéo

Country Status (1)

Country Link
WO (1) WO2014136140A1 (fr)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0973049A (ja) * 1995-06-29 1997-03-18 Canon Inc 画像表示方法及びそれを用いた画像表示装置
WO2006054518A1 (fr) * 2004-11-18 2006-05-26 Pioneer Corporation Dispositif d’affichage 3d
JP2006229818A (ja) * 2005-02-21 2006-08-31 Sharp Corp 立体画像表示装置
WO2010007787A1 (fr) * 2008-07-15 2010-01-21 Yoshida Kenji Système d'affichage d'images vidéo 3d à l'oeil nu, afficheur de ce type, machine de jeux de divertissement et feuille barrière à parallaxe

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018211854A1 (fr) * 2017-05-19 2018-11-22 オリンパス株式会社 Dispositif d'endoscope 3d et dispositif de traitement vidéo 3d
US10966592B2 (en) 2017-05-19 2021-04-06 Olympus Corporation 3D endoscope apparatus and 3D video processing apparatus
WO2020218018A1 (fr) * 2019-04-22 2020-10-29 株式会社ジャパンディスプレイ Dispositif d'affichage


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13876792

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13876792

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP