JP2016001326A - Image display device


Info

Publication number
JP2016001326A
Authority
JP
Japan
Prior art keywords
image
display
microlens
plurality
display device
Legal status
Pending
Application number
JP2015153729A
Other languages
Japanese (ja)
Inventor
Toru Iwane (岩根 透)
Original Assignee
Nikon Corp (株式会社ニコン)
Priority to JP2010137391
Application filed by Nikon Corp (株式会社ニコン)
Priority to JP2015153729A
Publication of JP2016001326A


Abstract

PROBLEM TO BE SOLVED: To enable display of a stereoscopic image without using a refractive index distribution lens. SOLUTION: An image display device comprises: input means for inputting image data in which one pixel data is formed based on a plurality of image signals output from a plurality of imaging pixels arranged corresponding to a plurality of different photographic microlenses; generating means 106 for generating display image data having three-dimensional information using the image data; display means 110 in which a plurality of display pixels are arranged two-dimensionally and which emits light fluxes from the plurality of display pixels according to the display image data; and a microlens array in which a plurality of microlenses that combine the light fluxes emitted from the plurality of display pixels to form a three-dimensional image are arranged two-dimensionally.

Description

  The present invention relates to an image display device.

  Conventionally, a display device that displays a three-dimensional image by an integral system (integral photography) is known (for example, Patent Document 1).

JP 2008-216340 A

  However, the conventional display device projects and displays an image captured as a three-dimensional image, and thus has a problem that it is necessary to use a refractive index distribution lens or the like in order to display the image as an erect image.

  The image display device according to the first aspect comprises: input means for inputting image data in which one pixel data is generated based on a plurality of image signals output from a plurality of imaging pixels arranged corresponding to a plurality of different photographing microlenses; generation means for generating display image data having three-dimensional information based on the image data; display means in which a plurality of display pixels are arranged two-dimensionally and which emits light beams from the plurality of display pixels according to the display image data; and a microlens array in which a plurality of microlenses that combine the light beams emitted from the plurality of display pixels to form a three-dimensional image are arranged two-dimensionally.

  According to the present invention, an image of a subject having a three-dimensional shape can be displayed as a three-dimensional image in the air.

FIG. 1 is a diagram illustrating the configuration of a digital camera according to an embodiment of the present invention.
FIG. 2 is a diagram showing an example of the arrangement of the microlenses and the image sensor.
FIG. 3 is a diagram explaining the positional relationship between the microlenses and the base pixels in the embodiment.
FIG. 4 is a diagram explaining the principle of generating a composite image.
FIG. 5 is a diagram showing the relationship between the integration region for generating a composite image and the imaging pixels.
FIG. 6 is a diagram showing an example of the positional relationship of the imaging pixels that output image signals to be integrated with the base signal.
FIG. 7 is a diagram showing an example of the relationship between an annular zone and the microlenses.
FIG. 8 is a diagram illustrating the structure of the display device in the embodiment.
FIG. 9 is a diagram showing an example of the display pixels to which the image signals output from the imaging pixels are assigned.
FIG. 10 is a diagram explaining the optical cross sections where the light beam emitted from a light spot is cut by the light-receiving surfaces of the imaging pixels.
FIG. 11 is a diagram explaining the relationship between the microlenses and the optical cross sections.
FIG. 12 is a diagram explaining the relationship between the microlenses and the optical cross sections.
FIG. 13 is a diagram explaining the optical cross sections when the area division is developed on the base microlens.
FIG. 14 is a diagram explaining the divided regions when the light spot is decentered with respect to the pseudo optical axis of the base microlens.
FIG. 15 is a diagram explaining the depth of the displayed aerial image.
FIG. 16 is a diagram showing the relationship between the focal position and the surface of the display pixels.

  The digital camera according to the present embodiment is configured to be able to generate image data in which an arbitrary focal position is set. When a subject having a three-dimensional shape is photographed by this digital camera, the generated image data includes information about the three-dimensional shape (three-dimensional information). The digital camera of the present embodiment displays an image corresponding to image data including such stereoscopic information so that the user can observe it as a three-dimensional stereoscopic image (aerial image). Details will be described below.

  FIG. 1 is a diagram illustrating the configuration of a digital camera according to an embodiment. The digital camera 1 is configured so that an interchangeable lens 2 having a photographing lens L1 can be attached and detached. The digital camera 1 includes an imaging unit 100, a control circuit 101, an A/D conversion circuit 102, a memory 103, an operation unit 108, a memory card interface 109, and a display device 110. The imaging unit 100 includes a microlens array 12, in which a large number of microlenses 120 are arranged two-dimensionally, and an imaging element 13. In the following description, the z axis is set parallel to the optical axis of the photographic lens L1, and the x axis and the y axis are set orthogonal to each other in a plane orthogonal to the z axis.

  The taking lens L1 is composed of a plurality of optical lens groups, and forms an image of a light flux from a subject near the focal plane. In FIG. 1, the taking lens L1 is represented by a single lens for convenience of explanation. Behind the photographic lens L1, a microlens array 12 and an image sensor 13 are sequentially arranged in a two-dimensional manner in a plane perpendicular to the optical axis. The image sensor 13 is configured by a CCD or CMOS image sensor including a plurality of photoelectric conversion elements. The image sensor 13 captures a subject image formed on the imaging surface, and outputs a photoelectric conversion signal (image signal) corresponding to the subject image to the A / D conversion circuit 102 under the control of the control circuit 101. The details of the imaging unit 100 will be described later.

  The A/D conversion circuit 102 is a circuit that performs analog processing on the image signal output from the image sensor 13 and then converts it into a digital image signal. The control circuit 101 includes a CPU, a memory, and other peripheral circuits. Based on a control program, the control circuit 101 performs predetermined calculations using signals input from the units constituting the digital camera 1, and sends control signals to the units of the digital camera 1 to control the photographing operation. Further, as will be described later, the control circuit 101 determines the aperture value of the composite image selected by the user based on the operation signal input from the operation unit 108 in response to operation of the aperture value input button.

  The control circuit 101 functionally includes an image integration unit 105, an image pattern generation unit 106, and a display control unit 107. The image integration unit 105 generates composite image data from the image signal using a composite pixel affiliation table corresponding to the aperture value of the composite image determined according to the operation of the aperture value input button. As will be described later, the image pattern generation unit 106 generates, from the composite image data generated by the image integration unit 105, display image data for displaying an aerial image having three-dimensional information on the display device 110 described later. The display control unit 107 controls driving of the display device 110, outputs the display image data generated by the image pattern generation unit 106 to the display device 110, and causes it to display an aerial image having the corresponding three-dimensional information. Details of the image integration unit 105 and the image pattern generation unit 106 will be described later.

  The memory 103 is a volatile storage medium used to temporarily store the image signal digitized by the A/D conversion circuit 102, as well as data during and after image processing, image compression processing, and display image data creation processing. The memory card interface 109 is an interface to which the memory card 109a can be attached and detached. The memory card interface 109 is an interface circuit that writes image data to the memory card 109a or reads image data recorded on the memory card 109a in accordance with the control of the control circuit 101. The memory card 109a is a semiconductor memory card such as a CompactFlash (registered trademark) or an SD card.

  The operation unit 108 receives a user operation and outputs various operation signals corresponding to the operation content to the control circuit 101. The operation unit 108 includes an aperture value input button, a power button, a release button, other setting menu display switching buttons, a setting menu determination button, and the like. The aperture value input button is operated by the user when inputting the aperture value F of the composite image. When the aperture value input button is operated by the user and the aperture value F is selected, the operation unit 108 outputs an operation signal to the control circuit 101.

  In the playback mode, the display device 110 displays, based on a command from the control circuit 101, display data created by the control circuit 101 from the image data recorded on the memory card 109a. The display device 110 also displays a menu screen for setting various operations of the digital camera 1. Details of the display device 110 will be described later.

  Next, the configuration of the imaging unit 100 will be described in detail. The imaging unit 100 includes the microlens array 12 and the imaging element 13 as described above. The microlens array 12 includes a plurality of microlenses 120 arranged in a two-dimensional manner. In the imaging device 13, a pixel array 130 that receives light that has passed through each of the microlenses 120 is arranged in an arrangement pattern corresponding to the microlens 120. Each pixel array 130 includes a plurality of photoelectric conversion elements 131 (hereinafter referred to as imaging pixels 131) arranged in a two-dimensional manner.

  FIG. 2A shows a plan view, in the XY plane, of the microlenses 120 arranged in the microlens array 12. As shown in FIG. 2A, the microlens array 12 has a plurality of microlenses 120 formed, for example, in a hexagonal shape in the XY plane and arranged in a honeycomb configuration. FIG. 2A shows some of the microlenses 120 among the plurality of microlenses 120 provided in the microlens array 12. FIG. 2B is a diagram for explaining the positional relationship between the microlens array 12 and the image sensor 13 in the optical axis direction (z-axis direction) of the photographing lens L1. As shown in FIG. 2B, the image sensor 13 is arranged at a position separated from the microlenses 120 by the focal length f of the microlens 120. That is, each pixel array 130 having a plurality of imaging pixels 131 is provided at a position separated by the focal length f from the microlens 120 corresponding to that pixel array 130. FIG. 2B shows some of the plurality of microlenses 120 provided in the microlens array 12, the plurality of pixel arrays 130 provided in the imaging element 13, and the plurality of imaging pixels 131.

Using the image signals output from the image sensor 13 having the above-described configuration, the image integration unit 105 creates composite image data. The image integration unit 105 combines an image signal (hereinafter referred to as a base point signal) output from a predetermined imaging pixel 131 (hereinafter referred to as a base pixel 132 (FIG. 3)) among the imaging pixels 131 included in the pixel array 130 provided corresponding to a certain microlens 120 with the image signals output from the imaging pixels 131 included in the pixel arrays 130 corresponding to the microlens 120 of the base pixel 132 and to the microlenses 120 provided in its vicinity. As a result, the image integration unit 105 generates a composite image signal corresponding to one pixel. The image integration unit 105 performs the above-described processing on all the base pixels corresponding to each microlens 120 and assembles the generated composite image signals to generate composite image data.

  The image integration unit 105 refers to the composite pixel affiliation table when generating the composite image signal as described above. The composite pixel affiliation table indicates at which position of the pixel array 130 corresponding to which microlens 120 each imaging pixel 131 that outputs an image signal to be combined with the base point signal is arranged. Hereinafter, the process in which the image integration unit 105 generates a composite image signal using the image signals output from the imaging pixels 131 will be described.

  FIG. 3 shows a base pixel 132 provided corresponding to each microlens 120, that is, each pixel array 130. In FIG. 3, some of the microlenses 120 are shown, and the base pixel 132 of the plurality of imaging pixels 131 is representatively shown. In FIG. 3, the base pixel 132 is arranged corresponding to the pseudo optical axis of the microlens 120. In the present embodiment, the pseudo optical axis will be described as the intersection of the center of the light beam incident from the pupil of the photographing lens L1 and the main surface of the microlens 120. FIG. 3 shows a case where the geometric center of the microlens 120 and the pseudo optical axis coincide with each other. In the following description, the microlens 120 corresponding to the base pixel 132 is referred to as a base microlens 121.

-Generation of composite image signal-
First, the principle of generating a composite image when the subject image shown in FIG. 4A is formed at the apex of the microlens 120, that is, when the focal plane S exists at the apex of the microlens 120, will be described. In this case, the light beams r1 to r7 from the subject are incident on the imaging pixels 131 of the pixel array 130 provided corresponding to the microlens 120. The image integration unit 105 integrates the image signals output from the hatched imaging pixels 131 among the imaging pixels 131 shown in FIG. 4A, thereby generating a composite image signal corresponding to one pixel of the composite image data. The image integration unit 105 generates composite image data by performing this process on the pixel arrays 130 corresponding to all the microlenses 120.

  Next, the principle when a composite image signal is generated for an image of a subject formed on a certain focal plane (imaging plane) will be described. As shown in FIG. 4B, when the focal plane S exists at a position away from the apex of the microlens 120, the light beams r1 to r5 from the subject are incident on a plurality of different microlenses 120. For this reason, in order to generate a composite image signal, the image integration unit 105 also needs to use image signals from the imaging pixels 131 arranged corresponding to the microlenses 120 arranged in the vicinity of the base microlens 121. In FIG. 4B, the chief rays of the light beams r1 to r5 are shown as representatives.

The image integration unit 105 integrates all the image signals output from the imaging pixels 131 included in an integration region determined according to the aperture value F of the composite image, thereby generating a composite image signal corresponding to one pixel (image-forming region) of the composite image data. The integration region is represented by a circle having a diameter D. The diameter D of the integration region is expressed by the following equation (1) using the aperture value (aperture value of the composite image data) F determined according to the operation of the aperture value input button 108a and the focal length f of the microlens 120.
D = f / F (1)

FIG. 5 shows the relationship between the integration region Rs and the imaging pixels 131. As described above, the image integration unit 105 integrates the image signals output from all the imaging pixels 131 covered by the integration region Rs, which is represented as a circular region. In FIG. 5, the imaging pixels 131 that output the integrated image signals are indicated by hatching. Since the microlenses 120 are arranged adjacent to one another in the microlens array 12, the integration region Rs cannot be made larger than the diameter of each microlens 120 allowed by the arrangement of the microlenses 120. Therefore, the maximum aperture value Fmax allowed in the composite image data is expressed by the following equation (2). In equation (2), s indicates the size of one side of the imaging pixel 131. Further, the minimum aperture value Fmin in the composite image data is the F value of the microlens 120.
Fmax = f / s (2)
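As a rough numerical illustration of equations (1) and (2), the following Python sketch computes the integration-region diameter and the allowable range of the composite-image aperture value; all numerical values are assumptions chosen for illustration, not figures from this specification.

    f = 0.8    # microlens focal length in mm (assumed value)
    s = 0.005  # side length s of one imaging pixel in mm (assumed value)
    g = 0.1    # microlens diameter in mm (assumed value)

    F_min = f / g   # minimum aperture value Fmin: the F value of the microlens itself
    F_max = f / s   # equation (2): maximum aperture value Fmax allowed by the pixel size
    F = 16.0        # composite-image aperture value chosen with the aperture value input button

    if F_min <= F <= F_max:
        D = f / F   # equation (1): diameter of the integration region Rs
        print("integration region diameter D =", D, "mm")
    else:
        print("F is outside the range allowed by the microlens and the pixel size")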

The composite image signal obtained when the image integration unit 105 integrates the image signals output from the pixel array 130 including the base pixel 132, that is, the integrated value, is represented by the following expression (3). In expression (3), P indicates the output value of the image signal output from an imaging pixel 131. In addition, i in expression (3) indicates an imaging pixel 131 covered by the integration region Rs for the aperture value F of the composite image, and 0 indicates the microlens 120 arranged corresponding to the pixel array 130 including the base pixel 132, that is, the base microlens 121.
P = Σi∈F{i} Pi,0 (3)

As described above, the image integration unit 105 also performs integration using the image signals output from the imaging pixels 131 included in the pixel arrays 130 corresponding to the microlenses 120 provided in the vicinity of the base microlens 121. That is, for all the imaging pixels 131 included in the set F{i} of imaging pixels 131 covered by the integration region Rs determined by the aperture value F of the composite image, the image integration unit 105 integrates the output values of the pixel signals from the imaging pixels 131 arranged corresponding to the neighboring microlenses 120, including the base microlens 121. In this case, the output value P is expressed by the following equation (4). Note that t in equation (4) represents the neighboring microlens 120, including the base microlens 121.
P = Σi∈F{i} Pi,t (4)
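The integration in equations (3) and (4) can be sketched as follows; the data layout (a dictionary keyed by microlens and pixel indices) and the function name are illustrative assumptions, not structures defined in this specification.

    # P[(t, i)] = output value of imaging pixel i under microlens t (illustrative data layout)
    def integrate_pixel(P, F_i, affiliation, base_lens=0):
        """Sum pixel outputs over the integration region Rs.

        F_i         : indices of the imaging pixels covered by the integration region
        affiliation : composite pixel affiliation table mapping i -> microlens t
        base_lens   : index of the base microlens, used for equation (3)
        """
        # Equation (3): all contributing pixels lie under the base microlens.
        total_eq3 = sum(P.get((base_lens, i), 0.0) for i in F_i)
        # Equation (4): each pixel i is read from the neighbouring microlens affiliation[i].
        total_eq4 = sum(P.get((affiliation[i], i), 0.0) for i in F_i)
        return total_eq3, total_eq4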

  FIG. 6 shows the relationship between the imaging pixels 131 that output the image signals used when the image integration unit 105 generates one composite image signal, the base microlens 121, and the adjacent microlenses 120a to 120f. In FIG. 6, the imaging pixels 131 that do not output image signals used for generating the composite image signal are omitted. When the imaging pixels 131 dispersed among the microlenses 120a to 120f adjacent to the base microlens 121 shown in FIG. 6 are collected, they make up the plurality of imaging pixels 131 covered by the region defined by the aperture value F of the composite image shown in FIG. 5.

When the image integration unit 105 performs the above-described processing and integrates the image signals, it is important to know at which position of the pixel array 130 corresponding to which microlens 120 each imaging pixel 131 that outputs an image signal to be added to the base point signal is arranged. Therefore, a table indicating to which of the microlenses 120a to 120f each imaging pixel 131 denoted by i in equations (3) and (4) corresponds, that is, a table indicating the dispersion of the imaging pixels 131, is stored in a predetermined storage area as the composite pixel affiliation table. The image integration unit 105 then refers to the composite pixel affiliation table when generating the composite image signal. The composite pixel affiliation table is represented by the following equation (5).
t = Td(i) (5)

Hereinafter, the principle of creating the composite pixel affiliation table will be described.
FIG. 10 shows the optical cross sections LFD of a light beam LF that is emitted from a light spot LP and cut by the light-receiving surfaces of the imaging pixels 131 in the microlens array 12. As shown in FIG. 10, the spread angle of the light beam LF spreading from the light spot LP is limited by the preceding imaging lens L1. Therefore, the light beam LF incident on each microlens 120 does not protrude outside the covered region of that microlens 120 (in FIG. 10, the optical cross sections LFDc and LFDe are drawn as partially protruding outside the covered region). This can also be explained by the fact that the light-receiving surface of the imaging pixel 131 is optically conjugate with the pupil of the imaging lens L1. When photographing is performed through the imaging lens L1, an image of the photographing pupil, that is, a light boundary, is formed in the region covered by the microlens 120, and the light beam LF is not incident outside it.

  The description will proceed on the assumption of the above points. In the microlens array 12 of FIG. 10, if the amounts of light incident on the imaging pixels 131a to 131e corresponding to the optical cross sections LFDa to LFDe of the light beam LF (collectively, LFD) are integrated, the total radiation amount of the portion of the light beam LF from the light spot LP that is limited by the pupil of the photographing lens L1 is obtained. Therefore, when integrating the image signals, the image integration unit 105 only has to calculate the optical cross sections LFD on the light-receiving element surfaces of the imaging pixels 131 with respect to the z-axis coordinate of the light spot LP. Conversely, if display elements are provided and light is emitted from each display element corresponding to each optical cross section LFD of the light beam LF, the light beam LF always travels in the same direction as at incidence, so the light spot LP becomes the convergence point of the light beam LF.

  As described above, the angle of the light beam LF spreading from the light spot LP is determined by the pupil of the photographing lens L1, that is, the F value of the imaging lens L1. When the imaging lens L1 is not present as in a display system, the maximum aperture (minimum F) is defined by the F value of the microlens 120. Therefore, if only the central part of the covering region of the microlens 120 is used, the opening can be limited.

With reference to FIG. 11, how many microlenses 120, and which ones, correspond to the optical cross sections LFD will be described by projecting the spread of the light beam LF from the light spot LP onto the microlenses 120. Note that FIG. 11 shows a case where the microlenses 120 are in a square array for convenience of explanation. In FIG. 11, the spread of the light beam LF from the light spot LP is shown for the cases where the position of the light spot LP in the z-axis direction is the focal length f of the microlens 120 and its double, 2f. In FIG. 11, the spread of the light beam LF when the position of the light spot LP is f is indicated by a broken line, and the case of 2f by a one-dot chain line. When the light spot LP is located at the focal length f of the microlens 120, the spread of the light beam LF is defined by the microlens 120 (the optical cross section LFD is a circle, but the microlens 120 is treated optically as extending to the edges of the square), so the light beam LF is incident on one microlens 120. In this way, the microlens 120 corresponding to one light spot LP is determined.

  When the position of the light spot LP is the focal length f of the microlens 120, the light beam LF spreads as light of a circular opening over the entire region immediately below the microlens 120. For this reason, it is only necessary to select image signals from all the imaging pixels 131 included in a circle inscribed in the square area. When the absolute value of the position of the light spot LP is smaller than the focal length f, the light beam LF spreads without converging in the region immediately below the microlens 120. However, since the spread angle of the incident light beam LF is limited, the light section LFD remains in the covered region.

  Here, the case where the position of the light spot LP is 2f will be described. FIG. 12 shows the microlenses 120 related to this case. As shown in FIG. 12A, the related microlenses 120 are the base microlens 121 itself and the eight microlenses 120 adjacent to it. When the restriction of the aperture by the microlens 120 is taken into account, the optical cross section LFD exists in the covered region indicated by the oblique lines in FIG. 12A. In this case, the optical cross section LFD formed by each microlens 120 is the region indicated by the oblique lines in FIG. 12B.

  As shown in FIG. 12B, the covering region of one base microlens 121 is divided and distributed to the adjacent microlenses 120. The total area obtained by integrating the divided and distributed covering regions (partial regions) equals the aperture area of one microlens 120. The size of the entire area of the optical cross section LFD is therefore the same for any position of the light spot LP, so when calculating the total area by adding the partial areas, it suffices to decide the microlens 120 to which each partial area belongs.

  FIG. 11 shows the relationship between the position of the light spot LP and the magnification, that is, the number of microlenses 120 adjacent to the base microlens 121, and this is applied to a virtual aperture region. In the present embodiment, a method is used in which an aperture region is divided by an array of microlenses 120 reduced in magnification, and fragments of the aperture region are arranged at the same position in the microlens 120 defined by this. A case will be described as an example where a square circumscribing the opening area is reduced by a magnification of 2 and the opening area is divided (area division) by the arrangement of the microlenses 120.

FIG. 13 shows the optical cross sections LFD when the above-described area division is developed on the base microlens 121. When the same area division is performed according to the magnification, a pattern of the optical cross sections LFD with respect to the magnification, that is, with respect to the light spot LP, is obtained. Specifically, when the diameter of the microlens 120 (the size of one side of the microlens) is g, the aperture region is divided by a grid having a width of g/m. The magnification can be expressed as the ratio m = y/f between the height (position) y of the light spot LP and the focal length f of the microlens. The ratio m can also take a negative sign; when the sign of m is negative, the light spot LP is assumed to be on the image sensor 13 side of the microlens 120.
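The area division described above can be sketched as follows; the square-grid treatment of the microlens array, the sampling density, and all names are illustrative assumptions.

    import math

    def divide_aperture(g, f, y, samples=50):
        """Divide the aperture region by a grid of width g/m, where m = y / f.

        Returns a mapping from microlens offset (in lens pitches) to the fraction of
        sampled aperture points that fall on that microlens (illustrative sketch only)."""
        m = y / f          # magnification; may be negative (light spot on the sensor side)
        cell = g / m       # grid width used to divide the aperture region
        counts = {}
        for ix in range(samples):
            for iy in range(samples):
                # sample points inside the circular aperture of diameter g
                x = (ix + 0.5) / samples * g - g / 2
                yp = (iy + 0.5) / samples * g - g / 2
                if math.hypot(x, yp) > g / 2:
                    continue
                lens = (round(x / cell), round(yp / cell))   # microlens this fragment belongs to
                counts[lens] = counts.get(lens, 0) + 1
        total = sum(counts.values())
        return {lens: n / total for lens, n in counts.items()}

For y = 2f (m = 2) this sketch distributes the aperture over the base microlens and its eight neighbours, matching the situation of FIG. 12.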

  In the above example, it was assumed that the light spot LP exists on the pseudo optical axis, which is the lens central axis of a certain microlens 120; however, there is no problem in the calculation even if the light spot is actually decentered. If calculation were possible only at the lens centers, the two-dimensional resolution of the composite image would equal the number of microlenses 120, which is usually not sufficient. The reason is that if the number of imaging pixels 131 covered by one microlens 120 is 100, the resolution of the composite image is 1/100 of the number of imaging pixels. For this reason, in order to obtain a composite image of 1 million pixels, 100 million imaging pixels 131 would be required. Therefore, synthesis at decentered positions is performed so that a plurality of light spots LP can be accommodated within one microlens 120.
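The resolution argument above is simple arithmetic; a short sketch with the figures quoted in the text:

    pixels_per_microlens = 100          # imaging pixels covered by one microlens (figure from the text)
    composite_pixels_wanted = 1_000_000 # desired composite-image resolution (1 million pixels)
    imaging_pixels_needed = composite_pixels_wanted * pixels_per_microlens
    print(imaging_pixels_needed)        # 100,000,000 imaging pixels 131 would be required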

  Since the product of the area covered by one microlens 120 and the number of microlenses 120 is substantially equal to the total number of imaging pixels 131, performing synthesis based on each of a plurality of decentered points within one microlens 120 is equivalent to using the image signals from the imaging pixels 131 in a superimposed manner. That is, the light beams LF from the individual decentered light spots LP are superimposed on the imaging pixels 131. However, when the magnification is 1, this calculation is merely an interpolation operation and does not substantially contribute to improving the resolution. This indicates that if the image is formed near the apex of the microlens 120, information in the depth direction is optically lost.

  FIG. 14 illustrates the divided regions for a light spot LP decentered to the left with respect to the pseudo optical axis of the base microlens 121. Described below is the case where the height (position) of the light spot LP is 2f and the light spot is decentered from the center of the base microlens 121 (with lens diameter g), that is, from the pseudo optical axis, by p in the left direction in FIG. 14. In FIG. 14, a point O1 indicates the decentered light spot LP, and a point O2 indicates the pseudo optical axis. In this case, by shifting the division pattern of the microlens 120 shown in FIG. 13 by p in the right direction in the drawing and dividing the aperture region, the divided regions shown in FIG. 14 are obtained.

  If the microlens 120 is divided into 16, with the coordinates of the lens center (pseudo optical axis) at (0, 0), the light-spot positions take the values −g/2, −g/4, 0, g/4, g/2 in each direction; by integrating, for each such position pattern, the regions divided by that pattern and the total of the whole area, 16 light spot groups can be obtained for one microlens 120.
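A sketch of generating the decentered base points described above, assuming a 4 × 4 subdivision of each microlens; the exact sampling convention is an assumption made for illustration.

    def eccentric_base_points(g, n=4):
        """Generate n*n decentered light-spot positions inside one microlens of diameter g.

        With n = 4 this yields the 16 light spot groups mentioned in the text;
        offsets run from -g/2 in steps of g/n (sampling convention assumed)."""
        offsets = [-g / 2 + k * g / n for k in range(n)]     # e.g. -g/2, -g/4, 0, g/4
        return [(dx, dy) for dx in offsets for dy in offsets]

    # Each decentered point (p, q) shifts the division pattern of FIG. 13 by (-p, -q)
    # before the aperture fragments are assigned to microlenses.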

-Creation processing of composite pixel affiliation table-
The image integration unit 105 refers to the composite pixel affiliation table when integrating image signals. As described above, the composite pixel affiliation table determines at which position of the pixel array 130 corresponding to the base microlens 121 or to a microlens 120 provided in its vicinity each imaging pixel 131 that outputs an image signal to be combined with the base point signal is arranged.

  When the focus position y of the composite image and the aperture value F (depth of field) of the composite image are determined, the image integration unit 105 creates the composite pixel affiliation table for the imaging pixels 131 that output the image signals to be combined with the base point signal. As described above, which image signal from which imaging pixel 131 corresponding to which microlens 120 is added to the base point signal is determined by the focal position of the composite image.

  FIG. 6A shows a case where the focus position (focal plane) y of the composite image exists on the subject side of the microlens array 12. FIG. 6B shows a case where the focus position (focal plane) y of the composite image exists on the image sensor 13 side of the microlens array 12. As shown in FIGS. 6A and 6B, for the imaging pixels 131 corresponding to the microlens 120a, the arrangement of the imaging pixels 131 that output image signals integrated with the base point signal differs according to the position of the focal plane. The same applies to the other microlenses 120b to 120f and to the base microlens 121.

Hereinafter, the composite pixel affiliation table creation process by the image integration unit 105 will be described in detail. It is assumed that the focal plane of the composite image exists at a distance y from the microlens array 12, that is, the focal distance is y. Further, the light beam passing through the pseudo optical axis of the nth microlens 120 from the base microlens 121 is incident at the position of distance x from the pseudo optical axis of the base microlens 121, as expressed by the following formula (6). Here, d indicates the arrangement pitch of the microlenses 120.
x = fnd / y (6)

Considering that each imaging pixel 131 receives the light beam formed by its corresponding microlens 120, the light from the subject at the focal position y of the composite image illuminates the imaging surface of the image sensor 13 through each microlens 120 with a light width l expressed by the following equation (7).
l = fd / y (7)
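Equations (6) and (7) can be evaluated together as in the following sketch; f, d, and y are illustrative values, and n runs over the neighbouring microlenses.

    f = 0.8   # microlens focal length in mm (assumed)
    d = 0.1   # microlens arrangement pitch in mm (assumed)
    y = 5.0   # focal position of the composite image in mm (assumed)

    for n in range(1, 4):
        x = f * n * d / y   # equation (6): incidence position measured from the base pseudo optical axis
        l = f * d / y       # equation (7): width of the annular zone (independent of n)
        print(n, round(x, 4), round(l, 4))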

  The light width l is represented by a ring-shaped region having a width l (hereinafter referred to as a ring zone) on the two-dimensional plane of the image sensor 13. Therefore, in the microlens 120 arranged at the nth position from the base microlens 121, the light beam defined by the aperture value F of the composite image enters the region indicated by the annular zone l. As shown in Expression (7), the width of the annular zone l decreases as the focal position y of the composite image increases.

In the present embodiment, the shape of each microlens 120 in the xy plane is a hexagon, as shown in FIG. 3, and the microlenses are arranged on the microlens array 12 in a honeycomb configuration. FIG. 7 shows the annular zone l1 for n = 1 and the annular zone l2 for n = 2 in the integration region Rs corresponding to the aperture value F of the composite image. As shown in FIG. 7, the annular zone l1 for n = 1 is divided by the base microlens 121 and the microlenses 120a to 120f into the partial regions Rpa to Rpg. That is, each of the partial regions Rpa to Rpg is covered by a different microlens 120. Therefore, the image integration unit 105 calculates the output values Pi,s of the image signals from the imaging pixels 131 included in each of the partial regions Rpa to Rpg of the annular zone l1. Similarly, the image integration unit 105 performs the integration over the integration region Rs, that is, over all the annular zones l.

  Regarding the base microlens 121 and the microlenses 120a to 120f, the relationship with the adjacent microlenses 120 is basically the same. Therefore, the image integration unit 105 determines, for each imaging pixel 131 included in each of the partial regions Rpa to Rpg constituting the annular zone l1, to which partial region Rp that imaging pixel 131 belongs.

  The diameter of the integration region Rs containing the imaging pixels 131 that output the image signals to be integrated for the base pixel 132 is D = f/F. Also, the arrangement pitch d of the microlenses 120 in the x-axis direction (horizontal direction), in other words the diameter of a circle inscribed in each hexagonal microlens 120, is assumed to be equal to the maximum value Dmax of the diameter of the integration region Rs. Furthermore, the focal position (focal length) of the composite image is set to y with the virtual curved surface of the microlens 120 as a reference. In this case, when the arrangement of the microlenses 120 in the microlens array 12, with their pitch d multiplied by the projection magnification f/y, is projected onto the integration region Rs, the regions into which the annular zone l is divided by the individual microlenses 120 correspond to the partial regions Rp. Therefore, the image integration unit 105 associates the position of each imaging pixel 131 included in a partial region Rp with the microlens 120 corresponding to that partial region Rp, and thereby creates the composite pixel affiliation table of equation (5) above. Note that the position of the microlens 120 corresponding to a partial region Rp is specified as a relative position based on the position of the base microlens 121.
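A minimal sketch of this table-creation step, under the simplifying assumption of a square microlens grid instead of the honeycomb arrangement; function and variable names are illustrative.

    import math

    def build_affiliation_table(f, F, d, y, pixel_pitch):
        """Return {pixel (ix, iy) relative to the base pseudo optical axis: microlens (nx, ny)}.

        f: microlens focal length, F: composite-image aperture value,
        d: microlens pitch, y: focal position of the composite image,
        pixel_pitch: imaging pixel size. Square-grid simplification of the honeycomb case."""
        D = f / F                  # equation (1): diameter of the integration region Rs
        proj_pitch = d * f / y     # microlens pitch projected with magnification f / y
        table = {}
        half = int(D / 2 / pixel_pitch) + 1
        for ix in range(-half, half + 1):
            for iy in range(-half, half + 1):
                px, py = ix * pixel_pitch, iy * pixel_pitch
                if math.hypot(px, py) > D / 2:
                    continue       # outside the integration region Rs
                # relative position of the microlens covering this partial region
                table[(ix, iy)] = (round(px / proj_pitch), round(py / proj_pitch))
        return table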

  The composite image data generated with reference to the composite pixel affiliation table as described above contains three-dimensional information of subjects having different focal positions, that is, three-dimensional shapes. The digital camera 1 according to the present embodiment generates two-dimensional display image data having three-dimensional information on the basis of the composite image data having the three-dimensional information generated as described above, and displays a display image corresponding to the display image data on the display device 110, which is configured to be capable of such display. The user then observes the three-dimensional display image as an aerial image via the display device 110.

  A display device 110 for displaying a display image including the three-dimensional information generated as described above will be described with reference to FIG. FIG. 8 is a diagram schematically showing the configuration of the display device 110 in the z-axis direction. As shown in FIG. 8A, the display device 110 includes a display device 111, a display microlens array 112, and a virtual image lens 113. The display 111 is constituted by, for example, a liquid crystal display having a backlight, an organic EL display, or the like, and has a plurality of display pixel arrays 114 arranged in a two-dimensional manner. Each of the plurality of display pixel arrays 114 has a plurality of display pixels 115 arranged two-dimensionally. The display pixel 115 is controlled by the display control unit 107 described above, and emits light corresponding to display image data as described later.

  The display microlens array 112 includes a plurality of display microlenses 116 that are arranged two-dimensionally. The display microlens array 112 is arranged on the z-axis direction user (observer) side at a position separated from the display pixel 115 by the focal length f ′ of the display microlens 116. Each display microlens 116 is arranged in an arrangement pattern corresponding to the plurality of display pixel arrays 114. Each display microlens 116 forms an image of light emitted from the display pixel 115 in accordance with image data on a predetermined image plane on the user (observer) side in the z-axis direction.

The virtual image lens 113 is configured by, for example, a large-diameter Fresnel lens or a planar lens using diffraction or the like, and has a size that covers the entire surface of the display 111 on the xy plane. The virtual image lens 113 is disposed at a position where an image displayed on the display 111 can be observed as a virtual image when the user observes the virtual image lens 113. That is, the virtual image lens 113 is disposed at a position where the image plane Q by the display microlens 116 described above is inward of the focal position P of the virtual image lens 113 in the z-axis direction. In other words, the virtual image lens 113 is arranged so that the image plane Q is located between the virtual image lens 113 and the focal position P of the virtual image lens 113.

  As shown in FIG. 8B, the positional relationship between the display microlens array 112 and the display pixels 115 in the z-axis direction of the display device 110 described above can be regarded as equivalent to the positional relationship, in the z-axis direction of the imaging unit 100 shown in FIG. 1, between the microlens array 12 and the imaging pixels 131. When subject light from a certain focal position S is incident on a plurality of imaging pixels 131 as shown in FIG. 4B, the image pattern generation unit 106 generates display image data so that the display pixels 115 emit light in an array pattern equivalent to the array pattern in which that light was incident on the imaging pixels 131. In this case, as shown in FIG. 8B, the light beams r1 to r5 from the display pixels 115 form an image at the focal position S' via the display microlens 116.

  When the correspondence relationship between the microlens 120 and the imaging pixel 131 represented by Expression (5) is reproduced by the display microlens 116 and the display pixel 115 for each pixel of the composite image data, the display 111 The emitted light forms an image at a focal position S ′ corresponding to a different focal position S in each pixel of the composite image data. As a result, a display image having three-dimensional information corresponding to the three-dimensional information of the composite image data is formed as an aerial image having a three-dimensional shape. In this case, in the aerial image, the actual distance in the depth direction of the subject is reproduced by being reduced in the display image while maintaining the sense of distance. That is, in the aerial image, the reciprocal of the actual distance of the subject is compressed. FIG. 8B also shows the chief rays of the light beams r1 to r5 from the display pixel 115.

The image pattern generation unit 106 generates display image data corresponding to the display image as described above, using the image signals output from the imaging pixels 131. At this time, the image pattern generation unit 106 determines, based on the composite pixel affiliation table, the display pixel 115 that is to emit light with an intensity corresponding to the image signal output from a given imaging pixel 131. In other words, the image pattern generation unit 106 assigns the image signal output from each imaging pixel 131 to the display pixel 115 arranged in correspondence with the arrangement position of that imaging pixel 131. However, the traveling direction of the light from the display pixel 115 is opposite to that at the time of photographing, so if the positional relationship between the microlens 120 and the imaging pixel 131 recorded in the composite pixel affiliation table were used as it is for the positional relationship between the display microlens 116 and the display pixel 115, the perspective of the observed aerial image would be reversed. For this reason, the image pattern generation unit 106 assigns the display pixel 115 at the position that is point-symmetric, with respect to the base microlens 121, to the position of the imaging pixel 131 recorded in the composite pixel affiliation table, that is, at the equivalent position. As a result, the image plane at the time of photographing is observed as an aerial image.

  First, the image pattern generation unit 106 detects the base microlens 121 of the imaging unit 100 corresponding to one display microlens 116 among the plurality of display microlenses 116. It is assumed that data indicating the correspondence between the display microlenses 116 and the base microlenses 121 is stored in a predetermined recording area in advance. Then, the image pattern generation unit 106 refers to the composite pixel affiliation table of the detected base microlens 121, and detects the imaging pixels 131 that output the image signals forming the composite image signal, together with their positions on the image sensor 13.

When the imaging pixels 131 and their positions have been detected, the image pattern generation unit 106 detects, based on the detected positions, where the display pixels 115 to which the image signals from those imaging pixels 131 are to be allocated are arranged with respect to the display microlens 116. Then, the image pattern generation unit 106 assigns the image signal output from each imaging pixel 131 to the detected display pixel 115. That is, the image pattern generation unit 106 assigns the display pixel 115a shown in FIG. 9B as the display pixel 115 that should emit light corresponding to the image signal from the imaging pixel 131a shown in FIG. 9A. When the arrangement between the microlenses 120 and the imaging pixels 131 and the arrangement between the display microlenses 116 and the display pixels 115 cannot be regarded as equivalent, the image pattern generation unit 106 assigns the image signal to the display pixel 115 arranged at a position normalized as a relative position from the pseudo optical axis of the display microlens 116. As a result, one pixel data of the display image data is constituted by the light beams r1 to r5 emitted from the plurality of display pixels 115.

  As described above, the display device 110 performs virtual image display. Therefore, the image pattern generation unit 106 generates display image data so as to be inverted in the vertical direction (y-axis direction) with respect to the composite image data. That is, the image pattern generation unit 106 assigns the image signal from the imaging pixel 131 to the display pixel 115 arranged at a position symmetrical in the vertical direction (y-axis direction) with respect to the pseudo optical axis of the display microlens 116. For example, the image pattern generation unit 106 assigns the image signal output from the imaging pixel 131b illustrated in FIG. 9A to the display pixel 115 illustrated in FIG. 9B.

  The image pattern generation unit 106 performs the above processing on all the display microlenses 116. If the image signals output from a plurality of imaging pixels 131 are assigned to the same display pixel 115, the image pattern generation unit 106 superimposes and adds the plurality of image signals. As a result, the image pattern generation unit 106 generates the display image data. The image pattern generation unit 106 outputs the generated display image data to the display device 110 via the display control unit 107.
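The assignment described in the preceding paragraphs can be sketched as follows: each imaging-pixel signal is mapped to a display pixel at the point-symmetric position with respect to the base microlens and mirrored in the y direction, and signals landing on the same display pixel are added. The array layout, index conventions, and names are illustrative assumptions, not the exact procedure of this specification.

    def generate_display_pattern(signals, affiliation, lens_to_display):
        """Accumulate display-pixel intensities from imaging-pixel signals.

        signals        : {(ix, iy): value} imaging-pixel outputs, indexed relative to the
                         pseudo optical axis of the base microlens (assumed layout)
        affiliation    : composite pixel affiliation table {(ix, iy): (nx, ny)} as above
        lens_to_display: maps a microlens offset to the matching display microlens offset
        """
        display = {}
        for (ix, iy), value in signals.items():
            if (ix, iy) not in affiliation:
                continue
            nx, ny = lens_to_display(affiliation[(ix, iy)])
            # point symmetry about the base microlens keeps the aerial image from being
            # depth-reversed, and a vertical (y) flip accounts for the virtual-image display;
            # point symmetry gives (-ix, -iy), and the y flip restores +iy
            dx, dy = -ix, iy
            # (the microlens offset may need the same mirroring; omitted here for brevity)
            key = (nx, ny, dx, dy)
            display[key] = display.get(key, 0.0) + value   # superimpose signals on the same pixel
        return display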

  When each display pixel 115 emits light based on the display image data, a three-dimensional image surface having a relief shape is formed by the display microlenses 116 according to the three-dimensional information. The three-dimensional image plane is projected to a predetermined size by the virtual image lens 113 and is observed by the user as a three-dimensional aerial image. In other words, the three-dimensional image plane formed by the display 111 and the display microlenses 116 is optically equivalent to the imaging plane obtained when a three-dimensional subject is photographed by the photographing lens L1. Therefore, the image of the three-dimensional object formed near the intended image plane is restored as a three-dimensional aerial image by the virtual image lens 113. As a result, the user can observe the image displayed on the display device 110 as a three-dimensional aerial image.

The digital camera 1 according to the embodiment described above provides the following operational effects.
(1) The image integration unit 105 generates composite image data having information on a plurality of focal positions using the image signals output from the image sensor 13, and the image pattern generation unit 106 generates display image data having three-dimensional information based on the composite image data. In the display 111, a plurality of display pixels 115 are two-dimensionally arranged, and the plurality of display pixels 115 emit light beams according to the display image data. In the display microlens array 112, a plurality of display microlenses 116 that combine the light beams emitted from the plurality of display pixels 115 to form a three-dimensional image are two-dimensionally arranged, and the three-dimensional image formed by the display microlenses 116 is configured to be observable. As a result, the user can observe a three-dimensional subject image as a three-dimensional aerial image on the screen without stereoscopic display that relies on the illusion of parallax between the right and left eyes, as in the stereo or lenticular methods. Since what is observed is not an illusion but an aerial image actually reproduced in three dimensions, adverse effects on the observer can be prevented. Furthermore, since observation can be performed without using dedicated glasses or the like, long-time observation is possible.

  Further, when the hologram method is used, the display redundancy is 1000:1 or more, so that, for example, in order to reproduce a stereoscopic image having a resolution of 100,000 pixels, a display device having 1 billion pixels or more is needed. In contrast, the display device 110 according to the present embodiment can display a three-dimensional aerial image with a number of pixels whose redundancy with respect to the image resolution is only about 100 to 1000 times. Furthermore, when an image captured as a three-dimensional image is projected and displayed, a three-dimensional aerial image can be reproduced with a simple configuration, without using a configuration such as a refractive index distribution lens to display the image as an erect image.

(2) Each of the plurality of display microlenses 116 is arranged so as to correspond to a plurality of display pixels 115, and the image pattern generation unit 106 generates the display image data so that one pixel of the three-dimensional image is formed by the light beams emitted from the plurality of display pixels 115 arranged corresponding to one display microlens 116. In other words, the image pattern generation unit 106 generates the display image data so that the arrangement relationship of the plurality of display pixels 115 that emit the light beams is equivalent to the arrangement relationship of the plurality of imaging pixels 131 corresponding to one pixel data of the composite image data. As a result, display image data having three-dimensional information can be generated simply by assigning the image signal output from each imaging pixel 131 to the corresponding display pixel 115.

(3) The virtual image lens 113 is arranged so that the three-dimensional image surface formed by the display microlenses 116 lies between the virtual image lens 113 and its focal position. Therefore, the two-dimensional display image data generated by the image pattern generation unit 106 can be observed as a three-dimensional aerial image with a simple configuration.

The digital camera 1 of the embodiment described above can be modified as follows.
(1) Instead of the display device 110 displaying the display image data generated by the image pattern generation unit 106, the image may be displayed, based on the composite image data generated by the image integration unit 105, on a monitor of an external display device (for example, a personal computer or a television) different from the digital camera 1. In this case, the external display device reads the composite image data generated by the digital camera 1 and recorded on the memory card 109a. The display device then performs processing similar to that of the image pattern generation unit 106 using the read composite image data, generates display image data, and outputs the display image data to the monitor.

  In this case, the monitor included in the display device needs to be configured so that a three-dimensional aerial image can be observed, similarly to the display device 110 of the embodiment. That is, as shown in FIG. 8, a microlens array in which a display device in which a plurality of display pixels are two-dimensionally arranged and a plurality of microlenses for imaging light beams from the display pixels are two-dimensionally arranged. And a virtual image lens for the user to observe as a virtual image a three-dimensional image formed by the microlens. Further, when the display device reads the multi-viewpoint image file from the digital camera 1, for example, an interface such as a LAN cable or wireless communication may be used.

(2) The display device included in the digital camera 1, or the external display device described above, may omit the virtual image lens 113. In this case as well, display image data is generated in the same manner as in the embodiment, except that the image pattern generation unit 106 does not perform the processing for generating the display image data inverted in the vertical direction (y-axis direction) with respect to the composite image data. The image plane displayed in this case is not a reproduction of a three-dimensional image in the strict sense; it is a reproduction of the image plane compressed near the focal point by the taking lens L1, just as at the time of photographing. Since the compression has a linear relationship with the reciprocal of the distance and with the focal length, a distant solid is hardly reproduced, whereas a near one is greatly exaggerated even at the same depth. When expressing an object in three dimensions closer to the actual subject, considering that the eye is an optical device and that the scene is unfolded on an image plane, the expressed object should be compressed more in depth than it actually is. Therefore, based on this idea, where the display device 110 would express the image plane PL1 indicated by the alternate long and short dash line in FIG. 15, it instead expresses the image plane PL2 compressed, as shown by the broken line, in the vicinity of the lens array. When the image plane PL1 appears at a location different from the actual position, the depth may be adjusted accordingly.

The display screen is generally larger than the imaging area of the image sensor 13 of the digital camera 1 that captured the image. For example, if the image sensor of the digital camera 1 is a so-called full-size sensor, its size is 35 × 24 (mm), which is less than 1/10 of a normal 20-inch TV monitor (40.6 × 30.5 cm). Even if the ratio is set to 10 times for convenience, as will be described later, if an image taken at a shooting magnification of 1/20 (a position of 1 m with a 50 mm lens) is displayed as it is, the overall magnification becomes 1/2. In a 3D image, unlike a 2D display, the display contents differ greatly between displaying a subject 10 times larger at twice the distance, displaying it at the same magnification and the same distance, and displaying it at 1/10 the size at 1/10 the distance; the latter has a relatively large depth and stereoscopic effect.

  Next, the depth of display will be described. The height in the depth direction is compressed by the display microlens array 112 provided in the display device 110. Let k be the ratio of the size of the display 111 to the size of the image sensor 13 of the digital camera 1 serving as the input member. Hereinafter, the case where the shooting magnification (the ratio of the size of the subject to the size of the subject image) exceeds k and the case where it is less than k will be described.

Let n be the shooting magnification. When the shooting magnification exceeds k, that is, when the subject is located farther than a certain distance from the digital camera 1, the subject is acquired two-dimensionally by the image sensor 13 at 1/n of its size, and is displayed on the display 111 at k/n of its size. Since n > k, the subject is displayed smaller than its actual size, that is, so as to appear as if it were located farther away than the display surface of the display 111. In the display device 110 of the present invention, the stereoscopic image is displayed near the surface of the display microlens array 112. When the distance to the display surface is d and the distance to the subject at the time of shooting is y, the magnification in the depth direction is expressed by the following equation (8).
(magnification in the depth direction) = (k / n)^2 (8)

Since the image is displayed at k/n times the size of the subject, the depth is, in other words, reduced to (k/n)^2 times, that is, compressed by a factor of (n/k)^2. If the shooting magnification is 50 and the size of the display 111 is 10 times that of the image sensor 13, the depth is compressed by a factor of 25: an object 20 cm deep becomes 80 μm deep on the image sensor of the digital camera 1, and the corresponding image surface height on the display 111 is 8 mm. However, when the F value of the display microlens array 112 is equal to the F value of the microlens array 12 used at the time of photographing with the digital camera 1, only the tenfold size of the display 111 takes effect, so the display 111 can obtain an image plane height of only about 800 μm.
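The worked example above can be checked with a few lines of arithmetic; the numbers come from the text, and the (k/n)^2 depth factor follows equation (8).

    n = 50          # shooting magnification (subject size / image size)
    k = 10          # display size relative to the image sensor size
    depth_object_mm = 200.0                       # object depth of 20 cm

    depth_on_sensor = depth_object_mm / n**2      # depth is compressed by 1/n^2 at capture
    depth_on_display = depth_on_sensor * k**2     # and re-expanded by k^2 on the display
    print(depth_on_sensor * 1000, "um on the sensor")    # -> 80 um
    print(depth_on_display, "mm on the display")         # -> 8 mm
    print("overall depth factor:", (k / n) ** 2)         # -> 1/25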

  Therefore, the F value of the display microlens 116 is increased to secure consistency. However, when the F value of the microlens 120 of the digital camera 1 is 4, it is impractical to set the F value of the display microlens 116 to 40. The display microlens 116 is therefore set to a value of about F = 8 or F = 16, for example. As a result, although the depth of the aerial image is somewhat reduced, the stereoscopic effect itself is not lost, because the three-dimensional effect in human perception is relative and qualitative.

The reason why increasing the F value of the display microlens 116 increases the depth of the aerial image is explained below. As shown by the relationship between the focal position and the surface of the display pixels 115 in FIG. 16, when the height of the optical image is y, the distance coordinate from the center (pseudo optical axis) of the display microlens 116 is x, the pitch of the display microlenses 116 is d, and the focal length is f, the relationship expressed by the following equation (9) holds.
y / (nd) = f / x ... (9)
Note that n is an integer indexing the neighboring lens: n = 1 denotes the immediately adjacent display microlens 116, and n = 2 denotes the next display microlens 116 beyond it.

  The above equation (9) holds equally for the digital camera 1 and for the display device 110. From the relationship of equation (9), the height y of the aerial image reproduced by the display 111 is proportional to the focal length f of the display microlens 116. When the display 111 is 10 times the size of the image sensor 13, the focal length f is also 10 times. Therefore, if the F values are equal, the aerial image, enlarged 10 times laterally in accordance with the 10-times-larger display 111, is reproduced with only 10 times the height y. Thus, to further increase the height y, the focal length of the display microlens 116 should be made 20 or 30 times longer, that is, the F value of the display microlens 116 should be increased 2 or 3 times.
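
A short sketch of relation (9), solved for y, shows the proportionality to the focal length f; the microlens pitch and pixel offset used below are assumptions chosen only for illustration.

# Relation (9) rearranged: y = n * d * f / x, so y grows in proportion to f.

def aerial_image_height(f_mm: float, d_mm: float, x_mm: float, n: int = 1) -> float:
    """Height y of the aerial image formed through the n-th neighbouring
    display microlens (pitch d, focal length f), for a display pixel at
    distance x from the pseudo optical axis."""
    return n * d_mm * f_mm / x_mm

d, x = 0.5, 0.1                       # assumed pitch and pixel offset in mm
for f in (2.0, 4.0, 6.0):             # a longer f means a larger F value
    print(f, aerial_image_height(f, d, x))   # y doubles, triples, ... with f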

Next, the case where the shooting magnification n is smaller than k (that is, n/k < 1, close-up photography) will be described. The stereoscopic effect when the subject is close differs somewhat from that of a distant subject. In the case of a small shooting magnification at a close distance, that is, close-up photography, the observer does not necessarily want the image to be displayed at actual size. For example, consider a close-up image of a bee: the observer does not expect the bee to be displayed at its real size on the display 111 or in the air. When a bee is displayed as an aerial image, it is more natural for it to appear about as large as a pigeon and at some distance. The reason is that, if displayed at actual size, the small bee could not be identified by the observer as, say, a horsefly or a bee; moreover, such subjects have customarily been presented as enlarged images.

  For example, if a subject is photographed with the digital camera 1 at unit magnification through the 50 mm photographing lens L1, the depth on the image plane is preserved as it is, because the magnification is 1. When an image captured at unit magnification is output to the display 111, which is 10 times the size, a bee about 20 cm in size is displayed at a position of about 500 mm. If the display 111 is observed from 1 m, the viewer perceives the depth roughly four times, so the stereoscopic effect is exaggerated.
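
The following rough sketch restates the close-up example numerically; the real size of the bee is an assumption, and the fourfold depth factor is taken from the description rather than derived.

# A rough numeric sketch of the close-up example above.

real_bee_mm = 20.0        # assumed real size of the bee, about 2 cm
k = 10                    # display is 10 times the sensor size
shooting_mag = 1.0        # unit-magnification (close-up) shooting

displayed_bee_mm = real_bee_mm * k * shooting_mag
print(displayed_bee_mm)   # 200 mm: the bee appears roughly pigeon-sized

viewing_distance_mm = 1000.0   # observer about 1 m from the display
image_position_mm = 500.0      # reproduced image position per the text
depth_exaggeration = 4         # factor stated in the description, not derived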

  As described above, the stereoscopic effect changes considerably with the magnification of the captured or displayed image. However, as noted above, the human sense of depth is vague, and it is the ordering of objects in depth, not the absolute value, that determines the perceived stereoscopic effect. In the present invention, the absolute value of the depth changes with the shooting magnification, but the relative depth relationship is fully preserved, so a clear stereoscopic effect can be provided to the observer. Furthermore, since this is fundamentally different from a stereo system, which presents complete stereoscopic images and obtains the stereoscopic effect from parallax, the load on human vision is small, and phenomena such as so-called 3D sickness are not caused.

  The present invention is not limited to the above-described embodiment as long as the characteristic features of the present invention are not impaired; other forms conceivable within the scope of the technical idea of the present invention are also included within the scope of the present invention. The embodiments and modifications described above may also be used in appropriate combination.

100 ... imaging unit, 101 ... control circuit,
105 ... image integration unit, 106 ... image pattern generation unit,
110 ... display device, 111 ... display,
112 ... display microlens array, 113 ... virtual image lens,
114 ... display pixel array, 115 ... display pixel,
116 ... display microlens

Claims (10)

  1. Input means for inputting image data in which one pixel data is generated based on a plurality of image signals output from a plurality of imaging pixels arranged corresponding to a plurality of different photographing microlenses;
    Generating means for generating display image data having three-dimensional information based on the image data;
    A plurality of display pixels arranged two-dimensionally, and display means for emitting a light beam from the plurality of display pixels according to the display image data;
    An image display device comprising: a microlens array in which a plurality of microlenses that combine light beams emitted from the plurality of display pixels to form a three-dimensional image are arranged two-dimensionally.
  2. The image display device according to claim 1,
    Each of the plurality of microlenses is arranged corresponding to the plurality of display pixels,
    The generating means generates the display image data so that one pixel of the three-dimensional image is formed by synthesizing the light beams emitted from the plurality of display pixels arranged corresponding to the plurality of microlenses. An image display device characterized by that.
  3. The image display device according to claim 2,
    The generation unit generates the display image data so that an arrangement relationship between the plurality of display pixels emitting the light flux is equivalent to an arrangement relationship between the plurality of imaging pixels corresponding to the one pixel data. An image display device characterized by that.
  4. The image display device according to claim 3,
    The generating means generates the display image data such that an arrangement relationship between the plurality of imaging pixels corresponding to the one pixel data and an arrangement relationship between the plurality of display pixels are point-symmetric with each other. An image display device characterized by the above.
  5. The image display device according to any one of claims 1 to 4,
    The generating means generates the display image data so that the three-dimensional image is reproduced in the vicinity of the microlenses and the magnification in the depth direction of the three-dimensional image is compressed relative to the magnification in the direction of the array plane of the microlenses. An image display device characterized by that.
  6. The image display device according to claim 5,
    The image display device, wherein the generation unit generates the display image data so that the magnification in the depth direction is the square of the magnification in the direction of the array plane of the microlenses.
  7. In the image display device according to any one of claims 1 to 6,
    The generation unit generates the display image data by normalizing an arrangement relationship of the plurality of imaging pixels corresponding to the one pixel data with reference to a pseudo optical axis of the microlens. Image display device.
  8. The image display device according to claim 7,
    An image display device, wherein an F value of the microlens is larger than an F value of the photographing microlens.
  9. In the image display device according to any one of claims 1 to 8,
    An image display device further comprising an observation optical system for observing the three-dimensional image formed by the microlens.
  10. The image display device according to claim 9,
    The image display apparatus, wherein the observation optical system includes a virtual image lens, and the virtual image lens is disposed so that the plane of the three-dimensional image is formed between the virtual image lens and its focal point.
JP2015153729A 2010-06-16 2015-08-03 Image display device Pending JP2016001326A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2010137391 2010-06-16
JP2010137391 2010-06-16
JP2015153729A JP2016001326A (en) 2010-06-16 2015-08-03 Image display device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2015153729A JP2016001326A (en) 2010-06-16 2015-08-03 Image display device

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
JP2011130067 Division 2011-06-10

Publications (1)

Publication Number Publication Date
JP2016001326A true JP2016001326A (en) 2016-01-07

Family

ID=55076917

Family Applications (2)

Application Number Title Priority Date Filing Date
JP2015153729A Pending JP2016001326A (en) 2010-06-16 2015-08-03 Image display device
JP2017242383A Active JP6593428B2 (en) 2010-06-16 2017-12-19 Display device

Family Applications After (1)

Application Number Title Priority Date Filing Date
JP2017242383A Active JP6593428B2 (en) 2010-06-16 2017-12-19 Display device

Country Status (1)

Country Link
JP (2) JP2016001326A (en)

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08289329A (en) * 1995-04-11 1996-11-01 Nippon Hoso Kyokai <Nhk> Stereoscopic image pickup device
JP2000308091A (en) * 1999-04-26 2000-11-02 Nippon Hoso Kyokai <Nhk> Stereoscopic image pickup device
US20030016444A1 (en) * 2001-07-13 2003-01-23 Brown Daniel M. Autostereoscopic display with rotated microlens and method of displaying multidimensional images, especially color images
JP2003075946A (en) * 2001-09-03 2003-03-12 Nippon Hoso Kyokai <Nhk> Stereoscopic image pickup unit
JP2003279894A (en) * 2002-03-22 2003-10-02 Takesumi Doi Multi-projection stereoscopic video display device
JP2004333691A (en) * 2003-05-02 2004-11-25 Nippon Hoso Kyokai <Nhk> Lens position detecting method and device
US20050088749A1 (en) * 1997-07-08 2005-04-28 Kremen Stanley H. Modular integral magnifier
JP2005173190A (en) * 2003-12-11 2005-06-30 Nippon Hoso Kyokai <Nhk> Stereoscopic image display apparatus and stereoscopic image pickup apparatus
JP2007004471A (en) * 2005-06-23 2007-01-11 Nikon Corp Image synthesis method and image pickup apparatus
JP2007071922A (en) * 2005-09-02 2007-03-22 Nippon Hoso Kyokai <Nhk> Stereoscopic image pickup device and stereoscopic image display device
JP2007514188A (en) * 2003-11-21 2007-05-31 ナノヴェンションズ インコーポレイテッド Micro optical security and image display system
WO2007092545A2 (en) * 2006-02-07 2007-08-16 The Board Of Trustees Of The Leland Stanford Junior University Variable imaging arrangements and methods therefor
JP2008515110A (en) * 2004-10-01 2008-05-08 ザ ボード オブ トラスティーズ オブ ザ レランド スタンフォード ジュニア ユニバーシティー Imaging apparatus and method
JP2008216340A (en) * 2007-02-28 2008-09-18 Nippon Hoso Kyokai <Nhk> Stereoscopic and planar images display device
JP2010050707A (en) * 2008-08-21 2010-03-04 Sony Corp Image pickup apparatus, display apparatus, and image processing apparatus
US20100328433A1 (en) * 2008-02-03 2010-12-30 Zhiyang Li Method and Devices for 3-D Display Based on Random Constructive Interference

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10319342A (en) * 1997-05-15 1998-12-04 Olympus Optical Co Ltd Eye ball projection type video display device
JP4103848B2 (en) * 2004-03-19 2008-06-18 ソニー株式会社 Information processing apparatus and method, recording medium, program, and display apparatus
JP4970798B2 (en) * 2006-01-25 2012-07-11 オリンパスメディカルシステムズ株式会社 Stereoscopic image observation device

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08289329A (en) * 1995-04-11 1996-11-01 Nippon Hoso Kyokai <Nhk> Stereoscopic image pickup device
US20050088749A1 (en) * 1997-07-08 2005-04-28 Kremen Stanley H. Modular integral magnifier
US7002749B2 (en) * 1997-07-08 2006-02-21 Kremen Stanley H Modular integral magnifier
JP2000308091A (en) * 1999-04-26 2000-11-02 Nippon Hoso Kyokai <Nhk> Stereoscopic image pickup device
US20030016444A1 (en) * 2001-07-13 2003-01-23 Brown Daniel M. Autostereoscopic display with rotated microlens and method of displaying multidimensional images, especially color images
JP2003075946A (en) * 2001-09-03 2003-03-12 Nippon Hoso Kyokai <Nhk> Stereoscopic image pickup unit
JP2003279894A (en) * 2002-03-22 2003-10-02 Takesumi Doi Multi-projection stereoscopic video display device
JP2004333691A (en) * 2003-05-02 2004-11-25 Nippon Hoso Kyokai <Nhk> Lens position detecting method and device
JP2007514188A (en) * 2003-11-21 2007-05-31 ナノヴェンションズ インコーポレイテッド Micro optical security and image display system
JP2005173190A (en) * 2003-12-11 2005-06-30 Nippon Hoso Kyokai <Nhk> Stereoscopic image display apparatus and stereoscopic image pickup apparatus
JP2008515110A (en) * 2004-10-01 2008-05-08 ザ ボード オブ トラスティーズ オブ ザ レランド スタンフォード ジュニア ユニバーシティー Imaging apparatus and method
JP2007004471A (en) * 2005-06-23 2007-01-11 Nikon Corp Image synthesis method and image pickup apparatus
JP2007071922A (en) * 2005-09-02 2007-03-22 Nippon Hoso Kyokai <Nhk> Stereoscopic image pickup device and stereoscopic image display device
WO2007092545A2 (en) * 2006-02-07 2007-08-16 The Board Of Trustees Of The Leland Stanford Junior University Variable imaging arrangements and methods therefor
JP2008216340A (en) * 2007-02-28 2008-09-18 Nippon Hoso Kyokai <Nhk> Stereoscopic and planar images display device
US20100328433A1 (en) * 2008-02-03 2010-12-30 Zhiyang Li Method and Devices for 3-D Display Based on Random Constructive Interference
JP2010050707A (en) * 2008-08-21 2010-03-04 Sony Corp Image pickup apparatus, display apparatus, and image processing apparatus

Also Published As

Publication number Publication date
JP2018084828A (en) 2018-05-31
JP6593428B2 (en) 2019-10-23

Similar Documents

Publication Publication Date Title
US9131136B2 (en) Lens arrays for pattern projection and imaging
JP4538766B2 (en) Imaging device, display device, and image processing device
US9100639B2 (en) Light field imaging device and image processing device
US8929677B2 (en) Image processing apparatus and method for synthesizing a high-resolution image and a refocused image
Okano et al. Real-time integral imaging based on extremely high resolution video system
US20030103744A1 (en) Image input device
KR101605392B1 (en) Digital imaging system, plenoptic optical device and image data processing method
CN104730825B (en) Photoelectricity projection device
JP6230239B2 (en) Image processing apparatus, imaging apparatus, image processing method, image processing program, and storage medium
EP0899969B1 (en) 3D image reconstructing apparatus and 3D image capturing apparatus
US20100238270A1 (en) Endoscopic apparatus and method for producing via a holographic optical element an autostereoscopic 3-d image
JP2010183316A (en) Image pickup apparatus
JP5327658B2 (en) Projection type display device and use thereof
CN101888481B (en) Imaging device
US20140192238A1 (en) System and Method for Imaging and Image Processing
CN1783980B (en) Display apparatus, image processing apparatus and image processing method and imaging apparatus
JP5515396B2 (en) Imaging device
JP6004235B2 (en) Imaging apparatus and imaging system
JP4295520B2 (en) Stereoscopic viewing method and stereoscopic viewing system
Manolache et al. Analytical model of a three-dimensional integral image recording system that uses circular-and hexagonal-based spherical surface microlenses
JPH10512060A (en) 3-dimensional imaging system
KR20140100656A (en) Point video offer device using omnidirectional imaging and 3-dimensional data and method
JPH06118343A (en) Optical device
JP2004514951A (en) Spherical stereoscopic imaging system and method
US20120229688A1 (en) Image pickup apparatus

Legal Events

Date Code Title Description
A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20160524

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20160525

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20160725

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20161101

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20170425

A601 Written request for extension of time

Free format text: JAPANESE INTERMEDIATE CODE: A601

Effective date: 20170626

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20170823

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A821

Effective date: 20170828

A02 Decision of refusal

Free format text: JAPANESE INTERMEDIATE CODE: A02

Effective date: 20170919