JP2012033104A - Display device and imaging device - Google Patents

Display device and imaging device

Info

Publication number
JP2012033104A
Authority
JP
Japan
Prior art keywords
display
unit
window
imaging
displayed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2010173793A
Other languages
Japanese (ja)
Inventor
崎野 優一郎
Yuichiro Sakino
Original Assignee
Olympus Imaging Corp
オリンパスイメージング株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Olympus Imaging Corp (オリンパスイメージング株式会社)
Priority to JP2010173793A priority Critical patent/JP2012033104A/en
Publication of JP2012033104A publication Critical patent/JP2012033104A/en
Pending legal-status Critical Current

Abstract

To provide a display device and the like capable of intuitive selection of information having a hierarchical structure through a three-dimensional operation.
A display device includes: a display unit capable of displaying a three-dimensional image; a 3D image generation unit 20c that, for information having a hierarchical structure, generates a plurality of images each containing the information of one layer and displayed three-dimensionally via the display unit at a position corresponding to that layer; a finger position sensor 16 that three-dimensionally detects the coordinates of a representative point of an object existing in a three-dimensional space including the display areas of the plurality of images; and an input signal receiving unit 20d that receives an input to the information included in each of the plurality of images based on the coordinates of the representative point detected by the finger position sensor 16.
[Selection] Figure 1

Description

  The present invention relates to a display device and an imaging device that perform three-dimensional display of images.
  In recent years, display devices capable of displaying three-dimensional (3D) images have been developed, and 3D display is performed on televisions, personal computer displays, display units provided in digital cameras, and the like. 3D display is a display technique in which images with different viewpoints are presented independently to the viewer's right and left eyes, so that the brain perceives a stereoscopic image that appears to pop out of the display screen or to recede behind it.
On the other hand, as an input device for equipment having a display, touch panels, which allow intuitive input by touching a two-dimensional flat panel, are widely used.
In addition, techniques for performing input by a three-dimensional operation have been devised. For example, in the data input device disclosed in Patent Document 1, the position of a pointing means such as a finger or pointing stick moving three-dimensionally in space is detected, and based on the depth component of the three-dimensional position information (the component along the axis orthogonal to the input plane of the data input device), operations such as enlarging or reducing the screen and selecting hierarchical menu items are performed.
JP 2007-219676 A
  When a display device that displays a 3D image is combined with a device that accepts input through a two-dimensional operation such as a touch panel, the user may feel a gap between the three-dimensional appearance and the two-dimensional feel of the operation. In Patent Document 1, on the other hand, hierarchical information is selected by a three-dimensional operation, but since the display area of the information to be selected and the data input device are physically separated, it is difficult to perform an intuitive operation. Moreover, in Patent Document 1, an input is made on a two-dimensionally displayed image by a three-dimensional operation, so a gap arises between the visual sense and the feel of the operation.
  The present invention has been made in view of the above, and an object thereof is to provide a display device capable of intuitive selection of hierarchical information through a three-dimensional operation, and an imaging device including such a display device.
In order to solve the above-described problems and achieve the object, a display device according to the present invention includes: a display unit capable of displaying a three-dimensional image; a three-dimensional image generation unit that, for information having a hierarchical structure, generates a plurality of images each containing the information of one layer and displayed three-dimensionally via the display unit at a position corresponding to that layer; a detection unit that three-dimensionally detects the coordinates of a representative point of an object existing in a three-dimensional space including the display areas of the plurality of images; and an input signal receiving unit that receives an input to the information included in each of the plurality of images based on the coordinates of the representative point detected by the detection unit.
  In the display device, the plurality of images are a plurality of display windows, each including the information of one layer, displayed at mutually different distances from the display screen of the display unit.
  In the display device, the plurality of display windows are displayed at positions separated from the display screen by distances corresponding to the conceptual superior-subordinate relationship of the layer of information included in each display window.
  In the display device, each of the plurality of display windows includes a plurality of icons corresponding to information for each layer.
  In the display device, the plurality of icons included in two adjacent display windows among the plurality of display windows are arranged so as not to overlap each other when the two display windows are projected onto the display screen.
  In the display device, the input signal receiving unit outputs a signal corresponding to an icon when the coordinates of the representative point overlap the display area of any one of the plurality of icons included in the plurality of display windows.
  The display device may further include a display control unit that, when any one of the plurality of display windows is displayed and a signal corresponding to any of the plurality of icons included in that display window is output from the input signal receiving unit, erases at least a part of that display window and displays, among the plurality of display windows, the display window corresponding to the signal.
  An imaging device according to the present invention includes the above display device, a lens unit that collects light from a visual field region, an imaging unit that generates electronic image data using the light collected by the lens unit, and an imaging condition setting unit that sets imaging conditions for the imaging unit based on an input signal received by the input signal receiving unit, wherein the three-dimensional image generation unit generates a three-dimensional image based on the image data generated by the imaging unit.
  According to the present invention, for information having a hierarchical structure, an image containing the information of each layer is displayed three-dimensionally at a position corresponding to that layer, and an input to the information in each image is received based on the three-dimensionally detected coordinates of the representative point of an object, so the user can intuitively select information by a three-dimensional operation.
FIG. 1 is a block diagram showing a configuration of an imaging apparatus according to an embodiment of the present invention.
FIG. 2 is a perspective view showing the configuration of the back side of the imaging apparatus shown in FIG. 1.
FIG. 3 is a diagram for explaining the detection principle of the finger position sensor.
FIG. 4 is a diagram for explaining the detection principle of the Z coordinate of the finger position.
FIG. 5 is a diagram illustrating the hierarchical structure of the shooting conditions.
FIG. 6 is a top view showing the display positions of the display windows for setting the shooting conditions.
FIG. 7 is a flowchart showing the operation of the imaging apparatus shown in FIG. 1.
FIG. 8 is a flowchart showing the operation of the imaging apparatus in the shooting condition setting process.
FIG. 9A is a top view showing the position of the first hierarchy window of the three-dimensional image.
FIG. 9B is a front view showing the first hierarchy window of the three-dimensional image.
FIG. 10A is a top view showing the position of the second hierarchy window of the three-dimensional image.
FIG. 10B is a front view showing the second hierarchy window of the three-dimensional image.
FIG. 11 is a diagram illustrating the positional relationship between the icons displayed in the first hierarchy window and the second hierarchy window.
FIG. 12A is a top view showing the position of the third hierarchy window of the three-dimensional image.
FIG. 12B is a front view showing the third hierarchy window of the three-dimensional image.
FIG. 13 is a diagram illustrating a first modification of the display position of the three-dimensional image displayed by the display unit.
FIG. 14 is a diagram illustrating a second modification of the display position of the three-dimensional image displayed by the display unit.
FIG. 15A is a diagram illustrating the display content of the first hierarchy window in the second modification.
FIG. 15B is a diagram illustrating the display content of the second hierarchy window in the second modification.
FIG. 15C is a diagram illustrating the display content of the third hierarchy window in the second modification.
  DESCRIPTION OF EMBODIMENTS Hereinafter, embodiments for carrying out the present invention (hereinafter referred to as “embodiments”) will be described with reference to the drawings. The present invention is not limited to the embodiments described below. In the description of the drawings, the same parts are denoted by the same reference numerals.
  FIG. 1 is a block diagram illustrating the configuration of the imaging apparatus according to the present embodiment. FIG. 2 is a diagram illustrating the configuration of the side (back side) of the imaging apparatus shown in FIG. 1 that faces the user. As illustrated in FIGS. 1 and 2, the imaging device 1 includes two imaging units 10L and 10R, a camera shake detection unit 11, a posture detection unit 12, an operation input unit 13, a display unit 14, a touch panel 15, a finger position sensor 16, a recording medium interface 17, a nonvolatile memory 18, a volatile memory 19, and a system controller 20.
  The imaging unit 10L includes a lens unit 2L, a lens driving mechanism 3L, a diaphragm driving mechanism 4L, a shutter driving mechanism 5L, an imaging element 6L, and a signal processing unit 7L. The imaging unit 10R includes a lens unit 2R, a lens driving mechanism 3R, a diaphragm driving mechanism 4R, a shutter driving mechanism 5R, an imaging element 6R, and a signal processing unit 7R.
The lens unit 2L includes a focus lens, a zoom lens, and the like, and collects light from a predetermined visual field region. The lens unit 2L has an optical zoom function that changes the angle of view by moving the zoom lens along the optical axis QL.
The lens driving mechanism 3L is constituted by a DC motor or the like, and changes the focal position and focal length of the lens unit 2L by moving the focus lens and the zoom lens of the lens unit 2L along the optical axis QL.
  The diaphragm drive mechanism 4L is configured by a diaphragm 4a, a stepping motor, and the like, and adjusts the incident amount of light condensed by the lens unit 2L by driving the diaphragm 4a.
  The shutter drive mechanism 5L includes a shutter 5a, a stepping motor, and the like, and drives the shutter 5a to set the state of the image sensor 6L to an exposure state or a light shielding state.
  The image sensor 6L is realized by a CCD (Charge Coupled Device), a CMOS (Complementary Metal Oxide Semiconductor), or the like that receives the light collected by the lens unit 2L and converts it into an electrical signal (analog signal). The signal is output to the signal processing unit 7L.
  The signal processing unit 7L performs signal processing such as amplification on the electrical signal output from the image sensor 6L, and then converts the electrical signal into digital image data by performing A / D conversion. Output to the memory 19.
In the lens unit 2R, on the other hand, the zoom lens moves along an optical axis QR that is parallel to the optical axis QL and separated from it by about 6 to 8 cm. Since the rest of the configuration of the imaging unit 10R is the same as that of the imaging unit 10L, its description is omitted.
  Such imaging units 10L and 10R continuously generate electronic image data using light collected by the lens units 2L and 2R, respectively. The lens portions 2L and 2R are provided separately on the left and right sides of the housing of the imaging device 1, and shoot the subject from different angles. In such imaging units 10L and 10R, two images (left-eye image and right-eye image) with different viewpoints are obtained by operating the shutter drive mechanisms 5L and 5R in synchronization. By synthesizing these images and displaying them on the display unit 14 to be described later, it is possible to generate a three-dimensional image that appears in the three-dimensional space.
The camera shake detection unit 11 is configured by gyro sensors, and detects an image blur state caused by the user's hand shake by detecting the angular velocity of the imaging device 1. Specifically, taking the plane orthogonal to the optical axes QL and QR of the lens units 2L and 2R as the XY plane and the axis orthogonal to the XY plane as the Z axis, the camera shake detection unit 11 detects the user's hand shake by detecting the angular velocity around each of the X, Y, and Z axes with an individual gyro sensor.
  The posture detection unit 12 is configured by an acceleration sensor, and detects the posture state of the imaging device 1 by detecting the acceleration of the imaging device 1. Specifically, the posture detection unit 12 detects the posture of the imaging device 1 with reference to a horizontal plane that is a plane orthogonal to the direction of gravity of the imaging device 1.
  As shown in FIG. 2, the operation input unit 13 includes a power switch 13a that switches the power of the imaging apparatus 1 on and off, a release switch 13b that inputs a release signal giving a still image shooting instruction, a mode selector switch 13c that switches the various modes (shooting mode and playback display mode) of the imaging apparatus 1, an operation switch 13d for performing various settings of the imaging apparatus 1, a zoom switch 13e for performing the zoom operation of the lens units 2L and 2R, and a 2D/3D shooting changeover switch 13f that alternately switches between 2D shooting and 3D shooting.
  The display unit 14 is a display device capable of displaying both two-dimensional and three-dimensional images, and is realized by providing a parallax barrier on a display panel made of liquid crystal or organic EL (Electro Luminescence). The parallax barrier is a panel provided with light-shielding portions at predetermined intervals, and separates the image displayed on the display panel into an image for the user's left eye and an image for the right eye. By configuring the parallax barrier with a liquid crystal panel or the like whose light-shielding state changes according to an applied voltage, it is possible to switch between a three-dimensional image display state (light-shielding barrier ON) and a two-dimensional image display state (light-shielding barrier OFF). The display unit 14 displays either a normal two-dimensional image corresponding to the image data generated by the image sensor 6L or the image sensor 6R, or a three-dimensional image generated based on the image data generated by the image sensors 6L and 6R respectively. In addition, the display unit 14 appropriately displays the operation information of the imaging device 1 and information related to shooting, in two dimensions or three dimensions. In place of the parallax barrier, a lenticular lens in which cylindrical lenses are arranged in an array may be provided. Although the present embodiment uses a display device on which a three-dimensional image can be recognized with the naked eye, a "frame sequential" display device of the kind used in 3D televisions may be used instead.
  The touch panel 15 is provided, for example, on the display unit 14. The touch panel 15 detects a position touched (touched) by the user based on information displayed on the display unit 14 and accepts an input of an operation signal corresponding to the contact position. In general, the touch panel includes a resistance film method, a capacitance method, an optical method, and the like. In the present embodiment, any type of touch panel is applicable. In the present embodiment, the touch panel 15 functions as a part of the input unit.
  The finger position sensor 16 three-dimensionally detects the coordinates of a representative point (for example, the tip) of an object such as the user's finger or an input pen within the three-dimensional space in which a three-dimensional image is displayed by the display unit 14, and outputs a detection signal. The finger position sensor 16 can be configured by combining a motion sensor, a stereo camera, a photosensor, an ultrasonic sensor, and the like; in the present embodiment, imaging elements such as CCDs or CMOS sensors and an infrared distance sensor are used. As shown in FIG. 2, the finger position sensor 16 includes a finger detection imaging element 16a arranged along the X-axis direction side of the display screen 14a of the display unit 14, a finger detection imaging element 16b arranged along the Y-axis direction side of the display screen 14a, and an infrared distance sensor including a plurality of infrared light emitting elements 16c and a plurality of infrared light receiving elements (photodiodes) 16d arranged in an array.
  FIG. 3 is a diagram for explaining the detection principle of the finger position sensor 16. The finger detection imaging element 16a is disposed at a predetermined angle with respect to the display screen 14a so as to face the space above the display screen 14a; it receives light from an object (such as a finger) 100 that has entered its field of view, converts it into an electrical signal, and detects the X coordinate of the representative point P. Similarly, the finger detection imaging element 16b is disposed at a predetermined angle with respect to the display screen 14a so as to face the space above the display screen 14a; it receives light from the object 100 that has entered its field of view, converts it into an electrical signal, and detects the Y coordinate of the representative point P.
  The infrared light emitting elements 16c are arranged along one side around the display screen 14a. In addition, the infrared light receiving elements 16d are arranged in a plurality of rows along a side opposite to the side where the infrared light emitting elements 16c are arranged. In FIG. 3, the infrared light emitting element 16 c and the infrared light receiving element 16 d are arranged in the X-axis direction, but may be arranged in the Y-axis direction. In FIG. 3, the infrared light receiving elements 16d are arranged in three rows, but the number of arrangements may be further increased.
FIG. 4 is a diagram for explaining the principle of distance detection by the infrared distance sensor. The infrared light emitting element 16c emits infrared rays toward the space above the display screen 14a. The infrared light receiving elements 16d(1) to 16d(3) are arranged so as to receive infrared rays reflected by the object 100 at different positions. Infrared light emitted from the infrared light emitting element 16c is reflected at an angle that depends on the height (Z coordinate) z1, z2, z3 of the representative point P1, P2, P3 located at a given XY coordinate (x0, y0). Therefore, the height of the object 100 can be derived by detecting which of the infrared light receiving elements 16d(1) to 16d(3) has received the infrared light. In FIG. 4, the display screen 14a is the origin of the Z coordinate (z = 0).
In this way, the finger position sensor 16 detects the three-dimensional coordinates (x, y, z) of the representative point P, and outputs a detection signal to the system controller 20.
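As a rough illustration of this detection principle, the following Python sketch combines an X coordinate from the imaging element 16a, a Y coordinate from the imaging element 16b, and a Z coordinate inferred from which infrared receiving row detected the reflection. The row-to-height mapping, units, and function names are assumptions for illustration and are not taken from the disclosure.
```python
# Minimal sketch of the finger position sensor 16 output (assumed values).
# Heights in cm above the display screen 14a (z = 0), assumed to correspond
# to the infrared light receiving rows 16d(1) to 16d(3) of FIG. 4.
ROW_HEIGHTS_CM = {1: 3.0, 2: 6.0, 3: 9.0}

def detect_representative_point(x_from_16a, y_from_16b, lit_row):
    """Combine the X coordinate (element 16a), the Y coordinate (element 16b),
    and the Z coordinate inferred from the receiving row that fired."""
    if lit_row not in ROW_HEIGHTS_CM:
        return None  # no object detected above the screen
    return (x_from_16a, y_from_16b, ROW_HEIGHTS_CM[lit_row])

# Example: row 16d(2) fires -> the object is taken to hover about 6 cm above 14a.
print(detect_representative_point(120.0, 80.0, 2))  # -> (120.0, 80.0, 6.0)
```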
  Referring again to FIG. 1, the recording medium interface 17 stores information such as image data in the memory card 17a, a recording medium mounted from outside the imaging device 1, and reads out information stored in the memory card 17a.
  The nonvolatile memory 18 is realized by a flash memory or the like. The nonvolatile memory 18 stores program code 18a containing various programs for operating the imaging apparatus 1, and control parameters 18b containing various data used during program execution, such as correlation data between the sensor coordinate positions of the touch panel 15 and the pixel positions of the display unit 14.
  The volatile memory 19 is realized by an SDRAM (Synchronous Dynamic Random Access Memory). The volatile memory 19 has a work area 19a in which image data output from the signal processing units 7L and 7R and information being processed by the system controller 20 are temporarily recorded. Specifically, the volatile memory 19 temporarily stores the image (live view image) corresponding to the image data output by the image sensors 6L and 6R every frame (for example, every 1/30 second), and the images corresponding to the image data output by the image sensors 6L and 6R when the release switch 13b is operated.
  The system controller 20 is realized by a CPU (Central Processing Unit) or the like, and reads programs from the nonvolatile memory 18 and executes them in accordance with operation signals from the operation input unit 13 and the like, transferring instructions and data to each unit of the imaging apparatus 1 to comprehensively control its operation. The system controller 20 includes an image processing unit 20a, a compression/decompression unit 20b, a 3D image generation unit 20c, an input signal reception unit 20d, an imaging condition setting unit 20e, an AF (Auto Focus) control unit 20f, an AE (Auto Exposure) control unit 20g, a timer counter 20h, and a display control unit 20i.
  The image processing unit 20a performs various types of image processing on the image data output from the signal processing units 7L and 7R and outputs the processed image data to the volatile memory 19. Specifically, the image processing unit 20a performs processing such as edge enhancement, color balance (RGB) adjustment, color correction, and γ correction on the image data output from the signal processing units 7L and 7R.
  The compression/decompression unit 20b performs compression and decompression of image data based on the JPEG (Joint Photographic Experts Group) method when storing image data held in the work area 19a of the volatile memory 19 to the memory card 17a, or when displaying image data recorded in the memory card 17a on the display unit 14.
  The 3D image generation unit 20c generates, based on the image data output from the signal processing units 7L and 7R, an image for three-dimensional display that is virtually displayed at a position separated from the display screen 14a by a predetermined distance. Specifically, the pixels constituting the left-eye image corresponding to the image data output from the signal processing unit 7L and the pixels constituting the right-eye image corresponding to the image data output from the signal processing unit 7R are rearranged in accordance with the interval of the parallax barrier of the display unit 14 and output to the display unit 14.
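The following Python sketch illustrates the kind of column-wise rearrangement described above, assuming a parallax barrier whose pitch alternates left-eye and right-eye pixel columns; the one-column pitch and the function name are assumptions, not the patent's actual implementation.
```python
# Minimal sketch: interleave a left-eye and a right-eye image column by column
# for a parallax-barrier panel (barrier pitch of one pixel column assumed).

def interleave_for_parallax_barrier(left_img, right_img):
    """left_img and right_img are lists of rows (lists of pixels) of the same
    size; even output columns come from the left-eye image, odd columns from
    the right-eye image."""
    out = []
    for l_row, r_row in zip(left_img, right_img):
        out.append([l_row[x] if x % 2 == 0 else r_row[x]
                    for x in range(len(l_row))])
    return out

left = [["L00", "L01", "L02", "L03"]]
right = [["R00", "R01", "R02", "R03"]]
print(interleave_for_parallax_barrier(left, right))
# [['L00', 'R01', 'L02', 'R03']]
```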
  In addition, when an instruction is issued from the shooting condition setting unit 20e described later, the 3D image generation unit 20c generates, based on the program code 18a stored in the nonvolatile memory 18, a three-dimensional display image for virtually displaying a display window used for setting shooting conditions and the like at a position separated from the display screen 14a by a predetermined distance, and outputs it to the display unit 14.
  While a display window for setting shooting conditions and the like generated by the 3D image generation unit 20c is displayed, the input signal receiving unit 20d receives the detection signal of the coordinates of the representative point P output from the finger position sensor 16 as an input signal to the display window. Further, when the received input signal satisfies a predetermined condition, the input signal receiving unit 20d outputs a signal to the imaging condition setting unit 20e and other units described later.
  The imaging condition setting unit 20e sets the various imaging conditions used when the imaging device 1 shoots a subject. Specifically, when the mode changeover switch 13c is set to the shooting mode, the shooting condition setting unit 20e causes the 3D image generation unit 20c to generate a display window used for setting shooting conditions in accordance with the program code 18a stored in the nonvolatile memory 18. The imaging condition setting unit 20e also controls the imaging units 10L and 10R by setting predetermined imaging conditions in each unit of the imaging device 1 according to the signal output from the input signal receiving unit 20d.
  FIG. 5 shows the hierarchical structure of the shooting conditions that can be set in the imaging device 1. In the present embodiment, the shooting conditions are set stepwise through a first hierarchy 21 for selecting a higher-level concept of the shooting conditions and second hierarchies 22 for selecting lower-level concepts of the shooting conditions. The first hierarchy 21 is used to select the type of shooting condition, and includes "filter" for applying a predetermined visual effect to the shot image, "program" for setting the automatic exposure control method, and "scene" for setting various conditions suited to the shooting scene. The second hierarchies 22a to 22c, on the other hand, are used to select a specific shooting condition within each type. For example, the second hierarchy 22a under "filter" includes a "sketch" mode in which the outline of the subject is drawn as a line drawing, a "fish-eye" mode that reproduces the distortion effect of shooting with an ultra-wide-angle lens, and a "pinhole" mode in which the four corners are darkened to give the effect of shooting with a pinhole camera. The second hierarchy 22b under "program" includes an "aperture priority" mode in which an appropriate exposure value is obtained by automatically adjusting the shutter speed while giving priority to the aperture, and a "shutter priority" mode in which an appropriate exposure value is obtained by adjusting the aperture while giving priority to the shutter speed. The second hierarchy 22c under "scene" includes a "person" mode, a "night view" mode, and a "landscape" mode in which the sensitivity, dynamic range, shutter speed, and the like are set according to the scene.
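For illustration only, the hierarchy of FIG. 5 could be held as a nested data structure such as the following Python sketch; the structure and names are assumptions and not part of the disclosure.
```python
# Minimal sketch of the two-level shooting-condition hierarchy of FIG. 5.
SHOOTING_CONDITIONS = {
    "filter":  ["sketch", "fish-eye", "pinhole"],          # second hierarchy 22a
    "program": ["aperture priority", "shutter priority"],  # second hierarchy 22b
    "scene":   ["person", "night view", "landscape"],      # second hierarchy 22c
}

def second_hierarchy_for(first_choice):
    """Return the lower-level options shown after a first-hierarchy icon
    (e.g. the 'filter' icon 31a) is selected."""
    return SHOOTING_CONDITIONS.get(first_choice, [])

print(second_hierarchy_for("filter"))  # ['sketch', 'fish-eye', 'pinhole']
```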
  The AF control unit 20f performs automatic focus adjustment based on the image data output from the signal processing units 7L and 7R. For example, the AF control unit 20f drives the lens driving mechanisms 3L and 3R based on the contrast of the image data, and moves the lens units 2L and 2R along the optical axes QL and QR so that the sharpness of the captured subject image is maximized.
  The AE control unit 20g performs automatic exposure by determining the still image shooting conditions, for example the setting value of the aperture 4a and the shutter speed, based on the image data output from the signal processing units 7L and 7R and the shooting conditions ("aperture priority" or "shutter priority") set in the shooting condition setting unit 20e.
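As a hedged aside, the standard exposure relation at base sensitivity can be written as EV = log2(N^2 / t), where N is the f-number and t the shutter time; the following Python sketch shows how aperture priority and shutter priority each solve this relation for the remaining variable. It illustrates the general exposure equation only, not the patent's AE algorithm, and all names are assumptions.
```python
import math

def shutter_for_aperture(ev, f_number):
    """Aperture priority: return shutter time t (seconds) for target EV."""
    return (f_number ** 2) / (2.0 ** ev)

def aperture_for_shutter(ev, shutter_s):
    """Shutter priority: return f-number N for target EV."""
    return math.sqrt(shutter_s * (2.0 ** ev))

print(round(shutter_for_aperture(12.0, 4.0), 5))    # ~0.00391 s (about 1/256 s)
print(round(aperture_for_shutter(12.0, 1 / 250), 2))  # ~4.05
```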
  The timer counter 20h generates a time signal that serves as a reference for the operation of the imaging apparatus 1. Thereby, the system controller 20 can set the acquisition interval of image data, the exposure times of the image sensors 6L and 6R, and the like.
  The display control unit 20i performs control so that a two-dimensional or three-dimensional image is displayed on the display unit 14 in a predetermined format.
  FIG. 6 is a top view showing the display positions of the display windows for setting the shooting conditions generated by the 3D image generation unit 20c. Since the shooting conditions are set in three stages in the present embodiment, a first hierarchy window 31, a second hierarchy window 32, and a third hierarchy window 33 corresponding to the respective stages are prepared as display windows for setting the shooting conditions. These hierarchy windows are three-dimensionally displayed at positions virtually separated from the display screen 14a of the display unit 14 by different distances. That is, the first hierarchy window 31 is displayed at a position separated from the display screen 14a by a distance L3; in other words, the display unit 14 is controlled so that the user recognizes the first hierarchy window 31 as existing at a position away from the display screen 14a by the distance L3 (the same applies hereinafter). The second hierarchy window 32 is displayed at a position separated from the display screen 14a by a distance L2 (L2 < L3), and the third hierarchy window 33 is displayed at a position separated from the display screen 14a by a distance L1 (L1 < L2). These distances L1, L2, and L3 are set according to the area of the display screen 14a and the like; in this embodiment, L1 ≈ 3 cm, L2 ≈ 6 cm, and L3 ≈ 9 cm. The display contents of each hierarchy window will be described later.
Next, the operation of the imaging apparatus 1 shown in FIG. 1 (hereinafter also simply referred to as the operation) will be described with reference to FIG. 7. FIG. 7 is a flowchart showing the operation of the imaging apparatus 1.
First, the system controller 20 determines whether or not the power of the imaging device 1 is turned on (step S1). When the power supply of the imaging device 1 is turned on (step S1: Yes), the operation proceeds to step S2. On the other hand, when the power supply of the imaging device 1 is not turned on (step S1: No), the imaging device 1 ends the operation.
  Subsequently, the system controller 20 determines whether or not the mode switch 13c is set to the shooting mode (step S2). When the mode switch 13c is set to the photographing mode (step S2: Yes), the operation proceeds to step S3 described later. On the other hand, when the imaging device 1 is not set to the shooting mode (step S2: No), the operation proceeds to step S10 described later.
  In step S3, the system controller 20 causes the display unit 14 to display a live view image (through image) corresponding to the image data continuously generated by the image sensors 6L and/or 6R at constant, minute time intervals. At this time, if the 2D shooting mode is set by the 2D/3D shooting changeover switch 13f, the system controller 20 displays a 2D live view image by operating only one of the image sensors 6L and 6R. On the other hand, if the 3D shooting mode is set, the system controller 20 displays a 3D live view image by operating both of the image sensors 6L and 6R.
  Subsequently, the system controller 20 determines whether or not the operation switch 13d has been operated and an imaging condition setting instruction signal has been input (step S4). When a shooting condition setting instruction signal is input (step S4: Yes), the system controller 20 sets shooting conditions (step S5). Thereafter, the operation proceeds to step S6. The operation in step S5 will be described in detail later. On the other hand, when the instruction signal for setting the shooting conditions is not input (step S4: No), the operation directly proceeds to step S6.
  In step S6, the system controller 20 determines whether a release signal instructing shooting has been input by operating (fully pressing) the release switch 13b. When no release signal instructing shooting has been input (step S6: No), the operation returns to step S1. On the other hand, when a release signal instructing shooting has been input (step S6: Yes), the operation proceeds to step S7.
  In step S7, the system controller 20 shoots the subject. At this time, if the 2D shooting mode is set, the system controller 20 operates only one of the shutter drive mechanisms 5L and 5R and causes the image sensor 6L or 6R to generate image data representing one 2D image. On the other hand, if the 3D shooting mode is set, the system controller 20 operates the shutter drive mechanisms 5L and 5R in synchronization, causing the image sensors 6L and 6R to each generate image data representing one of two images with different viewpoints.
  Subsequently, in step S8, the system controller 20 stores the image data generated by the image sensor 6L and / or 6R in the memory card 17a. Thereafter, the operation returns to step S1.
  Next, the case where the imaging device 1 is not set to the shooting mode in step S2 (step S2: No) will be described. In this case, the system controller 20 performs a reproduction display process for reproducing the image data stored in the memory card 17a (step S10). Thereafter, the operation returns to step S1.
Next, the photographing condition setting process in step S5 will be described. FIG. 8 is a flowchart showing the operation of the imaging apparatus 1 in the shooting condition setting process.
First, in step S101, the imaging condition setting unit 20e starts the operation of the finger position sensor 16, and acquires, via the input signal receiving unit 20d, the coordinates (x, y, z) of the representative point P of the object 100 that has entered the detection range of the finger position sensor 16.
  Subsequently, in step S102, the shooting condition setting unit 20e causes the 3D image generation unit 20c to generate the first hierarchy window 31 and the selection status display window 34, which are images for setting shooting conditions as shown in FIG. 9A, for example, and causes the display unit 14 to display them. In the first hierarchy window 31, a "filter" icon 31a, a "program" icon 31b, and a "scene" icon 31c are displayed for making a selection in the first hierarchy 21 (see FIG. 5) of the shooting condition settings. The selection status display window 34, on the other hand, is an area for displaying the selection status of the shooting conditions (modes); at this stage, since no icon has been selected, no selection status is displayed. As shown in FIG. 9B, the first hierarchy window 31 is displayed at a position separated from the display screen 14a of the display unit 14 by the distance L3. Similarly, the selection status display window 34 is displayed at a position separated from the display screen 14a by the distance L3.
  Subsequently, in step S103, the input signal receiving unit 20d determines whether the representative point P of the object 100 has reached the display position of the first hierarchy window 31, based on the Z coordinate value z of the representative point P and the distance L3 between the first hierarchy window 31 and the display screen 14a. In the following description, the display screen 14a is assumed to be the origin of the Z coordinate (z = 0).
  As shown in FIG. 9B, while the Z coordinate of the representative point P has not reached the display position of the first hierarchy window 31 (step S103: No), the imaging condition setting unit 20e repeats the determination in step S103 unless a signal instructing cancellation of the shooting condition setting is input by operating the operation switch 13d (step S121: No). When the operation switch 13d is operated and a signal canceling the shooting condition setting is input (step S121: Yes), the operation of the finger position sensor 16 is stopped (step S116) and the operation returns to the main routine.
  On the other hand, when the object 100 approaches the display screen 14a further from the state shown in FIG. 9B and the representative point P reaches the display position of the first hierarchy window 31 (step S103: Yes), the input signal receiving unit 20d determines which of the display areas of the icons 31a to 31c contains the XY coordinates (x, y) of the representative point P, and outputs a selection signal for the icon that contains the representative point P (step S104: Yes). In this case, it is determined that one of the icons 31a to 31c has been selected by the user, and the operation proceeds to step S105. On the other hand, when no icon contains the representative point P, no selection signal is output (step S104: No), and the operation returns to step S103.
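A minimal Python sketch of the decision in steps S103 and S104 follows: the representative point P is considered to have reached the first hierarchy window when its Z coordinate falls to L3 or below, and the icon whose projected display area contains its XY coordinates is selected. The icon rectangles, units, and function names are assumptions for illustration only.
```python
L3_CM = 9.0  # display distance of the first hierarchy window 31 (from the text)

# Assumed icon rectangles (x_min, y_min, x_max, y_max) in screen coordinates.
FIRST_HIERARCHY_ICONS = {
    "filter_31a":  (10, 10, 60, 60),
    "program_31b": (70, 10, 120, 60),
    "scene_31c":   (130, 10, 180, 60),
}

def select_first_hierarchy_icon(p):
    """p = (x, y, z) is the representative point reported by the finger
    position sensor 16. Returns the selected icon name, or None."""
    x, y, z = p
    if z > L3_CM:                      # step S103: window not yet reached
        return None
    for name, (x0, y0, x1, y1) in FIRST_HIERARCHY_ICONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name                # step S104: selection signal output
    return None

print(select_first_hierarchy_icon((80, 30, 8.5)))  # -> 'program_31b'
```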
  In step S105, the display control unit 20i causes the icon corresponding to the selection signal output from the input signal receiving unit 20d to blink, and then displays the selection status of the shooting condition in the selection status display window 34. Further, the display control unit 20i erases the first hierarchy window 31 while leaving a part of it; in the present embodiment, the left and right end portions of the first hierarchy window 31 are left. At this time, the display control unit 20i may change the display state with an animation in which a curtain opens from the center of the first hierarchy window 31 toward the left and right.
  Subsequently, in step S106, the imaging condition setting unit 20e causes the 3D image generation unit 20c to generate the second hierarchy window 32 as illustrated in FIG. 10A, for example, according to the selection signal output from the input signal receiving unit 20d, and causes the display unit 14 to display it. FIG. 10A shows the case where the "filter" icon 31a has been selected in the first hierarchy window 31. The operation in step S106 may be performed almost simultaneously with the erasure of the first hierarchy window 31 in step S105, or the display amount of the second hierarchy window 32 may be increased gradually in accordance with the display state of the first hierarchy window 31.
  In the second hierarchy window 32, a "sketch" icon 32a, a "fish-eye" icon 32b, and a "pinhole" icon 32c are displayed for making a selection in the second hierarchy 22a of the "filter" setting selected in step S104, among the second hierarchies 22a to 22c (see FIG. 5) of the shooting condition settings. The second hierarchy window 32 also displays a "return" icon 32d for redoing the selection of an icon in the first hierarchy window 31. As shown in FIG. 10B, the second hierarchy window 32 is displayed at a position separated from the display screen 14a by the distance L2; to the user, therefore, it appears retracted toward the display screen 14a compared with the remaining end portions of the first hierarchy window 31 and the selection status display window 34.
  FIG. 11 is a diagram illustrating the positional relationship of the icons 32a to 32d of the second hierarchy window 32 with respect to the icons 31a to 31c of the first hierarchy window 31. In the present embodiment, the icons 31a to 31c and the icons 32a to 32d are arranged so as not to overlap each other when projected onto the same XY plane (for example, the display screen 14a). Thereby, when the user selects one of the icons 31a to 31c in the first hierarchy window 31, even if the finger approaches the display screen 14a too closely, a situation in which one of the icons 32a to 32d in the second hierarchy window 32 is mistakenly selected can be prevented.
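The non-overlap rule can be checked with a simple projected-rectangle test, as in the following Python sketch; the rectangle format and coordinate values are assumed for illustration.
```python
def rects_overlap(a, b):
    """Axis-aligned overlap test for (x_min, y_min, x_max, y_max) rectangles."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def layout_is_valid(upper_window_icons, lower_window_icons):
    """True if no icon of the upper window overlaps any icon of the lower
    window when both are projected onto the display screen 14a."""
    return all(not rects_overlap(u, l)
               for u in upper_window_icons for l in lower_window_icons)

icons_31 = [(10, 10, 60, 60), (70, 10, 120, 60), (130, 10, 180, 60)]
icons_32 = [(10, 70, 60, 120), (70, 70, 120, 120), (130, 70, 180, 120)]
print(layout_is_valid(icons_31, icons_32))  # True: no projected overlap
```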
  Subsequently, in step S107, the input signal receiving unit 20d determines whether the representative point P of the object 100 has reached the display position of the second hierarchy window 32, based on the Z coordinate value z of the representative point P and the distance L2 between the second hierarchy window 32 and the display screen 14a. As shown in FIG. 10B, when the Z coordinate of the representative point P has not reached the display position of the second hierarchy window 32 (step S107: No), the operation proceeds to step S122. In this case, the input signal receiving unit 20d repeats the determination in step S107 unless a signal instructing cancellation of the shooting condition setting is input by operating the operation switch 13d (step S122: No). When the operation switch 13d is operated and a signal canceling the shooting condition setting is input (step S122: Yes), the operation of the finger position sensor 16 is stopped (step S116) and the operation returns to the main routine.
  On the other hand, when the object 100 approaches the display screen 14a further from the state shown in FIG. 10B and the representative point P reaches the display position of the second hierarchy window 32 (step S107: Yes), the input signal receiving unit 20d determines which of the display areas of the icons 32a to 32d contains the XY coordinates (x, y) of the representative point P, and outputs a selection signal for the icon that contains the representative point P (step S108: Yes). In this case, it is determined that one of the icons 32a to 32d has been selected by the user, and the operation proceeds to step S109. On the other hand, when no icon contains the representative point P, no selection signal is output (step S108: No), and the operation returns to step S107.
  In step S109, the imaging condition setting unit 20e determines whether the icon corresponding to the selection signal output from the input signal receiving unit 20d is the “return” icon 32d. When the icon is the “return” icon 32d (step S109: Yes), the operation proceeds to step S131. On the other hand, when the icon is not the “return” icon 32d (step S109: No), the operation proceeds to step S110.
  In step S110, the display control unit 20i causes the selected icon to blink, and then displays the shooting condition selection status in the selection status display window 34. Further, the display control unit 20i erases the second hierarchy window 32 while leaving a part of it.
  In step S111, the imaging condition setting unit 20e causes the 3D image generation unit 20c to generate, for example, the third hierarchy window 33 as illustrated in FIG. 12A based on the selection signal output from the input signal receiving unit 20d, and causes the display unit 14 to display it. FIG. 12A shows the case where the "fish-eye" icon 32b has been selected in the second hierarchy window 32. The operation in step S111 may be performed almost simultaneously with the erasure of the second hierarchy window 32 in step S110, or the display amount of the third hierarchy window 33 may be increased gradually in accordance with the display state of the second hierarchy window 32.
  In the third hierarchy window 33, a "confirm" icon 33a for confirming the shooting condition setting indicated by the icon selected in step S108 and a "return" icon 33b for redoing the selection of an icon in the second hierarchy window 32 are displayed. As shown in FIG. 12B, the third hierarchy window 33 is displayed at a position separated from the display screen 14a by the distance L1; to the user, therefore, it appears further retracted toward the display screen 14a compared with the remaining end portions of the second hierarchy window 32.
  Subsequently, in step S112, the input signal receiving unit 20d determines whether the representative point P of the object 100 has reached the display position of the third hierarchy window 33, based on the Z coordinate value z of the representative point P and the distance L1 between the third hierarchy window 33 and the display screen 14a. As shown in FIG. 12B, while the Z coordinate of the representative point P has not reached the display position of the third hierarchy window 33 (step S112: No), the input signal receiving unit 20d repeats the determination in step S112 unless a signal instructing cancellation of the shooting condition setting is input by operating the operation switch 13d (step S123: No). When the operation switch 13d is operated and a signal canceling the shooting condition setting is input (step S123: Yes), the operation of the finger position sensor 16 is stopped (step S116) and the operation returns to the main routine.
  On the other hand, when the object 100 approaches the display screen 14a further from the state shown in FIG. 12B and the representative point P reaches the display position of the third hierarchy window 33 (step S112: Yes), the input signal receiving unit 20d determines whether the XY coordinates (x, y) of the representative point P are contained in the display area of either of the icons 33a and 33b, and outputs a selection signal for the icon that contains the representative point P (step S113: Yes). In this case, it is determined that one of the icons 33a and 33b has been selected by the user, and the operation proceeds to step S114. On the other hand, when neither icon contains the representative point P, no selection signal is output (step S113: No), and the operation returns to step S112.
  In step S114, the imaging condition setting unit 20e determines whether the icon corresponding to the selection signal output from the input signal receiving unit 20d is the “return” icon 33b. When the icon is the “return” icon 33b (step S114: Yes), the operation proceeds to step S132. On the other hand, when the icon is not the “return” icon 33b (step S114: No), the operation proceeds to step S115.
  In step S115, the shooting condition setting unit 20e stores the shooting condition displayed on the icon selected in the second hierarchy window 32 in the work area 19a of the volatile memory 19 as a control parameter. Subsequently, the imaging condition setting unit 20e stops the operation of the finger position sensor 16 (step S116). Thereafter, the operation returns to the main routine.
Next, the operation in step S131 will be described.
In step S131, the input signal receiving unit 20d determines whether the representative point P of the object 100 has moved farther from the display screen 14a than the display position of the first hierarchy window 31, based on the Z coordinate value z of the representative point P. When z > L3 (step S131: Yes), it is determined that the representative point P has moved away from the display position of the first hierarchy window 31, and the operation returns to step S102. Thereby, the user can redo the selection of an icon in the first hierarchy window 31. On the other hand, when z ≤ L3 (step S131: No), the input signal receiving unit 20d waits until z > L3. In this case, the operation may instead be returned to step S102 after a predetermined time has elapsed since the "return" icon 32d was selected.
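A hedged sketch of this "return" behavior in step S131 is shown below: the routine waits until the Z coordinate of the representative point exceeds L3 (or until a predetermined time has elapsed) before the first hierarchy window is redisplayed, i.e. before control returns to step S102. The polling loop, timeout value, and names are assumptions for illustration.
```python
import time

L3_CM = 9.0  # display distance of the first hierarchy window 31

def wait_for_return_gesture(read_z, timeout_s=3.0, poll_s=0.03):
    """read_z() returns the current Z coordinate of the representative point.
    Returns True once z > L3 (redo the first-hierarchy selection); as the text
    suggests, a timeout may also trigger the return after a fixed delay."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if read_z() > L3_CM:
            return True
        time.sleep(poll_s)
    return True  # fall back to returning after the predetermined time

print(wait_for_return_gesture(lambda: 10.0))  # finger already pulled back -> True
```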
Next, the operation in step S132 will be described.
In step S132, the input signal receiving unit 20d determines whether the representative point P of the object 100 has moved farther from the display screen 14a than the display position of the second hierarchy window 32, based on the Z coordinate value z of the representative point P. When z > L2 (step S132: Yes), it is determined that the representative point P has moved away from the display position of the second hierarchy window 32, and the operation returns to step S106. Thereby, the user can redo the selection of an icon in the second hierarchy window 32. On the other hand, when z ≤ L2 (step S132: No), the input signal receiving unit 20d waits until z > L2. In this case, the operation may instead be returned to step S106 after a predetermined time has elapsed since the "return" icon 33b was selected.
  As described above, according to the present embodiment, the first hierarchy window 31 to the third hierarchy window 33 are sequentially displayed in three dimensions for information having a hierarchical structure, such as shooting conditions, and an input signal is received by three-dimensionally detecting the representative point P of an object 100 such as the user's finger, so the user can intuitively select icons by a three-dimensional operation.
  In the present embodiment, the higher the level of the information displayed in a hierarchy window, the larger its amount of protrusion from the display screen 14a (for example, the distance L3 for the first hierarchy window 31), and the lower the level, the smaller the amount of protrusion from the display screen 14a (for example, the distance L2 for the second hierarchy window 32). This gives the user the sensation of advancing the object 100, such as a finger, toward the back (that is, toward the display screen 14a) as the displayed information becomes more subordinate, so that intuitive input operations can be performed.
(Modification 1)
Next, Modification 1 of the imaging device according to the present embodiment will be described.
As shown in FIG. 13, in the first modification, an input signal is received based on the coordinates of the representative point P of the object 100 detected on predetermined planes different from the display positions of the first hierarchy window 31, the second hierarchy window 32, and the third hierarchy window 33. That is, an input to the first hierarchy window 31 displayed at the distance L3 from the display screen 14a is received on an input surface PL1 separated from the display screen 14a by a distance D3 (D3 > L3). Similarly, an input to the second hierarchy window 32 displayed at the distance L2 from the display screen 14a is received on an input surface PL2 separated from the display screen 14a by a distance D2 (D2 < D3). Furthermore, an input to the third hierarchy window 33 displayed at the distance L1 from the display screen 14a is received on an input surface PL3 separated from the display screen 14a by a distance D1 (D1 < D2). Thereby, even when the amounts of protrusion (distances L3 to L1) of the first hierarchy window 31 to the third hierarchy window 33 from the display screen 14a are small, a situation in which the user accidentally brings the object 100 into contact with the display screen 14a can be prevented. Further, even when the interval between the first hierarchy window 31 and the second hierarchy window 32 (L3 - L2) and the interval between the second hierarchy window 32 and the third hierarchy window 33 (L2 - L1) are small, the user's operation can be made easier by setting the interval between the input surfaces PL1 and PL2 (D3 - D2) and the interval between the input surfaces PL2 and PL3 (D2 - D1) wide.
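The following Python sketch illustrates the idea of Modification 1: which hierarchy window accepts input is decided by separate input planes at distances D3 > D2 > D1, decoupled from the (possibly smaller) display distances L3 > L2 > L1. The numeric values and names are assumptions for illustration only.
```python
# Input planes PL1-PL3, ordered from the plane farthest from the screen (D3)
# to the nearest (D1); distances in cm from the display screen 14a are assumed.
INPUT_PLANES_CM = [
    ("first_hierarchy_window_31", 12.0),   # D3
    ("second_hierarchy_window_32", 8.0),   # D2
    ("third_hierarchy_window_33", 4.0),    # D1
]

def window_receiving_input(z):
    """Return the hierarchy window whose input plane the representative point
    has crossed (the deepest plane with z <= D), or None if still above PL1."""
    target = None
    for name, d in INPUT_PLANES_CM:   # ordered outermost to innermost
        if z <= d:
            target = name
    return target

print(window_receiving_input(7.0))  # -> 'second_hierarchy_window_32'
```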
(Modification 2)
Next, a second modification of the imaging device according to the present embodiment will be described.
In the above embodiment, the higher the level of the displayed information, the larger the amount of protrusion from the display screen 14a, but the opposite arrangement is also possible. For example, as shown in FIG. 14 and FIGS. 15A to 15C, the first hierarchy window 41, which includes icons 41a describing higher-level concept information (for example, region names), is displayed at a position separated from the display screen 14a by the distance L1; the second hierarchy window 42, which includes icons 42a describing lower-level concept information (for example, prefecture names), is displayed at a position separated from the display screen 14a by the distance L2; and the third hierarchy window 43, which includes icons 43a describing still lower-level concept information (for example, city names), is displayed at a position separated from the display screen 14a by the distance L3. In this case, the information, whose amount increases at lower levels of the hierarchy, can be displayed on a wider screen.
  In the above embodiment, while a lower hierarchy window (for example, the second hierarchy window 32) is displayed, the upper hierarchy window (for example, the first hierarchy window 31) is erased leaving, for example, its left and right end portions. However, as long as the lower hierarchy window is visible, any portion of the upper hierarchy window may be left, or the upper hierarchy window may be erased entirely.
  In the embodiment described above, the imaging apparatus 1 has been described as a digital camera capable of capturing three-dimensional images, but the present invention can also be applied to a general digital single-lens reflex camera or digital video camera that captures two-dimensional images, and to various electronic devices having a shooting function and a display function, such as camera-equipped mobile phones. The present invention can also be applied to display devices such as televisions, personal computer displays, and tablet computers.
DESCRIPTION OF SYMBOLS 1 Imaging device 2L, 2R Lens unit 3L, 3R Lens drive mechanism 4L, 4R Aperture drive mechanism 5L, 5R Shutter drive mechanism 5a Shutter 6L, 6R Image sensor 7L, 7R Signal processing unit 10L, 10R Imaging unit 11 Camera shake detection unit 12 Posture detection unit 13 Operation input unit 13a Power switch 13b Release switch 13c Mode selector switch 13d Operation switch 13e Zoom switch 13f 2D/3D shooting changeover switch 14 Display unit 14a Display screen 15 Touch panel 16 Finger position sensor 16a Finger detection imaging element 16b Finger detection imaging element 16c Infrared light emitting element 16d, 16d(1) to 16d(3) Infrared light receiving element 17 Recording medium interface 17a Memory card 18 Nonvolatile memory 18a Program code 18b Control parameter 19 Volatile memory 19a Work area 20 System controller 20a Image processing unit 20b Compression/decompression unit 20c 3D image generation unit 20d Input signal receiving unit 20e Imaging condition setting unit 20f AF control unit 20g AE control unit 20h Timer counter 20i Display control unit 21 First hierarchy 22a, 22b, 22c Second hierarchy 31, 41 First hierarchy window 32, 42 Second hierarchy window 33, 43 Third hierarchy window 31a, 31b, 31c, 32a, 32b, 32c, 32d, 33a, 33b, 41a, 42a, 43a Icon 34 Selection status display window 100 Object

Claims (8)

  1. A display unit capable of displaying a three-dimensional image;
    A three-dimensional image generation unit that generates a plurality of images that are three-dimensionally displayed at positions corresponding to the hierarchy via the display unit, including information for each layer of information having a hierarchical structure;
    A detection unit that three-dimensionally detects the coordinates of a representative point of an object existing in a three-dimensional space including a display area of the plurality of images;
    Based on the coordinates of the representative point detected by the detection unit, an input signal reception unit that receives an input for information included in each of the plurality of images;
    A display device comprising:
  2.   The plurality of images are a plurality of display windows each including information for each layer, and are a plurality of display windows displayed at different positions from the display screen of the display unit. The display device according to claim 1.
  3.   The display device according to claim 2, wherein the plurality of display windows are displayed at positions separated from the display screen by distances corresponding to the conceptual superordinate-subordinate relationship of the information for each layer included in each of the display windows.
  4.   The display device according to claim 2, wherein each of the plurality of display windows includes a plurality of icons corresponding to information for each layer.
  5.   The display device according to claim 4, wherein a plurality of icons included in two adjacent display windows of the plurality of display windows are arranged so as not to overlap each other when the two display windows are projected onto the display screen.
  6.   The display device according to claim 4, wherein the input signal reception unit outputs a signal corresponding to an icon when the coordinates of the representative point overlap with the display area of any of the plurality of icons included in the plurality of display windows.
  7.   The display device according to claim 6, further comprising a display control unit that, when any one of the plurality of display windows is displayed and a signal corresponding to any of the plurality of icons included in that display window is output from the input signal reception unit, erases at least a part of the displayed display window and displays, among the plurality of display windows, the display window corresponding to the signal.
  8. An imaging device comprising:
    the display device according to any one of claims 1 to 7;
    a lens unit that collects light from a field of view;
    an imaging unit that generates electronic image data using the light collected by the lens unit; and
    a shooting condition setting unit that sets shooting conditions for the imaging unit based on the input signal received by the input signal reception unit,
    wherein the three-dimensional image generation unit generates a three-dimensional image based on the image data generated by the imaging unit.
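A minimal sketch of the hit test recited in claim 6 above, under assumptions not stated in the claims: each icon's display region is modelled as an axis-aligned box in the three-dimensional display space, and the finger position sensor reports the representative point as an (x, y, z) coordinate. The Icon class and hit_test function are hypothetical names, not the patent's own implementation.

    # Hypothetical sketch of the claim 6 hit test; the box model and all names are assumptions.
    from dataclasses import dataclass

    @dataclass
    class Icon:
        label: str
        x0: float   # minimum corner of the icon's display region
        y0: float
        z0: float
        x1: float   # maximum corner of the icon's display region
        y1: float
        z1: float

        def contains(self, p):
            x, y, z = p
            return (self.x0 <= x <= self.x1 and
                    self.y0 <= y <= self.y1 and
                    self.z0 <= z <= self.z1)

    def hit_test(icons, representative_point):
        """Return the icon whose display region overlaps the representative point, if any."""
        for icon in icons:
            if icon.contains(representative_point):
                return icon   # the input signal reception unit would emit a signal for this icon
        return None

    icons = [Icon("Tokyo",    0, 0, 20, 40, 20, 30),
             Icon("Kanagawa", 50, 0, 20, 90, 20, 30)]
    hit = hit_test(icons, (60.0, 10.0, 25.0))
    print(hit.label if hit else "no icon selected")   # Kanagawa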
JP2010173793A 2010-08-02 2010-08-02 Display device and imaging device Pending JP2012033104A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2010173793A JP2012033104A (en) 2010-08-02 2010-08-02 Display device and imaging device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2010173793A JP2012033104A (en) 2010-08-02 2010-08-02 Display device and imaging device

Publications (1)

Publication Number Publication Date
JP2012033104A true JP2012033104A (en) 2012-02-16

Family

ID=45846405

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2010173793A Pending JP2012033104A (en) 2010-08-02 2010-08-02 Display device and imaging device

Country Status (1)

Country Link
JP (1) JP2012033104A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10105735A (en) * 1996-09-30 1998-04-24 Terumo Corp Input device and picture display system
JP2005196530A (en) * 2004-01-08 2005-07-21 Alpine Electronics Inc Space input device and space input method
JP2007317050A (en) * 2006-05-29 2007-12-06 Nippon Telegr & Teleph Corp <Ntt> User interface system using three-dimensional display
JP2008210348A (en) * 2007-02-28 2008-09-11 Pfu Ltd Image display device

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013175113A (en) * 2012-02-27 2013-09-05 Casio Comput Co Ltd Information processing device, information processing method and program
JP2014232475A (en) * 2013-05-30 2014-12-11 京セラドキュメントソリューションズ株式会社 Display device, electronic apparatus, and image forming apparatus
JP2015103073A (en) * 2013-11-26 2015-06-04 京セラドキュメントソリューションズ株式会社 Operation display device
JP2016024752A (en) * 2014-07-24 2016-02-08 セイコーエプソン株式会社 GUI device
JP2017027595A (en) * 2015-07-17 2017-02-02 モトローラ モビリティ エルエルシーMotorola Mobility Llc Biometric authentication system with proximity sensor
JP2017182500A (en) * 2016-03-30 2017-10-05 富士通株式会社 Input device, input program, and input method
JP2018142251A (en) * 2017-02-28 2018-09-13 株式会社コロプラ Method for providing virtual reality, program for causing computer to execute method, and information processing apparatus for executing program

Similar Documents

Publication Publication Date Title
JP2012033104A (en) Display device and imaging device
US10274706B2 (en) Image capture control methods and apparatus
US9423588B2 (en) Methods and apparatus for supporting zoom operations
CN104349051B (en) The control method of object detection device and object detection device
US20120019528A1 (en) Display apparatus, display method, and computer-readable recording medium
EP3058713B1 (en) Image capture control methods and apparatus
KR101851800B1 (en) Imaging control device and imaging control method
JP2011081480A (en) Image input system
US20110304706A1 (en) Video camera providing videos with perceived depth
US20110074928A1 (en) Image processing apparatus, camera, and image processing method
JP5704854B2 (en) Display device
US20120033046A1 (en) Image processing apparatus, image processing method, and program
US20110304693A1 (en) Forming video with perceived depth
CN108476284B (en) Image pickup apparatus
JP5530322B2 (en) Display device and display method
JP2010226362A (en) Imaging apparatus and control method thereof
JP2012015619A (en) Stereoscopic display device and stereoscopic photographing device
JP2012032964A (en) Display device
JP5754044B2 (en) Imaging apparatus and image communication system
JP5675197B2 (en) Display device
JP2019087924A (en) Image processing apparatus, imaging apparatus, image processing method, and program
JP2019087923A (en) Imaging apparatus, imaging method, and program
JP5586377B2 (en) Display device
JP2012058903A (en) Display device and imaging device
TWI505708B (en) Image capture device with multiple lenses and method for displaying stereo image thereof

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20130524

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20131212

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20140107

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20140218

A02 Decision of refusal

Free format text: JAPANESE INTERMEDIATE CODE: A02

Effective date: 20140610