JP2009053748A - Image processing apparatus, image processing program, and camera - Google Patents


Info

Publication number
JP2009053748A
Authority
JP
Japan
Prior art keywords
image
image processing
processing apparatus
subject
apparatus according
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2007217219A
Other languages
Japanese (ja)
Inventor
Yutaka Iwasaki
Kyoichi Suwa
豊 岩崎
恭一 諏訪
Original Assignee
Nikon Corp
株式会社ニコン
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nikon Corp, 株式会社ニコン filed Critical Nikon Corp
Priority to JP2007217219A priority Critical patent/JP2009053748A/en
Publication of JP2009053748A publication Critical patent/JP2009053748A/en


Abstract

PROBLEM TO BE SOLVED: To provide an image processing apparatus, an image processing program, and a camera capable of enhancing the three-dimensional effect of an image.

SOLUTION: The image processing apparatus 1 has an input unit 11 for inputting image data having distance information to a subject, and correction units 11 and 12 for correcting the image data so as to change the three-dimensional effect of the image based on the distance information.

COPYRIGHT: (C)2009, JPO&INPIT

Description

  The present invention relates to an image processing apparatus, an image processing program, and a camera.

  For example, a camera with a short actual focal length, such as an electronic camera whose imaging area is small compared with that of a 35 mm film camera, often produces flat captured images with a poor sense of depth. A technique is known that performs image processing to add blur to such captured images (see Patent Document 1).

JP 2003-125281 A

  Merely adding blur to an image, however, leaves the three-dimensional effect of the image poor.

(1) An image processing apparatus according to the present invention includes an input unit that inputs image data having distance information to a subject, and a correction unit that corrects the image data so as to change the stereoscopic effect of the image based on the distance information.
(2) In the image processing apparatus according to claim 1, the correction unit can also correct the image data so as to increase the depth of the image.
(3) In the image processing apparatus according to claim 1, the correction unit can also correct the image data so that the depth of at least a part of the subject included in the image is increased.
(4) In the image processing apparatus according to claim 3, the correction unit can also correct the shape of the subject using a perspective projection method.
(5) In the image processing apparatus according to claim 3, the correction unit can add shade due to the unevenness of the subject.
(6) In the image processing apparatus according to claim 3, the correction unit can add a shadow cast by the subject.
(7) In the image processing apparatus according to claim 6, the image may include a plurality of subjects, and the image data may include distance information to each subject. In this case, the correction unit can add a shadow so that the shadow of one subject falls on another subject.
(8) In the image processing device according to any one of claims 1 to 3, the image may include a plurality of subjects, and the image data may include distance information to each subject. In this case, the correction unit can also correct the image data so that the depth between the plurality of subjects is increased.
(9) In the image processing apparatus according to the eighth aspect, the correction unit can reduce the shape of a subject located far away.
(10) In the image processing apparatus according to the eighth aspect, the correction unit can add image blur to a subject located far away.
(11) The image processing apparatus according to claim 1 may include a conversion unit that converts at least a part of distance information to the subject. The correction unit in this case can also correct the image data according to the converted distance information.
(12) In the image processing apparatus according to the eleventh aspect, the conversion unit can also convert the distance information so that the distance becomes larger.
(13) In the image processing apparatus according to the twelfth aspect, the conversion unit can increase the distance conversion amount as the distance information becomes longer.
(14) In the image processing apparatus according to any one of claims 1 to 13, the input unit can also input stereo image data.
(15) In the image processing apparatus according to any one of claims 1 to 13, the input unit can also input image data having focus adjustment information for each divided region of the image.
(16) An image processing program according to the present invention causes a computer to execute a process of inputting image data, a process of calculating the distance to a subject included in the image using distance information included in the image data, and a process of correcting the image data so as to change the stereoscopic effect of the image based on the distance to the subject.
(17) A camera according to the present invention includes the image processing device according to any one of claims 1 to 15.

  According to the present invention, it is possible to enhance the stereoscopic effect of an image.

  The best mode for carrying out the present invention will be described below with reference to the drawings. FIG. 1 is a block diagram illustrating a main configuration of a stereo electronic camera 1 according to an embodiment of the present invention. The electronic camera 1 is controlled by the CPU 11.

  The left and right photographing lenses 21L and 21R form subject images on the imaging surfaces of the corresponding imaging elements 22L and 22R, respectively. The imaging elements 22L and 22R are configured by CCD image sensors or the like; they capture the subject image on the imaging surface and output imaging signals to the corresponding imaging circuits 23L and 23R. For example, R (red), G (green), and B (blue) color filters are provided on the imaging surfaces of the imaging elements 22L and 22R corresponding to the pixel positions. Since the imaging elements 22L and 22R capture the subject image through the color filters, the photoelectric conversion signals they output carry color information of the RGB color system.

  The imaging circuits 23L and 23R perform analog processing (such as gain control) on the photoelectric conversion signals output from the imaging elements 22L and 22R, and convert the analog imaging signals into digital data using built-in A/D conversion circuits.

  The CPU 11 receives the signals output from each block, performs predetermined calculations, and outputs control signals based on the calculation results to each block. The image processing circuit 12 is configured, for example, as an ASIC, and performs image processing on the digital image signals input from the imaging circuits 23L and 23R. The image processing includes, for example, contour enhancement, color temperature adjustment (white balance adjustment), and format conversion of the image signal. The image processing circuit 12 also performs the stereoscopic image reproduction processing described later.

  The image compression circuit 13 performs JPEG compression at a predetermined compression ratio on the image signal processed by the image processing circuit 12. The display image creation circuit 15 creates display data for displaying an image on the liquid crystal monitor 16.

  The recording medium 30 is, for example, a memory card that can be attached to and detached from the electronic camera 1. Image files containing image data and associated information are recorded on the recording medium 30 according to instructions from the CPU 11, and recorded image files can be read according to instructions from the CPU 11.

  The buffer memory 14 is used to temporarily store data before, during, and after image processing, to hold image files before they are recorded on the recording medium 30, and to hold image files read from the recording medium 30.

  The operation member 17 includes the various buttons and switches of the electronic camera 1, and outputs operation signals corresponding to operations such as pressing the release button, switching the mode switch, and menu operations to the CPU 11. It can also be used to specify an image to be subjected to the stereoscopic image processing described later.

  The focus detection device 18 detects the focus adjustment state of the photographing lenses 21L and 21R by a known phase difference detection method using light beams corresponding to a focus detection region. Specifically, a pair of subject images is formed on an autofocus sensor (not shown) via a focus detection optical system (not shown). The CPU 11 calculates the adjustment state (defocus amount) of the focal position of the photographing lenses 21L and 21R based on the relative distance between the pair of subject images on the sensor.

  The lens driving mechanism 19 moves focus lenses (not shown) constituting the photographing lenses 21L and 21R forward and backward in the optical axis direction according to instructions from the CPU 11, whereby focus adjustment is performed.

  The electronic camera 1 is configured to be able to switch between a shooting mode and a playback mode. The shooting mode is an operation mode in which a subject image is shot and the data of the shot image is stored on the recording medium 30 as an image file. The playback mode is a mode in which captured image data is read from the recording medium 30 and a reproduced image based on that data is displayed on the liquid crystal monitor 16.

<3D image reproduction processing>
The electronic camera 1 of the present embodiment has a stereoscopic image reproduction function that processes an image so as to deepen its depth. Specifically:
[1] The distance to the subject of interest is measured by stereo ranging using a pair of stereo images;
[2] for at least one of the stereo images, shade due to the unevenness of the subject of interest is added or emphasized based on the distance measurement result;
[3] the shadow cast by the subject of interest is added or emphasized;
[4] image processing that enhances the depth (distance) of the image is performed; and
[5] the resulting stereoscopic reproduction image is displayed on the liquid crystal monitor 16 and recorded on the recording medium 30.
When an operation signal instructing execution of the stereoscopic image reproduction processing is input from the operation member 17, the CPU 11 of the electronic camera 1 starts the following processing.

  FIG. 2 is a flowchart for explaining the flow of the stereoscopic image reproduction process performed by the CPU 11. In step S11 of FIG. 2, the CPU 11 reads a pair of stereo image files from the recording medium 30, develops the image data in each file in the buffer memory 14, and proceeds to step S12. The stereo image file to be read is instructed by an operation signal from the operation member 17.

  FIG. 3 illustrates a stereo image: FIG. 3A is the left image and FIG. 3B is the right image. The electronic camera 1 captures a stereo image pair containing common subjects (person A and person B) with the left and right imaging optical systems. As shown in FIGS. 3A and 3B, parallax arises between the pair of stereo images, so the relative positions of person A and person B differ between the two images. Such stereo image files are recorded on the recording medium 30.

In step S12 of FIG. 2, the CPU 11 performs stereo distance measurement. Specifically:
[1] Corresponding feature points are extracted from the data of the left and right images in the buffer memory 14;
[2] stereo distance measurement is performed based on the data positions in the left and right images corresponding to the extracted feature points.

  FIG. 4 is a schematic diagram explaining stereo distance measurement in the horizontal direction. In FIG. 4, a triangle is formed by the left-image shooting position, the right-image shooting position, and the position of a subject common to both images (person A, for example). The CPU 11 calculates the subject distance Z as disclosed in, for example, Japanese Patent Laid-Open No. 63-18213.

  In this embodiment, person A and person B are the subjects of interest, so known face detection processing is performed to extract corresponding face images from the left and right images. For example, when acquiring distance information for the face of person A, the face area 22 detected in the left image (illustrated in FIG. 3A) and the face area 21 detected in the right image (FIG. 3B) are used as calculation target areas, and the subject distance Z is calculated using the pixel data contained in both. If the subject of interest is not a person, corresponding pixel data in the left and right images may be extracted using a known corresponding-point detection technique for stereo images and used as the calculation target areas.

The subject distance Z is calculated by the following equation (1).
Z = L × f / d (1)
Here, L is the distance between the optical axes of the taking lenses at the time of stereo shooting (the baseline length between the left and right shooting positions), f is the focal length of the photographing lenses 21L and 21R, and d is the parallax (= X_L − X_R), where X_L is the data position on the left image subject to distance measurement and X_R is the corresponding data position on the right image.

  The CPU 11 performs the above calculation for each pair of corresponding pixel data between face area 21 and face area 22, calculating the subject distance Z in units of pixels. Note that by performing the same calculation in the vertical direction as in the horizontal direction, three-dimensional measurement information can be acquired for the photographed face. Further, by performing the same calculation on regions other than the face, three-dimensional measurement information can be acquired for the photographed body.
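For illustration, the triangulation of equation (1) can be sketched as follows. The helper name, the zero-parallax check, and the use of a single shared length unit are assumptions for this sketch, not part of the patent:

```python
def subject_distance(baseline, focal_length, x_left, x_right):
    """Subject distance Z from equation (1): Z = L * f / d,
    where d = X_L - X_R is the parallax.
    baseline, focal_length, x_left, x_right share one length unit."""
    d = x_left - x_right
    if d <= 0:
        raise ValueError("parallax must be positive for a finite distance")
    return baseline * focal_length / d

# e.g. baseline 60 mm, focal length 35 mm, parallax 0.7 mm -> Z = 3000 mm
```

Applying this per corresponding pixel pair, as the CPU 11 does for the face areas, yields a per-pixel depth map of the subject.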

  In step S13 of FIG. 2, the CPU 11 determines the instructed processing content. The CPU 11 proceeds to step S14 when addition/emphasis of a shadow is instructed, to step S15 when addition/emphasis of shade is instructed, to step S16 when a change in subject size is instructed, and to step S17 when blur addition is instructed. The processing content is instructed by an operation signal from the operation member 17.

<Shade addition>
In step S15, the CPU 11 instructs the image processing circuit 12 to perform shade addition/emphasis processing. Using the stereo distance measurement result, the image processing circuit 12 adds shade as follows. For each predetermined area of the image, it finds pixel data P whose luminance is equal to or higher than a predetermined value, and then halves the luminance of the image area consisting of pixel data whose distance Z is longer than the distance Z_B of that pixel data P and which lies within a predetermined direction range (for example, within ±10° of the lower-right 45° direction in the image). In this way, shade due to the unevenness of the subject of interest is added. FIG. 5 illustrates an image in which shade has been added to the right image (FIG. 3B) in the lower-right 45° direction.

  Further, for each predetermined region of the image, the image processing circuit 12 takes the luminance of the pixel data with the shortest distance Z as reference B, and lowers the luminance of pixel data whose distance Z is longer, thereby adding light and dark.

  When shade has already arisen in the captured image due to the illumination light at the time of shooting, the luminance may be reduced further, as illustrated in FIG. 6. The appearance of the shade to be added may also be made specifiable by an operation signal from the operation member 17.
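The shade rule above (darken pixels that are farther than a bright pixel and lie toward the lower-right diagonal) might be sketched as below. The function name, the brightness threshold, the diagonal reach, and the plain-list image representation are all illustrative assumptions:

```python
def add_shade(lum, depth, bright_thresh=200, reach=2):
    """Halve the luminance of pixels that lie along the lower-right
    45-degree diagonal from a bright pixel and are farther away
    (larger depth). lum and depth are equal-sized 2-D lists."""
    h, w = len(lum), len(lum[0])
    out = [row[:] for row in lum]
    for y in range(h):
        for x in range(w):
            if lum[y][x] < bright_thresh:
                continue  # only bright pixels P cast shade
            for step in range(1, reach + 1):
                ny, nx = y + step, x + step  # lower-right diagonal
                if ny < h and nx < w and depth[ny][nx] > depth[y][x]:
                    out[ny][nx] = lum[ny][nx] // 2
    return out
```

A real implementation would scan the ±10° fan of directions rather than the exact diagonal, but the depth comparison and luminance halving are the essential steps.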

<Shadow addition 1>
In step S14, the CPU 11 instructs the image processing circuit 12 to perform shadow addition/enhancement processing. The image processing circuit 12 adds a shadow to each subject of interest using the stereo distance measurement result and graphics techniques. FIG. 7 illustrates an image in which a shadow has been added to the right image (FIG. 3B) using parallel rays. The image processing circuit 12 assumes a virtual light source positioned so that the shadow of the foreground (person A) falls on the background (person B), and adds the shadow of person A cast onto person B by the parallel rays from the virtual light source.

<Shadow addition 2>
A divergent light source may be assumed instead of a parallel one. FIG. 8 illustrates an image in which a shadow has been added to the right image (FIG. 3B) using divergent rays. The image processing circuit 12 assumes a virtual light source positioned so that the shadow of the foreground (person A) falls on the background (person B), and adds the shadow of person A cast onto person B by the divergent rays from the virtual light source.

<Shadow addition 3>
A formal (schematic) shadow may be added without using the stereo distance measurement result or assuming a light source. FIG. 9 illustrates an image in which a formal shadow has been added to the subject of interest in the right image (FIG. 3B). Which of the processes "shadow addition 1" to "shadow addition 3" is performed is instructed by an operation signal from the operation member 17.
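Shadow addition with a virtual parallel-ray light source could be sketched as follows: each foreground pixel traces along the ray direction until it lands on a farther, non-foreground pixel, which is then darkened. The names, the single-channel luminance image, the unit ray step, and the darkening factor are assumptions of this sketch:

```python
def cast_shadow(img, mask, depth, light_dir=(1, 1), fill=0.3):
    """Darken background pixels hit by the shadow of the masked
    foreground subject under parallel rays travelling in light_dir.
    img: 2-D luminance list; mask: True on the foreground subject;
    depth: per-pixel distance from the stereo measurement."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    dy, dx = light_dir
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            # march along the ray until it strikes a farther surface
            for gap in range(1, max(h, w)):
                sy, sx = y + dy * gap, x + dx * gap
                if not (0 <= sy < h and 0 <= sx < w):
                    break
                if depth[sy][sx] > depth[y][x] and not mask[sy][sx]:
                    out[sy][sx] = int(img[sy][sx] * fill)
                    break
    return out
```

For a divergent light source ("shadow addition 2"), the ray direction would instead be computed per pixel from the virtual light position, but the march-and-darken loop is the same.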

<Deformation 1>
In step S16, the CPU 11 instructs the image processing circuit 12 to perform processing that enhances the depth (distance) of the image using the stereo distance measurement result. FIG. 10 illustrates an image in which person B, a distant-view subject in the right image (FIG. 3B), has been reduced in size.
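Reducing a distant-view subject can be sketched as a nearest-neighbour downscale of its image region, with a scale factor that shrinks faster than true perspective would. The gain parameter and function names are assumptions, not values from the patent:

```python
def shrink_factor(z, z_ref, gain=1.5):
    """Scale for a subject at distance z: 1.0 at the reference
    distance z_ref, shrinking faster than true perspective
    (gain > 1) to exaggerate the sense of depth."""
    return (z_ref / z) ** gain if z > z_ref else 1.0

def shrink_region(region, factor):
    """Nearest-neighbour downscale of a 2-D pixel region."""
    h = max(1, int(len(region) * factor))
    w = max(1, int(len(region[0]) * factor))
    return [[region[int(y / factor)][int(x / factor)]
             for x in range(w)] for y in range(h)]
```

The shrunken region would then be composited back at person B's position, with the vacated pixels filled from the surrounding background.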

<Deformation 2>
The subject of interest may instead be deformed using the stereo distance measurement result and the perspective projection method of graphics. For each subject of interest, the image processing circuit 12 generates an image with each distance Z changed to (Z + α) so as to extend the distance difference between the pixel data with the shortest distance Z and the pixel data with the longest distance Z. This makes the depth of the subject of interest appear deeper than it actually is. If α is made larger as the distance Z increases, the depth can be emphasized more strongly. Which of "deformation 1" and "deformation 2" is performed is instructed by an operation signal from the operation member 17.
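The distance conversion Z → (Z + α), with α growing with Z, can be written out as below. Making α proportional to (Z − Zmin) is one possible choice satisfying the description; the patent does not prescribe a particular form:

```python
def stretch_depth(depths, k=0.2):
    """Convert each distance Z to Z + alpha with
    alpha = k * (Z - Zmin), so the nearest pixel stays put and the
    span between nearest and farthest pixels widens by factor 1+k."""
    z_min = min(depths)
    return [z + k * (z - z_min) for z in depths]
```

The converted distances would then drive a perspective re-projection of the subject's pixels, deepening its apparent relief.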

<Add blur>
In step S17, the CPU 11 instructs the image processing circuit 12 to perform blurring processing using the stereo distance measurement result. FIG. 11 illustrates an image in which the area containing person B, a distant-view subject in the right image (FIG. 3B), has been blurred. In the blurring process, low-pass filter (LPF) processing is applied to the image data to attenuate the high spatial-frequency components of the image, which softens edges and lowers contrast in the processed area.
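The low-pass filtering of a distant-view region might be sketched with a simple box filter applied only inside a mask. The patent specifies only "LPF processing"; the box kernel, radius, and names here are illustrative assumptions:

```python
def box_blur(img, mask, radius=1):
    """Box low-pass filter ((2*radius+1)-square kernel) applied only
    where mask is True, attenuating high spatial frequencies there.
    img is a 2-D list of integer luminances."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            acc, n = 0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        acc += img[ny][nx]
                        n += 1
            out[y][x] = acc // n
    return out
```

Here the mask would be derived from the stereo distance map, selecting pixels whose distance Z exceeds a chosen threshold.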

  In step S18 of FIG. 2, the CPU 11 determines whether a combination of processes has been instructed for stereoscopic image reproduction. If another process is also instructed, the CPU 11 makes an affirmative determination in step S18 and returns to step S13, where it performs the other process. FIG. 12 illustrates an image obtained by applying to the right image (FIG. 3B) a combination of the shade addition illustrated in FIG. 5, the shadow addition 2 illustrated in FIG. 8, and the blur addition illustrated in FIG. 11.

  On the other hand, if other processing is not instructed, the CPU 11 makes a negative determination in step S18 and proceeds to step S19.

  In step S19, the CPU 11 displays the stereoscopic reproduction image after image processing on the liquid crystal monitor 16, and proceeds to step S20. In step S20, the CPU 11 determines whether the user is satisfied with the processing result. When an operation signal indicating "OK" is input from the operation member 17, the CPU 11 makes an affirmative determination in step S20 and proceeds to step S21. If no such operation signal is input, the CPU 11 makes a negative determination in step S20 and returns to step S13. On returning to step S13, the CPU 11 displays, for example, a message on the liquid crystal monitor 16 prompting the user to instruct new processing content, and waits for an operation signal from the operation member 17.

  In step S21, the CPU 11 determines whether to save the stereoscopic reproduction image after image processing. When an operation signal indicating "save" is input from the operation member 17, the CPU 11 makes an affirmative determination in step S21 and proceeds to step S22, where it stores the stereoscopic reproduction image file on the recording medium 30 and ends the processing of FIG. 2. If no such operation signal is input, the CPU 11 makes a negative determination in step S21 and ends the processing of FIG. 2.

According to the embodiment described above, the following operational effects can be obtained.
(1) Since stereo distance measurement is performed using a stereo image, distance information to a subject can be obtained in pixel units (or in units of predetermined pixel blocks). Since the processing for deepening the depth of the image uses this distance information, the stereoscopic effect of a flat, shallow-looking image can be enhanced.

(2) For example, by processing the person A (object), the three-dimensional effect of the object can be enhanced.

(3) By using the acquired distance information and deforming the shape of the object using the perspective projection method, it is possible to enhance the sense of depth about the object.

(4) By using the acquired distance information and adding a shade due to the unevenness of the person A (object), the stereoscopic effect of the object can be enhanced.

(5) By using the acquired distance information and adding a shadow to the person A (object), the stereoscopic effect of the object can be enhanced.

(6) By using the acquired distance information and adding the shadow of the person A (object) to the person B (other object), the stereoscopic effect of the image can be enhanced.

(7) By using the acquired distance information and processing the image so that the distance between person A (an object) and person B (another object) appears greater than the actual distance, the stereoscopic effect of the image can be enhanced.

(8) The stereoscopic effect of the image can be enhanced by using the acquired distance information and reducing the far person B (object) to be smaller than the actual size.

(9) Using the acquired distance information, the stereoscopic effect of the image can be enhanced by processing the image of the distant person B (object) to be blurred.

(Modification 1)
In the description of the shade addition, the luminance of the target area of the image is lowered, but the target area may instead be filled in, as when adding a shadow. This expresses the unevenness of the subject more strongly. A line may also be drawn in the target area.

(Modification 2)
In the shade addition, the target area may also be made wider than the actual area determined from the stereo distance measurement result. This makes it possible to emphasize the unevenness of the subject surface beyond its actual extent.

(Modification 3)
In the description of the shade addition, the shade addition direction is within ±10° of the lower-right 45° direction of the image, but this direction may be changed as appropriate.

(Modification 4)
In the description of shadow additions 1 to 3, the target area of the image is filled in, but a line may instead be drawn in the target area. This renders the shadow more softly than filling it in.

(Modification 5)
In shadow additions 1 and 2, the direction and height of the light source are assumed so that the shadow of the foreground (person A) falls on the background (person B), but the virtual light source may be placed elsewhere. A plurality of virtual light sources may also be assumed, with shadows added from each.

(Modification 6)
As another example of deformation, in an image taken indoors, a structure on a wall (for example, a window or a door) may be reduced in size. If the reduced structure is also moved upward in the image so as to appear farther away than its actual position, the depth of the image can be expressed further.

(Modification 7)
In the case of an image taken outdoors in an environment where positioning information from the GPS system is available, the positioning information may be used to determine the direction of shade addition and the direction of the virtual light source for shadow addition. In this case, the CPU 11 determines the position of the virtual light source according to the direction and altitude of the sun identified from the positioning information. In this way, the direction of the shadows and shade produced by natural light can be aligned with those added by image processing.

(Modification 8)
In the above description, the processing for stereoscopic image reproduction is applied to the right image of the stereo pair. It may instead be applied to the left image or to both images.

(Modification 9)
An example was described in which a stereo image is shot and recorded by the twin-lens electronic camera 1, but the same processing may be performed on a stereo image shot by changing the shooting position with a monocular camera, or on an image shot by a trinocular camera.

(Modification 10)
In the case of image data having distance measurement information (for example, focus adjustment information) for each divided region of the captured image, the present invention can be applied without using a stereo image. In this case, the electronic camera 1 calculates the adjustment state (defocus amount) of the focal position of the photographing lenses 21L and 21R in each of a plurality of focus detection areas in the shooting screen. The defocus amount serves as shooting-distance information to the object (subject) present in each focus detection area. If the defocus amount of each focus detection area is stored in association with the captured image, the stereoscopic image reproduction processing can be performed on that image.

(Modification 11)
The present invention can also be applied, without a stereo image, when captured images in different focus adjustment states are acquired sequentially while a focus lens (not shown) constituting the photographing lenses 21L and 21R is moved from the closest end to the infinity end. In this case, distance measurement information (for example, focus lens position information) is stored in association with each captured image. Each object (subject) in the shooting screen is in focus in one of the images (the contrast of the object area there is higher than when the object is out of focus), and the focus lens position information of that image is used as shooting-distance information to the object.

(Modification 12)
The distance information to the subjects in the shooting screen may be changed, and each subject rearranged according to the changed distance information, to enhance the stereoscopic effect of the image. For example, as shown in FIG. 4, when person A and person B are present as subjects, the distance from the electronic camera to person B is changed to be longer than the distance actually measured by the camera. Specifically, the distance information to person B stored with the image data is replaced with longer-distance information. When person B is then rearranged in the image based on the changed distance information, an image similar to FIG. 10, in which the distant person B is reduced, is obtained.

  Changing the distance information and rearranging the subjects according to the changed distances thus also enhances the stereoscopic effect of the image. When there are a plurality of subjects in one image, the amount by which each distance is changed may be kept small for subjects close to the electronic camera and increased for subjects farther away. Doing so enhances the stereoscopic effect across the entire image.

(Modification 13)
An example was described in which stereoscopic image reproduction is executed in the electronic camera 1, but an image processing apparatus may instead be configured by causing the personal computer 10 shown in FIG. 13 to execute an image processing program that performs the processing of FIG. 2. The program is loaded into the data storage device of the personal computer 10, and the computer is then used as an image processing apparatus by executing the program.

  The program may be loaded into the personal computer 10 by setting a recording medium 104 such as a CD-ROM storing the program in the personal computer 10, or via a communication line 101 such as a network. In the latter case, the program is stored on the hard disk device 103 of a server (computer) 102 connected to the communication line 101. The image processing program can thus be supplied as a computer program product in various forms, such as via the recording medium 104 or the communication line 101.

  The above description is merely an example, and is not limited to the configuration of the above embodiment.

FIG. 1 is a block diagram illustrating the main configuration of an electronic camera according to an embodiment of the present invention.
FIG. 2 is a flowchart explaining the flow of the stereoscopic image reproduction processing performed by the CPU.
FIG. 3 illustrates a stereo image: FIG. 3A shows the left image and FIG. 3B the right image.
FIG. 4 is a schematic diagram explaining stereo distance measurement in the horizontal direction.
FIG. 5 illustrates an image to which shade has been added.
FIG. 6 illustrates an image in which the added shade is further darkened.
FIG. 7 illustrates an image to which a shadow has been added using parallel rays.
FIG. 8 illustrates an image to which a shadow has been added using divergent rays.
FIG. 9 illustrates an image to which a formal shadow has been added.
FIG. 10 illustrates an image in which a distant person has been reduced in size.
FIG. 11 illustrates an image in which the area containing a distant person has been blurred.
FIG. 12 illustrates an image processed with a combination of processes.
FIG. 13 illustrates a computer apparatus.

Explanation of symbols

1 ... Electronic camera
11 ... CPU
12 ... Image processing circuit
16 ... Liquid crystal monitor
17 ... Operation member
21L, 21R ... Shooting lens
22L, 22R ... Imaging element
30 ... Recording medium
A, B ... Person

Claims (17)

  1. An input unit for inputting image data having distance information to the subject;
    An image processing apparatus comprising: a correction unit that corrects the image data so as to change a stereoscopic effect of the image based on the distance information.
  2. The image processing apparatus according to claim 1.
    The image processing apparatus, wherein the correction unit corrects the image data so as to increase the depth of the image.
  3. The image processing apparatus according to claim 1.
    The image processing apparatus, wherein the correction unit corrects the image data so as to increase the depth of at least a part of a subject included in the image.
  4. The image processing apparatus according to claim 3.
    The image processing apparatus, wherein the correction unit corrects the shape of the subject using a perspective projection method.
  5. The image processing apparatus according to claim 3.
    The image processing apparatus, wherein the correction unit adds shading due to the unevenness of the subject.
  6. The image processing apparatus according to claim 3.
    The image processing apparatus, wherein the correction unit adds a shadow of the subject.
  7. The image processing apparatus according to claim 6.
    The image includes a plurality of subjects, and the image data has distance information to each subject,
    The image processing apparatus, wherein the correction unit adds the shadow so that the shadow of one subject falls on another subject.
  8. The image processing apparatus according to any one of claims 1 to 3,
    The image includes a plurality of subjects, and the image data has distance information to each subject,
    The image processing apparatus, wherein the correction unit corrects the image data so as to increase the depth between the plurality of subjects.
  9. The image processing apparatus according to claim 8.
    The image processing apparatus, wherein the correction unit reduces the size of a subject located far away.
  10. The image processing apparatus according to claim 8.
    The image processing apparatus, wherein the correction unit adds blur to a subject located far away.
  11. The image processing apparatus according to claim 1.
    further comprising a conversion unit that converts at least a part of the distance information to the subject,
    wherein the correction unit corrects the image data in accordance with the converted distance information.
  12. The image processing apparatus according to claim 11.
    The image processing apparatus, wherein the conversion unit converts the distance information so that the distance becomes larger.
  13. The image processing apparatus according to claim 12.
    The image processing apparatus, wherein the conversion unit increases the amount of distance conversion as the distance indicated by the distance information increases.
  14. The image processing apparatus according to any one of claims 1 to 13,
    The image processing apparatus, wherein the input unit inputs stereo image data.
  15. The image processing apparatus according to any one of claims 1 to 13,
    The image processing apparatus, wherein the input unit inputs image data having focus adjustment information for each divided region of the image.
  16. An image processing program for causing a computer device to execute:
    a process of inputting image data;
    a process of calculating the distance to a subject included in the image using distance information included in the image data; and
    a process of correcting the image data so as to change the stereoscopic effect of the image based on the distance to the subject.
  17. A camera equipped with the image processing apparatus according to claim 1.
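The corrections recited in claims 10, 12, and 13 — blurring distant subjects, and converting the distance information so that farther points are shifted by a larger amount — can be sketched, purely as an illustration of the claimed behaviour, as follows. The quadratic conversion curve, the linear blur-radius model, and all parameter values are assumptions for illustration, not the patent's method.

```python
def convert_distance(d, gain=0.5):
    """Convert a distance so that larger distances are stretched more.

    Illustrates claims 12 and 13: the conversion amount (the difference
    from the original distance) grows with the distance itself, so
    nearby subjects barely move while distant ones recede further,
    deepening the apparent depth of the scene.
    """
    return d + gain * d * d


def blur_radius(d, threshold=10.0, scale=0.3):
    """Blur radius (in assumed pixel units) for a subject at distance d.

    Illustrates claim 10: subjects nearer than the threshold stay
    sharp, and the blur applied to a subject grows with its distance
    beyond the threshold.
    """
    return 0.0 if d <= threshold else scale * (d - threshold)
```

Applying `convert_distance` before perspective-based size correction (claims 4 and 9) would shrink distant subjects more strongly than near ones, which is the stereoscopic-effect enhancement the claims describe.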
JP2007217219A 2007-08-23 2007-08-23 Image processing apparatus, image processing program, and camera Pending JP2009053748A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007217219A JP2009053748A (en) 2007-08-23 2007-08-23 Image processing apparatus, image processing program, and camera

Publications (1)

Publication Number Publication Date
JP2009053748A true JP2009053748A (en) 2009-03-12

Family

ID=40504813

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2007217219A Pending JP2009053748A (en) 2007-08-23 2007-08-23 Image processing apparatus, image processing program, and camera

Country Status (1)

Country Link
JP (1) JP2009053748A (en)

Cited By (68)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011010128A (en) * 2009-06-26 2011-01-13 Canon Inc Image reproducing apparatus, image capturing apparatus, and control method therefor
US8836760B2 (en) 2009-06-26 2014-09-16 Canon Kabushiki Kaisha Image reproducing apparatus, image capturing apparatus, and control method therefor
JP2011077900A (en) * 2009-09-30 2011-04-14 Fujifilm Corp Image processing apparatus, camera, and image processing method
JP2011119926A (en) * 2009-12-02 2011-06-16 Sharp Corp Video processing apparatus, video processing method and computer program
JP2011188004A (en) * 2010-03-04 2011-09-22 Victor Co Of Japan Ltd Three-dimensional video imaging device, three-dimensional video image processing apparatus and three-dimensional video imaging method
US9240072B2 (en) 2010-09-08 2016-01-19 Panasonic Intellectual Property Management Co., Ltd. Three-dimensional image processing apparatus, three-dimensional image-pickup apparatus, three-dimensional image-pickup method, and program
WO2012032778A1 (en) * 2010-09-08 2012-03-15 パナソニック株式会社 Three-dimensional image processing apparatus, three-dimensional image-pickup apparatus, three-dimensional image-pickup method, and program
JP2012070305A (en) * 2010-09-27 2012-04-05 Toshiba Corp Image processor
US9143755B2 (en) 2010-09-27 2015-09-22 Kabushiki Kaisha Toshiba Image processing device
JP2012100116A (en) * 2010-11-02 2012-05-24 Sony Corp Display processing device, display processing method, and program
CN102572466A (en) * 2010-11-02 2012-07-11 索尼公司 Display processing apparatus, display processing method, and display processing program
US9113074B2 (en) 2010-12-22 2015-08-18 Olympus Corporation Imaging apparatus, imaging method, and computer readable storage medium for applying special effects processing to an automatically set region of a stereoscopic image
JP2012142779A (en) * 2010-12-28 2012-07-26 Olympus Imaging Corp Imaging apparatus and imaging program
JP5909704B2 (en) * 2011-01-13 2016-04-27 パナソニックIpマネジメント株式会社 Stereoscopic image processing apparatus, stereoscopic image processing method, and program
WO2012095914A1 (en) * 2011-01-13 2012-07-19 パナソニック株式会社 Stereoscopic image processing device, stereoscopic image processing method, and program
US9064331B2 (en) 2011-01-13 2015-06-23 Panasonic Intellectual Property Management Co., Ltd. Three-dimensional image processing apparatus, three-dimensional image processing method, and program
US8781215B2 (en) 2011-02-07 2014-07-15 Samsung Electronics Co., Ltd. Image processing apparatus and control method thereof
JP2012205148A (en) * 2011-03-25 2012-10-22 Kyocera Corp Electronic apparatus
US9462259B2 (en) 2011-03-25 2016-10-04 Kyocera Corporation Electronic device
US9317957B2 (en) 2012-01-26 2016-04-19 Sony Corporation Enhancement of stereoscopic effect of an image through use of modified depth information
JP2013157671A (en) * 2012-01-26 2013-08-15 Sony Corp Image processing device, image processing method, program, terminal device, and image processing system
EP2683169A3 (en) * 2012-07-03 2017-04-12 GoPro, Inc. Image blur based on 3D depth information
US9185387B2 (en) 2012-07-03 2015-11-10 Gopro, Inc. Image blur based on 3D depth information
US10015469B2 (en) 2012-07-03 2018-07-03 Gopro, Inc. Image blur based on 3D depth information
JP2014014076A (en) * 2012-07-03 2014-01-23 Woodman Labs Inc Image blur based on 3d depth information
US9838669B2 (en) 2012-08-23 2017-12-05 Stmicroelectronics (Canada), Inc. Apparatus and method for depth-based image scaling of 3D visual content
JP2014042238A (en) * 2012-08-23 2014-03-06 Stmicroelectronics (Canada) Inc Apparatus and method for depth-based image scaling of 3d visual content
JPWO2014041860A1 (en) * 2012-09-14 2016-08-18 ソニー株式会社 Image processing apparatus, image processing method, and program
CN104620283A (en) * 2012-09-14 2015-05-13 索尼公司 Image-processing device, image-processing method, and program
US9600888B2 (en) 2012-09-14 2017-03-21 Sony Corporation Image processing device, image processing method, and program
WO2014041860A1 (en) * 2012-09-14 2014-03-20 ソニー株式会社 Image-processing device, image-processing method, and program
US10074013B2 (en) 2014-07-23 2018-09-11 Gopro, Inc. Scene and activity identification in video summary generation
US9792502B2 (en) 2014-07-23 2017-10-17 Gopro, Inc. Generating video summaries for a video using video summary templates
US10339975B2 (en) 2014-07-23 2019-07-02 Gopro, Inc. Voice-based video tagging
US10262695B2 (en) 2014-08-20 2019-04-16 Gopro, Inc. Scene and activity identification in video summary generation
US10192585B1 (en) 2014-08-20 2019-01-29 Gopro, Inc. Scene and activity identification in video summary generation based on motion detected in a video
US10096341B2 (en) 2015-01-05 2018-10-09 Gopro, Inc. Media identifier generation for camera-captured media
JP2015173457A (en) * 2015-04-16 2015-10-01 株式会社ニコン Image processing device, image processing method, and program
US10338955B1 (en) 2015-10-22 2019-07-02 Gopro, Inc. Systems and methods that effectuate transmission of workflow between computing platforms
US10402445B2 (en) 2016-01-19 2019-09-03 Gopro, Inc. Apparatus and methods for manipulating multicamera content using content proxy
US9787862B1 (en) 2016-01-19 2017-10-10 Gopro, Inc. Apparatus and methods for generating content proxy
US10078644B1 (en) 2016-01-19 2018-09-18 Gopro, Inc. Apparatus and methods for manipulating multicamera content using content proxy
US9871994B1 (en) 2016-01-19 2018-01-16 Gopro, Inc. Apparatus and methods for providing content context using session metadata
US10129464B1 (en) 2016-02-18 2018-11-13 Gopro, Inc. User interface for creating composite images
JP2016149772A (en) * 2016-03-01 2016-08-18 京セラ株式会社 Electronic apparatus
US9972066B1 (en) 2016-03-16 2018-05-15 Gopro, Inc. Systems and methods for providing variable image projection for spherical visual content
US10402938B1 (en) 2016-03-31 2019-09-03 Gopro, Inc. Systems and methods for modifying image distortion (curvature) for viewing distance in post capture
US10341712B2 (en) 2016-04-07 2019-07-02 Gopro, Inc. Systems and methods for audio track selection in video editing
US9838730B1 (en) 2016-04-07 2017-12-05 Gopro, Inc. Systems and methods for audio track selection in video editing
US10229719B1 (en) 2016-05-09 2019-03-12 Gopro, Inc. Systems and methods for generating highlights for a video
US9953679B1 (en) 2016-05-24 2018-04-24 Gopro, Inc. Systems and methods for generating a time lapse video
US9967515B1 (en) 2016-06-15 2018-05-08 Gopro, Inc. Systems and methods for bidirectional speed ramping
US9922682B1 (en) 2016-06-15 2018-03-20 Gopro, Inc. Systems and methods for organizing video files
US10045120B2 (en) 2016-06-20 2018-08-07 Gopro, Inc. Associating audio with three-dimensional objects in videos
US10395119B1 (en) 2016-08-10 2019-08-27 Gopro, Inc. Systems and methods for determining activities performed during video capture
US9953224B1 (en) 2016-08-23 2018-04-24 Gopro, Inc. Systems and methods for generating a video summary
US10268898B1 (en) 2016-09-21 2019-04-23 Gopro, Inc. Systems and methods for determining a sample frame order for analyzing a video via segments
US10282632B1 (en) 2016-09-21 2019-05-07 Gopro, Inc. Systems and methods for determining a sample frame order for analyzing a video
US10044972B1 (en) 2016-09-30 2018-08-07 Gopro, Inc. Systems and methods for automatically transferring audiovisual content
US10397415B1 (en) 2016-09-30 2019-08-27 Gopro, Inc. Systems and methods for automatically transferring audiovisual content
US10002641B1 (en) 2016-10-17 2018-06-19 Gopro, Inc. Systems and methods for determining highlight segment sets
WO2018116580A1 (en) * 2016-12-19 2018-06-28 ソニー株式会社 Information processing device, information processing method, and program
WO2018155235A1 (en) * 2017-02-24 2018-08-30 ソニー株式会社 Control device, control method, program, and projection system
US9916863B1 (en) 2017-02-24 2018-03-13 Gopro, Inc. Systems and methods for editing videos based on shakiness measures
US10339443B1 (en) 2017-02-24 2019-07-02 Gopro, Inc. Systems and methods for processing convolutional neural network operations using textures
US10360663B1 (en) 2017-04-07 2019-07-23 Gopro, Inc. Systems and methods to create a dynamic blur effect in visual content
US10395122B1 (en) 2017-05-12 2019-08-27 Gopro, Inc. Systems and methods for identifying moments in videos
US10402698B1 (en) 2017-07-10 2019-09-03 Gopro, Inc. Systems and methods for identifying interesting moments within videos
