WO2010027080A1 - Image processing apparatus and method, imaging apparatus, and program - Google Patents
Image processing apparatus and method, imaging apparatus, and program
- Publication number
- WO2010027080A1 (PCT/JP2009/065626)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- input image
- unit
- composition
- area
- Prior art date
Classifications
- G06T3/10
- G06T11/60 — Editing figures and text; Combining figures or text
- H04N1/3872 — Repositioning or masking
- H04N23/61 — Control of cameras or camera modules based on recognised objects
- H04N23/633 — Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
- H04N23/635 — Region indicators; Field of view indicators
- H04N23/64 — Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
- H04N23/80 — Camera processing pipelines; Components thereof
- H04N5/2621 — Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
- G06T2210/22 — Cropping
- H04N2101/00 — Still video cameras
Definitions
- The present invention relates to an image processing apparatus and method, an imaging apparatus, and a program, and more particularly to an image processing apparatus and method, an imaging apparatus, and a program that can cut out an image with an optimal composition even when the subject is not a person.
- Patent Document 1 is premised on the subject including a person, so there is a possibility that optimal trimming cannot be performed on an image containing a subject other than a person.
- the present invention has been made in view of such a situation, and makes it possible to cut out an image having an optimal composition even for a subject other than a person.
- The image processing apparatus includes setting means that sets a composition pattern corresponding to the input image based on the number of regions of interest in the input image and the scene of the input image, and determining means that determines, based on the composition pattern set by the setting means, an optimal cutout region in the input image for an image to be cut out from the input image with that composition pattern.
- The image processing apparatus may further include cutout means that cuts out the cutout region determined by the determining means from the input image.
- The determining means can determine a plurality of candidates for the optimal cutout region in the input image of an image cut out from the input image with the composition pattern, based on the composition pattern set by the setting means.
- The image processing apparatus may further include display means that displays the plurality of cutout region candidates on the input image, and selection means that selects any one of the plurality of cutout region candidates displayed by the display means; the cutout means can then cut out the cutout region selected by the selection means from the input image.
- The image processing apparatus may further include extraction means that extracts the regions of interest in the input image, and determination means that determines the scene of the input image.
- The determining means can determine the cutout region so that the center position of the smallest rectangular region including all of the regions of interest in the input image approaches the center of the cutout region in the input image.
- The determining means can determine the cutout region so that the cutout region becomes larger and the common (overlapping) area between the cutout region and the smallest rectangular region including all of the regions of interest in the input image becomes larger.
- the determining means can determine the cutout region so that the cutout region does not protrude from the input image.
- The image processing apparatus may further include judging means that judges whether or not the input image is a panoramic image by comparing the aspect ratio of the input image with a predetermined threshold; when the judging means judges that the input image is a panoramic image, the determining means can determine a plurality of candidates for the optimal cutout region of an image cut out from the input image with the composition pattern, based on the composition pattern set by the setting means.
- The image processing apparatus may further include adding means that adds information indicating the cutout region determined by the determining means to the input image as EXIF information.
- The attention area may include a subject to be noticed in the input image, and the image processing apparatus may further include detection means that detects the orientation of the subject; the determining means can then determine, based on the composition pattern set by the setting means and the orientation of the subject detected by the detection means, an optimal cutout region in the input image for an image cut out from the input image with that composition pattern.
- The attention area may include a subject to be noticed in the input image, and the image processing apparatus may further include movement direction determination means that determines the direction of movement of the subject; the determining means can then determine, based on the composition pattern set by the setting means and the direction of movement of the subject determined by the movement direction determination means, an optimal cutout region in the input image for an image cut out from the input image with that composition pattern.
- The image processing apparatus may further include overall motion calculation means that obtains the motion of the entire input image, and local motion calculation means that obtains the motion of the region of interest; the movement direction determination means can then determine the direction of movement of the subject based on the direction of motion of the entire input image obtained by the overall motion calculation means and the direction of motion of the region of interest obtained by the local motion calculation means.
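The determination described above — recovering the subject's own movement from the motion of the entire input image and the motion of the attention area — can be sketched as a vector subtraction. This is an illustrative sketch, not the patent's implementation; the function name and the (dx, dy) motion-vector representation are assumptions.

```python
import math

def subject_motion_direction(global_motion, local_motion):
    """Estimate the subject's own movement direction by removing the motion
    of the entire image (e.g. camera pan) from the attention area's motion.
    Both arguments are (dx, dy) vectors; returns an angle in degrees, or
    None if the subject shows no significant motion of its own."""
    dx = local_motion[0] - global_motion[0]
    dy = local_motion[1] - global_motion[1]
    if math.hypot(dx, dy) < 1e-6:  # no relative motion: subject is static
        return None
    return math.degrees(math.atan2(dy, dx))

# Camera pans right (+5, 0) while the attention area moves (+8, +3):
# the subject itself drifts down-right relative to the scene.
direction = subject_motion_direction((5.0, 0.0), (8.0, 3.0))
```

With image coordinates where y grows downward, a positive angle here points below the horizontal axis.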
- The image processing method includes a setting step of setting a composition pattern corresponding to the input image based on the number of regions of interest in the input image and the scene of the input image, and a determining step of determining, based on the composition pattern set in the setting step, an optimal cutout region in the input image for an image cut out from the input image with that composition pattern.
- The program according to the first aspect of the present invention causes a computer to execute processing including a setting step of setting a composition pattern corresponding to the input image based on the number of attention areas of interest in the input image and the scene of the input image, and a determining step of determining, based on the composition pattern set in the setting step, an optimal cutout region in the input image for an image cut out from the input image with that composition pattern.
- An imaging apparatus includes imaging means that images a subject, acquisition means that acquires the scene of a captured image captured by the imaging means, setting means that sets a composition pattern corresponding to the captured image based on the number of attention areas including subjects of interest in the captured image and the scene acquired by the acquisition means, and determining means that determines, based on the composition pattern set by the setting means, an optimal cutout region in the captured image for an image cut out from the captured image with that composition pattern.
- A composition pattern corresponding to an input image is set based on the number of regions of interest in the input image and the scene of the input image, and based on the set composition pattern, an optimal cutout region in the input image for an image cut out from the input image with that composition pattern is determined.
- A subject is imaged, the scene of the captured image is acquired, a composition pattern corresponding to the captured image is set based on the number of regions of interest including subjects of interest in the captured image and the acquired scene, and based on the set composition pattern, an optimal cutout region in the captured image for an image cut out from the captured image with that composition pattern is determined.
- an image with an optimal composition can be cut out even for a subject other than a person.
- FIG. 1 is a block diagram showing a functional configuration example of an embodiment of an image processing apparatus to which the present invention is applied, and FIG. 2 is a block diagram showing a functional configuration example of an attention area extraction unit.
- FIG. 3 is a flowchart describing the image cutout processing of the image processing apparatus in FIG. 1, followed by a diagram showing an example of an attention area, a diagram explaining composition patterns set based on the number of attention areas and the scene, a diagram explaining examples of composition patterns set by the composition pattern setting unit, and a flowchart explaining the cutout region determination processing.
- A block diagram illustrating another configuration example of the image processing apparatus, a flowchart explaining its image cutout processing, a diagram explaining the coefficients of the objective function E, and a diagram explaining the cutting out of the image of the cutout region.
- Block diagrams illustrating still other configuration examples of the image processing apparatus, with flowcharts explaining their image cutout processing.
- FIG. 31 is a flowchart describing the image cutout processing of the image processing apparatus of FIG. 30.
- A block diagram showing a functional configuration example of an embodiment of an imaging apparatus to which the present invention is applied; FIG. 33 is a flowchart describing the image cutout processing of the imaging apparatus of FIG. 32.
- A block diagram illustrating still another configuration example of the image processing apparatus; FIG. 35 is a flowchart describing the image cutout processing of the image processing apparatus of FIG. 34.
- A flowchart explaining direction detection processing.
- Flowcharts describing the cutout region determination processing of the image processing apparatus of FIG. 34.
- A block diagram illustrating still another configuration example of the image processing apparatus, with flowcharts explaining its image cutout processing, the movement direction determination processing, and the cutout region determination processing.
- FIG. 1 shows a functional configuration example of an embodiment of an image processing apparatus to which the present invention is applied.
- The image processing apparatus 11 in FIG. 1 sets a composition pattern corresponding to the region of interest and the scene of an input image supplied from an imaging apparatus such as a digital camera, determines an optimal cutout region based on that composition pattern, and outputs the image cut out from that region as an output image.
- the image processing apparatus 11 includes an attention area extraction unit 31, a scene determination unit 32, a composition pattern setting unit 33, a composition analysis unit 34, and an image cutout unit 35.
- the input image input to the image processing apparatus 11 is supplied to the attention area extraction unit 31, the scene determination unit 32, and the image cutout unit 35.
- the attention area extraction unit 31 extracts the attention area of interest in the input image, and supplies attention area information representing the attention area to the composition pattern setting unit 33.
- The attention area is a rectangular area including (surrounding) a subject (object) in the input image; as many attention areas are set and extracted as there are subjects in the input image.
- The attention area information is, for example, the positions of the vertices of the rectangular area.
- FIG. 2 shows a functional configuration example of the attention area extraction unit 31.
- The attention area extraction unit 31 includes an attention degree calculation unit 51, an attention rectangular area determination unit 52, and a face rectangular area determination unit 53.
- the attention level calculation unit 51 calculates a feature amount for each pixel of the input image, and calculates the attention level for each pixel from the feature amount.
- The feature amounts include the magnitude of the edge component of the image, the difference in hue from neighboring pixels, the color distribution in a predetermined area of the image, the difference between the average color of the entire image and the hue of each pixel, and the like.
- the attention level calculation unit 51 generates an attention level map corresponding to one input image from the attention level (feature amount) for each pixel, and supplies the attention level map to the attention rectangular area determination unit 52.
- the attention level map generated by the attention level calculation unit 51 is information that represents an area including a subject to be noted in one input image.
- The attention degree calculation unit 51 also calculates a face degree (degree of face-likeness) from the feature amounts obtained for each pixel of the input image, generates a face degree map corresponding to one input image, and supplies it to the face rectangular area determination unit 53.
- The face degree map generated by the attention degree calculation unit 51 is information that represents an area including a face to be noticed in one input image.
- The attention rectangular area determination unit 52 determines the attention rectangular area based on the attention degree map from the attention degree calculation unit 51, and supplies attention rectangular area information representing the attention rectangular area to the composition pattern setting unit 33. More specifically, in the attention degree map, the attention rectangular area determination unit 52 determines the attention rectangular area by taking a pixel (position) whose attention degree is higher than a predetermined threshold as the center of a rectangle and taking surrounding pixels (positions) whose attention degree is lower than another threshold as the end points (vertices) of the rectangle.
- When a plurality of such rectangular areas are obtained, the smallest rectangular region including all of them is set as the attention rectangular area.
- The face rectangular area determination unit 53 determines the face rectangular area based on the face degree map from the attention degree calculation unit 51, and supplies face rectangular area information representing the face rectangular area to the composition pattern setting unit 33. More specifically, the face rectangular area determination unit 53 determines the face rectangular area by taking the pixel (position) of the nose of the face as the center of a rectangle in the face degree map, and taking the pixels (positions) where the face degree changes (decreases) sharply around it as the end points (vertices) of the rectangle.
- Hereinafter, the attention rectangular area information obtained by the attention rectangular area determination unit 52 and the face rectangular area information obtained by the face rectangular area determination unit 53 are collectively referred to as attention area information.
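As a rough illustration of how a rectangular area can be derived from a per-pixel degree map by thresholding, as described above, the following sketch uses NumPy; the threshold value and function name are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def attention_rectangle(attention_map, threshold=0.5):
    """Return the smallest rectangle (x0, y0, x1, y1) enclosing every pixel
    whose attention degree exceeds the threshold, or None if none does."""
    ys, xs = np.nonzero(attention_map > threshold)
    if xs.size == 0:
        return None
    return (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))

# Toy 5x5 attention map with a salient 2x2 patch at rows 1-2, columns 2-3.
m = np.zeros((5, 5))
m[1:3, 2:4] = 0.9
rect = attention_rectangle(m)  # (2, 1, 3, 2)
```

Merging several such rectangles into their smallest enclosing rectangle is then just the min/max of their corner coordinates.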
- The scene determination unit 32 extracts frequency information by frequency-converting the input image, determines the scene of the input image using the frequency information as a feature amount (vector), and supplies scene information representing the scene obtained as a result of the determination to the composition pattern setting unit 33. More specifically, the scene determination unit 32 performs scene determination using preset learning images and machine learning such as SVM (Support Vector Machine).
- In the SVM, two-class discrimination (one-vs-rest discrimination) is performed, for example between the "coast" class and all other classes, the "rural landscape" class and all other classes, the "sky" class and all other classes, the "mountain" class and all other classes, and so on; the resulting scores are compared, and the class with the highest score is taken as the discrimination result.
- Although the scene determination unit 32 uses an SVM here, it is not limited to this; for example, pattern recognition using a neural network or pattern recognition using pattern matching may be used instead.
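The one-vs-rest scoring described above can be sketched as follows. The per-class decision functions here are stand-ins for trained SVM decision values, purely for illustration; the names and the 2-D feature are assumptions.

```python
def discriminate_scene(feature, classifiers):
    """One-vs-rest discrimination: score the feature vector with every
    two-class decision function and return the scene with the highest score.
    In real use each function would be a trained SVM's decision value."""
    scores = {scene: f(feature) for scene, f in classifiers.items()}
    return max(scores, key=scores.get)

# Stand-in decision functions over a 2-D frequency feature (illustrative only).
classifiers = {
    "coast":    lambda v: v[0] - v[1],
    "mountain": lambda v: v[1] - v[0],
    "sky":      lambda v: -abs(v[0]) - abs(v[1]),
}
scene = discriminate_scene((0.2, 0.9), classifiers)  # "mountain"
```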
- The composition pattern setting unit 33 sets a composition pattern corresponding to the input image based on the number of pieces of attention area information from the attention area extraction unit 31 and the scene information from the scene determination unit 32, and supplies it to the composition analysis unit 34.
- the composition pattern is determined in advance corresponding to the number of attention areas (subjects) and the scene. Details of the composition pattern will be described later with reference to FIG.
- Based on the composition pattern from the composition pattern setting unit 33, the composition analysis unit 34 determines the optimal cutout region in the input image for an image cut out with that composition pattern, and supplies the optimal cutout region to the image cutout unit 35.
- the composition analysis unit 34 includes a composition model creation unit 34a, a safety model creation unit 34b, a penalty model creation unit 34c, an objective function creation unit 34d, and an optimization unit 34e.
- The composition model creation unit 34a creates a composition model representing a cutout region based on the composition pattern from the composition pattern setting unit 33.
- the composition model is represented by a predetermined energy function E c .
- The safety model creation unit 34b creates a safety model for preventing the cutout region from becoming too small.
- The safety model is represented by a predetermined energy function E s .
- The penalty model creation unit 34c creates a penalty model for evaluating the area of the part of the cutout region that protrudes from the input image.
- The penalty model is represented by a predetermined energy function E p .
- the objective function creation unit 34d creates the objective function E from the energy function E c representing the composition model, the energy function E s representing the safety model, and the energy function E p representing the penalty model.
- The optimization unit 34e determines the cutout region that minimizes the objective function E, and supplies it to the image cutout unit 35 as the optimal cutout region.
- The image cutout unit 35 cuts out the image of the optimal cutout region from the input image based on the optimal cutout region from the composition analysis unit 34, and outputs it.
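The overall flow — combining a composition term, a safety term, and a penalty term into an objective E and choosing the cutout region that minimizes it — can be sketched as below. The energy terms are simplified stand-ins, not the patent's equations: the composition term here merely centers the region of interest, the safety term favors larger crops, and the penalty term punishes crops that protrude from the image.

```python
def objective(crop, image_size, roi):
    """Simplified stand-in for E = composition + safety + penalty terms
    (NOT the patent's equations). crop and roi are (x0, y0, x1, y1);
    image_size is (width, height)."""
    # Composition term: distance between crop center and ROI center.
    cx, cy = (crop[0] + crop[2]) / 2, (crop[1] + crop[3]) / 2
    rx, ry = (roi[0] + roi[2]) / 2, (roi[1] + roi[3]) / 2
    e_comp = abs(cx - rx) + abs(cy - ry)
    # Safety term: reward larger crops so the cutout does not become too small.
    area = (crop[2] - crop[0]) * (crop[3] - crop[1])
    e_safe = -area / (image_size[0] * image_size[1])
    # Penalty term: heavily punish crops protruding outside the input image.
    over_x = max(0, -crop[0]) + max(0, crop[2] - image_size[0])
    over_y = max(0, -crop[1]) + max(0, crop[3] - image_size[1])
    e_pen = 1000.0 * (over_x + over_y)
    return e_comp + e_safe + e_pen

def best_crop(candidates, image_size, roi):
    """Pick the candidate cutout region that minimizes the objective."""
    return min(candidates, key=lambda c: objective(c, image_size, roi))

candidates = [(0, 0, 100, 75), (10, 10, 90, 70), (-5, 0, 95, 75)]
chosen = best_crop(candidates, (100, 75), (40, 30, 60, 45))  # (0, 0, 100, 75)
```

In practice the patent minimizes a continuous objective over the crop parameters; exhaustive search over a candidate list is used here only to keep the sketch self-contained.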
- In step S11, the attention area extraction unit 31 generates an attention degree map and a face degree map corresponding to the input image. More specifically, the attention degree calculation unit 51 generates an attention degree map corresponding to the input image and supplies it to the attention rectangular area determination unit 52, and generates a face degree map corresponding to the input image and supplies it to the face rectangular area determination unit 53.
- In step S12, the attention area extraction unit 31 extracts and determines attention areas of interest in the input image based on the attention degree map and the face degree map. More specifically, the attention rectangular area determination unit 52 determines the attention rectangular area based on the attention degree map from the attention degree calculation unit 51, and supplies attention rectangular area information representing the attention rectangular area to the composition pattern setting unit 33. Further, the face rectangular area determination unit 53 determines the face rectangular area based on the face degree map from the attention degree calculation unit 51, and supplies face rectangular area information representing the face rectangular area to the composition pattern setting unit 33.
- In step S12, the attention rectangular area and the face rectangular area are thus determined as the attention areas.
- In the following, the face rectangular area may be handled collectively as part of the attention rectangular area.
- An example of the attention area determined in this way is shown in FIG.
- an input image P is an image in which one bird (crane) is flying in the sky.
- In step S12, attention is paid to the one bird, and one attention area L is determined so as to include the bird.
- In step S13, the scene determination unit 32 extracts frequency information by frequency-converting the input image, determines the scene of the input image using the frequency information as a feature amount (vector), and supplies scene information representing the scene obtained as a result of the determination to the composition pattern setting unit 33.
- In step S14, the composition pattern setting unit 33 sets a composition pattern corresponding to the input image based on the number of pieces of attention area information from the attention area extraction unit 31 and the scene information from the scene determination unit 32, and supplies it to the composition analysis unit 34.
- Composition patterns set based on the number of attention areas (subjects) and the scene will be described with reference to FIG. 5.
- When the number of attention areas is 0, a horizontal line composition is set as the composition pattern.
- When the number of attention areas is 1, a three-division composition and a horizontal line composition are set as composition patterns. When the number of attention areas is 2 to 5, a contrast composition and a horizontal line composition are set, and when the number of attention areas is 6 or more, a contrast composition and a horizontal line composition are likewise set.
- For another scene, a radiation composition is set as the composition pattern.
- When the number of attention areas is 1, a three-division composition and a radiation composition are set as the composition patterns.
- When the number of attention areas is 2 to 5, a contrast composition and a radiation composition are set as composition patterns, and when the number of attention areas is 6 or more, a radiation composition and a pattern composition are set as composition patterns.
- In the same way, composition patterns are set according to the number of attention areas for input images whose scene is "sky", "mountain", "highway", and so on.
- In each case, a composition pattern that satisfies the associated compositions is set.
- Note that the composition patterns described in FIG. 5 and associated with the number of attention areas and the scene may be set in advance or may be set as desired by the user.
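The table of FIG. 5 can be represented as a lookup keyed by scene and attention-area count. The row below mirrors the count buckets described in the text (0, 1, 2 to 5, 6 or more); the scene name "coast", the pattern names, and the fallback behavior are illustrative assumptions.

```python
def count_bucket(n):
    """Map an attention-area count onto the buckets used in the table."""
    if n == 0:
        return "0"
    if n == 1:
        return "1"
    return "2-5" if n <= 5 else "6+"

# Lookup keyed by (scene, count bucket). Only one hypothetical scene row is
# filled in here; a full table would cover "sky", "mountain", and so on.
COMPOSITION_PATTERNS = {
    ("coast", "0"):   ["horizontal line"],
    ("coast", "1"):   ["three-division", "horizontal line"],
    ("coast", "2-5"): ["contrast", "horizontal line"],
    ("coast", "6+"):  ["contrast", "horizontal line"],
}

def set_composition_pattern(scene, num_areas):
    """Return the composition patterns for a scene and attention-area count,
    falling back to a horizontal line composition for unlisted combinations."""
    return COMPOSITION_PATTERNS.get((scene, count_bucket(num_areas)),
                                    ["horizontal line"])
```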
- Next, examples of composition patterns set by the composition pattern setting unit 33 will be described with reference to FIG. 6.
- The composition A in FIG. 6 shows a three-division composition; a balanced image is obtained by placing a subject at an intersection of a vertical dividing line and a horizontal dividing line.
- The composition B in FIG. 6 is a contrast composition, in which a main subject and similar subjects are arranged; by placing the main subject large and the other subjects small, the main subject stands out.
- The composition C in FIG. 6 shows a diagonal composition, used when it is desired to give a sense of rhythm. The composition C can also make efficient use of a small area.
- the composition D in FIG. 6 shows a radiation composition, and is used when it is desired to provide a sense of openness and spread. Examples of subjects include tree branches and sunlight from between clouds.
- The composition E in FIG. 6 is a horizontal line composition, used when it is desired to convey horizontal expanse. The impression of the image can be changed by shifting the position of the horizon up or down.
- The composition F in FIG. 6 is a vertical line composition, used when it is desired to emphasize the vertical direction of an image. Examples of subjects include tree trunks and roads.
- the composition G in FIG. 6 is a perspective composition, and is used when a spread from the vanishing point (intersection of diagonal lines in the figure) is desired.
- the composition H in FIG. 6 is a pattern composition, and is used when a plurality of similar subjects are regularly arranged to give a sense of rhythm and unity.
- Based on the attention area information and the scene information, the composition pattern setting unit 33 sets, from among the composition patterns shown in FIG. 6, the composition pattern associated in FIG. 5 with the number of attention areas and the scene. Note that composition patterns are not limited to the eight types shown in FIG. 6, and there may be more types of patterns.
- For example, when the number of attention areas is 1, the composition pattern setting unit 33 sets the three-division composition (composition A) associated in FIG. 5.
- In step S15, the composition analysis unit 34 executes cutout region determination processing based on the composition pattern from the composition pattern setting unit 33, and determines the optimal cutout region in the input image for an image cut out with that composition pattern.
- In step S31, the composition model creation unit 34a of the composition analysis unit 34 creates a composition model representing a cutout region based on the composition pattern from the composition pattern setting unit 33.
- the composition model creation unit 34a obtains the energy function E c for the composition model.
- the energy function E c is given by the following equation (1).
- S VA represents the area of the region of interest, and G DLhn , G DLvn , and G DPn are given by the following equation (2).
- L Dh , L Dv , and P D respectively denote the horizontal three-part dividing lines (the lines dividing the composition into three horizontally), the vertical three-part dividing lines (the lines dividing it into three vertically), and the intersections between the horizontal and vertical three-part dividing lines (three-part line intersections), and P n denotes the center position of the region of interest. Further, d is the length of the diagonal of the cutout region, and is given by the following equation (3).
- each of G_DLhn, G_DLvn, and G_DPn in equation (1) takes a larger value as the center position of the attention area approaches the corresponding horizontal third-division line, vertical third-division line, or third-division line intersection.
- the coefficients α_hn, α_vn, and α_pn in equation (1) are determined by the aspect ratio VA_aspect_ratio_n of the attention area, which is given by the following equation (4), where the width and height of the attention area are Crop_width and Crop_height, respectively.
- the horizontal axis indicates the aspect ratio VA_aspect_ratio_n, and the vertical axis indicates the values of the coefficients α_hn, α_vn, and α_pn.
- When the aspect ratio VA_aspect_ratio_n is from 0 to r_min, the coefficient α_hn is 1.0, and when VA_aspect_ratio_n is larger than r_mid1, α_hn is 0.0. When VA_aspect_ratio_n is from r_min to r_mid1, α_hn decreases as VA_aspect_ratio_n increases. That is, in equation (1), the coefficient α_hn is effective when the attention area is vertically long.
- When the aspect ratio VA_aspect_ratio_n is from 0 to r_mid2, the coefficient α_vn is 0.0, and when VA_aspect_ratio_n is larger than r_max, α_vn is 1.0. When VA_aspect_ratio_n is from r_mid2 to r_max, α_vn increases as VA_aspect_ratio_n increases. That is, in equation (1), the coefficient α_vn is effective when the attention area is horizontally long.
- When the aspect ratio VA_aspect_ratio_n is from 0 to r_min or larger than r_max, the coefficient α_pn is 0.0, and when VA_aspect_ratio_n is from r_mid1 to r_mid2, α_pn is 1.0. When VA_aspect_ratio_n is from r_min to r_mid1, α_pn increases as VA_aspect_ratio_n increases, and when VA_aspect_ratio_n is from r_mid2 to r_max, α_pn decreases as VA_aspect_ratio_n increases. That is, in equation (1), the coefficient α_pn is effective when the attention area has a shape close to a square.
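- The piecewise-linear behavior of the three coefficients described above can be sketched as follows (only the shape of the curves follows the description; the breakpoint values r_min, r_mid1, r_mid2, and r_max used in the example are hypothetical):

```python
def ramp(x, x0, x1, y0, y1):
    """Linear interpolation of x from (x0, y0) to (x1, y1), clamped outside."""
    if x <= x0:
        return y0
    if x >= x1:
        return y1
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def thirds_coefficients(va_aspect_ratio, r_min, r_mid1, r_mid2, r_max):
    """Coefficients alpha_hn, alpha_vn, alpha_pn as piecewise-linear
    functions of the attention area's aspect ratio: alpha_hn favors
    vertically long areas, alpha_vn horizontally long ones, and
    alpha_pn near-square ones."""
    alpha_h = ramp(va_aspect_ratio, r_min, r_mid1, 1.0, 0.0)
    alpha_v = ramp(va_aspect_ratio, r_mid2, r_max, 0.0, 1.0)
    if va_aspect_ratio <= r_mid1:
        alpha_p = ramp(va_aspect_ratio, r_min, r_mid1, 0.0, 1.0)
    else:
        alpha_p = ramp(va_aspect_ratio, r_mid2, r_max, 1.0, 0.0)
    return alpha_h, alpha_v, alpha_p
```

For a clearly vertical, near-square, or clearly horizontal attention area, exactly one of the three coefficients dominates, which is what makes only the matching term of equation (1) effective.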
- Thus, equation (1) shows that the value of the energy function E_c becomes larger as the attention area gets closer to a horizontal third-division line if the attention area is vertically long, to a vertical third-division line if it is horizontally long, or to a third-division line intersection if it is close to a square.
- For example, the attention area R_h is vertically long and close to a horizontal third-division line, the attention area R_v is horizontally long and close to a vertical third-division line, and the attention area R_p is close to a square and close to a third-division line intersection, so in each case the value of the energy function E_c becomes large.
- In the above, the case where the three-division composition is used as the composition pattern has been described; however, a composition obtained by further dividing each divided area of the three-division composition into three (9-division composition) may also be used.
- With the 9-division composition, a composition with greater depth than the three-division composition can be expected.
- L_dh, L_dv, and P_d respectively denote a line dividing the horizontal direction into nine (horizontal ninth-division line), a line dividing the vertical direction into nine (vertical ninth-division line), and an intersection of a horizontal ninth-division line and a vertical ninth-division line (ninth-division line intersection). However, as shown in FIG. 10, the intersections of the horizontal ninth-division lines h1a and h1b with the vertical ninth-division lines v1a and v1b that fall within the central divided area of the three-division composition are not included.
- the safety model creation unit 34b creates a safety model for preventing the cutout region from becoming too small.
- the safety model creation unit 34b obtains the energy function E_s for the safety model.
- The energy function E_s is given by the following equation (7).
- Here, the minimum rectangle including all the attention areas in the input image is defined as the whole attention rectangular area, with area S_WVA and center position P_WVA, and the cutout area has area S_Crop and center position P_Crop. Furthermore, the area of the region common to the whole attention rectangular area and the cutout area is S_WVA&Crop.
- The energy function E_s of equation (7) takes a larger value as the area S_WVA&Crop of the region common to the whole attention rectangular area and the cutout area becomes larger (the first term of equation (7)).
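- The common area S_WVA&Crop of the whole attention rectangular area and the cutout area is a standard axis-aligned rectangle intersection, which can be sketched as follows (names are illustrative):

```python
def common_area(rect_a, rect_b):
    """Area of the overlap of two axis-aligned rectangles given as
    (x, y, width, height); used for S_WVA&Crop, the area shared by the
    whole attention rectangular area and the cutout area."""
    ax, ay, aw, ah = rect_a
    bx, by, bw, bh = rect_b
    overlap_w = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    overlap_h = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    return overlap_w * overlap_h
```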
- In step S33, the penalty model creation unit 34c creates a penalty model for evaluating the area of the part of the cutout area that protrudes from the input image.
- the penalty model creation unit 34c obtains the energy function E_p for the penalty model.
- the energy function E p is given by the following equation (8).
- the energy function E_p of equation (8) takes a larger value as the area S_Over of the part of the cutout area protruding from the input image becomes larger.
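- The protruding area S_Over can likewise be sketched by clamping the cutout rectangle to the image bounds and subtracting (illustrative names, not from the embodiment):

```python
def protruding_area(crop, image_w, image_h):
    """Area S_Over of the part of the cutout area lying outside the
    input image; equation (8)'s penalty grows with this value."""
    x, y, w, h = crop
    # Clamp the crop rectangle to the image and measure what remains inside.
    inside_w = max(0.0, min(x + w, image_w) - max(x, 0.0))
    inside_h = max(0.0, min(y + h, image_h) - max(y, 0.0))
    return w * h - inside_w * inside_h
```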
- In step S34, the objective function creation unit 34d creates the objective function E given by the following equation (9) from the energy functions E_c, E_s, and E_p.
- The coefficients C_C, C_S, and C_P are adjustment coefficients for the energy functions E_c, E_s, and E_p, respectively.
- The smaller the value of the objective function E, the closer the obtained cutout area is to the optimal cutout area.
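- The exact form of equation (9) is not reproduced in this text. A purely hypothetical combination that is consistent with the stated behavior (larger E_c and E_s are better, larger E_p is worse, and a smaller E means a better cutout area) would be:

```python
def objective(e_c, e_s, e_p, c_c=1.0, c_s=1.0, c_p=1.0, eps=1e-6):
    """Hypothetical stand-in for equation (9): reward terms enter as
    reciprocals so that a larger E_c or E_s lowers E, while the penalty
    E_p raises it. The coefficients C_C, C_S, C_P weight each term."""
    return c_c / (e_c + eps) + c_s / (e_s + eps) + c_p * e_p
```

A well-placed cutout area (high composition and safety energies, no protrusion) then yields a smaller E than a poorly placed one.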
- In step S35, the optimization unit 34e determines the optimal cutout area based on the position information of the cutout area that minimizes the objective function E, and supplies it to the image cutout unit 35. More specifically, the optimization unit 34e minimizes the objective function E using, for example, particle swarm optimization (PSO).
- the optimization unit 34e obtains, by particle swarm optimization, the position information (start position and size of the cutout area) that minimizes the objective function E, using the start position (horizontal and vertical) of the cutout area and the size (width and height) of the cutout area as variables. The optimization unit 34e determines the optimal cutout area based on the obtained position information, and the process returns to step S15.
- Note that the optimization unit 34e may use the cutout start position (horizontal and vertical) of the cutout area and only the size (width) of the cutout area as variables. Furthermore, the rotation angle of the cutout area may be added as a variable.
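- A minimal, generic particle swarm optimization over box-bounded variables might look like the following. This is a textbook PSO sketch, not the embodiment's implementation, and all names and constants are illustrative:

```python
import random

def pso_minimize(f, bounds, n_particles=30, n_iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over box bounds [(lo, hi), ...] with a basic particle
    swarm; returns (best_position, best_value)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the embodiment, each particle would encode the cutout area's start position and size, and f would evaluate the objective function E for that candidate area.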
- FIG. 13 shows an example of the most appropriate extraction area determined in this way.
- the optimal cutout area Pc is determined so that one bird is arranged at the position of a third-division line intersection in the three-division composition.
- In step S16, the image cutout unit 35 cuts out the image of the optimal cutout area from the input image based on the optimal cutout area from the composition analysis unit 34, and outputs it.
- the image cutout unit 35 cuts out an image of the optimal extraction region Pc of the three-part composition as shown in FIG. 14 based on the optimal extraction region Pc from the composition analysis unit 34.
- the cut-out area can be determined based on the number of attention areas in the input image and the composition pattern associated with the scene of the input image. Since the region of interest is determined even if the subject is other than a person, an image with an optimal composition can be cut out even if the subject is other than a person. In addition, since the composition pattern is set based on the number of regions of interest and the scene, it is possible to cut out an image with the optimum composition regardless of the category of the input image.
- In the above, the composition pattern is determined in advance in association with the number of attention areas and the scene; however, object recognition may be performed on the input image and a composition pattern corresponding to the recognized object may be set.
- FIG. 15 shows a configuration example of an image processing apparatus in which object recognition is performed on an input image, and a composition pattern corresponding to the object is set.
- In the image processing apparatus 111 in FIG. 15, components having the same functions as those provided in the image processing apparatus 11 in FIG. 1 are denoted by the same names and the same reference numerals, and description thereof is omitted as appropriate.
- the image processing apparatus 111 in FIG. 15 differs from the image processing apparatus 11 in FIG. 1 in that a composition pattern setting unit 131 is provided instead of the composition pattern setting unit 33.
- the scene discriminating unit 32 discriminates the scene of the input image, and supplies scene information representing the scene obtained as a result of the discrimination to the composition pattern setting unit 131 together with the input image.
- the composition pattern setting unit 131 recognizes an object in the input image from the scene determination unit 32.
- the composition pattern setting unit 131 sets a composition pattern corresponding to the input image based on the scene represented by the scene information from the scene determination unit 32 and the recognized object, and supplies the composition pattern to the composition analysis unit 34.
- the composition pattern setting unit 131 stores composition patterns in which the arrangement and proportions of objects in the composition are predetermined for each scene, and sets a composition pattern by selecting, from the stored composition patterns, the one corresponding to the scene and the recognized object.
- the arrangement and ratio of objects in the composition can be set so as to improve the balance of the composition. Note that a composition pattern in which the arrangement and ratio of objects in the composition for each scene are determined in advance may be stored in a database (not shown) or the like.
- step S114 the composition pattern setting unit 131 recognizes an object in the input image from the scene determination unit 32.
- the composition pattern setting unit 131 sets a composition pattern corresponding to the input image based on the scene represented by the scene information from the scene determination unit 32 and the recognized object, and supplies the composition pattern to the composition analysis unit 34.
- For example, when the composition pattern setting unit 131 recognizes the sky, rocks, grass, and a person as objects in the input image shown in FIG. 17, it selects, from the stored composition patterns, a composition pattern in which the proportions of the sky, the rocks, the grass, and the person in the composition are 30%, 20%, 40%, and 10%, respectively. As a result, an image having the composition indicated by the frame on the input image in FIG. 17 is finally cut out.
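- The selection described here amounts to a lookup keyed by scene and recognized objects. A toy sketch follows; the scene name, object classes, stored proportions, and the matching rule (largest overlap of object classes) are all invented for illustration:

```python
# Hypothetical stored composition patterns: for each scene, candidate
# patterns giving the predetermined proportion of each object class.
COMPOSITION_PATTERNS = {
    "coast": [
        {"sky": 0.3, "rock": 0.2, "grass": 0.4, "person": 0.1},
        {"sky": 0.5, "sea": 0.4, "person": 0.1},
    ],
}

def select_pattern(scene, recognized_objects):
    """Pick the stored pattern for the scene whose object classes best
    match the recognized objects; None if the scene has no patterns."""
    candidates = COMPOSITION_PATTERNS.get(scene, [])
    if not candidates:
        return None
    return max(candidates,
               key=lambda p: len(set(p) & set(recognized_objects)))
```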
- an object in the input image can be recognized, and a composition pattern can be set according to the object and the scene. Since the arrangement and ratio of the objects in the composition determined by the composition pattern are set so as to improve the balance of the composition, it is possible to cut out an image having the optimum composition.
- FIG. 18 shows an example of the configuration of an image processing apparatus in which a plurality of extraction area candidates in an input image are determined.
- In the image processing apparatus 211 in FIG. 18, components having the same functions as those provided in the image processing apparatus 11 in FIG. 1 are denoted by the same names and the same reference numerals, and description thereof is omitted as appropriate.
- the image processing apparatus 211 in FIG. 18 differs from the image processing apparatus 11 in FIG. 1 in that a composition analysis unit 231 is provided instead of the composition analysis unit 34, and a display unit 232 and an operation input unit 233 are newly provided. It is a point.
- the composition analysis unit 231 determines a plurality of candidates (cutout area candidates) for the optimal cutout area in the input image of the image to be cut out with the composition pattern, and supplies them to the display unit 232. Further, the composition analysis unit 231 supplies the selected cutout area to the image cutout unit 35 based on an operation signal from the operation input unit 233 indicating that one of the cutout area candidates has been selected.
- the composition analysis unit 231 includes a composition model creation unit 231a, a safety model creation unit 231b, a penalty model creation unit 231c, an objective function creation unit 231d, and an optimization unit 231e. Note that the composition model creation unit 231a to the objective function creation unit 231d have the same functions as the composition model creation unit 34a to the objective function creation unit 34d in FIG. 1.
- the optimization unit 231e determines the cutout areas giving the top n smallest values of the objective function E, and supplies them to the display unit 232 as cutout area candidates.
- the display unit 232 is configured as a monitor on which the operation input unit 233, a touch panel, is stacked, and displays frames indicating the cutout area candidates from the composition analysis unit 231 on the input image, or displays an operation image that prompts the user to perform an operation.
- the operation input unit 233 is configured as a touch panel stacked on the display surface of the display unit 232, and supplies an operation signal corresponding to a user operation to the composition analysis unit 231.
- In step S215, the composition analysis unit 231 executes a cutout area candidate determination process to determine a plurality of candidates for the optimal cutout area in the input image of the image to be cut out based on the composition pattern from the composition pattern setting unit 33.
- [Cutout area candidate determination process of the composition analysis unit]
- Here, the cutout area candidate determination process in step S215 of the flowchart of FIG. 19 will be described with reference to the flowchart of FIG. 20. Note that the processing of steps S231 to S234 in the flowchart of FIG. 20 is the same as the processing of steps S31 to S34 described above with reference to the earlier flowchart, and a description thereof is omitted.
- In step S235, the optimization unit 231e determines the cutout areas giving the top n smallest values of the objective function E, and supplies them to the display unit 232 as cutout area candidates.
- More specifically, the optimization unit 231e holds the local minimum values of the objective function E and the position information at those minima, and supplies to the display unit 232 the top n sets, in ascending order of the value of the objective function E, whose position information differs greatly from one another; the process then returns to step S215.
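- Keeping the n best local minima while discarding near-duplicate positions can be sketched as follows (the diversity rule, a Euclidean distance threshold over the position tuple, is an assumption for illustration):

```python
def top_n_diverse(candidates, n, min_distance):
    """From (value, position) pairs, keep up to n best values whose
    positions differ from every already-kept position by at least
    min_distance. Positions are (x, y, w, h) tuples."""
    kept = []
    for value, pos in sorted(candidates, key=lambda c: c[0]):
        far_enough = all(
            sum((a - b) ** 2 for a, b in zip(pos, kpos)) ** 0.5 >= min_distance
            for _, kpos in kept)
        if far_enough:
            kept.append((value, pos))
        if len(kept) == n:
            break
    return kept
```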
- composition analysis unit 231 can determine the extraction region candidate.
- In step S216, the display unit 232 displays frames indicating the cutout area candidates from the composition analysis unit 231 on the input image, for example, as illustrated in FIG. 21. In FIG. 21, frames indicating two cutout area candidates are displayed together with the names "candidate 1" and "candidate 2" for identifying each frame.
- the user can select cut-out area candidates indicated by “candidate 1” and “candidate 2” by the operation input unit 233 as a touch panel stacked on the display unit 232.
- step S217 the composition analysis unit 231 determines whether any one of the extracted region candidates has been selected. That is, the composition analysis unit 231 determines whether or not an operation signal indicating that any one of the cutout region candidates has been selected is supplied from the operation input unit 233.
- If it is determined in step S217 that none of the cutout area candidates has been selected, the determination is repeated until an operation signal indicating that one of the cutout area candidates has been selected is supplied from the operation input unit 233.
- On the other hand, if it is determined in step S217 that one of the cutout area candidates has been selected, the composition analysis unit 231 supplies the selected cutout area to the image cutout unit 35 based on the operation signal from the operation input unit 233 indicating that one of the cutout area candidates has been selected.
- step S218 the image cutout unit 35 cuts out and outputs the image of the selected cutout region from the input image based on the cutout region from the composition analysis unit 231.
- a plurality of optimal extraction area candidates can be displayed and selected, so that the user can confirm and select the extraction area candidates. Therefore, it is possible to cut out an image having an optimal composition that suits the user's preference.
- the size of the input image has not been mentioned, but a panoramic image may be input as the input image.
- FIG. 23 shows a configuration example of an image processing apparatus in which a panoramic image is input as an input image.
- In FIG. 23, components having the same functions as those provided in the image processing apparatus 11 in FIG. 1 are denoted by the same names and the same reference numerals, and description thereof is omitted as appropriate.
- The image processing apparatus 311 in FIG. 23 differs from the image processing apparatus 11 in FIG. 1 in that a panorama determination unit 331 is newly provided and a composition analysis unit 332 is provided instead of the composition analysis unit 34.
- the panorama determination unit 331 determines whether or not the input image is a panorama image, and supplies the determination result to the composition analysis unit 332.
- the composition analysis unit 332 determines, in accordance with the determination result from the panorama determination unit 331, the cutout area in the input image of the image to be cut out based on the composition pattern from the composition pattern setting unit 33, and supplies the cutout area to the image cutout unit 35.
- the composition analysis unit 332 includes a composition model creation unit 332a, a safety model creation unit 332b, a penalty model creation unit 332c, an objective function creation unit 332d, and an optimization unit 332e.
- the composition model creation unit 332a, the safety model creation unit 332b, and the penalty model creation unit 332c have the same functions as the composition model creation unit 34a, the safety model creation unit 34b, and the penalty model creation unit 34c in FIG. 1, respectively, and description thereof is omitted.
- When the determination result from the panorama determination unit 331 indicates that the input image is a panorama image, the objective function creation unit 332d disables the term of the energy function E_s in the objective function E.
- the optimization unit 332e determines the cutout area that minimizes the objective function E, and supplies it to the image cutout unit 35 as the optimal cutout area. Alternatively, like the optimization unit 231e in FIG. 18, the optimization unit 332e may determine the cutout areas giving the top n smallest values of the objective function E and supply them to the image cutout unit 35 as cutout area candidates.
- the panorama determination unit 331 determines whether the input image is a panorama image. More specifically, when the width and height of the input image are In_width and In_height, respectively, the panorama determination unit 331 compares the aspect ratio In_aspect_ratio given by the following equation (10) with a predetermined threshold In_aspect_ratio_th.
- If it is determined in step S315 that the input image is a panorama image, the panorama determination unit 331 supplies the aspect ratio In_aspect_ratio to the composition analysis unit 332 together with information indicating that the input image is a panorama image, and the process proceeds to step S316.
- step S316 the composition analysis unit 332 performs extraction region candidate determination processing based on the information that the input image from the panorama determination unit 331 is a panorama image and the aspect ratio In_aspect_ratio.
- Note that the cutout area candidate determination process performed by the image processing apparatus 311 in FIG. 23 is basically the same as the process of the image processing apparatus 211 in FIG. 18 described with reference to the flowchart of FIG. 20.
- the objective function creation unit 332d disables the term of the energy function E_s in the objective function E. More specifically, the objective function creation unit 332d switches the value of the coefficient C_S in the objective function E given by equation (9) according to the characteristics shown in FIG. 25.
- FIG. 25 shows the relationship between the aspect ratio In_aspect_ratio of the input image and the coefficient C S in the objective function E.
- When the aspect ratio In_aspect_ratio is larger than the predetermined threshold In_aspect_ratio_th, the value of the coefficient C_S in the objective function E is set to 0.0, and when In_aspect_ratio is equal to or smaller than the threshold In_aspect_ratio_th, the value of C_S is set to 1.0. That is, when the input image is a panorama image, the term of the energy function for the safety model, which prevents the cutout area from becoming too small, is removed from the objective function E.
- Thereby, relatively small cutout areas are supplied to the image cutout unit 35 as cutout area candidates.
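- The switching of C_S can be sketched directly, assuming equation (10) defines In_aspect_ratio as In_width divided by In_height (an assumption; the equation itself is not reproduced in this text):

```python
def safety_coefficient(in_width, in_height, in_aspect_ratio_th):
    """Coefficient C_S of equation (9) as a function of the input
    image's aspect ratio: 0.0 for panorama images (ratio above the
    threshold), 1.0 otherwise."""
    in_aspect_ratio = in_width / in_height  # assumed form of equation (10)
    return 0.0 if in_aspect_ratio > in_aspect_ratio_th else 1.0
```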
- In step S317, the image cutout unit 35 cuts out the images of the cutout area candidates from the input panorama image based on the cutout area candidates from the composition analysis unit 332, as shown in FIG. 26, and outputs them.
- FIG. 26 shows an example of extraction region candidates in the panoramic image.
- In FIG. 26, frames indicating three cutout area candidates, candidates 1 to 3, are set on the panorama image serving as the input image.
- On the other hand, if it is determined in step S315 that the input image is not a panorama image, the panorama determination unit 331 supplies information indicating that the input image is not a panorama image to the composition analysis unit 332. The process then proceeds to step S318, where the optimal cutout area is determined, and in step S319 the image of the optimal cutout area is cut out from the input image.
- the user can select an image having an optimal composition that suits the user's preference from a plurality of compositions cut out from the panoramic image.
- FIG. 27 shows a configuration example of an image processing apparatus that outputs an input image as it is together with a cutout region image.
- In FIG. 27, components having the same functions as those provided in the image processing apparatus 11 in FIG. 1 are denoted by the same names and the same reference numerals, and description thereof is omitted as appropriate.
- the image processing apparatus 411 in FIG. 27 is different from the image processing apparatus 11 in FIG. 1 in that the input image is output as it is together with the cut-out area image.
- the user can compare the input image and the cut-out area image when these are output to the display device. For example, when the input image is an image captured by the user with the imaging device, the user can confirm the difference between the composition of the image captured by the user and the composition of the clipped image.
- Note that the cutout area determination process performed by the image processing apparatus 411 in FIG. 27 is the same as the process of the image processing apparatus 11 in FIG. 1 described above with reference to the flowchart.
- FIG. 28 shows a configuration example of an image processing apparatus that outputs the input image as it is, together with information representing the cutout area instead of the cutout image.
- In FIG. 28, components having the same functions as those provided in the image processing apparatus 11 in FIG. 1 are denoted by the same names and the same reference numerals, and description thereof is omitted as appropriate.
- the image processing apparatus 511 in FIG. 28 differs from the image processing apparatus 11 in FIG. 1 in that the image cutout unit 35 is deleted and the input image is output as it is.
- the composition analysis unit 34 in FIG. 28 determines the optimal cutout area in the input image of the image to be cut out based on the composition pattern from the composition pattern setting unit 33, and outputs information representing the optimal cutout area to an external device or the like.
- In step S516, the image processing apparatus 511 outputs the input image as it is, and the composition analysis unit 34 outputs information representing the determined optimal cutout area in the input image to an external device or the like.
- the capacity of a frame memory (not shown) in the image processing apparatus 511 can be reduced.
- In the above, the configuration in which the input image and the information representing the optimal cutout area are output separately has been described; however, the input image and the information representing the optimal cutout area may be output as one piece of data.
- FIG. 30 shows a configuration example of an image processing apparatus configured to output an input image and information representing the most appropriate extraction area as one data.
- In the image processing apparatus 611 in FIG. 30, components having the same functions as those provided in the image processing apparatus 11 in FIG. 1 are denoted by the same names and the same reference numerals, and description thereof is omitted as appropriate.
- the image processing apparatus 611 in FIG. 30 differs from the image processing apparatus 11 in FIG. 1 in that an adding unit 631 is provided in place of the image cutout unit 35.
- the composition analysis unit 34 in FIG. 30 determines the optimal cutout area in the input image of the image to be cut out based on the composition pattern from the composition pattern setting unit 33, and supplies information representing the optimal cutout area to the adding unit 631.
- the adding unit 631 adds the information representing the optimal cutout area from the composition analysis unit 34 to the input image as EXIF information, and outputs the result as an output image.
- In step S616, the adding unit 631 adds the information representing the optimal cutout area from the composition analysis unit 34 to the input image as EXIF information, and outputs the result as an output image.
- In this way, the information representing the optimal cutout area can be added to the input image as EXIF information and output. Since no image of the cutout area is generated, the capacity of a frame memory (not shown) in the image processing apparatus 611 can be reduced.
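- The idea of attaching the cutout rectangle to the image as metadata, rather than generating a cropped image, can be sketched as a small serializable payload. The JSON key names below are invented; a real implementation would write such a payload into an EXIF tag (for example, UserComment) of the otherwise unmodified image file:

```python
import json

def crop_info_payload(x, y, width, height):
    """Serialize the optimal cutout area as a small JSON payload of the
    kind that could be stored in an EXIF tag alongside the image data."""
    return json.dumps({"crop": {"x": x, "y": y, "w": width, "h": height}})

def parse_crop_info(payload):
    """Recover the cutout rectangle on the consuming side."""
    crop = json.loads(payload)["crop"]
    return crop["x"], crop["y"], crop["w"], crop["h"]
```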
- In the above, image processing apparatuses that output the image of a cutout area using an image captured by an imaging apparatus or the like as the input image have been described; however, an imaging apparatus may be configured to determine the cutout area for a captured image it has captured.
- FIG. 32 shows a configuration example of an imaging apparatus that determines a cutout region for a captured image.
- In FIG. 32, components having the same functions as those provided in the image processing apparatus 11 in FIG. 1 are denoted by the same names and the same reference numerals, and description thereof is omitted as appropriate.
- the imaging device 711 in FIG. 32 is different from the image processing device 11 in FIG. 1 in that an imaging unit 731, an image processing unit 732, and a display unit 733 are newly provided.
- the composition analysis unit 34 in FIG. 32 determines the optimal cutout area in the input image of the image to be cut out based on the composition pattern from the composition pattern setting unit 33, and supplies it to the image cutout unit 35 and the display unit 733.
- the imaging unit 731 is configured to include an optical lens, an imaging device, and an A / D (Analog / Digital) conversion unit (none of which are shown).
- the imaging unit 731 captures an image of the subject by photoelectrically converting the light incident through the optical lens with the imaging device, and A/D converts the obtained analog image signal.
- the imaging unit 731 supplies digital image data (captured image) obtained as a result of A / D conversion to the image processing unit 732.
- the image processing unit 732 performs image processing such as noise removal processing on the captured image from the imaging unit 731, and supplies the processed image to the attention area extraction unit 31, the scene determination unit 32, the image cutout unit 35, and the display unit 733.
- the display unit 733 displays a frame indicating the most appropriate extraction region from the composition analysis unit 34 on the captured image from the image processing unit 732, or displays an image of the optimal extraction region cut out by the image cutout unit 35. .
- In step S711, the imaging unit 731 images the subject and supplies the obtained captured image to the image processing unit 732.
- In step S712, the image processing unit 732 performs image processing such as noise removal on the captured image from the imaging unit 731, and supplies the result to the attention area extraction unit 31, the scene determination unit 32, the image cutout unit 35, and the display unit 733.
- In step S718, the display unit 733 displays a frame indicating the optimal cutout area from the composition analysis unit 34 on the captured image from the image processing unit 732, and the process proceeds to step S719.
- In step S719, the image cutout unit 35 cuts out the image of the optimal cutout area from the captured image from the image processing unit 732, based on the optimal cutout area from the composition analysis unit 34.
- In step S720, the display unit 733 displays the image of the optimal cutout area cut out by the image cutout unit 35.
- the cut-out area can be determined based on the number of attention areas in the captured image and the composition pattern associated with the scene of the captured image. Since the region of interest is determined even if the subject is other than a person, an image with an optimal composition can be cut out even if the subject is other than a person. In addition, since the composition pattern is set based on the number of regions of interest and the scene, it is possible to cut out an image with the optimum composition regardless of the category of the captured image.
- In the above, configurations that determine the optimal cutout area regardless of the direction in which the subject included in the attention area faces have been described; however, the optimal cutout area may be determined according to the orientation of the subject.
- FIG. 34 shows an example of the configuration of an image processing apparatus in which the optimum cutout area is determined according to the orientation of the subject.
- In the image processing apparatus 811 in FIG. 34, components having the same functions as those provided in the image processing apparatus 11 in FIG. 1 are denoted by the same names and the same reference numerals, and description thereof is omitted as appropriate.
- The image processing apparatus 811 in FIG. 34 differs from the image processing apparatus 11 in FIG. 1 in that an orientation detection unit 831 is newly provided and a composition analysis unit 832 is provided instead of the composition analysis unit 34.
- the attention area extraction unit 31 extracts the attention area of interest in the input image, and supplies attention area information representing the attention area to the composition pattern setting unit 33 and the orientation detection unit 831.
- the scene determination unit 32 determines the scene of the input image, supplies scene information representing the scene obtained as a result of the determination to the composition pattern setting unit 33 together with the input image, and also supplies the scene information to the orientation detection unit 831.
- The orientation detection unit 831 detects, in the input image, the orientation of the subject included in the attention area represented by the attention area information from the attention area extraction unit 31, and supplies orientation information indicating that orientation to the composition analysis unit 832.
- Based on the composition pattern from the composition pattern setting unit 33 and the orientation information from the orientation detection unit 831, the composition analysis unit 832 determines the optimum cut-out area in the input image for an image cut out with the composition pattern, and supplies it to the image cutout unit 35.
- the composition analysis unit 832 includes a composition model creation unit 832a, a safety model creation unit 832b, a penalty model creation unit 832c, an objective function creation unit 832d, and an optimization unit 832e.
- the composition model creation unit 832a to the objective function creation unit 832d have the same functions as the composition model creation unit 34a to the objective function creation unit 34d in FIG.
- The optimization unit 832e determines a cutout region that minimizes the objective function E, based on the orientation information from the orientation detection unit 831, and supplies it to the image cutout unit 35 as the optimum cutout region.
- In step S815, the orientation detection unit 831 executes orientation detection processing to detect, in the input image, the orientation of the subject included in the attention area represented by the attention area information from the attention area extraction unit 31.
- In step S821, the orientation detection unit 831 determines whether or not the attention area represented by the attention area information from the attention area extraction unit 31 is a face rectangular area.
- If it is determined in step S821 that the attention area is a face rectangular area, that is, if the attention area information from the attention area extraction unit 31 is face rectangular area information, the process proceeds to step S822.
- In step S822, the orientation detection unit 831 detects, in the input image, the orientation of the face included in the face rectangular area represented by the face rectangular area information, and supplies orientation information indicating that orientation to the composition analysis unit 832.
- Specifically, the orientation detection unit 831 identifies (detects) the face orientation by using a tree structure built in advance by learning face images in various orientations as learning samples, repeating discrimination for the face image included in the face rectangular area from the most upstream node of the tree structure toward an end node. For example, the orientation detection unit 831 learns in advance face images facing nine directions, namely front, up, down, left, right, upper right, lower right, upper left, and lower left, and selects the face orientation from these nine directions.
- The orientation detection unit 831 is not limited to the above-described method and may, of course, detect the face orientation by another method.
- If it is determined in step S821 that the attention area is not a face rectangular area, that is, if the attention area information from the attention area extraction unit 31 is attention rectangular area information, the process proceeds to step S823.
- In step S823, the orientation detection unit 831 detects, in the input image, the orientation of the subject included in the target rectangular area represented by the target rectangular area information, based on the scene information from the scene discrimination unit 32, and supplies orientation information representing that orientation to the composition analysis unit 832.
- Specifically, the orientation detection unit 831 stores in advance templates of images in which objects that can appear in each scene face the nine directions of front, up, down, left, right, upper right, lower right, upper left, and lower left. From the templates corresponding to the scene represented by the scene information from the scene discrimination unit 32, the orientation detection unit 831 searches for the template of the object corresponding to the subject included in the target rectangular area, and identifies (detects) the orientation of the subject included in the target rectangular area by performing template matching based on the retrieved template.
- For example, when the subject included in the target rectangular area is a flower, the orientation detection unit 831 searches for the "flower" template and performs template matching based on that template to identify the orientation of the "flower" as the subject.
- The orientation detection unit 831 is not limited to the method described above and may, of course, detect the orientation of the subject by another method.
- In the above, the orientation detection unit 831 identifies the subject and its orientation using the template of the object corresponding to the subject, selected from the templates corresponding to the scene information. Alternatively, the orientation detection unit 831 may identify the subject and its orientation by using a recognizer for recognizing a target object, generated by executing statistical learning processing based on feature quantities, to determine from the feature quantities in the input image whether the target object exists in the input image.
- In this way, the orientation detection unit 831 detects the orientation of the subject included in the attention area in the input image.
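As one concrete illustration of the template-matching variant described above, the sketch below scores an attention-area patch against nine stored orientation templates and returns the best match. This is a hedged sketch, not the patent's implementation: the specification does not state the matching score, so normalized cross-correlation is assumed here, and the template contents, sizes, and function names are hypothetical.

```python
import numpy as np

# Nine orientations as named in the description.
ORIENTATIONS = ["front", "up", "upper_right", "right", "lower_right",
                "down", "lower_left", "left", "upper_left"]

def ncc(a, b):
    """Normalized cross-correlation between two equally sized grayscale patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def detect_orientation(region, templates):
    """Return the orientation whose template best matches the attention area.

    region    -- grayscale patch cut from the input image (2-D array)
    templates -- dict mapping orientation name -> template patch of the same size
    """
    return max(templates, key=lambda o: ncc(region, templates[o]))
```

In the apparatus, the templates corresponding to the discriminated scene would be selected before the matching step is applied.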
- In step S816, the composition analysis unit 832 executes the cut-out area determination process based on the composition pattern from the composition pattern setting unit 33 and the orientation information from the orientation detection unit 831, and determines the optimum cut-out area in the input image for an image cut out with the composition pattern.
- The processing of steps S831 to S834 in the flowchart of FIG. 37 is the same as the processing of steps S31 to S34 described with reference to the flowchart of FIG. 7, and its description is omitted.
- In step S835, the optimization unit 832e determines the optimum cutout region based on the position information of the cutout region that minimizes the objective function E and the orientation information from the orientation detection unit 831, and supplies it to the image cutout unit 35.
- FIG. 38 shows faces D10 to D18 facing front, up, upper right, right, lower right, down, lower left, left, and upper left, respectively, each corresponding to the orientation information representing that orientation. That is, orientation information D10 represents that the face in the input image is facing front, orientation information D11 represents that the face is facing up, and orientation information D12 represents that the face is facing upper right.
- Similarly, orientation information D13 represents that the face is facing right, and orientation information D14 represents that the face is facing lower right.
- Orientation information D15 represents that the face is facing down, orientation information D16 that the face is facing lower left, orientation information D17 that the face is facing left, and orientation information D18 that the face is facing upper left.
- The optimization unit 832e determines the placement of the subject (face) in the three-part composition in accordance with the orientation information D10 to D18, obtains the position information of the cutout area that minimizes the objective function E, and determines the optimum cutout area based on that position information.
- Depending on the orientation information, the optimization unit 832e sets the placement of the face in the three-part composition shown in FIG. 39 to dividing-line intersection P0.
- For another orientation, the optimization unit 832e sets the placement of the face in the three-part composition shown in FIG. 39 to dividing-line intersection P1.
- When the orientation information is the orientation information D12, that is, when the face is facing upper right, the optimization unit 832e sets the placement of the face in the three-part composition shown in FIG. 39 to dividing-line intersection P2. Further, when the orientation information is the orientation information D18, that is, when the face is facing upper left, the optimization unit 832e sets the placement to dividing-line intersection P3.
- For certain orientations, the optimization unit 832e sets the placement of the face in the three-part composition shown in FIG. 39 to one of the dividing-line intersections P0 and P1. Further, when the orientation information is the orientation information D11, that is, when the face is facing up, the optimization unit 832e sets the placement to one of the dividing-line intersections P2 and P3. When two or more face placements are selected for the orientation information in this way, the placement that makes the objective function E smaller is chosen.
- As described above, the optimization unit 832e determines the placement of the face in the three-part composition according to the face orientation. In particular, the optimization unit 832e determines the placement so that the space on the side the face is oriented toward is widened in the three-part composition. As a result, an object or landscape ahead of the face (its line of sight) of the person as the subject can be included in the cut-out area, making it possible to cut out an image with a more open, optimal composition.
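The placement rule just described, leaving the wider space on the side the face is oriented toward, can be sketched as follows. The intersection labels and direction vectors below are assumptions for illustration and do not reproduce the patent's P0 to P3 numbering; among tied candidates, the apparatus would pick the one that minimizes the objective function E.

```python
# Thirds intersections of a unit frame: (x, y) with origin at the top-left.
INTERSECTIONS = {
    "top_left": (1/3, 1/3), "top_right": (2/3, 1/3),
    "bottom_left": (1/3, 2/3), "bottom_right": (2/3, 2/3),
}

# Unit direction vectors for the eight facing directions (y grows downward).
DIRECTIONS = {
    "up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0),
    "upper_right": (1, -1), "lower_right": (1, 1),
    "upper_left": (-1, -1), "lower_left": (-1, 1),
}

def candidate_placements(orientation):
    """Return the thirds intersections that leave the most room ahead of the subject.

    For 'front' (or any unknown orientation) every intersection is a candidate,
    mirroring the cases where two or more placements remain and the final choice
    is made by minimizing the objective function E.
    """
    if orientation not in DIRECTIONS:
        return sorted(INTERSECTIONS)
    dx, dy = DIRECTIONS[orientation]
    # Placing the subject opposite its facing direction maximizes the space ahead.
    score = {p: -(dx * x + dy * y) for p, (x, y) in INTERSECTIONS.items()}
    best = max(score.values())
    return sorted(p for p, s in score.items() if abs(s - best) < 1e-9)
```

For example, a face looking toward the upper right is placed at the lower-left intersection, so the frame opens up in the direction of the gaze.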
- For example, when the subject is a flower, one of the pieces of orientation information D20 to D28 shown in FIG. 40 is supplied from the orientation detection unit 831 to the optimization unit 832e.
- FIG. 40 shows flowers D20 to D28 facing front, up, upper right, right, lower right, down, lower left, left, and upper left, respectively, each corresponding to the orientation information representing that orientation. That is, orientation information D20 represents that the flower in the input image is facing front, D21 represents that the flower is facing up, and D22 represents that the flower is facing upper right. Likewise, D23 represents that the flower is facing right, and D24 represents that the flower is facing lower right.
- Orientation information D25 indicates that the flower is facing down, D26 that it is facing lower left, D27 that it is facing left, and D28 that it is facing upper left.
- The optimization unit 832e determines the placement of the subject (flower) in the three-part composition in accordance with the orientation information D20 to D28, obtains the position information of the cutout area that minimizes the objective function E, and determines the optimum cutout area based on that position information.
- Depending on the orientation information, the optimization unit 832e sets the placement of the flower in the three-part composition shown in FIG. 39 to dividing-line intersection P0. When the orientation information is the orientation information D26, that is, when the flower is facing lower left, the optimization unit 832e sets the placement of the flower to dividing-line intersection P1. When the orientation information is either of the orientation information D22 and D23, that is, when the flower is facing upper right or right, the optimization unit 832e sets the placement to dividing-line intersection P2. When the orientation information is either of the orientation information D27 and D28, that is, when the flower is facing left or upper left, the optimization unit 832e sets the placement to dividing-line intersection P3.
- For certain other orientations, the optimization unit 832e sets the placement of the flower in the three-part composition shown in FIG. 39 to one of the dividing-line intersections P0 and P1. When the orientation information is either of the orientation information D20 and D21, that is, when the flower is facing front or up, the optimization unit 832e sets the placement to one of the dividing-line intersections P2 and P3. When two or more candidate placements correspond to the orientation information in this way, the placement that makes the objective function E smaller is selected.
- As described above, the optimization unit 832e determines the placement of the flower in the three-part composition according to the orientation of the flower. In particular, the optimization unit 832e determines the placement so that the space on the side the flower is oriented toward is widened in the three-part composition. As a result, an object or landscape ahead of the direction the flower faces can be included in the cutout region, making it possible to cut out an image with a more open, optimal composition.
- In the above, the configuration for determining the optimum cutout area according to the orientation of the subject has been described; however, the optimum cutout area may also be determined according to the movement of the subject.
- FIG. 41 shows a configuration example of an image processing apparatus in which the optimum cutout region is determined according to the movement of the subject.
- In the image processing apparatus 861 in FIG. 41, components having the same functions as those provided in the image processing apparatus 11 in FIG. 1 are denoted by the same names and the same reference numerals, and their description is omitted as appropriate.
- The image processing apparatus 861 in FIG. 41 differs from the image processing apparatus 11 in FIG. 1 in that a frame buffer 881, a GMV (Global Motion Vector) calculation unit 882, an LMV (Local Motion Vector) calculation unit 883, and a motion direction determination unit 884 are newly provided, and a composition analysis unit 885 is provided in place of the composition analysis unit 34.
- the attention area extraction unit 31 extracts the attention area of interest in the input image, and supplies attention area information representing the attention area to the composition pattern setting unit 33 and the LMV calculation unit 883.
- the frame buffer 881 holds an input image for one frame and supplies it to the GMV calculation unit 882 and the LMV calculation unit 883.
- The GMV calculation unit 882 calculates a GMV representing the motion of the entire image from the input image and the input image one frame earlier supplied from the frame buffer 881 (hereinafter referred to as the previous frame input image), and supplies it to the motion direction determination unit 884.
- The LMV calculation unit 883 calculates an LMV representing the local motion of the attention area represented by the attention area information from the attention area extraction unit 31, based on the input image and the previous frame input image from the frame buffer 881, and supplies it to the motion direction determination unit 884.
- The motion direction determination unit 884 determines the direction of movement (motion direction) of the subject included in the attention area, and supplies motion direction information representing the motion direction to the composition analysis unit 885.
- Based on the composition pattern from the composition pattern setting unit 33 and the motion direction information from the motion direction determination unit 884, the composition analysis unit 885 determines the optimum cut-out area in the input image for an image cut out with the composition pattern, and supplies it to the image cutout unit 35.
- the composition analysis unit 885 includes a composition model creation unit 885a, a safety model creation unit 885b, a penalty model creation unit 885c, an objective function creation unit 885d, and an optimization unit 885e.
- the composition model creation unit 885a through the objective function creation unit 885d have the same functions as the composition model creation unit 34a through the objective function creation unit 34d in FIG.
- the optimization unit 885e determines a cutout region that minimizes the objective function E based on the motion direction information from the motion direction determination unit 884, and supplies it to the image cutout unit 35 as the most appropriate extraction region.
- In step S865, the motion direction determination unit 884 executes motion direction determination processing to determine, in the input image, the motion direction of the subject included in the attention area represented by the attention area information from the attention area extraction unit 31.
- In step S871, the GMV calculation unit 882 calculates the GMV from the input image and the previous frame input image from the frame buffer 881, and supplies the GMV to the motion direction determination unit 884.
- In step S872, the LMV calculation unit 883 calculates the LMV of the attention area represented by the attention area information from the attention area extraction unit 31, from the input image and the previous frame input image from the frame buffer 881, and supplies it to the motion direction determination unit 884.
- In step S873, the motion direction determination unit 884 determines whether or not the LMV is 0 or substantially 0.
- If it is determined in step S873 that the LMV is not 0 or substantially 0, that is, if the subject included in the attention area has sufficient movement, the process proceeds to step S874, and the motion direction determination unit 884 sets the direction of the LMV as the motion direction and supplies motion direction information representing that motion direction to the composition analysis unit 885.
- If it is determined in step S873 that the LMV is 0 or substantially 0, that is, if the subject included in the attention area has no or substantially no movement, the process proceeds to step S875, and the motion direction determination unit 884 determines whether or not the GMV is 0 or substantially 0.
- If it is determined in step S875 that the GMV is not 0 or substantially 0, that is, if the image as a whole has sufficient motion, the process proceeds to step S876, and the motion direction determination unit 884 sets the direction opposite to the direction of the GMV as the motion direction and supplies motion direction information representing that motion direction to the composition analysis unit 885.
- The situation in step S875 is one in which the input image as a whole is moving but the subject included in the attention area is not, for example, a state in which the background is moving while the subject is stationary.
- In this case, the subject moves relative to the background in the direction opposite to the background's movement; that is, the direction opposite to the direction of the GMV is, relatively, the direction of movement of the subject.
- If it is determined in step S875 that the GMV is 0 or substantially 0, that is, if the image as a whole has no or substantially no motion, the process proceeds to step S877, and the motion direction determination unit 884 determines that there is no motion direction and supplies motion direction information indicating the absence of a motion direction to the composition analysis unit 885.
- In this way, the motion direction determination unit 884 determines the motion direction of the subject included in the attention area in the input image.
- For example, the motion direction determination unit 884 selects the motion direction from one of nine types: none, up, down, left, right, upper right, lower right, upper left, and lower left.
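The decision rule of steps S873 to S877 can be summarized in a short sketch: use the LMV direction when the attention area itself moves, otherwise use the direction opposite the GMV when only the background moves, and otherwise report that there is no motion direction. The vector representation and the epsilon threshold below are assumptions made for illustration.

```python
import math

def motion_direction(lmv, gmv, eps=1e-3):
    """Return the subject's motion direction as a unit vector, or None.

    lmv -- (dx, dy) local motion vector of the attention area
    gmv -- (dx, dy) global motion vector of the whole frame
    """
    def norm(v):
        return math.hypot(v[0], v[1])
    if norm(lmv) > eps:                      # S873/S874: the subject itself moves
        n = norm(lmv)
        return (lmv[0] / n, lmv[1] / n)
    if norm(gmv) > eps:                      # S875/S876: only the background moves
        n = norm(gmv)
        return (-gmv[0] / n, -gmv[1] / n)    # relative motion is opposite the GMV
    return None                              # S877: no motion direction
```

The returned unit vector would then be quantized to one of the nine types (none plus eight directions) before being passed to the composition analysis stage.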
- In step S866, the composition analysis unit 885 executes the cut-out area determination process based on the composition pattern from the composition pattern setting unit 33 and the motion direction information from the motion direction determination unit 884, and determines the optimum cut-out area in the input image for an image cut out with the composition pattern.
- The processing of steps S881 to S884 in the flowchart of FIG. 44 is the same as the processing of steps S31 to S34 described with reference to the flowchart of FIG. 7, and its description is omitted.
- In step S885, the optimization unit 885e determines the optimum cutout region based on the position information of the cutout region that minimizes the objective function E and the motion direction information from the motion direction determination unit 884, and supplies it to the image cutout unit 35.
- One of the pieces of motion direction information D30 to D38 shown in FIG. 45 is supplied from the motion direction determination unit 884 to the optimization unit 885e.
- FIG. 45 shows arrows, each with a starting point, representing movement toward up, upper right, right, lower right, down, lower left, left, and upper left, corresponding to the motion direction information D30 to D38. That is, motion direction information D30 indicates that the subject in the input image has no motion direction, D31 indicates that the motion direction is up, and D32 indicates that the motion direction is upper right. Similarly, D33 indicates that the motion direction is right, D34 lower right, D35 down, D36 lower left, D37 left, and D38 upper left.
- The optimization unit 885e determines the placement of the subject in the three-part composition in accordance with the motion direction information D30 to D38, obtains the position information of the cutout area that minimizes the objective function E, and determines the optimum cutout area based on that position information.
- Depending on the motion direction information, the optimization unit 885e sets the placement of the subject in the three-part composition shown in FIG. 39 to dividing-line intersection P0.
- When the motion direction information is the motion direction information D36, that is, when the motion direction of the subject is lower left, the optimization unit 885e sets the placement of the subject to dividing-line intersection P1.
- For another motion direction, the optimization unit 885e sets the placement of the subject in the three-part composition shown in FIG. 39 to dividing-line intersection P2; when the motion direction information is the motion direction information D38, that is, when the motion direction of the subject is upper left, it sets the placement to dividing-line intersection P3.
- For certain motion directions, the placement of the subject is set to one of the dividing-line intersections P2 and P3.
- When the motion direction information is the motion direction information D33, that is, when the motion direction of the subject is right, the optimization unit 885e sets the placement of the subject to one of the dividing-line intersections P0 and P2.
- For other motion directions, the placement of the subject is set to one of the dividing-line intersections P0 and P1; when the motion direction information is the motion direction information D37, that is, when the motion direction of the subject is left, the placement is set to one of the dividing-line intersections P1 and P3.
- When the motion direction information is the motion direction information D30, that is, when the subject has no motion, the optimization unit 885e may set the placement of the subject to any of the dividing-line intersections P0 to P3.
- In each case where two or more candidate placements correspond to the motion direction information, the placement that makes the objective function E smaller is selected.
- As described above, the optimization unit 885e determines the placement of the subject in the three-part composition according to the motion direction of the subject. In particular, the optimization unit 885e determines the placement so that the space in the direction in which the subject moves is widened in the three-part composition. As a result, the object or landscape ahead of the moving subject can be included in the cutout region, making it possible to cut out an image with a more open, optimum composition.
- The composition is not limited to the three-part composition; also in the contrast composition (composition B) and the pattern composition (composition H) shown in FIG. , the subject may be arranged according to the orientation and movement of the subject.
- In the above description, the number of subjects, that is, the number of attention areas, is one. However, even when there are two or more subjects, each subject may be arranged according to its own orientation and direction of movement.
- the series of processes described above can be executed by hardware or software.
- When the series of processes is executed by software, a program constituting the software is installed from a program recording medium into a computer incorporated in dedicated hardware, or into, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
- FIG. 46 is a block diagram showing an example of the hardware configuration of a computer that executes the above-described series of processes using a program.
- In the computer, a CPU (Central Processing Unit) 901, a ROM (Read Only Memory) 902, and a RAM (Random Access Memory) 903 are connected to one another by a bus 904.
- An input / output interface 905 is further connected to the bus 904.
- Connected to the input/output interface 905 are an input unit 906 made up of a keyboard, a mouse, a microphone, and the like, an output unit 907 made up of a display, a speaker, and the like, a storage unit 908 made up of a hard disk, a nonvolatile memory, and the like, a communication unit 909 made up of a network interface and the like, and a drive 910 that drives a removable medium 911 such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory.
- In the computer configured as described above, the CPU 901 loads, for example, the program stored in the storage unit 908 into the RAM 903 via the input/output interface 905 and the bus 904 and executes it, whereby the above-described series of processes is performed.
- The program executed by the computer (CPU 901) is provided recorded on a removable medium 911, which is a package medium made up of, for example, a magnetic disk (including a flexible disk), an optical disc (such as a CD-ROM (Compact Disc-Read Only Memory) or a DVD (Digital Versatile Disc)), a magneto-optical disc, or a semiconductor memory, or is provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
- the program can be installed in the storage unit 908 via the input / output interface 905 by attaching the removable medium 911 to the drive 910.
- the program can be received by the communication unit 909 via a wired or wireless transmission medium and installed in the storage unit 908.
- the program can be installed in the ROM 902 or the storage unit 908 in advance.
- The program executed by the computer may be a program whose processing is performed in time series in the order described in this specification, or a program whose processing is performed in parallel or at necessary timing, such as when a call is made.
Abstract
Description
1. First Embodiment
2. Second Embodiment
3. Third Embodiment
4. Fourth Embodiment
5. Fifth Embodiment
6. Sixth Embodiment
7. Seventh Embodiment
8. Eighth Embodiment
9. Ninth Embodiment
10. Tenth Embodiment
[Configuration Example of Image Processing Apparatus]
FIG. 1 shows a functional configuration example of an embodiment of an image processing apparatus to which the present invention is applied.
Next, the image cutout processing of the image processing apparatus 11 in FIG. 1 will be described with reference to the flowchart of FIG. 3.
Here, the cutout area determination processing in step S15 of the flowchart of FIG. 3 will be described with reference to the flowchart of FIG. 7.
[Configuration Example of Image Processing Apparatus]
FIG. 15 shows a configuration example of an image processing apparatus that performs object recognition on an input image and sets a composition pattern according to the recognized object. In the image processing apparatus 111 in FIG. 15, components having the same functions as those provided in the image processing apparatus 11 in FIG. 1 are denoted by the same names and the same reference numerals, and their description is omitted as appropriate.
Next, the image cutout processing of the image processing apparatus 111 in FIG. 15 will be described with reference to the flowchart of FIG. 16. The processing of steps S111 to S113, S115, and S116 in the flowchart of FIG. 16 is the same as the processing of steps S11 to S13, S15, and S16 described with reference to the flowchart of FIG. 3, and its description is omitted.
[Configuration Example of Image Processing Apparatus]
FIG. 18 shows a configuration example of an image processing apparatus that determines a plurality of candidates for the cutout area in an input image. In the image processing apparatus 211 in FIG. 18, components having the same functions as those provided in the image processing apparatus 11 in FIG. 1 are denoted by the same names and the same reference numerals, and their description is omitted as appropriate.
Next, the image cutout processing of the image processing apparatus 211 in FIG. 18 will be described with reference to the flowchart of FIG. 19. The processing of steps S211 to S214 in the flowchart of FIG. 19 is the same as the processing of steps S11 to S14 described with reference to the flowchart of FIG. 3, and its description is omitted.
Here, the cutout area candidate determination processing in step S215 of the flowchart of FIG. 19 will be described with reference to the flowchart of FIG. 20. The processing of steps S231 to S234 in the flowchart of FIG. 20 is the same as the processing of steps S31 to S34 described with reference to the flowchart of FIG. 7, and its description is omitted.
[Configuration Example of Image Processing Apparatus]
FIG. 23 shows a configuration example of an image processing apparatus that receives a panoramic image as the input image. In the image processing apparatus 311 in FIG. 23, components having the same functions as those provided in the image processing apparatus 11 in FIG. 1 are denoted by the same names and the same reference numerals, and their description is omitted as appropriate.
Next, the image cutout processing of the image processing apparatus 311 in FIG. 23 will be described with reference to the flowchart of FIG. 24. The processing of steps S311 to S314 in the flowchart of FIG. 24 is the same as the processing of steps S11 to S14 described with reference to the flowchart of FIG. 3, and its description is omitted. The processing of steps S318 and S319 in the flowchart of FIG. 24 is the same as the processing of steps S15 and S16 described with reference to the flowchart of FIG. 3, and its description is also omitted.
[Configuration Example of Image Processing Apparatus]
FIG. 27 shows a configuration example of an image processing apparatus that outputs the input image as-is together with the cutout area image. In the image processing apparatus 411 in FIG. 27, components having the same functions as those provided in the image processing apparatus 11 in FIG. 1 are denoted by the same names and the same reference numerals, and their description is omitted as appropriate.
[Configuration Example of Image Processing Apparatus]
FIG. 28 shows a configuration example of an image processing apparatus that outputs only information representing the cutout area together with the cutout area image. In the image processing apparatus 511 in FIG. 28, components having the same functions as those provided in the image processing apparatus 11 in FIG. 1 are denoted by the same names and the same reference numerals, and their description is omitted as appropriate.
Next, the image cutout processing of the image processing apparatus 511 in FIG. 28 will be described with reference to the flowchart of FIG. 29. The processing of steps S511 to S515 in the flowchart of FIG. 29 is the same as the processing of steps S11 to S15 described with reference to the flowchart of FIG. 3, and its description is omitted.
[Configuration Example of Image Processing Apparatus]
FIG. 30 shows a configuration example of an image processing apparatus that outputs the input image and information representing the optimum cutout area as one piece of data. In the image processing apparatus 611 in FIG. 30, components having the same functions as those provided in the image processing apparatus 11 in FIG. 1 are denoted by the same names and the same reference numerals, and their description is omitted as appropriate.
Next, the image cutout processing of the image processing apparatus 611 in FIG. 30 will be described with reference to the flowchart of FIG. 31. The processing of steps S611 to S615 in the flowchart of FIG. 31 is the same as the processing of steps S11 to S15 described with reference to the flowchart of FIG. 3, and its description is omitted.
[Configuration Example of Imaging Apparatus]
FIG. 32 shows a configuration example of an imaging apparatus that determines a cutout area for a captured image. In the imaging apparatus 711 in FIG. 32, components having the same functions as those provided in the image processing apparatus 11 in FIG. 1 are denoted by the same names and the same reference numerals, and their description is omitted as appropriate.
Next, the image cutout processing of the imaging apparatus 711 in FIG. 32 will be described with reference to the flowchart of FIG. 33. The processing of steps S713 to S717 in the flowchart of FIG. 33 is the same as the processing of steps S11 to S15 described with reference to the flowchart of FIG. 3, and its description is omitted.
[Configuration Example of Image Processing Apparatus]
FIG. 34 shows a configuration example of an image processing apparatus that determines the optimum cutout area according to the orientation of the subject. In the image processing apparatus 811 in FIG. 34, components having the same functions as those provided in the image processing apparatus 11 in FIG. 1 are denoted by the same names and the same reference numerals, and their description is omitted as appropriate.
Next, the image cutout processing of the image processing apparatus 811 in FIG. 34 will be described with reference to the flowchart of FIG. 35. The processing of steps S811 to S814 and S817 in the flowchart of FIG. 35 is the same as the processing of steps S11 to S14 and S16 described with reference to the flowchart of FIG. 3, and its description is omitted. In the image cutout processing described with reference to the flowchart of FIG. 35, the number of subjects, that is, the number of attention areas, is assumed to be one.
Here, the orientation detection processing in step S815 of the flowchart of FIG. 35 will be described with reference to the flowchart of FIG. 36.
Here, the cutout area determination processing in step S816 of the flowchart of FIG. 35 will be described with reference to the flowchart of FIG. 37.
[Configuration Example of Image Processing Apparatus]
FIG. 41 shows a configuration example of an image processing apparatus that determines the optimum cutout area according to the movement of the subject. In the image processing apparatus 861 in FIG. 41, components having the same functions as those provided in the image processing apparatus 11 in FIG. 1 are denoted by the same names and the same reference numerals, and their description is omitted as appropriate.
Next, the image cutout processing of the image processing apparatus 861 in FIG. 41 will be described with reference to the flowchart of FIG. 42. The processing of steps S861 to S864 and S867 in the flowchart of FIG. 42 is the same as the processing of steps S11 to S14 and S16 described with reference to the flowchart of FIG. 3, and its description is omitted. In the image cutout processing described with reference to the flowchart of FIG. 42, the number of subjects, that is, the number of attention areas, is assumed to be one.
Here, the motion direction determination processing in step S865 of the flowchart of FIG. 42 will be described with reference to the flowchart of FIG. 43.
Here, the cutout area determination processing in step S866 of the flowchart of FIG. 42 will be described with reference to the flowchart of FIG. 44.
Claims (15)
- An image processing apparatus comprising:
setting means for setting a composition pattern corresponding to an input image, based on the number of regions of interest in the input image and the scene of the input image; and
determining means for determining, based on the composition pattern set by the setting means, an optimum cropping region in the input image for an image to be cropped from the input image in the composition pattern.
- The image processing apparatus according to claim 1, further comprising cropping means for cropping, from the input image, the cropping region determined by the determining means.
- The image processing apparatus according to claim 2, wherein the determining means determines, based on the composition pattern set by the setting means, a plurality of candidates for the optimum cropping region in the input image for the image to be cropped from the input image in the composition pattern, the apparatus further comprising:
display means for displaying the plurality of cropping-region candidates on the input image; and
selection means for selecting one of the plurality of cropping-region candidates displayed by the display means,
wherein the cropping means crops, from the input image, the cropping region selected by the selection means.
- The image processing apparatus according to claim 1, further comprising:
extraction means for extracting the regions of interest in the input image; and
discrimination means for discriminating the scene of the input image.
- The image processing apparatus according to claim 1, wherein the determining means determines the cropping region such that the center position of the smallest rectangular region containing all the regions of interest in the input image approaches the center of the cropping region in the input image.
- The image processing apparatus according to claim 5, wherein the determining means determines the cropping region such that the cropping region becomes larger and such that the area shared by the cropping region and the smallest rectangular region containing all the regions of interest in the input image becomes larger.
- The image processing apparatus according to claim 1, wherein the determining means determines the cropping region such that the cropping region does not protrude from the input image.
- The image processing apparatus according to claim 1, further comprising judging means for judging whether the input image is a panoramic image by comparing the aspect ratio of the input image with a predetermined threshold,
wherein, when the judging means judges that the input image is a panoramic image, the determining means determines, based on the composition pattern set by the setting means, a plurality of candidates for the optimum cropping region in the input image for the image to be cropped from the input image in the composition pattern.
- The image processing apparatus according to claim 1, further comprising adding means for adding information indicating the cropping region determined by the determining means to the input image as EXIF information.
- The image processing apparatus according to claim 1, wherein the region of interest contains a subject of interest in the input image, the apparatus further comprising detection means for detecting the orientation of the subject,
wherein the determining means determines, based on the composition pattern set by the setting means and the orientation of the subject detected by the detection means, the optimum cropping region in the input image for the image to be cropped from the input image in the composition pattern.
- The image processing apparatus according to claim 1, wherein the region of interest contains a subject of interest in the input image, the apparatus further comprising motion direction determining means for determining the direction of motion of the subject,
wherein the determining means determines, based on the composition pattern set by the setting means and the direction of motion of the subject determined by the motion direction determining means, the optimum cropping region in the input image for the image to be cropped from the input image in the composition pattern.
- The image processing apparatus according to claim 11, further comprising:
overall motion calculating means for obtaining the motion of the entire input image; and
local motion calculating means for obtaining the motion of the region of interest,
wherein the motion direction determining means determines the direction of motion of the subject based on the direction of the motion of the entire input image obtained by the overall motion calculating means and the direction of the motion of the region of interest obtained by the local motion calculating means.
- An image processing method comprising:
a setting step of setting a composition pattern corresponding to an input image, based on the number of regions of interest in the input image and the scene of the input image; and
a determining step of determining, based on the composition pattern set in the setting step, an optimum cropping region in the input image for an image to be cropped from the input image in the composition pattern.
- A program for causing a computer to execute processing comprising:
a setting step of setting a composition pattern corresponding to an input image, based on the number of regions of interest in the input image and the scene of the input image; and
a determining step of determining, based on the composition pattern set in the setting step, an optimum cropping region in the input image for an image to be cropped from the input image in the composition pattern.
- An image capturing apparatus comprising:
image capturing means for capturing an image of a subject;
acquisition means for acquiring the scene of the captured image captured by the image capturing means;
setting means for setting a composition pattern corresponding to the captured image, based on the number of regions of interest containing a subject of interest in the captured image and the scene acquired by the acquisition means; and
determining means for determining, based on the composition pattern set by the setting means, an optimum cropping region in the captured image for an image to be cropped from the captured image in the composition pattern.
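Claims 5 to 7 together describe a constrained placement: make the crop as large as possible, keep the center of the smallest rectangle containing all regions of interest near the crop's center, and never let the crop protrude from the image. A minimal sketch of one policy satisfying those constraints (the function name and the greedy center-then-clamp strategy are assumptions for illustration, not the patented method):

```python
def crop_region(img_w, img_h, roi_box, aspect):
    """Choose a cropping region in the spirit of claims 5-7: the crop
    has the requested aspect ratio (width / height), is as large as
    possible, its center stays as close as possible to the center of
    the smallest rectangle containing all regions of interest, and it
    never protrudes from the input image.

    roi_box is (x0, y0, x1, y1); returns (x, y, w, h).
    """
    # Largest crop of the given aspect ratio that fits inside the image.
    w = min(img_w, img_h * aspect)
    h = w / aspect
    # Center the crop on the ROI bounding box (claim 5) ...
    cx = (roi_box[0] + roi_box[2]) / 2
    cy = (roi_box[1] + roi_box[3]) / 2
    x = cx - w / 2
    y = cy - h / 2
    # ... then clamp so the crop stays inside the image (claim 7).
    x = min(max(x, 0), img_w - w)
    y = min(max(y, 0), img_h - h)
    return x, y, w, h
```

Because the crop is maximal and the ROI box is only re-centered within the image bounds, the overlap between crop and ROI box (claim 6) is also maximal under this policy.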
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
BRPI0905360-3A BRPI0905360A2 (pt) | 2008-09-08 | 2009-09-08 | Image processing apparatus and method, program, and image capturing apparatus |
US12/740,451 US8538074B2 (en) | 2008-09-08 | 2009-09-08 | Image processing apparatus and method, image capturing apparatus, and program |
JP2010527850A JP5224149B2 (ja) | 2008-09-08 | 2009-09-08 | Image processing apparatus and method, imaging apparatus, and program |
CN200980100857A CN101843093A (zh) | 2008-09-08 | 2009-09-08 | Image processing device and method, image capturing device, and program |
EP09811603.1A EP2207341B1 (en) | 2008-09-08 | 2009-09-08 | Image processing apparatus and method, imaging apparatus, and program |
US13/914,365 US9390466B2 (en) | 2008-09-08 | 2013-06-10 | Image processing apparatus and method, image capturing apparatus and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2008229310 | 2008-09-08 | ||
JP2008-229310 | 2008-09-08 |
Related Child Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/740,451 A-371-Of-International US8538074B2 (en) | 2008-09-08 | 2009-09-08 | Image processing apparatus and method, image capturing apparatus, and program |
US74045110A Substitution | 2008-09-08 | 2010-04-29 | |
US13/914,365 Continuation US9390466B2 (en) | 2008-09-08 | 2013-06-10 | Image processing apparatus and method, image capturing apparatus and program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2010027080A1 true WO2010027080A1 (ja) | 2010-03-11 |
Family
ID=41797242
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2009/065626 WO2010027080A1 (ja) | 2008-09-08 | 2009-09-08 | Image processing apparatus and method, imaging apparatus, and program |
Country Status (7)
Country | Link |
---|---|
US (2) | US8538074B2 (ja) |
EP (1) | EP2207341B1 (ja) |
JP (1) | JP5224149B2 (ja) |
CN (1) | CN101843093A (ja) |
BR (1) | BRPI0905360A2 (ja) |
RU (1) | RU2462757C2 (ja) |
WO (1) | WO2010027080A1 (ja) |
Families Citing this family (49)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8538074B2 (en) * | 2008-09-08 | 2013-09-17 | Sony Corporation | Image processing apparatus and method, image capturing apparatus, and program |
US8494259B2 (en) * | 2009-12-28 | 2013-07-23 | Teledyne Scientific & Imaging, Llc | Biologically-inspired metadata extraction (BIME) of visual data using a multi-level universal scene descriptor (USD) |
JP2012042720A * | 2010-08-19 | 2012-03-01 | Sony Corp | Image processing apparatus and method, and program |
US8692907B2 * | 2010-09-13 | 2014-04-08 | Sony Corporation | Image capturing apparatus and image capturing method |
US9325804B2 | 2010-11-08 | 2016-04-26 | Microsoft Technology Licensing, Llc | Dynamic image result stitching |
JP5841538B2 * | 2011-02-04 | 2016-01-13 | Panasonic Intellectual Property Corporation of America | Interest level estimation device and interest level estimation method |
JP2012205037A * | 2011-03-25 | 2012-10-22 | Olympus Imaging Corp | Image processing apparatus and image processing method |
JP5000781B1 * | 2011-11-09 | 2012-08-15 | Rakuten, Inc. | Image processing device, control method for image processing device, program, and information storage medium |
US9025873B2 * | 2011-11-10 | 2015-05-05 | Canon Kabushiki Kaisha | Image processing apparatus and control method therefor |
JP2013153375A * | 2012-01-26 | 2013-08-08 | Sony Corp | Image processing device, image processing method, and recording medium |
US8773543B2 * | 2012-01-27 | 2014-07-08 | Nokia Corporation | Method and apparatus for image data transfer in digital photographing |
JP5393927B1 * | 2012-02-16 | 2014-01-22 | Panasonic Corporation | Video generation device |
US9595298B2 | 2012-07-18 | 2017-03-14 | Microsoft Technology Licensing, Llc | Transforming data to create layouts |
JP6137800B2 * | 2012-09-26 | 2017-05-31 | Olympus Corporation | Image processing apparatus, image processing method, and image processing program |
JP5882975B2 * | 2012-12-26 | 2016-03-09 | Canon Inc. | Image processing apparatus, imaging apparatus, image processing method, and recording medium |
US9582610B2 (en) * | 2013-03-15 | 2017-02-28 | Microsoft Technology Licensing, Llc | Visual post builder |
US10019823B2 (en) * | 2013-10-24 | 2018-07-10 | Adobe Systems Incorporated | Combined composition and change-based models for image cropping |
US9330334B2 (en) | 2013-10-24 | 2016-05-03 | Adobe Systems Incorporated | Iterative saliency map estimation |
US9299004B2 (en) | 2013-10-24 | 2016-03-29 | Adobe Systems Incorporated | Image foreground detection |
JP6381892B2 * | 2013-10-28 | 2018-08-29 | Olympus Corporation | Image processing device, image processing method, and image processing program |
US9195903B2 * | 2014-04-29 | 2015-11-24 | International Business Machines Corporation | Extracting salient features from video using a neurosynaptic system |
FR3021768B1 * | 2014-05-28 | 2017-12-01 | Dxo Sa | Parameterizable method for processing a file representative of at least one image |
US9373058B2 (en) | 2014-05-29 | 2016-06-21 | International Business Machines Corporation | Scene understanding using a neurosynaptic system |
US9798972B2 (en) | 2014-07-02 | 2017-10-24 | International Business Machines Corporation | Feature extraction using a neurosynaptic system for object classification |
US10115054B2 (en) | 2014-07-02 | 2018-10-30 | International Business Machines Corporation | Classifying features using a neurosynaptic system |
US10282069B2 (en) | 2014-09-30 | 2019-05-07 | Microsoft Technology Licensing, Llc | Dynamic presentation of suggested content |
US9626768B2 (en) * | 2014-09-30 | 2017-04-18 | Microsoft Technology Licensing, Llc | Optimizing a visual perspective of media |
US9626584B2 (en) * | 2014-10-09 | 2017-04-18 | Adobe Systems Incorporated | Image cropping suggestion using multiple saliency maps |
GB201501311D0 (en) | 2015-01-27 | 2015-03-11 | Apical Ltd | Method, system and computer program product |
CN105989572B * | 2015-02-10 | 2020-04-24 | Tencent Technology (Shenzhen) Co., Ltd. | Picture processing method and device |
WO2016207875A1 | 2015-06-22 | 2016-12-29 | Photomyne Ltd. | System and method for detecting objects in an image |
CN105357436B * | 2015-11-03 | 2018-07-03 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image cropping method and system for image capturing |
CN105323491B * | 2015-11-27 | 2019-04-23 | Xiaomi Inc. | Image capturing method and device |
JP2017099616A * | 2015-12-01 | 2017-06-08 | Sony Corp | Surgical control device, surgical control method, program, and surgical system |
KR102147230B1 * | 2015-12-16 | 2020-08-25 | Gracenote, Inc. | Dynamic video overlay |
CN105912259A * | 2016-04-14 | 2016-08-31 | 深圳天珑无线科技有限公司 | Photo optimization method and device |
CN106162146B * | 2016-07-29 | 2017-12-08 | Baofeng Group Co., Ltd. | Method and system for automatically recognizing and playing panoramic video |
WO2018094648A1 * | 2016-11-24 | 2018-05-31 | Huawei Technologies Co., Ltd. | Shooting composition guidance method and device |
WO2018106213A1 * | 2016-12-05 | 2018-06-14 | Google Llc | Method for converting landscape video to portrait mobile layout |
US10380228B2 | 2017-02-10 | 2019-08-13 | Microsoft Technology Licensing, Llc | Output generation based on semantic expressions |
US10218901B2 * | 2017-04-05 | 2019-02-26 | International Business Machines Corporation | Picture composition adjustment |
CN107545576A * | 2017-07-31 | 2018-01-05 | South China Agricultural University | Image editing method based on composition rules |
US11282163B2 | 2017-12-05 | 2022-03-22 | Google Llc | Method for converting landscape video to portrait mobile layout using a selection interface |
JP7013272B2 * | 2018-02-13 | 2022-01-31 | Canon Inc. | Image processing apparatus |
CN108810418B * | 2018-07-16 | 2020-09-11 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Image processing method and apparatus, mobile terminal, and computer-readable storage medium |
CN109246351B * | 2018-07-20 | 2021-04-06 | Vivo Mobile Communication Co., Ltd. | Composition method and terminal device |
CN117880607A | 2019-02-28 | 2024-04-12 | 斯塔特斯公司 | Method for generating trackable video frames, recognition system, and medium |
US11373407B2 * | 2019-10-25 | 2022-06-28 | International Business Machines Corporation | Attention generation |
CN116097308A * | 2020-08-21 | 2023-05-09 | Huawei Technologies Co., Ltd. | Automatic photographic composition recommendation |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001211315A * | 1999-11-18 | 2001-08-03 | Fuji Photo Film Co Ltd | Method of adjusting output image area |
JP2004274428A * | 2003-03-10 | 2004-09-30 | Konica Minolta Holdings Inc | Image processing method, image processing apparatus, storage medium, and program |
JP2005175684A * | 2003-12-09 | 2005-06-30 | Nikon Corp | Digital camera and image acquisition method for digital camera |
JP2008042800A | 2006-08-10 | 2008-02-21 | Fujifilm Corp | Trimming device, method, and program |
JP2008147997A * | 2006-12-11 | 2008-06-26 | Fujifilm Corp | Imaging device, imaging method, monitoring system, monitoring method, and program |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6834127B1 (en) * | 1999-11-18 | 2004-12-21 | Fuji Photo Film Co., Ltd. | Method of adjusting output image areas |
US20020063725A1 (en) * | 2000-11-30 | 2002-05-30 | Virtuallylisted Llc | Method and apparatus for capturing and presenting panoramic images for websites |
US7561288B2 (en) * | 2002-07-05 | 2009-07-14 | Canon Kabushiki Kaisha | Recording system and controlling method therefor |
JP4605345B2 * | 2004-06-18 | 2011-01-05 | Fujifilm Corporation | Image processing method and apparatus |
JP2006121298A * | 2004-10-20 | 2006-05-11 | Canon Inc | Method and apparatus for executing trimming |
US7529390B2 * | 2005-10-03 | 2009-05-05 | Microsoft Corporation | Automatically cropping an image |
US8218830B2 * | 2007-01-29 | 2012-07-10 | Myspace Llc | Image editing system and method |
JP4854539B2 * | 2007-02-21 | 2012-01-18 | Canon Inc. | Image processing apparatus, control method therefor, and program |
US8538074B2 (en) * | 2008-09-08 | 2013-09-17 | Sony Corporation | Image processing apparatus and method, image capturing apparatus, and program |
US8406515B2 (en) * | 2009-06-24 | 2013-03-26 | Hewlett-Packard Development Company, L.P. | Method for automatically cropping digital images |
-
2009
- 2009-09-08 US US12/740,451 patent/US8538074B2/en active Active
- 2009-09-08 JP JP2010527850A patent/JP5224149B2/ja not_active Expired - Fee Related
- 2009-09-08 CN CN200980100857A patent/CN101843093A/zh active Pending
- 2009-09-08 RU RU2010117215/07A patent/RU2462757C2/ru not_active IP Right Cessation
- 2009-09-08 BR BRPI0905360-3A patent/BRPI0905360A2/pt not_active IP Right Cessation
- 2009-09-08 EP EP09811603.1A patent/EP2207341B1/en active Active
- 2009-09-08 WO PCT/JP2009/065626 patent/WO2010027080A1/ja active Application Filing
-
2013
- 2013-06-10 US US13/914,365 patent/US9390466B2/en active Active
Non-Patent Citations (1)
Title |
---|
See also references of EP2207341A4 |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011193125A * | 2010-03-12 | 2011-09-29 | Sony Corp | Image processing apparatus and method, program, and imaging apparatus |
KR20130120841A * | 2012-04-26 | 2013-11-05 | LG Electronics Inc. | Image display device and image display method thereof |
KR101952174B1 * | 2012-04-26 | 2019-05-22 | LG Electronics Inc. | Image display device and image display method thereof |
CN104243801A * | 2013-06-17 | 2014-12-24 | Sony Corp | Display control device, display control method, program, and imaging apparatus |
CN104243801B * | 2013-06-17 | 2019-09-03 | Sony Corp | Display control device, display control method, recording medium, and imaging apparatus |
WO2018189961A1 * | 2017-04-14 | 2018-10-18 | Sharp Corporation | Image processing device, terminal device, and image processing program |
JPWO2018189961A1 * | 2017-04-14 | 2020-02-27 | Sharp Corporation | Image processing device, terminal device, and image processing program |
JP2020123894A * | 2019-01-31 | 2020-08-13 | Olympus Corporation | Imaging device, imaging method, and imaging program |
JP7236869B2 | 2019-01-31 | 2023-03-10 | Olympus Corporation | Imaging device, imaging method, and imaging program |
WO2023286367A1 * | 2021-07-15 | 2023-01-19 | Sony Group Corporation | Information processing device, information processing method, and program |
Also Published As
Publication number | Publication date |
---|---|
BRPI0905360A2 (pt) | 2015-06-30 |
US8538074B2 (en) | 2013-09-17 |
US9390466B2 (en) | 2016-07-12 |
US20130272611A1 (en) | 2013-10-17 |
EP2207341A1 (en) | 2010-07-14 |
RU2010117215A (ru) | 2011-11-10 |
EP2207341A4 (en) | 2012-04-11 |
JP5224149B2 (ja) | 2013-07-03 |
US20100290705A1 (en) | 2010-11-18 |
CN101843093A (zh) | 2010-09-22 |
US20160171647A9 (en) | 2016-06-16 |
JPWO2010027080A1 (ja) | 2012-02-02 |
RU2462757C2 (ru) | 2012-09-27 |
EP2207341B1 (en) | 2013-11-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5224149B2 (ja) | Image processing apparatus and method, imaging apparatus, and program | |
JP5016541B2 (ja) | Image processing apparatus and method, and program | |
JP4844657B2 (ja) | Image processing apparatus and method | |
JP5136474B2 (ja) | Image processing apparatus and method, learning apparatus and method, and program | |
US20130058579A1 (en) | Image information processing apparatus | |
JP5609425B2 (ja) | Image processing device, imaging device, and image processing program | |
EP2112817A1 (en) | Composition analysis method, image device having composition analysis function, composition analysis program, and computer-readable recording medium | |
JP5525757B2 (ja) | Image processing device, electronic apparatus, and program | |
US8565491B2 (en) | Image processing apparatus, image processing method, program, and imaging apparatus | |
JP2011054071A (ja) | Image processing device, image processing method, and program | |
KR20090087670A (ko) | System and method for automatically extracting photographing information | |
CN110909724B (zh) | Thumbnail generation method for multi-target images | |
WO2015156149A1 (ja) | Image processing device and image processing method | |
CN111881849A (zh) | Image scene detection method and apparatus, electronic device, and storage medium | |
US9332196B2 (en) | Image processing device, method and program | |
JP5016540B2 (ja) | Image processing apparatus and method, and program | |
JP7000921B2 (ja) | Imaging device, imaging method, and program | |
CN112017120A (zh) | Image synthesis method and apparatus | |
JP5871175B2 (ja) | Region extraction device, imaging device, and region extraction program | |
CN112348822A (zh) | Image processing device and image processing method | |
JP5287965B2 (ja) | Image processing apparatus, image processing method, and program | |
JP6996347B2 (ja) | Imaging device, imaging method, and program | |
WO2011086901A1 (ja) | Image processing device, imaging device, and image processing program | |
JP2005071389A (ja) | Image processing apparatus and method | |
CN112070672A (zh) | Image synthesis method and apparatus | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| WWE | Wipo information: entry into national phase | Ref document number: 200980100857.8; Country of ref document: CN |
| WWE | Wipo information: entry into national phase | Ref document number: 2010527850; Country of ref document: JP. Ref document number: 2009811603; Country of ref document: EP |
| WWE | Wipo information: entry into national phase | Ref document number: 2870/DELNP/2010; Country of ref document: IN |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 09811603; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | Wipo information: entry into national phase | Ref document number: 12740451; Country of ref document: US. Ref document number: 2010117215; Country of ref document: RU |
| NENP | Non-entry into the national phase | Ref country code: DE |
| ENP | Entry into the national phase | Ref document number: PI0905360; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20100430 |