WO2013111552A1 - Image processing device, imaging device, and image processing method - Google Patents
Image processing device, imaging device, and image processing method
- Publication number
- WO2013111552A1 (PCT/JP2013/000248, JP2013000248W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- embedded
- information
- area
- image
- depth
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/156—Mixing image signals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/174—Segmentation; Edge detection involving the use of two or more images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/633—Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
- H04N23/634—Warning indications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/2224—Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
- H04N5/2226—Determination of depth image, e.g. for foreground/background separation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10141—Special mode during image acquisition
- G06T2207/10148—Varying focus
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/172—Processing image signals image signals comprising non-image signal components, e.g. headers or format information
- H04N13/183—On-screen display [OSD] information, e.g. subtitles or menus
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/271—Image signal generators wherein the generated image signals comprise depth maps or disparity maps
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/296—Synchronisation thereof; Control thereof
Definitions
- the present invention relates to an image processing apparatus that performs image processing for embedding information in a region in an image.
- There are techniques described in Patent Document 1 and Patent Document 2 that relate to image processing for embedding information in a region in an image.
- the present invention provides an image processing apparatus that can appropriately determine a region for embedding information.
- An image processing apparatus includes: an image acquisition unit that acquires an image; an embedded information acquisition unit that acquires embedded information to be embedded in a region in the image; a depth information acquisition unit that acquires depth information indicating the depth value of each pixel of the image; and an embedding region determination unit that determines, using the depth information, an embedding region, which is the region in which the embedded information is embedded.
- the image processing apparatus can appropriately determine a region for embedding information.
- FIG. 1 is a diagram showing an image after embedding information is embedded.
- FIG. 2 is a diagram illustrating a depth map.
- FIG. 3A is a diagram illustrating a depth map in which embedded information is not conspicuous.
- FIG. 3B is a diagram illustrating a depth map in which embedded information has a sense of incongruity.
- FIG. 3C is a diagram illustrating a depth map in which embedded information can be appropriately viewed.
- FIG. 4 is a configuration diagram of the image processing apparatus according to the first embodiment.
- FIG. 5A is a diagram illustrating a depth map of the input image according to the first example.
- FIG. 5B is a diagram illustrating a depth range according to the first example.
- FIG. 5C is a diagram illustrating a search area according to the first example.
- FIG. 5D is a diagram illustrating an embedded region according to the first example.
- FIG. 6A is a diagram illustrating a depth map of the input image according to the second example.
- FIG. 6B is a histogram of depth values according to the second example.
- FIG. 6C is a diagram illustrating an embedded region according to the second example.
- FIG. 7A is a diagram illustrating a depth map of the input image according to the third example.
- FIG. 7B is a diagram illustrating an initial search area according to the third example.
- FIG. 7C is a diagram illustrating a new search area according to the third example.
- FIG. 7D is a diagram illustrating an embedded region according to a third example.
- FIG. 7E is a diagram illustrating a depth range according to a third example.
- FIG. 7F is a diagram illustrating a depth map after embedding according to a third example.
- FIG. 8A is a diagram illustrating a depth map of an input image according to the fourth example.
- FIG. 8B is a diagram showing a notification message for photographing according to the fourth example.
- FIG. 8C is a diagram showing a notification message for correction according to the fourth example.
- FIG. 8D is a diagram illustrating an embedded region according to a fourth example.
- FIG. 9A is a diagram illustrating a depth map after embedding a decorative part.
- FIG. 9B is a diagram illustrating a depth map after the frame is embedded.
- FIG. 9C is a diagram illustrating a depth map in which a part of text is emphasized.
- FIG. 9D is a diagram illustrating a depth map in which text is embedded in accordance with the power of audio.
- FIG. 10A is a flowchart showing the operation of the image processing apparatus according to the first embodiment.
- FIG. 10B is a flowchart showing processing for searching for an embedded area according to the first embodiment.
- FIG. 11 is a configuration diagram of an image processing apparatus according to the second embodiment.
- FIG. 12A is a diagram showing a depth map of an input image according to the second embodiment.
- FIG. 12B is a diagram showing the depth map reliability according to the second embodiment.
- FIG. 12C is a diagram showing a first example of a depth map after embedding according to the second embodiment.
- FIG. 12D is a diagram illustrating a second example of the depth map after embedding according to the second embodiment.
- FIG. 13 is a configuration diagram of an image processing apparatus according to the third embodiment.
- FIG. 14 is a flowchart showing the operation of the image processing apparatus according to the third embodiment.
- FIG. 15 is a configuration diagram of an image processing apparatus according to the fourth embodiment.
- FIG. 16 is a flowchart showing the operation of the image processing apparatus according to the fourth embodiment.
- the present inventor has found the following problems with the techniques, described in the "Background Art" section, for image processing that embeds information in a region in an image.
- There is an image processing apparatus having an embedding function for embedding a text message or a decorative part in an image obtained by photographing or the like.
- With such an embedding function, for example, a participant can add a message to a photographed image at an event such as a birthday party or a wedding (see FIG. 1).
- Hereinafter, information such as a message embedded in an image is referred to as embedded information.
- Embedding embedded information in an image means placing the embedded information on the image, that is, superimposing it on the image. Therefore, part of the image is hidden when the embedded information is embedded.
- the embedded information may be embedded on the main subject in the captured image, and the main subject may be hidden by the embedded information. Therefore, as a method for appropriately determining an appropriate embedding position, a method for determining an embedding position using information included in an image is known (Patent Document 1).
- the image processing apparatus of Patent Document 1 detects main positions in the image using information included in the image (focus area, date imprint, face position, character position, subject outline, etc.) and embeds the embedded information while avoiding those main positions. Thereby, the image processing apparatus can embed the embedded information in an area other than the main positions. Therefore, the image processing apparatus can appropriately embed embedded information in a normal photograph.
- Hereinafter, a value indicating the degree of depth corresponding to parallax or distance is referred to as a depth value.
- Information relating to the depth value of each pixel in an image or an image area is referred to as depth information.
- For example, in photographing with a stereo camera, two left and right images having parallax are obtained, and depth information regarding the depth value of each pixel of these images is obtained from the parallax between the two left and right images.
- In Patent Document 2, a 3D image is generated based on a measured distance, and depth information is added to the embedded information. Thereby, the 3D image in which the embedded information is embedded is viewed three-dimensionally. Here, by adding appropriate depth information to the embedded information, the embedded information embedded in the 3D image is displayed in 3D without a sense of incongruity.
- Patent Document 2 discloses a method using a user interface as a method for adding depth information to embedded information.
- the user can give appropriate depth information to the embedded information by pressing the “far” or “near” button an appropriate number of times.
- Suppose that embedded information is embedded in a 3D image based on techniques such as the method of determining an embedding position as in Patent Document 1 and the method of adding depth information to embedded information as in Patent Document 2.
- the embedded information embedded in the 3D image may not be properly viewed.
- In order to be viewed appropriately, the embedded information embedded in the 3D image should appear to pop out from its surroundings.
- a depth map (DepthMap) indicating the depth value of each pixel of the image will be described.
- Figure 2 shows the depth map.
- the depth map is usually represented by an image as shown in FIG.
- The whiter a pixel is (the lighter the hatching), the closer the subject is to the camera; the blacker a pixel is (the darker the hatching), the farther the subject is from the camera.
- When the depth values around the embedded information indicate the same depth as the embedded information, or when they indicate a depth nearer than the embedded information, the embedded information embedded in the 3D image cannot be viewed appropriately.
- In that case, the pop-out amount of the embedded information is equivalent to that of the surrounding area (FIG. 3A).
- Such embedded information has a weak pop-out effect and looks as if it were merely written on the image. Such embedded information is not emphasized and is not noticeable.
- When the depth value of the embedded information indicates a depth deeper than the surrounding depth values, the embedded information appears to be recessed (FIG. 3B). Such an image gives a sense of incongruity.
- On the other hand, when the depth value of the embedded information indicates a depth nearer than the surrounding depth values, the embedded information pops out and looks appropriate (FIG. 3C).
- In Patent Document 1, the depth value is not used for determining the embedding position. Therefore, the relationship between the depth value of the embedded information and the depth values around the embedding position may not be appropriate, and the embedded information may not be viewed properly. Furthermore, even when a person's face is excluded from the embedding position, the embedding position may be set in an area such as the person's body or an object the person is holding, and the image may not be viewed properly.
- setting the depth information in the embedded information may be inconvenient for the user.
- In Patent Document 2, depth information is given to the embedded information via a user interface. Therefore, every time this function is used, the user has to adjust the depth information.
- An image processing apparatus according to one aspect of the present invention includes: an image acquisition unit that acquires an image; an embedded information acquisition unit that acquires embedded information to be embedded in a region in the image; a depth information acquisition unit that acquires depth information indicating the depth value of each pixel of the image; and an embedding region determination unit that determines, using the depth information, an embedding region that is the region in which the embedded information is embedded.
- the image processing apparatus can appropriately determine an area for embedding information.
- the embedding area determination unit may determine the embedding area composed of a plurality of pixels having a depth value indicating a depth side with respect to a predetermined depth value using the depth information.
- the image processing apparatus determines the back area as the embedding area. Therefore, the image processing apparatus can avoid hiding the main position of the image.
- the image processing apparatus may further include a subject position detection unit that acquires subject position information indicating a subject position by detecting the subject position, which is the position in the image of a predetermined subject included in the image.
- the embedding area determination unit may then determine, using the depth information and the subject position information, the embedding area composed of a plurality of pixels whose depth values indicate the far side relative to the depth value of the pixel at the subject position.
- the image processing apparatus determines an area on the back side of the predetermined subject as an embedded area. Therefore, the image processing apparatus can avoid hiding an area equivalent to a predetermined subject.
- the embedding area determination unit may determine the embedding area including a plurality of pixels having a depth value within a predetermined range from a predetermined depth value using the depth information.
- the image processing apparatus determines an area where the degree of dispersion of the depth value is small as an embedded area.
- An area where the degree of dispersion of depth values is small is likely to be an inconspicuous area. Therefore, the image processing apparatus can determine an area that is highly inconspicuous as an embedded area.
- the embedding area determination unit may determine, using the depth information, the embedding area composed of a plurality of pixels having depth values within a predetermined range of the depth value at which the appearance frequency peaks.
- the image processing apparatus determines an area having a depth value having a high appearance frequency as an embedded area.
- a region having a depth value with a high appearance frequency is likely to be inconspicuous. Therefore, the image processing apparatus can determine an area that is highly inconspicuous as an embedded area.
- the embedding area determination unit may determine, using the depth information, the embedding area that is an area composed of a plurality of pixels whose depth values indicate the far side of a predetermined depth value and fall within a predetermined range of the depth value at which the appearance frequency peaks.
- the image processing apparatus can determine the area on the back side that is highly inconspicuous as the embedded area.
- the depth information acquisition unit may acquire the depth information including information indicating the reliability of the depth value of each pixel of the image.
- the image processing apparatus can appropriately determine the embedding area by using the depth information including the reliability of the depth value of each pixel of the image.
- the embedding area determination unit may determine the embedding area including a pixel whose depth value reliability is lower than a predetermined reliability using the depth information.
- the image processing apparatus can hide the region including the pixel having the low reliability of the depth value.
- the embedding area determination unit may determine, using the depth information, the embedding area from an area excluding the subject position, which is the position in the image of a predetermined subject included in the image.
- the image processing apparatus can determine the embedding area from the area excluding the main subject. That is, the image processing apparatus can avoid hiding main subjects.
- the embedding area determination unit may set the size of the embedding area using the amount of information in the embedded information acquired by the embedded information acquisition unit, and may determine the embedding area having the set size.
- the image processing apparatus can appropriately set the size of the embedding area based on the embedding information.
- when no embedding area satisfying the conditions used for determining the embedding area with the depth information exists, the embedding area determination unit may determine the embedding area from an area excluding the subject position, which is the position in the image of a predetermined subject included in the image.
- the image processing apparatus can determine the embedding area from the areas excluding the subject.
- the image processing apparatus may further include a depth value determination unit that determines a depth value of the embedded information, and an embedding unit that embeds the embedded information in the embedding area in the image using the depth value determined by the depth value determination unit.
- the image processing apparatus can set the depth value in the embedded information. Therefore, embedded information that can be viewed three-dimensionally is obtained.
- the depth value determination unit may determine the depth value of the embedded information to be a depth value of the same degree as the depth value of the subject position, which is the position in the image of a predetermined subject included in the image.
- the image processing apparatus can set the depth value of the embedded information so that the embedded information is conspicuous as much as the subject.
- when no embedding area satisfying the conditions used for determining the embedding area with the depth information exists, the depth value determination unit may determine the depth value of the embedded information, embedded in the embedding area determined from the area excluding the subject position (the position in the image of a predetermined subject included in the image), to be a depth value indicating the near side of the depth value at the subject position.
- the image processing apparatus can avoid setting an inconspicuous depth value in the embedded information.
- the subject position detection unit may acquire the subject position information by detecting, as the subject position, the position of a person's face included in the image.
- the image processing apparatus determines an area behind the subject's face as an embedded area. Therefore, the image processing apparatus can avoid hiding an area equivalent to the face of the subject.
- the embedded information acquisition unit may acquire the embedded information including at least one of a text, a decorative part, a frame, and an image.
- the image processing apparatus can determine an embedding area for embedding a text, a decorative part, a frame, an image, or the like.
- the image processing apparatus may further include a display unit that displays the image.
- When no embedding area exists, the display unit may display a notification message indicating that the embedding area does not exist.
- the image processing apparatus can notify the user that there is no area for embedding the embedded information in the image.
- When displaying the notification message, the embedding area determination unit may use the depth information to determine whether a predetermined subject included in the image is close, and if the subject is determined to be close, the display unit may display the notification message including a message prompting the user to photograph the subject from farther away.
- Thereby, when there is no region for embedding the embedded information in the image, the image processing apparatus can prompt the user to photograph the subject from farther away.
- an imaging apparatus includes the image processing apparatus, and the image acquisition unit acquires the image by photographing a subject.
- the imaging apparatus can appropriately determine the area of the embedded information embedded in the image obtained by shooting.
- FIG. 4 shows the configuration of the image processing apparatus according to the present embodiment and the input information to each component of the image processing apparatus.
- the image processing apparatus 100 includes an imaging unit 1, a subject position detection unit 2, an embedded information acquisition unit 3, a depth information acquisition unit 4, an embedded region search unit 5, and a depth information setting unit 6. And an embedded portion 7.
- the 3D image is generated by combining the depth information corresponding to the distance from the camera to the subject and the planar image.
- In the case of a stereo camera, the embedded information is embedded in the left and right images obtained by photographing the subject. Specifically, the embedding area is determined, the embedded information is modified according to the difference between the embedding area of the left image and the embedding area of the right image, and the modified embedded information is embedded in the two left and right images.
- the present embodiment is applied to a method of generating a 3D image by combining depth information corresponding to the distance from the camera to the subject and a planar image.
- the present embodiment may be applied to a method using a stereo camera.
- each component will be described.
- the imaging unit 1 includes a lens unit in which a lens for collecting light rays is incorporated, and an imaging device such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor).
- the imaging unit 1 images a subject and outputs an image.
- the subject position detection unit 2 detects a subject position (subject area), which is the position (region) of the main subject in the image, using an image before imaging (before recording), an image after imaging (after recording), or both.
- the main subject is a target that should not be covered with embedded information, for example, a human face.
- the main subject may be something other than a person, such as an animal or a structure.
- the subject position detection unit 2 detects a person's face (face region) as a main subject.
- the method for detecting the face area is not limited.
- a general method may be used for detection of a face region.
- When a plurality of persons appear in the image, the subject position detection unit 2 detects a plurality of face areas. If no person appears in the image, the subject position detection unit 2 determines that no face area can be detected. The subject position detection unit 2 then outputs the detection result to the embedding area search unit 5.
- the embedded information acquisition unit 3 acquires embedded information embedded in the image obtained by the imaging unit 1.
- the embedded information may include text, decorative parts, or an image different from the image obtained by the imaging unit 1.
- the embedded information acquisition unit 3 may acquire the embedded information by performing voice recognition processing on audio captured via a microphone (not shown) attached to the image processing apparatus 100, either immediately after shooting or when an image obtained by shooting is edited.
- the embedded information acquisition unit 3 may acquire embedded information via an input device (not shown) that accepts user input. Further, the embedded information acquisition unit 3 may cause the user to select embedded information from a plurality of templates prepared in advance.
- the method by which the embedded information acquisition unit 3 acquires embedded information is not limited to the above method, and any method may be used.
- the depth information acquisition unit 4 acquires the depth information corresponding to the image before or after imaging, or both images, and outputs the depth information to the embedded region searching unit 5.
- the depth information includes the depth value (depth value) of each pixel in the image.
- Methods of acquiring depth information include a method using TOF (Time Of Flight) and methods using a plurality of images, such as DFF (Depth From Focus) and DFD (Depth From Defocus).
- the depth information may be acquired by any method.
- the size of the depth map is preferably the same as the size of the captured image, but may be smaller than the size of the captured image. That is, the resolution of the depth map may be lower than the resolution of the captured image.
- the number of gradations for the degree of depth is preferably about 8 to 16 from the viewpoint of distance resolution, but it may be less.
- the number of gradations of the depth degree is 256, and the depth value is represented by 0 to 255.
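- As a concrete illustration (not part of the patent text), such a depth map can be held as an 8-bit array whose resolution may be lower than that of the captured image, as described above. A minimal sketch in Python follows; the array sizes and values are assumptions chosen only for the example.

```python
import numpy as np

# Assumed sizes: a 640x480 captured image and a quarter-resolution depth map.
IMG_H, IMG_W = 480, 640
DEPTH_H, DEPTH_W = IMG_H // 4, IMG_W // 4

# 8-bit depth map: 255 = nearest to the camera (white), 0 = farthest (black).
depth_map = np.full((DEPTH_H, DEPTH_W), 40, dtype=np.uint8)   # distant background
depth_map[40:100, 50:110] = 200                               # a near subject (e.g. a face)

# Nearest-neighbour upsampling aligns the depth map with the full image grid.
depth_full = depth_map.repeat(4, axis=0).repeat(4, axis=1)
assert depth_full.shape == (IMG_H, IMG_W)
```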
- the embedded area searching unit 5 searches for an embedded area for embedding embedded information.
- an example of a method for searching for an embedded area when the depth maps of FIGS. 5A, 6A, 7A, and 8A are obtained will be described.
- the embedded region search method is not limited to the following search method, and other search methods may be used.
- the embedded region searching unit 5 searches for an embedded region using the subject position information obtained from the subject position detecting unit 2 and the depth information obtained from the depth information acquiring unit 4. In the search for the embedded area, the embedded area search unit 5 first acquires the subject position from the subject position information, and acquires the depth value of the subject position from the depth information.
- FIG. 5B shows the depth range of the subject position (subject region).
- a depth range A shown in FIG. 5B indicates a range of depth values at the position detected as the subject position by the subject position detection unit 2 (that is, the position of the face in the present embodiment).
- the embedding area search unit 5 specifies that the depth value of the subject position is the depth range A using the depth information. Then, the embedding area search unit 5 searches for an embedding area from a search area composed of a plurality of pixels having a depth value in the depth range B farther from the depth value of the subject position (FIG. 5C).
- The depth value of the embedded information itself should be about the same as the depth value of the main subject, and the depth values around the embedding area should preferably indicate the far side relative to the depth value of the embedded information. Therefore, the embedding area is searched for within the search area corresponding to the depth range B, which is on the far side of the depth value of the subject position.
- the embedding area searching unit 5 can exclude the body area from the embedding area by excluding an area having a depth value similar to the depth value of the face. That is, by using the depth value, it is possible to set a more appropriate embedding area.
- As shown in FIG. 5B, there is a slight interval between the depth range A, which includes the depth values of the subject area, and the depth range B, which includes the depth values of the search area.
- The wider this interval, the larger the difference between the depth value of the embedded information and the depth values around the embedded information, and the more the embedded information appears to pop out.
- The width of this interval is therefore a parameter that determines how conspicuous the embedded information is, and it can be set arbitrarily.
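- The search-area construction above can be sketched as follows; this is an illustrative reading of FIGS. 5B and 5C, not the patent's own code. The face rectangle and the margin (the interval between depth range A and depth range B) are assumed parameters.

```python
import numpy as np

def search_mask_behind_subject(depth_map, face_box, margin=20):
    """Boolean mask of pixels whose depth values lie in range B, i.e. farther
    than the subject's depth range A, leaving a margin (the interval) between
    the two ranges. Convention: larger depth value = nearer to the camera."""
    top, left, bottom, right = face_box                  # assumed (row, col) rectangle
    face_depths = depth_map[top:bottom, left:right]
    range_a_far_edge = int(face_depths.min())             # farthest depth inside the face
    range_b_near_edge = range_a_far_edge - margin         # upper bound of range B
    return depth_map.astype(int) <= range_b_near_edge

# Example use with the arrays sketched earlier (all values are illustrative):
# mask_b = search_mask_behind_subject(depth_full, face_box=(160, 200, 400, 440))
```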
- the embedded area search unit 5 sets an area having an appropriate number of pixels as an embedded area in the search area set as shown in FIG. 5C.
- the embedded region may be rectangular as shown in FIG. 5D or may not be rectangular.
- the embedded area searching unit 5 may search for the embedded area so that the center of gravity of the embedded area and the centers of gravity of the plurality of subject positions detected by the subject position detecting unit 2 are included in a predetermined range.
- the embedding area may be a rectangular area having a size occupied by the embedding information in the image, a circular shape, or a mask indicating a shape included in the embedding information.
- The embedding area search unit 5 may select, as the embedding area, a region within the depth range B whose depth values fall within a certain narrower depth range, or a region whose depth values fall within the depth range that includes the most frequently appearing depth value. This is because, if subjects other than the main subject (a person's face), such as walls or objects, appear in the image, the embedded information may be easier to see when it does not cover those other subjects. This is described in detail below with reference to FIGS. 6A to 6C.
- FIG. 6A shows a depth map of an image in which a person and an object other than a person (a desk in this example) enter.
- FIG. 6B shows a histogram of depth values included in the depth map of FIG. 6A, using the depth value as the horizontal axis and the number of pixels as the vertical axis.
- When an object other than a person appears as shown in FIG. 6A, the embedding area search unit 5 first sets, as the search area, the area in the depth range B farther than the depth range A (the subject area) shown in the histogram of FIG. 6B. Next, the embedding area search unit 5 sets, as a second search area, the area of the depth range D that includes the depth value at which the number of pixels peaks (the maximum value) within the depth range B.
- In this case, the embedding area search unit 5 preferably searches for the embedding area using the second search area (depth range D).
- the second search region corresponding to the depth range D including the depth value indicating the deepest peak is estimated as the background region. Therefore, the embedding area search unit 5 can set an embedding area from the background area by setting the embedding area from the second search area.
- FIG. 6C shows an embedded area searched from the second search area. That is, in FIG. 6C, the desk area is excluded from the embedded area. Therefore, the embedded area searching unit 5 can appropriately set an area where the change in depth value is flat in the image as an embedded area.
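- A hedged sketch of the histogram step of FIGS. 6B and 6C: within depth range B, take the depth value at which the pixel count peaks and keep only pixels close to that peak as the second search area (depth range D). The half-width of depth range D is an assumed parameter.

```python
import numpy as np

def background_mask(depth_map, range_b_mask, half_width=10):
    """Restrict the search area to pixels near the most frequent depth value
    (the histogram peak) within depth range B, i.e. the estimated background."""
    candidate_depths = depth_map[range_b_mask]
    hist, _ = np.histogram(candidate_depths, bins=256, range=(0, 256))
    peak_depth = int(np.argmax(hist))                  # depth value with the highest count
    in_range_d = np.abs(depth_map.astype(int) - peak_depth) <= half_width
    return range_b_mask & in_range_d
```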
- The embedding area search unit 5 may set the embedding area (FIG. 7D) from a search area (FIG. 7C) that excludes only the face position, which is the particularly important subject position obtained by the subject position detection unit 2.
- Alternatively, a specific area may be displayed in advance on a display unit (not shown) of the image processing apparatus 100. The image processing apparatus 100 may then capture the image with the imaging unit 1 while alerting the user so that no person is captured in that specific area, and set the predetermined specific area as the embedding area.
- When the embedding area cannot be set, the image processing apparatus 100 may display a message to that effect as an OSD (On Screen Display) on the display unit to prompt the user to shoot again.
- The image processing apparatus 100 may use the depth information to determine whether the distance from the imaging unit 1 to the subject is short, and when the subject is close, display a message on the display unit as an OSD prompting the user to shoot again (FIG. 8B). At that time, the image processing apparatus 100 may display a message prompting the user to correct the image size as shown in FIG. 8C, and secure the embedding area as shown in FIG. 8D.
- the depth information setting unit 6 sets the depth value of the embedded information.
- For example, the depth information setting unit 6 sets a depth value included in the depth range C of FIG. 5B, thereby giving the embedded information a depth value similar to that of the person area.
- When an embedding area whose depth values are similar to those of the person area is set, the depth information setting unit 6 determines a depth value from the depth range E, which is on the near side of the depth range A of the subject area (FIG. 7E), so that the embedded information does not become inconspicuous. The depth information setting unit 6 then gives the determined depth value to the embedded information (FIG. 7F).
- In the case of the corrected image of FIG. 8D, the depth information setting unit 6 may set the depth value of the embedding area to be equal to or deeper than (on the back side of) the depth values of the corrected image. The depth information setting unit 6 may then set the depth value of the embedded information to be nearer (on the front side) than the depth value of the embedding area.
- The embedding unit 7 outputs the depth information after embedding and the image after embedding, using the image obtained by the imaging unit 1, the embedded information obtained by the embedded information acquisition unit 3, the depth information obtained by the depth information acquisition unit 4, the embedding area obtained by the embedding area search unit 5, and the depth value given by the depth information setting unit 6.
- the embedded depth information is generated by synthesizing the depth information obtained by the depth information acquisition unit 4 and the depth value obtained by the depth information setting unit 6 (FIG. 3C, FIG. 7F, etc.). Similarly, the image after embedding is also generated by combining the image obtained by the imaging unit 1 and the embedding information obtained by the embedding information acquisition unit 3.
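- The synthesis performed by the embedding unit 7 can be illustrated as writing the embedded information's pixels into the image and its assigned depth value into the depth map over the embedding area. The binary text mask below is an assumption standing in for rendered text or a decorative part.

```python
import numpy as np

def embed(image, depth_map, text_mask, rect_top_left, text_color, text_depth):
    """Overlay a binary text/part mask onto the image and write its depth value
    into the depth map; returns the image and depth map after embedding."""
    out_img = image.copy()
    out_depth = depth_map.copy()
    top, left = rect_top_left
    h, w = text_mask.shape
    img_region = out_img[top:top + h, left:left + w]
    depth_region = out_depth[top:top + h, left:left + w]
    img_region[text_mask] = text_color      # e.g. (255, 255, 255) for white text
    depth_region[text_mask] = text_depth    # depth value chosen by the setting unit
    return out_img, out_depth
```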
- the embedded information obtained by the embedded information acquisition unit 3 is shown as text information, but the embedded information is not limited to text information.
- the image processing apparatus 100 may paste an embedded image such as a decorative part or a frame set in advance in the image processing apparatus 100 as embedded information on the captured image in addition to the text information.
- the image processing apparatus 100 may set the depth value of the embedded image on the same basis as the text information and embed the embedded image in the captured image.
- The image processing apparatus 100 may arrange a decoration frame as the background. The image processing apparatus 100 may set the depth value of the decoration frame and the depth value of the text information so that the depth value of the decoration frame lies between the depth value of the person and the depth value of the text information. The image processing apparatus 100 may then generate an image in which the decoration frame and the text information are embedded.
- the depth value of the embedded information may not be a constant value.
- For example, the image processing apparatus 100 may set the depth value so that words registered in the image processing apparatus 100 appear to pop out more (FIG. 9C).
- The image processing apparatus 100 may also set the depth value of the embedded information according to the power of the audio, so that the embedded information appears to pop out more as the voice becomes louder (FIG. 9D).
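- For the variation of FIG. 9D, the mapping from audio power to pop-out amount could be as simple as a clamped linear map; the dB range and the output depth range below are assumptions.

```python
def depth_from_audio_power(power_db, quiet_db=-40.0, loud_db=0.0,
                           base_depth=150, max_depth=255):
    """Map an audio power value (assumed to be in dB) to a depth value so that
    louder speech makes the embedded text appear to pop out more."""
    t = (power_db - quiet_db) / (loud_db - quiet_db)
    t = min(1.0, max(0.0, t))                         # clamp to [0, 1]
    return int(base_depth + t * (max_depth - base_depth))
```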
- the embedding area search unit 5 may adjust the position of the embedding area so that the position of the embedding area does not change greatly in a plurality of frames.
- FIG. 10A is a diagram showing the flow of the entire process
- FIG. 10B is a diagram showing the flow of the process of searching for an embedded area (S105 in FIG. 10A).
- the imaging unit 1 generates an image by photographing a subject (S101).
- the subject position detection unit 2 detects the position of the subject (such as the position of a person's face) using the image obtained by the imaging unit 1 (S102).
- the embedded information acquisition unit 3 acquires embedded information based on voice information or an input from the user (S103).
- the depth information acquisition unit 4 acquires depth information corresponding to the image obtained by the imaging unit 1 (S104).
- The embedding area search unit 5 uses the depth information obtained by the depth information acquisition unit 4 to obtain the depth value of the subject position detected by the subject position detection unit 2, and specifies the area located behind the depth value of the subject position (S201).
- the embedding area searching unit 5 sets the rectangular size of the embedding area in which the embedding information is embedded based on the embedding information obtained by the embedding information acquisition unit 3 (S202).
- When the embedded information is text, the embedding area search unit 5 may set a plurality of rectangular-size candidates based on combinations of a plurality of preset setting items such as the font size of the embedded information, the number of lines, and the arrangement direction (vertical or horizontal). When the embedded information is a decorative part or an image, the embedding area search unit 5 may set a plurality of rectangular-size candidates based on a plurality of preset enlargement/reduction ratios. The embedding area search unit 5 may then select a rectangular size from such a plurality of candidates.
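- One possible reading of the candidate-size step (S202), for a horizontal text layout: enumerate rectangle sizes from assumed font sizes and line counts. The specific pixel values are illustrative only.

```python
def rectangle_candidates(num_chars, font_sizes=(24, 18, 12), max_lines=3):
    """Enumerate candidate (height, width) rectangles, in pixels, for text laid
    out horizontally; an analogous loop could cover a vertical arrangement."""
    candidates = []
    for font in font_sizes:
        for lines in range(1, max_lines + 1):
            chars_per_line = -(-num_chars // lines)     # ceiling division
            height = lines * int(font * 1.2)            # assumed line spacing
            width = chars_per_line * font               # assumed fixed-width glyphs
            candidates.append((height, width))
    # Try larger (more conspicuous) rectangles first.
    return sorted(set(candidates), key=lambda hw: hw[0] * hw[1], reverse=True)
```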
- the embedded area search unit 5 searches the search area for an embedded area that satisfies the set conditions (S203). Then, the embedded area search unit 5 determines whether an embedded area that satisfies the set condition has been found (S204).
- If an embedding area satisfying the set condition is found, the embedding area search unit 5 determines that an embedding area exists and ends the search for the embedding area. If no embedding area satisfying the set condition is found, the embedding area search unit 5 determines whether the embedding area has been searched for with all of the set rectangular-size candidates (S205). If unsearched candidates remain, the embedding area search unit 5 sets an unsearched candidate as the search condition again and searches for the embedding area (S202 and S203).
- When the embedding area has been searched for with all the rectangular-size candidates, the embedding area search unit 5 determines whether it has already searched, as the search area and without using the depth information, all areas other than the face area (see FIG. 7C) (S206).
- Initially, the embedding area search unit 5 searches for the embedding area using the detected face area and the depth information; at this time, the human body region and the like are also excluded from the search area. When no embedding area satisfying the conditions is found, the embedding area search unit 5 widens the search area.
- If the search area other than the face area has already been searched, the embedding area search unit 5 determines that no embedding area exists (S209) and ends the search for the embedding area. If the embedding area search unit 5 has not yet searched for the embedding area in the search area other than the face area, it sets all areas other than the face area as the search area using the subject position information (S207). Then, the embedding area search unit 5 again sets the rectangular size and searches for the embedding area (S202 and S203).
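- The rectangle search itself (S203) can be sketched as a sliding-window test over the search mask, here accelerated with a summed-area table; this is only one possible implementation, not the patent's.

```python
import numpy as np

def find_embedding_rect(search_mask, rect_h, rect_w):
    """Return the top-left (row, col) of the first rect_h x rect_w window that
    lies entirely inside the boolean search mask, or None if none exists."""
    h, w = search_mask.shape
    if rect_h > h or rect_w > w:
        return None
    # Summed-area table of the mask, padded with a zero row and column.
    sat = np.pad(np.cumsum(np.cumsum(search_mask, axis=0), axis=1), ((1, 0), (1, 0)))
    for top in range(h - rect_h + 1):
        for left in range(w - rect_w + 1):
            covered = (sat[top + rect_h, left + rect_w] - sat[top, left + rect_w]
                       - sat[top + rect_h, left] + sat[top, left])
            if covered == rect_h * rect_w:              # every pixel is a candidate
                return top, left
    return None
```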
- Returning to FIG. 10A, the embedding area search unit 5 searches for the embedding area (S105) and determines whether an embedding area exists (S106). When it is determined that no embedding area exists, the image processing apparatus 100 displays an OSD message indicating that no embedding area exists and prompts the user to shoot again (S107) (see FIG. 8B). When it is determined that an embedding area exists, the depth information setting unit 6 sets depth information for the embedded information (S108).
- the depth information setting unit 6 may set the depth value of the embedded information with reference to the depth values around the position where the embedded information is embedded.
- the depth information setting unit 6 sets the depth value of the embedded information to the same level as the person area when the surrounding depth value indicates the back side of the person area.
- On the other hand, the depth information setting unit 6 sets, for the embedded information, a depth value indicating a depth nearer than the person area when the surrounding depth values indicate the same depth as the person area.
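- A hedged sketch of the depth-value rule of S108: compare the depth values around the chosen embedding area with the person's depth, and set the embedded information either level with the person or slightly in front of its surroundings. The pop-out offset is an assumed parameter.

```python
import numpy as np

def embed_depth_value(depth_map, rect, person_depth, offset=20):
    """Choose a depth value for the embedded information (S108).
    Convention: larger value = nearer to the camera."""
    top, left, h, w = rect
    surrounding = int(np.median(depth_map[top:top + h, left:left + w]))
    if surrounding < person_depth:
        # The surroundings are behind the person: match the person's depth.
        return person_depth
    # The surroundings are about as near as the person: place the text in front.
    return min(255, person_depth + offset)
```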
- Finally, the embedding unit 7 embeds the embedded information after its depth information has been set (S109).
- the image processing apparatus 100 uses the depth information to set the position where the embedded information is embedded and the depth value of the embedded information. Therefore, the embedded information is displayed at a more appropriate position without forcing the user to perform unnecessary operations.
- a depth map expressed as an image is used as a format for expressing depth information.
- the format for representing the depth information is not limited to such a depth map.
- Other formats may be used as a format for representing the depth information.
- the image processing apparatus uses the reliability of the depth value to determine the embedding area.
- FIG. 11 is a configuration diagram of the image processing apparatus according to the present embodiment.
- the same components as those shown in FIG. 4 are denoted by the same reference numerals, and the description thereof will be omitted.
- the components different from the components shown in FIG. 4 will be mainly described.
- the image processing apparatus 200 according to the present embodiment includes a depth reliability acquisition unit 8 as an additional component compared to the image processing apparatus 100 according to the first embodiment.
- the depth reliability acquisition unit 8 acquires depth map reliability, which is the reliability of the depth map obtained by the depth information acquisition unit 4.
- the depth map obtained by the depth information acquisition unit 4 usually includes an error.
- the reliability regarding the accuracy of the depth map varies depending on the method of obtaining the depth map.
- the depth information acquisition unit 4 may not be able to acquire correct distance information for some of all the pixels. Therefore, the depth reliability acquisition unit 8 calculates the reliability of the depth map for each pixel in order to estimate the error of the depth map.
- For example, when the depth information acquisition unit 4 generates the depth map from the left and right images of a stereo camera, it obtains the correspondence between the left image and the right image using differences in pixel values in units of pixels or blocks.
- When the correspondence between the left and right images is found for a pixel or block, the depth reliability acquisition unit 8 determines that the reliability of the depth value of that pixel or block is high.
- When the correspondence is not found, the depth reliability acquisition unit 8 determines that the reliability of the depth value of that pixel or block is low.
- the depth reliability acquisition unit 8 calculates a higher reliability as the difference is smaller, based on the difference between the pixel value of the left image and the pixel value of the right image.
- the depth reliability acquisition unit 8 may determine that the reliability is low when the depth value is not within a predetermined range. In this case, the depth reliability acquisition unit 8 calculates a lower reliability based on the depth value of each pixel of the image as the deviation from the predetermined range increases. Further, the depth reliability acquisition unit 8 may determine that the reliability is low when the difference in depth value is large between adjacent pixels. In this case, the depth reliability acquisition unit 8 calculates a lower reliability as the difference is larger based on the magnitude of the difference.
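- A minimal sketch of the stereo-based reliability measure described above, assuming per-block matching between a left and a right image: the block matching difference (sum of absolute differences) is turned into a score in which a smaller difference means higher reliability. The block size, the per-block disparity map, and the normalisation are assumptions.

```python
import numpy as np

def block_matching_reliability(left, right, disparity, block=8):
    """Per-block reliability: 1 / (1 + mean absolute difference) between a left
    block and the right block shifted by the block's (assumed) disparity."""
    h, w = left.shape
    rel = np.zeros((h // block, w // block), dtype=np.float32)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            d = int(disparity[by, bx])
            if x - d < 0:
                continue                     # no valid match: reliability stays 0
            l = left[y:y + block, x:x + block].astype(np.float32)
            r = right[y:y + block, x - d:x - d + block].astype(np.float32)
            rel[by, bx] = 1.0 / (1.0 + np.abs(l - r).mean())
    return rel
```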
- FIG. 12A shows a depth map obtained by the depth information acquisition unit 4.
- the depth map shown in FIG. 12A includes some depth value errors in the vicinity of the person.
- FIG. 12B shows the depth map reliability corresponding to the depth map shown in FIG. 12A.
- the depth reliability acquisition unit 8 acquires the depth map reliability as illustrated in FIG. 12B and outputs the depth map reliability to the embedded region search unit 5.
- The embedding area search unit 5 preferentially determines, as the embedding area, an area with low depth-value reliability, using the depth map reliability. However, even when the reliability of the depth value at the subject position (such as the position of a person's face) obtained by the subject position detection unit 2 is low, the embedding area search unit 5 may determine the embedding area while avoiding the subject position. Further, the embedding area search unit 5 may detect the position of the center of gravity of the region where the reliability of the depth values is low, and determine the position, size, and shape of the embedding area so that this center of gravity coincides with the center of the embedding area.
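- One way to realise the centroid-based placement just described: take the centre of gravity of the low-reliability mask and centre the embedding rectangle on it, clipping to the image bounds. The reliability threshold is an assumed parameter.

```python
import numpy as np

def center_rect_on_low_reliability(reliability, rect_h, rect_w, threshold=0.5):
    """Place a rect_h x rect_w embedding rectangle centred on the centroid of
    pixels whose reliability falls below the threshold (None if there are none)."""
    low = reliability < threshold
    if not low.any():
        return None
    rows, cols = np.nonzero(low)
    cy, cx = int(rows.mean()), int(cols.mean())       # centre of gravity
    h, w = reliability.shape
    top = min(max(cy - rect_h // 2, 0), h - rect_h)
    left = min(max(cx - rect_w // 2, 0), w - rect_w)
    return top, left
```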
- the image processing apparatus 200 can generate an appropriate 3D image by hiding an inappropriate area as a 3D image.
- the image processing apparatus 200 can generate an appropriate 3D image by setting the embedding position of the embedding information in an area where the reliability of the depth value is low.
- the depth map reliability expressed as an image is used as a format for expressing the reliability of the depth value.
- the format for representing the reliability of the depth value is not limited to such a format.
- Other formats may be used as the format for representing the reliability of the depth value.
- The first embodiment and the second embodiment may be combined.
- For example, among the regions on the back side, a region with low reliability may be set as the embedding area.
- Conversely, among the regions with low reliability, a region on the back side may be set as the embedding area. The various conditions relating to the embedding area shown in the first and second embodiments can be combined in this way.
- FIG. 13 is a configuration diagram of the image processing apparatus according to the present embodiment.
- the image processing apparatus 110 includes an image acquisition unit 11, an embedded information acquisition unit 13, a depth information acquisition unit 14, and an embedded region determination unit 15.
- the image acquisition unit 11 acquires an image.
- the image acquisition unit 11 corresponds to the imaging unit 1 according to the first embodiment and the second embodiment.
- the embedded information acquisition unit 13 acquires embedded information.
- the embedded information is information embedded in an area in the image.
- the embedded information acquisition unit 13 corresponds to the embedded information acquisition unit 3 according to the first and second embodiments.
- the depth information acquisition unit 14 acquires depth information.
- the depth information indicates the depth value of each pixel of the image.
- the depth information acquisition unit 14 corresponds to the depth information acquisition unit 4 according to the first embodiment and the second embodiment.
- the embedding area determination unit 15 determines the embedding area using the depth information.
- the embedded area is an area in which embedded information is embedded.
- the embedding area determining unit 15 corresponds to the embedding area searching unit 5 according to the first and second embodiments.
- FIG. 14 is a flowchart showing the operation of the image processing apparatus 110 shown in FIG.
- the image acquisition unit 11 acquires an image (S301).
- the embedded information acquisition unit 13 acquires embedded information (S302).
- the depth information acquisition unit 14 acquires depth information (S303).
- the embedding area determination unit 15 determines an embedding area using the depth information (S304).
- the image processing apparatus 110 can appropriately determine an area for embedding information.
- (Embodiment 4) In the present embodiment, as in the third embodiment, the characteristic configurations and procedures shown in the first and second embodiments are presented for confirmation. Compared with the third embodiment, this embodiment includes configurations and procedures that are optionally added.
- FIG. 15 is a configuration diagram of the image processing apparatus according to the present embodiment.
- the image processing apparatus 120 includes an image acquisition unit 21, a subject position detection unit 22, an embedded information acquisition unit 23, a depth information acquisition unit 24, an embedded region determination unit 25, a depth value determination unit 26, an embedding unit 27, and a display unit 28.
- the image acquisition unit 21 acquires an image.
- the image acquisition unit 21 corresponds to the imaging unit 1 according to the first embodiment, the imaging unit 1 according to the second embodiment, and the image acquisition unit 11 according to the third embodiment.
- the subject position detection unit 22 acquires subject position information indicating the subject position by detecting the subject position.
- the subject position is a position in the image of a predetermined subject included in the image.
- the subject position detection unit 22 corresponds to the subject position detection unit 2 according to the first embodiment and the subject position detection unit 2 according to the second embodiment.
- the embedded information acquisition unit 23 acquires embedded information.
- the embedded information is information embedded in an area in the image.
- the embedded information acquisition unit 23 corresponds to the embedded information acquisition unit 3 according to the first embodiment, the embedded information acquisition unit 3 according to the second embodiment, and the embedded information acquisition unit 13 according to the third embodiment.
- the depth information acquisition unit 24 acquires depth information.
- the depth information indicates the depth value of each pixel of the image.
- the depth information acquisition unit 24 corresponds to the depth information acquisition unit 4 according to the first embodiment, the depth information acquisition unit 4 according to the second embodiment, and the depth information acquisition unit 14 according to the third embodiment.
- the embedding area determination unit 25 determines the embedding area using the depth information.
- the embedded area is an area in which embedded information is embedded.
- the embedded region determination unit 25 corresponds to the embedded region search unit 5 according to the first embodiment, the embedded region search unit 5 according to the second embodiment, and the embedded region determination unit 15 according to the third embodiment.
- the depth value determination unit 26 determines the depth value of the embedded information.
- the depth value determination unit 26 corresponds to the depth information setting unit 6 according to the first embodiment and the depth information setting unit 6 according to the second embodiment.
- the embedding unit 27 embeds the embedding information in the embedding area using the depth value determined by the depth value determining unit 26.
- the embedding unit 27 corresponds to the embedding unit 7 according to the first embodiment and the embedding unit 7 according to the second embodiment.
- the display unit 28 displays an image.
- the display unit 28 may display the image acquired by the image acquisition unit 21 or may display the image in which the embedded information is embedded by the embedding unit 27.
- the display unit 28 may display the OSD according to the first embodiment.
- FIG. 16 is a flowchart showing the operation of the image processing apparatus 120 shown in FIG.
- the image acquisition unit 21 acquires an image (S401).
- the embedded information acquisition unit 23 acquires embedded information (S402).
- the depth information acquisition unit 24 acquires depth information (S403).
- the subject position detection unit 22 acquires subject position information indicating the subject position by detecting the subject position (S404).
- the embedding area determination unit 25 determines an embedding area using the depth information (S405).
- the depth value determination unit 26 determines the depth value of the embedded information (S406).
- the embedding unit 27 embeds the embedding information in the embedding area using the depth value determined by the depth value determining unit 26 (S407).
- the display unit 28 displays the image in which the embedding information is embedded by the embedding unit 27 (S408).
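- As an informal illustration only (not part of the embodiment), the following Python/NumPy sketch strings steps S401 to S408 together. The function name, the coarse sliding-window search, the placeholder rendering, and the convention that a larger depth value means a farther pixel are all assumptions.

```python
import numpy as np

def embed_pipeline(image, depth_map, subject_box, info_size):
    """Sketch of S401-S408: find an area on the depth side of the subject,
    give the embedded information a depth value, and compose it into the image."""
    h, w = depth_map.shape
    sx, sy, sw, sh = subject_box                       # S404: subject position (given here)
    subject_depth = float(np.median(depth_map[sy:sy + sh, sx:sx + sw]))

    # S405: candidate pixels are those deeper than the subject
    # (assumption: larger depth value = farther from the camera).
    background = depth_map > subject_depth

    bw, bh = info_size                                 # size needed by the embedded information
    embed_area = None
    for y in range(0, h - bh + 1, 8):                  # coarse sliding-window search
        for x in range(0, w - bw + 1, 8):
            if background[y:y + bh, x:x + bw].all():
                embed_area = (x, y, bw, bh)
                break
        if embed_area is not None:
            break
    if embed_area is None:
        return None                                    # no area satisfies the condition

    # S406/S407: use a depth value of the same degree as the subject and
    # burn a placeholder rectangle where text or a frame would be rendered.
    out = image.copy()
    x, y, bw, bh = embed_area
    out[y:y + bh, x:x + bw] = 255
    return out, embed_area, subject_depth
```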
- the image processing apparatus 120 can appropriately determine an area for embedding information.
- the embedding area determination unit 25 may determine the embedding area using the depth information and the subject position information. Further, the embedding area determination unit 25 may determine, as the embedding area, an area composed of a plurality of pixels having depth values indicating the depth side of a predetermined depth value.
- the predetermined depth value may be the depth value of the pixel at the subject position.
- the embedding area determination unit 25 may determine, as an embedding area, an area composed of a plurality of pixels having a depth value within a predetermined range from a predetermined depth value.
- the predetermined depth value may be a depth value whose appearance frequency is a peak, or a depth value whose appearance frequency is the deepest peak.
- the embedding area determination unit 25 may determine, as the embedding area, an area composed of a plurality of pixels whose depth values lie within a predetermined range of the depth value whose appearance frequency is a peak, selected from among the pixels having depth values indicating the depth side of a predetermined depth value.
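- A minimal sketch of this peak-based selection, assuming normalized depth in [0, 1] where larger values are farther from the camera, might look as follows; the histogram bin count and margin are assumed values:

```python
import numpy as np

def peak_depth_mask(depth_map, subject_depth, bins=64, margin=0.05):
    """Among pixels deeper than the subject, find the depth value whose appearance
    frequency peaks, and keep only pixels within +/- margin of that depth."""
    deeper = depth_map > subject_depth
    if not deeper.any():
        return None
    hist, edges = np.histogram(depth_map[deeper], bins=bins, range=(0.0, 1.0))
    peak_bin = int(np.argmax(hist))                    # most frequent background depth
    peak_depth = 0.5 * (edges[peak_bin] + edges[peak_bin + 1])
    return deeper & (np.abs(depth_map - peak_depth) <= margin)
```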
- the depth information acquisition unit 24 may acquire depth information including information indicating the reliability of the depth value of each pixel of the image.
- the embedding area determination unit 25 may determine an area including pixels whose depth value reliability is lower than a predetermined reliability as an embedding area.
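- For example, the reliability-based selection could be sketched as below; the threshold and the minimum candidate size are assumptions, and a real depth sensor or stereo matcher would supply the reliability map:

```python
import numpy as np

def low_reliability_mask(reliability_map, threshold=0.3, min_pixels=2000):
    """Pixels whose depth reliability is below the threshold (e.g. textureless sky
    or plain walls where matching is uncertain) become embedding-area candidates."""
    mask = reliability_map < threshold
    return mask if int(mask.sum()) >= min_pixels else None
```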
- the embedding area determination unit 25 may set the size of the embedding area using the information amount of the embedding information acquired by the embedding information acquisition unit 23. Then, the embedding area determination unit 25 may determine an embedding area having a set size.
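- One way to set the embedding-area size from the information amount is sketched below; the font size and characters-per-line values are purely illustrative assumptions:

```python
def required_area_size(text, font_px=24, chars_per_line=16):
    """Derive a width and height (in pixels) from the length of the embedded text."""
    lines = -(-len(text) // chars_per_line)            # ceiling division
    width = min(max(len(text), 1), chars_per_line) * font_px
    height = max(lines, 1) * font_px
    return width, height
```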
- the embedding area determination unit 25 may determine the embedding area from the area excluding the subject position using the depth information.
- when there is no embedding area that satisfies the conditions used for determining the embedding area with the depth information, the embedding area determination unit 25 may determine the embedding area from the area excluding the subject position without using the depth information.
- the depth value determination unit 26 may determine the depth value of the embedded information to be a depth value indicating the near side of the depth value at the subject position.
- the depth value determination unit 26 may determine the depth value of the embedded information to be a depth value of the same degree as the depth value at the subject position.
- the depth value determination unit 26 may determine the depth value of the embedded information to be a depth value within a predetermined range of the depth value at the subject position.
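- These three options can be summarized in a small helper; the offset value and the convention that smaller depth values are nearer to the camera are assumptions:

```python
def decide_info_depth(subject_depth, mode="same", offset=0.02):
    """Choose the depth value given to the embedded information relative to the
    subject: slightly nearer, the same degree, or within a small range behind it."""
    if mode == "near":
        return max(0.0, subject_depth - offset)        # in front of the subject
    if mode == "same":
        return subject_depth                           # same degree as the subject
    return subject_depth + offset                      # within a small range behind
```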
- the subject may be a human face.
- the embedded information may include at least one of a text, a decorative part, a frame, and an image.
- when there is no embedded area that satisfies the conditions used to determine the embedded area, the display unit 28 may display to the user a notification message indicating that no embedded area exists.
- the display unit 28 may display a notification message including a message prompting the user to move farther from the subject before photographing it.
- the display unit 28 may display such a notification message when there is no embedded region that satisfies the conditions used to determine the embedded region and the depth value of the pixel at the subject position indicates the near side of the predetermined depth value.
- when there is no embedding area that satisfies the conditions used for determining the embedding area, the embedding area determination unit 25 may determine, using the depth information, whether or not the predetermined subject included in the image is near. For example, the embedding area determination unit 25 determines that the subject is near when the depth value of the pixel at the subject position indicates the near side of a predetermined depth value. When it is determined that the subject is near, the display unit 28 may display a notification message including a message prompting the user to move farther from the subject and then photograph it.
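- A sketch of this notification logic, with an assumed nearness threshold and wording, is shown below:

```python
def notification_message(embed_area, subject_depth, near_threshold=0.2):
    """Return a user-facing message when no embedding area was found; suggest
    stepping back if the subject appears to be very close to the camera."""
    if embed_area is not None:
        return None                                    # an area exists, nothing to notify
    if subject_depth < near_threshold:                 # near side = small depth value (assumption)
        return "No area to place the information. Try shooting from farther away."
    return "No area to place the information."
```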
- the image processing device 120 may be an imaging device such as a camera, or may be a part of the imaging device.
- the image acquisition unit 21 may acquire an image by photographing a subject.
- the aspect according to the present invention may be implemented as an image processing method that includes, as steps, the processes performed by the constituent elements of the image processing apparatus.
- the aspect according to the present invention may also be realized as a program for causing a computer to execute these steps, or as a non-transitory recording medium, such as a computer-readable CD-ROM, on which the program is recorded.
- the aspect according to the present invention may be realized as information, data, or a signal indicating the program.
- These programs, information, data, and signals may be distributed via a communication network such as the Internet.
- the image processing apparatus is a computer system including a microprocessor, a ROM, a RAM, a hard disk unit, a display unit, a keyboard, a mouse, and the like.
- a computer program is stored in the RAM or hard disk unit.
- the image processing apparatus achieves its functions by the microprocessor operating according to the computer program.
- the computer program is configured by combining a plurality of instruction codes indicating instructions for the computer in order to achieve a predetermined function.
- a part or all of the plurality of constituent elements constituting the image processing apparatus may be configured by a single system LSI (Large Scale Integration).
- the system LSI is a super-multifunctional LSI manufactured by integrating a plurality of components on one chip, and is specifically a computer system including a microprocessor, a ROM, a RAM, and the like.
- the RAM stores computer programs.
- the system LSI achieves its functions by the microprocessor operating according to the computer program.
- a part or all of the constituent elements constituting the image processing apparatus may be configured by an IC card that can be attached to and detached from the image processing apparatus or a single module.
- the IC card or module is a computer system that includes a microprocessor, ROM, RAM, and the like.
- the IC card or module may include the above-mentioned super multifunctional LSI.
- the IC card or the module achieves its function by the microprocessor operating according to the computer program.
- the IC card or module may have tamper resistance.
- the method according to the present invention may be the method described above. Further, the aspect according to the present invention may be a computer program for realizing this method by a computer or a digital signal constituting the computer program.
- the computer program or the digital signal may be recorded on a computer-readable recording medium such as a flexible disk, a hard disk, a CD-ROM, an MO, a DVD, a DVD-ROM, a DVD-RAM, a BD (Blu-ray Disc), or a semiconductor memory.
- the aspect according to the present invention may be a digital signal recorded on these recording media.
- the aspect according to the present invention may be configured to transmit a computer program or a digital signal via an electric communication line, a wireless communication line, a wired communication line, a network represented by the Internet, a data broadcast, or the like.
- the aspect according to the present invention may be a computer system including a microprocessor and a memory.
- the memory may store the above computer program, and the microprocessor may operate according to the computer program.
- the program or the digital signal may be recorded on a recording medium and transferred, or may be transferred via a network or the like.
- thereby, the aspect according to the present invention may be implemented by another independent computer system.
- each component may be configured by dedicated hardware or may be realized by executing a software program suitable for each component.
- Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory.
- the software that realizes the image processing apparatus according to each of the above embodiments is the following program.
- the program causes a computer to execute an image processing method of acquiring an image, acquiring embedding information to be embedded in an area in the image, acquiring depth information indicating a depth value of each pixel of the image, and determining, using the depth information, an embedding area that is the area in which the embedding information is embedded.
- Each component may be a circuit. These circuits may constitute one circuit as a whole, or may be separate circuits. Each of these circuits may be a general-purpose circuit or a dedicated circuit.
- the present invention is not limited to these embodiments. Unless departing from the gist of the present invention, forms obtained by applying various modifications conceived by those skilled in the art to the embodiments, and forms constructed by combining constituent elements of different embodiments, may also be included within the scope of one or more aspects of the present invention.
- a process executed by a specific processing unit may be executed by another processing unit.
- the order of the plurality of processes may be changed, and the plurality of processes may be executed in parallel.
- the present invention is useful for embedding information in an image.
- For example, the present invention is applicable to digital still cameras, digital video cameras, consumer or commercial imaging devices, digital photo frames, television receivers, portable terminals, cellular phones, and the like.
Description
The inventor has found the following problems with the technique relating to image processing for embedding information in an area within an image, described in the "Background Art" section.
FIG. 4 shows the configuration of the image processing apparatus according to the present embodiment and the information input to each constituent element of the image processing apparatus. As shown in FIG. 4, the image processing apparatus 100 according to the present embodiment includes an imaging unit 1, a subject position detection unit 2, an embedded information acquisition unit 3, a depth information acquisition unit 4, an embedded region search unit 5, a depth information setting unit 6, and an embedding unit 7.
The image processing apparatus according to the present embodiment uses the reliability of depth values in determining the embedding area.
The present embodiment presents, for confirmation, the characteristic configurations and procedures shown in Embodiment 1 and Embodiment 2.
As in Embodiment 3, the present embodiment presents, for confirmation, the characteristic configurations and procedures shown in Embodiment 1 and Embodiment 2. Compared with Embodiment 3, the present embodiment additionally includes optional configurations and procedures.
Aspects according to the present invention are not limited to the plurality of embodiments described above. Aspects according to the present invention may also be the following aspects.
2, 22 Subject position detection unit
3, 13, 23 Embedded information acquisition unit
4, 14, 24 Depth information acquisition unit
5 Embedded region search unit
6 Depth information setting unit
7, 27 Embedding unit
8 Depth reliability acquisition unit
11, 21 Image acquisition unit
15, 25 Embedded region determination unit
26 Depth value determination unit
28 Display unit
100, 110, 120, 200 Image processing apparatus
Claims (20)
- 1. An image processing apparatus comprising: an image acquisition unit that acquires an image; an embedded information acquisition unit that acquires embedded information to be embedded in an area in the image; a depth information acquisition unit that acquires depth information indicating a depth value of each pixel of the image; and an embedded area determination unit that determines, using the depth information, an embedded area which is the area in which the embedded information is embedded.
- 2. The image processing apparatus according to claim 1, wherein the embedded area determination unit determines, using the depth information, the embedded area composed of a plurality of pixels having depth values indicating the depth side of a predetermined depth value.
- 3. The image processing apparatus according to claim 1 or 2, further comprising a subject position detection unit that acquires subject position information indicating a subject position by detecting the subject position, which is the position, in the image, of a predetermined subject included in the image, wherein the embedded area determination unit determines, using the depth information and the subject position information, the embedded area composed of a plurality of pixels having depth values indicating the depth side of the depth value of the pixel at the subject position.
- 4. The image processing apparatus according to any one of claims 1 to 3, wherein the embedded area determination unit determines, using the depth information, the embedded area composed of a plurality of pixels having depth values within a predetermined range from a predetermined depth value.
- 5. The image processing apparatus according to any one of claims 1 to 4, wherein the embedded area determination unit determines, using the depth information, the embedded area composed of a plurality of pixels having depth values within a predetermined range from a depth value whose appearance frequency is a peak.
- 6. The image processing apparatus according to any one of claims 1 to 5, wherein the embedded area determination unit determines, using the depth information, the embedded area which is an area composed of a plurality of pixels having depth values indicating the depth side of a predetermined depth value and which is an area composed of a plurality of pixels having depth values within a predetermined range from a depth value whose appearance frequency is a peak.
- 7. The image processing apparatus according to any one of claims 1 to 6, wherein the depth information acquisition unit acquires the depth information including information indicating the reliability of the depth value of each pixel of the image.
- 8. The image processing apparatus according to claim 7, wherein the embedded area determination unit determines, using the depth information, the embedded area including pixels whose depth value reliability is lower than a predetermined reliability.
- 9. The image processing apparatus according to any one of claims 1 to 8, wherein the embedded area determination unit determines the embedded area, using the depth information, from an area excluding a subject position, which is the position, in the image, of a predetermined subject included in the image.
- 10. The image processing apparatus according to any one of claims 1 to 9, wherein the embedded area determination unit sets a size of the embedded area using the information amount of the embedded information acquired by the embedded information acquisition unit, and determines the embedded area having the set size.
- 11. The image processing apparatus according to any one of claims 1 to 10, wherein, when there is no embedded area satisfying a condition used for determining the embedded area using the depth information, the embedded area determination unit determines the embedded area from an area excluding a subject position, which is the position, in the image, of a predetermined subject included in the image.
- 12. The image processing apparatus according to any one of claims 1 to 11, further comprising: a depth value determination unit that determines a depth value of the embedded information; and an embedding unit that embeds the embedded information in the embedded area in the image using the depth value determined by the depth value determination unit.
- 13. The image processing apparatus according to claim 12, wherein the depth value determination unit determines the depth value of the embedded information to be a depth value of the same degree as the depth value at a subject position, which is the position, in the image, of a predetermined subject included in the image.
- 14. The image processing apparatus according to claim 12 or 13, wherein, when there is no embedded area satisfying a condition used for determining the embedded area using the depth information, the depth value determination unit determines the depth value of the embedded information to be embedded in the embedded area determined from an area excluding a subject position, which is the position, in the image, of a predetermined subject included in the image, to be a depth value indicating the near side of the depth value at the subject position.
- 15. The image processing apparatus according to claim 3, wherein the subject position detection unit acquires the subject position information indicating the position of a person's face as the subject position by detecting, as the subject position, the position, in the image, of the face of a person included in the image.
- 16. The image processing apparatus according to any one of claims 1 to 15, wherein the embedded information acquisition unit acquires the embedded information including at least one of text, a decorative part, a frame, and an image.
- 17. The image processing apparatus according to any one of claims 1 to 16, further comprising a display unit that displays the image, wherein, when there is no embedded area satisfying a condition used for determining the embedded area, the display unit displays a notification message indicating to a user that no embedded area exists.
- 18. The image processing apparatus according to claim 17, wherein, when there is no embedded area satisfying the condition used for determining the embedded area, the embedded area determination unit determines, using the depth information, whether or not a predetermined subject included in the image is near, and, when it is determined that the subject is near, the display unit displays the notification message including a message prompting the user to photograph the subject from a position farther away from the subject.
- 19. An imaging apparatus comprising the image processing apparatus according to any one of claims 1 to 18, wherein the image acquisition unit acquires the image by photographing a subject.
- 20. An image processing method comprising: acquiring an image; acquiring embedded information to be embedded in an area in the image; acquiring depth information indicating a depth value of each pixel of the image; and determining, using the depth information, an embedded area which is the area in which the embedded information is embedded.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013535164A JP6029021B2 (ja) | 2012-01-27 | 2013-01-21 | 画像処理装置、撮像装置および画像処理方法 |
US14/110,210 US9418436B2 (en) | 2012-01-27 | 2013-01-21 | Image processing apparatus, imaging apparatus, and image processing method |
CN201380000983.2A CN103460706B (zh) | 2012-01-27 | 2013-01-21 | 图像处理装置、摄像装置以及图像处理方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012015174 | 2012-01-27 | ||
JP2012-015174 | 2012-01-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013111552A1 true WO2013111552A1 (ja) | 2013-08-01 |
Family
ID=48873284
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/000248 WO2013111552A1 (ja) | 2012-01-27 | 2013-01-21 | 画像処理装置、撮像装置および画像処理方法 |
Country Status (3)
Country | Link |
---|---|
US (1) | US9418436B2 (ja) |
JP (1) | JP6029021B2 (ja) |
WO (1) | WO2013111552A1 (ja) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20150037366A (ko) * | 2013-09-30 | 2015-04-08 | 삼성전자주식회사 | 깊이 영상의 노이즈를 저감하는 방법, 이를 이용한 영상 처리 장치 및 영상 생성 장치 |
US20190087926A1 (en) * | 2014-02-15 | 2019-03-21 | Pixmarx The Spot, LLC | Embedding digital content within a digital photograph during capture of the digital photograph |
US9477689B2 (en) * | 2014-02-15 | 2016-10-25 | Barry Crutchfield | Embedding digital content within a digital photograph during capture of the digital photograph |
US10004403B2 (en) * | 2014-08-28 | 2018-06-26 | Mela Sciences, Inc. | Three dimensional tissue imaging system and method |
US9906772B2 (en) * | 2014-11-24 | 2018-02-27 | Mediatek Inc. | Method for performing multi-camera capturing control of an electronic device, and associated apparatus |
US9396400B1 (en) * | 2015-07-30 | 2016-07-19 | Snitch, Inc. | Computer-vision based security system using a depth camera |
KR102423175B1 (ko) * | 2017-08-18 | 2022-07-21 | 삼성전자주식회사 | 심도 맵을 이용하여 이미지를 편집하기 위한 장치 및 그에 관한 방법 |
CN109584150B (zh) * | 2018-11-28 | 2023-03-14 | 维沃移动通信(杭州)有限公司 | 一种图像处理方法及终端设备 |
US20240020787A1 (en) * | 2020-11-12 | 2024-01-18 | Sony Semiconductor Solutions Corporation | Imaging element, imaging method, imaging device, and image processing system |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6473516B1 (en) * | 1998-05-22 | 2002-10-29 | Asa Systems, Inc. | Large capacity steganography |
JP4129786B2 (ja) | 2002-09-06 | 2008-08-06 | ソニー株式会社 | 画像処理装置および方法、記録媒体、並びにプログラム |
JP4875162B2 (ja) | 2006-10-04 | 2012-02-15 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | 画像強調 |
ATE472230T1 (de) * | 2007-03-16 | 2010-07-15 | Thomson Licensing | System und verfahren zum kombinieren von text mit dreidimensionalem inhalt |
JP2009200784A (ja) | 2008-02-21 | 2009-09-03 | Nikon Corp | 画像処理装置およびプログラム |
US8599242B2 (en) * | 2008-12-02 | 2013-12-03 | Lg Electronics Inc. | Method for displaying 3D caption and 3D display apparatus for implementing the same |
JP5274359B2 (ja) * | 2009-04-27 | 2013-08-28 | 三菱電機株式会社 | 立体映像および音声記録方法、立体映像および音声再生方法、立体映像および音声記録装置、立体映像および音声再生装置、立体映像および音声記録媒体 |
US8830227B2 (en) * | 2009-12-06 | 2014-09-09 | Primesense Ltd. | Depth-based gain control |
CA2799704C (en) * | 2010-05-30 | 2016-12-06 | Jongyeul Suh | Method and apparatus for processing and receiving digital broadcast signal for 3-dimensional subtitle |
CN101902582B (zh) | 2010-07-09 | 2012-12-19 | 清华大学 | 一种立体视频字幕添加方法及装置 |
JP2012138787A (ja) * | 2010-12-27 | 2012-07-19 | Sony Corp | 画像処理装置、および画像処理方法、並びにプログラム |
- 2013-01-21: US US14/110,210, patent US9418436B2 (en), not_active Expired - Fee Related
- 2013-01-21: WO PCT/JP2013/000248, patent WO2013111552A1 (ja), active Application Filing
- 2013-01-21: JP JP2013535164, patent JP6029021B2 (ja), not_active Expired - Fee Related
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10228547A (ja) * | 1997-02-14 | 1998-08-25 | Canon Inc | 画像編集方法及び装置並びに記憶媒体 |
JPH11289555A (ja) * | 1998-04-02 | 1999-10-19 | Toshiba Corp | 立体映像表示装置 |
JP2011029849A (ja) * | 2009-07-23 | 2011-02-10 | Sony Corp | 受信装置、通信システム、立体画像への字幕合成方法、プログラム、及びデータ構造 |
JP2012015771A (ja) * | 2010-06-30 | 2012-01-19 | Toshiba Corp | 画像処理装置、画像処理プログラム、及び画像処理方法 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20150037203A (ko) * | 2013-09-30 | 2015-04-08 | 엘지디스플레이 주식회사 | 3차원 입체 영상용 깊이지도 보정장치 및 보정방법 |
KR102122523B1 (ko) * | 2013-09-30 | 2020-06-12 | 엘지디스플레이 주식회사 | 3차원 입체 영상용 깊이지도 보정장치 및 보정방법 |
JP2015215252A (ja) * | 2014-05-12 | 2015-12-03 | 株式会社日立ソリューションズ | 衛星画像データ処理装置、衛星画像データ処理システム、衛星画像データ処理方法及びプログラム |
Also Published As
Publication number | Publication date |
---|---|
US20140049614A1 (en) | 2014-02-20 |
US9418436B2 (en) | 2016-08-16 |
CN103460706A (zh) | 2013-12-18 |
JP6029021B2 (ja) | 2016-11-24 |
JPWO2013111552A1 (ja) | 2015-05-11 |
Legal Events
Code | Title | Description |
---|---|---|
ENP | Entry into the national phase | Ref document number: 2013535164; Country of ref document: JP; Kind code of ref document: A |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13741348; Country of ref document: EP; Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase | Ref document number: 14110210; Country of ref document: US |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 13741348; Country of ref document: EP; Kind code of ref document: A1 |