WO2013099628A1 - 画像処理装置、画像処理システム、画像処理方法、および、プログラム - Google Patents
画像処理装置、画像処理システム、画像処理方法、および、プログラム Download PDFInfo
- Publication number
- WO2013099628A1 (PCT/JP2012/082374)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- unit
- region
- detection target
- detection
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/44—Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
- A61B5/441—Skin evaluation, e.g. for skin disorder diagnosis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2576/00—Medical imaging apparatus involving image processing or analysis
- A61B2576/02—Medical imaging apparatus involving image processing or analysis specially adapted for a particular organ or body part
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
Definitions
- The present technology relates to an image processing device, an image processing system, an image processing method, and a program, and in particular to an image processing device, image processing system, image processing method, and program suitable for detecting a predetermined target in an image.
- A technique has been proposed in which the body hair pixels constituting a body hair region in an image are separated from the background pixels constituting the non-body-hair region (see, for example, Patent Document 1).
- In that technique, body hair pixels are detected either by binarizing the image into pixels greater than or equal to the mode pixel value and pixels less than the mode value, or by using a Sobel filter.
- In the latter case, the hair pixels are detected by extracting the contours of the hair.
- However, the pixels below the mode pixel value include many pixels other than body hair.
- Likewise, the image contains many contours other than body hair. Therefore, with the technique described in Patent Document 1, the detection error of the body hair region in the image is expected to be large.
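To make the prior art's weakness concrete, the mode-value binarization described above can be sketched as follows. The pixel values and helper name are illustrative assumptions, not taken from Patent Document 1.

```python
from statistics import mode

def binarize_by_mode(pixels):
    """Prior-art style binarization: split pixels into those greater than
    or equal to the mode (most frequent) value and those below it."""
    m = mode(pixels)
    return [1 if p >= m else 0 for p in pixels]

# Skin pixels cluster near 200; hair pixels are darker, but so are
# shadows and pores, which is why this separation is error-prone.
row = [200, 200, 60, 200, 90, 200]
print(binarize_by_mode(row))  # → [1, 1, 0, 1, 0, 1]
```

Any dark non-hair pixel (a pore, a shadow) lands in the same class as the hair, which is the detection error the present technology aims to avoid.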
- the present technology makes it possible to easily and accurately detect an area in which an object to be detected such as body hair is shown in an image.
- An image processing device according to a first aspect of the present technology includes a detection target region detection unit that detects a region in which a detection target appears in an image obtained by photographing the detection target. The detection target region detection unit includes a projective transformation unit that generates a projected image by projectively transforming a first image of the detection target into the coordinate system of a second image of the detection target taken from a direction different from that of the first image;
- a difference image generation unit that generates a difference image between the second image and the projected image; and
- a region detection unit that detects, in the difference image, a candidate region consisting of pixels whose difference value is equal to or greater than a predetermined threshold, and divides the region of the second image corresponding to the candidate region into a detection target region in which the detection target appears and a non-detection target region.
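The claimed project-difference-threshold pipeline can be sketched on toy grayscale data as follows. The nested lists, pixel values, threshold, and helper names are illustrative assumptions; a real implementation would operate on projectively aligned image buffers.

```python
# Sketch of the claimed detection pipeline on toy grayscale "images".

def difference_image(projected, second):
    """Absolute per-pixel difference between the projected first image
    and the second image."""
    return [[abs(p - s) for p, s in zip(prow, srow)]
            for prow, srow in zip(projected, second)]

def candidate_mask(diff, threshold):
    """Pixels whose difference value is >= threshold form the candidate
    region (True); everything else is background (False)."""
    return [[d >= threshold for d in row] for row in diff]

projected = [[10, 10, 200],
             [10, 10, 200],
             [10, 10,  10]]
second    = [[10, 200, 10],
             [10, 200, 10],
             [10,  10, 10]]

diff = difference_image(projected, second)
mask = candidate_mask(diff, threshold=50)
# Columns where the hair occludes different skin in the two views show a
# large difference and enter the candidate region.
print(mask)
```

Because the hair hides different skin areas in the two views, the aligned difference is large exactly around the hair, which is what makes the candidate region detectable by a simple threshold.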
- A detection target removal unit that removes the detection target from an image of the detection target may further be provided. The detection target removal unit includes a removal unit that operates based on the detection result of the detection target region detection unit.
- Based on that detection result, the removal unit can remove the detection target from the second image by projecting at least the pixels in the region of the first image corresponding to the detection target region of the second image onto the second image and replacing the corresponding pixels.
- Conversely, the removal unit can remove the detection target from the first image by projecting at least the pixels in the non-detection target region of the second image onto the first image and replacing the corresponding pixels.
- The detection target removal unit can select two images from three or more images of the detection target taken from different directions and newly generate an image from which the detection target has been removed using the selected two images. It can then repeat the process of generating a new detection-target-free image using the newly generated image and one of the remaining images until no images remain.
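The iterative reduction over three or more views can be sketched as below. The per-pixel minimum used as the pairwise merge is a stand-in of my own choosing, not the patent's removal method (which projects and replaces pixels between views); the 1-D "images" are illustrative.

```python
def remove_detection_target(img_a, img_b):
    """Placeholder for the claimed pairwise removal: produce one image with
    the detection target removed from the pair. Here a per-pixel minimum
    stands in for the projection-and-replacement step."""
    return [min(a, b) for a, b in zip(img_a, img_b)]

def iterative_removal(images):
    """Select two images, generate a new detection-target-free image, then
    repeat with the new image and one remaining image until none remain."""
    result = images[0]
    for remaining in images[1:]:
        result = remove_detection_target(result, remaining)
    return result

# Three 1-D views of the same skin line; the hair (value 255) occludes a
# different pixel in each view, so every pixel is visible in some view.
views = [[255, 10, 10],
         [10, 255, 10],
         [10, 10, 255]]
print(iterative_removal(views))  # → [10, 10, 10]
```

The point of the iteration is exactly this: as long as every skin pixel is unoccluded in at least one view, folding the views pairwise leaves a hair-free result.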
- The detection target region detection unit may further include a feature point extraction unit that extracts feature points of the first image and feature points of the second image;
- an association unit that associates the feature points of the first image with the feature points of the second image; and
- a projection matrix calculation unit that, based on at least some of the pairs of associated feature points, calculates a projection matrix for projecting the first image onto the coordinate system of the second image. The projective transformation unit can generate the projected image using the projection matrix.
- The projection matrix calculation unit can calculate a plurality of projection matrices based on combinations of a plurality of the feature point pairs, and the projective transformation unit can generate a projected image using each of the plurality of projection matrices.
- The difference image generation unit can then generate a plurality of difference images, one between the second image and each of the plurality of projected images, and the region detection unit
- can detect the candidate region using the difference image whose difference from the second image is smallest.
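Selecting the best of several candidate projection matrices by the size of the resulting difference can be sketched as follows. The flat lists, values, and helper names are illustrative assumptions.

```python
def total_difference(projected, second):
    """Sum of absolute per-pixel differences (smaller = better alignment)."""
    return sum(abs(p - s) for p, s in zip(projected, second))

def best_projection(projections, second):
    """Among projections produced by several candidate projection matrices,
    keep the one whose difference against the second image is smallest;
    that matrix aligns the two views best."""
    return min(projections, key=lambda proj: total_difference(proj, second))

second = [10, 20, 30, 40]
projections = [[12, 19, 31, 40],   # good alignment
               [50, 50, 50, 50]]   # poor alignment
print(best_projection(projections, second))  # → [12, 19, 31, 40]
```

A well-aligned projection differs from the second image only where the detection target occludes different skin, so minimizing the residual difference picks the matrix whose difference image isolates the candidate region best.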
- the region detection unit can separate the detection target region and the non-detection target region by comparing an image in the region of the second image corresponding to the candidate region with a surrounding image.
- The detection target region detection unit can detect the detection target region in each of three or more images of the detection target taken from different directions. A region synthesis unit may further be provided that synthesizes the detection target region of an image selected from the three or more images with the detection target regions of the remaining images projected onto the coordinate system of the selected image.
- In the image processing method according to the first aspect of the present technology, the image processing device generates a projected image by projectively transforming a first image of the detection target into the coordinate system of a second image of the detection target taken from a different direction, generates a difference image between the second image and the projected image, detects in the difference image a candidate region consisting of pixels whose difference value is equal to or greater than a predetermined threshold, and divides the region of the second image corresponding to the candidate region into a detection target region in which the detection target appears and a non-detection target region.
- The program according to the first aspect of the present technology causes a computer to execute processing including: generating a projected image by projectively transforming a first image of the detection target into the coordinate system of a second image of the detection target taken from a different direction; generating a difference image between the second image and the projected image; detecting in the difference image a candidate region consisting of pixels whose difference value is equal to or greater than a predetermined threshold; and dividing the region of the second image corresponding to the candidate region into a detection target region in which the detection target appears and a non-detection target region.
- An image processing system according to the present technology includes a photographing unit that photographs a detection target, and a detection target region detection unit that detects a region in which the detection target appears in an image taken by the photographing unit.
- The detection target region detection unit includes a projective transformation unit that generates a projected image by projectively transforming a first image taken by the photographing unit into the coordinate system of a second image in which the detection target is taken from a direction different from that of the first image;
- a difference image generation unit that generates a difference image between the second image and the projected image; and a region detection unit that detects, in the difference image, a candidate region consisting of pixels whose difference value is equal to or greater than a predetermined threshold
- and divides the region of the second image corresponding to the candidate region into a detection target region in which the detection target appears and other non-detection target regions.
- A detection target removal unit that removes the detection target from an image taken by the photographing unit may further be provided. Based on the detection result of the detection target region detection unit, the removal unit projects at least the pixels in the region of the other image corresponding to the region in which the detection target appears in one of the first and second images onto the one image
- and replaces the corresponding pixels, thereby removing the detection target from the one image.
- The detection target removal unit can select two images from three or more images of the detection target taken by the photographing unit from different directions and generate a new image from which the detection target has been removed using the selected two images. It can then repeat the process of generating a new detection-target-free image using the newly generated image and one of the remaining images until no images remain.
- The detection target region detection unit may further include a feature point extraction unit that extracts feature points of the first image and feature points of the second image;
- an association unit that associates the feature points of the first image with the feature points of the second image; and
- a projection matrix calculation unit that, based on at least some of the pairs of associated feature points, calculates a projection matrix for projecting the first image onto the coordinate system of the second image. The projective transformation unit can generate the projected image using the projection matrix.
- The detection target region detection unit can detect the detection target region in each of three or more images of the detection target taken by the photographing unit from different directions.
- A region synthesis unit may further be provided that combines the detection target region of an image selected from the three or more images with the detection target regions of the remaining images projected onto the coordinate system of the selected image.
- The photographing unit may include a plurality of two-dimensionally arranged lenses and a plurality of imaging elements, with a plurality of imaging elements arranged for each lens so that their relative positions with respect to each lens are the same. An image generation unit may further be provided that generates a plurality of images, each assembled from the imaging elements having the same relative position with respect to their lens.
- The photographing unit may be configured to photograph the image reflected in a mirror that radially surrounds at least part of the periphery of the region including the detection target. An image cutout unit that cuts out a plurality of images from the image of the mirror taken by the photographing unit,
- and a geometric distortion correction unit that performs geometric distortion correction on the plurality of cut-out images, may further be provided.
- the first image and the second image can be images obtained by close-up of the detection target by the photographing unit.
- An image processing apparatus according to the present technology includes a projective transformation unit that generates a projected image by projectively transforming a first image of body hair into the coordinate system of a second image of the body hair taken from a direction different from that of the first image;
- a difference image generation unit that generates a difference image between the second image and the projected image; a region detection unit that detects, in the difference image, a candidate region consisting of pixels whose difference value is equal to or greater than a predetermined threshold
- and divides the region of the second image corresponding to the candidate region into a body hair region in which the body hair appears and a non-hair region; and a removal unit that, based on the detection result of the region detection unit, projects the pixels in the region of the other image corresponding to the region in which the body hair appears in at least one of the first and second images onto the one image and replaces the corresponding pixels, thereby removing the body hair from the one image.
- A difference image between the second image and the projected image is generated, a candidate region consisting of pixels whose difference value is equal to or greater than a predetermined threshold is detected in the difference image,
- and the region of the second image corresponding to the candidate region is divided into the detection target region and the other non-detection target regions.
- the detection target is photographed, and the first image obtained by photographing the detection target is used as the coordinate system of the second image obtained by photographing the detection target from a different direction from the first image.
- a projective image obtained by projective transformation is generated, a difference image between the second image and the projection image is generated, and a candidate area including pixels having a difference value equal to or larger than a predetermined threshold is detected in the difference image,
- the region of the second image corresponding to the candidate region is divided into the detection target region and the other non-detection target regions.
- A projected image is generated by projectively transforming the first image of the body hair into the coordinate system of the second image of the body hair taken from a direction different from that of the first image.
- A difference image between the second image and the projected image is generated, a candidate region consisting of pixels whose difference value is equal to or greater than a predetermined threshold is detected in the difference image, and the region of the second image
- corresponding to the candidate region is divided into a body hair region in which the body hair appears and a non-hair region. Based on the detection result of the body hair region, the pixels in the region of the other image corresponding to the region in which the body hair appears in at least one of the first and second images
- are projected onto the one image and replace the corresponding pixels, whereby the body hair is removed from the one image.
- According to the first or second aspect of the present technology, it is possible to easily and accurately detect the region in which an object to be detected, such as body hair, appears in an image.
- In addition, the body hair in the image can be accurately removed.
- FIG. 1 is a block diagram illustrating a functional configuration example of an image processing system 101 that is a first embodiment of an image processing system to which the present technology is applied.
- the image processing system 101 is configured to include a probe 111 and an image processing device 112.
- The image processing system 101 is a system for detecting a region in which body hair appears (hereinafter referred to as a body hair region) in an image (hereinafter referred to as a skin image) taken with the probe 111 in contact with or close to human skin.
- the probe 111 is configured to include two imaging devices 121-1, 121-2.
- The imaging devices 121-1 and 121-2 are constituted by, for example, cameras capable of close-up photography from a very short distance (for example, several millimeters to several centimeters). Note that the imaging devices 121-1 and 121-2 can also be configured by photographing means other than cameras.
- The imaging devices 121-1 and 121-2 are arranged so that they can capture the same region of human skin from different directions while a predetermined position of the probe 111 is in contact with or close to the skin. The imaging devices 121-1 and 121-2 then each supply the skin images they capture to the image processing device 112.
- the photographing device 121-1 and the photographing device 121-2 are arranged so that at least a part of the photographing region overlaps.
- the photographing device 121-1 and the photographing device 121-2 are arranged such that at least one of the azimuth angle or the depression angle of the optical axis center with respect to the surface of the skin of the person to be photographed is different.
- the same region of the human skin can be photographed simultaneously from different directions by the photographing devices 121-1 and 121-2.
- the optical axis center of the imaging device 121-1 and the optical axis center of the imaging device 121-2 do not necessarily intersect.
- FIGS. 2 and 3 show an example of an ideal arrangement of the imaging devices 121-1 and 121-2.
- FIG. 2 is a diagram showing the positional relationship between the imaging devices 121-1 and 121-2 using auxiliary lines.
- FIG. 3 is a diagram of the positional relationship between the imaging devices 121-1 and 121-2 of FIG. 2 viewed from a different direction.
- The imaging device 121-1 is arranged obliquely above the skin, as seen from the point on the skin surface where its optical axis center intersects the optical axis center of the imaging device 121-2. As a result, the imaging device 121-1 can photograph the skin region it shares with the imaging device 121-2 from an obliquely upward direction.
- the photographing apparatus 121-2 is arranged above the skin so that the optical axis center is perpendicular to the skin surface, that is, the depression angle of the optical axis center is 90 degrees. As a result, the photographing apparatus 121-2 can photograph the skin from directly above, and can obtain a skin image without distortion.
- The angle θ between the optical axis center of the imaging device 121-1 and the optical axis center of the imaging device 121-2 is determined by factors such as how far the body hair stands off the skin surface and the thickness of the body hair. For example, the angle θ is set to 45 degrees. Note that the depression angle of the optical axis center of the imaging device 121-1 is then 90 degrees - θ.
- Hereinafter, when it is not necessary to distinguish between the imaging devices 121-1 and 121-2 individually, they are simply referred to as the imaging device 121.
- the image processing apparatus 112 is configured to include an image acquisition unit 131, a body hair region detection unit 132, and a storage unit 133.
- the image acquisition unit 131 acquires skin images captured by the imaging device 121-1 and the imaging device 121-2 and supplies the skin image to the body hair region detection unit 132.
- the hair region detection unit 132 detects a hair region in the acquired skin image, and outputs information indicating the skin image and the detection result to a subsequent apparatus.
- the storage unit 133 appropriately stores data necessary for processing of the image processing apparatus 112.
- FIG. 4 is a block diagram illustrating a functional configuration example of the hair region detection unit 132.
- the body hair region detection unit 132 is configured to include a feature point extraction unit 151, an association unit 152, a homography matrix calculation unit 153, a projective transformation unit 154, a difference image generation unit 155, and a region detection unit 156.
- Feature point extraction unit 151 extracts feature points of each skin image.
- the feature point extraction unit 151 supplies the skin image and information indicating each extracted feature point to the association unit 152.
- the associating unit 152 associates feature points between two skin images, and detects a pair estimated to be the same feature point.
- the associating unit 152 supplies information indicating the skin image and the detection result of the feature point pair to the homography matrix calculating unit 153.
- The homography matrix calculation unit 153 calculates, based on at least some of the feature point pairs between the two skin images, a homography matrix for projectively transforming one skin image (hereinafter also referred to as the projection source image) into the coordinate system of the other skin image (hereinafter also referred to as the projection destination image).
- the homography matrix calculation unit 153 supplies the skin image and information indicating the homography matrix and the feature point pair used for calculation of the homography matrix to the projective transformation unit 154.
- The projective transformation unit 154 performs projective transformation of the projection source image using the homography matrix.
- The projective transformation unit 154 then supplies the skin images (the projection source image and the projection destination image), the image generated by the projective transformation (hereinafter referred to as the projective transformation image), and information indicating the homography matrix and the feature point pairs used to calculate it, to the difference image generation unit 155.
- the difference image generation unit 155 generates a difference image between the projective transformation image and the projection destination image.
- the difference image generation unit 155 causes the storage unit 133 to store the difference image and information indicating the pair of the homography matrix and the feature point used in the generation process of the difference image.
- the difference image generation unit 155 instructs the homography matrix calculation unit 153 to calculate a homography matrix as necessary. Furthermore, the difference image generation unit 155 supplies the skin image (projection source image and projection destination image) to the area detection unit 156 when the generation process of the difference image ends.
- the region detection unit 156 detects a hair region in the projected image based on the difference image stored in the storage unit 133.
- the region detection unit 156 outputs information indicating the skin image (projection source image and projection destination image) and the detection result of the body hair region to the subsequent stage.
- the region detection unit 156 outputs information indicating at least one of the homography matrix or the feature point pair used in the process of generating the difference image used for the detection of the hair region, as necessary.
- Hereinafter, the processing in the case where the skin region where the body hair BH1 exists is photographed by the imaging device 121-1 in a direction looking down diagonally from the left, and by the imaging device 121-2 in a direction looking down diagonally from the right, will be described using specific examples as appropriate.
- In step S1, the image processing system 101 acquires two skin images taken from different directions. Specifically, the imaging devices 121-1 and 121-2 photograph human skin almost simultaneously from different directions. At this time, the imaging devices 121-1 and 121-2, for example, photograph the skin in close-up from a short distance with the probe 111 in contact with or close to the skin. The imaging devices 121-1 and 121-2 then supply the skin images obtained as a result to the image acquisition unit 131. The image acquisition unit 131 supplies the acquired skin images to the feature point extraction unit 151 of the body hair region detection unit 132.
- a skin image including the image DL1 is captured by the imaging device 121-1
- a skin image including the image DR1 is captured by the imaging device 121-2.
- the image DL1 and the image DR1 are obtained by extracting portions corresponding to the same skin region from the skin images captured by the imaging device 121-1 and the imaging device 121-2, respectively.
- In the image DL1 and the image DR1, there are a body hair region AL1 and a body hair region AR1 in which the body hair BH1 appears. Since the image DL1 and the image DR1 are taken from different directions, the skin areas hidden by the body hair BH1 are different. That is, the skin area hidden by the hair region AL1 in the image DL1 differs from the skin area hidden by the hair region AR1 in the image DR1.
- The reason the left side of the image DL1 is long while the right side is short is that the left-side area is closest to the imaging device 121 and therefore appears larger, while areas toward the right are farther away and appear smaller. For the same reason, the right side of the image DR1 is long and the left side is short.
- In step S2, the body hair region detection unit 132 performs the body hair region detection process, after which the body hair detection process ends.
- in step S21, the feature point extraction unit 151 extracts feature points from each skin image. Then, the feature point extraction unit 151 supplies the skin images and information indicating each extracted feature point to the association unit 152.
- any method can be adopted as a method for extracting feature points.
- for example, feature points can be extracted using SIFT (Scale Invariant Feature Transform) features or SURF (Speeded Up Robust Features) features.
- in step S22, the associating unit 152 associates feature points between the images. That is, the associating unit 152 detects pairs estimated to be the same feature point from among the combinations of the feature points of one skin image and the feature points of the other skin image.
- for example, the associating unit 152 selects one feature point of the image DL1, and calculates the inter-vector distance between the SIFT feature vector of the selected feature point and that of each feature point of the image DR1. Then, the associating unit 152 associates the selected feature point of the image DL1 with the feature point of the image DR1 having the smallest inter-vector distance as a pair of the same feature point. The associating unit 152 performs this process for all feature points.
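As an illustration, the nearest-neighbour association described above can be sketched in a few lines of NumPy; the toy 3-D descriptors below stand in for real 128-D SIFT vectors and are not taken from the patent.

```python
import numpy as np

def match_features(desc_l, desc_r):
    # Pair each descriptor of the left image with the right-image
    # descriptor at the smallest Euclidean (inter-vector) distance.
    d = np.linalg.norm(desc_l[:, None, :] - desc_r[None, :, :], axis=2)
    return [(i, int(np.argmin(d[i]))) for i in range(len(desc_l))]

# Toy 3-D descriptors standing in for SIFT feature vectors.
dl = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
dr = np.array([[1.0, 0.1, 0.0], [0.0, 0.1, 1.0]])
pairs = match_features(dl, dr)
print(pairs)  # [(0, 1), (1, 0)]
```

A practical matcher would also apply a ratio test to reject ambiguous pairs, which this sketch omits.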
- in step S23, the homography matrix calculation unit 153 randomly selects four feature point pairs. For example, as shown in FIG. 8, the homography matrix calculation unit 153 selects the four feature point pairs consisting of the feature points FP1L to FP4L of the image DL1 and the feature points FP1R to FP4R of the image DR1 paired with them.
- the homography matrix calculation unit 153 calculates a homography matrix based on the selected feature point pairs.
- for example, the homography matrix calculation unit 153 calculates a homography matrix H_LR for projecting the image DL1 (the projection source image) onto the coordinate system of the image DR1 (the projection destination image) based on the four feature point pairs shown in FIG. 8. Then, the homography matrix calculation unit 153 supplies the skin images and information indicating the calculated homography matrix and the feature point pairs used to calculate it to the projective transformation unit 154.
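For reference, a homography can be computed from four point correspondences with the direct linear transform (DLT); this is one standard way to realize the calculation attributed to the homography matrix calculation unit 153, not necessarily the patent's exact method.

```python
import numpy as np

def homography_from_pairs(src, dst):
    # Direct linear transform: each correspondence (x, y) -> (u, v)
    # contributes two linear equations; the homography is the null
    # vector of the stacked system, recovered via SVD.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Four corners of a unit square mapped by a pure translation (+2, +1).
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(2, 1), (3, 1), (3, 2), (2, 2)]
H = homography_from_pairs(src, dst)
print(np.round(H, 6))  # approximately [[1, 0, 2], [0, 1, 1], [0, 0, 1]]
```

Four pairs determine the eight degrees of freedom of a homography exactly, which is why the patent samples exactly four pairs at a time.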
- the projective transformation unit 154 performs projective transformation on one skin image using the homography matrix.
- for example, the projective transformation unit 154 generates the image DL1′ (a projective transformation image) by projectively transforming the image DL1 (the projection source image) using the homography matrix H_LR calculated by the homography matrix calculation unit 153.
- the body hair region AL1 of the image DL1 is projected onto the body hair region AL1′ in the image DL1′.
- the body hair region AL1′ is a region different from the body hair region AR1 in the image DR1.
- the projective transformation unit 154 supplies the skin images (the projection source image and the projection destination image), the projective transformation image, and information indicating the homography matrix and the feature point pairs used to calculate it to the difference image generation unit 155.
- the difference image generation unit 155 generates a difference image between the projectively transformed skin image (the projective transformation image) and the projection destination skin image. For example, as illustrated in FIG. 10, the difference image generation unit 155 generates a difference image DS1 consisting of the absolute values of the difference values of corresponding pixels between the image DL1′ and the image DR1. In the difference image DS1, the difference values are large in the region AL2 corresponding to the body hair region AL1′ and in the region AR2 corresponding to the body hair region AR1.
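A minimal sketch of the warp-and-difference step follows, using inverse mapping with nearest-neighbour sampling (an illustrative choice; the patent does not specify the interpolation).

```python
import numpy as np

def warp_and_diff(src_img, dst_img, H):
    # Warp src_img into the frame of dst_img with homography H
    # (inverse mapping, nearest neighbour), then take the absolute
    # difference. Out-of-frame pixels are filled with dst values so
    # they contribute zero difference.
    h, w = dst_img.shape
    Hinv = np.linalg.inv(H)
    warped = dst_img.astype(float).copy()
    for v in range(h):
        for u in range(w):
            x, y, s = Hinv @ np.array([u, v, 1.0])
            xi, yi = int(round(x / s)), int(round(y / s))
            if 0 <= yi < src_img.shape[0] and 0 <= xi < src_img.shape[1]:
                warped[v, u] = src_img[yi, xi]
    return np.abs(warped - dst_img.astype(float))

# A bright 4x4 "skin" patch with one dark "hair" pixel per image,
# at different positions; H is the identity (already aligned).
src = np.full((4, 4), 200.0); src[1, 1] = 20.0
dst = np.full((4, 4), 200.0); dst[2, 3] = 20.0
diff = warp_and_diff(src, dst, np.eye(3))
print(diff[1, 1], diff[2, 3])  # 180.0 180.0 -- both hair positions stand out
```

Because the hair hides different skin areas in the two views, both hair positions light up in the difference image, exactly the property the detection relies on.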
- in step S27, the difference image generation unit 155 stores the generated difference image in the storage unit 133 in association with information indicating the homography matrix and the combination of the four feature point pairs used in generating the difference image.
- in step S28, the difference image generation unit 155 determines whether or not a predetermined number of difference images have been generated. If it is determined that the predetermined number of difference images have not yet been generated, the process returns to step S23. At this time, the difference image generation unit 155 instructs the homography matrix calculation unit 153 to calculate a homography matrix.
- thereafter, steps S23 to S28 are repeatedly executed a predetermined number of times. That is, the process of randomly selecting four feature point pairs and generating a difference image between the projection destination image and the projective transformation image obtained by performing projective transformation using the homography matrix calculated from the selected pairs is repeated.
- on the other hand, if it is determined in step S28 that a predetermined number of difference images have been generated, the process proceeds to step S29.
- the difference image generation unit 155 supplies the skin image (projection source image and projection destination image) to the region detection unit 156.
- in step S29, the region detection unit 156 selects the difference image that minimizes the difference values.
- specifically, the region detection unit 156 selects, from the difference images stored in the storage unit 133, the difference image that minimizes the sum of the difference values.
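The loop of steps S23 to S29 resembles RANSAC: sample four pairs, score the resulting difference image, keep the best. A schematic sketch follows, with a toy scoring function standing in for the real difference image generation (the name `make_diff` and the scoring model are assumptions for illustration).

```python
import random
import numpy as np

def best_homography(pairs, make_diff, n_iter=20, seed=0):
    # Repeatedly pick four random feature point pairs, build the
    # difference image for the resulting homography, and keep the
    # sample whose difference values sum lowest (steps S23-S29).
    rng = random.Random(seed)
    best = None
    for _ in range(n_iter):
        sample = rng.sample(pairs, 4)
        diff = make_diff(sample)
        score = float(np.sum(diff))
        if best is None or score < best[0]:
            best = (score, sample, diff)
    return best

# Toy stand-in: each "pair" is an index, and the difference image is
# smaller when the sampled indices are smaller (hypothetical scoring).
pairs = list(range(10))
make_diff = lambda sample: np.array(sample, dtype=float)
score, sample, diff = best_homography(pairs, make_diff)
print(score, sorted(sample))
```

Keeping the minimum-sum difference image rejects samples contaminated by mismatched feature points, since a bad homography misaligns the whole image and inflates the difference everywhere.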
- the region detection unit 156 detects a hair region using the selected difference image. Specifically, first, the region detection unit 156 detects a region composed of pixels having a difference value equal to or larger than a predetermined threshold in the selected difference image as a body hair candidate region. For example, in the difference image DS1 of FIG. 10, a region on the projection destination image corresponding to the region composed of the region AL2 and the region AR2 is detected as a body hair candidate region.
- of these, the region AR2 is a region where body hair actually appears in the projection destination image (the image DR1), that is, a body hair region. On the other hand, the region AL2 is a region where no body hair appears in the projection destination image but body hair appears in the projective transformation image (the image DL1′) (hereinafter referred to as a non-hair region).
- when the root (pore) of the hair appears in the skin image, the body hair candidate region is detected as one continuous region. On the other hand, the body hair candidate region corresponding to a hair whose root is not shown in the skin image is detected as two separated regions, a body hair region and a non-hair region.
- the region detection unit 156 divides the body hair candidate region into the region where the body hair actually appears in the projection destination image (the body hair region) and the remaining region (the non-hair region).
- specifically, the region detection unit 156 selects one pixel from the body hair candidate region of the projection destination image as the target pixel. Furthermore, the region detection unit 156 detects the maximum pixel value among the pixels in a rectangular region of a predetermined size centered on the target pixel (hereinafter referred to as the attention region). The maximum pixel value indicates the maximum luminance value in the attention region.
- when the difference between the maximum pixel value in the attention region and the pixel value of the target pixel is large, the region detection unit 156 determines that the target pixel is a pixel in the body hair region; on the other hand, when the difference is small, the region detection unit 156 determines that the target pixel is a pixel in the non-hair region.
- for example, when the pixel P1 in the body hair region A1 is the target pixel, the pixel P1 is a pixel on the hair and its pixel value is close to 0. Therefore, the difference between the pixel value of the pixel P1 and the maximum pixel value in the attention region B1 centered on the pixel P1 becomes large, and the pixel P1 is determined to be a pixel in the body hair region A1.
- on the other hand, when the pixel P2 in the non-hair region A2 is the target pixel, the pixel P2 is a pixel on the skin. Therefore, the difference between the pixel value of the pixel P2 and the maximum pixel value in the attention region B2 centered on the pixel P2 becomes small, and the pixel P2 is determined to be a pixel in the non-hair region A2.
- the length (in number of pixels) of one side of the attention region is desirably set to a value larger than, but close to, the maximum number of pixels in the thickness direction of the body hair assumed in the skin image.
- for example, the length of one side of the attention region is set to 51 pixels.
- the region detection unit 156 repeats the above processing until every pixel in the body hair candidate region has been selected as the target pixel, thereby classifying each pixel in the body hair candidate region as a pixel in the body hair region or a pixel in the non-hair region. In this way, by comparing the image in the body hair candidate region with its surroundings, the body hair candidate region is separated into a body hair region and a non-hair region.
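The local-maximum test described above can be sketched as follows; the window size `win` and threshold `thresh` are illustrative values, not taken from the patent.

```python
import numpy as np

def split_candidates(dst_img, candidate_mask, win=5, thresh=100.0):
    # A candidate pixel far below the local maximum (bright skin
    # nearby) lies on a dark hair; one close to the local maximum
    # lies on skin, i.e. in the non-hair region.
    half = win // 2
    hair = np.zeros_like(candidate_mask, dtype=bool)
    h, w = dst_img.shape
    for y, x in zip(*np.nonzero(candidate_mask)):
        y0, y1 = max(0, y - half), min(h, y + half + 1)
        x0, x1 = max(0, x - half), min(w, x + half + 1)
        local_max = dst_img[y0:y1, x0:x1].max()
        if local_max - dst_img[y, x] >= thresh:
            hair[y, x] = True
    return hair

img = np.full((6, 6), 200.0)
img[2, 2] = 20.0                      # dark pixel: hair in this image
mask = np.zeros((6, 6), dtype=bool)
mask[2, 2] = True                     # hair in the destination image
mask[4, 4] = True                     # hair only in the warped image
hair = split_candidates(img, mask)
print(hair[2, 2], hair[4, 4])  # True False
```

This mirrors the patent's intent that the window be slightly wider than the thickest expected hair, so the window around a hair pixel always contains some bright skin.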
- the method for separating the body hair region and the non-hair region is not limited to the method described above. For example, instead of making the determination in units of pixels, as shown in FIG. 12, the body hair candidate region may first be divided into the regions A1 and A2, and then it may be determined which of the regions is the body hair region and which is the non-hair region. In this case, the determination can be made using only some of the pixels in each region rather than all of them.
- the region detection unit 156 outputs the skin images (the projection source image and the projection destination image), the detection results of the body hair region and the non-hair region, and information indicating the homography matrix used in generating the difference image used for detecting the body hair region to the subsequent stage.
- here, the optimal homography matrix is the homography matrix, among the plurality of calculated homography matrices, for which the difference between the projective transformation image and the projection destination image is smallest.
- the hair region in the skin image can be detected easily and accurately.
- FIG. 13 is a block diagram illustrating a functional configuration example of an image processing system 201 which is the second embodiment of the image processing system to which the present technology is applied.
- the image processing system 201 is a system in which a function for removing body hair from a skin image is added to the image processing system 101.
- portions corresponding to those in FIG. 1 are denoted by the same reference numerals, and descriptions of portions whose processing is the same are omitted as appropriate to avoid repetition.
- the image processing system 201 is different from the image processing system 101 in FIG. 1 in that an image processing apparatus 211 is provided instead of the image processing apparatus 112. Further, the image processing apparatus 211 is different from the image processing apparatus 112 in that a hair removal unit 231 is added.
- the hair removal unit 231 is configured to include a body hair region detection unit 132 and a removal unit 241.
- the removal unit 241 generates an image obtained by removing body hair from the skin image (hereinafter referred to as a body hair removal image) based on the detection result of the body hair region by the body hair region detection unit 132.
- the removal unit 241 outputs the generated hair removal image to the subsequent stage.
- in step S101, two skin images taken from different directions are acquired in the same manner as in step S1 of FIG.
- in step S102, the body hair region detection process is executed in the same manner as in step S2 of FIG. Then, the region detection unit 156 of the body hair region detection unit 132 supplies the skin images (the projection source image and the projection destination image), the detection results of the body hair region and the non-hair region, and information indicating the optimal homography matrix to the removal unit 241.
- in step S103, the removal unit 241 generates an image from which the body hair has been removed. Specifically, the removal unit 241 generates a body hair removal image by replacing at least the pixels of the body hair region of the projection destination image with the corresponding pixels of the projection source image.
- for example, the removal unit 241 calculates the inverse matrix of the optimal homography matrix. Then, using the inverse matrix, the removal unit 241 calculates the region of the projection source image corresponding to the non-hair region in the body hair candidate region, that is, the body hair region in the projection source image. Then, the removal unit 241 projects the pixels of the projection source image other than its body hair region onto the projection destination image using the optimal homography matrix, and replaces the pixels of the projection destination image with them. As a result, in the projection destination image, the pixels of the body hair region, including part of the non-hair region, are replaced with the corresponding pixels of the projection source image, and an image in which the body hair has been removed from the projection destination image (a body hair removal image) is generated.
- for example, as shown in FIG. 15, the removal unit 241 projects the pixels of the image DL1 other than the body hair region AL1 onto the image DR1 using the optimal homography matrix H_LR, and replaces the pixels of the image DR1 with them. Thereby, the body hair region AR1 is removed from the image DR1, and a body hair removal image showing the skin that was hidden by the body hair region AR1 is generated instead.
- alternatively, the removal unit 241 calculates the region of the projection source image corresponding to the body hair region of the projection destination image using the inverse matrix of the optimal homography matrix. This calculated region is different from the body hair region of the projection source image, and no body hair appears in it. Then, the removal unit 241 projects the pixels in the calculated region of the projection source image onto the projection destination image using the optimal homography matrix, and replaces the pixels of the projection destination image with them. Thereby, the pixels of the body hair region of the projection destination image are replaced with the corresponding pixels of the projection source image.
- for example, the removal unit 241 calculates the region of the image DL1 corresponding to the body hair region AR1 of the image DR1. Then, as shown in FIG. 16, the removal unit 241 projects the pixels in the calculated region of the image DL1 (the pixels surrounded by circles in the drawing) onto the image DR1 using the optimal homography matrix, and replaces the pixels of the image DR1 (the pixels in the body hair region AR1) with them. Thereby, the body hair region AR1 is removed from the image DR1, and a body hair removal image showing the skin that was hidden by the body hair region AR1 is generated instead.
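A sketch of the replacement step: each hair pixel of the projection destination image is filled from the location the inverse homography maps it to in the projection source image (nearest-neighbour sampling is an illustrative choice, not the patent's stated implementation).

```python
import numpy as np

def remove_hair(dst_img, src_img, hair_mask, H):
    # For each hair pixel of the destination image, look up the pixel
    # the inverse homography maps it to in the source image and copy
    # that skin value over the hair.
    Hinv = np.linalg.inv(H)
    out = dst_img.astype(float).copy()
    for y, x in zip(*np.nonzero(hair_mask)):
        u, v, s = Hinv @ np.array([x, y, 1.0])
        xi, yi = int(round(u / s)), int(round(v / s))
        if 0 <= yi < src_img.shape[0] and 0 <= xi < src_img.shape[1]:
            out[y, x] = src_img[yi, xi]
    return out

dst = np.full((4, 4), 200.0); dst[1, 2] = 20.0   # dark hair pixel
src = np.full((4, 4), 200.0)                     # same spot unobscured here
mask = np.zeros((4, 4), dtype=bool); mask[1, 2] = True
clean = remove_hair(dst, src, mask, np.eye(3))
print(clean[1, 2])  # 200.0
```

The replacement only works because the two views were taken from different directions, so the skin hidden in one image is visible in the other.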
- the removal unit 241 outputs the generated hair removal image to the subsequent stage.
- a body hair region is detected using three or more skin images acquired by photographing skin from three or more different directions.
- since body hair is linear, as shown in FIG. 6 described above, it is desirable to capture skin images from two directions orthogonal to the direction in which the body hair extends. This is because the probability (or area) that a skin region concealed by body hair in one skin image is not concealed by body hair in the other skin image increases.
- however, since the direction of body hair is not uniform and varies, it is not always possible to capture skin images from two directions orthogonal to the direction in which the body hair extends.
- for example, as shown in FIG. 17, there are cases where photographing is performed by the photographing device 121-1 and the photographing device 121-2 from two opposite directions parallel to the direction in which the body hair BH11 extends.
- the skin areas hidden by the body hair BH11 overlap when viewed from the two imaging devices. That is, the skin area hidden by the body hair area AL11 of the image DL11 obtained by the imaging apparatus 121-1 and the skin area hidden by the body hair area AR11 of the image DR11 obtained by the imaging apparatus 121-2 overlap.
- as a result, since the difference values of the region corresponding to the body hair region in the difference image become small, it is difficult to accurately detect the body hair region.
- therefore, in the third embodiment, the skin region is photographed from three or more directions so that the body hair region can be detected with higher accuracy.
- FIG. 18 is a block diagram illustrating a functional configuration example of an image processing system 301 that is the third embodiment of the image processing system to which the present technology is applied.
- portions corresponding to those in FIG. 1 are denoted by the same reference numerals, and descriptions of portions whose processing is the same are omitted as appropriate to avoid repetition.
- the image processing system 301 is configured to include a probe 311 and an image processing device 312.
- the probe 311 differs from the probe 111 in FIG. 1 in the number of imaging devices 121 provided. That is, the probe 311 is provided with n imaging devices 121-1 to 121-n (n is 3 or more).
- the photographing devices 121-1 to 121-n are arranged in the probe 311 so that the same region of human skin can be photographed from different directions while a predetermined position of the probe 311 is in contact with or close to the human skin. Then, the photographing devices 121-1 to 121-n supply the captured skin images to the image processing device 312.
- more specifically, the photographing devices 121-1 to 121-n are arranged so that they all share a common photographing region.
- in addition, any two photographing devices 121 are arranged so that at least one of the azimuth angle and the depression angle of their optical axis centers with respect to the surface of the subject's skin differs between them.
- the same region of the human skin can be simultaneously photographed from different directions by any two photographing devices 121.
- the optical axis centers of any two imaging devices 121 do not necessarily intersect.
- FIG. 19 shows an example of an ideal arrangement when three imaging devices 121 are used.
- the photographing device 121-1 and the photographing device 121-2 are arranged at the same position as in FIG.
- the photographing device 121-3 has an optical axis center that intersects with the optical axis centers of the photographing device 121-1 and the photographing device 121-2 on the surface of the human skin, and is disposed obliquely upward as viewed from the intersection.
- in addition, the azimuth angles of the optical axis centers of the photographing device 121-1 and the photographing device 121-3 differ by 90 degrees. Further, the angle θ1 formed by the optical axis centers of the photographing device 121-2 and the photographing device 121-1 and the angle θ2 formed by the optical axis centers of the photographing device 121-2 and the photographing device 121-3 are set to the same angle.
- the image processing device 312 differs from the image processing device 112 of FIG. 1 in that an image acquisition unit 331 is provided instead of the image acquisition unit 131, and in that an image selection unit 332 and a region synthesis unit 333 are added.
- the image acquisition unit 331 acquires skin images photographed by the photographing devices 121-1 to 121-n and stores them in the storage unit 133. In addition, the image acquisition unit 331 notifies the image selection unit 332 that a skin image has been acquired.
- the image selection unit 332 selects two of the skin images stored in the storage unit 133 as images to be processed, and supplies them to the body hair region detection unit 132. In addition, when the detection of the body hair regions of all the skin images is completed, the image selection unit 332 notifies the region synthesis unit 333 to that effect.
- the body hair region detection unit 132 detects a body hair region in the skin image selected by the image selection unit 332 and supplies information indicating the detection result to the region synthesis unit 333.
- the region synthesis unit 333 selects, from the skin images stored in the storage unit 133, one skin image that is the target for finally detecting the body hair region (hereinafter referred to as the detection target image).
- in addition, the region synthesis unit 333 calculates the body hair regions obtained by projecting the body hair region detected in each skin image other than the detection target image onto the coordinate system of the detection target image.
- the region synthesis unit 333 obtains the final body hair region by synthesizing the body hair region in the detection target image and the body hair regions projected from the other skin images onto the coordinate system of the detection target image. Then, the region synthesis unit 333 outputs the detection target image and the final body hair region detection result to the subsequent stage.
- hereinafter, the processing in the case where the skin region where the body hair BH11 is present is photographed obliquely downward from the upper left by the photographing device 121-1, obliquely downward from the upper right by the photographing device 121-2, and obliquely downward from the lower side by the photographing device 121-3 will be described as a specific example as appropriate.
- in step S201, the image processing system 301 acquires three or more skin images taken from different directions. Specifically, the photographing devices 121-1 to 121-n photograph human skin almost simultaneously from different directions. At this time, the photographing devices 121-1 to 121-n photograph the skin in close-up from a short distance, for example, with the probe 311 in contact with or close to the skin. Then, the photographing devices 121-1 to 121-n supply the skin images obtained as a result of photographing to the image acquisition unit 331. The image acquisition unit 331 stores the acquired n skin images in the storage unit 133. In addition, the image acquisition unit 331 notifies the image selection unit 332 that the skin images have been acquired.
- for example, a skin image including the image Da is captured by the imaging device 121-1, a skin image including the image Db is captured by the imaging device 121-2, and a skin image including the image Dc is captured by the imaging device 121-3.
- the images Da to Dc are obtained by extracting portions corresponding to the same skin region from the skin images captured by the imaging devices 121-1 to 121-3, respectively.
- the skin area concealed by the body hair area Aa in the image Da and the skin area concealed by the body hair area Ab in the image Db are substantially the same.
- the skin area hidden by the body hair area Aa in the image Da is different from the skin area hidden by the body hair area Ac in the image Dc.
- the skin region concealed by the hair region Ab in the image Db is different from the skin region concealed by the body hair region Ac in the image Dc.
- in step S202, the image selection unit 332 selects two skin images as processing targets. Specifically, the image selection unit 332 selects, as the projection destination image, one skin image for which a body hair region has not yet been detected from among the skin images stored in the storage unit 133. The image selection unit 332 also selects one of the remaining skin images as the projection source image. At this time, it is desirable to select the projection source image so as not to repeat a combination of skin images selected in the past. Then, the image selection unit 332 supplies the selected projection source image and projection destination image to the feature point extraction unit 151 of the body hair region detection unit 132.
- in step S203, the body hair region detection process is executed using the selected projection source image and projection destination image in the same manner as the process in step S2 of FIG. Thereby, the body hair region of the projection destination image is detected.
- the region detection unit 156 of the body hair region detection unit 132 supplies information indicating the detection result of the body hair region to the region synthesis unit 333.
- in step S204, the image selection unit 332 determines whether or not the body hair regions of all the skin images have been detected. If it is determined that a skin image whose body hair region has not yet been detected remains, the process returns to step S202.
- thereafter, steps S202 to S204 are repeatedly executed until it is determined in step S204 that the body hair regions of all the skin images have been detected. Thereby, the body hair regions of all the skin images are detected.
- for example, the body hair region of the image Db is detected using the image Da as the projection source image and the image Db as the projection destination image; the body hair region of the image Dc is detected using the image Db as the projection source image and the image Dc as the projection destination image; and the body hair region of the image Da is detected using the image Dc as the projection source image and the image Da as the projection destination image.
- the images Da′, Db′, and Dc′ in FIG. 22 indicate the detection results of the body hair regions of the images Da, Db, and Dc, respectively. In this example, the skin area hidden by the body hair region Aa of the image Da overlaps that hidden by the body hair region Ab of the image Db, and thus the body hair region of the image Db is not detected.
- on the other hand, if it is determined in step S204 that the body hair regions of all the skin images have been detected, the process proceeds to step S205. At this time, the image selection unit 332 notifies the region synthesis unit 333 that the detection of the body hair regions of all the skin images has been completed.
- in step S205, the region synthesis unit 333 synthesizes the detected body hair regions. Specifically, the region synthesis unit 333 selects one detection target image from the skin images stored in the storage unit 133. Then, the region synthesis unit 333 projects the body hair region detected in each of the other skin images onto the coordinate system of the detection target image using a homography matrix from that skin image to the coordinate system of the detection target image.
- if a homography matrix from a skin image to the coordinate system of the detection target image has already been used in the body hair region detection process and is stored in the storage unit 133, it may be reused.
- the homography matrix may be calculated by the above-described method using feature points in regions other than the body hair region.
- the region synthesis unit 333 obtains the final body hair region by synthesizing (OR-combining) the body hair region in the detection target image and the body hair regions projected from the other skin images onto the coordinate system of the detection target image. That is, the region obtained by superimposing the body hair regions detected in the respective skin images is obtained as the final body hair region.
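The OR combination described above is straightforward once each mask has been projected into the detection target image's coordinate system; a sketch with boolean NumPy masks (the projection itself is assumed already done):

```python
import numpy as np

def combine_hair_masks(masks):
    # OR-combine hair masks that are already expressed in the
    # detection target image's coordinate system.
    final = np.zeros_like(masks[0], dtype=bool)
    for m in masks:
        final |= m
    return final

m1 = np.zeros((3, 3), dtype=bool); m1[0, 0] = True   # from the target image
m2 = np.zeros((3, 3), dtype=bool); m2[1, 1] = True   # projected from another view
final = combine_hair_masks([m1, m2])
print(int(final.sum()))  # 2
```

OR combination is the right choice here because a pixel belongs to the final hair region if any view detected hair there, recovering hairs that a single pair of views missed.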
- for example, the body hair region detected in the image Db and the body hair region detected in the image Dc are projected onto the coordinate system of the image Da, and the region in which the body hair regions are superimposed is finally obtained as the body hair region.
- the region synthesis unit 333 outputs the detection target image and the detection result of the final body hair region to the subsequent stage.
- the hair region in the skin image can be detected with higher accuracy.
- body hair is removed from a skin image using three or more skin images acquired by photographing the skin from three or more different directions.
- FIG. 24 is a block diagram illustrating a functional configuration example of an image processing system 401 that is the fourth embodiment of the image processing system to which the present technology is applied.
- portions corresponding to those in FIG. 13 or FIG. 18 are denoted by the same reference numerals, and descriptions of portions whose processing is the same are omitted as appropriate to avoid repetition.
- the image processing system 401 differs from the image processing system 201 in that the same probe 311 as in the image processing system 301 of FIG. 18 is provided instead of the probe 111, and an image processing device 411 is provided instead of the image processing device 211.
- the image processing device 411 differs from the image processing device 211 in that the same image acquisition unit 331 as in the image processing system 301 of FIG. 18 is provided instead of the image acquisition unit 131, and a body hair removal unit 431 is provided instead of the body hair removal unit 231.
- the hair removal unit 431 is different from the body hair removal unit 231 in that a removal unit 442 is provided instead of the removal unit 241 and an image selection unit 441 is added.
- the image selection unit 441 first selects two of the skin images stored in the storage unit 133 as images to be processed, and supplies them to the body hair region detection unit 132. Thereafter, the image selection unit 441 selects, as the images to be processed, one of the unprocessed skin images stored in the storage unit 133 together with the body hair removal image supplied from the removal unit 442, and supplies them to the body hair region detection unit 132. In addition, when the detection of the body hair regions of all the skin images is completed, the image selection unit 441 notifies the removal unit 442 to that effect.
- the removal unit 442 generates a body hair removal image obtained by removing body hair from the skin image based on the detection result of the body hair region by the body hair region detection unit 132.
- the removal unit 442 supplies the generated hair removal image to the image selection unit 441 or outputs it to the subsequent stage.
- in step S301, three or more skin images taken from different directions are acquired in the same manner as in step S201 of FIG.
- in step S302, the image selection unit 441 selects two skin images as processing targets. Specifically, the image selection unit 441 selects any two of the skin images stored in the storage unit 133, and sets one as the projection source image and the other as the projection destination image. Then, the image selection unit 441 supplies the selected projection source image and projection destination image to the feature point extraction unit 151 of the body hair region detection unit 132.
- in step S303, the body hair region detection process is executed using the selected projection source image and projection destination image, in the same manner as the process in step S2 of FIG. Thereby, the body hair region of the projection destination image is detected.
- in step S304, the removal unit 442 generates an image obtained by removing the body hair from the projection destination image (a body hair removal image), in the same manner as the process in step S103 of FIG.
- the removal unit 442 supplies the generated hair removal image to the image selection unit 441.
- in step S305, the image selection unit 441 determines whether or not all the skin images have been processed. If it is determined that unprocessed skin images remain, the process proceeds to step S306.
- in step S306, the image selection unit 441 selects one of the unprocessed skin images and the image from which the body hair has been removed as the processing targets. Specifically, the image selection unit 441 selects one of the unprocessed skin images stored in the storage unit 133 as the projection source image. Further, the image selection unit 441 selects the body hair removal image supplied from the removal unit 442 as the projection destination image. Then, the image selection unit 441 supplies the selected projection source image and projection destination image to the feature point extraction unit 151 of the body hair region detection unit 132.
- thereafter, the process returns to step S303, and the processes of steps S303 to S306 are repeatedly executed until it is determined in step S305 that all the skin images have been processed. That is, the process of using one of the unprocessed skin images as the projection source image and the most recently generated body hair removal image as the projection destination image, and newly generating a body hair removal image in which body hair has been further removed from that projection destination image, is repeatedly executed.
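The iteration of steps S302 to S306 can be sketched schematically; `detect_and_remove` below is a stand-in for the detection and removal processes, and the set-based toy model of an "image" is only illustrative, not the patent's data representation.

```python
def iterative_hair_removal(images, detect_and_remove):
    # The first two skin images yield an initial hair removal image;
    # each remaining skin image then serves as a projection source to
    # remove further hair from the latest result (steps S302-S306).
    result = detect_and_remove(images[0], images[1])
    for src in images[2:]:
        result = detect_and_remove(src, result)
    return result

# Toy model: an "image" is the set of hair pixel positions; removal
# keeps only hair that also hides the same spot in the source view.
remove = lambda src, dst: dst & src
imgs = [{(1, 1)}, {(1, 1), (2, 2)}, {(3, 3)}]
print(iterative_hair_removal(imgs, remove))  # set()
```

The toy model captures the key property of the loop: each additional view can only shrink the remaining hair, since hair survives only where every view so far has it hidden.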
- step S305 if it is determined in step S305 that all skin images have been processed, the process proceeds to step S307. At this time, the image selection unit 441 notifies the removal unit 442 that the processing of all skin images has been completed.
- step S307 the removal unit 442 outputs an image from which hair has been removed. That is, the removal unit 442 outputs the latest hair removal image at the current stage to the subsequent stage.
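As a rough sketch of the loop of steps S302 to S307, the following uses hypothetical callbacks `detect_hair_region` and `remove_hair` standing in for the body hair region detection unit 132 and the removal unit 442; the actual units operate on homography-aligned skin images.

```python
import numpy as np

def remove_hair_iteratively(skin_images, detect_hair_region, remove_hair):
    """Fold all skin images into one hair-free result (cf. steps S302-S307).

    detect_hair_region(src, dst) -> hair mask for dst (hypothetical callback)
    remove_hair(src, dst, mask)  -> dst with masked pixels filled from src
    """
    remaining = list(skin_images)
    # S302: pick any two images as projection source / projection destination.
    src, dst = remaining.pop(0), remaining.pop(0)
    # S303-S304: detect the hair region and generate a first hair removal image.
    result = remove_hair(src, dst, detect_hair_region(src, dst))
    # S305-S306: repeat with each unprocessed image as a new projection source,
    # always using the latest hair removal image as the destination.
    while remaining:
        src = remaining.pop(0)
        result = remove_hair(src, result, detect_hair_region(src, result))
    return result  # S307: output the latest hair removal image
```

With three views of the same skin patch, each pass can only reveal skin that is unoccluded in the current source, which is why the loop keeps folding in new views.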
- a hair removal image is generated using the images Da to Dc in FIG. 21.
- a hair removal image Db ′ (not shown) obtained by removing body hair from the image Db is generated.
- a body hair removal image Db ′′ obtained by further removing body hair from the body hair removal image Db ′ is generated and output to the subsequent stage.
- In the fifth embodiment, an image from which body hair has been removed is generated using skin images obtained by photographing the skin from different directions with a single photographing device and a mirror.
- FIG. 26 is a block diagram illustrating a functional configuration example of an image processing system 501 that is the fifth embodiment of the image processing system to which the present technology is applied.
- portions corresponding to those in FIG. 24 are denoted by the same reference numerals, and description of portions having the same processing will be omitted as appropriate because the description will be repeated.
- the image processing system 501 is configured to include a probe 511 and an image processing device 512.
- the probe 511 is configured to include an imaging device 521 and a mirror 522.
- FIG. 27 is a schematic diagram when the inside of the probe 511 is viewed from the side.
- the mirror 522 has the shape of the side surface of an inverted truncated cone, and a mirror surface is provided on its inner face. Then, with the probe 511 in contact with or close to the human skin so that the skin region to be imaged enters the opening 522A of the mirror 522 and the periphery of that region is radially surrounded by the mirror 522, a skin image is taken by the imaging device 521.
- the photographing device 521 is configured by a photographing device having a wide angle of view using a fisheye lens or the like. Then, the imaging device 521 can obtain an image of the skin region in the opening 522A viewed from around 360 degrees by photographing the skin reflected on the mirror 522. The imaging device 521 supplies the captured skin image to the image processing device 512.
- the image processing apparatus 512 is different from the image processing apparatus 411 in FIG. 24 in that an image acquisition unit 531 is provided instead of the image acquisition unit 331, and an image cutout unit 532 and a geometric distortion correction unit 533 are provided. Different.
- the image acquisition unit 531 acquires the skin image photographed by the photographing device 521 and supplies the skin image to the image cutout unit 532.
- the image cutout unit 532 cuts out the skin image reflected on the mirror 522 in the skin image at a predetermined interval and cutout width. Then, the image cutout unit 532 supplies the plurality of skin images obtained as a result to the geometric distortion correction unit 533.
- the geometric distortion correction unit 533 performs geometric distortion correction on each skin image supplied from the image cutout unit 532 and causes the storage unit 133 to store the corrected skin image.
- the geometric distortion correction unit 533 notifies the image selection unit 441 of the body hair removal unit 431 that the skin image has been acquired.
- step S401 the image processing system 501 acquires a skin image.
- the imaging device 521 captures human skin through the mirror 522.
- the imaging device 521 takes a close-up of the skin through the mirror 522 from a short distance in a state where the probe 511 is in contact with or close to the skin.
- the imaging device 521 supplies the skin image obtained as a result of imaging to the image acquisition unit 531.
- the image acquisition unit 531 supplies the acquired skin image to the image cutout unit 532.
- In the skin image obtained at this time, as shown in FIG. 29A, photographing the skin reflected on the mirror 522 yields a ring-shaped (doughnut-shaped) image of the same skin region viewed from its periphery over 360 degrees (hereinafter referred to as a ring-shaped image).
- step S402 the image cutout unit 532 cuts out the skin image. Specifically, the image cutout unit 532 cuts out a plurality of skin images at a predetermined interval and cutout width from a ring-shaped image in the skin image as shown in FIG. 29B. Then, the image cutout unit 532 supplies the plurality of cutout skin images to the geometric distortion correction unit 533.
- For example, the annular image may be cut out at appropriate intervals over its entire circumference, or skin images may be extracted from only a partial range of the annular image. Further, for example, the skin images may be cut out so that adjacent skin image regions partially overlap.
- the geometric distortion correction unit 533 corrects the geometric distortion of the skin image. Specifically, the geometric distortion correction unit 533 performs geometric distortion correction by distortion aberration correction, affine transformation, projective transformation, and the like on each skin image cut out by the image cutout unit 532. Thereby, a planar image of the skin shown in FIG. 29C is generated from the skin image shown in FIG. 29B. Then, the geometric distortion correction unit 533 causes the storage unit 133 to store the corrected skin image. The geometric distortion correction unit 533 notifies the image selection unit 441 of the body hair removal unit 431 that the skin image has been acquired.
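As a minimal sketch of what the cutout and geometric distortion correction stages accomplish, the following unwraps an annular region into a flat strip by polar sampling. The ideal geometry (known ring center and radii, nearest-neighbor sampling) is an assumption; a real implementation would also correct lens distortion aberration as the text describes.

```python
import numpy as np

def unwrap_ring(img, center, r_inner, r_outer, n_angles=360):
    """Unwrap a ring-shaped image into a flat strip, one column per viewing
    angle; a crude stand-in for the processing around unit 533 (assumed
    geometry, nearest-neighbor sampling)."""
    h = r_outer - r_inner
    out = np.zeros((h, n_angles), dtype=img.dtype)
    cy, cx = center
    for j, theta in enumerate(np.linspace(0, 2 * np.pi, n_angles, endpoint=False)):
        for i in range(h):
            r = r_inner + i  # radius of the current output row
            y = int(round(cy + r * np.sin(theta)))
            x = int(round(cx + r * np.cos(theta)))
            if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]:
                out[i, j] = img[y, x]
    return out
```

Each column of the output then corresponds to the skin region seen from one direction around the 360-degree mirror.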
- steps S404 to S409 processing similar to that in steps S302 to S307 in FIG. 25 is performed, and a body hair removal image obtained by removing body hair from the skin image is generated and output to the subsequent stage.
- probe 511 it is also possible to apply the probe 511 to the image processing system 301 in FIG. 18 and detect a hair region using a ring-shaped image photographed by the probe 511.
- it is not necessary to make the mirror 522 into a shape that surrounds the periphery of the skin region over 360 degrees; a shape that surrounds only a part of the periphery is also possible.
- In the sixth embodiment, an image from which body hair has been removed is generated using skin images obtained by photographing the skin from different directions with a single photographing device that uses MLA (Micro Lens Array) technology.
- FIG. 30 is a block diagram illustrating a functional configuration example of an image processing system 601 that is the sixth embodiment of the image processing system to which the present technology is applied.
- portions corresponding to those in FIG. 26 are denoted by the same reference numerals, and description of portions having the same processing will be omitted as appropriate since description thereof will be repeated.
- the image processing system 601 is configured to include a probe 611 and an image processing device 612.
- the probe 611 is configured to include an imaging device 621 that employs MLA technology.
- In the imaging device 621, a plurality of microlenses are arranged in a grid, and a plurality of image sensors are arranged for each microlens.
- FIG. 31 schematically shows an arrangement example of a microlens and an image sensor.
- Image sensors 652A to 652D and image sensors 653A to 653D are arranged one by one with respect to the micro lenses 651A to 651D arranged in the horizontal direction. Further, the imaging elements 652A to 652D are arranged so that the relative positions with respect to the micro lenses 651A to 651D are the same. In addition, the imaging elements 653A to 653D are arranged at positions different from the imaging elements 652A to 652D so that the relative positions with respect to the microlenses 651A to 651D are the same.
- microlenses 651A to 651D when it is not necessary to distinguish the microlenses 651A to 651D, they are simply referred to as a microlens 651.
- imaging elements 652 when it is not necessary to individually distinguish the imaging elements 652A to 652D, the imaging elements 652A to 652D are simply referred to as imaging elements 652, and when it is not necessary to individually distinguish the imaging elements 653A to 653D, they are simply referred to as imaging elements 653.
- the microlens 651, the image sensor 652, and the image sensor 653 are arranged in a grid with the same positional relationship.
- the imaging device 621 supplies the captured skin image to the image acquisition unit 631.
- the image acquisition unit 631 supplies the skin image captured by the imaging device 621 to the image reconstruction unit 632.
- the image reconstruction unit 632 generates a plurality of skin images by separating the pixels of the skin image into groups according to the image sensor group that captured them. For example, an image Da in which the pixels corresponding to the group of image sensors 652 in FIG. 31 are aggregated, and an image Db in which the pixels corresponding to the group of image sensors 653 are aggregated, are generated.
- the images Da and Db are images with parallax, in which the same skin region is viewed from two different directions. Therefore, for example, when a skin region where the body hair BH21 is present is photographed, the skin region concealed by the body hair region Aa of the image Da differs from the skin region concealed by the body hair region Ab of the image Db, just as in the case of photographing from different directions with two photographing devices.
- the direction in which the skin is photographed can be changed by changing the relative position of the image sensor with respect to the microlens.
- the number of directions from which the skin is photographed can be increased by increasing the number of image sensors arranged for each microlens
- the image reconstruction unit 632 stores the generated skin image in the storage unit 133. Further, the image reconstruction unit 632 notifies the image selection unit 441 of the body hair removal unit 431 that the skin image has been acquired.
- step S501 the image processing system 601 acquires a skin image.
- the imaging device 621 captures human skin.
- the imaging device 621 takes a close-up of the skin from a short distance in a state where the probe 611 is in contact with or close to the skin.
- the imaging device 621 supplies a skin image obtained as a result of imaging to the image acquisition unit 631.
- the image acquisition unit 631 supplies the acquired skin image to the image reconstruction unit 632.
- the image reconstruction unit 632 reconstructs the skin image. Specifically, the image reconstruction unit 632 separates the pixels of the skin image photographed by the photographing device 621 into groups according to the image sensors that captured them and aggregates each group, thereby generating a plurality of skin images photographed from different directions. Then, the image reconstruction unit 632 stores the generated skin images in the storage unit 133. Further, the image reconstruction unit 632 notifies the image selection unit 441 of the body hair removal unit 431 that the skin images have been acquired.
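Assuming, for illustration, that each microlens covers a small square block of image sensors (a 2x2 block here), the regrouping performed by the image reconstruction unit 632 can be sketched as strided slicing: pixels at the same position under every microlens form one sub-aperture image seen from one direction.

```python
import numpy as np

def reconstruct_subaperture(raw, block=2):
    """Regroup an MLA raw image into per-direction sub-aperture views.

    raw   : 2-D array where each `block`x`block` tile lies under one microlens
    block : assumed sensors-per-microlens along each axis
    """
    # Taking every block-th pixel starting at (dy, dx) collects the pixels
    # captured by the image sensor group at that relative position.
    return [raw[dy::block, dx::block]
            for dy in range(block) for dx in range(block)]
```

With `block=2` this yields four views (analogous to the images Da and Db built from the groups of image sensors 652 and 653).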
- steps S503 to S508 processing similar to that in steps S302 to S307 in FIG. 25 is performed, and an image obtained by removing body hair from the skin image is generated.
- a seventh embodiment of the present technology will be described with reference to FIGS.
- a beauty analysis function for analyzing the skin condition is added to the image processing system 601 in FIG.
- FIG. 33 is a block diagram illustrating a functional configuration example of an image processing system 701 that is the seventh embodiment of the image processing system to which the present technology is applied.
- parts corresponding to those in FIG. 30 are denoted by the same reference numerals, and description of parts having the same processing will be omitted as appropriate because the description will be repeated.
- The image processing system 701 is different from the image processing system 601 in FIG. 30 in that an image processing device 711 is provided instead of the image processing device 612 and a display device 712 is added.
- the image processing device 711 is different from the image processing device 612 in that a beauty analysis unit 731 is added.
- the beauty analysis unit 731 analyzes a human skin state based on the hair removal image supplied from the body hair removal unit 431.
- the beauty analysis unit 731 supplies information indicating the analysis result to the display device 712.
- the display device 712 displays the analysis result of the human skin condition.
- FIG. 34 is a block diagram illustrating a configuration example of functions of the beauty analysis unit 731.
- the beauty analysis unit 731 includes a peripheral light amount correction unit 751, a blur processing unit 752, a color conversion unit 753, a binarization processing unit 754, a skin hill detection unit 755, a texture analysis unit 756, and a presentation control unit 757.
- the peripheral light amount correction unit 751 corrects the peripheral light amount of the hair removal image, and supplies the corrected hair removal image to the blurring processing unit 752.
- the blurring processing unit 752 performs a blurring process on the hair removal image and supplies the hair removal image subjected to the blurring process to the color conversion unit 753.
- the color conversion unit 753 performs color conversion of the body hair removal image, and supplies the hair removal image after color conversion to the binarization processing unit 754.
- the binarization processing unit 754 performs binarization processing on the body hair removal image and supplies the generated binarized image to the skin hill detection unit 755.
- the skin hill detection unit 755 detects the area of each skin hill in the skin image (hair removal image) based on the binarized image, and generates a histogram indicating the distribution of skin hill areas.
- the skin hill detection unit 755 supplies information indicating the generated histogram to the texture analysis unit 756.
- the texture analysis unit 756 analyzes the skin texture state based on the histogram indicating the distribution of skin hill areas, and supplies the analysis result to the presentation control unit 757.
- the presentation control unit 757 causes the display device 712 to present the analysis result of the skin texture state.
- step S601 the hair removal process described above with reference to FIG. 32 is performed.
- a hair removal image is generated by removing the hair from the skin image, and the generated hair removal image is supplied from the hair removal unit 431 to the peripheral light amount correction unit 751 of the beauty analysis unit 731.
- an image D31a in which the hair BH31 is removed from the image D31 in which the hair BH31 is shown is generated and supplied to the peripheral light amount correction unit 751.
- step S602 the peripheral light amount correction unit 751 corrects the peripheral light amount of the hair removal image.
- the peripheral light amount correction unit 751 supplies the corrected hair removal image to the blurring processing unit 752.
- step S603 the blurring processing unit 752 performs a blurring process on the body hair removal image.
- the blur processing unit 752 supplies the hair removal image subjected to the blur processing to the color conversion unit 753.
- step S604 the color conversion unit 753 performs color conversion.
- the color conversion unit 753 converts the color space of the hair removal image into a predetermined color space such as an L * a * b * color system or HSV space that separates luminance and color difference.
- the color conversion unit 753 supplies the hair removal image after color conversion to the binarization processing unit 754.
- the binarization processing unit 754 performs binarization processing. For example, when the color space of the hair removal image has been converted into the L * a * b * color system or the HSV space, the binarization processing unit 754 converts the hair removal image, based on a preset threshold value, into a two-tone black-and-white binarized image consisting of pixels whose lightness is equal to or higher than the threshold and pixels whose lightness is lower than the threshold. The binarization processing unit 754 supplies the generated binarized image to the skin hill detection unit 755.
- a binarized image D31b is generated from the image D31a by the processing of steps S602 to S605.
- the binarization process may be performed using only one color (R, G, or B) of the RGB skin image without performing the color conversion of the skin image.
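The thresholding step itself is simple; the sketch below operates on a single lightness (or single color) channel, with the threshold value an assumed parameter rather than one specified by the text.

```python
import numpy as np

def binarize(channel, threshold=128):
    """Two-tone binarization: pixels at or above the threshold become white
    (255), the rest black (0). `channel` may be a lightness plane or a
    single R, G, or B plane, as the text allows."""
    return np.where(channel >= threshold, 255, 0).astype(np.uint8)
```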
- the skin hill detection unit 755 detects the distribution of skin hill areas. Specifically, the skin hill detection unit 755 recognizes each white pixel region surrounded by black pixels in the binarized image as a skin hill surrounded by a skin groove, and detects the area of each skin hill. Further, the skin hill detection unit 755 generates a histogram indicating the distribution of the detected skin hill areas. Then, the skin hill detection unit 755 supplies information indicating the generated histogram to the texture analysis unit 756.
- a histogram is generated based on the binarized image D31b, with the horizontal axis representing classes based on skin hill area and the vertical axis representing frequency.
- the texture analysis unit 756 analyzes the texture of the skin based on the distribution of skin hill areas. For example, the texture analysis unit 756 evaluates how well the skin is textured based on the bias of the histogram frequencies; it determines that the skin is well textured when the frequencies are strongly biased toward a specific class (area). Further, for example, the texture analysis unit 756 analyzes the fineness of the texture based on the class (area) in which the histogram frequency peaks; it determines that the skin has a fine texture when the area of the peak class is small. Then, the texture analysis unit 756 supplies the analysis result to the presentation control unit 757.
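A minimal sketch of the skin hill detection and the peak-based fineness judgment: white regions are found by 4-connected flood fill, their areas are binned into a histogram, and the texture is called fine when the peak falls in a small-area class. The bin count and the "small class" limit are assumed parameters, not values given in the text.

```python
import numpy as np
from collections import deque

def mound_areas(binary):
    """Areas of white regions (skin hills) enclosed by black skin grooves,
    found by 4-connected flood fill over a 0/1 binarized image."""
    visited = np.zeros(binary.shape, dtype=bool)
    areas = []
    h, w = binary.shape
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not visited[sy, sx]:
                area, q = 0, deque([(sy, sx)])
                visited[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    area += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            q.append((ny, nx))
                areas.append(area)
    return areas

def texture_is_fine(areas, n_bins=8, fine_bin_limit=2):
    """Heuristic from the text: texture is fine when the histogram peak
    falls in a small-area class (parameters are assumptions)."""
    hist, _ = np.histogram(areas, bins=n_bins)
    return int(np.argmax(hist)) < fine_bin_limit
```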
- step S608 the display device 712 displays the analysis result.
- the presentation control unit 757 generates data for presenting the analysis result of the skin texture state and supplies the data to the display device 712.
- the display device 712 presents the analysis result to the user by displaying an image indicating the analysis result of the skin texture state based on the acquired data.
- FIG. 37 shows an example in which a beauty analysis is performed without removing the hair BH31 from the image D31.
- In this case, the area of each skin hill cannot be accurately detected. More specifically, since the skin hills are subdivided by the body hair BH31, skin hills with smaller areas than usual, and in larger numbers, are detected. Therefore, in the histogram showing the distribution of skin hill areas, the frequency of classes corresponding to areas that do not normally occur increases, or the frequency distribution becomes non-uniform. As a result, the analysis accuracy of the skin texture state is lowered.
- the analysis accuracy of the texture state of the skin is improved by performing the beauty analysis using the image obtained by removing the body hair from the skin image as described above.
- Modification 1: Modification of the system configuration
- For example, only the sum of the difference values of each difference image may be stored, and when detecting the hair region, the difference image having the smallest difference value sum may be regenerated using the corresponding homography matrix. Thereby, the capacity of the storage unit 133 can be reduced.
- the homography matrix is recalculated based on the pair of feature points used in the generation process of the difference image that minimizes the difference value
- the difference image may be regenerated using the recalculated homography matrix.
- It is also possible to remove body hair by projecting pixels in units of regions of a predetermined area using the homography matrix. In this case, it is desirable to perform the projection after correcting image distortion due to the difference in shooting direction.
- It is also possible to remove the hair from the projection source image by projecting the pixels in the non-hair region of the projection destination image onto the projection source image. That is, a hair removal image in which the hair is removed from the projection source image can be generated by replacing the pixels of the region of the projection source image corresponding to the non-hair region of the projection destination image (that is, the body hair region of the projection source image) with the corresponding pixels of the projection destination image (the pixels of the non-hair region).
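Once the other view has been projected into the same coordinate system, the replacement itself is a masked copy. The sketch below assumes `src_warped` has already been homography-aligned with `dst`; the alignment step is not shown.

```python
import numpy as np

def fill_hair_from_other_view(dst, src_warped, hair_mask):
    """Replace hair-region pixels of one image with the corresponding pixels
    of the other (already projected) view.

    dst        : image whose hair region is to be filled
    src_warped : other view, assumed aligned with dst via the homography
    hair_mask  : boolean mask of the hair region in dst
    """
    out = dst.copy()          # keep the original image intact
    out[hair_mask] = src_warped[hair_mask]
    return out
```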
- When a photographing apparatus that photographs the skin from directly above is provided, such as the photographing apparatus 121-2 in FIG. 2 or FIG. 19, it is desirable to remove body hair from the skin image photographed by that apparatus, in order to finally obtain an image with little distortion.
- In the hair region detection unit 132 in FIG. 13 and FIG. 24, it is possible to detect a hair region from a plurality of skin images by an arbitrary method different from the method described above.
- In that case as well, it is sufficient to calculate the homography matrix between the images as described above and remove the hair from the skin image by projecting the pixels between the images using the calculated homography matrix.
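Projecting a pixel with a homography matrix reduces, per point, to a homogeneous multiply followed by division by the third coordinate:

```python
import numpy as np

def project_point(H, x, y):
    """Apply a 3x3 homography H to pixel coordinates (x, y): multiply the
    homogeneous vector, then normalize by the third coordinate."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

Warping a whole image applies this mapping (usually inverted, with interpolation) to every destination pixel.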
- For example, the feature point extraction and the homography matrix calculation process may be omitted, and projective transformation may be performed using a previously obtained homography matrix.
- In this case, however, the distance between the imaging device and the skin may change depending on the force with which the probe is pressed against the skin, resulting in an error in the homography matrix.
- the influence of the error of the homography matrix may be reduced by reducing the resolution of the skin image.
- The above description has shown an example of projective transformation using a homography matrix, but projective transformation may also be performed using other projection matrices or other methods.
- projective transformation can be performed using a method that is an extension of affine transformation described in JP-A-2009-258868.
- For example, it is also possible to analyze the amount of hair based on the detection result of the hair region.
- Further, for example, it is also possible to perform a hair quality diagnosis (degree of damage, etc.).
- The object of the beauty analysis is not particularly limited, as long as it can be analyzed using a skin image. For example, it is possible to analyze, using the body hair removal image, the shape of the skin other than texture (for example, wrinkles), the color of the skin (for example, redness, dullness, melanin amount, etc.), and so on. Further, for example, it is possible to analyze the health condition of the skin (for example, eczema, rash, etc.) using the body hair removal image.
- the detection target is not limited to body hair.
- That is, the present technology can be used to detect the area in which the detection target is shown in a plurality of images obtained by photographing the detection target from different directions, such that at least a part of the detection target exists in front of the background (on the imaging apparatus side), at least a part of the background overlaps between the images, and part of the background is hidden by the detection target.
- an example is shown in which the detection target is body hair and the background is skin.
- the present technology can also be applied when the detection target is a person 812 and the background is a landscape.
- the photographing device 811L and the photographing device 811R photograph the person 812 from different directions so that at least a part of the background scenery overlaps.
- the image DL51 is an image photographed by the photographing device 811L
- the image DR51 is an image photographed by the photographing device 811R.
- the difference image DS51 is generated by the above-described processing using the image DL51 as the projection source image and the image DR51 as the projection destination image.
- the difference values in the area AL52 corresponding to the area AL51 where the person 812 is shown in the image DL51 and the area AR52 corresponding to the area AR51 where the person 812 is shown in the image DR51 are large. Therefore, a region including the region AL52 and the region AR52 is detected as a detection target candidate region.
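The difference-image thresholding that yields the detection target candidate region can be sketched as follows; the destination image is compared with the projected view, and pixels whose absolute difference meets a threshold (an assumed value here) form the candidate mask.

```python
import numpy as np

def candidate_mask(dst, projected, threshold=30):
    """Difference image between the projection destination image and the
    projected image; pixels at or above the threshold form the detection
    target candidate region."""
    # Widen the type first so the subtraction cannot wrap around on uint8.
    diff = np.abs(dst.astype(np.int32) - projected.astype(np.int32))
    return diff >= threshold
```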
- The detection target candidate area is then separated into an area where the person 812 is actually shown in the image DR51, which is the projection destination image (hereinafter referred to as the detection target area), and the other areas (hereinafter referred to as non-detection target areas).
- the number of colors of the person 812 as the detection target and the background scene is larger than that in the case of detecting body hair.
- In addition, the detection target candidate area is detected as being separated into the two areas, the detection target area and the non-detection target area. Therefore, the detection target region and the non-detection target region are separated by a method different from the one used when detecting body hair.
- Here, the image in the area AR53, which corresponds to the area AR52 of the difference image DS51 and in which the person 812 is actually shown, differs greatly from the images in the surrounding areas.
- the image in the area AL53 corresponding to the area AL52 of the difference image DS51 is very similar to the image in the surrounding area. For example, using this property, two regions are separated. Note that the region AR53 and the region AL53 are actually regions having substantially the same shape as the region AR51 and the region AL51, respectively, but are simplified and shown as rectangular regions here for easy understanding.
- As the feature amount of a pixel, for example, luminance, color signals, peripheral frequency information, and the like can be used. These feature amounts may be used alone or in combination.
- each small region can be set arbitrarily.
- the shape of each small region may be a rectangle or an ellipse, and the areas may be different.
- the histograms of the small areas are normalized so that the total frequency is equal to 1.
- it is desirable that the small region around the detection target candidate region is set not to overlap the detection target candidate region and as close as possible to the detection target candidate region.
- the histogram similarity is calculated between the small area in the detection target candidate area and the surrounding small areas.
- As the histogram similarity index indicating histogram similarity, indices such as the Bhattacharyya coefficient, cosine similarity, Pearson correlation coefficient, and distances between vectors (Euclidean distance, etc.) can be used. These indices may be used alone or in combination.
- Then, based on the histogram similarity index and a predetermined threshold value, it is determined whether or not the histograms are similar.
- An area containing many small areas whose histograms are similar to those of the surrounding small areas is set as a non-detection target area, and the other area is set as the detection target area.
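Using the Bhattacharyya coefficient as the similarity index, the separation rule can be sketched as below; the similarity threshold is an assumed parameter, and the histograms are assumed to be normalized so their frequencies sum to 1, as described above.

```python
import numpy as np

def bhattacharyya(h1, h2):
    """Bhattacharyya coefficient of two normalized histograms (1 = identical)."""
    return float(np.sum(np.sqrt(h1 * h2)))

def is_non_detection(region_hist, surrounding_hists, threshold=0.9):
    """A candidate sub-region whose histogram is similar to those of the
    surrounding sub-regions is treated as non-detection target."""
    sims = [bhattacharyya(region_hist, s) for s in surrounding_hists]
    return float(np.mean(sims)) >= threshold
```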
- As a result, the area AL53 is set as a non-detection target area, and the area AR53 is set as the detection target area.
- In this example, the area of the person 812 to be detected is large, and the distance from the imaging devices 811L and 811R to the person 812 is also long. Therefore, it is necessary to sufficiently separate the imaging device 811L and the imaging device 811R so that the background area concealed by the area AL51 of the image DL51 and the background area concealed by the area AR51 of the image DR51 do not overlap each other, that is, so that the area AL52 and the area AR52 of the difference image DS51 do not overlap.
- the photographer may move between two points separated from each other and take a photograph. This is particularly effective when the detection target is a non-moving object such as a building.
- Alternatively, a plurality of images obtained by a plurality of photographers photographing the detection target from their respective points may be used.
- For example, such images may be collected via a service such as an SNS (Social Network Service) or a cloud service.
- When the shooting conditions at each point, such as the shooting date and time or sunshine conditions, differ and the appearance of each image therefore differs, it is desirable to correct each image in advance so that the appearance matches between the images, and then perform the detection target processing.
- an application may be considered that does not remove the detection target but leaves the detection target and removes the background.
- the detection target is body hair
- For example, an application can be considered in which the background other than the person 831 is removed from the image D101, and characters and the like are further added to create a picture postcard D102.
- the series of processes described above can be executed by hardware or can be executed by software.
- a program constituting the software is installed in the computer.
- Here, the computer includes a computer incorporated in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
- FIG. 43 is a block diagram showing an example of the hardware configuration of a computer that executes the above-described series of processing by a program.
- In the computer, a CPU (Central Processing Unit) 1001, a ROM (Read Only Memory) 1002, and a RAM (Random Access Memory) 1003 are connected to each other via a bus 1004.
- an input / output interface 1005 is connected to the bus 1004.
- An input unit 1006, an output unit 1007, a storage unit 1008, a communication unit 1009, and a drive 1010 are connected to the input / output interface 1005.
- the input unit 1006 includes a keyboard, a mouse, a microphone, and the like.
- the output unit 1007 includes a display, a speaker, and the like.
- the storage unit 1008 includes a hard disk, a nonvolatile memory, and the like.
- the communication unit 1009 includes a network interface.
- the drive 1010 drives a removable medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
- In the computer configured as described above, for example, the CPU 1001 loads the program stored in the storage unit 1008 into the RAM 1003 via the input / output interface 1005 and the bus 1004 and executes it, whereby the above-described series of processing is performed.
- the program executed by the computer (CPU 1001) can be provided by being recorded on the removable medium 1011 as a package medium, for example.
- the program can be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
- the program can be installed in the storage unit 1008 via the input / output interface 1005 by attaching the removable medium 1011 to the drive 1010. Further, the program can be received by the communication unit 1009 via a wired or wireless transmission medium and installed in the storage unit 1008. In addition, the program can be installed in the ROM 1002 or the storage unit 1008 in advance.
- The program executed by the computer may be a program in which processing is performed in time series in the order described in this specification, or a program in which processing is performed in parallel or at necessary timing, such as when a call is made.
- In this specification, a system means a set of a plurality of components (devices, modules (parts), etc.), regardless of whether all the components are in the same housing. Accordingly, a plurality of devices housed in separate housings and connected via a network, and a single device housing a plurality of modules in one housing, are both systems.
- the present technology can take a cloud computing configuration in which one function is shared by a plurality of devices via a network and is jointly processed.
- each step described in the above flowchart can be executed by one device or can be shared by a plurality of devices.
- the plurality of processes included in the one step can be executed by being shared by a plurality of apparatuses in addition to being executed by one apparatus.
- In addition, the present technology can also take the following configurations.
- (1) An image processing apparatus including a detection target region detection unit that detects a region in which a detection target appears in an image obtained by photographing the detection target, wherein the detection target region detection unit includes: a projective transformation unit that generates a projected image by projectively transforming a first image obtained by photographing the detection target into the coordinate system of a second image obtained by photographing the detection target from a direction different from that of the first image; a difference image generation unit that generates a difference image between the second image and the projected image; and a region detection unit that detects, in the difference image, a candidate region consisting of pixels whose difference values are equal to or greater than a predetermined threshold, and divides the region of the second image corresponding to the candidate region into a detection target region in which the detection target appears and a remaining non-detection target region.
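As a rough illustration of configuration (1), the sketch below warps a first image into the coordinate system of a second image with a given 3×3 homography (nearest-neighbour inverse mapping) and thresholds the absolute difference into a candidate mask. This is a minimal pure-Python reconstruction on small integer grids, not the patent's implementation; all function names are illustrative, and a real system would estimate the homography from the images themselves.

```python
def invert_3x3(m):
    # inverse via the adjugate; adequate for a well-conditioned 3x3 homography
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[v / det for v in row] for row in adj]

def warp(img, H, height, width, fill=0):
    # project `img` into the target coordinate system: inverse-map every
    # output pixel through H^-1 and sample nearest-neighbour
    Hinv = invert_3x3(H)
    out = [[fill] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            xs = Hinv[0][0]*x + Hinv[0][1]*y + Hinv[0][2]
            ys = Hinv[1][0]*x + Hinv[1][1]*y + Hinv[1][2]
            w = Hinv[2][0]*x + Hinv[2][1]*y + Hinv[2][2]
            sx, sy = round(xs / w), round(ys / w)
            if 0 <= sy < len(img) and 0 <= sx < len(img[0]):
                out[y][x] = img[sy][sx]
    return out

def candidate_mask(img2, projected, threshold):
    # pixels whose difference value is at or above the threshold form the
    # candidate region: the aligned skin background cancels out, while hair,
    # which lies in front of the skin, appears at different positions
    return [[1 if abs(a - b) >= threshold else 0 for a, b in zip(r2, rp)]
            for r2, rp in zip(img2, projected)]
```

In a toy pair where the background is already aligned (H = identity) and a dark hair pixel sits in a different column in each view, the mask marks both apparent hair positions; configuration (1) then splits that candidate region into the true detection target region and the rest.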
- (2) The image processing apparatus according to (1), further including a detection target removal unit that removes the detection target from an image obtained by photographing the detection target, wherein the detection target removal unit includes: the detection target region detection unit; and a removal unit that, based on the detection result of the detection target region detection unit, removes the detection target from at least one of the first image and the second image by projecting pixels in the region of the other image corresponding to the region of the one image in which the detection target appears onto the one image and replacing the corresponding pixels.
- (3) The image processing apparatus according to (2), wherein the removal unit removes the detection target from the second image by projecting at least the pixels in the region of the first image corresponding to the detection target region of the second image onto the second image and replacing the corresponding pixels.
- (4) The image processing apparatus according to (2), wherein the removal unit removes the detection target from the first image by projecting at least the pixels in the non-detection target region of the second image onto the first image and replacing the corresponding pixels.
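Configurations (2) to (4) replace pixels rather than inpaint: wherever the mask marks the detection target in one image, the co-registered pixel from the other view (already projected into the same coordinate system) is substituted. A minimal, hypothetical sketch:

```python
def remove_target(image, projected_other, mask):
    # wherever mask == 1 (detection target visible in `image`), substitute the
    # pixel from the other view projected into the same coordinate system
    return [[p if m else o for o, p, m in zip(row_o, row_p, row_m)]
            for row_o, row_p, row_m in zip(image, projected_other, mask)]
```

Because hair in front of the skin occludes different background patches in each view, the projected pixel from the other view usually shows clean skin at exactly the masked positions.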
- (5) The image processing apparatus according to any one of (2) to (4), wherein the detection target removal unit selects two images from three or more images obtained by photographing the detection target from mutually different directions and newly generates an image from which the detection target has been removed using the two selected images, and thereafter repeats, until no images remain, the process of newly generating an image from which the detection target has been removed using the newly generated image and one of the remaining images.
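Structurally, configuration (5) is a left fold of a pairwise removal step over the image list. The shape-only sketch below abstracts the pairwise step as a callable; it is illustrative only and carries no claim about how the pairwise removal itself works.

```python
def remove_iteratively(images, remove_pair):
    # select two images, produce a cleaned image, then repeatedly combine the
    # cleaned result with one remaining image until none are left
    result, rest = images[0], list(images[1:])
    while rest:
        result = remove_pair(result, rest.pop(0))
    return result
```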
- (6) The image processing apparatus according to any one of (1) to (5), wherein the detection target region detection unit further includes: a feature point extraction unit that extracts feature points of the first image and feature points of the second image; an association unit that associates the feature points of the first image with the feature points of the second image; and a projection matrix calculation unit that calculates, based on at least some of the pairs of feature points of the first image and feature points of the second image associated by the association unit, a projection matrix for projecting the first image onto the coordinate system of the second image, and wherein the projective transformation unit generates the projected image using the projection matrix.
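Configuration (6) estimates the projection matrix from matched feature points. With exactly four correspondences, the standard direct linear transform reduces to an 8×8 linear system (fixing the bottom-right entry of H to 1), solved below with plain Gauss–Jordan elimination. This is an illustrative textbook reconstruction, not the patent's own algorithm, and it assumes the four points are in general position.

```python
def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting on the augmented matrix
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def homography_from_pairs(src, dst):
    # direct linear transform: each pair (x, y) -> (u, v) gives two rows of
    # the 8x8 system for the unknowns h0..h7 (h8 fixed to 1)
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b)
    return [[h[0], h[1], h[2]], [h[3], h[4], h[5]], [h[6], h[7], 1.0]]
```

With more than four associated pairs, configuration (7) suggests trying several four-pair combinations and keeping the matrix whose projection best matches the second image.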
- (7) The image processing apparatus according to (6), wherein the projection matrix calculation unit calculates a plurality of the projection matrices based on combinations of a plurality of the feature point pairs, the projective transformation unit generates a plurality of the projected images using the respective projection matrices, the difference image generation unit generates a plurality of the difference images between the second image and each of the plurality of projected images, and the region detection unit detects the candidate region using the difference image, among the plurality of difference images, whose difference from the second image is smallest.
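Configuration (7) generates one projected image per candidate projection matrix and keeps the one closest to the second image. The sketch below scores candidates by sum of absolute differences; the specific score is an assumption, since the claim only requires that the difference be minimized.

```python
def best_projection_index(img2, projections):
    # score each candidate projection by total absolute difference against
    # the second image and return the index of the best (smallest) one
    def sad(proj):
        return sum(abs(a - b)
                   for r2, rp in zip(img2, proj)
                   for a, b in zip(r2, rp))
    return min(range(len(projections)), key=lambda i: sad(projections[i]))
```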
- (8) The image processing apparatus according to any one of (1) to (7), wherein the region detection unit divides the candidate region into the detection target region and the non-detection target region by comparing the image in the region of the second image corresponding to the candidate region with the surrounding image.
- (9) The image processing apparatus according to any one of (1) to (8), wherein the detection target region detection unit detects the detection target region in each of three or more images obtained by photographing the detection target from mutually different directions, the apparatus further including a region synthesis unit that synthesizes the detection target region in an image selected from the three or more images with regions obtained by projecting the detection target regions of the remaining images onto the coordinate system of the selected image.
- (10) An image processing method including the steps, performed by an image processing apparatus, of: generating a projected image by projectively transforming a first image obtained by photographing a detection target into the coordinate system of a second image obtained by photographing the detection target from a direction different from that of the first image; generating a difference image between the second image and the projected image; detecting, in the difference image, a candidate region consisting of pixels whose difference values are equal to or greater than a predetermined threshold; and dividing the region of the second image corresponding to the candidate region into a detection target region in which the detection target appears and a remaining non-detection target region.
- (11) A program for causing a computer to execute processing including the steps of: generating a projected image by projectively transforming a first image obtained by photographing a detection target into the coordinate system of a second image obtained by photographing the detection target from a direction different from that of the first image; generating a difference image between the second image and the projected image; detecting, in the difference image, a candidate region consisting of pixels whose difference values are equal to or greater than a predetermined threshold; and dividing the region of the second image corresponding to the candidate region into a detection target region in which the detection target appears and a remaining non-detection target region.
- (12) An image processing system including: an imaging unit that photographs a detection target; and a detection target region detection unit that detects a region in which the detection target appears in an image captured by the imaging unit, wherein the detection target region detection unit includes: a projective transformation unit that generates a projected image by projectively transforming a first image obtained by photographing the detection target with the imaging unit into the coordinate system of a second image obtained by photographing the detection target with the imaging unit from a direction different from that of the first image; a difference image generation unit that generates a difference image between the second image and the projected image; and a region detection unit that detects, in the difference image, a candidate region consisting of pixels whose difference values are equal to or greater than a predetermined threshold, and divides the region of the second image corresponding to the candidate region into a detection target region in which the detection target appears and a remaining non-detection target region.
- (13) The image processing system according to (12), further including a detection target removal unit that removes the detection target from an image obtained by photographing the detection target with the imaging unit, wherein the detection target removal unit includes: the detection target region detection unit; and a removal unit that, based on the detection result of the detection target region detection unit, removes the detection target from at least one of the first image and the second image by projecting pixels in the region of the other image corresponding to the region of the one image in which the detection target appears onto the one image and replacing the corresponding pixels.
- (14) The image processing system according to (13), wherein the detection target removal unit selects two images from three or more images obtained by photographing the detection target with the imaging unit from mutually different directions and newly generates an image from which the detection target has been removed using the two selected images, and thereafter repeats, until no images remain, the process of newly generating an image from which the detection target has been removed using the newly generated image and one of the remaining images.
- (15) The image processing system according to any one of (12) to (14), wherein the detection target region detection unit further includes: a feature point extraction unit that extracts feature points of the first image and feature points of the second image; an association unit that associates the feature points of the first image with the feature points of the second image; and a projection matrix calculation unit that calculates, based on at least some of the pairs of feature points of the first image and feature points of the second image associated by the association unit, a projection matrix for projecting the first image onto the coordinate system of the second image, and wherein the projective transformation unit generates the projected image using the projection matrix.
- (16) The image processing system according to any one of (12) to (15), wherein the detection target region detection unit detects the detection target region in each of three or more images obtained by photographing the detection target with the imaging unit from mutually different directions, the system further including a region synthesis unit that synthesizes the detection target region in an image selected from the three or more images with regions obtained by projecting the detection target regions of the remaining images onto the coordinate system of the selected image.
- (17) The image processing system according to any one of (12) to (16), wherein the imaging unit includes a plurality of lenses arranged two-dimensionally and a plurality of imaging elements, a plurality of the imaging elements being arranged for each lens so that their relative positions with respect to each lens are the same, the system further including an image generation unit that generates a plurality of images, each captured by the imaging elements whose relative position with respect to the lenses is the same.
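Configuration (17) can be pictured on a raw micro-lens-array capture: the sensor pixel at the same offset (dy, dx) under every lens sees the subject from the same direction, so regrouping those pixels yields one sub-image per viewing direction. The toy sketch below assumes a rectangular pixel patch under each lens, purely for illustration; real MLA layouts and calibration are more involved.

```python
def split_views(raw, ph, pw):
    # raw: full sensor image; ph x pw: pixel patch under each micro lens.
    # Collect, for each offset (dy, dx) inside a patch, the pixel under
    # every lens -> one sub-image per viewing direction.
    lens_rows, lens_cols = len(raw) // ph, len(raw[0]) // pw
    return {(dy, dx): [[raw[ly * ph + dy][lx * pw + dx]
                        for lx in range(lens_cols)]
                       for ly in range(lens_rows)]
            for dy in range(ph) for dx in range(pw)}
```

Each of the resulting sub-images can then play the role of the "first" or "second" image in the detection pipeline, without moving the camera.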
- (18) The image processing system according to any one of (12) to (16), wherein the imaging unit photographs an image reflected in a mirror that radially surrounds at least a part of the periphery of the region including the detection target, the system further including: an image cutout unit that cuts out a plurality of images from the image of the mirror reflection captured by the imaging unit; and a geometric distortion correction unit that performs geometric distortion correction on the plurality of cut-out images.
- (19) The image processing system according to any one of (12) to (18), wherein the first image and the second image are close-up images of the detection target captured by the imaging unit.
- (20) An image processing apparatus including: a projective transformation unit that generates a projected image by projectively transforming a first image obtained by photographing body hair into the coordinate system of a second image obtained by photographing the body hair from a direction different from that of the first image; a difference image generation unit that generates a difference image between the second image and the projected image; a region detection unit that detects, in the difference image, a candidate region consisting of pixels whose difference values are equal to or greater than a predetermined threshold, and divides the region of the second image corresponding to the candidate region into a hair region in which the body hair appears and a remaining non-hair region; and a removal unit that, based on the detection result of the region detection unit, removes the body hair from at least one of the first image and the second image by projecting pixels in the region of the other image corresponding to the region of the one image in which the body hair appears onto the one image and replacing the corresponding pixels.
- 101 image processing system, 111 probe, 112 image processing device, 121-1 to 121-n imaging device, 131 image acquisition unit, 132 hair region detection unit, 151 feature point extraction unit, 152 association unit, 153 homography matrix calculation unit, 154 projection conversion unit, 155 difference image generation unit, 156 region detection unit, 201 image processing system, 231 body hair removal unit, 241 removal unit, 301 image processing system, 311 probe, 312 image processing device, 331 image acquisition unit, 332 image selection unit, 333 region detection unit, 401 image processing system, 411 image processing device, 431 body hair removal unit, 441 image selection unit, 442 removal unit, 501 image processing system, 511 probe, 512 image processing device, 521 imaging device, 522 mirror, 531 image acquisition unit, 532 image cropping unit, 533 geometric distortion correction unit, 601 image processing system, 611 probe, 612 image processing device, 621 imaging device, 631 image acquisition unit, 632 image reconstruction unit, 651A to 651D microlens, 652A to
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Veterinary Medicine (AREA)
- Dermatology (AREA)
- Public Health (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Heart & Thoracic Surgery (AREA)
- Pathology (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
Abstract
Description
1. First embodiment (example of detecting a hair region using skin images captured from two directions)
2. Second embodiment (example of removing body hair from a skin image using skin images captured from two directions)
3. Third embodiment (example of detecting a hair region using skin images captured from three or more directions)
4. Fourth embodiment (example of removing body hair from a skin image using skin images captured from three or more directions)
5. Fifth embodiment (example of photographing the skin from different directions with a single imaging device using a mirror)
6. Sixth embodiment (example of photographing the skin from different directions using micro lenses)
7. Seventh embodiment (example of application to beauty analysis)
8. Modifications
First, a first embodiment of the present technology will be described with reference to FIGS. 1 to 12.
FIG. 1 is a block diagram showing an example of the functional configuration of an image processing system 101, which is a first embodiment of an image processing system to which the present technology is applied.
FIG. 4 is a block diagram showing an example of the functional configuration of the hair region detection unit 132.
Next, the hair detection process executed by the image processing system 101 will be described with reference to the flowchart of FIG. 5.
Next, a second embodiment of the present technology will be described with reference to FIGS. 13 to 16. The second embodiment adds to the first embodiment a function of removing body hair from a skin image.
FIG. 13 is a block diagram showing an example of the functional configuration of an image processing system 201, which is a second embodiment of an image processing system to which the present technology is applied.
Next, the hair removal process executed by the image processing system 201 will be described with reference to the flowchart of FIG. 14.
Next, a third embodiment of the present technology will be described with reference to FIGS. 17 to 23. In the third embodiment, a hair region is detected using three or more skin images acquired by photographing the skin from three or more different directions.
FIG. 18 is a block diagram showing an example of the functional configuration of an image processing system 301, which is a third embodiment of an image processing system to which the present technology is applied. In the figure, parts corresponding to those in FIG. 1 are denoted by the same reference numerals, and descriptions of parts whose processing is the same are omitted as appropriate to avoid repetition.
Next, the hair region detection process executed by the image processing system 301 will be described with reference to the flowchart of FIG. 20.
Next, a fourth embodiment of the present technology will be described with reference to FIGS. 24 and 25. In the fourth embodiment, body hair is removed from a skin image using three or more skin images acquired by photographing the skin from three or more different directions.
FIG. 24 is a block diagram showing an example of the functional configuration of an image processing system 401, which is a fourth embodiment of an image processing system to which the present technology is applied. In the figure, parts corresponding to those in FIG. 13 or FIG. 18 are denoted by the same reference numerals, and descriptions of parts whose processing is the same are omitted as appropriate to avoid repetition.
Next, the hair removal process executed by the image processing system 401 will be described with reference to the flowchart of FIG. 25.
Next, a fifth embodiment of the present technology will be described with reference to FIGS. 26 to 29. In the fifth embodiment, an image from which body hair has been removed is generated using skin images obtained by photographing the skin from different directions with a single imaging device by means of a mirror.
FIG. 26 is a block diagram showing an example of the functional configuration of an image processing system 501, which is a fifth embodiment of an image processing system to which the present technology is applied. In the figure, parts corresponding to those in FIG. 24 are denoted by the same reference numerals, and descriptions of parts whose processing is the same are omitted as appropriate to avoid repetition.
Next, the hair removal process executed by the image processing system 501 will be described with reference to the flowchart of FIG. 28.
Next, a sixth embodiment of the present technology will be described with reference to FIGS. 30 to 32. In the sixth embodiment, an image from which body hair has been removed is generated using skin images obtained by photographing the skin from different directions with a single imaging device by means of MLA (Micro Lens Array) technology.
FIG. 30 is a block diagram showing an example of the functional configuration of an image processing system 601, which is a sixth embodiment of an image processing system to which the present technology is applied. In the figure, parts corresponding to those in FIG. 26 are denoted by the same reference numerals, and descriptions of parts whose processing is the same are omitted as appropriate to avoid repetition.
Next, the hair removal process executed by the image processing system 601 will be described with reference to the flowchart of FIG. 32.
Next, a seventh embodiment of the present technology will be described with reference to FIGS. 33 to 37. The seventh embodiment adds to the image processing system 601 of FIG. 30 a beauty analysis function for analyzing the condition of the skin.
FIG. 33 is a block diagram showing an example of the functional configuration of an image processing system 701, which is a seventh embodiment of an image processing system to which the present technology is applied. In the figure, parts corresponding to those in FIG. 30 are denoted by the same reference numerals, and descriptions of parts whose processing is the same are omitted as appropriate to avoid repetition.
FIG. 34 is a block diagram showing an example of the functional configuration of the beauty analysis unit 731. Here, a configuration example is shown for the case where the beauty analysis unit 731 analyzes the texture condition of human skin. The beauty analysis unit 731 is configured to include a peripheral light amount correction unit 751, a blur processing unit 752, a color conversion unit 753, a binarization processing unit 754, a skin-ridge detection unit 755, a texture analysis unit 756, and a presentation control unit 757.
Next, the beauty analysis process executed by the image processing system 701 will be described with reference to the flowchart of FIG. 35.
Modifications of the above-described embodiments of the present technology will be described below.
For example, instead of storing the difference images, the sum of the difference values of each difference image may be stored, and when the hair region is detected, the difference image may be regenerated using the homography matrix corresponding to the difference image with the minimum difference value. This makes it possible to reduce the capacity of the storage unit 133.
In the above description, an example of removing body hair from a skin image based on the detection result of the hair region has been shown, but the detection result of the hair region can also be applied to other uses.
In the present technology, the detection target is not limited to body hair. For example, when at least a part of the detection target lies in front of the background (on the imaging-device side), and the background regions hidden by the detection target differ among a plurality of images of the detection target captured from different directions such that at least parts of the background overlap, the present technology can be used to detect the region in which the detection target appears. In the series of embodiments described above, an example is shown in which the detection target is body hair and the background is skin.
The series of processes described above can be executed by hardware or by software. When the series of processes is executed by software, a program constituting the software is installed in a computer. Here, the computer includes a computer incorporated in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
(1) An image processing apparatus including a detection target region detection unit that detects a region in which a detection target appears in an image obtained by photographing the detection target, wherein the detection target region detection unit includes: a projective transformation unit that generates a projected image by projectively transforming a first image obtained by photographing the detection target into the coordinate system of a second image obtained by photographing the detection target from a direction different from that of the first image; a difference image generation unit that generates a difference image between the second image and the projected image; and a region detection unit that detects, in the difference image, a candidate region consisting of pixels whose difference values are equal to or greater than a predetermined threshold, and divides the region of the second image corresponding to the candidate region into a detection target region in which the detection target appears and a remaining non-detection target region.
(2) The image processing apparatus according to (1), further including a detection target removal unit that removes the detection target from an image obtained by photographing the detection target, wherein the detection target removal unit includes: the detection target region detection unit; and a removal unit that, based on the detection result of the detection target region detection unit, removes the detection target from at least one of the first image and the second image by projecting pixels in the region of the other image corresponding to the region of the one image in which the detection target appears onto the one image and replacing the corresponding pixels.
(3) The image processing apparatus according to (2), wherein the removal unit removes the detection target from the second image by projecting at least the pixels in the region of the first image corresponding to the detection target region of the second image onto the second image and replacing the corresponding pixels.
(4) The image processing apparatus according to (2), wherein the removal unit removes the detection target from the first image by projecting at least the pixels in the non-detection target region of the second image onto the first image and replacing the corresponding pixels.
(5) The image processing apparatus according to any one of (2) to (4), wherein the detection target removal unit selects two images from three or more images obtained by photographing the detection target from mutually different directions and newly generates an image from which the detection target has been removed using the two selected images, and thereafter repeats, until no images remain, the process of newly generating an image from which the detection target has been removed using the newly generated image and one of the remaining images.
(6) The image processing apparatus according to any one of (1) to (5), wherein the detection target region detection unit further includes: a feature point extraction unit that extracts feature points of the first image and feature points of the second image; an association unit that associates the feature points of the first image with the feature points of the second image; and a projection matrix calculation unit that calculates, based on at least some of the pairs of feature points of the first image and feature points of the second image associated by the association unit, a projection matrix for projecting the first image onto the coordinate system of the second image, and wherein the projective transformation unit generates the projected image using the projection matrix.
(7) The image processing apparatus according to (6), wherein the projection matrix calculation unit calculates a plurality of the projection matrices based on combinations of a plurality of the feature point pairs, the projective transformation unit generates a plurality of the projected images using the respective projection matrices, the difference image generation unit generates a plurality of the difference images between the second image and each of the plurality of projected images, and the region detection unit detects the candidate region using the difference image, among the plurality of difference images, whose difference from the second image is smallest.
(8) The image processing apparatus according to any one of (1) to (7), wherein the region detection unit divides the candidate region into the detection target region and the non-detection target region by comparing the image in the region of the second image corresponding to the candidate region with the surrounding image.
(9) The image processing apparatus according to any one of (1) to (8), wherein the detection target region detection unit detects the detection target region in each of three or more images obtained by photographing the detection target from mutually different directions, the apparatus further including a region synthesis unit that synthesizes the detection target region in an image selected from the three or more images with regions obtained by projecting the detection target regions of the remaining images onto the coordinate system of the selected image.
(10) An image processing method including the steps, performed by an image processing apparatus, of: generating a projected image by projectively transforming a first image obtained by photographing a detection target into the coordinate system of a second image obtained by photographing the detection target from a direction different from that of the first image; generating a difference image between the second image and the projected image; detecting, in the difference image, a candidate region consisting of pixels whose difference values are equal to or greater than a predetermined threshold; and dividing the region of the second image corresponding to the candidate region into a detection target region in which the detection target appears and a remaining non-detection target region.
(11) A program for causing a computer to execute processing including the steps of: generating a projected image by projectively transforming a first image obtained by photographing a detection target into the coordinate system of a second image obtained by photographing the detection target from a direction different from that of the first image; generating a difference image between the second image and the projected image; detecting, in the difference image, a candidate region consisting of pixels whose difference values are equal to or greater than a predetermined threshold; and dividing the region of the second image corresponding to the candidate region into a detection target region in which the detection target appears and a remaining non-detection target region.
(12) An image processing system including: an imaging unit that photographs a detection target; and a detection target region detection unit that detects a region in which the detection target appears in an image captured by the imaging unit, wherein the detection target region detection unit includes: a projective transformation unit that generates a projected image by projectively transforming a first image obtained by photographing the detection target with the imaging unit into the coordinate system of a second image obtained by photographing the detection target with the imaging unit from a direction different from that of the first image; a difference image generation unit that generates a difference image between the second image and the projected image; and a region detection unit that detects, in the difference image, a candidate region consisting of pixels whose difference values are equal to or greater than a predetermined threshold, and divides the region of the second image corresponding to the candidate region into a detection target region in which the detection target appears and a remaining non-detection target region.
(13) The image processing system according to (12), further including a detection target removal unit that removes the detection target from an image obtained by photographing the detection target with the imaging unit, wherein the detection target removal unit includes: the detection target region detection unit; and a removal unit that, based on the detection result of the detection target region detection unit, removes the detection target from at least one of the first image and the second image by projecting pixels in the region of the other image corresponding to the region of the one image in which the detection target appears onto the one image and replacing the corresponding pixels.
(14) The image processing system according to (13), wherein the detection target removal unit selects two images from three or more images obtained by photographing the detection target with the imaging unit from mutually different directions and newly generates an image from which the detection target has been removed using the two selected images, and thereafter repeats, until no images remain, the process of newly generating an image from which the detection target has been removed using the newly generated image and one of the remaining images.
(15) The image processing system according to any one of (12) to (14), wherein the detection target region detection unit further includes: a feature point extraction unit that extracts feature points of the first image and feature points of the second image; an association unit that associates the feature points of the first image with the feature points of the second image; and a projection matrix calculation unit that calculates, based on at least some of the pairs of feature points of the first image and feature points of the second image associated by the association unit, a projection matrix for projecting the first image onto the coordinate system of the second image, and wherein the projective transformation unit generates the projected image using the projection matrix.
(16) The image processing system according to any one of (12) to (15), wherein the detection target region detection unit detects the detection target region in each of three or more images obtained by photographing the detection target with the imaging unit from mutually different directions, the system further including a region synthesis unit that synthesizes the detection target region in an image selected from the three or more images with regions obtained by projecting the detection target regions of the remaining images onto the coordinate system of the selected image.
(17) The image processing system according to any one of (12) to (16), wherein the imaging unit includes a plurality of lenses arranged two-dimensionally and a plurality of imaging elements, a plurality of the imaging elements being arranged for each lens so that their relative positions with respect to each lens are the same, the system further including an image generation unit that generates a plurality of images, each captured by the imaging elements whose relative position with respect to the lenses is the same.
(18) The image processing system according to any one of (12) to (16), wherein the imaging unit photographs an image reflected in a mirror that radially surrounds at least a part of the periphery of the region including the detection target, the system further including: an image cutout unit that cuts out a plurality of images from the image of the mirror reflection captured by the imaging unit; and a geometric distortion correction unit that performs geometric distortion correction on the plurality of cut-out images.
(19) The image processing system according to any one of (12) to (18), wherein the first image and the second image are close-up images of the detection target captured by the imaging unit.
(20) An image processing apparatus including: a projective transformation unit that generates a projected image by projectively transforming a first image obtained by photographing body hair into the coordinate system of a second image obtained by photographing the body hair from a direction different from that of the first image; a difference image generation unit that generates a difference image between the second image and the projected image; a region detection unit that detects, in the difference image, a candidate region consisting of pixels whose difference values are equal to or greater than a predetermined threshold, and divides the region of the second image corresponding to the candidate region into a hair region in which the body hair appears and a remaining non-hair region; and a removal unit that, based on the detection result of the region detection unit, removes the body hair from at least one of the first image and the second image by projecting pixels in the region of the other image corresponding to the region of the one image in which the body hair appears onto the one image and replacing the corresponding pixels.
Claims (20)
- An image processing apparatus comprising a detection target region detection unit that detects a region in which a detection target appears in an image obtained by photographing the detection target, wherein the detection target region detection unit includes: a projective transformation unit that generates a projected image by projectively transforming a first image obtained by photographing the detection target into the coordinate system of a second image obtained by photographing the detection target from a direction different from that of the first image; a difference image generation unit that generates a difference image between the second image and the projected image; and a region detection unit that detects, in the difference image, a candidate region consisting of pixels whose difference values are equal to or greater than a predetermined threshold, and divides the region of the second image corresponding to the candidate region into a detection target region in which the detection target appears and a remaining non-detection target region.
- The image processing apparatus according to claim 1, further comprising a detection target removal unit that removes the detection target from an image obtained by photographing the detection target, wherein the detection target removal unit includes: the detection target region detection unit; and a removal unit that, based on the detection result of the detection target region detection unit, removes the detection target from at least one of the first image and the second image by projecting pixels in the region of the other image corresponding to the region of the one image in which the detection target appears onto the one image and replacing the corresponding pixels.
- The image processing apparatus according to claim 2, wherein the removal unit removes the detection target from the second image by projecting at least the pixels in the region of the first image corresponding to the detection target region of the second image onto the second image and replacing the corresponding pixels.
- The image processing apparatus according to claim 2, wherein the removal unit removes the detection target from the first image by projecting at least the pixels in the non-detection target region of the second image onto the first image and replacing the corresponding pixels.
- The image processing apparatus according to claim 2, wherein the detection target removal unit selects two images from three or more images obtained by photographing the detection target from mutually different directions and newly generates an image from which the detection target has been removed using the two selected images, and thereafter repeats, until no images remain, the process of newly generating an image from which the detection target has been removed using the newly generated image and one of the remaining images.
- The image processing apparatus according to claim 1, wherein the detection target region detection unit further includes: a feature point extraction unit that extracts feature points of the first image and feature points of the second image; an association unit that associates the feature points of the first image with the feature points of the second image; and a projection matrix calculation unit that calculates, based on at least some of the pairs of feature points of the first image and feature points of the second image associated by the association unit, a projection matrix for projecting the first image onto the coordinate system of the second image, and wherein the projective transformation unit generates the projected image using the projection matrix.
- The image processing apparatus according to claim 6, wherein the projection matrix calculation unit calculates a plurality of the projection matrices based on combinations of a plurality of the feature point pairs, the projective transformation unit generates a plurality of the projected images using the respective projection matrices, the difference image generation unit generates a plurality of the difference images between the second image and each of the plurality of projected images, and the region detection unit detects the candidate region using the difference image, among the plurality of difference images, whose difference from the second image is smallest.
- The image processing apparatus according to claim 1, wherein the region detection unit divides the candidate region into the detection target region and the non-detection target region by comparing the image in the region of the second image corresponding to the candidate region with the surrounding image.
- The image processing apparatus according to claim 1, wherein the detection target region detection unit detects the detection target region in each of three or more images obtained by photographing the detection target from mutually different directions, the apparatus further comprising a region synthesis unit that synthesizes the detection target region in an image selected from the three or more images with regions obtained by projecting the detection target regions of the remaining images onto the coordinate system of the selected image.
- An image processing method comprising the steps, performed by an image processing apparatus, of: generating a projected image by projectively transforming a first image obtained by photographing a detection target into the coordinate system of a second image obtained by photographing the detection target from a direction different from that of the first image; generating a difference image between the second image and the projected image; detecting, in the difference image, a candidate region consisting of pixels whose difference values are equal to or greater than a predetermined threshold; and dividing the region of the second image corresponding to the candidate region into a detection target region in which the detection target appears and a remaining non-detection target region.
- A program for causing a computer to execute processing comprising the steps of: generating a projected image by projectively transforming a first image obtained by photographing a detection target into the coordinate system of a second image obtained by photographing the detection target from a direction different from that of the first image; generating a difference image between the second image and the projected image; detecting, in the difference image, a candidate region consisting of pixels whose difference values are equal to or greater than a predetermined threshold; and dividing the region of the second image corresponding to the candidate region into a detection target region in which the detection target appears and a remaining non-detection target region.
- An image processing system comprising: an imaging unit that photographs a detection target; and a detection target region detection unit that detects a detection target region in which the detection target appears in an image captured by the imaging unit, wherein the detection target region detection unit includes: a projective transformation unit that generates a projected image by projectively transforming a first image obtained by photographing the detection target with the imaging unit into the coordinate system of a second image obtained by photographing the detection target with the imaging unit from a direction different from that of the first image; a difference image generation unit that generates a difference image between the second image and the projected image; and a region detection unit that detects, in the difference image, a candidate region consisting of pixels whose difference values are equal to or greater than a predetermined threshold, and divides the region of the second image corresponding to the candidate region into a detection target region in which the detection target appears and a remaining non-detection target region.
- The image processing system according to claim 12, further comprising a detection target removal unit that removes the detection target from an image obtained by photographing the detection target with the imaging unit, wherein the detection target removal unit includes: the detection target region detection unit; and a removal unit that, based on the detection result of the detection target region detection unit, removes the detection target from at least one of the first image and the second image by projecting pixels in the region of the other image corresponding to the region of the one image in which the detection target appears onto the one image and replacing the corresponding pixels.
- The image processing system according to claim 13, wherein the detection target removal unit selects two images from three or more images obtained by photographing the detection target with the imaging unit from mutually different directions and newly generates an image from which the detection target has been removed using the two selected images, and thereafter repeats, until no images remain, the process of newly generating an image from which the detection target has been removed using the newly generated image and one of the remaining images.
- The image processing system according to claim 12, wherein the detection target region detection unit further includes: a feature point extraction unit that extracts feature points of the first image and feature points of the second image; an association unit that associates the feature points of the first image with the feature points of the second image; and a projection matrix calculation unit that calculates, based on at least some of the pairs of feature points of the first image and feature points of the second image associated by the association unit, a projection matrix for projecting the first image onto the coordinate system of the second image, and wherein the projective transformation unit generates the projected image using the projection matrix.
- The image processing system according to claim 12, wherein the detection target region detection unit detects the detection target region in each of three or more images obtained by photographing the detection target with the imaging unit from mutually different directions, the system further comprising a region synthesis unit that synthesizes the detection target region in an image selected from the three or more images with regions obtained by projecting the detection target regions of the remaining images onto the coordinate system of the selected image.
- The image processing system according to claim 12, wherein the imaging unit includes: a plurality of lenses arranged two-dimensionally; and a plurality of imaging elements, a plurality of the imaging elements being arranged for each lens so that their relative positions with respect to each lens are the same, the system further comprising an image generation unit that generates a plurality of images, each captured by the imaging elements whose relative position with respect to the lenses is the same.
- The image processing system according to claim 12, wherein the imaging unit photographs an image reflected in a mirror that radially surrounds at least a part of the periphery of the region including the detection target, the system further comprising: an image cutout unit that cuts out a plurality of images from the image of the mirror reflection captured by the imaging unit; and a geometric distortion correction unit that performs geometric distortion correction on the plurality of cut-out images.
- The image processing system according to claim 12, wherein the first image and the second image are close-up images of the region including the detection target captured by the imaging unit.
- An image processing apparatus comprising: a projective transformation unit that generates a projected image by projectively transforming a first image obtained by photographing body hair into the coordinate system of a second image obtained by photographing the body hair from a direction different from that of the first image; a difference image generation unit that generates a difference image between the second image and the projected image; a region detection unit that detects, in the difference image, a candidate region consisting of pixels whose difference values are equal to or greater than a predetermined threshold, and divides the region of the second image corresponding to the candidate region into a hair region in which the body hair appears and a remaining non-hair region; and a removal unit that, based on the detection result of the region detection unit, removes the body hair from at least one of the first image and the second image by projecting pixels in the region of the other image corresponding to the region of the one image in which the body hair appears onto the one image and replacing the corresponding pixels.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201280070232.3A CN104135926B (zh) | 2011-12-27 | 2012-12-13 | 图像处理设备、图像处理系统、图像处理方法以及程序 |
US14/366,540 US9345429B2 (en) | 2011-12-27 | 2012-12-13 | Image processing device, image processing system, image processing method, and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2011285560A JP5818091B2 (ja) | 2011-12-27 | 2011-12-27 | 画像処理装置、画像処理システム、画像処理方法、および、プログラム |
JP2011-285560 | 2011-12-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013099628A1 true WO2013099628A1 (ja) | 2013-07-04 |
Family
ID=48697120
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/082374 WO2013099628A1 (ja) | 2011-12-27 | 2012-12-13 | 画像処理装置、画像処理システム、画像処理方法、および、プログラム |
Country Status (4)
Country | Link |
---|---|
US (1) | US9345429B2 (ja) |
JP (1) | JP5818091B2 (ja) |
CN (1) | CN104135926B (ja) |
WO (1) | WO2013099628A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021229984A1 (ja) * | 2020-05-15 | 2021-11-18 | ソニーグループ株式会社 | 画像処理装置、および画像処理方法、並びにプログラム |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6379519B2 (ja) * | 2014-02-27 | 2018-08-29 | カシオ計算機株式会社 | 肌処理装置、肌処理方法及びプログラム |
GB2528044B (en) * | 2014-07-04 | 2018-08-22 | Arc Devices Ni Ltd | Non-touch optical detection of vital signs |
US20180091733A1 (en) * | 2015-07-31 | 2018-03-29 | Hewlett-Packard Development Company, L.P. | Capturing images provided by users |
JP6993087B2 (ja) * | 2017-01-17 | 2022-02-04 | 花王株式会社 | 皮膚の歪み測定方法 |
CN110431835B (zh) * | 2017-03-16 | 2022-03-04 | 富士胶片株式会社 | 图像合成装置、图像合成方法及记录介质 |
JP6857088B2 (ja) * | 2017-06-20 | 2021-04-14 | キヤノン株式会社 | 画像処理装置およびその制御方法、撮像装置、監視システム |
JP7084120B2 (ja) * | 2017-10-17 | 2022-06-14 | ザイオソフト株式会社 | 医用画像処理装置、医用画像処理方法、及び医用画像処理プログラム |
CN107818305B (zh) * | 2017-10-31 | 2020-09-22 | Oppo广东移动通信有限公司 | 图像处理方法、装置、电子设备和计算机可读存储介质 |
WO2019130827A1 (ja) * | 2017-12-25 | 2019-07-04 | キヤノン株式会社 | 画像処理装置およびその制御方法 |
JP7058585B2 (ja) * | 2017-12-25 | 2022-04-22 | キヤノン株式会社 | 画像処理装置およびその制御方法 |
CN108376117B (zh) * | 2018-02-07 | 2021-05-11 | 网易(杭州)网络有限公司 | 交互响应的测试方法和设备 |
CN108492246B (zh) * | 2018-03-12 | 2023-01-24 | 维沃移动通信有限公司 | 一种图像处理方法、装置及移动终端 |
CN108398177B (zh) * | 2018-04-10 | 2020-04-03 | 西安维塑智能科技有限公司 | 一种内设称重模块的三维人体扫描设备 |
CN110005621B (zh) * | 2018-06-13 | 2020-10-02 | 宁波瑞卡电器有限公司 | 离心式防护型吹风机 |
JP7246974B2 (ja) * | 2019-03-05 | 2023-03-28 | 花王株式会社 | 肌画像処理方法 |
WO2021106062A1 (ja) * | 2019-11-26 | 2021-06-03 | 日本電信電話株式会社 | 信号再構成方法、信号再構成装置及びプログラム |
FR3115888B1 (fr) * | 2020-10-30 | 2023-04-14 | St Microelectronics Grenoble 2 | Procédé de détection d’une présence d’un objet dans un champ de vision d’un capteur temps de vol |
US11665340B2 (en) * | 2021-03-22 | 2023-05-30 | Meta Platforms, Inc. | Systems and methods for histogram-based weighted prediction in video encoding |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2993611B2 (ja) * | 1990-09-29 | 1999-12-20 | 工業技術院長 | 画像処理方法 |
JP2001283204A (ja) * | 2000-03-31 | 2001-10-12 | Toshiba Corp | 障害物検出装置 |
JP2005084012A (ja) * | 2003-09-11 | 2005-03-31 | Kao Corp | 肌形状計測方法及び肌形状計測装置 |
JP2006053756A (ja) * | 2004-08-11 | 2006-02-23 | Tokyo Institute Of Technology | 物体検出装置 |
WO2008050904A1 (fr) * | 2006-10-25 | 2008-05-02 | Tokyo Institute Of Technology | Procédé de génération d'image dans un plan de focalisation virtuel haute résolution |
JP2010082245A (ja) * | 2008-09-30 | 2010-04-15 | Panasonic Electric Works Co Ltd | 毛情報測定方法 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6208749B1 (en) * | 1997-02-28 | 2001-03-27 | Electro-Optical Sciences, Inc. | Systems and methods for the multispectral imaging and characterization of skin tissue |
US6963661B1 (en) | 1999-09-09 | 2005-11-08 | Kabushiki Kaisha Toshiba | Obstacle detection system and method therefor |
US6711286B1 (en) * | 2000-10-20 | 2004-03-23 | Eastman Kodak Company | Method for blond-hair-pixel removal in image skin-color detection |
FR2840710B1 (fr) * | 2002-06-11 | 2005-01-07 | Ge Med Sys Global Tech Co Llc | Systeme de traitement d'images numeriques, installation d'imagerie incorporant un tel systeme, et procede de traitement d'images correspondant |
US7689016B2 (en) * | 2005-05-27 | 2010-03-30 | Stoecker & Associates, A Subsidiary Of The Dermatology Center, Llc | Automatic detection of critical dermoscopy features for malignant melanoma diagnosis |
-
2011
- 2011-12-27 JP JP2011285560A patent/JP5818091B2/ja active Active
-
2012
- 2012-12-13 US US14/366,540 patent/US9345429B2/en active Active
- 2012-12-13 CN CN201280070232.3A patent/CN104135926B/zh active Active
- 2012-12-13 WO PCT/JP2012/082374 patent/WO2013099628A1/ja active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2993611B2 (ja) * | 1990-09-29 | 1999-12-20 | 工業技術院長 | 画像処理方法 |
JP2001283204A (ja) * | 2000-03-31 | 2001-10-12 | Toshiba Corp | 障害物検出装置 |
JP2005084012A (ja) * | 2003-09-11 | 2005-03-31 | Kao Corp | 肌形状計測方法及び肌形状計測装置 |
JP2006053756A (ja) * | 2004-08-11 | 2006-02-23 | Tokyo Institute Of Technology | 物体検出装置 |
WO2008050904A1 (fr) * | 2006-10-25 | 2008-05-02 | Tokyo Institute Of Technology | Procédé de génération d'image dans un plan de focalisation virtuel haute résolution |
JP2010082245A (ja) * | 2008-09-30 | 2010-04-15 | Panasonic Electric Works Co Ltd | 毛情報測定方法 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021229984A1 (ja) * | 2020-05-15 | 2021-11-18 | ソニーグループ株式会社 | 画像処理装置、および画像処理方法、並びにプログラム |
Also Published As
Publication number | Publication date |
---|---|
US20150125030A1 (en) | 2015-05-07 |
US9345429B2 (en) | 2016-05-24 |
CN104135926B (zh) | 2016-08-31 |
JP5818091B2 (ja) | 2015-11-18 |
JP2013132447A (ja) | 2013-07-08 |
CN104135926A (zh) | 2014-11-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5818091B2 (ja) | 画像処理装置、画像処理システム、画像処理方法、および、プログラム | |
WO2019085792A1 (en) | Image processing method and device, readable storage medium and electronic device | |
WO2019085751A1 (en) | Method and apparatus for image processing, and computer-readable storage medium | |
US20200082159A1 (en) | Polarization Imaging for Facial Recognition Enhancement System and Method | |
JP2010045613A (ja) | 画像識別方法および撮像装置 | |
JP6312227B2 (ja) | Rgb−d画像化システム、rgb−d画像の生成方法、及びrgb−d画像を生成する装置 | |
JP5766077B2 (ja) | ノイズ低減のための画像処理装置及び画像処理方法 | |
JP2013009050A (ja) | 画像処理装置、画像処理方法 | |
JP2015197745A (ja) | 画像処理装置、撮像装置、画像処理方法及びプログラム | |
JP5779089B2 (ja) | エッジ検出装置、エッジ検出プログラム、およびエッジ検出方法 | |
KR20110090787A (ko) | 화상 처리 장치 및 방법, 및 프로그램 | |
JP5882789B2 (ja) | 画像処理装置、画像処理方法、及びプログラム | |
JPWO2007129446A1 (ja) | 画像処理方法、画像処理プログラム、画像処理装置、及び撮像装置 | |
WO2019105304A1 (zh) | 图像白平衡处理方法、计算机可读存储介质和电子设备 | |
JP5504990B2 (ja) | 撮像装置、画像処理装置及びプログラム | |
US9392146B2 (en) | Apparatus and method for extracting object | |
TW201342303A (zh) | 三維空間圖像的獲取系統及方法 | |
JP5791361B2 (ja) | パターン識別装置、パターン識別方法およびプログラム | |
JP2004364212A (ja) | 物体撮影装置、物体撮影方法及び物体撮影プログラム | |
KR101832743B1 (ko) | 디지털 홀로그래피 디스플레이 및 그 방법 | |
JP2009239392A (ja) | 複眼撮影装置およびその制御方法並びにプログラム | |
JP2014164497A (ja) | 画像処理装置、画像処理方法及びプログラム | |
JP6556033B2 (ja) | 画像処理装置、画像処理方法、及びプログラム | |
US11688046B2 (en) | Selective image signal processing | |
JP2014049895A (ja) | 画像処理方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12862956 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14366540 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 12862956 Country of ref document: EP Kind code of ref document: A1 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 12862956 Country of ref document: EP Kind code of ref document: A1 |