WO2020230445A1 - Image processing device, image processing method, and computer program - Google Patents

Image processing device, image processing method, and computer program

Info

Publication number
WO2020230445A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
unit
feature
feature image
pattern
Prior art date
Application number
PCT/JP2020/011715
Other languages
French (fr)
Japanese (ja)
Inventor
武井 一朗
宏毅 田岡
松本 博志
明夫 西村
Original Assignee
Panasonic Intellectual Property Management Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic Intellectual Property Management Co., Ltd.
Priority to JP2021519285A (JPWO2020230445A1)
Publication of WO2020230445A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/387 Composing, repositioning or otherwise geometrically modifying originals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules

Definitions

  • the present disclosure relates to an image processing device, an image processing method, and a computer program.
  • Conventionally, wrinkles on the skin are detected by analyzing an image of a subject's face (hereinafter, a "face image"); if the skin is too glossy or its shadows are too dark, wrinkles are likely to be falsely detected or missed. Patent Document 1 (JP-A-2016-81677) discloses an illumination control device that acquires an illuminance measurement value at each predetermined point from a user's face image, individually controls a plurality of light sources based on the acquired values, and adjusts the illumination of the face so that the illuminance at each predetermined point approaches a predetermined set value, thereby suppressing false detection and missed detection of wrinkles.
  • Missed detection of wrinkles is not limited to the above cases. For example, wrinkles can also be missed when illumination from multiple different angles happens to erase their shadows. Such missed detection is not limited to wrinkles on the skin and can occur in any image processing that detects a concave region from the shadow formed in the concavity.
  • the non-limiting embodiment of the present disclosure contributes to the provision of an image processing device, an image processing method, and a computer program that suppresses detection omission of a concave portion of a subject.
  • The image processing apparatus according to one aspect of the present disclosure includes: an imaging processing unit that illuminates a subject based on a first illumination pattern, which indicates a light-emission pattern of a plurality of illuminations, to capture a first image, and illuminates the subject based on a second illumination pattern to capture a second image; a feature image extraction unit that extracts a concave region of the subject from each of the first image and the second image to generate a first feature image and a second feature image; and a feature image synthesizing unit that synthesizes the first feature image and the second feature image to generate a composite feature image.
  • FIG. 1 shows an example of the appearance of the image processing device according to Embodiment 1; FIG. 2 is a block diagram showing an example of its configuration; FIG. 3 illustrates an example of generating and combining wrinkle images according to Embodiment 1; FIG. 4 illustrates an example of the process of generating a composite wrinkle image according to Embodiment 1; FIG. 5 is a flowchart showing a processing example of the image processing device according to Embodiment 1; FIG. 6 shows the appearance of a modification of the image processing device according to Embodiment 1; FIG. 7 illustrates an example of wrinkle thickness in a composite wrinkle image according to Embodiment 1; FIG. 8 shows an example of the appearance of the image processing device according to Embodiment 2; FIG. 9 is a block diagram showing an example of its configuration; FIG. 10 is a flowchart showing a processing example of the image processing device according to Embodiment 3; FIG. 11 shows an example of the hardware configuration according to an embodiment of the present disclosure.
  • FIG. 1 is a diagram showing an example of the appearance of the image processing device 10 according to the first embodiment.
  • the image processing device 10 is a device that detects a concave region of a subject from a photographed image taken by illuminating the subject.
  • the subject may be skin other than the face of the person (for example, skin of limbs).
  • the recesses may be other than wrinkles (eg, pores, scars, etc.).
  • the subject is not limited to a person, but may be an object (for example, a plate made of metal or plastic).
  • the recess may be a scratch on the surface of the object.
  • the image processing device 10 includes a photographing unit 101, an illumination unit 102 (102A, 102B, 102C, 102D), and a display unit 103 on the main surface 21 of the housing 20.
  • The main surface 21 of the housing 20 is a rectangular plane slightly larger than a human head.
  • the face is arranged at a position where the center of the face faces the center of the main surface 21 at a distance of about several tens of centimeters from the main surface 21.
  • the photographing unit 101 is, for example, a camera including a lens and an image sensor, and is arranged above the center of the main surface 21. When the face is in a fixed position, the photographing unit 101 photographs the face from a substantially front direction.
  • the display unit 103 is, for example, a liquid crystal display with a touch panel, and accepts operations on the image processing device 10 and displays an image. The operation may be accepted not via the touch panel but through another input device such as a keyboard and / or a mouse. Alternatively, the display unit 103 may automatically operate when the photographing unit 101 detects the face of the subject within an appropriate range.
  • the display unit 103 displays the face image F, which is a left-right inverted image including the face of the subject photographed by the photographing unit 101.
  • the lighting unit 102 is arranged so as to emit light toward the outside on the main surface 21 side, using a light emitting element such as an LED (Light-Emitting Diode) as a light source.
  • The illumination units 102A, 102B, 102C, and 102D are arranged near the upper, lower, left, and right sides of the main surface 21, respectively. That is, the illumination unit 102A is placed in a position where it illuminates the face from above, the illumination unit 102B from below, the illumination unit 102C from the left, and the illumination unit 102D from the right. In other words, each illumination unit 102 irradiates the face with light at a different angle.
  • FIG. 2 is a block diagram showing an example of the configuration of the image processing device 10.
  • the image processing device 10 includes a photographing unit 101, an illumination unit 102 (102A, 102B, 102C, 102D), a display unit 103, a storage unit 104, and a control unit 105.
  • the photographing unit 101, the lighting unit 102, and the display unit 103 are as described above.
  • the storage unit 104 is, for example, a memory and / or a storage, and holds a program and data processed by the control unit 105.
  • the control unit 105 is, for example, a CPU (Central Processing Unit), and executes processing for realizing the functions of the image processing device 10.
  • the image processing device 10 has, for example, the functions of a shooting processing unit 111, a feature image extraction unit 112, a feature image composition unit 113, and a display processing unit 114.
  • the shooting processing unit 111 controls the lighting unit 102 and the shooting unit 101 to shoot images (face images) of a plurality of subjects.
  • The photographing processing unit 111 controls the illumination units 102 based on the first illumination pattern to illuminate the subject and captures a first face image, and controls the illumination units 102 based on the second illumination pattern to illuminate the subject and captures a second face image.
  • The first illumination pattern differs from the second illumination pattern in which illumination units 102 are turned on and/or off. For example, in the first illumination pattern, the upper and/or lower illumination units 102A and 102B are turned on and the left and/or right illumination units 102C and 102D are turned off; in the second illumination pattern, the upper and/or lower illumination units 102A and 102B are turned off and the left and/or right illumination units 102C and 102D are turned on.
  • Alternatively, the first illumination pattern differs from the second illumination pattern in the illuminance of the illumination units 102. For example, in the first illumination pattern, the illuminance of the upper and/or lower illumination units 102A and 102B is higher than that of the left and/or right illumination units 102C and 102D; in the second illumination pattern, it is lower.
  • the photographing processing unit 111 may control the upper and lower lighting units 102A and 102B to different illuminances when the upper and lower lighting units 102A and 102B are turned on. Similarly, when the left and right lighting units 102C and 102D are turned on, the photographing processing unit 111 may control the left and right lighting units 102C and 102D to different illuminances.
  • In this way, the image processing device 10 captures a plurality of face images while changing the illumination pattern of the illumination units 102A, 102B, 102C, and 102D, whose irradiation angles with respect to the subject differ from one another. As a result, even if the shadow of a wrinkle disappears or is captured only faintly under one illumination pattern, it can be captured darkly under another illumination pattern.
  • the feature image extraction unit 112 extracts wrinkle regions, which are concave regions of the subject, from the first and second face images, respectively, and generates first and second wrinkle images, which are examples of feature images. For example, the feature image extraction unit 112 extracts a wrinkle region from a face image using a predetermined edge detection filter to generate a wrinkle image.
  • the feature image synthesizing unit 113 synthesizes the first and second wrinkle images generated by the feature image extracting unit 112 to generate a composite wrinkle image. As a result, wrinkles contained in only one of the first and second wrinkle images are also included in the synthetic wrinkle image, so that omission of detection of wrinkles can be suppressed.
  • the display processing unit 114 displays, for example, the composite wrinkle image generated by the feature image synthesis unit 113 on the display unit 103.
  • Because the first face image 200a and the second face image 200b are captured at different times, the face included in the first face image 200a and the face included in the second face image 200b may differ in position, size, and so on. Therefore, if the first and second wrinkle images extracted from the first and second face images 200a and 200b are simply combined, the wrinkles may be combined with their positions and sizes misaligned.
  • Therefore, the feature image extraction unit 112 may adjust at least one of the first and second face images 200a and 200b based on facial feature points so that the position and/or size of the faces match, and then generate wrinkle images 300a and 300b from the adjusted face images 201a and 201b, respectively. Facial feature points are data generated by detecting characteristic parts of the face, such as the facial contour, eyes, nose, and mouth, using predetermined image analysis, and consist of, for example, nodes and edges. When a brace or similar device that fixes the face is used, the position and/or size of the face in the face image is fixed, so this adjustment need not be performed.
  • The feature image synthesizing unit 113 synthesizes the first and second wrinkle images 300a and 300b, adjusted and generated by the feature image extraction unit 112, to generate a composite wrinkle image 301. As a result, a composite wrinkle image 301 with little misalignment is generated. Further, as shown in FIG. 3, the composite wrinkle image 301 includes both the wrinkle region 211a, which was captured in the first face image 200a but not in the second face image 200b, and the wrinkle region 211b, which was captured in the second face image 200b but not in the first face image 200a.
  • The feature image extraction unit 112 generates first and second wrinkle images in which the pixel values of wrinkle regions are negative and the pixel values of non-wrinkle regions are 0 (S1).
  • The feature image synthesizing unit 113 calculates, for each pixel, "pixel value of the first wrinkle image - pixel value of the second wrinkle image" (S2).
  • The feature image synthesizing unit 113 converts negative values of the difference calculated in S2 to 0, obtaining a third, converted pixel value (S3).
  • The feature image synthesizing unit 113 calculates "pixel value of the first wrinkle image - third converted pixel value" to generate the pixel values of the composite wrinkle image (S4).
  • the synthesis method is not limited to the above method.
  • the maximum value may be selected by comparing each pixel of the first wrinkle image and the second wrinkle image.
  • the wrinkle image may be binarized and the first wrinkle image and the second wrinkle image may be OR-combined.
  • a composite wrinkle image can be generated from a plurality of wrinkle images.
  • The method described in the present embodiment, in which two wrinkle images extracted from face images are combined, is only an example.
  • the first face image and the second face image may be combined to generate a composite face image having the shades of both wrinkles, and the wrinkle region may be extracted from the composite face image.
  • the shooting processing unit 111 controls the lighting unit 102 based on the first lighting pattern (S11).
  • the shooting processing unit 111 controls the shooting unit 101 to shoot the first face image 200a (S12).
  • the shooting of the first face image 200a may be instructed by the operator.
  • the shooting processing unit 111 controls the lighting unit 102 based on the second lighting pattern (S13).
  • the shooting processing unit 111 controls the shooting unit 101 to shoot the second face image 200b (S14).
  • the shooting of the second face image 200b may be instructed by the operator.
  • the feature image extraction unit 112 extracts facial feature points from the first and second face images 200a and 200b, respectively (S15).
  • the feature image extraction unit 112 adjusts at least one of the first and second face images 200a and 200b based on the extracted face feature points so that the position and size of the face match (S16).
  • the feature image extraction unit 112 generates first and second wrinkle images 300a and 300b from the first and second face images 201a and 201b adjusted in S16, respectively (S17).
  • the feature image synthesizing unit 113 synthesizes the first and second wrinkle images 300a and 300b generated in S17 to generate a composite wrinkle image 301 (S18).
  • the display processing unit 114 displays the synthetic wrinkle image 301 generated in S18 on the display unit 103 (S19).
  • the image processing apparatus 10 can generate and display a synthetic wrinkle image 301 in which wrinkle detection omission is suppressed.
  • At least one of the illumination units 102A, 102B, 102C, and 102D may have a mechanism capable of adjusting the irradiation angle.
  • the irradiation angle of at least one of the illumination units 102A, 102B, 102C, and 102D may be different between the first illumination pattern and the second illumination pattern described above.
  • the photographing processing unit 111 may control to change the irradiation angle of the illumination unit 102 in the processing of S11 and S13 of FIG.
  • the irradiation angles in the first and second illumination patterns may be predetermined or may be manually adjusted by the operator.
  • As shown in FIG. 6, illumination units 102E and 102F may be arranged at the upper-half and lower-half positions near the left side of the main surface 21 of the housing 20, and illumination units 102G and 102H may be arranged at the upper-half and lower-half positions near its right side.
  • In the first illumination pattern, the upper-half illumination units 102E and 102G may be turned on and the lower-half illumination units 102F and 102H turned off.
  • In the second illumination pattern, the upper-half illumination units 102E and 102G may be turned off and the lower-half illumination units 102F and 102H turned on.
  • the above-mentioned "turning off” may be read as irradiation with an illuminance lower than the illuminance of the lighting unit 102 that "turns on”.
  • As shown in FIG. 7(A), when light is irradiated from above, a shadow forms on the upper part of a wrinkle; as shown in FIG. 7(B), when light is irradiated from below, a shadow forms on the lower part of the wrinkle. Therefore, in the first illumination pattern, the illuminance of the lower illumination unit 102B may be made lower than that of the upper illumination unit 102A (or the lower unit may be turned off), and in the second illumination pattern, the illuminance of the upper illumination unit 102A may be made lower than that of the lower illumination unit 102B (or the upper unit may be turned off).
  • As a result, a first wrinkle image showing the thickness La of the upper part of the wrinkle is generated from the first face image, and a second wrinkle image showing the thickness Lb of the lower part of the wrinkle is generated from the second face image. Therefore, by synthesizing the first and second wrinkle images, a composite wrinkle image showing the wrinkle thickness L with higher accuracy can be generated, as shown in FIG. 7(C).
  • The "upper" and "lower" above may be read as "left" and "right".
  • the above “upper side” and “lower side” may be read as “left side” and "right side”.
  • The illumination patterns are not limited to the two patterns described above. For example, the image processing device 10 may generate first to fourth wrinkle images from first to fourth face images captured under first to fourth illumination patterns, respectively, and generate a composite wrinkle image from them.
  • the image processing apparatus 10 can suppress omission of detection of wrinkles and generate a synthetic wrinkle image showing the thickness of wrinkles with higher accuracy.
  • (Embodiment 2) In the second embodiment, an example is shown in which the image processing device 10 changes the illumination at the time of shooting in cooperation with a lighting unit provided in the external environment (hereinafter, the "external lighting unit").
  • the same reference numerals may be given to the configurations described in the first embodiment, and the description may be omitted.
  • FIG. 8 shows an example of the appearance of the image processing apparatus 10 according to the second embodiment.
  • FIG. 9 shows a configuration example of the image processing device 10 according to the second embodiment.
  • the image processing device 10 connects the external lighting unit 401 through a wired or wireless communication network 400.
  • the wired communication network 400 is, for example, Ethernet or PLC (Power Line Communication).
  • the wireless communication network 400 is, for example, Wi-Fi, Bluetooth®.
  • the external lighting unit 401 is a lighting device provided for an external environment such as a ceiling and a wall of a room, and is composed of LEDs and the like.
  • the external lighting unit 401 may be turned on and off only, or the illuminance may be adjustable.
  • the image processing device 10 includes a communication unit 402 for communicating with the external lighting unit 401.
  • the photographing processing unit 111 may control the lighting and / or extinguishing of the external lighting unit 401 or the adjustment of the illuminance via the communication unit 402.
  • In the second embodiment, the first lighting pattern differs from the second lighting pattern in whether the external lighting unit 401 is turned on or off. For example, in the first lighting pattern the external lighting unit 401 is turned on, and in the second lighting pattern it is turned off. In this case, the left and/or right illumination units 102C and 102D may be lit in both the first and second lighting patterns.
  • Alternatively, the first lighting pattern differs from the second lighting pattern in the illuminance of the external lighting unit 401. For example, in the first lighting pattern, the illuminance of the external lighting unit 401 may be higher than in the second lighting pattern.
  • In this way, the image processing device 10 changes the illumination pattern of the external lighting unit 401 to capture a plurality of face images. Even if the shadow of a wrinkle disappears or is captured only faintly under one lighting pattern, it can be captured under another lighting pattern; that is, missed detection of wrinkles can be suppressed.
  • the external lighting unit 401 may be provided in the image processing device 10 shown in FIG. Further, the external lighting unit 401 may be incorporated into the lighting pattern as one of the plurality of lighting units 102. In addition to changing the lighting pattern, the size of the lighting may be changed, or the position of the lighting may be changed. In this case as well, since the way the light hits the face changes, different wrinkle shadows are photographed, so that detection omission can be suppressed. It should be noted that the lighting method may be controlled according to the part of the face to be detected (under the eyes, the outer corners of the eyes, the forehead, the space between the eyebrows, etc.). It is possible to further improve the detection accuracy by using an illumination pattern that matches the direction and depth of wrinkles that are likely to occur according to each part.
  • (Embodiment 3) In the third embodiment, an example is shown in which the subject is instructed to change the orientation of the face so that a plurality of face images with different irradiation angles are captured.
  • the image processing device 10 according to the third embodiment may have the same configuration as that of FIGS. 1 and 2.
  • The shooting processing unit 111 instructs the subject to assume a first face orientation (S31); for example, it instructs the subject to turn the face slightly to the left. This instruction may be displayed on the display unit 103 or output by voice.
  • the photographing processing unit 111 photographs the first face image (S32). At this time, the photographing processing unit 111 may light all the lighting units 102, or may light some of the lighting units 102.
  • The shooting processing unit 111 instructs the subject to assume a second face orientation different from the first (S33); for example, it instructs the subject to turn the face slightly to the right.
  • the shooting processing unit 111 shoots a second face image (S34).
  • the photographing processing unit 111 may light all the lighting units 102, or may light some of the lighting units 102.
  • the illumination pattern of S32 and the illumination pattern of S34 may be the same, or may be different as described in the first embodiment.
  • the subsequent processing is the same as the processing of S15 to S19 shown in FIG. 5, so the description thereof will be omitted.
  • the above-mentioned “left” and “right” of the face orientation may be read as “upper” and “lower”, respectively.
  • As described above, the image processing device 10 includes: the photographing processing unit 111, which illuminates a subject based on a first illumination pattern, indicating a light-emission pattern of a plurality of illuminations, to capture a first image, and illuminates the subject based on a second illumination pattern to capture a second image; the feature image extraction unit 112, which extracts a predetermined characteristic region of the subject from each of the first and second images to generate a first feature image and a second feature image; and the feature image synthesis unit 113, which synthesizes the first and second feature images to generate a composite feature image.
  • With this configuration, the illumination pattern used when capturing the first image differs from that used when capturing the second image, so even if the shadow of a characteristic portion of the subject disappears or is captured only faintly under one illumination pattern, it can be captured darkly under the other illumination pattern. Therefore, in the composite feature image generated by synthesizing the first and second feature images extracted from the first and second images, missed detection of the characteristic regions of the subject is suppressed.
  • FIG. 11 is a diagram showing a hardware configuration of a computer that realizes the functions of each device by a program.
  • The computer 2100 includes an input device 2101 such as a keyboard, mouse, or touch pad, an output device 2102 such as a display or speaker, a CPU (Central Processing Unit) 2103, a ROM (Read Only Memory) 2104, a RAM (Random Access Memory) 2105, a storage device 2106 such as a hard disk drive or SSD (Solid State Drive), a reading device 2107 that reads information from a recording medium such as a DVD-ROM (Digital Versatile Disk Read Only Memory) or USB (Universal Serial Bus) memory, and a communication device 2108 that communicates via a network, and these parts are connected by a bus 2109.
  • the reading device 2107 reads the program from the recording medium on which the program for realizing the function of each of the above devices is recorded, and stores the program in the storage device 2106.
  • the communication device 2108 communicates with the server device connected to the network, and stores the program downloaded from the server device for realizing the function of each device in the storage device 2106.
  • the CPU 2103 copies the program stored in the storage device 2106 to the RAM 2105, and sequentially reads and executes the instructions included in the program from the RAM 2105, thereby realizing the functions of the above devices.
  • This disclosure can be realized by software, hardware, or software linked with hardware.
  • Each functional block used in the description of the above embodiments may be partially or wholly realized as an LSI, which is an integrated circuit, and each process described in the above embodiments may be partially or wholly controlled by a single LSI or a combination of LSIs.
  • the LSI may be composed of individual chips, or may be composed of one chip so as to include a part or all of functional blocks.
  • the LSI may include data input and output.
  • LSIs may be referred to as ICs, system LSIs, super LSIs, and ultra LSIs depending on the degree of integration.
  • the method of making an integrated circuit is not limited to LSI, and may be realized by a dedicated circuit, a general-purpose processor, or a dedicated processor. Further, an FPGA (Field Programmable Gate Array) that can be programmed after the LSI is manufactured, or a reconfigurable processor that can reconfigure the connection and settings of the circuit cells inside the LSI may be used.
  • the present disclosure may be realized as digital processing or analog processing.
  • One aspect of the present disclosure is useful for an apparatus for detecting a concave portion of a subject.
  • Reference signs: 10 image processing device; 20 housing; 21 main surface; 101 photographing unit; 102, 102A to 102H lighting unit; 103 display unit; 104 storage unit; 105 control unit; 111 imaging processing unit; 112 feature image extraction unit; 113 feature image composition unit; 114 display processing unit; 400 communication network; 401 external lighting unit; 402 communication unit

Abstract

This image processing device (10) is provided with: an imaging processing unit (111) which captures a first image with the subject illuminated on the basis of a first illumination pattern that indicates a pattern for lighting of multiple lights, and captures a second image with the subject illuminated on the basis of a second illumination pattern; a feature image extraction unit (112) which extracts the region of a recess on the subject from the first image and the second image, and generates a first feature image and a second feature image; and a feature image combining unit (113) which combines the first feature image and the second feature image to generate a combined feature image.

Description

Image processing device, image processing method, and computer program
The present disclosure relates to an image processing device, an image processing method, and a computer program.
Conventionally, wrinkles on the skin have been detected by analyzing an image of a subject's face (hereinafter referred to as a "face image"). In wrinkle detection, if the skin is too glossy or its shadows are too dark, there is a high possibility of false detection or missed detection of wrinkles. Patent Document 1 discloses an illumination control device that acquires an illuminance measurement value at each predetermined point from a user's face image, individually controls a plurality of light sources based on the acquired values, and adjusts the illumination of the face so that the illuminance at each predetermined point approaches a predetermined set value, thereby suppressing false detection and missed detection of wrinkles.
Patent Document 1: JP-A-2016-81677
Missed detection of wrinkles is not limited to the above cases. For example, wrinkles may also be missed when illumination from multiple different angles happens to erase their shadows. Such missed detection is not limited to wrinkles on the skin and can occur in any image processing that detects a concave region from the shadow formed in the concavity.
A non-limiting embodiment of the present disclosure contributes to providing an image processing device, an image processing method, and a computer program that suppress missed detection of concave portions of a subject.
An image processing apparatus according to one aspect of the present disclosure includes: an imaging processing unit that illuminates a subject based on a first illumination pattern, which indicates a light-emission pattern of a plurality of illuminations, to capture a first image, and illuminates the subject based on a second illumination pattern to capture a second image; a feature image extraction unit that extracts a concave region of the subject from each of the first image and the second image to generate a first feature image and a second feature image; and a feature image synthesis unit that synthesizes the first feature image and the second feature image to generate a composite feature image.
These comprehensive or specific aspects may be realized by a system, a method, an integrated circuit, a computer program, or a recording medium, or by any combination of systems, devices, methods, integrated circuits, computer programs, and recording media.
According to one aspect of the present disclosure, missed detection of concave portions of a subject can be suppressed.
Further advantages and effects of one aspect of the present disclosure will become apparent from the specification and drawings. These advantages and/or effects are each provided by the features described in several embodiments and in the specification and drawings, but not all of them need to be provided in order to obtain one or more of the same features.
FIG. 1 is a diagram showing an example of the appearance of the image processing device according to Embodiment 1.
FIG. 2 is a block diagram showing an example of the configuration of the image processing device according to Embodiment 1.
FIG. 3 is a diagram for explaining an example of generating and combining wrinkle images according to Embodiment 1.
FIG. 4 is a diagram for explaining an example of the process of generating a composite wrinkle image according to Embodiment 1.
FIG. 5 is a flowchart showing a processing example of the image processing device according to Embodiment 1.
FIG. 6 is a diagram showing the appearance of a modification of the image processing device according to Embodiment 1.
FIG. 7 is a diagram for explaining an example of wrinkle thickness in a composite wrinkle image according to Embodiment 1.
FIG. 8 is a diagram showing an example of the appearance of the image processing device according to Embodiment 2.
FIG. 9 is a block diagram showing an example of the configuration of the image processing device according to Embodiment 2.
FIG. 10 is a flowchart showing a processing example of the image processing device according to Embodiment 3.
FIG. 11 is a diagram showing an example of the hardware configuration according to an embodiment of the present disclosure.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings as appropriate. However, more detailed description than necessary may be omitted. For example, detailed descriptions of already well-known matters and duplicate descriptions of substantially identical configurations may be omitted. This is to avoid making the following description unnecessarily redundant and to facilitate understanding by those skilled in the art.
(Embodiment 1)
<Configuration example>
FIG. 1 is a diagram showing an example of the appearance of the image processing device 10 according to Embodiment 1.
The image processing device 10 is a device that detects concave regions of a subject from an image captured while illuminating the subject. In the following, an example in which the subject is a person's face and the concave portions are wrinkles on the face will be described. However, the subject may be skin other than the face (for example, the skin of the limbs), and the concave portions may be features other than wrinkles (for example, pores or scars). Further, the subject is not limited to a person and may be an object (for example, a metal or plastic plate), in which case the concave portion may be a scratch on the surface of the object.
The image processing device 10 includes a photographing unit 101, illumination units 102 (102A, 102B, 102C, 102D), and a display unit 103 on the main surface 21 of a housing 20. The main surface 21 of the housing 20 is a rectangular plane slightly larger than a human head. For example, the face is placed at a position about several tens of centimeters away from the main surface 21, with the center of the face facing the vicinity of the center of the main surface 21.
The photographing unit 101 is, for example, a camera including a lens and an image sensor, and is arranged above the center of the main surface 21. When the face is in the fixed position, the photographing unit 101 photographs the face from a substantially frontal direction.
The display unit 103 is, for example, a liquid crystal display with a touch panel, and accepts operations on the image processing device 10 and displays images. Operations may be accepted through another input device, such as a keyboard and/or mouse, instead of the touch panel. Alternatively, the display unit 103 may operate automatically when the photographing unit 101 detects the subject's face within an appropriate range. The display unit 103 displays a face image F, which is a horizontally flipped image including the face of the subject photographed by the photographing unit 101.
The illumination units 102 use light-emitting elements such as LEDs (Light-Emitting Diodes) as light sources and are arranged so as to emit light outward from the main surface 21 side. The illumination units 102A, 102B, 102C, and 102D are arranged near the upper, lower, left, and right sides of the main surface 21, respectively. That is, the illumination unit 102A is placed in a position where it illuminates the face from above, the illumination unit 102B from below, the illumination unit 102C from the left, and the illumination unit 102D from the right. In other words, each illumination unit 102 irradiates the face with light at a different angle.
FIG. 2 is a block diagram showing an example of the configuration of the image processing device 10.
The image processing device 10 includes the photographing unit 101, the illumination units 102 (102A, 102B, 102C, 102D), the display unit 103, a storage unit 104, and a control unit 105. The photographing unit 101, the illumination units 102, and the display unit 103 are as described above.
The storage unit 104 is, for example, memory and/or storage, and holds the programs and data processed by the control unit 105.
The control unit 105 is, for example, a CPU (Central Processing Unit), and executes processing for realizing the functions of the image processing device 10. The image processing device 10 has, for example, the functions of an imaging processing unit 111, a feature image extraction unit 112, a feature image synthesis unit 113, and a display processing unit 114.
The imaging processing unit 111 controls the illumination units 102 and the photographing unit 101 to capture a plurality of images (face images) of the subject. For example, the imaging processing unit 111 controls the illumination units 102 based on a first illumination pattern to illuminate the subject and captures a first face image, and controls the illumination units 102 based on a second illumination pattern to illuminate the subject and captures a second face image.
The first illumination pattern differs from the second illumination pattern in which illumination units 102 are turned on and/or off. For example, in the first illumination pattern, the upper and/or lower illumination units 102A and 102B are turned on and the left and/or right illumination units 102C and 102D are turned off. In the second illumination pattern, the upper and/or lower illumination units 102A and 102B are turned off and the left and/or right illumination units 102C and 102D are turned on.
Alternatively, the first illumination pattern differs from the second illumination pattern in the illuminance of the illumination units 102. For example, in the first illumination pattern, the illuminance of the upper and/or lower illumination units 102A and 102B is higher than that of the left and/or right illumination units 102C and 102D. In the second illumination pattern, the illuminance of the upper and/or lower illumination units 102A and 102B is lower than that of the left and/or right illumination units 102C and 102D.
When turning on the upper and lower illumination units 102A and 102B, the imaging processing unit 111 may control them to mutually different illuminances. Similarly, when turning on the left and right illumination units 102C and 102D, the imaging processing unit 111 may control them to mutually different illuminances.
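The patent does not prescribe any particular data representation for an illumination pattern. As a concrete illustration only, the following sketch models a pattern as a simple mapping from each illumination unit to a relative illuminance; the unit names and the driver interface (set_level) are hypothetical assumptions, not part of the disclosed device.

```python
# Hypothetical representation of illumination patterns: each pattern maps an
# illumination unit (102A: top, 102B: bottom, 102C: left, 102D: right) to a
# relative illuminance in [0.0, 1.0], where 0.0 means "off".
PATTERN_1 = {"102A": 1.0, "102B": 1.0, "102C": 0.0, "102D": 0.0}  # top/bottom lit
PATTERN_2 = {"102A": 0.0, "102B": 0.0, "102C": 1.0, "102D": 1.0}  # left/right lit

def apply_pattern(lights, pattern):
    """Drive each illumination unit to the illuminance given by the pattern.
    `lights` is assumed to be a dict of driver objects exposing set_level();
    the driver interface is an assumption for illustration only."""
    for name, level in pattern.items():
        lights[name].set_level(level)
```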
In this way, the image processing device 10 captures a plurality of face images while changing the illumination pattern of the illumination units 102A, 102B, 102C, and 102D, whose irradiation angles with respect to the subject differ from one another. As a result, even if the shadow of a wrinkle disappears or is captured only faintly under one illumination pattern, it can be captured darkly under another illumination pattern.
The feature image extraction unit 112 extracts wrinkle regions, which are concave regions of the subject, from the first and second face images, respectively, and generates first and second wrinkle images, which are examples of feature images. For example, the feature image extraction unit 112 extracts wrinkle regions from a face image using a predetermined edge detection filter to generate a wrinkle image.
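The specific edge detection filter is not specified in the patent. The sketch below is one minimal, assumed implementation: a morphological black-hat filter (which responds to dark, line-like shadows) stands in for the unspecified filter, and the kernel size and threshold are illustrative values only. Its output follows the sign convention used later in S1 to S4 (wrinkle pixels negative, other pixels 0).

```python
import cv2
import numpy as np

def extract_wrinkle_image(face_image_bgr, threshold=12):
    """Sketch of the feature image extraction unit 112: returns an array whose
    wrinkle-region pixels are negative values and non-wrinkle pixels are 0.
    The filter choice, kernel size, and threshold are assumptions."""
    gray = cv2.cvtColor(face_image_bgr, cv2.COLOR_BGR2GRAY)
    # Emphasize dark, line-like shadows (wrinkle shadows) with a black-hat filter.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    shadow_response = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel).astype(np.int16)
    # Keep only responses above the threshold, stored as negative "depth" values.
    return np.where(shadow_response > threshold, -shadow_response, 0)
```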
The feature image synthesis unit 113 synthesizes the first and second wrinkle images generated by the feature image extraction unit 112 to generate a composite wrinkle image. As a result, wrinkles contained in only one of the first and second wrinkle images are also included in the composite wrinkle image, so missed detection of wrinkles can be suppressed.
The display processing unit 114 displays, for example, the composite wrinkle image generated by the feature image synthesis unit 113 on the display unit 103.
<Details of wrinkle image generation and composition>
The generation and composition of wrinkle images will be described in detail with reference to FIG. 3.
As shown in FIG. 3, because the first face image 200a and the second face image 200b are captured at different times, the face included in the first face image 200a and the face included in the second face image 200b may differ in position, size, and so on. Therefore, if the first and second wrinkle images extracted from the first and second face images 200a and 200b are simply combined, the wrinkles may be combined with their positions and sizes misaligned.
Therefore, the feature image extraction unit 112 may adjust at least one of the first and second face images 200a and 200b based on facial feature points so that the position and/or size of the faces match, and then generate wrinkle images 300a and 300b from the adjusted face images 201a and 201b, respectively. Facial feature points are data generated by detecting characteristic parts of the face, such as the facial contour, eyes, nose, and mouth, using predetermined image analysis, and consist of, for example, nodes and edges. When a brace or similar device that fixes the face is used, the position and/or size of the face in the face image can be fixed, so this adjustment need not be performed.
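As one concrete illustration of this adjustment, the sketch below aligns the second face image to the first by estimating a similarity transform from corresponding feature points. How the feature points are detected is left open in the patent, so they are assumed here to be given as corresponding (N, 2) point arrays; the use of OpenCV and of a partial affine transform are assumptions as well.

```python
import cv2
import numpy as np

def align_face(image_b, landmarks_a, landmarks_b):
    """Warp image_b so that its facial feature points (landmarks_b) line up with
    those of the first image (landmarks_a), matching face position and size."""
    # Estimate rotation, uniform scale, and translation from corresponding points.
    matrix, _ = cv2.estimateAffinePartial2D(
        np.asarray(landmarks_b, dtype=np.float32),
        np.asarray(landmarks_a, dtype=np.float32),
    )
    h, w = image_b.shape[:2]
    return cv2.warpAffine(image_b, matrix, (w, h))
```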
The feature image synthesis unit 113 synthesizes the first and second wrinkle images 300a and 300b, adjusted and generated by the feature image extraction unit 112, to generate a composite wrinkle image 301. As a result, a composite wrinkle image 301 with little misalignment is generated. Further, as shown in FIG. 3, the composite wrinkle image 301 includes both the wrinkle region 211a, which was captured in the first face image 200a but not in the second face image 200b, and the wrinkle region 211b, which was captured in the second face image 200b but not in the first face image 200a.
Next, with reference to FIG. 4, an example of the process of synthesizing a plurality of wrinkle images to generate a composite wrinkle image will be described.
For example, as shown in FIG. 4, the feature image extraction unit 112 generates first and second wrinkle images in which the pixel values of wrinkle regions are negative values and the pixel values of non-wrinkle regions are 0 (S1).
Next, the feature image synthesis unit 113 calculates, for each pixel, "pixel value of the first wrinkle image - pixel value of the second wrinkle image" (S2).
Next, the feature image synthesis unit 113 converts negative values of the difference calculated in S2 to 0, obtaining a third, converted pixel value (S3).
Next, the feature image synthesis unit 113 calculates "pixel value of the first wrinkle image - third converted pixel value" to generate the pixel values of the composite wrinkle image (S4).
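Steps S1 to S4 translate directly into per-pixel array operations. The following is a minimal sketch under the S1 convention (wrinkle pixels negative, other pixels 0); note that for such inputs the result of S2 to S4 is the same as taking the element-wise minimum, i.e. keeping the deeper (more negative) wrinkle value at each pixel.

```python
import numpy as np

def synthesize_wrinkle_images(wrinkle_a, wrinkle_b):
    """Composite wrinkle image following S1-S4 (inputs use the S1 convention)."""
    a = wrinkle_a.astype(np.int32)
    b = wrinkle_b.astype(np.int32)
    diff = a - b                         # S2: first image minus second image
    diff_clipped = np.maximum(diff, 0)   # S3: negative differences become 0
    composite = a - diff_clipped         # S4: pixel values of the composite image
    # For these inputs, S4 equals the element-wise minimum of the two images.
    assert np.array_equal(composite, np.minimum(a, b))
    return composite
```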
The synthesis method is not limited to the above. For example, the maximum value may be selected by comparing each pixel of the first wrinkle image and the second wrinkle image. Alternatively, the wrinkle images may be binarized and the first and second wrinkle images combined with a logical OR.
Through this processing, a composite wrinkle image can be generated from a plurality of wrinkle images. The method described in this embodiment, in which two wrinkle images extracted from face images are combined, is only an example. For example, the first face image and the second face image may first be combined to generate a composite face image containing the shadows of both sets of wrinkles, and the wrinkle region may then be extracted from that composite face image.
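The alternatives mentioned above also reduce to short array operations. In the sketch below, the per-pixel maximum assumes the opposite sign convention from S1 to S4 (larger values meaning a stronger wrinkle response), the binarization threshold is an assumed parameter, and the darkest-pixel composite is only one assumed way of combining the face images so that the shadows of both are kept.

```python
import numpy as np

def synthesize_by_maximum(wrinkle_a, wrinkle_b):
    # Per-pixel maximum (assumes larger values indicate stronger wrinkles).
    return np.maximum(wrinkle_a, wrinkle_b)

def synthesize_by_or(wrinkle_a, wrinkle_b, threshold):
    # Binarize both wrinkle images, then OR them: a pixel is marked as wrinkle
    # if either image detected a wrinkle there.
    return (np.abs(wrinkle_a) > threshold) | (np.abs(wrinkle_b) > threshold)

def composite_face_first(face_a_gray, face_b_gray):
    # Alternative order of operations: combine the aligned grayscale face images
    # so that the darker (shadowed) pixel wins, keeping both sets of wrinkle
    # shadows, then extract the wrinkle region from the result afterwards.
    return np.minimum(face_a_gray, face_b_gray)
```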
<Operation example>
A processing example of the image processing device 10 according to Embodiment 1 will be described with reference to the flowchart shown in FIG. 5.
The imaging processing unit 111 controls the illumination units 102 based on the first illumination pattern (S11). The imaging processing unit 111 controls the photographing unit 101 to capture the first face image 200a (S12). The capture of the first face image 200a may be triggered by an operator's instruction.
The imaging processing unit 111 controls the illumination units 102 based on the second illumination pattern (S13). The imaging processing unit 111 controls the photographing unit 101 to capture the second face image 200b (S14). The capture of the second face image 200b may be triggered by an operator's instruction.
The feature image extraction unit 112 extracts facial feature points from each of the first and second face images 200a and 200b (S15). Based on the extracted facial feature points, the feature image extraction unit 112 adjusts at least one of the first and second face images 200a and 200b so that the position and size of the faces match (S16).
The feature image extraction unit 112 generates the first and second wrinkle images 300a and 300b from the first and second face images 201a and 201b adjusted in S16, respectively (S17).
The feature image synthesis unit 113 synthesizes the first and second wrinkle images 300a and 300b generated in S17 to generate the composite wrinkle image 301 (S18).
The display processing unit 114 displays the composite wrinkle image 301 generated in S18 on the display unit 103 (S19).
Through the above processing, the image processing device 10 can generate and display a composite wrinkle image 301 in which missed detection of wrinkles is suppressed.
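Tying the steps of FIG. 5 together, a minimal end-to-end sketch could look as follows. It reuses the functions sketched above (apply_pattern, extract_wrinkle_image, align_face, synthesize_wrinkle_images); the camera, lights, and display objects and the extract_landmarks helper are hypothetical stand-ins for the device's actual units.

```python
def capture_composite_wrinkle_image(camera, lights, patterns, display):
    """Sketch of the S11-S19 flow in FIG. 5 (all device objects are hypothetical)."""
    faces = []
    for pattern in patterns:                 # S11 / S13: apply an illumination pattern
        apply_pattern(lights, pattern)
        faces.append(camera.capture())       # S12 / S14: capture a face image

    landmarks = [extract_landmarks(face) for face in faces]        # S15
    aligned = [faces[0]] + [                                       # S16: align to the first image
        align_face(face, landmarks[0], marks)
        for face, marks in zip(faces[1:], landmarks[1:])
    ]
    wrinkles = [extract_wrinkle_image(face) for face in aligned]   # S17

    composite = wrinkles[0]
    for wrinkle in wrinkles[1:]:                                   # S18
        composite = synthesize_wrinkle_images(composite, wrinkle)

    display.show(composite)                                        # S19
    return composite
```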
 <変形例>
 照明部102A、102B、102C、102Dのうちの少なくとも1つは、照射角度を調整可能な機構を有してもよい。この場合、上述した第1の照明パターンと第2の照明パターンとは、照明部102A、102B、102C、102Dのうちの少なくとも1つの照射角度が異なってよい。この場合、撮影処理部111は、図5のS11及びS13の処理において、照明部102の照射角度を変化させる制御を行ってよい。第1及び第2の照明パターンにおける照射角度は、予め定められてもよいし、操作者によって手動で調節されてもよい。
<Modification example>
At least one of the illumination units 102A, 102B, 102C, and 102D may have a mechanism capable of adjusting the irradiation angle. In this case, the irradiation angle of at least one of the illumination units 102A, 102B, 102C, and 102D may be different between the first illumination pattern and the second illumination pattern described above. In this case, the photographing processing unit 111 may control to change the irradiation angle of the illumination unit 102 in the processing of S11 and S13 of FIG. The irradiation angles in the first and second illumination patterns may be predetermined or may be manually adjusted by the operator.
 また、図6に示すように、画像処理装置10は、筐体20の主面21の左辺付近において、上半分及び下半分の位置にそれぞれ照明部102E、102Fが配置され、筐体20の主面21の右辺付近において、上半分及び下半分の位置にそれぞれ照明部102G、102Hが配置された構成であってもよい。この場合、第1の照明パターンでは、上半分の照明部102E、102Gが点灯し、下半分の照明部102F、102Hが消灯してよい。第2の照明パターンでは、上半分の照明部102E、102Gが消灯し、下半分の照明部102F、102Hが点灯してよい。なお、上記の「消灯」は、「点灯」する照明部102の照度よりも低い照度による照射と読み替えられてもよい。 Further, as shown in FIG. 6, in the image processing device 10, the illumination units 102E and 102F are arranged at the upper half and the lower half positions near the left side of the main surface 21 of the housing 20, respectively, and the main of the housing 20. In the vicinity of the right side of the surface 21, the illumination units 102G and 102H may be arranged at the positions of the upper half and the lower half, respectively. In this case, in the first illumination pattern, the upper half illumination units 102E and 102G may be turned on, and the lower half illumination units 102F and 102H may be turned off. In the second illumination pattern, the upper half illumination units 102E and 102G may be turned off, and the lower half illumination units 102F and 102H may be turned on. The above-mentioned "turning off" may be read as irradiation with an illuminance lower than the illuminance of the lighting unit 102 that "turns on".
 また、図7(A)に示すように、上側から光401aが照射されると、皺の上部に陰影402aが生じる。図7(B)に示すように、下側から光401bが照射されると、皺の下部に陰影402bが生じる。そこで、第1の照明パターンでは、下側の照明部102Bの照度を、上側の照明部102Aの照度よりも小さく(又は消灯)してよい。そして、第2の照明パターンでは、上側の照明部102Aの照度を、下側の照明部102Bの照度よりも小さく(又は消灯)してよい。これにより、第1の顔画像からは、皺の上部の太さLaを示す第1の皺画像が生成され、第2の顔画像からは、皺の下部の太さLbを示す第2の皺画像が生成される。よって、第1及び第2の皺画像を合成することにより、図7(C)に示すように、より高い精度の皺の太さLを示す合成皺画像を生成できる。なお、上記の「上側」及び「下側」は、「左側」及び「右側」と読み替えられてもよい。 Further, as shown in FIG. 7A, when light 401a is irradiated from above, a shadow 402a is generated on the upper part of the wrinkle. As shown in FIG. 7B, when the light 401b is irradiated from the lower side, a shadow 402b is generated at the lower part of the wrinkle. Therefore, in the first illumination pattern, the illuminance of the lower illumination unit 102B may be smaller (or extinguished) than the illuminance of the upper illumination unit 102A. Then, in the second illumination pattern, the illuminance of the upper illumination unit 102A may be smaller (or extinguished) than the illuminance of the lower illumination unit 102B. As a result, the first wrinkle image showing the thickness La of the upper part of the wrinkle is generated from the first face image, and the second wrinkle showing the thickness Lb of the lower part of the wrinkle is generated from the second face image. An image is generated. Therefore, by synthesizing the first and second wrinkle images, as shown in FIG. 7C, a synthetic wrinkle image showing the wrinkle thickness L with higher accuracy can be generated. The above "upper side" and "lower side" may be read as "left side" and "right side".
The illumination patterns are not limited to the two patterns described above. For example, the upper illumination unit 102A may be turned on in a first illumination pattern, the lower illumination unit 102B in a second illumination pattern, the left illumination unit 102C in a third illumination pattern, and the right illumination unit 102D in a fourth illumination pattern. In this case, the image processing device 10 may generate first to fourth wrinkle images from first to fourth face images captured under the first to fourth illumination patterns, respectively, and generate a composite wrinkle image from them. As a result, the image processing device 10 can suppress detection omission of wrinkles and generate a composite wrinkle image showing the wrinkle thickness with higher accuracy.
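Generalizing to more than two patterns is a straightforward loop; the sketch below assumes four patterns and mocks the capture step with random data. The `capture_with_pattern` function is a placeholder standing in for the imaging processing unit 111 and is not part of the disclosure.

```python
from functools import reduce

import numpy as np

PATTERNS = ["upper_102A", "lower_102B", "left_102C", "right_102D"]


def capture_with_pattern(pattern_name: str) -> np.ndarray:
    """Placeholder: illuminate with one pattern and capture a grayscale face image."""
    rng = np.random.default_rng(abs(hash(pattern_name)) % (2**32))
    return rng.integers(0, 256, size=(24, 24), dtype=np.uint8)


def wrinkle_mask(gray_face: np.ndarray, threshold: int = 60) -> np.ndarray:
    return (gray_face < threshold).astype(np.uint8)


def composite_over_patterns(patterns) -> np.ndarray:
    """Capture, extract a feature image per pattern, and take their pixel-wise union."""
    masks = [wrinkle_mask(capture_with_pattern(p)) for p in patterns]
    return reduce(np.maximum, masks)


composite_wrinkle_image = composite_over_patterns(PATTERNS)
```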
(Embodiment 2)
In Embodiment 2, an example is described in which the image processing device 10 changes the illumination at the time of imaging in cooperation with an illumination unit provided in the external environment (hereinafter referred to as an "external illumination unit"). In Embodiment 2, configurations already described in Embodiment 1 are given the same reference numerals, and their description may be omitted.
<Configuration example>
FIG. 8 shows an example of the appearance of the image processing device 10 according to Embodiment 2. FIG. 9 shows a configuration example of the image processing device 10 according to Embodiment 2.
 図8に示すように、画像処理装置10は、有線又は無線の通信ネットワーク400を通じて、外部照明部401を接続する。有線の通信ネットワーク400は、例えば、Ethernet、PLC(Power Line Communication)である。無線の通信ネットワーク400は、例えば、Wi-Fi、Bluetooth(登録商標)である。 As shown in FIG. 8, the image processing device 10 connects the external lighting unit 401 through a wired or wireless communication network 400. The wired communication network 400 is, for example, Ethernet or PLC (Power Line Communication). The wireless communication network 400 is, for example, Wi-Fi, Bluetooth®.
The external illumination unit 401 is a lighting device installed in the external environment, for example on the ceiling or a wall of a room, and is composed of LEDs or the like. The external illumination unit 401 may support only turning on and off, or its illuminance may be adjustable.
The image processing device 10 includes a communication unit 402 for communicating with the external illumination unit 401. The imaging processing unit 111 may control the turning on and/or off of the external illumination unit 401, or the adjustment of its illuminance, via the communication unit 402.
In Embodiment 2, the first illumination pattern differs from the second illumination pattern in the on/off pattern of the external illumination unit 401. For example, in the first illumination pattern the external illumination unit 401 is turned on, and in the second illumination pattern it is turned off. In this case, the left and/or right illumination units 102C and 102D may be turned on in both the first and second illumination patterns.
Alternatively, when the illuminance of the external illumination unit 401 is adjustable, the first illumination pattern differs from the second illumination pattern in the illuminance of the external illumination unit 401. For example, in the first illumination pattern, the illuminance of the external illumination unit 401 may be higher than in the second illumination pattern.
In this way, the image processing device 10 changes the illumination pattern of the external illumination unit 401 and captures a plurality of face images. As a result, as in Embodiment 1, even if the shadow of a wrinkle happens to disappear under one illumination pattern, the shadow of that wrinkle is captured under another illumination pattern. That is, detection omission of wrinkles can be suppressed.
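A minimal sketch of this capture sequence follows. The `ExternalLight` class and its `set_state` call are purely hypothetical stand-ins for commands sent through the communication unit 402; the actual protocol (Ethernet, PLC, Wi-Fi, or Bluetooth) and command format are not specified by the disclosure.

```python
from dataclasses import dataclass


@dataclass
class ExternalLight:
    """Hypothetical handle for the external illumination unit 401."""
    address: str

    def set_state(self, on: bool, illuminance: float = 1.0) -> None:
        # Placeholder for a command sent via the communication unit 402.
        print(f"-> {self.address}: on={on}, illuminance={illuminance}")


def capture_face_image() -> str:
    """Placeholder for the imaging unit 101 capturing one frame."""
    return "face_image"


def capture_two_patterns(ext: ExternalLight):
    ext.set_state(on=True)     # first illumination pattern: external light on
    first = capture_face_image()
    ext.set_state(on=False)    # second illumination pattern: external light off (or dimmed)
    second = capture_face_image()
    return first, second


first_img, second_img = capture_two_patterns(ExternalLight("192.0.2.10"))
```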
The external illumination unit 401 may also be provided in the image processing device 10 shown in FIG. 1. Further, the external illumination unit 401 may be incorporated into the illumination patterns as one of the plurality of illumination units 102. In addition to changing the illumination pattern, the size of the illumination or the position of the illumination may be changed. In this case as well, the way light strikes the face changes, so different wrinkle shadows are captured and detection omission can be suppressed. The way the illumination is applied may also be controlled according to the facial region to be examined (under the eyes, the outer corners of the eyes, the forehead, between the eyebrows, and so on). Detection accuracy can be further improved by using an illumination pattern matched to the direction and depth of the wrinkles that tend to occur in each region.
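The following sketch illustrates selecting illumination patterns per facial region. The region names and the specific assignments are illustrative assumptions only: forehead lines are largely horizontal, so vertically offset lighting tends to cast stronger shadows across them, while the lines between the eyebrows are largely vertical and respond better to left/right lighting.

```python
REGION_TO_PATTERNS = {
    "forehead":         ["upper_102A", "lower_102B"],   # mostly horizontal wrinkles
    "under_eye":        ["upper_102A", "lower_102B"],
    "outer_eye_corner": ["left_102C", "right_102D"],
    "between_eyebrows": ["left_102C", "right_102D"],    # mostly vertical wrinkles
}


def patterns_for_region(region: str):
    """Fall back to all four patterns when the region is unknown."""
    return REGION_TO_PATTERNS.get(
        region, ["upper_102A", "lower_102B", "left_102C", "right_102D"])


print(patterns_for_region("forehead"))
```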
(Embodiment 3)
In Embodiment 3, an example is described in which the subject is instructed to change the orientation of the face so that a plurality of face images are captured at different irradiation angles. The image processing device 10 according to Embodiment 3 may have the same configuration as in FIGS. 1 and 2.
<Operation example>
A processing example of the image processing device 10 according to Embodiment 3 will be described with reference to the flowchart shown in FIG. 10.
The imaging processing unit 111 instructs the subject to assume a first face orientation (S31). For example, the imaging processing unit 111 instructs the subject to turn the face slightly to the left. This instruction may be displayed on the display unit 103 or output by voice.
The imaging processing unit 111 captures a first face image (S32). At this time, the imaging processing unit 111 may turn on all of the illumination units 102 or only some of them.
The imaging processing unit 111 instructs the subject to assume a second face orientation different from the first face orientation (S33). For example, the imaging processing unit 111 instructs the subject to turn the face slightly to the right.
The imaging processing unit 111 captures a second face image (S34). At this time, the imaging processing unit 111 may turn on all of the illumination units 102 or only some of them. The illumination pattern in S32 and the illumination pattern in S34 may be the same, or may differ as described in Embodiment 1.
The subsequent processing is the same as the processing of S15 to S19 shown in FIG. 5, so its description is omitted. Note that "left" and "right" in the above face orientations may be read as "up" and "down", respectively.
According to the above processing, the way the illumination strikes the face differs between the first face image and the second face image, so detection omission of wrinkles can be suppressed.
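A minimal sketch of the S31 to S34 flow is shown below. The instruction and capture functions are placeholders standing in for the display unit 103 (or audio output) and the imaging unit 101; the captured images would then go through the same extraction and compositing steps as S15 to S19.

```python
def instruct(orientation: str) -> None:
    """Placeholder for showing or speaking the instruction (S31, S33)."""
    print(f"Please turn your face slightly to the {orientation}.")


def capture_face_image() -> str:
    """Placeholder for capturing one face image (S32, S34)."""
    return "face_image"


def capture_with_orientations(orientations=("left", "right")):
    images = []
    for orientation in orientations:
        instruct(orientation)
        images.append(capture_face_image())
    return images  # subsequently processed as in S15 to S19 of FIG. 5


first_img, second_img = capture_with_orientations()
```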
(Summary of this disclosure)
The image processing device 10 according to the present disclosure includes: an imaging processing unit 111 that illuminates a subject based on a first illumination pattern indicating a pattern relating to light emission of a plurality of illuminations and captures a first image, and illuminates the subject based on a second illumination pattern and captures a second image; a feature image extraction unit 112 that extracts a predetermined characteristic region of the subject from each of the first image and the second image to generate a first feature image and a second feature image; and a feature image synthesis unit 113 that synthesizes the first feature image and the second feature image to generate a composite feature image.
According to this configuration, the illumination pattern used when capturing the first image differs from the illumination pattern used when capturing the second image. Therefore, even if under one illumination pattern the shadow of a characteristic portion of the subject happens to disappear or is captured only faintly, under the other illumination pattern the shadow of that characteristic portion can be captured clearly. Thus, in the composite feature image generated by synthesizing the first and second feature images extracted from the first and second images, detection omission of the region of the characteristic portion of the subject is suppressed.
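For reference, the three components named above can be read as a simple pipeline; the sketch below expresses them as plain functions. The random capture data and the darkness threshold are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np


def capture(pattern_id: int) -> np.ndarray:
    """Imaging processing unit 111 (mocked): capture one image under one pattern."""
    rng = np.random.default_rng(pattern_id)
    return rng.integers(0, 256, size=(24, 24), dtype=np.uint8)


def extract_feature_image(image: np.ndarray) -> np.ndarray:
    """Feature image extraction unit 112 (sketch): concave regions from dark shadows."""
    return (image < 60).astype(np.uint8)


def synthesize(first: np.ndarray, second: np.ndarray) -> np.ndarray:
    """Feature image synthesis unit 113: pixel-wise union of the two feature images."""
    return np.maximum(first, second)


composite_feature_image = synthesize(extract_feature_image(capture(1)),
                                     extract_feature_image(capture(2)))
```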
Although the embodiments according to the present disclosure have been described above in detail with reference to the drawings, the functions of the image processing device 10 described above can also be realized by a computer program.
FIG. 11 is a diagram showing the hardware configuration of a computer that realizes the functions of each device by a program. This computer 2100 includes an input device 2101 such as a keyboard, mouse, or touch pad; an output device 2102 such as a display or a speaker; a CPU (Central Processing Unit) 2103; a ROM (Read Only Memory) 2104; a RAM (Random Access Memory) 2105; a storage device 2106 such as a hard disk drive or an SSD (Solid State Drive); a reading device 2107 that reads information from a recording medium such as a DVD-ROM (Digital Versatile Disk Read Only Memory) or a USB (Universal Serial Bus) memory; and a communication device 2108 that communicates via a network. The units are connected by a bus 2109.
The reading device 2107 reads a program for realizing the functions of each of the above devices from a recording medium on which the program is recorded, and stores it in the storage device 2106. Alternatively, the communication device 2108 communicates with a server device connected to the network and stores a program, downloaded from the server device, for realizing the functions of each of the above devices in the storage device 2106.
The CPU 2103 then copies the program stored in the storage device 2106 to the RAM 2105 and sequentially reads and executes the instructions included in the program from the RAM 2105, thereby realizing the functions of each of the above devices.
The present disclosure can be realized by software, by hardware, or by software in cooperation with hardware.
Each functional block used in the description of the above embodiments may be partially or wholly realized as an LSI, which is an integrated circuit, and each process described in the above embodiments may be partially or wholly controlled by one LSI or a combination of LSIs. The LSI may be composed of individual chips, or may be composed of one chip so as to include some or all of the functional blocks. The LSI may include data input and output. Depending on the degree of integration, an LSI may also be called an IC, a system LSI, a super LSI, or an ultra LSI.
The method of circuit integration is not limited to LSI, and may be realized by a dedicated circuit, a general-purpose processor, or a dedicated processor. An FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacture, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may also be used. The present disclosure may be realized as digital processing or analog processing.
Furthermore, if circuit integration technology that replaces LSI emerges from advances in semiconductor technology or another technology derived therefrom, the functional blocks may naturally be integrated using that technology. Application of biotechnology or the like is also a possibility.
One aspect of the present disclosure is useful for an apparatus that detects a concave portion of a subject.
10 Image processing device
20 Housing
21 Main surface
101 Imaging unit
102, 102A to 102H Illumination unit
103 Display unit
104 Storage unit
105 Control unit
111 Imaging processing unit
112 Feature image extraction unit
113 Feature image synthesis unit
114 Display processing unit
400 Communication network
401 External illumination unit
402 Communication unit

Claims (9)

1. An image processing device comprising:
   an imaging processing unit that illuminates a subject based on a first illumination pattern indicating a pattern relating to light emission of a plurality of illuminations and captures a first image, and illuminates the subject based on a second illumination pattern and captures a second image;
   a feature image extraction unit that extracts a region of a concave portion of the subject from each of the first image and the second image to generate a first feature image and a second feature image; and
   a feature image synthesis unit that synthesizes the first feature image and the second feature image to generate a composite feature image.
2. The image processing device according to claim 1, wherein the first illumination pattern differs from the second illumination pattern in the turn-off pattern of at least one of the plurality of illuminations.
3. The image processing device according to claim 1, wherein the first illumination pattern differs from the second illumination pattern in the illuminance of at least one of the plurality of illuminations.
4. The image processing device according to claim 1, wherein the first illumination pattern differs from the second illumination pattern in the irradiation angle of at least one of the plurality of illuminations.
5. The image processing device according to any one of claims 1 to 4, wherein the subject is a face of a person, and the region of the concave portion is a region of a wrinkle of the face.
6. The image processing device according to claim 5, wherein the feature image extraction unit extracts the region of the concave portion after adjusting the position and/or size of the face image included in the first image and the second image based on a characteristic part of the face.
7. The image processing device according to claim 1, wherein at least one of the plurality of illuminations is an external illumination provided in an external environment, and the first illumination pattern differs from the second illumination pattern in the turn-off pattern or the illuminance of the external illumination.
8. An image processing method in which a device:
   illuminates a subject based on a first illumination pattern indicating a pattern relating to light emission of a plurality of illuminations and captures a first image, and illuminates the subject based on a second illumination pattern and captures a second image;
   extracts a region of a concave portion of the subject from each of the first image and the second image to generate a first feature image and a second feature image; and
   synthesizes the first feature image and the second feature image to generate a composite feature image.
9. A computer program causing a computer to:
   illuminate a subject based on a first illumination pattern indicating a pattern relating to light emission of a plurality of illuminations and capture a first image, and illuminate the subject based on a second illumination pattern and capture a second image;
   extract a region of a concave portion of the subject from each of the first image and the second image to generate a first feature image and a second feature image; and
   synthesize the first feature image and the second feature image to generate a composite feature image.
PCT/JP2020/011715 2019-05-13 2020-03-17 Image processing device, image processing method, and computer program WO2020230445A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2021519285A JPWO2020230445A1 (en) 2019-05-13 2020-03-17

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019090607 2019-05-13
JP2019-090607 2019-05-13

Publications (1)

Publication Number Publication Date
WO2020230445A1 true WO2020230445A1 (en) 2020-11-19

Family

ID=73290137

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/011715 WO2020230445A1 (en) 2019-05-13 2020-03-17 Image processing device, image processing method, and computer program

Country Status (2)

Country Link
JP (1) JPWO2020230445A1 (en)
WO (1) WO2020230445A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023048153A1 (en) * 2021-09-24 2023-03-30 テルモ株式会社 Information processing method, computer program, and information processing device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002310935A (en) * 2001-04-19 2002-10-23 Murata Mfg Co Ltd Method and apparatus for extraction of illumination condition and visual inspection system
JP2014197162A (en) * 2013-03-07 2014-10-16 カシオ計算機株式会社 Imaging device
JP2016009203A (en) * 2014-06-20 2016-01-18 パナソニックIpマネジメント株式会社 Wrinkle detection apparatus and wrinkle detection method
JP2018112479A (en) * 2017-01-12 2018-07-19 リコーエレメックス株式会社 Visual inspection system

Also Published As

Publication number Publication date
JPWO2020230445A1 (en) 2020-11-19

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 20805876; Country of ref document: EP; Kind code of ref document: A1
ENP Entry into the national phase
    Ref document number: 2021519285; Country of ref document: JP; Kind code of ref document: A
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 20805876; Country of ref document: EP; Kind code of ref document: A1