WO2020036072A1 - Image processing device, image processing method, and program - Google Patents

Image processing device, image processing method, and program Download PDF

Info

Publication number
WO2020036072A1
WO2020036072A1 (PCT/JP2019/030257)
Authority
WO
WIPO (PCT)
Prior art keywords
image
foreground
mask
background
image processing
Prior art date
Application number
PCT/JP2019/030257
Other languages
French (fr)
Japanese (ja)
Inventor
Masashi Kuranoshita
Makoto Yonaha
Original Assignee
FUJIFILM Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FUJIFILM Corporation
Priority to JP2020537411A (patent JP7143419B2)
Publication of WO2020036072A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • G06T7/41Analysis of texture based on statistical description of texture
    • G06T7/46Analysis of texture based on statistical description of texture using random fields
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/261Image signal generators with monoscopic-to-stereoscopic image conversion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/302Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/305Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using lenticular lenses, e.g. arrangements of cylindrical lenses

Definitions

  • the present invention relates to an image processing device, an image processing method, and a program, and more particularly, to an image processing device, an image processing method, and a program that generate three-dimensional image data from a two-dimensional image.
  • a main subject of an input two-dimensional image is set as a foreground, and depth information is added to the foreground and the background to generate three-dimensional image data.
  • Patent Document 1, for the purpose of suppressing discomfort, describes a technique of performing a blurring process on the object boundary portion of a mask pattern. Specifically, in the technique described in Patent Document 1, the signal at the edge portion of the mask is given a slope by using a first-stage low-pass filter and a second-stage low-pass filter to blur the mask. Patent Document 1 also describes adjusting the blur width and performing the blur processing individually for each layer (Patent Document 1, paragraph 0080).
  • Patent Literature 2 discloses a technique for blurring the contour of a specified subject by an average value filter process in order to suppress a sense of discomfort.
  • Patent Document 2 describes that the smoothness of the three-dimensional effect is controlled by the size of the average-value filter (Patent Document 2, paragraph 0030).
  • In Patent Document 1 and Patent Document 2 described above, there is no mention of giving a different degree of blur depending on whether the boundary between the foreground and the background is clear or unclear.
  • the present invention has been made in view of such circumstances, and an object of the present invention is to provide an image processing apparatus, an image processing method, and a program that generate three-dimensional image data in which a sense of discomfort is effectively suppressed.
  • An image processing apparatus for achieving the above object is an image processing apparatus that generates three-dimensional image data including a foreground and a background from a two-dimensional image, and includes: an image acquisition unit that acquires the two-dimensional image; a probability calculation unit that estimates a foreground region in the two-dimensional image by image processing and calculates, for each small region of the two-dimensional image, the probability of being a foreground region; a mask generation unit that, based on the probability, generates a foreground image mask having a gradation of the mixture ratio of the foreground and the background in the boundary region between the foreground and the background; and a foreground image acquisition unit that acquires a foreground image using the image mask.
  • the probability of being the foreground region is calculated for each small region of the two-dimensional image, and based on the probability, a foreground image mask having a gradation of the mixture ratio of the foreground and the background in the boundary region between the foreground and the background is generated.
  • the gradation of the mixture ratio in the boundary region is controlled according to whether the boundary between the foreground and the background is clear or unclear, and the degree of blur is adjusted accordingly.
  • the image processing device includes a background image acquisition unit that acquires a background image by complementing an area corresponding to the foreground image in the two-dimensional image.
  • since the background image is acquired by complementing the area corresponding to the foreground image, it is possible to generate three-dimensional image data in which the sense of discomfort is effectively suppressed.
  • the unmasked portion of the image mask is larger than the foreground in the two-dimensional image.
  • since the enlarged foreground image can be obtained by enlarging the non-mask portion of the image mask, the missing portion of the background image can be covered, and three-dimensional image data in which discomfort is suppressed can be generated.
  • the image processing apparatus includes an extraction area acquisition unit that binarizes the image mask based on an evaluation threshold and acquires an extraction area of the foreground, and the mask generation unit generates an image mask composed of the extraction area and the boundary area.
  • the image mask is binarized based on the evaluation threshold, the foreground extraction region is obtained, and the image mask including the extraction region and the boundary region is generated.
  • the image processing device includes a boundary area acquisition unit that acquires a difference between the enlarged image mask obtained by enlarging the image mask and the extraction area to acquire a boundary area.
  • the difference between the enlarged image mask and the extraction region is obtained and the boundary region is obtained, so that the boundary region can be obtained more accurately.
  • the image processing apparatus includes a boundary area obtaining unit that obtains a difference between an enlarged image mask obtained by enlarging the image mask and a reduced extraction area obtained by reducing the extraction area to obtain a boundary area.
  • the difference between the enlarged image mask obtained by enlarging the image mask and the reduced extraction area obtained by reducing the extraction area is obtained, and the boundary area is acquired. This makes it possible to acquire an enlarged foreground image in which the width of the boundary portion is wider.
  • the border region has a width of 10 pixels or more and 20 pixels or less.
  • the probability calculation unit is configured with a learned recognizer that extracts a foreground.
  • the three-dimensional image data is for lenticular printing.
  • An image processing method according to another aspect is an image processing method for generating three-dimensional image data including a foreground and a background from a two-dimensional image, and includes: acquiring the two-dimensional image; estimating the foreground region in the two-dimensional image by image processing and calculating the probability of being a foreground region for each small region of the two-dimensional image; generating, based on the probability, a foreground image mask having a gradation of the mixture ratio of the foreground and the background in the boundary region between the foreground and the background; and acquiring a foreground image using the image mask.
  • a program according to another aspect of the present invention is a program that causes a computer to execute an image processing step of generating three-dimensional image data including a foreground and a background from a two-dimensional image, the image processing step including: a step of acquiring the two-dimensional image; a step of estimating the foreground region in the two-dimensional image by image processing and calculating the probability of being a foreground region for each small region of the two-dimensional image; a step of generating, based on the probability, a foreground image mask having a gradation of the mixture ratio of the foreground and the background in the boundary region between the foreground and the background; and a step of acquiring a foreground image using the image mask.
  • the probability of being a foreground region is calculated for each small region of the two-dimensional image, and based on the probability, a foreground image mask having a gradation of the mixture ratio of the foreground and the background in the boundary region between the foreground and the background is generated.
  • the gradation of the mixture ratio in the boundary region is controlled according to whether the boundary between the foreground and the background is clear or unclear, and the degree of blur is adjusted accordingly.
  • FIG. 1 is a diagram illustrating an appearance of a computer.
  • FIG. 2 is a block diagram illustrating a functional configuration example of the image processing apparatus.
  • FIG. 3 is a flowchart showing the image processing process.
  • FIG. 4 is a diagram conceptually showing a two-dimensional image and an image mask with probability.
  • FIG. 5 is an enlarged view of the probability-added image mask.
  • FIG. 6 is a conceptual diagram showing the gradation of the mixing ratio.
  • FIG. 7 is a diagram illustrating extraction of an extraction region.
  • FIG. 8 is a diagram illustrating acquisition of a boundary region.
  • FIG. 9 is a diagram illustrating generation of an image mask.
  • FIG. 10 is an enlarged view of the image mask.
  • FIG. 11 is a diagram for describing acquisition of a foreground image.
  • FIG. 12 is a diagram illustrating a missing portion of the background in the three-dimensional image.
  • FIG. 13 is a diagram illustrating a missing portion of the background in the three-dimensional image.
  • FIG. 14 is a diagram illustrating a background image.
  • FIG. 15 is a diagram illustrating an image used to generate three-dimensional image data.
  • FIG. 1 is a diagram showing the appearance of a computer provided with the image processing device of the present invention.
  • the computer 3 incorporates the image processing device 11 (FIG. 2) according to one embodiment of the present invention.
  • a two-dimensional image 201 is input to the computer 3, and a display unit including a monitor 9 and an input unit including a keyboard 5 and a mouse 7 are connected to the computer 3.
  • the illustrated form of the computer 3 is an example, and an apparatus having the same functions as the computer 3 can include the image processing apparatus 11 of the present invention.
  • the image processing device 11 can be mounted on a tablet terminal.
  • the computer 3 displays on the monitor 9 the two-dimensional image 201 input to the image processing apparatus 11 (FIG. 2) and the generated three-dimensional image data.
  • the user inputs a command using the keyboard 5 and the mouse 7.
  • FIG. 2 is a block diagram illustrating an example of a functional configuration of the image processing apparatus 11.
  • the hardware structure for executing various controls of the image processing apparatus 11 shown in FIG. 2 includes the following various processors.
  • These various processors include a CPU (Central Processing Unit), which is a general-purpose processor that executes software (programs) to function as various control units, a programmable logic device (PLD) such as an FPGA (Field Programmable Gate Array), whose circuit configuration can be changed after manufacture, and a dedicated electric circuit such as an ASIC (Application Specific Integrated Circuit).
  • One processing unit may be configured by one of these various processors, or by two or more processors of the same type or different types (for example, a plurality of FPGAs, or a combination of a CPU and an FPGA). Further, a plurality of control units may be configured by one processor. As an example in which a plurality of control units are configured by one processor, first, as represented by a computer such as a client or a server, one processor may be configured by a combination of one or more CPUs and software, and this processor functions as a plurality of control units.
  • Second, there is a form using a processor that realizes the functions of the entire system including a plurality of control units with one integrated circuit (IC) chip.
  • In this way, the various control units are configured by using one or more of the above-described various processors as a hardware structure.
  • the image processing device 11 includes an image acquisition unit 13, a probability calculation unit 15, a mask generation unit 17, a foreground image acquisition unit 19, a background image acquisition unit 21, a three-dimensional image data generation unit 25, a display control unit 23, and a storage unit 26.
  • the storage unit 26 stores a program, information related to various controls of the image processing apparatus 11, and the like. Further, the display control unit 23 controls display on the monitor 9.
  • the image acquisition unit 13 acquires the two-dimensional image 201.
  • the two-dimensional image 201 has, for example, a foreground composed of a main subject and a background other than the foreground.
  • the probability calculation unit 15 estimates a foreground region in the two-dimensional image by image processing, and calculates a probability of being a foreground region for each small region of the two-dimensional image.
  • Known methods are applied to the estimation and the probability calculation performed by the probability calculation unit 15.
  • the probability calculation unit 15 is configured by a learned recognizer that extracts a foreground.
  • the probability calculating unit 15 estimates that each small area is a foreground area by image processing, and calculates a probability based on the estimation for each small area.
  • the small region for which the probability calculation unit 15 calculates a probability is, for example, each pixel, each 2×2-pixel block, or an area of a predetermined size.
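As a hedged sketch of this step (the trained recognizer itself is not specified by the document and is assumed here), a per-small-region foreground probability might be derived from a segmentation model's per-pixel two-class scores like this:

```python
import numpy as np

def foreground_probability(logits: np.ndarray) -> np.ndarray:
    """Convert a recognizer's per-pixel two-class scores (background,
    foreground) into a foreground probability via softmax. The trained
    recognizer producing `logits` (shape HxWx2) is an assumption."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return (e / e.sum(axis=-1, keepdims=True))[..., 1]

def pool_small_regions(prob: np.ndarray, n: int) -> np.ndarray:
    """Average the per-pixel probability over n x n blocks, giving one
    probability per 'small region' (e.g. n=2 for 2x2-pixel regions)."""
    h, w = prob.shape
    return prob[: h - h % n, : w - w % n].reshape(h // n, n, w // n, n).mean(axis=(1, 3))
```

With zero logits every pixel is maximally uncertain, so both the per-pixel and the pooled probabilities come out at 0.5.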
  • the mask generation unit 17 generates an image mask for acquiring a foreground.
  • the image mask has a gradation of the mixture ratio of the foreground and the background in the boundary region between the foreground and the background, based on the probability of the foreground region calculated by the probability calculation unit 15. That is, the image mask has a blur based on the probability of being a foreground area in the boundary area.
  • the image mask may constitute the mask portion and the non-mask portion using the probabilities calculated by the probability calculation unit 15 as they are, or the extraction region extracted based on those probabilities and the boundary region may together form the mask portion and the non-mask portion.
  • the mask generation unit 17 includes an extraction area acquisition unit 18 and a boundary area acquisition unit 20.
  • the extraction area acquisition unit 18 acquires an extraction area. Specifically, the extraction area acquisition unit 18 binarizes the probability-added image mask based on the evaluation threshold and acquires the foreground extraction area.
  • the extraction area is an area having a certain probability or more as a foreground area.
  • the boundary area acquisition unit 20 acquires a boundary area in the probability-added image mask.
  • the boundary region of the probability-added image mask has a gradation of the mixture ratio of the foreground and the background.
  • the foreground image acquisition unit 19 acquires a foreground image using an image mask.
  • the background image acquisition unit 21 acquires a background image by complementing a region corresponding to the foreground image in the two-dimensional image.
  • the three-dimensional image data generation unit 25 generates three-dimensional image data from the foreground image acquired by the foreground image acquisition unit 19 and the background image acquired by the background image acquisition unit 21. For example, the three-dimensional image data generation unit 25 generates three-dimensional image data for lenticular printing.
  • FIG. 3 is a flowchart illustrating an image processing step (image processing method) of generating three-dimensional image data from a two-dimensional image by the image processing device 11. First, the entire image processing process will be described, and then each process will be described in detail.
  • the image acquisition unit 13 acquires the two-dimensional image A (step S10). Thereafter, the probability calculation unit 15 automatically extracts a foreground region by image processing and calculates the foreground probability, and the mask generation unit 17 generates the probability-added image mask B based on the calculated probability (step S11). Next, the extraction area acquisition unit 18 binarizes the probability-added image mask B and acquires an extraction area C (step S12). After that, the boundary area acquisition unit 20 calculates the boundary area based on the extraction area C and extracts the boundary area D from the probability-added image mask B (step S13). Then, the mask generation unit 17 combines the extraction area C and the boundary area D to generate an image mask E (step S14).
  • the foreground image acquisition unit 19 acquires the foreground image F by applying the image mask E (step S15).
  • the background image acquisition unit 21 complements the non-mask portion of the image mask E with surrounding pixels to obtain a background image G (Step S16).
  • the three-dimensional image data generation unit 25 generates lenticular printing data using the foreground image F and the background image G (step S17).
  • In steps S10 and S11, the two-dimensional image A is input, and the probability-added image mask B is generated.
  • FIG. 4 is a diagram conceptually showing a two-dimensional image A201 and an image mask B203 with probability.
  • the two-dimensional image A201 has a main subject O which is a person.
  • Three-dimensional image data is generated with the main subject O in the two-dimensional image A201 as the foreground and the part excluding the main subject O as the background.
  • the image mask with probability B203 is obtained by automatically extracting the main subject O by image processing, and has a non-mask portion P corresponding to the foreground portion and a mask portion M corresponding to the background portion.
  • the probability-added image mask B203 has a probability of being a foreground area for each small area.
  • the foreground may be obtained by applying the image mask with probability B203, but by obtaining the foreground by applying the image mask E213 described later, it is possible to obtain a more accurate foreground image.
  • FIG. 5 is an enlarged view of the probability-added image mask B203.
  • FIG. 5A shows the image mask with probability B203, FIG. 5B shows an enlarged view of a region H of the image mask, and FIG. 5C shows an enlarged view of a region J.
  • the probability-added image mask B203 has a gradation of the mixture ratio in the boundary region between the foreground and the background, and is blurred accordingly.
  • this gradation is based on the probability of being a foreground area: where the boundary between the foreground and the background is clear, the gradation is steep and narrow, and where the boundary is unclear, the gradation is gentle and wide.
  • FIG. 6 is a conceptual diagram showing the gradation of the mixture ratio of the boundary part when the boundary part between the foreground and the background is clear and when the boundary part between the foreground and the background is unclear.
  • FIG. 6A shows the gradation of the probability of being a foreground area when the boundary between the foreground and the background is clear.
  • FIG. 6B shows a gradation of the probability of being a foreground area when the boundary between the foreground and the background is unclear.
  • In FIG. 6A, since the region 301 with a 100% probability of being the foreground and the region 305 with a 0% probability (the background region) are clearly separated, the gradation at the boundary 303 is steep and narrow.
  • In FIG. 6B, the boundary is unclear, so the gradation at the boundary 303 is gentle and wide.
  • Since the gradation of the probability of being the foreground area differs between the portion where the boundary between the foreground and the background is clear and the portion where it is unclear, adjusting the degree of blur of the foreground by using this gradation makes it possible to effectively suppress the sense of discomfort in the three-dimensional image data.
  • FIG. 7 is a diagram illustrating the extraction of the extraction area C performed by the extraction area obtaining unit 18.
  • the extraction area acquisition unit 18 binarizes the probability assigned to each small area of the image mask with probability B203 based on the evaluation threshold. For example, when the probability is indicated from 0% to 100%, the extraction region acquisition unit 18 binarizes the region with the probability of 50% or more and the region with the probability of less than 50%.
  • the extraction region C209 has a non-mask portion P whose probability of being a foreground region is 50% or more, and has a mask portion M whose probability of being a foreground region is less than 50%. That is, the extraction region C209 has a binarized non-mask portion P and a mask portion M.
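The binarization by the evaluation threshold (step S12) can be sketched as follows; the 50% threshold matches the example in the text, and the toy mask values are illustrative only:

```python
import numpy as np

def extract_region(prob_mask: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binarize a per-small-region foreground-probability mask into the
    extraction region C: 1.0 where the probability is at or above the
    evaluation threshold (non-mask portion P), 0.0 below it (mask portion M)."""
    return (prob_mask >= threshold).astype(np.float32)

# A toy 1x5 probability mask whose values straddle the 50% threshold.
prob = np.array([[0.1, 0.4, 0.5, 0.8, 1.0]], dtype=np.float32)
print(extract_region(prob))  # [[0. 0. 1. 1. 1.]]
```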
  • FIG. 8 is a diagram illustrating the acquisition of the boundary area D performed by the boundary area acquisition unit 20.
  • the boundary area D211 is generated by subtracting the extraction area C209 from the probability-added image mask B203. Specifically, after enlarging the probability-added image mask B203 (enlarged image mask), the boundary region D211 is generated by subtracting the extraction region C209. Alternatively, the boundary area D211 may be generated by enlarging the image mask with probability B203 (enlarged image mask) and then subtracting the reduced extraction area C209 (reduced extraction area).
  • the boundary area D211 has a gradation of the mixture ratio of the foreground and the background in the boundary area of the image mask with probability B203.
  • the width of the boundary region D211 can be controlled by the amounts of enlargement and reduction applied to the probability-added image mask B203 and the extraction region C209. For example, it is preferable that the width of the boundary region D211 is not less than 10 pixels and not more than 20 pixels.
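A minimal sketch of step S13, under the assumption that "enlarging" and "reducing" correspond to grey-value dilation and erosion (the patent does not name the morphological operations, so this is one plausible reading):

```python
import numpy as np

def dilate(mask: np.ndarray, r: int) -> np.ndarray:
    """Enlarge the non-mask (foreground) part: each pixel takes the
    maximum over a (2r+1)x(2r+1) neighbourhood."""
    out = mask.copy()
    padded = np.pad(mask, r, mode="edge")
    h, w = mask.shape
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out = np.maximum(out, padded[r + dy:r + dy + h, r + dx:r + dx + w])
    return out

def erode(mask: np.ndarray, r: int) -> np.ndarray:
    """Reduce the non-mask part (dual of dilation)."""
    return 1.0 - dilate(1.0 - mask, r)

def boundary_region(prob_mask: np.ndarray, extraction: np.ndarray, r: int = 2) -> np.ndarray:
    """Boundary region D: the enlarged probability-added mask minus the
    reduced extraction region, keeping the soft gradation only in a band
    of roughly 2r pixels around the foreground contour."""
    enlarged = dilate(prob_mask, r)
    reduced = erode(extraction, r)
    return np.clip(enlarged - reduced, 0.0, 1.0)
```

On a binary 3×3 square inside a 5×5 mask with r=1, the result is a one-pixel-deep ring around the retained core, which is the shape the figures describe.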
  • FIG. 9 is a diagram illustrating generation of the image mask E213 performed by the mask generation unit 17.
  • the mask generation unit 17 obtains the image mask E213 by combining the extraction area C209 and the boundary area D211. For example, the image mask with probability B203 is enlarged by 10 pixels vertically and horizontally, the extraction area C209 is reduced by 10 pixels vertically and horizontally, and their difference is taken to obtain the boundary area D211; the extraction area C209 and the boundary area D211 are then combined to obtain the image mask E213.
  • the image mask E213 has a binarized uniform value in the extraction region C209, and has a gradation in the boundary region D211.
  • the boundary portion between the non-mask portion P and the mask portion M has a gradation of a mixture ratio of the foreground and the background, and generates blur.
  • In the image mask E213, the portion of the non-mask portion P other than the boundary region (the extraction region) is definitely not masked.
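The combination in step S14 reduces to taking, at every position, whichever of C and D asserts more foreground; a one-dimensional profile across the contour (illustrative values, not from the patent) shows the binary core joined to the soft gradation:

```python
import numpy as np

# Extraction region C (binary) and boundary region D (soft gradation),
# sampled along a line crossing the foreground contour.
extraction = np.array([0.0, 0.0, 0.0, 1.0, 1.0])
boundary   = np.array([0.0, 0.3, 0.7, 0.0, 0.0])

# Image mask E: binary and certain inside the extraction region,
# graded in the boundary region; element-wise maximum combines them.
image_mask_E = np.maximum(extraction, boundary)
print(image_mask_E)  # [0.  0.3 0.7 1.  1. ]
```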
  • FIG. 10 is an enlarged view of a region R of the image mask E213 shown in FIG.
  • the region R of the image mask E213 has a gradation of the mixture ratio of the non-mask portion P and the mask portion M (indicated by an arrow N).
  • FIG. 11 is a diagram illustrating that the foreground image acquisition unit 19 acquires the foreground image F215.
  • the foreground image F215 is obtained by applying the image mask E213 to the two-dimensional image 201. Since the image mask E213 has a gradation corresponding to the probability of being a foreground region in the boundary portion, the foreground image F215 has a blur corresponding to the probability. That is, in the foreground image F215, the degree of blur is adjusted in accordance with the location where the boundary between the foreground and the background is clear and the location where the boundary is unclear.
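The mask value acting as the "mixture ratio" can be sketched directly as alpha compositing (a standard reading of the gradation; the patent does not give the formula):

```python
import numpy as np

def composite(foreground: np.ndarray, background: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Per-pixel mix with the mask value as the mixture ratio: mask==1
    gives pure foreground, mask==0 pure background, and intermediate
    values in the boundary region produce the blur/gradation."""
    m = mask[..., None]  # broadcast the HxW mask over the colour channels
    return m * foreground + (1.0 - m) * background
```

A pixel with mask value 0.5 therefore lands exactly halfway between the two layers.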
  • FIG. 12 and FIG. 13 are diagrams illustrating a missing portion of the background in the three-dimensional image.
  • FIG. 12 shows a case where the three-dimensional image is viewed from the front, that is, a case where the foreground image F215 is not moved with respect to the background image. In this case, since the foreground image F215 has not been moved, a missing portion of the background does not appear, and the missing portion does not stand out.
  • FIG. 13 shows a case where the three-dimensional image is viewed obliquely, that is, a case where the foreground image F215 is moved with respect to the background image.
  • In this case, since the foreground image F215 is moved, a missing portion U of the background occurs, and the missing portion U becomes conspicuous.
  • the missing portion U can be covered by enlarging the foreground image F215.
  • FIG. 14 is a diagram illustrating the background image G217 acquired by the background image acquiring unit 21.
  • the background image G217 is generated by the background image acquisition unit 21 by complementing the region corresponding to the foreground image F215 illustrated in FIG. 11. Note that the complement performed by the background image acquisition unit 21 uses a known technique.
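The "known technique" for the complement is left unspecified; as a crude stand-in, a hole can be filled by repeatedly averaging its four neighbours (real systems would use a proper inpainting method such as OpenCV's `cv2.inpaint`):

```python
import numpy as np

def complement_background(image: np.ndarray, hole_mask: np.ndarray, iters: int = 50) -> np.ndarray:
    """Fill the region vacated by the foreground (hole_mask==True) in a
    grayscale image by iterated four-neighbour averaging -- a simple
    diffusion-style stand-in for the unspecified inpainting technique."""
    img = image.astype(np.float32).copy()
    img[hole_mask] = 0.0
    for _ in range(iters):
        up    = np.roll(img, -1, axis=0)
        down  = np.roll(img,  1, axis=0)
        left  = np.roll(img, -1, axis=1)
        right = np.roll(img,  1, axis=1)
        img[hole_mask] = ((up + down + left + right) / 4.0)[hole_mask]
    return img
```

For a hole surrounded by a uniform background the filled value converges to that background value, which is the intended behaviour.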
  • FIG. 15 is a diagram illustrating an image used to generate three-dimensional image data.
  • the image 223 is a view from the front, and the positional relationship between the foreground and the background is the same as the original two-dimensional image A.
  • In the image 221, the foreground image F215 is moved in the direction of arrow V with respect to the background image G217, and in the image 225, the foreground image F215 is moved in the direction of arrow W with respect to the background image G217.
  • the three-dimensional image data generation unit 25 generates three-dimensional image data for lenticular printing using the images 221, 223, and 225.
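Generating the parallax views 221/223/225 can be sketched as shifting the foreground layer (and its mask) horizontally against the complemented background and compositing with the soft mask; the shift amounts are illustrative, not taken from the patent:

```python
import numpy as np

def make_views(foreground: np.ndarray, background: np.ndarray,
               mask: np.ndarray, shifts=(-4, 0, 4)) -> list:
    """For each horizontal shift, move the foreground layer and its mask
    against the background, then composite with the mask value as the
    mixture ratio. Shift 0 corresponds to the front view (image 223)."""
    views = []
    for s in shifts:
        fg = np.roll(foreground, s, axis=1)
        m = np.roll(mask, s, axis=1)[..., None]
        views.append(m * fg + (1.0 - m) * background)
    return views
```

The resulting view list would then be interleaved column-by-column for lenticular printing, a step the patent delegates to the data-generation unit.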

Abstract

Provided are an image processing device, an image processing method, and a program for generating three-dimensional image data in which unnaturalness has been suppressed effectively. The image processing device (11) is provided with: an image acquisition unit (13) which acquires a two-dimensional image; a probability calculation unit (15) which estimates a foreground region of the two-dimensional image by image processing, and calculates the probability of being a foreground region for each small region of the two-dimensional image; a mask generation unit (17) which, on the basis of the probability, generates a foreground image mask having a gradation of a mixture ratio of a foreground and a background in a foreground-background boundary region; and a foreground image acquisition unit (19) which acquires a foreground image using the image mask.

Description

画像処理装置、画像処理方法、及びプログラムImage processing apparatus, image processing method, and program
 本発明は、画像処理装置、画像処理方法、及びプログラムに関し、特に2次元画像から3次元画像データを生成する画像処理装置、画像処理方法、及びプログラムに関する。 The present invention relates to an image processing device, an image processing method, and a program, and more particularly, to an image processing device, an image processing method, and a program that generate three-dimensional image data from a two-dimensional image.
 Conventionally, three-dimensional image data has been generated by treating the main subject of an input two-dimensional image as the foreground and adding depth information to this foreground and to the background.
 Here, in order to generate a large amount of three-dimensional image data, it is necessary to generate the three-dimensional image data efficiently by automatic image processing. On the other hand, if the boundary between the foreground and the background is cut out sharply, the boundary becomes conspicuous and the generated three-dimensional image data gives a sense of discomfort.
 Various methods have been proposed for the purpose of suppressing this sense of discomfort in three-dimensional image data.
 For example, Patent Literature 1 describes, for the purpose of suppressing discomfort, a technique of blurring the object boundary portion of a mask pattern. Specifically, in the technique of Patent Literature 1, a first-stage low-pass filter and a second-stage low-pass filter are used to give a slope to the signal at the edge portion of the mask, thereby blurring the mask. Patent Literature 1 also describes adjusting the blur width and performing the blurring process individually for each layer (Patent Literature 1, paragraph 0080).
 For example, Patent Literature 2 describes a technique of blurring the contour of a specified subject by average-value filtering in order to suppress discomfort. Patent Literature 2 also describes controlling the smoothness of the stereoscopic effect by the size of the average-value filter (Patent Literature 2, paragraph 0030).
 Patent Literature 1: JP 2013-178749 A. Patent Literature 2: JP 2003-47027 A.
 Here, depending on the two-dimensional image, there may be places where the boundary between the foreground and the background is clear and places where the boundary is complicated and unclear. If the same blurring process is applied both where the boundary is clear and where it is unclear, the sense of discomfort in the three-dimensional image data may not be suppressed effectively.
 Neither Patent Literature 1 nor Patent Literature 2 mentions giving different degrees of blur to places where the boundary between the foreground and the background is clear and places where it is unclear.
 The present invention has been made in view of such circumstances, and an object thereof is to provide an image processing device, an image processing method, and a program that generate three-dimensional image data in which the sense of discomfort is effectively suppressed.
 An image processing device according to one aspect of the present invention for achieving the above object is an image processing device that generates three-dimensional image data composed of a foreground and a background from a two-dimensional image, and includes: an image acquisition unit that acquires the two-dimensional image; a probability calculation unit that estimates the foreground region in the two-dimensional image by image processing and calculates, for each small region of the two-dimensional image, the probability of being a foreground region; a mask generation unit that generates, based on the probabilities, a foreground image mask having a gradation of the mixture ratio of the foreground and the background in the boundary region between the foreground and the background; and a foreground image acquisition unit that acquires a foreground image using the image mask.
 According to this aspect, the probability of being a foreground region is calculated for each small region of the two-dimensional image, and a foreground image mask having a gradation of the foreground/background mixture ratio in the boundary region between the foreground and the background is generated based on those probabilities. As a result, the gradation of the mixture ratio in the boundary region is controlled, and the degree of blur is adjusted, according to whether the boundary between the foreground and the background is clear or unclear at each location.
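 The mixture-ratio gradation at the heart of this aspect can be shown with a minimal numeric sketch; the one-dimensional probability profile and the intensity values below are hypothetical, for illustration only:

```python
import numpy as np

# Hypothetical foreground probabilities across a boundary, as the
# probability calculation unit might output for a row of small regions.
prob = np.array([1.0, 1.0, 0.9, 0.6, 0.3, 0.1, 0.0, 0.0])

foreground = 200.0  # hypothetical foreground intensity
background = 50.0   # hypothetical background intensity

# The probability is used directly as the mixture ratio, so the
# composite fades from foreground to background instead of jumping.
composite = prob * foreground + (1.0 - prob) * background
```

 Where the segmenter is certain, the composite equals the pure foreground or background; in between, the transition width follows the uncertainty of the probabilities.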
 Preferably, the image processing device includes a background image acquisition unit that acquires a background image by complementing the region corresponding to the foreground image in the two-dimensional image.
 According to this aspect, since the background image is acquired by complementing the region corresponding to the foreground image, three-dimensional image data in which the sense of discomfort is effectively suppressed can be generated.
 Preferably, the non-mask portion of the image mask is enlarged relative to the foreground in the two-dimensional image.
 According to this aspect, an enlarged foreground image can be obtained by enlarging the non-mask portion of the image mask, so that missing portions of the background image can be covered and three-dimensional image data with a suppressed sense of discomfort can be generated.
 Preferably, the image processing device includes an extraction region acquisition unit that binarizes the image mask based on an evaluation threshold and acquires a foreground extraction region, and the mask generation unit generates an image mask composed of the extraction region and the boundary region.
 According to this aspect, the image mask is binarized based on the evaluation threshold, the foreground extraction region is acquired, and an image mask composed of the extraction region and the boundary region is generated. This makes it possible to generate three-dimensional image data in which the sense of discomfort is further suppressed.
 Preferably, the image processing device includes a boundary region acquisition unit that acquires the boundary region by taking the difference between an enlarged image mask, obtained by enlarging the image mask, and the extraction region.
 According to this aspect, the boundary region is acquired as the difference between the enlarged image mask and the extraction region, so the boundary region can be acquired more accurately.
 Preferably, the image processing device includes a boundary region acquisition unit that acquires the boundary region by taking the difference between an enlarged image mask, obtained by enlarging the image mask, and a reduced extraction region, obtained by reducing the extraction region.
 According to this aspect, the boundary region is acquired as the difference between the enlarged image mask and the reduced extraction region. This makes it possible to acquire an enlarged foreground image whose boundary portion is wider.
 Preferably, the boundary region has a width of 10 pixels or more and 20 pixels or less.
 Preferably, the probability calculation unit is configured with a trained recognizer that extracts the foreground.
 Preferably, the three-dimensional image data is for lenticular printing.
 An image processing method according to another aspect of the present invention is an image processing method for generating three-dimensional image data composed of a foreground and a background from a two-dimensional image, and includes: a step of acquiring the two-dimensional image; a step of estimating the foreground region in the two-dimensional image by image processing and calculating, for each small region of the two-dimensional image, the probability of being a foreground region; a step of generating, based on the probabilities, a foreground image mask having a gradation of the mixture ratio of the foreground and the background in the boundary region between the foreground and the background; and a step of acquiring a foreground image using the image mask.
 A program according to another aspect of the present invention is a program that causes a computer to execute an image processing process of generating three-dimensional image data composed of a foreground and a background from a two-dimensional image, the image processing process including: a step of acquiring the two-dimensional image; a step of estimating the foreground region in the two-dimensional image by image processing and calculating, for each small region of the two-dimensional image, the probability of being a foreground region; a step of generating, based on the probabilities, a foreground image mask having a gradation of the mixture ratio of the foreground and the background in the boundary region between the foreground and the background; and a step of acquiring a foreground image using the image mask.
 According to the present invention, the probability of being a foreground region is calculated for each small region of the two-dimensional image, and a foreground image mask having a gradation of the foreground/background mixture ratio in the boundary region between the foreground and the background is generated based on those probabilities; therefore, the gradation of the mixture ratio in the boundary region is controlled and the degree of blur is adjusted according to whether the boundary between the foreground and the background is clear or unclear at each location.
 FIG. 1 is a diagram illustrating the appearance of a computer. FIG. 2 is a block diagram illustrating an example of the functional configuration of the image processing device. FIG. 3 is a flowchart showing the image processing process. FIG. 4 is a diagram conceptually showing a two-dimensional image and an image mask with probabilities. FIG. 5 is an enlarged view of the image mask with probabilities. FIG. 6 is a conceptual diagram showing the gradation of the mixture ratio. FIG. 7 is a diagram explaining the extraction of the extraction region. FIG. 8 is a diagram explaining the acquisition of the boundary region. FIG. 9 is a diagram explaining the generation of the image mask. FIG. 10 is an enlarged view of the image mask. FIG. 11 is a diagram explaining the acquisition of a foreground image. FIG. 12 is a diagram explaining a missing portion of the background in a three-dimensional image. FIG. 13 is a diagram explaining a missing portion of the background in a three-dimensional image. FIG. 14 is a diagram showing a background image. FIG. 15 is a diagram showing images used to generate three-dimensional image data.
 Hereinafter, preferred embodiments of an image processing device, an image processing method, and a program according to the present invention will be described with reference to the accompanying drawings.
 FIG. 1 is a diagram showing the appearance of a computer provided with the image processing device of the present invention.
 The computer 3 incorporates the image processing device 11 (FIG. 2), which is one embodiment of the present invention. A two-dimensional image 201 is input to the computer 3, to which a display unit composed of a monitor 9 and an input unit composed of a keyboard 5 and a mouse 7 are connected. The illustrated form of the computer 3 is an example; any apparatus having functions equivalent to those of the computer 3 can include the image processing device 11 of the present invention. For example, the image processing device 11 can also be mounted on a tablet terminal.
 The computer 3 displays on the monitor 9 the two-dimensional image 201 input to the image processing device 11 (FIG. 2) and the generated three-dimensional image data. The user inputs commands using the keyboard 5 and the mouse 7.
 FIG. 2 is a block diagram illustrating an example of the functional configuration of the image processing device 11. The hardware structures that execute the various controls of the image processing device 11 shown in FIG. 2 are the following various processors. The various processors include a CPU (Central Processing Unit), which is a general-purpose processor that executes software (a program) to function as various control units; a programmable logic device (PLD) such as an FPGA (Field Programmable Gate Array), which is a processor whose circuit configuration can be changed after manufacture; and a dedicated electric circuit such as an ASIC (Application Specific Integrated Circuit), which is a processor having a circuit configuration designed specifically for executing particular processing.
 One processing unit may be configured with one of these various processors, or with two or more processors of the same or different types (for example, a plurality of FPGAs, or a combination of a CPU and an FPGA). A plurality of control units may also be configured with one processor. As examples of configuring a plurality of control units with one processor: first, as typified by computers such as clients and servers, one processor may be configured with a combination of one or more CPUs and software, and this processor may function as the plurality of control units; second, as typified by a system on chip (SoC), a processor that realizes the functions of an entire system including the plurality of control units with a single IC (Integrated Circuit) chip may be used. In this way, the various control units are configured, as hardware structures, using one or more of the above various processors.
 The image processing device 11 includes an image acquisition unit 13, a probability calculation unit 15, a mask generation unit 17, a foreground image acquisition unit 19, a background image acquisition unit 21, a three-dimensional image data generation unit 25, a display control unit 23, and a storage unit 26. The storage unit 26 stores the program, information relating to the various controls of the image processing device 11, and the like. The display control unit 23 controls the display on the monitor 9.
 The image acquisition unit 13 acquires the two-dimensional image 201. The two-dimensional image 201 has, for example, a foreground composed of a main subject and a background other than the foreground.
 The probability calculation unit 15 estimates the foreground region in the two-dimensional image by image processing and calculates, for each small region of the two-dimensional image, the probability of being a foreground region. Known methods are applied to the estimation and probability calculation performed by the probability calculation unit 15. For example, the probability calculation unit 15 is configured with a trained recognizer that extracts the foreground. The probability calculation unit 15 estimates, by image processing, whether each small region belongs to the foreground, and calculates for each small region a probability based on that estimation. The small region of the two-dimensional image for which the probability calculation unit 15 performs the calculation is, for example, a region of a predetermined size such as one pixel or a 2 × 2 pixel block.
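 As a small illustration of the "small region" granularity (the per-pixel values are hypothetical; the application does not fix a particular recognizer), a per-pixel probability map can be aggregated into 2 × 2 blocks by averaging:

```python
import numpy as np

# Hypothetical per-pixel foreground probabilities (a 4x4 image).
prob = np.array([
    [1.0, 1.0, 0.8, 0.2],
    [1.0, 1.0, 0.6, 0.0],
    [0.9, 0.7, 0.1, 0.0],
    [0.5, 0.3, 0.0, 0.0],
])

# One probability per 2x2 small region: average the four pixels in each block.
blocks = prob.reshape(2, 2, 2, 2).mean(axis=(1, 3))
```

 With one-pixel small regions, `prob` itself would be used unchanged.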
 The mask generation unit 17 generates an image mask for acquiring the foreground. Based on the per-region foreground probabilities calculated by the probability calculation unit 15, the image mask has a gradation of the mixture ratio of the foreground and the background in the boundary region between the foreground and the background. That is, in the boundary region the image mask has a blur based on the probability of being a foreground region. The image mask may use the probabilities calculated by the probability calculation unit 15 as they are to form the mask portion and the non-mask portion, or the mask portion and the non-mask portion may be formed from an extraction region and a boundary region derived from those probabilities. The mask generation unit 17 has an extraction region acquisition unit 18 and a boundary region acquisition unit 20.
 The extraction region acquisition unit 18 acquires the extraction region. Specifically, the extraction region acquisition unit 18 binarizes the image mask with probabilities based on an evaluation threshold and acquires the foreground extraction region. Here, the extraction region is a region whose probability of being a foreground region is at or above a certain level.
 The boundary region acquisition unit 20 acquires the boundary region of the image mask with probabilities. In the boundary region, the image mask with probabilities has a gradation of the mixture ratio of the foreground and the background.
 The foreground image acquisition unit 19 acquires a foreground image using the image mask.
 The background image acquisition unit 21 acquires a background image by complementing the region corresponding to the foreground image in the two-dimensional image.
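 One way the "complement from surrounding pixels" step might look in code is sketched below. The iterative neighbour-mean fill is an assumption for illustration only (the application does not specify an inpainting algorithm), and it assumes the hole does not cover the entire image:

```python
import numpy as np

def fill_from_neighbors(image, hole):
    """Fill True cells of `hole` from the surrounding known pixels.

    Hole pixels touching at least one known pixel are replaced by the
    mean of their known 4-neighbours; the fill then grows inward.
    """
    img = image.astype(float).copy()
    known = ~hole
    h, w = img.shape
    while not known.all():
        new_known = known.copy()
        for y in range(h):
            for x in range(w):
                if known[y, x]:
                    continue
                vals = [img[ny, nx]
                        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= ny < h and 0 <= nx < w and known[ny, nx]]
                if vals:
                    img[y, x] = sum(vals) / len(vals)
                    new_known[y, x] = True
        known = new_known
    return img

# A flat hypothetical background with the foreground region cut out.
bg = np.full((5, 5), 10.0)
hole = np.zeros((5, 5), dtype=bool)
hole[2, 2] = True
bg[2, 2] = 0.0            # missing value left by removing the foreground
filled = fill_from_neighbors(bg, hole)
```

 A production system would more likely use a dedicated inpainting routine, but the effect is the same: the region behind the foreground is synthesized from its surroundings.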
 The three-dimensional image data generation unit 25 generates three-dimensional image data from the foreground image acquired by the foreground image acquisition unit 19 and the background image acquired by the background image acquisition unit 21. For example, the three-dimensional image data generation unit 25 generates three-dimensional image data for lenticular printing.
 Next, a specific example of the generation of three-dimensional image data, which is image data for lenticular printing, performed by the image processing device 11 will be described.
 <Image processing process>
 FIG. 3 is a flowchart showing the image processing process (image processing method) in which the image processing device 11 generates three-dimensional image data from a two-dimensional image. First, the process as a whole is described, and then each step is described in detail.
 The image acquisition unit 13 acquires a two-dimensional image A (step S10). The probability calculation unit 15 then automatically extracts the foreground region by image processing and calculates the probability of being foreground, after which the mask generation unit 17 generates an image mask with probabilities B based on the calculated probabilities (step S11). Next, the extraction region acquisition unit 18 binarizes the image mask with probabilities B and acquires an extraction region C (step S12). The boundary region acquisition unit 20 then computes the boundary region based on the extraction region C and extracts a boundary region D from the image mask with probabilities B (step S13). The mask generation unit 17 then combines the extraction region C and the boundary region D to generate an image mask E (step S14).
 The foreground image acquisition unit 19 applies the image mask E to acquire a foreground image F (step S15). The background image acquisition unit 21 complements the non-mask portion of the image mask E from the surrounding pixels to obtain a background image G (step S16). Thereafter, the three-dimensional image data generation unit 25 generates lenticular printing data using the foreground image F and the background image G (step S17).
 Next, each of the above steps will be described in detail.
 <Step S10 and Step S11>
 In steps S10 and S11, the two-dimensional image A is input and the image mask with probabilities B is generated. FIG. 4 is a diagram conceptually showing a two-dimensional image A201 and an image mask with probabilities B203.
 The two-dimensional image A201 has a main subject O, which is a person. Three-dimensional image data is generated from the main subject O of the two-dimensional image A201 as the foreground and a background composed of the part excluding the main subject O.
 The image mask with probabilities B203 is obtained by automatically extracting the main subject O by image processing, and has a non-mask portion P corresponding to the foreground part and a mask portion M corresponding to the background part. The image mask with probabilities B203 also holds, for each small region, the probability of being a foreground region. The foreground could be acquired by applying the image mask with probabilities B203 directly, but applying the image mask E213 described later yields a more accurate foreground image.
 FIG. 5 is an enlarged view of the image mask with probabilities B203. FIG. 5(A) shows the image mask with probabilities B203, FIG. 5(B) shows an enlarged view of a region H of the image mask with probabilities B203, and FIG. 5(C) shows an enlarged view of a region J. As indicated by the arrow N in FIG. 5(C), the image mask with probabilities B203 has a gradation of the mixture ratio in the boundary region between the foreground and the background, and is correspondingly blurred. This gradation is based on the probability of being a foreground region: where the boundary between the foreground and the background is clear, the gradation is steep and narrow; where the boundary is unclear, the gradation is gentle and wide.
 FIG. 6 is a conceptual diagram showing the gradation of the mixture ratio at the boundary when the boundary between the foreground and the background is clear and when it is unclear. FIG. 6(A) shows the gradation of the foreground probability when the boundary between the foreground and the background is clear, and FIG. 6(B) shows the gradation of the foreground probability when the boundary is unclear.
 In the case shown in FIG. 6(A), the region 301 with a 100% probability of being foreground and the region 305 with a 0% probability of being foreground (the background region) are clearly separated, so the gradation at the boundary 303 is steep and narrow. In the case shown in FIG. 6(B), on the other hand, the region 301 with a 100% probability of being foreground and the region 305 with a 0% probability are not clearly separated, so the gradation at the boundary 303 is gentle and wide. Because the gradation of the foreground probability thus differs between places where the foreground/background boundary is clear and places where it is unclear, using this gradation to adjust the degree of blur of the foreground makes it possible to effectively suppress the sense of discomfort in the three-dimensional image data.
 <Step S12>
 In step S12, the extraction region C is acquired. FIG. 7 is a diagram explaining the extraction of the extraction region C performed by the extraction region acquisition unit 18. The extraction region acquisition unit 18 binarizes the probability assigned to each small region of the image mask with probabilities B203 based on an evaluation threshold. For example, when the probability is expressed from 0% to 100%, the extraction region acquisition unit 18 binarizes it into regions with a probability of 50% or more and regions with a probability of less than 50%. The extraction region C209 has a non-mask portion P whose probability of being foreground is 50% or more and a mask portion M whose probability of being foreground is less than 50%. That is, the extraction region C209 has a binarized non-mask portion P and mask portion M.
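 The binarization can be sketched as follows, with hypothetical probability values and the 50% evaluation threshold from the example above:

```python
import numpy as np

# Hypothetical small-region foreground probabilities (mask B).
B = np.array([
    [0.0, 0.2, 0.4, 0.1],
    [0.1, 0.6, 0.9, 0.3],
    [0.2, 0.8, 1.0, 0.5],
    [0.0, 0.3, 0.7, 0.2],
])

threshold = 0.5          # evaluation threshold (50%)
C = B >= threshold       # extraction region: True = non-mask portion P
```

 `C` is the binary extraction region; any gradation information in `B` is deliberately discarded here and recovered later for the boundary region.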
 <Step S13>
 In step S13, the boundary region D is acquired. FIG. 8 is a diagram explaining the acquisition of the boundary region D performed by the boundary region acquisition unit 20. In the case shown in FIG. 8, the boundary region D211 is generated by subtracting the extraction region C209 from the image mask with probabilities B203. Specifically, the image mask with probabilities B203 is enlarged (enlarged image mask), and the extraction region C209 is then subtracted from it to generate the boundary region D211. Alternatively, the boundary region D211 may be generated by enlarging the image mask with probabilities B203 (enlarged image mask) and then subtracting a reduced extraction region C209 (reduced extraction region). For example, the image mask with probabilities B203 is enlarged by 10 pixels vertically and horizontally, the extraction region C209 is reduced by 10 pixels vertically and horizontally, and the difference is taken. The boundary region D211 has the gradation of the foreground/background mixture ratio from the boundary region of the image mask with probabilities B203. Enlarging and reducing the image mask with probabilities B203 and the extraction region C in this way makes it possible to control the width of the boundary region D211. For example, the width of the boundary region D211 is preferably 10 pixels or more and 20 pixels or less.
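 A possible reading of this step in code is sketched below. This is an illustration, not the claimed implementation: binary dilation and erosion built from shifted maxima stand in for the unspecified enlarge/reduce operations, and a 1-pixel scaling replaces the 10-pixel example so the toy arrays stay small:

```python
import numpy as np

def dilate(mask, r):
    # Grow a boolean mask by r pixels (4-neighbourhood, iterated).
    out = mask.copy()
    for _ in range(r):
        p = np.pad(out, 1)
        out = p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2] | p[1:-1, 2:]
    return out

def erode(mask, r):
    # Shrink a boolean mask by r pixels (dual of dilation).
    return ~dilate(~mask, r)

# Hypothetical probability mask B and its binarised extraction region C.
B = np.zeros((7, 7))
B[2:5, 2:5] = 1.0
B[2:5, 5] = 0.4          # uncertain column just outside C
C = B >= 0.5

# Boundary region D: an enlarged mask region minus a reduced extraction
# region, carrying B's probabilities (the mixture-ratio gradation) inside it.
ring = dilate(C, 1) & ~erode(C, 1)
D = np.where(ring, B, 0.0)
```

 Scaling the dilation and erosion radii is what controls the boundary-region width (10 to 20 pixels in the preferred range above).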
 <Step S14>
 In step S14, the image mask E is generated. FIG. 9 is a diagram explaining the generation of the image mask E213 performed by the mask generation unit 17. The mask generation unit 17 obtains the image mask E213 by combining the extraction region C209 and the boundary region D211. For example, when the boundary region D211 has been obtained by enlarging the image mask with probabilities B203 by 10 pixels vertically and horizontally and reducing the extraction region C209 by 10 pixels vertically and horizontally, it is combined with the extraction region C209 reduced by 10 pixels vertically and horizontally to obtain the image mask E213. The image mask E213 has a binarized, uniform value in the extraction region C209 portion and a gradation in the boundary region D211 portion. By generating the image mask E213 through this combination, the boundary between the non-mask portion P and the mask portion M has a gradation of the foreground/background mixture ratio and is therefore blurred, while the part of the non-mask portion P other than the boundary region (the extraction region) is reliably unmasked.
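 The combination can be sketched with hypothetical arrays (a self-contained toy example; real masks would come from the earlier steps):

```python
import numpy as np

# Hypothetical binarised extraction region C (uniform value 1 inside)
# and boundary region D carrying the mixture-ratio gradation (step S13).
C = np.zeros((5, 5))
C[1:4, 1:4] = 1.0
D = np.zeros((5, 5))
D[1:4, 4] = np.array([0.7, 0.5, 0.2])   # gradation column next to C

# Step S14: combine the two. The result is uniform (fully unmasked)
# inside the extraction region and graded in the boundary region.
E = np.maximum(C, D)
```

 Taking the element-wise maximum is one simple way to realise "extraction region plus boundary region": the interior stays exactly 1, so it is reliably unmasked, while the boundary keeps its gradation.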
 FIG. 10 is an enlarged view of the region R of the image mask E213 shown in FIG. 9. As shown in FIG. 10, the region R of the image mask E213 has a gradation of the mixture ratio between the non-mask portion P and the mask portion M (indicated by the arrow N). By applying the image mask E213 having such a gradation to acquire the foreground and the background, a foreground and a background with appropriate blur can be obtained.
<Step S15>
In step S15, a foreground image F is obtained. FIG. 11 is a diagram illustrating the acquisition of the foreground image F215 by the foreground image acquisition unit 19. As shown in FIG. 11, the foreground image F215 is obtained by applying the image mask E213 to the two-dimensional image 201. Since the image mask E213 has, at its boundary portion, a gradation corresponding to the probability of being a foreground region, the foreground image F215 has blur corresponding to that probability. That is, the degree of blur of the foreground image F215 is adjusted according to whether the boundary between the foreground and the background is clear or unclear at each location.
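Applying the image mask E to the two-dimensional image amounts to using the mask as an alpha channel, so the gradation in the boundary region becomes partial transparency (the blur described above). A minimal sketch; the function name and the RGBA output format are illustrative assumptions, not from the patent text.

```python
import numpy as np

def extract_foreground(image, mask):
    """Apply the image mask E to the 2-D image to obtain the foreground F.

    image : (H, W, 3) uint8 RGB image.
    mask  : (H, W) floats in [0, 1]; gradation values in the boundary
            region become partial transparency.
    Returns an (H, W, 4) RGBA image whose alpha channel is the mask.
    """
    alpha = np.round(mask * 255).astype(np.uint8)
    return np.dstack([image, alpha])
```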
Here, the foreground image F215 can be enlarged relative to the main subject O of the original two-dimensional image A201 to cover a missing portion of the background. FIGS. 12 and 13 are diagrams illustrating a missing portion of the background in the three-dimensional image. FIG. 12 shows a case where the three-dimensional image is viewed from the front, that is, a case where the foreground image F215 has not been moved with respect to the background image. In this case, since the foreground image F215 has not been moved, no missing portion of the background appears, and none stands out. On the other hand, FIG. 13 shows a case where the three-dimensional image is viewed obliquely, that is, a case where the foreground image F215 has been moved with respect to the background image. In this case, since the foreground image F215 has been moved, a missing portion U of the background appears and becomes conspicuous. In such a case, the missing portion U can be covered by enlarging the foreground image F215.
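Enlarging the foreground so that the subject slightly overhangs and hides the revealed portion U can be sketched as a nearest-neighbour scale about the image centre with the canvas size kept fixed. The function name, the resampling scheme, and the example scale factor are assumptions for illustration only.

```python
import numpy as np

def enlarge_about_center(img, scale):
    """Nearest-neighbour enlargement of the foreground about the image
    centre, keeping the canvas size unchanged, so the slightly larger
    subject covers the missing background portion U revealed by parallax.
    `scale` would typically be just above 1.0 (e.g. 1.05).
    """
    h, w = img.shape[:2]
    ys, xs = np.indices((h, w)).astype(np.float64)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # Map each output pixel back to its source pixel (inverse transform).
    src_y = np.clip(np.round((ys - cy) / scale + cy).astype(int), 0, h - 1)
    src_x = np.clip(np.round((xs - cx) / scale + cx).astype(int), 0, w - 1)
    return img[src_y, src_x]
```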
<Step S16>
In step S16, a background image G is obtained. FIG. 14 is a diagram illustrating the background image G217 acquired by the background image acquisition unit 21. In FIG. 14, the background image G217 has been generated by the background image acquisition unit 21 by complementing the area of the foreground image F215 shown in FIG. 11. Note that the complementation performed by the background image acquisition unit 21 uses a known technique.
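The patent leaves the complementation to "a known technique". As a stand-in, the sketch below fills the hole left by the foreground with a very naive diffusion: each unknown pixel repeatedly takes the mean of its already-known 4-neighbours. A real system would use a dedicated inpainting method (patch-based or PDE-based); everything here is an illustrative assumption.

```python
import numpy as np

def fill_hole(image, hole):
    """Naive diffusion-style fill of the foreground area (step S16 sketch).

    image : (H, W) float array; values inside `hole` are ignored.
    hole  : (H, W) bool array, True where the foreground was removed.
    Fills inward from the hole border; assumes the hole does not cover
    the whole image (otherwise the loop would not terminate).
    """
    img = image.astype(np.float64)
    known = ~hole
    h, w = img.shape
    while not known.all():
        for y, x in zip(*np.where(~known)):
            vals = [img[j, i]
                    for j, i in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                    if 0 <= j < h and 0 <= i < w and known[j, i]]
            if vals:  # fill once at least one neighbour is known
                img[y, x] = sum(vals) / len(vals)
                known[y, x] = True
    return img
```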
<Step S17>
In step S17, data for lenticular printing is generated. FIG. 15 is a diagram illustrating images used to generate the three-dimensional image data. The image 223 is a view from the front, in which the positional relationship between the foreground and the background is the same as in the original two-dimensional image A. In the image 221, the foreground image F215 has been moved in the direction of arrow V with respect to the background image G217, and in the image 225, the foreground image F215 has been moved in the direction of arrow W with respect to the background image G217. For example, the three-dimensional image data generation unit 25 generates three-dimensional image data for lenticular printing using the images 221, 223, and 225.
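Generating the shifted views 221 and 225 amounts to compositing the foreground over the completed background at different horizontal offsets. A minimal grayscale sketch, assuming the mask E is used as the blending alpha; the wrap-around of np.roll is a simplification a real pipeline would replace with padding.

```python
import numpy as np

def compose_view(background, foreground, alpha, shift):
    """Composite the foreground over the background image G, shifted
    `shift` pixels horizontally, yielding one parallax view (how images
    221, 223, and 225 could be assembled).
    background, foreground : (H, W) float arrays.
    alpha : (H, W) floats in [0, 1], the image mask E.
    """
    fg = np.roll(foreground, shift, axis=1)
    a = np.roll(alpha, shift, axis=1)
    return a * fg + (1.0 - a) * background

# Three views for the lenticular data: left, centre (shift 0), right.
# views = [compose_view(bg, fg, mask, s) for s in (-2, 0, 2)]
```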
The configurations and functions described above can be implemented as appropriate by any hardware, software, or a combination of both. For example, the present invention can also be applied to a program that causes a computer to execute the above-described processing steps (processing procedure), a computer-readable recording medium (non-transitory recording medium) on which such a program is recorded, or a computer on which such a program can be installed.
Although examples of the present invention have been described above, the present invention is not limited to the above-described embodiment, and it goes without saying that various modifications can be made without departing from the spirit of the present invention.
3: Computer
5: Keyboard
7: Mouse
9: Monitor
10: Vertical and horizontal
11: Image processing device
13: Image acquisition unit
15: Probability calculation unit
17: Mask generation unit
18: Extraction region acquisition unit
19: Foreground image acquisition unit
20: Boundary region acquisition unit
21: Background image acquisition unit
23: Display control unit
25: Three-dimensional image data generation unit
26: Storage unit
201: Two-dimensional image

Claims (12)

  1.  An image processing apparatus that generates, from a two-dimensional image, three-dimensional image data composed of a foreground and a background, the apparatus comprising:
     an image acquisition unit that acquires the two-dimensional image;
     a probability calculation unit that estimates the foreground region in the two-dimensional image by image processing and calculates, for each small region of the two-dimensional image, a probability of being the foreground region;
     a mask generation unit that generates, based on the probability, an image mask of the foreground having a gradation of a mixture ratio of the foreground and the background in a boundary region between the foreground and the background; and
     a foreground image acquisition unit that acquires a foreground image by using the image mask.
  2.  The image processing apparatus according to claim 1, further comprising a background image acquisition unit that acquires a background image by complementing an area corresponding to the foreground image in the two-dimensional image.
  3.  The image processing apparatus according to claim 1 or 2, wherein a non-mask portion of the image mask is enlarged relative to the foreground in the two-dimensional image.
  4.  The image processing apparatus according to any one of claims 1 to 3, further comprising an extraction region acquisition unit that binarizes the image mask based on an evaluation threshold and acquires an extraction region of the foreground,
     wherein the mask generation unit generates the image mask composed of the extraction region and the boundary region.
  5.  The image processing apparatus according to claim 4, further comprising a boundary region acquisition unit that acquires the boundary region by obtaining a difference between an enlarged image mask obtained by enlarging the image mask and the extraction region.
  6.  The image processing apparatus according to claim 4, further comprising a boundary region acquisition unit that acquires the boundary region by obtaining a difference between an enlarged image mask obtained by enlarging the image mask and a reduced extraction region obtained by reducing the extraction region.
  7.  The image processing apparatus according to any one of claims 1 to 6, wherein the boundary region has a width of 10 pixels or more and 20 pixels or less.
  8.  The image processing apparatus according to any one of claims 1 to 7, wherein the probability calculation unit is composed of a trained recognizer that extracts the foreground.
  9.  The image processing apparatus according to any one of claims 1 to 8, wherein the three-dimensional image data is for lenticular printing.
  10.  An image processing method for generating, from a two-dimensional image, three-dimensional image data composed of a foreground and a background, the method comprising:
     a step of acquiring the two-dimensional image;
     a step of estimating the foreground region in the two-dimensional image by image processing and calculating, for each small region of the two-dimensional image, a probability of being the foreground region;
     a step of generating, based on the probability, an image mask of the foreground having a gradation of a mixture ratio of the foreground and the background in a boundary region between the foreground and the background; and
     a step of acquiring a foreground image by using the image mask.
  11.  A program that causes a computer to execute an image processing process of generating, from a two-dimensional image, three-dimensional image data composed of a foreground and a background, the process comprising:
     a step of acquiring the two-dimensional image;
     a step of estimating the foreground region in the two-dimensional image by image processing and calculating, for each small region of the two-dimensional image, a probability of being the foreground region;
     a step of generating, based on the probability, an image mask of the foreground having a gradation of a mixture ratio of the foreground and the background in a boundary region between the foreground and the background; and
     a step of acquiring a foreground image by using the image mask.
  12.  A non-transitory computer-readable recording medium that, when instructions stored on the recording medium are read by a computer, causes the computer to execute an image processing process of generating, from a two-dimensional image, three-dimensional image data composed of a foreground and a background, the process comprising:
     a step of acquiring the two-dimensional image;
     a step of estimating the foreground region in the two-dimensional image by image processing and calculating, for each small region of the two-dimensional image, a probability of being the foreground region;
     a step of generating, based on the probability, an image mask of the foreground having a gradation of a mixture ratio of the foreground and the background in a boundary region between the foreground and the background; and
     a step of acquiring a foreground image by using the image mask.
PCT/JP2019/030257 2018-08-14 2019-08-01 Image processing device, image processing method, and program WO2020036072A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020537411A JP7143419B2 (en) 2018-08-14 2019-08-01 Image processing device, image processing method, and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-152660 2018-08-14
JP2018152660 2018-08-14

Publications (1)

Publication Number Publication Date
WO2020036072A1 true WO2020036072A1 (en) 2020-02-20

Family

ID=69525502

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/030257 WO2020036072A1 (en) 2018-08-14 2019-08-01 Image processing device, image processing method, and program

Country Status (2)

Country Link
JP (1) JP7143419B2 (en)
WO (1) WO2020036072A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115908120A (en) * 2023-01-06 2023-04-04 荣耀终端有限公司 Image processing method and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010061478A (en) * 2008-09-04 2010-03-18 Sony Computer Entertainment Inc Image processing device, object tracking device, and image processing method
US8175384B1 (en) * 2008-03-17 2012-05-08 Adobe Systems Incorporated Method and apparatus for discriminative alpha matting
US8391594B1 (en) * 2009-05-28 2013-03-05 Adobe Systems Incorporated Method and apparatus for generating variable-width border masks
JP2014072639A (en) * 2012-09-28 2014-04-21 Jvc Kenwood Corp Image processing apparatus, image processing method, and image processing program
US20170116777A1 (en) * 2015-10-21 2017-04-27 Samsung Electronics Co., Ltd. Image processing method and apparatus

Also Published As

Publication number Publication date
JP7143419B2 (en) 2022-09-28
JPWO2020036072A1 (en) 2021-08-26

Similar Documents

Publication Publication Date Title
US20180300937A1 (en) System and a method of restoring an occluded background region
US8824821B2 (en) Method and apparatus for performing user inspired visual effects rendering on an image
US10140513B2 (en) Reference image slicing
US10762649B2 (en) Methods and systems for providing selective disparity refinement
JP2015215895A (en) Depth value restoration method of depth image, and system thereof
JP2019125929A5 (en)
US9734551B1 (en) Providing depth-of-field renderings
CN112330527A (en) Image processing method, image processing apparatus, electronic device, and medium
US20220237802A1 (en) Image processing apparatus and non-transitory computer readable medium storing program
RU2697627C1 (en) Method of correcting illumination of an object on an image in a sequence of images and a user&#39;s computing device which implements said method
TW201436552A (en) Method and apparatus for increasing frame rate of an image stream using at least one higher frame rate image stream
JP2008016006A (en) Reliable image sharpening method
CN112149592A (en) Image processing method and device and computer equipment
WO2020036072A1 (en) Image processing device, image processing method, and program
JP2018205788A5 (en)
JP2018194985A (en) Image processing apparatus, image processing method and image processing program
US9412188B2 (en) Method and image processing system for removing a visual object from an image
CN116132732A (en) Video processing method, device, electronic equipment and storage medium
JP5896661B2 (en) Information processing apparatus, information processing apparatus control method, and program
JP2020052530A5 (en)
JP7271115B2 (en) Image processing device, background image generation method and program
KR101617551B1 (en) Image processing method and system for improving face detection
WO2020059575A1 (en) Three-dimensional image generation device, three-dimensional image generation method, and program
JP2020112928A (en) Background model generation device, background model generation method, and background model generation program
CN108876912A (en) Three-dimensional scenic physics renders method and its system

Legal Events

Date Code Title Description
121: Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19850696; Country of ref document: EP; Kind code of ref document: A1)
ENP: Entry into the national phase (Ref document number: 2020537411; Country of ref document: JP; Kind code of ref document: A)
NENP: Non-entry into the national phase (Ref country code: DE)
122: Ep: pct application non-entry in european phase (Ref document number: 19850696; Country of ref document: EP; Kind code of ref document: A1)