WO2022201318A1 - Image processing device, image processing method, and image processing program - Google Patents

Image processing device, image processing method, and image processing program Download PDF

Info

Publication number
WO2022201318A1
WO2022201318A1 (PCT/JP2021/012033, JP2021012033W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
color
unit
coordinate
background
Prior art date
Application number
PCT/JP2021/012033
Other languages
English (en)
Japanese (ja)
Inventor
弘員 柿沼
広夢 宮下
翔大 山田
秀信 長田
Original Assignee
日本電信電話株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電信電話株式会社 filed Critical 日本電信電話株式会社
Priority to JP2023508223A priority Critical patent/JP7473076B2/ja
Priority to PCT/JP2021/012033 priority patent/WO2022201318A1/fr
Publication of WO2022201318A1 publication Critical patent/WO2022201318A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation

Definitions

  • Embodiments of the present invention relate to an image processing device, an image processing method, and an image processing program for extracting a foreground from an image.
  • A technology that extracts only a person or an object included in an image as the foreground and treats the remaining area as the background is widely known in the field of image processing.
  • This image processing is mainly applied to image synthesis in which an image showing only a target subject is superimposed on another image, image compression in which unimportant pixels are expressed with a smaller amount of information, and the like.
  • In the background subtraction method, a color image that serves as the background is captured in advance, and the difference between it and a color image later captured over the same shooting range at the same angle of view is taken to extract any newly appearing subject as the foreground.
  • In addition, a keying method (so-called chroma key) has also been proposed in which pixels corresponding to a specific lightness or chromaticity are removed from a color image and the remaining area is used as the foreground.
  • In the method of Non-Patent Document 1, a sensor image is acquired at the same time as the color image, and it is premised on first extracting a rough area of the subject from the sensor image.
  • Then, pixels within a certain distance from the boundary between the foreground and the background are defined as unknown pixels, and clustering is performed by comparing the surrounding pixel values in the color image.
  • Patent Document 1 proposes classifying the unknown pixels of Non-Patent Document 1 into two types to reduce the number of pixel references. According to the method of Patent Document 1, the foreground area can be specified at high speed even for a high-resolution image.
  • Patent Document 2 proposes to implement the part of Non-Patent Document 1 that extracts a rough area of the subject by referring to a lookup table obtained by learning with a neural network. According to the method of Patent Document 2, the foreground can be extracted even if the background behind the subject changes (that is, robustness against background changes is improved compared with the ordinary background subtraction method).
  • In Non-Patent Document 1 and Patent Document 1 above, it is assumed that when a sensor image cannot be used, the background subtraction method is used to extract the rough area of the subject. However, since the background subtraction method extracts every area whose pixel values change over time to some extent as a subject, it cannot extract only a specific subject.
  • In Patent Document 2, although the pixels of objects to be treated as the foreground and the pixels of objects to be treated as the background can be designated as teacher data when training the neural network, there is a problem that if many pixels are common to both, a contradiction arises and the distinction between the foreground and the background becomes ambiguous.
  • In view of the above, the present invention seeks to provide a technique that enables precise and high-speed extraction of an arbitrary number of subjects in an image as the foreground.
  • To this end, an image processing apparatus according to one embodiment includes an image storage unit, a learning unit, an image reception processing unit, an inference processing unit, a boundary correction processing unit, and an image synthesis processing unit.
  • The image storage unit stores at least a plurality of color images to be learned, mask images created for the plurality of color images, and a coordinate image that does not depend on the content of the color images, the coordinate image consisting of an x-coordinate image that has the same resolution as the plurality of color images and varies continuously from the minimum pixel value to the maximum pixel value in the horizontal direction, and a y-coordinate image that varies continuously from the minimum pixel value to the maximum pixel value in the vertical direction.
  • the learning unit performs deep learning using the plurality of color images, the mask image, and the coordinate image stored in the image storage unit to generate a deep learning model.
  • the image reception processing unit receives a color image to be processed.
  • The inference processing unit takes as input the color image to be processed and the coordinate image stored in the image storage unit, performs inference by referring to the deep learning model generated by the learning unit, and extracts the foreground, which is at least one subject area in the color image to be processed, to generate at least one mask image.
  • a boundary correction processing unit receives as input the color image to be processed and the mask image generated by the inference processing unit, and corrects a boundary region between each of the at least one foreground and the background in the color image to be processed.
  • the image synthesizing unit synthesizes the color image to be processed and the mask image after boundary correction generated by the boundary correction processing unit to generate the at least one extracted image of the foreground.
  • FIG. 1 is a block diagram showing an example configuration of an image processing system including an image processing section as an image processing apparatus according to one embodiment of the present invention.
  • FIG. 2 is a diagram illustrating an example of a hardware configuration of an image processing unit;
  • FIG. 3 is a flowchart illustrating an example of processing operations of an image processing unit.
  • FIG. 4 is a flowchart showing an example of details of the preprocessing in FIG. 3.
  • FIG. 5 is a flowchart showing an example of details of the learning process in FIG. 4.
  • FIG. 6 is a diagram showing an example of a color image.
  • FIG. 7 is a flow chart showing an example of the processing operation of the first image editing process in the image editing unit shown in FIG. 1.
  • FIG. 8 is a flow chart showing an example of the processing operation of the second image editing process in the image editing unit.
  • FIG. 9 is a diagram showing an example of a background image.
  • FIG. 10 is a diagram showing an example of a background depth image.
  • FIG. 11 is a diagram showing an example of a coordinate image.
  • FIG. 12 is a schematic diagram showing an example of a deep learning model.
  • FIG. 13 is a flowchart showing an example of the LUT generation processing in FIG. 4.
  • FIG. 14 is a diagram showing an example of a lookup table regarding colors between pixels.
  • FIG. 15 is a schematic diagram showing the color relationship between the correction target pixel and the search pixel.
  • FIG. 16 is a diagram showing an example of a lookup table regarding coordinates between pixels.
  • FIG. 17 is a schematic diagram showing the relationship of the coordinates of the correction target pixel and the search pixel.
  • FIG. 18 is a diagram showing an example of allocation of CPU threads to each unit related to image processing.
  • FIG. 19 is a flowchart showing an example of details of the initialization process in FIG. 3.
  • FIG. 20 is a flowchart showing an example of details of the image processing in FIG. 3.
  • FIG. 21 is a diagram showing an example of a mask image after boundary correction.
  • FIG. 22 is a diagram showing an example of an extracted image.
  • FIGS. 23A and 23B are diagrams showing the correspondence between the images involved in the image processing.
  • FIG. 1 is a block diagram showing an example configuration of an image processing system including an image processing section 100 as an image processing apparatus according to an embodiment of the present invention.
  • the image processing system includes this image processing section 100 , an imaging section 200 , a display section 300 and an image editing section 400 .
  • the image processing system may be configured such that each of these units is integrated as one device or housing, or may be configured from a plurality of devices. Also, multiple devices may be remotely located and connected via a network.
  • the imaging unit 200 includes, for example, a camera that captures color images, and sequentially transmits to the image processing unit 100 color moving images (color video) composed of a plurality of frames being captured.
  • the imaging unit 200 may be a recorder capable of recording a color image captured by a camera and reproducing and outputting the recorded color image.
  • In the following description, the image transmitted by the imaging unit 200 is assumed to be a color video, and the camera is assumed to capture that color video at a fixed angle of view; this embodiment is applicable under these assumptions.
  • The image processing unit 100 creates a mask image for extracting an arbitrary number of subjects (that is, at least one subject) from the color image transmitted from the imaging unit 200, combines this mask image with the original color image to create an extracted video, and transmits the created extracted video to the display unit 300.
  • the display unit 300 includes display devices such as a monitor and a projector, for example.
  • the display section 300 displays the extracted video transmitted from the image processing section 100 .
  • the mask image may be displayed on the display unit 300 by transmitting the mask image before being combined with the color image from the image processing unit 100 .
  • the image editing unit 400 creates a background image, a background depth image, and a coordinate image necessary for creating a mask video in the image processing unit 100. Details of these images will be described later.
  • The image processing unit 100 includes an image receiving unit 110, an inference unit 120, a boundary correction unit 130, an image synthesis unit 140, an image transmission unit 150, a storage unit 160, a learning unit 170, a LUT generation unit 180, and a parallel number management unit 190.
  • Storage unit 160 includes image storage unit 161 , model storage unit 162 , and LUT storage unit 163 .
  • the image receiving unit 110 receives the color video (or the color video and the depth video) from the imaging unit 200, and acquires the video frame by frame as a color image.
  • the image receiving unit 110 performs color conversion on each acquired color image as necessary.
  • Image receiving section 110 outputs the color image to inference section 120 , boundary correction section 130 and image composition section 140 .
  • The image receiving unit 110 can also write out the color image being acquired (or the color image and the depth image) as a file in response to an instruction from the user through an interface (not shown) of the image processing unit 100, and save it to the image storage unit 161.
  • the image storage unit 161 stores the color image (or the color image and the depth image) written out from the image receiving unit 110 as a learning target color image or a processing target color image.
  • the image storage unit 161 also stores the background image, background depth image, and coordinate image created by the image editing unit 400 .
  • the learning unit 170 performs deep learning using the learning target color image stored in the image storage unit 161 to generate a deep learning model. Details of deep learning in the learning unit 170 will be described later.
  • the model storage unit 162 stores the deep learning model learned by the learning unit 170 .
  • the LUT generation unit 180 generates in advance two types of lookup tables LUT1 and LUT2 regarding colors and coordinates based on rules set by the user. This rule allows you to adjust the file size of the lookup table. However, this adjustment must be made according to the performance of the CPU, memory, disk capacity, etc. of the device that constitutes the image processing unit 100 .
  • For example, rules may be set such that the colors handled during boundary correction in the boundary correction unit 130 (described later) are quantized from 256 gradations to 32 gradations, and that the search range of the pixels to be referred to has an upper limit of 64 pixels. Details of the method of generating the lookup tables LUT1 and LUT2 will be described later.
  • The LUT storage unit 163 stores the lookup tables LUT1 and LUT2 generated by the LUT generation unit 180.
  • After reading the deep learning model stored in the model storage unit 162, the inference unit 120 acquires as input images the color image to be processed output from the image receiving unit 110, and the background image, background depth image, and coordinate image stored in the image storage unit 161. The inference unit 120 then performs inference processing on the input images with reference to the deep learning model. As a result, the inference unit 120 can generate the arbitrary number of foregrounds learned in advance by the learning unit 170 as a mask image having an arbitrary number of channels. That is, the inference unit 120 generates a multi-channel image in which each channel expresses one type of foreground in the color image as a mask image. If the resolution of the images used at the time of learning is small, the inference unit 120 reduces the input color image accordingly, enlarges the mask image that is the output image, and outputs it to the boundary correction unit 130.
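  • The resolution-matching behavior described above can be sketched as follows. This is a minimal illustration only, assuming a Keras-style segmentation model object (`model.predict`) and OpenCV/NumPy; none of these names or signatures are specified by the text.

```python
# A hedged sketch of the inference step: shrink the inputs to the training
# resolution, infer a multi-channel mask, then enlarge the mask back to the
# original resolution. `model` and its predict() signature are assumptions.
import cv2
import numpy as np

def infer_mask(color_img, bg_img, bg_depth_img, coord_imgs, model, model_size=(512, 512)):
    orig_h, orig_w = color_img.shape[:2]
    planes = [color_img, bg_img, bg_depth_img, *coord_imgs]
    # Reduce every input plane to the resolution used at learning time.
    planes = [cv2.resize(p, model_size) for p in planes]
    planes = [p if p.ndim == 3 else p[..., None] for p in planes]
    x = np.concatenate(planes, axis=-1).astype(np.float32) / 255.0
    mask = model.predict(x[None, ...])[0]              # (h, w, num_foregrounds)
    # Enlarge the output mask back to the original resolution for correction.
    return cv2.resize(mask, (orig_w, orig_h), interpolation=cv2.INTER_LINEAR)
```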
  • After reading the two types of lookup tables LUT1 and LUT2 stored in the LUT storage unit 163, the boundary correction unit 130 acquires the color image output from the image receiving unit 110, the background image stored in the image storage unit 161, and the mask image output from the inference unit 120, and performs boundary correction processing. Thereby, the boundary correction unit 130 can generate a mask image in which the boundary region of the mask is precisely corrected. The boundary correction unit 130 outputs the generated boundary-corrected mask image to the image synthesis unit 140.
  • Specifically, for the boundary region between the foreground, which is a subject region in the color image, and the background, the boundary correction unit 130 determines whether each determination target pixel is the foreground or the background based on the color distance and the coordinate distance between that pixel and its surrounding pixels, also referring to the corresponding pixels of the background image, and corrects the boundary region accordingly.
  • Normally, the color distance and the coordinate distance between two pixels are obtained by calculation, but in this embodiment only the distance calculation is replaced by references to the two types of lookup tables LUT1 and LUT2, which speeds up the processing.
  • When the mask image has multiple channels, the boundary correction unit 130 may separate the channels, divide the mask image into multiple one-channel mask images, and perform the correction processing on each of them. Alternatively, the boundary correction unit 130 may select only one channel and execute the boundary correction process only once.
  • the image synthesis unit 140 masks the color image output from the image reception unit 110 with the mask image after boundary correction output from the boundary correction unit 130, thereby generating an extracted image in which only the foreground is extracted.
  • the image synthesizing unit 140 transmits the generated extracted image to the image transmitting unit 150 .
  • the image synthesizing unit 140 may generate multiple extracted images. Alternatively, the image synthesizing unit 140 may generate a single extracted image by synthesizing a plurality of foregrounds.
  • the image synthesizing unit 140 may simultaneously output an image obtained by processing the mask image without synthesizing. This image can be used, for example, for expressing shadows in the foreground.
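  • As a concrete illustration of the masking described in the preceding items, the following minimal sketch (assuming 8-bit NumPy arrays with 255 denoting foreground in the mask; names are illustrative and not from the text) multiplies the color image by the per-pixel mask weight.

```python
# A minimal sketch of extracting the foreground by masking the color image
# with one boundary-corrected mask channel; 0 = background, 255 = foreground.
import numpy as np

def extract_foreground(color_img, mask_img):
    alpha = mask_img.astype(np.float32) / 255.0        # per-pixel foreground weight
    if alpha.ndim == 2:
        alpha = alpha[..., None]                        # broadcast over the color channels
    return (color_img.astype(np.float32) * alpha).astype(np.uint8)
```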
  • the image transmitting unit 150 converts the extracted image output from the image synthesizing unit 140 into one video frame, and transmits it to the display unit 300 as an extracted video. Note that when there are a plurality of extracted images output from the image synthesizing section 140, the image transmitting section 150 transmits them to the display section 300 as a plurality of extracted images.
  • The parallel number management unit 190 sets in advance the number of CPU threads exclusively used by each unit before the processing of the image receiving unit 110, the inference unit 120, the boundary correction unit 130, the image synthesis unit 140, and the image transmission unit 150 is started.
  • FIG. 2 is a diagram showing an example of the hardware configuration of a device that constitutes the image processing unit 100.
  • This device can be, for example, a general purpose computer such as a personal computer.
  • the image processing unit 100 includes a processor 11 , a memory 12 , a GPU (Graphics Processing Unit) 13 , an external storage device 14 , an input device 15 , an output device 16 and a communication IF device 17 .
  • Memory 12 , external storage device 14 , input device 15 , output device 16 and communication IF device 17 are connected to processor 11 via bus 18 .
  • the processor 11 includes a multi-core/multi-threaded CPU, and is capable of concurrently executing multiple pieces of information processing. Also, the processor 11 may include a plurality of such CPUs.
  • the memory 12 can include RAM, which is volatile memory, and ROM, which is nonvolatile memory.
  • RAM is used as a work area for the processor 11, and each function of the image processing section 100 is realized by the processor 11 executing an image processing program for the image processing section 100 loaded on the RAM.
  • the GPU 13 is a device that performs high-speed calculation processing for deep learning.
  • the external storage device 14 includes non-volatile memories such as HDDs and SSDs that can be written and read at any time.
  • The external storage device 14 is used to store various application programs such as the image processing program, and to store various data acquired and created while the processor 11 executes these application programs or performs basic control processing. The external storage device 14 can function as, for example, the image storage unit 161, the model storage unit 162, and the LUT storage unit 163.
  • The input device 15 includes, for example, a keyboard and pointing devices such as a mouse.
  • the output device 16 includes, for example, a liquid crystal monitor. Also, the input device 15 and the output device 16 may be a liquid crystal touch panel or the like having both an input function and a display function.
  • the communication IF device 17 includes, for example, one or more wired or wireless communication interface units, and receives color images (or color images and depth images) transmitted from the imaging unit 200 .
  • As the wired interface, for example, a wired LAN or a USB (Universal Serial Bus) interface is used. As the wireless interface, an interface adopting a low-power wireless data communication standard is used.
  • FIG. 3 is a flowchart showing an example of the processing operation of the image processing unit 100.
  • When the execution of the image processing program is instructed by the user through the input device 15, the processor 11 starts the operation shown in this flowchart.
  • the processor 11 first performs preprocessing to prepare data necessary for image processing (step S100). The details of this preprocessing will be described later.
  • the processor 11 determines whether or not the user has input an instruction to start actual image processing from the input device 15 (step S200). If the processor 11 determines that the image processing start instruction has not been input, it repeats step S200. In this way, the processor 11 waits for input of an instruction to start image processing.
  • When the start instruction is input, the processor 11 executes the operation as the parallel number management unit 190 and allocates the number of CPU threads to each process in the image processing (step S300).
  • Next, the processor 11 executes initialization processing for setting the information necessary for operating as the inference unit 120 and the boundary correction unit 130 (step S400). The details of this initialization process will be described later.
  • the processor 11 executes image processing for extracting a foreground image from the color image transmitted from the imaging unit 200 and outputting it to the display unit 300 (step S500). The details of this image processing will be described later.
  • the processor 11 determines whether or not the user has input an image processing end instruction from the input device 15 (step S600). If the processor 11 determines that the image processing end instruction has not been input, the processor 11 returns to step S500. Thus, the processor 11 repeatedly executes the image processing in step S500 until an instruction to end the image processing is input.
  • When it is determined in step S600 that the end instruction has been input, the processor 11 ends the processing shown in this flowchart.
  • FIG. 4 is a flow chart showing an example of the details of the pre-processing in step S100.
  • the processor 11 first executes a learning process and stores the deep learning model in, for example, the model storage unit 162 provided in the external storage device 14 (step S110). Further, the processor 11 executes LUT generation processing, and stores the lookup table in the LUT storage unit 163 provided in the external storage device 14, for example (step S120).
  • FIG. 5 is a flowchart showing an example of details of the learning process in step S110.
  • In the learning process, the processor 11 first operates as the image receiving unit 110 to receive the color video (or the color video and the depth video) from the imaging unit 200 and to acquire the color video to be learned frame by frame as color images (step S111). Each acquired color image is subjected to color conversion as necessary and temporarily stored in a work area secured in the memory 12.
  • FIG. 6 is a diagram showing an example of the acquired and temporarily stored color image IMG1.
  • the processor 11 saves the acquired color image IMG1 as a file in, for example, the image storage unit 161 provided in the external storage device 14 (step S112).
  • the processor 11 determines whether or not the user has input an instruction to start the learning process from the input device 15 (step S113). When determining that the instruction to start the learning process has not been input, the processor 11 repeats step S113. In this way, the processor 11 waits for an instruction to start learning processing.
  • the instruction to start the learning process will be put on hold until the images required for learning are ready. Images necessary for this learning can be generated by the image editing unit 400 and stored in the image storage unit 161 provided in the external storage device 14, for example.
  • the image editing unit 400 performs two types of image editing processing.
  • FIG. 7 is a flow chart showing an example of the processing operation of the first image editing process in the image editing unit 400
  • FIG. 8 is a flow chart showing an example of the processing operation of the second image editing process.
  • the computer that configures the image processing section 100 can also operate as the image editing section 400 . That is, when the execution of the image editing program is instructed by the user through the input device 15, the processor 11 starts the operations shown in these flow charts.
  • the processor 11 first acquires a color image group including a plurality of color images IMG1 stored, for example, in the image storage unit 161 provided in the external storage device 14 (step S701).
  • the obtained color image group is temporarily stored in the work area secured in the memory 12 .
  • the processor 11 selects one or more color images IMG1 to be used for processing from among the plurality of color images IMG1 included in the color image group (step S702).
  • When a single color image IMG1 is selected, it is a color image IMG1 in which the subject to be extracted does not appear. This can be achieved, for example, by temporarily creating a shooting environment in which the subject is absent and selecting the color image IMG1 acquired at that time and stored, for example, in the image storage unit 161 provided in the external storage device 14.
  • When a plurality of color images IMG1 are selected, color images IMG1 in which the position of the subject to be extracted differs, that is, a plurality of color images IMG1 having different subject positions, are selected.
  • Based on the one or more selected color images IMG1, the processor 11 generates a background image, which is an image of only the background in which the subject to be extracted does not appear (step S703).
  • the background image can be generated by synthesizing and editing the plurality of color images IMG1.
  • FIG. 9 is a diagram showing an example of the background image IMG2.
  • When combining a plurality of color images IMG1, for pixels of a region that does not contain the subject and does not change, the processor 11 does not simply adopt the pixels of any one color image IMG1, but takes the mean over the plurality of color images IMG1. By doing so, noise from the camera sensor and the like can be reduced, and a more accurate background image IMG2 can be created; a minimal sketch of this averaging follows.
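```python
# A minimal sketch of building the background image IMG2 by averaging several
# subject-free frames, assuming aligned 8-bit arrays from the fixed camera;
# averaging suppresses per-frame sensor noise. Names are illustrative.
import numpy as np

def make_background(frames):
    stack = np.stack([f.astype(np.float32) for f in frames], axis=0)
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)
```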
  • FIG. 10 is a diagram showing an example of the background depth image IMG3. If the image received by the image receiving unit 110 is only a color image, the processor 11 creates a pseudo background depth image IMG3 from the background image IMG2 generated in step S703. When a color video and a depth video are received, the processor 11 acquires a plurality of depth images stored, for example, in the image storage unit 161 provided in the external storage device 14, and creates the background depth image IMG3 from them.
  • FIG. 11 is a diagram showing an example of the coordinate image IMG4.
  • the coordinate image IMG4 includes an x-coordinate image IMG4X and a y-coordinate image IMG4Y, and is always the same image regardless of the contents of the color image IMG1 and the background image IMG2.
  • the x-coordinate image IMG4X has the same resolution as the color image IMG1 and continuously varies horizontally from the minimum pixel value to the maximum pixel value (for example, from 0 to 255 if the image bit depth is 8 bits). It is a 1-channel image.
  • the y-coordinate image IMG4Y has the same resolution as the color image IMG1, and is vertically continuous from the minimum pixel value to the maximum pixel value (for example, from 0 to 255 if the bit depth of the image is 8 bits). It is a changing one-channel image.
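  • The coordinate image can be generated directly from the target resolution; the following is a minimal sketch for an 8-bit pipeline (function and variable names are illustrative, not from the text).

```python
# Build the x-coordinate image IMG4X (ramps 0..255 left to right) and the
# y-coordinate image IMG4Y (ramps 0..255 top to bottom) at the color image
# resolution; the content of the color image itself is irrelevant here.
import numpy as np

def make_coordinate_images(width, height, max_val=255):
    x_ramp = np.linspace(0, max_val, width).astype(np.uint8)
    y_ramp = np.linspace(0, max_val, height).astype(np.uint8)
    img4x = np.tile(x_ramp, (height, 1))            # shape (height, width)
    img4y = np.tile(y_ramp[:, None], (1, width))    # shape (height, width)
    return img4x, img4y
```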
  • the processor 11 stores the generated background image IMG2, the generated background depth image IMG3, and the coordinate image IMG4 (x-coordinate image IMG4X, y-coordinate image IMG4Y) in the image storage unit 161 provided in the external storage device 14, for example. Save (step S706).
  • the processor 11 first acquires a color image group including a plurality of color images IMG1 stored in, for example, the image storage unit 161 provided in the external storage device 14 (step S711).
  • the color video being captured is recorded by an external device different from the image processing unit 100, and the video is decomposed into images frame by frame and stored in the image storage unit 161.
  • a plurality of color images IMG1 may be obtained therefrom in step S711.
  • the obtained color image group is temporarily stored in the work area secured in the memory 12 .
  • Next, the processor 11 selects, from among the plurality of color images IMG1 included in the color image group, color images IMG1 in which the poses and positions of the subject are as diverse as possible (step S712). This can be done, for example, by selecting a color image IMG1 that has many areas whose pixel values differ from those of a reference color image, and then repeating the selection with the selected image as the new reference.
  • the processor 11 creates a mask image with only the region of the target subject as the foreground for each of the selected color images IMG1 (step S713).
  • the region of the target subject is, for example, a region in which pixel values differ between color images.
  • other well-known techniques may be used to determine the area of the subject.
  • the processor 11 saves the combination of the created mask image and the original color image IMG1, for example, in the image storage unit 161 provided in the external storage device 14 (step S714).
  • After the background image IMG2, the background depth image IMG3, the coordinate image IMG4 (x-coordinate image IMG4X, y-coordinate image IMG4Y), and the combinations of mask images and color images IMG1 are stored in the image storage unit 161 as described above, the user inputs an instruction to start the learning process.
  • Then, the processor 11 determines YES in step S113, operates as the learning unit 170, and generates a group of teacher images (step S114). That is, the processor 11 acquires, for example, a combination of a color image IMG1 and a mask image stored in the image storage unit 161 provided in the external storage device 14, together with the background image IMG2, the background depth image IMG3, and the coordinate image IMG4 (x-coordinate image IMG4X, y-coordinate image IMG4Y), and uses these as one set of teacher images. The processor 11 aggregates a plurality of such sets of teacher images to generate the group of teacher images.
  • FIG. 12 is a schematic diagram showing an example of the deep learning model at this time.
  • the processor 11 may adjust the resolution of the input image to the GPU 13, change the number of convolutions, or change the number of channels to be increased during convolution in accordance with the performance of the GPU 13. good.
  • the processor 11 stores the learned model in, for example, the model storage unit 162 provided in the external storage device 14 (step S116), and ends the learning process of step S110.
  • FIG. 13 is a flow chart showing an example of the details of the LUT generation process in step S120.
  • the processor 11 operates as the LUT generator 180 and first sets the degree of quantization of pixel values (step S121).
  • The purpose of this quantization is to reduce the file size of the color lookup table. For example, if the image to be processed is a 24-bit image (a 3-channel image with 256 gradations and pixel values from 0 to 255), quantization converts it into an 18-bit image (a 3-channel image with 32 gradations and pixel values from 0 to 31) or into a 12-bit image (a 3-channel image with 16 gradations and pixel values from 0 to 15).
  • the processor 11 sets the degree of quantization based on rules set by the user from the input device 15 .
  • the processor 11 sets the upper limit of the search range of search pixels, which are peripheral pixels during boundary correction (step S122).
  • the upper limit of the search range is the number of surrounding pixels to be referred to for the boundary correction target pixel, which is the determination target pixel for the foreground, background, or unclassified area. For example, by setting the upper limit of the search range to 64 pixels or 32 pixels, the file size of the lookup table regarding coordinates can be reduced.
  • the processor 11 sets the upper limit of this search range based on rules set by the user from the input device 15 .
  • the processor 11 generates two types of lookup tables, that is, a lookup table regarding colors between pixels and a lookup table regarding coordinates between pixels (step S123).
  • FIG. 14 is a diagram showing an example of the lookup table LUT1 regarding colors between pixels.
  • This lookup table LUT1 is an example of a lookup table for colors when quantizing from 24 bits to 18 bits.
  • FIG. 15 is a schematic diagram showing the color relationship between the correction target pixel PIS during boundary correction, which is the determination target pixel, and the search pixel PIR, which is the surrounding pixel.
  • The lookup table LUT1 relating to colors between pixels takes as input every color combination of the correction target pixel PIS and the search pixel PIR, and outputs the color distance between the two pixels, that is, the color difference, together with the index assigned to that distance.
  • the color space is not limited to the RGB space, and may be another color space such as the YUV space in accordance with the color space of the image to be handled.
  • the processor 11 can generate a lookup table LUT1 relating to colors between pixels, for example, by performing calculations as shown in the following equations. Processor 11 can perform these operations at high speed by bit shift operations.
  • The color index i_color has the following relationship.
  • The color distance d_color between pixels is corrected as follows to obtain the color distance d'_color.
  • The lookup table LUT1 for colors between pixels can then be written as follows.
  • The elements i_Rs, i_Gs, i_Bs, i_Rr, i_Gr, and i_Br of the color index i_color can be obtained as follows.
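  • The equations referenced above are not reproduced in this text, so the following is only a hedged sketch of the idea they describe: quantize each 8-bit channel by a bit shift, enumerate every quantized color pair of the correction target pixel and the search pixel, and precompute a color distance for each pair. The Euclidean distance and the use of 16 gradations (rather than the 32 of the example above) are assumptions made to keep the sketch small, not the patent's exact formulas.

```python
# Hedged sketch of LUT1: a table indexed by the quantized (R,G,B) of the
# correction target pixel and of the search pixel, holding a precomputed
# color distance. 16 gradations per channel keeps the table at 16^6 entries.
import numpy as np

BITS = 4                  # gradations per channel after quantization (2^4 = 16)
SHIFT = 8 - BITS          # 256 -> 16 gradations via a right bit shift
LEVELS = 1 << BITS

def build_color_lut():
    rs, gs, bs, rr, gr, br = np.ogrid[0:LEVELS, 0:LEVELS, 0:LEVELS,
                                      0:LEVELS, 0:LEVELS, 0:LEVELS]
    # Euclidean distance in quantized RGB space (illustrative distance choice).
    return np.sqrt((rs - rr) ** 2 + (gs - gr) ** 2 + (bs - br) ** 2).astype(np.float32)

def lookup_color_distance(lut, pix_s, pix_r):
    rs, gs, bs = (int(c) >> SHIFT for c in pix_s)   # quantize the target pixel
    rr, gr, br = (int(c) >> SHIFT for c in pix_r)   # quantize the search pixel
    return lut[rs, gs, bs, rr, gr, br]
```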
  • FIG. 16 is a diagram showing an example of the lookup table LUT2 regarding coordinates between pixels.
  • This lookup table LUT2 is an example of a lookup table relating to coordinates between pixels when the search range upper limit is set to 64 pixels.
  • FIG. 17 is a schematic diagram showing the relationship regarding the coordinates of the correction target pixel PIS and the search pixel PIR during boundary correction.
  • The lookup table LUT2 relating to coordinates between pixels takes as input the combination of the absolute values of the coordinate differences between the correction target pixel PIS and the search pixel PIR, and outputs the distance between the two pixels together with the index assigned to that distance.
  • the processor 11 can generate a lookup table LUT2 relating to the distance between pixels, for example, by performing calculations as shown in the following equations. Processor 11 can perform these operations at high speed by bit shift operations.
  • The distance d_coord between the pixels (for example, the Euclidean distance) is obtained as follows.
  • The distance index i_coord has the following relationship.
  • The lookup table LUT2 regarding coordinates between pixels can be written as follows.
  • Each element of the coordinate index i_coord can be obtained as follows.
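  • Likewise for LUT2, the following hedged sketch (again not the patent's exact formulas, which are not reproduced here) indexes the table by the absolute coordinate differences |dx| and |dy| up to the search range upper limit and precomputes the Euclidean distance.

```python
# Hedged sketch of LUT2: a (range+1) x (range+1) table of Euclidean distances
# indexed by |dx| and |dy| between the correction target and search pixels.
import numpy as np

SEARCH_RANGE = 64   # upper limit of the search range, as in the example above

def build_coord_lut(search_range=SEARCH_RANGE):
    dx, dy = np.ogrid[0:search_range + 1, 0:search_range + 1]
    return np.sqrt(dx ** 2 + dy ** 2).astype(np.float32)

def lookup_coord_distance(lut, xs, ys, xr, yr):
    return lut[abs(xs - xr), abs(ys - yr)]
```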
  • After creating the two types of lookup tables, the color lookup table LUT1 and the coordinate lookup table LUT2, as described above, the processor 11 stores them, for example, in the LUT storage unit 163 provided in the external storage device 14 (step S124). The processor 11 then terminates the LUT generation process of step S120.
  • FIG. 18 is a diagram showing an example of allocation of CPU threads to each unit involved in image processing in step S300.
  • If the processor 11 has, for example, 24 CPU threads, those 24 CPU threads are allocated exclusively among the image receiving unit 110, the inference unit 120, the boundary correction unit 130, the image synthesis unit 140, and the image transmission unit 150.
  • the processor 11 allocates 3 CPU threads to each of the image reception unit 110 , the inference unit 120 , the image synthesis unit 140 and the image transmission unit 150 , and allocates 12 CPU threads to the boundary correction unit 130 .
  • The processor 11 allocates as many threads as possible to the boundary correction unit 130 so as to improve the processing speed of the entire system, and the allocation among the units is adjusted to keep a good balance.
  • FIG. 19 is a flowchart showing an example of details of the initialization process in step S400.
  • the processor 11 first acquires the background image IMG2, the background depth image IMG3, and the coordinate image IMG (x-coordinate image IMG4X, y-coordinate image IMG4Y) stored in, for example, the image storage unit 161 provided in the external storage device 14 (step S401). Each acquired image is temporarily stored in a work area secured in the memory 12 .
  • the processor 11 reads the deep learning model stored in, for example, the model storage unit 162 provided in the external storage device 14, and temporarily stores it in the work area secured in the memory 12 (step S402).
  • the processor 11 reads the color lookup table LUT1 and the coordinate lookup table LUT2 stored in the LUT storage unit 163 provided in the external storage device 14, for example, and temporarily stores them in the work area secured in the memory 12. (step S403).
  • FIG. 20 is a flow chart showing an example of details of the image processing in step S500.
  • the CPU provided in the processor 11 first executes the operation as the image receiving unit 110 using, for example, three CPU threads. That is, the CPU thread assigned to operate as the image receiving unit 110 receives the color video (or the color video and the depth video) from the imaging unit 200, and acquires a color image of one frame in the color video (step S501). Then, the CPU thread performs color conversion on the acquired color image as necessary, and temporarily saves it in the work area secured in the memory 12 as a color image to be processed.
  • FIG. 6 is a diagram showing an example of the acquired and temporarily stored color image IMG1 to be processed.
  • Next, the CPU of the processor 11 uses, for example, three CPU threads to perform the operation as the inference unit 120 on the temporarily stored color image IMG1 to be processed. That is, the CPU threads assigned to operate as the inference unit 120 read out, as input images, the background image IMG2, the background depth image IMG3, and the coordinate image IMG4 temporarily stored in the memory 12 by the process of step S400 and, as indicated by the dashed arrows in FIG. 20, the color image IMG1 temporarily stored in the memory 12 by the process of step S501.
  • the CPU thread executes inference processing on this input image by referring to the deep learning model temporarily stored in the memory 12 by the processing of step S400, and generates an arbitrary number of foreground mask images ( step S502).
  • The CPU threads temporarily save the generated mask image in the work area secured in the memory 12 and end the image processing operation shown in FIG. 20. After that, the CPU threads return to this image processing operation again via the determination in step S600 and wait for the acquisition of the next frame's color image IMG1 to be processed in step S501.
  • In parallel, the CPU of the processor 11 uses, for example, 12 CPU threads to perform the operation of the boundary correction unit 130 on the temporarily stored mask image. That is, the CPU threads assigned to operate as the boundary correction unit 130 acquire the background image IMG2 temporarily stored in the memory 12 by the process of step S400, the mask image temporarily stored by the process of step S502, and the color image IMG1 temporarily stored in the memory 12 by the process of step S501.
  • Then, the CPU threads execute boundary correction processing using the two types of lookup tables LUT1 and LUT2 temporarily stored in the memory 12 by the processing of step S400, thereby finely correcting the boundary region between the foreground and the background of each mask image (step S503).
  • The CPU threads temporarily save each corrected mask image as a boundary-corrected mask image in the work area secured in the memory 12 and end the image processing operation shown in FIG. 20. After that, the CPU threads return to this image processing operation again via the determination in step S600, acquire the color image IMG1 of the next frame in step S501, and wait for the mask image to be generated in step S502.
  • FIG. 21 is a diagram showing an example of the mask image IMG5 after boundary correction. This example shows a case where the color image IMG1 shown in FIG. 6 is input and two boundary-corrected mask images IMG51 and IMG52 have been generated.
  • Next, the CPU of the processor 11 uses, for example, three CPU threads to execute the operation as the image synthesis unit 140 using an arbitrary number of the temporarily stored boundary-corrected mask images IMG5, as indicated by the dashed arrows in FIG. 20. That is, the CPU threads assigned to operate as the image synthesis unit 140 read the color image IMG1 temporarily stored in the memory 12 by the process of step S501, as indicated by the dash-dotted arrow in FIG. 20.
  • The CPU threads mask this color image IMG1 with the arbitrary number of boundary-corrected mask images IMG5 temporarily stored in the memory 12 by the process of step S503, thereby generating an arbitrary number of extracted images in which only the foreground is extracted (step S504).
  • The CPU threads temporarily save each generated extracted image in the work area secured in the memory 12 and end the image processing operation shown in FIG. 20. After that, the CPU threads return to this image processing operation again via the determination in step S600, acquire the color image IMG1 of the next frame in step S501, and wait for the boundary-corrected mask image IMG5 to be generated in step S503.
  • FIG. 22 is a diagram showing an example of the extracted image IMG6. This example shows a case where the color image IMG1 shown in FIG. 6 is input and two extracted images IMG61 and IMG62 have been generated.
  • Finally, the CPU of the processor 11 uses, for example, three CPU threads to execute the operation as the image transmission unit 150 on an arbitrary number of the temporarily stored extracted images IMG6. That is, the CPU threads assigned to operate as the image transmission unit 150 convert the arbitrary number of extracted images IMG6 temporarily stored in the memory 12 by the process of step S504 into one video frame and transmit it as an extracted video to the display unit 300 (step S505). Then, the CPU threads end the image processing operation shown in FIG. 20. After that, the CPU threads return to this image processing operation again via the determination in step S600 and wait for the extracted image IMG6 to be generated in step S504.
  • FIG. 23 is a diagram showing the correspondence of each image related to the above image processing.
  • a mask image IMG7 is generated by performing inference processing on the color image IMG1 obtained from the received color video with reference to the deep learning model. By performing boundary correction processing on this mask image IMG7 using two kinds of lookup tables LUT1 and LUT2, a mask image IMG5 after boundary correction is generated.
  • An extracted image IMG6 is generated by masking the color image IMG1 with the mask image IMG5 after boundary correction.
  • As described above, in the image processing unit 100 as the image processing apparatus according to the embodiment, at least a plurality of learning target color images, mask images created for the plurality of color images, and a coordinate image IMG4 that does not depend on the content of the color images are stored in the image storage unit 161 in advance, and the learning unit 170 acquires the plurality of color images, the mask images, and the coordinate image IMG4 stored in the image storage unit 161 and performs deep learning with them to generate a deep learning model.
  • The image processing unit 100 then receives the color image IMG1 to be processed at the image receiving unit 110 serving as the image reception processing unit, and the inference unit 120 serving as the inference processing unit takes as input the color image IMG1 to be processed and the coordinate image IMG4 stored in the image storage unit 161, performs inference with reference to the deep learning model generated by the learning unit 170 and stored in the model storage unit 162, and extracts the foreground, which is at least one subject area in the color image IMG1 to be processed, to generate at least one mask image IMG7.
  • Further, the image processing unit 100 inputs the color image IMG1 to be processed and the mask image IMG7 generated by the inference unit 120 to the boundary correction unit 130 serving as the boundary correction processing unit, which determines, for the boundary region between each of the at least one foreground (subject area) and the background in the color image IMG1 to be processed, whether each determination target pixel is the foreground or the background based on the color and coordinate relationship between that pixel and its peripheral pixels, and corrects the boundary region, thereby generating at least one boundary-corrected mask image IMG5. The image synthesis unit 140 serving as the image synthesis processing unit then synthesizes the color image IMG1 to be processed with the generated boundary-corrected mask image IMG5 to generate at least one foreground extracted image IMG6.
  • In this way, the image processing unit 100 takes as input the learning target color images and the mask images and coordinate image IMG4 (x-coordinate image IMG4X, y-coordinate image IMG4Y) based on them, constructs a deep learning model that performs segmentation, and performs inference processing on the color image IMG1 to be processed with reference to this deep learning model, so that an arbitrary number of subjects in the image can be extracted precisely and at high speed as the foreground.
  • Here, the processing target color image IMG1 received by the image receiving unit 110 is an image captured at a fixed shooting angle of view, and the plurality of learning target color images IMG1 stored in the image storage unit 161 are images obtained by photographing the same shooting range at the same shooting angle of view as the color image IMG1 to be processed.
  • Further, the image processing unit 100 acquires the background image IMG2 and the background depth image IMG3 stored in the image storage unit 161 in addition to the plurality of learning target color images, the mask images, and the coordinate image IMG4 at the learning unit 170 to perform deep learning. Accordingly, the inference unit 120 takes as input the background image IMG2 and the background depth image IMG3 stored in the image storage unit 161 in addition to the color image IMG1 to be processed and the coordinate image IMG4, and generates at least one mask image IMG7. The boundary correction unit 130 takes as input the background image IMG2 stored in the image storage unit 161 in addition to the color image IMG1 to be processed and the mask image IMG7, corrects the boundary region based on the color and coordinate relationship between the correction target pixel PIS, which is the determination target pixel, and the search pixel PIR, which is the surrounding pixel, also referring to the pixels of the background image IMG2 corresponding to the boundary region, and generates at least one boundary-corrected mask image IMG5.
  • In this way, the image processing unit 100 constructs a deep learning model that performs segmentation using the background depth image IMG3 and the coordinate image IMG4 (x-coordinate image IMG4X, y-coordinate image IMG4Y) in addition to the color image IMG1 and the background image IMG2, and performs inference processing with reference to this deep learning model, so that the foreground regions can be extracted more precisely.
  • When depth images are captured together with the plurality of learning target color images, the background depth image IMG3 stored in the image storage unit 161 is an image generated from the plurality of depth images; when depth images are not captured together with the plurality of learning target color images, the background depth image IMG3 stored in the image storage unit 161 is an image created artificially from the background image IMG2 generated based on the plurality of learning target color images.
  • Thus, the background depth image IMG3 can be stored even if depth images cannot be obtained together with the learning target color images, and if depth images can be obtained, a highly accurate background depth image IMG3 can be stored.
  • The image processing unit 100 is further provided with the LUT generation unit 180 that generates, based on rules set by the user, lookup tables for speeding up the determination of the color and coordinate relationship between the correction target pixel PIS, which is the determination target pixel, and the search pixel PIR, which is the peripheral pixel, and the boundary correction unit 130 uses the lookup tables generated by the LUT generation unit 180 to obtain the color and coordinate relationship between the correction target pixel PIS and the search pixel PIR in the boundary area and to determine whether the determination target pixel is the foreground or the background. Since lookup tables are used to determine the color and coordinate relationship between two pixels in the boundary correction, these regions can be extracted at higher speed.
  • Specifically, the image processing unit 100 causes the LUT generation unit 180 to generate the lookup table LUT1 relating to colors between pixels, which outputs the color difference, that is, the color distance, between pixels according to the color of the correction target pixel PIS, which is the determination target pixel, and the color of the search pixel PIR, which is the peripheral pixel, and the lookup table LUT2 relating to coordinates between pixels, which outputs the coordinate distance between pixels according to the coordinates of the correction target pixel PIS and the coordinates of the search pixel PIR. Since these lookup tables are used to calculate the color and coordinate distances in the boundary correction, the amount of calculation is reduced and the speed is increased, so that the foreground regions can be extracted precisely and at high speed.
  • In addition, in the image processing unit 100 as the image processing apparatus according to the embodiment, the image receiving unit 110, the inference unit 120, the boundary correction unit 130, the image synthesis unit 140, and the image transmission unit 150 are implemented on a multi-core, multi-threaded CPU, and the parallel number management unit 190 is further provided for controlling the number of CPU threads exclusively used by each unit.
  • the above embodiment is based on the premise that the angle of view of the captured color video is fixed. However, it does not have to be a color image captured by a camera with a fixed angle of view. In this case, since the background information cannot be used, the background image IMG2 and the background depth image IMG3 should be excluded from the input to construct the deep learning model.
  • In the above embodiment, the LUT generation process of step S120 is executed after the learning process of step S110, but these processes may be executed in parallel. The processes in the flowcharts shown in FIGS. 3 to 5, 7, 8, and 13 may likewise be switched in order or executed in parallel, unless a given process uses the results of a preceding process. Also, each process in the flowchart shown in FIG. 20 is assumed to be processed in parallel, but if the performance of the processor 11 permits, the processes may be performed in order so that each uses the results of the preceding processes.
  • the image editing unit 400 is provided outside the image processing unit 100 , but the functions of the image editing unit 400 may be incorporated into the image processing unit 100 .
  • The method described in each embodiment can be stored, as a program (software means) executable by a computer, in a recording medium such as a magnetic disk (floppy (registered trademark) disk, hard disk, etc.), an optical disc (CD-ROM, DVD, MO, etc.), or a semiconductor memory (ROM, RAM, flash memory, etc.), or may be transmitted and distributed via a communication medium.
  • the programs stored on the medium also include a setting program for configuring software means (including not only execution programs but also tables and data structures) to be executed by the computer.
  • a computer that realizes this apparatus reads a program recorded on a recording medium, and in some cases, builds software means by a setting program, and executes the above-described processes by controlling the operation by this software means.
  • the term "recording medium” as used herein is not limited to those for distribution, and includes storage media such as magnetic disks, semiconductor memories, etc. provided in computers or devices connected via a network.
  • the present invention is not limited to the above embodiments, and can be modified in various ways without departing from the gist of the invention at the implementation stage.
  • each embodiment may be implemented in combination as much as possible, and in that case, the combined effect can be obtained.
  • the above-described embodiments include inventions at various stages, and various inventions can be extracted by appropriately combining a plurality of disclosed constituent elements.
  • 100 Image processing unit, 110 Image receiving unit, 120 Inference unit, 130 Boundary correction unit, 140 Image synthesis unit, 150 Image transmission unit, 160 Storage unit, 161 Image storage unit, 162 Model storage unit, 163 LUT storage unit, 170 Learning unit, 180 LUT generation unit, 190 Parallel number management unit, 200 Imaging unit, 300 Display unit, 400 Image editing unit, IMG1 Color image, IMG2 Background image, IMG3 Background depth image, IMG4 Coordinate image, IMG4X x-coordinate image, IMG4Y y-coordinate image, IMG5, IMG51, IMG52 Boundary-corrected mask image, IMG6, IMG61, IMG62 Extracted image, IMG7 Mask image, LUT1, LUT2 Lookup table, PIR Search pixel, PIS Correction target pixel

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

According to one embodiment, an image processing device: uses a learning unit to generate a deep learning model in advance by deep learning using color images, mask images, and coordinate images stored in an image storage unit, the coordinate images being independent of the content of the color images; and inputs, to an inference processing unit, a color image to be processed received by an image reception processing unit and a coordinate image stored in the image storage unit, and performs inference with reference to the deep learning model to extract a foreground constituting at least one subject area and to generate at least one mask image. Then, the image processing device: uses a boundary correction processing unit to correct the boundary region between each of the at least one foreground and the background in the color image to be processed on the basis of the color and coordinate relationship between the pixels to be determined and their surrounding pixels in the boundary region, thereby generating at least one boundary-corrected mask image; and uses an image combination processing unit to combine the color image to be processed with the boundary-corrected mask image to generate an extracted image of the foreground or foregrounds.
PCT/JP2021/012033 2021-03-23 2021-03-23 Dispositif de traitement d'image, procédé de traitement d'image et programme de traitement d'image WO2022201318A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2023508223A JP7473076B2 (ja) 2021-03-23 2021-03-23 画像処理装置、画像処理方法及び画像処理プログラム
PCT/JP2021/012033 WO2022201318A1 (fr) 2021-03-23 2021-03-23 Dispositif de traitement d'image, procédé de traitement d'image et programme de traitement d'image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/012033 WO2022201318A1 (fr) 2021-03-23 2021-03-23 Dispositif de traitement d'image, procédé de traitement d'image et programme de traitement d'image

Publications (1)

Publication Number Publication Date
WO2022201318A1 true WO2022201318A1 (fr) 2022-09-29

Family

ID=83396568

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/012033 WO2022201318A1 (fr) 2021-03-23 2021-03-23 Dispositif de traitement d'image, procédé de traitement d'image et programme de traitement d'image

Country Status (2)

Country Link
JP (1) JP7473076B2 (fr)
WO (1) WO2022201318A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170287137A1 (en) * 2016-03-31 2017-10-05 Adobe Systems Incorporated Utilizing deep learning for boundary-aware image segmentation
JP2018129029A (ja) * 2017-02-08 2018-08-16 日本電信電話株式会社 画像処理装置、画像処理方法、および画像処理プログラム
WO2019225692A1 (fr) * 2018-05-24 2019-11-28 日本電信電話株式会社 Dispositif de traitement vidéo, procédé de traitement vidéo et programme de traitement vidéo

Also Published As

Publication number Publication date
JP7473076B2 (ja) 2024-04-23
JPWO2022201318A1 (fr) 2022-09-29

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 21932928; Country of ref document: EP; Kind code of ref document: A1
ENP Entry into the national phase
    Ref document number: 2023508223; Country of ref document: JP; Kind code of ref document: A
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 21932928; Country of ref document: EP; Kind code of ref document: A1