WO2019059460A1 - Image processing apparatus and method - Google Patents

Image processing apparatus and method

Info

Publication number
WO2019059460A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
original image
object cut
user input
corrected
Application number
PCT/KR2017/015476
Other languages
English (en)
Korean (ko)
Inventor
최승혁
Original Assignee
주식회사 이넘넷
Application filed by 주식회사 이넘넷 filed Critical 주식회사 이넘넷
Publication of WO2019059460A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Definitions

  • The present invention relates to an image processing apparatus and method for extracting an object cut from an original image using artificial intelligence technology that simulates functions of the human brain, such as recognition and judgment, by means of a neural network learning algorithm.
  • Artificial intelligence encompasses pattern recognition, machine learning, expert systems, neural networks, and natural language processing. It has been developed with the aim of enabling devices to make reasonable decisions through machine learning and artificial neural network technology, which increase the recognition rate for big data through self-learning.
  • An image processing apparatus includes: a first analyzing unit for obtaining image segmentation information in which an original image is divided into a plurality of regions having similar characteristics; a learning unit for identifying an object from the original image based on a learning result using a neural network, generating a segmented image in which the object extracted from the original image is divided into first to third regions using the image segmentation information, and outputting a corrected image in which one region of the segmented image is clearly corrected; and a processing unit for outputting an object cut generated by computing the original image and the corrected image, and for including the original image and the segmented image in the learning using the neural network.
  • The image processing apparatus may further comprise: a receiving unit for receiving first user input information and second user input information for the object cut in response to receipt of a rejection signal for the object cut; and a second analyzing unit for generating an additional segmented image by reinforcing a part of the segmented image of the object cut based on the image segmentation information corresponding to the first user input information and the second user input information, and for outputting an additional corrected image in which one region of the additional segmented image is clearly corrected.
  • The processing unit may output an additional object cut generated by computing the original image and the additional corrected image, and may include the original image and the additional segmented image in the learning using the neural network.
  • The image processing apparatus may repeatedly perform the operations of the receiving unit, the second analyzing unit, and the processing unit until an approval signal for the additional object cut is received.
  • the receiving unit may receive the first user input information for a foreground area included in the object cut and receive the second user input information for a background area included in the object cut.
  • An image processing method includes: obtaining image segmentation information in which an original image is divided into a plurality of regions having similar characteristics; identifying an object from the original image based on a learning result using a neural network, generating a segmented image in which the object extracted from the original image is divided into first to third regions using the image segmentation information, and outputting a corrected image in which one region of the segmented image is clearly corrected; and outputting an object cut generated by computing the original image and the corrected image, and including the original image and the segmented image in the learning using the neural network.
  • The image processing method may further comprise: receiving first user input information and second user input information for the object cut in response to receipt of a rejection signal for the object cut; and generating an additional segmented image by reinforcing a part of the segmented image of the object cut based on the image segmentation information corresponding to the first user input information and the second user input information, and outputting an additional corrected image in which one region of the additional segmented image is clearly corrected.
  • The image processing method may further include outputting an additional object cut generated by computing the original image and the additional corrected image, and including the original image and the additional segmented image in the learning using the neural network.
  • The image processing method may repeatedly perform the receiving, generating, and outputting operations until an approval signal for the additional object cut is received.
  • The receiving may comprise: receiving the first user input information for a foreground region included in the object cut; and receiving the second user input information for a background region included in the object cut.
  • Conventionally, an object cut is extracted by manually entering user input information. According to embodiments of the present invention, the object cut is automatically extracted from the original image and provided using the learning result of the neural network, so that object cuts can be extracted conveniently without user intervention.
  • The object cut is automatically extracted from the original image using the learning result of the neural network, and when the user is not satisfied with the extracted object cut, the user can intervene so that an additional object cut is extracted, thereby improving user satisfaction with the object cut.
  • FIG. 1 is a view for schematically explaining an image processing system according to an embodiment of the present invention.
  • FIG. 2 is a diagram for explaining a detailed configuration of an image processing apparatus in the image processing system of FIG. 1;
  • FIG. 3 is a diagram for explaining a detailed configuration of an artificial intelligence processing unit according to an embodiment of the image processing apparatus of FIG. 2.
  • FIG. 4 is a view for schematically explaining a detailed configuration of a first analysis unit of the artificial intelligence processing unit of FIG. 3.
  • FIG. 5 is a diagram for explaining a detailed configuration of an artificial intelligence processing unit according to another embodiment of the image processing apparatus of FIG. 2.
  • FIG. 6 is a diagram for explaining a detailed configuration of a second analysis unit of the artificial intelligence processing unit of FIG. 5.
  • FIG. 7 is a diagram for explaining a detailed configuration of a user terminal in the image processing system of FIG. 1.
  • FIGS. 8A to 8H are views showing examples of images processed by the image processing apparatus.
  • FIGS. 9 to 12 are flowcharts for explaining an image processing method according to an embodiment of the present invention.
  • FIG. 1 is a view for schematically explaining an image processing system according to an embodiment of the present invention.
  • the image processing system 1 may include an image processing apparatus 100, a user terminal 200, and a communication network 300.
  • The image processing apparatus 100 acquires image segmentation information in which an original image is divided into a plurality of regions having similar characteristics, and identifies an object from the original image based on the learning result using the neural network.
  • Using the image segmentation information, it outputs a segmented image in which the object extracted from the original image is divided into first to third regions, and a corrected image in which one region of the segmented image is clearly corrected.
  • In response to receipt of an approval signal for the generated object cut, the original image, the segmented image, and the object cut can be included in the learning using the neural network.
  • Inclusion in the learning using the neural network may mean using the original image and the segmented image as training data for the neural network.
  • The image processing apparatus 100 receives the first user input information and the second user input information for the object cut in response to receipt of a rejection signal for the generated object cut.
  • An additional segmented image is generated based on the image segmentation information corresponding to the first user input information and the second user input information, and an additional corrected image is generated in which one region of the additional segmented image is clearly corrected.
  • The original image and the additional segmented image can then be included in the learning using the neural network.
  • Until an approval signal for the additional object cut is received, the image processing apparatus 100 repeatedly receives the first and second user input information, generates an additional segmented image for the foreground region, outputs an additional corrected image, and outputs an additional object cut; the original image and the additional segmented image can be included in the learning using the neural network.
  • the user terminal 200 may display an image processing web page and / or an image processing application provided by the image processing apparatus 100.
  • The image processing apparatus 100 may transmit the image processing web page and/or image processing application to the user terminal 200, which serves as an image display apparatus, through the communication network 300.
  • The image processing apparatus 100 can perform user authentication for access to the image processing web page and/or image processing application.
  • the user terminal 200 can transmit the original image to the image processing apparatus 100.
  • the user terminal 200 can select an image stored therein as an original image and transmit the selected image to the image processing apparatus 100.
  • the user terminal 200 can execute a photo album application or the like to select a previously stored image as an original image.
  • the user terminal 200 can receive an image from an external server and select the original image.
  • the user terminal 200 may access a social network server, a cloud server, or a content providing server to download images.
  • the user terminal 200 can capture an image using a camera provided therein and select the captured image as an original image. At this time, the user terminal 200 can execute a camera application to capture an image.
  • The user terminal 200 can transmit an approval signal and/or a rejection signal for the object cut received from the image processing apparatus 100.
  • the first user input information and the second user input information may be transmitted at the request of the image processing apparatus 100.
  • The transmission of the first user input information and the second user input information may be repeated until the user terminal 200 transmits an approval signal for the object cut to the image processing apparatus 100.
  • The user terminal 200 may include a desktop computer 201 (FIG. 1), a smart phone 202, a notebook computer 203, a tablet PC, a smart TV, a personal digital assistant (PDA), a laptop, a media player, a micro server, a global positioning system (GPS) device, an electronic book terminal, a digital broadcast terminal, a navigation device, a kiosk, an MP3 player, a digital camera, and the like, but is not limited thereto.
  • the user terminal 200 may be a wearable terminal having a communication function and a data processing function, such as a watch, a pair of glasses, a hair band, and a ring.
  • The user terminal 200 is not limited to the above description, and any terminal capable of web browsing as described above may be used without limitation.
  • the communication network 300 connects the user terminal 200 with the image processing apparatus 100.
  • The communication network 300 may refer to a communication network that provides a connection path so that the user terminal 200 can access the image processing apparatus 100 and transmit/receive predetermined information.
  • The communication network 300 may be a wired network such as LANs (Local Area Networks), WANs (Wide Area Networks), MANs (Metropolitan Area Networks), or ISDNs (Integrated Service Digital Networks), or a wireless network such as wireless LANs, CDMA, or Bluetooth, but the scope of the present invention is not limited thereto.
  • FIG. 2 is a view for schematically explaining the detailed configuration of the image processing apparatus 100 of the image processing system 1 of FIG. 1. Referring to FIG. 2, the image processing apparatus 100 includes a communication unit 110, a storage medium 120, a program storage unit 130, a control unit 140, a database 150, and an artificial intelligence processing unit 160.
  • The communication unit 110 may provide a communication interface required to exchange transmission/reception signals between the image processing apparatus 100 and the user terminal 200 in the form of packet data in cooperation with the communication network 300. Further, the communication unit 110 can receive a predetermined information request signal from the user terminal 200 and transmit the information processed by the artificial intelligence processing unit 160 to the user terminal 200.
  • The communication network 300 is a medium connecting the image processing apparatus 100 and the user terminal 200, providing a connection path through which the user terminal 200 can access the image processing apparatus 100.
  • the communication unit 110 may be a device including hardware and software necessary for transmitting / receiving signals such as a control signal or a data signal through a wired / wireless connection with other network devices.
  • the storage medium 120 performs a function of temporarily or permanently storing data processed by the control unit 140.
  • the storage medium 120 may include magnetic storage media or flash storage media, but the scope of the present invention is not limited thereto.
  • The storage medium 120 may include internal memory and/or external memory, for example a volatile memory such as DRAM, SRAM, or SDRAM, or a non-volatile memory such as OTPROM (one-time programmable ROM), PROM, EPROM, EEPROM, NAND flash memory, or NOR flash memory.
  • The storage medium 120 may also include a flash drive such as a compact flash (CF) card, an SD card, a Micro-SD card, a Mini-SD card, an xD card, or a memory stick, or a storage device such as an HDD.
  • the storage medium 120 may include one or more instructions that configure the neural network, and one or more instructions that control the neural network.
  • The program storage unit 130 may be equipped with control software for performing: an operation of obtaining image segmentation information by dividing the original image received from the user terminal 200 into a plurality of regions having similar characteristics; an operation of identifying an object from the original image based on the learning result using the neural network; an operation of generating a segmented image in which the object extracted from the original image is divided into first to third regions using the image segmentation information, and a corrected image in which one region of the segmented image is clearly corrected; an operation of outputting an object cut generated by computing the original image and the corrected image; an operation of including the original image and the segmented image in the learning using the neural network; an operation of requesting and receiving first user input information and second user input information from the user terminal 200; and an operation of generating an additional segmented image in which a part of the segmented image of the object cut is reinforced, using the first user input information, the second user input information, and the image segmentation information.
  • The database 150 stores the original image received from the user terminal 200 and the various images and/or information generated by the artificial intelligence processing of the image processing apparatus 100, for example the image segmentation information for the original image, the segmented image, the corrected image, and the object cut, as training data for the neural network.
  • The database 150 can also store, as training data for the neural network, the information generated in the course of producing an additional object cut (for example, the additional segmented image, the additional corrected image, and the additional object cut) based on the first user input information and the second user input information received from the user in response to a rejection signal for the object cut.
  • the database 150 may further include a user database for storing user information.
  • the user database may store user information for a user who wants to use a service for extracting an object cut from an original image.
  • The user information may include basic information about the user, such as name, affiliation, personal details, gender, age, contact number, e-mail, and address; authentication (login) information such as an ID (or e-mail) and a password; and connection-related information such as the connection country, the connection location, the device used for connection, and the connected network environment.
  • The artificial intelligence processing unit 160 extracts and provides an object cut from the original image based on the learning result using the neural network, and may include the information and/or images generated during object cut extraction in the learning using the neural network.
  • When a rejection signal is received from the user for the extracted object cut, the artificial intelligence processing unit 160 extracts and provides an additional object cut using the first user input information and the second user input information received from the user.
  • The information and/or images generated in this process can be included in the learning using the neural network, and the object cut extraction process can be repeated until an approval signal is received from the user.
  • Artificial intelligence (AI) technology consists of machine learning (deep learning) and element technologies that utilize machine learning.
  • Machine learning is an algorithmic technology that classifies and learns the characteristics of input data by itself.
  • Element technology simulates functions of the human brain, such as recognition and judgment, using machine learning algorithms such as deep learning, and includes technical fields such as linguistic understanding, visual understanding, reasoning/prediction, knowledge representation, and motion control.
  • Linguistic understanding is a technology for recognizing and applying/processing human language and characters, and may include natural language processing, machine translation, dialogue systems, query response, and speech recognition/synthesis.
  • Visual understanding is a technology for recognizing and processing objects as human vision does, and may include object identification, object tracking, image search, human recognition, scene understanding, spatial understanding, and image enhancement.
  • Inference/prediction is a technology for judging information and logically inferring and predicting from it, and includes knowledge/probability-based reasoning, optimization prediction, preference-based planning, and recommendation.
  • Knowledge representation is a technology that automates the processing of human experience information into knowledge data, and may include knowledge building (data generation/classification) and knowledge management (data utilization).
  • Motion control is a technology for controlling the autonomous driving of vehicles and the motion of robots, and may include movement control (navigation, collision avoidance, driving) and manipulation control (behavior control).
  • According to embodiments of the present invention, an object cut is automatically extracted from an original image using the learning result of an artificial-intelligence-based neural network, so that object cuts can be extracted conveniently without user intervention.
  • Furthermore, when the user is not satisfied with the automatically extracted object cut, the user can intervene so that an additional object cut is extracted, thereby improving user satisfaction with the object cut.
  • FIG. 3 is a diagram for explaining a detailed configuration of the artificial intelligence processing unit 160 according to an embodiment of the image processing apparatus 100 of FIG.
  • the AI processing unit 160 may include a first analyzing unit 161, a learning unit 162, and a processing unit 163.
  • the first analysis unit 161 can obtain the image segmentation information obtained by dividing the original image into a plurality of regions having similar characteristics.
  • For example, the first analysis unit 161 may take an arbitrary position in the original image as a seed and find at least one region around the seed having similar brightness, edges, color, or the like; if adjacent regions have the same characteristics, they are merged into one region, regions with the same characteristics are grown step by step, and finally the entire original image is divided into a plurality of regions having similar characteristics (a sketch of this idea follows below).
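  • As a rough illustration of the region-growing idea described above, the following Python sketch grows a single region from one seed; the 4-neighbour expansion, the intensity threshold, and the function name are illustrative assumptions rather than the patented algorithm:

```python
import numpy as np
from collections import deque

def grow_region(image, seed, threshold=12.0):
    """Grow one region from a seed pixel by absorbing 4-connected
    neighbours whose colour is similar to the seed's (assumed rule)."""
    h, w = image.shape[:2]
    visited = np.zeros((h, w), dtype=bool)
    region = []
    queue = deque([seed])
    visited[seed] = True
    seed_value = image[seed].astype(float)
    while queue:
        y, x = queue.popleft()
        region.append((y, x))
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx]:
                if np.linalg.norm(image[ny, nx].astype(float) - seed_value) < threshold:
                    visited[ny, nx] = True
                    queue.append((ny, nx))
    return region
```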
  • the first analysis unit 161 may store the acquired image segmentation information in the database 150.
  • FIG. 4 is a diagram illustrating a detailed configuration of the first analysis unit 161 of the artificial intelligence processing unit 160 of FIG. 3.
  • The first analyzing unit 161 may include a setting unit 161-1, a calculating unit 161-2, a clustering unit 161-3, a first generating unit 161-4, and a second generating unit 161-5.
  • the setting unit 161-1 can set the first parameter and the second parameter for acquiring the image segmentation information from the original image (Fig. 8A).
  • The first parameter may include the number of seeds; it may be received from the user terminal 200, computed as the size of the original image divided by the number of pixels per region, or set to a random value for each operation.
  • The second parameter may include the number of repetitions for calculating the distance from each seed to each of all the pixels. If the distance calculation were repeated indefinitely without limiting the number of repetitions, the throughput would increase and the storage medium 120 could run out of capacity, so it may be necessary to set an appropriate number of repetitions.
  • the second parameter may be received and set from the user terminal 200, or may be set to a default value.
  • The calculation unit 161-2 can calculate the distance from each seed to each of all the pixels and express the distance calculation result in Lab color. The distance calculation can be repeated the number of times given by the second parameter.
  • The clustering unit 161-3 clusters the distance calculation results, repeated according to the second parameter, and groups pixels of the original image having similar Lab colors (distance calculation results) into one region. In this way, the original image can be divided into a plurality of regions having similar Lab colors (see the sketch below).
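  • The seed count, iteration count, and Lab-colour distance clustering described above closely resemble SLIC-style superpixel segmentation. A minimal sketch using scikit-image as a stand-in (the library choice, file name, and parameter values are assumptions):

```python
from skimage import io
from skimage.segmentation import slic

rgb = io.imread("original.jpg")  # placeholder input image

# n_segments plays the role of the first parameter (number of seeds);
# SLIC internally repeats the seed-to-pixel distance assignment a
# bounded number of times, which corresponds to the second parameter.
labels = slic(rgb, n_segments=200, compactness=10.0, convert2lab=True)

print(labels.shape)      # one region index per pixel
print(labels.max() + 1)  # number of regions actually produced
```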
  • the first generating unit 161-4 can generate the image division index information in which an index is attached to each of a plurality of areas similar in Lab color.
  • FIG. 8B shows an example in which the image segmentation index information image generated from the original image (FIG. 8A) is expressed in color.
  • the first generation unit 161-4 may store the generated image division index information in the database 150.
  • The second generation unit 161-5 can generate image segmentation information that includes the image segmentation index information generated by the first generation unit 161-4, together with connection information linking the average pixel value computed for each of the similar-Lab-color regions with the image segmentation index information of the four surrounding directions searched around each reference region (a sketch follows below).
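  • A sketch of how the per-region average pixel values and the 4-direction connection information might be computed from the index map; the function and its return format are assumptions for illustration:

```python
import numpy as np

def region_stats(image, labels):
    """Mean pixel value per region, plus the set of region indices
    adjacent to each region in the four surrounding directions."""
    n = labels.max() + 1
    means = np.array([image[labels == i].mean(axis=0) for i in range(n)])
    neighbours = {i: set() for i in range(n)}
    # Compare each pixel with its right and bottom neighbour; a label
    # change marks a 4-connected adjacency between two regions.
    for a, b in ((labels[:, :-1], labels[:, 1:]),
                 (labels[:-1, :], labels[1:, :])):
        edge = a != b
        for u, v in zip(a[edge].ravel(), b[edge].ravel()):
            neighbours[int(u)].add(int(v))
            neighbours[int(v)].add(int(u))
    return means, neighbours
```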
  • FIG. 8C shows an example of an image segmentation information image generated using an original image (FIG. 8A) and an image segmentation index information image (FIG. 8B).
  • the second generation unit 161-5 can store the generated image division information in the database 150.
  • the learning unit 162 can identify the object from the original image based on the learning result using the neural network.
  • the learning unit 162 may further include a neural network module (not shown).
  • the neural network may be a set of algorithms for identifying and / or determining objects in the original image by extracting and using various attributes in the original image, using the results of statistical machine learning.
  • The neural network can identify objects in the original image by abstracting various attributes contained in the original image input to the neural network. In this case, abstracting the attributes may mean detecting attributes from the original image and determining key attributes among the detected attributes.
  • The learning unit 162 may input an original image and/or a segmented image (including an additional segmented image) into the neural network, and obtain from the network the location and/or category of the object included in the original image.
  • The learning unit 162 detects predetermined image attributes in the original image and/or the segmented image according to the learning result using the neural network, and identifies the position and/or category of the object in the original image based on the detected image attributes.
  • the image attribute may include a color, an edge, a polygon, a saturation, a brightness, etc. constituting an image, but the image attribute is not limited thereto.
  • To use the neural network, the learning unit 162 may train it to identify one or more objects from the original image and/or the segmented image. For example, the learning unit 162 can train the neural network by repeatedly analyzing and/or evaluating the results of supervised learning and/or unsupervised learning (or self-learning or active learning) on object-specific image attributes in the neural network.
  • The learning unit 162 can utilize the original image and the segmented image as training data in neural network learning for object identification.
  • The segmented image may be the final segmented image, and the final segmented image may include the segmented image and/or additional segmented image of an object cut for which an approval signal, described later, has been received.
  • The learning unit 162 can generate a segmented image in which the object is divided into the first to third regions, using the object identified by the neural network and the image segmentation information acquired by the first analysis unit 161.
  • the first region may include a foreground region of the identified objects and may be represented by a first value (e.g., white).
  • the second area may include a background area of the identified objects, and may be represented by a second value (e.g., black).
  • the third area may include an unclear area, which is uncertain whether it is the first area or the second area of the identified objects, and may be represented by a third value (for example, gray).
  • The learning unit 162 can generate and output a corrected image in which one region of the segmented image, divided into the first to third regions, is clearly corrected.
  • The region to be corrected may be the third region: in the corrected image, a part of the third region that belongs to the first region is corrected to the first region, and another part of the third region that belongs to the second region is corrected to the second region (see the sketch below).
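  • A deliberately crude sketch of this correction step: each grey (third-region) pixel of the trimap is pushed to foreground or background by comparing its colour with the mean foreground/background colour. The trimap values and the nearest-mean rule are assumptions; a production system would use a proper matting algorithm:

```python
import numpy as np

FG, BG, UNKNOWN = 255, 0, 128  # white / black / grey trimap values (assumed)

def correct_trimap(image, trimap):
    """Resolve each unknown (grey) pixel to foreground or background
    by nearest mean colour -- a stand-in for a full matting step."""
    img = image.astype(float)
    fg_mean = img[trimap == FG].mean(axis=0)
    bg_mean = img[trimap == BG].mean(axis=0)
    corrected = trimap.copy()
    for y, x in zip(*np.nonzero(trimap == UNKNOWN)):
        d_fg = np.linalg.norm(img[y, x] - fg_mean)
        d_bg = np.linalg.norm(img[y, x] - bg_mean)
        corrected[y, x] = FG if d_fg < d_bg else BG
    return corrected
```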
  • The learning unit 162 can generate the corrected image from the segmented image using the correlation between the original image and the segmented image.
  • Alternatively, the learning unit 162 may generate the segmented image by comparing the object identified from the original image through semantic segmentation, based on the learning result using the neural network, with the image segmentation information, and may likewise generate and output a corrected image in which one region of the segmented image is clearly corrected (a sketch follows below).
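  • The object-identification step can be pictured with an off-the-shelf semantic-segmentation network. The sketch below uses torchvision's pretrained DeepLabV3; the specific architecture is an assumption, since the document does not name one:

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Pretrained semantic-segmentation model (assumed stand-in architecture).
model = models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("original.jpg").convert("RGB")  # placeholder input
with torch.no_grad():
    out = model(preprocess(img).unsqueeze(0))["out"][0]
mask = out.argmax(0)      # per-pixel class index
foreground = mask != 0    # class 0 is the background class here
```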
  • At least one of the first analysis unit 161 and the learning unit 162 may be manufactured in at least one hardware chip form and mounted on the electronic device.
  • For example, at least one of the first analysis unit 161 and the learning unit 162 may be manufactured in the form of a dedicated hardware chip for artificial intelligence, or may be manufactured as part of an existing general-purpose processor (e.g., a CPU or an application processor) or a graphics-only processor (e.g., a GPU) and mounted on the various electronic devices described above.
  • the processing unit 163 performs an AND operation on the original image and the corrected image, and outputs the generated object cut to the user terminal 200 as a result of the operation.
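  • Reading the AND operation as pixel-wise masking of the original image by the corrected (foreground) image, a minimal NumPy sketch might look as follows; the white-equals-foreground convention is an assumption carried over from the trimap description above:

```python
import numpy as np

def object_cut(original, corrected):
    """Keep original pixels where the corrected image marks
    foreground (white); zero out everything else."""
    mask = corrected == 255              # foreground pixels
    return np.where(mask[..., None], original, 0)
```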
  • The processing unit 163 may store the original image and the segmented image in the database 150 and include them in the learning using the neural network. In addition, the processing unit 163 can store the generated object cut in the database 150.
  • FIG. 5 is a diagram illustrating a detailed configuration of the artificial intelligence processing unit 160 according to another embodiment of the image processing apparatus 100 of FIG. 2. Referring to FIG. 5, the artificial intelligence processing unit 160 may include a first analyzing unit 161, a learning unit 162, a processing unit 163, a receiving unit 164, and a second analyzing unit 165.
  • the first analysis unit 161 can obtain the image segmentation information obtained by dividing the original image into a plurality of regions having similar characteristics.
  • The learning unit 162 identifies the object from the original image based on the learning result using the neural network, generates a segmented image for the object extracted from the original image using the image segmentation information, and generates and outputs a corrected image in which one region of the segmented image is clearly corrected.
  • the processing unit 163 may output the object cut generated by the logical product of the original image and the corrected image to the user terminal 200 and store the original image, the corrected image, and the object cut in the database 150.
  • The receiving unit 164 may receive an approval signal or a rejection signal for the object cut output to the user terminal 200. The receiving unit 164 may also receive the first user input information and the second user input information from the user terminal 200.
  • When an approval signal is received, the processing unit 163 may store the original image and the segmented image in the database 150 and include them in the learning using the neural network.
  • FIG. 8D shows an example of an object cut for which an approval signal was received from the user terminal 200.
  • When a rejection signal is received, the second analyzing unit 165 starts its operation, and the processing unit 163 may request the user terminal 200 to input the first user input information and the second user input information.
  • The second analyzing unit 165 uses the segmented image of the object cut retrieved from the database 150 through the processing unit 163, the first user input information and second user input information received from the receiving unit 164, and the image segmentation information received from the first analysis unit 161 to generate an additional segmented image in which a part of the object included in the segmented image is reinforced, and outputs an additional corrected image in which one region of the additional segmented image is clearly corrected.
  • FIG. 8E shows an example of an object cut for which a rejection signal was received from the user terminal 200.
  • The first user input information may include a user input, for example a first drag 810, that designates a foreground region on the object cut for which the rejection signal was received, as shown in FIG. 8F.
  • The second user input information may include a user input, for example a second drag 820, that designates a background region on the object cut for which the rejection signal was received, after the foreground region has been specified.
  • the first user input information and the second user input information may be represented by different colors.
  • FIG. 6 is a diagram for explaining a detailed configuration of the second analysis unit 165 of the artificial intelligence processing unit 160 of FIG. 5. Referring to FIG. 6, the second analyzing unit 165 may include a third generating unit 165-1 and a fourth generating unit 165-2.
  • The second analyzing unit 165 may also be included in the learning unit 162.
  • The third generation unit 165-1 uses the segmented image of the object cut retrieved from the database 150 through the processing unit 163, the first user input information and second user input information received from the receiving unit 164, and the image segmentation information from the first analysis unit 161 to generate an additional segmented image in which a part of the object included in the segmented image is reinforced.
  • The third generation unit 165-1 may identify the foreground region of the object cut from the first user input information applied to the segmented image of the object cut, and identify the background region of the object cut from the second user input information.
  • Using the position and pixel values of the foreground region of the object cut (an object in which part of the output is missing), together with the average pixel value, connection information, and image segmentation index information of each divided region, the third generation unit 165-1 can generate an additional segmented image in which the missing part is reinforced (see the sketch below).
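  • A sketch of how the user's foreground/background drags might reinforce the trimap through the region index map: every region touched by the first input is pushed to foreground, every region touched by the second to background. This mapping of scribbles to whole regions is an assumed reading of the description above:

```python
import numpy as np

FG, BG = 255, 0  # assumed trimap values for foreground / background

def reinforce_trimap(trimap, fg_scribble, bg_scribble, labels):
    """fg_scribble / bg_scribble are boolean masks of the user drags;
    labels is the per-pixel region index map."""
    refined = trimap.copy()
    for region in np.unique(labels[fg_scribble]):
        refined[labels == region] = FG
    for region in np.unique(labels[bg_scribble]):
        refined[labels == region] = BG
    return refined
```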
  • FIG. 8G shows an example of an additional segmented image reinforcing the missing part of the object cut for which the rejection signal was received.
  • The additional segmented image is divided into first to third regions, wherein the first region may include the foreground region of the object and may be represented by a first value (for example, white). The second region may include the background region of the object and may be represented by a second value (for example, black).
  • The third region may include an ambiguous region for which it is uncertain whether it belongs to the first region or the second region, and may be represented by a third value (for example, gray).
  • The fourth generation unit 165-2 may generate an additional corrected image in which one region of the additional segmented image, divided into the first to third regions, is clearly corrected, and output the generated image to the processing unit 163.
  • The region to be corrected may be the third region: a part of the third region that belongs to the first region is corrected to the first region, and another part of the third region that belongs to the second region is corrected to the second region.
  • The fourth generation unit 165-2 may generate the additional corrected image from the additional segmented image using the correlation between the original image and the additional segmented image.
  • FIG. 8H shows an example of the additional corrected image generated for the additional segmented image (FIG. 8G).
  • At least one of the first analyzing unit 161, the learning unit 162, and the second analyzing unit 165 may be manufactured in at least one hardware chip form and mounted on the electronic device.
  • For example, at least one of the first analysis unit 161, the learning unit 162, and the second analysis unit 165 may be manufactured in the form of a dedicated hardware chip for artificial intelligence, or may be manufactured as part of an existing general-purpose processor (e.g., a CPU or an application processor) or a graphics-only processor (e.g., a GPU) and mounted on the various electronic devices described above.
  • The processing unit 163 performs an AND operation on the original image and the additional corrected image, and outputs the generated additional object cut to the user terminal 200.
  • Thereafter, when the receiving unit 164 receives an approval signal from the user terminal 200, the processing unit 163 may store the original image and the additional segmented image in the database 150 and include them in the learning using the neural network.
  • Until an approval signal for the additional object cut is received, the process of receiving the first user input information and the second user input information, generating an additional segmented image for the foreground region, outputting an additional corrected image, and outputting an additional object cut is repeated, and the original image and the additional segmented image can be included in the learning using the neural network.
  • FIG. 7 is a diagram for explaining a detailed configuration of the user terminal 200 of the image processing system 1 of FIG.
  • the user terminal 200 may include a communication unit 210, a memory 220, an input / output unit 230, a program storage unit 240, a control unit 250, and a display unit 260.
  • the communication unit 210 may be a device including hardware and software necessary to transmit / receive a signal such as a control signal or a data signal through a wired / wireless connection with another network device such as the image processing apparatus 100.
  • the communication unit 210 may include a local communication unit or a mobile communication unit.
  • The short-range wireless communication unit may include a Bluetooth communication unit, a Bluetooth low energy (BLE) communication unit, a near field communication unit, a WLAN communication unit, a Zigbee communication unit, an infrared data association (IrDA) communication unit, a WFD (Wi-Fi Direct) communication unit, a UWB (ultra wideband) communication unit, an Ant+ communication unit, and the like.
  • the mobile communication unit transmits and receives radio signals to at least one of a base station, an external terminal, and a server on a mobile communication network.
  • the wireless signal may include various types of data depending on a voice call signal, a video call signal, or a text / multimedia message transmission / reception.
  • the memory 220 may temporarily or permanently store data processed by the controller 250 or may temporarily or permanently store data transmitted to the user terminal 200.
  • the memory 220 may include magnetic storage media or flash storage media, but the scope of the present invention is not limited thereto.
  • the input / output unit 230 may include a touch recognition display controller or various other input / output controllers.
  • the touch-aware display controller may provide an output interface and an input interface between the device and the user.
  • the touch-sensitive display controller can transmit and receive electrical signals to and from the control unit 250.
  • the touch-aware display controller may display a visual output to the user, and the visual output may include text, graphics, images, video, and combinations thereof.
  • the input / output unit 230 may be a display member such as an organic light emitting display (OLED) or a liquid crystal display (LCD) capable of touch recognition.
  • The program storage unit 240 may be equipped with control software for performing: an operation of selecting an original image and transmitting it to the image processing apparatus 100; an operation of receiving and displaying an object cut and/or an additional object cut from the image processing apparatus 100; an operation of transmitting an approval signal or a rejection signal for the object cut and/or the additional object cut; and an operation of receiving first user input information and second user input information entered on the object cut and/or the additional object cut and transmitting them to the image processing apparatus 100.
  • the control unit 250 may provide various functions such as driving control software installed in the program storage unit 240 as a kind of central processing unit and controlling the display unit 260 to display predetermined information.
  • the control unit 250 may include all kinds of devices capable of processing data, such as a processor.
  • The term "processor" may refer to a data processing device embedded in hardware, having a circuit physically structured to perform the functions represented by the code or instructions contained in a program.
  • Examples of such data processing devices include a microprocessor, a central processing unit (CPU), a processor core, a multiprocessor, an application-specific integrated circuit (ASIC), and a field programmable gate array (FPGA), but the scope of the present invention is not limited thereto.
  • The display unit 260 can display various information received from the image processing apparatus 100, for example, information related to the image processing web page and/or image processing application provided by the image processing apparatus 100, the original image to be transmitted to the image processing apparatus 100, the object cut and/or additional object cut received from the image processing apparatus 100, and the first user input information and second user input information entered on the object cut and/or the additional object cut.
  • FIGS. 8A to 8H are views showing examples of images processed by the image processing apparatus.
  • FIG. 8A shows an example of an original image.
  • FIG. 8B shows an example of a color representation of the image segmentation index information image generated from the original image (FIG. 8A).
  • FIG. 8C shows an example of the image segmentation information image generated using the original image (FIG. 8A) and the image segmentation index information image (FIG. 8B).
  • FIG. 8D shows an example of an object cut for which an approval signal was received from the user terminal 200.
  • FIG. 8E shows an example of an object cut for which a rejection signal was received from the user terminal 200.
  • FIG. 8F shows an example of inputting the first user input information 810 and the second user input information 820 on the object cut for which the rejection signal was received.
  • FIG. 8G shows an example of an additional segmented image reinforcing the missing part of the object cut for which the rejection signal was received.
  • FIG. 8H shows an example of the additional corrected image generated for the additional segmented image (FIG. 8G).
  • FIG. 9 is a flowchart illustrating an image processing method according to an embodiment of the present invention. In the following description, the description of the parts overlapping with those of FIGS. 1 to 8 will be omitted.
  • In step S910, the image processing apparatus 100 acquires image segmentation information in which the original image is divided into a plurality of regions having similar characteristics.
  • the image processing apparatus 100 may perform the following processing for acquiring image segmentation information.
  • The image processing apparatus 100 can set the first parameter, including the number of seeds, and the second parameter, including the number of repetitions for calculating the distance from each seed to each of all the pixels.
  • the image processing apparatus 100 can calculate the distance from each seed to each of all the pixels and express the distance calculation result in Lab color.
  • The image processing apparatus 100 clusters the distance calculation results, repeated according to the second parameter, and groups pixels of the original image having similar Lab colors (distance calculation results) into one region.
  • the image processing apparatus 100 can generate image division index information in which an index is added to each of a plurality of areas similar in Lab color.
  • The image processing apparatus 100 can generate the image segmentation information, which includes the image segmentation index information together with connection information linking the average pixel value computed for each similar-Lab-color region with the image segmentation index information of the four surrounding directions searched around each reference region.
  • In step S920, the image processing apparatus 100 identifies the object from the original image based on the learning result using the neural network, generates a segmented image in which the object extracted from the original image is divided, using the image segmentation information, into a first region including the foreground region, a second region including the background region, and a third region including an unascertained region, and generates a corrected image in which the third region of the segmented image is clearly corrected.
  • In step S930, the image processing apparatus 100 outputs to the user terminal 200 the object cut generated by performing an AND operation between the original image and the corrected image, and includes the original image and the segmented image in the learning using the neural network.
  • FIG. 10 is a flowchart illustrating an image processing method according to another embodiment of the present invention.
  • the description of the parts overlapping with those of FIGS. 1 to 9 will be omitted.
  • In step S1010, the image processing apparatus 100 acquires image segmentation information in which the original image is divided into a plurality of regions having similar characteristics.
  • In step S1020, the image processing apparatus 100 identifies the object from the original image based on the learning result using the neural network, generates a segmented image in which the object extracted from the original image is divided, using the image segmentation information, into a first region including the foreground region, a second region including the background region, and a third region including an unascertained region, and generates a corrected image in which the third region of the segmented image is clearly corrected.
  • In step S1030, the image processing apparatus 100 performs an AND operation on the original image and the corrected image, and outputs the generated object cut to the user terminal 200.
  • In step S1040, the image processing apparatus 100 determines whether a rejection signal for the object cut has been received.
  • In step S1050, when the image processing apparatus 100 receives an approval signal for the object cut, it includes the original image and the segmented image in the learning using the neural network.
  • In step S1060, when the image processing apparatus 100 receives a rejection signal for the object cut, it receives from the user terminal 200 first user input information designating a foreground area on the object cut for which the rejection signal was received, and second user input information designating a background area on that object cut.
  • In step S1070, the image processing apparatus 100 uses the segmented image of the object cut, the first user input information, the second user input information, and the image segmentation information to generate an additional segmented image in which a part of the object included in the segmented image is reinforced, and generates an additional corrected image in which one region of the additional segmented image is clearly corrected.
  • Specifically, the image processing apparatus 100 can retrieve, from the image segmentation information, the position and pixel values of the foreground region of the object cut (an object in which part of the output is missing), together with the average pixel value and connection information of each region, and generate an additional segmented image in which the missing part is reinforced.
  • In step S1080, the image processing apparatus 100 performs an AND operation on the original image and the additional corrected image, and outputs the additional object cut generated as a result to the user terminal 200.
  • Steps S1040 to S1080 are repeatedly performed until an approval signal for the additional object cut is received.
  • FIG. 11 is a flowchart illustrating an image processing method according to another embodiment of the present invention.
  • the description of the parts overlapping with those of FIGS. 1 to 10 will be omitted.
  • In step S1110, the user terminal 200 accesses the image processing web page provided by the image processing apparatus 100 or executes the image processing application provided by the image processing apparatus 100.
  • In step S1120, the user terminal 200 selects an original image and transmits it to the image processing apparatus 100.
  • the user terminal 200 can execute a photo album application or the like to select a pre-stored image as an original image. Also, the user terminal 200 can receive an image from an external server and select the original image. Also, the user terminal 200 can capture an image using a camera provided therein and select the captured image as an original image.
  • In step S1130, the image processing apparatus 100, having received the original image, acquires image segmentation information in which the original image is divided into a plurality of regions having similar characteristics.
  • In step S1140, the image processing apparatus 100 identifies the object from the original image based on the learning result using the neural network.
  • In step S1150, the image processing apparatus 100 generates a segmented image in which the object extracted from the original image is divided, using the image segmentation information, into a first region including the foreground region, a second region including the background region, and a third region including an unascertained region, and generates a corrected image in which the third region of the segmented image is clearly corrected.
  • In step S1160, the image processing apparatus 100 performs an AND operation between the original image and the corrected image.
  • In step S1170, the image processing apparatus 100 transmits the generated object cut to the user terminal 200.
  • In step S1180, the user terminal 200 transmits an approval signal for the object cut.
  • The image processing apparatus 100 then includes the original image and the segmented image in the learning using the neural network.
  • FIG. 12 is a flowchart illustrating an image processing method according to another embodiment of the present invention.
  • the description of the parts that are the same as those in the description of Figs. 1 to 11 will be omitted.
  • In step S1211, the user terminal 200 accesses the image processing web page provided by the image processing apparatus 100 or executes the image processing application provided by the image processing apparatus 100.
  • In step S1213, the user terminal 200 selects an original image and transmits it to the image processing apparatus 100.
  • In step S1215, the image processing apparatus 100, having received the original image, acquires image segmentation information (for example, superpixel map information) in which the original image is divided into a plurality of regions having similar characteristics.
  • In step S1217, the image processing apparatus 100 identifies the object from the original image based on the learning result using the neural network.
  • The image processing apparatus 100 may further display a bounding box containing the identified object, detect the contour of the object, compare the bounding box with the contour, adjust the size of the bounding box so that the contour is included in it, and crop the bounding box using the image segmentation information (a sketch follows below).
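  • A sketch of the bounding-box adjustment using OpenCV: the box is grown from the object's outer contours so the contour sits inside it, then returned for cropping. The padding value and the contour-extraction call are illustrative assumptions:

```python
import cv2
import numpy as np

def box_around_contour(mask, pad=5):
    """Return an (x, y, w, h) box that encloses the object's outer
    contours in a binary mask, with a small margin."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(np.vstack(contours))
    return max(x - pad, 0), max(y - pad, 0), w + 2 * pad, h + 2 * pad
```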
  • The image processing apparatus 100 then generates a segmented image (e.g., a trimap image) in which the object extracted from the original image is divided, using the image segmentation information, into a first region including the foreground region, a second region including the background region, and a third region including an unascertained region, and generates a corrected image (e.g., a matting image) in which the third region of the segmented image is clearly corrected.
  • The image processing apparatus 100 may generate the segmented image and the corrected image for the image within the cropped bounding box.
  • In step S1221, the image processing apparatus 100 performs an AND operation on the original image and the corrected image to generate an object cut.
  • In step S1223, the image processing apparatus 100 transmits the generated object cut to the user terminal 200.
  • In step S1225, the user terminal 200 transmits a rejection signal for the object cut.
  • In step S1227, the image processing apparatus 100 requests the user terminal 200 to transmit the first user input information and the second user input information.
  • In step S1229, the user terminal 200 transmits the first user input information and the second user input information to the image processing apparatus 100.
  • step 1231 the image processing apparatus 100 extracts the foreground region from the segmented image of the object cut that received the invalid signal using the first user input information, and extracts the foreground region using the second user input information The background region is extracted from the cut image.
  • In step S1233, the image processing apparatus 100 uses the position and pixel values, the average pixel value, and the connection information of the foreground region of the object cut (the object from which part of the output is missing) to generate an additional segmented image in which the missing part of the output is reinforced, and generates an additional corrected image in which the third region of the additional segmented image is clearly corrected; a sketch of one such scribble-driven refinement follows below.
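One way to realise the refinement of steps S1229 to S1233 (folding the user's foreground and background scribbles back into the segmentation) is GrabCut-style re-segmentation. The sketch below uses OpenCV's GrabCut strictly as a stand-in for the patent's neural-network-based refinement; the mask encoding and iteration count are assumptions.

```python
import cv2
import numpy as np

def refine_with_scribbles(original, trimap, fg_scribble, bg_scribble):
    """Fold user scribbles back into the segmentation and re-segment.

    fg_scribble / bg_scribble -- binary masks painted by the user
    (the first and second user input information). Returns a refined
    binary object mask (255 = object).
    """
    gc_mask = np.full(original.shape[:2], cv2.GC_PR_BGD, np.uint8)
    gc_mask[trimap == 255] = cv2.GC_FGD      # sure foreground from the tri-map
    gc_mask[trimap == 128] = cv2.GC_PR_FGD   # unknown band: probably foreground
    gc_mask[fg_scribble > 0] = cv2.GC_FGD    # user says: object
    gc_mask[bg_scribble > 0] = cv2.GC_BGD    # user says: background
    bgd = np.zeros((1, 65), np.float64)      # model buffers required by GrabCut
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(original, gc_mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    refined = np.where(np.isin(gc_mask, (cv2.GC_FGD, cv2.GC_PR_FGD)), 255, 0)
    return refined.astype(np.uint8)
```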
  • In step S1235, the image processing apparatus 100 performs an AND operation on the original image and the additional corrected image to generate an additional object cut.
  • In step S1237, the image processing apparatus 100 transmits the generated additional object cut to the user terminal 200.
  • In step S1239, the image processing apparatus 100 repeatedly performs steps S1227 to S1237 until an acknowledgment signal for the additional object cut is received from the user terminal 200; when an acknowledgment signal is received from the user terminal 200, the original image and the additional segmented image are included in the learning using the neural network.
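Taken together, steps S1227 to S1239 amount to a refine-until-acknowledged feedback loop. The sketch below is a hypothetical driver reusing the helpers defined in the earlier sketches; the `user` interface and `training_queue` are invented names for illustration only.

```python
def object_cut_session(original, mask, user, training_queue):
    """Hypothetical driver for steps S1227-S1239. `user` is an assumed
    interface whose get_scribbles() returns (fg, bg) scribble masks, or
    None once the user acknowledges the cut; `training_queue` collects
    accepted pairs for inclusion in the neural-network learning."""
    trimap = make_trimap(mask)
    cut = compose_object_cut(original, correct_unknown(trimap))
    while (scribbles := user.get_scribbles(cut)) is not None:         # S1225/S1229
        fg, bg = scribbles
        mask = refine_with_scribbles(original, trimap, fg, bg)        # S1231
        trimap = make_trimap(mask)                                    # S1233
        cut = compose_object_cut(original, correct_unknown(trimap))   # S1235-S1237
    training_queue.append((original, trimap))                         # S1239: fold into learning
    return cut
```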
  • The embodiments of the present invention described above can be embodied in the form of a computer program executable through various components on a computer, and the computer program can be recorded on a computer-readable medium.
  • The medium may be a magnetic medium such as a hard disk, a floppy disk, or a magnetic tape; an optical recording medium such as a CD-ROM or a DVD; a magneto-optical medium such as a floptical disk; or a hardware device such as a ROM, a RAM, or a flash memory, specifically configured to store and execute program instructions.
  • The computer program may be designed and configured specifically for the present invention, or may be known to and usable by those skilled in the computer software field.
  • Examples of computer programs include machine language code, such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter or the like.
  • Embodiments of the present invention relate to an image processing apparatus and method in which an object cut is automatically extracted from an original image using a learning result obtained with a neural network, so that an object cut can be extracted conveniently without user intervention.
  • The present invention is therefore applicable to image processing apparatuses and methods.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an image processing apparatus and method for extracting an object cut from an original image using artificial intelligence technology that uses a neural network learning algorithm to simulate functions such as recognition and judgment of the human brain. According to one embodiment of the invention, the image processing apparatus comprises: a first analysis unit for acquiring image division information by which an original image is divided into a plurality of regions having similar characteristics; a learning unit for identifying an object from the original image based on the result of learning using a neural network, for generating, using the image division information, a classified image in which the object extracted from the original image is classified into first to third regions, and for generating a corrected image in which any one region of the classified image is clearly corrected; and a processing unit for generating an object cut obtained by computing the original image and the corrected image, and for including the original image and the classified image in the learning using the neural network.
PCT/KR2017/015476 2017-09-22 2017-12-26 Appareil et procédé de traitement d'image WO2019059460A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2017-0122877 2017-09-22
KR1020170122877A KR101867586B1 (ko) 2017-09-22 2017-09-22 영상 처리 장치 및 방법

Publications (1)

Publication Number Publication Date
WO2019059460A1 true WO2019059460A1 (fr) 2019-03-28

Family

ID=62628831

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/015476 WO2019059460A1 (fr) 2017-09-22 2017-12-26 Appareil et procédé de traitement d'image

Country Status (3)

Country Link
JP (1) JP2019061642A (fr)
KR (1) KR101867586B1 (fr)
WO (1) WO2019059460A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113727085B (zh) * 2021-05-31 2022-09-16 荣耀终端有限公司 一种白平衡处理方法、电子设备、芯片系统和存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20040015613A (ko) * 2002-08-13 2004-02-19 삼성전자주식회사 인공 신경망을 이용한 얼굴 인식 방법 및 장치
JP2011529593A (ja) * 2008-07-28 2011-12-08 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ 画像補正に対する修復技術の使用
KR20120074924A (ko) * 2010-12-28 2012-07-06 경북대학교 산학협력단 영상검출장치 및 그 영상검출방법
KR20150103443A (ko) * 2014-03-03 2015-09-11 에스케이플래닛 주식회사 멀티클래스 분류 장치, 그 방법 및 컴퓨터 프로그램이 기록된 기록매체
KR20170038622A (ko) * 2015-09-30 2017-04-07 삼성전자주식회사 영상으로부터 객체를 분할하는 방법 및 장치

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8386964B2 (en) * 2010-07-21 2013-02-26 Microsoft Corporation Interactive image matting
WO2014070145A1 (fr) * 2012-10-30 2014-05-08 Hewlett-Packard Development Company, L.P. Segmentation d'objets
WO2015134996A1 (fr) * 2014-03-07 2015-09-11 Pelican Imaging Corporation Système et procédés pour une régularisation de profondeur et un matage interactif semi-automatique à l'aide d'images rvb-d
US10540768B2 (en) * 2015-09-30 2020-01-21 Samsung Electronics Co., Ltd. Apparatus and method to segment object from image

Also Published As

Publication number Publication date
KR101867586B1 (ko) 2018-06-15
JP2019061642A (ja) 2019-04-18

Similar Documents

Publication Publication Date Title
US10635979B2 (en) Category learning neural networks
WO2019098449A1 (fr) Appareil lié à une classification de données basée sur un apprentissage de métriques et procédé associé
WO2019098414A1 (fr) Procédé et dispositif d'apprentissage hiérarchique de réseau neuronal basés sur un apprentissage faiblement supervisé
WO2019050247A2 (fr) Procédé et dispositif d'apprentissage de réseau de neurones artificiels pour reconnaître une classe
WO2019031714A1 (fr) Procédé et appareil de reconnaissance d'objet
WO2019050297A1 (fr) Procédé et dispositif d'apprentissage de réseau neuronal
WO2020130747A1 (fr) Appareil et procédé de traitement d'image pour transformation de style
WO2020032467A1 (fr) Procédé, appareil et programme permettant de déduire une requête et une réponse en se basant sur l'intelligence artificielle
KR101963404B1 (ko) 2-단계 최적화 딥 러닝 방법, 이를 실행시키기 위한 프로그램을 기록한 컴퓨터 판독 가능한 기록매체 및 딥 러닝 시스템
CN110516707B (zh) 一种图像标注方法及其装置、存储介质
WO2019125054A1 (fr) Procédé de recherche de contenu et dispositif électronique associé
EP3752978A1 (fr) Appareil électronique, procédé de traitement d'image et support d'enregistrement lisible par ordinateur
CN110929806A (zh) 基于人工智能的图片处理方法、装置及电子设备
CN114722937A (zh) 一种异常数据检测方法、装置、电子设备和存储介质
WO2021006482A1 (fr) Appareil et procédé de génération d'image
CN111414951B (zh) 用于图像的细分类方法及装置
CN105631404A (zh) 对照片进行聚类的方法及装置
WO2022139009A1 (fr) Procédé et appareil pour configurer un algorithme d'apprentissage profond pour une conduite autonome
WO2019059460A1 (fr) Appareil et procédé de traitement d'image
CN113822128A (zh) 交通要素识别方法、装置、设备及计算机可读存储介质
CN113221721A (zh) 图像识别方法、装置、设备及介质
WO2020141907A1 (fr) Appareil de production d'image permettant de produire une image en fonction d'un mot clé et procédé de production d'image
KR102617756B1 (ko) 속성 기반 실종자 추적 장치 및 방법
CN116958729A (zh) 对象分类模型的训练、对象分类方法、装置及存储介质
KR20210048271A (ko) 복수 객체에 대한 자동 오디오 포커싱 방법 및 장치

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17925602

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17925602

Country of ref document: EP

Kind code of ref document: A1