WO2021077836A1 - Image matting method and apparatus - Google Patents
Image matting method and apparatus
- Publication number: WO2021077836A1 (application PCT/CN2020/105441)
- Authority: WO (WIPO PCT)
- Prior art keywords: point, image, image area, pixel, edge
- Prior art date: 2019-10-24
Classifications
- G06T3/04
- G06T3/00—Geometric image transformation in the plane of the image
- G06T5/70
- G06T7/11—Region-based segmentation
- G06T7/12—Edge-based segmentation
- G06T7/13—Edge detection
- G06T7/181—Segmentation; Edge detection involving edge growing; involving edge linking
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/771—Feature selection, e.g. selecting representative features from a multi-dimensional feature space
- G06V40/193—Preprocessing; Feature extraction (eye characteristics, e.g. of the iris)
- G06T2207/10004—Still image; Photographic image
- G06T2207/10024—Color image
- G06T2207/20101—Interactive definition of point of interest, landmark or seed
- G06T2207/20132—Image cropping
- G06T2207/20164—Salient point detection; Corner detection
- G06T2207/20168—Radial search
- G06T2207/30201—Face (Human being; Person)
- G06V2201/07—Target detection
Definitions
- the present disclosure relates to the field of information processing technology, and in particular, to a matting method, device, and computer-readable storage medium.
- Matting is one of the most common operations in image processing: it separates a certain part of a picture or video from the original into a separate layer, mainly in preparation for later compositing.
- a matting template with a fixed shape and transparency is usually used for matting.
- for example, a matting template may be designed in advance based on a standard face.
- the parts of the matting template corresponding to the human eyes are transparent, and the remaining parts are a mask color (for example, black).
- the face area is first detected from the image, and then the matting template is superimposed on the image of the face area to obtain the eye area.
- the image and the matting template are often not completely matched.
- the matting template is usually designed based on a standard human face, but the face in the image is often not exactly the same as the standard face.
- the image cut out using the matting template may only include part of the human eye, and the complete human eye image cannot be cut out.
- conversely, the image area cut out using the matting template may be too large relative to the human eye; that is, the cut-out image contains parts other than the human eye.
- if the angle of the face in the image differs from that of the standard face, for example the standard face is a frontal face while the face in the image is a profile, the positions of the eyes in the image will not match those in the standard face, and the image cut out using the template may not include the human eye at all. It can be seen that it is often difficult to accurately cut out the required area from an image using a matting template.
- the technical problem addressed by the present disclosure is to provide a matting method that at least partially solves the prior-art problem that a matting template makes it difficult to accurately cut out a required area from an image.
- a matting device, a matting hardware device, a computer-readable storage medium, and a matting terminal are also provided.
- a matting method including:
- a matting device including:
- the feature point detection module is used to perform feature point detection on the image to obtain multiple feature points
- a marked area acquisition module configured to acquire the first image area manually marked on the image
- An area determination module configured to adjust the first image area according to the feature points to obtain a second image area
- the matting module is used for matting the image according to the second image area.
- An electronic device including:
- Memory for storing non-transitory computer readable instructions
- the processor is configured to run the computer-readable instructions so that the processor implements any of the above-mentioned matting methods when executed.
- a computer-readable storage medium for storing non-transitory computer-readable instructions.
- the computer is caused to execute any one of the above-mentioned matting methods.
- a picture-cutting terminal includes any of the above-mentioned picture-cutting devices.
- the part to be cut out in the image can be marked as the first image area by manual annotation, and the first image area can then be adjusted according to the feature points, so that the second image area locates the part to be cut out more accurately; matting according to the second image area therefore cuts out the required area accurately. Because manual labeling replaces the matting template, the required image part can be cut out accurately even when the image does not match any template.
- Fig. 1a is a schematic flowchart of a matting method according to an embodiment of the present disclosure
- Fig. 1b is a schematic diagram of facial feature points in a matting method according to an embodiment of the present disclosure
- Fig. 1c is a schematic diagram of manual marking points in a matting method according to an embodiment of the present disclosure
- FIG. 1d is a schematic diagram of a second image area and external expansion points in a matting method according to an embodiment of the present disclosure
- Fig. 1e is a schematic diagram of vertices that are convex points in a matting method according to an embodiment of the present disclosure;
- Fig. 1f is a schematic diagram of vertices that are concave points in a matting method according to an embodiment of the present disclosure;
- Fig. 1g is a schematic diagram of expansion points whose vertices are convex points in a matting method according to an embodiment of the present disclosure;
- Fig. 1h is a schematic diagram of expansion points whose vertices are concave points in a matting method according to an embodiment of the present disclosure;
- Fig. 1i is a schematic diagram of a cut-out image area in a cut-out method according to an embodiment of the present disclosure
- FIG. 2 is a schematic diagram of the structure of an image matting device according to an embodiment of the present disclosure
- Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
- an embodiment of the present disclosure provides a matting method. As shown in Fig. 1a, the matting method mainly includes the following steps S11 to S14.
- Step S11 Perform feature point detection on the image to obtain multiple feature points.
- the image can be a human face image, an animal image, and so on.
- a feature point is a point where the gray value of the image changes drastically, or a point of high curvature on an image edge (that is, the intersection of two edges). Feature points have distinct characteristics, effectively reflect the essential characteristics of the image, and can be used to identify the target object in the image.
- the image is a face image
- the feature points on the face image can be detected to obtain the feature points on the face.
- the white points in Figure 1b are the feature points on the face, including the eyes and surrounding pixels, the nose and surrounding pixels, and so on.
- a feature point detection algorithm can be used to obtain feature points in the image.
- the image is a human face image
- the template face can also be used to detect the feature points of the image.
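- By way of illustration, the following minimal Python sketch shows one possible realization of step S11 using Shi-Tomasi corner detection, which matches the description of feature points as places of sharp gray-value change or high edge curvature. The patent does not mandate this algorithm (a face-landmark detector would equally qualify), and the function name, file path, and parameter values here are assumptions.

```python
import cv2

def detect_feature_points(path, max_points=200):
    # Corners are points where the gray value changes drastically in two
    # directions, i.e. the kind of feature point described above.
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=max_points,
                                      qualityLevel=0.01, minDistance=5)
    return corners.reshape(-1, 2)  # (N, 2) array of (x, y) feature points
```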
- Step S12 Obtain a first image area manually marked on the image.
- the user determines the first image area by marking dots or dashes around the image area to be cut out.
- the first image area is an artificially determined initial image area, and is usually a rough range including the cut-out image area.
- if the user wants to cut out the eye area, they can mark points around the eye area of the image or draw a circle to determine the approximate range of the eye area to be cut out.
- the black dots shown in Figure 1c are manually labeled annotation points, and the white dots are feature points.
- Step S13 Adjust the first image area according to the feature points to obtain a second image area.
- the feature points used here are those located inside, on the edge of, and around the first image area. Since these feature points characterize the image features best, using them for the adjustment brings the determined range of the second image area closer to the image area to be cut out, that is, it makes the cut-out image area more accurate.
- Step S14 Cut out the image according to the second image area.
- the second image area can be directly used as a cut-out area to cut out; or further adjustments can be made on the basis of the second image area to re-determine the image area before cutting out.
- for the further-adjustment option, see the description of the fourth embodiment below.
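- As an illustration of the simplest form of step S14 (using the second image area directly as the cut-out area), the sketch below rasterizes the area's boundary points into a mask and keeps only the enclosed pixels; the helper name and the BGRA alpha-channel representation are assumptions, not part of the patent.

```python
import cv2
import numpy as np

def cut_out(img, boundary_pts):
    # Rasterize the closed boundary of the second image area into a mask.
    mask = np.zeros(img.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [np.asarray(boundary_pts, dtype=np.int32)], 255)
    # Keep original pixel values inside; make everything outside transparent.
    bgra = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
    bgra[:, :, 3] = mask
    return bgra
```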
- the first image area is manually labeled and then adjusted according to the feature points. Since the feature points can represent image features best, using them for the adjustment brings the determined range of the second image area closer to the image area to be cut out, thereby making the cut-out image area more accurate.
- since the feature points and the labeled first image area are determined from the image itself, for different images the detected feature points and the labeled first image area correspond to the part of that image that needs to be cut out. Therefore, this embodiment can accurately cut out the target area in different images and is applicable to the matting of various kinds of images.
- step S12 specifically includes:
- Step S121 Obtain manually annotated points on the image.
- users can place annotation points according to their own needs. For example, if the user wants to cut out the eye area, annotation points can be marked around the eyes of the image.
- the black dots as shown in FIG. 1c are manually labeled annotation points, and the manually labeled annotation points are obtained from the image.
- Step S122 Determine the label point as an edge point of the first image area.
- the image area enclosed by the annotation points is used as the first image area; that is, the annotation points become edge points of the first image area.
- step S13 specifically includes:
- Step S131 Adjust the position of the marking point according to the characteristic point, and determine the first target point corresponding to the marking point.
- because the annotation points are marked manually, the image area they determine may not accurately match the image area that needs to be cut out, so they are adjusted according to the feature points. Since the feature points can represent the image features best, they are used to adjust the annotation points, yielding the first target points; the range of the second image area determined by the first target points is then closer to the image area to be cut out, that is, the cut-out image area is more accurate.
- Step S132 Determine a closed area enclosed by the first target point as the second image area.
- step S131 specifically includes:
- Step S1311 Select the initial point closest to the marked point from the feature points.
- the distance between each feature point and the annotation point is calculated; it can be, for example, a cosine distance or a Euclidean distance. The feature points with the smallest distance to the annotation point are selected as the initial points closest to that annotation point.
- Step S1312 Determine the closed area enclosed by the initial point as the third image area.
- the closed area formed by them is a polygon, and the closed area formed by the polygon is the third image area.
- the closed area enclosed by the three initial points is a triangle
- the area enclosed by the triangle is the third image area.
- Step S1313 Determine the center position of the third image area as the first target point corresponding to the marked point.
- the center position of the polygon is determined as the first target point corresponding to the labeled point.
- the center position can be the geometric center or the geometric center of gravity of the polygon.
- the polygon is a regular geometric figure, such as a regular triangle, a regular quadrilateral, or a regular pentagon
- the corresponding center position is the geometric center or the intersection of the diagonals.
- if the polygon is an irregular figure, the corresponding center position is the geometric center of gravity, that is, the intersection of the medians of the polygon.
- the above describes how the first target point is determined when at least three closest initial points are selected.
- a feature point closest to the labeled point may be selected from the feature points as the first target point.
- two feature points closest to the marked point are selected from the feature points, and the midpoint of the line of the two feature points is used as the first target point.
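- The variants above (centroid of at least three nearest feature points, single nearest feature point, midpoint of the two nearest) can be illustrated with the following sketch of the "k nearest" case; k, the Euclidean metric, and all names are assumptions for illustration only.

```python
import numpy as np

def snap_annotation_points(annotations, features, k=3):
    # Replace each manual annotation point with the center of its k
    # nearest feature points (steps S1311-S1313); the results are the
    # first target points that enclose the second image area.
    features = np.asarray(features, dtype=float)
    targets = []
    for p in np.asarray(annotations, dtype=float):
        dists = np.linalg.norm(features - p, axis=1)   # Euclidean distances
        nearest = features[np.argsort(dists)[:k]]      # k closest feature points
        targets.append(nearest.mean(axis=0))           # geometric center
    return np.array(targets)
```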
- step S14 specifically includes:
- Step S141 Determine the edge of the second image area as the initial edge.
- an image edge is the boundary between one image attribute region and another; it is where the region attribute changes abruptly.
- the existing image edge extraction method can be used to determine the edge of the second image region.
- usable image edge extraction methods include the differential operator method, the Laplacian of Gaussian operator method, the Canny operator, the fitting method, the relaxation method, the neural network analysis method, the wavelet transform method, and so on.
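- For instance, the Canny operator (one of the listed options) can be applied as follows; the thresholds and file name are placeholder assumptions.

```python
import cv2

gray = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(gray, 100, 200)  # non-zero pixels mark detected edges
```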
- Step S142 Expand the initial edge to a direction outside the second image area to obtain a target edge.
- the target edge is the edge of the expanded image area.
- Step S143 Cut out the fourth image area enclosed by the target edge to obtain a cut out image.
- the fourth image area may be directly cut out as the target image, and the pixel values of the pixels in the cut out image are the pixel values in the image, that is, the pixel values remain unchanged.
- alternatively, edge transparency transition processing can be performed on the image region between the initial edge and the target edge.
- step S142 specifically includes:
- Step S1421 Obtain the pixel points on the initial edge as the initial edge point.
- any pixel point on the initial edge can be selected as the initial edge point.
- the arbitrary pixel point can be a characteristic point or a non-characteristic point.
- Step S1422 Determine an extension point corresponding to the initial edge point according to the reference points on both sides of the initial edge point, where the extension point is located outside the second image area.
- the reference point may be pixel points or feature points on both sides of the initial edge point.
- An initial edge point can correspond to one expansion point or to multiple expansion points. As shown in Figure 1d, the white points outside the enclosed area are the expansion points.
- Step S1423 Connect the external expansion points to form the target edge.
- step S1422 specifically includes:
- Step A Determine the type of the initial edge point according to the reference points on both sides of the initial edge point.
- the initial edge point is connected to the reference points on both sides to obtain two line segments. As shown in Figures 1e and 1f, the two line segments form an included angle, and the type of the initial edge point is determined according to this angle: for example, if the included angle is obtuse, the initial edge point is determined to be a convex point, and if the included angle is acute, it is determined to be a concave point.
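- A direct transcription of this angle test into Python might look as follows; the function treats the angle at the initial edge point between its two reference points exactly as described (obtuse: convex, acute: concave), and all names are illustrative.

```python
import numpy as np

def classify_edge_point(prev_ref, pt, next_ref):
    # Vectors from the initial edge point to the two reference points.
    v1 = np.asarray(prev_ref, dtype=float) - np.asarray(pt, dtype=float)
    v2 = np.asarray(next_ref, dtype=float) - np.asarray(pt, dtype=float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return "convex" if angle > 90.0 else "concave"  # obtuse -> convex
```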
- Step B Determine the extension point corresponding to the initial edge point according to the type.
- step B specifically includes:
- if the type is a convex point, outer end points are obtained by extending a preset length outward along the normal direction of each line segment formed by the initial edge point and a reference point, and interpolation smoothing is performed between the outer end points according to the angle between the normals to obtain the corresponding expansion points;
- if the type is a concave point, an outer end point is obtained by extending a preset length outward along the bisector of the angle formed by the initial edge point and the reference points, and this outer end point of the angle bisector is taken as the corresponding expansion point.
- the preset length can be customized.
- for a convex point, two outer end points can be determined from the normal directions of the two line segments. When these two outer end points are far apart, the resulting image area is not smooth enough, so a series of expansion points is obtained by interpolating and smoothing between the two outer end points, and these serve as the expansion points of the initial edge point.
- for a concave point, the outer end point is determined from the bisector of the angle between the two line segments. In this case there is only one outer end point, so it is used directly as the expansion point of the initial edge point.
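- A sketch of step B under stated assumptions follows: the polygon is assumed to be traversed so that rotating each segment direction by -90 degrees points outward (flip the signs for the opposite winding), expand_len stands in for the preset length, and linear interpolation stands in for the smoothing between the two normal end points.

```python
import numpy as np

def _unit(v):
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def expand_point(prev_ref, pt, next_ref, kind, expand_len=10.0, n_interp=5):
    prev_ref, pt, next_ref = (np.asarray(p, dtype=float)
                              for p in (prev_ref, pt, next_ref))
    if kind == "concave":
        # Single outer end point along the bisector of the included angle;
        # its sign likewise depends on the polygon winding.
        bisector = _unit(_unit(prev_ref - pt) + _unit(next_ref - pt))
        return [pt - bisector * expand_len]
    # Convex: one outer end point per adjacent segment normal, then
    # interpolate between them so the expanded contour stays smooth.
    d1, d2 = _unit(pt - prev_ref), _unit(next_ref - pt)
    n1 = np.array([d1[1], -d1[0]])
    n2 = np.array([d2[1], -d2[0]])
    e1, e2 = pt + n1 * expand_len, pt + n2 * expand_len
    ts = np.linspace(0.0, 1.0, n_interp)
    return [(1.0 - t) * e1 + t * e2 for t in ts]
```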
- step S143 specifically includes:
- Step S1431 Determine the product of the preset first weight and the pixel value, in the image, of a pixel on the initial edge as the pixel value of that pixel in the matted image.
- the preset first weight can be customized by the user, for example, it can be set to 1.
- Step S1432 Determine the product of the preset second weight and the pixel value of the pixel on the target edge in the image as the pixel value of the pixel on the target edge in the matting image.
- the preset second weight can be customized by the user; for example, it can be set to 0. When setting the preset first weight and the preset second weight, the preset first weight is set greater than the preset second weight, so that the opacity of the image area between the initial edge and the target edge gradually decreases with the degree of expansion.
- Step S1433 Determine the pixel value of each pixel in the fifth image area according to the initial edge and the target edge; wherein, the fifth image area is the image area between the initial edge and the target edge .
- step S1433 includes:
- Step C Select a first pixel on the initial edge and a second pixel on the target edge to form a polygon.
- the first pixel point may be any pixel point, any feature point or any initial edge point on the initial edge.
- the second pixel point may be any pixel point, any feature point or any extension point on the target edge.
- the first pixels and the second pixels may form a polygon, and the polygon may be a triangle, a quadrilateral, a pentagon, or the like.
- Step D Determine the third weight of the third pixel according to the coordinates of the vertices of the polygon and the coordinates of the third pixel located in the polygon.
- the weights of the pixels inside the polygon can be set to values between the preset first weight and the preset second weight, for example values between 0 and 1.
- Step E Determine the product of the third weight and the pixel value of the third pixel in the image as the pixel value of the third pixel in the matting image.
- step D specifically includes:
- Step D1 Determine the weight coefficient of each vertex according to the coordinates of the vertices of the polygon and the coordinates of the third pixel point.
- Step D2 Use the weighted sum of the weight coefficients of all vertices and the set weight parameters as the third weight.
- the set weight parameter can be determined according to the edge on which the vertex lies. Specifically, if the vertex is a first pixel on the initial edge, its weight parameter is the preset first weight; if the vertex is an expansion point on the target edge, its weight parameter is the preset second weight.
- the three vertices of the triangle consist of first pixels on the initial edge and expansion points on the target edge: for example, two first pixels on the initial edge and one expansion point on the target edge, or one first pixel on the initial edge and two expansion points on the target edge. With the coordinates of the three vertices P1, P2 and P3 known, u and v are used as the weight coefficients of P2 and P3, and (1-u-v) as the weight coefficient of P1. For any point P inside the triangle, (u, v) must satisfy u ≥ 0, v ≥ 0, u+v ≤ 1. Given the coordinates of P1, P2, P3 and P, the values of u and v can be obtained by solving the following equations:
  Px = (1-u-v)·P1.x + u·P2.x + v·P3.x
  Py = (1-u-v)·P1.y + u·P2.y + v·P3.y
  where:
- Px is P abscissa
- Py is P ordinate
- P1.x is P1 abscissa
- P1.y is P1 ordinate
- P2.x is P2 abscissa
- P2.y is P2 ordinate
- P3.x is P3 abscissa
- P3.y is P3 ordinate.
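- The equations above reduce to a 2x2 linear system, as the following sketch shows; it also forms the third weight as the weighted sum of the vertex weight parameters from step D2. Names are illustrative only.

```python
import numpy as np

def barycentric_uv(P, P1, P2, P3):
    # Solve P - P1 = u*(P2 - P1) + v*(P3 - P1) for (u, v), which is
    # equivalent to P = (1-u-v)*P1 + u*P2 + v*P3.
    P, P1, P2, P3 = (np.asarray(q, dtype=float) for q in (P, P1, P2, P3))
    A = np.column_stack((P2 - P1, P3 - P1))
    u, v = np.linalg.solve(A, P - P1)
    return u, v  # P lies in the triangle iff u >= 0, v >= 0, u + v <= 1

def third_weight(u, v, w1, w2, w3):
    # w1..w3 are the set weight parameters of vertices P1..P3: the preset
    # first weight for initial-edge vertices, the preset second weight
    # for target-edge (expansion point) vertices.
    return (1.0 - u - v) * w1 + u * w2 + v * w3
```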
- the device embodiments of the present disclosure can be used to perform the steps implemented by the method embodiments of the present disclosure.
- an embodiment of the present disclosure provides a matting device.
- the device can execute the steps in the embodiment of the matting method described in the first embodiment.
- the device mainly includes: a feature point detection module 21, a marked area acquisition module 22, an area determination module 23, and a matting module 24; among them,
- the feature point detection module 21 is used to perform feature point detection on the image to obtain multiple feature points;
- the marked area acquisition module 22 is configured to acquire the first image area manually marked on the image
- the area determining module 23 is configured to adjust the first image area according to the feature points to obtain a second image area
- the matting module 24 is used for matting the image according to the second image area.
- the labeling area acquiring module 22 includes: a labeling point acquiring unit 221 and a labeling area determining unit 222; wherein,
- the annotation point acquiring unit 221 is configured to acquire manually annotated annotation points on the image
- the marking area determining unit 222 is configured to determine the marking point as an edge point of the first image area.
- the labeling area determining unit 222 is specifically configured to: adjust the position of the annotation point according to the feature points to determine the first target point corresponding to the annotation point, and determine the closed area enclosed by the first target points as the second image area.
- the labeling area determining unit 222 is specifically configured to: select the initial points closest to the annotation point from the feature points; determine the closed area enclosed by the initial points as the third image area; and determine the center position of the third image area as the first target point corresponding to the annotation point.
- the matting module 24 includes: an initial edge determining unit 241, a target edge determining unit 242, and a matting unit 243; wherein,
- the initial edge determining unit 241 is configured to determine the edge of the second image area as the initial edge
- the target edge determination unit 242 is configured to expand the initial edge to a direction outside the second image area to obtain a target edge
- the matting unit 243 is configured to cut out the fourth image area enclosed by the target edge to obtain the matted image.
- the target edge determination unit 242 is specifically configured to: acquire pixel points on the initial edge as initial edge points; determine, according to the reference points on both sides of an initial edge point, the expansion point corresponding to that initial edge point, the expansion point being located outside the second image area; and connect the expansion points to form the target edge.
- the target edge determination unit 242 is specifically configured to: determine the type of the initial edge point according to the reference points on both sides of the initial edge point, and determine the expansion point corresponding to the initial edge point according to the type.
- the target edge determination unit 242 is specifically configured to: if the type is a convex point, extend a preset length outward along the normal direction of each line segment formed by the initial edge point and a reference point to obtain the outer end points of the normals, and interpolate smoothly between the outer end points according to the angle between the normals to obtain the corresponding expansion points; if the type is a concave point, extend a preset length outward along the bisector of the angle formed by the initial edge point and the reference points to obtain the outer end point of the angle bisector, and use that outer end point as the corresponding expansion point.
- the matting unit 243 is specifically configured to: determine the product of the preset first weight and the pixel value, in the image, of a pixel on the initial edge as the pixel value of that pixel in the matted image; determine the product of the preset second weight and the pixel value, in the image, of a pixel on the target edge as the pixel value of that pixel in the matted image; and determine the pixel value of each pixel in the fifth image area according to the initial edge and the target edge, the fifth image area being the image area between the initial edge and the target edge.
- the matting unit 243 is specifically configured to: select first pixels located on the initial edge and second pixels located on the target edge to form a polygon; determine the third weight of a third pixel located within the polygon according to the coordinates of the vertices of the polygon and the coordinates of the third pixel; and determine the product of the third weight and the pixel value of the third pixel in the image as the pixel value of the third pixel in the matted image.
- the matting unit 243 is specifically configured to: determine the weight coefficient of each vertex according to the coordinates of the vertices of the polygon and the coordinates of the third pixel, and take the weighted sum of the weight coefficients of all vertices and the set weight parameters as the third weight.
- Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
- the electronic device shown in FIG. 3 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
- the electronic device 300 may include a processing device (such as a central processing unit or a graphics processor) 301, which can perform various appropriate actions and processing according to a program stored in read-only memory (ROM) 302 or a program loaded from a storage device 306 into random access memory (RAM) 303.
- in the RAM 303, various programs and data required for the operation of the electronic device 300 are also stored.
- the processing device 301, the ROM 302, and the RAM 303 are connected to each other through a bus 304.
- An input/output (I/O) interface 305 is also connected to the bus 304.
- the following devices can be connected to the I/O interface 305: input devices 306 such as touch screens, touch pads, keyboards, mice, cameras, microphones, accelerometers, and gyroscopes; output devices 307 such as liquid crystal displays (LCDs), speakers, and vibrators; storage devices 306 such as magnetic tapes and hard disks; and a communication device 309.
- the communication device 309 may allow the electronic device 300 to perform wireless or wired communication with other devices to exchange data.
- Although Fig. 3 shows an electronic device 300 with various devices, it should be understood that it is not required to implement or provide all of the illustrated devices; more or fewer devices may alternatively be implemented or provided.
- the process described above with reference to the flowchart can be implemented as a computer software program.
- the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer readable medium, and the computer program contains program code for executing the method shown in the flowchart.
- the computer program may be downloaded and installed from the network through the communication device 309, or installed from the storage device 306, or installed from the ROM 302.
- when the computer program is executed by the processing device 301, the above-mentioned functions defined in the method of the embodiments of the present disclosure are executed.
- the aforementioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
- the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
- the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
- the computer-readable signal medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device .
- the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wire, optical cable, RF (Radio Frequency), etc., or any suitable combination of the above.
- the client and server can communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication (e.g., a communication network) in any form or medium. Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (for example, the Internet), and peer-to-peer networks (for example, ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
- the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or it may exist alone without being assembled into the electronic device.
- the aforementioned computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: perform feature point detection on the image to obtain feature points; acquire the first image area manually marked on the image; adjust the first image area according to the feature points to obtain a second image area; and mat the image according to the second image area.
- the computer program code used to perform the operations of the present disclosure may be written in one or more programming languages or a combination thereof.
- these programming languages include, but are not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code can be executed entirely on the user's computer, partly on the user's computer, executed as an independent software package, partly on the user's computer and partly executed on a remote computer, or entirely executed on the remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
- each block in the flowchart or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for realizing the specified logical function.
- the functions marked in the block may also occur in a different order from the order marked in the drawings. For example, two blocks shown in succession can actually be executed substantially in parallel, or they can sometimes be executed in the reverse order, depending on the functions involved.
- each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
- the units involved in the embodiments described in the present disclosure can be implemented in software or hardware. Wherein, the name of the unit does not constitute a limitation on the unit itself under certain circumstances.
- the first obtaining unit can also be described as "a unit for obtaining at least two Internet Protocol addresses.”
- exemplary types of hardware logic components include: Field Programmable Gate Arrays (FPGA), Application Specific Integrated Circuits (ASIC), Application Specific Standard Products (ASSP), Systems on Chip (SOC), Complex Programmable Logic Devices (CPLD), and so on.
- a machine-readable medium may be a tangible medium, which may contain or store a program for use by the instruction execution system, apparatus, or device or in combination with the instruction execution system, apparatus, or device.
- the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
- the machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or any suitable combination of the foregoing.
- more specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
- a matting method including:
- the acquiring the first image area manually marked on the image includes:
- the label point is determined as an edge point of the first image area.
- the adjusting the first image area according to the characteristic points to obtain a second image area includes:
- the closed area enclosed by the first target point is determined as the second image area.
- the adjusting the position of the marking point according to the characteristic point and determining the first target point corresponding to the marking point includes:
- the center position of the third image area is determined as the first target point corresponding to the marked point.
- the matting the image according to the second image area includes:
- the externally expanding the initial edge to a direction outside the second image area to obtain a target edge includes:
- the determining the extension point corresponding to the initial edge point according to the reference points on both sides of the initial edge point includes:
- the outward expansion point corresponding to the initial edge point is determined according to the type.
- the determining the extension point corresponding to the initial edge point according to the type includes:
- if the type is a convex point, the outer end points of the normals are obtained by extending a preset length outward along the normal direction of each line segment formed by the initial edge point and a reference point, and interpolation smoothing is performed between the outer end points according to the angle between the normals to obtain the corresponding expansion points;
- if the type is a concave point, the outer end point of the angle bisector is obtained by extending a preset length outward along the bisector of the angle formed by the initial edge point and the reference points, and that outer end point is used as the corresponding expansion point.
- the cutting out the fourth image area enclosed by the target edge to obtain the cut out image includes:
- the pixel value of each pixel in the fifth image area is determined according to the initial edge and the target edge; wherein the fifth image area is an image area between the initial edge and the target edge.
- the determining the pixel value of each pixel in the fifth image area according to the initial edge and the target edge includes:
- the product of the third weight and the pixel value of the third pixel in the image is determined as the pixel value of the third pixel in the matting image.
- the determining the third weight of the third pixel based on the coordinates of the vertices of the polygon and the coordinates of the third pixel located in the polygon includes:
- the weighted sum of the weight coefficients of all vertices and the set weight parameters is used as the third weight.
- a matting device including:
- the feature point detection module is used to perform feature point detection on the image to obtain multiple feature points
- a marked area acquisition module configured to acquire the first image area manually marked on the image
- An area determination module configured to adjust the first image area according to the feature points to obtain a second image area
- the matting module is used for matting the image according to the second image area.
- the marking area acquisition module includes:
- An annotation point acquiring unit configured to acquire an annotation point manually annotated on the image
- the marking area determining unit is configured to determine the marking point as an edge point of the first image area.
- the labeling area determining unit is specifically configured to: adjust the position of the annotation point according to the feature points to determine the first target point corresponding to the annotation point, and determine the closed area enclosed by the first target points as the second image area.
- the labeling area determining unit is specifically configured to: select the initial points closest to the annotation point from the feature points; determine the closed area enclosed by the initial points as the third image area; and determine the center position of the third image area as the first target point corresponding to the annotation point.
- the image matting module includes:
- An initial edge determining unit configured to determine an edge of the second image area as an initial edge
- a target edge determination unit configured to expand the initial edge to a direction outside the second image area to obtain a target edge
- the matting unit is used to cut out the fourth image area enclosed by the target edge to obtain the matted image.
- the target edge determining unit is specifically configured to: acquire pixel points on the initial edge as initial edge points; determine, according to the reference points on both sides of an initial edge point, the expansion point corresponding to that initial edge point, the expansion point being located outside the second image area; and connect the expansion points to form the target edge.
- the target edge determining unit is specifically configured to: determine the type of the initial edge point according to the reference points on both sides of the initial edge point; and determine the extension point corresponding to the initial edge point according to the type.
- the target edge determination unit is specifically configured to: if the type is a convex point, extend a preset length outward along the normal direction of each line segment formed by the initial edge point and a reference point to obtain the outer end points of the normals, and interpolate smoothly between the outer end points according to the angle between the normals to obtain the corresponding expansion points; if the type is a concave point, extend a preset length outward along the bisector of the angle formed by the initial edge point and the reference points to obtain the outer end point of the angle bisector, and use that outer end point as the corresponding expansion point.
- the matting unit is specifically configured to: determine the product of the preset first weight and the pixel value, in the image, of a pixel on the initial edge as the pixel value of that pixel in the matted image; determine the product of the preset second weight and the pixel value, in the image, of a pixel on the target edge as the pixel value of that pixel in the matted image; and determine the pixel value of each pixel in the fifth image area according to the initial edge and the target edge, the fifth image area being the image area between the initial edge and the target edge.
- the matting unit is specifically configured to: select first pixels located on the initial edge and second pixels located on the target edge to form a polygon; determine the third weight of a third pixel located within the polygon according to the coordinates of the vertices of the polygon and the coordinates of the third pixel; and determine the product of the third weight and the pixel value of the third pixel in the image as the pixel value of the third pixel in the matted image.
- the matting unit is specifically configured to: determine the weight coefficient of each vertex according to the coordinates of the vertices of the polygon and the coordinates of the third pixel, and take the weighted sum of the weight coefficients of all vertices and the set weight parameters as the third weight.
Claims (14)
- A matting method, characterized by comprising: performing feature point detection on an image to obtain feature points; acquiring a first image area manually marked on the image; adjusting the first image area according to the feature points to obtain a second image area; and matting the image according to the second image area.
- The method according to claim 1, characterized in that acquiring the first image area manually marked on the image comprises: acquiring annotation points manually marked on the image; and determining the annotation points as edge points of the first image area.
- The method according to claim 2, characterized in that adjusting the first image area according to the feature points to obtain the second image area comprises: adjusting the positions of the annotation points according to the feature points to determine the first target points corresponding to the annotation points; and determining the closed area enclosed by the first target points as the second image area.
- The method according to claim 3, characterized in that adjusting the positions of the annotation points according to the feature points to determine the first target points corresponding to the annotation points comprises: selecting, from the feature points, the initial points closest to an annotation point; determining the closed area enclosed by the initial points as a third image area; and determining the center position of the third image area as the first target point corresponding to the annotation point.
- The method according to claim 1, characterized in that matting the image according to the second image area comprises: determining an edge of the second image area as an initial edge; expanding the initial edge outward beyond the second image area to obtain a target edge; and cutting out the fourth image area enclosed by the target edge to obtain a matted image.
- The method according to claim 5, characterized in that expanding the initial edge outward beyond the second image area to obtain the target edge comprises: acquiring pixel points on the initial edge as initial edge points; determining, according to the reference points on both sides of an initial edge point, the expansion point corresponding to the initial edge point, the expansion point being located outside the second image area; and connecting the expansion points to form the target edge.
- The method according to claim 6, characterized in that determining the expansion point corresponding to the initial edge point according to the reference points on both sides of the initial edge point comprises: determining the type of the initial edge point according to the reference points on both sides of the initial edge point; and determining the expansion point corresponding to the initial edge point according to the type.
- The method according to claim 7, characterized in that determining the expansion point corresponding to the initial edge point according to the type comprises: if the type is a convex point, extending a preset length outward along the normal direction of each line segment formed by the initial edge point and a reference point to obtain the outer end points of the normals, and performing interpolation smoothing between the outer end points according to the angle between the normals to obtain the corresponding expansion points; if the type is a concave point, extending a preset length outward along the bisector of the angle formed by the initial edge point and the reference points to obtain the outer end point of the angle bisector, and taking the outer end point of the angle bisector as the corresponding expansion point.
- The method according to any one of claims 5-8, characterized in that cutting out the fourth image area enclosed by the target edge to obtain the matted image comprises: determining the product of a preset first weight and the pixel value, in the image, of a pixel on the initial edge as the pixel value of that pixel in the matted image; determining the product of a preset second weight and the pixel value, in the image, of a pixel on the target edge as the pixel value of that pixel in the matted image; and determining the pixel value of each pixel in a fifth image area according to the initial edge and the target edge, wherein the fifth image area is the image area between the initial edge and the target edge.
- The method according to claim 9, characterized in that determining the pixel value of each pixel in the fifth image area according to the initial edge and the target edge comprises: selecting first pixels located on the initial edge and second pixels located on the target edge to form a polygon; determining the third weight of a third pixel located within the polygon according to the coordinates of the vertices of the polygon and the coordinates of the third pixel; and determining the product of the third weight and the pixel value of the third pixel in the image as the pixel value of the third pixel in the matted image.
- The method according to claim 10, characterized in that determining the third weight of the third pixel according to the coordinates of the vertices of the polygon and the coordinates of the third pixel located within the polygon comprises: determining the weight coefficient of each vertex according to the coordinates of the vertices of the polygon and the coordinates of the third pixel; and taking the weighted sum of the weight coefficients of all vertices and the set weight parameters as the third weight.
- A matting apparatus, characterized by comprising: a feature point detection module configured to perform feature point detection on an image to obtain multiple feature points; a marked area acquisition module configured to acquire a first image area manually marked on the image; an area determination module configured to adjust the first image area according to the feature points to obtain a second image area; and a matting module configured to mat the image according to the second image area.
- An electronic device, comprising: a memory for storing non-transitory computer-readable instructions; and a processor for running the computer-readable instructions such that, when the instructions are executed, the processor implements the matting method according to any one of claims 1-11.
- A computer-readable storage medium for storing non-transitory computer-readable instructions which, when executed by a computer, cause the computer to perform the matting method according to any one of claims 1-11.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US17/770,983 (US20220375098A1) | 2019-10-24 | 2020-07-29 | Image matting method and apparatus |
Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911014880.7 | 2019-10-24 | | |
| CN201911014880.7A (CN112712459B) | 2019-10-24 | 2019-10-24 | Image matting method and apparatus |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| WO2021077836A1 | 2021-04-29 |
Family
ID=75540206
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2020/105441 (WO2021077836A1) | Image matting method and apparatus | 2019-10-24 | 2020-07-29 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20220375098A1 (zh) |
CN (1) | CN112712459B (zh) |
WO (1) | WO2021077836A1 (zh) |
Citations (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20150193939A1 * | 2012-10-30 | 2015-07-09 | Apple Inc. | Depth mapping with enhanced resolution |
| CN109389611A * | 2018-08-29 | 2019-02-26 | 稿定(厦门)科技有限公司 | Interactive matting method, medium, and computer device |
| CN109388725A * | 2018-10-30 | 2019-02-26 | 百度在线网络技术(北京)有限公司 | Method and apparatus for searching through video content |
| CN109934843A * | 2019-01-28 | 2019-06-25 | 北京华捷艾米科技有限公司 | Real-time contour-refined matting method and storage medium |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104657974A | 2013-11-25 | 2015-05-27 | 腾讯科技(上海)有限公司 | Image processing method and apparatus |
| CN104820990A | 2015-05-15 | 2015-08-05 | 北京理工大学 | Interactive image matting system |
| CN110097560A | 2019-04-30 | 2019-08-06 | 上海艾麒信息科技有限公司 | Matting method and apparatus |
Application timeline:
- 2019-10-24: CN application CN201911014880.7A filed; granted as CN112712459B (active)
- 2020-07-29: PCT application PCT/CN2020/105441 filed (published as WO2021077836A1)
- 2020-07-29: US application US17/770,983 filed (published as US20220375098A1, pending)
Also Published As
Publication number | Publication date |
---|---|
CN112712459A (zh) | 2021-04-27 |
US20220375098A1 (en) | 2022-11-24 |
CN112712459B (zh) | 2023-09-19 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 20879154; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | EP: PCT application non-entry in European phase | Ref document number: 20879154; Country of ref document: EP; Kind code of ref document: A1 |
| | 32PN | EP: public notification in the EP bulletin as address of the addressee cannot be established | Free format text: Noting of loss of rights pursuant to Rule 112(1) EPC (EPO Form 1205A dated 30.08.2022) |