WO2020019915A1 - An image processing method, apparatus, and computer storage medium - Google Patents
An image processing method, apparatus, and computer storage medium
- Publication number
- WO2020019915A1 (PCT/CN2019/092353; CN2019092353W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- limb
- mesh control
- type
- control surface
- target object
- Prior art date
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING
- G06T3/18—Image warping, e.g. rearranging pixels individually
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06F18/2163—Partitioning the feature space
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
- G06T9/20—Contour coding, e.g. using detection of edges
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
Definitions
- the present application relates to image processing technologies, and in particular, to an image processing method, device, and computer storage medium.
- body shaping functions such as "leg shaping", "arm shaping", "waist shaping", "hip shaping", "shoulder shaping", "head shaping", "chest shaping", etc.
- embodiments of the present application provide an image processing method, device, and computer storage medium.
- An embodiment of the present application provides an image processing method.
- the method includes:
- Deformation processing is performed, based on at least some of the plurality of mesh control surfaces, on at least part of a limb region corresponding to the target object to generate a second image.
- the determining a target object in the first image includes:
- the limb detection information includes limb key point information and/or limb contour point information;
- the limb key point information includes coordinate information of the limb key points;
- the limb contour point information includes coordinate information of the limb contour points.
- performing deformation processing on at least a part of a limb region corresponding to the target object based on at least a part of the plurality of mesh control surfaces includes:
- a first set of mesh control surfaces corresponding to the first limb detection information is determined, and deformation processing is performed on the first set of mesh control surfaces.
- determining the first group of mesh control surfaces corresponding to the first limb detection information, and performing deformation processing on the first group of mesh control surfaces includes:
- the first group of mesh control surfaces includes at least one mesh control surface
- the mesh control surface is a first-type mesh control surface
- Determining a first group of mesh control surfaces corresponding to the first limb detection information, and performing deformation processing on the first group of mesh control surfaces includes:
- the first-type mesh control surface includes a plurality of first-type mesh control points
- the deforming the at least one first-type mesh control surface based on the first deformation parameter includes:
- movement of any one of the plurality of first-type mesh control points realizes deformation of the first-type mesh control surface.
- the mesh control surface is a second-type mesh control surface
- the determining the first group of mesh control surfaces corresponding to the first limb detection information, and performing deformation processing on the first group of mesh control surfaces includes:
- the second-type mesh control surface includes a plurality of second-type mesh control points
- the deforming the at least one second-type mesh control surface based on a second deformation parameter includes:
- movement of any one of the plurality of second-type mesh control points realizes deformation of the area corresponding to that control point in the second-type mesh control surface.
- An embodiment of the present application further provides an image processing apparatus, where the apparatus includes: an obtaining unit, a mesh dividing unit, and an image processing unit; wherein,
- the obtaining unit is configured to obtain a first image
- the mesh dividing unit is configured to mesh the first image obtained by the obtaining unit to obtain a plurality of mesh control surfaces
- the image processing unit is configured to determine a target object in the first image obtained by the obtaining unit, and to perform deformation processing, based on at least some of the plurality of mesh control surfaces, on at least part of the limb region corresponding to the target object to generate a second image.
- the image processing unit is configured to obtain limb detection information of a target object in the first image; the limb detection information includes limb key point information and/or limb contour point information; the limb key point information includes coordinate information of the limb key points; the limb contour point information includes coordinate information of the limb contour points.
- the image processing unit is configured to determine at least part of a limb region to be deformed in the target object, obtain first limb detection information of the at least part of the limb region, determine a first set of mesh control surfaces corresponding to the first limb detection information, and perform deformation processing on the first set of mesh control surfaces.
- the image processing unit is configured to determine a corresponding first set of mesh control surfaces based on first limb key point information and/or first limb contour point information included in the first limb detection information.
- the mesh control surface is a first-type mesh control surface
- the image processing unit is configured to determine at least one first-type mesh control surface corresponding to the first limb detection information, and to perform deformation processing on the at least one first-type mesh control surface based on a first deformation parameter, so as to compress or stretch a limb area corresponding to the target object and compress or stretch at least part of the background area outside the target object.
- the first-type mesh control surface includes a plurality of first-type mesh control points
- the image processing unit is configured to move, based on the first deformation parameter, at least some of the plurality of first-type mesh control points included in the first-type mesh control surface, so as to deform the first-type mesh control surface; the movement of any one of the plurality of first-type mesh control points realizes deformation of the first-type mesh control surface.
- the mesh control surface is a second-type mesh control surface
- the image processing unit is configured to determine at least one second-type mesh control surface corresponding to the first limb detection information, and to perform deformation processing on the at least one second-type mesh control surface based on a second deformation parameter, so as to compress or stretch part of a limb area corresponding to the target object and compress or stretch at least part of the background area outside the target object.
- the second-type mesh control surface includes a plurality of second-type mesh control points; the image processing unit is configured to move, based on the second deformation parameter, at least some of the second-type mesh control points included in the second-type mesh control surface, so as to deform the second-type mesh control surface; the movement of any one of these control points realizes deformation of the area corresponding to that control point in the second-type mesh control surface.
- An embodiment of the present application further provides a computer-readable storage medium having computer instructions stored thereon, which when executed by a processor, implement the steps of the image processing method described in the embodiments of the present application.
- An embodiment of the present application further provides an image processing apparatus, including a memory, a processor, and a computer program stored on the memory and executable on the processor.
- when the processor executes the program, the steps of the image processing method described in the embodiments of the present application are implemented.
- An embodiment of the present application further provides a computer program including computer instructions, and when the computer instructions are run in a processor of a device, the method described in the embodiments of the present application is implemented.
- An image processing method, device, and computer storage medium are provided in the embodiments of the present application. The method includes: obtaining a first image and meshing the first image to obtain a plurality of mesh control surfaces; identifying a target object in the first image; and performing deformation processing on at least part of the limb area corresponding to the target object based on at least some of the plurality of mesh control surfaces to generate a second image.
- the mesh is divided based on the image to obtain multiple mesh control surfaces, and at least part of the limb area of the target object is deformed based on the mesh control surfaces, thereby realizing automatic adjustment of the limb area of the target object without multiple manual operations by the user, greatly improving the user's operating experience.
- FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
- FIG. 2 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
- FIG. 3 is a schematic diagram of a hardware composition and structure of an image processing apparatus according to an embodiment of the present application.
- FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application. As shown in FIG. 1, the method includes:
- Step 101: Obtain a first image and mesh the first image to obtain multiple mesh control surfaces.
- Step 102: Determine a target object in the first image.
- Step 103: Perform deformation processing, based on at least some of the plurality of mesh control surfaces, on at least part of the limb area corresponding to the target object to generate a second image.
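The three steps can be sketched in Python as follows. This is a minimal illustration, not the patent's implementation; all function and parameter names are ours, and the limb detector and the control-surface deformation are passed in as stand-ins.

```python
import numpy as np

def mesh_image(shape, n_rows, n_cols):
    # Step 101: divide an image of the given (height, width) into
    # n_rows * n_cols rectangular mesh control surfaces, each returned
    # as a (y0, y1, x0, x1) cell.
    h, w = shape
    ys = np.linspace(0, h, n_rows + 1)
    xs = np.linspace(0, w, n_cols + 1)
    return [(ys[i], ys[i + 1], xs[j], xs[j + 1])
            for i in range(n_rows) for j in range(n_cols)]

def process(first_image, detect_target, deform, n_rows=8, n_cols=8):
    # Steps 101-103 chained; detect_target and deform are caller-supplied
    # stand-ins for the limb detector and the surface deformation.
    surfaces = mesh_image(first_image.shape[:2], n_rows, n_cols)  # step 101
    target = detect_target(first_image)                           # step 102
    return deform(first_image, surfaces, target)                  # step 103
```

For a 400 x 300 image and an 8 x 8 mesh, `mesh_image` yields 64 control surfaces of 50 x 37.5 pixels each.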
- the image processing method of this embodiment performs mesh division on the first image to obtain multiple mesh control surfaces.
- the first image is evenly divided into N * M grid control surfaces, N and M are both positive integers, and N and M are the same or different.
- alternatively, meshing is centered on the target object in the first image: the rectangular region where the target object is located is meshed first, and the background region outside the rectangle is then meshed using the same meshing granularity as the rectangular region.
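Along one axis, this object-centered meshing can be sketched as follows (a hedged illustration under our own conventions, not the patent's algorithm): the target's span is split into equal cells, and grid lines at the same spacing are continued outward into the background, clipped to the image edge.

```python
import math

def mesh_lines(length, lo, hi, cells):
    # One axis of the object-centered mesh: [lo, hi] is the target's span,
    # split into `cells` equal cells; the same spacing is continued into
    # the background on both sides and clipped to [0, length].
    step = (hi - lo) / cells
    inside = [lo + k * step for k in range(cells + 1)]
    left = [lo - k * step for k in range(1, math.ceil(lo / step) + 1)]
    right = [hi + k * step
             for k in range(1, math.ceil((length - hi) / step) + 1)]
    return sorted(max(0.0, min(float(length), v))
                  for v in left + inside + right)
```

Running the same construction on both axes produces the full two-dimensional grid of control surfaces.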
- the number of mesh control surfaces is related to the proportion of the first image occupied by the limb area corresponding to the target object.
- a mesh control surface may correspond to part of a limb area of the target object; for example, a mesh control surface may correspond to the legs of the target object, or to the chest and waist of the target object, so that both global deformation and local deformation of the target object can be achieved.
- the mesh control surface is used as a basic deformation unit to process at least part of the limb area corresponding to the target object, that is, the mesh control surface is subjected to deformation processing, so as to achieve deformation of at least part of the limb area corresponding to the target object.
- the target object in the first image is identified; the target object is the object to be processed, which can be a real person (that is, a real person in the image); in other embodiments, the target object can also be a virtual character.
- the execution order of meshing the first image and identifying the target object is not limited to the order in this embodiment; the target object in the first image may also be identified before the first image is meshed to obtain multiple mesh control surfaces.
- determining the target object in the first image includes: obtaining limb detection information of the target object in the first image; the limb detection information includes limb key point information and/or limb contour point information; the limb key point information includes coordinate information of the limb key points; the limb contour point information includes coordinate information of the limb contour points.
- the limb region corresponding to the target object includes a head region, a shoulder region, a chest region, a waist region, an arm region, a hand region, a hip region, a leg region, and a foot region.
- the limb detection information includes limb key point information and / or limb contour point information; the limb key point information includes coordinate information of the limb key point; the limb contour point information includes coordinate information of the limb contour point.
- the limb contour point represents a limb contour of a limb region of the target object, that is, a limb contour edge of the target object can be formed through coordinate information of the limb contour point.
- the limb contour points include at least one of the following: arm contour points, hand contour points, shoulder contour points, leg contour points, foot contour points, waist contour points, head contour points, hip contour points, and chest contour points.
- the limb key points represent key points of the bones of the target object; that is, the main skeleton of the target object can be formed from the coordinate information of the limb key points by connecting them.
- the limb key points include at least one of the following: arm key points, hand key points, shoulder key points, leg key points, foot key points, waist key points, head key points, hip key points, and chest key points.
- a target object in the first image is identified by an image recognition algorithm, and limb detection information of the target object is further determined.
- performing deformation processing on at least part of the limb area corresponding to the target object based on at least some of the multiple mesh control surfaces includes: determining the at least part of the limb area to be deformed in the target object and obtaining the first limb detection information of that area; and determining a first set of grid control surfaces corresponding to the first limb detection information and performing deformation processing on the first set of grid control surfaces.
- determining a first group of mesh control surfaces corresponding to the first limb detection information and performing deformation processing on them includes: determining a corresponding first set of mesh control surfaces based on the first limb key point information and/or first limb contour point information included in the first limb detection information, the first set including at least one mesh control surface; and deforming the at least one mesh control surface to compress or stretch at least part of a limb area corresponding to the target object and compress or stretch at least part of the background area outside the target object.
- first determine at least part of the limb area of the target object to be deformed, for example the waist area or leg area, or the entire limb area of the target object; then determine the first limb detection information of that area, namely the coordinate information of the limb key points and/or limb contour points of the at least part of the limb area to be deformed; based on this coordinate information, determine the first set of mesh control surfaces corresponding to the at least part of the limb area, the first set including at least one mesh control surface, that is, determine at least one grid control surface corresponding to the at least part of the limb area. It can be understood that the at least part of the limb area lies within the area covered by the at least one grid control surface.
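As a hedged illustration of how the first limb detection information could select the first set of mesh control surfaces (the cell-index convention here is our assumption, not specified by the patent), each key point or contour point coordinate is mapped to the grid cell containing it:

```python
def surfaces_for_region(points, cell_w, cell_h):
    # Map limb key point / contour point coordinates (x, y) to the set of
    # grid-cell indices (row, col) whose control surfaces cover the limb
    # region and therefore must be deformed.
    return {(int(y // cell_h), int(x // cell_w)) for x, y in points}
```

For example, two waist contour points at (120, 260) and (180, 260) with 100 x 100 pixel cells both fall in cell (2, 1), so the first group contains that single control surface.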
- in the initial state, the grid control surface is rectangular and has a plurality of virtual control points (or control lines); by moving the control points (or control lines), the curvature of the control lines composing the surface is changed, thereby realizing deformation processing of the mesh control surface. It can be understood that the mesh control surface after deformation processing is a curved surface.
- the mesh control surface is a first-type mesh control surface
- the determining the first set of mesh control surfaces corresponding to the first limb detection information and performing deformation processing on them includes: determining at least one first-type mesh control surface corresponding to the first limb detection information, and deforming the at least one first-type mesh control surface based on a first deformation parameter to compress or stretch a limb region corresponding to the target object and compress or stretch at least part of the background area outside the target object.
- the first-type mesh control surface includes a plurality of first-type mesh control points
- the deforming the at least one first-type mesh control surface based on a first deformation parameter includes: moving, according to the first deformation parameter, at least some of the plurality of first-type mesh control points included in the first-type mesh control surface to deform the first-type mesh control surface; the movement of any one of the plurality of first-type mesh control points realizes deformation of the first-type mesh control surface.
- the first type of mesh control surface may be a Bezier surface formed by Bezier curves.
- a Bezier curve can have multiple control points, and a Bezier surface can be formed by multiple Bezier curves. Deformation processing of a Bezier curve is achieved by moving at least some of its control points; by moving control points of multiple Bezier curves, deformation processing of the limb region corresponding to the Bezier surface formed by those curves is achieved. Among the control points of a Bezier surface, the movement of any control point deforms the Bezier surface globally.
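A minimal evaluation of such a surface in tensor-product Bezier form is sketched below (a standard formulation; the patent does not give one). Because every control point carries nonzero Bernstein weight at every interior (u, v), moving any one point shifts the whole surface, which is the global behavior described above.

```python
import math

def bernstein(n, i, t):
    # Bernstein basis polynomial B_{i,n}(t).
    return math.comb(n, i) * t**i * (1 - t)**(n - i)

def bezier_surface(ctrl, u, v):
    # Evaluate a tensor-product Bezier surface at (u, v) in [0, 1]^2 from
    # an (n+1) x (m+1) grid of 2-D control points.
    n, m = len(ctrl) - 1, len(ctrl[0]) - 1
    x = y = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            w = bernstein(n, i, u) * bernstein(m, j, v)
            x += w * ctrl[i][j][0]
            y += w * ctrl[i][j][1]
    return (x, y)
```

With a 2 x 2 control grid forming the unit square, the surface at (0.5, 0.5) is the square's center; nudging any corner control point moves every interior sample.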
- the deformation process for the entire limb area of the target object deforms at least one first-type mesh control surface with reference to the first deformation parameter; that is, the first-type mesh control points to be adjusted in the first-type mesh control surface are moved according to the first deformation parameter, so that the entire limb area of the target object is deformed according to the same deformation parameter.
- for example, the entire limb area is compressed ("slimmed") by 20% relative to the initial data: the width of the waist is compressed by 20% compared to the width of the waist before deformation, the width of the legs is compressed by 20% compared to the width of the legs before deformation, and so on.
- This embodiment is suitable for deforming a complete limb region of a target object by using a Bezier surface, so as to achieve global smoothing of the deformation of the complete limb region of the target object.
- the mesh control surface is a second-type mesh control surface
- the determining the first set of mesh control surfaces corresponding to the first limb detection information and performing deformation processing on them includes: determining at least one second-type mesh control surface corresponding to the first limb detection information, and deforming the at least one second-type mesh control surface based on a second deformation parameter to compress or stretch part of a limb region corresponding to the target object and compress or stretch at least part of the background area outside the target object.
- the second-type mesh control surface includes a plurality of second-type mesh control points
- the deforming the at least one second-type mesh control surface based on a second deformation parameter includes: moving, according to the second deformation parameter, at least some of the plurality of second-type mesh control points included in the second-type mesh control surface to deform the second-type mesh control surface; the movement of any one of the plurality of second-type mesh control points realizes deformation of the area corresponding to that control point in the second-type mesh control surface.
- the second type of mesh control surface is specifically a Catmull-Rom surface formed from Catmull-Rom spline curves.
- a Catmull-Rom spline curve can have multiple control points, and a Catmull-Rom surface can be formed by multiple Catmull-Rom spline curves.
- deformation of a Catmull-Rom spline is realized by moving at least some of the control points corresponding to that spline curve; by moving the control points of multiple Catmull-Rom spline curves, parts of the limb region corresponding to the Catmull-Rom surface formed by those splines are deformed.
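One uniform Catmull-Rom segment can be sketched as follows (a common formulation; the patent does not fix a parameterisation). The curve passes through its control points, and a moved control point only influences the few segments it participates in, which is the local behavior this section relies on.

```python
def catmull_rom(p0, p1, p2, p3, t):
    # Point at parameter t in [0, 1] on the uniform Catmull-Rom segment
    # between p1 and p2; p0 and p3 shape the end tangents. Works for
    # points of any dimension given as tuples.
    return tuple(
        0.5 * (2 * b + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t ** 2
               + (-a + 3 * b - 3 * c + d) * t ** 3)
        for a, b, c, d in zip(p0, p1, p2, p3))
```

At t = 0 the segment returns p1 and at t = 1 it returns p2, so displacing one control point (say p2) only bends the handful of segments in which it appears, leaving the rest of the surface unchanged.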
- the difference between the first-type and second-type mesh control surfaces in this embodiment is that the first-type mesh control surface is a Bezier surface, while the second-type mesh control surface is a Catmull-Rom surface.
- the first-type mesh control points do not lie on the Bezier curves forming the Bezier surface; the movement of a first-type mesh control point can change the curvature of the corresponding Bezier curve over a wide range, thereby achieving global deformation processing of the Bezier curve.
- the second-type mesh control points lie on the Catmull-Rom curves that form the Catmull-Rom surface. The movement of a second-type mesh control point changes the curvature of the curve at and/or near that control point's position; that is, moving a second-type mesh control point changes the curvature of a point on the corresponding Catmull-Rom curve, or of the curve near that point, thereby realizing deformation processing of a local area of the Catmull-Rom surface.
- the deformation of a partial limb area of the target object can thus be achieved through deformation processing of the Catmull-Rom surface, which makes local deformation more accurate and improves the effect of image processing.
- At least one second-type mesh control surface is deformed with reference to the second deformation parameter, so as to realize deformation processing of part of the limb region corresponding to the target object.
- the second deformation parameters corresponding to different partial limb regions may be the same or different, so that different partial limb regions have different deformation effects.
- the width of the waist is compressed by 20% compared to the width of the waist before deformation
- the width of the legs is compressed by 10% compared to the width of the leg before deformation.
- whether for the first-type or the second-type mesh control surface, the mesh control points to be moved are determined based on the at least part of the limb area to be deformed and the type of deformation processing (such as compression or stretching), and the determined mesh control points are then moved according to the corresponding deformation parameters.
- standard parameters are also configured during the image processing in the embodiments of the present application.
- as one implementation, the standard parameters indicate parameters that the limb area of the processed target object must satisfy; that is, once the image processing solution of the embodiments of the present application has deformed the limb area so that it meets the standard parameters, the deformation processing of the limb area is terminated. As another implementation, the standard parameters indicate an adjustment proportion for the limb area of the target object; that is, when the limb region is processed using the image processing scheme of the embodiments of the present application, the adjustment change of the limb region meets the adjustment proportion. Based on this, the embodiments of the present application may determine the deformation parameters (including the first deformation parameter or the second deformation parameter) based on the standard parameters.
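The two readings of the standard parameter can be sketched as follows (the target-width and ratio conventions below are our assumptions for illustration; the patent does not fix a representation):

```python
def deformation_parameter(current_width, standard):
    # Target-value form: the scale factor needed so the limb width reaches
    # the standard width. Ratio form: the adjustment proportion is used
    # directly (e.g. 0.8 compresses the width by 20%).
    if isinstance(standard, dict) and "target_width" in standard:
        return standard["target_width"] / current_width
    return float(standard)
```

Either form yields the single scale factor that the first or second deformation parameter would encode for the control-point moves.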
- the mesh is divided based on the image to obtain multiple mesh control surfaces, and at least part of the limb area of the target object is deformed based on the mesh control surfaces, thereby realizing automatic adjustment of the limb area of the target object without multiple manual operations by the user, greatly improving the user's operating experience.
- FIG. 2 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application; as shown in FIG. 2, the apparatus includes: an obtaining unit 21, a mesh dividing unit 22, and an image processing unit 23;
- the obtaining unit 21 is configured to obtain a first image
- the mesh dividing unit 22 is configured to mesh the first image obtained by the obtaining unit 21 to obtain multiple mesh control surfaces;
- the image processing unit 23 is configured to determine a target object in the first image obtained by the obtaining unit 21, and to perform deformation processing, based on at least some of the plurality of mesh control surfaces, on at least part of the limb area corresponding to the target object to generate a second image.
- the image processing unit 23 is configured to obtain limb detection information of the target object in the first image; the limb detection information includes limb key point information and/or limb contour point information; the limb key point information includes coordinate information of the limb key points; the limb contour point information includes coordinate information of the limb contour points.
- the image processing unit 23 is configured to determine at least part of a limb region in the target object to be deformed, obtain first limb detection information of the at least part of the limb region, determine a first group of mesh control surfaces corresponding to the first limb detection information, and perform deformation processing on the first group of mesh control surfaces.
- the image processing unit 23 is configured to determine a corresponding first group of mesh control surfaces based on the first limb keypoint information and / or the first limb contour point information included in the first limb detection information;
- the first group of mesh control surfaces includes at least one mesh control surface; the at least one mesh control surface is deformed to compress or stretch at least part of the limb region corresponding to the target object, and to compress or stretch at least part of the background area outside the target object.
- the mesh control surface is a first-type mesh control surface
- the image processing unit 23 is configured to determine at least one first-type mesh control surface corresponding to the first limb detection information, and to perform deformation processing on the at least one first-type mesh control surface based on a first deformation parameter, to compress or stretch the limb area corresponding to the target object, and to compress or stretch at least part of the background area outside the target object.
- the first-type mesh control surface includes a plurality of first-type mesh control points
- the image processing unit 23 is configured to move, based on the first deformation parameter, at least part of the first-type mesh control points among the plurality of first-type mesh control points included in the first-type mesh control surface, so as to deform the first-type mesh control surface; the movement of any one of the plurality of first-type mesh control points deforms the first-type mesh control surface.
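The defining property of a first-type mesh control surface — moving any single control point deforms the entire surface — can be sketched as follows. The function name, the restriction to four corner control points, and the bilinear weighting are all illustrative assumptions, not details given in the embodiment.

```python
import numpy as np

def deform_first_type(surface_shape, corner_offsets):
    """Displacement field for a first-type mesh control surface.

    corner_offsets maps a corner index (0..3: top-left, top-right,
    bottom-left, bottom-right) to a (dy, dx) move. Bilinear weights
    spread each corner's move over the whole surface, so moving any
    single control point deforms the entire surface.
    """
    h, w = surface_shape
    y, x = np.mgrid[0:h, 0:w]
    v, u = y / (h - 1), x / (w - 1)  # normalised coordinates in [0, 1]
    weights = [(1 - v) * (1 - u), (1 - v) * u, v * (1 - u), v * u]
    field = np.zeros((h, w, 2))
    for idx, (dy, dx) in corner_offsets.items():
        field[..., 0] += weights[idx] * dy
        field[..., 1] += weights[idx] * dx
    return field

# Moving only the top-left control point still displaces the centre pixel.
field = deform_first_type((101, 101), {0: (10.0, 0.0)})
print(field[50, 50, 0])  # 2.5
```

Sampling the original image at positions shifted by this field (e.g. with a remap operation) would then compress or stretch the covered limb region together with part of the adjacent background, matching the behaviour the paragraph describes.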
- the mesh control surface is a second-type mesh control surface
- the image processing unit 23 is configured to determine at least one second-type mesh control surface corresponding to the first limb detection information, and to perform deformation processing on the at least one second-type mesh control surface based on a second deformation parameter, to compress or stretch part of the limb area corresponding to the target object, and to compress or stretch at least part of the background area outside the target object.
- the second-type mesh control surface includes a plurality of second-type mesh control points
- the image processing unit 23 is configured to move, based on the second deformation parameter, at least part of the second-type mesh control points among the plurality of second-type mesh control points included in the second-type mesh control surface, so as to deform the second-type mesh control surface; the movement of any one of the plurality of second-type mesh control points deforms the region of the second-type mesh control surface corresponding to that control point.
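By contrast, a second-type mesh control surface localizes the deformation: moving a control point only affects the region associated with that point. A minimal sketch of this locality, assuming a hypothetical linear radial falloff (the embodiment does not specify the falloff function):

```python
import numpy as np

def deform_second_type(surface_shape, point, offset, radius):
    """Displacement field for a second-type mesh control surface.

    Moving the control point at `point` by `offset` deforms only the
    region within `radius` of that point (linear falloff to zero);
    pixels outside that region are left untouched.
    """
    h, w = surface_shape
    y, x = np.mgrid[0:h, 0:w]
    dist = np.hypot(y - point[0], x - point[1])
    falloff = np.clip(1.0 - dist / radius, 0.0, None)  # zero outside radius
    field = np.zeros((h, w, 2))
    field[..., 0] = falloff * offset[0]
    field[..., 1] = falloff * offset[1]
    return field

field = deform_second_type((101, 101), point=(50, 50), offset=(8.0, 0.0), radius=20)
print(field[50, 50, 0])  # 8.0 at the control point
print(field[90, 90, 0])  # 0.0 outside the affected region
```

This is the difference between the two surface types in the claims: the first type spreads a control-point move over the whole surface, while the second type confines it to the control point's own region.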
- the obtaining unit 21, the mesh dividing unit 22, and the image processing unit 23 in the device may be implemented by a central processing unit (CPU), a digital signal processor (DSP), a microcontroller unit (MCU), or a field-programmable gate array (FPGA).
- FIG. 3 is a schematic diagram of the hardware composition and structure of the image processing apparatus according to an embodiment of the present application. As shown in FIG. 3, the apparatus includes a processor 31 and a memory 32 storing a computer program that can run on the processor 31. When the processor 31 executes the program, the image processing method according to any one of the foregoing embodiments of the present application is implemented.
- various components in the image processing apparatus are coupled together through a bus system 33; it can be understood that the bus system 33 is used to implement connection and communication between these components.
- the bus system 33 includes a power bus, a control bus, and a status signal bus in addition to the data bus. However, for the sake of clarity, various buses are marked as the bus system 33 in FIG. 3.
- the memory 32 may be a volatile memory or a non-volatile memory, and may also include both volatile and non-volatile memories.
- the non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a ferromagnetic random access memory (FRAM), a flash memory, a magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM); the magnetic surface memory may be a disk memory or a tape memory.
- the volatile memory may be random access memory (RAM, Random Access Memory), which is used as an external cache.
- many forms of RAM are available, such as static random access memory (SRAM), synchronous static random access memory (SSRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDRSDRAM), enhanced synchronous dynamic random access memory (ESDRAM), and synchronous link dynamic random access memory (SLDRAM).
- the method disclosed in the foregoing embodiment of the present application may be applied to the processor 31 or implemented by the processor 31.
- the processor 31 may be an integrated circuit chip and has a signal processing capability. In the implementation process, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 31 or an instruction in the form of software.
- the aforementioned processor 31 may be a general-purpose processor, a DSP, or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like.
- the processor 31 may implement or execute various methods, steps, and logic block diagrams disclosed in the embodiments of the present application.
- a general-purpose processor may be a microprocessor or any conventional processor.
- the software module may be located in a storage medium, and the storage medium is located in the memory 32; the processor 31 reads the information in the memory 32 and completes the steps of the foregoing method in combination with its hardware.
- when the image processing device provided in the foregoing embodiment performs image processing, the division into the foregoing program modules is used only as an example; in practical applications, the foregoing processing may be allocated to different program modules as required, that is, the internal structure of the device may be divided into different program modules to complete all or part of the processing described above.
- the image processing apparatus and the image processing method embodiments provided by the foregoing embodiments belong to the same concept. For specific implementation processes, refer to the method embodiments, and details are not described herein again.
- an embodiment of the present application further provides a computer-readable storage medium, such as a memory 32 including a computer program, and the computer program may be executed by the processor 31 of the image processing apparatus to complete the steps of the foregoing method.
- the computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, optical disk, or CD-ROM, or various devices including one or any combination of the above memories, such as Mobile phones, computers, tablet devices, personal digital assistants, etc.
- An embodiment of the present application further provides a computer-readable storage medium having computer instructions stored thereon, which, when executed by a processor, implement the image processing method described in any one of the foregoing embodiments of the present application.
- An embodiment of the present application further provides a computer program including computer-readable instructions; when the instructions run in a device, a processor in the device executes them to implement the image processing method of any of the foregoing embodiments of the present application.
- the disclosed apparatus and method may be implemented in other ways.
- the device embodiments described above are merely illustrative.
- the division of the units is only a logical function division; in actual implementation, there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the coupling, direct coupling, or communication connections between the displayed or discussed components may be implemented through some interfaces; the indirect couplings or communication connections between devices or units may be electrical, mechanical, or in other forms.
- the units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
- each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may serve separately as one unit, or two or more units may be integrated into one unit; the above integrated unit can be implemented in the form of hardware, or in the form of hardware plus software functional units.
- the foregoing program may be stored in a computer-readable storage medium; when the program is executed, the steps of the foregoing method embodiments are performed.
- the foregoing storage medium includes: various types of media that can store program codes, such as a mobile storage device, a ROM, a RAM, a magnetic disk, or an optical disc.
- if the above-mentioned integrated unit of the present application is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
- the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the methods described in the embodiments of the present application.
- the foregoing storage medium includes: various types of media that can store program codes, such as a mobile storage device, a ROM, a RAM, a magnetic disk, or an optical disc.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Image Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Processing Or Creating Images (AREA)
Abstract
Description
Claims (19)
- An image processing method, the method comprising: obtaining a first image; performing mesh division on the first image to obtain a plurality of mesh control surfaces; determining a target object in the first image; and performing deformation processing on at least part of a limb region corresponding to the target object based on at least part of the plurality of mesh control surfaces, to generate a second image.
- The method according to claim 1, wherein determining the target object in the first image comprises: obtaining limb detection information of the target object in the first image; the limb detection information includes limb key point information and/or limb contour point information; the limb key point information includes coordinate information of limb key points; the limb contour point information includes coordinate information of limb contour points.
- The method according to claim 1 or 2, wherein performing deformation processing on at least part of the limb region corresponding to the target object based on at least part of the plurality of mesh control surfaces comprises: determining at least part of a limb region of the target object to be deformed, and obtaining first limb detection information of the at least part of the limb region; and determining a first group of mesh control surfaces corresponding to the first limb detection information, and performing deformation processing on the first group of mesh control surfaces.
- The method according to claim 3, wherein determining the first group of mesh control surfaces corresponding to the first limb detection information and performing deformation processing on the first group of mesh control surfaces comprises: determining the corresponding first group of mesh control surfaces based on first limb key point information and/or first limb contour point information included in the first limb detection information, the first group of mesh control surfaces including at least one mesh control surface; and performing deformation processing on the at least one mesh control surface, to compress or stretch at least part of the limb region corresponding to the target object, and to compress or stretch at least part of a background region outside the target object.
- The method according to claim 3 or 4, wherein the mesh control surface is a first-type mesh control surface; and determining the first group of mesh control surfaces corresponding to the first limb detection information and performing deformation processing on the first group of mesh control surfaces comprises: determining at least one first-type mesh control surface corresponding to the first limb detection information, and performing deformation processing on the at least one first-type mesh control surface based on a first deformation parameter, to compress or stretch the limb region corresponding to the target object, and to compress or stretch at least part of the background region outside the target object.
- The method according to claim 5, wherein the first-type mesh control surface includes a plurality of first-type mesh control points; and performing deformation processing on the at least one first-type mesh control surface based on the first deformation parameter comprises: moving, based on the first deformation parameter, at least part of the first-type mesh control points among the plurality of first-type mesh control points included in the first-type mesh control surface, so as to deform the first-type mesh control surface; wherein the movement of any one of the plurality of first-type mesh control points deforms the first-type mesh control surface.
- The method according to claim 3 or 4, wherein the mesh control surface is a second-type mesh control surface; and determining the first group of mesh control surfaces corresponding to the first limb detection information and performing deformation processing on the first group of mesh control surfaces comprises: determining at least one second-type mesh control surface corresponding to the first limb detection information, and performing deformation processing on the at least one second-type mesh control surface based on a second deformation parameter, to compress or stretch part of the limb region corresponding to the target object, and to compress or stretch at least part of the background region outside the target object.
- The method according to claim 7, wherein the second-type mesh control surface includes a plurality of second-type mesh control points; and performing deformation processing on the at least one second-type mesh control surface based on the second deformation parameter comprises: moving, based on the second deformation parameter, at least part of the second-type mesh control points among the plurality of second-type mesh control points included in the second-type mesh control surface, so as to deform the second-type mesh control surface; wherein the movement of any one of the plurality of second-type mesh control points deforms the region of the second-type mesh control surface corresponding to that control point.
- An image processing apparatus, the apparatus comprising an obtaining unit, a mesh dividing unit, and an image processing unit; wherein the obtaining unit is configured to obtain a first image; the mesh dividing unit is configured to perform mesh division on the first image obtained by the obtaining unit, to obtain a plurality of mesh control surfaces; and the image processing unit is configured to determine a target object in the first image obtained by the obtaining unit, and to perform deformation processing on at least part of a limb region corresponding to the target object based on at least part of the plurality of mesh control surfaces, to generate a second image.
- The apparatus according to claim 9, wherein the image processing unit is configured to obtain limb detection information of the target object in the first image; the limb detection information includes limb key point information and/or limb contour point information; the limb key point information includes coordinate information of limb key points; the limb contour point information includes coordinate information of limb contour points.
- The apparatus according to claim 9 or 10, wherein the image processing unit is configured to determine at least part of a limb region of the target object to be deformed, obtain first limb detection information of the at least part of the limb region, determine a first group of mesh control surfaces corresponding to the first limb detection information, and perform deformation processing on the first group of mesh control surfaces.
- The apparatus according to claim 11, wherein the image processing unit is configured to determine the corresponding first group of mesh control surfaces based on first limb key point information and/or first limb contour point information included in the first limb detection information, the first group of mesh control surfaces including at least one mesh control surface; and to perform deformation processing on the at least one mesh control surface, to compress or stretch at least part of the limb region corresponding to the target object, and to compress or stretch at least part of a background region outside the target object.
- The apparatus according to claim 11 or 12, wherein the mesh control surface is a first-type mesh control surface; and the image processing unit is configured to determine at least one first-type mesh control surface corresponding to the first limb detection information, and to perform deformation processing on the at least one first-type mesh control surface based on a first deformation parameter, to compress or stretch the limb region corresponding to the target object, and to compress or stretch at least part of the background region outside the target object.
- The apparatus according to claim 13, wherein the first-type mesh control surface includes a plurality of first-type mesh control points; and the image processing unit is configured to move, based on the first deformation parameter, at least part of the first-type mesh control points among the plurality of first-type mesh control points included in the first-type mesh control surface, so as to deform the first-type mesh control surface; wherein the movement of any one of the plurality of first-type mesh control points deforms the first-type mesh control surface.
- The apparatus according to claim 11 or 12, wherein the mesh control surface is a second-type mesh control surface; and the image processing unit is configured to determine at least one second-type mesh control surface corresponding to the first limb detection information, and to perform deformation processing on the at least one second-type mesh control surface based on a second deformation parameter, to compress or stretch part of the limb region corresponding to the target object, and to compress or stretch at least part of the background region outside the target object.
- The apparatus according to claim 15, wherein the second-type mesh control surface includes a plurality of second-type mesh control points; and the image processing unit is configured to move, based on the second deformation parameter, at least part of the second-type mesh control points among the plurality of second-type mesh control points included in the second-type mesh control surface, so as to deform the second-type mesh control surface; wherein the movement of any one of the plurality of second-type mesh control points deforms the region of the second-type mesh control surface corresponding to that control point.
- A computer-readable storage medium having computer instructions stored thereon, wherein the instructions, when executed by a processor, implement the steps of the image processing method according to any one of claims 1 to 8.
- An image processing apparatus, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the image processing method according to any one of claims 1 to 8.
- A computer program, comprising computer instructions, wherein the computer instructions, when run in a processor of a device, implement the method according to any one of claims 1 to 8.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020207030087A KR20200133778A (ko) | 2018-07-25 | 2019-06-21 | 이미지 처리 방법, 장치 및 컴퓨터 저장 매체 |
JP2021506036A JP7138769B2 (ja) | 2018-07-25 | 2019-06-21 | 画像処理方法、装置及びコンピュータ記憶媒体 |
SG11202010404WA SG11202010404WA (en) | 2018-07-25 | 2019-06-21 | Image processing method and apparatus, and computer storage medium |
US17/117,703 US20210097268A1 (en) | 2018-07-25 | 2020-12-10 | Image processing method and apparatus, and computer storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810829498.0A CN110766607A (zh) | 2018-07-25 | 2018-07-25 | 一种图像处理方法、装置和计算机存储介质 |
CN201810829498.0 | 2018-07-25 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/117,703 Continuation US20210097268A1 (en) | 2018-07-25 | 2020-12-10 | Image processing method and apparatus, and computer storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020019915A1 true WO2020019915A1 (zh) | 2020-01-30 |
Family
ID=69181302
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/092353 WO2020019915A1 (zh) | 2018-07-25 | 2019-06-21 | 一种图像处理方法、装置和计算机存储介质 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20210097268A1 (zh) |
JP (1) | JP7138769B2 (zh) |
KR (1) | KR20200133778A (zh) |
CN (1) | CN110766607A (zh) |
SG (1) | SG11202010404WA (zh) |
WO (1) | WO2020019915A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112651931A (zh) * | 2020-12-15 | 2021-04-13 | 浙江大华技术股份有限公司 | 建筑物变形监测方法、装置和计算机设备 |
US11896769B2 (en) | 2020-06-17 | 2024-02-13 | Affirm Medical Technologies Ii, Llc | Universal respiratory detector |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111145084B (zh) * | 2019-12-25 | 2023-06-16 | 北京市商汤科技开发有限公司 | 图像处理方法及装置、图像处理设备及存储介质 |
CN114913549B (zh) * | 2022-05-25 | 2023-07-07 | 北京百度网讯科技有限公司 | 图像处理方法、装置、设备及介质 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102496140A (zh) * | 2011-12-06 | 2012-06-13 | 中国科学院自动化研究所 | 一种基于多层嵌套笼体的实时交互式图像变形方法 |
CN102541488A (zh) * | 2010-12-09 | 2012-07-04 | 深圳华强游戏软件有限公司 | 一种实现投影屏幕的无缝对齐的图像处理方法及系统 |
CN104537608A (zh) * | 2014-12-31 | 2015-04-22 | 深圳市中兴移动通信有限公司 | 一种图像处理的方法及其装置 |
CN105989576A (zh) * | 2015-03-18 | 2016-10-05 | 卡西欧计算机株式会社 | 校正图像的装置及其方法 |
US20170330375A1 (en) * | 2015-02-04 | 2017-11-16 | Huawei Technologies Co., Ltd. | Data Processing Method and Apparatus |
CN107590708A (zh) * | 2016-07-07 | 2018-01-16 | 梁如愿 | 一种生成用户特定体形模型的方法和装置 |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3463125B2 (ja) * | 1993-12-24 | 2003-11-05 | カシオ計算機株式会社 | 画像変形方法およびその装置 |
JP2010176588A (ja) * | 2009-01-30 | 2010-08-12 | Sony Ericsson Mobilecommunications Japan Inc | 端末装置、画像処理方法及びプログラム |
JP5240795B2 (ja) * | 2010-04-30 | 2013-07-17 | オムロン株式会社 | 画像変形装置、電子機器、画像変形方法、および画像変形プログラム |
JP2011259053A (ja) | 2010-06-07 | 2011-12-22 | Olympus Imaging Corp | 画像処理装置および画像処理方法 |
CN104978707A (zh) * | 2014-04-03 | 2015-10-14 | 陈鹏飞 | 基于轮廓线的图像变形技术 |
US9576385B2 (en) * | 2015-04-02 | 2017-02-21 | Sbitany Group LLC | System and method for virtual modification of body parts |
US10140764B2 (en) * | 2016-11-10 | 2018-11-27 | Adobe Systems Incorporated | Generating efficient, stylized mesh deformations using a plurality of input meshes |
CN107592708A (zh) | 2017-10-25 | 2018-01-16 | 成都塞普奇科技有限公司 | 一种led用电源电路 |
- 2018-07-25 CN CN201810829498.0A patent/CN110766607A/zh active Pending
- 2019-06-21 WO PCT/CN2019/092353 patent/WO2020019915A1/zh active Application Filing
- 2019-06-21 SG SG11202010404WA patent/SG11202010404WA/en unknown
- 2019-06-21 KR KR1020207030087A patent/KR20200133778A/ko active IP Right Grant
- 2019-06-21 JP JP2021506036A patent/JP7138769B2/ja active Active
- 2020-12-10 US US17/117,703 patent/US20210097268A1/en not_active Abandoned
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102541488A (zh) * | 2010-12-09 | 2012-07-04 | 深圳华强游戏软件有限公司 | 一种实现投影屏幕的无缝对齐的图像处理方法及系统 |
CN102496140A (zh) * | 2011-12-06 | 2012-06-13 | 中国科学院自动化研究所 | 一种基于多层嵌套笼体的实时交互式图像变形方法 |
CN104537608A (zh) * | 2014-12-31 | 2015-04-22 | 深圳市中兴移动通信有限公司 | 一种图像处理的方法及其装置 |
US20170330375A1 (en) * | 2015-02-04 | 2017-11-16 | Huawei Technologies Co., Ltd. | Data Processing Method and Apparatus |
CN105989576A (zh) * | 2015-03-18 | 2016-10-05 | 卡西欧计算机株式会社 | 校正图像的装置及其方法 |
CN107590708A (zh) * | 2016-07-07 | 2018-01-16 | 梁如愿 | 一种生成用户特定体形模型的方法和装置 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11896769B2 (en) | 2020-06-17 | 2024-02-13 | Affirm Medical Technologies Ii, Llc | Universal respiratory detector |
CN112651931A (zh) * | 2020-12-15 | 2021-04-13 | 浙江大华技术股份有限公司 | 建筑物变形监测方法、装置和计算机设备 |
CN112651931B (zh) * | 2020-12-15 | 2024-04-26 | 浙江大华技术股份有限公司 | 建筑物变形监测方法、装置和计算机设备 |
Also Published As
Publication number | Publication date |
---|---|
JP2021518964A (ja) | 2021-08-05 |
KR20200133778A (ko) | 2020-11-30 |
SG11202010404WA (en) | 2020-11-27 |
US20210097268A1 (en) | 2021-04-01 |
JP7138769B2 (ja) | 2022-09-16 |
CN110766607A (zh) | 2020-02-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020019915A1 (zh) | 一种图像处理方法、装置和计算机存储介质 | |
WO2019227917A1 (zh) | 一种图像处理方法、装置和计算机存储介质 | |
US11244449B2 (en) | Image processing methods and apparatuses | |
JP2018200690A (ja) | 情報処理方法及び情報処理装置 | |
WO2020057667A1 (zh) | 一种图像处理方法、装置和计算机存储介质 | |
US11501407B2 (en) | Method and apparatus for image processing, and computer storage medium | |
WO2021208151A1 (zh) | 一种模型压缩方法、图像处理方法以及装置 | |
CN109584327B (zh) | 人脸老化模拟方法、装置以及设备 | |
CN108830200A (zh) | 一种图像处理方法、装置和计算机存储介质 | |
CN108830784A (zh) | 一种图像处理方法、装置和计算机存储介质 | |
CN110060348B (zh) | 人脸图像整形方法及装置 | |
JP7475287B2 (ja) | ポイントクラウドデータの処理方法、装置、電子機器、記憶媒体及びコンピュータプログラム | |
JP2011107877A5 (zh) | ||
CN108765274A (zh) | 一种图像处理方法、装置和计算机存储介质 | |
WO2022033513A1 (zh) | 目标分割方法、装置、计算机可读存储介质及计算机设备 | |
US11769310B2 (en) | Combining three-dimensional morphable models | |
CN110060287B (zh) | 人脸图像鼻部整形方法及装置 | |
CN116824090A (zh) | 一种曲面重建方法及装置 | |
CN110766603B (zh) | 一种图像处理方法、装置和计算机存储介质 | |
CN111145204B (zh) | 一种边数可设定的对轮廓曲线的多边形简化方法 | |
CN110111240A (zh) | 一种基于强结构的图像处理方法、装置和存储介质 | |
CN114638923A (zh) | 一种特征对齐方法及装置 | |
CN118134977A (zh) | 基于nurbs的医疗图像体数据配准方法、系统及计算机介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19839856 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20207030087 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2021506036 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19839856 Country of ref document: EP Kind code of ref document: A1 |