WO2020037924A1 - Method and apparatus for generating animation

Method and apparatus for generating animation

Info

Publication number
WO2020037924A1
Authority
WO
WIPO (PCT)
Prior art keywords
animation
human hand
virtual object
generating
video
Prior art date
Application number
PCT/CN2018/123648
Other languages
English (en)
Chinese (zh)
Inventor
杨辉
王沈韬
胡博远
Original Assignee
北京微播视界科技有限公司
Priority date
Filing date
Publication date
Application filed by 北京微播视界科技有限公司
Publication of WO2020037924A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person

Definitions

  • the present disclosure relates to the field of image processing, and in particular, to a method, a device, an electronic device, and a computer-readable storage medium for generating an animation.
  • Smart terminals can be used to listen to music, play games, chat online, take pictures, and so on.
  • Terminal cameras have exceeded 10 million pixels, offering high resolution comparable to that of professional cameras.
  • Additional functions can be obtained by downloading an application (APP) from the network side.
  • For example, an APP can implement functions such as dark-light detection, a beauty camera, and super pixels.
  • The beauty functions of smart terminals usually include effects such as skin tone adjustment, skin smoothing, eye enlargement, and face slimming, and can apply the same degree of beautification to all faces identified in the image.
  • There are also APPs that implement simple animation functions, such as displaying an animation at a fixed position on the screen.
  • However, current animation functions can only display an animation at a fixed position and time; changing the display or playback properties of the animation requires modifying the animation itself directly, so control over the animation is very inflexible.
  • An embodiment of the present disclosure provides a method for generating an animation, including: obtaining a virtual object; obtaining a video collected by an image sensor; identifying a human hand in the video to obtain human hand information; acquiring animation configuration parameters according to the human hand information; and generating an animation related to the virtual object according to the animation configuration parameters.
  • Further, the step of identifying the human hand in the video and obtaining the human hand information includes: identifying the human hand in the video; recording the motion trajectory of the human hand; analyzing the motion trajectory and identifying the motion trajectory as a predetermined action; and taking the action as the human hand information.
  • obtaining the animation configuration parameters according to the human hand information includes: obtaining the animation configuration parameters according to the type of the virtual object and the action, and the animation configuration parameters are used for rendering of the animation.
  • the type of the virtual object is an animation type, and an animation configuration parameter corresponding to the action is obtained, and the animation configuration parameter is used to control a rendering position of the virtual object and / or an attribute of the animation of the virtual object itself.
  • the type of the virtual object is a model type, and an animation configuration parameter corresponding to the action is obtained, and the animation configuration parameter is used to control a rendering position of the virtual object and / or an animation node of the virtual object.
  • Further, obtaining the animation configuration parameters according to the human hand information includes: reading an animation behavior configuration file, where the animation behavior configuration file stores animation configuration parameters associated with the human hand information; and obtaining the animation configuration parameters from the animation behavior configuration file according to the human hand information.
  • the method further includes: obtaining an animation behavior configuration file corresponding to the type according to the type of the virtual object.
  • the method further includes: setting an animation behavior configuration file, and setting animation configuration parameters in the configuration file.
  • Further, the step of identifying the human hand in the video to obtain the human hand information includes: identifying the human hand in the video to obtain recognition result data; performing smoothing and coordinate normalization on the recognition result data to obtain a processed human hand; and obtaining the human hand information according to the processed human hand.
  • Further, generating the animation related to the virtual object according to the animation configuration parameters includes: calculating a rendering position and animation attributes of the virtual object according to the animation configuration parameters, and generating an animation of the virtual object.
  • An embodiment of the present disclosure provides an animation generating device, including: a virtual object acquisition module for acquiring a virtual object; a video acquisition module for acquiring a video collected by an image sensor; a human hand recognition module for identifying the human hand in the video to obtain human hand information; an animation configuration parameter acquisition module for obtaining animation configuration parameters according to the human hand information; and an animation generation module for generating an animation related to the virtual object according to the animation configuration parameters.
  • Further, the human hand recognition module includes: a first recognition module for identifying the human hand in the video; a recording module for recording the movement trajectory of the human hand; an analysis and recognition module for analyzing the movement trajectory and identifying the motion trajectory as a predetermined action; and a human hand information output module configured to use the action as the human hand information.
  • the animation configuration parameter obtaining module is configured to obtain the animation configuration parameters according to the type of the virtual object and the action, and the animation configuration parameters are used for rendering of the animation.
  • the type of the virtual object is an animation type, and an animation configuration parameter corresponding to the action is obtained, and the animation configuration parameter is used to control a rendering position of the virtual object and / or an attribute of the animation of the virtual object itself.
  • the type of the virtual object is a model type, and an animation configuration parameter corresponding to the action is obtained, and the animation configuration parameter is used to control a rendering position of the virtual object and / or an animation node of the virtual object.
  • Further, the animation configuration parameter acquisition module further includes: a reading module for reading an animation behavior configuration file, where the animation behavior configuration file stores animation configuration parameters associated with the human hand information; and a first acquisition module for acquiring the animation configuration parameters from the animation behavior configuration file according to the human hand information.
  • the animation configuration parameter acquisition module further includes: a second acquisition module, configured to acquire an animation behavior configuration file corresponding to the type according to the type of the virtual object.
  • the animation configuration parameter acquisition module further includes: an animation behavior configuration file setting module, configured to set an animation behavior configuration file, and set an animation configuration parameter in the configuration file.
  • Further, the human hand recognition module includes: a recognition result data acquisition module for recognizing the human hand in the video to obtain recognition result data; a recognition result data processing module for performing smoothing and coordinate normalization on the recognition result data to obtain a processed human hand; and a first human hand information acquisition module configured to obtain the human hand information according to the processed human hand.
  • the animation generating module is configured to calculate a rendering position and animation attributes of the virtual object according to the animation configuration parameters, and generate an animation of the virtual object.
  • An embodiment of the present disclosure provides an electronic device including: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform any one of the foregoing animation generating methods.
  • An embodiment of the present disclosure provides a non-transitory computer-readable storage medium storing computer instructions, and the computer instructions are used to cause a computer to execute any one of the animation generating methods of the foregoing first aspect.
  • Embodiments of the present disclosure provide a method, an apparatus, an electronic device, and a computer-readable storage medium for generating an animation.
  • The animation generating method includes: obtaining a virtual object; obtaining a video collected by an image sensor; identifying a human hand in the video to obtain human hand information; obtaining an animation configuration parameter according to the human hand information; and generating an animation related to the virtual object according to the animation configuration parameter.
  • the embodiment of the present disclosure solves the technical problem of inflexible animation control in the prior art by adopting this technical solution.
  • FIG. 1 is a flowchart of Embodiment 1 of a method for generating an animation according to an embodiment of the present disclosure
  • FIG. 2a is a flowchart of step S104 in the second embodiment of the animation generating method according to the embodiment of the present disclosure
  • FIGS. 2b-2g are schematic diagrams of specific examples of an animation generating method according to an embodiment of the present disclosure;
  • FIG. 3 is a schematic structural diagram of a first embodiment of an animation generating apparatus according to an embodiment of the present disclosure
  • FIG. 4 is a schematic structural diagram of an animation configuration parameter obtaining module in the second embodiment of an animation generating device provided by an embodiment of the present disclosure.
  • FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic structural diagram of an animation generating terminal according to an embodiment of the present disclosure.
  • FIG. 1 is a flowchart of Embodiment 1 of an animation generation method provided by an embodiment of the present disclosure.
  • The animation generation method provided by this embodiment may be executed by an animation generation device, and the animation generation device may be implemented as software or as a combination of software and hardware. The animation generation device may be integrated in a certain device in an image processing system, such as an image processing server or an image processing terminal device. As shown in FIG. 1, the method includes the following steps:
  • Step S101 Obtain a virtual object.
  • The virtual objects here can be any 2D or 3D virtual objects, typically virtual weapons such as virtual swords and virtual pistols, virtual stationery such as virtual pens and books, virtual wearable items such as virtual gloves and virtual rings, or virtual rainbows, virtual clouds, and the like; no specific limitation is made here, and any virtual object can be used in the present disclosure.
  • The virtual object can have a type. For example, an animation-type virtual object has an animation effect of its own; a typical example is an animation-type virtual cloud that contains multiple sequence frames and presents an animation effect changing from a white cloud to a dark cloud to a rain cloud. Alternatively, the virtual object may be a model type, such as the above-mentioned virtual sword, which has no animation effect of its own but can form an animation through movement and other methods.
  • When the virtual object is obtained, its type can be obtained as well. The type can be read directly from the attribute data of the virtual object, or the ID of the virtual object can be obtained and the type looked up from a correspondence between IDs and types. The way the type is obtained is not limited, and any method can be applied to the present disclosure.
  • Step S102 Obtain a video collected by the image sensor
  • Image sensors refer to various devices that can capture images; typical image sensors include video cameras, still cameras, and camera modules.
  • the image sensor may be a camera on a mobile terminal, such as a front or rear camera on a smart phone, and the video image collected by the camera may be directly displayed on the display screen of the mobile phone.
  • The video captured by the image sensor is obtained in this step so that the images can be further recognized in the next step.
  • Step S103 Identify the human hand in the video and obtain human hand information
  • color features can be used to locate the position of the human hand, segment the human hand from the background, and perform feature extraction and recognition on the found and segmented human hand image.
  • Specifically, the image sensor is used to obtain the color information of the image and the position information of that color information; the color information is compared with preset human hand color information; first color information whose error relative to the preset human hand color information is less than a first threshold is identified; and the position information of the first color information is used to form the outline of the human hand.
  • Preferably, the RGB color space image data collected by the image sensor can be mapped into the HSV color space, and the information in the HSV color space is used as the comparison information.
  • In the HSV color space, the hue value is used as the color information, because hue is least affected by brightness and can therefore filter out brightness interference well.
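  • As a concrete illustration of this hue-based segmentation (a minimal sketch, not code from the patent; the OpenCV usage and all threshold values are assumptions chosen for demonstration):

```python
# Locate a candidate hand region by comparing hue against a preset skin range
# in HSV space; the largest skin-colored contour is taken as the hand outline.
import cv2
import numpy as np

def hand_contour_from_frame(frame_bgr, hue_range=(0, 20), sat_min=40, val_min=60):
    """Return the largest skin-colored contour in a BGR video frame, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = (hue_range[0], sat_min, val_min)
    upper = (hue_range[1], 255, 255)
    mask = cv2.inRange(hsv, lower, upper)                      # hue comparison
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return max(contours, key=cv2.contourArea)                  # assume hand = largest blob
```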
  • On this basis, the positions of key points are accurately located on the image. Because a key point occupies only a very small area of the image (usually only a few to tens of pixels), the image region covered by the feature corresponding to a key point is also usually very limited and local.
  • There are currently two feature extraction approaches: (1) one-dimensional range image feature extraction along the direction perpendicular to the contour; (2) two-dimensional range image feature extraction over a square neighborhood of the key point.
  • There are also many implementation methods, such as ASM and AAM methods, statistical energy function methods, regression analysis methods, deep learning methods, classifier methods, batch extraction methods, and so on.
  • These implementation methods differ in the number of key points used, accuracy, and speed, and are suitable for different application scenarios. Similarly, the same principle can be used to identify other target objects.
  • After the human hand is identified, a polygon is drawn just outside the outer contour of the human hand as the external detection frame of the human hand.
  • The external detection frame is used in place of the human hand to describe the position of the human hand.
  • Taking a rectangle as an example, the width of the human hand at its widest point and the length at its longest point can be calculated, and the external detection frame of the human hand is defined by this width and length.
  • One way to calculate the longest and widest points of the human hand is to extract the boundary key points of the human hand, take the difference between the X coordinates of the two key points farthest apart in the X direction as the width of the rectangle, and take the difference between the Y coordinates of the two key points farthest apart in the Y direction as the length of the rectangle.
  • If the hand forms a fist, the external detection frame can be set to the smallest circle covering the fist.
  • Specifically, the center point of the external detection frame may be used as the position of the hand, where the center point of the external detection frame is the intersection of the diagonals of the external detection frame; for the fist, the center of the covering circle may likewise be used as the position of the hand.
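  • A minimal sketch of deriving the external detection frame and the hand position from boundary key points (illustrative helper names; key points assumed to be pixel coordinates):

```python
# Compute a bounding rectangle and its center (the "hand position") from hand
# key points given as (x, y) pixel coordinates.
from typing import List, Tuple

def detection_frame(keypoints: List[Tuple[float, float]]):
    """Return ((x_min, y_min, width, length), center) for a set of hand key points."""
    xs = [p[0] for p in keypoints]
    ys = [p[1] for p in keypoints]
    width = max(xs) - min(xs)    # X difference of the two points farthest apart in X
    length = max(ys) - min(ys)   # Y difference of the two points farthest apart in Y
    center = ((min(xs) + max(xs)) / 2.0, (min(ys) + max(ys)) / 2.0)  # diagonal intersection
    return (min(xs), min(ys), width, length), center
```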
  • the hand information also includes the detected key points of the hand.
  • the number of the key points can be set.
  • the key points of the hand contour and joint key points can be set.
  • Each key point has a fixed number.
  • For example, the joint key points of the thumb, index finger, middle finger, ring finger, and little finger are numbered in order from top to bottom. In a typical application there are 22 key points, and each key point has a fixed number.
  • The human hand information may also include human hand actions: the movement trajectory of the human hand is recorded, and the trajectory is analyzed to identify the action. Specifically, recording the motion trajectory of a human hand first requires tracking the movement of the human hand.
  • The tracking of the human hand trajectory is to track the position change of a gesture in a sequence of pictures and obtain the position information of the human hand over continuous time.
  • The quality of the hand motion tracking directly affects the quality of hand motion recognition.
  • Commonly used motion tracking methods include particle filtering algorithms, Mean-shift algorithms, Kalman filtering methods, and bone tracking methods.
  • particle filtering-based target tracking is a random search process that obtains the posterior probability estimates of the target distribution in a random motion model.
  • Particle filtering is mainly divided into two steps: preliminary sampling and resampling.
  • the initial sampling is to randomly place particles in an image, then calculate the similarity between each particle and the tracking target feature, and then obtain the weight of each particle.
  • the resampling phase mainly changes the distribution of particles based on the weight of the particles in the preliminary sampling.
  • the process of preliminary sampling and resampling is repeated until the target is tracked.
  • Mean-shift is a non-parametric probability density gradient estimation algorithm.
  • The basic idea of using the Mean-shift algorithm to track a human hand is: first, build a model of the human hand, i.e., compute in feature space the probability of the feature values of the pixels belonging to the hand in the initial image frame; then, build the model of the current frame by computing the feature-value probability of all pixels in the region where the hand may appear; finally, obtain the mean shift of the hand from the similarity between the initial hand model and the current-frame hand model. Based on the convergence of the mean-shift algorithm, the mean shift of the hand is computed iteratively so as to converge to the position of the hand in the current image frame.
  • Kalman filtering uses a series of mathematical equations to predict the state of a linear system, now or in the future.
  • Kalman filtering mainly observes the position of the hand in a series of image frames and then predicts the position of the hand in the next frame. Because the Kalman filter is based on a posterior probability estimate at each time interval, it can achieve good tracking results under Gaussian-distributed noise. The method can remove noise and still achieve a good hand tracking effect under gesture deformation.
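  • A compact illustration of such a predict/update cycle for the hand center (a generic constant-velocity Kalman filter sketch, not code from the patent; all parameter values are assumptions):

```python
# Track the hand center between frames with a constant-velocity Kalman filter:
# predict the next position, then correct it with the measured position.
import numpy as np

class HandKalmanTracker:
    def __init__(self, dt=1.0, process_var=1e-2, meas_var=1.0):
        self.x = np.zeros((4, 1))                        # state: [x, y, vx, vy]
        self.P = np.eye(4) * 500.0                       # large initial uncertainty
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)   # constant-velocity motion model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)   # only position is observed
        self.Q = np.eye(4) * process_var                 # process noise
        self.R = np.eye(2) * meas_var                    # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2].ravel()                        # predicted hand position

    def update(self, measured_xy):
        z = np.asarray(measured_xy, dtype=float).reshape(2, 1)
        innovation = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + K @ innovation
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2].ravel()                        # corrected hand position
```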
  • Kinect can provide complete bone tracking for one or two users, that is, tracking of 20 joint points over the whole body. Skeletal point tracking is divided into active tracking and passive tracking. In active tracking mode, two possible users in the field of view are selected for tracking; in passive tracking mode, bone points of up to 6 users can be tracked, with the remaining four users tracked for position only.
  • The principle of Kinect bone tracking is to classify, through machine learning, the 32 parts of the human body based on the acquired depth image and thereby find the bone joint point information of each part.
  • a human hand motion trajectory tracking method based on bone tracking can be preferentially used in the present disclosure.
  • To record the trajectory, the movement distance of the key points of the human hand between two consecutive frames of images can be calculated.
  • If the movement distance is smaller than a preset threshold, the positions of the key points are considered unchanged.
  • If the positions of the key points remain unchanged for several consecutive frames, the position of the hand is recognized as the starting point or end point of a human hand action.
  • For example, the threshold can be set to 1 cm.
  • If the movement distance stays below the threshold for several consecutive frames, the current position of the human hand is used as the starting point or end point of the human hand action.
  • After that, the positions of the key points in the image frames between the starting point and the end point can be calculated.
  • The trajectory formed by the key points across all of these image frames is the movement trajectory of the human hand.
  • the human hand motion is output as human hand information to the next step.
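  • A small sketch of this trajectory segmentation (illustrative only; the stillness threshold, frame count, and helper names are assumptions, and units depend on how key point coordinates are expressed):

```python
# Split per-frame hand key points into action trajectories: when the key points
# barely move for several consecutive frames, the hand is considered at rest,
# which marks the end of one action and the start of the next.
import math

def mean_displacement(prev_pts, cur_pts):
    """Average movement of corresponding key points between two frames."""
    dists = [math.hypot(c[0] - p[0], c[1] - p[1]) for p, c in zip(prev_pts, cur_pts)]
    return sum(dists) / len(dists)

def segment_trajectories(frames_keypoints, still_thresh=1.0, still_frames=5):
    """Yield one trajectory (a list of per-frame hand centers) per detected action."""
    trajectory, still_count = [], 0
    for prev_pts, cur_pts in zip(frames_keypoints, frames_keypoints[1:]):
        center = (sum(p[0] for p in cur_pts) / len(cur_pts),
                  sum(p[1] for p in cur_pts) / len(cur_pts))
        if mean_displacement(prev_pts, cur_pts) < still_thresh:
            still_count += 1
            if still_count >= still_frames and trajectory:
                yield trajectory              # hand came to rest: the action is complete
                trajectory = []
        else:
            still_count = 0
            trajectory.append(center)         # hand is moving: extend the trajectory
    if trajectory:
        yield trajectory
```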
  • the human hand information may further include an angle of the human hand, and the angle may include an angle of the human hand on the shooting plane, or an angle in space, or a combination of the two.
  • The angle can be described using the external detection frame.
  • For the angle in the shooting plane, the offset angle of the external detection frame relative to the X axis can be calculated.
  • For the angle in space, the degree of scaling of the external detection frame can be detected,
  • and the rotation angle in space is determined according to a correspondence between the scaling and the angle. For example, when the palm faces the camera, the detected external detection frame has the largest area; as the palm rotates, the area of the external detection frame gradually decreases.
  • The relationship between the area reduction ratio and the rotation angle can be set in advance, so that the rotation angle of the palm can be calculated from the area of the external detection frame.
  • The way of determining the angle is not limited to this example in the embodiment; any method that can determine the angle of the human hand can be applied to the present disclosure, the purpose here being only to obtain the angle of the human hand.
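  • One possible sketch of the area-based angle estimation (the arccos mapping is an assumption for illustration; the patent only requires that a correspondence between area reduction and angle be set in advance):

```python
# Estimate the palm's out-of-plane rotation angle from how much the external
# detection frame's area has shrunk relative to the palm-facing-camera area.
import math

def palm_rotation_deg(current_area, full_palm_area):
    """Assumed preset correspondence: the projected area shrinks roughly as
    cos(angle), so the rotation angle is recovered with arccos of the ratio."""
    ratio = max(0.0, min(1.0, current_area / full_palm_area))
    return math.degrees(math.acos(ratio))

# Example: when the frame area has dropped to half of the full palm area,
# the palm is estimated to have rotated about 60 degrees away from the camera.
print(round(palm_rotation_deg(0.5, 1.0)))   # -> 60
```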
  • In one embodiment, before the human hand information is calculated, the method further includes the steps of smoothing and coordinate-normalizing the recognition data of the human hand.
  • The smoothing may be an averaging of the images over multiple video frames, with the averaged image used as the recognized image. For the human hand in the present disclosure, the hand is identified in multiple frames of images, the hand images are then weighted-averaged, the averaged hand image is used as the identified hand, and the hand information is calculated from it. In this way, the hand image can still be determined, and the hand information calculated, even when some frames are lost or the hand identified in some frames is not very clear.
  • The coordinate normalization is used to unify the coordinate ranges.
  • For example, the coordinate system of the human hand image collected by the camera and that of the human hand image displayed on the display screen are not the same,
  • so a mapping relationship is required to map the larger coordinate system onto the smaller coordinate system. After smoothing and normalization, the human hand information is obtained.
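  • A brief sketch of these two processing steps (weights, frame count, and function names are illustrative assumptions):

```python
# Smooth hand key points by weighted-averaging them over the last few frames,
# then normalize pixel coordinates into [0, 1] so they can be mapped onto any
# display coordinate system.
def smooth_keypoints(recent_frames, weights=(0.2, 0.3, 0.5)):
    """Weighted average of each key point across the most recent frames."""
    assert len(recent_frames) == len(weights)
    n_points = len(recent_frames[0])
    smoothed = []
    for i in range(n_points):
        x = sum(w * frame[i][0] for w, frame in zip(weights, recent_frames))
        y = sum(w * frame[i][1] for w, frame in zip(weights, recent_frames))
        smoothed.append((x, y))
    return smoothed

def normalize_keypoints(keypoints, frame_width, frame_height):
    """Map pixel coordinates into the unit square (coordinate normalization)."""
    return [(x / frame_width, y / frame_height) for x, y in keypoints]
```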
  • Step S104 Obtain animation configuration parameters according to the human hand information.
  • the animation configuration parameters may include the rendering position of the virtual object and the attributes of the animation.
  • The rendering position of the virtual object may be related to the position of the human hand. For example, the position of the human hand is given by the center point of the hand's external detection frame obtained in step S103, and the rendering position of the virtual object may coincide with it directly,
  • i.e., the center of the virtual object coincides with the center point of the external detection frame. Alternatively, the rendering position of the virtual object may keep a certain positional relationship with the center point; for example, the virtual object may be rendered one length unit from the center point in the positive Y direction,
  • where the length unit may be a custom length unit, for example one length unit equals 1 cm, which is not limited herein.
  • In short, a certain relationship is used to determine the rendering position of the virtual object, that is, where the virtual object is displayed.
  • Alternatively, 3 points can be set on the virtual object, corresponding to three key points on the human hand; through this correspondence, the rendering position of the virtual object can be determined.
  • The attributes of the animation define how the animation is displayed, such as the size of the animation, the rotation direction, the playback behavior, the nodes of the animation, and so on.
  • In general, any attribute that can control the form, playback, trajectory, and the like of the animation can be applied to the present disclosure.
  • The above are just typical animation attributes listed for ease of understanding. To facilitate understanding, the following specifically describes examples of the association between typical animation attributes and human hand information.
  • In step S103, the positions of the left and right hands can be obtained, and the actions of the left and right hands can be recorded.
  • For the size of the animation, the distance between the left and right hands can be calculated, and the animation size parameter corresponding to that distance found.
  • For the rotation of the animation, the human hand information obtained in step S103 includes angle information of the human hand;
  • when it is recognized that the angle of the human hand changes, the rotation direction and rotation angle of the animation corresponding to that angle can be found according to the angle of the human hand.
  • For the playback behavior, human hand actions can be identified in step S103:
  • when a counterclockwise rotation of the human hand is recognized, the animation is controlled to play forward; when a clockwise rotation of the human hand is recognized, the animation is controlled to play backward; or, when it is recognized that a human hand is sliding horizontally, the playback speed of the animation can be controlled according to the sliding speed of the human hand.
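  • A toy mapping from hand information to animation configuration parameters along these lines (field names and scale factors are illustrative assumptions, not the patent's parameter set):

```python
# Build animation configuration parameters from hand information: two-hand
# distance drives the animation size, the hand angle drives the rotation, and
# the horizontal sliding speed drives the playback speed.
import math

def animation_params(left_center, right_center, hand_angle_deg, slide_speed):
    distance = math.hypot(right_center[0] - left_center[0],
                          right_center[1] - left_center[1])
    return {
        "size": distance * 0.01,                    # larger hand spacing -> bigger animation
        "rotation_deg": hand_angle_deg,             # animation follows the hand angle
        "playback_speed": 1.0 + 0.1 * slide_speed,  # faster slide -> faster playback
    }

# Example usage with made-up values.
print(animation_params((100, 200), (400, 220), 30.0, 3.0))
```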
  • the type of the virtual object needs to be determined first, and the type of the virtual object may be obtained together when the virtual object is obtained in step S101.
  • the animation configuration parameters may be obtained according to the type of the virtual object and the action, and the animation configuration parameters are used for rendering of the animation. If the virtual object type is a model type, obtain animation configuration parameters corresponding to the action, and the animation configuration parameters are used to control a rendering position of the virtual object and / or an animation node of the virtual object; if the virtual object is The type is an animation type, and an animation configuration parameter corresponding to the action is obtained, and the animation configuration parameter is used to control a rendering position of the virtual object and / or a property of the animation of the virtual object itself.
  • In this way, each piece of human hand information can correspond to a unique animation configuration parameter,
  • so that the recognition and matching process can be simplified to some extent.
  • For an animation-type virtual object, human hand actions can be used to control the playback of the animation.
  • For example, when it is recognized that a human hand makes a horizontal slide, the playback speed of the animation can be controlled according to the hand's sliding speed.
  • For a model-type virtual object, the animation nodes of the virtual object can be determined according to the movement trajectory of the human hand, the movement trajectory of the virtual object can be generated through the animation nodes, and the movement animation of the virtual object can be generated according to that trajectory.
  • For example, if the virtual object is a bird and the trajectory of the human hand in the air is recognized as a figure "8", an animation of the bird flying in the air along the figure-"8" trajectory can be generated.
  • Of course, the animation generation methods of the two different types of virtual objects can be combined, for example a virtual cloud with its own animation effect, which changes from a white cloud to a dark cloud to a rain cloud to form an animation.
  • On the one hand, the playback speed of this animation can be set by a lateral sliding action; on the other hand, an arc-shaped movement of the human hand can move the cloud along the arc to form a moving animation.
  • Together this produces a floating animation in which the cloud constantly changes its form.
  • the animation configuration parameters obtained in this step may further include rendering parameters, which define how the animation and / or human hands are rendered.
  • the rendering parameters will be specifically described below, and will not be repeated here.
  • this step is only for explaining the process and manner of obtaining animation configuration parameters, and does not constitute a limitation on the present disclosure.
  • the core of this step is to obtain the animation configuration parameters corresponding to the information based on the human hand information identified in step S103. As for what kind of human hand information corresponds to what kind of animation parameters, this disclosure does not limit it.
  • Step S105 Generate an animation related to the virtual object according to the animation configuration parameters.
  • In this step, the virtual object obtained in step S101 is processed according to the animation configuration parameters obtained in step S104 to generate the animation related to the virtual object.
  • For an animation-type virtual object, if the obtained animation configuration parameter is, for example, the playback speed, the animation of the virtual object is played at that speed.
  • For a model-type virtual object, the positions of key points of the human hand are generally used. For example, the weighted average position of three key points of the human hand can be selected as an animation node; in each frame of the human hand's motion, the virtual object is positioned at the animation node, forming a moving animation of the virtual object. To increase diversity, Bezier curves can also be used to generate the animation effect of the virtual object: the animation nodes are used as the control points of a Bezier curve and substituted into the Bezier curve formula to calculate the animation curve of the virtual object; the calculation process is not repeated here.
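  • A short sketch of the Bezier-based path generation mentioned above (general de Casteljau evaluation; the node values and sample count are illustrative):

```python
# Turn animation nodes (control points) into a smooth per-frame path for the
# virtual object by sampling a Bezier curve.
def bezier_point(nodes, t):
    """Evaluate the Bezier curve defined by control points `nodes` at t in [0, 1]."""
    pts = list(nodes)
    while len(pts) > 1:                       # de Casteljau: repeated linear interpolation
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

def animation_curve(nodes, samples=60):
    """Sample the curve into one rendering position per animation frame."""
    return [bezier_point(nodes, i / (samples - 1)) for i in range(samples)]

# Example: animation nodes taken from weighted-average hand key point positions.
nodes = [(0.1, 0.8), (0.3, 0.2), (0.7, 0.2), (0.9, 0.8)]
positions = animation_curve(nodes)            # one (x, y) per frame of the moving animation
```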
  • It can be understood that the above is only an example of the process and manner of generating the animation and does not constitute a limitation on the present disclosure.
  • the core of this step is to control or generate the animation according to the animation configuration parameters obtained in step S104.
  • any specific generation method in the art may be used, and the disclosure does not specifically limit it.
  • Embodiments of the present disclosure provide a method, an apparatus, an electronic device, and a computer-readable storage medium for generating an animation.
  • The animation generating method includes: obtaining a virtual object; obtaining a video collected by an image sensor; identifying a human hand in the video to obtain human hand information; obtaining an animation configuration parameter according to the human hand information; and generating an animation related to the virtual object according to the animation configuration parameter.
  • the embodiment of the present disclosure determines the configuration parameters of the animation by acquiring information of the human hand, so that the effect of the animation is related to the human hand, and solves the technical problem of inflexible configuration or generation of the animation in the prior art.
  • In the second embodiment, step S104 of obtaining animation configuration parameters according to the human hand information includes:
  • Step S201 Read an animation behavior configuration file
  • Step S202 Acquire the animation configuration parameters from the animation behavior configuration file according to the human hand information.
  • the animation behavior configuration file may include rendering parameters in addition to the animation configuration parameters to make the mixed image of the animation and the human hand more diverse.
  • The animation behavior configuration file stores a correspondence between human hand information and animation configuration parameters. The correspondence may be a direct correspondence, such as a mapping from a human hand action to a corresponding playback speed; or the correspondence may be indirect, for example the animation behavior configuration file stores a playback-speed function associated with a human hand action, and the playback speed is calculated from the direction or speed of the hand movement.
  • There is no specific limitation on the correspondence between human hand information and animation configuration parameters; in short, any manner in which the animation configuration parameters can be obtained from the information stored in the animation behavior configuration file, based on the human hand information, can be applied to the present disclosure.
  • In one implementation, the save path of the sequence frames of the virtual object is stored in the animation behavior configuration file, and the name or ID of the virtual object obtained in step S101 is used to obtain the sequence frames of the virtual object through the configuration file.
  • All of the sequence frames together form the complete virtual object.
  • Specifically, the parameter "range": [idx_start, idx_end] can be set in the animation behavior configuration file, which means that the consecutive files from idx_start to idx_end in the file list constitute the sequence frames; or the parameter "idx": [idx0, idx1, ...] can be set, which means that the files idx0, idx1, ... in the file list form the sequence frames in that order.
  • In addition, the animation behavior configuration file further includes association parameters for the position of the virtual object, which describe which key points of the human hand the sequence frames are associated with. By default, all key points may be associated, or the sequence frames may be set to follow several specific key points.
  • Besides the association parameters, the animation behavior configuration file also includes the positional relationship parameter "point" between the virtual object and the key points. "point" may include two groups of association points, where "point0" denotes the first group and "point1" the second group. For each group of association points, "point" describes the position of an anchor point in the camera coordinate system, obtained as a weighted average of several key points and their weights; the "idx" field describes the numbers of those key points.
  • It can be understood that "point" may include any number of groups of association points and is not limited to two groups.
  • In the two-group case, two anchor points are obtained, and the virtual object moves following the positions of the two anchor points.
  • the coordinates of each key point can be obtained from the human hand information obtained in step S103.
  • The animation behavior configuration file may further include the relationship between the scaling of the virtual object and the key points, where the parameters "scaleX" and "scaleY" are used to describe the scaling requirements in the x and y directions, respectively.
  • two parameters "start_idx” and “end_idx” are included, which correspond to two key points.
  • the distance between these two key points is multiplied by the value of "factor” to obtain the intensity of the scaling.
  • the factor is a preset value and can be any value. For scaling, if there is only a set of associated points "point0" in "position”, then the x direction is the actual horizontal right direction; the y direction is the actual vertical downward direction; both "scaleX” and “scaleY” take effect.
  • the original object's original aspect ratio is scaled according to the existing parameter. If “point0" and “point1" are both in “position”, the x direction is the vector direction obtained by point1.anchor-point0.anchor; the y direction is determined by rotating the x direction 90 degrees clockwise; “scaleX” is invalid and the x direction The scaling is determined by the anchor point following. “scaleY” will take effect. If “scaleY” is missing, the original aspect ratio of the virtual object will be scaled.
  • the animation behavior configuration file may further include a rotation parameter "rotationtype" of the virtual object, which takes effect only when there is only "point0" in "position”, which may include two values of 0 and 1, where: 0: No rotation is required; 1: Rotation is required based on the relevant angle value of the keypoint.
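  • To make the above parameters concrete, here is an illustrative configuration assembled from the fields described above ("range", "point0"/"point1", "scaleX"/"scaleY", "rotationtype"); the exact schema, the key point indices, and all values are assumptions for demonstration, not the patent's actual file format:

```python
# A hypothetical animation behavior configuration expressed as a Python dict
# (JSON-like). Only "point0" is given, so "rotationtype" and both scale
# parameters take effect per the description above.
import json

animation_behavior_config = {
    "sequence_frames": {
        "path": "assets/cloud/",                      # hypothetical save path of the frames
        "range": [0, 23],                             # files idx 0..23 form the sequence frames
    },
    "position": {
        "point0": {"idx": [5, 9, 13], "weight": [0.4, 0.3, 0.3]},  # anchor = weighted key points
    },
    "scaleX": {"start_idx": 5, "end_idx": 17, "factor": 2.0},
    "scaleY": {"start_idx": 0, "end_idx": 9, "factor": 2.0},
    "rotationtype": 1,                                # 1: rotate with the key point angle
}

print(json.dumps(animation_behavior_config, indent=2))
```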
  • the animation behavior configuration file may further include a rendering blending mode.
  • Rendering blending refers to mixing two colors together. Specifically, in the present disclosure, it refers to mixing the color at a pixel position with the color that is about to be drawn there, so as to achieve a special effect, and the rendering blending mode refers to the method used for the blending. Generally speaking, blending calculates a mixed color from the source color and the destination color; in practical applications, the result of multiplying the source color by a source factor and the result of multiplying the destination color by a destination factor are combined to obtain the mixed color.
  • For example, if the combining operation is addition, then BLEND_color = SRC_color * SRC_factor + DST_color * DST_factor, where 0 ≤ SRC_factor ≤ 1 and 0 ≤ DST_factor ≤ 1.
  • Assume the four components of the source color (referring to the red, green, blue, and alpha values) are (Rs, Gs, Bs, As),
  • the four components of the destination color are (Rd, Gd, Bd, Ad),
  • the source factor is (Sr, Sg, Sb, Sa),
  • and the destination factor is (Dr, Dg, Db, Da).
  • Then the new color produced by the blend can be expressed as: (Rs * Sr + Rd * Dr, Gs * Sg + Gd * Dg, Bs * Sb + Bd * Db, As * Sa + Ad * Da), where the alpha value represents transparency and 0 ≤ alpha ≤ 1.
  • The above blending method is only an example; in practical applications, the blending method and the combining operation can be defined or selected as needed. The operation may be addition, subtraction, multiplication, division, taking the larger of the two, taking the smaller of the two, or a logical operation (AND, OR, XOR, etc.).
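  • A minimal sketch of the additive blend above, using the common source-alpha factor choice (the factor choice is illustrative; other blend modes pick different factors or operators):

```python
# Blend a source RGBA color onto a destination RGBA color:
# mixed = SRC * SRC_factor + DST * DST_factor, with SRC_factor = source alpha
# and DST_factor = 1 - source alpha for every component.
def blend_pixel(src_rgba, dst_rgba):
    src_a = src_rgba[3]
    src_factor = (src_a,) * 4
    dst_factor = (1.0 - src_a,) * 4
    return tuple(s * sf + d * df
                 for s, sf, d, df in zip(src_rgba, src_factor, dst_rgba, dst_factor))

# Example: half-transparent red drawn over an opaque yellow pixel.
print(blend_pixel((1.0, 0.0, 0.0, 0.5), (1.0, 1.0, 0.0, 1.0)))  # -> (1.0, 0.5, 0.0, 0.75)
```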
  • the animation behavior configuration file may further include a rendering order.
  • The rendering order has two levels. The first is the rendering order among the sequence frames of the virtual objects;
  • this order may be defined using a parameter "zorder", where a smaller value indicates a higher rendering priority. The second level is the rendering order between the virtual object and the human hand;
  • this order can be determined in various ways. Typically, a method similar to "zorder" can be used, or it can be set directly whether the human hand is rendered first or the virtual object is rendered first.
  • a depth test may also be used to determine the order of rendering.
  • Specifically, the depth test sets up a depth buffer corresponding to the color buffer: the depth buffer stores the depth information of pixels, and the color buffer stores the color information of pixels.
  • When deciding whether to draw the surface of an object, the depth value of the surface's pixel is first compared with the value stored in the depth buffer; if it is greater than or equal to the value in the depth buffer, this part is discarded; otherwise, the depth value and color value of this pixel are used to update the depth buffer and the color buffer, respectively.
  • This process is called depth testing.
  • Before a scene is drawn, the depth buffer is initialized to 1, which represents the maximum depth value; depth values range within [0, 1], where a smaller value means closer to the observer and a larger value means farther from the observer.
  • Depth writing works together with the depth test: generally, if the depth test is enabled and its result may update the value of the depth buffer, depth writing needs to be enabled so that the depth buffer value can be updated.
  • The following example illustrates the drawing process with the depth test and depth writing enabled. Assume two color blocks, red and yellow, are to be drawn; in the rendering queue the red block comes first and the yellow block second, the red block has a depth value of 0.5, the yellow block has a depth value of 0.2, and the depth test comparison function is DF_LEQUAL. When the red block is drawn, 0.5 is written into the depth buffer and red into the color buffer; when the yellow block is then rendered, the comparison function finds 0.2 ≤ 0.5, so the test passes, the depth buffer is updated to 0.2, and the color buffer is updated to yellow.
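  • A toy z-buffer sketch of this behavior (illustrative only; it mirrors the DF_LEQUAL comparison from the example above):

```python
# Per-pixel depth test with depth writing: the depth buffer starts at 1.0
# (farthest); a fragment is drawn only if its depth passes depth <= buffer.
import numpy as np

def render_with_depth_test(width, height, fragments):
    """fragments: list of (x, y, depth, rgb) with depth in [0, 1], 0 = nearest."""
    depth_buffer = np.ones((height, width), dtype=float)     # initialized to max depth 1.0
    color_buffer = np.zeros((height, width, 3), dtype=float)
    for x, y, depth, color in fragments:
        if depth <= depth_buffer[y, x]:      # depth test passes (DF_LEQUAL-style)
            depth_buffer[y, x] = depth       # depth write updates the depth buffer
            color_buffer[y, x] = color       # the color buffer takes the fragment color
        # otherwise the fragment is discarded
    return color_buffer

# The example above: red (depth 0.5) drawn first, then yellow (depth 0.2)
# at the same pixel; yellow passes the test and ends up in the color buffer.
out = render_with_depth_test(4, 4, [(1, 1, 0.5, (1, 0, 0)), (1, 1, 0.2, (1, 1, 0))])
print(out[1, 1])   # -> [1. 1. 0.]
```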
  • Optionally, before step S201, the method may further include step S2001: obtaining an animation behavior configuration file corresponding to the type of the virtual object, according to the type of the virtual object.
  • the types of virtual objects can be classified, and different animation behavior configuration files can be obtained for different types of virtual objects, so that it is more efficient to read the animation configuration parameters in the next step.
  • step S2002 may be further included: setting an animation behavior configuration file, and setting animation configuration parameters in the configuration file.
  • the animation configuration parameters of the animation behavior configuration file may be configured, where the animation configuration parameters may further include rendering parameters.
  • Refer to FIGS. 2b-2g for a specific example of the animation generating method disclosed in the present disclosure.
  • FIG. 2b shows a video frame of the video collected by the image sensor in the initial state: no human hand action is detected, so no virtual object appears.
  • In the two image frames shown in FIGS. 2c and 2d, a circular motion of the human hand is detected.
  • This triggers the virtual object, a light trajectory, and an animation of the rotating light trajectory is generated, as shown in FIGS. 2e-2g.
  • FIG. 3 is a schematic structural diagram of a first embodiment of an animation generating device 30 according to an embodiment of the present disclosure.
  • The device includes: a virtual object acquisition module 31, a video acquisition module 32, a human hand recognition module 33, an animation configuration parameter acquisition module 34, and an animation generation module 35. Among them:
  • a virtual object acquisition module 31 configured to acquire a virtual object
  • a video acquisition module 32 configured to acquire a video collected by an image sensor
  • a human hand recognition module 33 configured to identify a human hand in the video and obtain human hand information
  • An animation configuration parameter acquisition module 34 configured to acquire animation configuration parameters according to the human hand information
  • the animation generating module 35 is configured to generate an animation related to the virtual object according to the animation configuration parameter.
  • the human hand recognition module 33 includes:
  • a first identification module configured to identify a human hand in the video
  • a recording module for recording the movement track of a human hand
  • An analysis and recognition module configured to analyze the motion trajectory and identify the motion trajectory as a predetermined action
  • a human hand information output module is configured to use the action as human hand information.
  • the type of the virtual object is an animation type, and an animation configuration parameter corresponding to the action is obtained, and the animation configuration parameter is used to control a rendering position of the virtual object and / or an attribute of the animation of the virtual object itself.
  • the type of the virtual object is a model type, and an animation configuration parameter corresponding to the action is obtained, and the animation configuration parameter is used to control a rendering position of the virtual object and / or an animation node of the virtual object.
  • the human hand recognition module 33 includes:
  • a recognition result data acquisition module configured to recognize a human hand in the video, and obtain recognition result data
  • a recognition result data processing module, configured to perform smoothing and coordinate normalization on the recognition result data to obtain a processed human hand; and
  • the first human hand information acquisition module is configured to obtain the human hand information according to the processed human hand.
  • the apparatus shown in FIG. 3 can execute the method in the embodiment shown in FIG. 1.
  • The animation configuration parameter acquisition module 34 further includes: a reading module 41 for reading an animation behavior configuration file, where the animation behavior configuration file stores animation configuration parameters associated with the human hand information; and a first acquisition module 42 configured to acquire the animation configuration parameters from the animation behavior configuration file according to the human hand information.
  • The animation configuration parameter acquisition module 34 may further include: a second acquisition module 43, configured to obtain an animation behavior configuration file corresponding to the type of the virtual object according to the type of the virtual object.
  • The animation configuration parameter acquisition module 34 may further include: an animation behavior configuration file setting module 44, configured to set an animation behavior configuration file and set animation configuration parameters in the configuration file.
  • the device in the foregoing second embodiment may execute the method in the embodiment shown in FIG. 2.
  • For the parts of this embodiment that are not described in detail, reference may be made to the related description of the embodiment shown in FIG. 2.
  • For the implementation process and technical effects of this technical solution, refer to the description of the embodiment shown in FIG. 2; details are not repeated here.
  • FIG. 5 is a hardware block diagram illustrating an electronic device according to an embodiment of the present disclosure. As shown in FIG. 5, the electronic device 50 according to an embodiment of the present disclosure includes a memory 51 and a processor 52.
  • the memory 51 is configured to store non-transitory computer-readable instructions.
  • the memory 51 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and / or non-volatile memory.
  • the volatile memory may include, for example, a random access memory (RAM) and / or a cache memory.
  • the non-volatile memory may include, for example, a read-only memory (ROM), a hard disk, a flash memory, and the like.
  • the processor 52 may be a central processing unit (CPU) or other form of processing unit having data processing capabilities and / or instruction execution capabilities, and may control other components in the electronic device 50 to perform desired functions.
  • The processor 52 is configured to run the computer-readable instructions stored in the memory 51, so that the electronic device 50 executes all or part of the steps of the animation generating method of the foregoing embodiments of the present disclosure.
  • This embodiment may also include well-known structures such as a communication bus and an interface; these well-known structures should also be included in the protection scope of the present disclosure.
  • FIG. 6 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure.
  • a computer-readable storage medium 60 stores non-transitory computer-readable instructions 61 thereon.
  • When the non-transitory computer-readable instructions 61 are executed by a processor, all or part of the steps of the animation generating method of the foregoing embodiments of the present disclosure are performed.
  • The computer-readable storage medium 60 includes, but is not limited to, optical storage media (for example, CD-ROM and DVD), magneto-optical storage media (for example, MO), magnetic storage media (for example, magnetic tape or removable hard disk), non-volatile rewritable memory media (for example, memory card), and media with built-in ROM (for example, ROM cartridge).
  • FIG. 7 is a schematic diagram illustrating a hardware structure of a terminal device according to an embodiment of the present disclosure. As shown in FIG. 7, the animation generating terminal 70 includes the foregoing embodiment of the animation generating device.
  • The terminal device may be implemented in various forms. The terminal device in the present disclosure may include, but is not limited to, mobile terminal devices such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), navigation devices, vehicle-mounted terminal devices, vehicle-mounted display terminals, and vehicle-mounted electronic rear-view mirrors, as well as fixed terminal devices such as digital TVs and desktop computers.
  • the terminal may further include other components.
  • As shown in FIG. 7, the animation generating terminal 70 may include a power supply unit 71, a wireless communication unit 72, an A/V (audio/video) input unit 73, a user input unit 74, a sensing unit 75, an interface unit 76, a controller 77, an output unit 78, a storage unit 79, and so on.
  • FIG. 7 illustrates a terminal having various components, but it should be understood that it is not required to implement all the illustrated components, and more or fewer components may be implemented instead.
  • the wireless communication unit 72 allows radio communication between the terminal 70 and a wireless communication system or network.
  • the A / V input unit 73 is used to receive audio or video signals.
  • the user input unit 74 may generate key input data according to a command input by the user to control various operations of the terminal device.
  • The sensing unit 75 detects the current state of the terminal 70, the position of the terminal 70, the presence or absence of a user's touch input to the terminal 70, the orientation of the terminal 70, the acceleration or deceleration movement and direction of the terminal 70, and the like, and generates a command or signal for controlling the operation of the terminal 70.
  • the interface unit 76 functions as an interface through which at least one external device can be connected to the terminal 70.
  • the output unit 78 is configured to provide an output signal in a visual, audio, and / or tactile manner.
  • the storage unit 79 may store software programs and the like for processing and control operations performed by the controller 77, or may temporarily store data that has been output or is to be output.
  • the storage unit 79 may include at least one type of storage medium.
  • the terminal 70 can cooperate with a network storage device that performs a storage function of the storage unit 79 through a network connection.
  • the controller 77 generally controls the overall operation of the terminal device.
  • the controller 77 may include a multimedia module for reproducing or playing back multimedia data.
  • the controller 77 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as characters or images.
  • the power supply unit 71 receives external power or internal power under the control of the controller 77 and provides appropriate power required to operate each element and component.
  • Various embodiments of the animation generation method proposed by the present disclosure may be implemented using, for example, computer software, hardware, or any combination thereof, in a computer-readable medium.
  • for a hardware implementation, various embodiments of the animation generation method proposed by the present disclosure may be implemented by using at least one of an application specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 77.
  • for a software implementation, various embodiments of the animation generation method proposed by the present disclosure may be implemented with a separate software module that allows at least one function or operation to be performed.
  • the software code may be implemented by a software application (or program) written in any suitable programming language, and the software code may be stored in the storage unit 79 and executed by the controller 77; a minimal illustrative sketch of such a module is given after this list.
  • an "or” used in an enumeration of items beginning with “at least one” indicates a separate enumeration such that, for example, an "at least one of A, B or C” enumeration means A or B or C, or AB or AC or BC, or ABC (ie A and B and C).
  • the word "exemplary” does not mean that the described example is preferred or better than other examples.
  • each component or each step can be decomposed and/or recombined.
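  • As an illustration of the software implementation mentioned above, the following minimal sketch shows how the animation generation routine could be packaged as a separate software module whose code is stored in the storage unit 79 and executed by the controller 77. It is a sketch only, written in Python for readability; the module and function names (AnimationModule, generate) and the content of the hand information and configuration parameters are assumptions introduced here for illustration and are not part of the disclosure.

    # animation_module.py -- illustrative sketch only; all names are assumed.
    class AnimationModule:
        """A separate software module whose code could be stored in the
        storage unit 79 and executed by the controller 77."""

        def __init__(self, virtual_object):
            self.virtual_object = virtual_object  # the virtual object to be animated

        def generate(self, hand_info):
            # Derive animation configuration parameters from the human hand
            # information and return a description of the animation to render.
            config = {
                "anchor": hand_info.get("position", (0, 0)),
                "scale": hand_info.get("scale", 1.0),
            }
            return {"object": self.virtual_object, "config": config}

    # Example use: the controller would feed per-frame hand information into the module.
    module = AnimationModule(virtual_object="heart_sticker")
    print(module.generate({"position": (120, 240), "scale": 1.5}))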

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Architecture (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to an animation generation method and apparatus, an electronic device, and a computer-readable storage medium. The animation generation method comprises: acquiring a virtual object (S101); acquiring a video collected by an image sensor (S102); identifying a human hand in the video and obtaining human hand information (S103); acquiring an animation configuration parameter on the basis of the human hand information (S104); and generating an animation related to the virtual object on the basis of the animation configuration parameter (S105). The method solves the technical problem in the prior art that control over animation generation is inflexible.
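To make the flow of steps S101 to S105 easier to follow, the sketch below walks through the same pipeline in Python. It is a non-authoritative illustration under stated assumptions: OpenCV (cv2) is assumed to be available and is used only as a convenient video source, while detect_hand, hand_to_config, and animate are hypothetical placeholders standing in for the hand recognition, parameter mapping, and animation components, whose internals are not specified here.

    import cv2  # OpenCV, assumed available; used here only to read video frames

    def detect_hand(frame):
        # Placeholder for step S103: recognize the human hand in the frame and
        # return human hand information (e.g. key points, position, gesture).
        return {"keypoints": [], "position": (0, 0), "gesture": "none"}

    def hand_to_config(hand_info):
        # Placeholder for step S104: map the human hand information to
        # animation configuration parameters (e.g. anchor point, scale, speed).
        return {"anchor": hand_info["position"], "scale": 1.0, "speed": 1.0}

    def animate(virtual_object, config, frame):
        # Placeholder for step S105: draw one animation frame of the virtual
        # object onto the video frame according to the configuration parameters.
        return frame

    virtual_object = "sticker.png"   # step S101: acquire a virtual object
    capture = cv2.VideoCapture(0)    # step S102: acquire video from the image sensor
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        hand_info = detect_hand(frame)                    # step S103
        config = hand_to_config(hand_info)                # step S104
        output = animate(virtual_object, config, frame)   # step S105
        cv2.imshow("animation", output)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    capture.release()
    cv2.destroyAllWindows()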
PCT/CN2018/123648 2018-08-24 2018-12-25 Animation generation method and apparatus WO2020037924A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810975738.8 2018-08-24
CN201810975738.8A CN110858409A (zh) 2018-08-24 2018-08-24 Animation generation method and device

Publications (1)

Publication Number Publication Date
WO2020037924A1 true WO2020037924A1 (fr) 2020-02-27

Family

ID=69592192

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/123648 WO2020037924A1 (fr) 2018-08-24 2018-12-25 Animation generation method and apparatus

Country Status (2)

Country Link
CN (1) CN110858409A (fr)
WO (1) WO2020037924A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111369687B (zh) * 2020-03-04 2021-03-30 腾讯科技(深圳)有限公司 Method and device for synthesizing an action sequence of a virtual object
CN113163135B (zh) * 2021-04-25 2022-12-16 北京字跳网络技术有限公司 Method, apparatus, device and medium for adding animation to a video
CN114187656A (zh) * 2021-11-30 2022-03-15 上海商汤智能科技有限公司 Action detection method, apparatus, device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105045373A (zh) * 2015-03-26 2015-11-11 济南大学 Three-dimensional gesture interaction method oriented to the expression of the user's mental model
CN105389005A (zh) * 2015-10-27 2016-03-09 武汉体育学院 Three-dimensional interactive display method for 24-form Tai Chi Chuan
CN106709464A (zh) * 2016-12-29 2017-05-24 华中师范大学 Method for capturing and integrating body and hand movements of the Tujia brocade weaving technique
CN107024989A (zh) * 2017-03-24 2017-08-08 中北大学 Sand painting production method based on Leap Motion gesture recognition
CN107995097A (zh) * 2017-11-22 2018-05-04 吴东辉 Method and system for an interactive AR red envelope

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104123007B (zh) * 2014-07-29 2017-01-11 电子科技大学 Multi-dimensionally weighted 3D dynamic gesture recognition method
CN104474701A (zh) * 2014-11-20 2015-04-01 杭州电子科技大学 Human-computer interaction system for promoting Tai Chi Chuan exercise
CN107707839A (zh) * 2017-09-11 2018-02-16 广东欧珀移动通信有限公司 Image processing method and device
CN107911614B (zh) * 2017-12-25 2019-09-27 腾讯数码(天津)有限公司 Gesture-based image capturing method, apparatus and storage medium
CN112860168B (zh) * 2018-02-08 2022-08-02 北京市商汤科技开发有限公司 Method and apparatus for generating a special effect program file package and generating special effects, and electronic device

Also Published As

Publication number Publication date
CN110858409A (zh) 2020-03-03

Similar Documents

Publication Publication Date Title
WO2020037923A1 (fr) Image synthesis method and apparatus
US9514570B2 (en) Augmentation of tangible objects as user interface controller
CN108986016B (zh) Image beautification method and apparatus, and electronic device
US10642369B2 (en) Distinguishing between one-handed and two-handed gesture sequences in virtual, augmented, and mixed reality (xR) applications
WO2020001013A1 (fr) Image processing method and device, computer-readable storage medium, and terminal
US11176355B2 (en) Facial image processing method and apparatus, electronic device and computer readable storage medium
KR20150108888A (ko) Part and state detection for gesture recognition
WO2020019665A1 (fr) Method and apparatus for generating a three-dimensional special effect based on a human face, and electronic device
WO2020037924A1 (fr) Animation generation method and apparatus
US11282257B2 (en) Pose selection and animation of characters using video data and training techniques
WO2020019664A1 (fr) Method and apparatus for generating a deformed image based on a human face
WO2019242271A1 (fr) Image deformation method and apparatus, and electronic device
CN110069125B (zh) Method and device for controlling a virtual object
CN111199169A (zh) Image processing method and device
US10440313B2 (en) Method, system and apparatus for spatially arranging a plurality of video frames for display
US11755119B2 (en) Scene controlling method, device and electronic equipment
CN110069126B (zh) Method and device for controlling a virtual object
Akman et al. Multi-cue hand detection and tracking for a head-mounted augmented reality system
WO2020001016A1 (fr) Method and apparatus for generating an animated image, electronic device, and computer-readable storage medium
US20210158565A1 (en) Pose selection and animation of characters using video data and training techniques
CN110941327A (zh) Method and device for displaying a virtual object
CN111258413A (zh) Method and device for controlling a virtual object
CN110941974B (zh) Method and device for controlling a virtual object
CN111103967A (zh) Method and device for controlling a virtual object
CN111625101A (zh) Display control method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18930593

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28.05.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18930593

Country of ref document: EP

Kind code of ref document: A1