WO2020029554A1 - Augmented reality multi-plane model animation interaction method, apparatus, device and storage medium - Google Patents

Augmented reality multi-plane model animation interaction method, apparatus, device and storage medium

Info

Publication number
WO2020029554A1
Authority
WO
WIPO (PCT)
Prior art keywords
animation
real
plane
virtual object
planes
Prior art date
Application number
PCT/CN2019/073078
Other languages
English (en)
French (fr)
Inventor
陈怡
刘昂
Original Assignee
北京微播视界科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京微播视界科技有限公司 filed Critical 北京微播视界科技有限公司
Priority to JP2020571801A priority Critical patent/JP7337104B2/ja
Priority to GB2100236.5A priority patent/GB2590212B/en
Priority to US16/967,950 priority patent/US20210035346A1/en
Publication of WO2020029554A1 publication Critical patent/WO2020029554A1/zh


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 - Animation
    • G06T13/20 - 3D [Three Dimensional] animation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G06T7/55 - Depth or shape recovery from multiple images
    • G06T7/579 - Depth or shape recovery from multiple images from motion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/20 - Scenes; Scene-specific elements in augmented reality scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30244 - Camera pose

Definitions

  • The present disclosure relates to the technical field of augmented reality, and in particular to an augmented reality multi-plane model animation interaction method, apparatus, device, and storage medium.
  • Augmented Reality (AR), also referred to as mixed reality, is a new technology developed on the basis of computer virtual reality. It uses computer technology to extract real-world information and to overlay virtual information onto the real world, so that virtual information and real-world information coexist in the same picture or space with a realistic sensory effect.
  • AR technology has a wide range of applications in the military, scientific research, industry, medicine, gaming, education, and municipal planning. For example, in the medical field, doctors can use AR technology to precisely locate a surgical site.
  • An existing augmented reality (AR) system fuses real images with virtual animation as follows: video frames of the real environment are obtained and processed to compute the relative orientation of the environment and the camera, graphic frames of the virtual object are generated, those graphic frames are composited with the video frames of the real environment to obtain composite video frames of the augmented reality environment, and the result is written to video memory for display. However, after the animation model is placed into the real scene, the resulting virtual-object animation moves at a fixed position and has no relationship to the planes of the real scene, so the animation cannot be associated with the real scene and the user's realistic sensory experience is poor.
  • The present disclosure provides an augmented reality multi-plane model animation interaction method, and also provides an augmented reality multi-plane model animation interaction apparatus, device, and storage medium.
  • By using the planes identified in the real scene to determine the animation trajectory of the virtual object sketched by the animation model, the virtual-object animation is associated with the real scene and the realistic sensory experience of the system is enhanced.
  • An augmented reality multi-plane model animation interaction method includes: obtaining a video image of a real environment; performing calculation processing on the video image to identify multiple real planes in the real environment; placing a virtual object corresponding to the model on one of the multiple real planes; and generating, according to the identified multiple real planes, an animation trajectory of the virtual object between the multiple real planes.
  • Performing calculation processing on the video image to identify multiple real planes in the real environment includes recognizing all the planes in the video image at once, or identifying the planes in the video image one by one, or identifying the required planes according to the animation needs of the virtual object.
  • Performing calculation processing on the video image to identify multiple real planes in the real environment includes detecting plane poses and a camera pose in a world coordinate system through a SLAM algorithm.
  • Generating, according to the identified real planes, the animation trajectory of the virtual object further includes: calculating the pose of the virtual object relative to the world coordinate system from the plane pose in the world coordinate system and the pose of the virtual object in the coordinate system of the identified plane; calculating, from the camera pose in the world coordinate system, a transformation matrix H used to convert the pose of the virtual object relative to the world coordinate system into its pose relative to the camera coordinate system; generating animation trajectory data of the virtual object according to the data of the identified real planes; and drawing corresponding three-dimensional graphics according to the animation trajectory data and generating multiple virtual graphic frames to form the animation trajectory of the virtual object.
  • the animation trajectory data includes a coordinate position in a camera coordinate system, an animation curve, and a jump relationship.
  • Animation keypoints of the virtual object are generated according to the identified poses of the real planes and the jump relationship, and the animation trajectory of the virtual object is generated with a Bezier curve configuration using the animation keypoints as parameters.
  • An augmented reality multi-plane model animation interaction apparatus includes: an acquisition module, used to obtain the video image of the real environment; a recognition module, used to perform calculation processing on the video image to identify the real planes in the real environment; a placement module, used to place the virtual object corresponding to the model on one of the multiple real planes; and a generation module, used to generate, according to the identified real planes, the animation trajectory of the virtual object between the real planes.
  • The recognition module recognizing multiple real planes in the real environment includes recognizing all the planes in the video image at once, or identifying the planes in the video image one by one, or identifying the required planes according to the animation needs of the virtual object.
  • The recognition module recognizing multiple real planes in the real environment includes detecting plane poses and a camera pose in a world coordinate system through a SLAM algorithm.
  • The generation module generating, according to the identified real planes, the animation trajectory of the virtual object further includes: calculating the pose of the virtual object relative to the world coordinate system from the plane pose in the world coordinate system and the pose of the virtual object in the coordinate system of the identified plane; calculating, from the camera pose in the world coordinate system, a transformation matrix H used to convert the pose of the virtual object relative to the world coordinate system into its pose relative to the camera coordinate system; generating animation trajectory data of the virtual object according to the data of the identified real planes; and drawing corresponding three-dimensional graphics according to the animation trajectory data and generating multiple virtual graphic frames to form the animation trajectory of the virtual object.
  • the animation trajectory data includes a coordinate position in a camera coordinate system, an animation curve, and a jump relationship.
  • The generation module generates animation keypoints of the virtual object according to the identified poses of the real planes and the jump relationship, and generates the animation trajectory of the virtual object with a Bezier curve configuration using the animation keypoints as parameters.
  • An augmented reality multi-plane model animation interactive device includes a processor and a memory, and the memory stores computer-readable instructions; the processor executes the computer-readable instructions to implement any of the foregoing augmented reality multi-plane model animation interaction methods.
  • A computer-readable storage medium is used to store computer-readable instructions which, when executed by a computer, cause the computer to implement any one of the augmented reality multi-plane model animation interaction methods described above.
  • Embodiments of the present disclosure provide an augmented reality multi-plane model animation interaction method, an augmented reality multi-plane model animation interaction apparatus, an augmented reality multi-plane model animation interaction device, and a computer-readable storage medium.
  • The augmented reality multi-plane model animation interaction method includes: obtaining a video image of a real environment; performing calculation processing on the video image to identify multiple real planes in the real environment; placing a virtual object corresponding to the model on one of the multiple real planes; and generating, according to the identified real planes, an animation trajectory of the virtual object between the multiple real planes.
  • The method generates the animation trajectory of the virtual object from the identified real planes of the real environment, thereby associating the animation effect of the virtual object with the real scene and enhancing the user's realistic sensory experience.
  • FIG. 1 is a schematic flowchart of an augmented reality multi-plane model animation interaction method according to an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of an augmented reality multi-plane model animation interaction method according to another embodiment of the present disclosure;
  • FIG. 2a is an example of generating a virtual object animation according to an embodiment of the present disclosure;
  • FIG. 3 is a schematic structural diagram of an augmented reality multi-plane model animation interactive device according to an embodiment of the present disclosure
  • FIG. 4 is a schematic structural diagram of an augmented reality multi-plane model animation interactive device according to an embodiment of the present disclosure
  • FIG. 5 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of an augmented reality multi-plane model animation interactive terminal according to an embodiment of the present disclosure.
  • an embodiment of the present disclosure provides an animation interactive method of an augmented reality multi-plane model.
  • the augmented reality multi-plane model animation interaction method mainly includes the following steps:
  • Step S1 Acquire a video image of a real environment.
  • the graphics system environment is initialized first.
  • The goal of the graphics system environment initialization is to set up a drawing environment capable of supporting two-dimensional and three-dimensional graphics, including obtaining and setting the display mode, setting the display parameter list, setting the display device, creating the display surface, setting display surface parameters, and setting the viewpoint position and view plane.
  • Graphics systems generally use cameras, camcorders and other image acquisition equipment to capture real-world video images.
  • The internal parameters of the camera refer to intrinsic parameters such as the focal length and distortion of the camera; they determine the projection transformation matrix of the camera and depend on the properties of the camera itself, so for the same camera the internal parameters are constant.
  • the camera's internal parameters are obtained in advance through an independent camera calibration program. What is done here is to read this set of parameters into memory.
  • Step S2 Perform calculation processing on the obtained video frame image to identify multiple real planes in the real environment.
  • the real plane recognition can recognize all the planes in the environment at one time, one by one, or the required planes according to the animation needs of the virtual object.
  • The recognition of the real planes can adopt a variety of methods; a Simultaneous Localization and Mapping (SLAM) algorithm is used to detect the plane poses and the camera pose in the world coordinate system.
  • the pose information includes a position (three-dimensional coordinates) and a pose (rotation angles about the three axes of X, Y, and Z, respectively), which are usually represented by a pose matrix.
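The pose representation just described (a 3D position plus rotation angles about the X, Y, and Z axes, stored as a pose matrix) can be made concrete with a small sketch. The use of SciPy and of an XYZ Euler convention below are illustrative assumptions, not requirements of the disclosure.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_matrix(position_xyz, angles_xyz_deg):
    """Build a 4x4 homogeneous pose matrix from a position and rotations about X, Y, Z."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("xyz", angles_xyz_deg, degrees=True).as_matrix()
    T[:3, 3] = position_xyz
    return T

# Example: a plane 1.5 m in front of the world origin, tilted 10 degrees about X.
plane_pose_world = pose_matrix([0.0, 0.0, 1.5], [10.0, 0.0, 0.0])
```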
  • the world coordinate system is the absolute coordinate system of the system. Before the user coordinate system (ie the camera coordinate system) is established, the coordinates of all points on the screen are determined by the origin of the coordinate system.
  • In one embodiment, a method based on feature point alignment is used to detect and identify the real planes. Discrete feature points, such as SIFT, SURF, FAST, or ORB features, are extracted from the video frame images, the feature points between adjacent images are matched, the pose increment of the camera is calculated from the matched feature points, and triangulation is used to recover the three-dimensional coordinates of the feature points. Assuming that most of the extracted feature points lie in the same plane, each plane of the scene is estimated from the extracted FAST corner points by the RANSAC algorithm.
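As a rough illustration of this plane-estimation step, the sketch below fits one dominant plane to triangulated 3D feature points with a minimal RANSAC loop. It assumes the points are already available as an N x 3 NumPy array; it is not the particular estimator used by the disclosure.

```python
import numpy as np

def ransac_plane(points, iterations=200, threshold=0.01, seed=0):
    """Fit a dominant plane (unit normal n, offset d, with n.x + d = 0) to an N x 3 point array."""
    rng = np.random.default_rng(seed)
    best_inliers, best_plane = None, None
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                       # degenerate (nearly collinear) sample
            continue
        n = n / norm
        d = -n.dot(sample[0])
        inliers = np.abs(points @ n + d) < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers
```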
  • In one embodiment, a method based on image alignment is used to detect and identify the real planes. A direct alignment operation is performed on all pixels between the previous frame and the current frame of the video images, the camera pose increment between adjacent frames is solved using all the pixel information of the images, and the depth information of the pixels in the images is recovered to obtain the real planes.
  • In one embodiment, the video frame images are converted into three-dimensional point clouds and single-frame point cloud reconstruction is completed; SURF feature descriptors are used to extract features from two adjacent frames, with the Euclidean distance as the similarity measure, and a PnP solution yields the preliminary rotation matrix between the three-dimensional point clouds of the two adjacent frames; a VoxelGrid filter is used to down-sample the reconstructed point cloud of each frame, and the RANSAC algorithm is used to extract plane poses from the three-dimensional point cloud of each frame; the positions of the real planes are determined from the plane poses extracted from the point clouds of the frames.
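Open-source libraries already expose the two operations named here, voxel-grid downsampling and RANSAC plane extraction. The sketch below uses Open3D as one possible (assumed) implementation; the parameter values are arbitrary.

```python
import numpy as np
import open3d as o3d

def extract_planes(points_xyz, voxel_size=0.02, max_planes=4):
    """Downsample a point cloud and peel off up to max_planes dominant planes via RANSAC."""
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(np.asarray(points_xyz)))
    pcd = pcd.voxel_down_sample(voxel_size=voxel_size)       # VoxelGrid-style downsampling
    planes = []
    for _ in range(max_planes):
        if len(pcd.points) < 50:
            break
        model, inliers = pcd.segment_plane(distance_threshold=0.01,
                                           ransac_n=3, num_iterations=500)
        planes.append(model)                                  # [a, b, c, d] of ax + by + cz + d = 0
        pcd = pcd.select_by_index(inliers, invert=True)       # drop this plane, look for the next
    return planes
```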
  • Step S3 placing the virtual object corresponding to the model on one of the plurality of real planes.
  • the model here may be a 3D model.
  • each 3D model When each 3D model is placed in a video image, it corresponds to a virtual object.
  • The virtual object is placed on a real plane identified in step S2; which plane it is placed on is not limited by this disclosure, and it may be placed on the first identified plane or on a plane specified by the user.
  • Step S4 Generate an animation trajectory of the virtual object between the multiple real planes according to the identified multiple real planes.
  • the pose of the virtual object relative to the recognized plane in a three-dimensional plane coordinate system is usually built into the system (for example, directly on the plane origin) or specified by the user.
  • S31: Calculate the pose of the virtual object relative to the world coordinate system from the plane pose in the world coordinate system and the pose of the virtual object in the coordinate system of the identified plane;
  • S32: Calculate, from the camera pose in the world coordinate system, a transformation matrix H (view matrix) used to convert the pose of the virtual object relative to the world coordinate system into its pose relative to the camera coordinate system;
  • The imaging of an identified plane on the display image is equivalent to transforming the points of the plane from the world coordinate system to the camera coordinate system and then projecting them onto the display image to form a two-dimensional image of the plane. Therefore, the 3D virtual object corresponding to the identified plane is retrieved from the corresponding data preset by the system or specified by the user, the vertex array of the 3D virtual object is obtained, and finally the vertex coordinates in the vertex array are multiplied by the transformation matrix H to obtain the coordinates of the three-dimensional virtual object in the camera coordinate system.
  • After the corresponding camera coordinates in the camera coordinate system and the world coordinate system are obtained, the product of the projection matrix and the transformation matrix H can be solved from the simultaneous equations. Since the projection matrix depends entirely on the internal parameters of the camera, the transformation matrix H can be deduced; once all the internal and external camera parameters are known, the 3D-to-2D transformation from the camera coordinate system to the display image can be computed.
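A minimal sketch of the pose chain in S31 and S32, using 4 x 4 homogeneous matrices in NumPy: the plane pose and camera pose in the world frame and the object's pose in the plane frame are assumed to be given, and the view matrix H is taken as the inverse of the camera-to-world pose.

```python
import numpy as np

def object_pose_in_camera(T_world_plane, T_plane_object, T_world_camera):
    """Compose 4x4 homogeneous poses: plane-in-world, object-in-plane, camera-in-world."""
    T_world_object = T_world_plane @ T_plane_object    # S31: object pose in the world frame
    H = np.linalg.inv(T_world_camera)                  # S32: view matrix (world -> camera)
    return H @ T_world_object                          # object pose in the camera frame

def transform_vertices(T_camera_object, vertices):
    """Apply the camera-frame pose to the N x 3 vertex array of the 3D virtual object."""
    homogeneous = np.hstack([vertices, np.ones((len(vertices), 1))])
    return (homogeneous @ T_camera_object.T)[:, :3]
```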
  • S33 Generate animated trajectory data of the virtual object according to the recognized real plane data (including the plane pose).
  • the animation trajectory data includes the coordinate position, animation curve and jump relationship in the camera coordinate system.
  • The animation keypoints of the virtual object are generated according to the identified positions of the real planes and the jump relationship of the virtual object; alternatively, the jump relationship and the animation curve can be derived by setting the animation keypoints.
  • The jump relationship of the animation trajectory specifies, for example, which plane to jump to first and which plane to jump to next.
  • In one embodiment, a Bezier curve configuration is used to generate the animation curve of the virtual object, i.e., the animation trajectory, to achieve accurate sketching and configuration. The order of the Bezier curve equation, such as first order, second order, third order, or higher, is determined according to the animation trajectory data; the animation keypoints of the virtual object are used as the control points of the Bezier curve to create the Bezier curve equation, such as a linear, quadratic, cubic, or higher-order Bezier equation; the Bezier curve is drawn according to this equation, thereby forming the animation curve of the virtual object, i.e., the animation trajectory.
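A minimal sketch of this Bezier configuration: the animation keypoints serve as control points and the curve is sampled to give per-frame positions. The de Casteljau evaluation below works for any order (linear, quadratic, cubic, or higher); the sampling density is an arbitrary assumption. For the FIG. 2a example described next, the placement point of M and the keypoints A, B, C could be passed as the control points, or one such segment could be built per hop of the jump relationship.

```python
import numpy as np

def bezier_point(control_points, t):
    """Evaluate a Bezier curve of arbitrary order at t in [0, 1] (de Casteljau's algorithm)."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def animation_trajectory(keypoints, samples=60):
    """Sample per-frame positions along a Bezier curve whose control points are the keypoints."""
    return np.array([bezier_point(keypoints, t) for t in np.linspace(0.0, 1.0, samples)])
```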
  • FIG. 2a shows an example of the augmented reality multi-plane model animation interaction method according to an embodiment of the present disclosure.
  • four real planes are identified in step S2, which are P1, P2, P3, and P4, and the virtual object M is placed on the plane P1.
  • the user can set the key points of the animation.
  • The keypoints are A, B, and C, located on the planes P2, P3, and P4 respectively, and the jump relationship is P1 to P2 to P3 to P4; then, according to the keypoints and the jump relationship, an animation can be generated, for example by using the keypoints as control points of a Bezier curve and creating the Bezier curve equation to generate the animation curve of the virtual object.
  • the embodiment of the present disclosure provides an augmented reality multi-plane model animation interactive device 30.
  • The device can perform the steps described in the embodiment of the augmented reality multi-plane model animation interaction method above.
  • the device 30 mainly includes: an acquisition module 31, an identification module 32, a placement module 33, and a generation module 34.
  • the obtaining module 31 is configured to obtain a video image of a real environment.
  • the acquisition module is generally implemented based on a graphics system.
  • the graphics system environment is initialized.
  • The goal of the graphics system environment initialization is to set up a drawing environment that can support two-dimensional and three-dimensional graphics, including obtaining and setting the display mode, setting the display parameter list, setting the display device, creating the display surface, setting display surface parameters, and setting the viewpoint position and view plane.
  • Graphics systems generally use cameras, camcorders and other image acquisition equipment to capture real-world video images.
  • The internal parameters of the camera refer to intrinsic parameters such as the focal length and distortion of the camera; they determine the projection transformation matrix of the camera and depend on the properties of the camera itself, so for the same camera the internal parameters are constant.
  • the camera's internal parameters are obtained in advance through an independent camera calibration program. What is done here is to read this set of parameters into memory.
  • the acquisition module captures a video frame image through a camera and a video camera, and performs corresponding processing on the video frame image, such as scaling, grayscale processing, binarization, and contour extraction.
  • the identification module 32 is configured to perform calculation processing on the video frame image acquired by the acquisition module to identify a real plane in a real environment.
  • Real plane recognition can either identify all planes in the environment at once, or one by one, or identify the required planes according to the animation needs of the virtual object.
  • The recognition of the real planes can adopt a variety of methods; a Simultaneous Localization and Mapping (SLAM) algorithm is used to detect the plane poses and the camera pose in the world coordinate system.
  • the pose information includes a position (three-dimensional coordinates) and a pose (rotation angles about the three axes of X, Y, and Z, respectively), which are usually represented by a pose matrix.
  • In one embodiment, a method based on feature point alignment is used to detect and identify the real planes. Discrete feature points, such as SIFT, SURF, FAST, or ORB features, are extracted from the video frame images, the feature points between adjacent images are matched, the pose increment of the camera is calculated from the matched feature points, and triangulation is used to recover the three-dimensional coordinates of the feature points. Assuming that most of the extracted feature points lie in the same plane, each plane of the scene is estimated from the extracted FAST corner points by the RANSAC algorithm.
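As one concrete, assumed realization of this feature-point pipeline (not one prescribed by the disclosure), OpenCV provides detection, matching, relative-pose recovery, and triangulation:

```python
import cv2
import numpy as np

def pose_increment_and_points(gray1, gray2, K):
    """Estimate the camera pose increment between two grayscale frames and triangulate matches.

    K is the 3x3 intrinsic matrix. ORB features stand in for SIFT/SURF/FAST purely for illustration.
    """
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(gray1, None)
    k2, d2 = orb.detectAndCompute(gray2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, p1, p2, K, mask=mask)   # camera pose increment (R, t)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])          # projection matrix of frame 1
    P2 = K @ np.hstack([R, t])                                 # projection matrix of frame 2
    pts4d = cv2.triangulatePoints(P1, P2, p1.T, p2.T)          # homogeneous 3D feature points
    return R, t, (pts4d[:3] / pts4d[3]).T                      # N x 3 triangulated coordinates
```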
  • In one embodiment, a method based on image alignment is used to detect and identify the real planes. A direct alignment operation is performed on all pixels between the previous frame and the current frame of the video images, the camera pose increment between adjacent frames is solved using all the pixel information of the images, and the depth information of the pixels in the images is recovered to obtain the real planes.
  • In one embodiment, the video frame images are converted into three-dimensional point clouds and single-frame point cloud reconstruction is completed; SURF feature descriptors are used to extract features from two adjacent frames, with the Euclidean distance as the similarity measure, and a PnP solution yields the preliminary rotation matrix between the three-dimensional point clouds of the two adjacent frames; a VoxelGrid filter is used to down-sample the reconstructed point cloud of each frame, and the RANSAC algorithm is used to extract plane poses from the three-dimensional point cloud of each frame; the positions of the real planes are determined from the plane poses extracted from the point clouds of the frames.
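The PnP step named here (the initial rotation between the point clouds of adjacent frames, obtained from SURF matches) can likewise be sketched with OpenCV; the 3D-2D correspondences passed in below are hypothetical inputs, and the library choice is an assumption.

```python
import cv2
import numpy as np

def initial_rotation_pnp(points3d_prev, points2d_curr, K):
    """Initial rotation between adjacent frames' point clouds from 3D-2D matches (PnP + RANSAC)."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(points3d_prev, dtype=np.float64),   # 3D points from the previous frame's cloud
        np.asarray(points2d_curr, dtype=np.float64),   # matched 2D positions in the current frame
        K.astype(np.float64), distCoeffs=None)
    R, _ = cv2.Rodrigues(rvec)                         # preliminary rotation matrix
    return ok, R, tvec
```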
  • the placement module 33 is configured to place a virtual object corresponding to the model on one of the multiple real planes.
  • the model here may be a 3D model.
  • each 3D model When each 3D model is placed in a video image, it corresponds to a virtual object.
  • The virtual object is placed on a real plane identified in step S2; which plane it is placed on is not limited by this disclosure, and it may be placed on the first identified plane or on a plane specified by the user.
  • the generating module 34 is configured to generate an animation trajectory of the virtual object between the multiple real planes according to the identified multiple real planes.
  • the pose of the virtual object (3D model) relative to the recognized plane in the three-dimensional plane coordinate system is usually built in by the system (for example, directly on the plane origin) or specified by the user.
  • the specific operation steps of the generating module 34 include:
  • S31: Calculate the pose of the virtual object relative to the world coordinate system from the plane pose in the world coordinate system and the pose of the virtual object in the coordinate system of the identified plane;
  • S32: Calculate, from the camera pose in the world coordinate system, a transformation matrix H (view matrix) used to convert the pose of the virtual object relative to the world coordinate system into its pose relative to the camera coordinate system;
  • The imaging of an identified plane on the display image is equivalent to transforming the points of the plane from the world coordinate system to the camera coordinate system and then projecting them onto the display image to form a two-dimensional image of the plane. Therefore, the 3D virtual object corresponding to the identified plane is retrieved from the corresponding data preset by the system or specified by the user, the vertex array of the 3D virtual object is obtained, and finally the vertex coordinates in the vertex array are multiplied by the transformation matrix H to obtain the coordinates of the three-dimensional virtual object in the camera coordinate system.
  • After the corresponding camera coordinates in the camera coordinate system and the world coordinate system are obtained, the product of the projection matrix and the transformation matrix H can be solved from the simultaneous equations. Since the projection matrix depends entirely on the internal parameters of the camera, the transformation matrix H can be deduced; once all the internal and external camera parameters are known, the 3D-to-2D transformation from the camera coordinate system to the display image can be computed.
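To make the 3D-to-2D step concrete: once the vertices are expressed in the camera coordinate system, multiplying by the intrinsic (projection) matrix and dividing by depth gives their pixel positions. This small sketch assumes an ideal pinhole camera with no lens distortion.

```python
import numpy as np

def project_to_image(points_cam, K):
    """Project N x 3 camera-frame points to N x 2 pixel coordinates using the 3 x 3 intrinsics K."""
    uvw = points_cam @ K.T            # apply the projection (intrinsic) matrix
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide by depth
```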
  • S33 Generate animated trajectory data of the virtual object according to the recognized real plane data (including the plane pose).
  • the animation trajectory data includes the coordinate position, animation curve and jump relationship in the camera coordinate system.
  • Animation keypoints of the virtual object are generated according to the positions of the identified real planes and the jump relationship defined for the virtual object.
  • The jump relationship of the animation trajectory specifies, for example, which plane to jump to first and which plane to jump to next.
  • In one embodiment, a Bezier curve configuration is used to generate the animation curve of the virtual object, i.e., the animation trajectory, to achieve accurate sketching and configuration. The order of the Bezier curve equation, such as first order, second order, third order, or higher, is determined according to the animation trajectory data; the animation keypoints of the virtual object are used as the control points of the Bezier curve to create the Bezier curve equation, such as a linear, quadratic, cubic, or higher-order Bezier equation; the Bezier curve is drawn according to this equation, thereby forming the animation curve of the virtual object, i.e., the animation trajectory.
  • FIG. 4 is a hardware block diagram of an augmented reality multi-plane model animation interactive device according to an embodiment of the present disclosure.
  • the augmented reality multi-plane model animation interactive device 40 according to an embodiment of the present disclosure includes a memory 41 and a processor 42.
  • the memory 41 is configured to store non-transitory computer-readable instructions.
  • the memory 41 may include one or more computer program products, and the computer program product may include various forms of computer-readable storage media, such as volatile memory and / or non-volatile memory.
  • the volatile memory may include, for example, a random access memory (RAM) and / or a cache memory.
  • the non-volatile memory may include, for example, a read-only memory (ROM), a hard disk, a flash memory, and the like.
  • the processor 42 may be a central processing unit (CPU) or other form of processing unit having data processing capabilities and / or instruction execution capabilities, and may control other components in the augmented reality multi-plane model animation interactive device 40 to perform desired operations.
  • The processor 42 is configured to execute the computer-readable instructions stored in the memory 41, so that the augmented reality multi-plane model animation interactive device 40 performs all or some of the steps of the augmented reality multi-plane model animation interaction methods of the embodiments of the present disclosure described above.
  • To solve the technical problem of how to obtain a good user experience, this embodiment may also include well-known structures such as a communication bus and interfaces; these well-known structures should also be included within the protection scope of the present disclosure.
  • FIG. 5 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure.
  • a computer-readable storage medium 50 according to an embodiment of the present disclosure stores non-transitory computer-readable instructions 51 thereon.
  • When the non-transitory computer-readable instructions 51 are executed by a processor, all or some of the steps of the augmented reality multi-plane model animation interaction methods of the embodiments of the present disclosure described above are performed.
  • The above computer-readable storage media include, but are not limited to: optical storage media (for example, CD-ROM and DVD), magneto-optical storage media (for example, MO), magnetic storage media (for example, magnetic tape or removable hard disk), media with built-in rewritable non-volatile memory (for example, memory cards), and media with built-in ROM (for example, ROM cartridges).
  • FIG. 6 is a schematic diagram illustrating a hardware structure of a terminal according to an embodiment of the present disclosure.
  • the augmented reality multi-plane model animation interactive terminal 60 includes the foregoing embodiment of the augmented reality multi-plane model animation interaction device.
  • The terminal may be implemented in various forms. The terminal in the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), navigation devices, in-vehicle terminals, in-vehicle display terminals, and in-vehicle electronic rear-view mirrors, and fixed terminals such as digital TVs and desktop computers.
  • the terminal may further include other components.
  • The augmented reality multi-plane model animation interactive terminal 60 may include a power supply unit 61, a wireless communication unit 62, an A/V (audio/video) input unit 63, a user input unit 64, a sensing unit 65, an interface unit 66, a controller 67, an output unit 68, a memory 69, and the like.
  • FIG. 6 shows a terminal with various components, but it should be understood that it is not required to implement all the illustrated components, and more or fewer components may be implemented instead.
  • the wireless communication unit 62 allows radio communication between the terminal 60 and a wireless communication system or network.
  • the A / V input unit 63 is used to receive audio or video signals.
  • the user input unit 64 may generate key input data according to a command input by the user to control various operations of the terminal.
  • The sensing unit 65 detects the current status of the terminal 60, the position of the terminal 60, the presence or absence of a user's touch input to the terminal 60, the orientation of the terminal 60, the acceleration or deceleration movement and direction of the terminal 60, and the like, and generates commands or signals for controlling the operation of the terminal 60.
  • the interface unit 66 functions as an interface through which at least one external device can be connected to the terminal 60.
  • the output unit 68 is configured to provide an output signal in a visual, audio, and / or tactile manner.
  • The memory 69 may store software programs for the processing and control operations performed by the controller 67, or may temporarily store data that has been output or is to be output.
  • the memory 69 may include at least one type of storage medium.
  • the terminal 60 may cooperate with a network storage device that performs a storage function of the memory 69 through a network connection.
  • the controller 67 generally controls the overall operation of the terminal.
  • the controller 67 may include a multimedia module for reproducing or playing back multimedia data.
  • the controller 67 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
  • the power supply unit 61 receives external power or internal power under the control of the controller 67 and supplies appropriate power required to operate the various elements and components.
  • augmented reality multi-plane model animation interaction method proposed by the present disclosure may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof.
  • For hardware implementation, the various embodiments of the augmented reality multi-plane model animation interaction method proposed by the present disclosure can be implemented by using at least one of an application specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, these embodiments may be implemented in the controller 67.
  • For software implementation, the various embodiments of the augmented reality multi-plane model animation interaction method proposed by the present disclosure may be implemented with a separate software module that allows at least one function or operation to be performed.
  • the software codes may be implemented by a software application (or program) written in any suitable programming language, and the software codes may be stored in the memory 69 and executed by the controller 67.
  • an "or” used in an enumeration of items beginning with “at least one” indicates a separate enumeration such that, for example, an "at least one of A, B, or C” enumeration means A or B or C, or AB or AC or BC, or ABC (ie A and B and C).
  • the word "exemplary” does not mean that the described example is preferred or better than other examples.
  • each component or each step can be disassembled and / or recombined.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Computational Linguistics (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

An augmented reality multi-plane model animation interaction method, device, apparatus and storage medium. The method comprises: acquiring a video image of a real environment (S1); performing computation on the video image to identify multiple real planes in the real environment (S2); placing a virtual object corresponding to a model on one of the multiple real planes (S3); and generating, according to the identified multiple real planes, an animation trajectory of the virtual object between the multiple real planes (S4). The method generates the animation trajectory of the virtual object from the real planes identified in the real environment, thereby associating the animation effect of the virtual object with the real scene and enhancing the user's realistic sensory experience.

Description

Augmented reality multi-plane model animation interaction method, apparatus, device and storage medium
Cross Reference
This disclosure is based on and refers to the Chinese patent application No. 201810900487.7, filed on August 9, 2018 and entitled "Augmented reality multi-plane model animation interaction method, apparatus, device and storage medium", which is incorporated herein by reference in its entirety.
Technical Field
The present disclosure relates to the technical field of augmented reality, and in particular to an augmented reality multi-plane model animation interaction method, apparatus, device and storage medium.
Background
Augmented Reality (AR), also known as mixed reality, is a new technology developed on the basis of computer virtual reality. Using computer technology, it extracts information from the real world and superimposes virtual information onto the real world, so that virtual information and real-world information coexist in the same picture or space, producing a realistic sensory effect. AR technology has wide applications in fields such as the military, scientific research, industry, medicine, gaming, education and municipal planning. In the medical field, for example, doctors can use AR technology to precisely locate a surgical site.
In existing augmented reality (AR) systems, the process of fusing real images with virtual animation starts by acquiring video frames of the real environment; the acquired video frames are computed and processed to obtain the relative orientation of the environment and the camera, graphic frames of the virtual object are generated, the graphic frames of the virtual object are composited with the video frames of the real environment to obtain composite video frames of the augmented reality environment, and the video memory information is written for display.
However, in an augmented reality system implemented in this way, after the animation model is placed into the real scene, the virtual-object animation sketched by the animation model moves at a fixed position to produce an animation effect; this animation has no relationship with the planes of the real scene, so the association between the virtual-object animation and the real scene cannot be achieved, and the user's realistic sensory experience is poor.
Summary
In view of the above problems, the present disclosure provides an augmented reality multi-plane model animation interaction method, and also provides an augmented reality multi-plane model animation interaction device, apparatus and storage medium. By using the planes identified in the real scene to determine the animation trajectory of the virtual object sketched by the animation model, the virtual-object animation is associated with the real scene and the realistic sensory experience of the system is enhanced.
To achieve the above object, according to one aspect of the present disclosure, the following technical solution is provided:
An augmented reality multi-plane model animation interaction method comprises:
acquiring a video image of a real environment; performing computation on the video image to identify multiple real planes in the real environment; placing a virtual object corresponding to the model on one of the multiple real planes; and generating, according to the identified multiple real planes, an animation trajectory of the virtual object between the multiple real planes.
Further, performing computation on the video image to identify multiple real planes in the real environment comprises identifying all the planes in the video image at once, or identifying the planes in the video image one by one, or identifying the required planes according to the animation needs of the virtual object.
Further, performing computation on the video image to identify multiple real planes in the real environment comprises detecting plane poses and a camera pose in a world coordinate system through a SLAM algorithm.
Generating, according to the identified real planes, the animation trajectory of the virtual object further comprises:
calculating the pose of the virtual object relative to the world coordinate system from the plane pose in the world coordinate system and the pose of the virtual object in the plane coordinate system of the identified plane;
calculating, from the camera pose in the world coordinate system, a transformation matrix H used to convert the pose of the virtual object relative to the world coordinate system into the pose of the virtual object relative to the camera coordinate system;
generating animation trajectory data of the virtual object according to the data of the identified multiple real planes;
drawing corresponding three-dimensional graphics according to the animation trajectory data, and generating multiple virtual graphic frames to form the animation trajectory of the virtual object.
Further, the animation trajectory data comprises coordinate positions in the camera coordinate system, an animation curve and a jump relationship.
Further, animation keypoints of the virtual object are generated according to the identified poses of the real planes and the jump relationship, and the animation trajectory of the virtual object is generated with a Bezier curve configuration using the animation keypoints as parameters.
To achieve the above object, according to another aspect of the present disclosure, the following technical solution is provided:
An augmented reality multi-plane model animation interaction apparatus comprises:
an acquisition module configured to acquire a video image of a real environment; a recognition module configured to perform computation on the video image to identify real planes in the real environment; a placement module configured to place a virtual object corresponding to the model on one of the multiple real planes; and a generation module configured to generate, according to the identified multiple real planes, an animation trajectory of the virtual object between the multiple real planes.
Further, the recognition module identifying multiple real planes in the real environment comprises identifying all the planes in the video image at once, or identifying the planes in the video image one by one, or identifying the required planes according to the animation needs of the virtual object.
Further, the recognition module identifying multiple real planes in the real environment comprises detecting plane poses and a camera pose in a world coordinate system through a SLAM algorithm.
The generation module generating, according to the identified multiple real planes, the animation trajectory of the virtual object further comprises:
calculating the pose of the virtual object relative to the world coordinate system from the plane pose in the world coordinate system and the pose of the virtual object in the plane coordinate system of the identified plane;
calculating, from the camera pose in the world coordinate system, a transformation matrix H used to convert the pose of the virtual object relative to the world coordinate system into the pose of the virtual object relative to the camera coordinate system;
generating animation trajectory data of the virtual object according to the data of the identified multiple real planes;
drawing corresponding three-dimensional graphics according to the animation trajectory data, and generating multiple virtual graphic frames to form the animation trajectory of the virtual object.
Further, the animation trajectory data comprises coordinate positions in the camera coordinate system, an animation curve and a jump relationship.
Further, the generation module generates animation keypoints of the virtual object according to the identified poses of the real planes and the jump relationship, and generates the animation trajectory of the virtual object with a Bezier curve configuration using the animation keypoints as parameters.
To achieve the above object, according to another aspect of the present disclosure, the following technical solution is provided:
An augmented reality multi-plane model animation interaction device comprises a processor and a memory, wherein the memory stores computer-readable instructions, and the processor executes the computer-readable instructions to implement any of the augmented reality multi-plane model animation interaction methods described above.
To achieve the above object, according to another aspect of the present disclosure, the following technical solution is provided:
A computer-readable storage medium is configured to store computer-readable instructions which, when executed by a computer, cause the computer to implement any of the augmented reality multi-plane model animation interaction methods described above.
Embodiments of the present disclosure provide an augmented reality multi-plane model animation interaction method, an augmented reality multi-plane model animation interaction apparatus, an augmented reality multi-plane model animation interaction device, and a computer-readable storage medium. The augmented reality multi-plane model animation interaction method comprises: acquiring a video image of a real environment; performing computation on the video image to identify multiple real planes in the real environment; placing a virtual object corresponding to the model on one of the multiple real planes; and generating, according to the identified multiple real planes, an animation trajectory of the virtual object between the multiple real planes. The method generates the animation trajectory of the virtual object from the identified real planes of the real environment, thereby associating the animation effect of the virtual object with the real scene and enhancing the user's realistic sensory experience.
The above description is only an overview of the technical solution of the present disclosure. In order to understand the technical means of the present disclosure more clearly so that it can be implemented according to the contents of the specification, and to make the above and other objects, features and advantages of the present disclosure more apparent and understandable, preferred embodiments are described in detail below in conjunction with the accompanying drawings.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of an augmented reality multi-plane model animation interaction method according to an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of an augmented reality multi-plane model animation interaction method according to another embodiment of the present disclosure;
FIG. 2a is an example of virtual object animation generation according to an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of an augmented reality multi-plane model animation interaction apparatus according to an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of an augmented reality multi-plane model animation interaction device according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present disclosure.
FIG. 6 is a schematic structural diagram of an augmented reality multi-plane model animation interaction terminal according to an embodiment of the present disclosure.
Detailed Description
The embodiments of the present disclosure are described below through specific examples, and those skilled in the art can easily understand other advantages and effects of the present disclosure from the contents disclosed in this specification. Obviously, the described embodiments are only a part of the embodiments of the present disclosure, not all of them. The present disclosure can also be implemented or applied through other different specific embodiments, and the details in this specification can be modified or changed based on different viewpoints and applications without departing from the spirit of the present disclosure. It should be noted that, in the case of no conflict, the following embodiments and the features in the embodiments can be combined with each other. Based on the embodiments in the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.
It should be noted that various aspects of the embodiments within the scope of the appended claims are described below. It should be apparent that the aspects described herein can be embodied in a wide variety of forms, and any specific structure and/or function described herein is merely illustrative. Based on the present disclosure, those skilled in the art should understand that one aspect described herein can be implemented independently of any other aspect, and two or more of these aspects can be combined in various ways. For example, a device can be implemented and/or a method can be practiced using any number of the aspects set forth herein. In addition, such a device can be implemented and/or such a method can be practiced using other structures and/or functionality in addition to or other than one or more of the aspects set forth herein.
It should also be noted that the illustrations provided in the following embodiments only illustrate the basic concept of the present disclosure in a schematic manner. The drawings show only the components related to the present disclosure and are not drawn according to the number, shape and size of the components in actual implementation; the type, quantity and proportion of each component may be changed arbitrarily in actual implementation, and the component layout may also be more complex.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, those skilled in the art will understand that the aspects can be practiced without these specific details.
To solve the technical problem of how to enhance the user's realistic sensory experience, an embodiment of the present disclosure provides an augmented reality multi-plane model animation interaction method. As shown in FIG. 1, the augmented reality multi-plane model animation interaction method mainly comprises the following steps:
Step S1: acquiring a video image of a real environment.
First, the graphics system environment is initialized. The goal of the graphics system environment initialization is to set up a drawing environment capable of supporting two-dimensional and three-dimensional graphics, including obtaining and setting the display mode, setting the display parameter list, setting the display device, creating the display surface, setting display surface parameters, and setting the viewpoint position and view plane.
A graphics system generally uses an image acquisition device such as a camera or video camera to capture video images of the real environment. The internal parameters of a camera refer to intrinsic parameters such as its focal length and distortion; they determine the projection transformation matrix of the camera and depend on the properties of the camera itself, so for the same camera the internal parameters are constant. The internal parameters of the camera are obtained in advance through a separate camera calibration procedure; what is done here is to read this set of parameters into memory.
Video frame images are captured by the camera or video camera, and corresponding processing is performed on the video frame images, such as scaling, grayscale processing, binarization and contour extraction.
Step S2: performing computation on the acquired video frame images to identify multiple real planes in the real environment.
The real planes can be identified all at once, one by one, or only the planes required according to the animation needs of the virtual object can be identified.
Various methods can be used to identify the real planes; a Simultaneous Localization And Mapping (SLAM) algorithm is used to detect the plane poses and the camera pose in the world coordinate system. The pose information includes a position (three-dimensional coordinates) and an orientation (rotation angles about the X, Y and Z axes respectively), and is usually represented by a pose matrix. The world coordinate system is the absolute coordinate system of the system; before a user coordinate system (i.e., the camera coordinate system) is established, the coordinates of all points in the picture are determined relative to the origin of this coordinate system.
In one embodiment, a method based on feature point alignment is used to detect and identify the real planes. Discrete feature points, such as SIFT, SURF, FAST or ORB features, are extracted from the video frame images, the feature points between adjacent images are matched, the pose increment of the camera is calculated from the matched feature points, and triangulation is used to recover the three-dimensional coordinates of the feature points. Assuming that most of the extracted feature points lie in the same plane, the planes of the scene are estimated from the extracted FAST corner points by the RANSAC algorithm.
In one embodiment, a method based on image alignment is used to detect and identify the real planes. A direct alignment operation is performed on all pixels between the previous frame and the current frame of the video frame images, the camera pose increment between adjacent frames is solved using all the pixel information of the images, and the depth information of the pixels in the images is recovered to obtain the real planes.
In one embodiment, the video frame images are converted into three-dimensional point clouds and single-frame three-dimensional point cloud reconstruction is completed; SURF feature descriptors are used to extract features from two adjacent frames, with the Euclidean distance as the similarity measure, and a PnP solution yields the preliminary rotation matrix between the three-dimensional point clouds of the two adjacent frames; a VoxelGrid filter is used to downsample the reconstructed point cloud of each frame, and the RANSAC algorithm is used to extract plane poses from the three-dimensional point cloud of each frame; the positions of the real planes are determined from the plane poses extracted from the point clouds of the frames.
Step S3: placing the virtual object corresponding to the model on one of the multiple real planes.
The model here may be a 3D model; when each 3D model is placed in the video image, it corresponds to a virtual object. The virtual object is placed on a real plane identified in step S2; which plane it is placed on is not limited in the present disclosure, and it may be placed on the first identified plane, or on a plane specified by the user.
Step S4: generating, according to the identified multiple real planes, an animation trajectory of the virtual object between the multiple real planes.
The pose of the virtual object relative to the identified plane, in the three-dimensional plane coordinate system, is usually preset by the system (for example, placed directly at the plane origin) or specified by the user.
As shown in FIG. 2, the specific steps include:
S31: calculating the pose of the virtual object relative to the world coordinate system from the plane pose in the world coordinate system and the pose of the virtual object in the plane coordinate system of the identified plane;
S32: calculating, from the camera pose in the world coordinate system, a transformation matrix H (view matrix) used to convert the pose of the virtual object relative to the world coordinate system into the pose of the virtual object relative to the camera coordinate system;
The imaging of an identified plane on the display image is equivalent to transforming the points of the plane from the world coordinate system to the camera coordinate system and then projecting them onto the display image to form a two-dimensional image of the plane. Therefore, the three-dimensional virtual object corresponding to the identified plane is retrieved from the corresponding data preset by the system or specified by the user, the vertex array of the three-dimensional virtual object is obtained, and finally the vertex coordinates in the vertex array are multiplied by the transformation matrix H to obtain the coordinates of the three-dimensional virtual object in the camera coordinate system.
After the corresponding camera coordinates in the camera coordinate system and the world coordinate system are obtained, the product of the projection matrix and the transformation matrix H can be solved from the simultaneous equations. Since the projection matrix depends entirely on the internal parameters of the camera, the transformation matrix H can be deduced.
Once all the internal and external parameters of the camera have been computed, the 3D-to-2D transformation from the camera coordinate system to the display image can be realized by the corresponding calculation.
S33: generating animation trajectory data of the virtual object according to the identified real plane data (including the plane poses). The animation trajectory data includes coordinate positions in the camera coordinate system, an animation curve and a jump relationship. Animation keypoints of the virtual object are generated according to the positions of the identified real planes and the jump relationship of the virtual object; alternatively, the jump relationship and the animation curve can be generated by setting the animation keypoints.
The jump relationship of the animation trajectory specifies, for example, which plane to jump to first and which plane to jump to next.
S34: drawing corresponding three-dimensional graphics according to the animation trajectory data, storing them in a frame buffer, and generating multiple virtual graphic frames to sketch the animation trajectory of the virtual object.
In one embodiment, a Bezier curve configuration is used to generate the animation curve of the virtual object, i.e., the animation trajectory, so as to achieve accurate sketching and configuration. The order of the Bezier curve equation, such as first order, second order, third order or higher, is determined according to the animation trajectory data; the animation keypoints of the virtual object are used as the control points of the Bezier curve to create the Bezier curve equation, such as a linear, quadratic, cubic or higher-order Bezier curve equation; the Bezier curve is sketched according to the Bezier curve equation, thereby forming the animation curve of the virtual object, i.e., the animation trajectory.
For ease of understanding, FIG. 2a shows an example of the augmented reality multi-plane model animation interaction method of an embodiment of the present disclosure. As shown in FIG. 2a, four real planes P1, P2, P3 and P4 are identified in step S2, and the virtual object M is placed on plane P1. In this example, the user can set the keypoints of the animation; as shown in FIG. 2a, the keypoints are A, B and C, located on planes P2, P3 and P4 respectively, and the jump relationship is P1 to P2 to P3 to P4. Then, according to the keypoints and the jump relationship, an animation can be generated, for example by using the keypoints as control points of a Bezier curve and creating the Bezier curve equation to generate the animation curve of the virtual object.
To solve the technical problem of how to enhance the user's realistic sensory experience, an embodiment of the present disclosure provides an augmented reality multi-plane model animation interaction apparatus 30. The apparatus can perform the steps described in the above embodiment of the augmented reality multi-plane model animation interaction method. As shown in FIG. 3, the apparatus 30 mainly comprises an acquisition module 31, a recognition module 32, a placement module 33 and a generation module 34.
The acquisition module 31 is configured to acquire a video image of a real environment.
The acquisition module is generally implemented on the basis of a graphics system.
First, the graphics system environment is initialized. The goal of the graphics system environment initialization is to set up a drawing environment capable of supporting two-dimensional and three-dimensional graphics, including obtaining and setting the display mode, setting the display parameter list, setting the display device, creating the display surface, setting display surface parameters, and setting the viewpoint position and view plane.
A graphics system generally uses an image acquisition device such as a camera or video camera to capture video images of the real environment. The internal parameters of a camera refer to intrinsic parameters such as its focal length and distortion; they determine the projection transformation matrix of the camera and depend on the properties of the camera itself, so for the same camera the internal parameters are constant. The internal parameters of the camera are obtained in advance through a separate camera calibration procedure; what is done here is to read this set of parameters into memory.
The acquisition module captures video frame images through the camera or video camera and performs corresponding processing on the video frame images, such as scaling, grayscale processing, binarization and contour extraction.
The recognition module 32 is configured to perform computation on the video frame images acquired by the acquisition module to identify real planes in the real environment.
The real planes can be identified all at once, one by one, or only the planes required according to the animation needs of the virtual object can be identified.
Various methods can be used to identify the real planes; a Simultaneous Localization And Mapping (SLAM) algorithm is used to detect the plane poses and the camera pose in the world coordinate system. The pose information includes a position (three-dimensional coordinates) and an orientation (rotation angles about the X, Y and Z axes respectively), and is usually represented by a pose matrix.
In one embodiment, a method based on feature point alignment is used to detect and identify the real planes. Discrete feature points, such as SIFT, SURF, FAST or ORB features, are extracted from the video frame images, the feature points between adjacent images are matched, the pose increment of the camera is calculated from the matched feature points, and triangulation is used to recover the three-dimensional coordinates of the feature points. Assuming that most of the extracted feature points lie in the same plane, the planes of the scene are estimated from the extracted FAST corner points by the RANSAC algorithm.
In one embodiment, a method based on image alignment is used to detect and identify the real planes. A direct alignment operation is performed on all pixels between the previous frame and the current frame of the video frame images, the camera pose increment between adjacent frames is solved using all the pixel information of the images, and the depth information of the pixels in the images is recovered to obtain the real planes.
In one embodiment, the video frame images are converted into three-dimensional point clouds and single-frame three-dimensional point cloud reconstruction is completed; SURF feature descriptors are used to extract features from two adjacent frames, with the Euclidean distance as the similarity measure, and a PnP solution yields the preliminary rotation matrix between the three-dimensional point clouds of the two adjacent frames; a VoxelGrid filter is used to downsample the reconstructed point cloud of each frame, and the RANSAC algorithm is used to extract plane poses from the three-dimensional point cloud of each frame; the positions of the real planes are determined from the plane poses extracted from the point clouds of the frames.
The placement module 33 is configured to place the virtual object corresponding to the model on one of the multiple real planes.
The model here may be a 3D model; when each 3D model is placed in the video image, it corresponds to a virtual object. The virtual object is placed on a real plane identified by the recognition module; which plane it is placed on is not limited in the present disclosure, and it may be placed on the first identified plane, or on a plane specified by the user.
The generation module 34 is configured to generate, according to the identified multiple real planes, an animation trajectory of the virtual object between the multiple real planes.
The pose of the virtual object (3D model) relative to the identified plane, in the three-dimensional plane coordinate system, is usually preset by the system (for example, placed directly at the plane origin) or specified by the user.
The specific operation steps of the generation module 34 include:
S31: calculating the pose of the virtual object relative to the world coordinate system from the plane pose in the world coordinate system and the pose of the virtual object in the plane coordinate system of the identified plane;
S32: calculating, from the camera pose in the world coordinate system, a transformation matrix H (view matrix) used to convert the pose of the virtual object relative to the world coordinate system into the pose of the virtual object relative to the camera coordinate system;
The imaging of an identified plane on the display image is equivalent to transforming the points of the plane from the world coordinate system to the camera coordinate system and then projecting them onto the display image to form a two-dimensional image of the plane. Therefore, the three-dimensional virtual object corresponding to the identified plane is retrieved from the corresponding data preset by the system or specified by the user, the vertex array of the three-dimensional virtual object is obtained, and finally the vertex coordinates in the vertex array are multiplied by the transformation matrix H to obtain the coordinates of the three-dimensional virtual object in the camera coordinate system.
After the corresponding camera coordinates in the camera coordinate system and the world coordinate system are obtained, the product of the projection matrix and the transformation matrix H can be solved from the simultaneous equations. Since the projection matrix depends entirely on the internal parameters of the camera, the transformation matrix H can be deduced.
Once all the internal and external parameters of the camera have been computed, the 3D-to-2D transformation from the camera coordinate system to the display image can be realized by the corresponding calculation.
S33: generating animation trajectory data of the virtual object according to the identified real plane data (including the plane poses). The animation trajectory data includes coordinate positions in the camera coordinate system, an animation curve and a jump relationship. Animation keypoints of the virtual object are generated according to the positions of the identified real planes and the jump relationship defined for the virtual object.
The jump relationship of the animation trajectory specifies, for example, which plane to jump to first and which plane to jump to next.
S34: drawing corresponding three-dimensional graphics according to the animation trajectory data, storing them in a frame buffer, and generating multiple virtual graphic frames to sketch the animation trajectory of the virtual object.
In one embodiment, a Bezier curve configuration is used to generate the animation curve of the virtual object, i.e., the animation trajectory, so as to achieve accurate sketching and configuration. The order of the Bezier curve equation, such as first order, second order, third order or higher, is determined according to the animation trajectory data; the animation keypoints of the virtual object are used as the control points of the Bezier curve to create the Bezier curve equation, such as a linear, quadratic, cubic or higher-order Bezier curve equation; the Bezier curve is sketched according to the Bezier curve equation, thereby forming the animation curve of the virtual object, i.e., the animation trajectory.
FIG. 4 is a hardware block diagram of an augmented reality multi-plane model animation interaction device according to an embodiment of the present disclosure. As shown in FIG. 4, the augmented reality multi-plane model animation interaction device 40 according to the embodiment of the present disclosure includes a memory 41 and a processor 42.
The memory 41 is configured to store non-transitory computer-readable instructions. Specifically, the memory 41 may include one or more computer program products, and the computer program products may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory and the like.
The processor 42 may be a central processing unit (CPU) or another form of processing unit having data processing capability and/or instruction execution capability, and may control other components in the augmented reality multi-plane model animation interaction device 40 to perform desired functions. In one embodiment of the present disclosure, the processor 42 is configured to run the computer-readable instructions stored in the memory 41, so that the augmented reality multi-plane model animation interaction device 40 performs all or some of the steps of the augmented reality multi-plane model animation interaction methods of the embodiments of the present disclosure described above.
Those skilled in the art should understand that, to solve the technical problem of how to obtain a good user experience, this embodiment may also include well-known structures such as a communication bus and interfaces, and these well-known structures should also be included within the protection scope of the present disclosure.
For a detailed description of this embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated here.
FIG. 5 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure. As shown in FIG. 5, a computer-readable storage medium 50 according to an embodiment of the present disclosure stores non-transitory computer-readable instructions 51 thereon. When the non-transitory computer-readable instructions 51 are run by a processor, all or some of the steps of the augmented reality multi-plane model animation interaction methods of the embodiments of the present disclosure described above are performed.
The above computer-readable storage media include, but are not limited to: optical storage media (e.g., CD-ROM and DVD), magneto-optical storage media (e.g., MO), magnetic storage media (e.g., magnetic tape or removable hard disk), media with built-in rewritable non-volatile memory (e.g., memory cards), and media with built-in ROM (e.g., ROM cartridges).
For a detailed description of this embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated here.
FIG. 6 is a schematic diagram of a hardware structure of a terminal according to an embodiment of the present disclosure. As shown in FIG. 6, the augmented reality multi-plane model animation interaction terminal 60 includes the augmented reality multi-plane model animation interaction apparatus of the above embodiment.
The terminal may be implemented in various forms. The terminal in the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), navigation devices, in-vehicle terminals, in-vehicle display terminals and in-vehicle electronic rear-view mirrors, and fixed terminals such as digital TVs and desktop computers.
As an equivalent alternative embodiment, the terminal may also include other components. As shown in FIG. 6, the augmented reality multi-plane model animation interaction terminal 60 may include a power supply unit 61, a wireless communication unit 62, an A/V (audio/video) input unit 63, a user input unit 64, a sensing unit 65, an interface unit 66, a controller 67, an output unit 68, a memory 69 and the like. FIG. 6 shows a terminal with various components, but it should be understood that not all of the illustrated components are required to be implemented, and more or fewer components may be implemented instead.
The wireless communication unit 62 allows radio communication between the terminal 60 and a wireless communication system or network. The A/V input unit 63 is configured to receive audio or video signals. The user input unit 64 may generate key input data according to commands input by a user to control various operations of the terminal. The sensing unit 65 detects the current state of the terminal 60, the position of the terminal 60, the presence or absence of a user's touch input to the terminal 60, the orientation of the terminal 60, the acceleration or deceleration movement and direction of the terminal 60, and the like, and generates commands or signals for controlling the operation of the terminal 60. The interface unit 66 serves as an interface through which at least one external device can be connected to the terminal 60. The output unit 68 is configured to provide output signals in a visual, audio and/or tactile manner. The memory 69 may store software programs of the processing and control operations executed by the controller 67, or may temporarily store data that has been output or is to be output. The memory 69 may include at least one type of storage medium. Moreover, the terminal 60 may cooperate with a network storage device that performs the storage function of the memory 69 through a network connection. The controller 67 generally controls the overall operation of the terminal. In addition, the controller 67 may include a multimedia module for reproducing or playing back multimedia data. The controller 67 may perform pattern recognition processing to recognize a handwriting input or a picture-drawing input performed on a touch screen as characters or images. The power supply unit 61 receives external power or internal power under the control of the controller 67 and supplies the appropriate power required to operate the various elements and components.
The various embodiments of the augmented reality multi-plane model animation interaction method proposed by the present disclosure may be implemented in a computer-readable medium using, for example, computer software, hardware or any combination thereof. For hardware implementation, the various embodiments of the augmented reality multi-plane model animation interaction method proposed by the present disclosure may be implemented by using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, the various embodiments may be implemented in the controller 67. For software implementation, the various embodiments of the augmented reality multi-plane model animation interaction method proposed by the present disclosure may be implemented with separate software modules that allow at least one function or operation to be performed. The software code may be implemented by a software application (or program) written in any suitable programming language, and the software code may be stored in the memory 69 and executed by the controller 67.
For a detailed description of this embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated here.
The basic principles of the present disclosure have been described above in conjunction with specific embodiments. However, it should be pointed out that the advantages, strengths, effects and the like mentioned in the present disclosure are merely examples and not limitations, and these advantages, strengths, effects and the like cannot be considered to be necessarily possessed by every embodiment of the present disclosure. In addition, the specific details disclosed above are only for the purpose of illustration and ease of understanding, not limitation, and the above details do not limit the present disclosure to being implemented with those specific details.
The block diagrams of the devices, apparatuses, equipment and systems involved in the present disclosure are merely illustrative examples and are not intended to require or imply that connections, arrangements and configurations must be made in the manner shown in the block diagrams. As those skilled in the art will recognize, these devices, apparatuses, equipment and systems may be connected, arranged and configured in any manner. Words such as "include", "comprise" and "have" are open-ended words meaning "including but not limited to" and may be used interchangeably therewith. The words "or" and "and" as used herein refer to the word "and/or" and may be used interchangeably therewith, unless the context clearly indicates otherwise. The word "such as" as used herein refers to the phrase "such as but not limited to" and may be used interchangeably therewith.
In addition, as used herein, the "or" used in an enumeration of items beginning with "at least one of" indicates a disjunctive enumeration, so that, for example, an enumeration of "at least one of A, B or C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C). Furthermore, the word "exemplary" does not mean that the described example is preferred or better than other examples.
It should also be pointed out that, in the systems and methods of the present disclosure, the components or steps may be decomposed and/or recombined. Such decompositions and/or recombinations should be regarded as equivalent solutions of the present disclosure.
Various changes, substitutions and alterations to the techniques described herein may be made without departing from the teachings defined by the appended claims. Furthermore, the scope of the claims of the present disclosure is not limited to the specific aspects of the processes, machines, manufacture, compositions of matter, means, methods and actions described above. Processes, machines, manufacture, compositions of matter, means, methods or actions that currently exist or are later developed and that perform substantially the same functions or achieve substantially the same results as the corresponding aspects described herein may be utilized. Accordingly, the appended claims include such processes, machines, manufacture, compositions of matter, means, methods or actions within their scope.
The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other aspects without departing from the scope of the present disclosure. Therefore, the present disclosure is not intended to be limited to the aspects shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The above description has been given for the purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the present disclosure to the forms disclosed herein. Although a number of example aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions and sub-combinations thereof.

Claims (9)

  1. An augmented reality multi-plane model animation interaction method, comprising:
    acquiring a video image of a real environment;
    performing computation on the video image to identify multiple real planes in the real environment;
    placing a virtual object corresponding to the model on one of the multiple real planes; and
    generating, according to the identified multiple real planes, an animation trajectory of the virtual object between the multiple real planes.
  2. The augmented reality multi-plane model animation interaction method according to claim 1, wherein performing computation on the video image to identify multiple real planes in the real environment comprises:
    identifying all the planes in the video image at once, or
    identifying the planes in the video image one by one, or
    identifying the required planes according to the animation needs of the virtual object.
  3. The augmented reality multi-plane model animation interaction method according to claim 1, wherein performing computation on the video image to identify multiple real planes in the real environment comprises: detecting plane poses and a camera pose in a world coordinate system through a SLAM algorithm.
  4. The augmented reality multi-plane model animation interaction method according to claim 1, wherein generating, according to the identified multiple real planes, the animation trajectory of the virtual object between the multiple real planes further comprises:
    calculating the pose of the virtual object relative to the world coordinate system from the plane pose in the world coordinate system and the pose of the virtual object in the plane coordinate system of the identified plane;
    calculating, from the camera pose in the world coordinate system, a transformation matrix H used to convert the pose of the virtual object relative to the world coordinate system into the pose of the virtual object relative to the camera coordinate system;
    generating animation trajectory data of the virtual object according to the data of the identified multiple real planes; and
    drawing corresponding three-dimensional graphics according to the animation trajectory data, and generating multiple virtual graphic frames to form the animation trajectory of the virtual object.
  5. The augmented reality multi-plane model animation interaction method according to claim 4, wherein the animation trajectory data comprises coordinate positions in a camera coordinate system, an animation curve and a jump relationship.
  6. The augmented reality multi-plane model animation interaction method according to claim 5, further comprising:
    generating animation keypoints of the virtual object according to the identified poses of the real planes and the jump relationship; and
    generating the animation trajectory of the virtual object with a Bezier curve configuration using the animation keypoints as parameters.
  7. An augmented reality multi-plane model animation interaction apparatus, comprising:
    an acquisition module configured to acquire a video image of a real environment;
    a recognition module configured to perform computation on the video image to identify real planes in the real environment;
    a placement module configured to place a virtual object corresponding to the model on one of the multiple real planes; and
    a generation module configured to generate, according to the identified multiple real planes, an animation trajectory of the virtual object between the multiple real planes.
  8. An augmented reality multi-plane model animation interaction device, comprising a processor and a memory, wherein the memory stores computer-readable instructions, and the processor executes the computer-readable instructions to implement the augmented reality multi-plane model animation interaction method according to any one of claims 1 to 6.
  9. A computer-readable storage medium for storing computer-readable instructions which, when executed by a computer, cause the computer to implement the augmented reality multi-plane model animation interaction method according to any one of claims 1 to 6.
PCT/CN2019/073078 2018-08-09 2019-01-25 Augmented reality multi-plane model animation interaction method, apparatus, device and storage medium WO2020029554A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2020571801A JP7337104B2 (ja) 2018-08-09 2019-01-25 拡張現実によるモデル動画多平面インタラクション方法、装置、デバイス及び記憶媒体
GB2100236.5A GB2590212B (en) 2018-08-09 2019-01-25 Multi-plane model animation interaction method, apparatus and device for augmented reality, and storage medium
US16/967,950 US20210035346A1 (en) 2018-08-09 2019-01-25 Multi-Plane Model Animation Interaction Method, Apparatus And Device For Augmented Reality, And Storage Medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810900487.7A CN110827376A (zh) 2018-08-09 2018-08-09 增强现实多平面模型动画交互方法、装置、设备及存储介质
CN201810900487.7 2018-08-09

Publications (1)

Publication Number Publication Date
WO2020029554A1 true WO2020029554A1 (zh) 2020-02-13

Family

ID=69413908

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/073078 WO2020029554A1 (zh) 2018-08-09 2019-01-25 增强现实多平面模型动画交互方法、装置、设备及存储介质

Country Status (5)

Country Link
US (1) US20210035346A1 (zh)
JP (1) JP7337104B2 (zh)
CN (1) CN110827376A (zh)
GB (1) GB2590212B (zh)
WO (1) WO2020029554A1 (zh)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110515452B (zh) * 2018-05-22 2022-02-22 腾讯科技(深圳)有限公司 图像处理方法、装置、存储介质和计算机设备
CN111522439B (zh) * 2020-04-02 2024-04-12 上海电气集团股份有限公司 一种虚拟样机的修订方法、装置、设备及计算机存储介质
CN111626183B (zh) * 2020-05-25 2024-07-16 深圳市商汤科技有限公司 一种目标对象展示方法及装置、电子设备和存储介质
CN111583421A (zh) * 2020-06-03 2020-08-25 浙江商汤科技开发有限公司 确定展示动画的方法、装置、电子设备及存储介质
US20230267664A1 (en) * 2020-07-16 2023-08-24 Beijing Bytedance Network Technology Co., Ltd. Animation processing method and apparatus, electronic device and storage medium
CN113476835B (zh) * 2020-10-22 2024-06-07 海信集团控股股份有限公司 一种画面显示的方法及装置
US11741676B2 (en) 2021-01-21 2023-08-29 Samsung Electronics Co., Ltd. System and method for target plane detection and space estimation
CN113034651B (zh) * 2021-03-18 2023-05-23 腾讯科技(深圳)有限公司 互动动画的播放方法、装置、设备及存储介质
CN113160308A (zh) * 2021-04-08 2021-07-23 北京鼎联网络科技有限公司 一种图像处理方法和装置、电子设备及存储介质
KR102594258B1 (ko) * 2021-04-26 2023-10-26 한국전자통신연구원 증강현실에서 실제 객체를 가상으로 이동하는 방법 및 장치
CN113888724B (zh) * 2021-09-30 2024-07-23 北京字节跳动网络技术有限公司 一种动画显示方法、装置及设备
CN114445541A (zh) * 2022-01-28 2022-05-06 北京百度网讯科技有限公司 处理视频的方法、装置、电子设备及存储介质
CN114584704A (zh) * 2022-02-08 2022-06-03 维沃移动通信有限公司 拍摄方法、装置和电子设备
CN115937299B (zh) * 2022-03-25 2024-01-30 北京字跳网络技术有限公司 在视频中放置虚拟对象的方法及相关设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130303285A1 (en) * 2012-05-11 2013-11-14 Sony Computer Entertainment Europe Limited Apparatus and method for augmented reality
CN104050475A (zh) * 2014-06-19 2014-09-17 樊晓东 基于图像特征匹配的增强现实的系统和方法
CN107371009A (zh) * 2017-06-07 2017-11-21 东南大学 一种人体动作增强可视化方法及人体动作增强现实系统
CN108111832A (zh) * 2017-12-25 2018-06-01 北京麒麟合盛网络技术有限公司 增强现实ar视频的异步交互方法及系统

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8970690B2 (en) * 2009-02-13 2015-03-03 Metaio Gmbh Methods and systems for determining the pose of a camera with respect to at least one object of a real environment
JP2013164697A (ja) 2012-02-10 2013-08-22 Sony Corp 画像処理装置、画像処理方法、プログラム及び画像処理システム
US20130215230A1 (en) * 2012-02-22 2013-08-22 Matt Miesnieks Augmented Reality System Using a Portable Device
US20130215109A1 (en) * 2012-02-22 2013-08-22 Silka Miesnieks Designating Real World Locations for Virtual World Control
JP5988368B2 (ja) 2012-09-28 2016-09-07 Kddi株式会社 画像処理装置及び方法
US9953618B2 (en) 2012-11-02 2018-04-24 Qualcomm Incorporated Using a plurality of sensors for mapping and localization
JP2014178794A (ja) * 2013-03-14 2014-09-25 Hitachi Ltd 搬入経路計画システム
US9412040B2 (en) 2013-12-04 2016-08-09 Mitsubishi Electric Research Laboratories, Inc. Method for extracting planes from 3D point cloud sensor data
US9972131B2 (en) * 2014-06-03 2018-05-15 Intel Corporation Projecting a virtual image at a physical surface
US9754416B2 (en) * 2014-12-23 2017-09-05 Intel Corporation Systems and methods for contextually augmented video creation and sharing
US10845188B2 (en) * 2016-01-05 2020-11-24 Microsoft Technology Licensing, Llc Motion capture from a mobile self-tracking device
JP6763154B2 (ja) * 2016-03-09 2020-09-30 富士通株式会社 画像処理プログラム、画像処理装置、画像処理システム、及び画像処理方法
CN107358609B (zh) * 2016-04-29 2020-08-04 成都理想境界科技有限公司 一种用于增强现实的图像叠加方法及装置
CN107665508B (zh) * 2016-07-29 2021-06-01 成都理想境界科技有限公司 实现增强现实的方法及系统
CN107665506B (zh) * 2016-07-29 2021-06-01 成都理想境界科技有限公司 实现增强现实的方法及系统
CN106548519A (zh) * 2016-11-04 2017-03-29 上海玄彩美科网络科技有限公司 基于orb‑slam和深度相机的真实感的增强现实方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130303285A1 (en) * 2012-05-11 2013-11-14 Sony Computer Entertainment Europe Limited Apparatus and method for augmented reality
CN104050475A (zh) * 2014-06-19 2014-09-17 樊晓东 基于图像特征匹配的增强现实的系统和方法
CN107371009A (zh) * 2017-06-07 2017-11-21 东南大学 一种人体动作增强可视化方法及人体动作增强现实系统
CN108111832A (zh) * 2017-12-25 2018-06-01 北京麒麟合盛网络技术有限公司 增强现实ar视频的异步交互方法及系统

Also Published As

Publication number Publication date
GB202100236D0 (en) 2021-02-24
GB2590212B (en) 2023-05-24
JP2021532447A (ja) 2021-11-25
GB2590212A (en) 2021-06-23
JP7337104B2 (ja) 2023-09-01
US20210035346A1 (en) 2021-02-04
GB2590212A9 (en) 2023-03-29
CN110827376A (zh) 2020-02-21

Similar Documents

Publication Publication Date Title
WO2020029554A1 (zh) Augmented reality multi-plane model animation interaction method, apparatus, device and storage medium
EP3992919B1 (en) Three-dimensional facial model generation method and apparatus, device, and medium
CN106875431B (zh) 具有移动预测的图像追踪方法及扩增实境实现方法
CN110163942B (zh) 一种图像数据处理方法和装置
US8681179B2 (en) Method and system for coordinating collisions between augmented reality and real reality
WO2020024569A1 (zh) 动态生成人脸三维模型的方法、装置、电子设备
KR101410273B1 (ko) 증강현실 응용을 위한 환경 모델링 방법 및 장치
WO2021018214A1 (zh) 虚拟对象处理方法及装置、存储介质和电子设备
US20120306874A1 (en) Method and system for single view image 3 d face synthesis
JP7556839B2 (ja) 複合現実において動的仮想コンテンツを生成するデバイスおよび方法
KR101723823B1 (ko) 인터랙티브 공간증강 체험전시를 위한 동적 객체와 가상 객체 간의 인터랙션 구현 장치
WO2020001014A1 (zh) 图像美化方法、装置及电子设备
CN110072046B (zh) 图像合成方法和装置
US11373329B2 (en) Method of generating 3-dimensional model data
WO2021164653A1 (zh) 动画形象的生成方法、设备及存储介质
AU2016230943B2 (en) Virtual trying-on experience
CN115775300B (zh) 人体模型的重建方法、人体重建模型的训练方法及装置
CN110827411B (zh) 自适应环境的增强现实模型显示方法、装置、设备及存储介质
CN112862981B (zh) 用于呈现虚拟表示的方法和装置、计算机设备和存储介质
CN108989681A (zh) 全景图像生成方法和装置
KR20140078083A (ko) 증강 현실이 구현된 만화책
McClean An Augmented Reality System for Urban Environments using a Planar Building Fa cade Model
WO2023017623A1 (ja) 情報処理装置、情報処理方法およびプログラム
US20230267664A1 (en) Animation processing method and apparatus, electronic device and storage medium
Setyati et al. Face tracking implementation with pose estimation algorithm in augmented reality technology

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19847541

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020571801

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 202100236

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20190125

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 20.05.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19847541

Country of ref document: EP

Kind code of ref document: A1