WO2020029554A1 - Augmented reality multi-plane model animation interaction method, apparatus, device and storage medium - Google Patents
Augmented reality multi-plane model animation interaction method, apparatus, device and storage medium
- Publication number
- WO2020029554A1 (application PCT/CN2019/073078)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- animation
- real
- plane
- virtual object
- planes
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/579—Depth or shape recovery from multiple images from motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
Definitions
- the present disclosure relates to the technical field of augmented reality, and in particular, to an augmented reality multi-plane model animation interaction method, apparatus, device, and storage medium.
- Augmented Reality (AR) is also called mixed reality.
- AR is a new technology developed on the basis of computer virtual reality. It uses computer technology to extract real-world information and overlays virtual information onto the real world, so as to achieve the realistic sensory effect of virtual information and real-world information coexisting in the same picture or space.
- AR technology has a wide range of applications in military, scientific research, industry, medical, gaming, education, and municipal planning. For example, in the medical field, doctors can use AR technology to precisely locate the surgical site.
- an existing augmented reality (AR) system realizes the fusion of real images and virtual animations as follows.
- video frames of the real environment are obtained, the obtained video frames are processed to calculate the relative pose of the environment and the camera, and the graphic frames of the virtual objects are generated.
- the graphic frames of the virtual objects are then composited with the video frames of the real environment to obtain synthesized video frames of the augmented reality environment, which are written to video memory for display.
- the present disclosure provides an augmented reality multi-plane model animation interaction method, and also provides an augmented reality multi-plane model animation interaction apparatus, device, and storage medium.
- in the augmented reality multi-plane model animation interaction method, the planes identified in the real scene are used to determine the animation trajectory of the virtual object drawn from the animation model, so that the virtual object animation is associated with the real scene and the realistic sensory experience of the system is enhanced.
- an augmented reality multi-plane model animation interaction method includes:
- obtaining a video image of a real environment; performing calculation processing on the video image to identify multiple real planes in the real environment; placing a virtual object corresponding to the model on one of the multiple real planes; and generating an animation trajectory of the virtual object between the multiple real planes according to the identified multiple real planes.
- performing calculation processing on the video image to identify multiple real planes in a real environment includes recognizing all the planes in the video image at one time, identifying the planes in the video image one by one, or identifying the required planes according to the animation needs of the virtual object.
- performing calculation processing on the video image to identify multiple real planes in a real environment includes detecting plane poses and a camera pose in a world coordinate system through a SLAM algorithm.
- generating the animation trajectory of the virtual object according to the identified real planes further includes generating animation trajectory data of the virtual object from the data of the identified real planes.
- the corresponding three-dimensional graphics are drawn according to the animation trajectory data, and a plurality of virtual graphic frames are generated to form the animation trajectory of the virtual object.
- the animation trajectory data includes a coordinate position in a camera coordinate system, an animation curve, and a jump relationship.
- animation keypoints of the virtual object are generated according to the identified poses of the real planes and the jump relationship, and the animation trajectory of the virtual object is generated through a Bezier curve configuration with the animation keypoints as parameters.
- An augmented reality multi-plane model animation interactive device includes:
- an obtaining module, configured to obtain a video image of a real environment; a recognition module, configured to perform calculation processing on the video image to identify real planes in the real environment; a placement module, configured to place a virtual object corresponding to the model on one of the multiple real planes; and a generating module, configured to generate an animation trajectory of the virtual object between the multiple real planes according to the identified real planes.
- the recognition module recognizes multiple real planes in the real environment by recognizing all the planes in the video image at one time, identifying the planes in the video image one by one, or identifying the required planes according to the animation needs of the virtual object.
- the recognition module recognizes the multiple real planes in the real environment by detecting plane poses and a camera pose in a world coordinate system through a SLAM algorithm.
- when generating the animation trajectory of the virtual object according to the identified real planes, the generating module is further configured to generate animation trajectory data of the virtual object from the data of the identified real planes.
- the corresponding three-dimensional graphics are drawn according to the animation trajectory data, and a plurality of virtual graphic frames are generated to form the animation trajectory of the virtual object.
- the animation trajectory data includes a coordinate position in a camera coordinate system, an animation curve, and a jump relationship.
- the generating module generates animation keypoints of the virtual object according to the identified poses of the real planes and the jump relationship, and generates the animation trajectory of the virtual object through a Bezier curve configuration with the animation keypoints as parameters.
- An augmented reality multi-plane model animation interactive device includes a processor and a memory, and the memory stores computer-readable instructions; the processor executes the computer-readable instructions to implement any of the foregoing augmented reality multi-plane model animation interaction methods.
- a computer-readable storage medium is used to store computer-readable instructions.
- when the computer-readable instructions are executed by a computer, the computer is caused to implement any one of the augmented reality multi-plane model animation interaction methods described above.
- Embodiments of the present disclosure provide an augmented reality multi-plane model animation interaction method, an augmented reality multi-plane model animation interaction device, an augmented reality multi-plane model animation interaction device, and a computer-readable storage medium.
- the augmented reality multi-plane model animation interaction method includes: obtaining a video image of a real environment; performing calculation processing on the video image to identify multiple real planes in the real environment; placing a virtual object corresponding to the model on one of the multiple real planes; and generating an animation trajectory of the virtual object between the multiple real planes based on the identified real planes.
- the method generates the animation trajectory of the virtual object by identifying the real planes of the real environment, thereby associating the animation effect of the virtual object with the real scene and enhancing the user's realistic sensory experience.
- FIG. 1 is a schematic flowchart of an augmented reality multi-plane model animation interaction method according to an embodiment of the present disclosure
- FIG. 2 is a schematic flowchart of an augmented reality multi-plane model animation interaction method according to another embodiment of the present disclosure
- FIG. 2a is an example of generating a virtual object animation according to an embodiment of the present disclosure
- FIG. 3 is a schematic structural diagram of an augmented reality multi-plane model animation interactive device according to an embodiment of the present disclosure
- FIG. 4 is a schematic structural diagram of an augmented reality multi-plane model animation interactive device according to an embodiment of the present disclosure
- FIG. 5 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present disclosure.
- FIG. 6 is a schematic structural diagram of an augmented reality multi-plane model animation interactive terminal according to an embodiment of the present disclosure.
- an embodiment of the present disclosure provides an augmented reality multi-plane model animation interaction method.
- the augmented reality multi-plane model animation interaction method mainly includes the following steps:
- Step S1 Acquire a video image of a real environment.
- the graphics system environment is initialized first.
- the goal of the graphics system environment initialization is to set up a drawing environment capable of supporting both two-dimensional and three-dimensional graphics, including setting the display mode, setting the display parameter list and display device, creating the display surface, setting the display surface parameters, and setting the viewpoint position and view plane.
- Graphics systems generally use cameras, camcorders and other image acquisition equipment to capture real-world video images.
- the internal parameters of the camera refer to intrinsic properties such as the focal length and lens distortion; these parameters determine the projection transformation matrix of the camera and depend only on the camera itself, so the internal parameters of the same camera are constant.
- the camera's internal parameters are obtained in advance through an independent camera calibration procedure; what is done here is simply to read this set of parameters into memory.
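- for illustration only, the following sketch shows how such pre-calibrated internal parameters are conventionally assembled into a projection (intrinsic) matrix and used to project a camera-space point onto the image plane; the function names and the omission of lens distortion are assumptions of this sketch, not requirements of the disclosure.

```python
import numpy as np

def intrinsic_matrix(fx, fy, cx, cy):
    """Pinhole projection matrix K built from pre-calibrated internal
    parameters (focal lengths and principal point); lens distortion is
    ignored in this sketch."""
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

def project_point(K, point_camera):
    """Project a 3D point given in camera coordinates onto the image plane."""
    uvw = K @ point_camera
    return uvw[:2] / uvw[2]
```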
- Step S2 Perform calculation processing on the obtained video frame image to identify multiple real planes in the real environment.
- real plane recognition can identify all the planes in the environment at one time, identify them one by one, or identify only the required planes according to the animation needs of the virtual object.
- the recognition of real planes can adopt a variety of methods; for example, a Simultaneous Localization And Mapping (SLAM) algorithm can be used to detect the plane poses and the camera pose in the world coordinate system.
- the pose information includes a position (three-dimensional coordinates) and an orientation (rotation angles about the X, Y, and Z axes, respectively), which are usually represented together by a pose matrix.
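- as a minimal illustration of how a position and three rotation angles can be packed into a single pose matrix, the sketch below assembles a 4x4 homogeneous matrix; the Z·Y·X composition order is an assumption made for the example only, not something prescribed by the disclosure.

```python
import numpy as np

def pose_matrix(position, angles_xyz):
    """Assemble a 4x4 pose matrix from a 3D position and rotation angles
    about the X, Y and Z axes (in radians)."""
    ax, ay, az = angles_xyz
    rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    rz = np.array([[np.cos(az), -np.sin(az), 0],
                   [np.sin(az),  np.cos(az), 0],
                   [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = rz @ ry @ rx   # rotation (orientation) part
    T[:3, 3] = position        # translation (position) part
    return T
```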
- the world coordinate system is the absolute coordinate system of the system; before the user coordinate system (i.e., the camera coordinate system) is established, the coordinates of all points in the picture are determined with respect to the origin of this coordinate system.
- a method based on feature point alignment is used to detect and identify the real planes.
- discrete feature points, such as SIFT, SURF, FAST, or ORB features, are extracted from the video frame images, and the feature points between adjacent frames are matched.
- the pose increment of the camera is then calculated, and triangulation is used to recover the three-dimensional coordinates of the matched feature points; assuming that most of the extracted feature points lie on planes, each plane of the scene is estimated by the RANSAC algorithm from the extracted corner points (for example, FAST corners).
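- the following sketch illustrates one possible realization of this feature-point pipeline, using ORB features and a simple RANSAC plane fit; it is an assumed, OpenCV-based example, not the disclosure's own implementation, and parameter values are illustrative.

```python
import cv2
import numpy as np

def match_orb_features(img_prev, img_curr, max_features=500):
    """Extract ORB features in two adjacent frames and match them (ORB is one
    option among the SIFT/SURF/FAST/ORB features mentioned above)."""
    orb = cv2.ORB_create(nfeatures=max_features)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    return pts1, pts2

def fit_plane_ransac(points_3d, n_iters=200, threshold=0.01):
    """Estimate a dominant plane (normal n and offset d with n.x + d = 0)
    from triangulated 3D feature points using a simple RANSAC loop."""
    best_inliers, best_plane = None, None
    rng = np.random.default_rng(0)
    for _ in range(n_iters):
        sample = points_3d[rng.choice(len(points_3d), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-8:
            continue  # degenerate (collinear) sample, skip it
        n = n / norm
        d = -np.dot(n, sample[0])
        inliers = np.abs(points_3d @ n + d) < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers
```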
- alternatively, a real plane detection and recognition method based on direct image alignment is adopted.
- a direct alignment operation is performed on all pixels between the previous frame and the current frame of the video, and all pixel information in the images is used to solve for the incremental camera pose between adjacent frames; the depth information of the pixels in the image is then recovered to obtain the real planes.
- alternatively, each video frame image is converted into a three-dimensional point cloud, completing single-frame point cloud reconstruction; SURF feature descriptors are extracted from two adjacent frames and matched using Euclidean distance as the similarity measure.
- PnP is solved to obtain the preliminary rotation matrix between the three-dimensional point clouds of the two adjacent frames; a VoxelGrid filter is used to down-sample the reconstructed point cloud of each frame, and the RANSAC algorithm is used to extract the plane poses from the three-dimensional point cloud of each frame.
- the plane poses extracted from the frame point clouds determine the positions of the real planes.
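- as an assumed illustration of the VoxelGrid down-sampling and RANSAC plane-extraction step, the sketch below uses Open3D as one possible point-cloud backend; the library choice, function names, and parameter values are assumptions of the example rather than part of the disclosure.

```python
import numpy as np
import open3d as o3d  # assumption: Open3D stands in for the point-cloud backend

def extract_planes_from_point_cloud(points_xyz, voxel_size=0.02,
                                    dist_thresh=0.01, max_planes=4):
    """Down-sample a reconstructed point cloud with a voxel-grid filter and
    repeatedly extract dominant planes with RANSAC."""
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points_xyz))
    pcd = pcd.voxel_down_sample(voxel_size=voxel_size)

    planes = []
    rest = pcd
    for _ in range(max_planes):
        if len(rest.points) < 50:
            break
        model, inliers = rest.segment_plane(distance_threshold=dist_thresh,
                                            ransac_n=3, num_iterations=300)
        planes.append(model)  # plane as (a, b, c, d) with ax + by + cz + d = 0
        rest = rest.select_by_index(inliers, invert=True)  # remove inliers, continue
    return planes
```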
- Step S3 placing the virtual object corresponding to the model on one of the plurality of real planes.
- the model here may be a 3D model.
- when a 3D model is placed in a video image, it corresponds to a virtual object.
- the virtual object is placed on one of the real planes identified in step S2; which plane it is placed on is not limited by this disclosure, and it may be placed on the first identified plane or on a plane designated by the user.
- Step S4 Generate an animation trajectory of the virtual object between the multiple real planes according to the identified multiple real planes.
- the pose of the virtual object relative to the recognized plane in a three-dimensional plane coordinate system is usually built into the system (for example, directly on the plane origin) or specified by the user.
- S31 Calculate the pose of the virtual object relative to the world coordinate system through the plane pose in the world coordinate system and the pose of the virtual object relative to the identified plane coordinate system;
- S32: Calculate a transformation matrix H (view matrix) based on the camera pose in the world coordinate system, which is used to convert the pose of the virtual object relative to the world coordinate system into the pose of the virtual object relative to the camera coordinate system;
- the imaging process of an identified plane on the display image is equivalent to transforming the points of the plane from the world coordinate system to the camera coordinate system and then projecting them onto the display image to form a two-dimensional image of the plane. Therefore, the 3D virtual object corresponding to the identified plane is retrieved from the built-in or user-specified data, and the vertex array of the 3D virtual object is obtained; finally, the vertex coordinates in the vertex array are multiplied by the transformation matrix H to obtain the coordinates of the three-dimensional virtual object in the camera coordinate system.
- the product of the projection matrix and the transformation matrix H can be obtained from the simultaneous equations; since the projection matrix depends entirely on the internal parameters of the camera, the transformation matrix H can then be calculated.
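- the coordinate chain of steps S31 and S32 can be sketched as follows; the matrix names and the use of 4x4 homogeneous poses are assumptions made for illustration.

```python
import numpy as np

def object_vertices_in_camera(T_world_plane, T_plane_object, T_world_camera, vertices):
    """Transform model vertices into the camera coordinate system, following
    steps S31/S32: object -> world via the plane pose, then world -> camera
    via the view matrix H (the inverse of the camera pose). All poses are
    4x4 homogeneous matrices; the names are illustrative."""
    # S31: pose of the virtual object relative to the world coordinate system
    T_world_object = T_world_plane @ T_plane_object
    # S32: view matrix H converts world coordinates to camera coordinates
    H = np.linalg.inv(T_world_camera)
    T_camera_object = H @ T_world_object

    # apply to the vertex array (N x 3) in homogeneous coordinates
    verts_h = np.hstack([vertices, np.ones((len(vertices), 1))])
    return (T_camera_object @ verts_h.T).T[:, :3]
```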
- S33: Generate animation trajectory data of the virtual object according to the recognized real plane data (including the plane poses).
- the animation trajectory data includes the coordinate position in the camera coordinate system, the animation curve, and the jump relationship.
- the animation keypoints of the virtual object are generated according to the identified poses of the real planes and the jump relationship of the virtual object; alternatively, the jump points and animation curve can be specified by setting the animation keypoints directly.
- the jump relationship of the animation trajectory describes, for example, which plane the virtual object jumps to first and which plane it jumps to next.
- a Bezier curve configuration is used to generate the animation curve of the virtual object, that is, the animation trajectory, so that the trajectory can be accurately drawn and configured.
- the order of the Bezier equation (first order, second order, third order, or higher) is determined according to the animation trajectory data, and the animation keypoints of the virtual object are used as the control points of the Bezier curve to establish the corresponding Bezier equation, such as a linear, quadratic, cubic, or higher-order Bezier equation; the Bezier curve is drawn based on this equation and forms the animation curve of the virtual object, that is, the animation trajectory.
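- a minimal sketch of such a Bezier-curve evaluation, with the animation keypoints acting as control points, is shown below; the sampling density is illustrative.

```python
import numpy as np
from math import comb

def bezier_trajectory(control_points, n_samples=100):
    """Evaluate an n-th order Bezier curve whose control points are the
    animation keypoints. Returns sampled 3D positions along the trajectory."""
    pts = np.asarray(control_points, dtype=float)  # shape (n+1, 3)
    n = len(pts) - 1                               # curve order
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    curve = np.zeros((n_samples, pts.shape[1]))
    for i, p in enumerate(pts):
        # Bernstein basis polynomial B_{i,n}(t)
        curve += comb(n, i) * (1 - t) ** (n - i) * t ** i * p
    return curve
```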
- FIG. 2a shows an example of the augmented reality multi-plane model animation interaction method according to an embodiment of the present disclosure.
- four real planes are identified in step S2, which are P1, P2, P3, and P4, and the virtual object M is placed on the plane P1.
- the user can set the key points of the animation.
- the key points are A, B, and C, respectively located on the planes P2, P3, and P4, and the jump relationship is P1 to P2 to P3 to P4; according to the keypoints and the jump relationship, the animation trajectory of the virtual object M is generated from plane P1 through the keypoints A, B, and C to plane P4, as illustrated in the example below.
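```python
# Illustrative use of the bezier_trajectory() sketch above: the placement
# point on P1 and the keypoints A, B, C (all coordinates are hypothetical)
# act as control points of a cubic Bezier curve from plane P1 towards P4.
start_on_P1 = [0.0, 0.0, 0.0]
A_on_P2, B_on_P3, C_on_P4 = [0.5, 0.2, 0.0], [1.0, 0.5, 0.3], [1.5, 0.4, 0.6]
trajectory = bezier_trajectory([start_on_P1, A_on_P2, B_on_P3, C_on_P4])
# Each sampled point can then drive one virtual graphic frame of the object M.
```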
- the embodiment of the present disclosure provides an augmented reality multi-plane model animation interactive device 30.
- the device can perform the steps described in the foregoing embodiment of the augmented reality multi-plane model animation interaction method.
- the device 30 mainly includes: an acquisition module 31, an identification module 32, a placement module 33, and a generation module 34.
- the obtaining module 31 is configured to obtain a video image of a real environment.
- the acquisition module is generally implemented based on a graphics system.
- the graphics system environment is initialized.
- the goal of the graphics system environment initialization is to set up a drawing environment that can support both two-dimensional and three-dimensional graphics, including setting the display mode, setting the display parameter list and display device, creating the display surface, setting the display surface parameters, and setting the viewpoint position and view plane.
- Graphics systems generally use cameras, camcorders and other image acquisition equipment to capture real-world video images.
- the internal parameters of the camera refer to intrinsic properties such as the focal length and lens distortion; these parameters determine the projection transformation matrix of the camera and depend only on the camera itself, so the internal parameters of the same camera are constant.
- the camera's internal parameters are obtained in advance through an independent camera calibration procedure; what is done here is simply to read this set of parameters into memory.
- the acquisition module captures video frame images through a camera or video camera and performs corresponding preprocessing on the video frame images, such as scaling, grayscale conversion, binarization, and contour extraction.
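- one possible preprocessing chain of this kind is sketched below with OpenCV; the thresholding method and target size are assumptions of the example, not values prescribed by the disclosure.

```python
import cv2

def preprocess_frame(frame_bgr, target_width=640):
    """Scale, grayscale, binarize and extract contours from an acquired frame."""
    scale = target_width / frame_bgr.shape[1]
    resized = cv2.resize(frame_bgr, None, fx=scale, fy=scale)
    gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return resized, gray, binary, contours
```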
- the identification module 32 is configured to perform calculation processing on the video frame image acquired by the acquisition module to identify a real plane in a real environment.
- Real plane recognition can either identify all planes in the environment at once, or one by one, or identify the required planes according to the animation needs of the virtual object.
- the recognition of real planes can adopt a variety of methods; for example, a Simultaneous Localization And Mapping (SLAM) algorithm can be used to detect the plane poses and the camera pose in the world coordinate system.
- the pose information includes a position (three-dimensional coordinates) and an orientation (rotation angles about the X, Y, and Z axes, respectively), which are usually represented together by a pose matrix.
- a method based on feature point alignment is used to detect and identify the real planes.
- discrete feature points, such as SIFT, SURF, FAST, or ORB features, are extracted from the video frame images, and the feature points between adjacent frames are matched.
- the pose increment of the camera is then calculated, and triangulation is used to recover the three-dimensional coordinates of the matched feature points; assuming that most of the extracted feature points lie on planes, each plane of the scene is estimated by the RANSAC algorithm from the extracted corner points (for example, FAST corners).
- alternatively, a real plane detection and recognition method based on direct image alignment is adopted.
- a direct alignment operation is performed on all pixels between the previous frame and the current frame of the video, and all pixel information in the images is used to solve for the incremental camera pose between adjacent frames; the depth information of the pixels in the image is then recovered to obtain the real planes.
- alternatively, each video frame image is converted into a three-dimensional point cloud, completing single-frame point cloud reconstruction; SURF feature descriptors are extracted from two adjacent frames and matched using Euclidean distance as the similarity measure; PnP is solved to obtain the preliminary rotation matrix between the three-dimensional point clouds of the two adjacent frames; a VoxelGrid filter is used to down-sample the reconstructed point cloud of each frame, and the RANSAC algorithm is used to extract the plane poses from the three-dimensional point cloud of each frame; the plane poses extracted from the frame point clouds determine the positions of the real planes.
- the placement module 33 is configured to place a virtual object corresponding to the model on one of the multiple real planes.
- the model here may be a 3D model.
- when a 3D model is placed in a video image, it corresponds to a virtual object.
- the virtual object is placed on one of the real planes identified in step S2; which plane it is placed on is not limited by this disclosure, and it may be placed on the first identified plane or on a plane designated by the user.
- the generating module 34 is configured to generate an animation trajectory of the virtual object between the multiple real planes according to the identified multiple real planes.
- the pose of the virtual object (3D model) relative to the recognized plane in the three-dimensional plane coordinate system is usually built in by the system (for example, directly on the plane origin) or specified by the user.
- the specific operation steps of the generating module 34 include:
- S31 Calculate the pose of the virtual object relative to the world coordinate system through the plane pose in the world coordinate system and the pose of the virtual object relative to the identified plane coordinate system;
- S32: Calculate a transformation matrix H (view matrix) based on the camera pose in the world coordinate system, which is used to convert the pose of the virtual object relative to the world coordinate system into the pose of the virtual object relative to the camera coordinate system;
- the imaging process of an identified plane on the display image is equivalent to transforming the points of the plane from the world coordinate system to the camera coordinate system and then projecting them onto the display image to form a two-dimensional image of the plane. Therefore, the 3D virtual object corresponding to the identified plane is retrieved from the built-in or user-specified data, and the vertex array of the 3D virtual object is obtained; finally, the vertex coordinates in the vertex array are multiplied by the transformation matrix H to obtain the coordinates of the three-dimensional virtual object in the camera coordinate system.
- the product of the projection matrix and the transformation matrix H can be obtained from the simultaneous equations; since the projection matrix depends entirely on the internal parameters of the camera, the transformation matrix H can then be calculated.
- S33: Generate animation trajectory data of the virtual object according to the recognized real plane data (including the plane poses).
- the animation trajectory data includes the coordinate position in the camera coordinate system, the animation curve, and the jump relationship.
- the animation keypoints of the virtual object are generated according to the poses of the recognized real planes and the defined jump relationship of the virtual object.
- the jump relationship of the animation trajectory describes, for example, which plane the virtual object jumps to first and which plane it jumps to next.
- a Bezier curve configuration is used to generate the animation curve of the virtual object, that is, the animation trajectory, so that the trajectory can be accurately drawn and configured.
- the order of the Bezier equation (first order, second order, third order, or higher) is determined according to the animation trajectory data, and the animation keypoints of the virtual object are used as the control points of the Bezier curve to establish the corresponding Bezier equation, such as a linear, quadratic, cubic, or higher-order Bezier equation; the Bezier curve is drawn based on this equation and forms the animation curve of the virtual object, that is, the animation trajectory.
- FIG. 4 is a hardware block diagram of an augmented reality multi-plane model animation interactive device according to an embodiment of the present disclosure.
- the augmented reality multi-plane model animation interactive device 40 according to an embodiment of the present disclosure includes a memory 41 and a processor 42.
- the memory 41 is configured to store non-transitory computer-readable instructions.
- the memory 41 may include one or more computer program products, and the computer program product may include various forms of computer-readable storage media, such as volatile memory and / or non-volatile memory.
- the volatile memory may include, for example, a random access memory (RAM) and / or a cache memory.
- the non-volatile memory may include, for example, a read-only memory (ROM), a hard disk, a flash memory, and the like.
- the processor 42 may be a central processing unit (CPU) or other form of processing unit having data processing capabilities and / or instruction execution capabilities, and may control other components in the augmented reality multi-plane model animation interactive device 40 to perform desired operations.
- the processor 42 is configured to execute the computer-readable instructions stored in the memory 41, so that the augmented reality multi-plane model animation interactive device 40 executes the aforementioned augmented reality of the embodiments of the present disclosure. All or part of the steps of a multi-plane model animation interaction method.
- this embodiment may also include well-known structures such as a communication bus and interfaces; these well-known structures should also be included within the protection scope of the present disclosure.
- FIG. 5 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure.
- a computer-readable storage medium 50 according to an embodiment of the present disclosure stores non-transitory computer-readable instructions 51 thereon.
- when the non-transitory computer-readable instructions 51 are executed by a processor, all or part of the steps of the foregoing augmented reality multi-plane model animation interaction method of the embodiments of the present disclosure are performed.
- the above computer-readable storage medium includes, but is not limited to: optical storage media (for example, CD-ROM and DVD), magneto-optical storage media (for example, MO), magnetic storage media (for example, magnetic tape or removable hard disk), rewritable non-volatile storage media (for example, memory cards), and media with built-in ROM (for example, ROM cartridges).
- FIG. 6 is a schematic diagram illustrating a hardware structure of a terminal according to an embodiment of the present disclosure.
- the augmented reality multi-plane model animation interactive terminal 60 includes the foregoing embodiment of the augmented reality multi-plane model animation interaction device.
- the terminal may be implemented in various forms, and the terminal in the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), navigation devices, vehicle-mounted terminals, vehicle-mounted display terminals, and vehicle-mounted electronic rear-view mirrors, and fixed terminals such as digital TVs and desktop computers.
- the terminal may further include other components.
- the augmented reality multi-plane model animation interactive terminal 60 may include a power supply unit 61, a wireless communication unit 62, an A/V (audio/video) input unit 63, a user input unit 64, a sensing unit 65, an interface unit 66, a controller 67, an output unit 68, a memory 69, and the like.
- FIG. 6 shows a terminal with various components, but it should be understood that it is not required to implement all the illustrated components, and more or fewer components may be implemented instead.
- the wireless communication unit 62 allows radio communication between the terminal 60 and a wireless communication system or network.
- the A / V input unit 63 is used to receive audio or video signals.
- the user input unit 64 may generate key input data according to a command input by the user to control various operations of the terminal.
- the sensing unit 65 detects the current status of the terminal 60, such as the position of the terminal 60, the presence or absence of a user's touch input to the terminal 60, the orientation of the terminal 60, and the acceleration or deceleration and direction of movement of the terminal 60, and generates commands or signals for controlling the operation of the terminal 60.
- the interface unit 66 functions as an interface through which at least one external device can be connected to the terminal 60.
- the output unit 68 is configured to provide an output signal in a visual, audio, and / or tactile manner.
- the memory 69 may store software programs and the like for processing and control operations performed by the controller 67, or may temporarily store data that has been output or is to be output.
- the memory 69 may include at least one type of storage medium.
- the terminal 60 may cooperate with a network storage device that performs a storage function of the memory 69 through a network connection.
- the controller 67 generally controls the overall operation of the terminal.
- the controller 67 may include a multimedia module for reproducing or playing back multimedia data.
- the controller 67 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
- the power supply unit 61 receives external power or internal power under the control of the controller 67 and supplies appropriate power required to operate the various elements and components.
- augmented reality multi-plane model animation interaction method proposed by the present disclosure may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof.
- various embodiments of the augmented reality multi-plane model animation interaction method proposed by the present disclosure can be implemented by using at least one of an application specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, and an electronic unit designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 67.
- various embodiments of the augmented reality multi-plane model animation interaction method proposed by the present disclosure may be implemented with a separate software module that allows performing at least one function or operation.
- the software codes may be implemented by a software application (or program) written in any suitable programming language, and the software codes may be stored in the memory 69 and executed by the controller 67.
- an "or” used in an enumeration of items beginning with “at least one” indicates a separate enumeration such that, for example, an "at least one of A, B, or C” enumeration means A or B or C, or AB or AC or BC, or ABC (ie A and B and C).
- the word "exemplary” does not mean that the described example is preferred or better than other examples.
- each component or each step can be disassembled and / or recombined.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Computer Hardware Design (AREA)
- Computer Graphics (AREA)
- Computational Linguistics (AREA)
- Processing Or Creating Images (AREA)
- Image Analysis (AREA)
Claims (9)
- An augmented reality multi-plane model animation interaction method, characterized by comprising: obtaining a video image of a real environment; performing calculation processing on the video image to identify multiple real planes in the real environment; placing a virtual object corresponding to the model on one of the multiple real planes; and generating, according to the identified multiple real planes, an animation trajectory of the virtual object between the multiple real planes.
- The augmented reality multi-plane model animation interaction method according to claim 1, wherein performing calculation processing on the video image to identify multiple real planes in the real environment comprises: recognizing all the planes in the video image at one time, identifying the planes in the video image one by one, or identifying the required planes according to the animation needs of the virtual object.
- The augmented reality multi-plane model animation interaction method according to claim 1, wherein performing calculation processing on the video image to identify multiple real planes in the real environment comprises: detecting plane poses and a camera pose in a world coordinate system through a SLAM algorithm.
- The augmented reality multi-plane model animation interaction method according to claim 1, wherein generating, according to the identified multiple real planes, the animation trajectory of the virtual object between the multiple real planes further comprises: calculating the pose of the virtual object relative to the world coordinate system from the plane pose in the world coordinate system and the pose of the virtual object relative to the coordinate system of the identified plane; calculating, from the camera pose in the world coordinate system, a transformation matrix H used to convert the pose of the virtual object relative to the world coordinate system into the pose of the virtual object relative to the camera coordinate system; generating animation trajectory data of the virtual object according to the data of the identified multiple real planes; and drawing corresponding three-dimensional graphics according to the animation trajectory data, generating a plurality of virtual graphic frames to form the animation trajectory of the virtual object.
- The augmented reality multi-plane model animation interaction method according to claim 4, wherein the animation trajectory data comprises a coordinate position in the camera coordinate system, an animation curve, and a jump relationship.
- The augmented reality multi-plane model animation interaction method according to claim 5, further comprising: generating animation keypoints of the virtual object according to the identified poses of the real planes and the jump relationship; and generating the animation trajectory of the virtual object through a Bezier curve configuration with the animation keypoints as parameters.
- An augmented reality multi-plane model animation interaction apparatus, comprising: an obtaining module, configured to obtain a video image of a real environment; a recognition module, configured to perform calculation processing on the video image to identify real planes in the real environment; a placement module, configured to place a virtual object corresponding to the model on one of the multiple real planes; and a generating module, configured to generate, according to the identified multiple real planes, an animation trajectory of the virtual object between the multiple real planes.
- An augmented reality multi-plane model animation interaction device, comprising a processor and a memory, wherein the memory stores computer-readable instructions, and the processor executes the computer-readable instructions to implement the augmented reality multi-plane model animation interaction method according to any one of claims 1-6.
- A computer-readable storage medium for storing computer-readable instructions which, when executed by a computer, cause the computer to implement the augmented reality multi-plane model animation interaction method according to any one of claims 1-6.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020571801A JP7337104B2 (ja) | 2018-08-09 | 2019-01-25 | Model animation multi-plane interaction method, apparatus, device and storage medium based on augmented reality |
GB2100236.5A GB2590212B (en) | 2018-08-09 | 2019-01-25 | Multi-plane model animation interaction method, apparatus and device for augmented reality, and storage medium |
US16/967,950 US20210035346A1 (en) | 2018-08-09 | 2019-01-25 | Multi-Plane Model Animation Interaction Method, Apparatus And Device For Augmented Reality, And Storage Medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810900487.7A CN110827376A (zh) | 2018-08-09 | 2018-08-09 | Augmented reality multi-plane model animation interaction method, apparatus, device and storage medium |
CN201810900487.7 | 2018-08-09 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020029554A1 (zh) | 2020-02-13 |
Family
ID=69413908
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/073078 WO2020029554A1 (zh) | 2018-08-09 | 2019-01-25 | 增强现实多平面模型动画交互方法、装置、设备及存储介质 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20210035346A1 (zh) |
JP (1) | JP7337104B2 (zh) |
CN (1) | CN110827376A (zh) |
GB (1) | GB2590212B (zh) |
WO (1) | WO2020029554A1 (zh) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110515452B (zh) * | 2018-05-22 | 2022-02-22 | 腾讯科技(深圳)有限公司 | Image processing method and apparatus, storage medium, and computer device |
CN111522439B (zh) * | 2020-04-02 | 2024-04-12 | 上海电气集团股份有限公司 | Revision method, apparatus and device for a virtual prototype, and computer storage medium |
CN111626183B (zh) * | 2020-05-25 | 2024-07-16 | 深圳市商汤科技有限公司 | Target object display method and apparatus, electronic device, and storage medium |
CN111583421A (zh) * | 2020-06-03 | 2020-08-25 | 浙江商汤科技开发有限公司 | Method and apparatus for determining a display animation, electronic device, and storage medium |
US20230267664A1 (en) * | 2020-07-16 | 2023-08-24 | Beijing Bytedance Network Technology Co., Ltd. | Animation processing method and apparatus, electronic device and storage medium |
CN113476835B (zh) * | 2020-10-22 | 2024-06-07 | 海信集团控股股份有限公司 | Picture display method and apparatus |
US11741676B2 (en) | 2021-01-21 | 2023-08-29 | Samsung Electronics Co., Ltd. | System and method for target plane detection and space estimation |
CN113034651B (zh) * | 2021-03-18 | 2023-05-23 | 腾讯科技(深圳)有限公司 | Interactive animation playing method, apparatus, device, and storage medium |
CN113160308A (zh) * | 2021-04-08 | 2021-07-23 | 北京鼎联网络科技有限公司 | Image processing method and apparatus, electronic device, and storage medium |
KR102594258B1 (ko) * | 2021-04-26 | 2023-10-26 | 한국전자통신연구원 | Method and apparatus for virtually moving a real object in augmented reality |
CN113888724B (zh) * | 2021-09-30 | 2024-07-23 | 北京字节跳动网络技术有限公司 | Animation display method, apparatus and device |
CN114445541A (zh) * | 2022-01-28 | 2022-05-06 | 北京百度网讯科技有限公司 | Video processing method and apparatus, electronic device, and storage medium |
CN114584704A (zh) * | 2022-02-08 | 2022-06-03 | 维沃移动通信有限公司 | Shooting method and apparatus, and electronic device |
CN115937299B (zh) * | 2022-03-25 | 2024-01-30 | 北京字跳网络技术有限公司 | Method for placing a virtual object in a video, and related device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130303285A1 (en) * | 2012-05-11 | 2013-11-14 | Sony Computer Entertainment Europe Limited | Apparatus and method for augmented reality |
CN104050475A (zh) * | 2014-06-19 | 2014-09-17 | 樊晓东 | Augmented reality system and method based on image feature matching |
CN107371009A (zh) * | 2017-06-07 | 2017-11-21 | 东南大学 | Human motion enhanced visualization method and human motion augmented reality system |
CN108111832A (zh) * | 2017-12-25 | 2018-06-01 | 北京麒麟合盛网络技术有限公司 | Asynchronous interaction method and system for augmented reality (AR) video |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8970690B2 (en) * | 2009-02-13 | 2015-03-03 | Metaio Gmbh | Methods and systems for determining the pose of a camera with respect to at least one object of a real environment |
JP2013164697A (ja) | 2012-02-10 | 2013-08-22 | Sony Corp | Image processing device, image processing method, program, and image processing system |
US20130215230A1 (en) * | 2012-02-22 | 2013-08-22 | Matt Miesnieks | Augmented Reality System Using a Portable Device |
US20130215109A1 (en) * | 2012-02-22 | 2013-08-22 | Silka Miesnieks | Designating Real World Locations for Virtual World Control |
JP5988368B2 (ja) | 2012-09-28 | 2016-09-07 | Kddi株式会社 | Image processing device and method |
US9953618B2 (en) | 2012-11-02 | 2018-04-24 | Qualcomm Incorporated | Using a plurality of sensors for mapping and localization |
JP2014178794A (ja) * | 2013-03-14 | 2014-09-25 | Hitachi Ltd | Carry-in route planning system |
US9412040B2 (en) | 2013-12-04 | 2016-08-09 | Mitsubishi Electric Research Laboratories, Inc. | Method for extracting planes from 3D point cloud sensor data |
US9972131B2 (en) * | 2014-06-03 | 2018-05-15 | Intel Corporation | Projecting a virtual image at a physical surface |
US9754416B2 (en) * | 2014-12-23 | 2017-09-05 | Intel Corporation | Systems and methods for contextually augmented video creation and sharing |
US10845188B2 (en) * | 2016-01-05 | 2020-11-24 | Microsoft Technology Licensing, Llc | Motion capture from a mobile self-tracking device |
JP6763154B2 (ja) * | 2016-03-09 | 2020-09-30 | 富士通株式会社 | Image processing program, image processing device, image processing system, and image processing method |
CN107358609B (zh) * | 2016-04-29 | 2020-08-04 | 成都理想境界科技有限公司 | Image overlay method and apparatus for augmented reality |
CN107665508B (zh) * | 2016-07-29 | 2021-06-01 | 成都理想境界科技有限公司 | Method and system for implementing augmented reality |
CN107665506B (zh) * | 2016-07-29 | 2021-06-01 | 成都理想境界科技有限公司 | Method and system for implementing augmented reality |
CN106548519A (zh) * | 2016-11-04 | 2017-03-29 | 上海玄彩美科网络科技有限公司 | Realistic augmented reality method based on ORB-SLAM and a depth camera |
-
2018
- 2018-08-09 CN CN201810900487.7A patent/CN110827376A/zh active Pending
-
2019
- 2019-01-25 JP JP2020571801A patent/JP7337104B2/ja active Active
- 2019-01-25 US US16/967,950 patent/US20210035346A1/en not_active Abandoned
- 2019-01-25 WO PCT/CN2019/073078 patent/WO2020029554A1/zh active Application Filing
- 2019-01-25 GB GB2100236.5A patent/GB2590212B/en active Active
Also Published As
Publication number | Publication date |
---|---|
GB202100236D0 (en) | 2021-02-24 |
GB2590212B (en) | 2023-05-24 |
JP2021532447A (ja) | 2021-11-25 |
GB2590212A (en) | 2021-06-23 |
JP7337104B2 (ja) | 2023-09-01 |
US20210035346A1 (en) | 2021-02-04 |
GB2590212A9 (en) | 2023-03-29 |
CN110827376A (zh) | 2020-02-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020029554A1 (zh) | | Augmented reality multi-plane model animation interaction method, apparatus, device and storage medium |
EP3992919B1 (en) | | Three-dimensional facial model generation method and apparatus, device, and medium |
CN106875431B (zh) | | Image tracking method with motion prediction and augmented reality implementation method |
CN110163942B (zh) | | Image data processing method and apparatus |
US8681179B2 (en) | | Method and system for coordinating collisions between augmented reality and real reality |
WO2020024569A1 (zh) | | Method and apparatus for dynamically generating a three-dimensional face model, and electronic device |
KR101410273B1 (ko) | | Method and apparatus for environment modeling for augmented reality applications |
WO2021018214A1 (zh) | | Virtual object processing method and apparatus, storage medium, and electronic device |
US20120306874A1 (en) | | Method and system for single view image 3D face synthesis |
JP7556839B2 (ja) | | Device and method for generating dynamic virtual content in mixed reality |
KR101723823B1 (ko) | | Apparatus for implementing interaction between dynamic objects and virtual objects for interactive spatially augmented experiential exhibitions |
WO2020001014A1 (zh) | | Image beautification method and apparatus, and electronic device |
CN110072046B (zh) | | Image synthesis method and apparatus |
US11373329B2 (en) | | Method of generating 3-dimensional model data |
WO2021164653A1 (zh) | | Animated character generation method, device, and storage medium |
AU2016230943B2 (en) | | Virtual trying-on experience |
CN115775300B (zh) | | Human body model reconstruction method, and training method and apparatus for a human body reconstruction model |
CN110827411B (zh) | | Environment-adaptive augmented reality model display method, apparatus, device and storage medium |
CN112862981B (zh) | | Method and apparatus for presenting a virtual representation, computer device, and storage medium |
CN108989681A (zh) | | Panoramic image generation method and apparatus |
KR20140078083A (ko) | | Comic book with augmented reality |
McClean | | An Augmented Reality System for Urban Environments using a Planar Building Facade Model |
WO2023017623A1 (ja) | | Information processing device, information processing method, and program |
US20230267664A1 (en) | | Animation processing method and apparatus, electronic device and storage medium |
Setyati et al. | | Face tracking implementation with pose estimation algorithm in augmented reality technology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19847541 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2020571801 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 202100236 Country of ref document: GB Kind code of ref document: A Free format text: PCT FILING DATE = 20190125 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 20.05.2021) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19847541 Country of ref document: EP Kind code of ref document: A1 |