CN111179438A - AR model dynamic fixing method and device, electronic equipment and storage medium


Info

Publication number: CN111179438A
Application number: CN202010007875.XA
Authority: CN (China)
Prior art keywords: model, camera, constraint, change, scene
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 樊健荣
Current assignee: Guangzhou Huya Technology Co Ltd
Original assignee: Guangzhou Huya Technology Co Ltd
Application filed by Guangzhou Huya Technology Co Ltd

Classifications

    • G06T 19/006 Mixed reality (manipulating 3D models or images for computer graphics)
    • G06T 15/005 General purpose rendering architectures (3D image rendering)
    • G06T 7/70 Determining position or orientation of objects or cameras (image analysis)
    • G06T 2210/61 Scene description (indexing scheme for image generation or computer graphics)
    • G06T 2219/2004 Aligning objects, relative positioning of parts (indexing scheme for editing of 3D models)


Abstract

The embodiment of the invention provides an AR model dynamic fixing method and apparatus, an electronic device, and a storage medium, relating to the technical field of augmented reality. After an AR scene view is created, a camera is started according to the attribute information of the AR scene view and the change information of the camera is acquired. A transformation constraint is created from this change information and added to a model in the AR scene view, so that the model changes as the camera changes. The model is thereby dynamically fixed within the camera's viewing angle, making it appear more natural and lifelike.

Description

AR model dynamic fixing method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of augmented reality, and in particular to an AR model dynamic fixing method and apparatus, an electronic device, and a storage medium.
Background
Augmented Reality (AR) is a technology that seamlessly fuses virtual information with the real world. Virtual information generated by a computer, such as text, images, three-dimensional models, music, and video, is simulated and then applied to the real world by means of technologies such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing. The two kinds of information complement each other, and the real world is thereby enhanced.
At present, the position of each three-dimensional model displayed in an AR scene is fixed. When the camera moves or rotates its viewing angle, the three-dimensional model does not move with it, so the user at the camera end may no longer see the complete model. This harms the realism of the AR scene and degrades the user experience.
Disclosure of Invention
Based on the above research, the present invention provides a method, an apparatus, an electronic device, and a storage medium for dynamically fixing an AR model to solve the above problems.
Embodiments of the invention may be implemented as follows:
in a first aspect, an embodiment of the present invention provides a method for dynamically fixing an AR model, where the method includes:
creating an AR scene view, starting a camera according to the attribute information of the AR scene view, and acquiring the change information of the camera;
creating a transformation constraint according to the change information, and adding the transformation constraint to a model in the AR scene view so that the model changes along with the change of the camera.
In an alternative embodiment, the method further comprises:
creating a parent node in the AR scene view, and adding the model to the parent node;
adding the transformation constraint to the parent node to cause the model and the parent node to change following changes in the camera.
In an alternative embodiment, the method further comprises:
updating the transformation constraint according to the change information of the camera to update the position of the model or the parent node, so that the model or the parent node changes following the change of the camera.
In an alternative embodiment, the method further comprises:
creating a billboard constraint and setting a rotating shaft;
adding the billboard constraint to the model such that the model is movable along the axis of rotation.
In an alternative embodiment, the method further comprises:
creating a distance constraint and setting a moving distance;
adding the distance constraint to the model to move the model within the movement distance.
In an optional embodiment, the step of turning on a camera according to the attribute information of the AR scene view includes:
acquiring attribute information of the AR scene view, operating the attribute information in a world space capturing component, starting the camera, and establishing a space coordinate system;
and calculating change information of the camera through a world space capturing component based on the space coordinate system, and uploading the change information to the attribute information.
In an optional embodiment, the step of acquiring the change information of the camera includes:
acquiring a current image rendering frame according to the attribute information, and obtaining the change information of the camera from the image rendering frame.
In a second aspect, an embodiment of the present invention provides an AR model dynamic fixing device, where the AR model dynamic fixing device includes a creation module and a constraint addition module;
the creating module is used for creating an AR scene view, starting a camera according to the attribute information of the AR scene view, and acquiring the change information of the camera;
the constraint adding module is used for creating a transformation constraint according to the change information and adding the transformation constraint to a model in the AR scene view, so that the model changes following the change of the camera.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor and a non-volatile memory storing computer instructions, where the computer instructions, when executed by the processor, cause the electronic device to perform the AR model dynamic fixing method according to any one of the foregoing embodiments.
In a fourth aspect, an embodiment of the present invention provides a storage medium, where a computer program is stored, and the computer program, when executed, implements the AR model dynamic fixing method according to any one of the foregoing embodiments.
According to the AR model dynamic fixing method and apparatus, electronic device, and storage medium, after the AR scene view is created, the camera is started according to the attribute information of the AR scene view and the change information of the camera is acquired. A transformation constraint is created from this change information and added to the model in the AR scene view, so that the model changes as the camera changes. The model is thereby dynamically fixed within the camera's viewing angle, which makes it appear more natural and improves the user experience.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. The following drawings illustrate only some embodiments of the invention and should not be considered limiting of its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a block diagram of an electronic device according to an embodiment of the present invention.
Fig. 2 is a schematic flow chart of a method for dynamically fixing an AR model according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of an AR scene according to an embodiment of the present invention.
Fig. 4 is a scene schematic diagram of the AR model dynamic fixing method according to the embodiment of the present invention.
Fig. 5 is a flow chart illustrating a sub-step of the method for dynamically fixing the AR model according to the embodiment of the present invention.
Fig. 6 is another schematic flow chart of a method for dynamically fixing an AR model according to an embodiment of the present invention.
Fig. 7 is a schematic flow chart of a dynamic AR model fixing method according to an embodiment of the present invention.
Fig. 8 is a schematic coordinate axis diagram according to an embodiment of the present invention.
Fig. 9 is a schematic flow chart of a dynamic AR model fixing method according to an embodiment of the present invention.
Fig. 10 is a block diagram of an AR model dynamic fixing apparatus according to an embodiment of the present invention.
Icon: 100-an electronic device; 10-AR model dynamic fixing device; 11-a creation module; 12-constraint addition module; 20-a memory; 30-a processor; 40-a display unit; 50-a communication unit; 60-video camera.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present invention, terms such as "upper", "lower", "inside", and "outside" indicate an orientation or positional relationship based on the drawings or on the product in normal use. They are used only for convenience and simplicity of description, do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore should not be construed as limiting the invention.
Furthermore, the appearances of the terms "first," "second," and the like, if any, are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
It should be noted that the features of the embodiments of the present invention may be combined with each other without conflict.
Augmented reality technology promotes the integration of real-world and virtual-world information. Entity information that would otherwise be difficult to experience within the real world is simulated on the basis of computing and other technologies, and the resulting virtual content is overlaid on the real world, where it can be perceived by the human senses, producing a sensory experience beyond reality. Once the real environment and the virtual object are superimposed, they exist in the same picture and space at the same time.
Currently, in an AR environment, a three-dimensional model is generally added to a root node in the AR scene and given a position in the scene. The model stays fixed relative to that position and does not move with the camera lens, which produces the AR preview effect.
In the prior art, a fixing effect can also be achieved by loading the three-dimensional model on pointOfView. However, pointOfView is a property of the AR scene view (ARSCNView) that represents the node where the current video picture is located, and that node moves with the device. This fixing mode has no three-dimensional (3D) perspective processing, and the position of the three-dimensional model does not change with the position of the camera.
Because the position of the three-dimensional model in the AR scene cannot follow the position change of the camera, the realism, interest, and playability of the AR scene are limited, and the user experience suffers.
Based on the above research, the present embodiment provides a dynamic fixing method for an AR model to improve the above problem.
Referring to fig. 1, fig. 1 is a block diagram of an electronic device 100 provided in this embodiment. The AR model dynamic fixing method provided in this embodiment may be applied to the electronic device 100, which may be, but is not limited to, a terminal with an AR function, such as a smartphone or a tablet computer.
As shown in fig. 1, the electronic device 100 includes an AR model dynamic fixing apparatus 10, a memory 20, a processor 30, a display unit 40, a communication unit 50, and a camera 60.
The memory 20, processor 30, display unit 40, communication unit 50, and camera 60 are electrically connected to each other, directly or indirectly, to enable data transfer or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines.
The AR model dynamic fixing apparatus 10 includes at least one software function module which may be stored in the memory 20 in the form of software or firmware (firmware) or solidified in an Operating System (OS) of the electronic device 100. The processor 30 is used to execute executable modules stored in the memory 20, such as software functional modules and computer programs included in the AR model dynamic fixation device 10.
The memory 20 may be, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), or Electrically Erasable Programmable Read-Only Memory (EEPROM). The memory 20 is used for storing programs or data.
The processor 30 may be an integrated circuit chip having signal processing capability. It may be a general-purpose processor, such as a Central Processing Unit (CPU) or a Network Processor (NP); it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present invention may be implemented or performed by it. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The display unit 40 provides an interactive interface (e.g., a user operation interface) between the electronic device 100 and a user for displaying web page information. In particular, the display unit 40 may display pages and video output to the user, and the content of the output may include text, graphics, video, and any combination thereof. Some of the output results are for some of the user interface objects. In this embodiment, the display unit 40 may be a touch display. In the case of a touch display, the display can be a capacitive touch screen or a resistive touch screen, which supports single-point and multi-point touch operations. The support of single-point and multi-point touch operations means that the touch display can sense touch operations generated at one or more positions on the touch display and send the sensed touch operations to the processor for calculation and processing.
The communication unit 50 is used for establishing a communication connection between the electronic device 100 and another device via a network, and for transceiving data via the network.
The camera 60 is configured to acquire a real scene around the electronic device 100, and transmit acquired scene data to the processor 30 for processing, and the camera 60 may also be configured to display an effect of combining the real scene and a virtual scene, and render and display a virtual object (a three-dimensional model) while displaying the real scene.
The electronic device 100 may be installed with a plurality of clients, such as a browser (IE browser, UC browser, 360 browser, and QQ browser, etc.), AR software, and other applications.
It is to be understood that the configuration shown in fig. 1 is merely exemplary, and that the electronic device 100 may include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
Based on the implementation architecture of the electronic device 100, please refer to fig. 2, fig. 2 is a flowchart illustrating the dynamic AR model fixing method provided in this embodiment, which is executed by the electronic device 100 shown in fig. 1, and the flowchart shown in fig. 2 is described in detail below.
Step S10: creating an AR scene view, starting a camera according to the attribute information of the AR scene view, and acquiring the change information of the camera.
Step S20: creating a transformation constraint according to the change information, and adding the transformation constraint to a model in the AR scene view so that the model changes along with the change of the camera.
The AR scene view (ARSCNView) is the carrier of the AR environment; through it, three-dimensional virtual content is merged with the camera view of the real world. Each AR scene view has attribute information (an ARSession). When this attribute information is run, the electronic device 100 creates an ARCamera in the AR scene view by default. The ARCamera is the junction between the virtual scene and the real scene: it is both the video camera capturing real images and the camera in the virtual world. At the same time, the electronic device 100 opens its built-in video camera, and the effect of combining the model in the AR scene with the real scene through the camera (ARCamera) is shown on the interface of the electronic device 100, as shown in fig. 3.
When the electronic device 100 moves, for example rotates or translates, the position of the camera changes accordingly, and with it the camera's viewing angle. To make the model in the AR scene follow the electronic device 100, so that the model is dynamically fixed in front of the device and always stays within the camera's viewing angle, this embodiment dynamically fixes the model by adding a constraint (SCNConstraint) to it.
SCNConstraint is a model constraint mechanism: given agreed constraint conditions, the electronic device 100 automatically updates any model to which a constraint is added so that the conditions are satisfied. In this embodiment, when the camera changes, change information is generated. By acquiring this change information, creating a transformation constraint (SCNTransformConstraint) from it, and adding the transformation constraint to the model in the AR scene view, the model moves along with the camera, as shown in fig. 4, and always stays within the camera's viewing angle, improving the realism, playability, and user experience of the AR scene.
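In ARKit/SceneKit terms, this maps onto SCNTransformConstraint, whose block is re-evaluated every frame. The sketch below is illustrative rather than taken from the patent: the helper name, the ARSCNView parameter, and the 0.5 m offset are assumptions, and the code requires an iOS runtime.

```swift
import ARKit
import SceneKit

// Illustrative helper (name and offset are assumptions): returns a
// transformation constraint that keeps a node a fixed distance in front
// of the AR camera. The closure runs every frame, so the constrained
// node follows the camera's change information automatically.
func makeFollowCameraConstraint(sceneView: ARSCNView,
                                zOffset: Float = -0.5) -> SCNTransformConstraint {
    return SCNTransformConstraint(inWorldSpace: true) { _, current in
        // The session's current frame carries the latest camera pose.
        guard let camera = sceneView.session.currentFrame?.camera else {
            return current
        }
        var translation = matrix_identity_float4x4
        translation.columns.3.z = zOffset        // 0.5 m in front of the lens
        let world = simd_mul(camera.transform, translation)
        return SCNMatrix4(world)
    }
}
```

Attaching the returned constraint to a node's `constraints` array is what makes the node track the camera.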
In an exemplary embodiment, the transformation constraint is continuously called back: it continuously acquires the camera's change information and updates itself accordingly. The electronic device 100 then calculates, from the updated transformation constraint, the position information to which the model should be adjusted, and adjusts the model's position. This ensures that the model always changes with the electronic device 100 and always lies within the viewing angle of the camera.
In this embodiment, the three-dimensional model may be, but is not limited to, a hand-held card model, a selfie stick model, and the like.
In an alternative embodiment, referring to fig. 5 in combination, the step of turning on the camera according to the attribute information of the AR scene view includes steps S11 to S12.
Step S11: and acquiring the attribute information of the AR scene view, operating the attribute information in a world space capturing component, starting the camera, and establishing a space coordinate system.
Step S12: and calculating change information of the camera through a world space capturing component based on the space coordinate system, and uploading the change information to the attribute information.
The world space capture component (ARWorldTrackingConfiguration) is mainly used to track the orientation and position of the electronic device 100 through the camera, and to detect real-world surfaces as well as known images or objects. The attribute information (ARSession) is a shared object that manages the camera and the motion processing the AR requires. It acts as a bridge: on the one hand it communicates with the AR scene view to display the AR scene, and on the other hand it manages the camera (ARCamera) and the world space capture component (ARWorldTrackingConfiguration).
After the attribute information of the AR scene view is obtained, it is run with the world space capture component; at the same time, the camera is started through the attribute information, and a space coordinate system is established whose origin is the position of the camera. Based on this coordinate system, the world space capture component can calculate the change information of the camera; however, the component does not itself hold this information, but hands the calculated change information to the ARSession for management.
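A minimal ARKit setup along these lines might look as follows (names are illustrative, and the code requires an iOS device with ARKit support):

```swift
import ARKit

// Create the AR scene view and run its session with a world-tracking
// configuration. Running the session starts the device camera and
// establishes the world coordinate system at the camera's initial pose.
let sceneView = ARSCNView(frame: .zero)
let configuration = ARWorldTrackingConfiguration()
sceneView.session.run(configuration)
```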
Based on the above process, in an alternative embodiment, the step of acquiring the change information of the camera includes step S13.
Step S13: and acquiring a current image rendering frame according to the attribute information, and acquiring the change information of the camera according to the image rendering frame.
After calculating the change information of the camera, the world space capture component hands it to the attribute information for management. The class corresponding to the camera's change information is ARFrame, which tracks the current state of the camera. This state covers not only the change in the camera's position, but also parameters such as the image frame and the time. The ARFrame is maintained as the current image rendering frame (currentFrame), so the current image rendering frame can be obtained through the attribute information (ARSession) of the AR scene view, and the change information of the camera (ARCamera) can then be obtained from it. The change information of the camera includes the displacement and rotation information of the electronic device 100.
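Reading the camera's change information through the session's current rendering frame can be sketched as follows (the helper name is an assumption):

```swift
import ARKit

// The ARSession vends the latest image-rendering frame via currentFrame;
// the frame's ARCamera carries the device pose as a 4x4 matrix.
// Returns nil before the first frame has been rendered.
func latestCameraTransform(of session: ARSession) -> simd_float4x4? {
    return session.currentFrame?.camera.transform
}
```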
To make a model displayable in an AR scene, in this example, before the change information of the camera is obtained, the method further includes adding the model to a root node in the AR scene view.
In this embodiment, after the AR scene view is created, a scene (SCNScene), which is the rendering target, may be added to it. Each scene comprises a number of nodes; the nodes are the structural elements of the scene and have their own positions and coordinate systems. A model is added to the scene by adding a node: specifically, a model node (modelNode) is created in the scene, and the model to be added is assigned to it.
However, displaying the model requires a carrier, and the root node (rootNode) of the scene is the carrier of all models. When a scene is added to the AR scene view, a root node is set by default, and the model is displayed in the scene only after it is added to the root node. Specifically, a model node is created in the scene, the model node is added to the root node, and the model is assigned to the model node; the model is then added to the scene and displayed.
After the model is added to the root node, it can be displayed in the AR scene view. The transformation constraint is then added to the model so that it moves with the camera, remaining within the camera's viewing angle and improving the interest of the AR experience.
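The steps above (create a model node, attach it under the root node, hand the scene to the view) can be sketched as follows; the box geometry and the 0.5 m position are placeholders standing in for a real card model:

```swift
import SceneKit

// The root node is the carrier of all models: create a model node,
// assign it geometry, and attach it to rootNode so it is displayed.
let scene = SCNScene()
let modelNode = SCNNode(geometry: SCNBox(width: 0.1, height: 0.15,
                                         length: 0.005, chamferRadius: 0))
modelNode.position = SCNVector3(0, 0, -0.5)   // half a metre into the scene
scene.rootNode.addChildNode(modelNode)
// sceneView.scene = scene                    // hand the scene to the ARSCNView (assumed)
```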
In an alternative embodiment, please refer to fig. 6 in combination, in order to further improve the flexibility and the realism of the model, in this embodiment, the method further includes steps S30 to S40.
Step S30: and creating a parent node in the AR scene view, and adding the model to the parent node.
Step S40: adding the transformation constraint to the parent node to cause the model and the parent node to change following changes in the camera.
A parent node is created in the AR scene view, that is, in the scene (SCNScene); the parent node is a blank node with no element value assigned. After it is created, the parent node is added to the root node as a child of the root node, and the model (model node) is then added to the parent node as its child, so that the inheritance relationship becomes: the model node inherits from the parent node, and the parent node inherits from the root node. The transformation constraint is added to the parent node; the parent node then changes following the camera, the model changes with the parent node, and thus, when the camera changes, both the parent node and the model change with it.
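The node hierarchy just described (model under a blank parent, parent under root, constraint on the parent) can be sketched as:

```swift
import SceneKit

// Blank parent node: no geometry or element value is assigned to it.
let scene = SCNScene()
let parentNode = SCNNode()
scene.rootNode.addChildNode(parentNode)   // parent inherits from root
let modelNode = SCNNode()                 // model geometry assigned elsewhere
parentNode.addChildNode(modelNode)        // model inherits from parent
// The transformation constraint goes on the parent, so both the parent
// and the model follow the camera (constraint construction assumed):
// parentNode.constraints = [followCameraConstraint]
```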
In an exemplary embodiment, after adding the transition constraint, the process of making the parent node or model follow the camera change based on the transition constraint can be implemented by the following steps.
The transformation constraint is updated according to the change information of the camera to update the position of the model or the parent node, so that the model or the parent node changes following the camera.
When the position of the camera changes, change information is generated and the transformation constraint is updated from it. The electronic device 100 then calculates, based on the updated constraint, the position information to which the model or the parent node should be adjusted, and adjusts its position accordingly, so that the model or parent node follows the camera. It can be understood that when the position of the parent node is adjusted, the position of the model changes along with it, so the model follows the change of the camera and is always kept within the camera's viewing angle.
In an alternative embodiment, referring to fig. 7 in combination, in order to increase the flexibility of the movement of the model, the model located in the view of the camera is aligned with the camera, and the method further includes steps S50 to S60.
Step S50: creating a billboard constraint and setting a rotating shaft;
step S60: adding the billboard constraint to the model such that the model is movable along the axis of rotation.
The billboard constraint (SCNBillboardConstraint) constrains the orientation of the model: through it, the (x, y, z) axis directions of the model can be adjusted so that the model is always parallel to, and directly facing, the camera. Optionally, after the billboard constraint is created, three axes (x, y, z) are set with the model itself as the origin, as shown in fig. 8. A rotatable axis is then selected among the three; in this embodiment the x axis and the y axis are chosen, and accordingly freeAxes in the billboard constraint is set to SCNBillboardAxisX | SCNBillboardAxisY. With this arrangement, the model can rotate up, down, left, and right about itself while its front always faces the camera.
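The billboard constraint with freeAxes restricted to the x and y axes, as the paragraph describes, looks like this in SceneKit:

```swift
import SceneKit

// Keep the model face-on to the camera, free to rotate only about
// its x and y axes (SCNBillboardAxisX | SCNBillboardAxisY).
let billboard = SCNBillboardConstraint()
billboard.freeAxes = [.X, .Y]
let modelNode = SCNNode()                 // model geometry assigned elsewhere
modelNode.constraints = [billboard]
```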
It should be noted that the transformation constraint contains change information of the model's motion, including rotation angle information and displacement information. To avoid a conflict between the model's own rotation offset and its motion following the camera, optionally, in this embodiment the transformation constraint may be added to the parent node and the billboard constraint added to the model.
In an alternative embodiment, referring to fig. 9 in combination, in order to keep the model moving within a certain range of positions relative to the camera, the method further includes steps S70 to S80.
Step S70: a distance constraint is created and a movement distance is set.
Step S80: adding the distance constraint to the model to move the model within the movement distance.
Through the distance constraint (SCNDistanceConstraint), the model can move up, down, left and right within a certain distance of the camera. Optionally, after creating the distance constraint and setting the farthest and closest distances the model may move in each of these directions, the distance constraint is added to the model, and the model can then only move within the set distances. For example, if the farthest distance of upward movement is set to L1 and the closest to L2, the model can move between distance L2 and distance L1 above the camera. Likewise, if the farthest distance of leftward movement is set to L3 and the closest to L4, the model can move between distance L4 and distance L3 to the left of the camera.
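The bounding behavior amounts to a clamp per direction (SceneKit's SCNDistanceConstraint exposes minimumDistance and maximumDistance for this purpose). A minimal sketch, assuming a single per-direction offset and a hypothetical helper name `clamp_offset`:

```python
# Minimal sketch of the clamping a distance constraint performs: an offset
# from the camera in one direction is kept within [closest, farthest].
def clamp_offset(offset: float, closest: float, farthest: float) -> float:
    """Keep a per-direction offset from the camera within its limits."""
    return max(closest, min(farthest, offset))


# With L2 = 0.5 (closest) and L1 = 3.0 (farthest) for upward movement:
assert clamp_offset(5.0, 0.5, 3.0) == 3.0  # pushed back to the far limit
assert clamp_offset(0.1, 0.5, 3.0) == 0.5  # pulled out to the near limit
assert clamp_offset(1.2, 0.5, 3.0) == 1.2  # already in range, unchanged
```

The patent describes separate up/down/left/right limits, so the same clamp would be applied once per direction.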
It should be noted that, to avoid a conflict between the model's motion within a range of positions relative to the camera and its motion following the camera, in this embodiment a transformation constraint may be added to the parent node and a distance constraint added to the model.
In this embodiment, the distance constraint and the billboard constraint can also be added to the model together to make its behavior more realistic.
The AR model dynamic fixing method provided by this embodiment adds the transformation constraint to the parent node, and the distance constraint and billboard constraint to the model. On one hand, the model changes along with the camera, ensuring that it always remains within the camera's viewing angle; on the other hand, the model can move within a certain range of positions relative to the camera. This realizes dynamic fixing of the model, makes it appear more real and free, achieves a vivid effect in the AR scene, offers more play possibilities, and improves both interest and user experience.
On the basis, please refer to fig. 10 in combination, the embodiment further provides an AR model dynamic fixing device 10, where the AR model dynamic fixing device 10 includes a creating module 11 and a constraint adding module 12.
The creating module 11 is configured to create an AR scene view, start a camera according to attribute information of the AR scene view, and acquire change information of the camera.
The constraint adding module 12 is configured to create a transformation constraint according to the change information, and add the transformation constraint to the model in the AR scene view, so that the model changes according to the change of the camera.
In an optional embodiment, the creating module 11 is further configured to create a parent node in the AR scene view, and add the model to the parent node.
The constraint adding module 12 is further configured to add the transformation constraint to the parent node so that the model and the parent node change following the camera change.
In an optional embodiment, the constraint adding module 12 is further configured to update the transformation constraint according to the change information of the camera to update the location of the model or the parent node, so that the model or the parent node changes following the change of the camera.
In an alternative embodiment, the creation module 11 is further configured to create billboard constraints and set a rotation axis. The constraint adding module 12 is further configured to add the billboard constraint to the model such that the model is movable along the rotational axis.
In an alternative embodiment, the creating module 11 is further configured to create a distance constraint and set a moving distance. The constraint adding module 12 is further configured to add the distance constraint to the model to move the model within the movement distance.
In an optional implementation, the creating module 11 is further configured to: obtain the attribute information of the AR scene view; run the attribute information in a world space capturing component, start the camera, and establish a spatial coordinate system; calculate the change information of the camera through the world space capturing component based on the spatial coordinate system, and upload the change information into the attribute information; and obtain the current image rendering frame from the attribute information and obtain the change information of the camera from that frame.
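The last step of that flow — deriving the camera's change information from successive rendering frames — reduces to differencing the camera pose reported with each frame. A hedged sketch (plain Python; the frame representation and `camera_change` name are assumptions, not the patent's API):

```python
# Sketch: change information as the per-axis translation delta between the
# camera poses attached to two consecutive image rendering frames.
def camera_change(prev_pose, curr_pose):
    """Per-axis translation delta (x, y, z) between two camera poses."""
    return tuple(c - p for p, c in zip(prev_pose, curr_pose))


# Poses reported with three consecutive frames:
frames = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.1, 0.2, 0.0)]
changes = [camera_change(a, b) for a, b in zip(frames, frames[1:])]
```

Each delta in `changes` is what the constraint-adding module would feed into the transformation constraint to reposition the parent node.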
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the AR model dynamic fixing apparatus 10 described above may refer to the corresponding process in the foregoing method, and will not be described in detail herein.
On the basis of the above, the present embodiment further provides a storage medium, in which a computer program is stored, and the computer program, when executed, implements the AR model dynamic fixing method according to any one of the foregoing embodiments.
In summary, according to the AR model dynamic fixing method, apparatus, electronic device and storage medium provided in this embodiment, after the AR scene view is created, the camera is started according to the attribute information of the AR scene view and the change information of the camera is acquired; a transformation constraint is created from that change information and added to the model in the AR scene view, so that the model changes along with the change of the camera. The model is thereby dynamically fixed within the camera's observation angle while appearing more real and free, improving the user experience.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A method for dynamically fixing an AR model, the method comprising:
creating an AR scene view, starting a camera according to the attribute information of the AR scene view, and acquiring the change information of the camera;
creating a transformation constraint according to the change information, and adding the transformation constraint to a model in the AR scene view so that the model changes along with the change of the camera.
2. The method for dynamically fixing an AR model according to claim 1, wherein the method further comprises:
creating a parent node in the AR scene view, and adding the model to the parent node;
adding the transformation constraint to the parent node to cause the model and the parent node to change following changes in the camera.
3. The method for dynamically fixing the AR model according to claim 2, wherein the method further comprises:
updating the transformation constraint according to the change information of the camera to update the position of the model or the parent node, so that the model or the parent node changes along with the change of the camera.
4. The method for dynamically fixing the AR model according to claim 2, wherein the method further comprises:
creating a billboard constraint and setting a rotation axis;
adding the billboard constraint to the model so that the model can rotate about the rotation axis.
5. The method for dynamically fixing the AR model according to claim 2, wherein the method further comprises:
creating a distance constraint and setting a moving distance;
adding the distance constraint to the model to move the model within the movement distance.
6. The method for dynamically fixing the AR model according to claim 1, wherein the step of starting a camera according to the attribute information of the AR scene view comprises:
acquiring attribute information of the AR scene view, operating the attribute information in a world space capturing component, starting the camera, and establishing a space coordinate system;
and calculating change information of the camera through a world space capturing component based on the space coordinate system, and uploading the change information to the attribute information.
7. The AR model dynamic fixation method of claim 6, wherein the step of obtaining the camera change information comprises:
and acquiring a current image rendering frame according to the attribute information, and acquiring the change information of the camera according to the image rendering frame.
8. An AR model dynamic fixing device is characterized by comprising a creating module and a constraint adding module;
the creating module is used for creating an AR scene view, starting a camera according to the attribute information of the AR scene view, and acquiring the change information of the camera;
the constraint adding module is used for creating a transformation constraint according to the change information and adding the transformation constraint to a model in the AR scene view, so that the model changes along with the change of the camera.
9. An electronic device comprising a processor and a non-volatile memory having computer instructions stored thereon, wherein the computer instructions, when executed by the processor, cause the electronic device to perform the method of dynamically fixing an AR model of any one of claims 1-7.
10. A storage medium having stored therein a computer program which, when executed, implements the AR model dynamic fixation method of any one of claims 1-7.
CN202010007875.XA 2020-01-02 2020-01-02 AR model dynamic fixing method and device, electronic equipment and storage medium Pending CN111179438A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010007875.XA CN111179438A (en) 2020-01-02 2020-01-02 AR model dynamic fixing method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN111179438A true CN111179438A (en) 2020-05-19

Family

ID=70647433

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010007875.XA Pending CN111179438A (en) 2020-01-02 2020-01-02 AR model dynamic fixing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111179438A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107622524A (en) * 2017-09-29 2018-01-23 百度在线网络技术(北京)有限公司 Display methods and display device for mobile terminal
CN109147054A (en) * 2018-08-03 2019-01-04 五八有限公司 Setting method, device, storage medium and the terminal of the 3D model direction of AR
CN109300184A (en) * 2018-09-29 2019-02-01 五八有限公司 AR Dynamic Display method, apparatus, computer equipment and readable storage medium storing program for executing
CN109615703A (en) * 2018-09-28 2019-04-12 阿里巴巴集团控股有限公司 Image presentation method, device and the equipment of augmented reality


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112541781A (en) * 2020-11-30 2021-03-23 深圳云天励飞技术股份有限公司 Collaborative planning method and device for camera and edge node, electronic equipment and medium
CN112541781B (en) * 2020-11-30 2024-03-26 深圳云天励飞技术股份有限公司 Collaborative planning method and device for camera and edge node, electronic equipment and medium
CN113936121A (en) * 2021-10-15 2022-01-14 杭州灵伴科技有限公司 AR (augmented reality) label setting method and remote collaboration system
CN113936121B (en) * 2021-10-15 2023-10-13 杭州灵伴科技有限公司 AR label setting method and remote collaboration system
CN115859749A (en) * 2023-02-17 2023-03-28 合肥联宝信息技术有限公司 Constraint building method and device of three-dimensional model, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109939440B (en) Three-dimensional game map generation method and device, processor and terminal
US10755485B2 (en) Augmented reality product preview
CN108939556B (en) Screenshot method and device based on game platform
EP2816519A1 (en) Three-dimensional shopping platform displaying system
CN111803945B (en) Interface rendering method and device, electronic equipment and storage medium
US10943365B2 (en) Method and system of virtual footwear try-on with improved occlusion
CN111179438A (en) AR model dynamic fixing method and device, electronic equipment and storage medium
TW201346640A (en) Image processing device, and computer program product
CN109448050B (en) Method for determining position of target point and terminal
CN108133454B (en) Space geometric model image switching method, device and system and interaction equipment
CN112783700A (en) Computer readable medium for network-based remote assistance system
WO2017113729A1 (en) 360-degree image loading method and loading module, and mobile terminal
CN111142967A (en) Augmented reality display method and device, electronic equipment and storage medium
CN112827169B (en) Game image processing method and device, storage medium and electronic equipment
CN111710314B (en) Display picture adjusting method, intelligent terminal and readable storage medium
CN112675541A (en) AR information sharing method and device, electronic equipment and storage medium
KR102314782B1 (en) apparatus and method of displaying three dimensional augmented reality
KR102176805B1 (en) System and method for providing virtual reality contents indicated view direction
KR101909994B1 (en) Method for providing 3d animating ar contents service using nano unit block
KR101910931B1 (en) Method for providing 3d ar contents service on food using 64bit-identifier
CN112667137B (en) Switching display method and device for house type graph and house three-dimensional model
CN115454250A (en) Method, apparatus, device and storage medium for augmented reality interaction
US20240020910A1 (en) Video playing method and apparatus, electronic device, medium, and program product
WO2020253342A1 (en) Panoramic rendering method for 3d video, computer device, and readable storage medium
Ohta et al. Photo-based Desktop Virtual Reality System Implemented on a Web-browser

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200519