CN112070906A - Augmented reality system and augmented reality data generation method and device

Augmented reality system and augmented reality data generation method and device

Info

Publication number
CN112070906A
Authority
CN
China
Prior art keywords
virtual object
editing
display effect
augmented reality
effect parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010898912.0A
Other languages
Chinese (zh)
Inventor
侯欣如 (Hou Xinru)
栾青 (Luan Qing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202010898912.0A
Publication of CN112070906A
Legal status: Withdrawn

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 19/006: Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Architecture (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present application discloses an augmented reality system and an augmented reality data generation method and device. The system comprises a first editing end configured to: display a three-dimensional virtual model representing a real scene on an editing interface; determine, based on an obtained editing operation, a first display effect parameter of a virtual object superimposed on the three-dimensional virtual model; generate an augmented reality data packet containing the first display effect parameter; and upload the augmented reality data packet to a server. The server is configured to: store the augmented reality data packet, and send the augmented reality data packet matching a received display request to a display terminal. The display terminal is configured to: send the display request to the server, and display the augmented reality effect corresponding to the received augmented reality data packet.

Description

Augmented reality system and augmented reality data generation method and device
Technical Field
The present application relates to, but is not limited to, the field of computer vision technology, and in particular to an augmented reality system and a method, apparatus, device, and storage medium for generating augmented reality data.
Background
Augmented Reality (AR) technology fuses virtual information with real-world information: by rendering virtual objects into a real-time image, it loads virtual objects into the real world and allows interaction with them, so that the real environment and the virtual objects are presented on the same interface in real time. In the related art, an augmented reality system displays an AR effect by recognizing a single image; the displayed AR effects generally lack diversity and cannot satisfactorily meet users' experience requirements.
Disclosure of Invention
In view of this, embodiments of the present application provide an augmented reality system, and a method, an apparatus, a device, and a storage medium for generating augmented reality data.
The technical solutions of the embodiments of the present application are implemented as follows:
in one aspect, an embodiment of the present application provides an augmented reality system, where the system includes:
a first editing end configured to: display a three-dimensional virtual model representing a real scene on an editing interface; determine, based on an obtained editing operation, a first display effect parameter of a virtual object superimposed on the three-dimensional virtual model; generate an augmented reality data packet containing the first display effect parameter; and upload the augmented reality data packet to a server;
a server configured to: store the augmented reality data packet uploaded by the first editing end; and, in response to a received display request, send the augmented reality data packet matching the display request to a display terminal;
a display terminal configured to: send the display request to the server; and display the augmented reality effect corresponding to the received augmented reality data packet.
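To make the data flow among the three components concrete, the following is a minimal, illustrative Python sketch of the augmented reality data packet, the first display effect parameters it carries, and a server that stores packets and answers display requests. Every class, field, and identifier name here is an assumption for illustration; the disclosure prescribes no concrete schema or transport.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class DisplayEffectParams:
    # First display effect parameter of one virtual object (illustrative fields).
    position: Tuple[float, float, float] = (0.0, 0.0, 0.0)     # presentation position
    orientation: Tuple[float, float, float] = (0.0, 0.0, 0.0)  # presentation orientation
    trigger: str = "realtime"   # display trigger condition
    size: float = 1.0           # display size (uniform scale)
    loop_count: int = 1         # loop display count

@dataclass
class ARDataPacket:
    # Augmented reality data packet generated by the first editing end.
    model_id: str  # three-dimensional virtual model the effects attach to
    effects: Dict[str, DisplayEffectParams] = field(default_factory=dict)  # object id -> params

class Server:
    # Minimal in-memory stand-in: stores uploaded packets, answers display requests.
    def __init__(self) -> None:
        self._packets: Dict[str, ARDataPacket] = {}

    def store(self, packet_id: str, packet: ARDataPacket) -> None:
        self._packets[packet_id] = packet       # store the uploaded packet

    def handle_display_request(self, packet_id: str) -> ARDataPacket:
        return self._packets[packet_id]         # packet matching the display request

# The editing end uploads; a display terminal later requests the same packet.
server = Server()
server.store("hall-tour", ARDataPacket(model_id="museum-hall"))
packet = server.handle_display_request("hall-tour")
```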
In some embodiments, the editing operation includes a function selection operation and an editing execution operation;
the first editing end is further configured to: determine an operation position on the three-dimensional virtual model; determine a function to be edited at the operation position; and determine, according to the function to be edited and the editing execution operation, the virtual objects superimposed on the three-dimensional virtual model and the first display effect parameter of each virtual object.
Therefore, different functions to be edited can be provided at different operation positions, which effectively improves the user's operating experience when editing augmented reality data.
In some embodiments, the first editing end is further configured to: determine the function to be edited at the operation position based on an acquired function selection operation; or determine the function to be edited according to the operation position itself.
Therefore, the available editing function can be determined automatically from the user's operation position on the three-dimensional virtual model, which effectively improves the user's operating experience when editing augmented reality data.
In some embodiments, the first editing end is further configured to: when no virtual object exists at the operation position, determine that the function to be edited is adding a new virtual object; and when a virtual object is determined to exist at the operation position, determine that the function to be edited is one of a virtual object removal function and a virtual object modification function.
Therefore, the available editing function can be determined automatically according to whether a virtual object exists at the user's operation position on the three-dimensional virtual model. The user can select the corresponding editing function simply by choosing an operation position that suits the editing requirement, which reduces the editing workload and further improves the operating experience when editing augmented reality data.
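A minimal sketch of this position-based resolution, assuming a hit test (not specified by the disclosure) that reports which virtual object, if any, sits at the operation position; all names are illustrative:

```python
from typing import Optional, Set

def function_to_edit(existing_objects: Set[str], hit_object_id: Optional[str]) -> str:
    # existing_objects: ids of virtual objects currently superimposed on the model.
    # hit_object_id: the object found at the operation position, or None.
    if hit_object_id is None or hit_object_id not in existing_objects:
        return "add"              # no virtual object at the position: add a new one
    return "remove_or_modify"     # an object exists: offer removal or modification

assert function_to_edit({"lamp", "label-3"}, None) == "add"
assert function_to_edit({"lamp", "label-3"}, "lamp") == "remove_or_modify"
```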
In some embodiments, the editing execution operation includes an effect setting operation; the first editing end is further configured to: when the function to be edited is modifying a virtual object, acquire a first display effect parameter of the virtual object in response to an effect setting operation for the virtual object; and update the display effect of the virtual object based on the acquired first display effect parameter.
Therefore, the first display effect parameter of an already added virtual object can be modified, which better meets the user's editing requirements and improves the diversity of augmented reality effects.
In some embodiments, the editing execution operation includes a virtual object selection operation and an effect setting operation; the first editing end is further configured to: when the function to be edited is adding a new virtual object, send a material package acquisition request to the server in response to the virtual object selection operation; display the virtual object corresponding to the material package on the editing interface based on the material package requested from the server; acquire a first display effect parameter of the virtual object in response to an effect setting operation for the virtual object; and determine the display effect of the virtual object based on the acquired first display effect parameter. The server is further configured to return, in response to the received material package acquisition request, the material package matching the request.
Therefore, new virtual objects can be added to the three-dimensional virtual model, which better meets the user's editing requirements and improves the diversity of augmented reality effects.
In some embodiments, the first display effect parameter includes a presentation position, and the effect setting operation includes a position moving operation; the first editing end is further configured to: acquire, in response to a position moving operation for the virtual object, the target position of the moving operation; and determine the target position as the presentation position of the virtual object in the three-dimensional virtual model.
Therefore, the presentation position of a virtual object in the three-dimensional virtual model can be modified by a position moving operation, which improves the user's experience when editing augmented reality effect data, better meets editing requirements, and improves the diversity of augmented reality effects.
In some embodiments, the editing execution operation includes a virtual object removal operation; the first editing end is further configured to, when the function to be edited is removing a virtual object, remove the virtual object from the three-dimensional virtual model in response to the virtual object removal operation.
Therefore, virtual objects in the three-dimensional virtual model can be removed, which better meets the user's editing requirements and improves the diversity of augmented reality effects.
In some embodiments, the first display effect parameter includes at least one of: a presentation pose, a display trigger condition, a display size, and a loop display count.
Therefore, the user has greater flexibility when editing augmented reality data, which improves the diversity of virtual object display effects in the augmented reality effect and provides a better augmented reality experience.
In some embodiments, the display trigger condition includes one of: displaying the virtual object in real time; triggering display of the virtual object when the display terminal presenting the augmented reality effect is at a specific position; and triggering display of the virtual object when the display terminal presenting the augmented reality effect detects a specific gesture.
Therefore, the diversity of virtual object display effects in the augmented reality effect can be further improved, providing a better augmented reality experience.
In some embodiments, the first editing end is further configured to: send a three-dimensional virtual model acquisition request to the server; and receive the three-dimensional virtual model from the server. The server is further configured to return, in response to the received three-dimensional virtual model acquisition request, the three-dimensional virtual model matching the request.
In some embodiments, the first editing end is further configured to: acquire, in response to a three-dimensional virtual model import operation, the three-dimensional virtual model imported by that operation.
In this way, since no modeling computation for the three-dimensional virtual model is required at the first editing end, the amount of computation at the first editing end when generating augmented reality data can be reduced, and the hardware requirements on the first editing end can be lowered.
In some embodiments, the first editing end is further configured to: send a navigation path model acquisition request to the server; display the received navigation path model on the editing interface; determine, in response to an editing operation for the navigation path model, a second display effect parameter of a virtual object superimposed on the navigation path model; and generate an augmented reality navigation data packet containing the second display effect parameter. The server is further configured to return, in response to the received navigation path model acquisition request, the navigation path model matching the request.
Therefore, based on the navigation path model, the user can edit and generate, through the first editing end and according to actual requirements, a data packet for rendering an augmented reality effect in which virtual objects are superimposed on a navigation path, and augmented reality navigation can then be realized from that data packet. This satisfies users' experience requirements well and improves the diversity of augmented reality effects.
In some embodiments, the first editing end is further configured to: obtain the first display effect parameter of each virtual object currently superimposed on the three-dimensional virtual model by sending a virtual object acquisition request to the server; and display the virtual objects on the editing interface based on the first display effect parameter of each virtual object.
Therefore, while editing, the user can see the virtual objects currently superimposed on the three-dimensional virtual model and the display effect of each, so the three-dimensional virtual model can be edited more accurately, which effectively improves the user's operating experience when editing augmented reality data.
In another aspect, an embodiment of the present application provides a method for generating augmented reality data, applied to a first editing end. The method includes: displaying, on an editing interface, a three-dimensional virtual model representing a real scene; determining, based on an obtained editing operation, a first display effect parameter of a virtual object superimposed on the three-dimensional virtual model; and generating an augmented reality data packet containing the first display effect parameter.
In another aspect, an embodiment of the present application provides an augmented reality data generating apparatus, including: a first display module configured to display, on an editing interface, a three-dimensional virtual model representing a real scene; a first determining module configured to determine, based on an obtained editing operation, a first display effect parameter of a virtual object superimposed on the three-dimensional virtual model; and a first generating module configured to generate an augmented reality data packet containing the first display effect parameter.
In yet another aspect, the present application provides a computer device including a memory and a processor, the memory storing a computer program executable on the processor, where the processor implements the steps of the above method when executing the program.
In yet another aspect, the present application provides a computer storage medium on which a computer program is stored, the computer program implementing the steps of the above method when executed by a processor.
In yet another aspect, the present application provides a computer program including computer readable code which, when run on a display device, causes a processor in the display device to execute the steps implementing the above method.
The embodiments of the present application provide a solution for editing and generating augmented reality effects. Through the first editing end, and based on a three-dimensional virtual model representing a real scene, a user can edit and generate an augmented reality data packet for rendering an augmented reality effect in which virtual objects are superimposed on the real scene; the generated packet is uploaded to the server, which stores it. When a display terminal needs to present the augmented reality effect of the real scene with the superimposed virtual objects, it obtains the corresponding augmented reality data packet by sending a display request to the server, and renders and displays the effect based on that packet. The user can therefore edit the content of the augmented reality experience according to actual demand, which satisfies users' experience requirements well, improves the diversity of augmented reality effects, and better supports the basic augmented reality needs of industries such as smart industry and smart cities.
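As a sketch of the display-terminal side of this flow, the code below assumes a JSON-over-HTTP interface with a hypothetical /packets/{id} endpoint; neither the transport nor the endpoint shape is specified by the disclosure.

```python
import json
from urllib import request

def fetch_and_render(server_url: str, packet_id: str) -> None:
    # Send the display request and receive the matching augmented reality data packet.
    with request.urlopen(f"{server_url}/packets/{packet_id}") as resp:
        packet = json.load(resp)
    # Render each virtual object according to its first display effect parameter.
    for obj_id, params in packet["effects"].items():
        render_virtual_object(obj_id, params)

def render_virtual_object(obj_id: str, params: dict) -> None:
    # Placeholder for the terminal's actual AR renderer.
    print(f"render {obj_id} at {params.get('position')} trigger={params.get('trigger')}")
```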
Drawings
Fig. 1A is a schematic structural diagram of an augmented reality system according to an embodiment of the present disclosure;
fig. 1B is a schematic implementation flow diagram of a method for generating augmented reality data according to an embodiment of the present application;
fig. 2A is a schematic implementation flow diagram of a method for generating augmented reality data according to an embodiment of the present application;
fig. 2B is a schematic implementation flow diagram of a method for generating augmented reality data according to an embodiment of the present application;
fig. 2C is a schematic implementation flow chart of a method for determining virtual objects superimposed on a three-dimensional virtual model and a first display effect parameter of each virtual object according to an embodiment of the present application;
fig. 2D is a schematic implementation flow diagram of a method for generating augmented reality data according to an embodiment of the present application;
fig. 2E is a schematic implementation flow diagram of a method for generating augmented reality data according to an embodiment of the present application;
fig. 2F is a schematic implementation flow diagram of a method for generating augmented reality data according to an embodiment of the present application;
fig. 2G is a schematic implementation flow diagram of a method for generating augmented reality data according to an embodiment of the present application;
fig. 2H is a schematic implementation flow diagram of a method for generating augmented reality data according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an augmented reality system according to an embodiment of the present application;
fig. 4A is a schematic diagram of the appearance of a collector provided in an embodiment of the present application;
fig. 4B is a schematic diagram of a panoramic image captured by a collector provided in an embodiment of the present application;
fig. 5 is a schematic view of an editing interface of an AR editing tool according to an embodiment of the present application;
fig. 6 is a schematic structural diagram illustrating a composition of a system for generating a three-dimensional map and a three-dimensional virtual model according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a device for generating augmented reality data according to an embodiment of the present application;
fig. 8 is a hardware entity diagram of a computer device according to an embodiment of the present disclosure.
Detailed Description
To make the purpose, technical solutions, and advantages of the present application clearer, the technical solutions of the present application are described in further detail below with reference to the drawings and embodiments. The described embodiments should not be considered as limiting the present application; all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Where the terms "first", "second", and "third" appear in the specification, they are used merely to distinguish similar items and do not imply a particular ordering among those items. It is to be understood that "first", "second", and "third" may be interchanged in a particular sequence or order where permitted, so that the embodiments of the application described herein can be performed in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the present application.
An embodiment of the present application provides an augmented reality system, where fig. 1A is a schematic diagram of a composition structure of the augmented reality system in an embodiment of the present application, and as shown in fig. 1A, the system includes: the first editing terminal 100, the server 200 and the display terminal 300, wherein:
a first editing end 100, configured to: displaying a three-dimensional virtual model for representing a real scene on an editing interface; determining a first display effect parameter of a virtual object superimposed on the three-dimensional virtual model based on the obtained editing operation; generating an augmented reality data packet comprising the first display effect parameter according to the first display effect parameter; uploading the augmented reality data packet to a server;
the server 200 is configured to: storing the augmented reality data packet uploaded by the first editing end; responding to a received display request, and sending an augmented reality data packet matched with the display request to a display terminal;
a display terminal 300 configured to: sending the display request to the server; and displaying an augmented reality effect corresponding to the augmented reality data packet according to the received augmented reality data packet.
Here, the first editing terminal may be any suitable electronic device with an interface interaction function, such as a notebook computer, a mobile phone, a tablet computer, a palmtop computer, a personal digital assistant, a digital television (TV), or a desktop computer. The first editing terminal may include a processor, which may be an integrated circuit chip having signal processing capability. In implementation, the first editing end may complete interface interaction, information processing, and the like through an integrated hardware logic circuit in the processor or through instructions in software form. The processor may be a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or the like. A general-purpose processor may be a microprocessor or any conventional processor.
The server may be any suitable server, either a single server or a server cluster. In implementation, the server may be deployed locally, remotely, or in the cloud; this is not limited herein.
The display terminal may be any suitable terminal device supporting augmented reality technology, such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a personal digital assistant, a portable media player, a smart speaker, a navigation device, a display device, a wearable device such as a smart bracelet, a Virtual Reality (VR) device, an augmented reality device, a pedometer, a digital television (TV), or a desktop computer.
The first editing end and the server, and the server and the display terminal, may be communicatively connected in any suitable manner. The communication connection may be a wired connection or a wireless connection, such as a Bluetooth or Wireless Fidelity (Wi-Fi) connection.
The three-dimensional virtual model is a virtual model established from a real scene and can represent the current real scene. It may include, but is not limited to, three-dimensional materials such as real scene models and real object models that are combined with positioning information and scaled in a certain proportional relationship to the real objects. In implementation, the first editing end may reconstruct the three-dimensional virtual model from acquired image data of the real scene: for example, it may capture images of the real scene, extract dense point cloud data from the captured images, perform dense reconstruction, and align the coordinate system of the reconstructed virtual scene with that of the real scene, obtaining a three-dimensional virtual model of the virtual scene in a certain proportional relationship with the real scene.
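The alignment step can be pictured as a similarity transform applied to the reconstructed point cloud. The sketch below assumes the scale, rotation, and translation have already been estimated by a registration step; the values used are purely illustrative.

```python
import numpy as np

def align_to_real_scene(points: np.ndarray, scale: float,
                        rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    # points: (N, 3) reconstructed point cloud; returns points in real-scene coordinates.
    return scale * points @ rotation.T + translation

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
aligned = align_to_real_scene(pts, scale=2.0, rotation=np.eye(3), translation=np.zeros(3))
print(aligned)  # second point lands at (2, 0, 0): a 2:1 proportional relationship
```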
The editing operation is an operation performed by a user on the editing interface, and may include, but is not limited to, adding a virtual object to the three-dimensional virtual model, removing a virtual object, or editing the first display effect parameter of a virtual object. For example, the first editing end may obtain the editing operation by detecting a user's click operation on the editing interface, or by receiving an operation instruction sent by the user and determining the editing operation from the received instruction; the embodiments of the present application are not limited in this respect.
The virtual objects superimposed on the three-dimensional virtual model include, but are not limited to, figures, articles, text, and other objects, and can be determined by the user through editing operations. For example, virtual props for decorating desks may be superimposed on a three-dimensional virtual model representing a real office scene; a virtual digital person for explaining exhibits may be superimposed on a three-dimensional virtual model representing a real museum scene; or virtual labels for marking and describing each building, together with guide lines corresponding to those labels, may be superimposed on a three-dimensional virtual model representing a real building scene. In implementation, the first editing end may generate the material package for rendering a virtual object using image, video, or Three-Dimensional (3D) model generation technologies, or may obtain the material package of the edited virtual object from local storage or from the server; this is not limited herein.
The first display effect parameter of a virtual object is an effect parameter describing how a virtual object added on the three-dimensional virtual model is displayed on the display terminal, and may include, but is not limited to, one or more of the virtual object's presentation position, orientation, presentation duration, display color, interaction manner, display trigger condition, display size, loop display count, and the like. In implementation, the user may edit the first display effect parameter of each virtual object superimposed on the three-dimensional virtual model through editing operations, and the first editing end may determine the first display effect parameter of each virtual object according to the obtained editing operations.
The augmented reality data packet may be a data packet for rendering an augmented reality effect of a real scene represented by the three-dimensional virtual model overlaid with the virtual objects, and may include a first display effect parameter of each virtual object overlaid on the three-dimensional virtual model. In some embodiments, the augmented reality data packet may further include an identification of each virtual object superimposed on the three-dimensional virtual model, and the material packet for rendering the virtual object may be obtained according to the identification of the virtual object. In some embodiments, a material package for rendering each virtual object superimposed on the three-dimensional virtual model may also be included in the augmented reality data package.
After the first editing end generates the augmented reality data packet, the augmented reality data packet can be uploaded to the server. And after receiving the augmented reality data packet uploaded by the first editing end, the server end can store the augmented reality data packet. In implementation, the augmented reality data packet may be stored in a local memory of the server or in a database, and a person skilled in the art may select an appropriate manner to store the augmented reality data packet according to actual situations, which is not limited herein.
The display request may be a request for calling the augmented reality data packet sent by the display terminal to the server. In some embodiments, the display terminal may send a display request to the server when a specific condition is satisfied. Here, the specific condition may include, but is not limited to, one or more of a location condition, a time condition, and the like. For example, the display terminal may obtain a current positioning result, map the current positioning result to a corresponding position on the three-dimensional virtual model, further detect whether an augmented reality data packet is configured at the position, and directly send a display request to the server if the augmented reality data packet is configured at the position, so as to obtain the configured augmented reality data packet from the server. In some embodiments, the display terminal may also send a display request to the server in response to a display operation triggered by a user when receiving the display operation.
After receiving the display request, the server can obtain an augmented reality data packet matched with the display request from the stored augmented reality data packets, and send the augmented reality data packet to the display terminal. In implementation, matching information such as an identifier, a name, or a version number of the augmented reality data packet to be requested may be carried in the presentation request, and the server may obtain the corresponding augmented reality data packet according to the matching information.
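A minimal sketch of this server-side matching, assuming the display request carries a name and version; the exact matching fields (identifier, name, version number) are examples given above, and the key layout here is an assumption.

```python
from typing import Dict, Optional, Tuple

def match_packet(store: Dict[Tuple[str, str], dict], request_info: dict) -> Optional[dict]:
    # request_info carries matching information such as name and version.
    key = (request_info["name"], request_info.get("version", "latest"))
    return store.get(key)  # None when no stored packet matches the display request

store = {("mall-guide", "1.0"): {"effects": {}}}
assert match_packet(store, {"name": "mall-guide", "version": "1.0"}) is not None
```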
After receiving the augmented reality data packet sent by the server, the display terminal can display an augmented reality effect corresponding to the augmented reality data packet.
Based on the augmented reality system, the embodiment of the present application provides a method for generating augmented reality data, which is applied to a first editing end, and the method can be executed by a processor of the first editing end. As shown in fig. 1B, the method includes:
step S101, displaying a three-dimensional virtual model for representing a real scene on an editing interface;
step S102, determining a first display effect parameter of a virtual object superposed on the three-dimensional virtual model based on the obtained editing operation;
step S103, generating an augmented reality data packet comprising the first display effect parameter according to the first display effect parameter.
The embodiments of the present application provide a solution for editing and generating augmented reality effects. Through the first editing end, and based on a three-dimensional virtual model representing a real scene, a user can edit and generate an augmented reality data packet for rendering an augmented reality effect in which virtual objects are superimposed on the real scene; the generated packet is uploaded to the server, which stores it. When a display terminal needs to present the augmented reality effect of the real scene with the superimposed virtual objects, it obtains the corresponding augmented reality data packet by sending a display request to the server, and renders and displays the effect based on that packet. The user can therefore edit the content of the augmented reality experience according to actual demand, which satisfies users' experience requirements well, improves the diversity of augmented reality effects, and better supports the basic augmented reality needs of industries such as smart industry and smart cities.
The embodiment of the application provides a method for generating augmented reality data, which is applied to a first editing end and can be executed by a processor of the first editing end. As shown in fig. 2A, the method includes:
step S201, displaying a three-dimensional virtual model for representing a real scene on an editing interface;
step S202, determining an operation position on the three-dimensional virtual model;
here, the operation position is the target position at which the user needs to edit a virtual object. It may be the position corresponding to the user's click on the three-dimensional virtual model, or the position already selected by the user on the three-dimensional virtual model when a menu item is selected from a menu bar in the editing interface. A person skilled in the art may determine the operation position in any suitable manner according to the actual situation; this is not limited herein.
Step S203, determining the function to be edited of the operation position;
here, the function to be edited of the operation position is an editing function to be performed at the operation position, and may include, but is not limited to, an addition of a virtual object, a virtual object removal function, a virtual object modification function, and the like. In implementation, the determination may be performed according to a click operation or a menu item selection operation of a user in an editing interface, and a person skilled in the art may select a suitable manner for determining a function to be edited according to an actual situation in implementation, which is not limited herein.
In some embodiments, step S203 may comprise: and determining the function to be edited of the operation position based on the acquired function selection operation.
Here, the function selection operation is an operation performed to determine the function to be edited at the operation position; based on the acquired function selection operation, the function to be edited at the operation position can be determined. For example, the function selection operation may be a selection made by the user in a to-be-edited function list displayed at a position on the three-dimensional virtual model: after a single click, double click, or right-click at an operation position on the three-dimensional virtual model, a list of candidate functions is displayed at that position, and the user selects from the list to determine the function to be edited there. The function selection operation may also be a menu item selection in a menu bar of the editing interface. A person skilled in the art may choose suitable forms for the function selection operation and for determining the function to be edited according to the actual situation; this is not limited herein.
In other embodiments, step S203 may include: and determining the function to be edited according to the operation position.
Here, the editing functions that can be performed at different positions on the three-dimensional virtual model are different, and therefore, the function to be edited can be determined according to the operation position. In implementation, the function to be edited may be determined according to a corresponding relationship between a specific position range and the function to be edited, and different functions to be edited may also be determined by detecting a virtual object on the three-dimensional virtual model at the operation position or a state of a corresponding real scene in the three-dimensional virtual model at the operation position, which is not limited in the embodiment of the present application.
Step S204, according to the function to be edited and the obtained editing execution operation, determining virtual objects superposed on the three-dimensional virtual model and a first display effect parameter of each virtual object;
here, the edit execution operation is an operation performed by the user to complete the corresponding function after the function selection operation is performed to determine the function type of the edit operation, and may be a single operation or an operation group composed of a series of operations in common. In implementation, the editing execution operation may be an operation performed by a user to add a virtual object, remove a virtual object, edit a virtual object, and the like, and may include, but is not limited to, one or more of selection, movement, setting of a first display effect parameter, and the like of a virtual object, which is not limited in this embodiment of the present application.
Step S205, generating an augmented reality data packet including the first display effect parameter according to the first display effect parameter.
With the method for generating augmented reality data provided in this embodiment, the operation position on the three-dimensional virtual model and the function to be edited at that position can be determined, and the virtual objects superimposed on the three-dimensional virtual model and the first display effect parameter of each virtual object can then be determined from the determined function to be edited and the obtained editing execution operation. Therefore, different functions to be edited can be provided at different operation positions, which effectively improves the user's operating experience when editing augmented reality data. Furthermore, since the function to be edited can be determined from the operation position, the available editing function can be determined automatically from the user's operation position on the three-dimensional virtual model, further improving the operating experience.
An embodiment of the present application provides a method for generating augmented reality data, which is applied to a first editing end, where the method may be executed by a processor of the first editing end, as shown in fig. 2B, and the method includes:
step S301, displaying a three-dimensional virtual model for representing a real scene on an editing interface;
step S302, determining an operation position on the three-dimensional virtual model;
step S303, when no virtual object exists at the operation position, determining that the function to be edited is adding a new virtual object;
here, whether a virtual object exists at the operation position can be determined by querying the virtual objects currently superimposed on the three-dimensional virtual model. When no virtual object exists at the operation position, a virtual object can be newly added there. For example, when the user clicks on the three-dimensional virtual model and there is no virtual object at the click position, the function to be edited may be adding a new virtual object, and the user may add one at the click position.
step S304, when a virtual object is determined to exist at the operation position, determining that the function to be edited is one of: a virtual object removal function and a virtual object modification function;
here, when a virtual object exists at the operation position, the user may remove or modify it. For example, when the user clicks on the three-dimensional virtual model and a virtual object is at the click point, the function to be edited may be the virtual object removal function, allowing the user to remove the object, or the virtual object modification function, allowing the user to modify it.
Step S305, according to the function to be edited and the editing execution operation, determining virtual objects superposed on the three-dimensional virtual model and a first display effect parameter of each virtual object;
step S306, generating an augmented reality data packet comprising the first display effect parameter according to the first display effect parameter.
In some embodiments, the editing execution operation includes an effect setting operation; correspondingly, step S305 may include: step S351a, when the function to be edited is modifying a virtual object, acquiring a first display effect parameter of the virtual object in response to an effect setting operation for the virtual object; and step S351b, updating the display effect of the virtual object based on the acquired first display effect parameter. Here, the effect setting operation is an operation performed by the user to set the first display effect parameter of the virtual object, and may be a single operation or a group of operations; this is not limited herein.
In some embodiments, the editing execution operation includes a virtual object selection operation and an effect setting operation, and correspondingly, as shown in fig. 2C, the step S305 may include:
step S352a, when the function to be edited is a newly added virtual object, responding to the virtual object selection operation, and sending a material package acquisition request to the server;
here, the virtual object selection operation is an operation performed when the user selects a virtual object.
The material package is a data package for rendering the virtual object, and may contain three-dimensional or two-dimensional content such as the virtual object's augmented reality model, animations, and special effects. The server may store material packages of virtual objects in advance for editing calls, and the first editing end obtains the material package of a selected virtual object by sending a material package acquisition request to the server. In implementation, the material package acquisition request may carry information such as the identifier and name of the selected virtual object, and the server returns the corresponding material package to the first editing end according to that information.
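A sketch of this request/response exchange, using a hypothetical in-memory material store; the store contents and field names are assumptions for illustration.

```python
MATERIAL_STORE = {
    "digital-guide": {"model": "guide.glb", "animation": "wave.anim"},
}

def handle_material_request(req: dict) -> dict:
    # req carries the identifier (and possibly name) of the selected virtual object.
    package = MATERIAL_STORE.get(req["object_id"])
    if package is None:
        return {"error": "no material package matches the request"}
    return {"object_id": req["object_id"], "package": package}

print(handle_material_request({"object_id": "digital-guide"}))
```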
Step S352b, displaying, on the editing interface, a virtual object corresponding to the material package based on the material package requested from the server;
step S352c, in response to an effect setting operation for the virtual object, acquiring a first display effect parameter of the virtual object;
in step S352d, the display effect of the virtual object is determined based on the acquired first display effect parameter.
Here, the effect setting operation is an operation performed by the user to set the first display effect parameter of the virtual object, and may be a single operation or a group of operations, which is not limited herein.
In some embodiments, the first display effect parameter includes a presentation position, the effect setting operation includes a position moving operation, and correspondingly, the step S351a or the step S352c may include: in response to a position moving operation for the virtual object, acquiring a target position of the moving operation; determining the target position as a presentation position of the virtual object in the three-dimensional virtual model.
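A minimal sketch of this position-move handling in steps S351a/S352c, using plain dictionaries for the first display effect parameters; the names are illustrative, not prescribed by the disclosure.

```python
def on_position_move(effects: dict, obj_id: str, target: tuple) -> None:
    # effects maps object id -> first display effect parameters (plain dicts here).
    # The target position of the move becomes the object's presentation position.
    effects.setdefault(obj_id, {})["position"] = target

effects = {}
on_position_move(effects, "virtual-lamp", (1.5, 0.0, 2.0))
assert effects["virtual-lamp"]["position"] == (1.5, 0.0, 2.0)
```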
In some embodiments, the editing execution operation includes a virtual object removal operation, and correspondingly, the step S305 may include: step S353a, when the function to be edited is to remove a virtual object, removing the virtual object from the three-dimensional virtual model in response to the virtual object removal operation.
With the method for generating augmented reality data provided in this embodiment, when no virtual object exists at the operation position the user can add one, and when a virtual object is determined to exist there the user can remove or modify it. Therefore, the available editing function can be determined automatically according to whether a virtual object exists at the user's operation position on the three-dimensional virtual model, so the user can select the corresponding editing function simply by choosing a suitable operation position, which reduces the editing workload and further improves the operating experience when editing augmented reality data.
An embodiment of the present application provides a method for generating augmented reality data, which is applied to a first editing end, where the method may be executed by a processor of the first editing end, as shown in fig. 2D, and includes:
step S401, displaying a three-dimensional virtual model for representing a real scene on an editing interface;
step S402, determining a first display effect parameter of a virtual object superimposed on the three-dimensional virtual model based on the obtained editing operation; wherein the first display effect parameter comprises at least one of: a presentation pose, a display trigger condition, a display size, and a loop display count;
here, the rendering pose is a pose when the virtual object is rendered in the three-dimensional virtual model, and may include, but is not limited to, a rendering position, a rendering orientation, and the like.
The display triggering condition is a condition which needs to be met by triggering and displaying the virtual object when the display terminal displays the augmented reality effect corresponding to the augmented reality data packet. In implementation, the display trigger condition may be a condition defined by a user, or may be a condition selected by the user from a plurality of preset conditions, and a person skilled in the art may select an appropriate manner to determine the display trigger condition according to actual conditions, which is not limited herein.
The display size may include, but is not limited to, one or more of the virtual object's display length, width, height, diagonal length, and so on. The loop display count is the number of times the virtual object is repeated when it is displayed in a loop in the augmented reality effect.
In some embodiments, the display trigger condition may include one of: displaying the virtual object in real time; triggering to display the virtual object when a display terminal for displaying the augmented reality effect is at a specific position; and triggering the virtual object to be displayed when a specific gesture is detected by a display terminal for displaying the augmented reality effect. Here, the specific position may be a position preset by the user or a default position, and is not limited herein. The specific gesture may be a gesture preset by the user, or may be a default gesture, which is not limited herein.
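The three trigger conditions enumerated above can be folded into a single checker, sketched below; the position and gesture encodings are assumptions made for illustration.

```python
import math

def should_display(trigger: dict, terminal_pos=None, detected_gesture=None) -> bool:
    kind = trigger["kind"]
    if kind == "realtime":          # display the virtual object in real time
        return True
    if kind == "at_position":       # the display terminal is at a specific position
        tx, ty = trigger["position"]
        px, py = terminal_pos       # caller supplies the terminal's current position
        return math.hypot(px - tx, py - ty) <= trigger.get("radius", 1.0)
    if kind == "gesture":           # the display terminal detects a specific gesture
        return detected_gesture == trigger["gesture"]
    return False

assert should_display({"kind": "gesture", "gesture": "wave"}, detected_gesture="wave")
```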
Step S403, generating an augmented reality data packet including the first display effect parameter according to the first display effect parameter.
The method for generating augmented reality data provided in this embodiment allows editing of the presentation pose, display trigger condition, display size, and loop display count of virtual objects superimposed on the three-dimensional virtual model. The user therefore has greater flexibility when editing augmented reality data, which improves the diversity of virtual object display effects in the augmented reality effect and provides a better augmented reality experience.
An embodiment of the present application provides a method for generating augmented reality data, which is applied to a first editing end, where the method may be executed by a processor of the first editing end, as shown in fig. 2E, and includes:
step S501, sending a three-dimensional virtual model acquisition request to a server;
step S502, receiving the three-dimensional virtual model from the server;
here, the server may store the established three-dimensional virtual model representing the real scene. The three-dimensional virtual model can be established by the server according to the acquired image data of the real scene, or can be uploaded to the server after being established in advance by the user. The first editing end can obtain the three-dimensional virtual model stored by the server end by sending a three-dimensional virtual model obtaining request to the server end.
Step S503, displaying a three-dimensional virtual model for representing a real scene on an editing interface;
step S504, determining a first display effect parameter of a virtual object superposed on the three-dimensional virtual model based on the obtained editing operation;
step S505, generating an augmented reality data packet including the first display effect parameter according to the first display effect parameter.
According to the method for generating the augmented reality data, the three-dimensional virtual model used for representing the real scene and stored by the server can be obtained by sending the three-dimensional virtual model obtaining request to the server. In this way, since the modeling calculation of the three-dimensional virtual model is not required at the first editing end, the calculation amount of the first editing end in generating the augmented reality data can be reduced, and the hardware requirement on the first editing end can be reduced.
An embodiment of the present application provides a method for generating augmented reality data, which is applied to a first editing end, where the method may be executed by a processor of the first editing end, as shown in fig. 2F, and includes:
step S601, responding to three-dimensional virtual model importing operation, and acquiring a three-dimensional virtual model imported by the three-dimensional virtual model importing operation;
here, the three-dimensional virtual model may be previously established from image data of a real scene. The three-dimensional virtual model importing operation may be any suitable operation for importing the three-dimensional virtual model into the first editing end, and may be executed in an editing interface or through a specific script, which is not limited in this embodiment of the present application.
Step S602, displaying a three-dimensional virtual model for representing a real scene on an editing interface;
step S603 of determining a first display effect parameter of a virtual object superimposed on the three-dimensional virtual model based on the obtained editing operation;
step S604, generating an augmented reality data packet including the first display effect parameter according to the first display effect parameter.
According to the method for generating the augmented reality data, the three-dimensional virtual model used for representing the real scene can be obtained through three-dimensional virtual model importing operation. In this way, since the modeling calculation of the three-dimensional virtual model is not required at the first editing end, the calculation amount of the first editing end in generating the augmented reality data can be reduced, and the hardware requirement on the first editing end can be reduced.
An embodiment of the present application provides a method for generating augmented reality data, which is applied to a first editing end, where the method may be executed by a processor of the first editing end, as shown in fig. 2G, and includes:
step S701, displaying a three-dimensional virtual model for representing a real scene on an editing interface;
step S702, determining a first display effect parameter of a virtual object superimposed on the three-dimensional virtual model based on the obtained editing operation;
step S703 of generating an augmented reality data packet including the first display effect parameter according to the first display effect parameter;
step S704, sending a navigation path model acquisition request to the server;
here, the navigation path model is a virtual model for representing a specific navigation path in the real scene, and may be established in advance according to the specific navigation path. The specific navigation path may be obtained from a third-party map application, or may be collected by a specific data collection device, and a person skilled in the art may select an appropriate manner to obtain the specific navigation path according to an actual situation, which is not limited in the embodiment of the present application.
The established navigation path model can be stored in the server side, and the first editing side can obtain the navigation path model stored in the server side by sending a navigation path model obtaining request to the server side.
Step S705, displaying the received navigation path model on the editing interface;
here, after receiving the navigation path model, the first editing end may display the navigation path model on an editing interface, so that the user may refer to the displayed navigation path model to edit the virtual objects that need to be superimposed on the navigation path and the display effect of each virtual object.
Step S706, responding to the obtained editing operation, and determining a second display effect parameter of the virtual object superposed on the navigation path model;
here, the second display effect parameter is an effect parameter describing when the virtual object added on the navigation path model is displayed on the display terminal, and may include, but is not limited to, one or more of a presentation position, an orientation, a presentation time length, a display color, an interaction manner, a display trigger condition, a display size, a cycle display number, and the like of the virtual object. In implementation, the user may edit the second display effect parameter of each virtual object superimposed on the navigation model through an editing operation, and the first editing end may determine the second display effect parameter of each virtual object according to the obtained editing operation.
For a specific embodiment of determining the second display effect parameter of the virtual object superimposed on the navigation path model, reference may be made to the specific embodiment of determining the first display effect parameter of the virtual object superimposed on the three-dimensional virtual model in step S102, which is not described herein again.
Step S707, according to the second display effect parameter, generating an augmented reality navigation data packet including the second display effect parameter.
Here, the augmented reality navigation data packet is a packet used to render an augmented reality effect in which virtual objects are superimposed on a navigation path. It may be generated in the same manner as the augmented reality data packet containing the first display effect parameter in step S103 above.
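A minimal sketch of step S707 follows, bundling the edited parameters into a navigation data packet and reusing the DisplayEffectParameter sketch from above; the packet layout (a JSON manifest inside a zip archive) is an assumption, since the patent only requires that the second display effect parameter be included.

```python
import json
import zipfile
from dataclasses import asdict

def build_ar_navigation_packet(path_model_id: str,
                               params: list,  # list of DisplayEffectParameter
                               out_path: str) -> str:
    """Bundle second display effect parameters into an AR navigation data packet."""
    manifest = {
        "navigation_path_model": path_model_id,
        "virtual_objects": [asdict(p) for p in params],
    }
    with zipfile.ZipFile(out_path, "w") as zf:
        zf.writestr("manifest.json", json.dumps(manifest, indent=2))
    return out_path
```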
In the method for generating augmented reality data provided by this embodiment, the first editing end determines, in response to the obtained editing operation, a second display effect parameter of a virtual object superimposed on the navigation path model, and generates from it a data packet for rendering an augmented reality effect in which the navigation path and the virtual object are combined. A user can therefore edit such a packet at the first editing end according to actual needs, based on the navigation path model, and then realize augmented reality navigation from it, which both meets users' experience requirements and increases the diversity of augmented reality effects.
An embodiment of the present application provides a method for generating augmented reality data, which is applied to a first editing end, where the method may be executed by a processor of the first editing end, as shown in fig. 2H, and includes:
step S801, displaying a three-dimensional virtual model for representing a real scene on an editing interface;
step S802, sending a virtual object acquisition request to the server, and acquiring the virtual objects currently superimposed on the three-dimensional virtual model together with the first display effect parameter of each virtual object;
Here, the server may store, for each three-dimensional virtual model, the virtual objects currently superimposed on it and the first display effect parameter of each object. By sending a virtual object acquisition request to the server, the first editing end obtains the virtual objects currently superimposed on the matching three-dimensional virtual model and their first display effect parameters. In implementation, the request may carry matching information such as the identifier or name of the three-dimensional virtual model, and the server returns, according to this information, each virtual object currently superimposed on the matching model together with its first display effect parameter.
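The exchange in step S802 might look like the following sketch; the endpoint, the model_id parameter, and the response fields are assumptions.

```python
import requests

def fetch_current_virtual_objects(server_url: str, model_id: str) -> list:
    """Get the virtual objects currently superimposed on a model, with parameters."""
    resp = requests.get(f"{server_url}/virtual-objects",
                        params={"model_id": model_id}, timeout=10)
    resp.raise_for_status()
    # Expected shape (illustrative):
    # [{"object_id": "...", "display_effect": {...}}, ...]
    return resp.json()
```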
Step S803, displaying each virtual object on the editing interface based on the first display effect parameter of each virtual object;
step S804, determining a first display effect parameter of a virtual object superimposed on the three-dimensional virtual model based on the obtained editing operation for the three-dimensional virtual model;
step S805, generating an augmented reality data packet including the first display effect parameter according to the first display effect parameter.
In this method for generating augmented reality data, before the first display effect parameter of a virtual object superimposed on the three-dimensional virtual model is determined, the first editing end can obtain the currently superimposed virtual objects and the first display effect parameter of each by sending a virtual object acquisition request to the server, and display them on the editing interface accordingly. When performing editing operations, the user can therefore see which virtual objects are already superimposed on the three-dimensional virtual model and how each is displayed, which allows the model to be edited more accurately and effectively improves the user's experience when editing augmented reality data.
An embodiment of the present application provides an augmented reality system, as shown in fig. 1A, the system includes: the first editing terminal 100, the server 200 and the display terminal 300, wherein:
a first editing end 100, configured to: displaying a three-dimensional virtual model for representing a real scene on an editing interface; determining a first display effect parameter of a virtual object superimposed on the three-dimensional virtual model based on the obtained editing operation; generating an augmented reality data packet comprising the first display effect parameter according to the first display effect parameter; uploading the augmented reality data packet to a server;
the server 200 is configured to: storing the augmented reality data packet uploaded by the first editing end; responding to a received display request, and sending an augmented reality data packet matched with the display request to a display terminal;
a display terminal 300 configured to: sending the display request to the server; and displaying an augmented reality effect corresponding to the augmented reality data packet according to the received augmented reality data packet.
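The three roles above could be exercised against a toy HTTP server like the sketch below; the routes, the packet-id scheme, and the in-memory store are assumptions (the patent does not fix a transport), shown only to make the upload/store/serve flow concrete.

```python
from flask import Flask, abort, request

app = Flask(__name__)
_packets = {}  # packet id -> stored augmented reality data packet (bytes)

@app.post("/packets/<packet_id>")
def upload_packet(packet_id: str):
    # the first editing end uploads its generated AR data packet
    _packets[packet_id] = request.get_data()
    return {"stored": packet_id}

@app.get("/packets/<packet_id>")
def serve_packet(packet_id: str):
    # the display terminal's display request is answered with the matching packet
    if packet_id not in _packets:
        abort(404)
    return _packets[packet_id]
```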
In some embodiments, the editing operation includes a function selection operation and an editing execution operation; the first editing end is further configured to: determine an operating position on the three-dimensional virtual model; determine the function to be edited at the operating position; and determine, according to the function to be edited and the editing execution operation, the virtual objects superimposed on the three-dimensional virtual model and the first display effect parameter of each virtual object.
In some embodiments, the first editing end is further configured to: when no virtual object exists at the operating position, determine that the function to be edited is adding a new virtual object; and when a virtual object exists at the operating position, determine that the function to be edited is one of: removing the virtual object, modifying the virtual object.
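The rule just described reduces to a small branch; the hit-testing helper object_at and the chosen argument standing in for the function selection operation are hypothetical names.

```python
def resolve_function_to_edit(model, position, chosen=None) -> str:
    """Map an operating position to the function to be edited."""
    obj = model.object_at(position)  # hypothetical hit test on the 3D model
    if obj is None:
        return "add"                 # no object here: add a new virtual object
    # an object already exists: removal or modification, per the user's selection
    return chosen if chosen in {"remove", "modify"} else "modify"
```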
In some embodiments, the editing execution operation includes an effect setting operation; the first editing end is further configured to: when the function to be edited is modifying a virtual object, acquire a first display effect parameter of the virtual object in response to an effect setting operation for that object; and update the display effect of the virtual object based on the acquired parameter.
In some embodiments, the editing execution operation includes a virtual object selection operation and an effect setting operation; the first editing end is further configured to: when the function to be edited is adding a new virtual object, send a material package acquisition request to the server in response to the virtual object selection operation; display the virtual object corresponding to the material package requested from the server on the editing interface; acquire a first display effect parameter of the virtual object in response to an effect setting operation for it; and determine the display effect of the virtual object based on the acquired parameter. The server is further configured to return, in response to a received material package acquisition request, the material package matching that request.
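Sketched below is one way the add-object flow could run end to end, reusing DisplayEffectParameter from the earlier sketch; the material-package endpoint and the returned dictionary are assumptions.

```python
import requests

def add_virtual_object(server_url: str, material_id: str,
                       effect) -> dict:  # effect: a DisplayEffectParameter
    """Request a material package, then attach the edited display effect."""
    resp = requests.get(f"{server_url}/material-packages/{material_id}",
                        timeout=10)
    resp.raise_for_status()
    material = resp.content           # shown on the editing interface
    return {"material_id": material_id,
            "material_bytes": len(material),
            "display_effect": effect}  # first display effect parameter
```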
In some embodiments, the first display effect parameter includes a presentation position, and the effect setting operation includes a position moving operation; the first editing end is further configured to: acquire the target position of a position moving operation for the virtual object in response to that operation; and determine the target position as the presentation position of the virtual object in the three-dimensional virtual model.
In some embodiments, the editing execution operation includes a virtual object removal operation; the first editing end is further configured to, when the function to be edited is removing a virtual object, remove the virtual object from the three-dimensional virtual model in response to the removal operation.
In some embodiments, the first display effect parameter includes at least one of: a presentation pose, a display trigger condition, a display size, and a cycle display count.
In some embodiments, the display trigger condition includes one of: displaying the virtual object in real time; triggering display of the virtual object when the display terminal showing the augmented reality effect is at a specific position; and triggering display of the virtual object when the display terminal showing the augmented reality effect detects a specific gesture.
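The three trigger conditions can be written out as an enum plus a per-frame check the display terminal might run; the 2-meter proximity threshold and the gesture flag are assumptions.

```python
from enum import Enum

class DisplayTrigger(Enum):
    REALTIME = "realtime"        # display the virtual object immediately
    AT_POSITION = "at_position"  # display when the terminal reaches a spot
    ON_GESTURE = "on_gesture"    # display when a specific gesture is detected

def should_display(trigger: DisplayTrigger, *, distance_m: float = float("inf"),
                   gesture_seen: bool = False) -> bool:
    """Decide whether a virtual object should be shown this frame."""
    if trigger is DisplayTrigger.REALTIME:
        return True
    if trigger is DisplayTrigger.AT_POSITION:
        return distance_m < 2.0  # assumed proximity threshold
    return gesture_seen          # DisplayTrigger.ON_GESTURE
```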
In some embodiments, the first editing end is further configured to: sending a three-dimensional virtual model acquisition request to the server; receiving the three-dimensional virtual model from the server; the server is further configured to: and responding to the received three-dimensional virtual model obtaining request, and returning the three-dimensional virtual model matched with the three-dimensional virtual model obtaining request.
In some embodiments, the first editing end is further configured to: and responding to the three-dimensional virtual model import operation, and acquiring the three-dimensional virtual model imported by the three-dimensional virtual model import operation.
In some embodiments, the first editing end is further configured to: sending a navigation path model acquisition request to the server; displaying the received navigation path model on the editing interface; determining a second display effect parameter of a virtual object superimposed on the navigation path model in response to an editing operation for the navigation path model; generating an augmented reality navigation data packet containing the second display effect parameter according to the second display effect parameter; the server is further configured to: and responding to the received navigation path model acquisition request, and returning the navigation path model matched with the navigation path model acquisition request.
In some embodiments, the first editing end is further configured to: obtain the first display effect parameter of each virtual object currently superimposed on the three-dimensional virtual model by sending a virtual object acquisition request to the server; and display each virtual object on the editing interface based on its first display effect parameter.
The above description of the system embodiment is similar to that of the method embodiments and has similar beneficial effects. For technical details not disclosed in the system embodiment, reference may be made to the description of the method embodiments of the present application.
An exemplary application of the embodiments of the present application in a practical scenario is described below, taking the cultural tourism industry as an example. This embodiment provides an augmented reality system that brings users virtual-real integrated AR interactive experiences based on spatial positioning, including AR scene reproduction, AR exhibits, AR navigation, AR games, AR marketing, and the like, and supplies the content, software and hardware tools, and cloud services needed to generate these AR experiences, forming an end-to-end comprehensive solution. The same tools and cloud services can also support the basic AR needs of industries such as smart industry and smart cities.
Fig. 3 is a schematic structural diagram of a composition of an augmented reality system provided in an embodiment of the present application, and as shown in fig. 3, the system includes the following modules:
The data acquisition module 301 is used for acquiring data.
In some possible implementations, the capture tool 312 or the panoramic camera 311 provided in the embodiments of the present application may be used to capture high-definition images, scene data under multiple illumination conditions, parameters of various sensors, and the like. For images of a scene that has changed considerably since capture, the strongly changed parts of the image can be locally completed. Sensor parameters include, but are not limited to, microphone parameters, vision sensor parameters, and the like. In this embodiment, the collector 401 shown in fig. 4A may be used to collect image data, for example the panoramic image 402 or a panoramic video shown in fig. 4B.
The navigation editing tool 302 is used for path planning 321 and navigation editing 322.
In some possible implementations, the navigation editing tool 302 supports editing points of information (POIs), navigation paths, and other navigation-related information on the navigation schematic diagram. The navigation schematic diagram is a three-dimensional high-precision map generated from the acquired image data.
The AR editing tool 303 is used to set the AR content to be presented, as well as its display logic, state changes, display effects, and the like.
In some possible implementations, three-dimensional materials, such as a three-dimensional virtual model that is tied to positioning and scaled to real objects in a fixed proportional relation, are first imported into the AR editing tool 303; the AR content to be presented and its display logic, state changes, display effects, and the like are then set in the tool. The three-dimensional virtual model is shown as model 501 in fig. 5 and is presented in the editing interface 502, where a designer can edit the virtual objects in model 501 and the first display effect parameter of each. Model 501 is obtained by dense model reconstruction of the collected data using a cloud tool in the cloud service 304. The toolbar of the editing interface 502 includes: file 51, edit 52, resource 53, game object 54, window 55, and help 56; the editing tools in the interface include the main camera 511, directional light 512, and the like; the objects that can be edited include a call 513, a cloud tower 514, a trademark 515, and the like.
After the AR content is edited in the AR editing tool 303, the editing result can be synchronized to the cloud server. Editing of AR content in this embodiment includes off-site editing 332, that is, adjusting, in a design interface on a computer, the presentation effect of the virtual objects to be superimposed on the preset three-dimensional virtual model.
The cloud service 304 includes: a map service 341 for positioning and navigation, an experience service 342 for AR content, and a collaboration service 343 for multi-person synchronization. Among them:
The map service 341 is configured to generate a three-dimensional map from the acquired image data or to obtain one from a third party. The three-dimensional map can contain a navigation path model and is used to position the interacting object, so that an AR navigation service can be provided to the user.
In some possible implementations, a three-dimensional virtual model (e.g., a dense reconstruction model) may be created from the acquired image data and imported into the AR editing tool, so that the tool can generate material packages based on the model. Fig. 6 is a schematic structural diagram of a system for generating the three-dimensional map and the three-dimensional virtual model according to an embodiment of the present application; as shown in fig. 6, the system includes:
A data collection module 601, configured to collect data, where the collected data includes video data 611, signal data 612, sensor data 613, and the like.
A cloud platform 602, comprising a cloud tool 603 and a cloud service 604, wherein:
The cloud tool 603 is configured to perform point cloud reconstruction 631, signal positioning modeling 632, map stitching 633, and dense reconstruction 634 on the data acquired by the data collection module 601.
The cloud service 604 uses the cloud tool 603 to generate a three-dimensional map 641 that supports positioning and a dense reconstruction model 642.
In some possible implementations, the three-dimensional map 641 is invoked by the client 643 to locate the interacting object, while the dense reconstruction model 642 lets designers edit, in an editor 644, AR data that matches real objects.
The experience service 342 provides the experience package 344 and the material package 345.
In some possible implementations, the experience package 344 is a comprehensive interactive experience package containing AR content, interactions, functionality, and the like, whereas the material package 345 is three-dimensional or two-dimensional content such as AR models, animations, and special effects, without added interaction. Both the editing tools and the application terminal may invoke the experience package 344 and the material package 345.
The collaboration service 343 handles functions related to multi-person interaction; the effects, progress, and the like of an interaction are synchronized to every participant through this service.
The Software Development Kit (SDK) 305 includes a local SDK 351 invoked by an application terminal and an SDK 352 cooperating with a cloud.
In some possible implementations, the local SDK 351 is an SDK whose computation flows are completed entirely locally. For the cloud-cooperating SDK 352, the main computation is completed in the cloud while simpler processing runs locally: the data the cloud needs is transmitted to the cloud server, and the computed result returned by the cloud is then received. The functions the local SDK 351 can implement include gesture recognition 31, callout 32, and SLAM 33; the functions the cloud-cooperating SDK 352 can implement include visual positioning 34, signal positioning 35, experience package parsing 36, and the like.
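For the visual positioning function, the split could look like the sketch below: the heavy input ships to the server and only light handling stays on the device. The endpoint and payload format are assumptions.

```python
import requests

def visual_positioning(server_url: str, frame_jpeg: bytes) -> dict:
    """Cloud-cooperating path: the main computation runs server-side."""
    resp = requests.post(f"{server_url}/visual-positioning",
                         data=frame_jpeg,
                         headers={"Content-Type": "image/jpeg"}, timeout=10)
    resp.raise_for_status()
    return resp.json()  # e.g. the pose computed and returned by the cloud
```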
In the embodiments of the present application, the SDK is used by the companion interactive application (Player APP) and may also be provided to an application terminal, which integrates it into its own application (APP).
The interactive application (Player APP) 306 is installed in the terminal device.
The functions of the Player APP 306 mainly include, but are not limited to: AR interaction experience 361, AR navigation 362, AR navigation 363, AR photography 364, AR messages 365, AR graffiti 367, and AR likes.
The companion AR hardware 307 includes AR glasses 371, a tablet computer with a matching handheld protective case 372, mobile terminals such as mobile phones, and accessories such as matching housings. The Player APP 306 can be installed on any of this hardware.
The data statistics module 308 counts the various data generated throughout the augmented reality data generation system so as to obtain quantified operational references.
In some possible implementations, the data statistics 308 include APP statistics 381, cloud service statistics 382, and editing tool statistics 383, where:
APP statistics 381 count the data generated while the Player APP 306 is in use.
Cloud service statistics 382 count the data generated in the cloud service 304.
Editing tool statistics 383 count the data generated while the navigation editing tool 302 and the AR editing tool 303 are in use.
It should be noted that, in implementation, the AR editing tool may serve as the first editing end in the above embodiments of the present application; the cloud platform, cloud tools, and cloud services may serve as the server; and the companion AR hardware may serve as the display terminal.
Based on the foregoing embodiments, an embodiment of the present application provides an augmented reality data generation apparatus. The units the apparatus includes, and the modules within each unit, may be implemented by a processor in a computer device or, of course, by specific logic circuits. In implementation, the processor may be a Central Processing Unit (CPU), a Microprocessor (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Fig. 7 is a schematic structural diagram of a composition of an augmented reality data generating apparatus according to an embodiment of the present application, and as shown in fig. 7, an apparatus 700 includes: a first display module 710, a first determination module 720, and a first generation module 730, wherein:
a first display module 710 for displaying a three-dimensional virtual model representing a real scene on an editing interface;
a first determining module 720, configured to determine, based on the obtained editing operation, a first display effect parameter of a virtual object superimposed on the three-dimensional virtual model;
the first generating module 730 is configured to generate an augmented reality data packet including the first display effect parameter according to the first display effect parameter.
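One way to realize the three modules is a single class whose methods mirror them, reusing DisplayEffectParameter from the earlier sketch; the method bodies are illustrative stubs, since the patent specifies behavior rather than implementation.

```python
import json
from dataclasses import asdict

class ARDataGeneratorDevice:
    """Sketch of the apparatus 700: the three modules map to three methods."""

    def first_display(self, model_name: str) -> None:
        # first display module: show the 3D virtual model on the editing interface
        print(f"editing interface now shows model '{model_name}'")

    def first_determine(self, edit_op: dict):
        # first determining module: derive the parameter from the editing operation
        return DisplayEffectParameter(
            position=tuple(edit_op.get("position", (0.0, 0.0, 0.0))))

    def first_generate(self, effect) -> str:
        # first generating module: serialize an AR data packet payload
        return json.dumps({"display_effect": asdict(effect)})
```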
In some embodiments, the editing operation includes a function selection operation and an editing execution operation; the first determining module is further configured to: determine an operating position on the three-dimensional virtual model; determine the function to be edited at the operating position; and determine, according to the function to be edited and the editing execution operation, the virtual objects superimposed on the three-dimensional virtual model and the first display effect parameter of each virtual object.
In some embodiments, the first determining module is further configured to: determine the function to be edited at the operating position based on the acquired function selection operation; or determine the function to be edited according to the operating position.
In some embodiments, the first determining module is further configured to: when no virtual object exists at the operating position, determine that the function to be edited is adding a new virtual object; and when a virtual object exists at the operating position, determine that the function to be edited is one of: removing the virtual object, modifying the virtual object.
In some embodiments, the editing execution operation includes an effect setting operation. The first determining module is further configured to: when the function to be edited is modifying a virtual object, acquire a first display effect parameter of the virtual object in response to an effect setting operation for that object; and update the display effect of the virtual object based on the acquired parameter.
In some embodiments, the editing execution operation includes a virtual object selection operation and an effect setting operation. The first determining module is further configured to: when the function to be edited is adding a new virtual object, send a material package acquisition request to the server in response to the virtual object selection operation; display the virtual object corresponding to the material package requested from the server on the editing interface; acquire a first display effect parameter of the virtual object in response to an effect setting operation for it; and determine the display effect of the virtual object based on the acquired parameter.
In some embodiments, the first display effect parameter includes a presentation position and the effect setting operation includes a position moving operation. The first determining module is further configured to: acquire the target position of a position moving operation for the virtual object in response to that operation; and determine the target position as the presentation position of the virtual object in the three-dimensional virtual model.
In some embodiments, the editing execution operation includes a virtual object removal operation. The first determining module is further configured to: when the function to be edited is removing a virtual object, remove the virtual object from the three-dimensional virtual model in response to the removal operation.
In some embodiments, the first display effect parameter includes at least one of: a presentation pose, a display trigger condition, a display size, and a cycle display count.
In some embodiments, the display trigger condition includes one of: displaying the virtual object in real time; triggering display of the virtual object when the display terminal showing the augmented reality effect is at a specific position; and triggering display of the virtual object when the display terminal showing the augmented reality effect detects a specific gesture.
In some embodiments, the apparatus further comprises: the first sending module is used for sending a three-dimensional virtual model obtaining request to the server; a first receiving module, configured to receive the three-dimensional virtual model from the server.
In some embodiments, the apparatus further comprises: the first obtaining module is used for responding to three-dimensional virtual model importing operation and obtaining a three-dimensional virtual model imported by the three-dimensional virtual model importing operation.
In some embodiments, the apparatus further comprises: the second sending module is used for sending a navigation path model obtaining request to the server; the second receiving module is used for displaying the received navigation path model on the editing interface; a second determining module, configured to determine, in response to the obtained editing operation, a second display effect parameter of the virtual object superimposed on the navigation path model; and the second generation module is used for generating an augmented reality navigation data packet containing the second display effect parameter according to the second display effect parameter.
In some embodiments, the apparatus further comprises: a second acquisition module, configured to obtain the virtual objects currently superimposed on the three-dimensional virtual model and the first display effect parameter of each virtual object by sending a virtual object acquisition request to the server; and a second display module, configured to display each virtual object on the editing interface based on its first display effect parameter.
The above description of the apparatus embodiments, similar to the above description of the method embodiments, has similar beneficial effects as the method embodiments. For technical details not disclosed in the embodiments of the apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
In the embodiments of the present application, if the method for generating augmented reality data is implemented in the form of a software functional module and sold or used as a standalone product, it may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, or the portions contributing to the related art, may essentially be embodied in the form of a software product stored in a storage medium and including several instructions for causing an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB disk, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, the embodiment of the present application provides a computer device, which includes a memory and a processor, where the memory stores a computer program that can be run on the processor, and the processor implements the steps in the above method when executing the program.
Correspondingly, the embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program realizes the steps of the above method when being executed by a processor.
Here, it should be noted that: the above description of the storage medium and device embodiments is similar to the description of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be noted that fig. 8 is a schematic hardware entity diagram of a computer device in an embodiment of the present application, and as shown in fig. 8, the hardware entity of the computer device 800 includes: a processor 801, a communication interface 802, and a memory 803, wherein,
the processor 801 generally controls the overall operation of the computer device 800.
The communication interface 802 may enable the computer device to communicate with other terminals or servers via a network.
The memory 803 is configured to store instructions and applications executable by the processor 801, and can also buffer data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or already processed by the processor 801 and by modules of the computer device 800; it may be implemented by flash memory (FLASH) or random access memory (RAM).
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The above description is only for the embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (30)

1. An augmented reality system, the system comprising:
a first editing end for: displaying a three-dimensional virtual model for representing a real scene on an editing interface; determining a first display effect parameter of a virtual object superimposed on the three-dimensional virtual model based on the obtained editing operation; generating an augmented reality data packet comprising the first display effect parameter according to the first display effect parameter; uploading the augmented reality data packet to a server;
a server configured to: storing the augmented reality data packet uploaded by the first editing end; responding to a received display request, and sending an augmented reality data packet matched with the display request to a display terminal;
the display terminal is used for: sending the display request to the server; and displaying an augmented reality effect corresponding to the augmented reality data packet according to the received augmented reality data packet.
2. The system of claim 1, wherein the editing operations include a function selection operation and an editing execution operation;
the first editing end is further configured to: determining an operating position on the three-dimensional virtual model; determining the function to be edited at the operation position; and determining virtual objects superimposed on the three-dimensional virtual model and a first display effect parameter of each virtual object according to the function to be edited and the editing execution operation.
3. The system of claim 2, wherein the first editing end is further configured to:
determining a function to be edited of the operation position based on the acquired function selection operation; or,
and determining the function to be edited according to the operation position.
4. The system of claim 3, wherein the first editing end is further configured to: when no virtual object exists at the operation position, determining that the function to be edited is adding a new virtual object; when a virtual object exists at the operation position, determining that the function to be edited is one of: a virtual object removal function, a virtual object modification function.
5. The system according to claim 4, wherein the editing execution operation comprises an effect setting operation;
the first editing end is further configured to: when the function to be edited is to modify a virtual object, responding to an effect setting operation aiming at the virtual object, and acquiring a first display effect parameter of the virtual object; and updating the display effect of the virtual object based on the acquired first display effect parameter.
6. The system according to claim 4 or 5, wherein the editing execution operation includes a virtual object selection operation and an effect setting operation;
the first editing end is further configured to: when the function to be edited is adding a new virtual object, responding to the virtual object selection operation, and sending a material package acquisition request to the server; displaying a virtual object corresponding to the material package on the editing interface based on the material package requested from the server; responding to an effect setting operation for the virtual object, and acquiring a first display effect parameter of the virtual object; determining the display effect of the virtual object based on the acquired first display effect parameter;
and the server is further configured to respond to the received material package acquisition request and return the material package matching the material package acquisition request.
7. The system according to claim 5 or 6, wherein the first display effect parameter comprises a presentation position, and the effect setting operation comprises a position moving operation;
the first editing end is further configured to: in response to a position moving operation for the virtual object, acquiring a target position of the moving operation; determining the target position as a presentation position of the virtual object in the three-dimensional virtual model.
8. The system according to any one of claims 4 to 7, wherein the editing execution operation comprises a virtual object removal operation;
the first editing end is further configured to, when the function to be edited is to remove a virtual object, remove the virtual object from the three-dimensional virtual model in response to the virtual object removal operation.
9. The system of any of claims 1 to 8, wherein the first display effect parameter comprises at least one of: a presentation pose, a display trigger condition, a display size, and a cycle display count.
10. The system of claim 9, wherein the display trigger condition comprises one of: displaying the virtual object in real time; triggering display of the virtual object when a display terminal for displaying the augmented reality effect is at a specific position; and triggering display of the virtual object when a specific gesture is detected by a display terminal for displaying the augmented reality effect.
11. The system according to any one of claims 1 to 10,
the first editing end is further configured to: sending a three-dimensional virtual model acquisition request to the server; receiving the three-dimensional virtual model from the server;
the server is further configured to: and responding to the received three-dimensional virtual model obtaining request, and returning the three-dimensional virtual model matched with the three-dimensional virtual model obtaining request.
12. The system according to any one of claims 1 to 10,
the first editing end is further configured to: and responding to the three-dimensional virtual model import operation, and acquiring the three-dimensional virtual model imported by the three-dimensional virtual model import operation.
13. The system according to any one of claims 1 to 12,
the first editing end is further configured to: sending a navigation path model acquisition request to the server; displaying the received navigation path model on the editing interface; determining a second display effect parameter of a virtual object superimposed on the navigation path model in response to an editing operation for the navigation path model; generating an augmented reality navigation data packet containing the second display effect parameter according to the second display effect parameter;
the server is further configured to: and responding to the received navigation path model acquisition request, and returning the navigation path model matched with the navigation path model acquisition request.
14. The system according to any one of claims 1 to 13, wherein the first editing end is further configured to: obtaining a first display effect parameter of each virtual object currently superimposed on the three-dimensional virtual model by sending a virtual object acquisition request to the server; and displaying the virtual objects on the editing interface based on the first display effect parameter of each virtual object.
15. A method for generating augmented reality data is applied to a first editing end, and the method comprises the following steps:
displaying a three-dimensional virtual model for representing a real scene on an editing interface;
determining a first display effect parameter of a virtual object superimposed on the three-dimensional virtual model based on the obtained editing operation;
and generating an augmented reality data packet comprising the first display effect parameter according to the first display effect parameter.
16. The method of claim 15, wherein the editing operations include a function selection operation and an editing execution operation; the determining, based on the obtained editing operation, a first display effect parameter of a virtual object superimposed on the three-dimensional virtual model includes:
determining an operating position on the three-dimensional virtual model;
determining a function to be edited of the operation position;
and determining virtual objects superimposed on the three-dimensional virtual model and a first display effect parameter of each virtual object according to the function to be edited and the editing execution operation.
17. The method according to claim 16, wherein the determining the function to be edited of the operation position comprises:
determining a function to be edited of the operation position based on the acquired function selection operation; or,
and determining the function to be edited according to the operation position.
18. The method according to claim 17, wherein the determining a function to be edited according to the operation position comprises:
when no virtual object exists at the operation position, determining that the function to be edited is adding a new virtual object;
when a virtual object exists at the operation position, determining that the function to be edited is one of: a virtual object removal function, a virtual object modification function.
19. The method according to claim 18, wherein the editing execution operation includes an effect setting operation;
the determining, according to the function to be edited and the editing execution operation, a virtual object superimposed on the three-dimensional virtual model and a first display effect parameter of each virtual object includes:
when the function to be edited is to modify a virtual object, responding to an effect setting operation aiming at the virtual object, and acquiring a first display effect parameter of the virtual object;
and updating the display effect of the virtual object based on the acquired first display effect parameter.
20. The method according to claim 18 or 19, wherein the editing execution operation includes a virtual object selection operation and an effect setting operation;
the determining, according to the function to be edited and the editing execution operation, a virtual object superimposed on the three-dimensional virtual model and a first display effect parameter of each virtual object includes:
when the function to be edited is adding a new virtual object, responding to the virtual object selection operation, and sending a material package acquisition request to the server;
displaying a virtual object corresponding to the material package on the editing interface based on the material package requested from the server;
responding to an effect setting operation aiming at the virtual object, and acquiring a first display effect parameter of the virtual object;
and determining the display effect of the virtual object based on the acquired first display effect parameter.
21. The method according to claim 19 or 20, wherein the first display effect parameter comprises a presentation position, and the effect setting operation comprises a position moving operation;
the obtaining of the first display effect parameter of the virtual object in response to the effect setting operation for the virtual object includes:
in response to a position moving operation for the virtual object, acquiring a target position of the moving operation;
determining the target position as a presentation position of the virtual object in the three-dimensional virtual model.
22. The method of any of claims 18 to 21, wherein the editing execution operation comprises a virtual object removal operation;
the determining, according to the function to be edited and the editing execution operation, a virtual object superimposed on the three-dimensional virtual model and a first display effect parameter of each virtual object includes:
when the function to be edited is to remove a virtual object, the virtual object is removed from the three-dimensional virtual model in response to the virtual object removal operation.
23. The method according to any of claims 15 to 20, wherein the first display effect parameter comprises at least one of: a presentation pose, a display trigger condition, a display size, and a cycle display count.
24. The method of claim 23, wherein the display trigger condition comprises one of:
displaying the virtual object in real time;
triggering to display the virtual object when a display terminal for displaying the augmented reality effect is at a specific position;
and triggering the virtual object to be displayed when a specific gesture is detected by a display terminal for displaying the augmented reality effect.
25. The method of any of claims 15 to 24, wherein prior to displaying the three-dimensional virtual model characterizing the real scene on the editing interface, the method further comprises:
sending a three-dimensional virtual model acquisition request to the server; receiving the three-dimensional virtual model from the server;
or responding to the three-dimensional virtual model import operation, and acquiring the three-dimensional virtual model imported by the three-dimensional virtual model import operation.
26. The method of any one of claims 15 to 25, further comprising:
sending a navigation path model acquisition request to the server;
displaying the received navigation path model on the editing interface;
determining a second display effect parameter of the virtual object superimposed on the navigation path model in response to the obtained editing operation;
and generating an augmented reality navigation data packet containing the second display effect parameter according to the second display effect parameter.
27. The method according to any of claims 15 to 26, wherein prior to said determining a first display effect parameter of a virtual object superimposed on said three-dimensional virtual model based on said obtained editing operation, the method further comprises:
acquiring a first display effect parameter of each virtual object currently superimposed on the three-dimensional virtual model by sending a virtual object acquisition request to the server;
and displaying the virtual objects on the editing interface based on the first display effect parameter of each virtual object.
28. An apparatus for generating augmented reality data, comprising:
the first display module is used for displaying a three-dimensional virtual model for representing a real scene on an editing interface;
a first determining module, configured to determine, based on the obtained editing operation, a first display effect parameter of a virtual object superimposed on the three-dimensional virtual model;
and the first generation module is used for generating an augmented reality data packet comprising the first display effect parameter according to the first display effect parameter.
29. A computer device comprising a memory and a processor, the memory storing a computer program operable on the processor, wherein the processor implements the steps of the method of any one of claims 15 to 27 when executing the program.
30. A computer storage medium having a computer program stored thereon, the computer program, when being executed by a processor, performing the steps of the method of any one of claims 15 to 27.