CN113253842A - Scene editing method and related device and equipment - Google Patents

Scene editing method and related device and equipment

Info

Publication number
CN113253842A
Authority
CN
China
Prior art keywords
virtual digital
placing
digital model
user
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110552314.2A
Other languages
Chinese (zh)
Inventor
李宇飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd
Priority to CN202110552314.2A
Publication of CN113253842A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815: Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845: Interaction techniques based on graphical user interfaces [GUI] for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/0486: Drag-and-drop
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Abstract

The application discloses a scene editing method, a related device and equipment, wherein the scene editing method comprises the following steps: acquiring image data of a real scene; performing three-dimensional reconstruction on the image data to obtain a three-dimensional model corresponding to a real scene; converting the three-dimensional model by using a virtual digital space to obtain a virtual digital model of a real scene; and receiving a placing instruction sent by a user by using the visual wearing equipment, and placing articles in the virtual digital model based on the placing instruction. By the scheme, the degree of freedom of editing the real scene can be improved, and the virtual-real fused display effect of the real scene can be enhanced.

Description

Scene editing method and related device and equipment
Technical Field
The present application relates to the field of augmented reality technologies, and in particular, to a scene editing method, and a related apparatus and device.
Background
Augmented Reality (AR) is a technology that skillfully fuses virtual information with the real world. It widely applies a plurality of technical means such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction and sensing, and applies computer-generated virtual information such as text, images, three-dimensional models, music and video to the real world after simulation, so that the two kinds of information complement each other and the real world is thereby enhanced.
At present, the editing of Augmented Reality (AR) content scenes based on high-precision maps is almost always completed on a computer. Editing is done on a two-dimensional screen using a three-dimensional development tool such as Unity. In this process, the user needs to import the collected and generated spatial model into a three-dimensional tool such as Unity, place the target AR content at a certain position in the spatial model by dragging, rotating and zooming with the mouse, and adjust the orientation of the AR content. The position information and orientation information of the AR content are then exported to a server, so that a user can watch the AR content in the actual physical space with an intelligent terminal. While editing on the screen, however, the user can only imagine, through the two-dimensional screen, how the content will finally appear in the actual physical space.
Moreover, since the user needs to operate the three-dimensional model on a screen, the operation is limited by the size of the computer screen and its purely planar display capability, so that the placement of the AR content is often inaccurate, unintuitive and inconvenient.
Disclosure of Invention
The application at least provides a scene editing method and a related device and equipment.
A first aspect of the present application provides a scene editing method, including: acquiring image data of a real scene; performing three-dimensional reconstruction on the image data to obtain a three-dimensional model corresponding to a real scene; converting the three-dimensional model by using a virtual digital space to obtain a virtual digital model of a real scene; and receiving a placing instruction sent by a user by using the visual wearing equipment, and placing articles in the virtual digital model based on the placing instruction.
Therefore, the three-dimensional model corresponding to the real scene is obtained by performing three-dimensional reconstruction on the acquired image data of the real scene, and the virtual digital space is utilized to convert the three-dimensional model to obtain the virtual digital model of the real scene, so that the virtual digital model of the real scene can be adapted to a format which can be supported by the visual wearable device, and the virtual digital model can be applied to the visual wearable device to provide a three-dimensional display effect for a user. And articles are placed in the virtual digital model based on the placing instruction, so that the degree of freedom of editing the real scene can be improved, and the final display effect of virtual-real fusion of the real scene is enhanced.
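For illustration only, the following Python sketch shows how the four steps of the first aspect could be chained together. The function names, data structures and file paths (reconstruct_3d, convert_to_headset_format, place_object, the GLTF asset names) are hypothetical placeholders and not part of the claimed method.

```python
# Hypothetical end-to-end sketch of the claimed scene editing flow.
# All function names and data structures below are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class VirtualDigitalModel:
    """Headset-compatible model of the real scene (illustrative)."""
    mesh_path: str                                   # converted mesh asset
    placements: dict = field(default_factory=dict)   # target point id -> object path


def acquire_image_data(scene_id: str) -> list:
    """Step 1: collect video frames / photos of the real scene (stubbed)."""
    return [f"{scene_id}/frame_{i:04d}.jpg" for i in range(3)]


def reconstruct_3d(frames: list) -> str:
    """Step 2: run a reconstruction algorithm (e.g. SfM) and return a mesh path (stubbed)."""
    return "reconstruction/scene_mesh.obj"


def convert_to_headset_format(mesh_path: str) -> VirtualDigitalModel:
    """Step 3: convert the mesh into the data format of the virtual digital space (stubbed)."""
    return VirtualDigitalModel(mesh_path=mesh_path.replace(".obj", ".gltf"))


def place_object(model: VirtualDigitalModel, target_point: str, object_path: str) -> None:
    """Step 4: record a placement received from the visual wearable device."""
    model.placements[target_point] = object_path


if __name__ == "__main__":
    frames = acquire_image_data("street_scene")
    mesh = reconstruct_3d(frames)
    model = convert_to_headset_format(mesh)
    place_object(model, target_point="entrance_door", object_path="assets/banner.gltf")
    print(model)
```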
Wherein receiving the placing instruction sent by the user using the visual wearable device and placing the article in the virtual digital model based on the placing instruction includes: receiving a placing instruction sent by a user by using the visual wearable device, determining a target point in the virtual digital model based on the placing instruction, and acquiring a target object to be placed from the virtual digital space; and placing the target object at a target point of the virtual digital model.
Therefore, the target object is placed in the virtual digital model by determining a target point in the virtual digital model, acquiring the target object to be placed from the virtual digital space and placing the target object to the target point of the virtual digital model.
Wherein a storage path of the target object in the virtual digital space is acquired, and the storage path of the target object is added to a directory corresponding to the target point in the virtual digital model.
Therefore, by acquiring the storage path of the target object in the virtual digital space and adding the storage path of the target object to the directory corresponding to the target point in the virtual digital model, the target object and the target point under the storage path can be placed and edited, so that a user can simultaneously access the storage path of the target object through the file storage of the virtual digital model.
Wherein placing the target object at the target point of the virtual digital model includes: aligning a center point of the target object with the target point, and acquiring a mapping relation between the aligned center point and the target point; and binding the aligned center point with the target point by using the mapping relation.
Therefore, the center point of the target object and the target point are aligned and placed, and the mapping relation between the aligned center point and the target point is obtained. The aligned center point and the target point can be bound by using a mapping relation, so that the center point of the target object and the target point can be aligned and placed in the display step after the real scene is edited by using the mapping relation.
After the target object is placed at the target point of the virtual digital model, the method further comprises the following steps: and receiving an adjusting instruction sent by a user, and rotating and/or zooming the target object based on the adjusting instruction.
Therefore, after the target object is aligned with the coordinates of the target point, the size and the orientation of the target object are further adjusted, so that the user can further adjust the placing effect of the target object in the virtual digital model.
Wherein, after receiving the placing instruction sent by the user using the visual wearable device and placing the article in the virtual digital model based on the placing instruction, the method further includes: exporting the configuration information of the virtual digital model after the articles are placed to a server; the server is used for transmitting the configuration information to the terminal so as to trigger the configuration information to take effect and finish editing of the real scene.
Therefore, the configuration information of the virtual digital model in which the target object is placed is exported to the server, and the server transmits the configuration information to the terminals so that the configuration information takes effect and the editing of the real scene is finished; in this way the editing effect of the real scene can be displayed on a plurality of terminals, and the channels through which the user learns the editing effect are expanded.
The method for converting the three-dimensional model by using the virtual digital space to obtain the virtual digital model of the real scene specifically comprises the following steps: and converting the three-dimensional model by using the data format of the virtual digital space to obtain a virtual digital model of the real scene, wherein the data format of the virtual digital space is matched with the visual wearing equipment.
Therefore, the data format of the virtual digital space is used for converting the three-dimensional model to obtain the virtual digital model of the real scene, so that the data format of the virtual digital space is matched with the visual wearing equipment, and the virtual digital model is edited through the visual wearing equipment.
A second aspect of the present application provides a scene editing apparatus, including: an acquisition module, a reconstruction module, a conversion module and a placing module; the acquisition module is used for acquiring image data of a real scene; the reconstruction module is used for performing three-dimensional reconstruction on the image data to obtain a three-dimensional model corresponding to the real scene; the conversion module is used for converting the three-dimensional model by using a virtual digital space to obtain a virtual digital model of the real scene; and the placing module is used for receiving a placing instruction sent by a user using the visual wearable device and placing articles in the virtual digital model based on the placing instruction.
A third aspect of the present application provides an electronic device, which includes a memory and a processor coupled to each other, where the processor is configured to execute program instructions stored in the memory to implement the scene editing method in the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon program instructions that, when executed by a processor, implement the scene editing method in the first aspect described above.
According to the scheme, the three-dimensional model corresponding to the real scene is obtained by performing three-dimensional reconstruction on the acquired image data of the real scene; converting the three-dimensional model by using a virtual digital space to obtain a virtual digital model of the image data; and finally, receiving a placing instruction sent by the user by using the visual wearing equipment, and placing the articles in the virtual digital model based on the placing instruction. The virtual digital model of the image data is led into the visual wearing equipment to place the target object, so that the degree of freedom of the user for visually acquiring the three-dimensional information of the virtual digital model and the object and the degree of freedom of the user for placing the object are improved, the display effect of placing the object in the virtual digital model is improved, the feedback efficiency of the user on a placing result is enhanced, and the efficiency and the display effect of editing a real scene are improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application.
FIG. 1 is a schematic flowchart of an embodiment of a scene editing method according to the present application;
FIG. 2 is a schematic structural diagram of an embodiment of the wearable visual equipment of the present application;
FIG. 3 is a schematic flowchart of another embodiment of a scene editing method according to the present application;
FIG. 4 is a schematic diagram of a six degree of freedom configuration;
FIG. 5a is a schematic view of a display screen of the wearable visual device of the present application shown to a user prior to editing;
FIG. 5b is a schematic view of a display screen of the wearable visual device of the present application shown to a user during editing;
FIG. 5c is a schematic view of a display screen of the wearable visual device of the present application shown to a user after editing;
FIG. 6 is a block diagram of an embodiment of a scene editing apparatus according to the present application;
FIG. 7 is a block diagram of an embodiment of an electronic device of the present application;
FIG. 8 is a block diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship. Further, the term "plurality" herein means two or more than two. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
In the embodiments of the present application, the three-dimensional model is converted according to the format of the virtual digital space to obtain a virtual digital model of the real scene that matches the visual wearable device, and the user can edit the virtual digital model of the real scene from the three-dimensional viewing angle of the visual wearable device, so that the placement of the target object in the real scene is simulated to the maximum extent. On this basis, the scene editing method of the embodiments can also improve the degree of freedom with which the user visually acquires three-dimensional information of the virtual digital model and the target object and the degree of freedom with which the user places the target object through the three-dimensional viewing angle of the visual wearable device, improve the display effect of placing the target object in the virtual digital model, enhance the three-dimensional viewing experience of the user on the placing result, and improve the efficiency and the display effect of editing the real scene.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a scene editing method according to an embodiment of the present application.
Specifically, the method may include the steps of:
step S11: image data of a real scene is acquired.
The intelligent terminal acquires image data of a real scene, wherein the intelligent terminal of the embodiment includes at least one of a microcomputer, a server, a processor, a mobile terminal, a PC terminal, and other intelligent terminals, and may be specifically set based on actual application, which is not limited herein.
The real scene of the embodiment may include a real scene existing in reality, such as buildings, streets, mountains and rivers. In a specific implementation scenario, the intelligent terminal may acquire image data of a real scene of an entity through the image acquisition device. The image acquisition device may include a camera and/or a sensor. The map information of the real scene is collected through a camera and/or a sensor and then converted into image data. The map information of the real scene comprises visual physical information such as a building structure, road planning, geographic terrain and the like. And the image data acquired by the image acquisition means includes video data and photo data.
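As a purely illustrative sketch of the image acquisition step, the snippet below grabs a handful of frames from a connected camera with OpenCV; the patent does not prescribe OpenCV or any particular camera pipeline, and the directory and frame-count choices are assumptions.

```python
# Illustrative only: capturing photo data of the real scene with OpenCV.
import os
import cv2


def capture_frames(camera_index: int = 0, num_frames: int = 30, out_dir: str = "capture") -> list:
    """Grab frames from a connected camera and save them as photo data."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(camera_index)
    saved = []
    for i in range(num_frames):
        ok, frame = cap.read()
        if not ok:
            break  # camera unavailable or stream ended
        path = os.path.join(out_dir, f"frame_{i:04d}.jpg")
        cv2.imwrite(path, frame)
        saved.append(path)
    cap.release()
    return saved
```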
Step S12: and performing three-dimensional reconstruction on the image data to obtain a three-dimensional model corresponding to the real scene.
And after the intelligent terminal acquires the image data of the real scene, three-dimensional reconstruction is carried out on the image data to obtain a three-dimensional model corresponding to the real scene. In a specific application scenario, a three-dimensional model can be reconstructed in a computer-recognizable storage medium based on image data by using a three-dimensional reconstruction algorithm, so that a three-dimensional model corresponding to the image data, namely a virtual three-dimensional model of a real scene, is constructed.
Step S13: and converting the three-dimensional model by using the virtual digital space to obtain a virtual digital model of the real scene.
After the intelligent terminal obtains the three-dimensional model of the real scene, the three-dimensional model of the image data is converted by using the virtual digital space, so that the three-dimensional model of the image data is matched with the virtual digital space, and the virtual digital model of the image data, namely the virtual digital model of the real scene displayed based on the virtual digital space, is obtained.
Step S14: and receiving a placing instruction sent by a user by using the visual wearing equipment, and placing articles in the virtual digital model based on the placing instruction.
The intelligent terminal receives a placing instruction sent by a user through the visual wearing equipment, and places articles in the virtual digital model based on the placing instruction. The article of the present embodiment refers to a virtual article in a virtual digital space.
In a specific application scenario, the visual wearable device is a virtual reality head-mounted display device. The virtual reality head-mounted display device uses a head-mounted display that shuts out the user's external vision and hearing and guides the user to feel immersed in a virtual environment. When wearing the virtual reality head-mounted display device, the user can operate and watch the scene displayed by the device in a stereoscopic, three-dimensional manner by viewing its display screen, as if present in the scene, which further improves the user's visual perception of the virtual digital model and the article and enhances the user's degree of freedom in placing the article.
Referring to fig. 2, fig. 2 is a schematic structural diagram of an embodiment of a wearable device according to the present application.
The visual wearable device 10 includes a head-mounted device 11 and a handle 12. The visual wearable device 10 of the present embodiment is shown in the state of being worn by a user: the head-mounted device 11 is worn on the user's head and shuts out the user's external vision and hearing so that a three-dimensional display screen can be presented inside the head-mounted device 11. The handle 12 is held in the user's hand and is used to manipulate the virtual digital space shown on the display screen. Only one handle 12 is shown in fig. 2, but in practical applications the number of handles 12 is not limited and may be one, two or more. In other embodiments, the visual wearable device 10 may not include the handle 12 and may instead be operated in other control modes, which is not limited herein.
In a specific implementation scenario, the virtual digital model of the image data is imported into the wearable visual device, and the virtual digital model of the image data is displayed to the user through a display screen of the wearable visual device. The user both can utilize visual wearing equipment to operate to send and put the instruction, can see the three-dimensional effect after putting through visual wearing equipment again, improved the display effect that article put in virtual digital model.
Because the virtual digital model of this embodiment corresponds to the real scene, placing articles in the virtual digital model based on the placing instruction sent by the user simulates and displays, by way of virtual placement, how the articles would be placed in the real scene. This improves the degree of freedom of editing the real scene, enhances the virtual-real fused display effect of the augmented reality scene, and the operation is simple, efficient and convenient.
By the above method, the scene editing method of this embodiment first performs three-dimensional reconstruction on the acquired image data of the real scene to obtain a three-dimensional model corresponding to the real scene; converts the three-dimensional model by using a virtual digital space to obtain a virtual digital model of the image data; and finally receives a placing instruction sent by the user using the visual wearable device and places the articles in the virtual digital model based on the placing instruction. The virtual digital model of the image data is imported into the visual wearable device to place the target object, so that the degree of freedom with which the user visually acquires three-dimensional information of the virtual digital model and the object and the degree of freedom with which the user places the object are improved, the display effect of placing the object in the virtual digital model is improved, the feedback efficiency of the placing result for the user is enhanced, and the efficiency and the display effect of editing the real scene are improved.
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating a scene editing method according to another embodiment of the present application. Specifically, the method may include the steps of:
s21: and performing three-dimensional reconstruction on the acquired image data of the real scene to obtain a three-dimensional model corresponding to the real scene.
A user firstly determines the scene range of a certain real scene needing to be edited, and acquires the map information of the real scene by using an image acquisition device through an intelligent terminal. Specifically, in this step, map information of a real scene may be acquired by an image acquisition device such as a smart terminal, a panoramic camera, and a visual sensor, and converted into image data. And the image data includes video data and photo data.
And the intelligent terminal carries out three-dimensional reconstruction of the real scene by using a three-dimensional reconstruction algorithm based on the image data so as to obtain a three-dimensional model corresponding to the image data. The three-dimensional reconstruction algorithm may be SfM (structure from motion) or other three-dimensional reconstruction algorithms, and the SfM three-dimensional reconstruction algorithm is a method for realizing three-dimensional reconstruction from visual motion information, that is, for calculating three-dimensional reconstruction from time series of two-dimensional images.
In a specific implementation scenario, when the image data acquired by the image acquisition device is video data, a SfM three-dimensional reconstruction algorithm may be used to perform three-dimensional reconstruction of a physical space of a real scene based on the video information acquired by the image acquisition device, so as to obtain a three-dimensional model corresponding to the image data. In other implementation scenarios, when the image data acquired by the image acquiring device is photo data, other adaptive three-dimensional reconstruction algorithms may be used to perform three-dimensional reconstruction of the real scene, which is not limited herein.
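The following is a minimal SfM reconstruction sketch assuming the COLMAP Python bindings (pycolmap) are installed; the exact function names and signatures vary between pycolmap versions, and the patent itself does not mandate any particular SfM implementation.

```python
# Sketch of running SfM on captured frames with pycolmap (an assumption of this sketch).
import pathlib
import pycolmap


def reconstruct_scene(image_dir: str, work_dir: str):
    work = pathlib.Path(work_dir)
    work.mkdir(parents=True, exist_ok=True)
    database = work / "database.db"

    # Detect and match local features across the captured frames.
    pycolmap.extract_features(database, image_dir)
    pycolmap.match_exhaustive(database)

    # Incrementally recover camera poses and a sparse 3D point cloud.
    maps = pycolmap.incremental_mapping(database, image_dir, work)
    reconstruction = maps[0]        # take the first reconstructed model
    reconstruction.write(work)      # persist the three-dimensional model
    return reconstruction
```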
S22: and converting the three-dimensional model by using the data format of the virtual digital space to obtain the virtual digital model of the real scene.
After the three-dimensional model corresponding to the image data is obtained, the intelligent terminal needs to convert the data format of the three-dimensional model so as to adapt to the visual wearable equipment, and therefore the three-dimensional model can be applied to the visual wearable equipment for displaying and operating. In a disclosed embodiment, after a three-dimensional model of a real scene is acquired, a data format of the three-dimensional model is converted based on a data format of a virtual digital space to obtain a virtual digital model of image data. And the virtual digital model of the image data can be matched with an application platform in the visual wearable device, so that the application platform in the visual wearable device can read and display the three-dimensional model resource of the image data constructed in the previous step.
In a specific implementation scenario, when the wearable device is a virtual reality head-mounted display device, an application platform of the virtual reality head-mounted display device has certain format requirements on a three-dimensional space model supported by the application platform. Therefore, format conversion needs to be performed on the three-dimensional model of the image data according to the format of the virtual digital space supported by the application platform of the virtual reality head-mounted display device, so that the format conversion is adapted to the format requirement of the application platform of the virtual reality head-mounted display device, and the application platform of the virtual reality head-mounted display device is convenient for reading the three-dimensional model resource of the image data under the fixed file path and displaying the three-dimensional model resource to the user for viewing.
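As an illustration of the format conversion step, the sketch below loads the reconstructed mesh and re-exports it as glTF/GLB, a format commonly accepted by headset application platforms. The use of the trimesh library and the GLB target are assumptions; the patent only requires that the output format match the virtual digital space of the visual wearable device.

```python
# Illustrative format conversion; trimesh and GLB are assumed, not prescribed.
import trimesh


def convert_for_headset(src_path: str, dst_path: str = "scene_model.glb") -> str:
    mesh = trimesh.load(src_path)   # e.g. an OBJ/PLY produced by reconstruction
    mesh.export(dst_path)           # the file extension selects the export format
    return dst_path
```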
S23: receiving a placing instruction sent by a user by using the visual wearing equipment, determining a target point in the virtual digital model based on the placing instruction, and acquiring a target object to be placed from the virtual digital space.
The visual wearing equipment is provided with a function of moving and zooming the virtual digital model, and a user can send a placing instruction through the function of moving and zooming the virtual digital model of the visual wearing equipment so as to select any point in the virtual digital model, determine a certain target point or a plurality of target points and acquire a target object to be placed from the virtual digital space.
In a specific implementation scenario, a user wears the visual wearable device, and the application platform of the visual wearable device displays the virtual digital model. At this time, the user can move within the virtual digital model and/or move the virtual digital model through the left-hand handle of the visual wearable device, and zoom the virtual digital model through the right-hand handle. Through these operations, the user can select any point in the virtual digital model by sending the placing instruction, so that the target point can be selected and determined according to the editing requirement.
In another disclosed embodiment, the user can select any number of points in the virtual digital model through the moving and zooming functions of the visual wearable device to determine a plurality of target points, so as to edit a plurality of positions of the real scene. The plurality of target points includes at least two target points; the specific number of target points is determined according to the actual requirement of the user and may be, for example, 2, 4, 5, 8 and the like, which is not limited herein.
In a specific implementation scenario, the user may send a placing instruction through the visual wearable device so as to obtain the target object to be placed from the virtual digital space. The target object to be placed is a pre-modeled three-dimensional model that can be placed on a menu of the virtual digital space of the visual wearable device and that is intended to be placed in the virtual digital model of the real scene. The user obtains the target object to be placed from the menu of the virtual digital space through the visual wearable device. The number of target objects to be placed may be determined based on the actual situation, for example 2, 5 and the like, which is not limited herein.
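For illustration, a placing instruction sent by the wearable device can be represented by a small data structure holding the selected target point and the asset chosen from the menu; the field names and the asset path below are hypothetical.

```python
# Hypothetical representation of a placing instruction from the visual wearable device.
from dataclasses import dataclass
from typing import Tuple


@dataclass
class PlacingInstruction:
    target_point: Tuple[float, float, float]   # coordinates picked in the virtual digital model
    target_object: str                         # asset chosen from the menu of the virtual digital space


instruction = PlacingInstruction(target_point=(1.2, 0.0, -3.5),
                                 target_object="assets/sofa.glb")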
S24: and aligning the center point of the target object with the target point, acquiring the mapping relation between the aligned center point and the target point, and binding the aligned center point and the target point by using the mapping relation.
After the user selects the target point and the target object to be placed, an alignment operation performed by the user on the target point and the target object is received, and the intelligent terminal aligns the coordinates of the center point of the target object with the coordinates of the target point, so that a mapping relation between the center point of the target object and the target point is established based on these coordinates, and the aligned center point and target point are bound by the mapping relation. The bound center point and target point can then be aligned automatically without being aligned again manually.
In a specific implementation scenario, after the wearable visual device obtains a placing instruction sent by a user through the wearable visual device, a target object to be placed is dragged and dropped from a menu of a virtual digital space of the wearable visual device to a target point, so that alignment between a three-dimensional coordinate of a center point of the target object and a three-dimensional coordinate of the target point is completed, and at this time, a mapping relationship between the center point of the target object and the target point can be established based on the three-dimensional coordinate of the center point of the target object and the three-dimensional coordinate of the target point.
When the target object is placed at the target point of the virtual digital model, the storage path of the target object in the virtual digital space can be obtained, and the storage path of the target object is added to the directory corresponding to the target point in the virtual digital model, so that the target object and the target point of the virtual digital model are bound at the data level. In this way the mounting of the target object onto the virtual digital model is completed while, on the storage path, the center point of the target object remains aligned with the target point.
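A sketch of executing such a placement is given below, under the assumptions of this description: the object's center point is snapped to the target point, the mapping between the two is recorded so they stay bound, and the object's storage path is added to the directory (node) of the virtual digital model corresponding to the target point. Names and structures are illustrative, not the patented format.

```python
# Illustrative placement: align center point, mount the storage path, record the mapping.
from dataclasses import dataclass, field


@dataclass
class SceneNode:
    """A target point in the virtual digital model with its attached assets."""
    name: str
    position: tuple
    asset_paths: list = field(default_factory=list)


@dataclass
class EditableSceneModel:
    nodes: dict = field(default_factory=dict)      # node name -> SceneNode
    bindings: dict = field(default_factory=dict)   # object path -> node name (mapping relation)


def place_target_object(model: EditableSceneModel, node_name: str, object_path: str) -> tuple:
    node = model.nodes[node_name]
    # Align: the object's center point takes the coordinates of the target point.
    object_center = node.position
    # Mount: record the storage path under the directory of the target point.
    node.asset_paths.append(object_path)
    # Bind: keep the mapping so the alignment can be restored when displaying.
    model.bindings[object_path] = node_name
    return object_center


model = EditableSceneModel(nodes={"entrance": SceneNode("entrance", (0.0, 0.0, 2.0))})
center = place_target_object(model, "entrance", "virtual_space/assets/banner.glb")
```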
In a specific implementation scenario, the wearable visual device receives a placing instruction sent by a user based on a handle of the wearable visual device and places the target object to the target point based on the placing instruction. In actual operation, the sending of the placing instruction is not limited to the handle of the visual wearable device. If the visual wearable device has other operation modes, the user can also send the placing instruction by using other operation modes, which is not limited herein.
S25: and receiving an adjusting instruction sent by a user, and rotating and/or zooming the target object based on the adjusting instruction.
After aligning the three-dimensional coordinates of the target object with the three-dimensional coordinates of the target point, the size and the orientation of the target object may need to be adjusted to achieve six-degree-of-freedom alignment of the target object and the target point. The six degrees of freedom refer to the degrees of freedom of movement in the directions of three orthogonal coordinate axes of x, y and z and the degrees of freedom of rotation around the three coordinate axes.
Referring to fig. 4, fig. 4 is a schematic diagram of a six-degree-of-freedom structure.
When six degrees of freedom of a target object are determined, the method specifically comprises the following steps: firstly, determining coordinates of the central point of the target object on x, y and z rectangular coordinate axes respectively. This step corresponds to step S24 in the embodiment of fig. 3. And then determining the directions R1, R2 and R3 of the target object overall in three rotational motions around the three orthogonal coordinate axes of x, y and z. This step corresponds to step S25 in the embodiment of fig. 3.
In a specific implementation scenario, the intelligent terminal receives an adjusting instruction sent by a user based on a handle of the visual wearable device, and rotates and/or zooms the target object based on the adjusting instruction, so that the target object is placed in six degrees of freedom, and the requirement of the user for the degree of freedom of placing the target object is met.
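Purely as a sketch of this adjustment step, a six-degree-of-freedom pose can be kept as translation along x, y, z plus rotation about the three axes (R1, R2, R3), together with a scale factor updated from the user's handle input; the representation below is an assumption and not fixed by the patent.

```python
# Illustrative six-degree-of-freedom pose and adjustment operations.
from dataclasses import dataclass


@dataclass
class ObjectPose:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    r1: float = 0.0   # rotation about x, in degrees
    r2: float = 0.0   # rotation about y, in degrees
    r3: float = 0.0   # rotation about z, in degrees
    scale: float = 1.0


def rotate(pose: ObjectPose, dr1: float = 0.0, dr2: float = 0.0, dr3: float = 0.0) -> ObjectPose:
    """Apply an incremental rotation from the adjusting instruction."""
    pose.r1 = (pose.r1 + dr1) % 360
    pose.r2 = (pose.r2 + dr2) % 360
    pose.r3 = (pose.r3 + dr3) % 360
    return pose


def zoom(pose: ObjectPose, factor: float) -> ObjectPose:
    """Apply a scaling factor from the adjusting instruction."""
    pose.scale = max(0.01, pose.scale * factor)  # avoid a degenerate scale
    return pose


pose = ObjectPose(x=1.2, y=0.0, z=-3.5)
rotate(pose, dr2=90.0)   # face the object toward the user
zoom(pose, 1.5)          # enlarge it by 50 percent
```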
After the user finishes the six-degree-of-freedom placement of the target object in the virtual digital model, the user can view the placing effect of the target object through the display screen of the visual wearable device. Since the visual wearable device displays a three-dimensional structure from a three-dimensional viewing angle, the user can view the placing effect of the target object in the virtual digital model in three dimensions and from freely changeable viewing angles, so that the effect of physically placing the target object in the real scene is reproduced to the maximum extent, and the realism and the degree of freedom of real-scene editing are fully improved; to a certain extent, the display effect comes close to that of physical placement in the real scene. The three-dimensional viewing experience of the user on the displayed result is enhanced, and the efficiency and the display effect of editing the real scene are improved.
Step S26: exporting the configuration information of the virtual digital model after the articles are placed to a server; the server is used for transmitting the configuration information to the terminal so as to trigger the configuration information to take effect and finish editing of the real scene.
After the placement and adjustment of the target object and the virtual digital model with six degrees of freedom are completed, the configuration information of the virtual digital model with the target object is exported to the server, and then the server transmits the configuration information to the terminal to trigger the configuration information to take effect, so that the editing of the real scene is completed.
In a specific implementation scenario, after the user completes the step of placing the target object in the virtual digital model on the visual wearable device, the configuration information of the virtual digital model in which the target object is placed may be exported to the server, and the server then transmits the configuration information to a mobile phone and triggers the configuration information to take effect, so that the user can also view the placing effect of the target object in the virtual digital model on the mobile phone screen. Because the visual wearable device has certain wearing requirements in use and a certain operating complexity, allowing the user to view the placing effect on the mobile phone screen in this implementation scenario improves the degree of freedom with which the user obtains the placing effect of the target object.
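A minimal sketch of exporting the placement configuration to a server is shown below. The JSON layout and the endpoint URL are hypothetical; the patent only requires that the configuration information reach the server, which then forwards it to terminals such as a phone.

```python
# Illustrative configuration export; the endpoint and JSON schema are assumptions.
import json
import requests


def export_configuration(bindings: dict, poses: dict,
                         url: str = "https://example.com/api/scene-config") -> dict:
    config = {
        "placements": [
            {"object": obj, "target_point": node, "pose": poses.get(obj, {})}
            for obj, node in bindings.items()
        ]
    }
    # Persist locally and push to the (assumed) server endpoint.
    with open("scene_config.json", "w") as f:
        json.dump(config, f, indent=2)
    response = requests.post(url, json=config, timeout=10)
    response.raise_for_status()
    return config

# Example call (illustrative values):
# export_configuration(bindings={"assets/banner.glb": "entrance"},
#                      poses={"assets/banner.glb": {"x": 1.2, "y": 0.0, "z": -3.5, "r2": 90.0, "scale": 1.5}})
```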
The scene editing method can be applied to scenes such as virtual decoration, virtual navigation, virtual text travel and virtual marketing. Taking virtual decoration as an example, before a user decorates a room, the user may have a need to know the effect of placing certain furniture in advance, so as to determine whether to place the furniture. In a specific implementation scenario, image data of the whole room is acquired by the image acquisition device, and then a three-dimensional model of the whole room is constructed based on the image data. After the three-dimensional model is converted into the format, the three-dimensional model is applied to the visual wearable device, so that a user can visually realize three-dimensional viewing. A user selects a position (namely a target point) where furniture needs to be placed and a furniture model to be placed in a room through a handle of a visual wearable device, drags and drops the furniture to the target point, and adjusts the size and the direction of the furniture, so that the placement of the furniture in the room in reality is simulated to the maximum extent. A user can see the furniture model placed in the three-dimensional model of the room through the visual wearing equipment, so that the actual placing effect of the furniture in the room can be known in advance, and whether the furniture placing mode is adopted or not can be better determined.
Referring to fig. 5a, fig. 5a is a schematic view of a display screen of the wearable visual device of the present application shown to a user before editing. The embodiment is applied to virtual decoration, and the user has the requirement of placing furniture in a room.
The display screen 50 includes a virtual digital model presentation area 52 and a menu 51. The virtual digital model display area 52 displays a room model (not shown) with a three-dimensional view angle to a user, and the menu 51 is provided with a plurality of furniture models 511. The user has determined a target point 521 in the room model for furniture placement.
Specifically, when editing the room model, the user performs drag-and-drop alignment to the target point 521 in the room model by selecting a certain furniture model 511 on the menu 51.
Referring to fig. 5b, fig. 5b is a schematic diagram of a display screen of the wearable device shown to a user during editing. This diagram is an effect diagram in which the display screen is presented to the user after step S24.
At this time, the user has already aligned a furniture model 512 with the target point 521, but the mere alignment between the furniture model 512 and the target point 521 cannot simulate the effect of the furniture model 512 being placed in a real scene, so the furniture model 512 also needs to be rotated or zoomed.
Referring to fig. 5c, fig. 5c is a schematic view showing a display screen of the wearable visual device to a user after editing. This diagram is an effect diagram in which the display screen is presented to the user after step S25.
The user has adjusted the orientation and size of furniture model 512 so that it is placed as desired. At this time, the user can view the effect of placing the furniture model 512 in the room model through the display screen of the visual wearing device, and can also change the visual angle of the display screen by controlling the visual wearing device, thereby viewing the effect of placing the furniture model 512 in the room model from multiple angles. Thereby facilitating the user to determine whether the furniture model 512 is suitable for being placed on the target point 521 during the furniture decoration.
In this embodiment the furniture placing effect is intuitive and matches, to the greatest extent, the display effect of placing the furniture in the real scene; the user's three-dimensional viewing experience of the placing result is enhanced, and the efficiency and the display effect of editing the real scene are improved. Meanwhile, the step of actually placing the furniture in reality is omitted, saving the user time and effort.
By the above method, the virtual digital model of the image data is imported into the visual wearable device to place the target object, so that the placement of the target object in the real scene can be simulated to the greatest extent. On this basis, the scene editing method of this embodiment can also improve the degree of freedom with which the user visually acquires three-dimensional information of the virtual digital model and the target object and the degree of freedom with which the user places the target object through the visual wearable device, improve the display effect of placing the target object in the virtual digital model, enhance the three-dimensional viewing experience of the user on the placing result, and improve the efficiency and the display effect of editing the real scene.
It will be understood by those skilled in the art that in the method of the present invention, the order of writing the steps does not imply a strict order of execution and any limitations on the implementation, and the specific order of execution of the steps should be determined by their function and possible inherent logic.
Referring to fig. 6, fig. 6 is a schematic diagram of a framework of an embodiment of a scene editing apparatus according to the present application. The scene editing apparatus 60 includes an obtaining module 61, a reconstructing module 62, a converting module 63, and a placing module 64. An obtaining module 61, configured to obtain image data of a real scene; the reconstruction module 62 is configured to perform three-dimensional reconstruction on the image data to obtain a three-dimensional model corresponding to a real scene; a conversion module 63, configured to convert the three-dimensional model by using a virtual digital space to obtain a virtual digital model of a real scene; and the placing module 64 is configured to receive a placing instruction sent by the user by using the visual wearing device, and place the article in the virtual digital model based on the placing instruction.
According to the scheme, the three-dimensional model corresponding to the real scene is obtained by performing three-dimensional reconstruction on the acquired image data of the real scene, the three-dimensional model is converted by using the virtual digital space, the virtual digital model of the real scene is obtained, and the virtual digital model of the real scene can be adapted to a format which can be supported by the visual wearing equipment, so that the virtual digital model can be applied to the visual wearing equipment to provide a three-dimensional display effect for a user. And articles are placed in the virtual digital model based on the placing instruction, so that the degree of freedom of editing the real scene can be improved, and the final display effect of virtual-real fusion of the real scene is enhanced.
In some disclosed embodiments, receiving a placement instruction sent by a user by using a visual wearable device, and placing an article in a virtual digital model based on the placement instruction, includes: receiving a placing instruction sent by a user by using the visual wearing equipment, determining a target point in the virtual digital model based on the placing instruction, and acquiring a target object to be placed from the virtual digital space; and placing the target object at a target point of the virtual digital model.
Different from the foregoing embodiment, the target object is placed in the virtual digital model by determining a target point in the virtual digital model, acquiring the target object to be placed from the virtual digital space, and placing the target object to the target point of the virtual digital model.
In some disclosed embodiments, a storage path of the target object in the virtual digital space is obtained, and the storage path of the target object is added to a directory corresponding to the target point in the virtual digital model.
Different from the foregoing embodiment, by acquiring the storage path of the target object in the virtual digital space and adding the storage path of the target object to the directory corresponding to the target point in the virtual digital model, the placement and editing between the target object and the target point under the storage path can be realized, so that the user can access the storage path of the target object through the file storage of the virtual digital model.
In some disclosed embodiments, placing the target object at the target point of the virtual digital model includes aligning the center point of the target object with the target point, and obtaining a mapping relationship between the aligned center point and the target point; and binding the aligned central point with the target point by using the mapping relation.
Different from the foregoing embodiment, the center point of the target object and the target point are aligned and placed, and the mapping relationship between the aligned center point and the target point is obtained. The aligned center point and the target point can be bound by using a mapping relation, so that the center point of the target object and the target point can be aligned and placed in the display step after the real scene is edited by using the mapping relation.
In some disclosed embodiments, after the target object is placed at the target point of the virtual digital model, the method further comprises: and receiving an adjusting instruction sent by a user, and rotating and/or zooming the target object based on the adjusting instruction.
Different from the foregoing embodiment, after the target object is aligned with the coordinates of the target point, the size and the orientation of the target object are further adjusted, so that the user can further adjust the placing effect of the target object in the virtual digital model.
In some disclosed embodiments, after receiving the placing instruction sent by the user using the visual wearable device and placing the article in the virtual digital model based on the placing instruction, the method further includes: exporting the configuration information of the virtual digital model after the articles are placed to a server; the server is used for transmitting the configuration information to the terminal so as to trigger the configuration information to take effect and finish editing of the real scene.
Different from the foregoing embodiment, the configuration information of the virtual digital model in which the target object is placed is exported to the server, and the server transmits the configuration information to the terminals so that the configuration information takes effect and the editing of the real scene is finished; in this way the editing effect of the real scene can be displayed on a plurality of terminals, and the channels through which the user learns the editing effect are expanded.
In some disclosed embodiments, converting the three-dimensional model by using the virtual digital space to obtain the virtual digital model of the real scene specifically includes: converting the three-dimensional model by using the data format of the virtual digital space to obtain the virtual digital model of the real scene, wherein the data format of the virtual digital space is matched with the visual wearable device.
Different from the foregoing embodiment, the data format of the virtual digital space is used for converting the three-dimensional model to obtain the virtual digital model of the real scene, so that the data format of the virtual digital space is matched with the visual wearable device and the virtual digital model can be edited through the visual wearable device.
Referring to fig. 7, fig. 7 is a schematic diagram of a frame of an electronic device according to an embodiment of the present application. The electronic device 70 comprises a memory 71 and a processor 72 coupled to each other, the processor 72 being configured to execute program instructions stored in the memory 71 to implement the steps of any of the above-described scene editing method embodiments. In one particular implementation scenario, the electronic device 70 may include, but is not limited to: a microcomputer, a server, and the electronic device 70 may also include a mobile device such as a notebook computer, a tablet computer, and the like, which is not limited herein.
In particular, the processor 72 is configured to control itself and the memory 71 to implement the steps of any of the above-described scene editing method embodiments. The processor 72 may also be referred to as a CPU (Central Processing Unit). The processor 72 may be an integrated circuit chip having signal processing capabilities. The Processor 72 may also be a general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. Additionally, the processor 72 may be collectively implemented by an integrated circuit chip.
According to the scheme, the three-dimensional model corresponding to the real scene is obtained by performing three-dimensional reconstruction on the acquired image data of the real scene, the three-dimensional model is converted by using the virtual digital space, the virtual digital model of the real scene is obtained, and the virtual digital model of the real scene can be adapted to a format which can be supported by the visual wearing equipment, so that the virtual digital model can be applied to the visual wearing equipment to provide a three-dimensional display effect for a user. And articles are placed in the virtual digital model based on the placing instruction, so that the degree of freedom of editing the real scene can be improved, and the final display effect of virtual-real fusion of the real scene is enhanced.
Referring to fig. 8, fig. 8 is a block diagram illustrating an embodiment of a computer-readable storage medium according to the present application. The computer readable storage medium 80 stores program instructions 801 that can be executed by the processor, the program instructions 801 being for implementing the steps of any of the above-described scene editing method embodiments.
By the scheme, the degree of freedom of editing the real scene can be improved, and the virtual-real fused display effect of the real scene can be enhanced.
In some embodiments, functions of, or modules included in, the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments; for specific implementation, reference may be made to the description of the above method embodiments, which is not repeated here for brevity.
The foregoing description of the various embodiments is intended to highlight various differences between the embodiments, and the same or similar parts may be referred to each other, and for brevity, will not be described again herein.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is only a logical functional division, and other divisions are possible in actual implementation; for example, units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.

Claims (10)

1. A method for scene editing, comprising:
acquiring image data of a real scene;
performing three-dimensional reconstruction on the image data to obtain a three-dimensional model corresponding to the real scene;
converting the three-dimensional model by using a virtual digital space to obtain a virtual digital model of the real scene;
and receiving a placing instruction sent by a user using a visual wearable device, and placing an article in the virtual digital model based on the placing instruction.
2. The scene editing method of claim 1, wherein the receiving a placing instruction sent by a user using a visual wearable device and placing an article in the virtual digital model based on the placing instruction comprises:
receiving the placing instruction sent by the user using the visual wearable device, determining a target point in the virtual digital model based on the placing instruction, and acquiring a target object to be placed from the virtual digital space;
placing the target object at the target point of the virtual digital model.
3. The scene editing method of claim 2, wherein the placing the target object at the target point of the virtual digital model comprises:
acquiring a storage path of the target object in the virtual digital space, and adding the storage path of the target object to a directory corresponding to the target point in the virtual digital model.
4. The scene editing method of claim 2, wherein the placing the target object at the target point of the virtual digital model comprises:
aligning a center point of the target object with the target point, and acquiring a mapping relation between the aligned center point and the target point;
and binding the aligned center point and the target point by using the mapping relation.
5. The scene editing method of claim 2, wherein after the placing the target object at the target point of the virtual digital model, further comprising:
receiving an adjusting instruction sent by the user, and rotating and/or zooming the target object based on the adjusting instruction.
6. The scene editing method according to any one of claims 1 to 5, wherein, after the receiving a placing instruction sent by a user using a visual wearable device and placing an article in the virtual digital model based on the placing instruction, the method further comprises:
exporting configuration information of the virtual digital model with the placed article to a server, wherein the server is configured to transmit the configuration information to a terminal, so as to trigger the configuration information to take effect and complete editing of the real scene.
7. The scene editing method according to any one of claims 1 to 5, wherein the converting the three-dimensional model using the virtual digital space to obtain the virtual digital model of the real scene comprises:
converting the three-dimensional model using a data format of the virtual digital space to obtain the virtual digital model of the real scene, wherein the data format of the virtual digital space matches the visual wearable device.
8. A scene editing apparatus, comprising: the device comprises an acquisition module, a reconstruction module, a conversion module and a placement module;
the acquisition module is used for acquiring image data of a real scene;
the reconstruction module is used for performing three-dimensional reconstruction on the image data to obtain a three-dimensional model corresponding to the real scene;
the conversion module is used for converting the three-dimensional model by utilizing a virtual digital space to obtain a virtual digital model of the real scene;
the placing module is used for receiving a placing instruction sent by a user using a visual wearable device, and placing an article in the virtual digital model based on the placing instruction.
9. An electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the scene editing method of any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon program instructions, which when executed by a processor implement the scene editing method of any one of claims 1 to 7.
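As a purely illustrative reading of the export step recited in claim 6, the following sketch reuses the VirtualDigitalModel structure sketched in the description above and posts its placement configuration to a server, which would then forward it to a terminal; the endpoint path, JSON layout and use of the requests library are assumptions, not part of the disclosure.

```python
# Hypothetical sketch of exporting configuration information to a server.
import json
import requests

def export_configuration(model, server_url: str) -> None:
    """Serialize the placements of a VirtualDigitalModel (see the sketch in the
    description) and post them to an assumed server endpoint."""
    config = {
        "placements": [
            {
                "target_point": list(point),
                "objects": [
                    {"path": obj.storage_path,
                     "rotation_deg": list(obj.rotation_deg),
                     "scale": obj.scale}
                    for obj in objs
                ],
            }
            for point, objs in model.directory.items()
        ]
    }
    requests.post(
        f"{server_url}/scene-config",
        data=json.dumps(config),
        headers={"Content-Type": "application/json"},
        timeout=10,
    )
```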
CN202110552314.2A 2021-05-20 2021-05-20 Scene editing method and related device and equipment Pending CN113253842A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110552314.2A CN113253842A (en) 2021-05-20 2021-05-20 Scene editing method and related device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110552314.2A CN113253842A (en) 2021-05-20 2021-05-20 Scene editing method and related device and equipment

Publications (1)

Publication Number Publication Date
CN113253842A true CN113253842A (en) 2021-08-13

Family

ID=77183141

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110552314.2A Pending CN113253842A (en) 2021-05-20 2021-05-20 Scene editing method and related device and equipment

Country Status (1)

Country Link
CN (1) CN113253842A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114401451A (en) * 2021-12-28 2022-04-26 有半岛(北京)信息科技有限公司 Video editing method and device, electronic equipment and readable storage medium
CN114494635A (en) * 2021-12-28 2022-05-13 北京城市网邻信息技术有限公司 Virtual decoration method, device, equipment and storage medium
WO2023155394A1 (en) * 2022-02-18 2023-08-24 深圳市慧鲤科技有限公司 Virtual space fusion method and related apparatus, electronic device, medium, and program

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106600709A (en) * 2016-12-15 2017-04-26 苏州酷外文化传媒有限公司 Decoration information model-based VR virtual decoration method
CN106780421A (en) * 2016-12-15 2017-05-31 苏州酷外文化传媒有限公司 Finishing effect methods of exhibiting based on panoramic platform
CN106845008A (en) * 2017-02-16 2017-06-13 珠海格力电器股份有限公司 A kind of processing method and processing device of VR equipment
CN108959668A (en) * 2017-05-19 2018-12-07 深圳市掌网科技股份有限公司 The Home Fashion & Design Shanghai method and apparatus of intelligence
CN109685910A (en) * 2018-11-16 2019-04-26 成都生活家网络科技有限公司 Room setting setting method, device and VR wearable device based on VR

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20210813)