WO2021238145A1 - Method for generating AR scene content, display method, device, and storage medium - Google Patents
Method for generating AR scene content, display method, device, and storage medium
- Publication number
- WO2021238145A1, PCT/CN2020/135048 (CN2020135048W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- virtual object
- data
- initial
- scene
- pose
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/60—Software deployment
- G06F8/65—Updates
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
Definitions
- the present disclosure relates to the field of augmented reality technology, and in particular to a method for generating AR scene content, a display method, a device, and a storage medium.
- Augmented Reality (AR) technology ingeniously integrates virtual objects with the real world: computer-generated text, images, three-dimensional models, music, video, and other virtual objects are simulated and then superimposed on the real world, thereby presenting an augmented reality scene.
- the AR content displayed in the augmented reality scene can be determined in advance; for example, it can contain virtual objects and display information of the virtual objects. However, if the internal environment of the current reality scene changes, the pre-produced virtual objects no longer match the current real scene, so when the virtual objects are superimposed on the real scene, the display effect of the augmented reality scene is poor.
- the embodiments of the present disclosure provide at least one solution for generating AR scene content.
- embodiments of the present disclosure provide a method for generating augmented reality AR scene content, including:
- the update data includes first pose data corresponding to the at least one virtual object
- the initial AR data packet is updated to generate an updated AR data packet.
- the initial AR data package associated with the target reality scene can be acquired, along with update data associated with at least one virtual object, which can include, for example, the first pose data corresponding to the at least one virtual object;
- the initial AR data package is then updated based on the update data, so that a virtual object that more closely matches the target reality scene is obtained, which improves the realism of the augmented reality scene.
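As a hedged sketch (the patent specifies no concrete data format; all names here are illustrative), the initial package, its per-object first pose data, and the update step could be modelled as:

```python
from dataclasses import dataclass, field

@dataclass
class Pose:
    """First pose data of a virtual object in the target reality scene."""
    position: tuple                       # (x, y, z) in the scene's coordinate system
    deflection: tuple = (0.0, 0.0, 0.0)   # angles to the three coordinate axes
    scale: float = 1.0                    # size information

@dataclass
class ARDataPackage:
    """An AR data package associated with one target reality scene."""
    scene_id: str
    label: str                                   # e.g. "science fiction", "cartoon"
    objects: dict = field(default_factory=dict)  # virtual-object name -> Pose

def update_package(initial, update_data):
    """Update the initial AR data package with per-object first pose data,
    yielding the updated AR data package."""
    initial.objects.update(update_data)
    return initial
```

A package with no preset virtual objects (as the summary allows) is simply one with an empty `objects` mapping, which the update then populates.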
- the embodiments of the present disclosure provide a method for displaying content of an augmented reality AR scene, including:
- in response to a second triggering operation, acquiring an AR data packet associated with the target reality scene indicated by the second triggering operation; the AR data packet includes first pose data corresponding to at least one virtual object;
- the at least one virtual object is displayed through the AR device.
- the AR data packet associated with the second trigger operation can be obtained in response to the second trigger operation; further, based on the second pose data corresponding to the AR device and the first pose data, set in advance in the AR data packet, of the virtual object in the target reality scene,
- the special effect information of the virtual object in the target reality scene can be determined, and finally a realistic augmented reality scene effect is displayed in the AR device.
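A minimal sketch of that determination, assuming a simple planar case (the rotation handling is illustrative, not the patent's actual method): given the device's second pose data (position and yaw angle) and an object's first-pose position in the world coordinate system, the object's offset in the device's viewing frame can be computed as:

```python
import math

def presentation_offset(object_pos, device_pos, device_yaw_deg):
    """Express a virtual object's world position relative to the AR device:
    translate by the device position, then rotate into the device's frame
    (rotation about the vertical axis only, for brevity)."""
    dx = object_pos[0] - device_pos[0]
    dy = object_pos[1] - device_pos[1]
    yaw = math.radians(device_yaw_deg)
    x_cam = dx * math.cos(yaw) + dy * math.sin(yaw)
    y_cam = -dx * math.sin(yaw) + dy * math.cos(yaw)
    return (x_cam, y_cam)
```

The renderer would then place the virtual object at this device-relative offset, so the displayed special effect tracks both the object's preset first pose and the device's current second pose.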
- an augmented reality AR scene content generation device including:
- the first acquisition module is configured to, in response to the first trigger operation, acquire the initial AR data packet associated with the target reality scene indicated by the first trigger operation;
- the second acquisition module is configured to acquire update data of at least one virtual object associated with the initial AR data packet; the update data includes the first pose data corresponding to the at least one virtual object;
- the update module is configured to update the initial AR data package based on the update data of the at least one virtual object to generate an updated AR data package.
- an augmented reality AR scene content display device including:
- the acquiring module is configured to acquire an AR data packet associated with the target reality scene indicated by the second trigger operation in response to a second trigger operation; the AR data packet includes first pose data corresponding to at least one virtual object;
- a determining module configured to determine the presentation special effect information of the at least one virtual object based on the second pose data of the AR device currently capturing the target reality scene and the first pose data corresponding to the at least one virtual object in the AR data packet;
- the display module is configured to display the at least one virtual object through the AR device based on the presentation special effect information.
- an embodiment of the present disclosure provides an electronic device, including a processor, a memory, and a bus.
- the memory stores machine-readable instructions executable by the processor.
- the processor communicates with the memory through the bus.
- when the machine-readable instructions are executed by the processor, the steps of the generation method described in the first aspect or the steps of the display method described in the second aspect are executed.
- the embodiments of the present disclosure provide a computer-readable storage medium with a computer program stored thereon; when the computer program is run by a processor, it executes the steps of the generation method described in the first aspect, or the steps of the display method described in the second aspect.
- FIG. 1 shows a flowchart of the first method for generating AR scene content provided by an embodiment of the present disclosure
- Figure 2a shows a schematic diagram of an AR data package download interface provided by an embodiment of the present disclosure
- Figure 2b shows a schematic diagram of an interface for downloading and uploading an AR data package provided by an embodiment of the present disclosure
- FIG. 3 shows a schematic diagram of a page of a positioning prompt provided by an embodiment of the present disclosure
- FIG. 4 shows a schematic diagram of a pose data display page of a virtual object provided by an embodiment of the present disclosure
- FIG. 5 shows a schematic diagram of a pose data editing page of a virtual object provided by an embodiment of the present disclosure
- FIG. 6 shows a schematic diagram of an interface for generating interactive data provided by an embodiment of the present disclosure
- FIG. 7 shows a schematic diagram of an AR data package saving page provided by an embodiment of the present disclosure
- FIG. 8 shows a flowchart of an interface for uploading an updated AR data package provided by an embodiment of the present disclosure
- FIG. 9 shows a flowchart of an AR scene content display method provided by an embodiment of the present disclosure.
- FIG. 10 shows a schematic diagram of a positioning prompt performed on an AR device when the content of an AR scene is displayed according to an embodiment of the present disclosure
- FIG. 11 shows a schematic diagram of an augmented reality scene provided by an embodiment of the present disclosure
- FIG. 12 shows a flowchart of a second method for generating AR scene content provided by an embodiment of the present disclosure
- FIG. 13 shows a schematic structural diagram of an AR scene content display system provided by an embodiment of the present disclosure
- FIG. 14 shows a flowchart of a third method for generating AR scene content provided by an embodiment of the present disclosure
- FIG. 15 shows a schematic structural diagram of the first AR scene content generating apparatus provided by an embodiment of the present disclosure
- FIG. 16 shows a schematic structural diagram of a first AR scene content display device provided by an embodiment of the present disclosure
- FIG. 17 shows a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
- Augmented Reality (AR) technology can be applied to AR devices, which can be any electronic device that can support AR functions, including but not limited to AR glasses, tablet computers, smart phones, etc.
- when the AR device is operated in a real scene, the virtual objects superimposed on the real scene can be viewed through it. For example, when passing by certain buildings or tourist attractions, the AR device can be used to see virtual graphics and text superimposed on those buildings or attractions.
- the virtual graphics and text here can be called virtual objects, while the buildings or tourist attractions are the real scene. In this scenario, the virtual graphic-and-text introduction viewed through the AR glasses changes as the viewing angle of the AR glasses changes.
- that is, the presentation of the virtual introduction is related to the pose of the AR glasses.
- the method for generating AR scene content disclosed in the embodiment of the present disclosure is first introduced in detail.
- the execution subject of the method for generating AR scene content provided by the embodiment of the present disclosure may be the above-mentioned AR device, For example, it may include AR glasses, tablet computers, smart phones, smart wearable devices and other devices with display functions and data processing capabilities, which are not limited in the embodiments of the present disclosure.
- the method for generating AR scene content It can be implemented by a processor invoking computer-readable instructions stored in the memory.
- FIG. 1 it is a flowchart of a method for generating AR scene content provided by an embodiment of the present disclosure.
- the method for generating includes the following S101 to S103:
- S101 In response to a first trigger operation, obtain an initial AR data packet associated with a target reality scene indicated by the first trigger operation;
- the first triggering operation may be a triggering operation of an editing option corresponding to any initial AR data packet associated with the target reality scene.
- the triggering operation is, for example, an operation of selecting an editing option, or directly by means of voice or gestures. Trigger, etc., this disclosure is not limited to this.
- FIG. 2a is a schematic diagram of the AR data packets respectively associated with "XXX Building-15th Floor" and other real scenes. If it is detected that the editing option of any AR data packet corresponding to "XXX Building-15th Floor" is triggered, for example the editing option of the initial AR data package of the "[Example] Science Fiction" category, a request can be sent to the server to obtain the initial AR data package of the "[Example] Science Fiction" category associated with the target reality scene "XXX Building-15th Floor".
- each initial AR data packet may contain label information, and the label information is used to characterize the category of the initial AR data packet, for example, it may include one of "science fiction category", "cartoon category", and "historical category".
- each category is used to represent the style of the virtual object to be displayed in the AR scene; wherein, each initial AR data packet may contain a preset virtual object, or may not contain a virtual object.
- the target reality scene may be an indoor scene of a building, or a street scene, and may also be any target reality scene capable of superimposing virtual objects.
- S102 Obtain update data of at least one virtual object associated with the initial AR data packet; the update data includes first pose data corresponding to the at least one virtual object.
- the at least one virtual object associated with the initial AR data package may include a virtual object locally contained in the initial AR data package, may also include a virtual object downloaded through the network, and may also include a virtual object obtained from a pre-established material library, Among them, the material library can be set locally or in the cloud, which is not limited in the present disclosure.
- the virtual object may be a static virtual model, such as a virtual flowerpot or a virtual tree, or a dynamic virtual object, such as a virtual video or a virtual animation.
- the first pose data of the virtual object includes, but is not limited to, data that can represent the position and/or posture of the virtual object when it is presented; for example, it may include the position coordinates, deflection angle, and size information of the virtual object in the coordinate system corresponding to the target reality scene.
- the content of the virtual object in the initial AR data package can be updated to obtain the updated AR data package.
- the update method can be to directly add the update data to the initial AR data package.
- the updated AR data packet contains the aforementioned virtual objects associated with the initial AR data packet, and Update data of at least one virtual object.
- the obtained updated AR data package can be used to display virtual objects integrated into the target real scene according to the updated AR data package when the AR device shoots the target real scene.
- the initial AR data package associated with the target reality scene can be acquired, along with update data associated with at least one virtual object, which can include, for example, the first pose data corresponding to the at least one virtual object;
- the initial AR data package is then updated based on the update data, so that a virtual object that more closely matches the target reality scene is obtained, which improves the realism of the augmented reality scene.
- the generation method provided in the embodiment of the present disclosure may be applied to the AR generation end, and the generation method further includes:
- the AR generation end can be a computer, a notebook, a tablet, or another device. These devices can have applications installed for generating and editing AR scene content, or can access WEB pages for generating and editing AR scene content.
- the user can remotely edit AR scene content in the application or WEB page. For example, the target reality scene can be simulated through a 3D scene model representing it, and the relevant data of the virtual object to be displayed can be configured directly in the 3D scene model, without needing to configure on site in the target reality scene, thereby realizing the generation of AR scene content.
- the display interface of the AR generation end can display editing options corresponding to multiple real scenes. After it is detected that any one of the editing options is triggered, the real scene corresponding to the triggered editing option can be taken as the target real scene, i.e., the editing option of the target reality scene is triggered; the 3D scene model of the target reality scene can then be obtained, so that the virtual object to be displayed can subsequently be added to it.
- a map can be displayed on the display interface of the AR generator. The map is set with multiple points of interest (POI), and each POI point can correspond to a real scene.
- the AR generation end can also detect that the editing option for the target reality scene is triggered, and then obtain the 3D scene model representing the target reality scene and the initial augmented reality AR data package associated with it, so that virtual objects to be displayed can subsequently be added to the 3D scene model of the target reality scene.
- the three-dimensional scene model representing the target reality scene and the target reality scene are presented in equal proportions under the same coordinate system.
- the target reality scene includes the street and the buildings on both sides of the street.
- the three-dimensional scene model then likewise includes models of the street and of the buildings on both sides; the three-dimensional scene model and the target reality scene can be presented in the same coordinate system at 1:1, or can be presented in equal proportions.
- on the editing page showing the contents of the AR scene, when it is detected that the "Update Experience Package List" option in the editing interface is triggered, a variety of real scenes can be obtained, such as "XXX Building-15th Floor";
- when it is detected that "download scene" is triggered in the editing interface, this can be regarded as the detected first trigger operation for the target reality scene; the 3D scene model representing the target reality scene and the initial AR data packages associated with it can then be obtained,
- for example two initial AR experience packages with the label information "Christmas" and "New Year's Day".
- the category label of the initial AR experience package (initial AR data package) "Christmas" associated with the above-mentioned target reality scene further contains the labels "Science Fiction" and "Nature", indicating that the created virtual objects can be virtual objects belonging to the sci-fi category or to the nature category; of course, during later editing, the category of the AR data package can be changed based on the category of the uploaded virtual object.
- the method when obtaining update data of at least one virtual object associated with the initial AR data packet, the method includes:
- S1022 Obtain update data of the at least one virtual object placed in the three-dimensional scene model, where the update data includes first pose data of the at least one virtual object placed in the three-dimensional scene model.
- the first pose data of the virtual object placed in the three-dimensional scene model includes the position coordinates, deflection angle, and size information of the virtual object in the coordinate system of the three-dimensional scene model.
- the deflection angle may be expressed as the angle between a specified positive direction of the virtual object and a coordinate axis of the 3D scene model's coordinate system.
- the three-dimensional scene model and the target reality scene can be presented at 1:1 in the same coordinate system, or in equal proportions in different coordinate systems; therefore, once the first pose data of the virtual object as presented in the three-dimensional scene model is obtained, the special effect information of the virtual object in the target real scene can be shown when it is later presented on the AR device.
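Because the model and the scene share a 1:1 or equal-proportion relationship, mapping an edited model-space position into the real scene reduces to an identity or a single uniform scale. A hedged sketch (the explicit scale-factor parameter is an assumption for illustration, not part of the patent):

```python
def model_to_world(model_coords, scale=1.0):
    """Map position coordinates edited in the 3D scene model's coordinate
    system to the target reality scene's world coordinate system.
    With a 1:1 model the mapping is the identity (scale=1.0); an
    equal-proportion model applies one uniform scale factor."""
    return tuple(c * scale for c in model_coords)
```

This is why pose data edited offline in the model can later be used directly for display on the AR device: the coordinate systems correspond up to, at most, one known proportion.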
- the content edited by the editing operation can be obtained, and the content edited by the editing operation can be used as the update data.
- a schematic diagram of the 3D scene model and a pose data edit bar for the virtual object can be displayed.
- the user can edit the first pose data of the virtual object in the 3D scene model in the pose data edit bar of the virtual object,
- so that the AR generation end obtains the first pose data of the virtual object in the 3D scene model.
- based on this, the first pose data of the virtual object when displayed in the target real scene can be determined, and the virtual object can be displayed in the target real scene according to that first pose data,
- so that the virtual object is better integrated with the target reality scene and the effect of a realistic augmented reality scene can be displayed in the AR device.
- the user can be provided with the three-dimensional scene model and the initial AR data package associated with the target reality scene, so that the user can intuitively edit at least one virtual object associated with the initial AR data package through the three-dimensional scene model according to their own needs;
- the updated AR data package is obtained in this way.
- when the augmented reality experiencer performs the augmented reality experience in the target reality scene, the updated AR data package can be called directly for the augmented reality experience.
- the method of editing AR data packets can simplify the generation of AR data packets and provide convenient AR materials for subsequent display of augmented reality scenes.
- because a three-dimensional scene model for editing the first pose data of the virtual object is provided, it is convenient for users to intuitively edit the first pose data of the virtual object in the three-dimensional scene, so that the first pose data of the virtual object can be set in a personalized way based on user needs.
- the generation method provided in the embodiment of the present disclosure can also be generated during the experience of AR scenes, for example, when applied to an AR device for displaying AR scenes, the method further includes:
- At least one virtual object associated with the initial AR data packet is displayed.
- an application program for displaying and editing AR scene content can be installed in the AR device.
- the user can open the application program in the AR device to edit the AR scene content in the augmented reality scene.
- after the application program is opened, the display interface of the AR device can display editing options corresponding to at least one real scene.
- each real scene is associated with multiple initial AR data packages, and each initial AR data package has corresponding editing options; these initial AR data packages can be edited online.
- the display interface of the AR device may display the initial AR data packets corresponding to multiple real-world scenes. After it is detected that the editing option of any of the initial AR data packets is triggered, the initial AR data packet corresponding to the triggered editing option can be taken as the AR data packet associated with the aforementioned target reality scene.
- the embodiments of the present disclosure provide a solution for editing AR scene content in the process of experiencing an augmented reality scene.
- the second pose data when the AR device currently photographs the real scene of the target may include the position and/or display angle of the display component used to display the virtual object.
- the concept of coordinate system is introduced here.
- the second pose data corresponding to the AR device may include but is not limited to at least one of the following:
- the coordinate position of the display component of the AR device in the world coordinate system; the angle between the display component of the AR device and each coordinate axis of the world coordinate system; or both the coordinate position of the display component in the world coordinate system and the angles between it and each coordinate axis.
- the display component of the AR device specifically refers to the component used to display virtual objects in the AR device.
- the corresponding display component may be a display screen.
- the corresponding display component may be a lens for displaying virtual objects.
- the second pose data corresponding to the AR device can be obtained in many ways.
- the second pose data of the AR device can be determined through the pose sensor on the AR device;
- if the AR device is configured with an image acquisition component, such as a camera, the second pose data can be determined from the target reality scene image collected by the camera.
- the pose sensor may include an angular velocity sensor used to determine the shooting angle of the AR device, such as a gyroscope or an inertial measurement unit (IMU); it may include a positioning component used to determine the shooting position of the AR device, for example a positioning component based on the Global Positioning System (GPS), the Global Navigation Satellite System (GLONASS), or wireless fidelity (WiFi) positioning technology; or it may include both the angular velocity sensor used to determine the shooting angle of the AR device and the positioning component used to determine its shooting position.
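A simplified sketch of assembling second pose data from those sensors (drift-free integration is assumed for brevity; real IMU processing involves filtering and drift correction):

```python
def integrate_gyro(initial_angle_deg, samples_deg_s, dt_s):
    """Estimate the device's shooting angle by integrating angular-velocity
    samples from the gyroscope/IMU (simplified: no drift correction)."""
    angle = initial_angle_deg
    for w in samples_deg_s:
        angle = (angle + w * dt_s) % 360.0
    return angle

def second_pose(position_fix, angle_deg):
    """Combine a positioning-component fix (GPS/GLONASS/WiFi position of the
    display component) with the integrated shooting angle into the device's
    second pose data. Field names are illustrative."""
    return {"position": position_fix, "angle": angle_deg}
```

This mirrors the three variants listed above: angle only, position only, or both combined into one pose record.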
- the second pose data corresponding to the AR device may be determined through the target reality scene image and a pre-stored first neural network for positioning.
- the first neural network may be trained based on multiple sample pictures obtained by photographing the real scene of the target in advance, and the corresponding second pose data when each sample picture is photographed.
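As an illustrative stand-in for that trained network (nearest-neighbour retrieval over precomputed image features rather than a real neural regressor; feature extraction itself is out of scope here):

```python
def estimate_second_pose(query_feature, samples):
    """Return the second pose data of the pre-photographed sample whose image
    feature is closest to the captured target-scene image's feature.
    `samples` is a list of {"feature": ..., "pose": ...} records, standing in
    for the (sample picture, second pose data) training pairs."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(samples, key=lambda s: sq_dist(query_feature, s["feature"]))
    return best["pose"]
```

A real implementation would train the first neural network on the sample pictures and their recorded second pose data, then regress a pose for any new image; the retrieval above only conveys the input/output relationship.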
- when determining the second pose data corresponding to the AR device based on the target reality scene image captured by the AR device, as shown in Figure 3, the AR device displays information used to prompt the user that the editing state has been entered and positioning has started, for example information that prompts the user to capture an image of the target reality scene for positioning.
- after the second pose data of the AR device currently shooting the target reality scene is acquired, the at least one virtual object associated with the initial AR data packet may be displayed in the target reality scene based on the second pose data corresponding to the AR device and the initial first pose data of the at least one virtual object.
- while displaying the virtual object, a pose data display interface for the at least one virtual object can also be displayed; FIG. 4 is a schematic display diagram of the virtual object Tang Sancai (a tri-colored glazed horse), with the pose data display interface for the virtual object in the display area.
- the pose data display interface displays the initial first pose data of the virtual object Tang Sancai horse in the target reality scene, mainly including the coordinate information of the Tang Sancai in the target reality scene.
- the pose data display interface shown on the left side of FIG. 4 contains "My Map Coordinates", which can indicate the coordinate information of the display component of the AR device in the coordinate system where the target reality scene is located; the model list contains the coordinate information of multiple virtual objects in the target reality scene, for example the coordinate information of the virtual objects "Tang Sancai horse" and "Stone Lion" respectively. Furthermore, other virtual objects, and their coordinate information in the target reality scene, can be added.
- the method when obtaining update data of at least one virtual object associated with the initial AR data packet, the method includes:
- update data for the displayed at least one virtual object includes the first pose data of the at least one virtual object in the target reality scene.
- the content edited by the editing operation can be obtained and used as the update data; for example, a pose data editing page on the AR device can display the pose data edit bar of the virtual object,
- and the user can edit the first pose data of the virtual object in the target reality scene in the pose data edit bar of the virtual object.
- the editing of the pose data of the virtual object is completed in the world coordinate system where the target reality scene is located; therefore, after the editing is completed, the AR device can obtain the first pose data of the virtual object in the target reality scene.
- Fig. 5 shows the coordinate information of the currently displayed virtual object "Tang Sancai Horse" in the target reality scene, as well as the interactive data of the virtual object in the target reality scene; the interactive data will be described later. The right side of Fig. 5 shows the interface for editing the first pose data of the currently displayed virtual object in the target reality scene. Through this interface, the coordinate information of the virtual object in the coordinate system of the target reality scene can be edited, the size ratio of the virtual object can be edited, and the angle between the virtual object and each of the three coordinate axes of the coordinate system can be edited.
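- The pose editing described above can be sketched as follows. This is a minimal illustration only; the class and field names (`FirstPoseData`, `position`, `deflection`, `scale`, `apply_edit`) are assumptions for the sketch, not part of the disclosed implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FirstPoseData:
    """Pose of a virtual object in the world coordinate system of the target reality scene."""
    position: tuple = (0.0, 0.0, 0.0)    # coordinates on the three axes
    deflection: tuple = (0.0, 0.0, 0.0)  # angles with the three coordinate axes, degrees
    scale: float = 1.0                   # size ratio

def apply_edit(pose: FirstPoseData, **edits) -> FirstPoseData:
    """Return a new pose with the edited fields replaced, leaving the original untouched."""
    allowed = {"position", "deflection", "scale"}
    unknown = set(edits) - allowed
    if unknown:
        raise ValueError(f"unsupported edit fields: {sorted(unknown)}")
    return FirstPoseData(
        position=edits.get("position", pose.position),
        deflection=edits.get("deflection", pose.deflection),
        scale=edits.get("scale", pose.scale),
    )
```

- Keeping the pose immutable and returning a new record mirrors how an edit bar can preview a change before it is committed to the AR data packet.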
- After acquiring the second pose data corresponding to the AR device, at least one virtual object associated with the initial AR data packet can be displayed based on the second pose data, which makes it convenient for the user to intuitively edit and adjust the first pose data of the virtual object in the target reality scene as required, thereby simplifying the generation process of the AR scene content. Because the AR data packet can be intuitively adjusted during the experience of the augmented reality scene, the adjusted AR data packet can be made more accurate.
- In some embodiments, obtaining the update data of the displayed at least one virtual object includes: obtaining the first pose data, where the first pose data includes at least one of the following: position coordinates, deflection angle, and size information in the coordinate system where the target reality scene is located. Exemplarily, editing operations on the position coordinates, deflection angle, and size information of the displayed at least one virtual object in the target reality scene can be obtained, so as to obtain the first pose data of the at least one virtual object in the target reality scene.
- Exemplarily, the pose data editing interface may include editing of the position coordinates of the "Tang Sancai Horse" in the target reality scene, editing of the size information of the "Tang Sancai Horse" in the target reality scene, as well as editing of the deflection angle of the "Tang Sancai Horse" in the world coordinate system where the target reality scene is located. In this way, a pose data editing interface for editing the first pose data of a virtual object is provided, so that the user can intuitively adjust the first pose data of the virtual object in the target reality scene and set the first pose data in a personalized manner based on user needs.
- In some embodiments, the aforementioned update data may also include interaction data of at least one virtual object in the target reality scene, or interaction data of at least one virtual object in the three-dimensional scene model that represents the target reality scene. That is, the update data provided by the embodiments of the present disclosure may also include interaction data corresponding to at least one virtual object. The interaction data includes at least one of the following: at least one state trigger condition, a presentation state corresponding to each state trigger condition, and the number of recurring display times of the virtual object after being triggered for display. The interactive data editing interface displayed on the AR device, or the interactive data editing interface displayed on the AR generator, can be used to obtain interactive data editing operations for at least one virtual object. The state trigger condition of the virtual object obtained in this way may subsequently be used to trigger the virtual object to be presented according to the presentation state corresponding to the state trigger condition, and/or according to the number of recurring display times after being triggered by the state trigger condition.
- Exemplarily, the interactive data editing interface shown in Figure 6(b) is displayed, which specifically refers to the interactive data editing interface of the virtual animation. The interactive data editing interface may include editing areas for the state trigger condition of at least one virtual object, the presentation state corresponding to the state trigger condition, and the number of recurring display times of the virtual object after being triggered for display. The editing operation for the state trigger condition of the virtual object can be obtained through the editing area corresponding to the state trigger condition, so as to obtain the state trigger condition corresponding to the virtual object; after obtaining the state trigger condition, the subsequent AR device can display the virtual object in the target reality scene when the state trigger condition is triggered. The editing operation for the number of recurring display times of the virtual object after being triggered for display can be obtained through the editing area corresponding to the recurrence number. For example, if the obtained number of display times is n, it indicates that the current virtual animation is displayed n times in a loop after trigger condition 1 is triggered.
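- The interaction data record described above can be sketched as a small data structure. This is an illustrative sketch; the names (`InteractionData`, `on_trigger`) and the string encodings of conditions and states are assumptions, not the disclosed format.

```python
from dataclasses import dataclass

@dataclass
class InteractionData:
    trigger_condition: str   # e.g. "click_model", "slide_model", "gesture"
    presentation_state: str  # e.g. "show" or "hide"
    loop_count: int = 1      # recurring display times after being triggered

def on_trigger(data: InteractionData, fired_condition: str):
    """Return a display instruction when the fired condition matches, otherwise None."""
    if fired_condition != data.trigger_condition:
        return None
    return {"state": data.presentation_state, "repeat": data.loop_count}
```

- For the "display n times in a loop" example in the text, `loop_count` would be set to n by the editing area for the recurrence number.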
- In some embodiments, when the interactive data includes multiple state trigger conditions, the interactive data also includes the priority of each state trigger condition. Exemplarily, when the interactive data is edited through the interactive data editing interface displayed by the AR device, a page such as the one shown in Figure 5 can be displayed. The left side of Figure 5 can display the coordinate information of the currently displayed virtual object "Tang Sancai Horse" in the coordinate system where the target reality scene is located, and the interactive data corresponding to the virtual object "Tang Sancai Horse" (not shown in Figure 5), including the trigger condition, display state, whether to display in a loop, priority, and the like. In one case, the interactive data contains multiple state trigger conditions; in another case, the initial AR data packet is associated with multiple virtual objects, and each virtual object corresponds to a state trigger condition. A priority can be set for the state trigger condition corresponding to each virtual object. When the subsequent AR device obtains multiple state trigger conditions at the same time, the virtual object corresponding to the state trigger condition with the highest priority is triggered, and the virtual object is displayed according to the presentation state corresponding to that state trigger condition, and/or according to the number of recurring display times after the virtual object is triggered.
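- The priority rule for simultaneously satisfied conditions can be sketched as follows. This is a minimal illustration under the assumption that a smaller number means a higher priority; the function and parameter names are illustrative.

```python
def select_by_priority(conditions, fired):
    """
    conditions: mapping of condition name -> (priority, virtual_object),
    where a smaller priority number means a higher priority.
    fired: the condition names satisfied at the same moment.
    Returns the virtual object of the highest-priority fired condition, or None.
    """
    fired = set(fired)
    candidates = [(prio, obj) for name, (prio, obj) in conditions.items() if name in fired]
    if not candidates:
        return None
    return min(candidates, key=lambda pair: pair[0])[1]
```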
- The state trigger conditions mentioned here can include, but are not limited to, at least one of the following: a click model trigger condition, a slide model trigger condition, a distance recognition trigger condition, a designated area trigger condition, a gesture recognition trigger condition, a face-current-model trigger condition, and a face-designated-model trigger condition.

- The click model trigger condition refers to the state trigger condition under which the virtual object A is displayed in the AR device after the three-dimensional model of the virtual object A displayed in the AR device is clicked; exemplarily, the AR device can display the three-dimensional model of a virtual object in a to-be-displayed state, and after detecting a click operation on the three-dimensional model, display the virtual object corresponding to the three-dimensional model.

- The slide model trigger condition refers to the state trigger condition for the virtual object A that is triggered by sliding the three-dimensional model of the virtual object A in a set manner in the AR device. Exemplarily, it can be set that a right sliding operation on the three-dimensional model of the virtual object A in the AR device triggers the virtual object A to be displayed, and a left sliding operation on the three-dimensional model of the virtual object A in the AR device triggers the virtual object A to disappear.

- The distance recognition trigger condition refers to the state trigger condition for the virtual object A that is triggered when the distance between the location coordinates of the AR device and a set location point meets a set distance.

- The designated area trigger condition refers to the state trigger condition for the virtual object A that is triggered after the AR device enters a designated area.

- The gesture recognition trigger condition refers to the state trigger condition for the virtual object A that is triggered by a set gesture.

- The face-current-model trigger condition refers to the state trigger condition for the virtual object A that is triggered when the shooting angle of the AR device faces the position where the virtual object A is located.

- The face-designated-model trigger condition refers to the state trigger condition for the virtual object A that is triggered when the AR device faces the position where a specific virtual object is located.
- In some embodiments, a trigger logic chain can be formed from these state trigger conditions. Exemplarily, the AR data packet contains three virtual objects, and the state trigger conditions corresponding to the first virtual object, the second virtual object, and the third virtual object are recorded as state trigger condition 1, state trigger condition 2, and state trigger condition 3. When the sequence of the formed trigger logic chain is state trigger condition 1, state trigger condition 2, and state trigger condition 3, then when the user triggers state trigger condition 1, state trigger condition 2, and state trigger condition 3 in turn, virtual object 1, virtual object 2, and virtual object 3 can be displayed in sequence.
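- The trigger logic chain in the example above can be sketched as a small state machine: the chain only advances when the next expected condition fires. The class name and method names are illustrative assumptions.

```python
class TriggerLogicChain:
    """Displays virtual objects in order as the chained conditions fire in sequence."""

    def __init__(self, chain):
        # chain: list of (condition_name, virtual_object) in the required order
        self.chain = list(chain)
        self.next_index = 0
        self.displayed = []

    def fire(self, condition_name):
        """Advance the chain only when the next expected condition fires."""
        if (self.next_index < len(self.chain)
                and condition_name == self.chain[self.next_index][0]):
            self.displayed.append(self.chain[self.next_index][1])
            self.next_index += 1
        return list(self.displayed)
```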
- In this way, an interactive data editing interface for editing the interactive data of a virtual object is provided, supporting editing of the trigger mode and display form of the virtual object.
- In some embodiments, the generating method provided in the embodiments of the present disclosure further includes: obtaining third pose data corresponding to at least one virtual object model; and updating the initial AR data packet to generate the updated AR data packet includes: updating the initial AR data packet based on the update data and the third pose data to generate the updated AR data packet.

- The third pose data corresponding to the virtual object model may include, but is not limited to, data that can represent the position and/or posture of the virtual object model when presented in the target reality scene, or data that can represent the position and/or posture of the virtual object model in the three-dimensional scene model, and may include the position coordinates, deflection angle, and size information of the virtual object model in the coordinate system where the target reality scene or the three-dimensional scene model is located. The third pose data of the virtual object model in the target reality scene or the three-dimensional scene model can represent the third pose data, in the target reality scene, of the target real object corresponding to the virtual object model.

- The display form of the virtual object model when displayed in the AR device can also be edited. For example, the display form of the virtual object model in the AR device can be edited to an occlusion form, which is processed transparently when presented in the AR device. In this way, the virtual object model can be used to occlude the part of the virtual object that needs to be occluded: when the virtual object is displayed, the part that needs to be occluded is not rendered, and the occluded area is processed transparently through the virtual object model to achieve the occlusion effect.

- In this way, the virtual object model used to present the occlusion effect can be edited, and the real third pose data of the virtual object model in the target reality scene can be restored, which makes it convenient to provide a more realistic display effect when the virtual object is subsequently displayed on the AR device.
- In some embodiments, the at least one virtual object includes at least one first virtual object. When acquiring the update data of the at least one virtual object associated with the initial AR data packet, the method includes: acquiring update data of the at least one first virtual object included in the initial AR data packet.
- In some embodiments, the at least one virtual object includes at least one second virtual object. When obtaining the update data of the at least one virtual object associated with the initial AR data packet, the method includes: obtaining at least one second virtual object associated with the initial AR data packet from a pre-established material library, and obtaining update data of the at least one second virtual object. In addition, when the update data of the at least one virtual object associated with the initial AR data packet is acquired, the update data of the at least one first virtual object contained in the initial AR data packet can be acquired at the same time as the update data of the at least one second virtual object associated with the initial AR data packet is obtained from the pre-established material library. The pre-established material library may contain various virtual objects such as static virtual models, animations, and videos, and the user can select, from the pre-established material library, the virtual objects to be added to the initial AR data packet. The second virtual object may be the same as or different from the first virtual object, and the initial AR data packet may be updated through the update data of the first virtual object and/or the second virtual object. In this way, the virtual object associated with the initial AR data packet and the update data of the virtual object can be acquired in various ways, so that the update data of the virtual object can be flexibly acquired.
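- The two update paths above (editing first virtual objects already in the packet, and adding second virtual objects selected from a material library) can be sketched together. The packet layout used here (a dictionary with an `"objects"` map) and all names are assumptions for illustration only.

```python
def update_ar_packet(initial_packet, first_updates=None, material_library=None,
                     selected_ids=None):
    """
    initial_packet: {"objects": {object_id: {...object data...}}}
    first_updates: per-id update dicts for first virtual objects already in the packet.
    material_library / selected_ids: source and selection of second virtual objects.
    Returns a new updated packet; the initial packet is left unchanged.
    """
    objects = {oid: dict(data) for oid, data in initial_packet.get("objects", {}).items()}
    # update first virtual objects contained in the initial packet
    for oid, upd in (first_updates or {}).items():
        if oid in objects:
            objects[oid].update(upd)
    # add second virtual objects selected from the pre-established material library
    for oid in (selected_ids or []):
        if material_library and oid in material_library:
            objects[oid] = dict(material_library[oid])
    return {"objects": objects}
```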
- In some embodiments, the initial AR data packet may or may not contain virtual objects. Therefore, when displaying at least one virtual object associated with the initial AR data packet, the displayed at least one virtual object may be the at least one first virtual object contained in the initial AR data packet, may be at least one second virtual object associated with the initial AR data packet obtained from the pre-established material library, or may be, at the same time, the at least one first virtual object contained in the initial AR data packet and the at least one second virtual object associated with the initial AR data packet obtained from the pre-established material library.
- In some embodiments, the generating method provided in the embodiments of the present disclosure further includes: setting status information indicating whether the updated AR data packet is enabled. Exemplarily, such status information can be set in the interface shown in Figure 2a, where an "Enable" button can be set under each AR data packet (not shown in Figure 2a). If the "Enable" button under an AR data packet is triggered, it means that the updated AR data packet corresponding to that AR data packet can be downloaded and experienced by an AR device after being uploaded to the server. In this way, the generated updated AR data packet can be published to the server and downloaded and used by other users; for example, it can be downloaded and edited by other AR devices, and at the same time it can be downloaded and experienced by AR devices.
- In some embodiments, the generating method provided in the embodiments of the present disclosure further includes: obtaining label information of the updated AR data packet, and sending the updated AR data packet and the label information to the server. That is, after the initial AR data packet is edited and the updated AR data packet is obtained, the updated AR data packet can be sent to the server. Because multiple AR data packets associated with the target reality scene may be obtained, after the user triggers the upload-experience-packet operation on the page shown in Figure 8(a), the page shown in Figure 8(b) can be displayed in order to determine the target updated AR data packet to be uploaded; on this page, the user can fill in the label information corresponding to the target updated AR data packet to be uploaded. The label information may include the name of the target reality scene, the name of the floor, the name of the experience packet, the subject, and remarks. Filling in the label information is convenient, on the one hand, for determining the updated AR data packet to be uploaded, and, on the other hand, for the server to save the uploaded updated AR data packet based on the label information, so that users of AR devices can conveniently download the AR data packet on the AR device for experience. In this way, the generated updated AR data packet can be published to the server and downloaded and used by other users; for example, it can be downloaded and edited by other AR generators, and downloaded and experienced by AR devices.
- In some embodiments, sending the updated AR data packet and the label information to the server includes: sending the updated AR data packet in the enabled state and the label information to the server, so that AR data packets in the enabled state can be used. Exemplarily, the status information indicating whether the updated AR data packet is enabled can be set in Figure 8(a), where an "Enable" button is set under each AR data packet.
- The display process can be applied to an AR device. The AR device here may be the same as or different from the above AR device used to generate AR data packets, which is not limited here. As shown in Figure 9, the process includes the following S201 to S203:

- S201: In response to a second trigger operation, obtain an AR data packet associated with the target reality scene indicated by the second trigger operation; the AR data packet includes first pose data corresponding to at least one virtual object.

- AR devices may include, but are not limited to, AR glasses, tablets, smart phones, smart wearable devices, and other devices with display functions and data processing capabilities. These AR devices may be installed with an application for displaying AR scene content, and users can experience AR scene content in this application. The AR device can display at least one reality scene and the AR data packets associated with each reality scene; the second trigger operation can be a trigger operation on an AR data packet associated with the displayed target reality scene. In one embodiment, as shown in Figure 2a, among the AR data packets associated with the reality scenes displayed by the AR device, the user can click an AR data packet associated with the target reality scene "XXX Building - 15th Floor", for example the AR data packet "[Example] Sci-Fi"; the AR device can then detect that there is a second trigger operation on this AR data packet and can request the server to obtain the AR data packet associated with the target reality scene "XXX Building - 15th Floor". The first pose data corresponding to the at least one virtual object contained in the AR data packet may be the first pose data in the target reality scene, or may be the first pose data in the three-dimensional scene model of the target reality scene; for details, please refer to the above description, which is not repeated here.
- S202: Determine presentation special effect information of the at least one virtual object based on second pose data of the AR device when currently shooting the target reality scene and the first pose data corresponding to the at least one virtual object in the AR data packet.

- The second pose data of the AR device when currently shooting the target reality scene, and the method of obtaining the second pose data corresponding to the AR device, are described in detail above and are not repeated here. In some embodiments, information prompting the user to use the AR device to shoot may be displayed on the AR device. In this way, the presentation special effect information of the virtual object in the AR device can be determined. For the case where the first pose data corresponding to the virtual object is the first pose data in the three-dimensional scene model of the target reality scene, because the three-dimensional scene model and the target reality scene are presented at a 1:1 scale in the same coordinate system, the first pose data set in advance for the virtual object when presented in the three-dimensional scene model and the second pose data corresponding to the AR device can be used to determine the presentation special effect information of the virtual object in the AR device.
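- Because the three-dimensional scene model is registered 1:1 with the real scene, determining where a virtual object appears relative to the device reduces to a coordinate transform from the world frame into the device frame. A simplified two-dimensional sketch (translation plus yaw only; function and parameter names are assumptions) is:

```python
import math

def object_in_device_frame(object_pos, device_pos, device_yaw_deg):
    """
    Transform an object's world-coordinate position (x, y) into the device
    frame, given the device position and its yaw (rotation about the vertical
    axis). The first pose data from the packet is used directly as world
    coordinates because the scene model matches the real scene 1:1.
    """
    dx = object_pos[0] - device_pos[0]
    dy = object_pos[1] - device_pos[1]
    yaw = math.radians(device_yaw_deg)
    # rotate the world offset by the inverse of the device rotation
    x = math.cos(yaw) * dx + math.sin(yaw) * dy
    y = -math.sin(yaw) * dx + math.cos(yaw) * dy
    return (x, y)
```

- A full implementation would use a 3D rigid transform (rotation matrix or quaternion plus translation), but the structure of the computation is the same.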
- S203: Display the at least one virtual object through the AR device based on the presentation special effect information.

- After the presentation special effect information of the at least one virtual object in the target reality scene is obtained, the at least one virtual object can be displayed by the AR device according to the presentation special effect information. In this way, the AR data packet associated with the second trigger operation can be obtained in response to the second trigger operation; further, the presentation special effect information of the virtual object in the target reality scene can be determined based on the second pose data corresponding to the AR device and the first pose data set in advance for the virtual object in the AR data packet; and finally, a realistic augmented reality scene effect is displayed in the AR device.
- In some embodiments, the AR data packet further includes third pose data corresponding to at least one virtual object model; the virtual object model represents a target object in the target reality scene. In this case, the presentation special effect information of the at least one virtual object is determined based on the second pose data corresponding to the AR device, the first pose data corresponding to the at least one virtual object in the AR data packet, and the third pose data corresponding to the virtual object model.

- Exemplarily, according to the third pose data of the virtual object model in the target reality scene, the second pose data corresponding to the AR device, and the first pose data corresponding to the virtual object, the virtual object is occluded by the physical object corresponding to the virtual object model: the occluded part of the virtual object is not rendered, the virtual object model can be processed into the occlusion form, and the virtual object model in the occlusion form is made transparent, so that the user will not see the transparent virtual object model in the AR device, which shows the rendering effect that the virtual object is occluded by the physical object in the target reality scene. Because the third pose data of the virtual object model can be used to restore the real third pose data of the virtual object model in the target reality scene, the occlusion effect of the virtual object can be realized through the virtual object model, so as to show a more realistic augmented reality scene effect in the AR device.
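- The occlusion logic can be illustrated with a deliberately simplified, per-object depth test (a real renderer would do this per pixel via a depth buffer). All names are assumptions, and the entries are assumed to overlap on screen:

```python
def visible_objects(virtual_objects, occlusion_models):
    """
    Simplified occlusion test: each entry is (name, depth), where depth is the
    distance from the AR device along the viewing direction. An occlusion model
    closer to the device than a virtual object hides that object; the occlusion
    model itself is rendered transparently, so it never appears in the output.
    """
    visible = []
    for name, depth in virtual_objects:
        occluded = any(occ_depth < depth for _, occ_depth in occlusion_models)
        if not occluded:
            visible.append(name)
    return visible
```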
- In some embodiments, the AR data packet further includes interaction data corresponding to at least one virtual object; the interaction data includes at least one of the following: at least one state trigger condition, a presentation state corresponding to each state trigger condition, and the number of recurring display times of the virtual object after being triggered for display. The state trigger conditions included in the interaction data, the presentation state corresponding to each state trigger condition, and the number of recurring display times of the virtual object after being triggered for display are explained in detail above and are not repeated here. The state trigger conditions can include multiple conditions, such as at least one of the click model trigger condition, slide model trigger condition, distance recognition trigger condition, designated area trigger condition, gesture recognition trigger condition, face-current-model trigger condition, and face-designated-model trigger condition. The explanation of each trigger condition is detailed above and is not repeated here. It can be seen that the state trigger conditions include two types: one type is the state trigger conditions related to the second pose data corresponding to the AR device, such as at least one of the distance recognition trigger condition, designated area trigger condition, face-current-model trigger condition, and face-designated-model trigger condition; the other type is the state trigger conditions irrelevant to the second pose data corresponding to the AR device, such as at least one of the click model trigger condition, slide model trigger condition, and gesture recognition trigger condition. The interaction with the virtual object is considered below for these two types respectively.
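- The two-type classification above can be expressed directly as a lookup. The string identifiers are illustrative names for the conditions in the text, not a disclosed encoding:

```python
# Illustrative identifiers for the trigger conditions described in the text.
SECOND_POSE_DEPENDENT = {"distance_recognition", "designated_area",
                         "face_current_model", "face_designated_model"}
SECOND_POSE_INDEPENDENT = {"click_model", "slide_model", "gesture_recognition"}

def condition_type(name):
    """Classify a state trigger condition by its dependence on the second pose data."""
    if name in SECOND_POSE_DEPENDENT:
        return "second_type"  # related to the second pose data of the AR device
    if name in SECOND_POSE_INDEPENDENT:
        return "first_type"   # user interaction, independent of the device pose
    raise ValueError(f"unknown trigger condition: {name}")
```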
- In some embodiments, the display method provided in the embodiments of the present disclosure further includes: in response to detecting an interactive operation that meets a first type of state trigger condition, updating the presentation special effect information of the at least one virtual object; and displaying the at least one virtual object through the AR device includes: displaying the at least one virtual object through the AR device based on the updated presentation special effect information.

- The interactive operation may be an operation for triggering the update of the presentation special effect information of the virtual object, where the first type of state trigger condition may be the aforementioned state trigger conditions that are irrelevant to the second pose data corresponding to the AR device, such as at least one of the click model trigger condition, slide model trigger condition, and gesture recognition trigger condition. When the AR device detects the presence of the interactive operation, for example, when it is detected that the user performs a sliding operation on the virtual object A in the AR device, the presentation state of the virtual object A corresponding to the sliding operation can be obtained, and/or the number of recurring display times of the virtual object A after being triggered for display under the sliding operation, and the presentation special effect information of the virtual object A is then updated based on this.

- The manner of updating the presentation special effect information of the virtual object can be divided into three situations. In the first situation, the presentation special effect information of the at least one virtual object is updated solely based on the presentation state corresponding to the at least one virtual object under the first type of state trigger condition. In the second situation, the presentation special effect information of the at least one virtual object is updated solely based on the number of recurring display times of the at least one virtual object after being triggered for display under the first type of state trigger condition. In the third situation, the presentation special effect information of the at least one virtual object is updated based on both the presentation state corresponding to the at least one virtual object under the first type of state trigger condition and the number of recurring display times of the at least one virtual object after being triggered for display under the first type of state trigger condition.
- Exemplarily, if the initial presentation special effect information of the virtual object A is a virtual vase displayed on a table, the interactive operation is a sliding operation acting on the virtual object, and the presentation state of the virtual object A corresponding to the sliding operation is "not displayed", then the presentation special effect information of the virtual vase can be changed from displayed to disappeared.

- Exemplarily, if the virtual object A is a virtual tabby cat that moves along the wall from position A to position B, the interactive operation is a click operation on the three-dimensional model corresponding to the virtual tabby cat, and the number of recurring display times after the virtual object is triggered for display under the click operation is 5, then when the user's click operation on the virtual tabby cat is detected, the virtual tabby cat can be triggered to recur 5 times in a display manner from position A to position B.

- Exemplarily, if the virtual object A is a virtual light in a lantern that flashes once when displayed, the interactive operation is a gesture recognition operation, the presentation state of the virtual light corresponding to the gesture recognition operation is "displayed", and the number of recurring display times is 5, then the presentation special effect information of the virtual light can be updated to flash 5 times. In this way, the augmented reality experiencer can show a set gesture to the AR device, thereby triggering the virtual object to be displayed according to the special effect corresponding to the gesture. This method improves the interactivity between the augmented reality experiencer and the virtual object in the target reality scene and enhances the user experience.
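- The three update situations (state only, loop count only, or both) can be sketched with a single helper; the record layout and names are illustrative assumptions:

```python
def update_presentation_effect(effect, state=None, loop_count=None):
    """
    Update a virtual object's presentation special-effect record.
    Situation 1: only a presentation state is given; situation 2: only a loop
    count; situation 3: both. Unspecified fields keep their previous values.
    """
    updated = dict(effect)
    if state is not None:
        updated["state"] = state
    if loop_count is not None:
        updated["loops"] = loop_count
    return updated
```

- With this helper, the vase example corresponds to situation 1 (`state="not_displayed"`), and the lantern example corresponds to situation 3 (`state="displayed"`, `loop_count=5`).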
- In some embodiments, the display method provided by the embodiments of the present disclosure further includes: when the second pose data meets a second type of state trigger condition, updating the presentation special effect information of the at least one virtual object based on the presentation state corresponding to the at least one virtual object under the second type of state trigger condition, and/or the number of recurring display times of the at least one virtual object after being triggered for display under the second type of state trigger condition; and displaying the at least one virtual object through the AR device includes: displaying the at least one virtual object through the AR device based on the updated presentation special effect information.

- The second type of state trigger conditions may be the aforementioned state trigger conditions related to the second pose data corresponding to the AR device, such as at least one of the distance recognition trigger condition, designated area trigger condition, face-current-model trigger condition, and face-designated-model trigger condition. Whether the second pose data corresponding to the AR device meets the second type of state trigger condition covers multiple situations and can be determined by the position and/or display angle of the display component of the AR device: it can be determined solely by the position of the display component of the AR device, solely by the display angle of the AR device, or by combining the position and display angle of the display component of the AR device.
- the second pose data corresponding to the AR device meets the second type of state trigger condition; or ,
- the display angle of the AR device faces the position of the virtual object A
- the distance between the location coordinates and the set location point meets the set distance
- the display angle of the AR device faces the position displayed by the virtual object A
- the second pose data corresponding to the AR device meets the second category
- the presentation state corresponding to the at least one virtual object under the second type of state triggering condition may be used, and/or, at least one virtual object The number of recurring display times of the object after the second type of state triggering condition is triggered, and the presentation special effect information of at least one virtual object is updated.
- the specific update method is similar to the update method based on interactive operations above. Go into details.
- the virtual object when it is detected that the second pose data corresponding to the AR device meets the set state trigger condition, the virtual object is displayed according to the display mode of the virtual object under the set state trigger condition.
- Exemplarily, when the AR device is close to the set position and the display angle of the AR device faces the position where virtual object A is located, virtual object A is triggered to be displayed according to the presentation special effect corresponding to the second pose data of the AR device.
- This process can make the augmented reality scene effect more realistic, thereby enhancing the user experience.
- S301: In response to the first trigger operation, obtain a three-dimensional scene model of the target reality scene indicated by the first trigger operation, and an initial AR data packet associated with the target reality scene;
- S302: Obtain update data of at least one virtual object associated with the initial AR data packet; the update data includes first pose data of the at least one virtual object in the three-dimensional scene model;
- In response to the first trigger operation, the user can be provided with the three-dimensional scene model and the initial AR data packet associated with the target reality scene, so that, according to his own needs, the user can intuitively edit and update at least one virtual object associated with the initial AR data packet through the three-dimensional scene model, and obtain the updated AR data packet in this way.
- When the augmented reality experiencer performs the augmented reality experience in the target reality scene, the updated AR data packet can be directly called for the augmented reality experience.
- The method of remotely editing AR data packets provided in the present disclosure can simplify the generation of AR data packets, and provides convenient AR material for the subsequent display of augmented reality scenes.
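The remote editing flow of steps S301–S302 plus the update step can be sketched as below; the callables standing in for the scene store and the editing interface are hypothetical placeholders, not APIs from the disclosure:

```python
def generate_ar_package(get_scene_model, get_initial_package, edit_objects):
    """Sketch of the generation flow: fetch the 3D scene model and the
    initial AR data packet for the target reality scene, collect update
    data (first pose data) for the associated virtual objects, and
    produce the updated packet."""
    # S301: respond to the first trigger operation.
    model = get_scene_model()
    package = get_initial_package()
    # S302: obtain update data in the scene model's coordinate frame.
    updates = edit_objects(model, package["objects"])
    # Update the initial packet to obtain the updated AR data packet.
    for name, pose in updates.items():
        package["objects"][name]["first_pose"] = pose
    return package
```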
- an embodiment of the present disclosure also provides an AR scene content display system 400, which includes an AR generator 401, a server 402, and an AR device 403.
- The AR generator 401 is in communication connection with the server 402, and the AR device 403 is in communication connection with the server 402;
- The AR generating terminal 401 is configured to, in response to the first trigger operation, obtain the three-dimensional scene model of the target reality scene indicated by the first trigger operation and the initial augmented reality (AR) data packet associated with the target reality scene; obtain update data of at least one virtual object associated with the initial AR data packet, the update data including the first pose data of the at least one virtual object in the three-dimensional scene model; update the initial AR data packet based on the update data of the at least one virtual object; and send the updated AR data packet to the server;
- the server 402 is configured to receive the updated AR data packet and forward the updated AR data packet to the AR device;
- The AR device 403 is configured to, in response to the second trigger operation, obtain the updated AR data packet stored in the server and associated with the target reality scene indicated by the second trigger operation; determine the presentation special effect information of the at least one virtual object based on the second pose data when the AR device currently photographs the target reality scene and the first pose data of the at least one virtual object in the three-dimensional scene model in the updated AR data packet; and display the at least one virtual object through the AR device based on the presentation special effect information.
- the display system provided by the embodiments of the present disclosure can remotely edit and generate AR data packets, and publish the generated AR data packets to the server for augmented reality experience on the AR device side.
- On the AR generator side, a simple and convenient way to generate an AR data packet can be provided, which is convenient for users to edit; the server can save the AR data packet, which is convenient for the AR device to download it.
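A minimal sketch of the server 402 role described above — receiving updated AR data packets from the AR generator 401 and serving them to AR devices 403 — might look as follows; the class, its storage scheme, and the `enabled` flag handling are the editor's assumptions:

```python
class PackageServer:
    """Sketch of server 402: store updated AR data packets published by
    the generator side and serve them to devices on request."""

    def __init__(self):
        self._packages = {}

    def publish(self, scene_id, package, enabled=True):
        # Generator side: upload the updated packet, optionally with
        # state information indicating whether it is enabled.
        self._packages[scene_id] = {"package": package, "enabled": enabled}

    def fetch(self, scene_id):
        # Device side: download the packet for the target reality scene;
        # packets marked disabled are not served.
        entry = self._packages.get(scene_id)
        if entry and entry["enabled"]:
            return entry["package"]
        return None
```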
- S501: In response to the first trigger operation, obtain an initial AR data packet associated with the target reality scene indicated by the first trigger operation;
- In response to the first trigger operation, the AR data packet editing state can be entered, and after the second pose data corresponding to the AR device is obtained, the at least one virtual object associated with the initial AR data packet can be displayed based on that second pose data. This facilitates the user in intuitively editing and adjusting, in the AR device, the pose data of the virtual object in the target reality scene as required, thereby simplifying the generation process of the AR scene content and making the adjusted AR data packet more accurate.
- The embodiment of the present disclosure also provides a generating device corresponding to the AR scene content generating method described above. Since the principle by which the device solves the problem is similar to that of the method, the implementation of the device can refer to the implementation of the method, and repeated content will not be described again.
- an AR scene content generating apparatus 600 provided by an embodiment of the present disclosure includes:
- the first obtaining module 601 is configured to obtain the initial AR data packet associated with the target reality scene indicated by the first trigger operation in response to the first trigger operation;
- the second acquisition module 602 is configured to acquire update data of at least one virtual object associated with the initial AR data packet; the update data includes the first pose data corresponding to the at least one virtual object;
- the update module 603 is configured to update the initial AR data package based on the update data of at least one virtual object, and generate an updated AR data package.
- The first obtaining module 601 is further configured to: obtain the three-dimensional scene model of the target reality scene indicated by the first trigger operation.
- When the second acquiring module 602 is used to acquire the update data of the at least one virtual object associated with the initial AR data packet, it includes: displaying the loaded three-dimensional scene model; and
- acquiring update data of the at least one virtual object when placed in the three-dimensional scene model, where the update data includes the first pose data of the at least one virtual object placed in the three-dimensional scene model.
- the generating device further includes a display module 604, and the display module 604 is configured to:
- When the second acquiring module 602 is used to acquire the update data of the at least one virtual object associated with the initial AR data packet, it includes: acquiring update data for the displayed at least one virtual object, where the update data includes the first pose data of the at least one virtual object in the target reality scene.
- When the second acquiring module is used to acquire the update data for the displayed at least one virtual object, it includes:
- the first pose data includes at least one of the following: position coordinates, deflection angle, and size information in the coordinate system where the target reality scene is located.
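The three kinds of values that the first pose data may hold — position coordinates, deflection angle, and size information in the target reality scene's coordinate system — could be modeled as a small record; the field names and the helper method are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class FirstPoseData:
    """Illustrative container for a virtual object's first pose data in
    the coordinate system of the target reality scene."""
    position: tuple = (0.0, 0.0, 0.0)   # position coordinates (x, y, z)
    deflection_deg: float = 0.0         # deflection angle, in degrees
    scale: float = 1.0                  # size information

    def scaled(self, factor):
        # Return a copy with updated size information, leaving the
        # original pose data unchanged.
        return FirstPoseData(self.position, self.deflection_deg,
                             self.scale * factor)
```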
- the update data further includes interaction data corresponding to at least one virtual object
- When the second acquiring module is used to acquire the update data of the at least one virtual object associated with the initial AR data packet, it includes:
- the interaction data includes at least one of the following: at least one state triggering condition, a presentation state corresponding to each state triggering condition, and the number of recurring display times of the virtual object after being triggered and displayed.
- When the interactive data includes multiple state triggering conditions, the interactive data further includes: the priority of each state triggering condition.
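When several state trigger conditions fire at once, the per-condition priority can be used to pick which presentation state applies. A sketch under the editor's assumptions (dict-based interaction data, lower number meaning higher priority) follows:

```python
def select_presentation_state(interaction_data, triggered):
    """Pick the presentation state of the highest-priority triggered
    state trigger condition, or None if nothing is triggered.
    `interaction_data` is a list of dicts with keys 'condition',
    'presentation_state', 'priority', and 'loops' (recurring display
    count); the schema is illustrative, not from the disclosure."""
    candidates = [c for c in interaction_data if c["condition"] in triggered]
    if not candidates:
        return None
    # Lower numbers mean higher priority here (an assumed convention).
    best = min(candidates, key=lambda c: c["priority"])
    return best["presentation_state"]
```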
- the second obtaining module 602 is further configured to:
- When the update module 603 is used to update the initial AR data packet based on the update data of the at least one virtual object and generate the updated AR data packet, it includes:
- the initial AR data packet is updated to generate an updated AR data packet.
- the at least one virtual object includes at least one first virtual object
- When the second obtaining module 602 is used to obtain the update data of the at least one virtual object associated with the initial AR data packet, it includes:
- the at least one virtual object includes at least one second virtual object
- When the second obtaining module 602 is used to obtain the update data of the at least one virtual object associated with the initial AR data packet, it includes: obtaining at least one second virtual object associated with the initial AR data packet from a pre-established material library, and obtaining the update data of the at least one second virtual object.
- the generating device further includes a sending module 605.
- the sending module 605 is configured to:
- the sending module 605 is further configured to:
- the label information of the updated AR data package is obtained, and the label information is sent to the server.
- The embodiment of the present disclosure also provides a display device corresponding to the display method of the AR scene content described above. Since the principle by which the device solves the problem is similar to that of the method, the implementation of the device can refer to the implementation of the method, and repeated content will not be described again.
- FIG. 16 is a schematic diagram of an AR scene content display apparatus 700 provided by an embodiment of the present disclosure, which includes:
- the obtaining module 701 is configured to obtain an AR data packet associated with the target reality scene indicated by the second trigger operation in response to the second trigger operation; the AR data packet contains the first pose data corresponding to at least one virtual object;
- the determining module 702 is configured to determine the presentation special effect information of at least one virtual object based on the second pose data of the actual scene of the target currently photographed by the AR device and the first pose data corresponding to the at least one virtual object in the AR data packet;
- The display module 703 is configured to display at least one virtual object through the AR device based on the presentation special effect information.
- the AR data package further includes third pose data corresponding to at least one virtual object model; the virtual object model represents the target object in the target reality scene;
- When the determining module 702 is used to determine the presentation special effect information of the at least one virtual object based on the second pose data when the AR device currently photographs the target reality scene and the first pose data corresponding to the at least one virtual object in the AR data packet, it includes: determining the presentation special effect information of the at least one virtual object based on the second pose data, the first pose data corresponding to the at least one virtual object in the AR data packet, and the third pose data corresponding to the virtual object model.
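One way the third pose data can influence the presentation special effect is occlusion: the virtual object model stands in for a real target object, and the virtual object is hidden when that model lies between the AR device and the object. The sphere-based geometry below is a simplification assumed by the editor, not the method of the disclosure:

```python
import math

def object_visible(device_pos, object_pos, model_pos, model_radius):
    """Illustrative occlusion test: the virtual object is visible unless
    the virtual object model (approximated as a sphere of the given
    radius) intersects the line of sight from the device to the object."""
    # Vector from device to virtual object (line of sight).
    v = [o - d for o, d in zip(object_pos, device_pos)]
    w = [m - d for m, d in zip(model_pos, device_pos)]
    vv = sum(c * c for c in v) or 1.0
    # Parameter of the closest point on the sight segment to the model.
    t = max(0.0, min(1.0, sum(a * b for a, b in zip(w, v)) / vv))
    closest = [d + t * c for d, c in zip(device_pos, v)]
    return math.dist(closest, model_pos) > model_radius
```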
- the AR data package further includes interaction data corresponding to at least one virtual object.
- The interaction data includes at least one of the following: at least one state triggering condition, a presentation state corresponding to each state triggering condition, and the number of recurring display times of the virtual object after being triggered and displayed.
- the display device further includes an interaction module 704, and the interaction module 704 is configured to:
- When the interactive operation acting on the at least one virtual object meets the first-type state triggering condition, the presentation special effect information of the at least one virtual object is updated based on the presentation state corresponding to the at least one virtual object under the first-type state triggering condition, and/or the number of recurring display times of the at least one virtual object after the first-type state triggering condition is triggered, to obtain updated presentation special effect information.
- When the display module 703 is used to display the at least one virtual object through the AR device based on the presentation special effect information, it includes: displaying the at least one virtual object through the AR device based on the updated presentation special effect information.
- the display device further includes an interaction module 704, and the interaction module 704 is configured to:
- When the second pose data meets the second-type state triggering condition, the presentation special effect information of the at least one virtual object is updated based on the presentation state corresponding to the at least one virtual object under the second-type state triggering condition, and/or the number of recurring display times of the at least one virtual object after the second-type state triggering condition is triggered, to obtain updated presentation special effect information.
- When the display module is used to display the at least one virtual object through the AR device based on the presentation special effect information, it includes: displaying the at least one virtual object through the AR device based on the updated presentation special effect information.
- an embodiment of the present disclosure also provides an electronic device 800.
- a schematic structural diagram of the electronic device 800 provided by the embodiment of the present disclosure includes:
- The processor 81 and the memory 82 communicate through the bus 83, so that the processor 81 executes the following instructions: in response to the first trigger operation, obtain the initial AR data packet associated with the target reality scene indicated by the first trigger operation; obtain the update data of at least one virtual object associated with the initial AR data packet, the update data including the first pose data corresponding to the at least one virtual object; and update the initial AR data packet based on the update data of the at least one virtual object to generate an updated AR data packet.
- Alternatively, the processor 81 may be caused to execute the following instructions: in response to the second trigger operation, obtain an AR data packet associated with the target reality scene indicated by the second trigger operation, the AR data packet containing the first pose data corresponding to at least one virtual object; determine the presentation special effect information of the at least one virtual object based on the second pose data of the AR device currently photographing the target reality scene and the first pose data corresponding to the at least one virtual object in the AR data packet; and display the at least one virtual object through the AR device based on the presentation special effect information.
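The device-side step of combining the second pose data with an object's first pose data can be illustrated with a 2D transform into the viewing frame; real pose data would be 6-DoF, so this is only a sketch with assumed conventions (heading in degrees, counter-clockwise positive):

```python
import math

def presentation_offset(second_pose, first_pose):
    """Transform a virtual object's first pose data (world frame: x, y)
    into the viewing frame given by the AR device's second pose data
    (x, y, heading in degrees). 2D simplification for illustration."""
    dx = first_pose[0] - second_pose[0]
    dy = first_pose[1] - second_pose[1]
    heading = math.radians(second_pose[2])
    # Rotate the world-frame offset into the device frame.
    x_view = math.cos(heading) * dx + math.sin(heading) * dy
    y_view = -math.sin(heading) * dx + math.cos(heading) * dy
    return (round(x_view, 6), round(y_view, 6))
```

The returned offset is what a renderer would use to anchor the virtual object's presentation special effect in the camera view.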
- The embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, it executes the steps of the generation method or the display method in the above method embodiments.
- the storage medium may be a volatile or nonvolatile computer readable storage medium.
- the embodiments of the present disclosure also provide a computer program product, the computer program product carries program code, and the instructions included in the program code can be used to execute the steps of the generation method or the display method described in the above method embodiment.
- the above-mentioned computer program product can be implemented by hardware, software, or a combination thereof.
- In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK).
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- the functional units in the various embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- If the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a non-volatile computer-readable storage medium executable by a processor.
- The technical solution of the present disclosure, in essence, or the part that contributes to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
- The aforementioned storage media include media that can store program code, such as a USB flash disk, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Security & Cryptography (AREA)
- Geometry (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
- Position Input By Displaying (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Management Or Editing Of Information On Record Carriers (AREA)
Claims (20)
- A method for generating augmented reality (AR) scene content, comprising: in response to a first trigger operation, acquiring an initial AR data packet associated with a target reality scene indicated by the first trigger operation; acquiring update data of at least one virtual object associated with the initial AR data packet, the update data comprising first pose data corresponding to the at least one virtual object; and updating the initial AR data packet based on the update data of the at least one virtual object to generate an updated AR data packet.
- The generation method according to claim 1, further comprising: acquiring a three-dimensional scene model of the target reality scene indicated by the first trigger operation; wherein acquiring the update data of the at least one virtual object associated with the initial AR data packet comprises: displaying the loaded three-dimensional scene model; and acquiring update data of the at least one virtual object when placed in the three-dimensional scene model, the update data comprising first pose data of the at least one virtual object placed in the three-dimensional scene model.
- The generation method according to claim 1, further comprising: displaying the at least one virtual object associated with the initial AR data packet based on second pose data obtained when an AR device currently photographs the target reality scene and on the initial AR data packet; wherein acquiring the update data of the at least one virtual object associated with the initial AR data packet comprises: acquiring update data for the displayed at least one virtual object, the update data comprising first pose data of the at least one virtual object in the target reality scene.
- The generation method according to claim 3, wherein acquiring the update data for the displayed at least one virtual object comprises: displaying a pose data editing interface and acquiring the first pose data of the at least one virtual object received through the pose data editing interface; wherein the first pose data comprises at least one of: position coordinates, a deflection angle, and size information in the coordinate system in which the target reality scene is located.
- The generation method according to any one of claims 1 to 4, wherein the update data further comprises interaction data corresponding to the at least one virtual object; acquiring the update data of the at least one virtual object associated with the initial AR data packet comprises: displaying an interaction data editing interface and acquiring the interaction data of each virtual object respectively received through the interaction data editing interface; wherein the interaction data comprises at least one of: at least one state trigger condition, a presentation state corresponding to each state trigger condition, and a number of recurring display times of the virtual object after being triggered and displayed.
- The generation method according to claim 5, wherein, in a case that the interaction data comprises multiple state trigger conditions, the interaction data further comprises a priority of each state trigger condition.
- The generation method according to any one of claims 1 to 6, further comprising: acquiring third pose data corresponding to at least one virtual object model associated with the initial AR data packet, the at least one virtual object model representing a target object in the target reality scene; wherein updating the initial AR data packet based on the update data of the at least one virtual object to generate the updated AR data packet comprises: updating the initial AR data packet based on the update data of the at least one virtual object and the third pose data corresponding to the virtual object model to generate the updated AR data packet.
- The generation method according to any one of claims 1 to 7, wherein the at least one virtual object comprises at least one first virtual object, and acquiring the update data of the at least one virtual object associated with the initial AR data packet comprises: acquiring update data of the at least one first virtual object contained in the initial AR data packet.
- The generation method according to any one of claims 1 to 8, wherein the at least one virtual object comprises at least one second virtual object, and acquiring the update data of the at least one virtual object associated with the initial AR data packet comprises: acquiring the at least one second virtual object associated with the initial AR data packet from a pre-established material library, and acquiring update data of the at least one second virtual object.
- The generation method according to any one of claims 1 to 9, wherein after the updated AR data packet is generated, the generation method further comprises: sending the updated AR data packet to a server; or sending the updated AR data packet and state information indicating whether the updated AR data packet is enabled to the server.
- The generation method according to claim 10, further comprising, after the updated AR data packet is generated: in response to an upload trigger operation on the updated AR data packet, acquiring label information of the updated AR data packet and sending the label information to the server.
- A method for displaying augmented reality (AR) scene content, comprising: in response to a second trigger operation, acquiring an AR data packet associated with a target reality scene indicated by the second trigger operation, the AR data packet containing first pose data corresponding to at least one virtual object; determining presentation special effect information of the at least one virtual object based on second pose data of an AR device currently photographing the target reality scene and the first pose data corresponding to the at least one virtual object in the AR data packet; and displaying the at least one virtual object through the AR device based on the presentation special effect information.
- The display method according to claim 12, wherein the AR data packet further comprises third pose data corresponding to at least one virtual object model, the virtual object model representing a target object in the target reality scene; determining the presentation special effect information of the at least one virtual object based on the second pose data when the AR device currently photographs the target reality scene and the first pose data corresponding to the at least one virtual object in the AR data packet comprises: determining the presentation special effect information of the at least one virtual object based on the second pose data when the AR device currently photographs the target reality scene, the first pose data corresponding to the at least one virtual object in the AR data packet, and the third pose data corresponding to the virtual object model.
- The display method according to claim 12 or 13, wherein the AR data packet further comprises interaction data corresponding to the at least one virtual object, the interaction data comprising at least one of: at least one state trigger condition, a presentation state corresponding to each state trigger condition, and a number of recurring display times of the virtual object after being triggered and displayed.
- The display method according to claim 14, further comprising: detecting an interactive operation acting on the at least one virtual object; in a case that the interactive operation acting on the at least one virtual object meets a first-type state trigger condition, updating the presentation special effect information of the at least one virtual object based on a presentation state corresponding to the at least one virtual object under the first-type state trigger condition, and/or a number of recurring display times of the at least one virtual object after the first-type state trigger condition is triggered, to obtain updated presentation special effect information; wherein displaying the at least one virtual object through the AR device based on the presentation special effect information comprises: displaying the at least one virtual object through the AR device based on the updated presentation special effect information.
- The display method according to claim 14 or 15, further comprising: in a case that the second pose data meets a second-type state trigger condition, updating the presentation special effect information of the at least one virtual object based on a presentation state corresponding to the at least one virtual object under the second-type state trigger condition, and/or a number of recurring display times of the at least one virtual object after the second-type state trigger condition is triggered, to obtain updated presentation special effect information; wherein displaying the at least one virtual object through the AR device based on the presentation special effect information comprises: displaying the at least one virtual object through the AR device based on the updated presentation special effect information.
- An apparatus for generating augmented reality (AR) scene content, comprising: a first acquisition module configured to, in response to a first trigger operation, acquire an initial AR data packet associated with a target reality scene indicated by the first trigger operation; a second acquisition module configured to acquire update data of at least one virtual object associated with the initial AR data packet, the update data comprising first pose data corresponding to the at least one virtual object; and an update module configured to update the initial AR data packet based on the update data of the at least one virtual object to generate an updated AR data packet.
- An apparatus for displaying augmented reality (AR) scene content, comprising: an acquisition module configured to, in response to a second trigger operation, acquire an AR data packet associated with a target reality scene indicated by the second trigger operation, the AR data packet containing first pose data corresponding to at least one virtual object; a determining module configured to determine presentation special effect information of the at least one virtual object based on second pose data of an AR device currently photographing the target reality scene and the first pose data corresponding to the at least one virtual object in the AR data packet; and a display module configured to display the at least one virtual object through the AR device based on the presentation special effect information.
- An electronic device, comprising: a processor, a memory, and a bus, the memory storing machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate through the bus, and when the machine-readable instructions are executed by the processor, the steps of the generation method according to any one of claims 1 to 11 or the steps of the display method according to any one of claims 12 to 16 are executed.
- A computer-readable storage medium having a computer program stored thereon, wherein, when the computer program is run by a processor, the steps of the generation method according to any one of claims 1 to 11 or the steps of the display method according to any one of claims 12 to 16 are executed.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021538425A JP2022537861A (ja) | 2020-05-26 | 2020-12-09 | Arシーンコンテンツの生成方法、表示方法、装置及び記憶媒体 |
KR1020217020429A KR20210148074A (ko) | 2020-05-26 | 2020-12-09 | Ar 시나리오 콘텐츠의 생성 방법, 전시 방법, 장치 및 저장 매체 |
SG11202108241QA SG11202108241QA (en) | 2020-05-26 | 2020-12-09 | Ar scene content generation method and presentation method, apparatuses, and storage medium |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010456842.3A CN111610997A (zh) | 2020-05-26 | 2020-05-26 | Ar场景内容的生成方法、展示方法、展示系统及装置 |
CN202010456842.3 | 2020-05-26 | ||
CN202010456843.8 | 2020-05-26 | ||
CN202010456843.8A CN111610998A (zh) | 2020-05-26 | 2020-05-26 | Ar场景内容的生成方法、展示方法、装置及存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021238145A1 true WO2021238145A1 (zh) | 2021-12-02 |
Family
ID=78745558
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/135048 WO2021238145A1 (zh) | 2020-05-26 | 2020-12-09 | Ar场景内容的生成方法、展示方法、装置及存储介质 |
Country Status (5)
Country | Link |
---|---|
JP (1) | JP2022537861A (zh) |
KR (1) | KR20210148074A (zh) |
SG (1) | SG11202108241QA (zh) |
TW (1) | TWI783472B (zh) |
WO (1) | WO2021238145A1 (zh) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114401442A (zh) * | 2022-01-14 | 2022-04-26 | 北京字跳网络技术有限公司 | 视频直播及特效控制方法、装置、电子设备及存储介质 |
CN114758098A (zh) * | 2021-12-30 | 2022-07-15 | 北京城市网邻信息技术有限公司 | 基于WebGL的信息标注方法、实景导航方法及终端 |
CN114764327A (zh) * | 2022-05-09 | 2022-07-19 | 北京未来时空科技有限公司 | 一种三维可交互媒体的制作方法、装置及存储介质 |
CN115291939A (zh) * | 2022-08-17 | 2022-11-04 | 北京字跳网络技术有限公司 | 互动场景配置方法、装置、存储介质、设备及程序产品 |
CN115374141A (zh) * | 2022-09-20 | 2022-11-22 | 支付宝(杭州)信息技术有限公司 | 虚拟形象的更新处理方法及装置 |
WO2023207174A1 (zh) * | 2022-04-28 | 2023-11-02 | Oppo广东移动通信有限公司 | 显示方法、装置、显示设备、头戴式设备及存储介质 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020071038A1 (en) * | 2000-12-07 | 2002-06-13 | Joe Mihelcic | Method and system for complete 3D object and area digitizing |
CN108416832A (zh) * | 2018-01-30 | 2018-08-17 | 腾讯科技(深圳)有限公司 | 媒体信息的显示方法、装置和存储介质 |
CN108520552A (zh) * | 2018-03-26 | 2018-09-11 | 广东欧珀移动通信有限公司 | 图像处理方法、装置、存储介质及电子设备 |
CN109564472A (zh) * | 2016-08-11 | 2019-04-02 | 微软技术许可有限责任公司 | 沉浸式环境中的交互方法的选取 |
CN110716645A (zh) * | 2019-10-15 | 2020-01-21 | 北京市商汤科技开发有限公司 | 一种增强现实数据呈现方法、装置、电子设备及存储介质 |
CN111610997A (zh) * | 2020-05-26 | 2020-09-01 | 北京市商汤科技开发有限公司 | Ar场景内容的生成方法、展示方法、展示系统及装置 |
CN111610998A (zh) * | 2020-05-26 | 2020-09-01 | 北京市商汤科技开发有限公司 | Ar场景内容的生成方法、展示方法、装置及存储介质 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI628613B (zh) * | 2014-12-09 | 2018-07-01 | 財團法人工業技術研究院 | 擴增實境方法與系統 |
EP3621039A1 (en) * | 2018-09-06 | 2020-03-11 | Tata Consultancy Services Limited | Real time overlay placement in videos for augmented reality applications |
WO2020072972A1 (en) * | 2018-10-05 | 2020-04-09 | Magic Leap, Inc. | A cross reality system |
CN110764614B (zh) * | 2019-10-15 | 2021-10-08 | 北京市商汤科技开发有限公司 | 增强现实数据呈现方法、装置、设备及存储介质 |
-
2020
- 2020-12-09 KR KR1020217020429A patent/KR20210148074A/ko active IP Right Grant
- 2020-12-09 SG SG11202108241QA patent/SG11202108241QA/en unknown
- 2020-12-09 JP JP2021538425A patent/JP2022537861A/ja active Pending
- 2020-12-09 WO PCT/CN2020/135048 patent/WO2021238145A1/zh active Application Filing
-
2021
- 2021-05-04 TW TW110116126A patent/TWI783472B/zh active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020071038A1 (en) * | 2000-12-07 | 2002-06-13 | Joe Mihelcic | Method and system for complete 3D object and area digitizing |
CN109564472A (zh) * | 2016-08-11 | 2019-04-02 | 微软技术许可有限责任公司 | 沉浸式环境中的交互方法的选取 |
CN108416832A (zh) * | 2018-01-30 | 2018-08-17 | 腾讯科技(深圳)有限公司 | 媒体信息的显示方法、装置和存储介质 |
CN108520552A (zh) * | 2018-03-26 | 2018-09-11 | 广东欧珀移动通信有限公司 | 图像处理方法、装置、存储介质及电子设备 |
CN110716645A (zh) * | 2019-10-15 | 2020-01-21 | 北京市商汤科技开发有限公司 | 一种增强现实数据呈现方法、装置、电子设备及存储介质 |
CN111610997A (zh) * | 2020-05-26 | 2020-09-01 | 北京市商汤科技开发有限公司 | Ar场景内容的生成方法、展示方法、展示系统及装置 |
CN111610998A (zh) * | 2020-05-26 | 2020-09-01 | 北京市商汤科技开发有限公司 | Ar场景内容的生成方法、展示方法、装置及存储介质 |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114758098A (zh) * | 2021-12-30 | 2022-07-15 | 北京城市网邻信息技术有限公司 | 基于WebGL的信息标注方法、实景导航方法及终端 |
CN114401442A (zh) * | 2022-01-14 | 2022-04-26 | 北京字跳网络技术有限公司 | 视频直播及特效控制方法、装置、电子设备及存储介质 |
CN114401442B (zh) * | 2022-01-14 | 2023-10-24 | 北京字跳网络技术有限公司 | 视频直播及特效控制方法、装置、电子设备及存储介质 |
WO2023207174A1 (zh) * | 2022-04-28 | 2023-11-02 | Oppo广东移动通信有限公司 | 显示方法、装置、显示设备、头戴式设备及存储介质 |
CN114764327A (zh) * | 2022-05-09 | 2022-07-19 | 北京未来时空科技有限公司 | 一种三维可交互媒体的制作方法、装置及存储介质 |
CN114764327B (zh) * | 2022-05-09 | 2023-05-05 | 北京未来时空科技有限公司 | 一种三维可交互媒体的制作方法、装置及存储介质 |
CN115291939A (zh) * | 2022-08-17 | 2022-11-04 | 北京字跳网络技术有限公司 | 互动场景配置方法、装置、存储介质、设备及程序产品 |
CN115374141A (zh) * | 2022-09-20 | 2022-11-22 | 支付宝(杭州)信息技术有限公司 | 虚拟形象的更新处理方法及装置 |
CN115374141B (zh) * | 2022-09-20 | 2024-05-10 | 支付宝(杭州)信息技术有限公司 | 虚拟形象的更新处理方法及装置 |
Also Published As
Publication number | Publication date |
---|---|
JP2022537861A (ja) | 2022-08-31 |
KR20210148074A (ko) | 2021-12-07 |
SG11202108241QA (en) | 2021-12-30 |
TW202145150A (zh) | 2021-12-01 |
TWI783472B (zh) | 2022-11-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021238145A1 (zh) | Ar场景内容的生成方法、展示方法、装置及存储介质 | |
US11854149B2 (en) | Techniques for capturing and displaying partial motion in virtual or augmented reality scenes | |
KR102417645B1 (ko) | Ar 장면 이미지 처리 방법, 장치, 전자 기기 및 저장 매체 | |
US12020377B2 (en) | Textured mesh building | |
KR102534637B1 (ko) | 증강 현실 시스템 | |
KR102414587B1 (ko) | 증강 현실 데이터 제시 방법, 장치, 기기 및 저장 매체 | |
KR102367928B1 (ko) | 표면 인식 렌즈 | |
CN111610998A (zh) | Ar场景内容的生成方法、展示方法、装置及存储介质 | |
WO2021073269A1 (zh) | 增强现实数据呈现方法、装置、设备、存储介质和程序 | |
US20160217616A1 (en) | Method and System for Providing Virtual Display of a Physical Environment | |
US11335022B2 (en) | 3D reconstruction using wide-angle imaging devices | |
Keil et al. | The House of Olbrich—An augmented reality tour through architectural history | |
US20160063764A1 (en) | Image processing apparatus, image processing method, and computer program product | |
CN112070906A (zh) | 一种增强现实系统及增强现实数据的生成方法、装置 | |
CN111610997A (zh) | Ar场景内容的生成方法、展示方法、展示系统及装置 | |
CN109448050B (zh) | 一种目标点的位置的确定方法及终端 | |
US20210118236A1 (en) | Method and apparatus for presenting augmented reality data, device and storage medium | |
KR20150079387A (ko) | 카메라 광 데이터로 가상 환경을 조명하는 방법 | |
CN112306228B (zh) | 计算机生成渲染环境的视觉搜索细化 | |
CN112070907A (zh) | 一种增强现实系统及增强现实数据的生成方法、装置 | |
CN111815783A (zh) | 虚拟场景的呈现方法及装置、电子设备及存储介质 | |
CN113678173A (zh) | 用于虚拟对象的基于图绘的放置的方法和设备 | |
JP2016139199A (ja) | 画像処理装置、画像処理方法、およびプログラム | |
US11656576B2 (en) | Apparatus and method for providing mapping pseudo-hologram using individual video signal output | |
KR20180075222A (ko) | 전자 장치 및 그 동작 방법 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2021538425 Country of ref document: JP Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20938244 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20938244 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 521430195 Country of ref document: SA |