WO2021238145A1 - Method for generating AR scene content, display method, apparatus, and storage medium - Google Patents

Method for generating AR scene content, display method, apparatus, and storage medium

Info

Publication number
WO2021238145A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual object
data
initial
scene
pose
Prior art date
Application number
PCT/CN2020/135048
Other languages
English (en)
French (fr)
Inventor
侯欣如
栾青
王鼎禄
Original Assignee
北京市商汤科技开发有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202010456843.8A (external priority)
Priority claimed from CN202010456842.3A (external priority)
Application filed by 北京市商汤科技开发有限公司
Priority to SG11202108241QA
Priority to JP2021538425A
Priority to KR1020217020429A
Publication of WO2021238145A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 Arrangements for software engineering
    • G06F8/60 Software deployment
    • G06F8/65 Updates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras

Definitions

  • The present disclosure relates to the field of augmented reality technology, and in particular to a method for generating AR scene content, a display method, an apparatus, and a storage medium.
  • Augmented Reality (AR) technology ingeniously integrates virtual objects with the real world: computer-generated text, images, three-dimensional models, music, video, and other virtual objects are simulated and then superimposed on the real world, thereby presenting an augmented reality scene.
  • The AR content displayed in the augmented reality scene can be determined in advance; for example, it may contain virtual objects and their display information. However, if the internal environment of the current reality scene changes, the pre-produced virtual objects no longer match the current real scene, so when the virtual objects are superimposed on the real scene the display effect of the augmented reality scene is poor.
  • the embodiments of the present disclosure provide at least one solution for generating AR scene content.
  • Embodiments of the present disclosure provide a method for generating augmented reality (AR) scene content, including: in response to a first trigger operation, acquiring an initial AR data packet associated with the target reality scene indicated by the first trigger operation; acquiring update data of at least one virtual object associated with the initial AR data packet, the update data including first pose data corresponding to the at least one virtual object; and updating the initial AR data packet based on the update data to generate an updated AR data packet.
  • In this way, the initial AR data packet associated with the target reality scene can be acquired, together with update data associated with at least one virtual object, for example the first pose data corresponding to the at least one virtual object. The initial AR data packet is then updated based on the update data, so that the virtual objects match the target reality scene more closely, which improves the realistic effect of the augmented reality scene.
  • Embodiments of the present disclosure provide a method for displaying augmented reality (AR) scene content, including: in response to a second trigger operation, acquiring an AR data packet associated with the target reality scene indicated by the second trigger operation, the AR data packet including first pose data corresponding to at least one virtual object; determining presentation special effect information of the at least one virtual object based on second pose data of the AR device currently shooting the target reality scene and the first pose data; and displaying the at least one virtual object through the AR device based on the presentation special effect information.
  • In this way, the AR data packet associated with the second trigger operation can be obtained in response to that operation; then, based on the second pose data corresponding to the AR device and the preset first pose data of the virtual objects in the target reality scene contained in the AR data packet, the special effect information of the virtual objects in the target reality scene is determined, and a realistic augmented reality scene effect is finally displayed on the AR device.
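  • The pose computation described above can be sketched as a coordinate transform: the first pose data gives the virtual object's place in the world coordinate system, the second pose data gives the AR device's, and the presentation step expresses the former in the device's local frame. The snippet below is a minimal planar (x, y, yaw) sketch; the field names and the 2D simplification are illustrative assumptions, not the disclosure's actual data layout.

```python
import math

def presentation_offset(first_pose, second_pose):
    """Express the virtual object's world position (first pose data) in the
    AR device's local frame, given the device's world position and yaw
    (second pose data). Planar (x, y, yaw) simplification for illustration."""
    ox, oy = first_pose["position"]          # object position in world coords
    dx, dy = second_pose["position"]         # device position in world coords
    yaw = math.radians(second_pose["yaw"])   # device heading, degrees -> radians
    # Translate into the device-centred frame, then rotate by -yaw.
    tx, ty = ox - dx, oy - dy
    local_x = math.cos(-yaw) * tx - math.sin(-yaw) * ty
    local_y = math.sin(-yaw) * tx + math.cos(-yaw) * ty
    return round(local_x, 6), round(local_y, 6)

# A device at the origin rotated 90 degrees sees an object 1 m away at
# world (0, 1) on its local x axis.
print(presentation_offset({"position": (0.0, 1.0)},
                          {"position": (0.0, 0.0), "yaw": 90.0}))  # prints (1.0, 0.0)
```

A full implementation would use 3D positions and a rotation matrix or quaternion, but the structure (world pose of object, world pose of device, relative pose for display) is the same.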
  • an augmented reality AR scene content generation device including:
  • the first acquisition module is configured to, in response to the first trigger operation, acquire the initial AR data packet associated with the target reality scene indicated by the first trigger operation;
  • the second acquisition module is configured to acquire update data of at least one virtual object associated with the initial AR data packet; the update data includes the first pose data corresponding to the at least one virtual object;
  • the update module is configured to update the initial AR data package based on the update data of the at least one virtual object to generate an updated AR data package.
  • an augmented reality AR scene content display device including:
  • the acquiring module is configured to acquire an AR data packet associated with the target reality scene indicated by the second trigger operation in response to a second trigger operation; the AR data packet includes first pose data corresponding to at least one virtual object;
  • a determining module configured to determine presentation special effect information of the at least one virtual object based on second pose data of the target reality scene currently captured by the AR device and the first pose data corresponding to the at least one virtual object in the AR data packet;
  • the display module is configured to display the at least one virtual object through the AR device based on the presentation special effect information.
  • An embodiment of the present disclosure provides an electronic device including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor, and the processor communicates with the memory through the bus. When the machine-readable instructions are executed by the processor, the steps of the generation method described in the first aspect, or the steps of the display method described in the second aspect, are performed.
  • Embodiments of the present disclosure provide a computer-readable storage medium storing a computer program that, when run by a processor, executes the steps of the generation method described in the first aspect or the steps of the display method described in the second aspect.
  • FIG. 1 shows a flowchart of the first method for generating AR scene content provided by an embodiment of the present disclosure
  • Figure 2a shows a schematic diagram of an AR data package download interface provided by an embodiment of the present disclosure
  • Figure 2b shows a schematic diagram of an interface for downloading and uploading an AR data package provided by an embodiment of the present disclosure
  • FIG. 3 shows a schematic diagram of a page of a positioning prompt provided by an embodiment of the present disclosure
  • FIG. 4 shows a schematic diagram of a pose data display page of a virtual object provided by an embodiment of the present disclosure
  • FIG. 5 shows a schematic diagram of a pose data editing page of a virtual object provided by an embodiment of the present disclosure
  • FIG. 6 shows a schematic diagram of an interface for generating interactive data provided by an embodiment of the present disclosure
  • FIG. 7 shows a schematic diagram of an AR data package saving page provided by an embodiment of the present disclosure
  • FIG. 8 shows a flowchart of an interface for uploading an updated AR data package provided by an embodiment of the present disclosure
  • FIG. 9 shows a flowchart of an AR scene content display method provided by an embodiment of the present disclosure.
  • FIG. 10 shows a schematic diagram of a positioning prompt performed on an AR device when the content of an AR scene is displayed according to an embodiment of the present disclosure
  • FIG. 11 shows a schematic diagram of an augmented reality scene provided by an embodiment of the present disclosure
  • FIG. 12 shows a flowchart of a second method for generating AR scene content provided by an embodiment of the present disclosure
  • FIG. 13 shows a schematic structural diagram of an AR scene content display system provided by an embodiment of the present disclosure
  • FIG. 14 shows a flowchart of a third method for generating AR scene content provided by an embodiment of the present disclosure
  • FIG. 15 shows a schematic structural diagram of the first AR scene content generating apparatus provided by an embodiment of the present disclosure
  • FIG. 16 shows a schematic structural diagram of a first AR scene content display device provided by an embodiment of the present disclosure
  • FIG. 17 shows a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • Augmented Reality (AR) technology can be applied to AR devices, which may be any electronic device capable of supporting AR functions, including but not limited to AR glasses, tablet computers, and smart phones.
  • When the AR device is operated in a real scene, the virtual objects superimposed on the real scene can be viewed through it. For example, when passing certain buildings or tourist attractions, a superimposed introduction to them in virtual graphics and text can be seen through the AR device. The virtual graphics and text here are virtual objects, while the buildings or tourist attractions form the real scene. In this scenario, the virtual introduction displayed through AR glasses changes with the viewing angle of the glasses; that is, the displayed introduction is related to the pose of the AR glasses.
  • the method for generating AR scene content disclosed in the embodiment of the present disclosure is first introduced in detail.
  • the execution subject of the method for generating AR scene content provided by the embodiment of the present disclosure may be the above-mentioned AR device, For example, it may include AR glasses, tablet computers, smart phones, smart wearable devices and other devices with display functions and data processing capabilities, which are not limited in the embodiments of the present disclosure.
  • the method for generating AR scene content It can be implemented by a processor invoking computer-readable instructions stored in the memory.
  • As shown in FIG. 1, which is a flowchart of a method for generating AR scene content provided by an embodiment of the present disclosure, the generation method includes the following S101 to S103:
  • S101 In response to a first trigger operation, obtain an initial AR data packet associated with a target reality scene indicated by the first trigger operation;
  • The first trigger operation may be a trigger operation on an editing option corresponding to any initial AR data packet associated with the target reality scene, for example an operation of selecting the editing option, or a trigger issued directly by voice or gesture; the present disclosure is not limited in this respect.
  • As shown in FIG. 2a, which is a schematic diagram of the AR data packets respectively associated with "XXX Building-15th Floor" and other real scenes, if it is detected that the editing option of any AR data packet corresponding to "XXX Building-15th Floor" is triggered, for example the editing option of the initial AR data packet of the "[Example] Science Fiction" category, the server can be requested to provide the initial AR data packet of the "[Example] Science Fiction" category associated with the target reality scene "XXX Building-15th Floor".
  • each initial AR data packet may contain label information, and the label information is used to characterize the category of the initial AR data packet, for example, it may include one of "science fiction category", "cartoon category", and "historical category".
  • each category is used to represent the style of the virtual object to be displayed in the AR scene; wherein, each initial AR data packet may contain a preset virtual object, or may not contain a virtual object.
  • the target reality scene may be an indoor scene of a building, or a street scene, and may also be any target reality scene capable of superimposing virtual objects.
  • S102 Obtain update data of at least one virtual object associated with the initial AR data packet; the update data includes first pose data corresponding to the at least one virtual object.
  • The at least one virtual object associated with the initial AR data packet may include a virtual object locally contained in the initial AR data packet, a virtual object downloaded through the network, or a virtual object obtained from a pre-established material library; the material library can be set locally or in the cloud, which is not limited in the present disclosure.
  • the virtual object may be a static virtual model such as the aforementioned virtual flowerpot and a virtual tree, or may be some dynamic virtual objects such as virtual videos and virtual animations.
  • The first pose data of the virtual object includes, but is not limited to, data that can represent the position and/or posture of the virtual object when it is presented; for example, it may include the position coordinates, deflection angle, and size information of the virtual object in the coordinate system corresponding to the target reality scene.
  • the content of the virtual object in the initial AR data package can be updated to obtain the updated AR data package.
  • The update may consist of directly adding the update data to the initial AR data packet, so that the updated AR data packet contains both the virtual objects associated with the initial AR data packet and the update data of the at least one virtual object.
  • When the AR device shoots the target reality scene, the obtained updated AR data packet can be used to display virtual objects integrated into the target reality scene.
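  • As a concrete illustration of this update step, the sketch below models the AR data packet as a small Python container whose update simply merges the new first pose data into the preset objects, as the text suggests; all field names and the dict layout are hypothetical, not the disclosure's actual packet format.

```python
from dataclasses import dataclass, field

@dataclass
class ARDataPacket:
    """Hypothetical, simplified stand-in for the AR data packet."""
    scene: str                                    # target reality scene identifier
    label: str                                    # category label, e.g. "science fiction"
    objects: dict = field(default_factory=dict)   # object name -> first pose data

    def apply_update(self, update_data: dict) -> "ARDataPacket":
        """Directly add the update data to the packet (the S103 step)."""
        merged = dict(self.objects)
        merged.update(update_data)
        return ARDataPacket(self.scene, self.label, merged)

initial = ARDataPacket("XXX Building-15th Floor", "science fiction",
                       {"virtual tree": {"position": (0, 0, 0)}})
updated = initial.apply_update(
    {"virtual flowerpot": {"position": (1.0, 0.5, 0.0), "deflection": 30, "size": 1.0}})
# Both the preset object and the newly updated object survive the merge.
print(sorted(updated.objects))  # prints ['virtual flowerpot', 'virtual tree']
```

Returning a new packet rather than mutating in place mirrors the text's distinction between the initial and the updated AR data packet.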
  • In summary, the initial AR data packet associated with the target reality scene can be acquired, together with update data associated with at least one virtual object, for example the first pose data corresponding to the at least one virtual object. The initial AR data packet is then updated based on the update data, so that the virtual objects match the target reality scene more closely, which improves the realistic effect of the augmented reality scene.
  • the generation method provided in the embodiment of the present disclosure may be applied to the AR generation end, and the generation method further includes:
  • The AR generation end may be a computer, notebook, tablet, or other device on which an application for generating and editing AR scene content is installed, or which can access a web page for generating and editing AR scene content. The user can remotely edit AR scene content in the application or web page; for example, the target reality scene can be simulated through a three-dimensional scene model representing it, and the relevant data of the virtual objects to be displayed can be configured directly in the model, without configuring anything in the target reality scene itself, thereby generating the AR scene content.
  • The display interface of the AR generation end can display editing options corresponding to multiple real scenes. After detecting that any editing option is triggered, the real scene corresponding to the triggered option is taken as the target reality scene; the three-dimensional scene model of the target reality scene can then be obtained, so that virtual objects to be displayed can subsequently be added to that model.
  • a map can be displayed on the display interface of the AR generator. The map is set with multiple points of interest (POI), and each POI point can correspond to a real scene.
  • The AR generation end can also detect that the editing option for the target reality scene is triggered, and then obtain the three-dimensional scene model representing the target reality scene and the initial augmented reality AR data packet associated with it, so that virtual objects to be displayed can be added to the three-dimensional scene model of the target reality scene.
  • The three-dimensional scene model representing the target reality scene and the target reality scene itself are presented in equal proportion in the same coordinate system. For example, if the target reality scene includes a street and the buildings on both sides of it, the three-dimensional scene model likewise includes models of the street and those buildings; the model and the target reality scene can be presented in the same coordinate system at 1:1, or presented at some other equal proportion.
  • On the editing page showing the AR scene content, when it is detected that the "Update Experience Package List" option in the editing interface is triggered, a variety of real scenes can be obtained, such as "XXX Building-15th Floor". When it is detected that "download scene" is triggered in the editing interface, this can be regarded as the detected first trigger operation for the target reality scene; the three-dimensional scene model representing the target reality scene and the initial AR data packets associated with it can then be obtained, such as two initial AR experience packages with the label information "Christmas" and "New Year's Day".
  • For example, the category label of the initial AR experience package (initial AR data packet) "Christmas" associated with the above target reality scene further contains the labels "Science Fiction" and "Nature", indicating that the created virtual objects can belong to the science-fiction category or to the nature category; of course, in later editing, the category of the AR data packet can be changed based on the category of the uploaded virtual objects.
  • the method when obtaining update data of at least one virtual object associated with the initial AR data packet, the method includes:
  • S1022 Obtain update data of the at least one virtual object placed in the three-dimensional scene model, where the update data includes first pose data of the at least one virtual object placed in the three-dimensional scene model.
  • The first pose data of the virtual object placed in the three-dimensional scene model includes the position coordinates, deflection angle, and size information of the virtual object in the coordinate system of the three-dimensional scene model; the deflection angle may be expressed as the angle between a specified positive direction of the object and a coordinate axis of the model's coordinate system.
  • Because the three-dimensional scene model and the target reality scene can be presented at 1:1 in the same coordinate system, or at an equal proportion in different coordinate systems, the first pose data obtained when the virtual object is presented in the three-dimensional scene model can, when presented later on the AR device, show the special effect information of the virtual object in the target reality scene.
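  • The equal-proportion relationship above amounts to a uniform scale between model coordinates and scene coordinates (the identity at 1:1). A minimal sketch, with assumed field names for the first pose data:

```python
def model_to_scene(pose, scale=1.0):
    """Convert first pose data from 3D-scene-model coordinates to target
    reality scene coordinates under a uniform proportional scale.
    Field names (position, deflection, size) are illustrative."""
    x, y, z = pose["position"]
    return {
        "position": (x * scale, y * scale, z * scale),
        "deflection": pose["deflection"],   # angles are scale-invariant
        "size": pose["size"] * scale,
    }

# At 1:1 (the default) the conversion is the identity.
print(model_to_scene({"position": (2.0, 0.0, 3.5), "deflection": 45, "size": 1.2}))
```

This is why pose data edited in the model can be replayed directly on the AR device: only a fixed, known scale separates the two coordinate systems.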
  • After an editing operation is detected, the content edited by it can be obtained and used as the update data.
  • For example, the three-dimensional scene model and a pose data edit bar for the virtual object can be displayed; the user can edit the first pose data of the virtual object in the three-dimensional scene model in the edit bar, and the AR generation end then obtains that first pose data.
  • Based on the first pose data in the three-dimensional scene model, the first pose data of the virtual object when displayed in the target reality scene can be determined, and the virtual object can be displayed in the target reality scene according to it, so that the virtual object is better integrated with the target reality scene and a realistic augmented reality effect can be shown on the AR device.
  • In this way, the user can be provided with the three-dimensional scene model and the initial AR data packet associated with the target reality scene, so that at least one virtual object associated with the initial AR data packet can be intuitively edited through the three-dimensional scene model according to the user's own needs, and the updated AR data packet is obtained in this way.
  • When an augmented reality experiencer performs the augmented reality experience in the target reality scene, the updated AR data packet can be called directly for that experience.
  • This way of editing AR data packets can simplify their generation and provides convenient AR material for the subsequent display of augmented reality scenes.
  • Because a three-dimensional scene model for editing the first pose data of virtual objects is provided, users can intuitively edit that data in the three-dimensional scene, so the first pose data of virtual objects can be set in a personalized way based on user needs.
  • The generation method provided in the embodiments of the present disclosure can also be applied during the experience of AR scenes, for example on an AR device used to display AR scenes; in this case the method further includes displaying at least one virtual object associated with the initial AR data packet.
  • An application program for displaying and editing AR scene content can be installed in the AR device, and the user can open it to edit AR scene content within the augmented reality scene. After the application program is opened, the display interface of the AR device can show editing options corresponding to at least one real scene. Each real scene is associated with multiple initial AR data packets, each with its own editing option, and these initial AR data packets can be edited online.
  • The display interface of the AR device may display the initial AR data packets corresponding to multiple real scenes. After detecting that the editing option of any initial AR data packet is triggered, the packet corresponding to the triggered option is taken as the AR data packet associated with the aforementioned target reality scene.
  • the embodiments of the present disclosure provide a solution for editing AR scene content in the process of experiencing an augmented reality scene.
  • The second pose data when the AR device currently photographs the target reality scene may include the position and/or display angle of the display component used to display the virtual objects.
  • the concept of coordinate system is introduced here.
  • The second pose data corresponding to the AR device may include, but is not limited to, at least one of the following: the coordinate position of the display component of the AR device in the world coordinate system; the angles between the display component of the AR device and each coordinate axis of the world coordinate system; or both the coordinate position and those angles.
  • the display component of the AR device specifically refers to the component used to display virtual objects in the AR device.
  • the corresponding display component may be a display screen.
  • the corresponding display component may be a lens for displaying virtual objects.
  • The second pose data corresponding to the AR device can be obtained in many ways. For example, when the AR device is configured with a pose sensor, the second pose data can be determined through the pose sensor; when the AR device is configured with an image acquisition component, such as a camera, the second pose data can be determined through the target reality scene image collected by the camera.
  • The pose sensor may include an angular velocity sensor used to determine the shooting angle of the AR device, such as a gyroscope or an inertial measurement unit (IMU); it may include a positioning component used to determine the shooting position of the AR device, for example one based on the Global Positioning System (GPS), the Global Navigation Satellite System (GLONASS), or wireless fidelity (WiFi) positioning technology; or it may include both the angular velocity sensor for the shooting angle and the positioning component for the shooting position.
  • When the second pose data is determined from the captured image, it may be determined through the target reality scene image and a pre-stored first neural network used for positioning.
  • The first neural network may be trained in advance based on multiple sample pictures obtained by photographing the target reality scene and the second pose data corresponding to the shooting of each sample picture.
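  • To illustrate only the image-to-pose idea (this is not the patented network), the sketch below replaces the neural network with nearest-neighbour retrieval: the query image's features are compared against sample pictures whose shooting poses are known, and the pose of the closest match is returned. The toy feature vectors and pose dicts are invented for the example.

```python
def estimate_pose(query, samples):
    """Return the known shooting pose of the sample picture whose feature
    vector is closest (squared Euclidean distance) to the query image's.
    samples: list of (feature_vector, pose) pairs from pre-shot images."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, pose = min(samples, key=lambda s: dist(s[0], query))
    return pose

# Two pre-shot sample pictures with toy 3-dim features and known poses.
samples = [
    ([0.1, 0.9, 0.3], {"position": (0, 0), "yaw": 0}),
    ([0.8, 0.2, 0.5], {"position": (3, 1), "yaw": 90}),
]
print(estimate_pose([0.75, 0.25, 0.5], samples))  # closest to the second sample
```

A trained regression network generalises between sample viewpoints, whereas this retrieval sketch can only return poses it has seen; it is meant purely to show the mapping from a scene image to second pose data.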
  • When the second pose data corresponding to the AR device is determined based on the target reality scene image captured by it, as shown in FIG. 3, the AR device displays information used to prompt the user that the editing state has been entered and positioning has started, for example information prompting the user to photograph the target reality scene for positioning.
  • After the second pose data of the AR device currently shooting the target reality scene is acquired, the at least one virtual object associated with the initial AR data packet can be displayed based on the second pose data and the initial first pose data of the at least one virtual object in the target reality scene. While displaying the virtual object, a pose data display interface for it can also be shown. FIG. 4 is a display schematic diagram of the virtual object, a Tang Sancai horse, together with the pose data display interface in the display area; the interface displays the initial first pose data of the virtual Tang Sancai horse in the target reality scene, mainly its coordinate information in that scene.
  • The pose data display interface shown on the left side of FIG. 4 contains "My Map Coordinates", which can indicate the coordinate information of the display component of the AR device in the coordinate system of the target reality scene. The model list contains the coordinate information of multiple virtual objects in the target reality scene, for example that of the virtual objects "Tang Sancai horse" and "stone lion"; furthermore, other virtual objects and their coordinate information in the target reality scene can be added.
  • When obtaining update data of at least one virtual object associated with the initial AR data packet, the method includes obtaining update data for the displayed at least one virtual object, the update data including the first pose data of the at least one virtual object in the target reality scene.
  • After an editing operation is detected, the content edited by it can be obtained and used as the update data. For example, on the pose data editing page, the AR device can display a pose data edit bar for the virtual object, in which the user can edit the first pose data of the virtual object in the target reality scene.
  • The editing of the virtual object's pose data is completed in the world coordinate system of the target reality scene, so after the editing is finished the AR device can obtain the first pose data of the virtual object in that world coordinate system.
  • Fig. 5 shows the coordinate information of the currently displayed virtual object "Tang Sancai" in the target real scene, and the interactive data of the virtual object in the target real scene.
  • the interactive data will be described later.
  • The right side of FIG. 5 shows the interface for editing the first pose data of the currently displayed virtual object in the target reality scene: the coordinate information of the virtual object in the coordinate system of the target reality scene can be edited, its size ratio can be edited, and the angles between the virtual object and the three coordinate axes of the coordinate system can be edited.
  • In this way, after the second pose data corresponding to the AR device is acquired, at least one virtual object associated with the initial AR data packet can be displayed based on it, making it convenient for the user to intuitively edit and adjust the first pose data of the virtual object in the target reality scene as required, thereby simplifying the generation of AR scene content. Because the AR data packet can be adjusted intuitively during the experience of the augmented reality scene, the adjusted AR data packet is also more accurate.
  • obtaining update data of the displayed at least one virtual object includes:
  • the first pose data includes at least one of the following: position coordinates, deflection angle, and size information in the coordinate system where the target reality scene is located.
  • The editing operations on the position coordinates, deflection angle, and size information of the displayed at least one virtual object in the target reality scene can be obtained, so as to obtain the first pose data of the at least one virtual object in the target reality scene.
  • The pose data editing interface may include editing of the position coordinates of the "Tang Sancai horse" in the target reality scene, editing of the size information of the "Tang Sancai" in the target reality scene, and editing of the deflection angle of the "Tang Sancai" in the world coordinate system where the target reality scene is located.
  • A pose data editing interface for editing the first pose data of a virtual object can be provided, so that the user can intuitively adjust the first pose data of the virtual object in the target reality scene and set the first pose data of the virtual object in a personalized manner based on user needs.
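The first pose data described above (position coordinates, deflection angles about the three coordinate axes, and size ratio in the world coordinate system of the target reality scene) can be sketched as a small data structure with an edit operation. This is an illustrative sketch only; the class and field names are assumptions, not taken from the disclosure.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class FirstPoseData:
    """Pose of a virtual object in the world coordinate system of the target reality scene."""
    position: tuple = (0.0, 0.0, 0.0)    # x, y, z position coordinates
    deflection: tuple = (0.0, 0.0, 0.0)  # angles (degrees) about the three coordinate axes
    scale: float = 1.0                   # size ratio of the virtual object

def edit_pose(pose, **changes):
    """Apply the fields edited in the pose data edit bar, returning the updated pose."""
    return replace(pose, **changes)

# The user edits the pose of the virtual object in the edit bar:
initial = FirstPoseData()
updated = edit_pose(initial, position=(1.5, 0.0, 2.0),
                    deflection=(0.0, 90.0, 0.0), scale=0.5)
```

Keeping the pose immutable and producing a new value on each edit mirrors the flow in the text: the original pose in the packet stays intact until the update data is applied to the initial AR data package.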
  • The aforementioned update data may also include interaction data of at least one virtual object in the target reality scene, or interaction data of at least one virtual object in the three-dimensional scene model that represents the target reality scene.
  • That is, the update data provided by the embodiment of the present disclosure also includes interaction data corresponding to at least one virtual object.
  • the interaction data includes at least one of the following: at least one state triggering condition, a presentation state corresponding to each state triggering condition, and the number of recurring display times of the virtual object after being triggered and displayed.
  • The interaction data editing operation for at least one virtual object can be obtained through an interaction data editing interface displayed on the AR device or on the AR generator; that is, the interaction data can be edited for the target reality scene.
  • The subsequently obtained state trigger condition of the virtual object may be used to trigger the virtual object to be presented according to the presentation state corresponding to the state trigger condition, and/or according to the number of recurring displays after being triggered by the state trigger condition.
  • The interaction data editing interface shown in Figure 6(b), which specifically refers to the interaction data editing interface of the virtual animation, may include editing areas for the state triggering condition of at least one virtual object, the presentation state corresponding to the state triggering condition, and the number of recurring displays of the virtual object after being triggered to display.
  • the editing operation for the state triggering condition of the virtual object can be obtained through the editing area corresponding to the state triggering condition, so as to obtain the state triggering condition corresponding to the virtual object.
  • After the subsequent AR device obtains the state triggering condition, the virtual object can be displayed in the target reality scene when the state triggering condition is triggered.
  • The editing operation for the number of recurring displays of the virtual object after being triggered to display can be obtained through the editing area corresponding to the recurring number. For example, if the obtained number of displays is n, it indicates that the current virtual animation is displayed n times in a loop after trigger condition 1 is triggered.
  • When the interaction data includes multiple state triggering conditions, the interaction data also includes the priority of each state triggering condition.
  • When the interaction data is edited through the interaction data editing interface displayed by the AR device, the page shown in Figure 5 can be displayed.
  • The left side of Figure 5 can display the coordinate information of the currently displayed virtual object "Tang Sancai horse" in the coordinate system where the target reality scene is located, as well as the interaction data corresponding to the virtual object "Tang Sancai horse" (not shown in Figure 5), including the trigger condition, display status, whether to display in a loop, priority, etc.
  • Exemplarily, the interaction data contains multiple state triggering conditions, the initial AR data packet is associated with multiple virtual objects, and each virtual object corresponds to a state triggering condition; in this case, a priority can be set for the state triggering condition corresponding to each virtual object.
  • When the subsequent AR device obtains multiple state trigger conditions at the same time, the virtual object corresponding to the state trigger condition with the highest priority is triggered, and the virtual object is displayed according to the presentation state corresponding to that state trigger condition, and/or according to the number of recurring displays after the virtual object is triggered.
  • The state trigger conditions mentioned here can include, but are not limited to, at least one of the following:
  • click model trigger condition, slide model trigger condition, distance recognition trigger condition, specified area trigger condition, gesture recognition trigger condition, facing current model trigger condition, and facing specified model trigger condition.
  • The click model trigger condition refers to the state trigger condition under which the virtual object A is displayed in the AR device after the three-dimensional model of the virtual object A displayed in the AR device is clicked; that is, the AR device can display the three-dimensional model of the virtual object to be displayed, and after detecting a click operation on the three-dimensional model, display the virtual object corresponding to the three-dimensional model;
  • The sliding model trigger condition refers to the state trigger condition for the virtual object A that is triggered by sliding the three-dimensional model of the virtual object A in a set manner in the AR device. Illustratively, it can be set in the AR device that a right sliding operation on the three-dimensional model of the virtual object A triggers the virtual object A to be displayed, and a left sliding operation on the three-dimensional model of the virtual object A triggers the virtual object A to disappear;
  • the distance trigger condition refers to the state trigger condition for the virtual object A that is triggered when the distance between the location coordinate of the AR device and the set location point meets the set distance;
  • the designated area trigger condition refers to the state trigger condition for the virtual object A that is triggered after the AR device enters the designated area;
  • the gesture recognition trigger condition refers to the state trigger condition for the virtual object A triggered by the set gesture action
  • The facing current model trigger condition refers to the state trigger condition for the virtual object A that is triggered when the shooting angle of the AR device faces the position where the virtual object A is located;
  • The facing specified model trigger condition refers to the state trigger condition for the virtual object A that is triggered when the AR device faces the position where a specific virtual object is located.
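The seven state trigger conditions enumerated above can be sketched as an enumeration, split into the two families discussed later in this disclosure: conditions that depend on the second pose data of the AR device (distance, area, facing) and conditions that do not (click, slide, gesture). The enum member names are illustrative.

```python
from enum import Enum

class StateTrigger(Enum):
    """The seven state trigger conditions listed above (illustrative naming)."""
    CLICK_MODEL = "click model"
    SLIDE_MODEL = "slide model"
    DISTANCE_RECOGNITION = "distance recognition"
    SPECIFIED_AREA = "specified area"
    GESTURE_RECOGNITION = "gesture recognition"
    FACING_CURRENT_MODEL = "facing current model"
    FACING_SPECIFIED_MODEL = "facing specified model"

# Conditions that depend on the second pose data of the AR device:
POSE_RELATED = {StateTrigger.DISTANCE_RECOGNITION, StateTrigger.SPECIFIED_AREA,
                StateTrigger.FACING_CURRENT_MODEL, StateTrigger.FACING_SPECIFIED_MODEL}
```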
  • a trigger logic chain can be formed for these state triggering conditions.
  • For example, the AR data packet contains 3 virtual objects, and the state triggering conditions corresponding to the first virtual object, the second virtual object and the third virtual object are recorded as state triggering condition 1, state triggering condition 2 and state triggering condition 3. When the sequence of the formed trigger logic chain is state trigger condition 1, state trigger condition 2 and state trigger condition 3, then when the user triggers state trigger condition 1, state trigger condition 2 and state trigger condition 3 in turn, virtual object 1, virtual object 2, and virtual object 3 can be displayed in sequence.
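The trigger logic chain in the example above can be sketched as a simple state machine that advances one link each time the user fires the expected condition. This is an illustrative reading of the chain, not an implementation from the disclosure.

```python
def run_trigger_chain(chain, fired_sequence):
    """Display virtual objects in order as the user fires the chained trigger
    conditions one after another (a sketch of the trigger logic chain above)."""
    displayed = []
    expected = 0
    for condition in fired_sequence:
        if expected < len(chain) and condition == chain[expected][0]:
            displayed.append(chain[expected][1])  # show the object for this link
            expected += 1                         # advance to the next link
    return displayed

chain = [("condition_1", "virtual_object_1"),
         ("condition_2", "virtual_object_2"),
         ("condition_3", "virtual_object_3")]

# The user triggers conditions 1, 2 and 3 in turn: objects 1, 2, 3 appear in sequence.
shown = run_trigger_chain(chain, ["condition_1", "condition_2", "condition_3"])
```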
  • In the embodiment of the present disclosure, an interaction data editing interface for editing the interaction data of a virtual object may be provided, to support editing the trigger mode and display form of the virtual object.
  • the generating method provided in the embodiment of the present disclosure further includes:
  • based on the update data, the initial AR data package is updated to generate the updated AR data package.
  • The third pose data corresponding to the virtual object model may include, but is not limited to, data that can represent the position and/or posture of the virtual object model when presented in the target reality scene, or data that can represent the position and/or posture of the virtual object model in the three-dimensional scene model; for example, it may include position coordinates, deflection angle, and size information of the virtual object model in the coordinate system where the target reality scene or the three-dimensional scene model is located.
  • The third pose data of the virtual object model in the target reality scene or the three-dimensional scene model can represent the pose of the target real object corresponding to the virtual object model in the target reality scene.
  • The display form of the virtual object model when displayed in the AR device can be edited. For example, the display form of the virtual object model in the AR device can be edited to an occlusion form, in which the model is transparently processed when presented in the AR device.
  • In this way, the virtual object model can be used to occlude the virtual object that needs to be occluded: when the virtual object is displayed, the part of the virtual object that needs to be occluded is not rendered, and the occluded area is transparently processed through the virtual object model to achieve the occlusion effect.
  • In the embodiment of the present disclosure, the virtual object model used to present the occlusion effect can be edited, and the real third pose data of the virtual object model in the target reality scene can be restored, which facilitates providing a more realistic display effect when the virtual object is subsequently displayed on the AR device.
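A minimal per-pixel sketch of the occlusion form described above: the virtual object model acts as an invisible occluder, so fragments of the virtual object behind it are simply not rendered. This is a toy depth comparison, not a real renderer; the function and value names are assumptions.

```python
def render_fragment(virtual_depth, occluder_depth):
    """Per-pixel sketch of the occlusion form: the occluder model is drawn
    fully transparent but still decides visibility, so any virtual-object
    fragment behind it is not rendered (illustrative only)."""
    if occluder_depth is not None and occluder_depth < virtual_depth:
        return None            # occluded part of the virtual object: not rendered
    return "virtual_object"    # visible part: rendered normally

# A real pillar (modeled as an occluder) at depth 2.0 in front of a virtual object at 5.0:
hidden = render_fragment(virtual_depth=5.0, occluder_depth=2.0)
visible = render_fragment(virtual_depth=5.0, occluder_depth=None)
```

In a real pipeline this corresponds to rendering the occlusion-form model to the depth buffer only, so the camera image shows through where the real object should hide the virtual one.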
  • the at least one virtual object includes at least one first virtual object.
  • When acquiring update data of the at least one virtual object associated with the initial AR data packet, the update data of at least one first virtual object included in the initial AR data package can be acquired.
  • the at least one virtual object includes at least one second virtual object.
  • When obtaining update data of the at least one virtual object associated with the initial AR data packet, at least one second virtual object associated with the initial AR data package can be obtained from a pre-established material library, and update data of the at least one second virtual object can be obtained.
  • Alternatively, when the update data of the at least one virtual object associated with the initial AR data packet is acquired, the update data of the at least one first virtual object contained in the initial AR data packet can be acquired, and at the same time the update data of at least one second virtual object associated with the initial AR data package can be obtained from the pre-established material library.
  • the pre-established material library may contain various virtual objects such as virtual static models, animations, and videos, and the user can select the virtual objects to be uploaded to the initial AR data package from the pre-established material library.
  • the second virtual object may be the same as or different from the first virtual object, and the initial AR data packet may be updated through the update data of the first virtual object and/or the second virtual object.
  • the virtual object associated with the initial AR data package and the update data of the virtual object can be acquired in various ways, and the update data of the virtual object can be flexibly acquired.
  • When displaying at least one virtual object associated with the initial AR data package, the initial AR data package may or may not contain virtual objects.
  • The displayed at least one virtual object associated with the initial AR data package may be the at least one first virtual object contained in the initial AR data package, may be at least one second virtual object associated with the initial AR data package obtained from a pre-established material library, or may be both the at least one first virtual object contained in the initial AR data package and the at least one second virtual object associated with the initial AR data package obtained from the pre-established material library.
  • the generating method provided in the embodiment of the present disclosure further includes:
  • The status information indicating whether the updated AR data package is enabled can be set in the interface shown in Figure 2a; an "enable" button can be set under each AR data package (not shown in Figure 2a). If the "enable" button under an AR data package is triggered, the updated AR data package corresponding to that AR data package can be downloaded and experienced by an AR device after being uploaded to the server.
  • In this way, the generated updated AR data package can be published to the server and downloaded and used by other users; for example, it can be downloaded and edited by other AR devices, and at the same time the AR data package can be downloaded and experienced by AR devices.
  • the generating method provided in the embodiment of the present disclosure further includes:
  • the label information of the updated AR data package is obtained, and the updated AR data package and label information are sent to the server.
  • After the initial AR data package is edited and the updated AR data package is obtained, the updated AR data package can be sent to the server.
  • The obtained AR data packages associated with the target reality scene may include multiple packages. After the user triggers the upload experience package operation on the page shown in Figure 8(a), in order to determine the target updated AR data package to be uploaded by the user, the page shown in Figure 8(b) can be displayed, where the user can fill in the label information corresponding to the target updated AR data package to be uploaded.
  • The label information may include the name of the target reality scene, the name of the floor, the name of the experience package, the subject, and remarks.
  • The filling of the label information, on the one hand, facilitates determining the updated AR data package to be uploaded, and on the other hand, facilitates the server saving the uploaded updated AR data package based on the label information, so that users of AR devices can conveniently download the AR data package on the AR device for experience.
  • The generated updated AR data package can be published to the server and downloaded and used by other users; for example, it can be downloaded and edited by other AR generators, and the AR data package can be downloaded and experienced by AR devices.
  • sending the updated AR data packet and label information to the server includes:
  • AR data packets in the enabled state can be used.
  • The status information indicating whether the updated AR data packet is enabled can be set in Figure 8(a), where an "enable" button is set under each AR data packet.
  • The display process can be applied to an AR device, which may be the same as or different from the above AR device used to generate AR data packets; this is not limited here. As shown in Figure 9, the process includes the following S201 to S203:
  • S201 In response to the second trigger operation, obtain an AR data packet associated with the target reality scene indicated by the second trigger operation; the AR data packet includes first pose data corresponding to at least one virtual object.
  • AR devices may include, but are not limited to, AR glasses, tablets, smart phones, smart wearable devices and other devices with display functions and data processing capabilities. These AR devices may be installed with an application for displaying AR scene content, in which users can experience the AR scene content.
  • the AR device can display at least one real scene and the AR data packet associated with each real scene.
  • The second trigger operation can be a trigger operation on an AR data packet associated with the displayed target reality scene.
  • In one embodiment, as shown in Figure 2a, the user can click on an AR data package associated with the target reality scene "XXX Building-15th Floor" among the AR data packages associated with the real scenes displayed by the AR device, such as clicking on the AR data package "[Example] Sci-Fi". The AR device can detect that there is a second trigger operation for the AR data package, and can request the server to obtain the AR data package associated with the target reality scene "XXX Building-15th Floor".
  • The first pose data corresponding to at least one virtual object contained in the AR data packet may be the first pose data in the target reality scene, or may be the first pose data in the three-dimensional scene model of the target reality scene. For details, please refer to the above description, which will not be repeated here.
  • S202 Determine presentation special effect information of at least one virtual object based on the second pose data of the AR device currently photographing the target reality scene and the first pose data corresponding to the at least one virtual object in the AR data packet.
  • The second pose data of the AR device when currently shooting the target reality scene, and the method of obtaining the second pose data corresponding to the AR device, are described in detail above and will not be repeated here.
  • Information for prompting the user to use the AR device to shoot may be displayed on the AR device.
  • The presentation special effect information of the virtual object in the AR device can be determined as follows: when the first pose data corresponding to the virtual object is the first pose data in the three-dimensional scene model of the target reality scene, because the three-dimensional scene model and the target reality scene are presented at a 1:1 scale in the same coordinate system, the first pose data of the virtual object set in advance when presented in the three-dimensional scene model and the second pose data corresponding to the AR device can be used to determine the presentation special effect information of the virtual object in the AR device.
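Because the 3D scene model is aligned 1:1 with the real scene, placing the virtual object in the device view reduces to transforming its first pose data (world coordinates) into the frame given by the second pose data (device position and heading). The planar sketch below illustrates this; the function name and the restriction to 2D with a yaw angle are simplifying assumptions.

```python
import math

def object_pose_in_view(obj_world_xy, device_xy, device_yaw_deg):
    """Transform an object's world-coordinate position (first pose data) into
    the AR device's coordinate frame (second pose data). Planar case only."""
    dx = obj_world_xy[0] - device_xy[0]
    dy = obj_world_xy[1] - device_xy[1]
    yaw = math.radians(device_yaw_deg)
    # Rotate the offset into the device (camera) coordinate frame.
    cx = math.cos(yaw) * dx + math.sin(yaw) * dy
    cy = -math.sin(yaw) * dx + math.cos(yaw) * dy
    return (cx, cy)

# Device at the origin facing +x (yaw 0): an object 3 m ahead stays 3 m ahead in view.
pos = object_pose_in_view((3.0, 0.0), (0.0, 0.0), 0.0)
```

The full method would use a 6-DoF pose (rotation matrix or quaternion plus translation), but the principle is the same: the shared world coordinate system makes the transform well-defined.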
  • S203 Based on the presentation special effect information, display at least one virtual object through the AR device.
  • After obtaining the presentation special effect information of the at least one virtual object in the target reality scene, the at least one virtual object can be displayed through the AR device according to the presentation special effect information.
  • In the embodiment of the present disclosure, the AR data packet indicated by the second trigger operation can be obtained in response to the second trigger operation; further, the presentation special effect information of the virtual object in the target reality scene can be determined based on the second pose data corresponding to the AR device and the first pose data corresponding to the virtual object set in advance in the AR data packet, and finally a realistic augmented reality scene effect is displayed in the AR device.
  • the AR data package further includes third pose data corresponding to at least one virtual object model; the virtual object model represents the target object in the target reality scene;
  • Based on the first pose data corresponding to at least one virtual object in the AR data package and the third pose data corresponding to the virtual object model, the presentation special effect information of at least one virtual object is determined.
  • According to the third pose data of the virtual object model in the target reality scene, the second pose data corresponding to the AR device, and the first pose data corresponding to the virtual object, the virtual object can be occluded by the physical object corresponding to the virtual object model.
  • The occluded part of the virtual object will not be rendered: the virtual object model can be processed into the occlusion form, and the virtual object model in the occlusion form is made transparent, so that the user will not see the transparent virtual object model in the AR device, which can show the rendering effect that the virtual object is occluded by the physical object in the target reality scene.
  • the third pose data of the virtual object model can be used to restore the real third pose data of the virtual object model in the target reality scene.
  • the occlusion effect of the virtual object can be realized through the virtual object model, so as to show the effect of a more realistic augmented reality scene in the AR device.
  • the AR data package further includes interaction data corresponding to at least one virtual object.
  • The interaction data includes at least one of the following: at least one state triggering condition, the presentation state corresponding to each state triggering condition, and the number of recurring displays of the virtual object after being triggered to display.
  • the state triggering conditions included in the interaction data, the presentation state corresponding to each state triggering condition, and the number of recurring display times of the virtual object after the triggered display are explained in detail above, and will not be repeated here.
  • The state trigger conditions can include multiple conditions, such as at least one of the click model trigger condition, sliding model trigger condition, distance recognition trigger condition, designated area trigger condition, gesture recognition trigger condition, facing current model trigger condition, and facing designated model trigger condition.
  • The explanation of each trigger condition is detailed above and will not be repeated here. It can be seen that the state trigger conditions can include two types: one type is the state trigger conditions related to the second pose data corresponding to the AR device, such as at least one of the distance recognition trigger condition, designated area trigger condition, facing current model trigger condition, and facing designated model trigger condition; the other type is the state trigger conditions unrelated to the second pose data corresponding to the AR device, such as at least one of the click model trigger condition, sliding model trigger condition, and gesture recognition trigger condition. The interaction with the virtual object will be considered below for these two types.
  • the display method provided in the embodiment of the present disclosure further includes:
  • displaying at least one virtual object through the AR device includes:
  • At least one virtual object is displayed through the AR device.
  • The interactive operation may be an operation for triggering the update of the presentation special effect information of the virtual object, where the first type of state triggering condition may be the aforementioned state triggering conditions that are unrelated to the second pose data corresponding to the AR device, such as at least one of the click model trigger condition, sliding model trigger condition, and gesture recognition trigger condition.
  • The AR device detects the presence of the interactive operation. For example, if it is detected that the user performs a sliding operation on the virtual object A in the AR device, the presentation state of the virtual object A corresponding to the sliding operation can be obtained, and/or the number of recurring displays after the virtual object A is triggered to display under the sliding operation, and then the presentation special effect information of the virtual object A is updated based on this.
  • The manner of updating the presentation special effect information of the virtual object can be divided into three situations.
  • In the first situation, the presentation special effect information of the at least one virtual object is updated solely based on the presentation state corresponding to the at least one virtual object under the first type of state triggering condition.
  • In the second situation, the presentation special effect information of the at least one virtual object is updated solely based on the number of recurring displays of the at least one virtual object after the first type of state triggering condition is triggered.
  • In the third situation, the presentation special effect information of the at least one virtual object is updated based on both the presentation state corresponding to the at least one virtual object under the first type of state triggering condition and the number of recurring displays of the at least one virtual object after the first type of state triggering condition is triggered.
  • For example, the initial presentation special effect information of the virtual object A is a virtual vase displayed on the table, the interactive operation is a sliding operation acting on the virtual object, and the presentation state of the virtual object A corresponding to the sliding operation is "not displayed"; in this case, the presentation special effect information of the virtual vase can be changed from being displayed to disappearing.
  • For example, the virtual object A is a virtual tabby cat displayed on the wall moving from position A to position B, the interactive operation is a click operation on the three-dimensional model corresponding to the virtual tabby cat, and the number of recurring displays after the virtual object is triggered to display under the click operation is 5. When a user's click operation on the virtual tabby cat is detected, the virtual tabby cat can be triggered to recur 5 times in a display manner from position A to position B.
  • For example, the virtual object A is a virtual light that flashes once when displayed, the interactive operation is a gesture recognition operation, the presentation state of the virtual light corresponding to the gesture recognition operation is "displayed", and the number of recurring displays is 5. In this case, the presentation special effect information of the virtual light can be updated to flash 5 times.
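The three update situations and the examples above (vase, tabby cat, virtual light) can be sketched as one update function that accepts a new presentation state, a new loop count, or both. The dictionary keys and string values are illustrative assumptions.

```python
def update_special_effect(effect, presentation_state=None, loop_count=None):
    """Update presentation special effect information after an interactive
    operation: state only, loop count only, or both (the three situations)."""
    updated = dict(effect)
    if presentation_state is not None:
        updated["state"] = presentation_state  # e.g. the vase switches to "not displayed"
    if loop_count is not None:
        updated["loops"] = loop_count          # e.g. the virtual light flashes 5 times
    return updated

vase = {"state": "displayed", "loops": 1}
# Situation 1: the sliding operation makes the vase disappear (state only).
after_slide = update_special_effect(vase, presentation_state="not displayed")
# Situation 3: the gesture makes the light display and flash 5 times (state and count).
light = update_special_effect({"state": "not displayed", "loops": 1},
                              presentation_state="displayed", loop_count=5)
```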
  • In the embodiment of the present disclosure, the augmented reality experiencer can show a set gesture to the AR device, thereby triggering the virtual object to be displayed according to the special effect corresponding to the gesture. This method improves the interactivity between the augmented reality experiencer and the virtual objects in the target reality scene, enhancing the user experience.
  • the display method provided by the embodiment of the present disclosure further includes:
  • When the second pose data meets the second type of state triggering condition, the presentation special effect information of the at least one virtual object is updated based on the presentation state corresponding to the at least one virtual object under the second type of state triggering condition, and/or the number of recurring displays of the at least one virtual object after the second type of state triggering condition is triggered.
  • At least one virtual object is displayed through the AR device, including:
  • At least one virtual object is displayed through the AR device.
  • The second type of state triggering conditions may be the aforementioned state triggering conditions related to the second pose data corresponding to the AR device, such as at least one of the distance recognition trigger condition, designated area trigger condition, facing current model trigger condition, and facing designated model trigger condition.
  • The second pose data corresponding to the AR device meeting the second type of state triggering conditions includes multiple situations.
  • Whether the second pose data corresponding to the AR device meets the second type of state triggering conditions can be determined by the position and/or display angle of the display component of the AR device: it can be determined solely by the position of the display component of the AR device, solely by the display angle of the AR device, or by combining the position and display angle of the display component of the AR device.
  • For example, when the distance between the location coordinates of the AR device and a set location point meets the set distance, the second pose data corresponding to the AR device meets the second type of state trigger condition; or, when the display angle of the AR device faces the position where the virtual object A is displayed, the second pose data corresponding to the AR device meets the second type of state trigger condition; or, when the distance between the location coordinates and the set location point meets the set distance and the display angle of the AR device faces the position where the virtual object A is displayed, the second pose data corresponding to the AR device meets the second type of state trigger condition.
  • In this case, the presentation special effect information of the at least one virtual object can be updated based on the presentation state corresponding to the at least one virtual object under the second type of state triggering condition, and/or the number of recurring displays of the at least one virtual object after the second type of state triggering condition is triggered.
  • The specific update method is similar to the update method based on interactive operations above and will not be repeated here.
  • the virtual object when it is detected that the second pose data corresponding to the AR device meets the set state trigger condition, the virtual object is displayed according to the display mode of the virtual object under the set state trigger condition.
  • for example, when the AR device is close to the set position and the display angle of the AR device faces the position where the virtual object A is located, the virtual object A is triggered to be displayed according to the presentation special effect corresponding to the second pose data of the AR device.
  • this process can make the augmented reality scene effect more realistic, thereby enhancing the user experience.
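  The combined position-and-angle check described above can be sketched in code. The following is an illustrative Python sketch, not code from the present disclosure; the class name, field names, and thresholds are all assumptions introduced for the example:

```python
from dataclasses import dataclass
import math

@dataclass
class Pose:
    """Second pose data of the AR device's display component (illustrative fields)."""
    x: float
    y: float
    yaw_deg: float  # display angle in the world coordinate system of the scene

def meets_second_type_condition(pose, target_xy, max_distance, facing_tolerance_deg):
    """Combine position and display angle, as the passage describes: the device
    must be within a set distance of the set point AND roughly facing the
    position where virtual object A is located."""
    dx, dy = target_xy[0] - pose.x, target_xy[1] - pose.y
    close_enough = math.hypot(dx, dy) <= max_distance
    # bearing from the device to the object, compared against the display angle
    bearing = math.degrees(math.atan2(dy, dx))
    angle_diff = abs((bearing - pose.yaw_deg + 180) % 360 - 180)
    facing = angle_diff <= facing_tolerance_deg
    return close_enough and facing
```

  Either sub-check could also be used alone, matching the three alternatives listed above (position only, angle only, or both).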
  • S301: in response to the first trigger operation, obtain a three-dimensional scene model of the target reality scene indicated by the first trigger operation, and an initial AR data package associated with the target reality scene;
  • S302: obtain update data of at least one virtual object associated with the initial AR data package; the update data includes first pose data of the at least one virtual object in the three-dimensional scene model;
  • in response to the first trigger operation, the user can be provided with the three-dimensional scene model and the initial AR data package associated with the target reality scene, so that the user can intuitively edit and update, through the three-dimensional scene model and according to his own needs, the at least one virtual object associated with the initial AR data package, and obtain the updated AR data package in this way.
  • when the augmented reality experiencer has the augmented reality experience in the target reality scene, the updated AR data package can be called directly for that experience.
  • the method of remotely editing AR data packages provided in the present disclosure can simplify how AR data packages are generated and provides convenient AR material for the subsequent display of augmented reality scenes.
  • an embodiment of the present disclosure also provides an AR scene content display system 400, which includes an AR generator 401, a server 402, and an AR device 403.
  • the AR generator 401 is communicatively connected to the server 402, and the AR device 403 is communicatively connected to the server 402;
  • the AR generating terminal 401 is configured to, in response to the first trigger operation, obtain the three-dimensional scene model of the target reality scene indicated by the first trigger operation and the initial augmented reality (AR) data package associated with the target reality scene; obtain update data of at least one virtual object associated with the initial AR data package, the update data including first pose data of the at least one virtual object in the three-dimensional scene model; update the initial AR data package based on the update data of the at least one virtual object; and send the updated AR data package to the server;
  • the server 402 is configured to receive the updated AR data packet and forward the updated AR data packet to the AR device;
  • the AR device 403 is configured to, in response to the second trigger operation, obtain the updated AR data package stored in the server and associated with the target reality scene indicated by the second trigger operation; determine the presentation special effect information of the at least one virtual object based on the second pose data obtained when the AR device currently photographs the target reality scene and the first pose data of the at least one virtual object in the three-dimensional scene model in the updated AR data package; and display the at least one virtual object through the AR device based on the presentation special effect information.
  • the display system provided by the embodiments of the present disclosure can remotely edit and generate AR data packets, and publish the generated AR data packets to the server for augmented reality experience on the AR device side.
  • a simple and convenient way to generate an AR data package can be provided on the AR generator side, which is convenient for users to edit, and the server can save the AR data package, which is convenient for the AR device to download it.
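  The generator–server–device flow described above can be illustrated with a minimal stand-in for the server. This is a hedged sketch under assumed names, not the disclosed implementation:

```python
class Server:
    """Minimal stand-in for server 402: stores updated AR data packages
    keyed by target reality scene and hands them to AR devices on request."""

    def __init__(self):
        self.packages = {}

    def publish(self, scene_id, package):
        """Called by the AR generator 401 after editing is finished."""
        self.packages[scene_id] = package

    def fetch(self, scene_id):
        """Called by the AR device 403 in response to the second trigger
        operation; returns None if no package is stored for the scene."""
        return self.packages.get(scene_id)
```

  A real server would also store the label information and enabled/disabled status mentioned later in the description; those are omitted here for brevity.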
  • S501: in response to the first trigger operation, obtain an initial AR data package associated with the target reality scene indicated by the first trigger operation;
  • in response to the first trigger operation, an AR data package editing state can be entered; after the second pose data corresponding to the AR device is obtained, the at least one virtual object associated with the initial AR data package can be displayed based on that second pose data, which facilitates the user in intuitively editing and adjusting, on the AR device and as required, the first pose data of the virtual object in the target reality scene, thereby simplifying the generation process of the AR scene content and making the adjusted AR data package more accurate.
  • the embodiment of the present disclosure also provides a generating apparatus corresponding to the AR scene content generation method shown in the figure; since the principle by which the apparatus solves the problem is similar to that of the method, the implementation of the apparatus can refer to the implementation of the method, and repeated descriptions are omitted.
  • an AR scene content generating apparatus 600 provided by an embodiment of the present disclosure includes:
  • the first obtaining module 601 is configured to obtain the initial AR data packet associated with the target reality scene indicated by the first trigger operation in response to the first trigger operation;
  • the second acquisition module 602 is configured to acquire update data of at least one virtual object associated with the initial AR data packet; the update data includes the first pose data corresponding to the at least one virtual object;
  • the update module 603 is configured to update the initial AR data package based on the update data of at least one virtual object, and generate an updated AR data package.
  • the first obtaining module 601 is further used for:
  • the method includes:
  • obtaining update data of the at least one virtual object placed in the three-dimensional scene model, where the update data includes first pose data of the at least one virtual object placed in the three-dimensional scene model.
  • the generating device further includes a display module 604, and the display module 604 is configured to:
  • when the second acquiring module 602 is configured to acquire update data of at least one virtual object associated with the initial AR data package, it includes:
  • acquiring update data of the displayed at least one virtual object; the update data includes first pose data of the at least one virtual object in the target reality scene.
  • when the second acquiring module is used to acquire update data of the displayed at least one virtual object, it includes:
  • the first pose data includes at least one of the following: position coordinates, deflection angle, and size information in the coordinate system where the target reality scene is located.
  • the update data further includes interaction data corresponding to at least one virtual object
  • when the second acquiring module is used to acquire update data of at least one virtual object associated with the initial AR data package, it includes:
  • the interaction data includes at least one of the following: at least one state triggering condition, a presentation state corresponding to each state triggering condition, and the number of recurring display times of the virtual object after being triggered and displayed.
  • when the interaction data includes multiple state trigger conditions, the interaction data further includes the priority of each state trigger condition.
  • the second obtaining module 602 is further configured to:
  • when the update module 603 is used to update the initial AR data package based on the update data of at least one virtual object and generate the updated AR data package, it includes:
  • the initial AR data packet is updated to generate an updated AR data packet.
  • the at least one virtual object includes at least one first virtual object
  • when the second obtaining module 602 is used to obtain update data of the at least one virtual object associated with the initial AR data package, it includes:
  • the at least one virtual object includes at least one second virtual object
  • the method includes:
  • the generating device further includes a sending module 605.
  • the sending module 605 is configured to:
  • the sending module 605 is further configured to:
  • the label information of the updated AR data package is obtained, and the label information is sent to the server.
  • the embodiment of the present disclosure also provides a display apparatus corresponding to the AR scene content display method shown in the figure; since the principle by which the apparatus solves the problem is similar to that of the method, the implementation of the apparatus can refer to the implementation of the method, and repeated descriptions are omitted.
  • as shown in FIG. 16, which is a schematic diagram of an AR scene content display apparatus 700 provided by an embodiment of the present disclosure, the apparatus includes:
  • the obtaining module 701 is configured to obtain an AR data packet associated with the target reality scene indicated by the second trigger operation in response to the second trigger operation; the AR data packet contains the first pose data corresponding to at least one virtual object;
  • the determining module 702 is configured to determine the presentation special effect information of at least one virtual object based on the second pose data obtained when the AR device currently photographs the target reality scene and the first pose data corresponding to the at least one virtual object in the AR data package;
  • the display module 703 is configured to display at least one virtual object through the AR device based on the presentation special effect information.
  • the AR data package further includes third pose data corresponding to at least one virtual object model; the virtual object model represents the target object in the target reality scene;
  • when the determining module 702 is used to determine the presentation special effect information of at least one virtual object based on the second pose data obtained when the AR device currently photographs the target reality scene and the first pose data corresponding to the at least one virtual object in the AR data package, it includes:
  • determining the presentation special effect information of the at least one virtual object based on the first pose data corresponding to the at least one virtual object in the AR data package and the third pose data corresponding to the virtual object model.
  • the AR data package further includes interaction data corresponding to at least one virtual object.
  • the interaction data includes at least one of: at least one state trigger condition, a presentation state corresponding to each state trigger condition, and the number of recurring display times of the virtual object after it is triggered and displayed.
  • the display device further includes an interaction module 704, and the interaction module 704 is configured to:
  • when the interactive operation acting on the at least one virtual object meets the first type of state trigger condition, the presentation special effect information of the at least one virtual object is updated based on the presentation state corresponding to the at least one virtual object under the first type of state trigger condition, and/or the number of recurring display times of the at least one virtual object after the first type of state trigger condition is triggered.
  • when the display module is used to display at least one virtual object through the AR device based on the presentation special effect information, it includes:
  • At least one virtual object is displayed through the AR device.
  • the display device further includes an interaction module 704, and the interaction module 704 is configured to:
  • when the second pose data meets the second type of state trigger condition, the presentation special effect information of the at least one virtual object is updated based on the presentation state corresponding to the at least one virtual object under the second type of state trigger condition, and/or the number of recurring display times of the at least one virtual object after the second type of state trigger condition is triggered.
  • when the display module is used to display at least one virtual object through the AR device based on the presentation special effect information, it includes:
  • At least one virtual object is displayed through the AR device.
  • an embodiment of the present disclosure also provides an electronic device 800.
  • a schematic structural diagram of the electronic device 800 provided by the embodiment of the present disclosure includes:
  • the processor 81 and the memory 82 communicate through the bus 83, so that the processor 81 executes the following instructions: in response to the first trigger operation, obtain the initial AR data package associated with the target reality scene indicated by the first trigger operation; obtain update data of at least one virtual object associated with the initial AR data package, the update data including first pose data corresponding to the at least one virtual object; and update the initial AR data package based on the update data of the at least one virtual object to generate an updated AR data package.
  • alternatively, the processor 81 may be caused to execute the following instructions: in response to the second trigger operation, obtain an AR data package associated with the target reality scene indicated by the second trigger operation, the AR data package containing first pose data corresponding to at least one virtual object; determine the presentation special effect information of the at least one virtual object based on the second pose data of the AR device currently photographing the target reality scene and the first pose data corresponding to the at least one virtual object in the AR data package; and display the at least one virtual object through the AR device based on the presentation special effect information.
  • the embodiments of the present disclosure also provide a computer-readable storage medium having a computer program stored on the computer-readable storage medium, and the computer program executes the steps of the generation method or the display method in the above method embodiment when the computer program is run by a processor.
  • the storage medium may be a volatile or nonvolatile computer readable storage medium.
  • the embodiments of the present disclosure also provide a computer program product, the computer program product carries program code, and the instructions included in the program code can be used to execute the steps of the generation method or the display method described in the above method embodiment.
  • the above-mentioned computer program product can be implemented by hardware, software, or a combination thereof.
  • the computer program product is specifically embodied as a computer storage medium.
  • the computer program product is specifically embodied as a software product, such as a software development kit (SDK).
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • if the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a processor-executable non-volatile computer-readable storage medium.
  • the technical solution of the present disclosure essentially or the part that contributes to the prior art or the part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a storage medium, including Several instructions are used to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Security & Cryptography (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Management Or Editing Of Information On Record Carriers (AREA)

Abstract

A method for generating AR scene content, a display method, an apparatus, and a storage medium, wherein the generation method includes: in response to a first trigger operation, obtaining an initial AR data package associated with a target reality scene indicated by the first trigger operation (S101); obtaining update data of at least one virtual object associated with the initial AR data package, the update data including first pose data corresponding to the at least one virtual object (S102); and updating the initial AR data package based on the update data of the at least one virtual object to generate an updated AR data package (S103).

Description

Method for generating AR scene content, display method, apparatus, and storage medium
The present disclosure claims priority to the Chinese patent application No. 202010456842.3, filed with the China Patent Office on May 26, 2020 and entitled "Method for generating AR scene content, display method, display system, and apparatus", and to the Chinese patent application No. 202010456843.8, filed with the China Patent Office on May 26, 2020 and entitled "Method for generating AR scene content, display method, apparatus, and storage medium", the entire contents of which are incorporated into the present disclosure by reference.
Technical Field
The present disclosure relates to the field of augmented reality technology, and in particular to a method for generating AR scene content, a display method, an apparatus, and a storage medium.
Background
Augmented reality (AR) technology ingeniously fuses virtual objects with the real world: computer-generated virtual objects such as text, images, three-dimensional models, music, and video are simulated and then applied to the real world, thereby presenting an augmented reality scene.
Before an augmented reality scene is presented, the AR content to be displayed in it, for example virtual objects and their display information, can be determined in advance. However, if the internal environment of the current reality scene changes, the previously produced virtual objects no longer match the current reality scene, so the display effect of the augmented reality scene is poor when the virtual objects are superimposed on it.
Summary
Embodiments of the present disclosure provide at least one solution for generating AR scene content.
In a first aspect, an embodiment of the present disclosure provides a method for generating augmented reality (AR) scene content, including:
in response to a first trigger operation, obtaining an initial AR data package associated with a target reality scene indicated by the first trigger operation;
obtaining update data of at least one virtual object associated with the initial AR data package, the update data including first pose data corresponding to the at least one virtual object;
updating the initial AR data package based on the update data of the at least one virtual object to generate an updated AR data package.
In the embodiments of the present disclosure, when the first trigger operation is detected, the initial AR data package associated with the target reality scene can be obtained, and the update data of at least one virtual object associated with it, for example including the first pose data corresponding to the at least one virtual object, can further be obtained; the initial AR data package is then updated based on the update data to obtain virtual objects that better match the target reality scene, improving the realism of the augmented reality scene.
In a second aspect, an embodiment of the present disclosure provides a method for displaying augmented reality (AR) scene content, including:
in response to a second trigger operation, obtaining an AR data package associated with a target reality scene indicated by the second trigger operation, the AR data package containing first pose data corresponding to at least one virtual object;
determining presentation special effect information of the at least one virtual object based on second pose data of an AR device currently photographing the target reality scene and the first pose data corresponding to the at least one virtual object in the AR data package;
displaying the at least one virtual object through the AR device based on the presentation special effect information.
In the embodiments of the present disclosure, in response to the second trigger operation, the AR data package associated with that operation can be obtained; the presentation special effect information of the virtual objects in the target reality scene can further be determined based on the second pose data corresponding to the AR device and the pre-set first pose data of the virtual objects in the AR data package in the target reality scene, and a realistic augmented reality scene effect is finally displayed on the AR device.
In a third aspect, an embodiment of the present disclosure provides an apparatus for generating augmented reality (AR) scene content, including:
a first obtaining module, configured to, in response to a first trigger operation, obtain an initial AR data package associated with a target reality scene indicated by the first trigger operation;
a second obtaining module, configured to obtain update data of at least one virtual object associated with the initial AR data package, the update data including first pose data corresponding to the at least one virtual object;
an update module, configured to update the initial AR data package based on the update data of the at least one virtual object to generate an updated AR data package.
In a fourth aspect, an embodiment of the present disclosure provides an apparatus for displaying augmented reality (AR) scene content, including:
an obtaining module, configured to, in response to a second trigger operation, obtain an AR data package associated with a target reality scene indicated by the second trigger operation, the AR data package containing first pose data corresponding to at least one virtual object;
a determining module, configured to determine presentation special effect information of the at least one virtual object based on second pose data of an AR device currently photographing the target reality scene and the first pose data corresponding to the at least one virtual object in the AR data package;
a display module, configured to display the at least one virtual object through the AR device based on the presentation special effect information.
In a fifth aspect, an embodiment of the present disclosure provides an electronic device, including a processor, a memory, and a bus, the memory storing machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate through the bus, and when the machine-readable instructions are executed by the processor, the steps of the generation method of the first aspect or the steps of the display method of the second aspect are performed.
In a sixth aspect, an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the generation method of the first aspect or the steps of the display method of the second aspect are performed.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present disclosure more clearly, the drawings needed in the embodiments are briefly introduced below. The drawings here are incorporated into and constitute a part of this specification; they show embodiments conforming to the present disclosure and, together with the specification, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings only show some embodiments of the present disclosure and therefore should not be regarded as limiting its scope; for those of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.
FIG. 1 shows a flowchart of a first method for generating AR scene content provided by an embodiment of the present disclosure;
FIG. 2a shows a schematic diagram of an AR data package download interface provided by an embodiment of the present disclosure;
FIG. 2b shows a schematic diagram of an interface for downloading and uploading AR data packages provided by an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of a positioning prompt page provided by an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of a pose-data display page for a virtual object provided by an embodiment of the present disclosure;
FIG. 5 shows a schematic diagram of a pose-data editing page for a virtual object provided by an embodiment of the present disclosure;
FIG. 6 shows a schematic diagram of an interface for generating interaction data provided by an embodiment of the present disclosure;
FIG. 7 shows a schematic diagram of an AR data package save page provided by an embodiment of the present disclosure;
FIG. 8 shows an interface flowchart for uploading an updated AR data package provided by an embodiment of the present disclosure;
FIG. 9 shows a flowchart of a method for displaying AR scene content provided by an embodiment of the present disclosure;
FIG. 10 shows a schematic diagram of a positioning prompt on the AR device when AR scene content is displayed, provided by an embodiment of the present disclosure;
FIG. 11 shows a schematic diagram of an augmented reality scene provided by an embodiment of the present disclosure;
FIG. 12 shows a flowchart of a second method for generating AR scene content provided by an embodiment of the present disclosure;
FIG. 13 shows a schematic structural diagram of a display system for AR scene content provided by an embodiment of the present disclosure;
FIG. 14 shows a flowchart of a third method for generating AR scene content provided by an embodiment of the present disclosure;
FIG. 15 shows a schematic structural diagram of a first apparatus for generating AR scene content provided by an embodiment of the present disclosure;
FIG. 16 shows a schematic structural diagram of a first apparatus for displaying AR scene content provided by an embodiment of the present disclosure;
FIG. 17 shows a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure, as generally described and illustrated in the drawings herein, may be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present disclosure provided in the drawings is not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments of the present disclosure. All other embodiments obtained by those skilled in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined or explained in subsequent drawings.
The term "and/or" herein merely describes an association relationship and indicates that three relationships may exist; for example, A and/or B may indicate three cases: A alone, both A and B, and B alone. In addition, the term "at least one" herein indicates any one of multiple items or any combination of at least two of multiple items; for example, including at least one of A, B, and C may indicate including any one or more elements selected from the set consisting of A, B, and C.
Augmented reality (AR) technology can be applied in AR devices, which may be any electronic devices capable of supporting AR functions, including but not limited to AR glasses, tablets, and smartphones. When an AR device is operated in a reality scene, virtual objects superimposed on the reality scene can be viewed through the device. For example, when passing buildings or tourist attractions, virtual text and image introductions superimposed near them can be seen through the AR device; the virtual text and images here can be called virtual objects, and the buildings or attractions constitute the real scene. In such a scenario, the virtual introduction seen through AR glasses changes as the orientation angle of the glasses changes, i.e. it is related to the position of the AR glasses. In other scenarios, however, a more realistic fusion of the virtual and the real is desired, for example seeing a virtual flowerpot placed on a real table, or a virtual tree superimposed on a real campus playground. In such cases it is necessary to consider how the virtual flowerpot and the virtual tree can be better integrated with the reality scene to achieve the presentation effect of virtual objects in an augmented reality scene; how this presentation effect is achieved is the subject of the embodiments of the present disclosure and will be described below with reference to the following specific embodiments.
To facilitate understanding of this embodiment, a method for generating AR scene content disclosed in an embodiment of the present disclosure is first introduced in detail. The execution subject of the method may be the above-mentioned AR device, for example a device with display functions and data processing capabilities such as AR glasses, a tablet, a smartphone, or a smart wearable device, which is not limited in the embodiments of the present disclosure. In some possible implementations, the generation method may be implemented by a processor calling computer-readable instructions stored in a memory.
Referring to FIG. 1, which is a flowchart of the method for generating AR scene content provided by an embodiment of the present disclosure, the generation method includes the following steps S101 to S103:
S101: In response to a first trigger operation, obtain an initial AR data package associated with a target reality scene indicated by the first trigger operation.
For example, the first trigger operation may be a trigger operation on an editing option corresponding to any initial AR data package associated with the target reality scene, such as an operation of selecting the editing option, or triggering directly by voice or gesture, which is not limited in the present disclosure.
As shown in FIG. 2a, which is a schematic diagram of the AR data packages respectively associated with "XXX Building - Floor 15" and other reality scenes, if it is detected that the editing option of any AR data package corresponding to "XXX Building - Floor 15" is triggered, for example the editing option of the "[Example] Sci-fi" initial AR data package, the initial AR data package of the "[Example] Sci-fi" category associated with the target reality scene "XXX Building - Floor 15" can then be requested from the server.
There may be at least one initial AR data package associated with the target reality scene, and different initial AR data packages may correspond to different AR scene categories. For example, each initial AR data package may contain label information characterizing its category, such as one or more of "sci-fi", "cartoon", and "history", where each category represents the style of the virtual objects to be displayed in the AR scene; each initial AR data package may or may not contain preset virtual objects.
For example, the target reality scene may be an indoor scene of a building, a street scene, or any target reality scene on which virtual objects can be superimposed.
S102: Obtain update data of at least one virtual object associated with the initial AR data package; the update data includes first pose data corresponding to the at least one virtual object.
The at least one virtual object associated with the initial AR data package may include virtual objects contained locally in the initial AR data package, virtual objects downloaded over the network, and virtual objects obtained from a pre-established material library, where the material library may be located locally or in the cloud, which is not limited in the present disclosure.
For example, a virtual object may be a static virtual model such as the virtual flowerpot or virtual tree mentioned above, or a dynamic virtual object such as a virtual video or virtual animation. The first pose data of a virtual object includes but is not limited to data that can represent the position and/or posture of the virtual object when presented, for example its position coordinates, deflection angle, and size information in the coordinate system corresponding to the target reality scene.
S103: Update the initial AR data package based on the update data of the at least one virtual object to generate an updated AR data package.
In some embodiments, based on the obtained update data, the content of the initial AR data package relating to virtual objects can be updated to obtain the updated AR data package. The update may be performed by directly adding the update data to the initial AR data package, or by replacing part of the original data in it; the updated AR data package contains the virtual objects associated with the above-mentioned initial AR data package as well as the update data of the at least one virtual object.
The resulting updated AR data package can be used so that, when the AR device photographs the target reality scene, virtual objects integrated into that scene are displayed according to the updated AR data package.
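As a rough illustration of the update step in S103 (adding new update data or replacing part of the original data), the merge might be sketched as follows. The dictionary layout is an assumption introduced purely for illustration, not the data format of the present disclosure:

```python
def update_ar_package(initial_pkg: dict, update_data: dict) -> dict:
    """Merge per-object update data into a copy of the initial AR data
    package: objects not yet in the package are added, and existing
    entries have their fields (e.g. first pose data) replaced."""
    updated = {**initial_pkg, "objects": dict(initial_pkg.get("objects", {}))}
    for obj_id, data in update_data.items():
        entry = dict(updated["objects"].get(obj_id, {}))
        entry.update(data)  # replace original pose/interaction fields
        updated["objects"][obj_id] = entry
    return updated
```

The sketch leaves the initial package untouched, matching the idea that the updated package is generated from, rather than destructively overwriting, the initial one.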
In the embodiments of the present disclosure, when the first trigger operation is detected, the initial AR data package associated with the target reality scene can be obtained, and the update data of at least one virtual object associated with it, for example including the first pose data corresponding to the at least one virtual object, can further be obtained; the initial AR data package is then updated based on the update data to obtain virtual objects that better match the target reality scene, improving the realism of the augmented reality scene.
The above S101 to S103 will be described in detail below with reference to specific embodiments.
In one implementation, the generation method provided by the embodiments of the present disclosure can be applied to an AR generation terminal, and the generation method further includes:
obtaining a three-dimensional scene model of the target reality scene indicated by the first trigger operation.
For example, the AR generation terminal may be a device such as a computer, laptop, or tablet, in which an application for generating and editing AR scene content can be installed or through which a WEB page for generating and editing AR scene content can be accessed. The user can remotely edit AR scene content in the application or WEB page; for example, the target reality scene can be simulated by a three-dimensional scene model representing it, and the relevant data of the virtual objects to be displayed can be configured directly in the model, so that AR scene content can be generated without going to the target reality scene.
In some embodiments, the display interface of the AR generation terminal may display editing options corresponding to multiple reality scenes; when it is detected that any editing option is triggered, the reality scene corresponding to the triggered option can be taken as the target reality scene, whose editing option has been triggered, and the three-dimensional scene model of the target reality scene can then be obtained so that virtual objects to be displayed can subsequently be added to it. Alternatively, the display interface of the AR generation terminal may display a map provided with multiple points of interest (POI), each corresponding to a reality scene; when the user clicks the POI of any reality scene, the AR generation terminal can likewise detect that the editing option for that target reality scene has been triggered, and then obtain the three-dimensional scene model representing the target reality scene and the initial augmented reality AR data package associated with it, so that virtual objects to be displayed can subsequently be added to the model.
For example, the three-dimensional scene model representing the target reality scene is presented at an equal scale with the target reality scene in the same coordinate system. For instance, if the target reality scene includes a street and the buildings on both sides of it, the three-dimensional scene model likewise includes models of that street and of the buildings on both sides of it, and the model and the scene may, for example, be presented at a 1:1 scale, or at an equal proportional scale, in the same coordinate system.
As shown in FIG. 2b, an editing page of AR scene content is displayed. When it is detected that the "update experience package list" option in the editing interface is triggered, multiple reality scenes can be obtained, for example including "XXX Building - Floor 15". When it is detected that "download scene" in the editing interface is triggered, this can be regarded as detecting the first trigger operation for the target reality scene, and the three-dimensional scene model representing that scene, together with the initial AR data packages associated with it, for example two initial AR experience packages with the label information "Christmas" and "New Year's Day", can then be obtained.
It can be seen that the category label of the initial AR experience package (initial AR data package) "Christmas" associated with the above target reality scene further contains the labels "sci-fi" and "nature", indicating that the created virtual objects may belong to the sci-fi category as well as the nature category; of course, during later editing, the category of the AR data package can be changed based on the categories of the uploaded virtual objects.
Further, for the above S102, obtaining the update data of the at least one virtual object associated with the initial AR data package includes:
S1021: displaying the loaded three-dimensional scene model;
S1022: obtaining update data of the at least one virtual object when placed in the three-dimensional scene model, the update data including the first pose data of the at least one virtual object placed in the three-dimensional scene model.
For example, the first pose data of a virtual object placed in the three-dimensional scene model contains the position coordinates, deflection angle, and size information of the virtual object in the coordinate system of the model, where the deflection angle can be represented by the angle between a designated forward direction of the virtual object and a coordinate axis of the model's coordinate system.
For example, the three-dimensional scene model and the target reality scene may be presented at 1:1 in the same coordinate system and at an equal scale in different coordinate systems; therefore the first pose data obtained here for a virtual object presented in the model can, when the virtual object is later presented on the AR device, represent the presentation special effect information of the virtual object in the target reality scene.
By detecting an editing operation input by the user in the editing page displayed by the AR generation terminal, the content edited by that operation can be obtained and used as the update data. For example, the editing page may display a schematic view of the three-dimensional scene model together with a pose-data editing panel for the virtual object; the user can edit the first pose data of the virtual object in the three-dimensional scene model in this panel, and after editing is completed, the AR generation terminal can obtain the first pose data of the virtual object in the three-dimensional scene model.
Further, based on the first pose data of the virtual object in the three-dimensional scene model, the first pose data of the virtual object when displayed in the target reality scene can be determined. Displaying the virtual object in the target reality scene according to this first pose data allows the virtual object to blend better with the scene, so that a realistic augmented reality scene effect can be displayed on the AR device.
In the embodiments of the present disclosure, the user can be provided with the three-dimensional scene model and the initial AR data package associated with the target reality scene, making it convenient for the user to intuitively edit and update, through the model and according to his own needs, the at least one virtual object associated with the initial AR data package, and thereby obtain the updated AR data package. When an augmented reality experiencer has the augmented reality experience in the target reality scene, the updated AR data package can be called directly for that experience. The way of remotely editing AR data packages provided in the present disclosure can simplify how AR data packages are generated and provides convenient AR material for the subsequent display of augmented reality scenes.
In addition, because a three-dimensional scene model for editing the first pose data of virtual objects can be provided, the user can intuitively edit the first pose data of a virtual object in the three-dimensional scene, so the first pose data of the virtual object can be set in a personalized manner based on user needs.
In another implementation, the generation method provided by the embodiments of the present disclosure can also be carried out while an AR scene is being experienced, for example applied to an AR device displaying the AR scene; the method further includes:
displaying the at least one virtual object associated with the initial AR data package based on the second pose data obtained when the AR device currently photographs the target reality scene, and on the initial AR data package.
For example, an application for displaying and editing AR scene content can be installed on the AR device; the user can open it on the AR device and edit AR scene content within the augmented reality scene. After the application is opened, the display interface of the AR device can show editing options corresponding to at least one reality scene; each reality scene is associated with multiple initial AR data packages, each with a corresponding editing option, and these initial AR data packages can be edited online.
In some embodiments, the display interface of the AR device may show the initial AR data packages respectively corresponding to multiple reality scenes; after it is detected that the editing option of any initial AR data package is triggered, the initial AR data package corresponding to the triggered editing option can be taken as the AR data package associated with the above target reality scene.
Before displaying the virtual objects, the second pose data of the AR device when currently photographing the target reality scene can first be obtained, and the virtual objects that should be displayed on the device can then be determined based on it; it can be seen that the embodiments of the present disclosure provide a solution for editing AR scene content in the course of experiencing an augmented reality scene.
For example, the second pose data of the AR device when currently photographing the target reality scene may include the position and/or display angle of the display component used to display virtual objects. To facilitate explanation of the second pose data corresponding to the AR device, the concept of a coordinate system is introduced here, taking the world coordinate system of the target reality scene as an example. The second pose data corresponding to the AR device may include but is not limited to at least one of the following: the coordinate position of the AR device's display component in that world coordinate system; the angles between the AR device's display component and the coordinate axes of the world coordinate system; or both the coordinate position of the display component in the world coordinate system and its angles with the coordinate axes.
The display component of the AR device refers specifically to the component of the device used to display virtual objects. For example, when the AR device is a mobile phone or tablet, the corresponding display component may be the display screen; when the AR device is a pair of AR glasses, the corresponding display component may be the lenses used to display virtual objects.
The second pose data corresponding to the AR device can be obtained in various ways. For example, when the AR device is equipped with a pose sensor, the second pose data can be determined by the pose sensor on the device; when the AR device is equipped with an image acquisition component such as a camera, the second pose data can be determined from the images of the target reality scene captured by the camera.
For example, the pose sensor may include an angular velocity sensor for determining the shooting angle of the AR device, such as a gyroscope or an inertial measurement unit (IMU); it may include a positioning component for determining the shooting position of the AR device, for example one based on the Global Positioning System (GPS), the Global Navigation Satellite System (GLONASS), or Wireless Fidelity (WiFi) positioning technology; or it may include both the angular velocity sensor for the shooting angle and the positioning component for the shooting position.
Specifically, when determining the second pose data corresponding to the AR device from an image of the target reality scene captured by it, the second pose data can be determined using that image and a pre-stored first neural network used for positioning.
For example, the first neural network can be trained on multiple sample pictures obtained by photographing the target reality scene in advance, together with the second pose data corresponding to the capture of each sample picture.
When determining the second pose data corresponding to the AR device from the images it captures of the target reality scene, as shown in FIG. 3, the AR device displays information prompting the user that the editing state has been entered and positioning has started, for example information prompting the user to capture an image of the target reality scene for positioning.
When the second pose data of the AR device currently photographing the target reality scene is obtained, the at least one virtual object can be displayed based on that second pose data and on the initial first pose data, in the target reality scene, of the at least one virtual object associated with the initial AR data package. While the virtual object is displayed, a pose-data display interface for the at least one virtual object can also be shown. As shown in FIG. 4, which is a schematic display of the virtual object, a Tang Sancai (tri-colored glazed pottery) horse, together with the pose-data display interface for the virtual objects in the display area, the interface shows the initial first pose data of the Tang Sancai horse in the target reality scene, mainly including its coordinate information in that scene.
For example, the pose-data display interface shown on the left of FIG. 4 contains "my map coordinates", which can represent the coordinate information of the AR device's display component in the coordinate system of the target reality scene; the model list contains the coordinate information of multiple virtual objects in the target reality scene, for example that of the virtual objects "Tang Sancai horse" and "stone lion"; further, other virtual objects, together with their coordinate information in the target reality scene, can be added.
Further, for the above S102, obtaining the update data of the at least one virtual object associated with the initial AR data package includes:
obtaining update data of the displayed at least one virtual object, the update data including the first pose data of the at least one virtual object in the target reality scene.
By detecting an editing operation input by the user in the pose-data editing page displayed by the AR device, the content edited by that operation can be obtained and used as the update data. For example, in the pose-data editing page, a pose-data editing panel for the virtual object can be displayed on the AR device, as shown in FIG. 5; the user can edit the first pose data of the virtual object in the target reality scene in this panel. For example, the editing of the virtual object's pose data is completed in the world coordinate system of the target reality scene, so after editing, the AR device can obtain the first pose data of the virtual object in the target reality scene.
For example, the left side of FIG. 5 shows the coordinate information of the currently displayed virtual object "Tang Sancai horse" in the target reality scene, as well as the interaction data of that virtual object in the scene (interaction data is explained later). The right side of FIG. 5 shows the interface for editing the first pose data of the currently displayed virtual object in the target reality scene; in this interface, the coordinate information of the virtual object in the coordinate system of the target reality scene can be edited, the size ratio of the virtual object can be edited, and the angles between the virtual object and each of the three coordinate axes can be edited.
In the embodiments of the present disclosure, after the second pose data corresponding to the AR device is obtained, the at least one virtual object associated with the initial AR data package can be displayed based on that second pose data, making it convenient for the user to intuitively edit and adjust, on the AR device and as needed, the first pose data of the virtual object in the target reality scene. This simplifies the generation process of AR scene content and, because this process allows the AR data package to be intuitively adjusted during the augmented reality experience, makes the adjusted AR data package more accurate.
In one implementation, the above-mentioned obtaining of update data for the displayed at least one virtual object includes:
displaying a pose-data editing interface, and obtaining the first pose data of the at least one virtual object received through the pose-data editing interface;
where the first pose data includes at least one of the following: the position coordinates, deflection angle, and size information in the coordinate system of the target reality scene.
For example, editing operations on the position coordinates, deflection angle, and size information of the displayed at least one virtual object in the target reality scene can be obtained through the pose-data editing page displayed by the AR device, thereby obtaining the first pose data of the at least one virtual object in the target reality scene.
For example, referring to FIG. 5, which shows the pose-data editing interface for the virtual object "Tang Sancai horse", the interface may include editing of the position coordinates of the "Tang Sancai horse" in the target reality scene, editing of its size information in the scene, and editing of its deflection angles along the coordinate axes of the world coordinate system of the target reality scene.
In the embodiments of the present disclosure, in the augmented reality scene, a pose-data editing interface for editing the first pose data of a virtual object can be provided, making it convenient for the user to intuitively adjust the first pose data of the virtual object in the target reality scene, so that the first pose data can be set in a personalized manner based on user needs.
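The three components of the first pose data just described (position coordinates, deflection angle, and size information) and an edit applied through the editing interface can be sketched as follows. This is an illustrative Python sketch; the field names are assumptions, not terms defined by the present disclosure:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FirstPoseData:
    """First pose data of a virtual object in the coordinate system of the
    target reality scene (illustrative field names)."""
    position: tuple        # (x, y, z) position coordinates
    deflection_deg: tuple  # angles with the three coordinate axes
    scale: float           # size information, here a uniform ratio

def edit_pose(pose: FirstPoseData, **changes) -> FirstPoseData:
    """Apply edits from the pose-data editing interface, leaving any
    field not mentioned in `changes` unchanged."""
    return FirstPoseData(
        position=changes.get("position", pose.position),
        deflection_deg=changes.get("deflection_deg", pose.deflection_deg),
        scale=changes.get("scale", pose.scale),
    )
```

Returning a new immutable value rather than mutating in place mirrors the idea that an edit produces updated pose data while the original data in the initial package remains available.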
In another implementation, besides the first pose data of the at least one virtual object in the target reality scene, the above-mentioned update data may also include the interaction data of the at least one virtual object in the target reality scene, or the interaction data of the at least one virtual object in the three-dimensional scene model representing the target reality scene. The update data provided by the embodiments of the present disclosure thus further includes interaction data corresponding to the at least one virtual object; for the above S102, obtaining the update data of the at least one virtual object associated with the initial AR data package includes:
displaying an interaction-data editing interface, and obtaining the interaction data of each virtual object respectively received through the interaction-data editing interface;
where the interaction data includes at least one of the following: at least one state trigger condition, a presentation state corresponding to each state trigger condition, and the number of recurring display times of the virtual object after it is triggered and displayed.
For example, editing operations on the interaction data of the at least one virtual object can be obtained in the interaction-data editing interface displayed by the AR device, or in the one displayed by the AR generation terminal; that is, the interaction data can be edited on site in the target reality scene, or edited remotely in the three-dimensional scene model representing it, thereby obtaining at least one of: at least one state trigger condition for the at least one virtual object, the presentation state corresponding to each state trigger condition, and the number of recurring display times after the virtual object is triggered and displayed.
For example, based on the obtained interaction data for any virtual object, the virtual object can subsequently be triggered, based on its state trigger condition, to be presented in the presentation state corresponding to that condition, and/or to be displayed for the number of recurring display times after being triggered by that condition.
As shown in FIG. 6, after the create-trigger-condition button shown in FIG. 6(a) is triggered, the interaction-data editing interface shown in FIG. 6(b) is displayed, specifically the interface for editing the interaction data of a virtual animation; this interface may contain editing areas for the state trigger condition of the at least one virtual object, the presentation state corresponding to that condition, and the number of recurring display times after the virtual object is triggered and displayed.
For example, an editing operation on the state trigger condition of the virtual object can be obtained through the editing area corresponding to the state trigger condition, thereby obtaining the state trigger condition corresponding to the virtual object; the AR device can subsequently trigger the display of the virtual object in the target reality scene after obtaining that condition.
For example, whether the virtual object is displayed after being triggered can be edited through the presentation-state editing area. As shown in FIG. 6(b), when the button corresponding to "model display" is selected, the current virtual animation is displayed after its trigger condition 1 is triggered; if the button corresponding to "model display" is not selected, the current virtual animation is not displayed after trigger condition 1 is triggered.
For example, an editing operation on the number of recurring display times of the virtual object after display can be obtained through the editing area corresponding to the loop count; for instance, if the obtained display count is n, the current virtual animation is displayed n times in a loop after trigger condition 1 is triggered.
In particular, when the interaction data includes multiple state trigger conditions, the interaction data further includes the priority of each state trigger condition.
For example, when the interaction data is edited in the interaction-data editing interface displayed by the AR device, it can be shown in a page such as that of FIG. 5; for instance, on the left of FIG. 5, the coordinate information of the currently displayed virtual object "Tang Sancai horse" in the coordinate system of the target reality scene can be shown, together with the interaction data corresponding to that virtual object (not shown in FIG. 5), including the trigger condition, display state, whether to loop the display, priority, and so on.
For example, the interaction data containing multiple state trigger conditions specifically refers to the case where the initial AR data package is associated with multiple virtual objects, each corresponding to one state trigger condition; a priority can be set for the state trigger condition corresponding to each virtual object, and when the AR device later obtains multiple state trigger conditions at the same time, the virtual object corresponding to the highest-priority condition is triggered and displayed in the presentation state corresponding to its state trigger condition, and/or for its number of recurring display times after being triggered.
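The priority rule just described (when several conditions fire at once, only the highest-priority one wins) can be sketched briefly. The sketch is illustrative; it assumes, as a convention not stated in the disclosure, that a smaller number means a higher priority:

```python
def select_triggered_condition(fired, priorities):
    """fired: names of state trigger conditions obtained simultaneously;
    priorities: condition name -> priority value (smaller = higher).
    Returns the single condition whose virtual object should be presented,
    or None if nothing fired."""
    return min(fired, key=lambda c: priorities[c]) if fired else None
```

The virtual object mapped to the returned condition would then be displayed in that condition's presentation state and/or for its configured loop count.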
The state trigger conditions mentioned here may include but are not limited to at least one of the following:
a tap-model trigger condition, a swipe-model trigger condition, a distance-recognition trigger condition, a designated-area trigger condition, a gesture-recognition trigger condition, a facing-current-model trigger condition, and a facing-designated-model trigger condition.
The meaning of these state trigger conditions will be explained below with respect to a virtual object A:
(1) The tap-model trigger condition is a state trigger condition for displaying the virtual object A on the AR device that is triggered by tapping the three-dimensional model of A displayed on the device; for example, the AR device can display the three-dimensional model of the virtual object to be displayed and, after detecting a tap operation on that model, display the corresponding virtual object.
(2) The swipe-model trigger condition is a state trigger condition for the virtual object A triggered by swiping its three-dimensional model on the AR device in a set manner; for example, a right swipe on the three-dimensional model of A may trigger the display of A, and a left swipe may trigger A to disappear.
(3) The distance trigger condition is a state trigger condition for the virtual object A triggered when the distance between the position coordinates of the AR device and a set position point meets a set distance.
(4) The designated-area trigger condition is a state trigger condition for the virtual object A triggered after the AR device enters a designated area.
(5) The gesture-recognition trigger condition is a state trigger condition for the virtual object A triggered by a set gesture action.
(6) The facing-current-model trigger condition is a state trigger condition for the virtual object A triggered when the shooting angle of the AR device faces the position where A is located.
(7) The facing-designated-model trigger condition is a state trigger condition for the virtual object A triggered when the AR device faces the position of a specific virtual object.
When the AR data package contains multiple virtual objects and each corresponds to one state trigger condition, a trigger logic chain can be formed from these conditions. For example, if the AR data package contains three virtual objects and the state trigger conditions corresponding to the first, second, and third virtual objects are recorded as state trigger conditions 1, 2, and 3 respectively, then after conditions 1, 2, and 3 form a trigger logic chain, for example in the order condition 1, condition 2, condition 3, virtual objects 1, 2, and 3 can be displayed in sequence when the user triggers conditions 1, 2, and 3 in sequence.
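The trigger logic chain described above can be sketched as a small state machine. This is an illustrative Python sketch under assumed names; the disclosure does not specify a data structure:

```python
class TriggerChain:
    """Trigger logic chain: virtual objects are revealed only when their
    state trigger conditions fire in the chain's configured order
    (condition 1, then condition 2, then condition 3)."""

    def __init__(self, chain):
        self.chain = list(chain)  # [(condition_name, virtual_object), ...]
        self.next_index = 0
        self.displayed = []

    def fire(self, condition):
        """Advance the chain if `condition` is the next expected one;
        out-of-order conditions are ignored. Returns the objects shown so far."""
        if (self.next_index < len(self.chain)
                and condition == self.chain[self.next_index][0]):
            self.displayed.append(self.chain[self.next_index][1])
            self.next_index += 1
        return list(self.displayed)
```

With the chain (condition 1 → 2 → 3), firing condition 2 first has no effect, while firing 1, 2, 3 in order reveals virtual objects 1, 2, 3 in sequence, as in the example above.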
In the embodiments of the present disclosure, an interaction-data editing interface for editing the interaction data of virtual objects can be provided, to support editing how a virtual object is triggered and how it is displayed.
In one implementation, the generation method provided by the embodiments of the present disclosure further includes:
obtaining third pose data corresponding to at least one virtual object model associated with the initial AR data package, the at least one virtual object model representing a target object in the target reality scene;
and updating the initial AR data package based on the update data of the at least one virtual object to generate the updated AR data package includes:
updating the initial AR data package based on the update data of the at least one virtual object and on the third pose data corresponding to the virtual object model, to generate the updated AR data package.
For example, the third pose data corresponding to the virtual object model may include but is not limited to data that can represent the position and/or posture of the virtual object model when presented in the target reality scene, or data that can represent its position and/or posture in the three-dimensional scene model, for example its position coordinates, deflection angle, and size information in the coordinate system of the target reality scene or of the three-dimensional scene model.
Specifically, the third pose data of the virtual object model in the target reality scene or in the three-dimensional scene model can represent the third pose data, in the target reality scene, of the target real object corresponding to the virtual object model. In addition, the display form of the virtual object model when displayed on the AR device can be edited, for example edited into an occlusion form that is rendered transparently when presented on the AR device; such a virtual object model can be used to occlude virtual objects that need to be occluded. For instance, when a virtual object is displayed, the partial area of the virtual object that needs to be occluded is not rendered, and the occluded partial area can be processed transparently through the virtual object model, thereby achieving the occlusion effect.
In the embodiments of the present disclosure, a virtual object model for presenting the occlusion effect can be edited; by editing the third pose data corresponding to the virtual object model, the real third pose data of the model in the target reality scene can be restored, which helps provide a more realistic display effect when virtual objects are later displayed on the AR device.
In one implementation, the at least one virtual object contains at least one first virtual object; for the above S102, obtaining the update data of the at least one virtual object associated with the initial AR data package includes:
obtaining update data of the at least one first virtual object contained in the initial AR data package.
When the initial AR data package contains virtual objects, obtaining the update data of the at least one virtual object associated with the package here may be obtaining update data of the at least one first virtual object contained in the initial AR data package.
In another implementation, the at least one virtual object includes at least one second virtual object; for the above S102, obtaining the update data of the at least one virtual object associated with the initial AR data package includes:
obtaining the at least one second virtual object associated with the initial AR data package from a pre-established material library, and obtaining update data of the at least one second virtual object.
Whether or not the initial AR data package contains virtual objects, obtaining the update data of the at least one virtual object associated with it may involve obtaining from the pre-established material library the at least one second virtual object associated with the initial AR data package, together with the update data of that at least one second virtual object.
In another implementation, when obtaining the update data of the at least one virtual object associated with the initial AR data package, the update data of the at least one first virtual object contained in the package and the update data of at least one second virtual object associated with the package obtained from the pre-established material library can be obtained at the same time.
For example, the pre-established material library may contain various virtual objects such as static virtual models, animations, and videos, from which the user can select the virtual objects to upload into the initial AR data package.
For example, the second virtual object may be the same as or different from the first virtual object, and the initial AR data package can be updated with the update data of the first virtual object and/or the second virtual object.
In the embodiments of the present disclosure, the virtual objects associated with the initial AR data package and their update data can be obtained in a variety of ways, so the update data of virtual objects can be obtained flexibly.
Correspondingly, displaying the at least one virtual object associated with the initial AR data package may include:
displaying the at least one first virtual object in the initial AR data package, and/or displaying the at least one second virtual object associated with the initial AR data package obtained from the pre-established material library.
As mentioned above, the initial AR data package may or may not contain virtual objects. When it does, the displayed at least one virtual object associated with it may be the at least one first virtual object contained in the initial AR data package; alternatively, the at least one second virtual object associated with the package may be obtained from the pre-established material library and displayed; or the at least one first virtual object contained in the package and the at least one second virtual object obtained from the material library may be displayed at the same time.
进一步地,在生成更新后的AR数据包之后,本公开实施例提供的生成方法还包括:
将更新后的AR数据包发送至服务器;或者,将更新后的AR数据包以及指示更新后的AR数据包是否启用的状态信息发送至服务器。
示例性地,如图7所示,在针对初始AR数据包编辑完成得到更新后的AR数据包后,可以点击保存,比如AR设备检测到用户触发的保存触发操作,并在保存成功后,可以将该更新后的AR数据包发送至服务器。
示例性地,指示更新后的AR数据包是否启用的状态信息可以通过在图2a展示的界面中进行设置,针对每个AR数据包下面可以设置有“启用”按钮(图2a中未示出),若该AR数据包下面的“启用”按钮被触发,则表示该AR数据包对应的更新后的AR数据包上传至服务器后,可以被AR设备下载体验。
本公开实施例中,可以将生成的更新后的AR数据包发布至服务器,可以供其它用户进行下载使用,示例性地,可以供其它AR设备下载进行编辑,同时可以供AR设备下载体验AR数据包。
在另一种实施方式中,在生成更新后的AR数据包之后,本公开实施例提供的生成方法还包括:
响应于对更新后的AR数据包的上传触发操作,获取更新后的AR数据包的标签信息,并将更新后的AR数据包和标签信息发送至服务器。
示例性地,针对初始AR数据包进行编辑,得到更新后的AR数据包后,可以将更新后的数据包发送至服务器,如图8所示,得到的与目标现实场景关联的AR数据包可以包括多个,用户触发图8中(a)所示页面的上传体验包操作后,为了确定用户要上传的目标更新后的AR数据包,可以展示出如图8中(b)所示的页面,用户可以在该页面中填写要上传的目标更新后的AR数据包对应的标签信息。
示例性地,标签信息可以包含目标现实场景的名称、楼层名称、体验包名称、主题、备注等信息,标签信息的填写一方面便于确定待上传的更新后的AR数据包,另一方面,便于服务器基于该标签信息对上传的更新后的AR数据包进行保存,从而方便AR设备端的用户在AR设备下载AR数据包进行体验。
本公开实施例中,可以将生成的更新后的AR数据包发布至服务器,可以供其它用户进行下载使用,示例性地,可以供其它AR生成端下载进行编辑,同时可以供AR设备下载体验AR数据包。
示例性地,将更新后的AR数据包和标签信息发送至服务器,包括:
将更新后的AR数据包、标签信息以及指示更新后的AR数据包是否启用的状态信息发送至服务器;
其中,处于启用状态的AR数据包能够被使用。
示例性地,指示更新后的AR数据包是否启用的状态信息可以通过在图8的(a)图中进行设置,每个AR数据包下面设置有“启用”按钮,若该AR数据包下面的“启用”按钮被触发,则表示该AR数据包对应的更新后的AR数据包上传至服务器后,可以被AR设备下载体验。
下面介绍对AR场景内容的展示过程,该展示过程可以应用于AR设备,该AR设备可以与上述用于生成AR数据包的AR设备相同,也可以不相同,在此不进行限定,如图9所示,包括以下S201~S203:
S201,响应于第二触发操作,获取与第二触发操作指示的目标现实场景关联的AR数据包;AR数据包中包含至少一个虚拟对象对应的第一位姿数据。
示例性地,AR设备可以包括但不限于AR眼镜、平板电脑、智能手机、智能穿戴式设备等具有显示功能和数据处理能力的设备,这些AR设备中可以安装用于展示AR场景内容的应用程序,用户可以在该应用程序中体验AR场景内容。
在AR设备打开用于展示AR场景内容的应用程序后,该AR设备可以展示出至少一个现实场景,以及每个现实场景关联的AR数据包,示例性地,第二触发操作可以是对目标现实场景中关联的AR数据包的触发操作,在一种实施例中,如图2a所示,用户可以在AR设备展示的现实场景关联的AR数据包中,点击目标现实场景“XXX大厦-15层”关联的AR数据包,比如点击“【示例】科幻”该AR数据包,则AR设备可以检测到存在针对该AR数据包的第二触发操作,进而可以向服务器请求获取目标现实场景“XXX大厦-15层”关联的AR数据包。
其中,AR数据包包含的至少一个虚拟对象对应的第一位姿数据可以为在目标现实场景中的第一位姿数据,或者,可以为在目标现实场景的三维场景模型中的第一位姿数据,具体详见上文描述,这里不再赘述。
S202,基于AR设备当前拍摄目标现实场景的第二位姿数据,以及AR数据包中至少一个虚拟对象对应的第一位姿数据,确定至少一个虚拟对象的呈现特效信息。
其中,AR设备当前拍摄目标现实场景时的第二位姿数据,以及获取该AR设备对应的第二位姿数据的方式详见上文描述,在此不再赘述。
在获取到AR设备当前拍摄目标现实场景的第二位姿数据之前,如图10所示,可以在AR设备展示用于提示用户使用AR设备进行拍摄的信息。
在虚拟对象对应的第一位姿数据为在目标现实场景中的第一位姿数据的情况下,因为AR设备对应的第二位姿数据和虚拟对象对应的第一位姿数据均为相同坐标系下的位姿数据,这样基于AR设备对应的第二位姿数据、至少一个虚拟对象对应的第一位姿数据,可以确定出该虚拟对象在AR设备中的呈现特效信息。
在虚拟对象对应的第一位姿数据为在目标现实场景的三维场景模型中的第一位姿数据的情况下,因为三维场景模型与现实场景在相同坐标系中按照1:1呈现,在不同坐标系中等比例呈现,故通过提前设置好的虚拟对象在三维场景模型中呈现时的第一位姿数据,以及AR设备对应的第二位姿数据,能够确定该虚拟对象在AR设备中的呈现特效信息。
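基于第一位姿数据与第二位姿数据确定呈现特效信息,核心是坐标系变换。下面给出一个假设性的Python示意:将虚拟对象在目标现实场景(或与其1:1对应的三维场景模型)坐标系中的位置变换到AR设备相机坐标系(仅考虑绕竖直轴的偏转角,实际实现通常使用完整的六自由度位姿矩阵):

```python
import math


def world_to_camera(p_world, cam_pos, cam_yaw):
    """把场景坐标系中的点变换到相机坐标系的最小示意。

    p_world: 虚拟对象的第一位姿数据中的位置坐标 (x, y, z)
    cam_pos: AR设备的第二位姿数据中的位置坐标 (x, y, z)
    cam_yaw: AR设备绕竖直轴的偏转角(弧度)
    """
    # 先平移到以相机为原点
    dx = p_world[0] - cam_pos[0]
    dy = p_world[1] - cam_pos[1]
    dz = p_world[2] - cam_pos[2]
    # 再绕竖直轴旋转 -cam_yaw,抵消相机自身的偏转
    c, s = math.cos(-cam_yaw), math.sin(-cam_yaw)
    return (c * dx - s * dy, s * dx + c * dy, dz)
```

得到相机坐标系下的位置后,即可据此确定虚拟对象在AR设备画面中的呈现位置等特效信息。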
S203,基于呈现特效信息,通过AR设备展示至少一个虚拟对象。
在得到虚拟对象在目标现实场景中的呈现特效信息后,可以通过AR设备按照该呈现特效信息展示该至少一个虚拟对象,如图11所示,为在目标现实场景中展示虚拟对象“唐三彩马”的示意图。
本公开实施例中,可以响应第二触发操作,获取到与该第二触发操作关联的AR数据包,进一步可以基于AR设备对应的第二位姿数据,以及提前设置好AR数据包中的虚拟对象对应的第一位姿数据,确定虚拟对象在该目标现实场景中的呈现特效信息,最终在AR设备中展示出逼真的增强现实场景效果。
进一步地,AR数据包还包括至少一个虚拟物体模型对应的第三位姿数据;虚拟物体模型表示目标现实场景中的目标物体;
在基于AR设备当前拍摄目标现实场景时的第二位姿数据,以及AR数据包中至少一个虚拟对象对应的第一位姿数据,确定至少一个虚拟对象的呈现特效信息时,包括:
基于AR设备当前拍摄目标现实场景时的第二位姿数据、AR数据包中至少一个虚拟对象对应的第一位姿数据、以及虚拟物体模型对应的第三位姿数据,确定至少一个虚拟对象的呈现特效信息。
示例性地,可以根据该虚拟物体模型在目标现实场景中的第三位姿数据、AR设备对应的第二位姿数据、以及虚拟对象对应的第一位姿数据,确定该虚拟对象是否被虚拟物体模型对应的实体物体遮挡,在确定该虚拟对象的部分区域或者全部区域被虚拟物体模型对应的实体物体遮挡时,将不会对被遮挡的该部分区域进行渲染,该虚拟物体模型可以被处理为遮挡形态,并对该遮挡形态的虚拟物体模型进行透明化处理,这样用户在AR设备中不会看到透明处理后的虚拟物体模型,这样能够展示出虚拟对象被目标现实场景中的实体物体遮挡的呈现效果。
本公开实施例中,通过虚拟物体模型的第三位姿数据,可以实现还原虚拟物体模型在目标现实场景中的真实第三位姿数据,当确定出虚拟对象被虚拟物体模型对应的实体物体遮挡时,能够通过虚拟物体模型实现对虚拟对象的遮挡效果,从而在AR设备中展示出更为逼真的增强现实场景的效果。
在一种实施方式中,AR数据包中还包括至少一个虚拟对象对应的交互数据,交互数据包括至少一种状态触发条件、与每种状态触发条件对应的呈现状态、以及虚拟对象在被触发展示后的循环展示次数中的至少一种。
示例性地,交互数据中包含的状态触发条件、与每种状态触发条件对应的呈现状态、以及虚拟对象在被触发展示后的循环展示次数的解释详见上文,在此不再赘述。
上文提到状态触发条件可以包含多种,比如点击模型触发条件、滑动模型触发条件、距离识别触发条件、指定区域触发条件、手势识别触发条件、朝向当前模型触发条件以及朝向指定模型触发条件中的至少一种或多种,每种触发条件的解释详见上文,这里不再赘述。可以看到,状态触发条件可以分为两种类型:一种是与AR设备对应的第二位姿数据有关的状态触发条件,比如距离识别触发条件、指定区域触发条件、朝向当前模型触发条件以及朝向指定模型触发条件中的至少一种或多种;另一种是与AR设备对应的第二位姿数据无关的状态触发条件,比如点击模型触发条件、滑动模型触发条件和手势识别触发条件中的至少一种或多种。下面将针对这两种类型,分别介绍与虚拟对象的交互。
在一种可能的实施方式中,本公开实施例提供的展示方法还包括:
(1)检测到作用于至少一个虚拟对象的交互操作;
(2)在作用于至少一个虚拟对象的交互操作符合第一类状态触发条件的情况下,基于至少一个虚拟对象在该第一类状态触发条件下对应的呈现状态,和/或,至少一个虚拟对象在该第一类状态触发条件被触发展示后的循环展示次数,对至少一个虚拟对象的呈现特效信息进行更新,得到更新后的呈现特效信息;
进一步地,基于呈现特效信息,通过AR设备展示至少一个虚拟对象,包括:
基于更新后的呈现特效信息,通过AR设备展示至少一个虚拟对象。
示例性地,交互操作可以为用于触发虚拟对象的呈现特效信息进行更新的操作,其中,第一类状态触发条件可以为上述提到的与AR设备对应的第二位姿数据无关的状态触发条件,比如点击模型触发条件、滑动模型触发条件和手势识别触发条件中的至少一种或多种。
示例性地,AR设备检测到存在该交互操作,比如,若检测到用户作用于AR设备中针对虚拟对象A触发的滑动操作,则可以获取该虚拟对象A在滑动操作下对应的呈现状态,和/或,该虚拟对象A在该滑动操作下被触发展示后的循环展示次数,然后基于此对虚拟对象A的呈现特效信息进行更新。
示例性地,对虚拟对象的呈现特效信息进行更新的方式可以分为三种情况,第一种情况可以单独基于至少一个虚拟对象在该第一类状态触发条件下对应的呈现状态对该至少一个虚拟对象的呈现特效信息进行更新,第二种情况可以单独基于该至少一个虚拟对象在该第一类状态触发条件被触发展示后的循环展示次数,对至少一个虚拟对象的呈现特效信息进行更新,第三种情况可以同时结合至少一个虚拟对象在该第一类状态触发条件下对应的呈现状态,以及至少一个虚拟对象在该第一类状态触发条件被触发展示后的循环展示次数,共同对至少一个虚拟对象的呈现特效信息进行更新。
下面分别结合具体实施例来对以上三种情况进行介绍。
针对第一种情况,示例性地,虚拟对象A的初始呈现特效信息为在桌子上显示的虚拟花瓶,交互操作为作用于虚拟对象上的滑动操作,且该虚拟对象A在滑动操作下对应的呈现状态为不显示,则可以在检测到作用于该虚拟花瓶的滑动操作后,将虚拟花瓶的呈现特效信息从原来的显示变为消失。
针对第二种情况,示例性地,虚拟对象A为呈现在墙壁上从A位置走向B位置的虚拟花猫,交互操作为作用于虚拟花猫对应的三维模型上的点击操作,且该虚拟对象在点击操作下被触发展示后的循环展示次数为5次,可以在检测到用户作用于针对该虚拟花猫的点击操作时,触发该虚拟花猫按照从A位置走向B位置的展示方式循环5次。
针对第三种情况,示例性地,虚拟对象A是一盏花灯中闪烁一次的虚拟灯光,交互操作为手势识别操作,该虚拟灯光在该手势识别操作下对应的呈现状态为显示,且该虚拟对象在该手势识别操作下被触发展示后的循环展示次数为5次,则可以在检测到该手势识别操作时,将虚拟灯光的呈现特效信息更新为闪烁5次。
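上述三种更新情况可以用如下Python示意代码概括(假设性的简化实现,用字典表示呈现特效信息,字段名均为示例):

```python
def update_effect(effect, state=None, loop_count=None):
    """按呈现状态和/或循环展示次数更新呈现特效信息的最小示意。

    effect:     原呈现特效信息(字典,示例字段)
    state:      该状态触发条件下对应的呈现状态(情况一/三)
    loop_count: 被触发展示后的循环展示次数(情况二/三)
    """
    updated = dict(effect)
    if state is not None:
        updated["state"] = state        # 情况一:单独基于呈现状态更新
    if loop_count is not None:
        updated["loops"] = loop_count   # 情况二:单独基于循环展示次数更新
    # 两者都给出时,即情况三:二者共同更新呈现特效信息
    return updated
```

例如检测到手势识别操作后,可将虚拟灯光更新为 `update_effect(effect, state="显示", loop_count=5)`。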
本公开实施例中,在检测到存在针对虚拟对象的交互操作时,可以在确定该交互操作符合设定状态触发条件的情况下,按照虚拟对象在该设定状态触发条件下的展示方式,对虚拟对象进行展示,示例性地,增强现实体验者可以对AR设备展示设定的手势,从而触发虚拟对象按照与该手势对应的呈现特效进行展示,该方式提高了增强现实体验者在目标现实场景中与虚拟对象的交互性,提升了用户体验。
在另一种可能的实施方式中,本公开实施例提供的展示方法还包括:
在第二位姿数据符合第二类状态触发条件的情况下,基于至少一个虚拟对象在该第二类状态触发条件下对应的呈现状态,和/或,至少一个虚拟对象在该第二类状态触发条件被触发展示后的循环展示次数,对至少一个虚拟对象的呈现特效信息进行更新,得到更新后的呈现特效信息;
基于呈现特效信息,通过AR设备展示至少一个虚拟对象,包括:
基于更新后的呈现特效信息,通过AR设备展示至少一个虚拟对象。
示例性地,第二类状态触发条件可以为上述提到的与AR设备对应的第二位姿数据有关的状态触发条件,比如距离识别触发条件、指定区域触发条件、朝向当前模型触发条件以及朝向指定模型触发条件中的至少一种或多种。
示例性地,AR设备对应的第二位姿数据符合第二类状态触发条件包括多种情况,具体可以通过AR设备的显示部件所在的位置和/或显示角度来确定AR设备对应的第二位姿数据是否符合第二类状态触发条件,可以单独通过AR设备的显示部件所在的位置来确定AR设备对应的第二位姿数据是否符合第二类状态触发条件,也可以单独通过AR设备的显示角度来确定AR设备对应的第二位姿数据是否符合第二类状态触发条件,或者通过结合AR设备的显示部件所在的位置和显示角度来确定AR设备对应的第二位姿数据是否符合第二类状态触发条件。
示例性地,当AR设备的显示部件所在的位置坐标与设定位置点之间的距离满足设定距离时,可以确定该AR设备对应的第二位姿数据符合第二类状态触发条件;或者,还可以在确定AR设备的显示角度朝向虚拟对象A所在的位置时,同样确定该AR设备对应的第二位姿数据符合第二类状态触发条件;或者,还可以在AR设备的显示部件所在的位置坐标与设定位置点之间的距离满足设定距离,且该AR设备的显示角度朝向虚拟对象A所展示的位置时,确定该AR设备对应的第二位姿数据符合第二类状态触发条件,这种情况较多,在此不再一一赘述。
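上述基于第二位姿数据的判断,可以用如下Python示意代码勾勒(假设性的简化实现:同时检查显示部件位置与设定位置点的距离、以及显示角度是否朝向目标位置,函数名与夹角阈值均为示例取值):

```python
import math


def pose_triggers(display_pos, display_dir, target_pos, max_dist, require_facing=True):
    """判断第二位姿数据是否符合第二类状态触发条件的最小示意。

    display_pos: AR设备显示部件的位置坐标
    display_dir: 显示角度对应的单位朝向向量
    target_pos:  设定位置点(如虚拟对象A所在位置)
    max_dist:    设定距离
    """
    to_target = [t - p for t, p in zip(target_pos, display_pos)]
    dist = math.sqrt(sum(v * v for v in to_target))
    if dist > max_dist:          # 距离识别:超出设定距离则不触发
        return False
    if not require_facing or dist == 0:
        return True
    # 朝向判断:显示角度与指向目标方向的夹角小于60°(示例阈值)才触发
    cos_angle = sum(a * b for a, b in zip(display_dir, to_target)) / dist
    return cos_angle > 0.5
```

实际实现中距离阈值与夹角阈值可按交互数据中编辑的状态触发条件配置。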
进一步地,在AR设备对应的第二位姿数据符合第二类状态触发条件的情况下,可以按照至少一个虚拟对象在该第二类状态触发条件下对应的呈现状态,和/或,至少一个虚拟对象在该第二类状态触发条件被触发展示后的循环展示次数,对至少一个虚拟对象的呈现特效信息进行更新,具体更新方式与上文基于交互操作进行更新的方式相似,在此不再赘述。
本公开实施例中,在检测到AR设备对应的第二位姿数据符合设定状态触发条件的情况下,按照虚拟对象在该设定状态触发条件下的展示方式,对虚拟对象进行展示,示例性地,当AR设备靠近设定位置,且AR设备的显示角度朝向虚拟对象A所在的位置时,触发虚拟对象A按照该AR设备的第二位姿数据对应的呈现特效进行展示,该过程可以使得增强现实场景效果更加逼真,从而提升用户体验。
下面以基于AR生成端远程生成AR场景内容的过程进行说明,如图12所示,包括:
S301,响应于第一触发操作,获取第一触发操作指示的目标现实场景的三维场景模型,以及与目标现实场景关联的初始AR数据包;
S302,获取与初始AR数据包关联的至少一个虚拟对象的更新数据;更新数据包括至少一个虚拟对象在三维场景模型中的第一位姿数据;
S303,基于至少一个虚拟对象的更新数据,对初始AR数据包进行更新,生成更新后的AR数据包。
本公开实施例中,可以响应第一触发操作,向用户提供与该目标现实场景关联的三维场景模型和初始AR数据包,便于用户按照自身需求直观地通过三维场景模型对初始AR数据包关联的至少一个虚拟对象进行编辑更新,通过该方式得到更新后的AR数据包,当增强现实体验者在目标现实场景中进行增强现实体验时,可以直接调用更新后的AR数据包进行增强现实体验,通过本公开给出的远程编辑AR数据包的方式,能够简化AR数据包的生成方式,为后续增强现实场景的展示提供了便利的AR素材。
参见图13所示,本公开实施例还提供了一种AR场景内容的展示系统400,包括AR生成端401、服务器402以及AR设备403,AR生成端401与服务器402通信连接,AR设备403与服务器402通信连接;
AR生成端401,用于响应于第一触发操作,获取第一触发操作指示的目标现实场景的三维场景模型,以及与目标现实场景关联的初始增强现实AR数据包;获取与初始AR数据包关联的至少一个虚拟对象的更新数据;更新数据包括至少一个虚拟对象在三维场景模型中的第一位姿数据;以及用于基于至少一个虚拟对象的更新数据,对初始AR数据包进行更新,并将更新后的AR数据包发送至服务器;
服务器402,用于接收更新后的AR数据包,以及向AR设备转发更新后的AR数据包;
AR设备403,用于响应于第二触发操作,获取服务器中保存的与第二触发操作指示的目标现实场景关联的更新后的AR数据包;基于AR设备当前拍摄目标现实场景时的第二位姿数据,以及更新后的AR数据包中至少一个虚拟对象在三维场景模型中的第一位姿数据,确定至少一个虚拟对象的呈现特效信息;基于呈现特效信息,通过AR设备展示至少一个虚拟对象。
本公开实施例提供的展示系统,能够远程编辑生成AR数据包,并将生成的AR数据包发布至服务器,供AR设备端进行增强现实体验,具体可以在AR生成端提供一种针对AR数据包的简便生成方式,便于用户编辑,服务器端能够对AR数据包进行保存,便于AR设备对AR数据包进行下载体验。
下面以基于AR设备在增强现实场景中现场生成AR场景内容的过程进行说明,应用于AR设备,如图14所示,包括:
S501,响应于第一触发操作,获取与第一触发操作指示的目标现实场景关联的初始AR数据包;
S502,基于AR设备当前拍摄目标现实场景时的第二位姿数据,以及初始AR数据包,展示与初始AR数据包关联的至少一个虚拟对象;
S503,获取对展示的至少一个虚拟对象的更新数据;更新数据包括至少一个虚拟对象在目标现实场景中的第一位姿数据;
S504,基于更新数据,对初始AR数据包进行更新,生成更新后的AR数据包。
本公开实施例中,可以响应第一触发操作,进入AR数据包编辑状态,并在获取到AR设备对应的第二位姿数据后,基于该AR设备对应的第二位姿数据,展示出与初始AR数据包关联的至少一个虚拟对象,便于用户在AR设备中能够按照需求直观地对虚拟对象在目标现实场景中的第二位姿数据进行编辑调整,从而可以简化AR场景内容的生成过程,也可以使调整后的AR数据包的准确度更高。
基于同一技术构思,本公开实施例中还提供了与图1所示的AR场景内容的生成方法对应的生成装置,由于本公开实施例中的装置解决问题的原理与本公开实施例上述生成方法相似,因此装置的实施可以参见方法的实施,重复之处不再赘述。
参见图15所示,为本公开实施例提供的一种AR场景内容的生成装置600,包括:
第一获取模块601,用于响应于第一触发操作,获取第一触发操作指示的目标现实场景关联的初始AR数据包;
第二获取模块602,用于获取与初始AR数据包关联的至少一个虚拟对象的更新数据;更新数据包括至少一个虚拟对象对应的第一位姿数据;
更新模块603,用于基于至少一个虚拟对象的更新数据,对初始AR数据包进行更新,生成更新后的AR数据包。
在一种可能的实施方式中,第一获取模块601还用于:
获取第一触发操作指示的目标现实场景的三维场景模型;
第二获取模块602在用于获取与初始AR数据包关联的至少一个虚拟对象的更新数据时,包括:
展示加载的三维场景模型;
获取至少一个虚拟对象在置于三维场景模型中的情况下的更新数据,更新数据包括至少一个虚拟对象在置于三维场景模型中的第一位姿数据。
在一种可能的实施方式中,生成装置还包括展示模块604,展示模块604用于:
基于AR设备当前拍摄目标现实场景时的第二位姿数据,以及初始AR数据包,展示与初始AR数据包关联的至少一个虚拟对象;
第二获取模块602用于获取与初始AR数据包关联的至少一个虚拟对象的更新数据时,包括:
获取对展示的至少一个虚拟对象的更新数据;更新数据包括至少一个虚拟对象在目标现实场景中的第一位姿数据。
在一种可能的实施方式中,第二获取模块在用于获取对展示的至少一个虚拟对象的更新数据时,包括:
展示位姿数据编辑界面,并获取通过位姿数据编辑界面接收的至少一个虚拟对象的第一位姿数据;
其中,第一位姿数据中包括以下至少一种:在目标现实场景所处坐标系中的位置坐标、偏转角度以及尺寸信息。
在一种可能的实施方式中,更新数据还包括至少一个虚拟对象对应的交互数据;
第二获取模块在用于获取与初始AR数据包关联的至少一个虚拟对象的更新数据时,包括:
展示交互数据编辑界面,并获取通过交互数据编辑界面分别接收的每个虚拟对象的交互数据;
其中,交互数据中包括以下至少一种:至少一种状态触发条件、与每种状态触发条件对应的呈现状态、以及虚拟对象在被触发展示后的循环展示次数。
在一种可能的实施方式中,在交互数据中包括多种状态触发条件的情况下,交互数据还包括:每种状态触发条件的优先级。
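对于包含多种状态触发条件且带有优先级的交互数据,可以在多个条件同时满足时按优先级选取当前生效的条件。下面给出一个假设性的Python示意(约定优先级数值越小越优先,仅为示例约定,非本申请限定的方案):

```python
def pick_condition(triggered, priority):
    """从同时满足的多种状态触发条件中按优先级选取一个的最小示意。

    triggered: 当前被满足的状态触发条件集合
    priority:  状态触发条件 -> 优先级数值(此处假设数值越小优先级越高)
    """
    # 返回优先级最高的条件,作为本次更新呈现特效信息的依据
    return min(triggered, key=lambda c: priority[c])


# 例如点击模型触发条件与手势识别触发条件同时满足时,按优先级取其一
pick_condition({"点击模型", "手势识别"}, {"点击模型": 2, "手势识别": 1})
```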
在一种可能的实施方式中,第二获取模块602还用于:
获取与初始AR数据包关联的至少一个虚拟物体模型对应的第三位姿数据,至少一个虚拟物体模型表示目标现实场景中的目标物体;
更新模块603在用于基于至少一个虚拟对象的更新数据,对初始AR数据包进行更新,生成更新后的AR数据包时,包括:
基于至少一个虚拟对象的更新数据,以及虚拟物体模型对应的第三位姿数据,对初始AR数据包进行更新,生成更新后的AR数据包。
在一种可能的实施方式中,至少一个虚拟对象包括至少一个第一虚拟对象,第二获取模块602在用于获取与初始AR数据包关联的至少一个虚拟对象的更新数据时,包括:
获取初始AR数据包中包含的至少一个第一虚拟对象的更新数据。
在一种可能的实施方式中,至少一个虚拟对象包括至少一个第二虚拟对象,第二获取模块在用于获取与初始AR数据包关联的至少一个虚拟对象的更新数据时,包括:
从预先建立的素材库中获取与初始AR数据包关联的至少一个第二虚拟对象,并获取至少一个第二虚拟对象的更新数据。
在一种可能的实施方式中,生成装置还包括发送模块605,生成更新后的AR数据包之后,发送模块605用于:
将更新后的AR数据包发送至服务器;或者,将更新后的AR数据包以及指示更新后的AR数据包是否启用的状态信息发送至服务器。
在一种可能的实施方式中,生成更新后的AR数据包之后,发送模块605还用于:
响应于对更新后的AR数据包的上传触发操作,获取更新后的AR数据包的标签信息,并将标签信息发送至服务器。
基于同一技术构思,本公开实施例中还提供了图9所示的AR场景内容的展示方法对应的展示装置,由于本公开实施例中的装置解决问题的原理与本公开实施例上述展示方法相似,因此装置的实施可以参见方法的实施,重复之处不再赘述。
参照图16所示,为本公开实施例提供的一种AR场景内容的展示装置700的示意图,包括:
获取模块701,用于响应于第二触发操作,获取与第二触发操作指示的目标现实场景关联的AR数据包;AR数据包中包含至少一个虚拟对象对应的第一位姿数据;
确定模块702,用于基于AR设备当前拍摄目标现实场景的第二位姿数据,以及AR数据包中至少一个虚拟对象对应的第一位姿数据,确定至少一个虚拟对象的呈现特效信息;
展示模块703,用于基于呈现特效信息,通过AR设备展示至少一个虚拟对象。
在一种可能的实施方式中,AR数据包还包括至少一个虚拟物体模型对应的第三位姿数据;虚拟物体模型表示目标现实场景中的目标物体;
确定模块702在用于基于AR设备当前拍摄目标现实场景时的第二位姿数据,以及AR数据包中至少一个虚拟对象对应的第一位姿数据,确定至少一个虚拟对象的呈现特效信息时,包括:
基于AR设备当前拍摄目标现实场景时的第二位姿数据、AR数据包中至少一个虚拟对象对应的第一位姿数据、以及虚拟物体模型对应的第三位姿数据,确定至少一个虚拟对象的呈现特效信息。
在一种可能的实施方式中,AR数据包中还包括至少一个虚拟对象对应的交互数据,交互数据包括至少一种状态触发条件、与每种状态触发条件对应的呈现状态、以及虚拟对象在被触发展示后的循环展示次数中的至少一种。
在一种可能的实施方式中,展示装置还包括交互模块704,交互模块704用于:
检测作用于至少一个虚拟对象的交互操作;
在作用于至少一个虚拟对象的交互操作符合第一类状态触发条件的情况下,基于至少一个虚拟对象在该第一类状态触发条件下对应的呈现状态,和/或,至少一个虚拟对象在该第一类状态触发条件被触发展示后的循环展示次数,对至少一个虚拟对象的呈现特效信息进行更新,得到更新后的呈现特效信息;
展示模块703在用于基于呈现特效信息,通过AR设备展示至少一个虚拟对象时,包括:
基于更新后的呈现特效信息,通过AR设备展示至少一个虚拟对象。
在一种可能的实施方式中,展示装置还包括交互模块704,交互模块704用于:
在第二位姿数据符合第二类状态触发条件的情况下,基于至少一个虚拟对象在该第二类状态触发条件下对应的呈现状态,和/或,至少一个虚拟对象在该第二类状态触发条件被触发展示后的循环展示次数,对至少一个虚拟对象的呈现特效信息进行更新,得到更新后的呈现特效信息;
展示模块在用于基于呈现特效信息,通过AR设备展示至少一个虚拟对象时,包括:
基于更新后的呈现特效信息,通过AR设备展示至少一个虚拟对象。
关于装置中的各模块的处理流程、以及各模块之间的交互流程的描述可以参照上述方法实施例中的相关说明,这里不再详述。
对应于图1中的AR场景内容的生成方法,本公开实施例还提供了一种电子设备800,如图17所示,为本公开实施例提供的电子设备800结构示意图,包括:
处理器81、存储器82、和总线83;存储器82用于存储执行指令,包括内存821和外部存储器822;这里的内存821也称内存储器,用于暂时存放处理器81中的运算数据,以及与硬盘等外部存储器822交换的数据,处理器81通过内存821与外部存储器822进行数据交换,当电子设备800运行时,处理器81与存储器82之间通过总线83通信,使得处理器81执行以下指令:响应于第一触发操作,获取第一触发操作指示的目标现实场景关联的初始AR数据包;获取与初始AR数据包关联的至少一个虚拟对象的更新数据;更新数据包括至少一个虚拟对象对应的第一位姿数据;基于至少一个虚拟对象的更新数据,对初始AR数据包进行更新,生成更新后的AR数据包。
或者,可以使得处理器81执行以下指令:响应于第二触发操作,获取与第二触发操作指示的目标现实场景关联的AR数据包;AR数据包中包含至少一个虚拟对象对应的第一位姿数据;基于AR设备当前拍摄目标现实场景的第二位姿数据,以及AR数据包中至少一个虚拟对象对应的第一位姿数据,确定至少一个虚拟对象的呈现特效信息;基于呈现特效信息,通过AR设备展示至少一个虚拟对象。
本公开实施例还提供一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行上述方法实施例中的生成方法或展示方法的步骤。其中,该存储介质可以是易失性或非易失的计算机可读取存储介质。
本公开实施例还提供一种计算机程序产品,该计算机程序产品承载有程序代码,所述程序代码包括的指令可用于执行上述方法实施例中所述的生成方法或展示方法的步骤,具体可参见上述方法实施例,在此不再赘述。
其中,上述计算机程序产品可以具体通过硬件、软件或其结合的方式实现。在一个可选实施例中,所述计算机程序产品具体体现为计算机存储介质,在另一个可选实施例中,计算机程序产品具体体现为软件产品,例如软件开发包(Software Development Kit,SDK)等等。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统和装置的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。在本公开所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,又例如,多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些通信接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本公开各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个处理器可执行的非易失的计算机可读取存储介质中。基于这样的理解,本公开的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本公开各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
最后应说明的是:以上所述实施例,仅为本公开的具体实施方式,用以说明本公开的技术方案,而非对其限制,本公开的保护范围并不局限于此。尽管参照前述实施例对本公开进行了详细的说明,本领域的普通技术人员应当理解:任何熟悉本技术领域的技术人员在本公开揭露的技术范围内,其依然可以对前述实施例所记载的技术方案进行修改或可轻易想到变化,或者对其中部分技术特征进行等同替换;而这些修改、变化或者替换,并不使相应技术方案的本质脱离本公开实施例技术方案的精神和范围,都应涵盖在本公开的保护范围之内。因此,本公开的保护范围应以权利要求的保护范围为准。

Claims (20)

  1. 一种增强现实AR场景内容的生成方法,包括:
    响应于第一触发操作,获取所述第一触发操作指示的目标现实场景关联的初始AR数据包;
    获取与所述初始AR数据包关联的至少一个虚拟对象的更新数据;所述更新数据包括所述至少一个虚拟对象对应的第一位姿数据;
    基于所述至少一个虚拟对象的更新数据,对所述初始AR数据包进行更新,生成更新后的AR数据包。
  2. 根据权利要求1所述的生成方法,所述生成方法还包括:
    获取所述第一触发操作指示的目标现实场景的三维场景模型;
    所述获取与所述初始AR数据包关联的至少一个虚拟对象的更新数据,包括:
    展示加载的所述三维场景模型;
    获取所述至少一个虚拟对象在置于所述三维场景模型中的情况下的更新数据,所述更新数据包括所述至少一个虚拟对象在置于所述三维场景模型中的第一位姿数据。
  3. 根据权利要求1所述的生成方法,所述生成方法还包括:
    基于AR设备当前拍摄所述目标现实场景时的第二位姿数据,以及所述初始AR数据包,展示与所述初始AR数据包关联的至少一个虚拟对象;
    所述获取与所述初始AR数据包关联的至少一个虚拟对象的更新数据,包括:
    获取对展示的至少一个虚拟对象的更新数据;所述更新数据包括所述至少一个虚拟对象在所述目标现实场景中的第一位姿数据。
  4. 根据权利要求3所述的生成方法,所述获取对展示的至少一个虚拟对象的更新数据,包括:
    展示位姿数据编辑界面,并获取通过所述位姿数据编辑界面接收的所述至少一个虚拟对象的第一位姿数据;
    其中,所述第一位姿数据中包括以下至少一种:在所述目标现实场景所处坐标系中的位置坐标、偏转角度以及尺寸信息。
  5. 根据权利要求1至4任一所述的生成方法,所述更新数据还包括所述至少一个虚拟对象对应的交互数据;
    所述获取与所述初始AR数据包关联的至少一个虚拟对象的更新数据,包括:
    展示交互数据编辑界面,并获取通过所述交互数据编辑界面分别接收的每个虚拟对象的交互数据;
    其中,所述交互数据中包括以下至少一种:至少一种状态触发条件、与每种状态触发条件对应的呈现状态、以及虚拟对象在被触发展示后的循环展示次数。
  6. 根据权利要求5所述的生成方法,在所述交互数据中包括多种状态触发条件的情况下,所述交互数据还包括:每种状态触发条件的优先级。
  7. 根据权利要求1至6任一所述的生成方法,所述生成方法还包括:
    获取与所述初始AR数据包关联的至少一个虚拟物体模型对应的第三位姿数据,所述至少一个虚拟物体模型表示所述目标现实场景中的目标物体;
    所述基于所述至少一个虚拟对象的所述更新数据,对所述初始AR数据包进行更新,生成更新后的AR数据包,包括:
    基于所述至少一个虚拟对象的所述更新数据,以及所述虚拟物体模型对应的所述第三位姿数据,对所述初始AR数据包进行更新,生成更新后的AR数据包。
  8. 根据权利要求1至7任一所述的生成方法,所述至少一个虚拟对象包括至少一个第一虚拟对象,所述获取与所述初始AR数据包关联的至少一个虚拟对象的更新数据,包括:
    获取所述初始AR数据包中包含的至少一个第一虚拟对象的更新数据。
  9. 根据权利要求1至8任一所述的生成方法,所述至少一个虚拟对象包括至少一个第二虚拟对象,所述获取与所述初始AR数据包关联的至少一个虚拟对象的更新数据,包括:
    从预先建立的素材库中获取与所述初始AR数据包关联的至少一个第二虚拟对象,并获取所述至少一个第二虚拟对象的更新数据。
  10. 根据权利要求1至9任一所述的生成方法,生成更新后的AR数据包之后,所述生成方法还包括:
    将所述更新后的AR数据包发送至服务器;或者,将所述更新后的AR数据包以及指示所述更新后的AR数据包是否启用的状态信息发送至服务器。
  11. 根据权利要求10所述的生成方法,所述生成更新后的AR数据包之后,还包括:
    响应于对所述更新后的AR数据包的上传触发操作,获取所述更新后的AR数据包的标签信息,并将所述标签信息发送至所述服务器。
  12. 一种增强现实AR场景内容的展示方法,包括:
    响应于第二触发操作,获取与所述第二触发操作指示的目标现实场景关联的AR数据包;所述AR数据包中包含至少一个虚拟对象对应的第一位姿数据;
    基于AR设备当前拍摄所述目标现实场景的第二位姿数据,以及所述AR数据包中至少一个虚拟对象对应的所述第一位姿数据,确定所述至少一个虚拟对象的呈现特效信息;
    基于所述呈现特效信息,通过所述AR设备展示所述至少一个虚拟对象。
  13. 根据权利要求12所述的展示方法,所述AR数据包还包括至少一个虚拟物体模型对应的第三位姿数据;所述虚拟物体模型表示所述目标现实场景中的目标物体;
    所述基于所述AR设备当前拍摄所述目标现实场景时的第二位姿数据,以及所述AR数据包中至少一个虚拟对象对应的所述第一位姿数据,确定所述至少一个虚拟对象的呈现特效信息,包括:
    基于所述AR设备当前拍摄所述目标现实场景时的所述第二位姿数据、所述AR数据包中至少一个虚拟对象对应的所述第一位姿数据、以及所述虚拟物体模型对应的所述第三位姿数据,确定所述至少一个虚拟对象的呈现特效信息。
  14. 根据权利要求12或13所述的展示方法,所述AR数据包中还包括至少一个虚拟对象对应的交互数据,所述交互数据包括至少一种状态触发条件、与每种状态触发条件对应的呈现状态、以及虚拟对象在被触发展示后的循环展示次数中的至少一种。
  15. 根据权利要求14所述的展示方法,所述展示方法还包括:
    检测作用于所述至少一个虚拟对象的交互操作;
    在所述作用于所述至少一个虚拟对象的交互操作符合第一类状态触发条件的情况下,基于所述至少一个虚拟对象在该第一类状态触发条件下对应的呈现状态,和/或,所述至少一个虚拟对象在该第一类状态触发条件被触发展示后的循环展示次数,对所述至少一个虚拟对象的呈现特效信息进行更新,得到更新后的呈现特效信息;
    所述基于所述呈现特效信息,通过所述AR设备展示所述至少一个虚拟对象,包括:
    基于所述更新后的呈现特效信息,通过所述AR设备展示所述至少一个虚拟对象。
  16. 根据权利要求14或15所述的展示方法,所述展示方法还包括:
    在所述第二位姿数据符合第二类状态触发条件的情况下,基于所述至少一个虚拟对象在该第二类状态触发条件下对应的呈现状态,和/或,所述至少一个虚拟对象在该第二类状态触发条件被触发展示后的循环展示次数,对所述至少一个虚拟对象的呈现特效信息进行更新,得到更新后的呈现特效信息;
    所述基于所述呈现特效信息,通过所述AR设备展示所述至少一个虚拟对象,包括:
    基于所述更新后的呈现特效信息,通过所述AR设备展示所述至少一个虚拟对象。
  17. 一种增强现实AR场景内容的生成装置,包括:
    第一获取模块,用于响应于第一触发操作,获取所述第一触发操作指示的目标现实场景关联的初始AR数据包;
    第二获取模块,用于获取与所述初始AR数据包关联的至少一个虚拟对象的更新数据;所述更新数据包括所述至少一个虚拟对象对应的第一位姿数据;
    更新模块,用于基于所述至少一个虚拟对象的更新数据,对所述初始AR数据包进行更新,生成更新后的AR数据包。
  18. 一种增强现实AR场景内容的展示装置,包括:
    获取模块,用于响应于第二触发操作,获取与所述第二触发操作指示的目标现实场景关联的AR数据包;所述AR数据包中包含至少一个虚拟对象对应的第一位姿数据;
    确定模块,用于基于AR设备当前拍摄所述目标现实场景的第二位姿数据,以及所述AR数据包中至少一个虚拟对象对应的所述第一位姿数据,确定所述至少一个虚拟对象的呈现特效信息;
    展示模块,用于基于所述呈现特效信息,通过所述AR设备展示所述至少一个虚拟对象。
  19. 一种电子设备,包括:处理器、存储器和总线,所述存储器存储有所述处理器可执行的机器可读指令,当电子设备运行时,所述处理器与所述存储器之间通过总线通信,所述机器可读指令被所述处理器执行时执行如权利要求1至11任一所述的生成方法的步骤,或者执行如权利要求12至16任一所述的展示方法的步骤。
  20. 一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行如权利要求1至11任一所述的生成方法的步骤,或者执行如权利要求12至16任一所述的展示方法的步骤。
PCT/CN2020/135048 2020-05-26 2020-12-09 Ar场景内容的生成方法、展示方法、装置及存储介质 WO2021238145A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
SG11202108241QA SG11202108241QA (en) 2020-05-26 2020-12-09 Ar scene content generation method and presentation method, apparatuses, and storage medium
JP2021538425A JP2022537861A (ja) 2020-05-26 2020-12-09 Arシーンコンテンツの生成方法、表示方法、装置及び記憶媒体
KR1020217020429A KR20210148074A (ko) 2020-05-26 2020-12-09 Ar 시나리오 콘텐츠의 생성 방법, 전시 방법, 장치 및 저장 매체

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202010456842.3 2020-05-26
CN202010456843.8 2020-05-26
CN202010456843.8A CN111610998A (zh) 2020-05-26 2020-05-26 Ar场景内容的生成方法、展示方法、装置及存储介质
CN202010456842.3A CN111610997A (zh) 2020-05-26 2020-05-26 Ar场景内容的生成方法、展示方法、展示系统及装置

Publications (1)

Publication Number Publication Date
WO2021238145A1 true WO2021238145A1 (zh) 2021-12-02

Family

ID=78745558

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/135048 WO2021238145A1 (zh) 2020-05-26 2020-12-09 Ar场景内容的生成方法、展示方法、装置及存储介质

Country Status (5)

Country Link
JP (1) JP2022537861A (zh)
KR (1) KR20210148074A (zh)
SG (1) SG11202108241QA (zh)
TW (1) TWI783472B (zh)
WO (1) WO2021238145A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114401442A (zh) * 2022-01-14 2022-04-26 北京字跳网络技术有限公司 视频直播及特效控制方法、装置、电子设备及存储介质
CN114764327A (zh) * 2022-05-09 2022-07-19 北京未来时空科技有限公司 一种三维可交互媒体的制作方法、装置及存储介质
CN115291939A (zh) * 2022-08-17 2022-11-04 北京字跳网络技术有限公司 互动场景配置方法、装置、存储介质、设备及程序产品
CN115374141A (zh) * 2022-09-20 2022-11-22 支付宝(杭州)信息技术有限公司 虚拟形象的更新处理方法及装置
WO2023207174A1 (zh) * 2022-04-28 2023-11-02 Oppo广东移动通信有限公司 显示方法、装置、显示设备、头戴式设备及存储介质

Citations (7)

Publication number Priority date Publication date Assignee Title
US20020071038A1 (en) * 2000-12-07 2002-06-13 Joe Mihelcic Method and system for complete 3D object and area digitizing
CN108416832A (zh) * 2018-01-30 2018-08-17 腾讯科技(深圳)有限公司 媒体信息的显示方法、装置和存储介质
CN108520552A (zh) * 2018-03-26 2018-09-11 广东欧珀移动通信有限公司 图像处理方法、装置、存储介质及电子设备
CN109564472A (zh) * 2016-08-11 2019-04-02 微软技术许可有限责任公司 沉浸式环境中的交互方法的选取
CN110716645A (zh) * 2019-10-15 2020-01-21 北京市商汤科技开发有限公司 一种增强现实数据呈现方法、装置、电子设备及存储介质
CN111610998A (zh) * 2020-05-26 2020-09-01 北京市商汤科技开发有限公司 Ar场景内容的生成方法、展示方法、装置及存储介质
CN111610997A (zh) * 2020-05-26 2020-09-01 北京市商汤科技开发有限公司 Ar场景内容的生成方法、展示方法、展示系统及装置

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
TWI628613B (zh) * 2014-12-09 2018-07-01 財團法人工業技術研究院 擴增實境方法與系統
EP3621039A1 (en) * 2018-09-06 2020-03-11 Tata Consultancy Services Limited Real time overlay placement in videos for augmented reality applications
WO2020072972A1 (en) * 2018-10-05 2020-04-09 Magic Leap, Inc. A cross reality system
CN110764614B (zh) * 2019-10-15 2021-10-08 北京市商汤科技开发有限公司 增强现实数据呈现方法、装置、设备及存储介质

Patent Citations (7)

Publication number Priority date Publication date Assignee Title
US20020071038A1 (en) * 2000-12-07 2002-06-13 Joe Mihelcic Method and system for complete 3D object and area digitizing
CN109564472A (zh) * 2016-08-11 2019-04-02 微软技术许可有限责任公司 沉浸式环境中的交互方法的选取
CN108416832A (zh) * 2018-01-30 2018-08-17 腾讯科技(深圳)有限公司 媒体信息的显示方法、装置和存储介质
CN108520552A (zh) * 2018-03-26 2018-09-11 广东欧珀移动通信有限公司 图像处理方法、装置、存储介质及电子设备
CN110716645A (zh) * 2019-10-15 2020-01-21 北京市商汤科技开发有限公司 一种增强现实数据呈现方法、装置、电子设备及存储介质
CN111610998A (zh) * 2020-05-26 2020-09-01 北京市商汤科技开发有限公司 Ar场景内容的生成方法、展示方法、装置及存储介质
CN111610997A (zh) * 2020-05-26 2020-09-01 北京市商汤科技开发有限公司 Ar场景内容的生成方法、展示方法、展示系统及装置

Cited By (8)

Publication number Priority date Publication date Assignee Title
CN114401442A (zh) * 2022-01-14 2022-04-26 北京字跳网络技术有限公司 视频直播及特效控制方法、装置、电子设备及存储介质
CN114401442B (zh) * 2022-01-14 2023-10-24 北京字跳网络技术有限公司 视频直播及特效控制方法、装置、电子设备及存储介质
WO2023207174A1 (zh) * 2022-04-28 2023-11-02 Oppo广东移动通信有限公司 显示方法、装置、显示设备、头戴式设备及存储介质
CN114764327A (zh) * 2022-05-09 2022-07-19 北京未来时空科技有限公司 一种三维可交互媒体的制作方法、装置及存储介质
CN114764327B (zh) * 2022-05-09 2023-05-05 北京未来时空科技有限公司 一种三维可交互媒体的制作方法、装置及存储介质
CN115291939A (zh) * 2022-08-17 2022-11-04 北京字跳网络技术有限公司 互动场景配置方法、装置、存储介质、设备及程序产品
CN115374141A (zh) * 2022-09-20 2022-11-22 支付宝(杭州)信息技术有限公司 虚拟形象的更新处理方法及装置
CN115374141B (zh) * 2022-09-20 2024-05-10 支付宝(杭州)信息技术有限公司 虚拟形象的更新处理方法及装置

Also Published As

Publication number Publication date
JP2022537861A (ja) 2022-08-31
TW202145150A (zh) 2021-12-01
TWI783472B (zh) 2022-11-11
KR20210148074A (ko) 2021-12-07
SG11202108241QA (en) 2021-12-30

Similar Documents

Publication Publication Date Title
WO2021238145A1 (zh) Ar场景内容的生成方法、展示方法、装置及存储介质
US11854149B2 (en) Techniques for capturing and displaying partial motion in virtual or augmented reality scenes
KR102417645B1 (ko) Ar 장면 이미지 처리 방법, 장치, 전자 기기 및 저장 매체
KR102414587B1 (ko) 증강 현실 데이터 제시 방법, 장치, 기기 및 저장 매체
KR102534637B1 (ko) 증강 현실 시스템
KR102367928B1 (ko) 표면 인식 렌즈
KR20220030263A (ko) 텍스처 메시 빌딩
CN111610998A (zh) Ar场景内容的生成方法、展示方法、装置及存储介质
US20160217616A1 (en) Method and System for Providing Virtual Display of a Physical Environment
WO2021073269A1 (zh) 增强现实数据呈现方法、装置、设备、存储介质和程序
Keil et al. The House of Olbrich—An augmented reality tour through architectural history
US11810316B2 (en) 3D reconstruction using wide-angle imaging devices
CN112070906A (zh) 一种增强现实系统及增强现实数据的生成方法、装置
US20160063764A1 (en) Image processing apparatus, image processing method, and computer program product
CN111610997A (zh) Ar场景内容的生成方法、展示方法、展示系统及装置
CN109448050B (zh) 一种目标点的位置的确定方法及终端
US20210118236A1 (en) Method and apparatus for presenting augmented reality data, device and storage medium
CN112070907A (zh) 一种增强现实系统及增强现实数据的生成方法、装置
KR20150079387A (ko) 카메라 광 데이터로 가상 환경을 조명하는 방법
CN111815783A (zh) 虚拟场景的呈现方法及装置、电子设备及存储介质
JP2016139199A (ja) 画像処理装置、画像処理方法、およびプログラム
US11656576B2 (en) Apparatus and method for providing mapping pseudo-hologram using individual video signal output
CN113678173A (zh) 用于虚拟对象的基于图绘的放置的方法和设备
KR20180075222A (ko) 전자 장치 및 그 동작 방법
EP3923121A1 (en) Object recognition method and system in augmented reality enviroments

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2021538425

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20938244

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20938244

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 521430195

Country of ref document: SA