CN112783993B - Content synchronization method for multiple authorized spaces based on digital map - Google Patents


Info

Publication number
CN112783993B
CN112783993B (application CN201911089945.4A)
Authority
CN
China
Prior art keywords
media content
space
authorized
target
digital map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911089945.4A
Other languages
Chinese (zh)
Other versions
CN112783993A (en)
Inventor
刘建滨
李尔
曹军
吴宗武
许阳坡
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201911089945.4A priority Critical patent/CN112783993B/en
Publication of CN112783993A publication Critical patent/CN112783993A/en
Application granted granted Critical
Publication of CN112783993B publication Critical patent/CN112783993B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/248 Presentation of query results
    • G06F16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G06F16/29 Geographical information databases

Abstract

The application discloses a content synchronization method, apparatus, and storage medium for multiple authorized spaces based on a digital map, belonging to the technical field of AR and VR. With the method and apparatus, a terminal can simultaneously obtain the digital maps corresponding to multiple scenes and the authorized spaces corresponding to a first user identifier contained in each digital map. After target media content is added in one authorized space, the terminal determines the relative positional relationship between the target media content and a target object and sends that relationship, the target object, and the target media content to a server. The server then searches the digital maps of the other scenes for an object matching the target object and, according to the relative positional relationship, synchronizes the target media content to the other authorized spaces corresponding to the first user identifier. The server can thus automatically complete the addition of media content in multiple authorized spaces, which improves adding efficiency and ensures that the media content presents a consistent effect across the authorized spaces.

Description

Content synchronization method for multiple authorized spaces based on digital map
Technical Field
The present application relates to the field of augmented reality (AR) and virtual reality (VR) technologies, and in particular, to a method, an apparatus, and a storage medium for synchronizing content across multiple authorized spaces based on a digital map.
Background
Currently, digital maps are increasingly used in daily life. A digital map contains a large virtual three-dimensional space; apart from the portions occupied by buildings and other existing objects, a large amount of this three-dimensional space remains unoccupied. How to make better use of these three-dimensional spaces and realize their value is a problem that needs to be solved in operating digital maps.
Disclosure of Invention
The application provides a digital-map-based authorized space display method, a media content synchronization method for multiple authorized spaces, a media content sharing method, an apparatus, and a storage medium, which enable the visualization of an authorized space and the addition of media content within the visualized authorized space. The technical scheme is as follows:
in a first aspect, there is provided a digital map-based authorized space display method, which is applied to a terminal, the method comprising: acquiring a preview stream of a target scene; acquiring a first user identifier; acquiring the pose of a terminal; acquiring n authorized spaces according to the first user identifier and the pose, wherein the n authorized spaces are n non-overlapping three-dimensional spaces corresponding to the first user identifier in a digital map corresponding to the target scene, the n authorized spaces are used for presenting media content, n is an integer greater than or equal to 1, and the digital map comprises a panoramic image, a point cloud image or a grid image; rendering the n authorized spaces in a preview stream of the target scene.
In the embodiment of the application, the terminal can acquire, according to the first user identifier and the pose of the terminal, the authorized spaces of the currently registered user in the digital map corresponding to the target scene, and then render those authorized spaces in the preview stream of the target scene. The registered user can thereby view the authorized spaces corresponding to the current scene in real time, which is more convenient.
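The acquisition step of the first aspect can be illustrated with a minimal sketch. All names here (`AuthorizedSpace`, `get_authorized_spaces`, the sample scene names) are hypothetical illustrations, not identifiers from the patent, and the pose is reduced to the scene it resolves to:

```python
from dataclasses import dataclass

@dataclass
class AuthorizedSpace:
    space_id: str
    user_id: str   # first user identifier that owns this space
    scene: str     # scene whose digital map contains the space
    bounds: tuple  # (x, y, z, width, height, depth) of the 3-D region

# A toy server-side table of authorized spaces in the digital map.
SPACES = [
    AuthorizedSpace("s1", "user-a", "mall-entrance", (0, 0, 0, 2, 2, 2)),
    AuthorizedSpace("s2", "user-a", "mall-entrance", (5, 0, 0, 1, 1, 1)),
    AuthorizedSpace("s3", "user-b", "mall-entrance", (9, 0, 0, 1, 1, 1)),
]

def get_authorized_spaces(user_id, scene):
    """Return the n non-overlapping spaces of `user_id` in `scene`'s map.

    In the patent the terminal sends the first user identifier and its
    pose to the server; here the pose is simplified to a scene name.
    """
    return [s for s in SPACES if s.user_id == user_id and s.scene == scene]

spaces = get_authorized_spaces("user-a", "mall-entrance")
# The terminal would now render each space's bounds into the preview stream.
```

Note that different user identifiers map to disjoint spaces, matching the statement elsewhere in the description that different user identifications correspond to different authorized spaces.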
Optionally, after rendering the n authorized spaces in the preview stream of the target scene, the method further includes: obtaining target media content, wherein the target media content comprises one or more of characters, pictures, audio, video and models; adding the target media content in a target authorized space, wherein the target authorized space is any one of the n authorized spaces.
Optionally, after rendering the n authorized spaces in the preview stream of the target scene, the method further includes: displaying the media content contained in the target authorized space, and editing the displayed media content according to a first editing mode. The first editing mode includes one or more of an adding mode, a deleting mode, a replacing mode, and a mode of moving according to a preset relative displacement.
The adding mode refers to adding media content on top of the media content already present in the authorized space. The deleting mode refers to deleting media content existing in the authorized space, which may mean deleting the media content as a whole or deleting some of its elements. The replacing mode refers to replacing a media content existing in the authorized space, or an element contained in it, with other media content. The mode of moving according to a preset relative displacement refers to moving existing media content in the authorized space from its current position to another position by the preset relative displacement.
In addition, it should be noted that after the media content is edited according to the first editing mode, the terminal may send the edited media content and the first editing mode to the server, and the server may edit the same media content in the other authorized spaces of the first user identifier according to the first editing mode. Specifically, editing according to the first editing mode means performing, on the same media content in the other authorized spaces, the same editing operation as the first editing mode, so that all authorized spaces containing the corresponding media content present a consistent effect after editing.
For example, the first editing mode is adding a first media object at a first position in a first authorized space. Synchronizing the first editing mode in a second authorized space may be embodied as adding the first media object at a second position in the second authorized space. Assuming that the first authorized space and the second authorized space coincide (including coinciding after scaling), the first position and the second position coincide, and the first media object is presented consistently in both spaces.
For example, the first editing mode is deleting the first media object at the first position in the first authorized space. Synchronizing the first editing mode in the second authorized space may be embodied as deleting the first media object at the second position in the second authorized space. Assuming that the first position and the second position coincide when the two authorized spaces coincide (including after scaling), the remaining content of the two authorized spaces appears identical.
For example, the first editing mode is replacing the first media object at the first position with a second media object in the first authorized space. Synchronizing the first editing mode in the second authorized space may be embodied as replacing the first media object in the second authorized space with the second media object. Assuming that the two authorized spaces coincide (including coinciding after scaling), the first position and the second position coincide and the second media objects in the two spaces also coincide, so the presentation is consistent.
For example, the first editing mode is moving the first media object from the first position to the second position in the first authorized space. Synchronizing the first editing mode in the second authorized space may be embodied as moving the first media object in the second authorized space from a third position to a fourth position. Assuming that the two authorized spaces coincide (including coinciding after scaling), the first position and the third position coincide, the second position and the fourth position coincide, and the first media object is presented consistently in both spaces.
The above is by way of example only and is not limiting. The first and second authorized spaces may be identical in size and shape, or may be proportionally scaled versions of each other, in which case the arrangement of objects in the space is scaled accordingly.
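The four edit examples above can be sketched as a single replayable operation. This is a hypothetical model, not the patent's implementation: each authorized space is a dictionary from position to media object, so two coincident spaces hold equal dictionaries, and replaying the same first editing mode in both keeps them equal:

```python
def apply_edit(space, edit):
    """Apply one first-editing-mode operation to an authorized space."""
    kind = edit["kind"]
    if kind == "add":
        space[edit["pos"]] = edit["obj"]
    elif kind == "delete":
        space.pop(edit["pos"], None)
    elif kind == "replace":
        space[edit["pos"]] = edit["new_obj"]
    elif kind == "move":  # move by a preset relative displacement
        obj = space.pop(edit["pos"])
        new_pos = tuple(p + d for p, d in zip(edit["pos"], edit["delta"]))
        space[new_pos] = obj
    return space

first_space = {(1, 0, 0): "poster"}
second_space = {(1, 0, 0): "poster"}  # coincident with the first space

edit = {"kind": "move", "pos": (1, 0, 0), "delta": (0, 2, 0)}
apply_edit(first_space, edit)   # terminal edits locally ...
apply_edit(second_space, edit)  # ... and the server replays the same edit
# Both spaces now present the poster at (1, 2, 0): a consistent effect.
```

For proportionally scaled spaces, positions and displacements would additionally be multiplied by the scale factor before replaying.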
The target media content may be media content stored in the terminal, or media content downloaded by the terminal from a network.
Because the authorized space corresponding to the first user identifier is explicitly displayed in the preview stream of the target scene, the registered user can clearly see the boundary of his or her authorized space. When the user adds media content within the authorized space, the added media content can therefore be effectively prevented from encroaching on the authorized spaces of other users, achieving accurate addition of media content and improving adding efficiency.
Optionally, the implementation process of adding the target media content in the target authorized space may be: when a drag instruction for the target media content is detected, adding the target media content at the drag end position indicated by the drag instruction, wherein the part of the target media content inside the target authorized space and the part outside it are displayed in different modes, or the part inside the target authorized space is visible while the part outside it is invisible.
Optionally, after adding the target media content in the target authorization space, determining a target relative position relationship between the target media content and a target object, where the target object is a preset image or a three-dimensional object included in a digital map corresponding to the target scene; and sending the first user identifier, the target media content, the target object and the target relative position relation to a server, so that the server updates the content of other authorized spaces corresponding to the first user identifier in a preset digital map according to the target media content, the target object and the target relative position relation.
That is, after the target media content is added in the target authorized space, the terminal sends the relative positional relationship between the target media content and the target object to the server, so that the server can automatically add the target media content in the other authorized spaces according to that relative positional relationship and the target object, improving the efficiency of adding media content.
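The relative positional relationship can be sketched as a simple offset between the media content and the target object. The function names and coordinates below are illustrative assumptions; a real digital map would use full 6-DoF poses rather than plain translation vectors:

```python
def relative_position(content_pos, object_pos):
    """Terminal side: the target relative positional relationship is the
    offset of the target media content from the target object."""
    return tuple(c - o for c, o in zip(content_pos, object_pos))

def place_content(object_pos_in_other_map, relation):
    """Server side: re-apply the offset next to the matched object in
    another scene's digital map."""
    return tuple(o + r for o, r in zip(object_pos_in_other_map, relation))

# The user drops a video 1 m to the right of a shop sign (the target object):
relation = relative_position((4, 1, 0), (3, 1, 0))  # -> (1, 0, 0)
# The same sign is matched at (10, 2, 5) in a second scene's digital map:
pos = place_content((10, 2, 5), relation)           # -> (11, 2, 5)
```

Because only the offset is transmitted, the content lands in the same position relative to the matched object in every scene, which is what keeps the presentation consistent across authorized spaces.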
Optionally, the relative positional relationship of the target media content and the first feature satisfies a first preset positional relationship, and the first feature is a preset image or three-dimensional object included in the preview stream of the target scene. That is, in the embodiment of the present application, the preset positional relationship is satisfied between the target media content placed in the target authorization space and the first feature in the preview stream, so that the target media content and the target scene can be ensured to be adapted.
Optionally, acquiring the n authorized spaces according to the first user identifier and the pose may be implemented in the following three ways:
Way 1: send the first user identifier and the pose to the server, so that the server obtains the n authorized spaces according to the first user identifier and the pose; and receive the n authorized spaces sent by the server.
Way 2: send the first user identifier, the pose, and a space screening condition to the server, so that the server obtains m authorized spaces according to the first user identifier and the pose and then obtains, from the m authorized spaces, the n authorized spaces satisfying the space screening condition; and receive the n authorized spaces sent by the server.
Way 3: send the first user identifier and the pose to the server, so that the server obtains m authorized spaces according to the first user identifier and the pose; receive the m authorized spaces sent by the server; and acquire, from the m authorized spaces, the n authorized spaces satisfying the space screening condition.
All three ways are implementations of acquiring the authorized spaces corresponding to the first user identifier for the case where such authorized spaces already exist.
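The screening step shared by two of the ways above reduces m candidate spaces to the n that satisfy a space screening condition; the same filter can run on the server or on the terminal with identical results. The condition below (a minimum volume) is a hypothetical example, not one named in the patent:

```python
def screen_spaces(spaces, condition):
    """Obtain, from the m spaces bound to the first user identifier,
    the n spaces that satisfy a space screening condition."""
    return [s for s in spaces if condition(s)]

m_spaces = [
    {"id": "s1", "volume": 8},
    {"id": "s2", "volume": 1},
    {"id": "s3", "volume": 27},
]
min_volume = lambda s: s["volume"] >= 8  # hypothetical screening condition
n_spaces = screen_spaces(m_spaces, min_volume)
```

Whether the server filters before transmitting (saving bandwidth) or the terminal filters after receiving all m spaces (allowing the condition to change without a new request) is the main trade-off between the two ways.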
Alternatively, if the first user identifier has no corresponding authorized space, the terminal may apply for one as follows. The terminal sends a space application request to the server, where the request is used to apply to the server for an authorized space and carries the first user identifier, the pose, and the requirements of the authorized space, so that the server allocates n corresponding authorized spaces for the first user identifier according to the pose and the requirements; the terminal then receives an authorization response sent by the server, the response carrying the n authorized spaces.
Optionally, the implementation process of rendering the n authorized spaces in the preview stream of the target scene may be: rendering the n authorized spaces in a preset display form in the preview stream of the target scene according to the pose, wherein the preset display form includes one or more of a preset color, a preset transparency, a cubic space, and a spherical space.
The preset display form may further include displaying the boundary of the authorized space with a distinctive feature, for example a static solid or dashed line, or a scrolling or animated dashed line; this is not limited in the embodiments of the application.
Optionally, after rendering the n authorized spaces in the preview stream of the target scene, if the pose of the n authorized spaces does not match the pose of the preview stream of the target scene, adjusting the pose of the n authorized spaces in the digital map so that the pose of the n authorized spaces matches the pose of the preview stream of the target scene; and sending the adjusted pose of the n authorized spaces to a server so that the server updates the pose of the n authorized spaces in the digital map.
That is, the target authorized space can be adjusted according to the pose deviation between the preview stream of the target scene and the target authorized space, so that its position in the digital map corresponds exactly to the real world. In this way, the positional accuracy of media content added in the target authorized space is ensured, avoiding the problem of the media content appearing in the wrong position, or not being displayable at all, when it is presented again later.
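As a deliberately simplified sketch of this adjustment, the pose deviation is reduced here to a pure translation; a real implementation would also correct rotation. The names and numbers are illustrative only:

```python
def adjust_space_pose(space_origin, observed_offset):
    """Shift an authorized space so its pose matches what is observed in
    the preview stream; only translation is handled in this sketch."""
    return tuple(o + d for o, d in zip(space_origin, observed_offset))

# The rendered space appears 0.25 m left of where the real wall is, so
# the terminal corrects the space's origin in the digital map:
corrected = adjust_space_pose((3.0, 1.0, 0.0), (0.25, 0.0, 0.0))
# The terminal then sends `corrected` to the server, which updates the
# pose of the authorized space in the digital map.
```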
Optionally, the relative positional relationship between the n authorized spaces rendered in the preview stream of the target scene and a second feature satisfies a second preset positional relationship, where the second feature is a preset image or a three-dimensional object included in the digital map corresponding to the target scene.
In a second aspect, there is provided a content synchronization method for multiple authorized spaces based on a digital map, the method being applied to a terminal and comprising: acquiring a first user identifier; determining a first scene according to the first user identifier; acquiring a first digital map corresponding to the first scene, wherein the first digital map comprises a first authorized space, the first authorized space being a three-dimensional space corresponding to the first user identifier in the digital map corresponding to the first scene; the first digital map comprises a target object, the target object comprising a preset image or a three-dimensional object, and the first digital map comprises a panorama, a point cloud, or a grid; displaying the first digital map and the first authorized space; acquiring target media content; adding the target media content in the first authorized space; determining a target relative positional relationship between the target media content and the target object; and sending the first user identifier, the target media content, the target object, and the target relative positional relationship to a server, so that the server updates the content of the other authorized spaces corresponding to the first user identifier in a preset digital map according to the target media content, the target object, and the target relative positional relationship, wherein the preset digital map comprises the target object.
In the embodiment of the application, after the target media content is added in the first authorized space in the digital map corresponding to the first scene, the target media content, the target object, and the relative positional relationship between them can be sent to the server. In the digital maps of other scenes that also contain the target object, the server can then add the target media content to the other authorized spaces corresponding to the first user identifier according to the relative positional relationship, thereby automatically and uniformly updating the media content in each authorized space corresponding to the user identifier with high efficiency.
Optionally, the implementation process of acquiring the first digital map corresponding to the first scene according to the first user identifier may be: the first user identification is sent to the server, so that the server obtains digital maps corresponding to k scenes and a plurality of authorized spaces corresponding to the first user identification in the digital maps corresponding to each scene according to the first user identification; receiving digital maps corresponding to the k scenes and a plurality of authorized spaces corresponding to the first user identification in the digital map corresponding to each scene, wherein the digital maps are sent by the server; and selecting the first scene from the k scenes according to a preset rule, and acquiring a first digital map corresponding to the first scene.
Optionally, the selecting the first scene from the k scenes according to a preset rule includes: selecting a scene closest to the position of the terminal from the k scenes as the first scene; or selecting a scene with highest priority from the k scenes as the first scene; or selecting a default scene from the k scenes as the first scene.
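The three preset rules for selecting the first scene from the k scenes can be sketched directly. Positions are reduced to one dimension for brevity, and the scene records and rule names are hypothetical:

```python
def select_first_scene(scenes, terminal_pos, rule="closest"):
    """Pick the first scene from k candidate scenes per a preset rule."""
    if rule == "closest":   # scene nearest to the terminal's position
        return min(scenes, key=lambda s: abs(s["pos"] - terminal_pos))
    if rule == "priority":  # scene with the highest priority
        return max(scenes, key=lambda s: s["priority"])
    # otherwise: the default scene among the k scenes
    return next(s for s in scenes if s.get("default"))

scenes = [
    {"name": "lobby", "pos": 10, "priority": 1, "default": True},
    {"name": "plaza", "pos": 3, "priority": 5},
]
closest = select_first_scene(scenes, terminal_pos=2, rule="closest")
```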
Optionally, the implementation process of determining the first scene according to the first user identifier may include: the first user identification and scene requirements are sent to the server, so that the server obtains k scenes corresponding to the first user identification according to the first user identification, and obtains a first scene meeting the scene requirements from the k scenes; and receiving the first scene sent by the server.
Optionally, in the digital map, different user identifications correspond to different authorized spaces.
In a third aspect, there is provided a content synchronization method for multiple authorized spaces based on a digital map, the method comprising: acquiring a first user identifier, a target object, target media content, and a target relative positional relationship between the target media content and the target object, all sent by a terminal, the target object comprising a preset image or a three-dimensional object; acquiring a second digital map corresponding to a second scene according to the first user identifier, wherein the second digital map comprises a second authorized space, the second authorized space being a three-dimensional space corresponding to the first user identifier; the second digital map comprises the target object, and the second authorized space is used for presenting media content; determining the position of the target object in the second digital map; and adding the target media content in the second authorized space according to the position of the target object and the target relative positional relationship, so that when the terminal presents the second digital map and renders the second authorized space, the positional relationship between the target media content in the second authorized space and the target object in the second digital map satisfies the target relative positional relationship.
In the application, the server can acquire the target object, the target media content, and the relative positional relationship between the target media content and the target object sent by the terminal, and then add the target media content in the second authorized space according to the target object and that relative positional relationship. The media content in each authorized space corresponding to the user identifier is thus updated automatically and uniformly, with high efficiency.
Optionally, after the target media content is added in the second authorized space according to the target object and the target relative positional relationship, if the target media content does not match the second authorized space, the target media content is adjusted so that the adjusted target media content matches the second authorized space.
The target media content added in the second authorized space may not fit the second authorized space, e.g., may be out of range of the second authorized space, in which case the server may adjust the size, shape, etc. of the target media content to fit the target media content to the second authorized space.
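One way the server might adjust the size of out-of-range content is to scale it down uniformly until it fits the second authorized space. This is a sketch of that idea only; the function name and the uniform-scaling policy are assumptions, and shape adjustments mentioned in the text are not covered:

```python
def fit_to_space(content_size, space_size):
    """Uniformly scale target media content down so that it no longer
    exceeds the bounds of the second authorized space. Sizes are
    (width, height, depth) tuples; content that already fits is kept."""
    factor = min(1.0, *(s / c for s, c in zip(space_size, content_size)))
    return tuple(c * factor for c in content_size)

# A 4 x 2 x 1 banner dropped into a 2 x 2 x 2 authorized space is
# scaled by 0.5 so its largest dimension fits:
fitted = fit_to_space((4.0, 2.0, 1.0), (2.0, 2.0, 2.0))
```

Uniform scaling preserves the content's aspect ratio, which keeps its presentation consistent with the first authorized space even when the second space is smaller.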
Optionally, in the digital map, different user identifications correspond to different authorized spaces.
In a fourth aspect, a content synchronization method for multiple authorized spaces based on a digital map is provided, the method being applied to a terminal and comprising: acquiring a first user identifier; determining a first scene according to the first user identifier; acquiring a first digital map corresponding to the first scene, wherein the first digital map comprises a first authorized space, the first authorized space being a three-dimensional space corresponding to the first user identifier and being used for presenting media content; displaying the first digital map, the first authorized space, and first media content included in the first authorized space; editing the first media content according to a first editing mode; and sending the first user identifier, the first media content, and the first editing mode to a server, so that the server edits the first media content in the other authorized spaces corresponding to the first user identifier in a preset digital map according to the first editing mode.
In the embodiment of the application, the terminal can simultaneously acquire the digital maps corresponding to multiple scenes and the authorized spaces corresponding to the first user identifier contained in each digital map, edit the media content in one authorized space, and send the media content and the editing mode to the server. The server can then, in the digital maps corresponding to the other scenes, find the authorized spaces of the first user identifier that also contain that media content and edit it according to the received editing mode. The server can thus automatically complete the editing of the same media content in multiple authorized spaces, which improves editing efficiency and ensures that the media content presents a consistent effect across the authorized spaces.
Optionally, the acquiring, according to the first user identifier, a first digital map corresponding to a first scene includes: the first user identification is sent to the server, so that the server obtains digital maps corresponding to k scenes and a plurality of authorized spaces corresponding to the first user identification in the digital maps corresponding to each scene according to the first user identification; receiving digital maps corresponding to the k scenes and a plurality of authorized spaces corresponding to the first user identification in the digital map corresponding to each scene, wherein the digital maps are sent by the server; and selecting the first scene from the k scenes according to a preset rule, and acquiring a first digital map corresponding to the first scene.
Optionally, the selecting the first scene from the k scenes according to a preset rule includes: selecting a scene closest to the position of the terminal from the k scenes as the first scene; or selecting a scene with highest priority from the k scenes as the first scene; or selecting a default scene from the k scenes as the first scene.
Optionally, the implementation process of determining the first scene according to the first user identifier includes: the first user identification and scene requirements are sent to the server, so that the server obtains k scenes corresponding to the first user identification according to the first user identification, and obtains a first scene meeting the scene requirements from the k scenes; and receiving the first scene sent by the server.
Optionally, the first editing mode includes one or more of an adding mode, a deleting mode, a replacing mode, and a mode of moving according to a preset relative displacement.
The adding mode refers to adding media content on top of the media content already present in the authorized space. The deleting mode refers to deleting media content existing in the authorized space, which may mean deleting the media content as a whole or deleting some of its elements. The replacing mode refers to replacing a media content existing in the authorized space, or an element contained in it, with other media content. The mode of moving according to a preset relative displacement refers to moving existing media content in the authorized space from its current position to another position by the preset relative displacement.
In addition, it should be noted that after the media content is edited according to the first editing mode, the terminal may send the edited media content and the first editing mode to the server, and the server may edit the same media content in the other authorized spaces of the first user identifier according to the first editing mode. Specifically, editing according to the first editing mode means performing, on the same media content in the other authorized spaces, the same editing operation as the first editing mode, so that all authorized spaces containing the corresponding media content present a consistent effect after editing.
For example, the first editing mode is adding a first media object at a first position in a first authorized space. Synchronizing the first editing mode in a second authorized space may be embodied as adding the first media object at a second position in the second authorized space. Assuming that the first authorized space and the second authorized space coincide (including coinciding after scaling), the first position and the second position coincide, and the first media object is presented consistently in both spaces.
For example, the first editing mode is deleting the first media object at the first position in the first authorized space. Synchronizing the first editing mode in the second authorized space may be embodied as deleting the first media object at the second position in the second authorized space. Assuming that the first position and the second position coincide when the two authorized spaces coincide (including after scaling), the remaining content of the two authorized spaces appears identical.
For example, the first editing mode is replacing the first media object at the first position with a second media object in the first authorized space. Synchronizing the first editing mode in the second authorized space may be embodied as replacing the first media object in the second authorized space with the second media object. Assuming that the two authorized spaces coincide (including coinciding after scaling), the first position and the second position coincide and the second media objects in the two spaces also coincide, so the presentation is consistent.
For example, the first editing mode is moving the first media object from the first position to the second position in the first authorized space. Synchronizing the first editing mode in the second authorized space may be embodied as moving the first media object in the second authorized space from a third position to a fourth position. Assuming that the two authorized spaces coincide (including coinciding after scaling), the first position and the third position coincide, the second position and the fourth position coincide, and the first media object is presented consistently in both spaces.
The above is by way of example only and is not limiting. The first and second authorized spaces may be identical in size and shape; or may be proportioned, in which case the arrangement of objects in space will be proportioned accordingly.
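Purely as an illustrative sketch (not part of the claimed method), the four editing modes described above can be modeled as operations on the set of media objects in an authorized space. All names and data structures below are hypothetical simplifications:

```python
from dataclasses import dataclass, field

@dataclass
class MediaObject:
    object_id: str
    position: tuple  # (x, y, z) coordinates inside the authorized space

@dataclass
class AuthorizedSpace:
    objects: dict = field(default_factory=dict)  # object_id -> MediaObject

def apply_edit(space, mode, **kw):
    """Apply one of the four editing modes to an authorized space."""
    if mode == "add":
        obj = kw["obj"]
        space.objects[obj.object_id] = obj
    elif mode == "delete":
        # Delete the whole media object (deleting a single element is analogous).
        space.objects.pop(kw["object_id"], None)
    elif mode == "replace":
        old = space.objects.pop(kw["old_id"])
        new = kw["new_obj"]
        new.position = old.position  # the replacement keeps the original position
        space.objects[new.object_id] = new
    elif mode == "move":
        dx, dy, dz = kw["displacement"]  # the preset relative displacement
        obj = space.objects[kw["object_id"]]
        x, y, z = obj.position
        obj.position = (x + dx, y + dy, z + dz)
    return space
```

Because each mode is expressed as data (`mode` plus its arguments), the same operation can later be re-applied verbatim to other authorized spaces, which is what yields the consistent effect described above.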
Optionally, in the digital map, different user identifications correspond to different authorized spaces.
In a fifth aspect, a content synchronization method for a plurality of authorized spaces based on a digital map is provided, where the method includes: obtaining a first user identifier, first media content and a first editing mode, where the first media content is media content included in a first authorized space, the first authorized space is a three-dimensional space corresponding to the first user identifier in a first digital map corresponding to a first scene, and the first digital map includes a panorama, a point cloud map or a grid model; obtaining a second digital map corresponding to a second scene according to the first user identifier, where the second digital map includes a second authorized space, the second authorized space is a three-dimensional space corresponding to the first user identifier in the digital map corresponding to the second scene, and the second authorized space includes the first media content; and editing the first media content included in the second authorized space according to the first editing mode.
In this embodiment of the application, after receiving the first media content and the first editing mode, the server can search for the first media content in the authorized spaces corresponding to the first user identifier included in the digital maps corresponding to other scenes, and edit the found first media content according to the first editing mode, thereby realizing synchronous editing of the same media content in a plurality of authorized spaces. In this way, the server can automatically complete the editing of the media content in the plurality of authorized spaces, which improves editing efficiency and ensures that the media content presents a consistent effect across the plurality of authorized spaces.
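The server-side search-and-edit loop of this aspect can be sketched as follows. This is an illustrative simplification only: the storage layout (`spaces_by_scene` mapping a scene to per-user content dictionaries) and the function name are assumptions, and only two editing modes are shown:

```python
def synchronize_edit(spaces_by_scene, user_id, content_id, edit_mode, payload=None):
    """Apply the same edit to every authorized space of `user_id` that
    contains `content_id`.  `spaces_by_scene` maps a scene name to
    {user_id: {content_id: content}} -- a deliberately simplified layout."""
    synced_scenes = []
    for scene, user_spaces in spaces_by_scene.items():
        space = user_spaces.get(user_id)
        if space is None or content_id not in space:
            continue  # this scene has no authorized space containing the content
        if edit_mode == "delete":
            del space[content_id]
        elif edit_mode == "replace":
            space[content_id] = payload
        synced_scenes.append(scene)
    return synced_scenes  # scenes whose authorized spaces were synchronized
```

The lookup is keyed by the first user identifier, so authorized spaces belonging to other users are never touched, matching the per-user scoping described above.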
Optionally, after the editing of the first media content included in the second authorized space according to the first editing mode, the method further includes: if the edited media content does not match the second authorized space, adjusting the edited media content so that the adjusted media content matches the second authorized space.
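One simple way such an adjustment could work, shown purely for illustration (the patent does not prescribe a specific adjustment rule), is uniform scaling so that the edited content's bounding box fits inside the authorized space:

```python
def fit_scale(content_size, space_size):
    """Return the uniform scale factor that makes the content's bounding
    box (w, h, d) fit inside the authorized space's bounding box,
    never enlarging the content -- only shrinking it to fit."""
    scale = min(s / c for c, s in zip(content_size, space_size))
    return min(scale, 1.0)
```

A factor below 1.0 indicates the content exceeded the space in at least one dimension and was shrunk; a factor of exactly 1.0 means the content already matched the space.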
Optionally, the first editing mode includes one or more of an adding mode, a deleting mode, a replacing mode and a moving mode according to a preset relative displacement.
The adding mode refers to adding new media content on the basis of the media content already present in the authorized space. The deleting mode refers to deleting media content existing in the authorized space, which may specifically include deleting the whole media content or deleting some elements of the media content. The replacing mode refers to replacing a certain media content existing in the authorized space with other media content, or replacing a certain element contained in the existing media content in the authorized space with other media content. The moving mode according to a preset relative displacement refers to moving existing media content in the authorized space from its current position to another position according to the preset relative displacement.
In addition, it should be noted that after the media content is edited according to the first editing mode, the terminal may send the edited media content and the first editing mode to the server, and the server may edit the same media content in the other authorized spaces of the first user identifier according to the first editing mode. Specifically, editing according to the first editing mode refers to performing the same editing operation on the same media content in the other authorized spaces, so that all authorized spaces containing the corresponding media content present a consistent effect after editing.
For example, the first editing mode is adding a first media object at a first position in the first authorized space; synchronizing the first editing mode in the second authorized space may be embodied as adding the first media object at a second position in the second authorized space. Assuming that the first authorized space and the second authorized space are coincident (including coincident after scaling), the first position and the second position are coincident, and the presentation of the first media object is consistent in both spaces.
For example, the first editing mode is deleting the first media object at the first position in the first authorized space; synchronizing the first editing mode in the second authorized space may be embodied as deleting the first media object at a second position in the second authorized space. Assuming that the first authorized space and the second authorized space are coincident (including coincident after scaling), the first position and the second position are coincident, and the remaining content of the two spaces is presented consistently.
For example, the first editing mode is replacing the first media object at the first position with a second media object in the first authorized space; synchronizing the first editing mode in the second authorized space may be embodied as replacing the first media object in the second authorized space with the second media object. Assuming that the first authorized space and the second authorized space are coincident (including coincident after scaling), the first position and the second position are coincident, the second media objects in the two spaces are also coincident, and the presentation is consistent.
For example, the first editing mode is moving the first media object from the first position to the second position in the first authorized space; synchronizing the first editing mode in the second authorized space may be embodied as moving the first media object in the second authorized space from a third position to a fourth position. Assuming that the first authorized space and the second authorized space are coincident (including coincident after scaling), the first position and the third position are coincident, the second position and the fourth position are coincident, and the presentation of the first media object is consistent in both spaces.
The above is by way of example only and is not limiting. The first authorized space and the second authorized space may be identical in size and shape, or may be proportionally scaled versions of each other, in which case the arrangement of the objects in the spaces is scaled accordingly.
Optionally, in the digital map, the authorized spaces corresponding to different user identifications are different.
In a sixth aspect, a media content sharing method based on a digital map is provided, the method being applied to a second terminal and comprising: acquiring a video of a target scene sent by a first terminal, where the video of the target scene carries a target pose at which the first terminal shot the video of the target scene; acquiring target media content to be displayed according to the target pose; and playing the video of the target scene and rendering the target media content while the video of the target scene is playing.
In this embodiment of the application, the second terminal may receive the video of the target scene shared by the first terminal, and, according to the target pose carried in the video, acquire from the server the media content added at the corresponding position for display. In this way, terminals can share media content that has been added to the digital map through video sharing, which facilitates the spread of the media content.
Optionally, the implementation process of acquiring the target media content to be displayed according to the target pose may include: sending the target pose to a server, so that the server acquires the target media content according to the target pose; and receiving the target media content sent by the server.
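The server-side half of this optional interaction, looking up media content anchored near the position carried by the target pose, might look like the following sketch. The index layout, function name, and the fixed search radius are all illustrative assumptions:

```python
def media_content_for_pose(content_index, pose_position, radius=5.0):
    """Return all media content whose anchor lies within `radius` of the
    position component of the target pose (a simplified server lookup)."""
    px, py, pz = pose_position
    hits = []
    for entry in content_index:
        ax, ay, az = entry["anchor"]
        dist = ((ax - px) ** 2 + (ay - py) ** 2 + (az - pz) ** 2) ** 0.5
        if dist <= radius:
            hits.append(entry["content"])
    return hits
```

The second terminal would then render the returned content over the playing video, using the pose's orientation component (omitted here) to align the rendering with the camera view.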
In a seventh aspect, there is provided a digital map-based authorized space display device having a function of implementing the digital map-based authorized space display method behavior in the first aspect described above. The digital map-based authorized space display device comprises at least one module for implementing the digital map-based authorized space display method provided in the first aspect.
In an eighth aspect, there is provided a digital map-based content synchronization apparatus for a plurality of authorized spaces, the digital map-based content synchronization apparatus having a function of realizing the behavior of the digital map-based content synchronization method in the second or fourth aspect. The content synchronization device based on the plurality of authorized spaces of the digital map comprises at least one module for implementing the content synchronization method based on the plurality of authorized spaces of the digital map provided in the second aspect.
In a ninth aspect, there is provided a digital map-based content synchronization apparatus for a plurality of authorized spaces, the digital map-based content synchronization apparatus having a function of realizing the behavior of the digital map-based content synchronization method for a plurality of authorized spaces in the third or fifth aspect. The content synchronization device based on the plurality of authorized spaces of the digital map comprises at least one module for implementing the content synchronization method based on the plurality of authorized spaces of the digital map provided in the third aspect.
In a tenth aspect, a digital map-based media content sharing apparatus is provided, which has a function of implementing the behavior of the digital map-based media content sharing method in the sixth aspect. The digital map-based media content sharing device comprises at least one module, wherein the at least one module is used for implementing the digital map-based media content sharing method provided in the sixth aspect.
In an eleventh aspect, there is provided a digital map-based authorized space display device, which includes a processor, a memory, a camera, a transceiver, and a communication bus, where the processor, the memory, the camera, and the transceiver are all connected through the communication bus, and the memory is configured to store a program supporting the digital map-based authorized space display device to perform the digital map-based authorized space display method provided in the first aspect, and store data related to implementing the digital map-based authorized space display method provided in the first aspect. The camera is used for collecting video streams, the transceiver is used for receiving or transmitting data, and the processor controls the camera and the transceiver to implement the digital map-based authorized space display method provided in the first aspect by executing the program stored in the memory.
In a twelfth aspect, there is provided a digital map-based content synchronization apparatus for a plurality of authorized spaces, the digital map-based content synchronization apparatus including a processor, a memory, a camera, a transceiver, and a communication bus, the processor, the memory, the camera, and the transceiver being all connected by the communication bus, the memory being configured to store a program for enabling the digital map-based content synchronization apparatus for a plurality of authorized spaces to execute the digital map-based content synchronization method provided in the second or fourth aspect, and to store data related to the digital map-based content synchronization method provided in the second or fourth aspect. The camera is used for collecting video streams, the transceiver is used for receiving or transmitting data, the processor controls the camera and the transceiver to realize the content synchronization method based on the plurality of authorized spaces of the digital map according to the second aspect or the fourth aspect by executing the program stored in the memory.
In a thirteenth aspect, there is provided a digital map-based content synchronization apparatus for a plurality of authorized spaces, the digital map-based content synchronization apparatus including in its structure a processor, a memory, a transceiver, and a communication bus, the processor, the memory, and the transceiver being all connected by the communication bus, the memory being for storing a program for supporting the digital map-based content synchronization apparatus for a plurality of authorized spaces to perform the digital map-based content synchronization method provided in the third aspect or the fifth aspect, and for storing data involved in implementing the digital map-based content synchronization method provided in the third aspect or the fifth aspect. The transceiver is configured to receive or transmit data, and the processor controls the transceiver to implement the content synchronization method for a plurality of authorized spaces based on a digital map according to the third or fifth aspect by executing a program stored in the memory.
In a fourteenth aspect, a digital map-based media content sharing apparatus is provided, where the digital map-based media content sharing apparatus includes a processor, a memory, a camera, a transceiver, and a communication bus, where the processor, the memory, the camera, and the transceiver are all connected through the communication bus, and the memory is configured to store a program supporting the digital map-based media content sharing apparatus to execute the digital map-based media content sharing method provided in the sixth aspect, and store data related to implementing the digital map-based media content sharing method provided in the sixth aspect. The camera is used for collecting video streams, the transceiver is used for receiving or sending data, the processor controls the camera and the transceiver to realize the digital map-based media content sharing method provided in the sixth aspect by executing the program stored in the memory.
In a fifteenth aspect, there is provided a computer readable storage medium having instructions stored therein, which when run on a computer, cause the computer to perform the digital map-based authorized space display method of the first aspect described above.
In a sixteenth aspect, there is provided a computer readable storage medium having instructions stored therein, which when run on a computer, cause the computer to perform the method of content synchronization for a plurality of authorized spaces based on a digital map as described in the second or fourth aspect above.
In a seventeenth aspect, there is provided a computer readable storage medium having instructions stored therein, which when run on a computer, cause the computer to perform the method for content synchronization of a plurality of authorized spaces based on a digital map according to the third or fifth aspect above.
In an eighteenth aspect, there is provided a computer readable storage medium having instructions stored therein, which when run on a computer, cause the computer to perform the digital map-based media content sharing method of the sixth aspect described above.
In a nineteenth aspect, there is provided a computer program product containing instructions, which when run on a computer, cause the computer to perform the digital map-based authorized space display method described in the first aspect, or cause the computer to perform the digital map-based content synchronization method described in the second or fourth aspect, or cause the computer to perform the digital map-based content synchronization method described in the third or fifth aspect, or cause the computer to perform the digital map-based media content sharing method described in the sixth aspect.
The beneficial effects brought by the technical solutions provided in this application include at least the following:
In this embodiment of the application, the terminal can simultaneously acquire the digital maps corresponding to a plurality of scenes and the authorized space corresponding to the first user identifier contained in each digital map. The terminal then adds the target media content in a certain authorized space, determines the relative position relation between the target media content and a target object, and sends the relative position relation, the target object and the target media content to the server. The server can search for features matching the target object in the digital maps corresponding to the other scenes, and synchronize the target media content to the other authorized spaces corresponding to the first user identifier according to the relative position relation. In this way, the server can automatically complete the addition of the media content in the plurality of authorized spaces, which improves the addition efficiency, ensures that the media content presents a consistent effect across the plurality of authorized spaces, and ensures the accuracy of the addition.
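The final placement step, positioning the target media content relative to the matched target object in each other scene, can be sketched as follows. This is an illustrative simplification: the feature-matching step that locates the target object in each map is assumed to have already produced `target_positions`, and the relative position relation is reduced to a translation offset:

```python
def place_relative_to_target(target_positions, relative_offset):
    """For each scene where the target object was matched, compute the
    position of the media content as the target's position plus the
    relative position relation measured in the first authorized space."""
    dx, dy, dz = relative_offset
    return {scene: (tx + dx, ty + dy, tz + dz)
            for scene, (tx, ty, tz) in target_positions.items()}
```

Because the offset is anchored to the matched target object rather than to absolute map coordinates, the content lands in the same place relative to, for example, a shop sign in every scene, which is what guarantees the addition accuracy described above.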
Drawings
FIG. 1 is a diagram of a system architecture provided in an embodiment of the present application;
FIG. 2 is a block diagram of software modules of a terminal according to an embodiment of the present application;
FIG. 3 is a block diagram of software modules of a server provided in an embodiment of the present application;
FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 5 is a software structural block diagram of an electronic device according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a server according to an embodiment of the present application;
FIG. 7 is a flowchart of an authorized space display method based on a digital map according to an embodiment of the present application;
FIG. 8 is a schematic diagram illustrating a preview stream of a target scene and a display of a target authorized space according to an embodiment of the present application;
FIG. 9 is a schematic diagram illustrating a pose deviation of an authorized space from a preview stream of a target scene according to an embodiment of the present application;
FIG. 10 is a schematic diagram showing a media content type option according to an embodiment of the present application;
FIG. 11 is a schematic diagram of displaying target media content according to an embodiment of the present application;
FIG. 12 is a flowchart of a content synchronization method for a plurality of authorized spaces based on a digital map according to an embodiment of the present application;
FIG. 13 is a schematic diagram of a user interacting with a terminal to select a first scene according to an embodiment of the present application;
FIG. 14 is a schematic diagram of a user interacting with a terminal to select a first authorized space according to an embodiment of the present application;
FIG. 15 is a schematic diagram illustrating synchronizing target media content in a plurality of authorized spaces according to an embodiment of the present application;
FIG. 16 is a flowchart of another content synchronization method for a plurality of authorized spaces based on a digital map according to an embodiment of the present application;
FIG. 17 is a flowchart of a content synchronization method for a plurality of authorized spaces based on a digital map according to another embodiment of the present application;
FIG. 18 is a flowchart of another content synchronization method for a plurality of authorized spaces based on a digital map according to an embodiment of the present application;
FIG. 19 is a schematic diagram of an editing option provided by an embodiment of the present application;
FIG. 20 is a flowchart of another content synchronization method for a plurality of authorized spaces based on a digital map according to an embodiment of the present application;
FIG. 21 is a flowchart of another content synchronization method for a plurality of authorized spaces based on a digital map according to an embodiment of the present application;
FIG. 22 is a schematic diagram showing display of a preset space and setting of the preset space according to an embodiment of the present application;
FIG. 23 is a schematic diagram showing pre-segmented space blocks in a preview stream of a target scene according to an embodiment of the present application;
FIG. 24 is a schematic diagram of binding a user's authorized space with an authorized space of a building according to an embodiment of the present application;
FIG. 25 is a flowchart of a media content sharing method based on a digital map according to an embodiment of the present application;
FIG. 26 is a schematic diagram of displaying a media content display switch option on a video playback page according to an embodiment of the present application;
FIG. 27 is a flowchart of another media content sharing method based on a digital map according to an embodiment of the present application;
FIG. 28 is a schematic structural diagram of an authorized space display device based on a digital map according to an embodiment of the present application;
FIG. 29 is a schematic structural diagram of a content synchronization device for a plurality of authorized spaces based on a digital map according to an embodiment of the present application;
FIG. 30 is a schematic structural diagram of another content synchronization device for a plurality of authorized spaces based on a digital map according to an embodiment of the present application;
FIG. 31 is a schematic structural diagram of another content synchronization device for a plurality of authorized spaces based on a digital map according to an embodiment of the present application;
FIG. 32 is a schematic structural diagram of another content synchronization device for a plurality of authorized spaces based on a digital map according to an embodiment of the present application;
FIG. 33 is a schematic structural diagram of a media content sharing device based on a digital map according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Before explaining the embodiment of the present application in detail, an application scenario related to the embodiment of the present application is described.
A digital map constructed from real-world geographic information using technologies such as AR or VR describes the real world more intuitively and can provide great convenience for people when traveling. The digital map may include, among other things, a panorama, a point cloud, and a grid model. In this embodiment, the grid model may also be referred to as a mesh map. Buildings, parcels and the like in the real world may be presented stereoscopically in corresponding virtual spaces in the digital map. In such a case, a user may need to present media content within the corresponding space, such as on a building or parcel presented in the virtual space of the digital map. For example, assuming that one or more shops of a certain commercial brand exist in the real world, when a product or service needs to be promoted in these shops, a user can apply for authorized spaces in the digital map corresponding to the shops, and then add media content such as videos, animations or pictures for information recommendation in the authorized spaces. For another example, for a building in the real world, a user may want to identify the building presented in the digital map, or may add media content such as pictures or text at the position of the building in the digital map. Based on the above, an embodiment of the present application provides an authorized space display method based on a digital map, which allows a registered user to view his or her authorized space in real time and add media content in the authorized space. In addition, an embodiment of the present application further provides a content synchronization method for a plurality of authorized spaces based on a digital map, which allows a registered user to add media content in batches in a plurality of authorized spaces.
In addition, an embodiment of the present application further provides a media content sharing method based on a digital map, which enables a consuming user to obtain and watch media content in a corresponding authorized space through videos shared by other consuming users.
The following are two possible scenarios presented in the embodiments of the present application.
In the first scenario, assume that a certain mobile phone brand has 100 shops in city A, and a new mobile phone has just been launched and needs to be advertised. The user wants to present AR content such as pictures, text, videos, animations, and 3D models near the shop signs. In this case, the user can automatically and synchronously add the AR content to all 100 shops through the content synchronization method provided in the embodiment of the present application, without manual operation for each shop, which improves the adding efficiency.
In the second scenario, assume that the owner of a certain boutique wants to edit AR content and see its promotional effect in real time. The user can hold the terminal in the environment where the boutique is located; the terminal can display the authorized space in real time through the authorized space display method provided in the embodiment of the present application, and the user can add media content in the authorized space in real time through interaction with the terminal and view the effect of the added media content.
The system architecture to which the embodiments of the present application relate is described next.
Fig. 1 is a system architecture diagram related to a media content adding method according to an embodiment of the present application. As shown in fig. 1, a terminal 101 and a server 102 are included in the system. Wherein the terminal 101 and the server 102 may communicate through a wired network or a wireless network.
The terminal 101 may perform image acquisition on the real world and display the acquired live image in real time. In addition, the terminal 101 may further obtain, from the server 102, spatial information of an authorized space of the current user in the digital map according to the user identifier of the current user, and then display the authorized space on the user interface, so as to add media content in the authorized space according to the media content adding method provided in the embodiment of the present application. If there is added media content in the authorized space, the terminal 101 may also obtain the added media content from the server and display the media content in the authorized space through AR or VR or Mixed Reality (MR) technology. If the current user does not have an authorized space in the digital map, the terminal 101 may also apply for the authorized space in the digital map according to the user operation through the related implementation manner provided in the embodiment of the present application.
The server 102 stores therein space information of authorized spaces owned by respective users in the digital map. When receiving the user identifier sent by the terminal 101, the server 102 may obtain, according to the user identifier, space information of an authorized space owned by the user identified by the user identifier. The server 102 also stores information such as media content added to each authorized space and the position and posture of the media content. Thus, when the server 102 receives the positioning information transmitted by the terminal 101, if there is an authorized space for the user of the terminal 101 at the location indicated by the positioning information and there is added media content in the authorized space, the server 102 may transmit information such as the authorized space at the location indicated by the positioning information and the added media content in the space to the terminal 101 so that the terminal 101 displays the authorized space and the corresponding media content on the user interface through AR or VR technology.
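The lookup the server 102 performs when it receives positioning information can be sketched as follows. This is illustrative only: the flat per-user list of spaces, the 2D center-point test, and the distance threshold are all assumptions, not the patent's specified data model:

```python
def authorized_spaces_near(spaces_by_user, user_id, location, max_dist=50.0):
    """Return the user's authorized spaces whose center is within
    `max_dist` of the location reported by the terminal (sketch)."""
    lx, ly = location
    nearby = []
    for space in spaces_by_user.get(user_id, []):
        cx, cy = space["center"]
        if ((cx - lx) ** 2 + (cy - ly) ** 2) ** 0.5 <= max_dist:
            nearby.append(space)
    return nearby
```

For each space returned, the server would additionally send any media content already added in that space, so the terminal can display both via AR or VR technology.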
The terminal 101 may be a terminal device such as a mobile phone, a tablet, a near-eye display device, etc., and the server 102 may be a single server or a server cluster. When server 102 is a server cluster, a plurality of service nodes may be included in the server cluster, each of which may be configured to perform a different function as described above. For example, media content service nodes, digital map service nodes, and edge service nodes may be included in the server cluster. Wherein the edge service node may be configured to receive a request from the terminal 101 and feed back corresponding information to the terminal 101. The media content service node may be used to store relevant information of media content that has been added to the digital map. The digital map service node may be configured to store information related to the digital map, such as spatial information of each authorized space in the digital map, map data of the digital map, and the like.
Illustratively, referring to FIG. 2, the terminal 101 may include a display module 1011, a user interface module 1012, an authorized space loading module 1013, an authorized space presentation module 1014, a media content adding module 1015, a positioning and matching module 1016.
The display module 1011 may be used to display live images acquired by the terminal 101 in real time, and may display media content added to a digital map through AR or VR technology. The user interface module 1012 is used to present a user interface and interact with a user through the user interface. The authorized space loading module 1013 is configured to load space information of an authorized space in the acquired digital map. The authorized space presentation module 1014 is configured to perform visual presentation of the authorized space according to space information of the authorized space. The media content adding module 1015 is used to add media content to a digital map. The positioning and matching module 1016 is configured to obtain current positioning information and pose information of the terminal 101, and the positioning and matching module 1016 is further configured to match the displayed field image with a digital map.
For example, referring to FIG. 3, the server 102 may include an authorization space management module 1021, a media content management module 1022, a location management module 1023, and a storage module 1024.
The authorization space management module 1021 is configured to process a request for applying for an authorization space sent by the terminal 101, and manage the authorization space owned by each user. The media content management module 1022 is used to manage media content added to the digital map. The location management module 1023 is configured to respond to a location request of the terminal. The storage module 1024 is used to store information such as data required by each of the above modules.
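For illustration only, the server-side module division described above can be sketched as follows; all class, method, and field names here are hypothetical and are not part of the embodiment:

```python
# Illustrative sketch (not part of the embodiment) of the server-side
# module division; all class and method names are hypothetical.

class AuthorizedSpaceManagement:
    """Corresponds to module 1021: manages authorized spaces per user."""
    def __init__(self):
        self.spaces = {}  # user_id -> list of space descriptors

    def apply_for_space(self, user_id, space_info):
        # Process a request for applying for an authorized space.
        self.spaces.setdefault(user_id, []).append(space_info)
        return space_info

    def spaces_of(self, user_id):
        return self.spaces.get(user_id, [])


class MediaContentManagement:
    """Corresponds to module 1022: manages media content added to the map."""
    def __init__(self):
        self.content = {}  # space_id -> list of media items

    def add_media(self, space_id, media):
        self.content.setdefault(space_id, []).append(media)

    def media_in(self, space_id):
        return self.content.get(space_id, [])


# A terminal applies for an authorized space, then adds media content to it.
spaces = AuthorizedSpaceManagement()
media = MediaContentManagement()
spaces.apply_for_space("user-1", {"space_id": "s-42", "bounds": (0, 0, 10, 10)})
media.add_media("s-42", {"type": "AR-poster", "uri": "poster.png"})
```

The storage module 1024 would sit behind both classes in practice; here, in-memory dictionaries stand in for it.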
FIG. 4 shows a schematic structural diagram of an electronic device 400. The functions of the terminal in the embodiments of the present application may be implemented by the electronic device.
Electronic device 400 may include a processor 410, an external memory interface 420, an internal memory 421, a universal serial bus (universal serial bus, USB) interface 430, a charge management module 440, a power management module 441, a battery 442, an antenna 1, an antenna 2, a mobile communication module 450, a wireless communication module 460, an audio module 470, a speaker 470A, a receiver 470B, a microphone 470C, a headphone interface 470D, a sensor module 480, keys 490, a motor 491, an indicator 492, a camera 493, a display screen 494, and a subscriber identity module (subscriber identification module, SIM) card interface 495, among others. The sensor module 480 may include a pressure sensor 480A, a gyroscope sensor 480B, a barometric pressure sensor 480C, a magnetic sensor 480D, an acceleration sensor 480E, a distance sensor 480F, a proximity light sensor 480G, a fingerprint sensor 480H, a temperature sensor 480J, a touch sensor 480K, an ambient light sensor 480L, a bone conduction sensor 480M, and the like.
It should be understood that the illustrated structure of the embodiment of the present invention does not constitute a specific limitation on the electronic device 400. In other embodiments of the present application, electronic device 400 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 410 may include one or more processing units, such as: the processor 410 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may generate operation control signals according to instruction operation codes and timing signals, to control instruction fetching and instruction execution.
A memory may also be provided in the processor 410 for storing instructions and data. In some embodiments, the memory in the processor 410 is a cache. The memory may hold instructions or data that the processor 410 has just used or uses cyclically. If the processor 410 needs to use the instructions or data again, it may fetch them directly from the memory. This avoids repeated accesses and reduces the waiting time of the processor 410, thereby improving system efficiency.
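The cache behavior described above, where a repeated access is served from fast memory instead of a slower fetch, can be sketched minimally (the capacity, eviction policy, and names are illustrative only):

```python
# Minimal sketch of the cache behavior described above: recently used
# items are retained so a repeated access avoids a slower fetch.
# Capacity and LRU eviction are assumptions for the example.
from collections import OrderedDict

class TinyCache:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = 0
        self.misses = 0

    def get(self, key, slow_fetch):
        if key in self.store:
            self.hits += 1
            self.store.move_to_end(key)  # mark as most recently used
        else:
            self.misses += 1
            self.store[key] = slow_fetch(key)
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # evict least recently used
        return self.store[key]

cache = TinyCache()
fetches = []
def slow_fetch(addr):
    fetches.append(addr)  # record each access to the slower memory
    return addr * 2

cache.get(1, slow_fetch)  # miss: fetched from slow memory
cache.get(1, slow_fetch)  # hit: served from cache, no new fetch
```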
In some embodiments, processor 410 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bi-directional synchronous serial bus comprising a serial data line (serial data line, SDA) and a serial clock line (serial clock line, SCL). In some embodiments, the processor 410 may contain multiple sets of I2C buses. The processor 410 may be coupled to the touch sensor 480K, a charger, a flash, the camera 493, etc., respectively, through different I2C bus interfaces. For example: the processor 410 may be coupled to the touch sensor 480K through an I2C interface, such that the processor 410 communicates with the touch sensor 480K through the I2C bus interface to implement the touch function of the electronic device 400.
The I2S interface may be used for audio communication. In some embodiments, the processor 410 may contain multiple sets of I2S buses. The processor 410 may be coupled to the audio module 470 via an I2S bus to enable communication between the processor 410 and the audio module 470. In some embodiments, the audio module 470 may communicate audio signals to the wireless communication module 460 through the I2S interface to implement a function of answering a call through a bluetooth headset.
The PCM interface may also be used for audio communication to sample, quantize, and encode analog signals. In some embodiments, the audio module 470 and the wireless communication module 460 may be coupled through a PCM bus interface. In some embodiments, the audio module 470 may also transmit audio signals to the wireless communication module 460 through the PCM interface, to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal asynchronous serial data bus used for asynchronous communication. The bus may be a bi-directional communication bus. It converts data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is typically used to connect the processor 410 with the wireless communication module 460. For example: the processor 410 communicates with the bluetooth module in the wireless communication module 460 through a UART interface to implement bluetooth functions. In some embodiments, the audio module 470 may transmit an audio signal to the wireless communication module 460 through the UART interface, implementing a function of playing music through a bluetooth headset.
The MIPI interface may be used to connect the processor 410 to peripheral devices such as the display screen 494, the camera 493, and the like. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. In some embodiments, processor 410 and camera 493 communicate through a CSI interface to implement the photographing function of electronic device 400. The processor 410 and the display screen 494 communicate via a DSI interface to implement the display functions of the electronic device 400.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 410 with the camera 493, display screen 494, wireless communication module 460, audio module 470, sensor module 480, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
The USB interface 430 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 430 may be used to connect a charger to charge the electronic device 400, or may be used to transfer data between the electronic device 400 and a peripheral device. It may also be used to connect a headset to play audio through the headset. The interface may further be used to connect other electronic devices, such as AR devices.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present invention is only illustrative, and is not meant to limit the structure of the electronic device 400. In other embodiments of the present application, the electronic device 400 may also use different interfacing manners, or a combination of multiple interfacing manners, as in the above embodiments.
The charge management module 440 is configured to receive a charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 440 may receive a charging input of a wired charger through the USB interface 430. In some wireless charging embodiments, the charge management module 440 may receive a wireless charging input through a wireless charging coil of the electronic device 400. While charging the battery 442, the charge management module 440 may also supply power to the electronic device through the power management module 441.
The power management module 441 is configured to connect the battery 442, the charge management module 440 and the processor 410. The power management module 441 receives input from the battery 442 and/or the charge management module 440 to power the processor 410, the internal memory 421, the display screen 494, the camera 493, the wireless communication module 460, and the like. The power management module 441 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 441 may also be disposed in the processor 410. In other embodiments, the power management module 441 and the charge management module 440 may be disposed in the same device.
The wireless communication function of the electronic device 400 may be implemented by the antenna 1, the antenna 2, the mobile communication module 450, the wireless communication module 460, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in electronic device 400 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 450 may provide a solution for wireless communication, including 2G/3G/4G/5G, as applied to the electronic device 400. The mobile communication module 450 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 450 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 450 may amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate the electromagnetic waves. In some embodiments, at least some of the functional modules of the mobile communication module 450 may be disposed in the processor 410. In some embodiments, at least some of the functional modules of the mobile communication module 450 may be disposed in the same device as at least some of the modules of the processor 410.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through audio devices (not limited to speaker 470A, receiver 470B, etc.), or displays images or video through display screen 494. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 450 or other functional module, independent of the processor 410.
The wireless communication module 460 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 400. The wireless communication module 460 may be one or more devices that integrate at least one communication processing module. The wireless communication module 460 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and transmits the processed signals to the processor 410. The wireless communication module 460 may also receive a signal to be transmitted from the processor 410, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 450 of electronic device 400 are coupled, and antenna 2 and wireless communication module 460 are coupled, such that electronic device 400 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a BeiDou navigation satellite system (beidou navigation satellite system, BDS), a quasi-zenith satellite system (quasi-zenith satellite system, QZSS), and/or a satellite-based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 400 implements display functions via a GPU, a display screen 494, and an application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display screen 494 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 410 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 494 is used to display images, videos, and the like. The display screen 494 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flex light-emitting diode, FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device 400 may include 1 or N display screens 494, where N is a positive integer greater than 1.
Electronic device 400 may implement capture functionality through an ISP, camera 493, video codec, GPU, display screen 494, and application processor, among others.
The ISP is used to process data fed back by the camera 493. For example, during photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, to convert it into an image visible to the naked eye. The ISP may also perform algorithm optimization on the noise, brightness, and skin color of the image. The ISP may also optimize parameters such as exposure and color temperature of a photographing scene. In some embodiments, the ISP may be provided in the camera 493.
The camera 493 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 400 may include 1 or N cameras 493, N being a positive integer greater than 1.
The digital signal processor is used to process digital signals, and can process other digital signals in addition to digital image signals. For example, when the electronic device 400 selects a frequency bin, the digital signal processor is used to perform Fourier transform on the frequency bin energy, or the like.
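As an illustration of inspecting energy per frequency bin via a Fourier transform, the following sketch uses a naive DFT on a pure tone; the signal, bin index, and function name are assumptions for the example, not part of the embodiment:

```python
# Illustrative sketch: a naive DFT used to measure the energy in one
# frequency bin, as a DSP might when a frequency bin is selected.
import math

def bin_energy(samples, k):
    """Energy of DFT bin k of a real-valued sample sequence (naive DFT)."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
    im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
    return re * re + im * im

# A pure tone at bin 5 concentrates its energy in that bin.
n = 64
tone = [math.cos(2 * math.pi * 5 * i / n) for i in range(n)]
energies = [bin_energy(tone, k) for k in range(n // 2)]
strongest = max(range(n // 2), key=lambda k: energies[k])
```

A production DSP would use an FFT rather than this O(n²) DFT; the sketch only shows what "frequency bin energy" means.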
Video codecs are used to compress or decompress digital video. The electronic device 400 may support one or more video codecs. Thus, the electronic device 400 may play or record video in multiple encoding formats, such as moving picture experts group (moving picture experts group, MPEG)-1, MPEG-2, MPEG-3, and MPEG-4.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 400 may be implemented by the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 420 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 400. The external memory card communicates with the processor 410 through an external memory interface 420 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 421 may be used to store computer-executable program code, where the program code includes instructions. The internal memory 421 may include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data (such as audio data and a phone book) created during use of the electronic device 400, and the like. In addition, the internal memory 421 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (universal flash storage, UFS). The processor 410 performs various functional applications and data processing of the electronic device 400 by executing the instructions stored in the internal memory 421 and/or the instructions stored in the memory provided in the processor.
Electronic device 400 may implement audio functions, such as music playing and recording, through the audio module 470, the speaker 470A, the receiver 470B, the microphone 470C, the headphone interface 470D, the application processor, and the like.
The audio module 470 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 470 may also be used to encode and decode audio signals. In some embodiments, the audio module 470 may be disposed in the processor 410, or some functional modules of the audio module 470 may be disposed in the processor 410.
Speaker 470A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The electronic device 400 may listen to music, or to hands-free conversations, through the speaker 470A.
The receiver 470B, also referred to as an "earpiece," is used to convert an audio electrical signal into a sound signal. When the electronic device 400 answers a call or receives a voice message, the voice may be heard by placing the receiver 470B close to the human ear.
The microphone 470C, also referred to as a "mike" or a "mic," is used to convert a sound signal into an electrical signal. When making a call or sending voice information, the user may speak with the mouth close to the microphone 470C to input a sound signal into the microphone 470C. At least one microphone 470C may be provided in the electronic device 400. In other embodiments, the electronic device 400 may be provided with two microphones 470C, which may implement noise reduction in addition to collecting sound signals. In other embodiments, the electronic device 400 may alternatively be provided with three, four, or more microphones 470C, to collect sound signals, reduce noise, identify sound sources, implement directional recording functions, and the like.
The headphone interface 470D is used to connect a wired headphone. The headphone interface 470D may be the USB interface 430, or may be a 3.5 mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 480A is used to sense a pressure signal, and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 480A may be disposed on the display screen 494. There are many types of pressure sensors 480A, such as a resistive pressure sensor, an inductive pressure sensor, and a capacitive pressure sensor. A capacitive pressure sensor may include at least two parallel plates made of conductive material. When a force is applied to the pressure sensor 480A, the capacitance between the electrodes changes. The electronic device 400 determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display screen 494, the electronic device 400 detects the intensity of the touch operation through the pressure sensor 480A. The electronic device 400 may also calculate the touch location based on the detection signal of the pressure sensor 480A. In some embodiments, touch operations that act on the same touch location but have different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose touch operation intensity is less than a first pressure threshold acts on the short message application icon, an instruction for viewing the short message is executed; when a touch operation whose touch operation intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction for creating a new short message is executed.
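The two-threshold behavior in the example above can be expressed as a short dispatch rule; the threshold value and the instruction names are illustrative only:

```python
# Sketch of the pressure-threshold behavior: the same touch location maps
# to different instructions depending on touch operation intensity.
# The threshold value and instruction names are illustrative assumptions.

FIRST_PRESSURE_THRESHOLD = 0.5

def sms_icon_action(intensity):
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_sms"    # light press: view the short message
    return "create_sms"      # firm press: create a new short message
```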
The gyro sensor 480B may be used to determine the motion posture of the electronic device 400. In some embodiments, the angular velocity of the electronic device 400 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 480B. The gyro sensor 480B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 480B detects the shake angle of the electronic device 400, calculates, according to the angle, the distance that the lens module needs to compensate, and makes the lens counteract the shake of the electronic device 400 through reverse motion, thereby realizing anti-shake. The gyro sensor 480B may also be used for navigation and somatosensory game scenarios.
The air pressure sensor 480C is used to measure air pressure. In some embodiments, electronic device 400 calculates altitude from barometric pressure values measured by barometric pressure sensor 480C, aiding in positioning and navigation.
The magnetic sensor 480D includes a Hall sensor. The electronic device 400 may detect the opening and closing of a flip holster using the magnetic sensor 480D. In some embodiments, when the electronic device 400 is a flip phone, the electronic device 400 may detect the opening and closing of the flip according to the magnetic sensor 480D. Features such as automatic unlocking upon flip opening may then be set according to the detected opening/closing state of the holster or of the flip.
The acceleration sensor 480E may detect the magnitude of acceleration of the electronic device 400 in various directions (typically along three axes). The magnitude and direction of gravity may be detected when the electronic device 400 is stationary. The acceleration sensor 480E may also be used to recognize the posture of the electronic device, and is applied to applications such as landscape/portrait switching and pedometers.
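The landscape/portrait decision mentioned above can be sketched from the gravity components a stationary accelerometer reports; the axis convention and comparison rule are assumptions for the example:

```python
# Illustrative sketch: deciding landscape vs. portrait orientation from
# the gravity components of an acceleration sensor. The axis convention
# (y along the device's long edge) is an assumption, not from the patent.

def screen_orientation(ax, ay):
    """Return 'portrait' when gravity lies mainly along the y axis,
    'landscape' when it lies mainly along the x axis."""
    return "portrait" if abs(ay) >= abs(ax) else "landscape"
```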
A distance sensor 480F for measuring distance. The electronic device 400 may measure the distance by infrared or laser. In some embodiments, the electronic device 400 may range using the distance sensor 480F to achieve fast focus.
The proximity light sensor 480G may include, for example, a light-emitting diode (LED) and a light detector, for example, a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 400 emits infrared light outwards through the light-emitting diode. The electronic device 400 detects infrared light reflected from a nearby object using the photodiode. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 400. When insufficient reflected light is detected, the electronic device 400 may determine that there is no object near the electronic device 400. The electronic device 400 may use the proximity light sensor 480G to detect that the user holds the electronic device 400 close to the ear for a call, so as to automatically turn off the screen for power saving. The proximity light sensor 480G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 480L is used to sense ambient light level. The electronic device 400 may adaptively adjust the brightness of the display screen 494 based on the perceived ambient light level. The ambient light sensor 480L may also be used to automatically adjust white balance during photographing. Ambient light sensor 480L may also cooperate with proximity light sensor 480G to detect whether electronic device 400 is in a pocket to prevent false touches.
The fingerprint sensor 480H is used to collect a fingerprint. The electronic device 400 may utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, photograph the fingerprint, answer the incoming call, etc.
The temperature sensor 480J is used to detect temperature. In some embodiments, the electronic device 400 executes a temperature processing strategy using the temperature detected by the temperature sensor 480J. For example, when the temperature reported by the temperature sensor 480J exceeds a threshold, the electronic device 400 reduces the performance of a processor located near the temperature sensor 480J, in order to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 400 heats the battery 442 to avoid abnormal shutdown of the electronic device 400 caused by low temperature. In other embodiments, when the temperature is below a further threshold, the electronic device 400 boosts the output voltage of the battery 442 to avoid abnormal shutdown caused by low temperature.
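The three-threshold temperature processing strategy described above can be sketched as follows; the threshold values and action names are illustrative assumptions, not from the patent:

```python
# Sketch of the temperature processing strategy described above.
# All threshold values and action names are illustrative assumptions.

HIGH_TEMP = 45.0       # above this, throttle the nearby processor
LOW_TEMP = 0.0         # below this, heat the battery
VERY_LOW_TEMP = -10.0  # below this, also boost the battery output voltage

def thermal_actions(temp_c):
    actions = []
    if temp_c > HIGH_TEMP:
        actions.append("throttle_processor")
    if temp_c < LOW_TEMP:
        actions.append("heat_battery")
    if temp_c < VERY_LOW_TEMP:
        actions.append("boost_battery_voltage")
    return actions
```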
Touch sensor 480K, also referred to as a "touch device". The touch sensor 480K may be disposed on the display screen 494, and the touch sensor 480K and the display screen 494 form a touch screen, which is also called a "touch screen". The touch sensor 480K is used to detect a touch operation acting thereon or thereabout. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display screen 494. In other embodiments, the touch sensor 480K may also be disposed on a surface of the electronic device 400 at a different location than the display screen 494.
The bone conduction sensor 480M may acquire a vibration signal. In some embodiments, the bone conduction sensor 480M may acquire a vibration signal of a vibrating bone block of a human vocal part. The bone conduction sensor 480M may also contact the pulse of the human body to receive a blood pressure beat signal. In some embodiments, the bone conduction sensor 480M may also be provided in a headset, combined into a bone conduction headset. The audio module 470 may parse out a voice signal based on the vibration signal, of the vibrating bone block of the vocal part, obtained by the bone conduction sensor 480M, so as to implement a voice function. The application processor may parse out heart rate information based on the blood pressure beat signal acquired by the bone conduction sensor 480M, so as to implement a heart rate detection function.
The keys 490 include a power key, a volume key, and the like. The keys 490 may be mechanical keys, or may be touch keys. The electronic device 400 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 400.
The motor 491 may generate a vibration cue. The motor 491 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 491 may also correspond to different vibration feedback effects by touch operations applied to different areas of the display screen 494. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 492 may be an indicator light, which may be used to indicate a state of charge, a change in charge, an indication message, a missed call, a notification, or the like.
The SIM card interface 495 is used to connect to a SIM card. A SIM card may be inserted into the SIM card interface 495 or removed from the SIM card interface 495, to come into contact with or be separated from the electronic device 400. The electronic device 400 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 495 may support a Nano SIM card, a Micro SIM card, and the like. Multiple cards may be inserted into the same SIM card interface 495 simultaneously. The types of the multiple cards may be the same or different. The SIM card interface 495 may also be compatible with different types of SIM cards, and with external memory cards. The electronic device 400 interacts with the network through the SIM card to implement functions such as calling and data communication. In some embodiments, the electronic device 400 uses an eSIM, that is, an embedded SIM card. The eSIM card can be embedded in the electronic device 400 and cannot be separated from the electronic device 400.
The software system of the electronic device 400 may employ a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiments of the present invention, an Android system with a layered architecture is taken as an example to illustrate the software structure of the electronic device 400.
Fig. 5 is a software architecture block diagram of an electronic device 400 according to an embodiment of the invention.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, an application layer, an application framework layer, the Android runtime and system libraries, and a kernel layer.
The application layer may include a series of application packages.
As shown in fig. 5, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 5, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the electronic device 400, such as management of call states (including connected, hung up, and the like).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information in the status bar. It can be used to convey notification-type messages, which may automatically disappear after a short stay without requiring user interaction. For example, the notification manager is used to notify that a download is complete, to give a message alert, and so on. The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the system, such as a notification of an application running in the background, or present a notification on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is emitted, the electronic device vibrates, or an indicator light blinks.
The Android runtime includes a core library and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the function interfaces that the Java language needs to call, and the other part is the core libraries of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
The media libraries support playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media libraries may support a variety of audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The workflow of the electronic device 400 software and hardware is illustrated below in connection with capturing a photo scene.
When the touch sensor 480K receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as touch coordinates and a time stamp of the touch operation). The raw input event is stored at the kernel layer. The application framework layer acquires the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking as an example a touch tap operation whose corresponding control is the camera application icon, the camera application calls an interface of the application framework layer to start the camera application, then starts the camera driver by calling the kernel layer, and captures a still image or video through the camera 493.
Fig. 6 is a schematic structural diagram of a server 600 according to an embodiment of the present application, where the server in the system architecture shown in fig. 1 may be implemented by the server 600 shown in fig. 6. Referring to fig. 6, the server 600 includes at least one processor 601, a communication bus 602, a memory 603, and at least one communication interface 604.
The processor 601 may be a general purpose central processing unit (Central Processing Unit, CPU), microprocessor, application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the present application.
The communication bus 602 may include a pathway to transfer information between the aforementioned components.
The memory 603 may be, but is not limited to, a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including a compact disc, a laser disc, an optical disc, a digital versatile disc, a Blu-ray disc, and the like), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 603 may be standalone and connected to the processor 601 via the communication bus 602. The memory 603 may also be integrated with the processor 601.
The communication interface 604 uses any transceiver-like device for communicating with other devices or communication networks, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
In a particular implementation, the processor 601 may include one or more CPUs, such as CPU0 and CPU1 shown in FIG. 4, as an embodiment.
In a particular implementation, the server may include multiple processors, such as processor 601 and processor 605 shown in FIG. 4, as one embodiment. Each of these processors may be a single-core (single-CPU) processor or may be a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In a specific implementation, the computer device may also include an output device 606 and an input device 607, as one embodiment. The output device 606 communicates with the processor 601 and can display information in a variety of ways. For example, the output device 606 may be a liquid crystal display (liquid crystal display, LCD), a light emitting diode (light emitting diode, LED) display device, a Cathode Ray Tube (CRT) display device, or a projector (projector), or the like. The input device 607 is in communication with the processor 601 and may receive user input in a variety of ways. For example, the input device 607 may be a mouse, a keyboard, a touch screen device, a sensing device, or the like.
The memory 603 is used for storing program codes for executing the embodiments of the present application, and the processor 601 controls the execution. The processor 601 is arranged to execute program code 608 stored in the memory 603. The server shown in fig. 1 may interact with the terminal through a processor 601 and program code 608 in a memory 603 to enable loading of an authorization space and addition and presentation of media content.
The best approach to AR content registration is "what you see is what you get": during content registration, the space in which content can be placed is visible in real time. Based on this, the present application provides a method for presenting an authorized virtual space, which makes it convenient for users to perform AR content registration within a manageable area. On this basis, an embodiment of the present application further provides an interactive flow for real-time content registration, in which a user views the authorized space, drags AR content, and saves and uploads it in real time against the real scene, thereby realizing on-site, real-time AR content registration.
Fig. 7 is a flowchart of a method for displaying an authorized space based on a digital map according to an embodiment of the present application. The method can be applied to a terminal, see fig. 7, and comprises the following steps:
Step 701: and obtaining a preview stream of the target scene.
In this embodiment of the application, each user may register a user account with the server. That is, a user first needs to register and log in to the system with a unique user account, which is used to manage his or her digital space assets (i.e., authorized spaces). After logging in through the user account, the user can acquire the preview stream of the target scene. For ease of explanation, a user who owns an authorized space in the digital map, or who wants to apply for one, may be referred to as a registered user. The scene in which the terminal is currently located may include all content within the terminal's field of view in the current environment, such as a room or a piece of land; the specific range and size may be freely defined depending on the environment or on specific requirements, and are not limited in this application. Specifically, the target scene may be part or all of the scene in which the terminal is currently located. For example, the target scene may be a store in the environment in which the terminal is currently located. Optionally, the target scene may also include an object of interest to the user in the scene in which the terminal is currently located (e.g., a target object such as a store sign). For example, the target scene may be the signboard of a store, or another sign object in the scene where the terminal is currently located.
Specifically, the terminal may collect the preview stream of the target scene through a camera configured by the terminal itself, or may also receive the preview stream of the target scene collected and sent by other devices in real time. The terminal can call the camera by a system to collect the preview flow of the target scene, or can call the camera by an installed certain application program to collect the preview flow of the target scene.
In addition, in the embodiment of the present application, the preview stream of the target scene may also be referred to as a preview stream corresponding to the target scene.
Step 702: and acquiring the first user identification and the pose of the terminal.
In this embodiment of the application, the terminal may acquire the user account currently logged in and use that user account as the first user identifier. In addition, the user account currently logged in on the terminal is not used to limit who the user of the terminal is. In other words, the user account currently logged in on the terminal may or may not be the user account of the user of the terminal, which is not limited in this application. It should be noted that the first user identifier may be obtained by the terminal system, or by a third-party application installed on the terminal, which is not limited in this embodiment of the application.
In addition, the terminal can acquire the current pose of the terminal. The pose of the terminal may include a position and a pose of the terminal in the real world, wherein the position in the real world may include a position coordinate in a world coordinate system, or a longitude and latitude in the real world, and the pose of the terminal may include a rotation angle, a pitch angle, and a roll angle.
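The pose described above can be represented as a position plus an orientation. The following is a minimal illustrative sketch; the `Pose` class and all field names are assumptions for illustration, not structures defined by the embodiment:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    # position in the digital map's reference (world) coordinate system
    x: float
    y: float
    z: float
    # orientation of the terminal, in degrees
    yaw: float    # rotation angle
    pitch: float  # pitch angle
    roll: float   # roll angle

# a terminal standing 1.6 m above ground, facing 90 degrees from north
terminal_pose = Pose(x=12.0, y=3.5, z=1.6, yaw=90.0, pitch=0.0, roll=0.0)
```

Latitude and longitude could equally serve as the position fields; the world-coordinate form is used here for simplicity.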
In one possible implementation, the terminal may detect the current pose of the terminal through its own configured sensors. For example, the terminal may detect the current pose through sensors such as acceleration sensors, gyroscopes, positioning components, and the like.
Alternatively, in another possible implementation manner, the terminal may send the acquired multi-frame image in the preview stream of the target scene to the server. The server can determine the position of the terminal when the terminal collects the multi-frame image according to the matching of the received multi-frame image and the characteristic information contained in the digital map. Then, the server can determine the pose of the terminal when the terminal collects the multi-frame image according to the characteristic image matched with the multi-frame image and the multi-frame image, and send the pose to the terminal.
Alternatively, in some possible scenarios, the terminal may combine the two possible implementations to obtain the pose of the terminal. For example, the terminal may acquire information such as latitude and longitude and altitude where the terminal is currently located through a sensor, and send the information and the acquired multi-frame images in the preview stream to the server, and the server may determine the gesture of the terminal according to the information and the multi-frame images, and send the gesture to the terminal.
Step 703: and acquiring n authorized spaces according to the first user identifier and the pose of the terminal.
After the terminal acquires the first user identifier and the pose, n authorized spaces can be acquired from the server according to the first user identifier and the pose. The n obtained authorized spaces are n mutually non-overlapping three-dimensional spaces corresponding to the first user identifier in the digital map corresponding to the target scene. Wherein n is an integer equal to 1 or greater than 1.
The first way is: the terminal can send the first user identifier and the pose to the server, and the server can acquire n authorized spaces according to the first user identifier and the pose and send the n authorized spaces to the terminal.
In this implementation, after receiving the first user identifier and the pose of the terminal, the server may first obtain the digital map corresponding to the target scene according to the pose of the terminal. The server may then obtain all authorized spaces included in that digital map and search them for an authorized space corresponding to the first user identifier; if one exists, the server may send the found authorized space to the terminal. Specifically, in this embodiment of the application, the authorized space sent by the server to the terminal may refer to space information that characterizes the corresponding authorized space.
The server may store therein space information of authorized spaces owned by the respective users in the digital map. For example, the server may store a mapping relationship between the user identification of each user and the spatial information of the authorized space owned by the corresponding user in the digital map of the different areas. The different user identifications correspond to different authorized spaces, that is, the same authorized space does not simultaneously correspond to different user identifications, that is, the authorized spaces owned by different users are different. In addition, any two authorization spaces may not overlap each other, that is, there is no overlapping portion between any two authorization spaces. Alternatively, in some cases, there may be an overlap between any two authorized spaces, in which case the two spaces where the overlap exists may each correspond to their own display times, which are different. When the server sends the authorized spaces to the terminal, for the authorized spaces with overlapping, the display time of each authorized space can be issued for the authorized spaces, so that the terminal can perform time-sharing display according to different display times when displaying the overlapped authorized spaces.
Specifically, after the server obtains the digital map corresponding to the target scene according to the pose of the terminal, the mapping relationship corresponding to all authorized spaces included in the digital map corresponding to the target scene can be determined from the mapping relationship according to the range of the digital map corresponding to the target scene, that is, the mapping relationship between the space information of the authorized spaces and the corresponding user identifiers is obtained, and then the server can obtain the space information of the authorized spaces corresponding to the first user identifiers from the mapping relationship and send the space information of the authorized spaces corresponding to the first user identifiers to the terminal. The spatial information of the authorized space may include a pose of the authorized space in the digital map, among others. The pose of the authorized space in the digital map refers to the pose of the authorized space in the reference coordinate system of the digital map.
Optionally, if the server does not find the authorized space corresponding to the first user identifier in all the authorized spaces included in the digital map corresponding to the target scene, it is indicated that the authorized space corresponding to the first user identifier does not exist in the digital map corresponding to the target scene, and at this time, the server may return a notification message of failure in loading the authorized space to the terminal, so as to notify that the terminal does not acquire the authorized space corresponding to the first user identifier. In this case, the terminal may interact with the user through the method described in the subsequent embodiments, so that the user applies for the authorization space.
It should be noted that a registered user may have one or more authorized spaces in the digital map corresponding to the target scene. If only one authorized space corresponding to the first user identifier exists in the digital map corresponding to the target scene, the server can directly send the space information of that authorized space to the terminal. If the digital map corresponding to the target scene includes a plurality of authorized spaces corresponding to the first user identifier, the server may directly send the space information of all of them to the terminal. Alternatively, the server may select part of the authorized spaces from the plurality and send only the selected part to the terminal. For example, the server may send the space information of the authorized space closest to the location of the terminal. Alternatively, each authorized space may have a priority, and the server may send the space information of the authorized space with the highest priority. Alternatively, the server may send the space information of a default authorized space among the plurality. The default authorized space may be one of the plurality of authorized spaces set in the server background, or may by default be the earliest-applied authorized space according to the chronological order of the applications.
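The lookup and selection logic described in this first way can be sketched as follows. This is a simplified, hypothetical server-side sketch: the `SPACES` mapping, record fields, and strategy names are all illustrative assumptions, not part of the embodiment:

```python
from math import dist

# hypothetical mapping: user identifier -> authorized-space records in the
# digital map, stored in the chronological order in which they were applied for
SPACES = {
    "user_1": [
        {"id": "s1", "center": (10.0, 5.0, 0.0), "priority": 2},
        {"id": "s2", "center": (80.0, 40.0, 0.0), "priority": 5},
    ],
}

def select_space(user_id, terminal_pos, strategy="nearest"):
    """Return one authorized space for the user: the nearest one, the
    highest-priority one, or the default (earliest-applied) one."""
    spaces = SPACES.get(user_id, [])
    if not spaces:
        # no authorized space for this user: the server would instead return
        # a notification that loading the authorized space failed
        return None
    if strategy == "nearest":
        return min(spaces, key=lambda s: dist(s["center"], terminal_pos))
    if strategy == "priority":
        return max(spaces, key=lambda s: s["priority"])
    return spaces[0]  # default: the earliest-applied authorized space
```

A real server would of course first restrict the candidate spaces to the digital map region matching the terminal's pose; that step is omitted here.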
The second way is: the terminal sends the first user identification, the pose and the space screening condition to the server, and the server can acquire m authorized spaces according to the first user identification and the pose and acquire n authorized spaces meeting the space screening condition from the m authorized spaces. And sending the obtained n authorized spaces to the terminal.
In this implementation, the terminal may send the spatial screening condition to the server in addition to the first user identification and pose to the server. The space filtering condition may include a condition that the authorized space to be acquired by the terminal needs to be met, and the space filtering condition may be input by a registered user identified by the first user identifier. Illustratively, the spatial screening conditions may include one or more of geographic location conditions, priority conditions, and the like. The geographic location condition may include that a distance between a location of the authorized space and a location of the terminal meets a preset distance, and the priority condition may include that a priority of the authorized space is not less than a preset priority.
After receiving the first user identifier, the pose and the space screening condition, the server can refer to the first implementation manner to obtain all authorized spaces corresponding to the first user identifier in the target scene, and then can screen the authorized spaces meeting the space screening condition from the authorized spaces corresponding to the first user identifier and send the authorized spaces to the terminal.
Third mode: the terminal sends a first user identifier and a pose to a server, and the server acquires m authorized spaces according to the first user identifier and the pose; transmitting m authorized spaces to a terminal; the terminal may obtain n authorized spaces satisfying the space screening condition from the m authorized spaces.
The spatial screening condition may be referred to in the second implementation. The implementation is different from the second implementation in that the server does not screen the authorized space corresponding to the first user identifier, but sends all the authorized spaces corresponding to the first user identifier to the terminal, and the terminal selects n authorized spaces satisfying the space screening condition.
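The screening step shared by the second and third ways (whether performed on the server or on the terminal) amounts to filtering the user's authorized spaces against the geographic-location and priority conditions. A minimal sketch, in which the record fields and parameter names are illustrative assumptions:

```python
from math import dist

def filter_spaces(spaces, terminal_pos, max_distance=None, min_priority=None):
    """Keep only the authorized spaces meeting the space screening conditions:
    distance to the terminal within a preset distance, and/or priority not
    less than a preset priority."""
    result = []
    for s in spaces:
        if max_distance is not None and dist(s["center"], terminal_pos) > max_distance:
            continue  # fails the geographic location condition
        if min_priority is not None and s["priority"] < min_priority:
            continue  # fails the priority condition
        result.append(s)
    return result
```

In the second way this function would run on the server before sending; in the third way, on the terminal after receiving all m spaces.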
It should be noted that, in the above various implementations, the authorized spaces sent by the server to the terminal include space information of the corresponding authorized space, that is, the terminal obtaining the authorized space refers to obtaining the space information of the authorized space. Wherein the spatial information includes a pose of the authorized space in the digital map.
Step 704: n authorized spaces are rendered in the preview stream of the target scene.
After acquiring n authorized spaces corresponding to the first user identifier in the target scene, the terminal may present one or more authorized areas superimposed on the real world, and explicitly represent the position, size, shape of the authorized space in the real world by means including, but not limited to, lines, planes, etc. Specifically, the terminal may render n authorized spaces in a preset display form in the preview stream of the target scene according to the pose of the terminal and the spatial information of the n authorized spaces. Wherein the preset display form includes one or more of a preset color, a preset transparency, a cube space, and a sphere space.
Because the spatial information includes the pose of each authorized space, the terminal can determine, from its own pose and the poses of the authorized spaces, the pose in which to render the n authorized spaces in the preview stream of the target scene, that is, the display pose of the n authorized spaces. The terminal can then render the n authorized spaces in the preview stream of the target scene in a preset display form according to the determined display pose. The relative positional relationship between the n authorized spaces rendered in the preview stream of the target scene and a second feature in the preview stream will satisfy a second preset positional relationship. That is, it is ensured that the n authorized spaces are displayed at appropriate positions within the target scene, so that they are adapted to the target scene. For example, assume that the target scene is a store, one of the n authorized spaces is a first preset distance from the store's sign, another authorized space is a second preset distance from the sign, and the distance between the two authorized spaces is a third preset distance. After the two authorized spaces are rendered using the pose of the terminal and their spatial information, the positional relationship between the two authorized spaces and the store sign displayed in the preview stream of the target scene will satisfy the above relationships. Similarly, the poses satisfy the relative pose relationship among the three.
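Determining the display pose essentially means expressing the authorized space's map pose in the terminal's viewing frame. The following is an illustrative sketch of that transform, reduced to translation plus yaw rotation (pitch and roll omitted for brevity); the function and parameter names are assumptions:

```python
import math

def world_to_view(space_pos, terminal_pos, terminal_yaw_deg):
    """Express an authorized space's position in the digital map (world
    coordinates) relative to the terminal's position and heading, which is
    the pose the renderer would use to draw it in the preview stream."""
    dx = space_pos[0] - terminal_pos[0]
    dy = space_pos[1] - terminal_pos[1]
    a = math.radians(-terminal_yaw_deg)  # undo the terminal's heading
    return (dx * math.cos(a) - dy * math.sin(a),
            dx * math.sin(a) + dy * math.cos(a),
            space_pos[2] - terminal_pos[2])
```

Because the transform is applied to the authorized space's stored map pose, any fixed offsets between spaces and scene features (such as the store sign in the example above) are preserved on screen.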
In addition, the terminal may display n authorized spaces according to a preset display form, and render the n authorized spaces with a preset color, that is, the surfaces of the n authorized spaces may be attached with the preset color. In addition, n authorized spaces may be rendered with a preset transparency when displayed. In addition, the n authorization spaces rendered can be cube spaces, sphere spaces or other shapes, namely the shapes of the authorization spaces include but are not limited to the above-mentioned shapes; the specific shape depends on authorized spatial shape settings allowed by the digital map operator. Boundaries of the authorization space may be displayed in lines of a designated line type, e.g., in solid static lines, scrolling or changing dashed lines, etc. It should also be noted that the display of the boundaries of the authorized spaces is likewise intended to include, but not be limited to, the possibilities listed above.
Fig. 8 is a schematic diagram of an authorized space displayed in a preview stream of a target scene according to an embodiment of the present application. As shown in fig. 8, the preview stream of the target scene includes a building, and a space in front of the building in the digital map is an authorized space of the user, and based on this, as shown in fig. 8, in the preview stream, the authorized space may be displayed in front of the building, and a boundary of the authorized space is indicated by a dotted line.
After rendering n authorized spaces in the preview stream of the target scene by the terminal, if the pose of the n authorized spaces is not matched with the pose of the preview stream of the target scene, the pose of the n authorized spaces in the digital map can be adjusted so that the pose of the n authorized spaces is matched with the pose of the preview stream of the target scene; and then, the terminal can send the adjusted pose of the n authorized spaces to the server so that the server can update the pose of the n authorized spaces in the digital map.
It should be noted that, in some cases, the acquired pose of the terminal may be inaccurate; the n authorized spaces rendered in the preview stream of the target scene according to that pose will then not match the target scene. In this case, if media content is added directly in the displayed authorized space, then once accurate positioning is later available and the media content is displayed again according to its recorded pose, the media content will be displayed with a deviation, or its display will even fail. Therefore, to register content accurately, the present application also provides a precise position-matching method, so that registered content can be matched to the 3D map. Specifically, when the terminal determines that there is a pose deviation between the authorized space and the preview stream of the target scene, it can adjust the pose of the authorized space so that the authorized space matches the preview stream, and the position in the digital map thereby corresponds exactly to the position in the real world. The terminal can automatically adjust the poses of the n authorized spaces, or the user can adjust them manually.
When the terminal automatically adjusts n authorized spaces, the terminal can identify features which can be matched with features contained in a digital map corresponding to the target scene from the preview stream of the target scene, and then the terminal can determine pose deviations between the preview stream of the target scene and the n authorized spaces according to the matched features, so that the pose of the n authorized spaces is adjusted according to the pose deviations. For example, assuming that the preview stream of the target scene includes a store at the current location of the terminal, the pose deviation of the store sign in the preview stream and the store sign in the digital map corresponding to the target scene may be determined, and then the poses of the n authorized spaces may be adjusted according to the pose deviation.
Optionally, if the adjustment is performed manually by the user, then when a certain pose deviation exists between the authorized space and the preview stream of the target scene, the user may drag the authorized space to move or rotate it so that the pose of the authorized space adapts to the preview stream of the target scene. Accordingly, after detecting the user operation, the terminal can move or rotate the authorized space in real time according to the user operation and record the pose of the authorized space; after the operation stops, the terminal can send the last recorded pose of the authorized space to the server. The server may replace the pose included in the spatial information of the authorized space with the adjusted pose.
Fig. 9 shows a schematic diagram of the presence of pose deviations of the authorized space from the preview stream of the target scene. As shown in fig. 9, the authorized space has a certain pose deviation from the preview stream of the target scene. In this case, the position of the authorized space can be adjusted automatically or manually by the user by the method described above, so that the authorized space and the live image finally presented are as shown in fig. 8.
The foregoing is an implementation manner of visualizing the authorized spaces in the video stream of the target scene according to the embodiment of the present application. Optionally, after the n authorized spaces are rendered in the video stream corresponding to the target scene, media content may be further added and synchronized in the authorized spaces through the following steps 705 and 706.
Step 705: target media content is obtained.
After the terminal renders n authorized spaces corresponding to the first user identifier in the preview stream of the target scene, the terminal may display a media content adding option on the current interface, and the registered user may trigger the media content adding instruction by performing a selection operation on the media content adding option. After detecting the media content adding instruction, the terminal can acquire the target media content to be added.
After detecting a media content adding instruction triggered by a user, the terminal can display a media content menu on the current interface, where the media content menu can include a plurality of content type options. When a selection instruction for a target content type option is detected, the terminal acquires target media content whose content type is consistent with the target content type indicated by the target content type option, where the target content type option is any one of the plurality of content type options.
It should be noted that the plurality of content type options may include text, pictures, video, audio, models, and the like. The user may select a type option of media content desired to be added as a target content type option and trigger a selection instruction for the target content type option by performing a selection operation on the target content type option. When the terminal receives a selection instruction aiming at the target content type option, different methods can be adopted to acquire the target media content according to different target content type options.
For example, when the target content type option is text, upon receiving a selection instruction for the target content type option, the terminal may display a text input box on the user interface. The user may input text in the box and set the color, size, font, etc. of the input text through text format setting options displayed in the user interface. The terminal may then take the text content entered by the user as the target media content.
When the target content type option is a picture, the terminal can display a plurality of locally stored pictures when receiving a selection instruction for the target content type option; the user can select one or more of these pictures, and the terminal can take the one or more pictures selected by the user as the target media content. Of course, in one possible implementation, when receiving a selection instruction for the target content type option, the terminal may also first display a capture option and a local acquisition option. If the terminal detects that the capture option is triggered, the terminal can capture images through the camera and use the captured images as the target media content. If the terminal detects that the local acquisition option is triggered, the terminal can acquire the target media content from the plurality of locally stored pictures in the manner described above. Alternatively, in other possible implementations, the terminal may also obtain a picture from another device as the target media content.
When the target content type option is video or audio, the terminal may display a file identification list of a plurality of videos or audios stored locally when receiving a selection instruction for the target content type option. The user can select a file identifier from the file identifier list, and the terminal can acquire video or audio identified by the file identifier selected by the user as target media content. Similarly, in some possible implementations, the terminal may also refer to the method for acquiring the target media content of the picture type, and acquire video or audio according to the user selection, or acquire video or audio from other devices, which is not described herein.
When the target content type option is a model, the terminal may display an identification list of the locally stored models when receiving a selection instruction for the target content type option. The user may select an identifier from the list, and the terminal may obtain the model corresponding to the identifier selected by the user and use it as the target media content. Alternatively, the terminal may obtain the model directly from another device. Here, the model is a three-dimensional model created in advance.
Alternatively, in one possible case, the user may freely combine the above-mentioned multiple types of media content, and the terminal may use the combined media content as the target media content; the terminal may also acquire, in advance, combined target media content containing the various types. The embodiments of the present application are not limited in this regard.
FIG. 10 is a schematic diagram illustrating the display of media content type options on the current interface according to an embodiment of the present application. As shown in fig. 10, five content type options are displayed on the interface: text, picture, video, audio, and model. The user can select any one of these options, and the terminal acquires the corresponding type of media content according to the selected type option through the method described above.
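The per-type acquisition logic of step 705 amounts to dispatching on the selected content type option. A minimal sketch follows; the handler names and returned payloads are illustrative assumptions (a real terminal would open a text input box, the local gallery, a file list, and so on).

```python
def get_target_media_content(option, handlers):
    """Dispatch to the type-specific acquisition handler for the selected
    content type option, mirroring how the terminal uses a different
    acquisition method for each type."""
    if option not in handlers:
        raise ValueError(f"unknown content type option: {option}")
    return handlers[option]()

# Illustrative stand-in handlers for the five options in fig. 10.
handlers = {
    "text":    lambda: {"type": "text",    "data": "hello"},
    "picture": lambda: {"type": "picture", "data": "IMG_0001.jpg"},
    "video":   lambda: {"type": "video",   "data": "v001.mp4"},
    "audio":   lambda: {"type": "audio",   "data": "a001.aac"},
    "model":   lambda: {"type": "model",   "data": "sign.glb"},
}
```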
Step 706: adding the target media content within the target authorization space.
After the AR content is ready, the user may "place" the AR content into the visible authorized space and may further adjust information such as the pose, size, position, and color of the AR content. The AR content may only be placed within the authorized space; any portion of the AR content exceeding the authorized space may be displayed in a manner clearly distinguished from the content inside the authorized space (e.g., invisible, rendered in other colors, or semi-transparent) to explicitly indicate that there is an anomaly in the AR content placement area.
Specifically, there may be one or more authorized spaces corresponding to the first user identifier displayed in the preview stream of the target scene. When one authorized space is displayed, that authorized space is the target authorized space. When a plurality of authorized spaces are displayed, the target authorized space may be any one of them; in this case, the authorized space that the user selects for adding the target media content is the target authorized space.
The terminal can implement the addition of the target media content by interacting with the user. For example, the user may perform a drag operation on the target media content to trigger a drag instruction; when the terminal detects a drag instruction for the target media content, it adds the target media content at the drag end position indicated by the drag instruction.
The drag end position indicated by the drag instruction may be a position where the target media content is located when the drag operation is ended. That is, the user can drop the target media content within the desired drop authorization space, i.e., the target authorization space, by performing a drag operation on the target media content.
In addition, after the target media content is placed at the drag end position indicated by the drag instruction, the user can also adjust the size of the target media content by dragging the boundary of the target media content, adjust the pose of the target media content by rotating the target media content, and the like.
After the setting of the target media content is completed, considering that when the user places the target media content there may be a portion of the content that does not fall within the authorized space, and in order to avoid encroaching on the authorized spaces of other users, the terminal may display only the portion of the target media content inside the target authorized space. The content outside the target authorized space may not be displayed, or may be displayed in a way that distinguishes it from the content inside the target authorized space, for example by rendering it semi-transparently or in a different color.
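One simple way to decide which part of the content to display is to intersect bounding volumes. The sketch below treats both the authorized space and the media content as axis-aligned boxes, which is an assumption made purely for illustration; the patent does not prescribe a box representation.

```python
def clip_to_space(space_min, space_max, content_min, content_max):
    """Intersect the content's bounding box with the authorized space.

    Returns (visible_min, visible_max, fully_inside). Only the visible
    part would be rendered normally; the remainder can be hidden or
    rendered semi-transparently / in a different color.
    """
    vis_min = [max(s, c) for s, c in zip(space_min, content_min)]
    vis_max = [min(s, c) for s, c in zip(space_max, content_max)]
    if any(lo > hi for lo, hi in zip(vis_min, vis_max)):
        return None, None, False  # content lies entirely outside the space
    fully_inside = (vis_min == list(content_min)
                    and vis_max == list(content_max))
    return vis_min, vis_max, fully_inside
```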
Fig. 11 is a schematic diagram of displaying target media content according to an embodiment of the present application. As shown in fig. 11, if the target media content is completely located in the authorized space, the target media content may be completely displayed; if part of the target media content exceeds the authorized space, only the content located in the authorized space may be displayed, and the remaining portion may not be displayed.
After the target media content is displayed, the terminal can detect whether a confirmation instruction is received. If the confirmation instruction is received, the terminal can send the currently displayed media content located in the target authorized space, together with the pose of the media content in the digital map, to the server. The server can store the received target media content and its pose in correspondence with the spatial information of the target authorized space, thereby realizing the addition of the target media content. That is, the server can store the AR content together with all information such as its position and posture in the real world, so that it can be completely restored at the next loading.
Optionally, in one possible implementation manner, the terminal may further determine a target relative position relationship between the target media content added to the target authorized space and a target object, and send the target media content, the target object, and the target relative position relationship to the server, so that the server may update the content of other authorized spaces corresponding to the first user identifier in a preset digital map according to the target media content, the target object, and the target relative position relationship. The target object is a preset image or a three-dimensional object included in the digital map corresponding to the target scene. For example, the target object may be a store sign or another feature in the target scene.
That is, the terminal may determine a target object from the digital map corresponding to the target scene and determine the relative positional relationship of the target media content and the target object. The target media content, the target object, and the relative positional relationship are then sent to the server. The server may search a preset digital map for a feature identical to the target object, where the preset digital map may be a digital map corresponding to another scene. If such a feature is found, the server can detect whether an authorized space corresponding to the first user identifier exists in the preset digital map; if so, the server can add the target media content into that authorized space according to the target relative position relationship, so that the position relationship between the target media content and the target object in the preset digital map satisfies the target relative position relationship.
Therefore, in the embodiment of the application, by sending the relative position relationship between the target media content and the target object to the server, the server can update the media content in other authorized spaces corresponding to the first user identifier in the preset digital map according to the relative position relationship and the target object, so that the media content update efficiency is improved.
In addition, in the embodiment of the present application, the relative positional relationship between the target media content and a first feature in the preview stream of the target scene satisfies a first preset positional relationship. The first feature is a preset image or three-dimensional object contained in the preview stream of the target scene. It should be noted that the first feature may be the feature corresponding to the target object in the preview stream of the target scene. For example, assuming that the target object is the sign of store 1, the first feature may be the sign of store 1 in the preview stream of the target scene. In this case, the first preset positional relationship is the same as the target relative positional relationship between the target media content and the target object. In this way, it can be ensured that the pose of the target media content added to the target authorized space is spatially adapted to the real world.
It should be noted that, adding the target media content in the target authorized space may refer to adding the media content when the target authorized space does not include the media content, or may refer to adding the target media content when the target authorized space already includes the media content. Alternatively, in some possible scenarios, the terminal may replace some item of media content already contained within the target authorized space with the target media content after acquiring the target media content.
Alternatively, in other possible scenarios, when the terminal displays the n authorized spaces, the terminal may also display the media content already included in the n authorized spaces. In this case, after step 704, the terminal may skip steps 705 and 706 and instead directly delete or move the media content included in the n authorized spaces according to the user operation.
The terminal may delete all media content included in the target authorized space, or delete some elements of the media content included in the target authorized space, and send the editing mode (i.e., the deletion mode) to the server; the server then uniformly deletes the same media content or elements included in the other authorized spaces corresponding to the first user identifier according to the deletion mode. The deletion mode may include the media content or elements deleted by the terminal and an identifier of the deletion operation.
Alternatively, the terminal may move the media content included in the target authorized space by a preset relative displacement and send the editing mode (i.e., the movement mode) to the server; the server then uniformly moves the same media content or elements included in the other authorized spaces corresponding to the first user identifier according to the movement mode. The movement mode may include the moved media content or elements and the position after the movement.
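The uniform replay of one recorded editing mode across all of a user's authorized spaces can be sketched as below. The dictionary layout (space id → content id → pose) and the edit-record fields are assumptions for illustration only.

```python
def apply_edit_mode(spaces, edit):
    """Replay one editing mode in every authorized space of a user.

    spaces: {space_id: {content_id: (x, y, z) pose}}.
    edit: the editing mode sent by the terminal, either
          {"op": "delete", "content_id": ...} or
          {"op": "move", "content_id": ..., "delta": (dx, dy, dz)}.
    """
    for contents in spaces.values():
        if edit["op"] == "delete":
            contents.pop(edit["content_id"], None)
        elif edit["op"] == "move" and edit["content_id"] in contents:
            pose = contents[edit["content_id"]]
            contents[edit["content_id"]] = tuple(
                p + d for p, d in zip(pose, edit["delta"]))
    return spaces
```

The key property is that the same content identifier is edited everywhere it appears, which is what keeps the user's authorized spaces consistent.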
In the embodiment of the application, the terminal can acquire the authorization space of the current registered user in the digital map corresponding to the target scene according to the first user identifier and the pose of the terminal, and further render the authorization space of the registered user in the preview stream of the target scene, so that the registered user can view the authorization space corresponding to the current scene in real time, and the method is more convenient. In addition, because the authorized space is clearly displayed in the preview flow corresponding to the target scene, the registered user can clearly learn the boundary of the authorized space of the registered user, in this case, the user adds the media content in the authorized space, so that the added media content can be effectively prevented from encroaching on the authorized space of other users, the accurate addition of the media content is realized, and the addition efficiency is improved.
The above embodiments mainly describe a process of displaying, in a preview stream of a target scene, an authorized space corresponding to a first user identifier in the digital map corresponding to the target scene according to the pose of the terminal and the first user identifier. Optionally, in some possible scenarios, there may be multiple authorized spaces corresponding to the first user identifier in the digital maps corresponding to multiple scenes, and the registered user may need to update the media content in these authorized spaces uniformly. In this case, the terminal and the server may synchronize the media content in the multiple authorized spaces through the following steps. That is, the embodiment of the application also provides an interaction flow for registering media content in batches: the user realizes the synchronization of content in different areas by searching for the same target object in a plurality of areas on the 3D map, configuring AR content relative to the pose of the target object, and then saving and uploading.
Fig. 12 is a flowchart illustrating a method for synchronizing media content in multiple authorized spaces according to an embodiment of the present application. As shown in fig. 12, the method includes the steps of:
step 1201: the terminal acquires a first user identification.
In the embodiment of the application, the terminal may use the currently logged-in user account as the first user identifier.
Step 1202: the terminal sends a first user identification to the server.
Step 1203: the server acquires digital maps corresponding to k scenes and authorized spaces corresponding to the first user identifiers in the digital maps corresponding to the scenes according to the first user identifiers.
The server can search for authorized spaces through the logged-in user identifier and determine that the user currently has no less than one authorized space. The authorized spaces are then loaded; the loading manner may be a method of presenting pictures of the multiple regions, or another method.
For example, a mapping relationship between a user identifier, an authorized space, and a scene may be stored in the server, based on which the server may find k scenes corresponding to the first user identifier from the mapping relationship, and obtain an authorized space corresponding to the first user identifier included in each of the k scenes corresponding to the first user identifier. It should be noted that the scene here may include a store, a specified feature, and the like. For example, the first scene may be a store, that is, the digital map corresponding to the first scene is the digital map corresponding to a store.
Step 1204: the server sends the digital maps corresponding to the k scenes and the authorized space corresponding to the first user identification in the digital map corresponding to each scene to the terminal.
Step 1205: the terminal selects a first scene from k scenes according to a preset rule, acquires a first digital map corresponding to the first scene, and displays the first digital map and a first authorized space.
After receiving the digital maps corresponding to the k scenes and the authorized space corresponding to the first user identifier in the digital map corresponding to each scene, the terminal can select the first scene from the k scenes according to a preset rule. The preset rule may be: closest to the current location of the terminal, highest priority, or default. That is, the terminal may select the scene closest to the position of the terminal from the k scenes as the first scene; or select the scene with the highest priority from the k scenes as the first scene; or select a default scene from the k scenes as the first scene. The foregoing are merely a few possible implementations of the embodiments of the present application and are not intended to limit the embodiments of the present application.
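The preset-rule selection above can be sketched as a small function; the scene fields (`pos`, `priority`, `default`) and the rule names are illustrative assumptions.

```python
import math

def select_first_scene(scenes, terminal_pos, rule="nearest"):
    """Pick the first scene from the k candidate scenes per the preset rule.

    scenes: list of dicts with 'name', 'pos' (x, y) and 'priority';
    terminal_pos: current (x, y) position of the terminal.
    """
    if rule == "nearest":
        return min(scenes, key=lambda s: math.dist(s["pos"], terminal_pos))
    if rule == "priority":
        return max(scenes, key=lambda s: s["priority"])
    if rule == "default":
        return next((s for s in scenes if s.get("default")), scenes[0])
    raise ValueError(f"unknown preset rule: {rule}")
```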
Optionally, in another possible implementation manner, the terminal may also display, in a digital map corresponding to each scene, an authorized space corresponding to the first user identifier in the digital map in a carousel manner, or the terminal may also display, in the digital map, scene marks corresponding to k scenes according to positions of the k scenes. In displaying k scenes, the registered user may select a certain scene, and after the terminal detects a selection operation of the user, the scene selected by the registered user may be regarded as a first scene.
After determining the first scene, the terminal can display an authorized space corresponding to the first user identifier contained in the first digital map corresponding to the first scene. If only one authorized space corresponding to the first user identifier is included in the first digital map, the terminal can directly take the authorized space as the first authorized space. If the first user identifier contained in the first digital map corresponds to a plurality of authorized spaces, after the terminal displays the authorized spaces, the registered user can select one of the authorized spaces, and the terminal can take the authorized space selected by the user as the first authorized space.
The terminal may highlight the first authorized space selected by the user; for example, the first authorized space may flash, or its boundary may be displayed with a scrolling dotted line, and so on.
Fig. 13 is a schematic diagram of a user interacting with a terminal to select a first scene according to an embodiment of the present application. As shown in fig. 13, the terminal may display a plurality of scene markers A, B, C and D in the digital map. Assuming that the user selects the scene mark a, the terminal may take the scene identified by the scene mark a as the first scene. Assuming that the first digital map corresponding to the first scene includes a plurality of authorized spaces corresponding to the first user identifier, as shown in fig. 14, the terminal may display the space identifiers of the plurality of authorized spaces in a list manner, and the user may select one identifier from the space identifiers of the plurality of authorized spaces.
Optionally, in a possible implementation manner, the terminal may also send a scene requirement, while sending the first user identifier to the server, where the scene requirement may be scene information of interest specified by the user. After receiving the first user identifier and the scene requirement, the server may first acquire k scenes corresponding to the first user identifier, and then acquire a first scene meeting the scene requirement from the k scenes. The scene requirement may be default, highest priority, closest to the current location of the terminal, etc.
Step 1206: the terminal obtains the target media content.
The implementation manner of this step may refer to the related implementation manner of step 705 in the foregoing embodiment, and this embodiment of the present application is not described herein again.
Step 1207: the terminal adds the target media content in the first authorized space.
The implementation manner of this step may refer to the related implementation manner of step 706 in the foregoing embodiment, and this embodiment of the present application is not described herein again.
Step 1208: the terminal determines a target relative positional relationship between the target media content and the target object.
After adding the target media content in the first authorization space, the terminal can determine a target relative position relationship between the target media content and a target object, wherein the target object is a preset image or a three-dimensional object included in a digital map corresponding to the first scene. For example, when the first scene is a store, the target object may be a sign of the store. It should be noted that the target object may or may not be located in the first authorized space, but the target object is included in the digital map corresponding to the first scene.
It should be noted that the target object may be determined by the terminal through an image recognition algorithm, or may be an object (including but not limited to a picture, text, or a 3D model) specified by the user in the digital map corresponding to the first scene. That is, the user may draw an area in the digital map corresponding to the first scene through a gesture operation, and the terminal may take the feature in the area drawn by the user as the target object. Alternatively, the user may click a location within the digital map, and the terminal may take the feature located within a preset radius centered on that location as the target object. The above are just a few examples given in the embodiments of the present application for determining the target object according to a selection operation by the user.
In addition, the relative positional relationship between the target media content and the target object may include information such as a distance between the target media content and the target object, and a relative pose.
Step 1209: and the terminal sends the target media content, the target object and the target relative position relation to the server.
Step 1210: the server determines the location of the target object in the second digital map.
After receiving the target media content, the target object and the target relative position relationship, the server can search whether the target object is contained in the digital map corresponding to other scenes corresponding to the first user identifier. If the digital map corresponding to a certain scene contains the target object, the server can determine the position of the target object in the digital map, wherein for convenience of explanation, the searched scene containing the target object is called a second scene, and the digital map corresponding to the second scene is a second digital map.
When searching for the target object, the server may search near the authorized spaces corresponding to the first user identifier; for example, the server may search within a 50-meter range centered on an authorized space. That is, the other scene corresponding to the first user identifier may be an area within a preset radius centered on an authorized space of the first user identifier.
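The neighborhood search above reduces to filtering candidate features by distance from the center of an authorized space. A minimal sketch, where the feature record layout is an assumption:

```python
import math

def find_target_object_nearby(features, space_center, radius=50.0):
    """Return the features whose position lies within `radius` meters of
    the center of an authorized space -- the neighborhood in which the
    server searches for a match of the target object."""
    return [f for f in features
            if math.dist(f["pos"], space_center) <= radius]
```

Restricting the match search to this neighborhood keeps the feature retrieval cheap compared with scanning the entire digital map.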
Step 1211: and adding target media content in other authorized spaces corresponding to the first user identification according to the position of the target object and the target relative position relation by the server.
After determining the position of the target object in the second digital map, the server may add the target media content in the authorized space of the first user identifier included in the second digital map according to the target relative position relationship and the position of the target object, so that the position relationship between the target media content and the target object satisfies the target relative position relationship.
For example, the server may determine the pose of the target object in the digital map corresponding to the second scene through a picture detection algorithm. Then, the server can determine the pose of the target media content in the second authorized space in the digital map corresponding to the second scene from the pose of the target object in the digital map corresponding to the second scene, the pose of the target object in the digital map corresponding to the first scene, and the pose of the target media content in the first authorized space, by the following formulas.
ΔRt = Pv1 · P1⁻¹

Pvx = ΔRt · Px

where P1 is the pose matrix corresponding to the pose of the target object in the digital map corresponding to the first scene, Pv1 is the pose matrix corresponding to the pose of the target media content in the first authorized space, ΔRt is the pose change matrix, Px is the pose matrix corresponding to the pose of the target object in the digital map corresponding to the second scene, and Pvx is the pose matrix corresponding to the pose of the target media content in the second authorized space.
After determining the pose of the target media content in the second authorized space, the server may add the target media content in the second authorized space according to that pose. If media content already exists at the corresponding location in the second authorized space, the existing media content may be replaced with the target media content.
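The two formulas above can be sketched with 4×4 homogeneous pose matrices. For brevity the helper below builds poses with identity rotation (an assumption purely to keep the sketch short; the formulas themselves handle full rotations), and numpy stands in for whatever matrix library the server would use.

```python
import numpy as np

def pose(t):
    """Build a 4x4 homogeneous pose matrix with identity rotation and
    translation t (rotation omitted only to keep the sketch short)."""
    P = np.eye(4)
    P[:3, 3] = t
    return P

def transfer_pose(P1, Pv1, Px):
    """Apply the formulas above: dRt = Pv1 * P1^-1, then Pvx = dRt * Px.

    P1:  pose of the target object in the first scene's digital map.
    Pv1: pose of the target media content in the first authorized space.
    Px:  pose of the target object in the second scene's digital map.
    Returns Pvx, the pose of the media content in the second space.
    """
    dRt = Pv1 @ np.linalg.inv(P1)
    return dRt @ Px
```

Because ΔRt captures the content's pose relative to the target object, re-applying it to the object's pose in another scene reproduces the same spatial arrangement there.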
For the authorized spaces corresponding to the first user identifier contained in the digital maps corresponding to other scenes, the media content in those authorized spaces can be updated in the above manner, thereby realizing synchronization of the media content in multiple authorized spaces. That is, the server may set the relative position of the AR content according to the pose obtained by retrieving the target object, and apply it to all the authorized spaces in batch.
Fig. 15 is a schematic diagram illustrating synchronization of target media content in multiple authorized spaces according to an embodiment of the present application. As shown in fig. 15, a target object is determined in the digital map corresponding to the first scene, and features matching the target object are obtained in the digital maps corresponding to other scenes. The target media content is added in authorized space 1. At this point, the relative positional relationship between the target media content and the target object can be calculated from the pose of the target object and the pose of the target media content in authorized space 1. The server can then determine the pose of the target object in the digital maps corresponding to the other scenes, and further determine the pose of the target media content in the other authorized spaces from the pose of the target object in those digital maps and the determined relative positional relationship. Finally, the target media content is added in each authorized space according to its position and pose information in the other authorized spaces, as shown in fig. 15.
It should be noted that, in one possible scenario, after the server adds the target media content in another authorized space, the target media content may turn out not to fit that authorized space; for example, the target media content may exceed the range of the authorized space. In that case, the server may automatically adjust the target media content so that it fits the corresponding authorized space. Alternatively, in one possible implementation, for such a mismatch, the server may record a mark on the media content exceeding the authorized space; subsequently, when delivering the corresponding authorized space and the media content within it to the terminal, the server may deliver only the portion of the media content not exceeding the authorized space, or may deliver the entire media content together with the mark, so that the terminal displays the media content not exceeding the authorized space while the exceeding portion is not displayed.
In the embodiment of the application, the terminal can simultaneously acquire the digital maps corresponding to multiple scenes and the authorized spaces corresponding to the first user identifier contained in each digital map. Then, after the target media content is added in a certain authorized space, the relative positional relationship between the target media content and the target object is determined, and the relative positional relationship, the target object, and the target media content are sent to the server. The server can search for the features matching the target object in the digital maps corresponding to the other scenes, and synchronize the target media content to the other authorized spaces corresponding to the first user identifier according to the relative positional relationship. In this way, the server can automatically complete the addition of the media content in multiple authorized spaces, which improves the addition efficiency, ensures the consistency of the display effect of the media content across the multiple authorized spaces, and ensures the addition accuracy.
The above embodiments introduce a process of implementing synchronization of media content of multiple authorized spaces by interaction between a terminal and a server. Fig. 16 provides a flowchart for implementing content synchronization of multiple authorized spaces on the terminal side, and referring to fig. 16, the implementation process includes the following steps:
Step 1601: a first user identification is obtained.
The implementation of this step may refer to step 1201 in the foregoing embodiment.
Step 1602: a first scene is determined based on the first user identification.
The first digital map comprises a first authorized space; the first authorization space is a three-dimensional space corresponding to a first user identifier in the first digital map; the first digital map comprises a target object; the target object comprises a preset image or a three-dimensional object; the first digital map includes a panorama, a point cloud, or a grid model.
In this step, the terminal may send the first user identifier to the server, so that the server obtains the digital maps corresponding to k scenes and the authorized spaces corresponding to the first user identifier in the digital map corresponding to each scene according to the first user identifier. The terminal then receives, from the server, the digital maps corresponding to the k scenes and the authorized spaces corresponding to the first user identifier in each digital map, and selects a first scene from the k scenes according to a preset rule.
After receiving the k digital maps corresponding to the k scenes and the plurality of authorized spaces corresponding to the first user identifier in the digital map corresponding to each scene, which are sent by the server, the terminal may determine the first scene by referring to the implementation in step 1205 in the foregoing embodiment.
Optionally, in a possible implementation manner, the terminal may also send a scene requirement together with the first user identifier to the server, where the scene requirement may be scene information of interest specified by the user. After receiving the first user identifier and the scene requirement, the server may first acquire the k scenes corresponding to the first user identifier, and then select from them a first scene meeting the scene requirement. The server can then send the acquired first scene, the first digital map corresponding to the first scene, and the authorized space of the first user identifier included in the first scene to the terminal. The scene requirement may be, for example, the default scene, the scene with the highest priority, or the scene closest to the current location of the terminal.
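The preset selection rules just mentioned (default, highest priority, closest to the terminal) can be sketched as a small dispatcher. The dictionary keys and rule names below are illustrative assumptions, not part of the embodiment:

```python
import math

def select_first_scene(scenes, rule="default", terminal_pos=None):
    """Pick the first scene from the k candidate scenes according to a preset rule.

    `scenes` is a list of dicts with hypothetical keys "name", "priority",
    and "pos" (the scene's anchor coordinates).
    """
    if rule == "highest_priority":
        return max(scenes, key=lambda s: s["priority"])
    if rule == "closest" and terminal_pos is not None:
        return min(scenes, key=lambda s: math.dist(s["pos"], terminal_pos))
    return scenes[0]  # "default": simply the first of the k scenes
```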
Step 1603: and acquiring a first digital map corresponding to the first scene.
After determining the first scene, the terminal may acquire a first digital map corresponding to the first scene. And, the terminal may select the first authorized space from the first digital map according to the related implementation in step 1205.
Step 1604: a first digital map and a first authorized space are displayed.
Step 1605: target media content is obtained.
The implementation of this step may refer to step 1206 in the foregoing embodiment, which is not described herein.
Step 1606: the target media content is added in the first authorized space.
The implementation of this step may refer to step 1207 in the foregoing embodiment, which is not described herein.
Step 1607: a target relative positional relationship between the target media content and the target object is determined.
The implementation of this step may refer to step 1208 in the foregoing embodiment, which is not described herein.
Step 1608: and sending the first user identifier, the target media content, the target object and the target relative position relation to the server so that the server updates the content of other authorized spaces corresponding to the first user identifier in the preset digital map according to the target media content, the target object and the target relative position relation.
The first user identifier may be sent to the server by the terminal when determining the first scene, or may be sent once when acquiring the first scene, and sent once again in this step.
In the embodiment of the application, the terminal can simultaneously acquire the digital maps corresponding to multiple scenes and the authorized spaces corresponding to the first user identifier contained in each digital map. Then, after the target media content is added in a certain authorized space, the relative positional relationship between the target media content and the target object is determined, and the relative positional relationship, the target object, and the target media content are sent to the server, so that the server can search for the features matching the target object in the digital maps corresponding to the other scenes and synchronize the target media content to the other authorized spaces corresponding to the first user identifier according to the relative positional relationship. In this way, the server can automatically complete the addition of the media content in multiple authorized spaces, which improves the addition efficiency and ensures the addition accuracy.
Fig. 17 is a flowchart of an implementation of content synchronization of multiple authorized spaces on a server side according to an embodiment of the present application, and referring to fig. 17, the implementation process may include the following steps:
step 1701: and acquiring a first user identifier, a target object, target media content and a target relative position relationship between the target media content and the target object, which are sent by the terminal.
In this embodiment of the present application, the server may receive a first user identifier sent by the terminal, where the first user identifier may correspond to a user account logged in by the terminal.
The target media content is media content which is added to a first digital map corresponding to the first scene by the terminal, and the first digital map contains a target object. The server may receive the target object, the target media content, and the target relative positional relationship sent by the terminal.
Step 1702: and acquiring a second digital map corresponding to the second scene according to the first user identification.
The implementation of this step may refer to the implementation process of determining the second scene in step 1210 in the foregoing embodiment, and acquiring the second digital map corresponding to the second scene. The second digital map also contains the target object.
Step 1703: the position of the target object is determined in the second digital map.
The implementation process of this step may refer to the implementation manner of step 1210 in the foregoing embodiment, which is not described herein.
Step 1704: and adding target media content in the second authorized space according to the position of the target object and the target relative position relationship, so that when the terminal renders the second authorized space in the second digital map, the target relative position relationship between the target media content in the second authorized space and the target object in the second digital map is met.
The implementation of this step may refer to the implementation process of synchronizing the target media content to the second authorized space described in step 1211 in the foregoing embodiment, which is not described herein.
In the embodiment of the application, after receiving the target object, the target media content, and the relative positional relationship between them, the server can search for the features matching the target object in the digital maps corresponding to the other scenes and synchronize the target media content to the other authorized spaces corresponding to the first user identifier according to the relative positional relationship. In this way, the server can automatically complete the addition of the media content in the authorized spaces, which improves the addition efficiency and ensures the consistency of the display effect of the media content across the authorized spaces.
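The server-side flow of steps 1701 to 1704 can be sketched as a loop over the other scenes. This is a simplified, hypothetical illustration: feature matching is abstracted into a dictionary lookup, and positions are translation-only:

```python
def synchronize_to_other_spaces(user_id, target_object_id, offset,
                                digital_maps, authorized_spaces):
    """For every scene whose digital map contains the target object and whose
    map contains an authorized space of this user, place the media content at
    the stored offset from the matched target object."""
    placements = {}
    for scene, dmap in digital_maps.items():
        obj_pos = dmap.get(target_object_id)        # feature matching, abstracted
        space = authorized_spaces.get((user_id, scene))
        if obj_pos is None or space is None:
            continue                                 # no match or no authorized space
        placements[scene] = [o + d for o, d in zip(obj_pos, offset)]
    return placements
```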
The foregoing embodiments mainly describe an implementation manner of synchronously adding media content in a plurality of authorized spaces, and optionally, in some possible scenarios, the media content already contained in the plurality of authorized spaces may also be synchronously edited. Fig. 18 is a diagram of another method for synchronizing contents of a plurality of authorized spaces based on a digital map according to an embodiment of the present application, the method including the steps of:
step 1801: the terminal acquires a first user identification.
The implementation of this step may refer to step 1201 in the foregoing embodiment.
Step 1802: the terminal sends a first user identification to the server.
The implementation of this step may refer to step 1202 in the previous embodiment.
Step 1803: the server acquires digital maps corresponding to k scenes and a plurality of authorized spaces corresponding to the first user identification in the digital map corresponding to each scene according to the first user identification.
The implementation of this step may refer to step 1203 in the foregoing embodiment.
Step 1804: the server sends the digital map corresponding to the k scenes and a plurality of authorized spaces corresponding to the first user identification in the digital map corresponding to each scene to the terminal.
The implementation of this step may refer to step 1204 in the foregoing embodiment.
Step 1805: the terminal selects a first scene from k scenes according to a preset rule, and acquires a first digital map corresponding to the first scene.
The implementation of this step may refer to step 1205 in the foregoing embodiment.
Step 1806: the terminal displays the first digital map, the first authorized space, and the first media content included in the first authorized space.
In this step, when the terminal displays the first media content, if all of the first media content is within the first authorized space, the terminal may display the entire first media content. Alternatively, if part of the first media content exceeds the boundary of the first authorized space, the terminal may either not display the exceeding part, or display the exceeding part differently, for example in a different color or with a different transparency.
Step 1807: and the terminal edits the first media content according to a first editing mode.
After displaying the first media content, the user may perform editing operations on the first media content. The terminal may edit the first media content according to a first editing manner corresponding to an editing operation of a user.
Specifically, the terminal may display editing options while displaying the first media content; the user may click on an editing option, and the terminal enters the editing state after detecting the selection operation on the editing option. Thereafter, the user may perform an editing operation on the first media content. The first editing mode may include one or more of an adding mode, a deletion mode, a replacement mode, and a mode of moving according to a preset relative displacement. Accordingly, the editing options may include an add option, a delete option, a replace option, and a move option. Referring to fig. 19, a schematic diagram of the editing options is shown.
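The four editing modes can be sketched as an enumeration with a dispatcher. A minimal Python sketch, assuming a simplified content store of {content_id: position} (all names are illustrative):

```python
from enum import Enum

class EditMode(Enum):
    ADD = "add"
    DELETE = "delete"
    REPLACE = "replace"
    MOVE = "move"        # move according to a preset relative displacement

def apply_edit(contents, mode, **params):
    """Apply one editing mode to a dict of {content_id: position}."""
    if mode is EditMode.ADD:
        contents[params["content_id"]] = params["position"]
    elif mode is EditMode.DELETE:
        contents.pop(params["content_id"], None)
    elif mode is EditMode.REPLACE:
        pos = contents.pop(params["old_id"])
        contents[params["new_id"]] = pos   # new object takes the old position
    elif mode is EditMode.MOVE:
        old = contents[params["content_id"]]
        contents[params["content_id"]] = [p + d for p, d in
                                          zip(old, params["displacement"])]
    return contents
```

Because the same dispatcher can run on the server, replaying the recorded mode and parameters in another authorized space yields the same result, which is the basis of the synchronization described below in this embodiment.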
The adding mode refers to adding part of the media content or part of the elements on the basis of the first media content, wherein the added content or element can be obtained from the media content or the elements stored by the terminal according to the user operation or can be obtained from a network. Specifically, the process of the terminal acquiring the content or the element to be added may refer to step 705 in the foregoing embodiment, which is not described herein. In addition, after detecting the selection operation of the add option, the terminal may display the media content type option introduced in step 705, and the user may obtain the media content by selecting the media content type option, so as to implement the addition of the media content.
The deletion mode refers to deleting the first media content or deleting a part of elements in the first media content. In this case, the user may select a delete option, the terminal enters an edit state, and then the user may select an object to be deleted and perform a delete operation to trigger a delete instruction, and after receiving the delete instruction, the terminal may delete the object selected by the user. Wherein after the user selects a certain object to be deleted, the terminal may display a confirmation option, in which case the deletion operation refers to a selection operation performed by the user on the confirmation option.
The replacement mode refers to replacing the first media content with the target media content, or replacing some elements in the first media content with target elements. In this case, the user may select the replace option, and the terminal enters the editing state. Thereafter, the user may select the object to be replaced; after the user selects the first media content or some elements in it, the terminal may display a confirmation option. After detecting that the user selects the confirmation option, the terminal may delete the selected object and place the target media content or target element at the position of the deleted object, and the user may adjust that position by dragging the target media content or target element. For acquiring the target media content or target element, refer to step 705, which is not described herein in detail.
The mode of moving according to a preset relative displacement refers to moving the first media content by a preset relative displacement. The preset relative displacement may be calculated from an angle and a distance input by the user, or determined according to the user's movement operation. Specifically, the user may select the move option and then select the first media content; the user may then drag the first media content, and the terminal obtains the preset relative displacement from the user's movement track and moves the first media content from its original position to the target position accordingly.
In addition, the first editing mode may further include modifying a content element included in the first media content, or modifying the size, display position, display pose, and the like of the first media content.
The terminal may edit the first media content by any of the above methods, or may edit the first media content by combining the above methods, which is not limited in the embodiment of the present application.
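Deriving the preset relative displacement from a drag track, as described for the moving mode above, can be sketched as follows (a hypothetical illustration; the function names are assumptions):

```python
def displacement_from_track(track):
    """Preset relative displacement derived from the user's drag track:
    the component-wise difference between the last and first track points."""
    start, end = track[0], track[-1]
    return [e - s for s, e in zip(start, end)]

def move_content(position, track):
    """Move the media content from its original position by the derived displacement."""
    return [p + d for p, d in zip(position, displacement_from_track(track))]
```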
Step 1808: the terminal sends the first media content and the first editing mode to the server.
After the user confirms that the editing is completed, the terminal may send the first media content and the first editing mode to the server.
The first editing mode may include editing contents and/or editing parameters. For example, when the first editing mode includes the adding mode, the added media content or element and the relative positional relationship between the added media content and the first media content may be sent to the server. When the first editing mode includes the deletion mode, the first editing mode sent to the server may include the deleted objects. When the first editing mode includes the replacement mode, the first editing mode sent to the server may include the target media content or target element used to replace the first media content, together with its pose. When the first editing mode includes moving according to a preset relative displacement, the first editing mode sent to the server may include the preset relative displacement.
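The mode-dependent message just described can be sketched as a payload builder. The field names below are illustrative assumptions, not a defined protocol:

```python
def build_edit_payload(user_id, content_id, mode, **params):
    """Assemble the message the terminal sends to the server; which fields
    are carried depends on the editing mode."""
    payload = {"user_id": user_id, "content_id": content_id, "mode": mode}
    if mode == "add":
        payload["added_content"] = params["added_content"]
        payload["relative_position"] = params["relative_position"]
    elif mode == "delete":
        payload["deleted_object"] = params["deleted_object"]
    elif mode == "replace":
        payload["target_content"] = params["target_content"]
        payload["pose"] = params["pose"]
    elif mode == "move":
        payload["displacement"] = params["displacement"]
    return payload
```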
Step 1809: and the server edits the first media content in other authorized spaces corresponding to the first user identification according to a first editing mode.
After receiving the first media content and the first editing mode, the server may first search whether the first media content is included in the digital maps that contain authorized spaces corresponding to the first user identifier. If the authorized space of the first user identifier included in the digital map corresponding to a certain scene is found to include the first media content, the server can take that scene as the second scene, and take the authorized space of the first user identifier containing the first media content in the second scene as the second authorized space. The server may then edit the first media content contained in the second authorized space according to the first editing mode. After the first media content is edited, the server may store the edited media content so that it can be displayed later when the terminal displays the second authorized space.
It should be noted that the server performs editing according to the first editing manner, that is, the server performs the same editing operation on the same media content in the second authorized space according to the edited content in the first authorized space, so that the same media content in each authorized space can present the same editing effect, and the effect consistency of the media content in a plurality of authorized spaces is ensured.
For example, if the first editing mode adds a first media object at a first position in the first authorized space, synchronizing the first editing mode in the second authorized space may be embodied as adding the first media object at a second position in the second authorized space. Assuming that the first authorized space and the second authorized space are coincident (including after scaling), the first position and the second position coincide, and the presentation of the first media object in the two spaces is consistent.

For example, if the first editing mode deletes the first media object at the first position in the first authorized space, synchronizing the first editing mode in the second authorized space may be embodied as deleting the first media object at a second position in the second authorized space. Assuming that the first authorized space and the second authorized space are coincident (including after scaling), the first position and the second position coincide, and the remaining content of the two spaces appears consistent.

For example, if the first editing mode replaces the first media object at the first position with a second media object in the first authorized space, synchronizing the first editing mode in the second authorized space may be embodied as replacing the first media object in the second authorized space with the second media object. Assuming that the first authorized space and the second authorized space are coincident (including after scaling), the first position and the second position coincide, the second media objects in the two spaces also coincide, and the presentation is consistent.

For example, if the first editing mode moves the first media object from the first position to the second position in the first authorized space, synchronizing the first editing mode in the second authorized space may be embodied as moving the first media object in the second authorized space from a third position to a fourth position. Assuming that the first authorized space and the second authorized space are coincident (including after scaling), the first position and the third position coincide, the second position and the fourth position coincide, and the presentation of the first media object in the two spaces is consistent.
The above is by way of example only and is not limiting. The first authorized space and the second authorized space may be identical in size and shape, or may be in proportion to each other, in which case the arrangement of objects in the space is scaled accordingly.
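Mapping a position between two identical or proportioned authorized spaces can be sketched as follows. This is a hypothetical illustration where each space is characterized by an origin and a uniform scale factor:

```python
def map_to_other_space(pos, src_origin, dst_origin, scale=1.0):
    """Map a position in the first authorized space to the corresponding
    position in the second space: the same offset from the space origin,
    scaled by the size ratio of the two spaces."""
    return [d + (p - s) * scale for p, s, d in zip(pos, src_origin, dst_origin)]
```

With `scale=1.0` the two spaces are identical in size and shape; with another value, the arrangement of objects is scaled proportionally, matching the paragraph above.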
In the embodiment of the application, the terminal can simultaneously acquire the digital maps corresponding to multiple scenes and the authorized spaces corresponding to the first user identifier contained in each digital map, edit the media content in a certain authorized space, and send the media content and the editing mode to the server. The server can then search the digital maps corresponding to the other scenes for authorized spaces of the first user identifier that also contain this media content, and edit the media content in the corresponding authorized spaces according to the received editing mode. In this way, the server can automatically complete the editing of the same media content in multiple authorized spaces, which improves the editing efficiency of the media content in the authorized spaces and ensures the consistency of the display effect of the media content across the multiple authorized spaces.
The above embodiments introduce a process of implementing synchronization of media content of multiple authorized spaces by interaction between a terminal and a server. Fig. 20 provides a flowchart for implementing content synchronization of multiple authorized spaces on the terminal side, and referring to fig. 20, the implementation process includes the following steps:
step 2001: a first user identification is obtained.
The implementation of this step may refer to step 1201 in the foregoing embodiment.
Step 2002: a first scene is determined based on the first user identification.
The implementation of this step may refer to step 1602 in the previous embodiment.
Step 2003: and acquiring a first digital map corresponding to the first scene, wherein the first digital map comprises a first authorized space.
The implementation of this step may refer to step 1603 in the previous embodiment.
Step 2004: the first digital map, the first authorized space, and first media content included within the first authorized space are displayed.
The implementation of this step may refer to step 1806 in the foregoing embodiment.
Step 2005: editing the first media content according to a first editing mode.
The implementation of this step may refer to step 1807 in the foregoing embodiment.
Step 2006: and sending the first user identification, the first media content and the first editing mode to the server so that the server edits the first media content in other authorized spaces corresponding to the first user identification in the preset digital map according to the first editing mode.
In the embodiment of the application, the terminal can simultaneously acquire the digital maps corresponding to multiple scenes and the authorized spaces corresponding to the first user identifier contained in each digital map, edit the media content in a certain authorized space, and send the media content and the editing mode to the server. The server can then search the digital maps corresponding to the other scenes for authorized spaces of the first user identifier that also contain this media content, and edit the media content in the corresponding authorized spaces according to the received editing mode. In this way, the server can automatically complete the editing of the same media content in multiple authorized spaces, which improves the editing efficiency of the media content in the authorized spaces and ensures the consistency of the display effect of the media content across the multiple authorized spaces.
Fig. 21 is a flowchart of a method for synchronizing media content in multiple authorized spaces on a server side according to an embodiment of the present application. Referring to fig. 21, the method includes the steps of:
step 2101: and acquiring a first user identification, first media content and a first editing mode sent by the terminal.
In this embodiment of the present application, the server may receive a first user identifier sent by the terminal, where the first user identifier may correspond to a user account logged in by the terminal.
The server may receive the first media content and the first editing mode sent by the terminal, where the first media content is media content included in the first authorized space included in the first digital map corresponding to the first scene. The first editing mode may include one or more of an adding mode, a deletion mode, a replacement mode, and a mode of moving according to a preset relative displacement.
Step 2102: and acquiring a second digital map corresponding to the second scene according to the first user identification.
The implementation manner of this step may refer to the implementation manner of acquiring the second digital map and the second authorized space in step 1809 in the foregoing example, which is not described herein in detail.
Step 2103: editing the first media content included in the second authorized space in a first editing manner.
This step may refer to the implementation manner of step 1809 in the foregoing example, which is not described in detail in this embodiment of the present application.
In the embodiment of the application, after receiving the first media content and the first editing mode, the server can search the authorized spaces corresponding to the first user identifier included in the digital maps corresponding to other scenes for the first media content, and edit the found first media content according to the first editing mode, thereby realizing synchronous editing of the same media content in multiple authorized spaces. In this way, the server can automatically complete the editing of the media content in multiple authorized spaces, which improves the editing efficiency and ensures the consistency of the display effect of the media content across the multiple authorized spaces.
Alternatively, as can be seen from the description in the foregoing step 703, after the terminal sends the first user identifier to the server, the server may search for an authorized space corresponding to the first user identifier in the digital map corresponding to the target scene. If the server does not find an authorized space corresponding to the first user identifier, it indicates that the registered user has no authorized space in the digital map corresponding to the target scene. At this time, if a three-dimensional space not yet authorized to other registered users exists in that digital map, the server may send prompt information to the terminal. The terminal may display the prompt to inform the registered user that there is no corresponding authorized space in the digital map. That is, the embodiment of the application also provides a method for applying for an authorized space, and the user can purchase and divide a virtual authorized space on site. Next, the terminal and the registered user can complete the application for the authorized space through interaction.
The user can set up one or more virtual spaces of suitable position, size, and shape on the terminal through interactions such as selecting a basic area shape and then dragging, zooming, and rotating it, apply for the authorized space, submit the application to the server, and obtain authorization by paying a fee or in other ways.
For example, the terminal may display the application option in a preview stream of the target scene. When detecting a selection instruction for an application option, the terminal may send a space application request to the server. The space application request is used for applying for an authorized space to the server, and the space application request carries the first user identifier, the pose of the terminal and the requirement of the authorized space.
In one possible implementation, when detecting a selection instruction for an application option, the terminal may display a preset space to instruct a user to perform a setting operation on the preset space according to a preview stream of a target scene; when a setting instruction for the preset space is detected, the pose and the size of the preset space after setting are determined according to setting parameters contained in the setting instruction, and the pose and the size of the preset space after setting are used as requirements of an authorized space.
Wherein the terminal may display a preset space of a preset shape and size. The preset shape may be a cube or a sphere, which is not limited in this embodiment of the present application. The user may trigger a setting instruction for the preset space by performing a setting operation for the preset space. The user may first drag the preset space to place the preset space at a position of the preview stream of the target scene, and then drag the boundary of the preset space to adjust the size of the preset space. After the user completes setting the preset space through the series of operations, the terminal can carry the pose and the size of the preset space after setting as the requirement of the authorized space in a space application request and send the request to the server.
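The setting operations on the preset space (drag to a position, then drag its boundary to resize) and the resulting authorized-space requirement can be sketched as follows. The class and field names are illustrative assumptions:

```python
class PresetSpace:
    """A draggable, resizable cube the terminal displays when the user
    applies for an authorized space."""
    def __init__(self, position=(0.0, 0.0, 0.0), size=(1.0, 1.0, 1.0)):
        self.position = list(position)   # pose of the cube in the preview stream
        self.size = list(size)

    def drag_to(self, position):
        """Place the preset space at a position of the target scene's preview."""
        self.position = list(position)

    def scale(self, factor):
        """Drag a corner point of the preset space to enlarge or shrink it."""
        self.size = [s * factor for s in self.size]

    def requirement(self):
        """Pose and size carried in the space application request."""
        return {"pose": self.position, "size": self.size}
```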
Optionally, in another possible implementation manner, the terminal may also display a space option, where the space option includes a shape option, a size option, and a quantity option of the authorized space, and the user may select the shape, the size, and the quantity of the authorized space to be applied for by the terminal, and the terminal may obtain the shape, the size, and the quantity of the authorized space selected by the user and use the shape, the size, and the quantity as the requirement of the authorized space.
Alternatively, in another possible implementation manner, the terminal may display a space information input box, and the user inputs information such as the shape, the size, the number and the like of the authorized space to be applied by the terminal, and the terminal may acquire the information input by the user and use the information as the authorized space requirement.
After receiving the space application request, the server may allocate n corresponding authorized spaces to the first user identifier according to the information included in the request, and then send an authorization response to the terminal, where the response may carry the n authorized spaces.
For example, the server may acquire the target digital map according to the pose of the terminal, and then check whether the target digital map contains a space that satisfies the requirement of the authorized space and has not yet been applied for by other users. If so, the server may allocate that space to the registered user identified by the first user identifier, that is, use it as the authorized space corresponding to the first user identifier, and store the space information in correspondence with the first user identifier. The server may then return an application success message to the terminal to inform it that the application for the authorized space succeeded.
Optionally, if no space in the target digital map meets the requirement of the authorized space, or a space meets the requirement but is already the authorized space of another registered user, the server may return an application failure message to the terminal to indicate that the application for the authorized space failed.
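The allocation check described above can be sketched as follows. This is a minimal illustration only; the class, field, and method names (`SpaceRecord`, `DigitalMapServer.allocate`, and so on) are assumptions for the sketch, not part of the embodiment:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SpaceRecord:
    space_id: str
    pose: Tuple[float, float, float]   # position of the space in the target digital map
    size: Tuple[float, float, float]   # width, height, depth
    owner: Optional[str] = None        # user identifier; None if not yet applied for

class DigitalMapServer:
    def __init__(self, spaces):
        self.spaces = spaces

    def allocate(self, user_id, required_size):
        """Return a free space meeting the size requirement and bind it to
        the user identifier, or None when the application fails."""
        for s in self.spaces:
            fits = all(have >= need for have, need in zip(s.size, required_size))
            if fits and s.owner is None:
                s.owner = user_id  # store space info and user id in correspondence
                return s
        return None  # no space meets the requirement, or it is already taken

# usage: one space is taken by another user, one is free and large enough
server = DigitalMapServer([
    SpaceRecord("sp1", (0.0, 0.0, 0.0), (2.0, 2.0, 2.0), owner="other-user"),
    SpaceRecord("sp2", (5.0, 0.0, 0.0), (4.0, 3.0, 3.0)),
])
granted = server.allocate("user-1", (3.0, 2.0, 2.0))  # succeeds with sp2
```

A second application for the same region would then fail, mirroring the application failure message in the text.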
Fig. 22 is a schematic diagram of displaying a preset space on a user interface and setting the preset space according to an embodiment of the present application. As shown in fig. 22, the user can move the preset space to a designated position by dragging it, and can then enlarge or shrink it by dragging a corner point of the preset space.
Optionally, in one possible implementation manner, when the terminal detects the selection instruction for the application option, it may obtain the digital map corresponding to the target scene from the server according to the pose of the terminal, and then display the pre-segmented space blocks of that digital map in the preview stream of the target scene. The user may select one or more space blocks; the terminal may treat the selected space blocks as the space the user wants to apply for, carry the pose and the size of that space, as the requirement of the authorized space, in a space application request, and send the request to the server, so that the server can determine, according to the requirement, whether to use the space as the authorized space corresponding to the first user identifier.
Fig. 23 shows pre-segmented space blocks displayed in a preview stream of a target scene according to an embodiment of the present application. As shown in fig. 23, the space in front of a building may be divided into a plurality of space blocks, and the user may select one or more of them as the space to be applied for.
It should be noted that, in the embodiment of the present application, spaces on different sides of a building may be applied for as authorized spaces by different registered users. For this case, when the digital map is constructed, a space occupation area is defined for each position; for example, when a building already occupies a certain three-dimensional space, the bounding box of the building is used for calibration when constructing the digital map. Thus, when a registered user applies for a space on one side of the building, the authorized space applied for can be bound with the space occupied by the building, and when spaces on different sides of the building are applied for by different registered users, each user's authorized space is bound with the space occupied by the building.
For example, as shown in fig. 24, assume the first user applies for authorized space A, the second user applies for authorized space B, and the space occupied by the building is C. Then space A and space C are bound for the first user, and space B and space C are bound for the second user. Thus, after the server subsequently obtains a user identifier, it can determine the corresponding scene according to that identifier, where the scene comprises the authorized space of the user identifier and the space bound with that authorized space.
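The binding in the fig. 24 example can be sketched with a simple dictionary-based store. The store and function names are assumptions for illustration, not part of the embodiment:

```python
bindings = {}  # user identifier -> set of space ids making up that user's scene

def bind(user_id, authorized_space_id, occupied_space_id="building-C"):
    """Bind a user's authorized space with the space occupied by the building."""
    bindings.setdefault(user_id, set()).update({authorized_space_id, occupied_space_id})

def scene_for(user_id):
    """The scene determined for a user identifier: the authorized space
    plus every space bound with it."""
    return bindings.get(user_id, set())

# first user applies for space A, second user for space B; both share building C
bind("first-user", "space-A")
bind("second-user", "space-B")
```

Looking up either user identifier then returns that user's authorized space together with the building's occupied space, as described above.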
In the embodiment of the application, by interacting with the terminal, the user can apply for an authorized space according to the preview stream of the target scene and the current pose of the terminal. In other words, the user can apply in real time for the space currently in view, in a what-you-see-is-what-you-get manner, which improves both the convenience and the efficiency of applying for an authorized space.
After media content is added in the authorized space through the above method, a consuming user can subsequently view it: when a first consuming user collects a video of the target scene through a first terminal, the media content added in the authorized space contained in the target scene can be pulled from the server and displayed according to the pose of the first terminal. Meanwhile, the first terminal can also send the video of the target scene to a second terminal, so that the second terminal can obtain and display the media content added in the authorized space contained in the target scene according to that video, thereby achieving sharing of the media content in the digital map. Next, a method for sharing media content based on a digital map provided in an embodiment of the present application will be described. As shown in fig. 25, the method includes the following steps:
Step 2501: the first terminal sends the video of the target scene to the second terminal.
The first terminal can collect the video of the target scene through its configured camera component. The video of the target scene carries the target pose of the first terminal when shooting the video. After collecting the video, the first terminal can send the target pose to the server, and the server can obtain, from the stored media content, the media content matching the target pose. The server sends the obtained media content to the first terminal, and the first terminal can render the received media content in the video of the target scene.
Meanwhile, the first terminal can also display a content sharing option. When the first terminal detects the first user's selection operation on the content sharing option, it can obtain the second user selected by the first user and send the video of the target scene to the second terminal corresponding to the second user.
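Step 2501 can be sketched as follows: the video carries the target pose, the server matches stored content against that pose, and the shared video delivers the pose to the second terminal. All names (`SceneVideo`, `MEDIA_STORE`, the tolerance value) are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SceneVideo:
    frames: List[str]
    target_pose: Tuple[float, float, float]  # pose of the first terminal when shooting

# illustrative server-side store: pose at which content was added -> content
MEDIA_STORE = {(10.0, 20.0, 0.0): "AR banner"}

def match_media(pose, tolerance=1.0):
    """Server side: stored media content whose pose matches the query pose."""
    return [c for p, c in MEDIA_STORE.items()
            if all(abs(a - b) <= tolerance for a, b in zip(p, pose))]

video = SceneVideo(frames=["f0", "f1"], target_pose=(10.2, 19.8, 0.0))
first_terminal_content = match_media(video.target_pose)  # rendered into the video

# sharing: the video object itself carries the target pose to the second terminal
second_terminal_inbox = []
second_terminal_inbox.append(video)
```

Because the pose travels with the video, the second terminal in step 2502 can extract it and query the server without re-localizing.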
Step 2502: the second terminal sends the target pose to the server.
After receiving the video of the target scene, the second terminal can detect that the video contains the target pose. At this time, the second terminal may acquire the target pose and send it to the server.
Optionally, in one possible implementation manner, after acquiring the video of the target scene, the second terminal may play it, and during playback the user may select a specified area on a played video frame. The second terminal may extract a target object from the specified area and send the target object together with the target pose to the server.
It should be noted that, when playing the video, the second terminal may display a media content display switch option. The user may trigger a media content display-on instruction by enabling the option, and a display-off instruction by disabling it. After receiving the on instruction, the second terminal may start executing steps 2503 to 2505, acquiring the target media content from the server and displaying it. When the second terminal receives the off instruction, it stops displaying the target media content.
Fig. 26 shows a schematic diagram of a media content display switch option displayed on a video page. As shown in fig. 26, the switch option may be the AR content option, which may default to an enabled state; in that initial state, the instruction the second terminal detects is an on instruction. Clicking the AR content option triggers an off instruction, and the second terminal stops displaying the target media content. Clicking the option again triggers an on instruction, according to which the second terminal may perform the subsequent steps again.
Step 2503: the server acquires the target media content according to the target pose.
As described in the foregoing embodiments, when adding media content, the terminal sends the media content added in the authorized space and the pose of that media content to the server, and the server stores them. Based on this, in this step, the server may find, among the stored media content, the target media content matching the target pose.
Optionally, if the second terminal sends not only the target pose but also the target object and the pose of the target object, the server may acquire the digital map corresponding to the target scene according to the received target pose, and then search that digital map for the feature matching the target object. The server may then obtain the added media content related to the target object and send it, as the target media content, to the terminal. The media content related to the target object may refer to media content whose pose and the pose of the target object satisfy a preset pose relationship, the media content nearest to the target object, the media content contained within a certain area centered on the target object, or the like; for example, media content registered within a 50-meter radius of the target object.
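The last variant (content within a radius of the target object) can be sketched as a simple distance filter. The store and function names are assumptions for the sketch:

```python
import math

# illustrative store of added media content: position in the map -> content
ADDED_CONTENT = {(3.0, 4.0): "poster", (120.0, 0.0): "far banner"}

def content_near(target_object_position, radius=50.0):
    """Media content registered within `radius` meters of the matched target object."""
    return [c for pos, c in ADDED_CONTENT.items()
            if math.dist(pos, target_object_position) <= radius]

# target object matched at the map origin: only nearby content is returned
target_media_content = content_near((0.0, 0.0))
```

The same filter shape applies to the other variants (nearest content, preset pose relationship), only with a different predicate.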
Step 2504: the server sends the target media content to the second terminal.
The server not only can send the target media content to the second terminal, but also can send the pose of the target media content to the second terminal.
Step 2505: the second terminal renders the target media content while playing the video of the target scene.
In the embodiment of the application, the second terminal may receive the video of the target scene shared by the first terminal, and, according to the target pose carried in the video, acquire from the server and display the media content added at the corresponding position. In this way, terminals can share media content added to the digital map through video sharing, which facilitates the spread of the media content.
The foregoing embodiments mainly describe the process of sharing media content through interaction among the first terminal, the second terminal, and the server. Next, fig. 27 provides a flowchart of a method by which the second terminal obtains media content based on the video shared by the first terminal. Referring to fig. 27, the method includes the following steps:
Step 2701: acquiring the video of the target scene sent by the first terminal.
The video of the target scene carries the target pose when the first terminal shoots the video of the target scene.
Step 2702: acquiring target media content to be displayed according to the target pose.
In this step, the second terminal may send the target pose carried in the video of the target scene to the server; the server may obtain the target media content according to the target pose through the implementation of step 2503 in the foregoing embodiment and then send it to the second terminal. Accordingly, the second terminal may receive the target media content sent by the server.
Step 2703: playing the video of the target scene, and rendering the target media content while playing the video of the target scene.
After acquiring the target media content, the second terminal may render the target media content while playing the video of the target scene.
In the embodiment of the application, the second terminal may receive the video of the target scene shared by the first terminal, and, according to the target pose carried in the video, acquire from the server and display the media content added at the corresponding position. In this way, terminals can share media content added to the digital map through video sharing, which facilitates the spread of the media content.
It should be noted that, in the above method embodiments, the execution order of the steps is not limited by the step numbers; that is, some steps in the above embodiments may be executed in a different order. In addition, the steps in the above embodiments may be combined as appropriate where technically feasible, which is not limited in the embodiments of the present application.
Referring to fig. 28, an embodiment of the present application provides a digital map-based authorized space display device 2800, including:
a first obtaining module 2801, configured to perform step 701 in the foregoing embodiment;
a second obtaining module 2802, configured to perform step 702 in the foregoing embodiment;
a third obtaining module 2803, configured to perform step 703 in the foregoing embodiment;
a rendering module 2804, configured to perform step 704 in the foregoing embodiment.
Optionally, the apparatus further comprises:
a fourth acquisition module, configured to acquire target media content, where the target media content includes one or more of text, picture, audio, video, and model;
and the adding module is used for adding the target media content in a target authorized space, wherein the target authorized space is any one of n authorized spaces.
Optionally, the adding module is specifically configured to:
when a drag instruction for the target media content is detected, add the target media content at the drag end position indicated by the drag instruction, where the part of the target media content inside the target authorized space and the part outside the target authorized space are displayed in different manners, or the part of the target media content inside the target authorized space is visible while the part outside the target authorized space is invisible.
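The two display behaviors above (different display manners inside versus outside the space, or hiding outside content entirely) amount to a per-point containment test against the authorized space. A minimal sketch, assuming a cubic space given by axis-aligned min/max corners; all names are illustrative:

```python
def inside(point, space_min, space_max):
    """Axis-aligned containment test for a cubic authorized space."""
    return all(lo <= p <= hi for p, lo, hi in zip(point, space_min, space_max))

def display_modes(content_points, space_min, space_max, clip_outside=True):
    """Decide, point by point, how dragged media content is shown: content
    inside the target authorized space is displayed normally; content outside
    is either hidden or shown in a different manner, matching the two
    behaviors described above."""
    return ["normal" if inside(p, space_min, space_max)
            else ("hidden" if clip_outside else "dimmed")
            for p in content_points]

# one point lands inside the space, one lands outside
modes = display_modes([(1, 1, 1), (9, 9, 9)], (0, 0, 0), (2, 2, 2))
```

A spherical authorized space would use a center/radius test in place of `inside`.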
Optionally, the apparatus further comprises:
the determining module is used for determining a target relative position relation between target media content and a target object, wherein the target object is a preset image or a three-dimensional object included in a digital map corresponding to a target scene;
and the sending module is used for sending the target media content, the target object and the target relative position relation to the server so that the server can update the content of other authorized spaces corresponding to the first user identifier in the preset digital map according to the target media content, the target object and the target relative position relation.
Optionally, the relative positional relationship of the target media content and the first feature satisfies a first preset positional relationship, and the first feature is a preset image or three-dimensional object included in the preview stream of the target scene.
Optionally, the third obtaining module 2803 is specifically configured to:
the method comprises the steps of sending a first user identifier and a pose to a server so that the server can acquire n authorized spaces according to the first user identifier and the pose;
and receiving n authorized spaces sent by the server.
Optionally, the third obtaining module 2803 is specifically configured to:
the method comprises the steps of sending a first user identifier, a pose and space screening conditions to a server, enabling the server to obtain m authorized spaces according to the first user identifier and the pose, and obtaining n authorized spaces meeting the space screening conditions from the m authorized spaces;
and receiving n authorized spaces sent by the server.
Optionally, the third obtaining module 2803 is specifically configured to:
the method comprises the steps of sending a first user identifier and a pose to a server, so that the server obtains m authorized spaces according to the first user identifier and the pose;
receiving m authorized spaces sent by a server;
and obtaining n authorized spaces meeting space screening conditions from the m authorized spaces.
Optionally, the third obtaining module 2803 is specifically configured to:
sending a space application request to the server for applying for an authorized space, where the space application request carries the first user identifier, the pose, and the requirement of the authorized space, so that the server allocates the n corresponding authorized spaces to the first user identifier according to the pose and the requirement of the authorized space;
and receiving an authorization response sent by the server, where the authorization response carries the n authorized spaces.
Optionally, the rendering module 2804 is specifically configured to:
and rendering n authorized spaces in a preset display form in a preview stream of the target scene according to the pose, wherein the preset display form comprises one or more of preset colors, preset transparency, cube space and sphere space.
Optionally, the device further comprises an adjusting module and a sending module;
the adjusting module is used for adjusting the pose of the n authorized spaces in the digital map if the pose of the n authorized spaces is not matched with the pose of the preview stream of the target scene, so that the pose of the n authorized spaces is matched with the pose of the preview stream of the target scene;
and the sending module is used for sending the adjusted pose of the n authorized spaces to the server so that the server can update the pose of the n authorized spaces in the digital map.
Optionally, the relative positional relationship between n authorized spaces rendered in the preview stream of the target scene and the second feature satisfies a second preset positional relationship, where the second feature is a preset image or three-dimensional object included in the preview stream of the target scene.
In summary, in the embodiment of the present application, the terminal may obtain, according to the first user identifier and the pose of the terminal, the current registered user's authorized space in the digital map corresponding to the target scene, and then render that authorized space in the preview stream of the target scene, so that the registered user can conveniently view, in real time, the authorized space corresponding to the current scene. In addition, because the authorized space is clearly displayed in the preview stream corresponding to the target scene, the registered user can clearly learn its boundary. When the user then adds media content within the authorized space, the added media content can be effectively prevented from encroaching on other users' authorized spaces, achieving accurate addition of media content and improving addition efficiency.
Fig. 29 is a schematic structural diagram of a content synchronization device 2900 provided in an embodiment of the present application, which is based on a plurality of authorized spaces of a digital map, and referring to fig. 29, the device 2900 includes:
a first acquisition module 2901 for executing step 1601 in the previous embodiment;
a first determining module 2902 for executing the step 1602 in the foregoing embodiment;
a second obtaining module 2903, configured to perform step 1603 in the foregoing embodiment;
a display module 2904 for performing step 1604 in the previous embodiment;
the third obtaining module 2905 is further configured to perform step 1605 in the foregoing embodiment;
an add module 2906 for performing step 1606 in the previous embodiments;
a second determining module 2907 for executing step 1607 in the previous embodiment;
a transmitting module 2908, configured to perform step 1608 in the foregoing embodiment.
Optionally, the first determination module 2902 includes:
the sending sub-module is used for sending the first user identification to the server so that the server can acquire k scenes according to the first user identification;
the receiving sub-module is used for receiving k scenes sent by the server;
and the selection sub-module is used for selecting a first scene from k scenes according to a preset rule.
Optionally, the selection submodule is specifically configured to:
selecting a scene closest to the position of the terminal from k scenes as a first scene; or,
selecting a scene with highest priority from the k scenes as a first scene; or,
a default scene is selected from the k scenes as the first scene.
Optionally, the first determining module 2902 is specifically configured to:
the method comprises the steps of sending a first user identifier and scene requirements to a server, enabling the server to obtain k scenes corresponding to the first user identifier according to the first user identifier, and obtaining a first scene meeting the scene requirements from the k scenes;
and receiving the first scene sent by the server.
Optionally, in the digital map, different user identifications correspond to different authorized spaces.
In the embodiment of the application, the terminal can simultaneously acquire the digital maps corresponding to multiple scenes and the authorized spaces corresponding to the first user identifier contained in each digital map. After target media content is added in one authorized space, the relative positional relationship between the target media content and the target object is determined, and that relationship, the target object, and the target media content are sent to the server. The server can then search the digital maps corresponding to the other scenes for the feature matching the target object, and synchronize the target media content to the other authorized spaces corresponding to the first user identifier according to the relative positional relationship. In this way, the server can automatically complete the addition of the media content in multiple authorized spaces, which improves addition efficiency while ensuring addition accuracy.
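The synchronization step can be sketched as applying one relative positional relationship to every position where the target object was matched. A minimal illustration treating the relationship as a translation offset; the names and values are assumptions:

```python
def synchronize(target_object_positions, relative_offset):
    """Place the target media content in each authorized space by applying the
    same relative positional relationship to that space's matched target object."""
    return [tuple(o + d for o, d in zip(obj, relative_offset))
            for obj in target_object_positions]

# positions at which the feature matching the target object was found
# in the digital maps of the other scenes (illustrative values)
placements = synchronize([(0.0, 0.0, 0.0), (100.0, 5.0, 0.0)], (1.0, 2.0, 0.0))
```

A full implementation would use a rigid transform (rotation plus translation) rather than a pure offset, but the per-scene fan-out is the same.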
Fig. 30 is a schematic structural diagram of a content synchronization device 3000 based on a plurality of authorized spaces of a digital map according to an embodiment of the present application, referring to fig. 30, the device 3000 includes:
a first obtaining module 3001, configured to perform step 1701 in the foregoing embodiment;
a second obtaining module 3002, configured to perform step 1702 in the foregoing embodiment;
a determining module 3003, configured to perform step 1703 in the foregoing embodiment;
the adding module 3004 is configured to perform step 1704 in the foregoing embodiment.
Optionally, the apparatus further comprises:
and the adjusting module is used for adjusting the target media content to enable the adjusted target media content to be matched with the second authorized space if the target media content is not matched with the second authorized space.
Optionally, in the digital map, different user identifications correspond to different authorized spaces.
In the embodiment of the application, the server can acquire the target object, the target media content, and the relative positional relationship between the target media content and the target object sent by the terminal, and then add the target media content in the second authorized space according to that relative positional relationship. That is, the media content in each authorized space corresponding to the user identifier is automatically and uniformly updated, and the updating efficiency is high.
Fig. 31 is a schematic structural diagram of a content synchronization device 3100 based on a plurality of authorized spaces of a digital map according to an embodiment of the present application, referring to fig. 31, the device 3100 includes:
a first acquisition module 3101 for performing step 2001 in the foregoing embodiment;
a determining module 3102 for performing step 2002 in the foregoing embodiment;
a second acquisition module 3103 for performing step 2003 in the foregoing embodiment;
a display module 3104 for performing step 2004 in the foregoing embodiments;
an editing module 3105 for performing step 2005 in the foregoing embodiment;
a transmitting module 3106 for performing step 2006 in the foregoing embodiment.
Optionally, the second acquisition module includes:
the sending sub-module is used for sending the first user identification to the server so that the server can acquire digital maps corresponding to k scenes and a plurality of authorized spaces corresponding to the first user identification in the digital maps corresponding to each scene according to the first user identification;
the receiving sub-module is used for receiving the k digital maps corresponding to the scenes and a plurality of authorized spaces corresponding to the first user identifications in the digital map corresponding to each scene, which are sent by the server;
and the selection sub-module is used for selecting a first scene from k scenes according to a preset rule and acquiring a first digital map corresponding to the first scene.
Optionally, the selection submodule is specifically configured to:
selecting a scene closest to the position of the terminal from k scenes as a first scene; or,
selecting a scene with highest priority from the k scenes as a first scene; or,
a default scene is selected from the k scenes as the first scene.
Optionally, the determining module is specifically configured to:
the method comprises the steps of sending a first user identifier and scene requirements to a server, enabling the server to obtain k scenes corresponding to the first user identifier according to the first user identifier, and obtaining a first scene meeting the scene requirements from the k scenes;
and receiving the first scene sent by the server.
Optionally, the first editing means includes one or more of an adding means, a deleting means, a replacing means, and a means for moving according to a preset relative displacement.
Optionally, in the digital map, different user identifications correspond to different authorized spaces.
Fig. 32 is a schematic structural diagram of a content synchronization device 3200 based on a plurality of authorized spaces of a digital map according to an embodiment of the present application, referring to fig. 32, the device 3200 includes:
a first acquisition module 3201 for performing step 2101 in the previous embodiment;
a second acquisition module 3202 for performing step 2102 in the foregoing embodiment;
An editing module 3203 for performing step 2103 in the previous embodiment.
Optionally, the apparatus further comprises:
and the adjusting module is used for adjusting the edited media content to enable the adjusted media content to be matched with the second authorized space if the edited media content is not matched with the second authorized space.
Optionally, the first editing means includes one or more of an adding means, a deleting means, a replacing means, and a means for moving according to a preset relative displacement.
Optionally, in the digital map, different user identifications correspond to different authorized spaces.
Fig. 33 is a schematic diagram of a digital map-based media content sharing device 3300 according to an embodiment of the present application, where the device is applied to a second terminal, and the device includes:
a first acquisition module 3301 for performing step 2401 in the foregoing embodiment;
a second acquisition module 3302 for performing step 2402 in the foregoing embodiment;
a display module 3303 for executing step 2403 in the foregoing embodiment.
Optionally, the second acquisition module 3302 is specifically configured to:
the target pose is sent to a server, so that the server obtains target media content according to the target pose;
and receiving the target media content sent by the server.
In the embodiment of the application, the second terminal may receive the video of the target scene shared by the first terminal, and, according to the target pose carried in the video, acquire from the server and display the media content added at the corresponding position. In this way, terminals can share media content added to the digital map through video sharing, which facilitates the spread of the media content.
It should be noted that the division into the above functional modules is only used as an example when the devices provided in the foregoing embodiments implement their functions. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to perform all or part of the functions described above. In addition, the devices provided in the foregoing embodiments belong to the same concept as the corresponding method embodiments; their specific implementation processes are detailed in the method embodiments and are not repeated here.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with embodiments of the present invention are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, data subscriber line (Digital Subscriber Line, DSL)) or wireless (e.g., infrared, wireless, microwave, etc.) means. The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., digital versatile Disk (Digital Versatile Disc, DVD)), or a semiconductor medium (e.g., solid State Disk (SSD)), etc.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above embodiments are not intended to limit the present application. Any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the present application shall be included within the protection scope of the present application.

Claims (48)

1. A method for synchronizing contents of a plurality of authorized spaces based on a digital map, the method being applied to a terminal, the method comprising:
acquiring a first user identifier;
determining a first scene according to the first user identification;
acquiring a first digital map corresponding to the first scene; the first digital map comprises a first authorized space; wherein the first authorized space is a three-dimensional space corresponding to the first user identifier; the first authorization space is used for presenting media content;
displaying the first digital map and the first authorized space; the first authorization space displayed by the terminal comprises a boundary of the first authorization space; the first digital map comprises a target object; the target object comprises a preset image or a three-dimensional object;
acquiring target media content;
adding the target media content in the first authorized space; wherein, of the target media content, the media content located in the first authorized space and the media content located outside the first authorized space are displayed in different manners; or the media content located in the first authorized space in the target media content is visible, and the media content located outside the first authorized space in the target media content is invisible;
determining a target relative positional relationship between the target media content and the target object;
sending the first user identifier, the target media content, the target object, and the target relative positional relationship to a server, so that the server updates content of other authorized spaces corresponding to the first user identifier in a preset digital map according to the target media content, the target object, and the target relative positional relationship; wherein the preset digital map comprises the target object.
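The terminal-side flow of claim 1 can be illustrated with a short sketch: the placed media content is anchored to the target object by storing only its relative positional relationship, and that relationship (not the absolute coordinates) is shipped to the server. This is a hypothetical illustration; all function and field names (`build_sync_payload`, `relative_position`, and so on) are assumptions, not identifiers from the patent.

```python
# Hypothetical sketch of the terminal side of claim 1: compute the offset of
# the media content from the target (anchor) object and package the payload
# the terminal would send to the server. Names and shapes are illustrative.

def relative_position(media_pos, anchor_pos):
    """Offset of the media content from the target object, per coordinate."""
    return tuple(m - a for m, a in zip(media_pos, anchor_pos))

def build_sync_payload(user_id, media_content, target_object, media_pos, anchor_pos):
    """Message the terminal sends to the server (last step of claim 1)."""
    return {
        "user_id": user_id,
        "media_content": media_content,
        "target_object": target_object,
        "relative_position": relative_position(media_pos, anchor_pos),
    }

payload = build_sync_payload("user-1", "banner.png", "storefront-logo",
                             media_pos=(12.0, 3.0, 1.5),
                             anchor_pos=(10.0, 3.0, 0.5))
# the offset (2.0, 0.0, 1.0) is what the server later re-applies in other maps
```

Because only the relative relationship travels, the same content can be reproduced in any other digital map that contains the same target object.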
2. The method of claim 1, wherein said determining a first scene from said first user identification comprises:
sending the first user identification to the server, so that the server obtains k scenes according to the first user identification;
receiving the k scenes sent by the server;
and selecting the first scene from the k scenes according to a preset rule.
3. The method of claim 2, wherein the selecting the first scene from the k scenes according to a preset rule comprises:
selecting a scene closest to the position of the terminal from the k scenes as the first scene; or,
selecting a scene with highest priority from the k scenes as the first scene; or,
and selecting a default scene from the k scenes as the first scene.
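The three selection rules of claims 2 and 3 (nearest scene, highest-priority scene, or default scene) can be sketched as follows. The scene record fields (`position`, `priority`, `default`) are assumptions made for illustration.

```python
# Illustrative sketch of the preset selection rules in claims 2-3: given the
# k candidate scenes returned by the server, pick one by distance to the
# terminal, by priority, or by a default flag. Field names are assumptions.
import math

def select_scene(scenes, terminal_pos=None, rule="nearest"):
    if rule == "nearest":
        # scene whose position is closest to the terminal's position
        return min(scenes, key=lambda s: math.dist(s["position"], terminal_pos))
    if rule == "priority":
        # scene with the highest priority value
        return max(scenes, key=lambda s: s["priority"])
    # otherwise fall back to the scene marked as default
    return next(s for s in scenes if s.get("default"))

scenes = [
    {"name": "lobby", "position": (0, 0), "priority": 1, "default": True},
    {"name": "mall",  "position": (5, 5), "priority": 3},
]
```

With a terminal at `(4, 4)` the nearest rule picks `"mall"`, the priority rule also picks `"mall"`, and the default rule picks `"lobby"`.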
4. The method of claim 1, wherein said determining a first scene from said first user identification comprises:
sending the first user identification and scene requirements to the server, so that the server obtains, according to the first user identification, k scenes corresponding to the first user identification, and obtains, from the k scenes, a first scene meeting the scene requirements;
and receiving the first scene sent by the server.
5. The method according to any of claims 1-4, wherein different user identities correspond to different authorized spaces in the digital map.
6. A method for content synchronization of a plurality of authorized spaces based on a digital map, the method comprising:
acquiring a first user identification, a target object, target media content, and a target relative positional relationship between the target media content and the target object, which are transmitted after a terminal displays a first digital map corresponding to a first scene and adds the target media content in a first authorized space in the first digital map; the target object comprises a preset image or a three-dimensional object; the first authorization space displayed by the terminal comprises a boundary of the first authorization space; the first digital map includes the target object; the first authorization space is a three-dimensional space corresponding to the first user identifier; the first authorization space is used for presenting media content; the positional relationship between the target media content in the first authorized space and the target object in the first digital map satisfies the target relative positional relationship; the media content in the first authorized space and the media content outside the first authorized space in the target media content are displayed in different modes; or the media content in the first authorized space is visible in the target media content, and the media content outside the first authorized space is invisible in the target media content;
acquiring a second digital map corresponding to a second scene according to the first user identifier, wherein the second digital map comprises a second authorized space; wherein the second authorized space is a three-dimensional space corresponding to the first user identifier; the second digital map comprises the target object; the second authorization space is used for presenting media content;
determining the position of the target object in the second digital map;
adding the target media content in the second authorization space according to the position of the target object and the target relative position relationship, so that when the terminal presents the second digital map and renders the second authorization space, the position relationship between the target media content in the second authorization space and the target object in the second digital map meets the target relative position relationship; the media content in the second authorized space and the media content outside the second authorized space in the target media content are displayed in different modes; or, the media content located in the second authorized space in the target media content is visible, and the media content located outside the second authorized space in the target media content is not visible.
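The server side of claim 6 locates the target object in the second digital map and re-applies the stored relative positional relationship to place the same media content there. The sketch below, a non-authoritative illustration, also models the "content outside the authorized space is invisible" condition as an axis-aligned-box containment test, which is an assumption about how visibility might be decided.

```python
# Minimal sketch of the server-side placement in claim 6: re-apply the
# relative positional relationship at the target object's position in the
# second map, then test visibility against the authorized space's bounds.
# The box-containment visibility rule is an illustrative assumption.

def place_in_second_map(anchor_pos, relative_pos):
    """Absolute position of the media content in the second digital map."""
    return tuple(a + r for a, r in zip(anchor_pos, relative_pos))

def visible(pos, space_min, space_max):
    """True only if the content lies inside the authorized space's box."""
    return all(lo <= p <= hi for p, lo, hi in zip(pos, space_min, space_max))

# the same (2.0, 0.0, 1.0) offset recorded in the first map, re-anchored here
pos = place_in_second_map(anchor_pos=(100.0, 20.0, 0.0),
                          relative_pos=(2.0, 0.0, 1.0))
```

The positional relationship between the content and the target object is therefore identical in both maps, which is exactly the synchronization condition the claim states.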
7. The method of claim 6, wherein after the target media content is added in the second authorized space according to the target object and the target relative positional relationship, the method further comprises:
and if the target media content is not matched with the second authorized space, adjusting the target media content so that the adjusted target media content is matched with the second authorized space.
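Claim 7 leaves "adjusting the target media content" open. One plausible reading, sketched here purely as an assumption, is uniformly scaling the content's bounding size down until it fits the second authorized space.

```python
# Hypothetical adjustment step for claim 7: if the content's bounding size
# does not fit the second authorized space, scale it down uniformly until
# it does. The uniform-scaling strategy is an assumption for illustration.

def fit_scale(content_size, space_size):
    """Largest uniform scale (at most 1) at which the content fits the space."""
    return min(1.0, *(s / c for c, s in zip(content_size, space_size)))

def adjust(content_size, space_size):
    """Adjusted content size that matches the second authorized space."""
    k = fit_scale(content_size, space_size)
    return tuple(c * k for c in content_size)
```

Content that already fits is left unchanged (`fit_scale` returns 1.0), so the adjustment is a no-op in the matched case.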
8. The method according to claim 6 or 7, characterized in that in the digital map different user identities correspond to different authorized spaces.
9. A method for synchronizing contents of a plurality of authorized spaces based on a digital map, applied to a terminal, the method comprising:
acquiring a first user identifier;
determining a first scene according to the first user identification;
acquiring a first digital map corresponding to the first scene; the first digital map comprises a first authorized space; wherein the first authorized space is a three-dimensional space corresponding to the first user identifier; the first authorization space displayed by the terminal comprises a boundary of the first authorization space; the first authorization space is used for presenting media content;
displaying the first digital map, the first authorized space, and first media content included in the first authorized space; wherein, the display modes of the media content in the first authorization space and the media content outside the first authorization space in the first media content are different; or, the media content located in the first authorized space in the first media content is visible, and the media content located outside the first authorized space in the first media content is not visible;
editing the first media content according to a first editing mode;
and sending the first user identifier, the first media content and the first editing mode to a server, so that the server edits the first media content in other authorized spaces corresponding to the first user identifier in a preset digital map according to the first editing mode.
10. The method of claim 9, wherein said determining a first scene from said first user identification comprises:
sending the first user identification to the server, so that the server obtains k scenes according to the first user identification;
receiving the k scenes sent by the server;
and selecting the first scene from the k scenes according to a preset rule.
11. The method of claim 10, wherein the selecting the first scene from the k scenes according to a preset rule comprises:
selecting a scene closest to the position of the terminal from the k scenes as the first scene; or,
selecting a scene with highest priority from the k scenes as the first scene; or,
and selecting a default scene from the k scenes as the first scene.
12. The method of claim 9, wherein said determining a first scene from said first user identification comprises:
sending the first user identification and scene requirements to the server, so that the server obtains, according to the first user identification, k scenes corresponding to the first user identification, and obtains, from the k scenes, a first scene meeting the scene requirements;
and receiving the first scene sent by the server.
13. The method of any of claims 9-12, wherein the first editing mode includes one or more of an adding mode, a deleting mode, a replacing mode, and a moving mode according to a preset relative displacement.
14. A method according to any of claims 9-12, characterized in that in the digital map different user identities correspond to different authorized spaces.
15. The method of claim 13, wherein different user identifications correspond to different authorized spaces in the digital map.
16. A method for content synchronization of a plurality of authorized spaces based on a digital map, the method comprising:
acquiring a first user identification, first media content, and a first editing mode, which are transmitted after a terminal displays a first digital map corresponding to a first scene and edits the first media content in a first authorized space in the first digital map according to the first editing mode; the first authorization space is a three-dimensional space corresponding to the first user identifier; the first authorization space displayed by the terminal comprises a boundary of the first authorization space; the first authorization space is used for presenting media content; wherein, the display modes of the media content in the first authorization space and the media content outside the first authorization space in the first media content are different; or, the media content located in the first authorized space in the first media content is visible, and the media content located outside the first authorized space in the first media content is not visible;
Acquiring a second digital map corresponding to a second scene according to the first user identifier, wherein the second digital map comprises a second authorized space which is a three-dimensional space corresponding to the first user identifier; the second authorized space includes the first media content;
editing the first media content included in the second authorized space according to the first editing mode; wherein, the display modes of the media content in the second authorization space and the media content outside the second authorization space in the first media content are different; or, the media content located in the second authorized space in the first media content is visible, and the media content located outside the second authorized space in the first media content is not visible.
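The editing modes of claim 13 and the server-side replay of claim 16 can be sketched together: one edit (add, delete, replace, or move by a preset relative displacement) is applied to a space's content list, and the server replays that same edit in every authorized space bound to the user identifier. All data shapes and names here are illustrative assumptions, not taken from the patent.

```python
# Hedged sketch of claims 13 and 16: the four editing modes applied to one
# authorized space, and the server replaying the same edit in each of the
# user's authorized spaces. Data shapes and names are assumptions.

def apply_edit(contents, edit):
    """Apply one edit (claim 13's modes) to a list of media-content items."""
    mode = edit["mode"]
    if mode == "add":
        return contents + [edit["item"]]
    if mode == "delete":
        return [c for c in contents if c["id"] != edit["id"]]
    if mode == "replace":
        return [edit["item"] if c["id"] == edit["id"] else c for c in contents]
    if mode == "move":  # shift by the preset relative displacement
        d = edit["displacement"]
        return [{**c, "pos": tuple(p + q for p, q in zip(c["pos"], d))}
                if c["id"] == edit["id"] else c for c in contents]
    raise ValueError(f"unknown editing mode: {mode}")

def sync_edit(spaces, edit):
    """Server side of claim 16: replay the edit in each authorized space."""
    return {space_id: apply_edit(contents, edit)
            for space_id, contents in spaces.items()}

spaces = {"space-1": [{"id": 1, "pos": (0.0, 0.0, 0.0)}],
          "space-2": [{"id": 1, "pos": (5.0, 0.0, 0.0)}]}
synced = sync_edit(spaces, {"mode": "move", "id": 1,
                            "displacement": (1.0, 0.0, 0.0)})
```

Note that the displacement is relative, so the same edit moves the content by one unit in every space even though the absolute positions differ per space.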
17. The method of claim 16, wherein after the first media content included in the second authorized space is edited in the first editing mode, the method further comprises:
and if the edited media content is not matched with the second authorized space, adjusting the edited media content to enable the adjusted media content to be matched with the second authorized space.
18. The method of claim 16 or 17, wherein the first editing mode includes one or more of an adding mode, a deleting mode, a replacing mode, and a moving mode according to a preset relative displacement.
19. A method according to claim 16 or 17, characterized in that in the digital map the authorized spaces corresponding to different user identities are different.
20. The method of claim 18, wherein the authorized spaces corresponding to different user identifications are different in the digital map.
21. A content synchronization device for a plurality of authorized spaces based on a digital map, the device being applied to a terminal, the device comprising:
the first acquisition module is used for acquiring a first user identifier;
the first determining module is used for determining a first scene according to the first user identification;
the second acquisition module is used for acquiring a first digital map corresponding to the first scene, wherein the first digital map comprises a first authorized space; the first authorization space is a three-dimensional space corresponding to the first user identifier; the first authorization space is used for presenting media content;
the display module is used for displaying the first digital map and the first authorized space; the first digital map comprises a target object; the target object comprises a preset image or a three-dimensional object; the first authorization space displayed by the terminal comprises a boundary of the first authorization space;
a third acquisition module, configured to acquire target media content;
an adding module, configured to add the target media content in the first authorization space; the display modes of the media content in the first authorized space and the media content outside the first authorized space in the target media content are different; or the media content in the first authorized space is visible in the target media content, and the media content outside the first authorized space is invisible in the target media content;
a second determining module, configured to determine a target relative positional relationship between the target media content and the target object;
the sending module is configured to send the first user identifier, the target media content, the target object, and the target relative position relationship to a server, so that the server updates content in other authorized spaces corresponding to the first user identifier in a preset digital map according to the target media content, the target object, and the target relative position relationship, where the preset digital map includes the target object.
22. The apparatus of claim 21, wherein the first determining module comprises:
a sending sub-module, configured to send the first user identifier to the server, so that the server obtains k scenes according to the first user identifier;
a receiving sub-module, configured to receive the k scenes sent by the server;
and the selection sub-module is used for selecting the first scene from the k scenes according to a preset rule.
23. The apparatus of claim 22, wherein the selection submodule is specifically configured to:
selecting a scene closest to the position of the terminal from the k scenes as the first scene; or,
selecting a scene with highest priority from the k scenes as the first scene; or,
and selecting a default scene from the k scenes as the first scene.
24. The apparatus of claim 21, wherein the first determining module is specifically configured to:
sending the first user identification and scene requirements to the server, so that the server obtains, according to the first user identification, k scenes corresponding to the first user identification, and obtains, from the k scenes, a first scene meeting the scene requirements;
and receiving the first scene sent by the server.
25. The apparatus according to any of claims 21-24, wherein different user identities correspond to different authorized spaces in the digital map.
26. A digital map-based content synchronization apparatus for a plurality of authorized spaces, the apparatus comprising:
the first acquisition module is used for acquiring a first user identifier, a target object, target media content and a target relative position relationship between the target media content and the target object which are transmitted after the terminal displays a first digital map corresponding to a first scene and adds the target media content in a first authorized space in the first digital map; the target object comprises a preset image or a three-dimensional object; the first authorization space displayed by the terminal comprises a boundary of the first authorization space; the first digital map includes the target object; the first authorization space is a three-dimensional space corresponding to the first user identifier; the first authorization space is used for presenting media content; the positional relationship between the target media content in the first authorized space and the target object in the first digital map satisfies the target relative positional relationship; the media content in the first authorized space and the media content outside the first authorized space in the target media content are displayed in different modes; or the media content in the first authorized space is visible in the target media content, and the media content outside the first authorized space is invisible in the target media content;
the second acquisition module is used for acquiring a second digital map corresponding to a second scene according to the first user identifier, wherein the second digital map comprises a second authorized space; wherein the second authorized space is a three-dimensional space corresponding to the first user identifier; the second digital map comprises the target object; the second authorization space is used for presenting media content;
a determining module, configured to determine a location of the target object in the second digital map;
an adding module, configured to add the target media content in the second authorized space according to the position of the target object and the target relative position relationship, so that when the terminal presents the second digital map and renders the second authorized space, the position relationship between the target media content in the second authorized space and the target object in the second digital map satisfies the target relative position relationship; the media content in the second authorized space and the media content outside the second authorized space in the target media content are displayed in different modes; or, the media content located in the second authorized space in the target media content is visible, and the media content located outside the second authorized space in the target media content is not visible.
27. The apparatus of claim 26, wherein the apparatus further comprises:
and the adjusting module is used for adjusting the target media content to enable the adjusted target media content to be matched with the second authorized space if the target media content is not matched with the second authorized space.
28. The apparatus of claim 26 or 27, wherein different user identities correspond to different authorized spaces in the digital map.
29. A content synchronization device for a plurality of authorized spaces based on a digital map, applied to a terminal, the device comprising:
the first acquisition module is used for acquiring a first user identifier;
the determining module is used for determining a first scene according to the first user identification;
the second acquisition module is used for acquiring a first digital map corresponding to the first scene; the first digital map comprises a first authorized space; wherein the first authorized space is a three-dimensional space corresponding to the first user identifier; the first authorization space is used for presenting media content;
the display module is used for displaying the first digital map, the first authorized space and first media content included in the first authorized space; the first authorization space displayed by the terminal comprises a boundary of the first authorization space; wherein, the display modes of the media content in the first authorization space and the media content outside the first authorization space in the first media content are different; or, the media content located in the first authorized space in the first media content is visible, and the media content located outside the first authorized space in the first media content is not visible;
the editing module is used for editing the first media content according to a first editing mode;
and the sending module is used for sending the first user identification, the first media content and the first editing mode to a server so that the server edits the first media content in other authorized spaces corresponding to the first user identification in a preset digital map according to the first editing mode.
30. The apparatus of claim 29, wherein the determining module comprises:
a sending sub-module, configured to send the first user identifier to the server;
the receiving sub-module is used for receiving k scenes corresponding to the first user identification sent by the server;
and the selection sub-module is used for selecting the first scene from the k scenes according to a preset rule.
31. The apparatus of claim 30, wherein the selection submodule is specifically configured to:
selecting a scene closest to the position of the terminal from the k scenes as the first scene; or,
selecting a scene with highest priority from the k scenes as the first scene; or,
and selecting a default scene from the k scenes as the first scene.
32. The apparatus of claim 29, wherein the determining module is specifically configured to:
sending the first user identification and scene requirements to the server, so that the server obtains, according to the first user identification, k scenes corresponding to the first user identification, and obtains, from the k scenes, a first scene meeting the scene requirements;
and receiving the first scene sent by the server.
33. The apparatus of any one of claims 29-32, wherein the first editing mode comprises one or more of an adding mode, a deleting mode, a replacing mode, and a moving mode according to a preset relative displacement.
34. The apparatus of any of claims 29-32, wherein different user identifications correspond to different authorized spaces in the digital map.
35. The apparatus of claim 33, wherein different user identifications correspond to different authorized spaces in the digital map.
36. A digital map-based content synchronization apparatus for a plurality of authorized spaces, the apparatus comprising:
the first acquisition module is used for acquiring a first user identifier, first media content, and a first editing mode, which are transmitted after the terminal displays a first digital map corresponding to a first scene and edits the first media content in a first authorization space in the first digital map according to the first editing mode; the first authorization space is a three-dimensional space corresponding to the first user identifier; the first authorization space displayed by the terminal comprises a boundary of the first authorization space; the first authorization space is used for presenting media content; wherein, the display modes of the media content in the first authorization space and the media content outside the first authorization space in the first media content are different; or, the media content located in the first authorized space in the first media content is visible, and the media content located outside the first authorized space in the first media content is not visible;
the second acquisition module is used for acquiring a second digital map corresponding to a second scene according to the first user identifier, wherein the second digital map comprises a second authorized space which is a three-dimensional space corresponding to the first user identifier; the second authorized space includes the first media content;
the editing module is used for editing the first media content included in the second authorization space according to the first editing mode; wherein, the display modes of the media content in the second authorization space and the media content outside the second authorization space in the first media content are different; or, the media content located in the second authorized space in the first media content is visible, and the media content located outside the second authorized space in the first media content is not visible.
37. The apparatus of claim 36, wherein the apparatus further comprises:
and the adjusting module is used for adjusting the edited media content to enable the adjusted media content to be matched with the second authorization space if the edited media content is not matched with the second authorization space.
38. The apparatus of claim 36 or 37, wherein the first editing mode comprises one or more of an adding mode, a deleting mode, a replacing mode, and a moving mode according to a preset relative displacement.
39. The apparatus of claim 36 or 37, wherein different user identities correspond to different authorized spaces in the digital map.
40. The apparatus of claim 38, wherein different user identifications correspond to different authorized spaces in the digital map.
41. A computer readable storage medium having instructions stored therein which, when executed on a computer, cause the computer to perform the digital map-based content synchronization method of a plurality of authorized spaces of any one of claims 1-5.
42. A computer readable storage medium having instructions stored therein which, when executed on a computer, cause the computer to perform the digital map-based content synchronization method of a plurality of authorized spaces of any one of claims 6-8.
43. A computer readable storage medium having instructions stored therein which, when executed on a computer, cause the computer to perform the digital map-based content synchronization method of a plurality of authorized spaces of any one of claims 9-15.
44. A computer readable storage medium having instructions stored therein which, when executed on a computer, cause the computer to perform the digital map-based content synchronization method of a plurality of authorized spaces of any one of claims 16-20.
45. A terminal comprising a processor, a memory, a camera, a transceiver, and a communication bus, wherein the processor, the memory, the camera, and the transceiver are all connected through the communication bus; the memory is used for storing a program for supporting the terminal to execute the method for synchronizing contents of a plurality of authorized spaces based on a digital map according to any one of claims 1 to 5; the camera is used for collecting video streams; the transceiver is used for receiving or transmitting data; and the processor, by executing the program stored in the memory, controls the camera and the transceiver to implement the method for synchronizing contents of a plurality of authorized spaces based on a digital map according to any one of claims 1 to 5.
46. A terminal comprising a processor, a memory, a camera, a transceiver, and a communication bus, wherein the processor, the memory, the camera, and the transceiver are all connected through the communication bus; the memory is used for storing a program for supporting the terminal to execute the method for synchronizing contents of a plurality of authorized spaces based on a digital map according to any one of claims 9 to 15; the camera is used for collecting video streams; the transceiver is used for receiving or transmitting data; and the processor, by executing the program stored in the memory, controls the camera and the transceiver to implement the method for synchronizing contents of a plurality of authorized spaces based on a digital map according to any one of claims 9 to 15.
47. A server comprising a processor, a memory, a transceiver and a communication bus, the processor, the memory and the transceiver being all connected by the communication bus, the memory being for storing a program for supporting the server to perform the method for content synchronization of a plurality of authorized spaces based on a digital map as claimed in any one of claims 6 to 8, the transceiver being for receiving or transmitting data, the processor controlling the transceiver to implement the method for content synchronization of a plurality of authorized spaces based on a digital map as claimed in any one of claims 6 to 8 by executing the program stored in the memory.
48. A server comprising a processor, a memory, a transceiver, and a communication bus, the processor, the memory, and the transceiver being all connected by the communication bus, the memory being for storing a program for supporting the server to perform the digital map-based content synchronization method of the plurality of authorized spaces according to any one of claims 16 to 20, the transceiver being for receiving or transmitting data, the processor controlling the transceiver to implement the digital map-based content synchronization method of the plurality of authorized spaces according to any one of claims 16 to 20 by executing the program stored in the memory.
CN201911089945.4A 2019-11-08 2019-11-08 Content synchronization method for multiple authorized spaces based on digital map Active CN112783993B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911089945.4A CN112783993B (en) 2019-11-08 2019-11-08 Content synchronization method for multiple authorized spaces based on digital map

Publications (2)

Publication Number Publication Date
CN112783993A CN112783993A (en) 2021-05-11
CN112783993B true CN112783993B (en) 2024-03-15

Family

ID=75748499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911089945.4A Active CN112783993B (en) 2019-11-08 2019-11-08 Content synchronization method for multiple authorized spaces based on digital map

Country Status (1)

Country Link
CN (1) CN112783993B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103853913A (en) * 2012-12-03 2014-06-11 三星电子株式会社 Method for operating augmented reality contents and device and system for supporting the same
CN104680569A (en) * 2015-03-23 2015-06-03 厦门幻世网络科技有限公司 Method updating target 3D animation based on mobile terminal and device adopting method
CN106716306A (en) * 2014-09-30 2017-05-24 索尼互动娱乐股份有限公司 Synchronizing multiple head-mounted displays to a unified space and correlating movement of objects in the unified space
CN107833105A (en) * 2017-11-14 2018-03-23 青岛理工大学 A kind of market visualization management of leasing method and system based on building construction information model
CN109937394A (en) * 2016-10-04 2019-06-25 脸谱公司 Control and interface for user's interaction in Virtual Space

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012164685A1 (en) * 2011-05-31 2012-12-06 Rakuten, Inc. Information providing device, information providing method, information providing processing program, recording medium recording information providing processing program, and information providing system
US10083532B2 (en) * 2015-04-13 2018-09-25 International Business Machines Corporation Synchronized display of street view map and video stream
US20160335289A1 (en) * 2015-05-12 2016-11-17 Randy Alan Andrews Registration of virtual object association rights for augmented reality environment
US10222932B2 (en) * 2015-07-15 2019-03-05 Fyusion, Inc. Virtual reality environment based manipulation of multilayered multi-view interactive digital media representations
WO2017165538A1 (en) * 2016-03-22 2017-09-28 Uru, Inc. Apparatus, systems, and methods for integrating digital media content into other digital media content
US11017601B2 (en) * 2016-07-09 2021-05-25 Doubleme, Inc. Mixed-reality space map creation and mapping format compatibility-enhancing method for a three-dimensional mixed-reality space and experience construction sharing system
JP6229089B1 (en) * 2017-04-26 2017-11-08 Colopl, Inc. Method executed by a computer to communicate via a virtual space, program causing a computer to execute the method, and information processing apparatus
US10671239B2 (en) * 2017-12-18 2020-06-02 Sony Interactive Entertainment America Llc Three dimensional digital content editing in virtual reality

Also Published As

Publication number Publication date
CN112783993A (en) 2021-05-11

Similar Documents

Publication Publication Date Title
CN112130742B (en) Full screen display method and device of mobile terminal
CN115866121B (en) Application interface interaction method, electronic device and computer readable storage medium
CN113794801B (en) Method and device for processing geo-fence
CN110495819B (en) Robot control method, robot, terminal, server and control system
CN115473957B (en) Image processing method and electronic equipment
WO2020029306A1 (en) Image capture method and electronic device
CN112383664B (en) Device control method, first terminal device, second terminal device and computer readable storage medium
CN111249728B (en) Image processing method, device and storage medium
CN114842069A (en) Pose determination method and related equipment
CN114444000A (en) Page layout file generation method and device, electronic equipment and readable storage medium
CN112416984A (en) Data processing method and device
CN112783993B (en) Content synchronization method for multiple authorized spaces based on digital map
CN116797767A (en) Augmented reality scene sharing method and electronic device
CN115032640A (en) Gesture recognition method and terminal equipment
CN116561085A (en) Picture sharing method and electronic equipment
CN117009005A (en) Display method, automobile and electronic equipment
CN116414500A (en) Recording method, acquisition method and terminal equipment for operation guide information of electronic equipment
CN112597417A (en) Page updating method and device, electronic equipment and storage medium
WO2021089006A1 (en) Digital space management method, apparatus, and device
WO2023116669A1 (en) Video generation system and method, and related apparatus
US20230114178A1 (en) Image display method and electronic device
CN115686182B (en) Processing method of augmented reality video and electronic equipment
CN117666861A (en) Service card distribution method, system and electronic equipment
CN116206602A (en) Voice analysis method, electronic device, readable storage medium and chip system
CN117671203A (en) Virtual digital content display system, method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant