WO2020258938A1 - Anchor point sharing method and apparatus, system, electronic device and storage medium - Google Patents

Anchor point sharing method and apparatus, system, electronic device and storage medium

Info

Publication number
WO2020258938A1
Authority
WO
WIPO (PCT)
Prior art keywords
anchor point
shared
anchor
information
identity
Prior art date
Application number
PCT/CN2020/080481
Other languages
English (en)
French (fr)
Inventor
金嘉诚
廖锦毅
谢卫健
章国锋
Original Assignee
浙江商汤科技开发有限公司
Priority date
Filing date
Publication date
Application filed by 浙江商汤科技开发有限公司
Priority to JP2021549776A (JP7245350B2)
Priority to SG11202109292SA
Publication of WO2020258938A1
Priority to US17/407,214 (US20210383580A1)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/131Protocols for games, networked simulations or virtual reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/52Network services specially adapted for the location of the user terminal
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/08Network architectures or network communication protocols for network security for authentication of entities
    • H04L63/0823Network architectures or network communication protocols for network security for authentication of entities using certificates
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network

Definitions

  • the present disclosure relates to the field of augmented reality technology, and in particular to an anchor sharing method and device, system, electronic equipment and storage medium.
  • Augmented Reality can combine real world information with virtual world information, and display virtual visual information in real world images through devices. How to allow multiple users to see the same virtual world and real world on their respective screens is critical to augmented reality technology.
  • the present disclosure proposes a technical solution for anchor point sharing.
  • An anchor point sharing method is provided, including: a server receives an anchor point sharing request sent by a terminal, where the anchor point sharing request includes the current scene scanned by the terminal and the acquired identity of the anchor point to be shared; the server determines the shared location information of the anchor point to be shared in the current scene according to the anchor point sharing request; and the server feeds back the shared location information to the terminal.
  • Through this process, the anchor point can be fully shared between different terminals or different time periods, so that multi-person sharing, single-person multi-period sharing, or multi-person multi-period sharing of AR scenes can be achieved, thereby greatly improving the consistency and shareability of AR technology.
  • Before the server receives the anchor point sharing request sent by the terminal, the method further includes: the server obtains the anchor point information of the anchor point to be shared; and the server generates the identity of the anchor point to be shared according to the anchor point information, and saves the anchor point information and the identity.
  • The identity and anchor point information are stored in the server, so that the server can directly determine the shared location information of the anchor point to be shared in the current scene based on the data stored in the server, without reading the corresponding anchor point information from the main terminal, thereby effectively improving the efficiency of the anchor point sharing process.
  • After the server generates the identity of the anchor point to be shared according to the anchor point information and saves the anchor point information and the identity, the method may further include: when the time for which the anchor point information and the identity have been saved exceeds a time threshold, the server deletes the anchor point information and the identity.
  • The anchor point information of the anchor point to be shared includes: first feature information of the anchor point scene in which the anchor point to be shared is located, and original location information of the anchor point to be shared in the anchor point scene.
  • With the anchor point information of the anchor point to be shared including the first feature information and the original position information, the environment information of the anchor point scene where the anchor point to be shared is located and the positional relationship between the anchor point to be shared and the anchor point scene can be effectively determined. It is therefore convenient, in subsequent steps, to locate the shared position information of the anchor point to be shared in the current scene based on the comparison result between the first feature information and the current scene, in combination with the original position information.
  • the anchor point sharing request includes: the second feature information of the current scene, the identity of the anchor point to be shared, and an anchor point sharing request message.
  • Because the anchor point sharing request includes the second feature information, the identity, and the anchor point sharing request message, the server can, while receiving the request message sent by the slave terminal, determine the identity of the anchor point to be shared that the slave terminal requests, and directly acquire the second feature information so that the shared location information can subsequently be determined entirely within the server. This reduces repeated interactions with the slave terminal and improves the efficiency of the anchor point sharing process.
  • The server determining the shared location information of the anchor point to be shared in the current scene according to the anchor point sharing request includes: the server respectively obtains, according to the anchor point sharing request, the identity of the anchor point to be shared and the second feature information of the current scene; the server reads the anchor point information of the anchor point to be shared according to the identity; and the server determines the shared location information of the anchor point to be shared in the current scene according to the second feature information and the anchor point information.
  • Based on the internally stored data and the received anchor point sharing request, the server can determine the location of the anchor point to be shared in the current scene simply by reading data, without multiple interactions with the terminal, so that the sharing of AR scenes is completed with high efficiency and high accuracy.
  • The server determining the shared location information of the anchor point to be shared in the current scene according to the second feature information and the anchor point information includes: the server obtains, according to the anchor point information, the first feature information of the anchor point scene where the anchor point to be shared is located and the original position information of the anchor point to be shared in the anchor point scene; the server performs feature matching between the first feature information and the second feature information to obtain the position transformation relationship between the current scene and the anchor point scene; and the server performs position transformation on the original position information according to the position transformation relationship to obtain the shared location information of the anchor point to be shared in the current scene.
  • In this way, the position transformation relationship between the current scene and the anchor point scene is obtained, and then, based on this position transformation relationship and the original position information, the shared location information of the anchor point to be shared in the current scene is obtained through position transformation. This process involves little computation and a convenient calculation method; while ensuring the accuracy of the determined shared location information, it greatly reduces the computation time, thereby improving the speed and efficiency of the anchor point sharing process.
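  • Expressed compactly (a sketch in assumed notation, not wording from the disclosure), the position transformation can be written as:

```latex
% T maps anchor-scene coordinates to current-scene coordinates and is obtained by
% feature-matching the first feature information against the second feature information.
p_{\mathrm{shared}} \;=\; T_{\mathrm{anchor}\to\mathrm{current}} \, p_{\mathrm{original}}
```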
  • An anchor point sharing method is further provided, including: a terminal obtains the identity of an anchor point to be shared; the terminal sends an anchor point sharing request to the server according to the current scene and the identity, where the anchor point sharing request is used to instruct the server to determine the shared location information of the anchor point to be shared in the current scene; and the terminal receives the shared location information fed back by the server.
  • The terminal only needs to obtain the current scene and the identity of the target anchor point, and can then obtain the shared location information of the anchor point to be shared in the current scene by communicating with the server, thereby realizing the sharing of the AR scene. This sharing method is simple and efficient, and is suitable for widespread adoption.
  • The terminal acquiring the identity of the anchor point to be shared includes: the terminal acquires shared information; and the terminal obtains the identity of the anchor point to be shared according to the correspondence between the shared information and the identity.
  • In this way, the identity of the anchor point to be shared can be obtained more conveniently, without a complicated analysis process. Obtaining the identity indirectly through shared information can also reduce the risk of identity leakage, improving the efficiency of anchor point sharing while ensuring security.
  • The terminal sending an anchor point sharing request to the server according to the current scene and the identity includes: the terminal scans the current scene and obtains an image of the current scene; the terminal performs feature extraction on the image of the current scene to obtain the second feature information; and the terminal sends the second feature information, the identity, and the anchor point sharing request message together to the server as the anchor point sharing request.
  • In this way, the terminal can directly transmit the data required by the server while requesting the server to complete the anchor point sharing, without a cumbersome interaction process, improving the efficiency of anchor point sharing.
  • An anchor point sharing method is further provided, which includes: a terminal obtains the identity of the anchor point to be shared; the terminal sends an anchor point sharing request to the server according to the current scene and the identity; the server determines the shared location information of the anchor point to be shared in the current scene according to the anchor point sharing request; and the terminal receives the shared location information fed back by the server.
  • the anchor point to be shared can be positioned in the current scene through the server according to the identity of the anchor point to be shared, so that different terminals can share the virtual world in the same real scene and realize the sharing of augmented reality technology.
  • An anchor point sharing device is provided, which is applied to a server and includes: an anchor point sharing request receiving module, configured to receive an anchor point sharing request sent by a terminal, where the anchor point sharing request includes the current scene scanned by the terminal and the acquired identity of the anchor point to be shared; a shared location information determination module, configured to determine the shared location information of the anchor point to be shared in the current scene according to the anchor point sharing request; and a feedback module, configured to feed back the shared location information to the terminal.
  • The device further includes an identity generation module that operates before the anchor point sharing request receiving module, and the identity generation module is configured to: obtain the anchor point information of the anchor point to be shared; generate the identity of the anchor point to be shared according to the anchor point information; and save the anchor point information and the identity.
  • the identity generation module is further configured to: when the time for storing the anchor point information and the identity identifier exceeds a time threshold, the server deletes the anchor point information and the identity identifier .
  • The anchor point information of the anchor point to be shared includes: first feature information of the anchor point scene in which the anchor point to be shared is located, and original location information of the anchor point to be shared in the anchor point scene.
  • the anchor point sharing request includes: the second feature information of the current scene, the identity of the anchor point to be shared, and an anchor point sharing request message.
  • The shared location information determination module is configured to: obtain the identity of the anchor point to be shared and the second feature information of the current scene according to the anchor point sharing request; read the anchor point information of the anchor point to be shared according to the identity; and determine the shared location information of the anchor point to be shared in the current scene according to the second feature information and the anchor point information.
  • The shared location information determination module is further configured to: obtain, according to the anchor point information, the first feature information of the anchor point scene where the anchor point to be shared is located and the original position information of the anchor point to be shared in the anchor point scene; perform feature matching between the first feature information and the second feature information to obtain the position transformation relationship between the current scene and the anchor point scene; and perform position transformation on the original position information according to the position transformation relationship to obtain the shared location information of the anchor point to be shared in the current scene.
  • An anchor point sharing device is further provided, which is applied to a terminal and includes: an identity acquisition module, configured to acquire the identity of an anchor point to be shared; an anchor point sharing request sending module, configured to send an anchor point sharing request to the server according to the current scene and the identity, where the anchor point sharing request is used to instruct the server to determine the shared location information of the anchor point to be shared in the current scene; and a shared location information receiving module, configured to receive the shared location information fed back by the server.
  • the identity acquisition module is used to: acquire shared information; obtain the identity of the anchor to be shared according to the correspondence between the shared information and the identity.
  • The anchor point sharing request sending module is configured to: scan the current scene to obtain an image of the current scene; perform feature extraction on the image of the current scene to obtain the second feature information; and send the second feature information, the identity, and the anchor point sharing request message together to the server as the anchor point sharing request.
  • An anchor point sharing system is provided, including: the first anchor point sharing device as described in the third aspect; and the second anchor point sharing device as described in the fourth aspect; wherein the first anchor point sharing device and the second anchor point sharing device interact through an anchor point sharing request.
  • An electronic device is provided, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to execute the method of the first aspect described above.
  • Another electronic device is provided, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to execute the method of the second aspect described above.
  • a computer-readable storage medium having computer program instructions stored thereon, and when the computer program instructions are executed by a processor, the method of the first aspect described above is implemented.
  • a computer-readable storage medium having computer program instructions stored thereon, and when the computer program instructions are executed by a processor, the method of the second aspect described above is implemented.
  • A computer program is provided, including computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes instructions for implementing the method described in any one of the above.
  • In the embodiments of the present disclosure, the identity of the anchor point to be shared is obtained through the terminal, and the anchor point sharing request is sent to the server according to the current scene and the identity; the server determines the shared location information of the anchor point to be shared in the current scene according to the anchor point sharing request, and feeds back the shared location information to the terminal.
  • the anchor point to be shared can be positioned in the current scene through the server according to the identity of the anchor point to be shared, so that different terminals can share the virtual world in the same real scene and realize the sharing of augmented reality technology.
  • Fig. 1 shows a flowchart of an anchor sharing method according to an embodiment of the present disclosure.
  • Fig. 2 shows a flowchart of an anchor point sharing method according to an embodiment of the present disclosure.
  • Fig. 3 shows a flowchart of an anchor point sharing method according to an embodiment of the present disclosure.
  • Fig. 4 shows a schematic diagram of a coordinate system of an anchor point scene according to an embodiment of the present disclosure.
  • Fig. 5 shows a schematic diagram of a coordinate system of a current scene according to an embodiment of the present disclosure.
  • Fig. 6 shows a flowchart of an anchor point sharing method according to an embodiment of the present disclosure.
  • Fig. 7 shows a flowchart of an anchor point sharing method according to an embodiment of the present disclosure.
  • Fig. 8 shows a schematic diagram of an application example according to the present disclosure.
  • Fig. 9 shows a schematic diagram of an application example according to the present disclosure.
  • Fig. 10 shows a block diagram of an anchor point sharing device according to an embodiment of the present disclosure.
  • Fig. 11 shows a block diagram of an anchor sharing device according to an embodiment of the present disclosure.
  • Fig. 12 shows a block diagram of an anchor point sharing system according to an embodiment of the present disclosure.
  • Fig. 13 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
  • Fig. 14 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
  • Fig. 1 shows a flowchart of an anchor sharing method according to an embodiment of the present disclosure.
  • the method can be applied to a server.
  • the specific type, model, and implementation of the server are not limited, and can be flexibly selected according to actual conditions.
  • the anchor point sharing method may include:
  • Step S11 The server receives an anchor point sharing request sent by the terminal, where the anchor point sharing request includes the current scene scanned by the terminal and the acquired identity of the anchor point to be shared.
  • Step S12 The server determines the sharing location information of the anchor point to be shared in the current scene according to the anchor point sharing request.
  • Step S13 the server feeds back the shared location information to the terminal.
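  • As a rough illustration of steps S11 to S13, the following Python sketch outlines one possible server-side handler; the function names, field names, and storage layout are assumptions made for illustration and are not part of the disclosure.

```python
# Illustrative sketch of the server-side flow in steps S11-S13 (all names assumed).
def handle_anchor_sharing_request(request, anchor_store, locate_anchor):
    """Receive an anchor point sharing request and feed back the shared location information."""
    # Step S11: the request carries the scanned current scene (second feature
    # information) and the identity of the anchor point to be shared.
    anchor_id = request["anchor_id"]
    second_feature_info = request["second_feature_info"]

    # Step S12: look up the stored anchor point information by identity and determine
    # the shared location of the anchor point in the current scene.
    anchor_info = anchor_store[anchor_id]
    shared_location = locate_anchor(anchor_info, second_feature_info)

    # Step S13: feed the shared location information back to the terminal.
    return {"anchor_id": anchor_id, "shared_location": shared_location}
```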
  • the anchor point to be shared may be generated by a method of generating anchor points after scanning the anchor point scene where the anchor point is located by a certain terminal device.
  • the terminal that scans the anchor scene may be referred to as the master terminal, and the master terminal serves as the terminal device.
  • the specific implementation manner is not limited, and can be flexibly determined according to actual conditions.
  • The main terminal can be a user equipment (UE, User Equipment), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA, Personal Digital Assistant), a handheld device, a computing device, an in-vehicle device, a wearable device, etc.
  • the specific implementation method of anchor point generation is also not limited.
  • The anchor points to be shared can be generated by extracting features from the results of the main terminal scanning the anchor point scene, and performing anchor point prediction based on a neural network. Since the method of anchor point generation is not limited, the number of generated anchor points to be shared is also not limited; it can be one or multiple, and can be flexibly determined depending on the actual situation of the anchor point scene and the specific method of anchor point generation.
  • the terminal that sends the anchor point sharing request to the server may be referred to as a slave terminal, and the slave terminal is used as a terminal device.
  • the specific implementation manner thereof is also not limited, and will not be repeated here.
  • the number of slave terminals is also not limited, and may be one or multiple. In an example, the number of slave terminals may be one.
  • the server can realize the sharing of the anchor point to be shared between the master terminal and the slave terminal by receiving the anchor point sharing request sent by the slave terminal.
  • the number of slave terminals can be multiple, and the specific number is not limited here. It can be flexibly selected according to the actual situation.
  • The server can receive anchor point sharing requests sent by multiple slave terminals, so as to realize the sharing of the anchor point between the master terminal and multiple slave terminals.
  • The slave terminal can be a different device from the master terminal, or the same device. When the slave terminal and the master terminal are different devices, anchor point sharing between different terminal devices can be realized; for example, multiple terminal devices can use the anchor point sharing method proposed in the above disclosed embodiments to share the same AR scene. When the slave terminal and the master terminal are the same device, a single terminal device can share anchor points between different periods of time; for example, a certain terminal device can use the anchor point sharing method to realize multi-period sharing of an AR scene.
  • The number of slave terminals can also be multiple, where one of the slave terminals is the same device as the master terminal and the remaining slave terminals are different devices. In this case, multiple terminals can share an AR scene over multiple periods; for example, multiple terminal devices may use the anchor point sharing method proposed in the above disclosed embodiments to realize multi-period sharing of a certain AR scene by multiple terminals.
  • the anchor sharing request received by the server can be flexibly determined according to the actual situation. It has been proposed in step S11 that the anchor sharing request includes the current scene scanned by the terminal and the acquired identity of the anchor to be shared. Therefore, in the embodiments of the present disclosure, although the presentation form of the anchor sharing request is flexible, the anchor sharing request should effectively reflect the geographic information of the current scene and be related to the identity of the anchor to be shared.
  • the anchor point sharing request may include: the second feature information of the current scene, the identity of the anchor point to be shared, and the anchor point sharing request message.
  • the second feature information of the current scene is mainly used to reflect the geographic information of the current scene, and its specific expression form and acquisition method are not limited, and can be flexibly selected according to the actual situation. In a possible implementation manner, it can be acquired by the slave terminal scanning the current scene and performing feature extraction.
  • the expression of the identity of the anchor to be shared is also not limited. Any form that can be used to indicate the identity of the anchor to be shared can be used as the expression of the identity.
  • The expression form of the identity of the anchor point to be shared can be the anchor id of the anchor point to be shared.
  • the specific content and expression form of the anchor point sharing request message are also not limited. Any request that can be expressed to the server for anchor point sharing can be used as the expression form of the anchor point sharing request message.
  • Because the anchor point sharing request includes the second feature information, the identity, and the anchor point sharing request message, the server can, while receiving the request message sent by the slave terminal, determine the identity of the anchor point to be shared that the slave terminal requests, and directly acquire the second feature information so that the shared location information can subsequently be determined entirely within the server. This reduces repeated interactions with the slave terminal and improves the efficiency of the anchor point sharing process.
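  • For concreteness, the three parts of the anchor point sharing request described above could be grouped into a simple record, as in the following sketch; the field names are illustrative assumptions only and are not specified by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class AnchorSharingRequest:
    """Illustrative layout of an anchor point sharing request (field names assumed)."""
    second_feature_info: bytes  # serialized feature information of the current scene
    anchor_id: str              # identity of the anchor point to be shared
    request_message: str        # the anchor point sharing request message itself
```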
  • The expression form of the shared location information of the anchor point to be shared in the current scene, as finally determined by the server, is likewise not limited; any form that can indicate the location of the anchor point to be shared in the current scene can be used as an implementation form of the shared location information.
  • the position of the anchor point to be shared in the current scene can be indicated by coordinate information.
  • In the anchor point sharing method proposed in the embodiments of the present disclosure, the server receives an anchor point sharing request sent by a terminal, determines the shared location information of the anchor point to be shared in the current scene according to the anchor point sharing request, and sends the shared location information to the terminal. Through this process, based on the operation of the server, the anchor point can be fully shared between different terminals or different time periods, so as to realize multi-person sharing, single-person multi-period sharing, or multi-person multi-period sharing of AR scenes, thereby greatly improving the consistency and shareability of AR technology.
  • Before step S11, step S10 may be further included. Fig. 2 shows a flowchart of an anchor point sharing method according to an embodiment of the present disclosure. As shown in Fig. 2, step S10 may include:
  • Step S101 The server obtains anchor point information of the anchor point to be shared.
  • Step S102 The server generates the identity of the anchor to be shared according to the anchor information, and saves the anchor information and the identity.
  • the implementation manner of step S101 is not limited, that is, the manner in which the server obtains the anchor point information of the anchor to be shared is not limited.
  • In a possible implementation, the anchor point information of the anchor point to be shared can be actively uploaded to the server by the main terminal; in another possible implementation, the anchor point information can be actively obtained by the server from the master terminal. It should be noted that in the embodiments of the present disclosure, no matter how the server obtains the anchor point information of the anchor point to be shared, the process of generating that anchor point information is completed inside the main terminal, and the server only receives and stores the anchor point information.
  • the server obtains the anchor point information of the anchor point to be shared, and generates the identity of the anchor point to be shared based on the anchor point information, and saves the anchor point information and the identity identifier.
  • The identity and anchor point information are stored in the server, so that the server can directly determine the shared location information of the anchor point to be shared in the current scene based on the data stored in the server, without reading the corresponding anchor point information from the main terminal, thereby effectively improving the efficiency of the anchor point sharing process.
  • The specific content of the anchor point information of the anchor point to be shared is not limited. In a possible implementation, the anchor point information of the anchor point to be shared may include: the first feature information of the anchor point scene where the anchor point to be shared is located, and the original location information of the anchor point to be shared in the anchor point scene.
  • the original position information of the anchor point to be shared in the anchor point scene is used to indicate the position relationship between the anchor point to be shared and the anchor point scene.
  • Like the shared position information, its specific expression form is not limited in the embodiments of the present disclosure; in one example, the original location information can also be expressed in the form of coordinate information.
  • the original position information of the anchor point to be shared in the anchor point scene is not limited in its generation method and form of expression.
  • In a possible implementation, the original position information can be obtained naturally when the anchor point to be shared is generated; that is, after the master terminal scans the anchor point scene, it generates the original position information of the anchor point to be shared in the anchor point scene at the same time as it generates the anchor point itself.
  • The first feature information of the anchor point scene where the anchor point to be shared is located is also not limited. In a possible implementation, the first feature information may include map information related to the simultaneous localization and mapping (SLAM) algorithm. This map information can serve as the reference coordinate system and reference surrounding environment of the anchor point scene where the anchor point to be shared is located, and is used in the subsequent step S12 to determine the shared location information.
  • the first feature information may be obtained by the master terminal by scanning the anchor point scene and performing feature extraction on the scanning result.
  • the expression form of the first characteristic information is also not limited.
  • the first characteristic information may be acquired and stored by the server in the form of continuous frames.
  • With the anchor point information of the anchor point to be shared including the first feature information and the original position information, the environment information of the anchor point scene where the anchor point to be shared is located and the positional relationship between the anchor point to be shared and the anchor point scene can be effectively determined. It is therefore convenient, in the subsequent step S12, to locate the shared position information of the anchor point to be shared in the current scene based on the comparison result between the first feature information and the current scene, in combination with the original position information.
  • step S102 may further include:
  • When the time for storing the anchor point information and the identity exceeds the time threshold, the server deletes the anchor point information and the identity.
  • the above disclosed embodiments have proposed that once the identity of the anchor to be shared is generated, it can represent that the anchor to be shared corresponding to the identity can be shared.
  • The anchor point information and identity of the anchor point to be shared can be stored in the server. However, the anchor point to be shared may be time-sensitive; that is, the shared anchor point may only be used for a certain period of time. If all anchor points to be shared were permanently stored inside the server, it could cause a waste of resources, and at the same time reduce the available storage space and the operating efficiency of the server. Therefore, in order to reduce the waste of resources, the anchor point information and the identity stored in the server can be deleted when the time for which they have been stored exceeds the time threshold.
  • the specific time threshold setting can be flexibly selected according to the actual situation.
  • For example, the time threshold can be set to 168 hours; that is, starting from the moment the identity of the anchor point to be shared is generated, the anchor point can be shared within 168 hours. After 168 hours, the identity and anchor point information of the anchor point to be shared are deleted from the server, and the anchor point can no longer be shared. A minimal sketch of such a cleanup is shown below.
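  • The following Python sketch illustrates one way such a time-threshold cleanup could work, assuming the server keeps the creation time alongside each stored anchor; all names and the storage layout are assumptions, not part of the disclosure.

```python
import time

TIME_THRESHOLD_SECONDS = 168 * 3600  # example threshold of 168 hours

def purge_expired_anchors(anchor_store, now=None):
    """Delete anchor information and identities whose storage time exceeds the threshold."""
    now = time.time() if now is None else now
    expired = [anchor_id for anchor_id, entry in anchor_store.items()
               if now - entry["created_at"] > TIME_THRESHOLD_SECONDS]
    for anchor_id in expired:
        del anchor_store[anchor_id]  # the anchor point can no longer be shared
    return expired
```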
  • The implementation of step S12 may also differ, so the implementation of step S12 is not unique.
  • FIG. 3 shows a flowchart of an anchor point sharing method according to an embodiment of the present disclosure. As shown in the figure, in a possible implementation manner, step S12 may include:
  • step S121 the server obtains the identity of the anchor to be shared and the second feature information of the current scene respectively according to the anchor sharing request.
  • Step S122 The server reads the anchor point information of the anchor point to be shared according to the identity identifier.
  • Step S123 The server determines the sharing location information of the anchor point to be shared in the current scene according to the second feature information and the anchor point information.
  • The anchor point sharing request may include the identity of the anchor point to be shared, the second feature information of the current scene, and the anchor point sharing request message; therefore, the server may obtain the identity and the second feature information from the anchor point sharing request. After the server obtains the identity, it follows from the above disclosed embodiments that the identity and its corresponding anchor point information are stored in the server. The server can therefore determine, based on the identity in the anchor point sharing request, the anchor point information corresponding to the anchor point to be shared, and then, based on the anchor point information and the second feature information, determine the shared location information of the anchor point to be shared in the current scene through step S123.
  • Based on the internally stored data and the received anchor point sharing request, the server can determine the location of the anchor point to be shared in the current scene simply by reading data, without multiple interactions with the terminal, so that the sharing of AR scenes is completed with high efficiency and high accuracy.
  • step S123 may include:
  • the server obtains the first feature information of the anchor point scene where the anchor point to be shared is located and the original position information of the anchor point to be shared in the anchor point scene.
  • the server performs feature matching between the first feature information and the second feature information to obtain the position transformation relationship between the current scene and the anchor scene.
  • the server performs position conversion on the original position information according to the position conversion relationship to obtain the shared position information of the anchor point to be shared in the current scene.
  • The anchor point information may include the first feature information and the original position information. It can be seen from the above disclosed embodiments that the first feature information can characterize the anchor point scene, and the original position information can reflect the position of the anchor point to be shared in the anchor point scene. The second feature information can likewise characterize the current scene. Therefore, in a possible implementation, the first feature information and the second feature information are feature-matched to obtain the position transformation relationship between the current scene and the anchor point scene. Once this position transformation relationship is determined, substituting the original position information into it yields the shared location information of the anchor point to be shared in the current scene.
  • the feature matching process can be flexibly determined according to the actual expression of the first feature information and the second feature information.
  • In a possible implementation, the feature matching process can be the registration of the coordinate systems of the master terminal and the slave terminal device. That is, the first feature information uploaded by the master terminal to the server, whose representation form can be map information, is registered against the second feature information uploaded to the server by the slave terminal, so that the transformation relationship between the coordinate system of the anchor point scene corresponding to the master terminal and that of the current scene corresponding to the slave terminal can be calculated.
  • Fig. 4 shows a schematic diagram of the coordinate system of the anchor point scene according to an embodiment of the present disclosure, where the origin of the anchor point scene coordinate system may be denoted as O_H and a is the anchor point to be shared; correspondingly, the origin of the current scene coordinate system (Fig. 5) may be denoted as O_R.
  • In this way, the position transformation relationship between the current scene and the anchor point scene is obtained, and then, based on this position transformation relationship and the original position information, the shared location information of the anchor point to be shared in the current scene is obtained through position transformation, as sketched below. This process involves little computation and a convenient calculation method; while ensuring the accuracy of the determined shared location information, it greatly reduces the computation time, thereby improving the speed and efficiency of the anchor point sharing process.
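  • The position transformation described above can be sketched as a rigid transform applied to the anchor point's original coordinates. The matching step that produces the transform is only assumed to exist here, and all names are illustrative, not taken from the disclosure.

```python
import numpy as np

def compute_shared_location(transform_anchor_to_current, original_position):
    """Apply the position transformation relationship to the original position.

    transform_anchor_to_current: 4x4 matrix mapping anchor-scene coordinates
        (origin O_H) to current-scene coordinates (origin O_R), assumed to be
        produced by registering the first and second feature information.
    original_position: (x, y, z) of the anchor point in the anchor point scene.
    """
    p = np.array([*original_position, 1.0])      # homogeneous coordinates
    shared = transform_anchor_to_current @ p     # position transformation
    return shared[:3]

# Toy example: the current scene is the anchor scene translated by 1 m along x.
T = np.eye(4)
T[0, 3] = 1.0
print(compute_shared_location(T, (0.2, 0.0, 0.5)))  # -> [1.2 0.  0.5]
```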
  • FIG. 6 shows a flowchart of an anchor point sharing method according to an embodiment of the present disclosure.
  • the method can be applied to a terminal device. Further, the method can be applied to the slave terminal proposed in the above disclosed embodiment.
  • The number and implementation manner of the slave terminal are not limited; reference may be made to the above disclosed embodiments, and details are not repeated here.
  • the anchor point sharing method may include:
  • Step S21 The terminal obtains the identity of the anchor to be shared.
  • Step S22 The terminal sends an anchor point sharing request to the server according to the current scene and the identity.
  • the anchor point sharing request is used to instruct the server to determine the shared location information of the anchor point to be shared in the current scene.
  • Step S23 The terminal receives the shared location information fed back by the server.
  • The identity of the anchor point to be shared obtained by the terminal is consistent with the identity proposed in the previous disclosed embodiments; the anchor point sharing request sent by the terminal to the server is the same as the anchor point sharing request received by the server in the foregoing disclosed embodiments; and the shared location information received by the terminal is the same as the shared location information fed back to the terminal by the server in the foregoing disclosed embodiments. None of these will be described in detail again here.
  • the terminal obtains the identity of the anchor to be shared, and sends an anchor sharing request to the server according to the current scene and identity, and then receives the shared location information fed back by the server.
  • In this process, the terminal only needs to obtain the current scene and the identity of the target anchor point, and can then obtain the shared location information of the anchor point to be shared in the current scene by communicating with the server, thereby realizing the sharing of the AR scene. This sharing method is simple and efficient, and is suitable for widespread adoption.
  • The implementation of step S21 is not limited; that is, the manner in which the terminal obtains the identity of the anchor point to be shared is not limited. The subject implementing step S21 in the embodiments of the present disclosure may be the slave terminal. In a possible implementation, the identity of the anchor point to be shared is generated based on the information exchange between the master terminal and the server and is stored in the server; the server can also return the generated identity of the anchor point to be shared to the main terminal.
  • the master terminal and the slave terminal may be the same device or different devices. Therefore, how the slave terminal obtains the identity of the anchor to be shared is not limited in the embodiment of the present disclosure. Any manner that enables the slave terminal to obtain the identity identifier can be used as an implementation manner of the embodiments of the present disclosure.
  • step S21 may include:
  • the terminal obtains the shared information.
  • the terminal obtains the identity of the anchor to be shared according to the correspondence between the shared information and the identity.
  • the form of expression of shared information is not limited, and any form of information that can be mapped to an identity and can be shared in a shared form can be used as a way to realize shared information.
  • the corresponding relationship between the shared information and the identity can be flexibly set according to actual conditions, and is not limited to the following disclosed embodiments.
  • the form of the shared information can be the room number set during the AR scene sharing process, that is, the room number is bound to the identity identifier.
  • After obtaining the shared room number and entering the room, the terminal automatically obtains the identity bound to that room number.
  • the form of the shared information can be the user identity in the AR scenario sharing process, that is, the user identity is bound with the identity identifier.
  • After obtaining the shared user identity, the terminal automatically obtains the identity that is bound to that user identity.
  • the expression of the shared information can be directly the identity identifier, that is, the identity identifier itself can be used as the content of the shared information and be acquired by the terminal.
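  • As a toy illustration of the correspondence between shared information (such as a room number) and the identity, consider the following sketch; the mapping, values, and function names are assumptions and not part of the disclosure.

```python
# Illustrative binding between shared information (e.g. a room number) and the
# anchor identity; in practice this mapping would be maintained by the server
# or the application layer.
shared_info_to_anchor_id = {
    "room-1024": "anchor-7f3a",  # made-up room number bound to a made-up anchor id
}

def resolve_identity(shared_info):
    """Obtain the identity of the anchor point to be shared from the shared information."""
    return shared_info_to_anchor_id.get(shared_info)

print(resolve_identity("room-1024"))  # -> anchor-7f3a
```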
  • the manner in which the terminal obtains shared information is also not limited, and can be flexibly selected according to the actual situation.
  • In an example, the terminal, as a slave terminal, may not be the same device as the master terminal; in this case, the slave terminal may obtain the shared information that the master terminal shares with it. In another example, the terminal, as a slave terminal, may be the same device as the master terminal; in this case, the slave terminal may obtain the shared information by directly reading the shared information from the master terminal.
  • the terminal obtains the identity of the anchor to be shared by obtaining the shared information and according to the corresponding relationship between the shared information and the identity.
  • the identity of the anchor to be shared can be obtained more conveniently without complicated analysis.
  • the identity can be obtained indirectly through shared information, which can reduce the risk of identity leakage and improve the efficiency of anchor sharing while ensuring security.
  • The implementation of step S22 is also not limited; that is, the specific process by which the terminal sends the anchor point sharing request to the server according to the current scene and the identity is not limited.
  • step S22 may include:
  • the terminal scans the current scene and obtains an image of the current scene.
  • the terminal performs feature extraction on the image of the current scene to obtain second feature information.
  • the terminal sends the second feature information, identity identifier, and anchor point sharing request message together as an anchor point sharing request to the server.
  • the second feature information is consistent with the second feature information of the current scene included in the anchor point sharing request, and will not be repeated here.
  • The second feature information may be obtained by scanning the current scene and performing feature extraction on the results obtained from the scan. The specific feature extraction process and method are not limited in the embodiments of the present disclosure; any method that can extract feature information of the current scene and thereby determine the physical environment features of the current scene can be used as an implementation of feature extraction.
  • a trained feature extraction neural network can be used to extract the physical environment feature information of the current scene.
  • In another example, features can be extracted using the SIFT (Scale-Invariant Feature Transform) algorithm.
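  • As one concrete, non-limiting way to obtain such feature information, the sketch below applies OpenCV's SIFT implementation to a scanned frame; the use of OpenCV and the function name are assumptions for illustration, not specified by the disclosure.

```python
import cv2

def extract_second_feature_info(image_path):
    """Extract SIFT keypoints/descriptors from an image of the current scene."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()  # available in OpenCV >= 4.4
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    # The descriptors could be serialized and sent as the second feature information.
    return keypoints, descriptors
```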
  • The terminal obtains the image of the current scene by scanning the current scene and performs feature extraction on that image. After obtaining the second feature information, it packs the second feature information, the identity, and the anchor point sharing request message together as the anchor point sharing request and sends it to the server. In this way, the terminal can directly transmit the data required by the server while requesting the server to complete the anchor point sharing, without a cumbersome interaction process, improving the efficiency of anchor point sharing.
  • FIG. 7 shows a flowchart of an anchor point sharing method according to an embodiment of the present disclosure.
  • the method can be applied to a system composed of a terminal device and a server.
  • the implementation of the terminal and the server is the same as in the above disclosed embodiment. This will not be repeated here.
  • the anchor point sharing method may include:
  • Step S31 The terminal obtains the identity of the anchor to be shared.
  • Step S32 The terminal sends an anchor point sharing request to the server according to the current scene and the identity.
  • Step S33 The server determines the sharing location information of the anchor point to be shared in the current scene according to the anchor point sharing request.
  • Step S34 The terminal receives the shared location information fed back by the server.
  • The implementation of step S31 is the same as that of step S21, the implementation of step S32 is the same as that of step S22, the implementation of step S33 is the same as that of step S12, and the implementation of step S34 is the same as that of step S23; these will not be repeated here.
  • the anchor point to be shared can be positioned in the current scene through the server according to the identity of the anchor point to be shared, so that different terminals can share the virtual world in the same real scene and realize the sharing of augmented reality technology.
  • Augmented reality technology is a technology that can combine real world information and virtual world information, and display virtual visual information in real world images through devices.
  • Most current augmented reality applications are stand-alone: what is displayed on each screen is a combination of that device's own virtual world and real-world information, so multiple users cannot see the same virtual world and real world on their respective screens, and there is no interactive AR experience.
  • FIGS 8-9 show schematic diagrams of an application example according to the present disclosure.
  • An embodiment of the present disclosure proposes an anchor point sharing method which can be integrated into a software development kit (SDK) and realized through the cooperation between the terminal and the server.
  • the process of implementing the anchor sharing method based on SDK may be:
  • The specific authentication process may be adding the Key/Secret applied for from the developer website, so as to obtain the permission to call cloud services.
  • User A creates an AR scene through the main terminal, creates an anchor point in the scene (the anchor point is located on a marker in the real scene) as the anchor point to be shared, hosts the anchor point to be shared to the cloud server, and obtains the corresponding anchor id as the identity of the anchor point to be shared.
  • The application layer manages the acquired anchor id by itself and, after binding the anchor id to the shared information, shares the shared information with the relevant users.
  • this process can be completed using cloud services.
  • User B creates an AR scene against the same real scene and uses the anchor id bound to the shared information to perform an anchor resolve operation on the cloud server to obtain the anchor point to be shared; the position of the resolved anchor point relative to the real scene is the same as when user A placed it.
  • Figure 8 shows the implementation process of hosting the anchor point to be shared on the cloud server, and Figure 9 shows the implementation process of resolving the anchor point to be shared on the cloud server. As can be seen from the figures, hosting and resolving follow a consistent data transmission process, but the transmitted data flows differ: hosting the anchor point to be shared on the cloud server mainly involves parsing and storing the data, while resolving the anchor point to be shared on the cloud server is achieved by the cloud server calling a matching algorithm.
  • the specific process for the cloud server to host the anchor point to be shared can be as follows: the main terminal transmits data to the cloud server.
  • the data includes the location information of the anchor point to be shared in the anchor point scene, the relevant map information in the SLAM algorithm, etc.
  • The map information is used as the reference coordinate system and surrounding environment for the subsequent resolution of the anchor point to be shared. The data transmitted by the main terminal is saved in the cloud server, becomes the cloud anchor point, and is assigned a cloud anchor id, which is returned to the main terminal.
  • Anchor point hosting is time-limited: the anchor point can be resolved from the moment the cloud anchor id is obtained until 168 hours thereafter; after 168 hours, the related information of the anchor point is deleted.
  • The process of resolving the anchor point to be shared on the cloud server can be as follows: the slave terminal sends a resolve request to the cloud server in order to register the cloud anchor point to the local coordinate system of the current slave terminal, and the request packet contains the image feature information extracted by the current slave terminal and the id of the cloud anchor point to be resolved. The cloud server then performs feature matching between the feature information uploaded by the slave terminal and the map information uploaded by the master terminal. The matching process includes the registration of the master and slave terminal coordinate systems, calculates the transformation relationship between the AR scene coordinate systems of the master and slave terminals, and applies this transformation result to the cloud anchor point, so that the slave terminal receives the coordinate information of the cloud anchor point in the AR scene of the slave terminal.
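  • The host/resolve exchange described in Figures 8 and 9 can be sketched as two calls against a cloud anchor service; the class, method names, and payloads below are purely illustrative assumptions and do not describe any real SDK.

```python
# Toy in-memory stand-in for the cloud server's hosting and resolving (names assumed).
class CloudAnchorService:
    def __init__(self, matcher):
        self._anchors = {}
        self._matcher = matcher  # registers hosted map info against query features
        self._next_id = 0

    def host(self, map_info, original_position):
        """Master terminal uploads map info + anchor position and receives a cloud anchor id."""
        anchor_id = f"cloud-anchor-{self._next_id}"
        self._next_id += 1
        self._anchors[anchor_id] = (map_info, original_position)
        return anchor_id

    def resolve(self, anchor_id, query_feature_info):
        """Slave terminal sends its features + anchor id and receives local coordinates."""
        map_info, original_position = self._anchors[anchor_id]
        transform = self._matcher(map_info, query_feature_info)  # coordinate registration
        return transform(original_position)
```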
  • the above method can be applied to the sharing process of a multi-person AR scene, that is, a user through a terminal device, the cloud server hosts one or more anchor points to be shared, and the remaining users can use their respective terminal devices , Scan the respective current scenes, and analyze the anchor points on the cloud server to obtain the position of the anchor point to be shared in the respective terminal device, so as to realize that in multiple terminals, the anchor points are displayed at the same position in the real world. Realize multi-person AR interaction.
  • The above method can also be applied to the sharing process of a single-person, multi-session AR scene: a user hosts one or more anchors to be shared on the cloud server through a certain terminal device, so that the anchors and the anchor scene are stored on the cloud server. After a period of time, the same user can scan the anchor scene again with the terminal device, reopen the anchor scene on the server through anchor resolution, obtain the anchors to be shared at the same positions as in the previously saved anchor scene, and perform the required operations.
  • The above method can likewise be applied to the sharing process of a multi-person, multi-session AR scene, i.e., a combination of the two applications above: multiple users save the AR scene, and the next time the AR scene is opened through anchor resolution, the same scene is presented, without being constrained by time. An end-to-end sketch of these sharing modes follows.
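From the terminals' point of view, such a flow might look like the following (hypothetical client API; `cloud`, `terminal_a`, `terminal_b`, and all method names are illustrative assumptions, not the actual SDK interface):

    def share_between_users(cloud, terminal_a, terminal_b, room_number):
        """Multi-person sharing: user A hosts an anchor, user B resolves it in the same real scene."""
        # User A: create an anchor in the scanned anchor scene and host it on the cloud server.
        anchor_pose, map_info = terminal_a.scan_and_create_anchor()
        anchor_id = cloud.host(anchor_pose, map_info)

        # Application layer: bind the anchor id to shared information, e.g. a room number.
        cloud.bind_room(room_number, anchor_id)

        # User B: look up the anchor id through the room number, then resolve it locally.
        shared_id = cloud.lookup_room(room_number)
        features_b = terminal_b.scan_and_extract_features()
        return cloud.resolve(shared_id, features_b)   # anchor pose in terminal B's AR scene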
  • The order in which the steps are written does not imply a strict execution order and does not constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
  • Fig. 10 shows a block diagram of an anchor sharing device according to an embodiment of the present disclosure.
  • the anchor sharing device may be a server or other equipment.
  • The anchor sharing device 40 may include: an anchor sharing request receiving module 41, configured to receive an anchor sharing request sent by a terminal, where the anchor sharing request includes the current scene scanned by the terminal and the obtained identity of the anchor to be shared; a shared location information determining module 42, configured to determine, according to the anchor sharing request, the shared location information of the anchor to be shared in the current scene; and a feedback module 43, configured to feed the shared location information back to the terminal.
  • In a possible implementation, the device further includes an identity generating module that acts before the anchor sharing request receiving module, and the identity generating module is configured to: obtain the anchor information of the anchor to be shared; generate the identity of the anchor to be shared according to the anchor information; and save the anchor information and the identity.
  • The identity generating module is further configured to: when the time for which the anchor information and the identity have been stored exceeds a time threshold, delete the anchor information and the identity from the server. A sketch of such a cleanup sweep follows.
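The sweep below is illustrative only and reuses the `CloudAnchorStore` sketched earlier; a real server could equally check expiry lazily at resolve time.

    def purge_expired(store, now, ttl=168 * 3600):
        """Delete anchor information and identities stored longer than the time threshold."""
        expired = [anchor_id for anchor_id, record in store._anchors.items()
                   if now - record["hosted_at"] > ttl]
        for anchor_id in expired:
            del store._anchors[anchor_id]
        return expired   # ids whose information was removed from the server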
  • the anchor point information of the anchor point to be shared includes: the first feature information of the anchor point scene where the anchor point to be shared is located, and the original position information of the anchor point to be shared in the anchor point scene .
  • the anchor point sharing request includes: the second feature information of the current scene, the identity of the anchor point to be shared, and the anchor point sharing request message.
  • The shared location information determining module is configured to: obtain, according to the anchor sharing request, the identity of the anchor to be shared and the second feature information of the current scene; read the anchor information of the anchor to be shared according to the identity; and determine, according to the second feature information and the anchor information, the shared location information of the anchor to be shared in the current scene.
  • The shared location information determining module is further configured to: obtain, according to the anchor information, the first feature information of the anchor scene where the anchor to be shared is located and the original position information of the anchor to be shared in the anchor scene; perform feature matching between the first feature information and the second feature information to obtain the position transformation relationship between the current scene and the anchor scene; and transform the original position information according to the position transformation relationship to obtain the shared location information of the anchor to be shared in the current scene. A small numerical illustration of this transformation follows.
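The illustration below uses NumPy and 4x4 homogeneous matrices; the concrete numbers are made up for the example. Once feature matching yields a transform M from the anchor scene to the current scene, the anchor's original position is mapped into the current scene by applying M.

    import numpy as np

    # Assumed transform from anchor-scene coordinates to current-scene coordinates:
    # a 90-degree rotation about the z axis followed by a translation of (1, 0, 0).
    M = np.array([[0.0, -1.0, 0.0, 1.0],
                  [1.0,  0.0, 0.0, 0.0],
                  [0.0,  0.0, 1.0, 0.0],
                  [0.0,  0.0, 0.0, 1.0]])

    # Original position of the anchor to be shared in the anchor scene (homogeneous coordinates).
    A_H = np.array([2.0, 3.0, 0.5, 1.0])

    # Shared position of the anchor to be shared in the current scene.
    A_R = M @ A_H
    print(A_R[:3])   # -> [-2.   2.   0.5]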
  • Fig. 11 shows a block diagram of an anchor sharing device according to an embodiment of the present disclosure.
  • the anchor sharing device may be a terminal device or the like.
  • The anchor sharing device 50 may include: an identity acquisition module 51, configured to acquire the identity of the anchor to be shared; an anchor sharing request sending module 52, configured to send an anchor sharing request to the server according to the current scene and the identity, where the anchor sharing request is used to instruct the server to determine the shared location information of the anchor to be shared in the current scene; and a shared location information receiving module 53, configured to receive the shared location information fed back by the server.
  • the identity acquisition module is used to: obtain shared information; and obtain the identity of the anchor to be shared according to the correspondence between the shared information and the identity.
  • The anchor sharing request sending module is configured to: scan the current scene to obtain an image of the current scene; perform feature extraction on the image of the current scene to obtain the second feature information; and send the second feature information, the identity, and the anchor sharing request message together, as the anchor sharing request, to the server. A client-side sketch of this assembly follows.
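The sketch below uses Python with OpenCV; the choice of ORB features and the request layout are assumptions for illustration, since the method does not prescribe a particular feature extractor or transport.

    import cv2

    def build_anchor_share_request(image_path, anchor_identity):
        """Extract second feature information from the scanned image and package the request."""
        image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        orb = cv2.ORB_create(nfeatures=500)
        keypoints, descriptors = orb.detectAndCompute(image, None)

        return {
            "message": "anchor_share_request",
            "anchor_id": anchor_identity,
            "second_feature_info": {
                "keypoints": [kp.pt for kp in keypoints],
                "descriptors": [] if descriptors is None else descriptors.tolist(),
            },
        }

    # The resulting dictionary would then be serialized and sent to the server
    # together with the anchor sharing request message.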
  • FIG. 12 shows a block diagram of an anchor point sharing system according to an embodiment of the present disclosure.
  • The anchor sharing system 60 may include: the first anchor sharing device 40 described in the foregoing disclosed embodiments and the second anchor sharing device 50 described in the foregoing disclosed embodiments, where the first anchor sharing device and the second anchor sharing device interact through anchor sharing requests.
  • the embodiments of the present disclosure also provide a computer-readable storage medium on which computer program instructions are stored, and the computer program instructions implement the foregoing method when executed by a processor.
  • the computer-readable storage medium may be a non-volatile computer-readable storage medium.
  • An embodiment of the present disclosure also provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to perform the above method.
  • the electronic device can be provided as a terminal, server or other form of device.
  • FIG. 13 is a block diagram of an electronic device 800 according to an embodiment of the present disclosure.
  • the electronic device 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and other terminals.
  • The electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
  • the processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the foregoing method.
  • the processing component 802 may include one or more modules to facilitate the interaction between the processing component 802 and other components.
  • the processing component 802 may include a multimedia module to facilitate the interaction between the multimedia component 808 and the processing component 802.
  • the memory 804 is configured to store various types of data to support operations in the electronic device 800. Examples of these data include instructions for any application or method operated on the electronic device 800, contact data, phone book data, messages, pictures, videos, etc.
  • The memory 804 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • the power supply component 806 provides power for various components of the electronic device 800.
  • the power supply component 806 may include a power management system, one or more power supplies, and other components associated with the generation, management, and distribution of power for the electronic device 800.
  • the multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
  • the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 810 is configured to output and/or input audio signals.
  • The audio component 810 includes a microphone (MIC), which is configured to receive external audio signals when the electronic device 800 is in an operation mode, such as a call mode, a recording mode, or a speech recognition mode.
  • the received audio signal may be further stored in the memory 804 or transmitted via the communication component 816.
  • the audio component 810 further includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module.
  • the peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include but are not limited to: home button, volume button, start button, and lock button.
  • The sensor component 814 includes one or more sensors for providing various aspects of state evaluation for the electronic device 800. For example, the sensor component 814 can detect the on/off status of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor component 814 can also detect a change in the position of the electronic device 800 or of one of its components, the presence or absence of contact between the user and the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and changes in the temperature of the electronic device 800.
  • the sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects when there is no physical contact.
  • the sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
  • the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
  • the electronic device 800 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
  • the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • The electronic device 800 can be implemented by one or more application specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
  • a non-volatile computer-readable storage medium such as the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the foregoing method.
  • FIG. 14 is a block diagram of an electronic device 1900 according to an embodiment of the present disclosure.
  • The electronic device 1900 may be provided as a server. Referring to FIG. 14, the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as application programs.
  • the application program stored in the memory 1932 may include one or more modules each corresponding to a set of instructions.
  • the processing component 1922 is configured to execute instructions to perform the above-described methods.
  • the electronic device 1900 may also include a power supply component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input output (I/O) interface 1958 .
  • the electronic device 1900 can operate based on an operating system stored in the memory 1932, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM or the like.
  • a non-volatile computer-readable storage medium such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to complete the foregoing method.
  • the present disclosure may be a system, method, and/or computer program product.
  • the computer program product may include a computer-readable storage medium loaded with computer-readable program instructions for enabling a processor to implement various aspects of the present disclosure.
  • the computer-readable storage medium may be a tangible device that can hold and store instructions used by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • More specific examples (a non-exhaustive list) of computer-readable storage media include: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disks (DVD), memory sticks, floppy disks, mechanical encoding devices such as punch cards or raised structures in grooves on which instructions are stored, and any suitable combination of the foregoing.
  • The computer-readable storage medium used herein is not to be interpreted as a transient signal itself, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (for example, light pulses through fiber-optic cables), or electrical signals transmitted through wires.
  • the computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to various computing/processing devices, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • the network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network, and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device .
  • The computer program instructions used to perform the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • Computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • In the case of a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • An electronic circuit, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized using the state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions to realize various aspects of the present disclosure.
  • These computer-readable program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that when the instructions are executed by the processor of the computer or other programmable data processing apparatus, an apparatus is produced that implements the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause computers, programmable data processing apparatuses, and/or other devices to work in a specific manner, so that the computer-readable medium storing the instructions constitutes an article of manufacture that includes instructions implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • Each block in the flowcharts or block diagrams may represent a module, program segment, or part of an instruction, and the module, program segment, or part of an instruction contains one or more executable instructions for implementing the specified logical function. In some alternative implementations, the functions noted in the blocks may also occur in an order different from that noted in the drawings. For example, two consecutive blocks can actually be executed substantially in parallel, or they can sometimes be executed in the reverse order, depending on the functions involved.
  • Each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Telephonic Communication Services (AREA)
  • Information Transfer Between Computers (AREA)
  • Signal Processing For Digital Recording And Reproducing (AREA)
  • Studio Devices (AREA)

Abstract

The present disclosure relates to an anchor sharing method and apparatus, a system, an electronic device, and a storage medium. The anchor sharing method includes: a server receives an anchor sharing request sent by a terminal, where the anchor sharing request includes the current scene scanned by the terminal and the obtained identity of an anchor to be shared; the server determines, according to the anchor sharing request, the shared location information of the anchor to be shared in the current scene; and the server feeds the shared location information back to the terminal. Through the above process, embodiments of the present disclosure can locate the anchor to be shared in the current scene via the server according to the identity of the anchor to be shared, so that different terminals can share the virtual world under the same real scene, realizing the sharing of augmented reality.

Description

锚点共享方法及装置、系统、电子设备和存储介质
本公开要求在2019年06月26日提交中国专利局、申请号为201910562547.3、申请名称为“锚点共享方法及装置、系统、电子设备和存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本公开中。
技术领域
本公开涉及增强现实技术领域,尤其涉及一种锚点共享方法及装置、系统、电子设备和存储介质。
背景技术
增强现实技术(AR,Augmented Reality)可以将真实世界信息和虚拟世界信息结合,通过设备将虚拟视觉信息在真实世界图像中显示出来。如何让多个用户在各自的屏幕中看到相同的虚拟世界和现实世界,对增强现实技术而言是至关重要的。
发明内容
本公开提出了一种锚点共享技术方案。
根据本公开的第一方面,提供了一种锚点共享方法,包括:服务器接收终端发送的锚点共享请求,所述锚点共享请求包括所述终端扫描得到的当前场景以及获取的待共享锚点的身份标识;服务器根据所述锚点共享请求,确定所述待共享锚点在所述当前场景中的共享位置信息;服务器向所述终端反馈所述共享位置信息。
通过上述过程,可以基于服务器的操作,实现锚点在不同终端或不同时段之间的完全共享,从而可以实现AR场景的多人共享、单人多时段共享或是多人多时段共享,从而大大提升了AR技术的统一性和分享性。
在一种可能的实现方式中,所述服务器接收终端发送的锚点共享请求之前,还包括:服务器获取所述待共享锚点的锚点信息;服务器根据所述锚点信息,生成所述待共享锚点的身份标识,保存所述锚点信息和所述身份标识。
通过上述过程,可以实现待共享锚点与身份标识之间的一一对应关系,从而便于后续通过身份标识来确定需要共享的锚点的身份,从而便于确定需要进行共享的AR场景;同时,将身份标识和锚点信息保存于服务器中,便于服务器直接基于服务器内保存的数据确定待共享锚点在当前场景的共享位置信息,无需再从主终端中读取相应的锚点信息,从而有效提升锚点共享过程的效率。
在一种可能的实现方式中,所述服务器根据所述锚点信息,生成所述待共享锚点的身份标识,保存所述锚点信息和所述身份标识,还包括:在保存所述锚点信息和所述身份标识的时间超过时间阈值时,服务器删除所述锚点信息和所述身份标识。
通过在保存锚点信息和身份标识的时间超过时间阈值时,删除服务器内锚点信息和身份标识,可以在保障待共享锚点可以被共享的基础上,对于存储时间过长的待共享锚点进行有效清理,从而增大服务器的存储空间的重复利用率,减少资源的浪费的同时提升服务器的工作效率。
在一种可能的实现方式中,所述待共享锚点的锚点信息包括:所述待共享锚点所在的锚点场景的第一特征信息,以及所述待共享锚点在所述锚点场景中的原始位置信息。
通过包括有第一特征信息和原始位置信息的待共享锚点的锚点信息,可以有效确定待共享锚点所在的锚点场景的环境信息和待共享锚点与锚点场景之间的位置关系,从而便于在后续的步骤中,基于第一特征信息与当前场景的比对结果,结合原始位置信息,有效定位待共享锚点在当前场景中的共享位置信息。
在一种可能的实现方式中,所述锚点共享请求包括:所述当前场景的第二特征信息、所述待共享锚点的身份标识和锚点共享请求消息。
通过包括有第二特征信息、身份标识和锚点共享请求消息的锚点共享请求,可以便于服务器在 接收从属终端发送的锚点共享请求消息的同时,根据身份标识确定从属终端请求的待共享锚点的身份,同时直接获取第二特征信息来便于后续直接在服务器内基于该第二特征信息确定共享位置信息,从而减少了与从属终端之间的多次交互过程,提升锚点共享过程的效率。
在一种可能的实现方式中,所述服务器根据所述锚点共享请求,确定所述待共享锚点在所述当前场景中的共享位置信息,包括:服务器根据所述锚点共享请求,分别得到所述待共享锚点的身份标识和所述当前场景的第二特征信息;服务器根据所述身份标识,读取所述待共享锚点的锚点信息;服务器根据所述第二特征信息和所述锚点信息,确定所述待共享锚点在所述当前场景中的共享位置信息。
通过上述过程,服务器可以基于内部保存的数据和接收到的锚点共享请求,通过服务器内部的运算,无需与终端进行多次交互来读取数据,即可便捷地确定待共享锚点在当前场景中的位置信息,从而完成高效率且准确度较高的AR场景的共享。
在一种可能的实现方式中,所述服务器根据所述第二特征信息和所述锚点信息,确定所述待共享锚点在所述当前场景中的共享位置信息,包括:服务器根据所述锚点信息,分别得到所述待共享锚点所在的锚点场景的第一特征信息和所述待共享锚点在所述锚点场景中的原始位置信息;服务器将所述第一特征信息与所述第二特征信息进行特征匹配,得到所述当前场景与所述锚点场景之间的位置变换关系;服务器根据所述位置变换关系,对所述原始位置信息进行位置变换,得到所述待共享锚点在所述当前场景中的共享位置信息。
通过将第一特征信息与第二特征信息进行特征匹配,得到当前场景与锚点场景之间的位置变换关系,再基于此位置变换关系和原始位置信息,通过位置变换得到待共享锚点在当前场景中的共享位置信息,这一过程计算量小,计算方式便捷,在保障确定待共享锚点的共享位置信息准确性的同时,大大缩减了这一过程的计算速度,从而提升锚点共享过程的速度和效率。
根据本公开的第二方面,提供了一种锚点共享方法,包括:终端获取待共享锚点的身份标识;终端根据当前场景和所述身份标识,向服务器发送锚点共享请求,所述锚点共享请求用于指示所述服务器确定所述待共享锚点在所述当前场景中的共享位置信息;终端接收服务器反馈的共享位置信息。
通过上述过程,可以使得终端只需获取当前场景以及目标锚点的身份标识,即可通过与服务器通信的方式,获取待共享锚点在当前场景中的共享位置信息,继而实现AR场景的共享,共享方式简单且效率高,适合普遍推广。
在一种可能的实现方式中,所述终端获取待共享锚点的身份标识,包括:终端获取共享信息;终端根据所述共享信息与身份标识之间的对应关系,得到待共享锚点的身份标识。
通过上述过程,可以较为便捷地获取到待共享锚点的身份标识,无需复杂的解析过程,且可以通过共享信息间接地获取身份标识,可以减少身份标识被泄露的风险,在保障安全性的同时提升锚点共享的效率。
在一种可能的实现方式中,所述终端根据当前场景和所述身份标识,向服务器发送锚点共享请求,包括:终端扫描当前场景,获取当前场景的图像;终端对所述当前场景的图像进行特征提取,得到所述第二特征信息;终端将所述第二特征信息、所述身份标识和所述锚点共享请求消息,共同作为所述锚点共享请求,发送至服务器。
通过上述过程,可以使得终端在请求完成服务器完成锚点共享的同时,直接将服务器需要的数据传输到服务器内,无需繁琐的交互过程,提升锚点共享的效率。
根据本公开的第三方面,提供了一种锚点共享方法,包括:终端获取待共享锚点的身份标识;终端根据当前场景和所述身份标识,向服务器发送锚点共享请求;服务器根据所述锚点共享请求,确定所述待共享锚点在当前场景中的共享位置信息;终端接收所述服务器反馈的所述共享位置信息。
通过上述过程可以依据待共享锚点的身份标识,通过服务器将待共享锚点在当前场景中实现定位,从而使得不同终端可以共享同一现实场景下的虚拟世界,实现增强现实技术的共享。
根据本公开的第四方面,提供了一种锚点共享装置,所述装置应用于服务器,包括:锚点共享请求接收模块,用于接收终端发送的锚点共享请求,所述锚点共享请求包括所述终端扫描得到的当前 场景以及获取的待共享锚点的身份标识;共享位置信息确定模块,用于根据所述锚点共享请求,确定所述待共享锚点在所述当前场景中的共享位置信息;反馈模块,用于向所述终端反馈所述共享位置信息。
在一种可能的实现方式中,所述锚点共享请求接收模块之前,还包括身份标识生成模块,所述身份标识生成模块用于:获取所述待共享锚点的锚点信息;根据所述锚点信息,生成所述待共享锚点的身份标识,保存所述锚点信息和所述身份标识。
在一种可能的实现方式中,所述身份标识生成模块进一步用于:在保存所述锚点信息和所述身份标识的时间超过时间阈值时,服务器删除所述锚点信息和所述身份标识。
在一种可能的实现方式中,所述待共享锚点的锚点信息包括:所述待共享锚点所在的锚点场景的第一特征信息,以及所述待共享锚点在所述锚点场景中的原始位置信息。
在一种可能的实现方式中,所述锚点共享请求包括:所述当前场景的第二特征信息、所述待共享锚点的身份标识和锚点共享请求消息。
在一种可能的实现方式中,所述共享位置信息确定模块用于:根据所述锚点共享请求,分别得到所述待共享锚点的身份标识和所述当前场景的第二特征信息;根据所述身份标识,读取所述待共享锚点的锚点信息;根据所述第二特征信息和所述锚点信息,确定所述待共享锚点在所述当前场景中的共享位置信息。
在一种可能的实现方式中,所述共享位置信息确定模块进一步用于:根据所述锚点信息,分别得到所述待共享锚点所在的锚点场景的第一特征信息和所述待共享锚点在所述锚点场景中的原始位置信息;将所述第一特征信息与所述第二特征信息进行特征匹配,得到所述当前场景与所述锚点场景之间的位置变换关系;根据所述位置变换关系,对所述原始位置信息进行位置变换,得到所述待共享锚点在所述当前场景中的共享位置信息。
根据本公开的第五方面,提供了一种锚点共享装置,所述装置应用于终端,包括:身份标识获取模块,用于获取待共享锚点的身份标识;锚点共享请求发送模块,用于根据当前场景和所述身份标识,向服务器发送锚点共享请求,所述锚点共享请求用于指示所述服务器确定所述待共享锚点在所述当前场景中的共享位置信息;共享位置信息接收模块,用于接收服务器反馈的共享位置信息。
在一种可能的实现方式中,所述身份标识获取模块用于:获取共享信息;根据所述共享信息与身份标识之间的对应关系,得到待共享锚点的身份标识。
在一种可能的实现方式中,所述锚点共享请求发送模块用于:扫描当前场景,获取当前场景的图像;对所述当前场景的图像进行特征提取,得到所述第二特征信息;将所述第二特征信息、所述身份标识和所述锚点共享请求消息,共同作为所述锚点共享请求,发送至服务器。
根据本公开的第六方面,提供了一种锚点共享系统,包括:如第三方面所述的第一锚点共享装置;如第四方面所述的第二锚点共享装置;其中,所述第一锚点共享装置与所述第二锚点共享装置通过锚点共享请求进行交互。
根据本公开的第七方面,提供了一种电子设备,包括:
处理器;
用于存储处理器可执行指令的存储器;
其中,所述处理器被配置为:执行上述第一方面的方法。
根据本公开的第八方面,提供了一种电子设备,包括:
处理器;
用于存储处理器可执行指令的存储器;
其中,所述处理器被配置为:执行上述第二方面的方法。
根据本公开的一方面,提供了一种计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序指令被处理器执行时实现上述第一方面的方法。
根据本公开的一方面,提供了一种计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序指令被处理器执行时实现上述第二方面的方法。
根据本公开的一方面,提供了计算机程序,包括计算机可读代码,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行用于实现上述中的任一项所述的方法。
在本公开实施例中,通过终端获取待共享锚点的身份标识,根据当前场景和身份标识,向服务器发送锚点共享请求,服务器再根据锚点共享请求,确定待共享锚点在当前场景中的共享位置信息,并将该共享位置信息反馈至终端。通过上述过程可以依据待共享锚点的身份标识,通过服务器将待共享锚点在当前场景中实现定位,从而使得不同终端可以共享同一现实场景下的虚拟世界,实现增强现实技术的共享。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,而非限制本公开。根据下面参考附图对示例性实施例的详细说明,本公开的其它特征及方面将变得清楚。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,这些附图示出了符合本公开的实施例,并与说明书一起用于说明本公开的技术方案。
图1示出根据本公开一实施例的锚点共享方法的流程图。
图2示出根据本公开一实施例的锚点共享方法的流程图。
图3示出根据本公开一实施例的锚点共享方法的流程图。
图4示出根据本公开一实施例的锚点场景的坐标系示意图。
图5示出根据本公开一实施例的当前场景的坐标系示意图。
图6示出根据本公开一实施例的锚点共享方法的流程图。
图7示出根据本公开一实施例的锚点共享方法的流程图。
图8示出了根据本公开一应用示例的示意图。
图9示出了根据本公开一应用示例的示意图。
图10示出根据本公开一实施例的锚点共享装置的框图。
图11示出根据本公开一实施例的锚点共享装置的框图。
图12示出根据本公开一实施例的锚点共享系统的框图。
图13示出根据本公开实施例的一种电子设备的框图。
图14示出根据本公开实施例的一种电子设备的框图。
具体实施方式
以下将参考附图详细说明本公开的各种示例性实施例、特征和方面。附图中相同的附图标记表示功能相同或相似的元件。尽管在附图中示出了实施例的各种方面,但是除非特别指出,不必按比例绘制附图。
在这里专用的词“示例性”意为“用作例子、实施例或说明性”。这里作为“示例性”所说明的任何实施例不必解释为优于或好于其它实施例。
本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中术语“至少一种”表示多种中的任意一种或多种中的至少两种的任意组合,例如,包括A、B、C中的至少一种,可以表示包括从A、B和C构成的集合中选择的任意一个或多个元素。
另外,为了更好地说明本公开,在下文的具体实施方式中给出了众多的具体细节。本领域技术人员应当理解,没有某些具体细节,本公开同样可以实施。在一些实例中,对于本领域技术人员熟知的方法、手段、元件和电路未作详细描述,以便于凸显本公开的主旨。
图1示出根据本公开一实施例的锚点共享方法的流程图,该方法可以应用于服务器,服务器的具体种类、型号以及实现方式均不受限定,可以根据实际情况灵活选择。
如图1所示,所述锚点共享方法可以包括:
步骤S11,服务器接收终端发送的锚点共享请求,锚点共享请求包括终端扫描得到的当前场景以及获取的待共享锚点的身份标识。
步骤S12,服务器根据锚点共享请求,确定待共享锚点在当前场景中的共享位置信息。
步骤S13,服务器向终端反馈共享位置信息。
上述公开实施例中,待共享锚点可以通过某一终端设备扫描锚点所在的锚点场景后,通过锚点生成的方式生成待共享锚点。在本公开实施例中,可以将扫描锚点场景的终端称为主终端,主终端作为终端设备,其具体的实现方式不受限定,可以根据实际情况灵活确定。在一种可能的实现方式中,主终端可以为用户设备(UE,User Equipment)、移动设备、用户终端、终端、蜂窝电话、无绳电话、个人数字处理(PDA,Personal Digital Assistant)、手持设备、计算设备、车载设备、可穿戴设备等。锚点生成的具体实现方式同样不受限定,任何可以基于主终端扫描锚点场景的结果,生成锚点的方式,均可以作为本公开实施例中的实现方式,在一种可能的实现方式中,可以通过对主终端扫描锚点场景的结果进行特征提取,并基于神经网络进行锚点预测的方式,生成待共享锚点。由于锚点生成的方式并不受限定,因此,生成的待共享锚点的数量同样不受限制,可以为1个也可以为多个,根据锚点场景的实际情况以及锚点生成的具体方法灵活确定。
在本公开实施例中,可以将向服务器发送锚点共享请求的终端称为从属终端,从属终端作为终端设备,其具体的实现方式同样不受限定,在此不再举例赘述。在本公开实施例中,从属终端的数量同样不受限制,可以为1个,也可以为多个。在一个示例中,从属终端的数量可以为1个,此时服务器可以通过接收这个从属终端发送的锚点共享请求,实现待共享锚点在主终端和这一个从属终端之间的共享。在一个示例中,从属终端的数量可以为多个,具体的数量在此不做限定,可以根据实际情况灵活选择,此时服务器可以通过接收多个从属终端发送的锚点共享请求,实现待共享锚点在主终端和多个从属终端之间的共享。除此之外,从属终端可以与主终端为不同设备,也可以为相同设备。在一个示例中,当从属终端与主终端为不同设备时,可以实现不同终端设备之间的锚点共享,举例来说,可以是多个终端设备通过上述公开实施例中提出的锚点共享方法,实现多终端设备对某一AR场景的实时共享。在一个示例中,当从属终端与主终端为相同设备时,可以实现单一终端设备,在不同时段之间的锚点共享,举例来说,可以是某一终端设备,通过上述公开实施例中提出的锚点共享方法,实现某一AR场景的多时段共享。在一个示例中,从属终端的数量可以为多个,其中一个从属终端与主终端为同一设备,其余的从属终端与主终端为不同设备,此时可以实现多终端对某一AR场景的多时段共享,举例来说,可以是多个终端设备通过上述公开实施例中提出的锚点共享方法,实现多终端对某一AR场景的多时段共享。
服务器接收的锚点共享请求,其具体的表现形式可以根据实际情况灵活确定,步骤S11中已经提出,该锚点共享请求包括终端扫描得到的当前场景以及获取的待共享锚点的身份标识。因此在本公开实施例中,虽然锚点共享请求的表现形式灵活,但该锚点共享请求应该可以有效反应当前场景的地理信息,以及与待共享锚点的身份标识相关。在一种可能的实现方式中,锚点共享请求可以包括:当前场景的第二特征信息、待共享锚点的身份标识和锚点共享请求消息。
其中,当前场景的第二特征信息,主要用于反应当前场景的地理信息,其具体表现形式和获取方式均不受限定,可以根据实际情况灵活选择。在一种可能的实现方式中,可以通过从属终端扫描当前场景并进行特征提取的方式进行获取。而待共享锚点的身份标识,其表达形式同样不受限定,任何可以用以表明该待共享锚点身份信息的形式,均可以作为身份标识的表达形式,在一个示例中,待共享锚点的身份标识的表达形式,可以是该待共享锚点的锚点编号(anchor id)。同样,锚点共享请求消息的具体内容和表达形式同样不受限定,任何可以向服务器表达需要进行锚点共享的请求,均可以作为锚点共享请求消息的表达形式。
通过包括有第二特征信息、身份标识和锚点共享请求消息的锚点共享请求,可以便于服务器在接收从属终端发送的锚点共享请求消息的同时,根据身份标识确定从属终端请求的待共享锚点的身份,同时直接获取第二特征信息来便于后续直接在服务器内基于该第二特征信息确定共享位置信息,从而减少了与从属终端之间的多次交互过程,提升锚点共享过程的效率。
服务器最终确定的待共享锚点在当前场景中的共享位置信息,其表现形式同样不受限定,任何可以表明待共享锚点在当前场景中的位置的表达方式,均可以作为共享位置信息的实现形式。在一个示例中,可以通过坐标信息表明待共享锚点在当前场景中的位置。
本公开实施例提出的锚点共享方法,通过服务器接收终端发送的锚点共享请求,并根据锚点共享请求确定待共享锚点在当前场景中的共享位置信息,并将该共享位置信息发送至终端,通过这一过程,可以基于服务器的操作,实现锚点在不同终端或不同时段之间的完全共享,从而可以实现AR场景的多人共享、单人多时段共享或是多人多时段共享,从而大大提升了AR技术的统一性和分享性。
上述公开实施例中已经看出,本公开实施例中提出的锚点共享方法,需要基于终端发送的锚点共享请求来实现,而锚点共享请求的发送除了基于终端扫描得到的当前场景以外,还基于获取的待共享锚点的身份标识。因此,本公开实施例提出的锚点共享方法,还需要基于待共享锚点的身份标识来实现。在一种可能的实现方式中,待共享锚点的身份标识可以通过服务器来生成。因此,在一种可能的实现方式中,在服务器接收终端发送的锚点共享请求之前,还可以包括步骤S10,图2示出根据本公开一实施例的锚点共享方法的流程图,如图所示,在一种可能的实现方式中,步骤S10可以包括:
步骤S101,服务器获取待共享锚点的锚点信息。
步骤S102,服务器根据锚点信息,生成待共享锚点的身份标识,保存锚点信息和身份标识。
上述公开实施例中,步骤S101的实现方式不受限定,即服务器获取待共享锚点的锚点信息的方式不受限定。在一种可能的实现方式中,待共享锚点的锚点信息可以由主终端主动上传至服务器中;在一种可能的实现方式中,待共享锚点的锚点信息也可以由服务器主动向主终端获取而得到。需要注意的是,在本公开实施例中,无论服务器通过何种方式获取待共享锚点的锚点信息,该待共享锚点的锚点信息的生成过程均是在主终端内部完成的,服务器对该锚点信息仅起到接收与存储的作用。
通过服务器获取待共享锚点的锚点信息,并根据锚点信息,生成待共享锚点的身份标识,并保存锚点信息和身份标识,通过上述过程,可以实现待共享锚点与身份标识之间的一一对应关系,从而便于后续通过身份标识来确定需要共享的锚点的身份,从而便于确定需要进行共享的AR场景;同时,将身份标识和锚点信息保存于服务器中,便于服务器直接基于服务器内保存的数据确定待共享锚点在当前场景的共享位置信息,无需再从主终端中读取相应的锚点信息,从而有效提升锚点共享过程的效率。
待共享锚点的锚点信息,其包含的具体内容不受限定,在一种可能的实现方式中,待共享锚点的锚点信息可以包括:待共享锚点所在的锚点场景的第一特征信息,以及待共享锚点在锚点场景中的原始位置信息。其中,待共享锚点在锚点场景中的原始位置信息,用以表明待共享锚点与锚点场景之间的位置关系,其具体表现形式与共享位置信息一样,在本公开实施例中不受限制,在一个示例中,原始位置信息同样可以通过坐标信息的形式进行表示。除此之外,待共享锚点在锚点场景中的原始位置信息,其生成的方式与表现形式也不受限定,在一个示例中,原始位置信息可以是在待共享锚点生成的同时自然得到的,即在主终端扫描锚点场景后,主终端在生成待共享锚点时,同样随之生成了待共享锚点在锚点场景内的原始位置信息。
待共享锚点所在的锚点场景的第一特征信息,其具体包含的信息内容以及生成方式也不受限定,在一种可能的实现方式中,第一特征信息可以包括即时定位与地图构建(SLAM,simultaneous localization and mapping)算法中的相关地图信息,该地图信息可以作为待共享锚点所在的锚点场景的参考坐标系以及参考周边环境,用于后续的步骤S12中,来实现共享位置信息的确定。在一种可能的实现方式中,第一特征信息可以是主终端通过扫描锚点场景,并对扫描结果进行特征提取的方式来得到。除此之外,第一特征信息的表现形式同样不受限定,在一个示例中,该第一特征信息可以通过连续帧的形式被服务器获取和保存。
通过包括有第一特征信息和原始位置信息的待共享锚点的锚点信息,可以有效确定待共享锚点所在的锚点场景的环境信息和待共享锚点与锚点场景之间的位置关系,从而便于在后续的步骤S12中,基于第一特征信息与当前场景的比对结果,结合原始位置信息,有效定位待共享锚点在当前场景中的共享位置信息。
上述公开实施例中已经提出,待共享锚点的身份标识的表现形式不受限定,因此,服务器如何根据锚点信息生成待共享锚点的身份标识,其过程可以根据身份标识的形式进行灵活确定。在一种可能的实现方式中,服务器可以在接收到锚点信息后,随机生成一个与该待共享锚点唯一对应的锚点编号,作为身份标识,并在服务器内保存该锚点信息和对应的身份标识,该身份标识一旦生成,就可以代表该身份标识对应的待共享锚点可以被共享。除此之外,在一种可能的实现方式中,步骤S102还可以包括:
在保存锚点信息和身份标识的时间超过时间阈值时,服务器删除锚点信息和身份标识。
上述公开实施例已经提出,待共享锚点的身份标识一旦生成,就可以代表该身份标识对应的待共享锚点可以被共享。待共享锚点的锚点信息和身份标识均可以保存在服务器中,由于待共享锚点可能存在即时性,即被共享的锚点可能只是用于某一或某段时间内,对于服务器来说,如果将所有的待共享锚点均永久保存于服务器内部,可能会造成资源的浪费,同时也可能减小服务器的存储空间和降低服务器的运行效率。因此,为了减少资源的浪费,可以在保存锚点信息和身份标识的时间超过时间阈值时,删除服务器内保存的锚点信息和身份标识,具体时间阈值的设定可以根据实际情况灵活选择,不受下述公开实施例的限制,在一个示例中,可以设定时间阈值为168小时,即从生成待共享锚点的身份标识的时刻开始,在168小时之内,该待共享锚点均可以被共享,超过168小时之后,该待共享锚点的身份标识和锚点信息,会从服务器内删除,即该待共享锚点不可再共享。
通过在保存锚点信息和身份标识的时间超过时间阈值时,删除服务器内锚点信息和身份标识,可以在保障待共享锚点可以被共享的基础上,对于存储时间过长的待共享锚点进行有效清理,从而增大服务器的存储空间的重复利用率,减少资源的浪费的同时提升服务器的工作效率。
通过上述各公开实施例可知,锚点共享请求的实现形式不唯一,因此,随着锚点共享请求形式的不同,步骤S12的实现方式同样可能会存在差异,因此步骤S12的实现方式不唯一。图3示出根据本公开一实施例的锚点共享方法的流程图,如图所示,在一种可能的实现方式中,步骤S12可以包括:
步骤S121,服务器根据锚点共享请求,分别得到待共享锚点的身份标识和当前场景的第二特征信息。
步骤S122,服务器根据身份标识,读取待共享锚点的锚点信息。
步骤S123,服务器根据第二特征信息和锚点信息,确定待共享锚点在当前场景中的共享位置信息。
上述公开实施例已经提出,在一种可能的实现方式中,锚点共享请求可以包括待共享锚点的身份标识、当前场景的第二特征信息和锚点共享请求消息,因此,服务器可以根据锚点共享请求,得到身份标识和第二特征信息,在服务器得到身份标识后,根据上述各公开实施例可知,身份标识与其对应的锚点信息均可以存储于服务器内部,因此,服务器可以根据锚点共享请求内的身份标识,确定对应待共享锚点的锚点信息,然后可以基于锚点信息与第二特征信息,基于步骤S123,确定待共享锚点在当前场景中的共享位置信息。
通过上述过程,服务器可以基于内部保存的数据和接收到的锚点共享请求,通过服务器内部的运算,无需与终端进行多次交互来读取数据,即可便捷地确定待共享锚点在当前场景中的位置信息,从而完成高效率且准确度较高的AR场景的共享。
上述公开实施例中已经提出,共享位置信息的表现形式可以根据实际情况灵活确定,因此具体如何通过第二特征信息和锚点信息,来确定待共享锚点在当前场景中的共享位置信息,不局限于下述公开的实施方式。在一种可能的实现方式中,步骤S123可以包括:
服务器根据锚点信息,分别得到待共享锚点所在的锚点场景的第一特征信息和待共享锚点在锚点场景中的原始位置信息。
服务器将第一特征信息与第二特征信息进行特征匹配,得到当前场景与锚点场景之间的位置变换关系。
服务器根据位置变换关系,对原始位置信息进行位置变换,得到待共享锚点在所述当前场景中的共享位置信息。
通过上述各公开实施例可以看出,在一种可能的实现方式中,锚点信息可以包括第一特征信息和原始位置信息,其中,通过上述公开实施例可知,第一特征信息可以表征锚点场景的情况,而原始位置信息可以反应待共享锚点在锚点场景中的位置,同时,第二特征信息也可以表征当前场景的情况,因此,在一种可能的实现方式中,可以通过对第一特征信息和第二特征信息进行特征匹配,得到当前场景与锚点场景之间的位置变换关系,这样,在确定了当前场景与锚点场景之间的位置变换关系之后,将原始位置信息代入到此变换关系中,则可以得到待共享锚点在当前场景中的共享位置信息。特征匹配的过程可以根据第一特征信息和第二特征信息的实际表达方式进行灵活确定,在一个示例中,特征匹配的过程可以为将主终端与从属终端设备之间进行坐标系的配准,即将主终端上传至服务器的第一特征信息,在本示例中该特征信息的表现形式可以为地图信息,与从属终端上传至服务器的第二特征信息,进行主从终端坐标系的配准,从而可以计算出主终端对应的锚点场景与从属终端对应的当前场景之间坐标系的变换关系,图4示出根据本公开一实施例的锚点场景的坐标系示意图,图5示出根据本公开一实施例的当前场景的坐标系示意图,从两图中可以看出,锚点场景坐标系的原点可以被称为O H,A为待共享锚点,当前场景坐标系的原点可以被称为O R,在一个示例中,锚点场景和当前场景的坐标变换关系可以为转换关系M,即O R=O H*M,则根据该转换关系,可以得到待共享锚点A在当前场景中的坐标,即A R=A H*M,则这一坐标可以作为待共享锚点A在当前场景中的共享位置信息。
通过将第一特征信息与第二特征信息进行特征匹配,得到当前场景与锚点场景之间的位置变换关系,再基于此位置变换关系和原始位置信息,通过位置变换得到待共享锚点在当前场景中的共享位置信息,这一过程计算量小,计算方式便捷,在保障确定待共享锚点的共享位置信息准确性的同时,大大缩减了这一过程的计算速度,从而提升锚点共享过程的速度和效率。
图6示出根据本公开一实施例的锚点共享方法的流程图,该方法可以应用于终端设备,进一步来说,该方法可以应用于上述公开实施例中提出过的从属终端中,从属终端的数量和实现方式等均不受限定,可以参考上述各公开实施例,在此不再赘述。
如图6所示,在一种可能的实现方式中,所述锚点共享方法可以包括:
步骤S21,终端获取待共享锚点的身份标识。
步骤S22,终端根据当前场景和身份标识,向服务器发送锚点共享请求,锚点共享请求用于指示服务器确定待共享锚点在当前场景中的共享位置信息。
步骤S23,终端接收服务器反馈的共享位置信息。
上述公开实施例中,终端获取的待共享锚点的身份标识,与前述各公开实施例中提出过的身份标识一致,在此不再赘述;同时,终端向服务器发送的锚点共享请求,与前述各公开实施例中提出的服务器接收的锚点共享请求一致,在此同样不再赘述;终端接收服务器反馈的共享位置信息,与前述各公开实施例中提出的服务器向终端反馈的共享位置信息一致,在此同样不再具体描述。
终端通过获取待共享锚点的身份标识,并根据当前场景和身份标识向服务器发送锚点共享请求后,接收服务器反馈的共享位置信息,通过上述过程,可以使得终端只需获取当前场景以及目标锚点的身份标识,即可通过与服务器通信的方式,获取待共享锚点在当前场景中的共享位置信息,继而实现AR场景的共享,共享方式简单且效率高,适合普遍推广。
上述过程中,步骤S21的实现方式不受限定,即终端获取待共享锚点的身份标识的方式不受限定。前述公开实施例中已经提出过,本公开实施例中实现步骤S21的主体可以为从属终端,而待共享锚点的身份标识,在一种可能的实现方式中,是基于主终端与服务器之间的信息交互而生成的,并保存于服务器内,在一种可能的实现方式中,服务器可以将生成的待共享锚点的身份标识,返回至主终端。前述公开实施例中也提出过,主终端与从属终端可以为同一设备,也可以为不同设备,因此,从属终端具体如何获取待共享锚点的身份标识,在本公开实施例中不受限定,任何可以使得从属终端获取到身份标识的方式,均可以作为本公开实施例的实施方式。在一种可能的实现方式中,步骤S21可以包括:
终端获取共享信息。
终端根据共享信息与身份标识之间的对应关系,得到待共享锚点的身份标识。
上述过程中,共享信息的表现形式不受限定,任何可以映射到身份标识,且可以通过共享形式进行分享的信息形式,均可以作为共享信息的实现方式。共享信息与身份标识之间的对应关系,可以根据实际情况进行灵活设定,并不局限于下述公开实施例。在一种可能的实现方式中,共享信息的表现形式可以是AR场景共享过程中设置的房间号,即将房间号与身份标识进行绑定,在一个示例中,终端可以在获取到共享的房间号后,进入房间时自动获取与该房间号对应的身份标识。在一种可能的实现方式中,共享信息的表现形式可以是AR场景共享过程中的用户身份,即将用户身份与身份标识进行绑定,在一个示例中,终端可以在获取到共享的用户身份后,自动获取到与该用户身份绑定的身份标识。在一种可能的实现方式中,共享信息的表现方式可以直接是身份标识,即身份标识可以本身作为共享信息的内容,被终端获取。终端获取共享信息的方式同样不受限定,可以根据实际情况灵活选择,在一个示例中,该终端作为从属终端,可以与主终端不为同一设备,此时该从属终端获取共享信息的方式可以是获取主终端所共享的共享信息;在一个示例中,该终端作为从属终端,可以与主终端为同一设备,此时该从属终端获取共享信息的方式可以是直接从主终端中读取共享信息。
终端通过获取共享信息,并根据共享信息与身份标识之间的对应关系,得到待共享锚点的身份标识,通过上述过程,可以较为便捷地获取到待共享锚点的身份标识,无需复杂的解析过程,且可以通过共享信息间接地获取身份标识,可以减少身份标识被泄露的风险,在保障安全性的同时提升锚点共享的效率。
由于身份标识的表现形式不受限定,上述公开实施例中也提出锚点共享请求包含的内容可以存在多种表现形式,因此步骤S22的实现方式同样不受限定,即终端根据当前场景和身份标识,向服务器发送锚点的共享请求的具体实现过程不受限定。在一种可能的实现方式中,步骤S22可以包括:
终端扫描当前场景,获取当前场景的图像。
终端对当前场景的图像进行特征提取,得到第二特征信息。
终端将第二特征信息、身份标识和锚点共享请求消息,共同作为锚点共享请求,发送至服务器。
上述公开实施例中,第二特征信息与锚点共享请求中包括的当前场景的第二特征信息一致,在此不再赘述,同样,上述公开实施例中已经提到过,第二特征信息可以通过扫描当前场景,并对扫描获得的结果进行特征提取的方式,来得到第二特征信息,具体特征提取的过程和方式在本公开实施例中不受限定,任何可以提取到当前场景中的相关特征信息,从而可以确定当前场景的物理环境特征情况的方式,均可以作为特征提取的实现方式,在一个示例中,可以通过训练好的用于提取当前场景的物理环境特征信息的特征提取神经网络,对终端扫描得到的当前场景的图像进行特征提取,从而得到特征信息;在一个示例中,可以通过ORB特征点检测的方法,对终端扫描得到的当前场景的图像进行特征提取;在一个示例中,可以通过尺度不变特征变换(SIFT,Scale-invariant feature transform)算法,对终端扫描得到的当前场景的图像进行特征提取。
终端通过对当前场景进行扫描,得到当前场景的图像,并对当前场景的图像进行特征提取,得到第二特征信息后,将第二特征信息、身边教师和锚点共享请求消息打包作为锚点共享请求,发送至服务器,可以使得终端在请求完成服务器完成锚点共享的同时,直接将服务器需要的数据传输到服务器内,无需繁琐的交互过程,提升锚点共享的效率。
图7示出根据本公开一实施例的锚点共享方法的流程图,该方法可以应用于终端设备和服务器共同构成的系统,其中,终端与服务器的实现方式与上述公开实施例中一致,在此不再赘述。
如图7所示,所述锚点共享方法可以包括:
步骤S31,终端获取待共享锚点的身份标识。
步骤S32,终端根据当前场景和身份标识,向服务器发送锚点共享请求。
步骤S33,服务器根据锚点共享请求,确定待共享锚点在当前场景中的共享位置信息。
步骤S34,终端接收服务器反馈的共享位置信息。
上述过程中,步骤S31的实现方式与步骤S21的实现方式一致,步骤S32与步骤S22的实现方式一致,步骤S33的实现方式与步骤S13的实现方式一致,步骤S34的实现方式与步骤S23的实现方式一致,在此均不再赘述。
通过终端获取待共享锚点的身份标识,根据当前场景和身份标识,向服务器发送锚点共享请求,服务器再根据锚点共享请求,确定待共享锚点在当前场景中的共享位置信息,并将该共享位置信息反馈至终端。通过上述过程可以依据待共享锚点的身份标识,通过服务器将待共享锚点在当前场景中实现定位,从而使得不同终端可以共享同一现实场景下的虚拟世界,实现增强现实技术的共享。
应用场景示例
增强现实技术是一种可以将真实世界信息和虚拟世界信息结合的技术,通过设备将虚拟视觉信息在真实世界图像中显示出来。然而目前大部分增强现实都是单机版本,在屏幕中显示的是自己的虚拟世界和真实世界信息相结合的场景,无法让多个用户在各自的屏幕中看到相同的虚拟世界和现实世界,无法进行有互动性的AR体验。
因此,一个可以实现多人共享的增强现实技术方案具有十分重要的应用价值。
图8~图9示出了根据本公开一应用示例的示意图,如图所示,本公开实施例提出了一种锚点共享方法,该锚点共享方法可以集成于软件开发工具包(SDK,Software Development Kit)内部,通过终端与服务器之间的配合所实现。在本公开示例中,基于SDK实现该锚点共享方法的过程可以为:
调用使能接口打开sdk云服务,其中,云服务会进行证书认证,认证通过后才会提供云服务,在本公开实施例中,具体的认证过程可以为添加从开发者网站申请的Key/Secret,获取调用云服务的权限。
用户A通过主终端创建一个AR场景,并在场景中新建一个锚点(锚点位于现实场景中的标志物上),作为待共享锚点,将该待共享锚点托管(host)到云端服务器,获得对应的锚点id作为该待共享锚点的身份标识。
应用层自行管理获取的锚点id,并与共享信息绑定后将共享信息分享给相关用户,在本应用示例中,这个过程可以使用云服务来完成。
用户B对着同一块现实场景创建一个AR场景,使用通过与共享信息绑定的锚点id,在云端服务器内进行锚点解析(resolve)操作,获得该待共享锚点,且该待共享锚点相对于现实场景位置与用户A摆放时相同。
在本应用示例中,图8示出将待共享锚点托管到云端服务器的实现过程,图9示出将该待共享锚点在云端服务器内进行锚点解析的实现过程,从图中可以看出,托管和解析的过程在数据的传输流程上具有一致性,但是传输的数据流不同,在云端服务器托管待共享锚点时,主要实现的是数据的解析和存储,在云端服务器解析待共享锚点时,云端服务器会调用匹配算法来实现。
云端服务器托管待共享锚点的具体过程可以为:主终端将数据传至云端服务端,数据包括待共享锚点在锚点场景中的位置信息,SLAM算法中的相关地图信息等,地图信息作为参考坐标系及参考周边环境,供后续解析待共享锚点时使用,主终端传输的数据会被保存在云端服务器,成为云锚点并赋予一个云锚点编号(anchor id),该编号会返回主终端。锚点托管具有即时性,从取得云锚点编号时到之后的168小时内,该锚点均可以被解析,168小时之后,该锚点的相关信息会被删除。
云端服务器解析待共享锚点的过程可以为:从属终端负责向云端服务器发送解析请求将云锚点注册到当前从属终端内的本地坐标系,请求数据包包含从当前从属终端采集的画面中提取的特征信息以及待解析的云锚点编号。云端服务器会将从属终端上传的特征信息和主终端上传的地图信息进行特征匹配,匹配过程包括主从终端坐标系的配准,计算出主从终端内AR场景坐标系的变换关系,并将这变换结果应用到该云锚点上,从属终端收到的就是该云锚点在从属终端AR场景中的坐标信息。
通过上述锚点托管以及锚点解析的过程,可以实现不同的终端设备共享一个或多个锚点。在本应用示例中,上述方法可以应用于多人AR场景的共享过程中,即一个用户通过某一终端设备,在云端服务器托管一个或多个待共享锚点,其余用户可以通过各自的终端设备,扫描各自当前的场景,并在云端服务器通过锚点解析,获取该待共享锚点在各自终端设备中的位置,从而实现在多台终端中,锚点均显示在现实世界的相同位置,从而实现多人AR互动。上述方法也可以应用于单人多时段AR场景的共享过程中,即某一用户通过某一终端设备在云端服务器托管一个或多个待共享锚点,从而在云端服务器保存该锚点以及锚点场景。过了一段时间后,该用户还可以通过终端设备扫描该锚点场景, 通过锚点解析,在服务器中再次打开锚点场景,获取与之前保存的锚点场景中位置相同的待共享锚点,并执行需要的操作。上述方法同样可以应用于多人多时段AR场景的共享过程中,即将上述两种应用方式结合,多人将AR场景保存,并在下次通过锚点解析打开该AR场景时,呈现相同场景,不受时间约束。
可以理解,本公开提及的上述各个方法实施例,在不违背原理逻辑的情况下,均可以彼此相互结合形成结合后的实施例,限于篇幅,本公开不再赘述。
本领域技术人员可以理解,在具体实施方式的上述方法中,各步骤的撰写顺序并不意味着严格的执行顺序而对实施过程构成任何限定,各步骤的具体执行顺序应当以其功能和可能的内在逻辑确定。
图10示出根据本公开实施例的锚点共享装置的框图。该锚点共享装置可以为服务器等设备。
如图10所示,所述锚点共享装置40可以包括:锚点共享请求接收模块41,用于接收终端发送的锚点共享请求,锚点共享请求包括终端扫描得到的当前场景以及获取的待共享锚点的身份标识;共享位置信息确定模块42,用于根据锚点共享请求,确定待共享锚点在当前场景中的共享位置信息;反馈模块43,用于向终端反馈所述共享位置信息。
在一种可能的实现方式中,锚点共享请求接收模块之前,还包括身份标识生成模块,身份标识生成模块用于:获取待共享锚点的锚点信息;根据锚点信息,生成待共享锚点的身份标识,保存锚点信息和身份标识。
在一种可能的实现方式中,身份标识生成模块进一步用于:在保存锚点信息和身份标识的时间超过时间阈值时,服务器删除锚点信息和身份标识。
在一种可能的实现方式中,待共享锚点的锚点信息包括:待共享锚点所在的锚点场景的第一特征信息,以及待共享锚点在所述锚点场景中的原始位置信息。
在一种可能的实现方式中,锚点共享请求包括:当前场景的第二特征信息、待共享锚点的身份标识和锚点共享请求消息。
在一种可能的实现方式中,共享位置信息确定模块用于:根据锚点共享请求,分别得到待共享锚点的身份标识和当前场景的第二特征信息;根据身份标识,读取待共享锚点的锚点信息;根据第二特征信息和锚点信息,确定待共享锚点在当前场景中的共享位置信息。
在一种可能的实现方式中,共享位置信息确定模块进一步用于:根据锚点信息,分别得到待共享锚点所在的锚点场景的第一特征信息和待共享锚点在锚点场景中的原始位置信息;将第一特征信息与第二特征信息进行特征匹配,得到当前场景与锚点场景之间的位置变换关系;根据位置变换关系,对原始位置信息进行位置变换,得到待共享锚点在当前场景中的共享位置信息。
图11示出根据本公开实施例的锚点共享装置的框图。该锚点共享装置可以为终端设备等。
如图11所示,所述锚点共享装置50可以包括:身份标识获取模块51,用于获取待共享锚点的身份标识;锚点共享请求发送模块52,用于根据当前场景和身份标识,向服务器发送锚点共享请求,锚点共享请求用于指示服务器确定待共享锚点在当前场景中的共享位置信息;共享位置信息接收模块53,用于接收服务器反馈的共享位置信息。
在一种可能的实现方式中,身份标识获取模块用于:获取共享信息;根据共享信息与身份标识之间的对应关系,得到待共享锚点的身份标识。
在一种可能的实现方式中,锚点共享请求发送模块用于:扫描当前场景,获取当前场景的图像;对当前场景的图像进行特征提取,得到第二特征信息;将第二特征信息、身份标识和锚点共享请求消息,共同作为锚点共享请求,发送至服务器。
图12示出根据本公开实施例的锚点共享系统的框图。如图所示,所述锚点共享系统60可以包括:如上述各公开实施例所述的第一锚点共享装置40;如上述各公开实施例所述的第二锚点共享装置50;其中,第一锚点共享装置与第二锚点共享装置通过锚点共享请求进行交互。
本公开实施例还提出一种计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序 指令被处理器执行时实现上述方法。计算机可读存储介质可以是非易失性计算机可读存储介质。
本公开实施例还提出一种电子设备,包括:处理器;用于存储处理器可执行指令的存储器;其中,所述处理器被配置为上述方法。
电子设备可以被提供为终端、服务器或其它形态的设备。
图13是根据本公开实施例的一种电子设备800的框图。例如,电子设备800可以是移动电话,计算机,数字广播终端,消息收发设备,游戏控制台,平板设备,医疗设备,健身设备,个人数字助理等终端。
参照图13,电子设备800可以包括以下一个或多个组件:处理组件802,存储器804,电源组件806,多媒体组件808,音频组件810,输入/输出(I/O)的接口812,传感器组件814,以及通信组件816。
处理组件802通常控制电子设备800的整体操作,诸如与显示,电话呼叫,数据通信,相机操作和记录操作相关联的操作。处理组件802可以包括一个或多个处理器820来执行指令,以完成上述的方法的全部或部分步骤。此外,处理组件802可以包括一个或多个模块,便于处理组件802和其他组件之间的交互。例如,处理组件802可以包括多媒体模块,以方便多媒体组件808和处理组件802之间的交互。
存储器804被配置为存储各种类型的数据以支持在电子设备800的操作。这些数据的示例包括用于在电子设备800上操作的任何应用程序或方法的指令,联系人数据,电话簿数据,消息,图片,视频等。存储器804可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,如静态随机存取存储器(SRAM),电可擦除可编程只读存储器(EEPROM),可擦除可编程只读存储器(EPROM),可编程只读存储器(PROM),只读存储器(ROM),磁存储器,快闪存储器,磁盘或光盘。
电源组件806为电子设备800的各种组件提供电力。电源组件806可以包括电源管理系统,一个或多个电源,及其他与为电子设备800生成、管理和分配电力相关联的组件。
多媒体组件808包括在所述电子设备800和用户之间的提供一个输出接口的屏幕。在一些实施例中,屏幕可以包括液晶显示器(LCD)和触摸面板(TP)。如果屏幕包括触摸面板,屏幕可以被实现为触摸屏,以接收来自用户的输入信号。触摸面板包括一个或多个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与所述触摸或滑动操作相关的持续时间和压力。在一些实施例中,多媒体组件808包括一个前置摄像头和/或后置摄像头。当电子设备800处于操作模式,如拍摄模式或视频模式时,前置摄像头和/或后置摄像头可以接收外部的多媒体数据。每个前置摄像头和后置摄像头可以是一个固定的光学透镜系统或具有焦距和光学变焦能力。
音频组件810被配置为输出和/或输入音频信号。例如,音频组件810包括一个麦克风(MIC),当电子设备800处于操作模式,如呼叫模式、记录模式和语音识别模式时,麦克风被配置为接收外部音频信号。所接收的音频信号可以被进一步存储在存储器804或经由通信组件816发送。在一些实施例中,音频组件810还包括一个扬声器,用于输出音频信号。
I/O接口812为处理组件802和外围接口模块之间提供接口,上述外围接口模块可以是键盘,点击轮,按钮等。这些按钮可包括但不限于:主页按钮、音量按钮、启动按钮和锁定按钮。
传感器组件814包括一个或多个传感器,用于为电子设备800提供各个方面的状态评估。例如,传感器组件814可以检测到电子设备800的打开/关闭状态,组件的相对定位,例如所述组件为电子设备800的显示器和小键盘,传感器组件814还可以检测电子设备800或电子设备800一个组件的位置改变,用户与电子设备800接触的存在或不存在,电子设备800方位或加速/减速和电子设备800的温度变化。传感器组件814可以包括接近传感器,被配置用来在没有任何的物理接触时检测附近物体的存在。传感器组件814还可以包括光传感器,如CMOS或CCD图像传感器,用于在成像应用中使用。在一些实施例中,该传感器组件814还可以包括加速度传感器,陀螺仪传感器,磁传感器,压力传感器或温度传感器。
通信组件816被配置为便于电子设备800和其他设备之间有线或无线方式的通信。电子设备800 可以接入基于通信标准的无线网络,如WiFi,2G或3G,或它们的组合。在一个示例性实施例中,通信组件816经由广播信道接收来自外部广播管理系统的广播信号或广播相关信息。在一个示例性实施例中,所述通信组件816还包括近场通信(NFC)模块,以促进短程通信。例如,在NFC模块可基于射频识别(RFID)技术,红外数据协会(IrDA)技术,超宽带(UWB)技术,蓝牙(BT)技术和其他技术来实现。
在示例性实施例中,电子设备800可以被一个或多个应用专用集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理设备(DSPD)、可编程逻辑器件(PLD)、现场可编程门阵列(FPGA)、控制器、微控制器、微处理器或其他电子元件实现,用于执行上述方法。
在示例性实施例中,还提供了一种非易失性计算机可读存储介质,例如包括计算机程序指令的存储器804,上述计算机程序指令可由电子设备800的处理器820执行以完成上述方法。
图14是根据本公开实施例的一种电子设备1900的框图。例如,电子设备1900可以被提供为一服务器。参照图14,电子设备1900包括处理组件1922,其进一步包括一个或多个处理器,以及由存储器1932所代表的存储器资源,用于存储可由处理组件1922的执行的指令,例如应用程序。存储器1932中存储的应用程序可以包括一个或一个以上的每一个对应于一组指令的模块。此外,处理组件1922被配置为执行指令,以执行上述方法。
电子设备1900还可以包括一个电源组件1926被配置为执行电子设备1900的电源管理,一个有线或无线网络接口1950被配置为将电子设备1900连接到网络,和一个输入输出(I/O)接口1958。电子设备1900可以操作基于存储在存储器1932的操作系统,例如Windows ServerTM,Mac OS XTM,UnixTM,LinuxTM,FreeBSDTM或类似。
在示例性实施例中,还提供了一种非易失性计算机可读存储介质,例如包括计算机程序指令的存储器1932,上述计算机程序指令可由电子设备1900的处理组件1922执行以完成上述方法。
本公开可以是系统、方法和/或计算机程序产品。计算机程序产品可以包括计算机可读存储介质,其上载有用于使处理器实现本公开的各个方面的计算机可读程序指令。
计算机可读存储介质可以是可以保持和存储由指令执行设备使用的指令的有形设备。计算机可读存储介质例如可以是――但不限于――电存储设备、磁存储设备、光存储设备、电磁存储设备、半导体存储设备或者上述的任意合适的组合。计算机可读存储介质的更具体的例子(非穷举的列表)包括:便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、静态随机存取存储器(SRAM)、便携式压缩盘只读存储器(CD-ROM)、数字多功能盘(DVD)、记忆棒、软盘、机械编码设备、例如其上存储有指令的打孔卡或凹槽内凸起结构、以及上述的任意合适的组合。这里所使用的计算机可读存储介质不被解释为瞬时信号本身,诸如无线电波或者其他自由传播的电磁波、通过波导或其他传输媒介传播的电磁波(例如,通过光纤电缆的光脉冲)、或者通过电线传输的电信号。
这里所描述的计算机可读程序指令可以从计算机可读存储介质下载到各个计算/处理设备,或者通过网络、例如因特网、局域网、广域网和/或无线网下载到外部计算机或外部存储设备。网络可以包括铜传输电缆、光纤传输、无线传输、路由器、防火墙、交换机、网关计算机和/或边缘服务器。每个计算/处理设备中的网络适配卡或者网络接口从网络接收计算机可读程序指令,并转发该计算机可读程序指令,以供存储在各个计算/处理设备中的计算机可读存储介质中。
用于执行本公开操作的计算机程序指令可以是汇编指令、指令集架构(ISA)指令、机器指令、机器相关指令、微代码、固件指令、状态设置数据、或者以一种或多种编程语言的任意组合编写的源代码或目标代码,所述编程语言包括面向对象的编程语言—诸如Smalltalk、C++等,以及常规的过程式编程语言—诸如“C”语言或类似的编程语言。计算机可读程序指令可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络—包括局域网(LAN)或广域网(WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。在一些实施例中,通过利用计算机可读程序指 令的状态信息来个性化定制电子电路,例如可编程逻辑电路、现场可编程门阵列(FPGA)或可编程逻辑阵列(PLA),该电子电路可以执行计算机可读程序指令,从而实现本公开的各个方面。
这里参照根据本公开实施例的方法、装置(系统)和计算机程序产品的流程图和/或框图描述了本公开的各个方面。应当理解,流程图和/或框图的每个方框以及流程图和/或框图中各方框的组合,都可以由计算机可读程序指令实现。
这些计算机可读程序指令可以提供给通用计算机、专用计算机或其它可编程数据处理装置的处理器,从而生产出一种机器,使得这些指令在通过计算机或其它可编程数据处理装置的处理器执行时,产生了实现流程图和/或框图中的一个或多个方框中规定的功能/动作的装置。也可以把这些计算机可读程序指令存储在计算机可读存储介质中,这些指令使得计算机、可编程数据处理装置和/或其他设备以特定方式工作,从而,存储有指令的计算机可读介质则包括一个制造品,其包括实现流程图和/或框图中的一个或多个方框中规定的功能/动作的各个方面的指令。
也可以把计算机可读程序指令加载到计算机、其它可编程数据处理装置、或其它设备上,使得在计算机、其它可编程数据处理装置或其它设备上执行一系列操作步骤,以产生计算机实现的过程,从而使得在计算机、其它可编程数据处理装置、或其它设备上执行的指令实现流程图和/或框图中的一个或多个方框中规定的功能/动作。
附图中的流程图和框图显示了根据本公开的多个实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段或指令的一部分,所述模块、程序段或指令的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个连续的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或动作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
以上已经描述了本公开的各实施例,上述说明是示例性的,并非穷尽性的,并且也不限于所披露的各实施例。在不偏离所说明的各实施例的范围和精神的情况下,对于本技术领域的普通技术人员来说许多修改和变更都是显而易见的。本文中所用术语的选择,旨在最好地解释各实施例的原理、实际应用或对市场中的技术改进,或者使本技术领域的其它普通技术人员能理解本文披露的各实施例。

Claims (27)

  1. 一种锚点共享方法,其特征在于,包括:
    服务器接收终端发送的锚点共享请求,所述锚点共享请求包括所述终端扫描得到的当前场景以及获取的待共享锚点的身份标识;
    服务器根据所述锚点共享请求,确定所述待共享锚点在所述当前场景中的共享位置信息;
    服务器向所述终端反馈所述共享位置信息。
  2. 根据权利要求1所述的方法,其特征在于,所述服务器接收终端发送的锚点共享请求之前,还包括:
    服务器获取所述待共享锚点的锚点信息;
    服务器根据所述锚点信息,生成所述待共享锚点的身份标识,保存所述锚点信息和所述身份标识。
  3. 根据权利要求2所述的方法,其特征在于,所述服务器根据所述锚点信息,生成所述待共享锚点的身份标识,保存所述锚点信息和所述身份标识,还包括:
    在保存所述锚点信息和所述身份标识的时间超过时间阈值时,服务器删除所述锚点信息和所述身份标识。
  4. 根据权利要求2或3所述的方法,其特征在于,所述待共享锚点的锚点信息包括:
    所述待共享锚点所在的锚点场景的第一特征信息,以及所述待共享锚点在所述锚点场景中的原始位置信息。
  5. 根据权利要求1至4中任意一项所述的方法,其特征在于,所述锚点共享请求包括:
    所述当前场景的第二特征信息、所述待共享锚点的身份标识和锚点共享请求消息。
  6. 根据权利要求1至5中任意一项所述的方法,其特征在于,所述服务器根据所述锚点共享请求,确定所述待共享锚点在所述当前场景中的共享位置信息,包括:
    服务器根据所述锚点共享请求,分别得到所述待共享锚点的身份标识和所述当前场景的第二特征信息;
    服务器根据所述身份标识,读取所述待共享锚点的锚点信息;
    服务器根据所述第二特征信息和所述锚点信息,确定所述待共享锚点在所述当前场景中的共享位置信息。
  7. 根据权利要求6所述的方法,其特征在于,所述服务器根据所述第二特征信息和所述锚点信息,确定所述待共享锚点在所述当前场景中的共享位置信息,包括:
    服务器根据所述锚点信息,分别得到所述待共享锚点所在的锚点场景的第一特征信息和所述待共享锚点在所述锚点场景中的原始位置信息;
    服务器将所述第一特征信息与所述第二特征信息进行特征匹配,得到所述当前场景与所述锚点场景之间的位置变换关系;
    服务器根据所述位置变换关系,对所述原始位置信息进行位置变换,得到所述待共享锚点在所述当前场景中的共享位置信息。
  8. 一种锚点共享方法,其特征在于,包括:
    终端获取待共享锚点的身份标识;
    终端根据当前场景和所述身份标识,向服务器发送锚点共享请求,所述锚点共享请求用于指示所述服务器确定所述待共享锚点在所述当前场景中的共享位置信息;
    终端接收服务器反馈的共享位置信息。
  9. 根据权利要求8所述的方法,其特征在于,所述终端获取待共享锚点的身份标识,包括:
    终端获取共享信息;
    终端根据所述共享信息与身份标识之间的对应关系,得到待共享锚点的身份标识。
  10. 根据权利要求8或9所述的方法,其特征在于,所述终端根据当前场景和所述身份标识,向服务器发送锚点共享请求,包括:
    终端扫描当前场景,获取当前场景的图像;
    终端对所述当前场景的图像进行特征提取,得到所述第二特征信息;
    终端将所述第二特征信息、所述身份标识和所述锚点共享请求消息,共同作为所述锚点共享请求,发送至服务器。
  11. 一种锚点共享方法,其特征在于,包括:
    终端获取待共享锚点的身份标识;
    终端根据当前场景和所述身份标识,向服务器发送锚点共享请求;
    服务器根据所述锚点共享请求,确定所述待共享锚点在当前场景中的共享位置信息;
    终端接收所述服务器反馈的所述共享位置信息。
  12. 一种锚点共享装置,其特征在于,所述装置应用于服务器,包括:
    锚点共享请求接收模块,用于接收终端发送的锚点共享请求,所述锚点共享请求包括所述终端扫描得到的当前场景以及获取的待共享锚点的身份标识;
    共享位置信息确定模块,用于根据所述锚点共享请求,确定所述待共享锚点在所述当前场景中的共享位置信息;
    反馈模块,用于向所述终端反馈所述共享位置信息。
  13. 根据权利要求12所述的装置,其特征在于,所述锚点共享请求接收模块之前,还包括身份标识生成模块,所述身份标识生成模块用于:
    获取所述待共享锚点的锚点信息;
    根据所述锚点信息,生成所述待共享锚点的身份标识,保存所述锚点信息和所述身份标识。
  14. 根据权利要求13所述的装置,其特征在于,所述身份标识生成模块进一步用于:
    在保存所述锚点信息和所述身份标识的时间超过时间阈值时,服务器删除所述锚点信息和所述身份标识。
  15. 根据权利要求13或14所述的装置,其特征在于,所述待共享锚点的锚点信息包括:
    所述待共享锚点所在的锚点场景的第一特征信息,以及所述待共享锚点在所述锚点场景中的原始位置信息。
  16. 根据权利要求12至15中任意一项所述的装置,其特征在于,所述锚点共享请求包括:
    所述当前场景的第二特征信息、所述待共享锚点的身份标识和锚点共享请求消息。
  17. 根据权利要求12至16中任意一项所述的装置,其特征在于,所述共享位置信息确定模块用于:
    根据所述锚点共享请求,分别得到所述待共享锚点的身份标识和所述当前场景的第二特征信息;
    根据所述身份标识,读取所述待共享锚点的锚点信息;
    根据所述第二特征信息和所述锚点信息,确定所述待共享锚点在所述当前场景中的共享位置信息。
  18. 根据权利要求17所述的装置,其特征在于,所述共享位置信息确定模块进一步用于:
    根据所述锚点信息,分别得到所述待共享锚点所在的锚点场景的第一特征信息和所述待共享锚点在所述锚点场景中的原始位置信息;
    将所述第一特征信息与所述第二特征信息进行特征匹配,得到所述当前场景与所述锚点场景之间的位置变换关系;
    根据所述位置变换关系,对所述原始位置信息进行位置变换,得到所述待共享锚点在所述当前场景中的共享位置信息。
  19. 一种锚点共享装置,其特征在于,所述装置应用于终端,包括:
    身份标识获取模块,用于获取待共享锚点的身份标识;
    锚点共享请求发送模块,用于根据当前场景和所述身份标识,向服务器发送锚点共享请求,所述锚点共享请求用于指示所述服务器确定所述待共享锚点在所述当前场景中的共享位置信息;
    共享位置信息接收模块,用于接收服务器反馈的共享位置信息。
  20. 根据权利要求19所述的装置,其特征在于,所述身份标识获取模块用于:
    获取共享信息;
    根据所述共享信息与身份标识之间的对应关系,得到待共享锚点的身份标识。
  21. 根据权利要求19或20所述的装置,其特征在于,所述锚点共享请求发送模块用于:
    扫描当前场景,获取当前场景的图像;
    对所述当前场景的图像进行特征提取,得到所述第二特征信息;
    将所述第二特征信息、所述身份标识和所述锚点共享请求消息,共同作为所述锚点共享请求,发送至服务器。
  22. 一种锚点共享系统,其特征在于,包括:
    如权利要求12至18中任意一项所述的第一锚点共享装置;
    如权利要求19至21中任意一项所述的第二锚点共享装置;
    其中,所述第一锚点共享装置与所述第二锚点共享装置通过锚点共享请求进行交互。
  23. 一种电子设备,其特征在于,包括:
    处理器;
    用于存储处理器可执行指令的存储器;
    其中,所述处理器被配置为调用所述存储器存储的指令,以执行权利要求1至7中任意一项所述的方法。
  24. 一种电子设备,其特征在于,包括:
    处理器;
    用于存储处理器可执行指令的存储器;
    其中,所述处理器被配置为调用所述存储器存储的指令,以执行权利要求8至10中任意一项所述的方法。
  25. 一种计算机可读存储介质,其上存储有计算机程序指令,其特征在于,所述计算机程序指令被处理器执行时实现权利要求1至7中任意一项所述的方法。
  26. 一种计算机可读存储介质,其上存储有计算机程序指令,其特征在于,所述计算机程序指令被处理器执行时实现权利要求8至10中任意一项所述的方法。
  27. 一种计算机程序,包括计算机可读代码,其特征在于,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行用于实现如权利要求1-11中的任一项所述的方法。
PCT/CN2020/080481 2019-06-26 2020-03-20 锚点共享方法及装置、系统、电子设备和存储介质 WO2020258938A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2021549776A JP7245350B2 (ja) 2019-06-26 2020-03-20 アンカー共有方法及び装置、システム、電子機器並びに記憶媒体
SG11202109292SA SG11202109292SA (en) 2019-06-26 2020-03-20 Method, apparatus and system for anchor sharing, electronic device and storage medium
US17/407,214 US20210383580A1 (en) 2019-06-26 2021-08-20 Method, apparatus and system for anchor sharing, electronic device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910562547.3A CN112153083B (zh) 2019-06-26 2019-06-26 锚点共享方法及装置、系统、电子设备和存储介质
CN201910562547.3 2019-06-26

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/407,214 Continuation US20210383580A1 (en) 2019-06-26 2021-08-20 Method, apparatus and system for anchor sharing, electronic device and storage medium

Publications (1)

Publication Number Publication Date
WO2020258938A1 true WO2020258938A1 (zh) 2020-12-30

Family

ID=73869895

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/080481 WO2020258938A1 (zh) 2019-06-26 2020-03-20 锚点共享方法及装置、系统、电子设备和存储介质

Country Status (6)

Country Link
US (1) US20210383580A1 (zh)
JP (1) JP7245350B2 (zh)
CN (1) CN112153083B (zh)
SG (1) SG11202109292SA (zh)
TW (1) TWI767225B (zh)
WO (1) WO2020258938A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102505537B1 (ko) * 2021-09-28 2023-03-03 성균관대학교산학협력단 Ar 공유 시스템

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103460256A (zh) * 2011-03-29 2013-12-18 高通股份有限公司 在扩增现实系统中将虚拟图像锚定到真实世界表面
US20140354685A1 (en) * 2013-06-03 2014-12-04 Gavin Lazarow Mixed reality data collaboration
US20140375688A1 (en) * 2013-06-25 2014-12-25 William Gibbens Redmann Multiuser augmented reality system
CN107741886A (zh) * 2017-10-11 2018-02-27 江苏电力信息技术有限公司 一种基于增强现实技术多人互动的方法
CN107850779A (zh) * 2015-06-24 2018-03-27 微软技术许可有限责任公司 虚拟位置定位锚
US20180190023A1 (en) * 2016-12-30 2018-07-05 Glen J. Anderson Dynamic, local augmented reality landmarks
US20180285052A1 (en) * 2017-03-30 2018-10-04 Microsoft Technology Licensing, Llc Sharing neighboring map data across devices
US20180321894A1 (en) * 2017-05-04 2018-11-08 Microsoft Technology Licensing, Llc Virtual content displayed with shared anchor

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190088030A1 (en) * 2017-09-20 2019-03-21 Microsoft Technology Licensing, Llc Rendering virtual objects based on location data and image data
US10685456B2 (en) * 2017-10-12 2020-06-16 Microsoft Technology Licensing, Llc Peer to peer remote localization for devices

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103460256A (zh) * 2011-03-29 2013-12-18 高通股份有限公司 在扩增现实系统中将虚拟图像锚定到真实世界表面
US20140354685A1 (en) * 2013-06-03 2014-12-04 Gavin Lazarow Mixed reality data collaboration
US20140375688A1 (en) * 2013-06-25 2014-12-25 William Gibbens Redmann Multiuser augmented reality system
CN107850779A (zh) * 2015-06-24 2018-03-27 微软技术许可有限责任公司 虚拟位置定位锚
US20180190023A1 (en) * 2016-12-30 2018-07-05 Glen J. Anderson Dynamic, local augmented reality landmarks
US20180285052A1 (en) * 2017-03-30 2018-10-04 Microsoft Technology Licensing, Llc Sharing neighboring map data across devices
US20180321894A1 (en) * 2017-05-04 2018-11-08 Microsoft Technology Licensing, Llc Virtual content displayed with shared anchor
CN107741886A (zh) * 2017-10-11 2018-02-27 江苏电力信息技术有限公司 一种基于增强现实技术多人互动的方法

Also Published As

Publication number Publication date
CN112153083A (zh) 2020-12-29
CN112153083B (zh) 2022-07-19
US20210383580A1 (en) 2021-12-09
TWI767225B (zh) 2022-06-11
SG11202109292SA (en) 2021-09-29
JP2022522283A (ja) 2022-04-15
TW202103497A (zh) 2021-01-16
JP7245350B2 (ja) 2023-03-23

Similar Documents

Publication Publication Date Title
WO2022043741A1 (zh) 网络训练、行人重识别方法及装置、存储介质、计算机程序
WO2022134382A1 (zh) 图像分割方法及装置、电子设备和存储介质、计算机程序
US10608988B2 (en) Method and apparatus for bluetooth-based identity recognition
WO2017045309A1 (zh) 设备控制方法、装置和终端设备
US10630735B2 (en) Communication terminal, communication system, communication method, and recording medium
US10298881B2 (en) Communication terminal, communication system, communication method, and recording medium
US10102505B2 (en) Server-implemented method, terminal-implemented method and device for acquiring business card information
WO2022188305A1 (zh) 信息展示方法及装置、电子设备、存储介质及计算机程序
TWI767217B (zh) 坐標系對齊的方法及裝置、電子設備和計算機可讀存儲介質
EP3264774B1 (en) Live broadcasting method and device for live broadcasting
US10091010B2 (en) Communication terminal, communication system, communication method, and recording medium
US20150282233A1 (en) Communication management system, communication management method, and recording medium storing communication management program
US9723486B2 (en) Method and apparatus for accessing network
US20180144546A1 (en) Method, device and terminal for processing live shows
KR20220024302A (ko) 블록체인과 해쉬 암호화 기술을 기반으로 한 영상 인증 시스템 및 그 방법
KR20210000957A (ko) 블록체인과 해쉬 암호화 기술을 기반으로 한 영상 인증 시스템 및 그 방법
CN110619097A (zh) 二维码生成方法、装置、电子设备及存储介质
WO2020258938A1 (zh) 锚点共享方法及装置、系统、电子设备和存储介质
CN112616053B (zh) 直播视频的转码方法、装置及电子设备
CN106447747B (zh) 图像处理方法及装置
CN111526380A (zh) 视频处理方法、装置、服务器、电子设备及存储介质
CN110673732A (zh) 场景共享方法及装置、系统、电子设备和存储介质
CN106375744B (zh) 信息投影方法及装置
WO2022151687A1 (zh) 合影图像生成方法、装置、设备、存储介质、计算机程序及产品
CN106126104B (zh) 键盘模拟方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20833051

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021549776

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20833051

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 20833051

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12/05/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20833051

Country of ref document: EP

Kind code of ref document: A1