CN113359983A - Augmented reality data presentation method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113359983A
CN113359983A (application number CN202110619445.8A)
Authority
CN
China
Prior art keywords
data
special effect
audio explanation
equipment
effect data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110619445.8A
Other languages
Chinese (zh)
Inventor
田真
李斌
欧华富
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202110619445.8A priority Critical patent/CN113359983A/en
Publication of CN113359983A publication Critical patent/CN113359983A/en
Priority to PCT/CN2022/076270 priority patent/WO2022252688A1/en
Priority to TW111108666A priority patent/TW202248808A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/16 Sound input; Sound output
    • G06F3/165 Management of the audio stream, e.g. setting of volume, audio stream path
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Abstract

The present disclosure provides an augmented reality data presentation method and apparatus, an electronic device, and a storage medium. The method includes: acquiring positioning information of an augmented reality (AR) device; obtaining, when it is determined based on the positioning information that the positional relationship between the AR device and an explanation area corresponding to any preset knowledge point satisfies a preset position condition, audio explanation data corresponding to that preset knowledge point and AR special effect data matched with the audio explanation data; and controlling the AR device to play the audio explanation data and to display the AR special effect data matched with the currently played audio explanation data.

Description

Augmented reality data presentation method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of augmented reality technologies, and in particular, to a method and an apparatus for presenting augmented reality data, an electronic device, and a storage medium.
Background
Augmented Reality (AR) is a technology that integrates real-world information with virtual-world information, overlaying the virtual world onto the real world on a screen so that the two can fit together and interact. With the development of the technology, AR is applied in various scenes, such as travel, shopping, and education scenes.
Therefore, it is important to provide a method for displaying AR data.
Disclosure of Invention
In view of the above, the present disclosure at least provides an augmented reality data presentation method, an apparatus, an electronic device and a storage medium.
In a first aspect, the present disclosure provides an augmented reality data presentation method, including:
acquiring positioning information of an augmented reality (AR) device;
obtaining, when it is determined based on the positioning information that the positional relationship between the AR device and an explanation area corresponding to any preset knowledge point satisfies a preset position condition, audio explanation data corresponding to that preset knowledge point and AR special effect data matched with the audio explanation data;
and controlling the AR device to play the audio explanation data and to display the AR special effect data matched with the currently played audio explanation data.
In the above method, the AR device is controlled to play the audio explanation data corresponding to a preset knowledge point, and while the audio explanation data is played, the AR special effect data matched with the currently played audio explanation data is displayed. Because the AR special effect data provides an auxiliary explanation for the audio explanation data, the explanation of the preset knowledge point becomes clear and intuitive, and the explanation efficiency of the preset knowledge point is improved.
In a possible embodiment, the controlling the AR device to play audio explanation data and display AR special effect data matching with currently played audio explanation data includes:
and under the condition that the AR equipment meets the trigger display condition of the AR special effect data based on the positioning information, controlling the AR equipment to play the audio explanation data and displaying the AR special effect data.
Here, the AR device is controlled to play the audio explanation data and display the AR special effect data only when the AR device satisfies the trigger presentation condition of the AR special effect data. This avoids the poor playing effect that would result from presenting the AR special effect data while the positioning information of the AR device is unsuitable, and thus improves the presentation effect of the AR special effect data.
In a possible embodiment, the method further comprises:
and under the condition that the AR equipment is determined not to meet the trigger display condition of the AR special effect data based on the positioning information, controlling the AR equipment to display first guide information for guiding and adjusting the positioning information of the AR equipment.
Here, when the AR device does not satisfy the trigger presentation condition of the AR special effect data, the AR device can be controlled to display first guidance information, and the positioning information of the AR device can be adjusted according to that guidance. The adjusted AR device can then display the AR special effect data clearly and intuitively, so that no preset knowledge point is missed and the explanation efficiency of the preset knowledge points is improved.
In a possible embodiment, the method further comprises:
and when the audio explanation data corresponding to any preset knowledge point is played completely and/or the AR special effect data is displayed completely, controlling the AR equipment to display at least one of the following information:
second guidance information for instructing the AR device to move to an explanation area corresponding to a next preset knowledge point;
list information indicating preset knowledge points that the AR device has not visited;
and the navigation ending information is used for indicating that the AR equipment plays the audio explanation data corresponding to each preset knowledge point.
Here, a plurality of information types to be displayed are set, so that after the audio explanation data corresponding to any preset knowledge point is played and/or the AR special effect data is displayed, the AR equipment can be controlled to display at least one type of information, and the display diversity and flexibility are improved.
In one possible embodiment, the presenting AR special effect data corresponding to currently played audio explanation data includes:
and in the process of playing the audio explanation data, switching and displaying various AR special effect data corresponding to the currently played audio explanation data.
Here, the audio explanation data may correspond to a plurality of AR special effect data, so that in the process of playing the audio explanation data, the display of the plurality of AR special effect data corresponding to the currently played audio explanation data may be switched, and the display flexibility and diversity of the AR special effect data are improved.
In a possible implementation manner, the switching and displaying multiple types of AR special effect data corresponding to currently played audio explanation data includes:
responding to a target trigger operation, and determining AR special effect data to be displayed from multiple AR special effect data corresponding to currently played audio explanation data;
and controlling the AR equipment to display the AR special effect data to be displayed.
Here, when the audio explanation data corresponds to a plurality of AR special effect data, the AR special effect data to be displayed may be determined from the plurality of AR special effect data corresponding to the currently played audio explanation data in response to the target trigger operation, so that the determined AR special effect data to be displayed may satisfy a user demand, and flexibility of displaying the AR special effect data is improved.
In a possible implementation, acquiring the positioning information of the augmented reality (AR) device includes:
acquiring a real-time scene image acquired by the AR equipment;
and determining the positioning information of the AR equipment based on the real-time scene image and the constructed three-dimensional scene model.
Here, the positioning information of the AR device can be determined more accurately by using the real-time scene image acquired by the AR device and the constructed three-dimensional scene model.
In one possible implementation, the audio explanation data corresponding to each preset knowledge point is predetermined according to the following steps:
acquiring audio explanation data to be processed corresponding to a target area;
determining target keywords respectively matched with a plurality of preset knowledge points from the audio explanation data to be processed according to a preset knowledge point library;
and respectively determining audio explanation data corresponding to each preset knowledge point from the audio explanation data to be processed according to the corresponding playing time position of each target keyword in the audio explanation data to be processed.
The audio explanation data to be processed can be identified according to a preset knowledge point library, and target keywords respectively matched with preset knowledge points in the audio data to be processed are determined; and then, according to the corresponding playing time position of each target keyword in the audio explanation data to be processed, the audio explanation data corresponding to each preset knowledge point can be determined more accurately and rapidly from the audio explanation data to be processed.
In one possible implementation, the method is applied to a client application platform, and the client application platform is a Web application platform or an applet application platform.
The following descriptions of the effects of the apparatus, the electronic device, and the like refer to the description of the above method, and are not repeated here.
In a second aspect, the present disclosure provides an augmented reality data presentation device, comprising:
the first acquisition module is used for acquiring positioning information of the AR equipment;
a second obtaining module, configured to obtain audio explanation data corresponding to any one of the preset knowledge points and AR special effect data matched with the audio explanation data when it is determined that a preset position condition is satisfied between the AR device and an explanation area corresponding to the any one of the preset knowledge points based on the positioning information;
and the control module is used for controlling the AR equipment to play the audio explanation data and displaying the AR special effect data matched with the currently played audio explanation data.
In a possible implementation manner, the control module, when controlling the AR device to play audio explanation data and displaying AR special effect data matching with currently played audio explanation data, is configured to:
and under the condition that the AR equipment meets the trigger display condition of the AR special effect data based on the positioning information, controlling the AR equipment to play the audio explanation data and displaying the AR special effect data.
In a possible embodiment, the apparatus further comprises: a first display module to:
and under the condition that the AR equipment is determined not to meet the trigger display condition of the AR special effect data based on the positioning information, controlling the AR equipment to display first guide information for guiding and adjusting the positioning information of the AR equipment.
In a possible embodiment, the apparatus further comprises: a second display module to:
and when the audio explanation data corresponding to any preset knowledge point is played completely and/or the AR special effect data is displayed completely, controlling the AR equipment to display at least one of the following information:
second guidance information for instructing the AR device to move to an explanation area corresponding to a next preset knowledge point;
list information indicating preset knowledge points that the AR device has not visited;
and the navigation ending information is used for indicating that the AR equipment plays the audio explanation data corresponding to each preset knowledge point.
In one possible embodiment, when presenting the AR special effect data corresponding to the currently played audio explanation data, the control module is configured to:
and in the process of playing the audio explanation data, switching and displaying various AR special effect data corresponding to the currently played audio explanation data.
In a possible implementation manner, when switching and displaying multiple types of AR special effect data corresponding to currently played audio explanation data, the control module is configured to:
responding to a target trigger operation, and determining AR special effect data to be displayed from multiple AR special effect data corresponding to currently played audio explanation data;
and controlling the AR equipment to display the AR special effect data to be displayed.
In a possible implementation manner, the first obtaining module, when obtaining the positioning information of the augmented reality AR device, is configured to:
acquiring a real-time scene image acquired by the AR equipment;
and determining the positioning information of the AR equipment based on the real-time scene image and the constructed three-dimensional scene model.
In a possible implementation manner, the apparatus further includes a determining module, configured to determine in advance the audio explanation data corresponding to each preset knowledge point according to the following steps:
acquiring audio explanation data to be processed corresponding to a target area;
determining target keywords respectively matched with a plurality of preset knowledge points from the audio explanation data to be processed according to a preset knowledge point library;
and respectively determining audio explanation data corresponding to each preset knowledge point from the audio explanation data to be processed according to the corresponding playing time position of each target keyword in the audio explanation data to be processed.
In one possible implementation, the apparatus is applied to a client application platform, and the client application platform is a Web application platform or an applet application platform.
In a third aspect, the present disclosure provides an electronic device comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the augmented reality data presentation method according to the first aspect or any one of the embodiments.
In a fourth aspect, the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the augmented reality data presentation method according to the first aspect or any one of the embodiments described above.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings required by the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. The following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive additional related drawings from them without inventive effort.
Fig. 1 illustrates a flowchart of an augmented reality data presentation method provided by an embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating a trigger presentation condition corresponding to AR special effect data in an augmented reality data presentation method provided by an embodiment of the present disclosure;
fig. 3 shows an interface schematic diagram of an AR device provided by an embodiment of the present disclosure;
fig. 4a shows an interface schematic diagram of an AR device provided by an embodiment of the present disclosure;
fig. 4b illustrates an interface schematic diagram of an AR device provided by an embodiment of the present disclosure;
fig. 4c illustrates an interface schematic diagram of an AR device provided by an embodiment of the present disclosure;
fig. 4d shows an interface schematic diagram of an AR device provided by an embodiment of the present disclosure;
fig. 5 shows a schematic flow chart of another augmented reality data presentation method provided by the embodiment of the present disclosure;
fig. 6 shows an architecture diagram of an augmented reality data presentation apparatus provided by an embodiment of the present disclosure;
fig. 7 shows a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
Augmented Reality (AR) is a technology that integrates real-world information with virtual-world information, overlaying the virtual world onto the real world on a screen so that the two can fit together and interact. With the development of the technology, AR is applied in various scenes, such as travel, shopping, and education scenes. On this basis, the embodiments of the present disclosure provide an augmented reality data presentation method and apparatus, an electronic device, and a storage medium.
The problems described above were identified by the inventors through practice and careful study; therefore, the discovery of these problems and the solutions that the present disclosure proposes for them should be regarded as the inventors' contribution to the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the embodiments of the present disclosure, the augmented reality data presentation method disclosed in the embodiments is first described in detail. The execution subject of the augmented reality data presentation method provided by the embodiments of the present disclosure may be an AR device, that is, an intelligent device capable of supporting an AR function; for example, the AR device includes but is not limited to a mobile phone, a tablet, AR glasses, and the like.
Referring to fig. 1, a schematic flow diagram of an augmented reality data presentation method provided in an embodiment of the present disclosure is shown, where the method includes S101-S103, where:
S101, acquiring positioning information of the AR device;
S102, obtaining, when it is determined based on the positioning information that the positional relationship between the AR device and an explanation area corresponding to any preset knowledge point satisfies a preset position condition, audio explanation data corresponding to that preset knowledge point and AR special effect data matched with the audio explanation data;
S103, controlling the AR device to play the audio explanation data and to display the AR special effect data matched with the currently played audio explanation data.
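As a rough illustration, the S101-S103 flow can be sketched as follows. This is a minimal sketch under simplifying assumptions (2D positions, circular explanation areas with a distance-threshold position condition); all names and data layouts here are hypothetical, not part of the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgePoint:
    name: str                      # preset knowledge point
    area_center: tuple             # center of its explanation area, (x, y)
    radius: float                  # distance threshold of the preset position condition
    audio: str                     # audio explanation data (placeholder)
    effects: list = field(default_factory=list)  # matched AR special effect data

def _distance(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def present_ar_data(device_position, knowledge_points):
    """S101: device_position is the acquired positioning information.
    S102: test the preset position condition against each explanation area.
    S103: return the audio and AR special effect data to play/display."""
    for kp in knowledge_points:
        if _distance(device_position, kp.area_center) < kp.radius:
            return kp.audio, kp.effects
    return None, None   # no explanation area satisfied the condition
```

For example, a device 1.4 m from a knowledge point whose threshold is 5 m would receive that point's audio and special effect data, while a device 10 m away would receive nothing.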
In the above method, the AR device is controlled to play the audio explanation data corresponding to a preset knowledge point, and while the audio explanation data is played, the AR special effect data matched with the currently played audio explanation data is displayed. Because the AR special effect data provides an auxiliary explanation for the audio explanation data, the explanation of the preset knowledge point becomes clear and intuitive, and the explanation efficiency of the preset knowledge point is improved.
S101 to S103 will be specifically described below.
For S101:
after the AR equipment reaches the target area, the process of AR navigation can be entered through scanning the information codes set in the target area, namely, the corresponding audio explanation data and the AR special effect data matched with the audio explanation data can be matched for the AR equipment through the acquired positioning information of the AR equipment, and audio navigation and AR special effect navigation are provided for the AR equipment. In the text travel scene, the target area may be an area corresponding to any target scenic spot, or may be each exhibition area in the target scenic spot, or the like.
In implementation, the positioning information of the AR device may be determined by sensors provided on the AR device; for example, the sensors may include a Global Positioning System (GPS) receiver, an Inertial Measurement Unit (IMU), and the like.
In an optional implementation manner, in S101, acquiring location information of an augmented reality AR device may include: acquiring a real-time scene image acquired by the AR equipment; and determining the positioning information of the AR equipment based on the real-time scene image and the constructed three-dimensional scene model.
After the real-time scene image acquired by the AR device is obtained, feature points in the real-time scene image can be extracted and matched against the feature point cloud included in the three-dimensional scene model, thereby determining the positioning information of the AR device at the moment the real-time scene image was acquired. The positioning information may include position information and/or orientation information; for example, the position information may be the coordinates of the AR device in the coordinate system corresponding to the three-dimensional scene model, and the orientation information may be the Euler angles of the AR device.
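The matching step above can be illustrated with a toy nearest-neighbor search. The data layout (descriptors as tuples, model points as dicts with "descriptor" and "xyz" keys) is assumed purely for illustration; a real system would feed the resulting 2D-3D correspondences into a pose solver (e.g. PnP) to recover the position and Euler angles:

```python
def match_features(image_descriptors, model_points):
    """Match each feature descriptor extracted from the real-time scene
    image to its nearest descriptor in the three-dimensional scene model's
    feature point cloud, yielding 2D-3D correspondences.

    image_descriptors: list of (pixel, descriptor) pairs from the image.
    model_points: list of {"descriptor": ..., "xyz": ...} model features."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    correspondences = []
    for pixel, desc in image_descriptors:
        # brute-force nearest neighbor over the model's feature point cloud
        best = min(model_points, key=lambda mp: dist(desc, mp["descriptor"]))
        correspondences.append((pixel, best["xyz"]))
    return correspondences
```

Production systems would use robust descriptors (e.g. ORB or SIFT) and an approximate nearest-neighbor index rather than this brute-force loop.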
In implementation, the three-dimensional scene model can be constructed according to the following steps: collecting multiple frames of scene images at different positions, different angles and different times in a target area, and extracting characteristic points of each frame of scene image to obtain a point cloud set corresponding to each frame of scene image; and acquiring the characteristic point clouds corresponding to the target area by utilizing the point cloud sets respectively corresponding to the multi-frame scene images, wherein the characteristic point clouds corresponding to the target area form a three-dimensional scene model.
Or collecting scene videos at different positions, different angles and different times, acquiring multiple frames of video frames from the collected scene videos, and extracting feature points of each frame of video frames to obtain point cloud sets corresponding to each frame of video frames; and obtaining a three-dimensional scene model corresponding to the target area by using the point cloud sets respectively corresponding to the multiple frames of video frames.
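The aggregation of per-frame point cloud sets into a single feature point cloud can be sketched as a merge with duplicate suppression. The merge_radius parameter and the brute-force proximity check are illustrative assumptions, not part of the disclosure:

```python
def build_scene_model(per_frame_point_clouds, merge_radius=0.05):
    """Merge the point cloud sets extracted from multiple scene images or
    video frames into one feature point cloud, which forms the
    three-dimensional scene model. A new point is kept only if no
    already-kept point lies within merge_radius of it."""
    model = []
    for cloud in per_frame_point_clouds:
        for point in cloud:
            if all(sum((a - b) ** 2 for a, b in zip(point, q)) ** 0.5 >= merge_radius
                   for q in model):
                model.append(point)
    return model
```

Real reconstruction pipelines would also estimate each frame's camera pose and triangulate points into a common coordinate system before merging; this sketch assumes the per-frame clouds are already expressed in one coordinate system.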
Here, the positioning information of the AR device can be determined more accurately by using the real-time scene image acquired by the AR device and the constructed three-dimensional scene model.
For S102:
the preset knowledge point can be any interpretable knowledge point in the target area, for example, when the target area is a painting and calligraphy exhibition hall, the preset knowledge point can be any painting and calligraphy name in exhibition; for example, when the target area is a museum of the palace, the preset knowledge point may be the name of any exhibition building.
During implementation, the spatial position of each preset knowledge point and the interpretation region corresponding to each preset knowledge point can be determined in the constructed three-dimensional scene model, for example, the position of a real object in a target region can be determined as the spatial position of the preset knowledge point corresponding to the real object; or setting the surrounding area of the real object in the target area as the explanation area of the preset knowledge point corresponding to the real object.
For example, the preset position condition may be that a distance between the AR device and an explanation area corresponding to the preset knowledge point is smaller than a set distance threshold, or the preset position condition may also be that the AR device is located in the explanation area corresponding to the preset knowledge point, and the like.
In implementation, the distance between the AR device and the explanation area corresponding to each preset knowledge point may be determined based on the location information indicated by the positioning information of the AR device, for example, the distance between the AR device and the central point of the explanation area may be determined. And when the distance between the AR equipment and the preset knowledge point is smaller than a preset distance threshold value, acquiring audio explanation data corresponding to the preset knowledge point and AR special effect data matched with the audio explanation data.
Or, whether the AR equipment is located in the explanation area of the preset knowledge point or not can be determined based on the explanation area corresponding to the preset knowledge point and the position information indicated by the positioning information of the AR equipment, and if yes, the audio explanation data corresponding to the preset knowledge point and the AR special effect data matched with the audio explanation data are obtained.
The audio explanation data can be preset audio data for explaining the preset knowledge point; the AR special effect data may be AR data containing one or more of the following: video content, image content, and text content. For example, when the preset knowledge point is the Hall of Supreme Harmony (Taihe Hall), the audio explanation data may be audio data explaining information such as the hall's purpose, history and culture, and architectural style; the AR special effect data may be AR data of an emperor ascending the Taihe Hall, or AR data of the entire Taihe Hall building in a historical time period, and the like. The AR special effect data matched with the audio explanation data can be set as needed, and one piece of audio explanation data can correspond to one piece of AR special effect data or to multiple pieces of AR special effect data.
The display pose of the AR special effect data can be set in the three-dimensional scene model according to needs. For example, the display pose of the AR special effect data may be located in an explanation area of a preset knowledge point. Or, the display pose of the AR special effect data can be determined based on the spatial position corresponding to the preset knowledge point.
In implementation, the corresponding AR special effect data may be matched with each audio interpretation data in advance, and after the audio interpretation data is acquired, the AR special effect data matched with the audio interpretation data may be determined.
Alternatively, after the audio explanation data corresponding to any preset knowledge point is acquired, the AR special effect data matched with it may be determined in real time. For example, a target keyword corresponding to the audio explanation data may be identified, and the matched AR special effect data determined based on that target keyword and the keywords corresponding to each piece of AR special effect data.
Specifically, after the audio explanation data corresponding to any preset knowledge point is acquired, the audio data itself may be recognized to determine the target keywords it contains; alternatively, the text content corresponding to the audio explanation data may be determined and recognized to extract the target keywords. A target keyword may be a word related to the preset knowledge point. The AR special effect data matched with the audio explanation data is then determined based on the target keywords of the audio explanation data and the keywords corresponding to each piece of AR special effect data.
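The keyword-based matching described above can be sketched as follows. This is a minimal illustration only: the library contents and all names (`AR_EFFECT_LIBRARY`, `extract_keywords`, `match_effects`) are hypothetical assumptions, not part of the disclosure.

```python
# Hypothetical sketch: match audio explanation data to AR special effect data
# via keyword overlap. Names and data are illustrative assumptions.

AR_EFFECT_LIBRARY = {
    "effect_court": {"keywords": {"emperor", "court"}},
    "effect_building": {"keywords": {"taihe hall", "architecture"}},
}

def extract_keywords(text: str) -> set:
    """Naive target-keyword extraction: find known keywords in the transcript."""
    vocabulary = set()
    for effect in AR_EFFECT_LIBRARY.values():
        vocabulary |= effect["keywords"]
    lowered = text.lower()
    return {word for word in vocabulary if word in lowered}

def match_effects(audio_transcript: str) -> list:
    """Return ids of AR effects whose keywords overlap the transcript's keywords."""
    targets = extract_keywords(audio_transcript)
    return [
        effect_id
        for effect_id, effect in AR_EFFECT_LIBRARY.items()
        if targets & effect["keywords"]
    ]
```

In practice the transcript would come from speech recognition on the audio explanation data, or from its associated text content, as described above.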
In implementation, when the AR device is controlled to play the audio explanation data, the AR device may be controlled to display the AR special effect data related to the target keyword of the currently played audio explanation data, so that the displayed AR special effect data stays consistent with the played audio explanation data and the preset knowledge point can be explained and displayed clearly and intuitively.
In an alternative embodiment, the audio explanation data corresponding to each preset knowledge point may be predetermined according to the following steps:
step A1, acquiring audio explanation data to be processed corresponding to a target area;
step A2, determining target keywords respectively matched with a plurality of preset knowledge points from the audio explanation data to be processed according to a preset knowledge point library;
step A3, according to the playing time position of each target keyword in the audio explanation data to be processed, respectively determining the audio explanation data corresponding to each preset knowledge point from the audio explanation data to be processed.
During implementation, the to-be-processed audio explanation data corresponding to the target area can be generated according to the information of the multiple knowledge points included in the target area; the to-be-processed audio explanation data may be a complete piece of audio data explaining each knowledge point in turn. Target keywords respectively matched with the multiple preset knowledge points are then determined from the to-be-processed audio explanation data according to a preset knowledge point library, or from the text content corresponding to the to-be-processed audio explanation data. Finally, the to-be-processed audio explanation data is segmented according to the playing time position of each target keyword within it, and the audio explanation data corresponding to each preset knowledge point is determined from the multiple segments obtained after the segmentation.
The audio explanation data to be processed can be identified according to a preset knowledge point library, and target keywords respectively matched with preset knowledge points in the audio data to be processed are determined; and then, according to the corresponding playing time position of each target keyword in the audio explanation data to be processed, the audio explanation data corresponding to each preset knowledge point can be determined more accurately and rapidly from the audio explanation data to be processed.
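The segmentation in steps A1 to A3 can be sketched as follows, assuming each knowledge point's clip runs from its keyword's playing time position to the next keyword's position (or to the end of the full audio). The function name and data shapes are illustrative assumptions, not the disclosed implementation.

```python
def segment_audio(total_duration: float, keyword_times: list) -> dict:
    """Split to-be-processed audio by keyword playing time positions.

    keyword_times: [(keyword, start_seconds), ...] — first occurrence of each
    target keyword in the full audio. Returns {keyword: (start, end)} clips.
    """
    keyword_times = sorted(keyword_times, key=lambda kt: kt[1])
    segments = {}
    for i, (keyword, start) in enumerate(keyword_times):
        # A clip ends where the next keyword's clip begins, or at the audio's end
        end = keyword_times[i + 1][1] if i + 1 < len(keyword_times) else total_duration
        segments[keyword] = (start, end)
    return segments
```

Each resulting clip would then be stored as the audio explanation data of the corresponding preset knowledge point.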
For S103:
Here, the audio explanation data may be played through the audio playback component of the AR device, and the AR device may be controlled to display the AR special effect data matched with the currently played audio explanation data.
In an optional implementation, the controlling the AR device to play the audio explanation data and display the AR special effect data matching with the currently played audio explanation data may include:
in a first mode, under the condition that the AR equipment meets the trigger display condition of the AR special effect data based on the positioning information, the AR equipment is controlled to play the audio explanation data, and the AR special effect data is displayed.
And controlling the AR equipment to display first guide information for guiding and adjusting the positioning information of the AR equipment under the condition that the AR equipment is determined to not meet the trigger display condition of the AR special effect data based on the positioning information.
Here, when the AR device satisfies the trigger presentation condition of the AR special effect data, the AR device is controlled to play the audio explanation data, and the AR special effect data is presented, so that a situation that the playing effect of the AR special effect data is poor when the positioning information of the AR device is not appropriate is avoided, and the presentation effect of the AR special effect data is improved.
And when the AR equipment does not satisfy the trigger display condition of the AR special effect data, the AR equipment can be controlled to display first guide information used for guiding and adjusting the positioning information of the AR equipment, so that the positioning information of the AR equipment can be adjusted according to the first guide information, the adjusted AR equipment can clearly and visually display the AR special effect data, omission of preset knowledge points is avoided, and explanation efficiency of the preset knowledge points is improved.
In implementation, a corresponding trigger display condition may be set for each piece of AR special effect data; for example, the trigger display condition may include a trigger display direction and/or a trigger display distance. When the trigger display condition includes a trigger display direction, the AR device is determined to satisfy the trigger display condition of the AR special effect data when its positioning information indicates that its orientation lies within the set trigger display direction. When the trigger display condition includes a trigger display distance, the AR device is determined to satisfy the trigger display condition when its positioning information indicates that the distance between the AR device and the display pose of the AR special effect data is smaller than a set first distance value and/or larger than a set second distance value.
Referring to fig. 2, for the AR special effect data in fig. 2, based on the display pose of the AR special effect data, a trigger display direction and a trigger display distance corresponding to the AR special effect data may be generated. For example, when the orientation of the AR device is located in the direction range corresponding to the trigger display direction, it is determined that the AR device meets the trigger display condition, for example, the first AR device 21 in fig. 2, and when the orientation of the AR device is the direction 211, the orientation of the AR device is located in the direction range corresponding to the set trigger display direction, and the AR device meets the trigger display condition; when the orientation of the AR device is direction 212, then the AR device does not satisfy the trigger presentation condition.
For example, if the distance between the first AR device 21 and the display position of the AR special effect data in fig. 2 is smaller than the set first distance value and larger than the set second distance value, the first AR device 21 satisfies the set trigger display distance. The distance between the second AR device 22 in fig. 2 and the display position of the AR special effect data is greater than the set first distance value, so the second AR device 22 does not satisfy the set trigger display distance.
When the trigger display condition includes a trigger display direction and a trigger display distance, it is determined that the AR device satisfies the trigger display condition of the AR special effect data when the orientation indicated by the positioning information of the AR device satisfies the trigger display direction condition and the position information indicated by the positioning information of the AR device satisfies the condition of the trigger display distance. And when the orientation indicated by the positioning information of the AR equipment does not meet the condition of triggering the display direction and/or the position information indicated by the positioning information of the AR equipment does not meet the condition of triggering the display distance, determining that the AR equipment does not meet the triggering display condition of the AR special effect data.
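The combined direction-and-distance check described above might be sketched as follows. The angle tolerance and the 0.5 m to 5 m default range are assumptions (the range reuses the example given later at step S506); all names are hypothetical.

```python
import math

def meets_trigger_condition(device_pos, device_heading_deg, effect_pos,
                            trigger_dir_deg, dir_tolerance_deg=45.0,
                            min_dist=0.5, max_dist=5.0):
    """Return True only when both the trigger display direction and the
    trigger display distance conditions hold (positions in meters, 2D)."""
    dx = effect_pos[0] - device_pos[0]
    dy = effect_pos[1] - device_pos[1]
    distance = math.hypot(dx, dy)
    distance_ok = min_dist <= distance <= max_dist
    # Smallest angular difference between device heading and trigger direction
    diff = abs((device_heading_deg - trigger_dir_deg + 180) % 360 - 180)
    direction_ok = diff <= dir_tolerance_deg
    return direction_ok and distance_ok
```

When either sub-condition fails, the first guidance information would be shown instead, as described above.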
And if the AR equipment meets the trigger display condition corresponding to the AR special effect data, controlling the AR equipment to play the audio explanation data and displaying the AR special effect data.
If the AR equipment does not meet the trigger display condition corresponding to the AR special effect data, first guide information used for guiding and adjusting the positioning information of the AR equipment is generated, and the AR equipment displays the first guide information, so that the pose of the AR equipment is adjusted through the first guide information, and the AR equipment can display the AR special effect data clearly.
Referring to the interface schematic diagram of an AR device shown in fig. 3, first guidance information is shown in fig. 3, for example, the first guidance information may be "please move the AR device to the right to view AR special effect data corresponding to the preset knowledge point a".
In an alternative embodiment, the method further comprises: and when the audio explanation data corresponding to any preset knowledge point is played completely and/or the AR special effect data is displayed completely, controlling the AR equipment to display at least one of the following information: second guidance information for instructing the AR device to move to an explanation area corresponding to a next preset knowledge point; list information indicating preset knowledge points that the AR device has not visited; and the navigation ending information is used for indicating that the AR equipment plays the audio explanation data corresponding to each preset knowledge point.
Here, a plurality of information types to be displayed are set, so that after the audio explanation data corresponding to any preset knowledge point is played and/or the AR special effect data is displayed, the AR equipment can be controlled to display at least one type of information, and the display diversity and flexibility are improved.
After the audio explanation data corresponding to any preset knowledge point has been played and/or the AR special effect data has been displayed, and when there are preset knowledge points that the AR device has not yet visited, the AR device may be controlled to display second guidance information for instructing the AR device to move to the explanation area corresponding to the next preset knowledge point. As shown in fig. 4a, the second guidance information may be "please move forward to view preset knowledge point B". For example, fig. 4a may also include navigation information from the current position to the next preset knowledge point (i.e., preset knowledge point B).
Or, the AR device may be controlled to display list information indicating preset knowledge points that the AR device has not visited; see fig. 4 b.
Alternatively, the AR device may be controlled to display list information of all the preset knowledge points, marking the visited and unvisited ones among them; as shown in fig. 4c, the preset knowledge points marked with a check mark "√" may be the visited preset knowledge points, and those without a check mark may be the unvisited ones. Then, in response to a trigger operation, the preset knowledge point the AR device will tour next is determined, and the AR device is controlled to display second guidance information for instructing the AR device to move to the explanation area corresponding to that preset knowledge point; the trigger operation may be voice data, a touch screen operation, and the like.
When no unvisited preset knowledge points remain, the AR device may further be controlled to display navigation end information indicating that the AR device has played the audio explanation data corresponding to each preset knowledge point. As shown in fig. 4d, the navigation end information may be "this navigation has ended; welcome back next time". The AR device may also be controlled to display a navigation end indicator; for example, the smiley face in fig. 4d may be the navigation end indicator.
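The choice among the information types described above can be sketched as follows; the dictionary keys and message strings are illustrative assumptions.

```python
def post_playback_info(unvisited: list) -> dict:
    """Decide what the AR device displays once a knowledge point's audio
    explanation data has finished playing and its AR effect has been shown."""
    if unvisited:
        next_point = unvisited[0]
        return {
            # Second guidance information toward the next knowledge point
            "second_guidance": f"Please move to the explanation area of {next_point}",
            # List of knowledge points not yet visited
            "unvisited_list": list(unvisited),
        }
    # All knowledge points visited: show navigation end information
    return {"navigation_end": "This navigation has ended. Welcome back next time."}
```

A real implementation could show any subset of these, as the text above allows displaying "at least one" of the information types.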
In an optional implementation, the presenting AR special effect data corresponding to currently played audio explanation data includes: and in the process of playing the audio explanation data, switching and displaying various AR special effect data corresponding to the currently played audio explanation data.
When the audio explanation data is matched with multiple types of AR special effect data, the multiple types of AR special effect data corresponding to the currently played audio explanation data can be displayed in a switching manner while the audio is playing. For example, when the audio explanation data explains a historical character, the multiple types of AR special effect data may include AR special effects of the character in different postures, such as a studying posture or a resting posture. When the audio explanation data explains a building, the multiple types of AR special effect data may include AR special effects of the building under different lighting, or AR special effects of the building in different historical periods, and the like.
For example, a plurality of types of AR special effect data may be displayed in a switched manner according to a target keyword being explained in currently played audio explanation data, so that the displayed AR special effect data is matched with the target keyword being explained. Or, in the process of playing the audio explanation data, the user can switch and display a plurality of AR special effect data corresponding to the currently played audio explanation data by adjusting the position of the AR device. For example, in the process of playing audio explanation data, a user may switch and display multiple types of AR special effect data corresponding to the currently played audio explanation data by moving the AR device left and right or rotating the AR device clockwise.
Here, the audio explanation data may correspond to a plurality of AR special effect data, so that in the process of playing the audio explanation data, the display of the plurality of AR special effect data corresponding to the currently played audio explanation data may be switched, and the display flexibility and diversity of the AR special effect data are improved.
In an optional implementation manner, the switching display of multiple types of AR special effect data corresponding to the currently played audio explanation data may include the following steps:
step B1, responding to the target trigger operation, and determining AR special effect data to be displayed from multiple AR special effect data corresponding to the currently played audio explanation data;
and step B2, controlling the AR equipment to display the AR special effect data to be displayed.
When the AR equipment plays the audio explanation data, the AR equipment can be synchronously controlled to play the AR special effect data corresponding to the audio explanation data, for example, one of the multiple AR special effect data can be randomly selected for playing, if a user wants to check other AR special effect data, a target triggering operation can be executed, and in response to the target triggering operation, the AR special effect data to be displayed is determined from the multiple AR special effect data corresponding to the currently played audio explanation data; and controlling the AR equipment to display the AR special effect data to be displayed.
The target trigger operation may be set according to actual needs; for example, it may be voice data, a touch screen operation, a pose change state of the AR device, and the like. For example, voice data such as "special effect one" may be input to the AR device, and in response to the voice data, the AR special effect data to be displayed is determined from the multiple types of AR special effect data corresponding to the currently played audio explanation data. Alternatively, a touch screen operation may be performed on the AR device, and in response to the touch screen operation, the AR special effect data to be displayed is determined. Alternatively, the AR special effect data to be displayed may be determined through the pose change state of the AR device; for example, if the pose change state is a left-right movement, the AR special effect data following the currently played one among the multiple types of special effect data may be determined as the AR special effect data to be displayed. The AR device can then be controlled to display the determined AR special effect data to be displayed.
Here, when the audio explanation data corresponds to a plurality of AR special effect data, the AR special effect data to be displayed may be determined from the plurality of AR special effect data corresponding to the currently played audio explanation data in response to the target trigger operation, so that the determined AR special effect data to be displayed may satisfy a user demand, and flexibility of displaying the AR special effect data is improved.
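The mapping from a target trigger operation to the AR special effect data to be displayed might look like the following sketch; the trigger representation and the effect naming are assumptions made for illustration.

```python
def select_effect(effects: list, current_index: int, trigger: dict) -> int:
    """Map a target trigger operation to the index of the effect to display.

    Assumed trigger shapes (not from the disclosure):
      {"type": "voice", "value": "<effect name>"}
      {"type": "touch", "value": <index tapped on screen>}
      {"type": "pose",  "value": "move_left_right"}
    """
    if trigger["type"] == "voice":
        # A voice command names the desired effect directly
        if trigger["value"] in effects:
            return effects.index(trigger["value"])
        return current_index
    if trigger["type"] == "touch":
        # A touch selects an effect by its on-screen index
        return trigger["value"] % len(effects)
    if trigger["type"] == "pose":
        # A left-right movement advances to the next effect, wrapping around
        return (current_index + 1) % len(effects)
    return current_index
```

Any unrecognized trigger leaves the currently displayed effect unchanged, which matches the intent that switching only happens in response to an explicit user operation.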
In an optional implementation manner, the method may be applied to a client application platform, where the client application platform is a Web application platform or an applet application platform.
In implementation, the method can be applied to a client application platform, which may be a Web application platform on an AR device or an applet application platform on an AR device. Alternatively, the client application platform may be an application on an AR device used for AR navigation.
In implementation, referring to fig. 5, the augmented reality data presentation method may include:
s501, after the AR equipment reaches the target area, the positioning information of the AR equipment is obtained.
S502, based on the positioning information of the AR device, it is judged whether the position between the AR device and the explanation area corresponding to any preset knowledge point satisfies the preset position condition.
And S503, if yes, acquiring the audio explanation data corresponding to the preset knowledge point and the AR special effect data matched with the audio explanation data. For example, when the distance between the AR device and the explanation area corresponding to any preset knowledge point is less than 2 meters, it is determined that the AR device satisfies the preset position condition. If not, the positioning information of the AR device may continue to be acquired.
S504, based on the positioning information, whether the AR equipment meets the trigger display condition of the AR special effect data is judged.
And S505, if the condition is met, controlling the AR equipment to play the audio explanation data, and displaying the AR special effect data.
And S506, if the trigger display condition is not satisfied, controlling the AR device to display first guidance information for guiding adjustment of the positioning information of the AR device, so as to guide the AR device to view the AR special effect data. For example, when the orientation of the AR device lies in the trigger display direction corresponding to the AR special effect data, and/or the distance between the AR device and the display pose of the AR special effect data is within a range of 0.5 m to 5 m (including 0.5 m and 5 m), it is determined that the AR device satisfies the trigger display condition of the AR special effect data.
And S507, after the audio explanation data of any preset knowledge point is played, and/or after the AR special effect data is displayed, prompting the AR equipment to check the next preset knowledge point until the audio explanation data corresponding to all the preset knowledge points in the target area are played completely, and confirming that the navigation is finished.
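The overall flow of S501 to S507 can be summarized in the following sketch; the `device` and `point` interfaces are hypothetical stand-ins for the disclosed steps, not a definitive implementation.

```python
def ar_navigation_loop(device, points):
    """High-level sketch of the S501-S507 flow (all member names assumed)."""
    for point in points:
        # S501/S502: wait until the device is near the explanation area
        # (2 m threshold borrowed from the example in S503)
        while device.distance_to(point.area) >= 2.0:
            device.update_positioning()
        # S503: fetch the audio explanation data and matching AR effect data
        audio, effect = point.audio_data, point.effect_data
        # S504/S506: check the trigger display condition; guide the user if unmet
        while not device.meets_trigger_condition(effect):
            device.show_guidance("Please adjust your position to view the AR effect")
            device.update_positioning()
        # S505: play the audio and display the AR special effect data
        device.play(audio)
        device.show(effect)
    # S507: all preset knowledge points toured, navigation ends
    device.show("Navigation finished")
```

Each iteration corresponds to one preset knowledge point; the loop body mirrors the conditional branches of S502 through S506.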
It will be understood by those skilled in the art that, in the method of the present disclosure, the order in which the steps are written does not imply a strict order of execution or impose any limitation on the implementation; the specific order of execution of the steps should be determined by their function and possible inherent logic.
Based on the same concept, an embodiment of the present disclosure further provides an augmented reality data presentation device, as shown in fig. 6, which is an architecture schematic diagram of the augmented reality data presentation device provided in the embodiment of the present disclosure, and includes a first obtaining module 601, a second obtaining module 602, and a control module 603, specifically:
a first obtaining module 601, configured to obtain positioning information of an augmented reality AR device;
a second obtaining module 602, configured to, based on the positioning information, obtain audio explanation data corresponding to any one preset knowledge point and AR special effect data matched with the audio explanation data when it is determined that a preset position condition is satisfied between the AR device and an explanation area corresponding to the any one preset knowledge point;
the control module 603 is configured to control the AR device to play the audio interpretation data, and display AR special effect data matched with the currently played audio interpretation data.
In a possible implementation manner, the control module 603, when controlling the AR device to play audio explanation data and displaying AR special effect data matching with the currently played audio explanation data, is configured to:
and under the condition that the AR equipment meets the trigger display condition of the AR special effect data based on the positioning information, controlling the AR equipment to play the audio explanation data and displaying the AR special effect data.
In a possible embodiment, the apparatus further comprises: a first presentation module 604 for:
and under the condition that the AR equipment is determined not to meet the trigger display condition of the AR special effect data based on the positioning information, controlling the AR equipment to display first guide information for guiding and adjusting the positioning information of the AR equipment.
In a possible embodiment, the apparatus further comprises: a second presentation module 605 for:
and when the audio explanation data corresponding to any preset knowledge point is played completely and/or the AR special effect data is displayed completely, controlling the AR equipment to display at least one of the following information:
second guidance information for instructing the AR device to move to an explanation area corresponding to a next preset knowledge point;
list information indicating preset knowledge points that the AR device has not visited;
and the navigation ending information is used for indicating that the AR equipment plays the audio explanation data corresponding to each preset knowledge point.
In one possible embodiment, when presenting the AR special effect data corresponding to the currently played audio explanation data, the control module 603 is configured to:
and in the process of playing the audio explanation data, switching and displaying various AR special effect data corresponding to the currently played audio explanation data.
In a possible implementation manner, when switching to display multiple types of AR special effect data corresponding to currently played audio explanation data, the control module 603 is configured to:
responding to a target trigger operation, and determining AR special effect data to be displayed from multiple AR special effect data corresponding to currently played audio explanation data;
and controlling the AR equipment to display the AR special effect data to be displayed.
In a possible implementation manner, the first obtaining module 601, when obtaining the positioning information of the augmented reality AR device, is configured to:
acquiring a real-time scene image acquired by the AR equipment;
and determining the positioning information of the AR equipment based on the real-time scene image and the constructed three-dimensional scene model.
In a possible implementation manner, the apparatus further includes a determining module 606, configured to determine in advance the audio explanation data corresponding to each preset knowledge point according to the following steps:
acquiring audio explanation data to be processed corresponding to a target area;
determining target keywords respectively matched with a plurality of preset knowledge points from the audio explanation data to be processed according to a preset knowledge point library;
and respectively determining audio explanation data corresponding to each preset knowledge point from the audio explanation data to be processed according to the corresponding playing time position of each target keyword in the audio explanation data to be processed.
In one possible implementation, the method is applied to a client application platform, and the client application platform is a Web application platform or an applet application platform.
In some embodiments, the functions of the apparatus provided in the embodiments of the present disclosure, or the modules included therein, may be used to execute the method described in the above method embodiments; for specific implementation, reference may be made to the description of the above method embodiments, which is not repeated here for brevity.
Based on the same technical concept, an embodiment of the present disclosure also provides an electronic device. Referring to fig. 7, a schematic structural diagram of an electronic device provided in the embodiment of the present disclosure includes a processor 701, a memory 702, and a bus 703. The memory 702 is used for storing execution instructions and includes an internal memory 7021 and an external memory 7022; the internal memory 7021 temporarily stores operation data in the processor 701 and data exchanged with the external memory 7022 such as a hard disk. The processor 701 exchanges data with the external memory 7022 through the internal memory 7021, and when the electronic device 700 runs, the processor 701 and the memory 702 communicate through the bus 703, so that the processor 701 executes the following instructions:
acquiring positioning information of augmented reality AR equipment;
based on the positioning information, under the condition that the fact that the position between the AR equipment and an explanation area corresponding to any preset knowledge point meets a preset position condition is determined, audio explanation data corresponding to any preset knowledge point and AR special effect data matched with the audio explanation data are obtained;
and controlling the AR equipment to play the audio explanation data and displaying the AR special effect data matched with the currently played audio explanation data.
In addition, the embodiment of the present disclosure further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the augmented reality data presentation method in the above method embodiment are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
An embodiment of the present disclosure further provides a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute the steps of the augmented reality data presentation method in the foregoing method embodiment, which may be referred to specifically in the foregoing method embodiment, and are not described herein again.
The computer program product may be implemented by hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above are only specific embodiments of the present disclosure, but the scope of protection of the present disclosure is not limited thereto. Any change or substitution that a person skilled in the art could readily conceive of within the technical scope of the present disclosure shall fall within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. An augmented reality data presentation method, comprising:
acquiring positioning information of an augmented reality (AR) device;
obtaining, in a case where it is determined based on the positioning information that the position of the AR device relative to an explanation area corresponding to any preset knowledge point satisfies a preset position condition, audio explanation data corresponding to the preset knowledge point and AR special effect data matched with the audio explanation data; and
controlling the AR device to play the audio explanation data and display the AR special effect data matched with the currently played audio explanation data.
2. The method of claim 1, wherein the controlling the AR device to play the audio explanation data and display the AR special effect data matched with the currently played audio explanation data comprises:
in a case where it is determined based on the positioning information that the AR device satisfies a trigger display condition of the AR special effect data, controlling the AR device to play the audio explanation data and display the AR special effect data.
3. The method according to claim 1 or 2, further comprising:
in a case where it is determined based on the positioning information that the AR device does not satisfy the trigger display condition of the AR special effect data, controlling the AR device to display first guidance information for guiding adjustment of the positioning information of the AR device.
4. The method according to any one of claims 1 to 3, further comprising:
in a case where playing of the audio explanation data corresponding to any preset knowledge point is completed and/or display of the AR special effect data is completed, controlling the AR device to display at least one of the following:
second guidance information for instructing the AR device to move to an explanation area corresponding to a next preset knowledge point;
list information indicating preset knowledge points that the AR device has not yet visited; and
navigation ending information indicating that the AR device has played the audio explanation data corresponding to each preset knowledge point.
5. The method according to any one of claims 1 to 4, wherein the displaying the AR special effect data corresponding to the currently played audio explanation data comprises:
during playing of the audio explanation data, switching among and displaying a plurality of types of AR special effect data corresponding to the currently played audio explanation data.
6. The method of claim 5, wherein the switching among and displaying the plurality of types of AR special effect data corresponding to the currently played audio explanation data comprises:
determining, in response to a target trigger operation, AR special effect data to be displayed from among the plurality of types of AR special effect data corresponding to the currently played audio explanation data; and
controlling the AR device to display the AR special effect data to be displayed.
7. The method according to any one of claims 1 to 6, wherein the acquiring the positioning information of the AR device comprises:
acquiring a real-time scene image captured by the AR device; and
determining the positioning information of the AR device based on the real-time scene image and a pre-constructed three-dimensional scene model.
8. The method according to any one of claims 1 to 7, wherein the audio explanation data corresponding to each preset knowledge point is predetermined by:
acquiring to-be-processed audio explanation data corresponding to a target area;
determining, according to a preset knowledge point library, target keywords respectively matched with a plurality of preset knowledge points from the to-be-processed audio explanation data; and
determining, according to the playing time position of each target keyword in the to-be-processed audio explanation data, the audio explanation data corresponding to each preset knowledge point from the to-be-processed audio explanation data.
9. The method according to any one of claims 1 to 8, wherein the method is applied to a client application platform, and the client application platform is a Web application platform or an applet application platform.
10. An augmented reality data presentation device, comprising:
a first acquisition module, configured to acquire positioning information of an AR device;
a second acquisition module, configured to obtain, in a case where it is determined based on the positioning information that the position of the AR device relative to an explanation area corresponding to any preset knowledge point satisfies a preset position condition, audio explanation data corresponding to the preset knowledge point and AR special effect data matched with the audio explanation data; and
a control module, configured to control the AR device to play the audio explanation data and display the AR special effect data matched with the currently played audio explanation data.
11. An electronic device, comprising a processor, a memory, and a bus, the memory storing machine-readable instructions executable by the processor, wherein when the electronic device runs, the processor and the memory communicate via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the augmented reality data presentation method according to any one of claims 1 to 9.
12. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, carries out the steps of the augmented reality data presentation method according to any one of claims 1 to 9.
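Claims 1 to 3 together describe a position-gated playback loop: locate the device, check the preset position condition against an explanation area, then either play the matched audio/AR content or show guidance. The following is a minimal illustrative sketch only; the distance-based position condition, the data layout, and all names are assumptions, since the claims deliberately leave the implementation open:

```python
from dataclasses import dataclass, field
from math import hypot

@dataclass
class KnowledgePoint:
    name: str
    area_center: tuple          # (x, y) centre of the explanation area (assumed 2-D)
    audio: str                  # audio explanation data, e.g. a file path
    ar_effects: list = field(default_factory=list)  # matched AR special effect data

def within_explanation_area(position, point, radius=5.0):
    """One possible preset position condition: the AR device lies within
    `radius` metres of the explanation area's centre."""
    dx = position[0] - point.area_center[0]
    dy = position[1] - point.area_center[1]
    return hypot(dx, dy) <= radius

def present(position, knowledge_points, meets_trigger_condition):
    """Sketch of claims 1-3: when the position condition holds for some
    knowledge point, fetch its audio and AR data; play them if the trigger
    display condition is also met, otherwise return first guidance
    information for adjusting the device's positioning."""
    for point in knowledge_points:
        if within_explanation_area(position, point):
            if meets_trigger_condition(position, point):
                return ("play", point.audio, point.ar_effects)
            return ("guide", f"move closer to {point.name}")
    return ("idle", None)
```

In this sketch the trigger display condition of claim 2 is passed in as a callable, since the disclosure treats it as a separate check from the position condition rather than specifying its form.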
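Claim 8 describes splitting one to-be-processed audio track into per-knowledge-point segments using the playing-time positions of target keywords. A hedged sketch of that segmentation, assuming the audio has already been transcribed into (word, start-time) pairs and that the preset knowledge point library maps each point to its keywords — all names and data formats here are hypothetical, not taken from the disclosure:

```python
def split_audio_by_keywords(transcript_words, keyword_library, audio_length):
    """Return {knowledge_point: (start_time, end_time)} segments of the
    to-be-processed audio, one segment per knowledge point whose target
    keyword occurs in the transcript."""
    # Locate the playing-time position of each target keyword.
    hits = []
    for word, start in transcript_words:
        for point, keywords in keyword_library.items():
            if word in keywords:
                hits.append((start, point))
    hits.sort()
    # Assume each knowledge point's explanation runs from its keyword's
    # position to the next keyword's position (or the end of the audio).
    segments = {}
    for i, (start, point) in enumerate(hits):
        end = hits[i + 1][0] if i + 1 < len(hits) else audio_length
        segments[point] = (start, end)
    return segments
```

The "segment runs until the next keyword" rule is one plausible reading of claim 8's "playing time position" step; the claim itself does not fix how segment boundaries are derived from those positions.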
CN202110619445.8A 2021-06-03 2021-06-03 Augmented reality data presentation method and device, electronic equipment and storage medium Pending CN113359983A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202110619445.8A CN113359983A (en) 2021-06-03 2021-06-03 Augmented reality data presentation method and device, electronic equipment and storage medium
PCT/CN2022/076270 WO2022252688A1 (en) 2021-06-03 2022-02-15 Augmented reality data presentation method and apparatus, electronic device, and storage medium
TW111108666A TW202248808A (en) 2021-06-03 2022-03-09 Augmented reality data presentation method, electronic equipment and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110619445.8A CN113359983A (en) 2021-06-03 2021-06-03 Augmented reality data presentation method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113359983A true CN113359983A (en) 2021-09-07

Family

ID=77531631

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110619445.8A Pending CN113359983A (en) 2021-06-03 2021-06-03 Augmented reality data presentation method and device, electronic equipment and storage medium

Country Status (3)

Country Link
CN (1) CN113359983A (en)
TW (1) TW202248808A (en)
WO (1) WO2022252688A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114327204A (en) * 2021-12-30 2022-04-12 北京达佳互联信息技术有限公司 Information display method, device, equipment and storage medium
WO2022252688A1 (en) * 2021-06-03 2022-12-08 上海商汤智能科技有限公司 Augmented reality data presentation method and apparatus, electronic device, and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103377487A (en) * 2012-04-11 2013-10-30 索尼公司 Information processing apparatus, display control method, and program
CN105718588A (en) * 2016-01-26 2016-06-29 北京行云时空科技有限公司 Space-time log automatic generation method and system based on 3D glasses
CN107728782A (en) * 2017-09-21 2018-02-23 广州数娱信息科技有限公司 Exchange method and interactive system, server
CN110110104A (en) * 2019-04-18 2019-08-09 贝壳技术有限公司 It is a kind of to automatically generate the method and device that house is explained in virtual three-dimensional space
CN110286773A (en) * 2019-07-01 2019-09-27 腾讯科技(深圳)有限公司 Information providing method, device, equipment and storage medium based on augmented reality
CN111638796A (en) * 2020-06-05 2020-09-08 浙江商汤科技开发有限公司 Virtual object display method and device, computer equipment and storage medium
CN111640171A (en) * 2020-06-10 2020-09-08 浙江商汤科技开发有限公司 Historical scene explaining method and device, electronic equipment and storage medium
CN112181141A (en) * 2020-09-23 2021-01-05 北京市商汤科技开发有限公司 AR positioning method, AR positioning device, electronic equipment and storage medium
CN112179331A (en) * 2020-09-23 2021-01-05 北京市商汤科技开发有限公司 AR navigation method, AR navigation device, electronic equipment and storage medium
WO2021073268A1 (en) * 2019-10-15 2021-04-22 北京市商汤科技开发有限公司 Augmented reality data presentation method and apparatus, electronic device, and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190251750A1 (en) * 2018-02-09 2019-08-15 Tsunami VR, Inc. Systems and methods for using a virtual reality device to emulate user experience of an augmented reality device
CN110716646A (en) * 2019-10-15 2020-01-21 北京市商汤科技开发有限公司 Augmented reality data presentation method, device, equipment and storage medium
CN112348969B (en) * 2020-11-06 2023-04-25 北京市商汤科技开发有限公司 Display method and device in augmented reality scene, electronic equipment and storage medium
CN113359983A (en) * 2021-06-03 2021-09-07 北京市商汤科技开发有限公司 Augmented reality data presentation method and device, electronic equipment and storage medium

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103377487A (en) * 2012-04-11 2013-10-30 索尼公司 Information processing apparatus, display control method, and program
CN105718588A (en) * 2016-01-26 2016-06-29 北京行云时空科技有限公司 Space-time log automatic generation method and system based on 3D glasses
CN107728782A (en) * 2017-09-21 2018-02-23 广州数娱信息科技有限公司 Exchange method and interactive system, server
CN110110104A (en) * 2019-04-18 2019-08-09 贝壳技术有限公司 It is a kind of to automatically generate the method and device that house is explained in virtual three-dimensional space
CN110286773A (en) * 2019-07-01 2019-09-27 腾讯科技(深圳)有限公司 Information providing method, device, equipment and storage medium based on augmented reality
WO2021073268A1 (en) * 2019-10-15 2021-04-22 北京市商汤科技开发有限公司 Augmented reality data presentation method and apparatus, electronic device, and storage medium
CN111638796A (en) * 2020-06-05 2020-09-08 浙江商汤科技开发有限公司 Virtual object display method and device, computer equipment and storage medium
CN111640171A (en) * 2020-06-10 2020-09-08 浙江商汤科技开发有限公司 Historical scene explaining method and device, electronic equipment and storage medium
CN112181141A (en) * 2020-09-23 2021-01-05 北京市商汤科技开发有限公司 AR positioning method, AR positioning device, electronic equipment and storage medium
CN112179331A (en) * 2020-09-23 2021-01-05 北京市商汤科技开发有限公司 AR navigation method, AR navigation device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Nan et al., "Design and Implementation of an Intelligent Museum Guide System Based on AR and Mobile Terminals", Journal of North China Institute of Aerospace Engineering *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022252688A1 (en) * 2021-06-03 2022-12-08 上海商汤智能科技有限公司 Augmented reality data presentation method and apparatus, electronic device, and storage medium
CN114327204A (en) * 2021-12-30 2022-04-12 北京达佳互联信息技术有限公司 Information display method, device, equipment and storage medium

Also Published As

Publication number Publication date
TW202248808A (en) 2022-12-16
WO2022252688A1 (en) 2022-12-08

Similar Documents

Publication Publication Date Title
KR102417645B1 (en) AR scene image processing method, device, electronic device and storage medium
CN110716645A (en) Augmented reality data presentation method and device, electronic equipment and storage medium
JP5976019B2 (en) Theme-based expansion of photorealistic views
CN112148197A (en) Augmented reality AR interaction method and device, electronic equipment and storage medium
US20170153787A1 (en) Injection of 3-d virtual objects of museum artifact in ar space and interaction with the same
WO2016122973A1 (en) Real time texture mapping
CN112148189A (en) Interaction method and device in AR scene, electronic equipment and storage medium
JP2022505998A (en) Augmented reality data presentation methods, devices, electronic devices and storage media
JP2012212345A (en) Terminal device, object control method and program
CN111638797A (en) Display control method and device
CN113359986B (en) Augmented reality data display method and device, electronic equipment and storage medium
CN113359983A (en) Augmented reality data presentation method and device, electronic equipment and storage medium
CN111667588A (en) Person image processing method, person image processing device, AR device and storage medium
CN111640192A (en) Scene image processing method and device, AR device and storage medium
CN111693063A (en) Navigation interaction display method and device, electronic equipment and storage medium
CN112967404A (en) Method and device for controlling movement of virtual object, electronic equipment and storage medium
EP3104333A1 (en) Method and system for generating a user-customized computer-generated animation
CN112882576A (en) AR interaction method and device, electronic equipment and storage medium
CN112148125A (en) AR interaction state control method, device, equipment and storage medium
CN111651058A (en) Historical scene control display method and device, electronic equipment and storage medium
CN113345108A (en) Augmented reality data display method and device, electronic equipment and storage medium
CN113282687A (en) Data display method and device, computer equipment and storage medium
CN112947756A (en) Content navigation method, device, system, computer equipment and storage medium
CN112333498A (en) Display control method and device, computer equipment and storage medium
CN111918114A (en) Image display method, image display device, display equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40052319

Country of ref document: HK