CN112148197A - Augmented reality AR interaction method and device, electronic equipment and storage medium


Info

Publication number
CN112148197A
Authority
CN
China
Prior art keywords: virtual, special effect, trigger operation, real, picture
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011005936.5A
Other languages
Chinese (zh)
Inventor
王鼎禄
刘旭
侯欣如
李斌
黄波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202011005936.5A
Publication of CN112148197A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides an augmented reality (AR) interaction method and apparatus, an electronic device, and a storage medium. The AR interaction method comprises: displaying, on an AR device, an AR picture matching a real scene image, based on the real scene image captured by the AR device, wherein at least one virtual object is displayed in the AR picture, the at least one virtual object comprises an associated virtual object, and the associated virtual object comprises a first virtual sub-object and a second virtual sub-object; and detecting a target trigger operation for the associated virtual object, and, when the target trigger operation satisfies a preset condition, displaying in the AR picture a first AR special effect corresponding to the first virtual sub-object and a second AR special effect corresponding to the second virtual sub-object, the first AR special effect and the second AR special effect being associated with each other. In the embodiments of the present disclosure, because the virtual objects include an associated virtual object, the associated first and second AR special effects can be displayed when that object is triggered, improving the display effect of the AR special effects.

Description

Augmented reality AR interaction method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of augmented reality technologies, and in particular, to an augmented reality AR interaction method, apparatus, electronic device, and storage medium.
Background
Augmented Reality (AR) technology fuses virtual information with the real world, superimposing virtual information onto the real environment in a single picture in real time. Although AR has advanced significantly in recent years, AR devices in the prior art still present a single, monotonous AR display effect. Optimizing the effect of the augmented reality scenes presented by AR devices and improving the user's interactive experience are therefore increasingly important.
Disclosure of Invention
The embodiment of the disclosure at least provides an AR interaction method, an AR interaction device, electronic equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides an AR interaction method, including:
displaying an AR picture matched with the real scene image on the AR equipment based on the real scene image shot by the AR equipment; at least one virtual object is displayed in the AR picture, wherein the at least one virtual object comprises an associated virtual object, and the associated virtual object comprises a first virtual sub-object and a second virtual sub-object;
and detecting a target trigger operation aiming at the associated virtual object, and displaying a first AR special effect corresponding to the first virtual sub-object and a second AR special effect corresponding to the second virtual sub-object in the AR picture under the condition that the target trigger operation meets a preset condition, wherein the first AR special effect and the second AR special effect are associated.
In the embodiment of the disclosure, based on the real scene image shot by the AR device, the corresponding AR picture is displayed, and at least one associated virtual object is displayed in the AR picture, so that the associated first AR special effect and the associated second AR special effect are displayed when the associated virtual object is triggered, the display effect of the AR special effect is further improved, and the user experience is enriched.
According to the first aspect, in a possible implementation, the detecting a target trigger operation for the associated virtual object includes:
detecting a trigger operation acting on a screen of the AR device;
identifying a screen coordinate position of the trigger operation on a screen of the AR device;
and determining whether the trigger operation is a target trigger operation aiming at the associated virtual object or not based on the screen coordinate position and a preset position area of the associated virtual object in a pre-constructed three-dimensional virtual scene map.
In the embodiment of the present disclosure, whether the trigger operation is a target trigger operation for the associated virtual object is determined based on the screen coordinate position of the trigger operation on the screen of the AR device and the preset position area of the associated virtual object in the pre-constructed three-dimensional virtual scene map, so that operations acting on the screen of the AR device are combined with the real scene, improving the realism of the interaction.
According to the first aspect, in a possible implementation manner, the determining whether the trigger operation is a target trigger operation for the associated virtual object based on the screen coordinate position and a preset position area of the associated virtual object in a pre-constructed three-dimensional virtual scene map includes:
determining a SLAM coordinate position corresponding to the screen coordinate position according to the screen coordinate position at which the trigger operation acts on the screen of the AR device and a conversion relationship between the screen coordinate system and the simultaneous localization and mapping (SLAM) coordinate system;
mapping the SLAM coordinate position to the three-dimensional virtual scene map to obtain a virtual world coordinate position of the trigger operation acting on the three-dimensional virtual scene map;
and determining the trigger operation as a target trigger operation aiming at the associated virtual object under the condition that the virtual world coordinate position is located in the preset position area.
In the embodiment of the present disclosure, the screen coordinate position at which the trigger operation acts on the screen of the AR device is converted, via the simultaneous localization and mapping (SLAM) coordinate system, into a virtual world coordinate position in the three-dimensional virtual scene map, and it is then determined whether the trigger operation hits the virtual object. This not only maps positions from the real world to the virtual world but does so relatively accurately, improving the user's interactive experience.
According to the first aspect, in a possible implementation manner, the displaying, in the AR screen, a first AR special effect corresponding to the first virtual sub-object and a second AR special effect corresponding to the second virtual sub-object includes:
and displaying a first AR special effect of the first virtual sub-object with a first state change and a second AR special effect of the second virtual sub-object with a second state change in the AR picture.
In the embodiment of the disclosure, the corresponding AR special effect is displayed by associating the state changes of the first virtual sub-object and the second virtual sub-object in the virtual objects, so that the AR special effect is flexible and vivid, and the display effect of the AR special effect is improved.
In a possible embodiment according to the first aspect, the first state change comprises a form change and the second state change comprises a position change.
In the embodiment of the present disclosure, the AR special effects of the associated first virtual sub-object and the second virtual sub-object respectively include different effects of a form change and a position change, so that the special effect pictures are rich and colorful, and the visual effect can be further improved by the associated dynamic display effect.
According to the first aspect, in a possible implementation manner, based on a real scene image captured by an AR device, displaying, at the AR device, an AR picture matching the real scene image, includes:
and displaying an AR picture matched with the real-time positioning pose of the AR equipment on the AR equipment based on the real scene image shot by the AR equipment and a pre-constructed three-dimensional virtual scene map.
In the embodiment of the disclosure, the display special effect of the virtual object in the real scene is designed based on the pre-constructed three-dimensional virtual scene map, so that the display special effect of the virtual object and the real scene are better integrated.
According to the first aspect, in a possible implementation manner, displaying, on an AR device, an AR picture matched with a real-time positioning pose of the AR device based on a real-scene image shot by the AR device and a pre-constructed three-dimensional virtual scene map includes:
determining an initial positioning pose of the AR equipment based on a real scene image shot by the AR equipment and the pre-constructed three-dimensional virtual scene map;
determining the real-time positioning pose of the AR device through simultaneous localization and mapping (SLAM) based on the initial positioning pose of the AR device;
and displaying an AR picture matched with the real-time positioning pose of the AR equipment on the AR equipment based on the real-time positioning pose of the AR equipment.
In the embodiment of the disclosure, the real-time positioning pose of the AR device is determined through the SLAM, and then based on the real-time positioning pose of the AR device, the AR picture matched with the real-time positioning pose of the AR device is displayed on the AR device.
In a possible implementation manner, the target triggering operation satisfying the preset condition includes at least one of the following:
the number of times of the target trigger operation exceeds a preset number of times; the duration of the target trigger operation exceeds a preset duration.
In the embodiment of the disclosure, the target triggering operation meeting the preset condition is limited by the operation times and the operation duration, so that the interest of user interaction can be increased.
According to the first aspect, in a possible implementation, presenting, at an AR device, an AR picture matching the real scene image includes:
acquiring target AR special effect data of the virtual object corresponding to the currently reached target interaction stage based on AR special effect data of the virtual object corresponding to different preset interaction stages;
displaying the virtual object in the AR picture based on the target AR special effect data.
In the embodiment of the disclosure, target AR special effect data of the virtual object corresponding to the currently reached target interaction stage is obtained based on the AR special effect data of the virtual object corresponding to the preset different interaction stages, and different AR special effect data are matched at different stages, so that the interaction experience of a user is enriched.
In a second aspect, an embodiment of the present disclosure provides an AR interaction apparatus, including:
the AR picture display module is used for displaying an AR picture matched with the real scene image on the AR equipment based on the real scene image shot by the AR equipment; at least one virtual object is displayed in the AR picture, wherein the at least one virtual object comprises an associated virtual object, and the associated virtual object comprises a first virtual sub-object and a second virtual sub-object;
and the AR special effect display module is used for detecting a target trigger operation aiming at the associated virtual object, and displaying a first AR special effect corresponding to the first virtual sub-object and a second AR special effect corresponding to the second virtual sub-object in the AR picture under the condition that the target trigger operation meets a preset condition, wherein the first AR special effect and the second AR special effect are associated.
According to the second aspect, in a possible implementation manner, the AR special effects presentation module is specifically configured to:
detecting a trigger operation acting on a screen of the AR device;
identifying a screen coordinate position of the trigger operation on a screen of the AR device;
and determining whether the trigger operation is a target trigger operation aiming at the associated virtual object or not based on the screen coordinate position and a preset position area of the associated virtual object in a pre-constructed three-dimensional virtual scene map.
According to the second aspect, in a possible implementation manner, the AR special effects presentation module is specifically configured to:
determining a SLAM coordinate position corresponding to the screen coordinate position according to the screen coordinate position at which the trigger operation acts on the screen of the AR device and a conversion relationship between the screen coordinate system and the simultaneous localization and mapping (SLAM) coordinate system;
mapping the SLAM coordinate position to the three-dimensional virtual scene map to obtain a virtual world coordinate position of the trigger operation acting on the three-dimensional virtual scene map;
and determining the trigger operation as a target trigger operation aiming at the associated virtual object under the condition that the virtual world coordinate position is located in the preset position area.
According to the second aspect, in a possible implementation manner, the AR special effects presentation module is specifically configured to:
and displaying a first AR special effect of the first virtual sub-object with a first state change and a second AR special effect of the second virtual sub-object with a second state change in the AR picture.
According to the second aspect, in a possible embodiment, the first state change comprises a form change and the second state change comprises a position change.
According to the second aspect, in a possible implementation manner, the AR picture presentation module is specifically configured to:
and displaying an AR picture matched with the real-time positioning pose of the AR equipment on the AR equipment based on the real scene image shot by the AR equipment and a pre-constructed three-dimensional virtual scene map.
According to the second aspect, in a possible implementation manner, the AR picture presentation module is specifically configured to:
determining an initial positioning pose of the AR equipment based on a real scene image shot by the AR equipment and the pre-constructed three-dimensional virtual scene map;
determining the real-time positioning pose of the AR device through simultaneous localization and mapping (SLAM) based on the initial positioning pose of the AR device;
and displaying an AR picture matched with the real-time positioning pose of the AR equipment on the AR equipment based on the real-time positioning pose of the AR equipment.
According to the second aspect, in a possible implementation, the target triggering operation satisfies a preset condition, and includes at least one of:
the number of times of the target trigger operation exceeds a preset number of times; the duration of the target trigger operation exceeds a preset duration.
According to the second aspect, in a possible implementation manner, the AR picture presentation module is specifically configured to:
acquiring target AR special effect data of the virtual object corresponding to the currently reached target interaction stage based on AR special effect data of the virtual object corresponding to different preset interaction stages;
displaying the virtual object in the AR picture based on the target AR special effect data.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the AR interaction method according to the first aspect.
In a fourth aspect, the disclosed embodiments provide a computer-readable storage medium having stored thereon a computer program, which, when executed by a processor, performs the steps of the augmented reality AR interaction method according to the first aspect.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for use in the embodiments are briefly described below. The drawings herein, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those of ordinary skill in the art can derive other related drawings from them without inventive effort.
Fig. 1 shows a flowchart of an AR interaction method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a first virtual sub-object and a second virtual sub-object provided by an embodiment of the present disclosure;
FIG. 3 illustrates a flowchart of a method for detecting a target trigger operation for an associated virtual object according to an embodiment of the present disclosure;
FIG. 4 is a flowchart illustrating a method for determining whether a trigger operation is a target trigger operation for an associated virtual object according to an embodiment of the present disclosure;
FIG. 5 is a flowchart illustrating a method for displaying an AR picture according to an embodiment of the disclosure;
FIG. 6 is a flowchart illustrating a method for generating a three-dimensional virtual scene map according to an embodiment of the disclosure;
FIG. 7 illustrates a flowchart of a method of determining initial pose data of an AR device provided by an embodiment of the present disclosure;
fig. 8 shows a schematic structural diagram of an augmented reality AR interaction apparatus provided by an embodiment of the present disclosure;
fig. 9 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an associative relationship, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Augmented Reality (AR) technology fuses virtual information with the real world, superimposing virtual information onto the real environment in a single picture in real time. Although AR has advanced significantly in recent years, AR devices in the prior art still present a single, monotonous AR display effect. Optimizing the effect of the augmented reality scenes presented by AR devices and improving the user's interactive experience are therefore technical problems to be solved by the present disclosure.
Based on the research, the disclosure provides an AR interaction method, which includes displaying an AR picture matched with a real scene image on an AR device based on the real scene image shot by the AR device; the AR picture is provided with at least one associated virtual object, the associated virtual object comprises a first virtual sub-object and a second virtual sub-object, and a first AR special effect corresponding to the first virtual sub-object and a second AR special effect corresponding to the second virtual sub-object are displayed when the associated virtual object is triggered, so that the display effect of the AR special effects is improved, and the user interaction experience is enriched.
To facilitate understanding of the present embodiments, the AR interaction method disclosed in the embodiments of the present disclosure is first described in detail. The execution subject of the AR interaction method provided in the embodiments of the present disclosure is generally a computer device with certain computing capability, for example a terminal device, a server, or another processing device, where the terminal device may be a mobile device, a user terminal, a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the AR interaction method may be implemented by a processor invoking computer-readable instructions stored in a memory.
Referring to fig. 1, a flowchart of an AR interaction method provided in the embodiment of the present disclosure is shown, where the AR interaction method includes the following steps S101 to S102:
s101, displaying an AR picture matched with a real scene image on the AR equipment based on the real scene image shot by the AR equipment; at least one virtual object is shown in the AR picture, wherein the at least one virtual object comprises an associated virtual object, and the associated virtual object comprises a first virtual sub-object and a second virtual sub-object.
Illustratively, an AR picture matching the real-time positioning pose of the AR device can be displayed on the AR device based on a real scene image captured by the AR device and a pre-constructed three-dimensional virtual scene map, so that the display special effect of the virtual object integrates better with the real scene. The AR device may specifically include a smart phone, a tablet computer, AR glasses, and the like; that is, the AR device may be a terminal device among the aforementioned computer devices with certain computing capability. The AR device may have a built-in image acquisition component or be connected to an external one, and after the AR device enters the working state, real scene images can be captured in real time through the image acquisition component.
For example, whether a target object is included in the real scene image may be identified based on the real scene image captured by the AR device, and in a case where it is identified that the target object is included in the current real scene image, an AR picture matching the target object may be displayed in the AR device. For example, when the target object is a sofa in a living room, once the sofa is recognized to be included in the image of the real scene, the virtual object may be displayed in a preset orientation of the sofa to form an AR picture matching the target object.
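As an illustrative sketch only (the disclosure does not prescribe an API for this step), recognizing the target object and anchoring the virtual object at a preset orientation might look as follows; detector, scene, and preset_offset are invented names, and poses are assumed to be 4x4 matrices:

```python
def update_ar_picture(real_scene_image, detector, scene, virtual_object, preset_offset):
    """Display an AR picture matching the target object: when the target
    (e.g. a sofa) is recognized in the real scene image, the virtual object
    is placed at a preset orientation relative to it."""
    detection = detector.detect(real_scene_image, label="sofa")  # assumed interface
    if detection is not None:
        # Compose the detected pose with the preset orientation offset.
        anchor_pose = detection.pose @ preset_offset
        scene.place(virtual_object, anchor_pose)  # assumed interface
```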
That the associated virtual object comprises a first virtual sub-object and a second virtual sub-object means that an association relationship exists between the first virtual sub-object and the second virtual sub-object. The association relationship may be a supporting relationship between the first virtual sub-object and the second virtual sub-object, or it may be a control relationship between them, for example the first virtual sub-object controlling the behavior of the second virtual sub-object, which is not limited herein.
Fig. 2 is a schematic diagram of a first virtual sub-object and a second virtual sub-object in an embodiment of the present disclosure. In some embodiments, the first virtual sub-object may be a controlling virtual sub-object and the second virtual sub-object may be a trapped virtual sub-object, the controlling virtual sub-object serving to trap the trapped virtual sub-object.
For example, taking a bubble-popping game as an example, the controlling virtual sub-object may be a virtual bubble, and the trapped virtual sub-object may be a trapped virtual living body, such as any of various small animals. In this embodiment, the second virtual sub-object is completely surrounded by the first virtual sub-object. It is understood that the trapped virtual sub-object may also be another type of virtual object, such as, but not limited to, a bomb.
In other embodiments, the controlling virtual sub-object may also be a hijacker and the trapped virtual sub-object a hijacked person; it suffices that the first virtual sub-object and the second virtual sub-object have an association relationship, and their specific types are not limited.
It should be noted that the virtual object may also include other virtual objects besides the associated virtual object, for example, the virtual object may also include an empty virtual bubble.
S102, detecting a target trigger operation aiming at the associated virtual object, and displaying a first AR special effect corresponding to the first virtual sub-object and a second AR special effect corresponding to the second virtual sub-object in an AR picture under the condition that the target trigger operation meets a preset condition, wherein the first AR special effect and the second AR special effect are associated.
The association between the first AR special effect and the second AR special effect means that some relationship exists between them: for example, the second AR special effect is caused by the first AR special effect, or the two are related in their time of occurrence (for example, they occur simultaneously). The association relationship between the first AR special effect and the second AR special effect is not limited herein.
For example, a first AR special effect in which the first virtual sub-object undergoes a first state change and a second AR special effect in which the second virtual sub-object undergoes a second state change may be displayed in the AR picture. Displaying the corresponding AR special effects through the associated state changes of the first and second virtual sub-objects makes the AR special effects flexible and vivid and improves their display effect.
For example, the first state change may include a form change and the second state change may include a position change. As shown in fig. 2, the first virtual sub-object (the virtual bubble) can show a first AR special effect in which its form changes as it breaks, and the second virtual sub-object (the trapped animal) can show a second AR special effect in which its position changes as it flies away, so that the AR special effects are rich and colorful; the associated dynamic display effect increases the visual impact and further improves the user's visual experience.
Illustratively, the first state change may also include a posture change, and the second state change may also include a posture change. For example, the first virtual sub-object may show a first AR special effect in which its displayed posture switches, and the second virtual sub-object may likewise show a second AR special effect in which its displayed posture switches.
In the embodiment of the disclosure, based on the real scene image shot by the AR device, the corresponding AR picture is displayed, and at least one associated virtual object is displayed in the AR picture, so that the associated first AR special effect and the associated second AR special effect are displayed when the associated virtual object is triggered, the display effect of the AR special effect is further improved, and the user experience is enriched.
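To make the association between the two special effects concrete, the following minimal Python sketch uses the bubble example above, assuming a simple scene-graph interface; all class, method, and animation names are invented for illustration:

```python
class AssociatedVirtualObject:
    """An associated virtual object from the bubble example: a virtual
    bubble (first virtual sub-object) trapping a small animal (second
    virtual sub-object)."""

    def __init__(self, bubble, animal):
        self.bubble = bubble   # first virtual sub-object
        self.animal = animal   # second virtual sub-object

    def on_target_trigger(self, scene):
        """Called once the target trigger operation satisfies the preset
        condition; the two effects are associated because the second is
        caused by the first."""
        # First AR special effect: form change - the bubble breaks.
        scene.play_animation(self.bubble, "break")
        scene.remove(self.bubble)
        # Second AR special effect: position change - the freed animal flies away.
        scene.play_animation(self.animal, "fly_away")
```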
The above-mentioned S101 to S102 will be described in detail with reference to specific embodiments.
For the above S101, when displaying on the AR device an AR picture matching the real-time positioning pose of the AR device based on a real scene image captured by the AR device and a pre-constructed three-dimensional virtual scene map, as shown in fig. 5, the following S1011 to S1013 may be included:
and S1011, determining an initial positioning pose of the AR equipment based on the real scene image shot by the AR equipment and the pre-constructed three-dimensional virtual scene map.
For example, a three-dimensional virtual scene map representing the real scene may be generated from video or image data obtained by shooting the real scene in advance; the specific generation manner is described in detail later.
Illustratively, the three-dimensional virtual scene map is constructed based on video data of the real scene, and the real scene image is an image of that same real scene; therefore, the current pose data of the image acquisition component can be determined based on the real scene image and the three-dimensional virtual scene map. Because the image acquisition component is located on the AR device, and the real scene image displayed on the screen of the AR device is acquired by that component, the current pose data of the image acquisition component can be used as the initial pose data of the AR device in the real site.
Illustratively, the initial pose data of the AR device includes the current position coordinates and current orientation data of the AR device in a world coordinate system corresponding to the real site, where the current orientation may be represented by the angles between the optical axis of the image acquisition component in the AR device and the X-axis, Y-axis, and Z-axis of the world coordinate system.
And S1012, determining the real-time positioning pose of the AR device through simultaneous localization and mapping (SLAM) based on the initial positioning pose of the AR device.
SLAM (simultaneous localization and mapping) means that a device starts from an unknown position (here, the initial positioning pose) in an unknown environment, localizes itself during movement according to pose estimation and a map, and builds an incremental map on the basis of that self-localization, thereby achieving autonomous localization and navigation. In this embodiment, the AR device takes the position in the initial positioning pose as the coordinate origin of the SLAM map and establishes a SLAM coordinate system based on that origin, so that the pose of the AR device in the SLAM coordinate system can be determined as the device moves; combining this with the pose of the coordinate origin in the three-dimensional virtual scene map yields the real-time positioning pose of the AR device in the three-dimensional virtual scene map.
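By way of illustration only, and assuming poses are represented as 4x4 homogeneous transformation matrices (a representation the disclosure does not fix), the composition described above can be sketched as:

```python
import numpy as np

def real_time_pose_in_scene_map(initial_pose_map: np.ndarray,
                                pose_in_slam: np.ndarray) -> np.ndarray:
    """Compose the real-time positioning pose of the AR device (S1012).

    initial_pose_map: 4x4 pose of the device in the three-dimensional
                      virtual scene map at start-up (S1011); its position
                      serves as the SLAM coordinate origin
    pose_in_slam:     4x4 pose of the device in the SLAM coordinate
                      system, tracked as the device moves
    """
    return initial_pose_map @ pose_in_slam
```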
And S1013, displaying an AR picture matched with the real-time positioning pose of the AR equipment on the AR equipment based on the real-time positioning pose of the AR equipment.
In some embodiments, when the real-time positioning pose is the same as a preset positioning pose, an AR picture matching the real-time positioning pose of the AR device is displayed on the AR device. In this way, the real scene in which the virtual object is displayed can differ from one time to the next, allowing the virtual object to present its special effect in different real spaces and bringing users different visual experiences.
For the above S102, when detecting the target trigger operation for the associated virtual object, as shown in fig. 3, the following S1021 to S1023 may be included:
s1021, a trigger operation acting on the screen of the AR device is detected.
S1022, a screen coordinate position of the trigger operation on the screen of the AR device is identified.
And S1023, determining whether the trigger operation is a target trigger operation aiming at the associated virtual object or not based on the screen coordinate position and a preset position area of the associated virtual object in a pre-constructed three-dimensional virtual scene map.
For example, when a user touches a screen of the AR device, the AR device may detect a trigger operation acting on the screen and recognize a screen coordinate position of the trigger operation on the screen of the AR device.
For the above S1023, when determining whether the trigger operation is a target trigger operation for the associated virtual object based on the screen coordinate position and the preset position area of the associated virtual object in the pre-constructed three-dimensional virtual scene map, as shown in fig. 4, the following S10231 to S10233 may be included:
S10231, determining a SLAM coordinate position corresponding to the screen coordinate position according to the screen coordinate position at which the trigger operation acts on the AR device screen and the conversion relationship between the screen coordinate system and the SLAM coordinate system.
It can be understood that the position of the AR device in the SLAM coordinate system is the position of a certain reference point on the AR device (such as the camera position) in the SLAM coordinate system, and the relative coordinates of the screen with respect to that reference point are fixed; that is, there is a fixed conversion relationship between the screen coordinate system and the SLAM coordinate system. The SLAM coordinate position corresponding to a screen coordinate position can therefore be determined from the coordinates of the point in the screen coordinate system and its relative coordinates with respect to the reference point.
And S10232, mapping the SLAM coordinate position to a three-dimensional virtual scene map to obtain a virtual world coordinate position of the trigger operation acting on the three-dimensional virtual scene map.
For example, since the SLAM coordinate system is established with the initial positioning pose of the AR device in the three-dimensional virtual scene map as its coordinate origin, the virtual world coordinates in the three-dimensional virtual scene map corresponding to a point in the SLAM coordinate system can be derived from the coordinates of that point in the SLAM coordinate system and the coordinates of the SLAM origin (the initial positioning position) in the three-dimensional virtual scene map.
And S10233, determining the trigger operation as a target trigger operation aiming at the related virtual object when the virtual world coordinate position is located in the preset position area.
For example, when the virtual world coordinate position is located in the preset position area, the touch position of the user coincides with the position of the associated virtual object in the three-dimensional virtual scene map; that is, the user has clicked the associated virtual object. In this case, the trigger operation is determined to be valid and to be a target trigger operation for the associated virtual object.
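By way of illustration, the chain S10231 to S10233 can be sketched as follows, under simplifying assumptions not made in the disclosure: the touch is back-projected through the camera intrinsics at an assumed depth, all transforms are 4x4 homogeneous matrices, and the preset position area is an axis-aligned box:

```python
import numpy as np

def screen_to_virtual_world(screen_xy, depth, K_inv, cam_to_slam, slam_to_world):
    """S10231/S10232: map a screen touch to virtual-world coordinates."""
    u, v = screen_xy
    ray = K_inv @ np.array([u, v, 1.0])   # back-project through the intrinsics
    p_cam = np.append(depth * ray, 1.0)   # pick a point on the touch ray (assumed depth)
    p_slam = cam_to_slam @ p_cam          # screen/camera -> SLAM coordinate system
    p_world = slam_to_world @ p_slam      # SLAM -> three-dimensional virtual scene map
    return p_world[:3]

def is_target_trigger(p_world, region_min, region_max):
    """S10233: the trigger operation targets the associated virtual object
    when the mapped position lies inside the preset position area."""
    return bool(np.all(p_world >= region_min) and np.all(p_world <= region_max))
```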
In some embodiments, when the number of times of the target trigger operation exceeds a preset number or the duration of the target trigger operation exceeds a preset duration, it is determined that the target trigger operation satisfies the preset condition. For example, the virtual bubble is determined to be hit and broken only when the number of screen taps exceeds a preset number, which can increase the interest of game interaction.
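A minimal sketch of this check, with illustrative threshold values that are not specified by the disclosure:

```python
def satisfies_preset_condition(hit_count: int, press_duration: float,
                               preset_count: int = 3,
                               preset_duration: float = 2.0) -> bool:
    """At least one of: the number of target trigger operations exceeds the
    preset number, or their duration (e.g. a long press) exceeds the preset
    duration. Thresholds here are illustrative."""
    return hit_count > preset_count or press_duration > preset_duration
```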
It can be understood that, in other embodiments, for the above S101, when the AR device displays an AR picture matching the real-time positioning pose of the AR device, target AR special effect data of a virtual object corresponding to the currently reached target interaction stage may also be obtained based on AR special effect data of virtual objects respectively corresponding to preset different interaction stages, and the virtual object is then displayed in the AR picture based on the target AR special effect data. The target AR special effect data includes, but is not limited to, the type, moving speed, generation frequency, and generation quantity of the virtual object.
The different interaction stages refer to different states corresponding to the currently displayed AR picture. The stage may be set by the user or determined according to the degree of the user's operation on the AR picture. For example, in a level-based game, all users start in the same initial state; as they play, different users pass levels to different degrees, and the state corresponding to each user's current level is displayed.
For example, in the bubble game, different interaction stages may correspond to game levels, and AR pictures with different special effect data may be displayed at different levels. For example, different types of virtual bubbles can be continuously generated at different levels, and the game difficulty can be controlled by adjusting the movement speed of the virtual bubbles, their generation rate, and the number of taps required to break them, providing users with a rich game experience.
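As an illustrative sketch of how AR special effect data might be organized per interaction stage (all field names and values below are invented for illustration):

```python
# Illustrative stage table: bubble types, movement speed, generation rate,
# and taps required to break a bubble all scale with the level.
STAGE_EFFECT_DATA = {
    1: {"bubble_types": ["plain"],                   "speed": 0.5, "spawn_per_s": 0.5, "taps_to_break": 1},
    2: {"bubble_types": ["plain", "animal"],         "speed": 1.0, "spawn_per_s": 1.0, "taps_to_break": 2},
    3: {"bubble_types": ["plain", "animal", "bomb"], "speed": 1.5, "spawn_per_s": 2.0, "taps_to_break": 3},
}

def target_effect_data(current_stage: int) -> dict:
    """Obtain the target AR special effect data for the interaction stage
    the user has currently reached (falling back to the hardest stage)."""
    return STAGE_EFFECT_DATA.get(current_stage,
                                 STAGE_EFFECT_DATA[max(STAGE_EFFECT_DATA)])
```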
As shown in fig. 6, the three-dimensional virtual scene map mentioned above for many times may be pre-constructed in the following manner, including S601 to S603:
s601, acquiring a plurality of real scene sample images.
For example, the real scene, such as a living room, may be shot in advance through the AR device at multiple angles, so as to obtain a large number of sample images of the real scene corresponding to the real scene.
S602, constructing an initial three-dimensional scene virtual model representing a real scene based on a plurality of real scene sample images.
For S602, when generating an initial three-dimensional scene virtual model corresponding to a real scene based on a plurality of real scene sample images, the method may include:
(1) extracting a plurality of feature points from each acquired real scene sample image;
(2) generating an initial three-dimensional scene virtual model based on the extracted multiple feature points and a pre-stored three-dimensional sample graph matched with the real scene; the three-dimensional sample graph is a pre-stored three-dimensional graph representing the appearance characteristics of the real scene.
Specifically, the feature points extracted from each real scene sample image may be points capable of representing key information of that image; for example, for a real scene sample image containing an object (such as a wall), the feature points may be those representing the contour information of the object.
Illustratively, the pre-stored three-dimensional sample graph related to the real scene may be a three-dimensional graph, set in advance and carrying scale annotations, that characterizes the appearance features of the real scene, such as a Computer Aided Design (CAD) three-dimensional drawing characterizing those appearance features.
For the real scene, when enough feature points have been extracted, the feature point cloud formed by those feature points can constitute a three-dimensional model characterizing the real scene. The feature points in the point cloud are unitless, so the three-dimensional model they form is unitless as well; the feature point cloud is therefore aligned with the scale-annotated three-dimensional graph characterizing the appearance features of the real scene, yielding the initial three-dimensional scene virtual model corresponding to the real scene.
S603, aligning the calibration feature points on the constructed initial three-dimensional scene virtual model with the calibration feature points corresponding to the real scene to generate a three-dimensional virtual scene map.
The generated initial three-dimensional scene virtual model may exhibit distortion; it can therefore be adjusted using a two-dimensional map corresponding to the real scene, so as to obtain a three-dimensional scene virtual model with high accuracy.
For S603, when aligning the calibration feature points on the constructed initial three-dimensional scene model with the calibration feature points corresponding to the real scene to generate the three-dimensional virtual scene map, the method includes:
(1) extracting calibration characteristic points for representing a plurality of spatial position points of a real scene from an initial three-dimensional scene model corresponding to the real scene;
(2) and determining real coordinate data of the calibration feature points in a real two-dimensional map corresponding to a real scene, and adjusting the coordinate data of each feature point in the initial three-dimensional scene model based on the real coordinate data corresponding to each calibration feature point.
For example, some feature points representing spatial position points of the edge and corner of the target object may be selected as calibration feature points, then a coordinate data adjustment amount is determined based on real coordinate data corresponding to the calibration feature points and coordinate data of the calibration feature points in the initial three-dimensional scene virtual model, and then the coordinate data of each feature point in the initial three-dimensional model is corrected based on the coordinate data adjustment amount, so that a three-dimensional scene virtual model with high accuracy can be obtained.
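A deliberately simplified sketch of this adjustment, assuming a translation-only correction in the ground plane; a full implementation would typically also solve for rotation and scale (for example, via a similarity transform), which the disclosure leaves open:

```python
import numpy as np

def adjust_model_coordinates(model_points: np.ndarray,
                             calib_idx,
                             calib_real_xy: np.ndarray) -> np.ndarray:
    """Translation-only correction of the initial three-dimensional scene
    virtual model against the real two-dimensional map (S603, simplified).

    model_points:  (N, 3) coordinates of all feature points in the model
    calib_idx:     indices of the calibration feature points
    calib_real_xy: (K, 2) real coordinates of those points in the 2D map
    """
    # Coordinate data adjustment amount: the mean offset between the real
    # coordinates and the model coordinates of the calibration points,
    # applied in the ground (x, y) plane.
    offset = (calib_real_xy - model_points[calib_idx, :2]).mean(axis=0)
    adjusted = model_points.copy()
    adjusted[:, :2] += offset
    return adjusted
```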
With respect to the above S1011, when determining the initial positioning pose of the AR device based on the real scene image captured by the AR device and the three-dimensional virtual scene map constructed in advance, as shown in fig. 7, the following S10111 to S10113 may be included:
s10111, extracting the feature points contained in the real scene image, and extracting the feature points of each real scene sample image when the three-dimensional virtual scene map is constructed in advance.
S10112, determining a target real scene sample image with the highest similarity to the real scene image based on the feature points corresponding to the real scene image and the feature points corresponding to each real scene sample image when the three-dimensional virtual scene map is constructed in advance.
S10113, determining initial pose data of the AR equipment based on shooting pose data corresponding to the target real scene sample image.
For example, after a real scene image captured by the AR device is acquired, the target real scene sample image with the highest similarity to the real scene image can be found using the feature points in the real scene image and the feature points of each real scene sample image extracted when the three-dimensional virtual scene map was constructed. For example, a similarity value between the real scene image and each real scene sample image may be determined based on the feature information of their respective feature points, and the real scene sample image whose similarity value is highest and exceeds a similarity threshold may be taken as the target real scene sample image.
After the target real scene sample image is determined, initial pose data of the AR device may be determined based on shooting pose data corresponding to the target real scene sample image.
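By way of illustration, S10112 could be implemented with ORB feature matching from OpenCV; the disclosure does not prescribe a feature type or similarity score, so both are assumptions here:

```python
import cv2

def most_similar_sample(query_img, sample_imgs, sim_threshold=0.25):
    """Find the target real scene sample image with the highest similarity
    to the current real scene image (S10112), using the fraction of matched
    ORB descriptors as an illustrative similarity value."""
    orb = cv2.ORB_create()
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, q_desc = orb.detectAndCompute(query_img, None)
    if q_desc is None:
        return None
    best, best_score = None, 0.0
    for idx, img in enumerate(sample_imgs):
        _, desc = orb.detectAndCompute(img, None)
        if desc is None:
            continue
        matches = bf.match(q_desc, desc)
        score = len(matches) / max(len(q_desc), 1)  # fraction of matched features
        if score > best_score:
            best, best_score = idx, score
    # Accept only when the best similarity exceeds the similarity threshold.
    return best if best_score > sim_threshold else None
```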
It will be understood by those skilled in the art that, in the methods of the specific implementations described above, the order in which the steps are written does not imply a strict order of execution or impose any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible internal logic.
Based on the same technical concept, an augmented reality AR interaction device corresponding to the augmented reality AR interaction method is further provided in the embodiment of the present disclosure, and as the principle of solving the problem of the device in the embodiment of the present disclosure is similar to that of the above-mentioned augmented reality AR interaction method in the embodiment of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are omitted.
Referring to fig. 8, a schematic diagram of an AR interaction apparatus 500 provided in an embodiment of the present disclosure is shown, where the AR interaction apparatus includes:
an AR picture display module 501, configured to display, on the basis of a real scene image captured by an AR device, an AR picture matched with the real scene image on the AR device; at least one virtual object is displayed in the AR picture, wherein the at least one virtual object comprises an associated virtual object, and the associated virtual object comprises a first virtual sub-object and a second virtual sub-object;
the AR special effect display module 502 is configured to detect a target trigger operation for the associated virtual object, and display a first AR special effect corresponding to the first virtual sub-object and a second AR special effect corresponding to the second virtual sub-object in the AR screen when the target trigger operation meets a preset condition, where there is an association between the first AR special effect and the second AR special effect.
In one possible implementation, the AR special effect display module 502 is specifically configured to:
detecting a trigger operation acting on a screen of the AR device;
identifying a screen coordinate position of a trigger operation on a screen of the AR device;
and determining whether the trigger operation is a target trigger operation aiming at the associated virtual object or not based on the screen coordinate position and a preset position area of the associated virtual object in a pre-constructed three-dimensional virtual scene map.
In one possible implementation, the AR special effect display module 502 is specifically configured to:
determining a SLAM coordinate position corresponding to the screen coordinate position according to the screen coordinate position at which the trigger operation acts on the screen of the AR device and a conversion relationship between the screen coordinate system and the simultaneous localization and mapping (SLAM) coordinate system;
mapping the SLAM coordinate position to a three-dimensional virtual scene map to obtain a virtual world coordinate position of a trigger operation acting on the three-dimensional virtual scene map;
and determining the trigger operation as a target trigger operation aiming at the associated virtual object under the condition that the virtual world coordinate position is located in the preset position area.
In one possible implementation, the AR special effect display module 502 is specifically configured to:
and displaying a first AR special effect of the first virtual sub-object with the first state change and a second AR special effect of the second virtual sub-object with the second state change in the AR picture.
In one possible embodiment, the first state change comprises a form change and the second state change comprises a position change.
In a possible implementation, the AR picture display module 501 is specifically configured to:
and displaying an AR picture matched with the real-time positioning pose of the AR equipment on the AR equipment based on the real scene image shot by the AR equipment and the pre-constructed three-dimensional virtual scene map.
In a possible implementation, the AR picture display module 501 is specifically configured to:
determining an initial positioning pose of the AR equipment based on a real scene image shot by the AR equipment and a pre-constructed three-dimensional virtual scene map;
determining the real-time positioning pose of the AR device through simultaneous localization and mapping (SLAM) based on the initial positioning pose of the AR device;
and displaying an AR picture matched with the real-time positioning pose of the AR equipment on the AR equipment based on the real-time positioning pose of the AR equipment.
In one possible embodiment, the target triggering operation satisfies a preset condition, and includes at least one of:
the number of times of the target trigger operation exceeds a preset number of times; the duration of the target trigger operation exceeds a preset duration.
In a possible implementation, the AR picture presentation module 501 is specifically configured to:
acquiring target AR special effect data of a virtual object corresponding to a currently reached target interaction stage based on AR special effect data of the virtual object corresponding to different preset interaction stages;
and displaying the virtual object in the AR picture based on the target AR special effect data.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Based on the same technical concept, the embodiment of the disclosure also provides an electronic device. Referring to fig. 9, a schematic structural diagram of an electronic device 700 provided in the embodiment of the present disclosure includes a processor 701, a memory 702, and a bus 703. The memory 702 is used for storing execution instructions and includes a memory 7021 and an external memory 7022; the memory 7021 is also referred to as an internal memory and temporarily stores operation data in the processor 701 and data exchanged with an external memory 7022 such as a hard disk, and the processor 701 exchanges data with the external memory 7022 via the memory 7021.
In this embodiment, the memory 702 is specifically configured to store application program codes for executing the scheme of the present application, and is controlled by the processor 701 to execute. That is, when the electronic device 700 is operating, communication between the processor 701 and the memory 702 is via the bus 703, which enables the processor 701 to execute application program code stored in the memory 702.
The memory 702 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 702 is configured to store a program, and the processor 701 executes the program after receiving an execution instruction; the method executed by the electronic device 700 defined by any flow disclosed in the embodiments of the present disclosure may be applied to the processor 701 or implemented by the processor 701.
The processor 701 may be an integrated circuit chip having signal processing capabilities. The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, capable of implementing or performing the various methods, steps, and logic blocks disclosed in the embodiments of the present disclosure. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
It is to be understood that the structure illustrated in this embodiment of the present application does not constitute a specific limitation on the electronic device 700. In other embodiments of the present application, the electronic device 700 may include more or fewer components than shown, combine certain components, split certain components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the AR interaction method in the above method embodiments are performed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the AR interaction method provided in the embodiments of the present disclosure includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the steps of the AR interaction method in the above method embodiments. For details, reference may be made to the above method embodiments, which are not repeated here.
An embodiment of the present disclosure further provides a computer program which, when executed by a processor, implements any one of the methods of the foregoing embodiments. The corresponding computer program product may be implemented in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, it is embodied as a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and apparatus described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only one logical division, and there may be other divisions in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection of apparatuses or units through some communication interfaces, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as an independent product, may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are merely specific implementations of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may still modify the technical solutions described in the foregoing embodiments, or easily conceive of changes, or make equivalent substitutions for some of the technical features, within the technical scope of the present disclosure; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and shall be covered by its protection scope. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. An Augmented Reality (AR) interaction method, comprising:
based on a real scene image shot by an AR device, displaying, on the AR device, an AR picture matched with the real scene image; wherein at least one virtual object is displayed in the AR picture, the at least one virtual object comprises an associated virtual object, and the associated virtual object comprises a first virtual sub-object and a second virtual sub-object;
and detecting a target trigger operation for the associated virtual object, and displaying, in the AR picture, a first AR special effect corresponding to the first virtual sub-object and a second AR special effect corresponding to the second virtual sub-object in a case that the target trigger operation satisfies a preset condition, wherein the first AR special effect and the second AR special effect are associated with each other.
2. The method of claim 1, wherein the detecting a target trigger operation for the associated virtual object comprises:
detecting a trigger operation acting on a screen of the AR device;
identifying a screen coordinate position of the trigger operation on a screen of the AR device;
and determining whether the trigger operation is a target trigger operation for the associated virtual object based on the screen coordinate position and a preset position area of the associated virtual object in a pre-constructed three-dimensional virtual scene map.
3. The method according to claim 2, wherein the determining whether the trigger operation is a target trigger operation for the associated virtual object based on the screen coordinate position and a preset position area of the associated virtual object in a pre-constructed three-dimensional virtual scene map comprises:
determining a SLAM coordinate position corresponding to the screen coordinate position according to the screen coordinate position at which the trigger operation acts on the screen of the AR device and a conversion relation between a screen coordinate system and a simultaneous localization and mapping (SLAM) coordinate system;
mapping the SLAM coordinate position to the three-dimensional virtual scene map to obtain a virtual world coordinate position of the trigger operation acting on the three-dimensional virtual scene map;
and determining the trigger operation as a target trigger operation for the associated virtual object in a case that the virtual world coordinate position is located in the preset position area.
4. The method of claim 1, wherein the displaying, in the AR picture, a first AR special effect corresponding to the first virtual sub-object and a second AR special effect corresponding to the second virtual sub-object comprises:
and displaying, in the AR picture, a first AR special effect in which the first virtual sub-object undergoes a first state change and a second AR special effect in which the second virtual sub-object undergoes a second state change.
5. The method of claim 4, wherein the first state change comprises a change of form, and the second state change comprises a change of position.
6. The method according to any one of claims 1-5, wherein the displaying, on the AR device, an AR picture matched with the real scene image based on the real scene image shot by the AR device comprises:
and displaying, on the AR device, an AR picture matched with the real-time positioning pose of the AR device based on the real scene image shot by the AR device and a pre-constructed three-dimensional virtual scene map.
7. The method of claim 6, wherein the displaying, on the AR device, an AR picture matched with the real-time positioning pose of the AR device based on the real scene image shot by the AR device and the pre-constructed three-dimensional virtual scene map comprises:
determining an initial positioning pose of the AR device based on the real scene image shot by the AR device and the pre-constructed three-dimensional virtual scene map;
determining the real-time positioning pose of the AR device through simultaneous localization and mapping (SLAM) based on the initial positioning pose of the AR device;
and displaying, on the AR device, an AR picture matched with the real-time positioning pose of the AR device based on the real-time positioning pose.
8. The method according to any one of claims 1-7, wherein the target trigger operation satisfying the preset condition comprises at least one of the following:
the number of times of the target trigger operation exceeds a preset number of times, and the duration of the target trigger operation exceeds a preset duration.
9. The method according to any one of claims 1-8, wherein the displaying, on the AR device, an AR picture matched with the real scene image comprises:
acquiring target AR special effect data of the virtual object corresponding to a currently reached target interaction stage based on AR special effect data of the virtual object corresponding to different preset interaction stages;
and displaying the virtual object in the AR picture based on the target AR special effect data.
10. An Augmented Reality (AR) interaction device, comprising:
the AR picture display module is used for displaying, on an AR device, an AR picture matched with a real scene image based on the real scene image shot by the AR device; wherein at least one virtual object is displayed in the AR picture, the at least one virtual object comprises an associated virtual object, and the associated virtual object comprises a first virtual sub-object and a second virtual sub-object;
and the AR special effect display module is used for detecting a target trigger operation for the associated virtual object, and displaying, in the AR picture, a first AR special effect corresponding to the first virtual sub-object and a second AR special effect corresponding to the second virtual sub-object in a case that the target trigger operation satisfies a preset condition, wherein the first AR special effect and the second AR special effect are associated with each other.
11. An electronic device, comprising: a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate through the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the augmented reality AR interaction method of any one of claims 1-9.
12. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, performs the steps of the augmented reality AR interaction method of any one of claims 1-9.
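As one possible reading of the coordinate pipeline in claims 2-3, the following Python sketch converts the screen coordinate of a trigger operation into a SLAM coordinate position and then maps it into the three-dimensional virtual scene map; the fixed projection depth, the intrinsics tuple, both 4x4 transforms, and the axis-aligned position area are the editor's assumptions, since a real system would obtain them from calibration and SLAM rather than as constants.

import numpy as np

def screen_to_virtual_world(screen_xy, depth, intrinsics, cam_to_slam, slam_to_map):
    u, v = screen_xy
    fx, fy, cx, cy = intrinsics
    # Back-project the screen point to a 3D point in camera coordinates.
    cam_pt = np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth, 1.0])
    slam_pt = cam_to_slam @ cam_pt      # screen/camera -> SLAM coordinate position
    return (slam_to_map @ slam_pt)[:3]  # SLAM -> virtual world coordinate position

def is_target_trigger(world_pt, region_min, region_max):
    # The operation counts as a target trigger operation when the mapped point
    # falls inside the preset position area of the associated virtual object.
    return bool(np.all(world_pt >= region_min) and np.all(world_pt <= region_max))

# Usage with identity transforms and an assumed 1 m touch depth.
pt = screen_to_virtual_world((640, 360), 1.0, (500.0, 500.0, 640.0, 360.0),
                             np.eye(4), np.eye(4))
print(is_target_trigger(pt, np.array([-1.0, -1.0, 0.0]), np.array([1.0, 1.0, 2.0])))  # True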
CN202011005936.5A 2020-09-23 2020-09-23 Augmented reality AR interaction method and device, electronic equipment and storage medium Pending CN112148197A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011005936.5A CN112148197A (en) 2020-09-23 2020-09-23 Augmented reality AR interaction method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112148197A 2020-12-29

Family

ID=73897532

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011005936.5A Pending CN112148197A (en) 2020-09-23 2020-09-23 Augmented reality AR interaction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112148197A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170123496A1 (en) * 2015-11-03 2017-05-04 Chunghwa Picture Tubes Ltd. Augmented reality system and augmented reality interaction method
WO2018121684A1 (en) * 2016-12-29 2018-07-05 中兴通讯股份有限公司 Head-mounted apparatus, interactive game platform, board game implementation system and method
CN109420336A (en) * 2017-08-30 2019-03-05 深圳市掌网科技股份有限公司 Game implementation method and device based on augmented reality
US20200074743A1 (en) * 2017-11-28 2020-03-05 Tencent Technology (Shenzhen) Company Ltd Method, apparatus, device and storage medium for implementing augmented reality scene
CN108509043A (en) * 2018-03-29 2018-09-07 联想(北京)有限公司 A kind of interaction control method and system
CN108765563A (en) * 2018-05-31 2018-11-06 北京百度网讯科技有限公司 Processing method, device and the equipment of SLAM algorithms based on AR
CN109189302A (en) * 2018-08-29 2019-01-11 百度在线网络技术(北京)有限公司 The control method and device of AR dummy model
CN110716645A (en) * 2019-10-15 2020-01-21 北京市商汤科技开发有限公司 Augmented reality data presentation method and device, electronic equipment and storage medium
CN111359200A (en) * 2020-02-26 2020-07-03 网易(杭州)网络有限公司 Augmented reality-based game interaction method and device

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114697302A (en) * 2020-12-31 2022-07-01 伊姆西Ip控股有限责任公司 Method for distributing virtual visual content
CN112817449B (en) * 2021-01-28 2023-07-21 北京市商汤科技开发有限公司 Interaction method and device for augmented reality scene, electronic equipment and storage medium
CN112817449A (en) * 2021-01-28 2021-05-18 北京市商汤科技开发有限公司 Interaction method and device for augmented reality scene, electronic equipment and storage medium
CN112882576A (en) * 2021-02-26 2021-06-01 北京市商汤科技开发有限公司 AR interaction method and device, electronic equipment and storage medium
WO2022205634A1 (en) * 2021-03-30 2022-10-06 北京市商汤科技开发有限公司 Data display method and apparatus, and device, storage medium and program
CN113398577A (en) * 2021-05-13 2021-09-17 杭州易现先进科技有限公司 Multi-person AR interaction method and system for offline space
CN113398577B (en) * 2021-05-13 2024-04-09 杭州易现先进科技有限公司 Multi-person AR interaction method and system for offline space
CN113421343B * 2021-05-27 2024-06-04 深圳市晨北科技有限公司 Method for observing internal structure of equipment based on augmented reality
CN113421343A (en) * 2021-05-27 2021-09-21 深圳市晨北科技有限公司 Method for observing internal structure of equipment based on augmented reality
CN113347373B (en) * 2021-06-16 2022-06-03 潍坊幻视软件科技有限公司 Image processing method for making special-effect video in real time through AR space positioning
CN113347373A (en) * 2021-06-16 2021-09-03 潍坊幻视软件科技有限公司 Image processing method for making special-effect video in real time through AR space positioning
WO2022267729A1 (en) * 2021-06-24 2022-12-29 腾讯科技(深圳)有限公司 Virtual scene-based interaction method and apparatus, device, medium, and program product
CN113706721A (en) * 2021-09-07 2021-11-26 中国计量大学 Elevator inspection method and system based on augmented reality technology
WO2023045964A1 (en) * 2021-09-27 2023-03-30 上海商汤智能科技有限公司 Display method and apparatus, device, computer readable storage medium, computer program product, and computer program
CN114356087A (en) * 2021-12-30 2022-04-15 北京绵白糖智能科技有限公司 Interaction method, device, equipment and storage medium based on augmented reality
WO2023124693A1 (en) * 2021-12-31 2023-07-06 上海商汤智能科技有限公司 Augmented reality scene display
CN114401442A (en) * 2022-01-14 2022-04-26 北京字跳网络技术有限公司 Video live broadcast and special effect control method and device, electronic equipment and storage medium
CN114401442B (en) * 2022-01-14 2023-10-24 北京字跳网络技术有限公司 Video live broadcast and special effect control method and device, electronic equipment and storage medium
CN115268658A (en) * 2022-09-30 2022-11-01 苏芯物联技术(南京)有限公司 Multi-party remote space delineation marking method based on augmented reality

Similar Documents

Publication Publication Date Title
CN112148197A (en) Augmented reality AR interaction method and device, electronic equipment and storage medium
CN107820593B (en) Virtual reality interaction method, device and system
CN111880657B (en) Control method and device of virtual object, electronic equipment and storage medium
CN111638793B (en) Display method and device of aircraft, electronic equipment and storage medium
CN110738737A (en) AR scene image processing method and device, electronic equipment and storage medium
CN110716645A (en) Augmented reality data presentation method and device, electronic equipment and storage medium
US9268410B2 (en) Image processing device, image processing method, and program
CN110473293B (en) Virtual object processing method and device, storage medium and electronic equipment
CN112882576B (en) AR interaction method and device, electronic equipment and storage medium
EP2903256B1 (en) Image processing device, image processing method and program
CN112148189A (en) Interaction method and device in AR scene, electronic equipment and storage medium
CN112348968B (en) Display method and device in augmented reality scene, electronic equipment and storage medium
JP6609640B2 (en) Managing feature data for environment mapping on electronic devices
JP2022505998A (en) Augmented reality data presentation methods, devices, electronic devices and storage media
CN111679742A (en) Interaction control method and device based on AR, electronic equipment and storage medium
CN111950521A (en) Augmented reality interaction method and device, electronic equipment and storage medium
KR20130119233A (en) Apparatus for acquiring 3 dimension virtual object information without pointer
CN111638797A (en) Display control method and device
CN111833457A (en) Image processing method, apparatus and storage medium
CN111882674A (en) Virtual object adjusting method and device, electronic equipment and storage medium
CN111651057A (en) Data display method and device, electronic equipment and storage medium
CN112905014A (en) Interaction method and device in AR scene, electronic equipment and storage medium
CN111651051A (en) Virtual sand table display method and device
CN111652971A (en) Display control method and device
CN110310325B (en) Virtual measurement method, electronic device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201229