CN112905014A - Interaction method and device in AR scene, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112905014A
Authority
CN
China
Prior art keywords
target
user
special effect
scene
picture
Prior art date
Legal status
Pending
Application number
CN202110218841.XA
Other languages
Chinese (zh)
Inventor
侯欣如 (Hou Xinru)
王鼎禄 (Wang Dinglu)
李斌 (Li Bin)
Current Assignee
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date
Filing date: 2021-02-26
Publication date: 2021-06-04
Application filed by Beijing Sensetime Technology Development Co., Ltd.
Priority to CN202110218841.XA
Publication of CN112905014A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20: Input arrangements for video game devices
    • A63F13/21: Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/213: Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/168: Feature extraction; Face representation
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60: Methods for processing data by generating or executing the game program
    • A63F2300/66: Methods for processing data by generating or executing the game program for rendering three dimensional images

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides an interaction method and apparatus in an AR scene, an electronic device, and a storage medium. The method includes: after it is determined that an AR device is located in a target AR scene, displaying, in the AR device, an AR picture matched with the target AR scene and a user image of a target user corresponding to at least one other target AR device; determining AR special effect data corresponding to the target user based on the user image; and displaying the determined AR special effect data at a target position corresponding to the user image of the target user in the AR picture of the AR device.

Description

Interaction method and device in AR scene, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of Augmented Reality (AR) technologies, and in particular, to an interaction method and apparatus in an AR scene, an electronic device, and a storage medium.
Background
AR technology superimposes corresponding images, videos, and three-dimensional (3D) models onto live video according to the position and angle of the camera image, calculated in real time, so as to fuse the virtual world with the real world. It offers users a new kind of interactive experience and is therefore widely applied in technical fields such as consumer applications, medical care, and games.
At present, most human-computer interaction in AR scenes consists of each user performing the AR experience independently, which cannot meet the demand of multiple users for AR human-computer interaction.
Disclosure of Invention
The embodiment of the disclosure at least provides an interaction method and device in an AR scene, electronic equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides an interaction method in an AR scene, where the method includes:
after it is determined that the AR device is located in a target AR scene, displaying, in the AR device, an AR picture matched with the target AR scene and a user image of a target user corresponding to at least one other target AR device;
determining AR special effect data corresponding to the target user based on the user image;
and displaying the determined AR special effect data at a target position corresponding to the user image of the target user in the AR picture of the AR device.
With the above interaction method in an AR scene, when the user image of a real target user participating in the AR interaction is captured within the shooting field of view of the AR device, the AR special effect data of that target user can be displayed at the corresponding target position in the AR picture. The AR device can be any of the AR devices in the interactive scene, so each user can use the AR special effect data, displayed by their own AR device for the target users corresponding to the other target AR devices, to interact with those target users; combining the virtual and the real improves the sense of interactive experience.
In one possible embodiment, the determining, based on the user image, AR special effect data corresponding to the target user includes:
acquiring user identity information of the target user based on the user image;
acquiring target interaction data corresponding to the user identity information;
and generating AR special effect data corresponding to the target user based on the user identity information and the target interaction data.
The user identity information of the target user can be obtained from the user image, and different user identity information corresponds to different target interaction data. Once the target interaction data corresponding to the target user is obtained, the AR special effect data can be determined, so the AR interaction status of the real target user can be presented, improving the interaction experience among different users.
In one possible embodiment, the method further comprises:
determining a face image area where the face of the target user is located from the user image;
displaying the determined AR special effect data at a target position corresponding to the user image of the target user in an AR picture of the AR device, including:
displaying the determined AR special effect data at a target position corresponding to the face image area of the target user in the AR picture of the AR device.
Here, the face image area where the target user's face is located can be determined first, so that the AR special effect data corresponding to the target user can be displayed at, or near, the position of the target user's face; because the display position is closer to the displayed content, the visual display effect is better.
In one possible embodiment, the AR special effects data comprises one or more of the following data:
attribute data of the target user;
and the interaction score of the target user interacting in the target AR scene.
In one possible implementation, when a virtual object is displayed in the AR picture of the AR device, the method further includes:
when a target trigger operation acting on the virtual object shown in the AR picture of the AR device is detected, showing the AR special effect of the virtual object in the AR picture.
Here, a corresponding AR special effect can be displayed for the virtual object shown in the AR picture on the AR device, enriching the AR display effect.
In one possible implementation, the target trigger operation of the virtual object shown in the AR picture acting on the AR device is detected according to the following steps:
in response to a trigger operation acting on the screen of the AR device, determining a screen coordinate position corresponding to the trigger operation;
converting the screen coordinate position to a camera coordinate position in a virtual camera coordinate system based on the screen coordinate position and a first conversion relationship between the screen coordinate system and the virtual camera coordinate system;
converting the camera coordinate position to the world coordinate system based on the camera coordinate position and a second conversion relationship between the virtual camera coordinate system and the world coordinate system, to obtain a world coordinate position;
and when the world coordinate position falls within the position range corresponding to the virtual object, determining that the target trigger operation of the virtual object shown in the AR picture acting on the AR device is detected.
Here, based on the first conversion relationship between the screen coordinate system and the virtual camera coordinate system and the second conversion relationship between the virtual camera coordinate system and the world coordinate system, it can be determined whether a trigger operation on the screen of the AR device acts on a virtual object shown in the AR picture, improving the accuracy of subsequent AR operations.
In a second aspect, an embodiment of the present disclosure further provides an interaction apparatus in an AR scene, where the apparatus includes:
a picture display module, configured to display, in the AR device after it is determined that the AR device is located in a target AR scene, an AR picture matched with the target AR scene and a user image of a target user corresponding to at least one other target AR device;
a special effect determination module for determining AR special effect data corresponding to the target user based on the user image;
and the special effect display module is used for displaying the determined AR special effect data at a target position corresponding to the user image of the target user in the AR picture of the AR equipment.
In one possible implementation, the special effect display module is further configured to:
determining a face image area where the face of the target user is located from the user image;
and displaying the determined AR special effect data at a target position corresponding to the face image area of the target user in the AR picture of the AR device.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the interaction method in the AR scenario as described in the first aspect and any of its various embodiments.
In a fourth aspect, the disclosed embodiments further provide a computer-readable storage medium having a computer program stored thereon; when the computer program is executed by a processor, it performs the steps of the interaction method in the AR scene described in the first aspect and any of its various embodiments.
For the description of the effects of the interaction apparatus, the electronic device, and the computer-readable storage medium in the AR scene, reference is made to the description of the interaction method in the AR scene, and details are not repeated here.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings required by the embodiments are briefly described below. The drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those of ordinary skill in the art may derive other related drawings from them without inventive effort.
Fig. 1 shows a flowchart of an interaction method in an AR scenario provided by an embodiment of the present disclosure;
fig. 2 shows a flowchart of a specific method for generating AR special effect data in an interaction method in an AR scene according to an embodiment of the present disclosure;
fig. 3 shows a flowchart of a specific method for detecting a target trigger operation in an interaction method in an AR scenario according to an embodiment of the present disclosure;
fig. 4(a) is a scene schematic diagram illustrating an interaction method in an AR scene according to an embodiment of the present disclosure;
fig. 4(b) is a scene schematic diagram illustrating an interaction method in an AR scene according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram illustrating an interaction apparatus in an AR scene according to an embodiment of the present disclosure;
fig. 6 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an associative relationship, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Research shows that, at present, most human-computer interaction in AR scenes consists of each user performing the AR experience independently, which cannot meet the demand of multiple users for AR human-computer interaction.
Based on this research, the present disclosure provides an interaction method and apparatus in an AR scene, an electronic device, and a storage medium, which provide a better AR interaction experience.
To facilitate understanding of the present embodiment, an interaction method in an AR scene disclosed in the embodiments of the present disclosure is first described in detail. The execution subject of the interaction method provided in the embodiments of the present disclosure is generally an electronic device with certain computing capability, for example a user terminal, a server, or another processing device. The server may be a server connected to the user terminal; the user terminal may be a tablet computer, a smart phone, a smart wearable device, an AR device (e.g., AR glasses or an AR helmet), or another device having a display function and data processing capability, and may be connected to the server through an application program. In some possible implementations, the interaction method in the AR scene may be implemented by a processor calling computer-readable instructions stored in a memory.
The following describes an interaction method in an AR scenario provided in the embodiments of the present disclosure.
Referring to fig. 1, which is a flowchart of an interaction method in an AR scene provided in the embodiment of the present disclosure, the method includes steps S101 to S103, where:
s101: after the AR equipment is determined to be located in the target AR scene, displaying an AR picture matched with the target AR scene and a user image of a target user corresponding to at least one other target AR equipment in the AR equipment;
s102: determining AR special effect data corresponding to a target user based on the user image;
s103: and displaying the determined AR special effect data at a target position corresponding to the user image of the target user in the AR picture of the AR equipment.
The interaction method in the AR scene may be mainly applied to various scenes in which AR scene interaction is required, for example, may be applied to an AR game scene to implement interaction between multiple users in the AR game scene, and may also be applied to other various application scenes, which are not specifically limited herein.
The interaction method in the AR scene provided by the embodiments of the present disclosure implements multi-user interaction on the premise that, in the AR picture displayed by the AR device, interaction with a real target user shown in the AR device can be realized based on AR special effect data. The target user is a real person, and the corresponding AR special effect data is virtual AR content added subsequently, so that a user can interact with the target user in the AR scene through the AR device, giving a better experience.
In the embodiments of the present disclosure, the above interaction operation can be implemented for any one of a plurality of AR devices; the plurality of AR devices here may be all the AR devices in the target AR scene. Based on the real-time positioning pose of each AR device, the multiple AR devices can be mapped into the same world coordinate system, which corresponds to the target AR scene. The determination of the real-time positioning pose of an AR device is explained in detail next.
In the embodiment of the disclosure, the positioning pose information of the AR device may be determined based on a real scene image shot by the AR device and a three-dimensional scene map constructed in advance.
The real scene image here may be an image captured by the camera on the AR device while the user wearing the AR device is having an AR experience in the AR scene. Based on the captured images, on the one hand, local positioning can be performed on the AR device itself, determining its positioning pose information against the pre-constructed three-dimensional scene map; on the other hand, remote positioning can be performed by the server, that is, after the captured real scene images are uploaded, the server determines the positioning pose information of the AR device based on the pre-constructed three-dimensional scene map.
The three-dimensional scene map in the embodiments of the present disclosure may be a high-precision map constructed from point cloud data. Here, a large number of photos or videos can be collected for a specific place, for example a set of photos covering different shooting times, shooting angles, and shooting positions; a sparse feature point cloud of the place can be recovered from the collected photo set, and a high-precision map of the place can be constructed from the recovered sparse feature point cloud. In some embodiments, this can be implemented with the Structure from Motion (SfM) three-dimensional reconstruction technique.
In this way, after the real scene image uploaded by the AR device is obtained, feature points can first be extracted from the picture, then matched against the sparse feature point cloud of the high-precision map, and the position and posture of the AR device at the time the real scene image was shot can be determined from the matching result.
Therefore, the three-dimensional scene map can be used to confirm the pose of the AR device with high precision and high accuracy.
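As an illustrative aside (not part of the original disclosure), the sketch below shows one plausible way to implement this matching-based positioning with OpenCV: ORB feature points are extracted from the captured real scene image, matched against the map's sparse feature point cloud, and the device pose is recovered by solving a PnP problem. The map inputs (map_points_3d, map_descriptors) and the choice of ORB features with RANSAC are assumptions of the sketch, not details given by the disclosure.

```python
# Hedged sketch: pose estimation by matching a real scene image against a
# pre-built sparse feature point cloud, then solving PnP. The map inputs
# are assumed pre-computed; the disclosure does not fix the feature type
# or the solver.
import cv2
import numpy as np

def estimate_device_pose(image, map_points_3d, map_descriptors, camera_matrix):
    # 1. Extract feature points from the captured real scene image.
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(image, None)
    if descriptors is None:
        return None

    # 2. Match image features against the map's sparse feature point cloud.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(descriptors, map_descriptors),
                     key=lambda m: m.distance)[:500]

    pts_2d = np.float32([keypoints[m.queryIdx].pt for m in matches])
    pts_3d = np.float32([map_points_3d[m.trainIdx] for m in matches])

    # 3. Recover the position and posture of the device from the 2D-3D
    #    correspondences, using RANSAC to reject bad matches.
    ok, rvec, tvec, _ = cv2.solvePnPRansac(pts_3d, pts_2d,
                                           camera_matrix, None)
    return (rvec, tvec) if ok else None
```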
The embodiments of the present disclosure can also combine the Simultaneous Localization And Mapping (SLAM) technique to realize the positioning of the AR device. Joint positioning can be implemented according to the following steps:
Step one: determining an initial positioning pose of the AR device based on a real scene image shot by the AR device and a pre-constructed three-dimensional scene map;
Step two: based on the initial positioning pose of the AR device, determining the real-time positioning pose of the AR device through SLAM.
Here, the initial positioning pose of the AR device can be determined from the three-dimensional scene map, and the real-time positioning pose can then be determined by SLAM on that basis. SLAM builds an incremental map starting from the device's own positioning, so once the initial positioning pose is determined, the position and direction of the device as it moves through space can be tracked. This realizes real-time positioning of the AR device, ensuring high precision and high accuracy while reducing positioning delay for better real-time performance.
In some embodiments, during SLAM-based real-time positioning, positioning calibration against the high-precision three-dimensional scene map can be performed at a preset time interval (e.g., every 5 seconds) to further improve the accuracy and precision of the positioning.
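A minimal sketch of this joint positioning loop follows. The SlamTracker interface (with reset/update/correct methods) and the relocalize_against_map function are hypothetical stand-ins for a SLAM library and the map-based positioning described above; the 5-second interval is the example value from the text.

```python
# Hedged sketch of joint positioning: map-based relocalization supplies the
# initial (and periodic) absolute pose; SLAM tracks incremental motion in
# between. slam_tracker and relocalize_against_map are hypothetical.
import time

CALIBRATION_INTERVAL_S = 5.0  # example recalibration interval from the text

def track_pose(slam_tracker, relocalize_against_map, get_frame):
    pose = relocalize_against_map(get_frame())   # initial positioning pose
    slam_tracker.reset(pose)
    last_calibration = time.monotonic()
    while True:
        frame = get_frame()
        pose = slam_tracker.update(frame)        # real-time incremental pose
        if time.monotonic() - last_calibration >= CALIBRATION_INTERVAL_S:
            absolute = relocalize_against_map(frame)
            if absolute is not None:
                slam_tracker.correct(absolute)   # drift correction
            last_calibration = time.monotonic()
        yield pose
```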
After the real-time positioning pose of the AR device is determined in the above manner, whether the AR device is located in the target AR scene can be determined from the positioning pose information. When the AR devices are located in the same target AR scene, the server can push to each AR device the AR picture matched with the target AR scene, and each AR device displays the matched AR picture accordingly.
In the embodiments of the present disclosure, when an AR device determines that a user image of a target user corresponding to another target AR device exists in its AR picture, the corresponding AR special effect data can be determined based on the user image. The AR special effect data may be attribute data of the target user, for example the user name of user A, or may be an interaction score of the target user interacting in the target AR scene; for example, when the target AR scene is a game scene, the interaction score may be a game score.
When the AR special effect data corresponding to a target user has been determined, it can be displayed at the target position corresponding to the user image. The target position may be within a certain range of the user image; for example, it may be 50 pixels away from a contour pixel of the user image, so that the displayed AR special effect data points clearly at its user, improving the AR interaction experience.
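As a small illustration of the target position computation, assuming the user image region is available as a screen-space bounding box (an assumption; the disclosure only requires a position within a certain range of the user image), the 50-pixel offset below is the example value from the text:

```python
# Hedged sketch: anchor the AR special effect data slightly above the user
# image region so the displayed data clearly points at that user.
def target_position(user_bbox, offset_px=50):
    x, y, w, h = user_bbox             # user image region in screen pixels
    anchor_x = x + w // 2              # horizontally centred on the user
    anchor_y = max(0, y - offset_px)   # offset above the user's contour
    return anchor_x, anchor_y
```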
As shown in fig. 2, the interaction method provided by the embodiment of the present disclosure may generate AR special effect data corresponding to a target user according to the following steps:
s201: acquiring user identity information of a target user based on a user image;
s202: acquiring target interaction data corresponding to user identity information;
s203: and generating AR special effect data corresponding to the target user based on the user identity information and the target interaction data.
Here, when the AR device determines that the user image corresponds to a target user of another target AR device, it may initiate a user identification request to the server. The server performs identity recognition on the user image, determines the user identity information of the target user, and returns it to the AR device. The identity information may be a biometric feature of the user, an ID card number authorized by the user, information such as a game account authorized by the user, or other identity information capable of identifying the user; no specific limitation is made here.
Once the user identity information is determined, the AR device may also initiate a request to the server for access to the target interaction data and obtain the target interaction data corresponding to the user identity information. By fusing the user identity information and the target interaction data, the corresponding AR special effect data can be obtained.
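For concreteness, a hedged sketch of this S201-S203 flow is given below; the server URL, endpoint paths, and response field names are hypothetical placeholders, since the disclosure does not specify a protocol:

```python
# Hedged sketch of S201-S203: identify the target user from the user image,
# fetch that user's interaction data, and fuse both into AR special effect
# data. All endpoints and field names are hypothetical.
import requests

SERVER = "https://example-ar-server.invalid"  # placeholder server URL

def build_ar_special_effect(user_image_bytes):
    # S201: obtain user identity information based on the user image.
    ident = requests.post(f"{SERVER}/identify",
                          files={"image": user_image_bytes}).json()
    # S202: obtain target interaction data for that identity.
    data = requests.get(f"{SERVER}/interaction",
                        params={"user_id": ident["user_id"]}).json()
    # S203: fuse identity and interaction data into AR special effect data.
    return {
        "label": ident.get("account_name", "unknown"),  # shown as text
        "score": data.get("interaction_score", 0.0),    # e.g. a game score
    }
```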
It should be noted that different types of AR special effect data may be displayed in different ways; for example, user identity information such as the corresponding game account information may be displayed as text, while target interaction data may be displayed graphically, for example as a remaining health bar. No specific limitation is made here.
In the embodiment of the disclosure, the AR special effect data can be displayed by combining the face image area where the face of the target user is located.
Here, first, a face image region where the face of the target user is located may be determined from the user image by using a face recognition technique or by using a face recognition model trained in advance.
The face recognition model in the embodiments of the present disclosure may be trained in advance on a plurality of user image samples and the labeling results obtained after face labeling is performed on each sample, that is, on the correspondence between a user image sample and the face image area in that sample. Inputting a user image into the trained face recognition model then yields the face image area where the target user's face is located, improving recognition efficiency.
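As one possible concrete form of this step (the disclosure leaves the exact technique open), the sketch below locates the face image area with a stock OpenCV Haar cascade rather than a custom-trained model:

```python
# Hedged sketch: determine the face image area of the target user from the
# user image using a generic face detector; a trained face recognition
# model could be substituted without changing the interface.
import cv2

def face_region(user_image_bgr):
    gray = cv2.cvtColor(user_image_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Return the largest detected face as the target face image area.
    return max(faces, key=lambda f: f[2] * f[3])   # (x, y, w, h)
```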
The interaction method in the AR scene provided by the embodiment of the present disclosure may not only implement interaction with a real target user shown in the AR device through the AR device, but also implement interaction with a virtual object shown in an AR picture on the AR device.
The interaction here may refer to a target trigger operation acting on the virtual object to show the AR special effect of the virtual object in the AR screen.
As shown in fig. 3, in the embodiment of the present disclosure, the process of detecting the target trigger operation acting on the virtual object displayed on the AR device may specifically be implemented by the following steps:
s301: responding to a trigger operation acted on the screen of the AR equipment, and determining a screen coordinate position corresponding to the trigger operation;
s302: converting the screen coordinate position to a camera coordinate position in a virtual camera coordinate system based on the screen coordinate position and a first conversion relationship between the screen coordinate system and the virtual camera coordinate system;
s303: converting the camera coordinate position into a world coordinate system based on the camera coordinate position and a second conversion relation between the virtual camera coordinate system and the world coordinate system to obtain a world coordinate position;
s304: and determining that the target trigger operation of the virtual object shown in the AR picture acting on the AR equipment is detected under the condition that the world coordinate position falls into the position range corresponding to the virtual object.
Here, a trigger operation performed by the user on the screen of the AR device is responded to first, and the screen coordinate position corresponding to the trigger can be determined. Because a first conversion relationship exists between the screen coordinate system and the virtual camera coordinate system, and a second conversion relationship exists between the virtual camera coordinate system and the world coordinate system, the screen coordinate position can be converted into a camera coordinate position in the virtual camera coordinate system, and the camera coordinate position obtained by the conversion can then be converted into the world coordinate system to obtain the world coordinate position.
Since the AR device presents virtual objects in combination with the real world while displaying the AR picture, the world coordinate position indicates an actual physical location. Whether the current target trigger operation acts on the virtual object can therefore be determined by checking whether that physical location falls within the position range corresponding to the virtual object; that is, if the actual physical location falls within the position range corresponding to the virtual object, the current target trigger operation can be considered to act on the virtual object.
Based on the analysis result of the target trigger operation related to the virtual object, the AR special effect of the virtual object can be displayed.
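To make the two conversions and the hit test concrete, the sketch below unprojects a screen touch through the inverse projection matrix into the virtual camera coordinate system, transforms it into the world coordinate system, and tests the resulting ray against the virtual object's position range, modelled here as an axis-aligned bounding box. The matrix conventions and the AABB model are assumptions; a real engine's unproject and ray-cast APIs would normally be used instead.

```python
# Hedged sketch of S301-S304: screen position -> virtual camera coordinates
# (first conversion) -> world coordinates (second conversion) -> hit test
# against the virtual object's position range (assumed to be an AABB with
# float-array bounds).
import numpy as np

def touch_hits_object(touch_xy, screen_wh, proj, cam_to_world, obj_aabb):
    # Screen coordinate position -> normalized device coordinates.
    ndc_x = 2.0 * touch_xy[0] / screen_wh[0] - 1.0
    ndc_y = 1.0 - 2.0 * touch_xy[1] / screen_wh[1]
    # NDC -> virtual camera coordinates on the near and far planes.
    inv_proj = np.linalg.inv(proj)
    near = inv_proj @ np.array([ndc_x, ndc_y, -1.0, 1.0])
    far = inv_proj @ np.array([ndc_x, ndc_y, 1.0, 1.0])
    near, far = near / near[3], far / far[3]
    # Virtual camera coordinates -> world coordinates.
    p0 = (cam_to_world @ near)[:3]
    p1 = (cam_to_world @ far)[:3]
    # Ray/AABB slab test: does the touch ray pass through the position
    # range corresponding to the virtual object?
    direction = p1 - p0
    lo, hi = obj_aabb
    with np.errstate(divide="ignore", invalid="ignore"):
        t1 = (lo - p0) / direction
        t2 = (hi - p0) / direction
    t_near = np.nanmax(np.minimum(t1, t2))
    t_far = np.nanmin(np.maximum(t1, t2))
    return t_far >= max(t_near, 0.0)
```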
To facilitate understanding of the interaction method in the AR scene provided by the embodiments of the present disclosure, an example may be illustrated below with reference to the AR shooting game scene shown in fig. 4(a) and 4 (b).
In the AR shooting game scene shown in fig. 4(a), there are three virtual objects: an octopus, a sea urchin, and a barrel. When a shooting operation is performed on the barrel (e.g., a single shot at the barrel), a special effect of the sea urchin being attacked can be generated, as shown in fig. 4(b).
Meanwhile, in the AR shooting game scenes shown in fig. 4(a) and 4(b), user images of target users corresponding to other target AR devices appear in the presented AR pictures, and the AR special effect data of those target users can be displayed accordingly; for example, for user A, a game score of 8.5 is shown.
It will be understood by those skilled in the art that, in the above method, the order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, the embodiment of the present disclosure further provides an interaction apparatus in an AR scene corresponding to the interaction method in the AR scene, and since the principle of the apparatus in the embodiment of the present disclosure for solving the problem is similar to the interaction method in the AR scene described above in the embodiment of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 5, which is a schematic diagram of an interaction apparatus in an AR scene provided in an embodiment of the present disclosure, the apparatus includes: a picture display module 501, a special effect determination module 502, and a special effect display module 503; wherein:
the picture display module 501 is configured to display, in the AR device after it is determined that the AR device is located in the target AR scene, an AR picture matched with the target AR scene and a user image of a target user corresponding to at least one other target AR device;
a special effect determining module 502, configured to determine, based on the user image, AR special effect data corresponding to the target user;
a special effect displaying module 503, configured to display the determined AR special effect data at a target position corresponding to the user image of the target user in the AR picture of the AR device.
In one possible implementation, the special effect determination module 502 is configured to determine AR special effect data corresponding to a target user based on a user image according to the following steps:
acquiring user identity information of a target user based on a user image;
acquiring target interaction data corresponding to user identity information;
and generating AR special effect data corresponding to the target user based on the user identity information and the target interaction data.
In a possible implementation, the special effect display module 503 is further configured to:
determining a face image area where the face of a target user is located from a user image;
and displaying the determined AR special effect data at a target position corresponding to the face image area of the target user in the AR picture of the AR device.
In one possible embodiment, the AR special effects data comprises one or more of the following data:
attribute data of the target user;
and an interaction score of the target user interacting in the target AR scene.
In one possible implementation, when a virtual object is displayed in the AR picture of the AR device, the apparatus further includes:
a detection module 504, configured to show the AR special effect of the virtual object in the AR picture when a target trigger operation acting on the virtual object shown in the AR picture of the AR device is detected.
In one possible implementation, the detection module 504 is configured to detect the target trigger operation of the virtual object shown in the AR picture acting on the AR device according to the following steps:
in response to a trigger operation acting on the screen of the AR device, determining a screen coordinate position corresponding to the trigger operation;
converting the screen coordinate position to a camera coordinate position in a virtual camera coordinate system based on the screen coordinate position and a first conversion relationship between the screen coordinate system and the virtual camera coordinate system;
converting the camera coordinate position to the world coordinate system based on the camera coordinate position and a second conversion relationship between the virtual camera coordinate system and the world coordinate system, to obtain a world coordinate position;
and when the world coordinate position falls within the position range corresponding to the virtual object, determining that the target trigger operation of the virtual object shown in the AR picture acting on the AR device is detected.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
An embodiment of the present disclosure further provides an electronic device. As shown in fig. 6, which is a schematic structural diagram of the electronic device provided in the embodiment of the present disclosure, the electronic device includes: a processor 601, a memory 602, and a bus 603. The memory 602 stores machine-readable instructions executable by the processor 601 (for example, execution instructions corresponding to the picture display module 501, the special effect determination module 502, and the special effect display module 503 in the apparatus of fig. 5). When the electronic device runs, the processor 601 and the memory 602 communicate via the bus 603, and the machine-readable instructions, when executed by the processor 601, perform the following processing:
after it is determined that the AR device is located in the target AR scene, displaying, in the AR device, an AR picture matched with the target AR scene and a user image of a target user corresponding to at least one other target AR device;
determining AR special effect data corresponding to the target user based on the user image;
and displaying the determined AR special effect data at a target position corresponding to the user image of the target user in the AR picture of the AR device.
In one possible implementation, in the instructions executed by the processor 601, determining, based on the user image, the AR special effect data corresponding to the target user includes:
acquiring user identity information of a target user based on a user image;
acquiring target interaction data corresponding to user identity information;
and generating AR special effect data corresponding to the target user based on the user identity information and the target interaction data.
In a possible implementation manner, the instructions executed by the processor 601 further include:
determining a face image area where the face of a target user is located from a user image;
In the instructions executed by the processor 601, displaying the determined AR special effect data at a target position corresponding to the user image of the target user in the AR picture of the AR device includes:
displaying the determined AR special effect data at a target position corresponding to the face image area of the target user in the AR picture of the AR device.
In one possible embodiment, the AR special effects data comprises one or more of the following data:
attribute data of the target user;
and an interaction score of the target user interacting in the target AR scene.
In one possible implementation, when a virtual object is displayed in the AR picture of the AR device, the instructions executed by the processor 601 further include:
when a target trigger operation acting on the virtual object shown in the AR picture of the AR device is detected, showing the AR special effect of the virtual object in the AR picture.
In one possible implementation, in the instructions executed by the processor 601, the target trigger operation of the virtual object shown in the AR picture acting on the AR device is detected according to the following steps:
in response to a trigger operation acting on the screen of the AR device, determining a screen coordinate position corresponding to the trigger operation;
converting the screen coordinate position to a camera coordinate position in a virtual camera coordinate system based on the screen coordinate position and a first conversion relationship between the screen coordinate system and the virtual camera coordinate system;
converting the camera coordinate position to the world coordinate system based on the camera coordinate position and a second conversion relationship between the virtual camera coordinate system and the world coordinate system, to obtain a world coordinate position;
and when the world coordinate position falls within the position range corresponding to the virtual object, determining that the target trigger operation of the virtual object shown in the AR picture acting on the AR device is detected.
The embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the interaction method in the AR scenario in the foregoing method embodiments are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
An embodiment of the present disclosure further provides a computer program product, where the computer program product carries a program code, and an instruction included in the program code may be used to execute the step of the interaction method in the AR scenario described in the foregoing method embodiment, which may be referred to specifically in the foregoing method embodiment, and is not described herein again.
The computer program product may be implemented by hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are merely specific embodiments of the present disclosure, which are used for illustrating the technical solutions of the present disclosure and not for limiting the same, and the scope of the present disclosure is not limited thereto, and although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive of the technical solutions described in the foregoing embodiments or equivalent technical features thereof within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure, and should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. An interaction method in an AR scene, the method comprising:
after determining that an AR device is located in a target AR scene, displaying, in the AR device, an AR picture matched with the target AR scene and a user image of a target user corresponding to at least one other target AR device;
determining AR special effect data corresponding to the target user based on the user image;
and displaying the determined AR special effect data at a target position corresponding to the user image of the target user in the AR picture of the AR device.
2. The interaction method of claim 1, wherein said determining, based on the user image, AR special effect data corresponding to the target user comprises:
acquiring user identity information of the target user based on the user image;
acquiring target interaction data corresponding to the user identity information;
and generating AR special effect data corresponding to the target user based on the user identity information and the target interaction data.
3. The interaction method according to claim 1 or 2, characterized in that the method further comprises:
determining a face image area where the face of the target user is located from the user image;
displaying the determined AR special effect data at a target position corresponding to the user image of the target user in an AR picture of the AR device, including:
displaying the determined AR special effect data at a target position corresponding to the face image area of the target user in the AR picture of the AR device.
4. The interaction method according to any of claims 1 to 3, wherein the AR effect data comprises one or more of the following data:
attribute data of the target user;
and the interaction score of the target user interacting in the target AR scene.
5. The interaction method according to any one of claims 1 to 4, wherein, when a virtual object is displayed in the AR picture of the AR device, the method further comprises:
when a target trigger operation acting on the virtual object shown in the AR picture of the AR device is detected, showing the AR special effect of the virtual object in the AR picture.
6. The interaction method according to claim 5, wherein the target trigger operation of the virtual object shown in the AR picture acting on the AR device is detected according to the following steps:
in response to a trigger operation acting on the screen of the AR device, determining a screen coordinate position corresponding to the trigger operation;
converting the screen coordinate position to a camera coordinate position in a virtual camera coordinate system based on the screen coordinate position and a first conversion relationship between the screen coordinate system and the virtual camera coordinate system;
converting the camera coordinate position to a world coordinate system based on the camera coordinate position and a second conversion relationship between the virtual camera coordinate system and the world coordinate system, to obtain a world coordinate position;
and when the world coordinate position falls within the position range corresponding to the virtual object, determining that the target trigger operation of the virtual object shown in the AR picture acting on the AR device is detected.
7. An interactive apparatus in an AR scene, the apparatus comprising:
a picture display module, configured to display, in the AR device after it is determined that the AR device is located in a target AR scene, an AR picture matched with the target AR scene and a user image of a target user corresponding to at least one other target AR device;
a special effect determination module for determining AR special effect data corresponding to the target user based on the user image;
and the special effect display module is used for displaying the determined AR special effect data at a target position corresponding to the user image of the target user in the AR picture of the AR equipment.
8. The interaction device of claim 7, wherein the special effects presentation module is further configured to:
determining a face image area where the face of the target user is located from the user image;
and displaying the determined AR special effect data at a target position corresponding to the face image area of the target user in the AR picture of the AR device.
9. An electronic device, comprising: processor, memory and bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the interaction method in the AR scenario according to any one of claims 1 to 6.
10. A computer-readable storage medium, having stored thereon a computer program for performing, when being executed by a processor, the steps of the interaction method in the AR scenario according to any one of claims 1 to 6.
CN202110218841.XA 2021-02-26 2021-02-26 Interaction method and device in AR scene, electronic equipment and storage medium Pending CN112905014A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110218841.XA CN112905014A (en) 2021-02-26 2021-02-26 Interaction method and device in AR scene, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110218841.XA CN112905014A (en) 2021-02-26 2021-02-26 Interaction method and device in AR scene, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112905014A true CN112905014A (en) 2021-06-04

Family

ID=76107246

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110218841.XA Pending CN112905014A (en) 2021-02-26 2021-02-26 Interaction method and device in AR scene, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112905014A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113295180A (en) * 2021-06-30 2021-08-24 北京市商汤科技开发有限公司 Flight navigation method and device, computer equipment and storage medium
CN114900545A (en) * 2022-05-10 2022-08-12 中国电信股份有限公司 Augmented reality implementation method and system and cloud server
CN116243793A (en) * 2023-02-21 2023-06-09 航天正通汇智(北京)科技股份有限公司 Media interaction control method and device based on AR technology

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200027427A1 (en) * 2018-04-27 2020-01-23 Vulcan Inc. Scale determination service
CN112148189A (en) * 2020-09-23 2020-12-29 北京市商汤科技开发有限公司 Interaction method and device in AR scene, electronic equipment and storage medium
CN112348969A (en) * 2020-11-06 2021-02-09 北京市商汤科技开发有限公司 Display method and device in augmented reality scene, electronic equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200027427A1 (en) * 2018-04-27 2020-01-23 Vulcan Inc. Scale determination service
CN112148189A (en) * 2020-09-23 2020-12-29 北京市商汤科技开发有限公司 Interaction method and device in AR scene, electronic equipment and storage medium
CN112348969A (en) * 2020-11-06 2021-02-09 北京市商汤科技开发有限公司 Display method and device in augmented reality scene, electronic equipment and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113295180A (en) * 2021-06-30 2021-08-24 北京市商汤科技开发有限公司 Flight navigation method and device, computer equipment and storage medium
CN114900545A (en) * 2022-05-10 2022-08-12 中国电信股份有限公司 Augmented reality implementation method and system and cloud server
CN116243793A (en) * 2023-02-21 2023-06-09 航天正通汇智(北京)科技股份有限公司 Media interaction control method and device based on AR technology


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210604)