CN111640202B - AR scene special effect generation method and device - Google Patents

AR scene special effect generation method and device

Info

Publication number
CN111640202B
CN111640202B (application CN202010531790.1A)
Authority
CN
China
Prior art keywords
target
special effect
virtual
magic
tourist
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010531790.1A
Other languages
Chinese (zh)
Other versions
CN111640202A (en)
Inventor
潘思霁
揭志伟
李炳泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Shangtang Technology Development Co Ltd
Original Assignee
Zhejiang Shangtang Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Shangtang Technology Development Co Ltd filed Critical Zhejiang Shangtang Technology Development Co Ltd
Priority to CN202010531790.1A priority Critical patent/CN111640202B/en
Publication of CN111640202A publication Critical patent/CN111640202A/en
Application granted granted Critical
Publication of CN111640202B publication Critical patent/CN111640202B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The disclosure provides a method and a device for generating an AR scene special effect. The method includes: acquiring a real scene image of a target recreation place captured by a current tourist using an augmented reality (AR) device; after identifying that at least one other target tourist exists in the real scene image, determining the virtual character corresponding to each other target tourist and the magic special effect of that virtual character; and generating an AR scene special effect based on the virtual characters, their magic special effects and the position information of the corresponding other target tourists in the real scene, and displaying it through the AR device. According to the embodiments of the disclosure, by identifying the other target tourists in the acquired real scene image, determining the virtual character of each other target tourist and the corresponding magic special effect, and generating an AR scene special effect that combines the virtual characters, their magic special effects and the real scene image, the display scene is enriched.

Description

AR scene special effect generation method and device
Technical Field
The disclosure relates to the technical field of augmented reality, in particular to a method and a device for generating special effects of an AR scene.
Background
Augmented reality (AR) technology superimposes simulated physical information (visual information, sound, touch, etc.) onto the real world, so that a real environment and virtual objects are presented in the same screen or space in real time. In recent years, AR devices have been applied in ever more fields and play an important role in daily life, work and entertainment, so optimizing the effect of the augmented reality scene presented by an AR device has become increasingly important.
At present, some amusement parks deploy models or puppets of cartoon characters to enrich their exhibition content, but such content occupies extra space on the one hand and increases facility cost on the other. How to satisfy both the exhibition requirements and the users' viewing needs on the basis of AR technology, while reducing facility cost as much as possible and without wasting additional space, is a problem worth studying.
Disclosure of Invention
The embodiment of the disclosure at least provides a method and a device for generating special effects of an AR scene.
In a first aspect, an embodiment of the present disclosure provides a method for generating an AR scene special effect, where the method includes:
acquiring a real scene image of a target recreation place shot by a current tourist by using Augmented Reality (AR) equipment;
After identifying that at least one other target tourist exists in the real scene image, determining a virtual role corresponding to the other target tourist, and determining a magic special effect of the virtual role;
generating an AR scene special effect based on the virtual character, the magic special effect of the virtual character and the position information of other target tourists corresponding to the virtual character in a real scene, and displaying the AR scene special effect through AR equipment; in the AR scene special effect, the other target tourists are presented by using the images of the corresponding virtual roles.
In the above method, the other target tourists in the acquired real scene image are identified, the virtual character of each other target tourist and the corresponding magic special effect are determined, the other target tourists in the real scene image are replaced with their virtual characters, and an AR scene special effect comprising the virtual characters, their magic special effects and the real scene image is generated. The current tourist can then watch the virtual characters with their magic special effects through his or her own AR device, so the display scene is enriched without increasing facility cost or wasting extra space.
In one possible implementation manner, after identifying that at least one other target guest exists in the real scene image, determining a virtual character corresponding to the other target guest, and determining a magic special effect of the virtual character includes:
identifying a user image of each of the other target guests in the real scene image;
determining attribute characteristics of the other target tourists according to the user image; and determining the virtual roles corresponding to the other target tourists and the magic special effects matched with the role prototypes corresponding to the virtual roles based on the attribute characteristics.
Here, by extracting the attribute features corresponding to the user image of each other target guest in the real scene image (the attribute features may include face attribute features and accessory attribute features), a matching virtual character can be determined according to the other target guest's age, sex, emotion, height, wearing features, hand-held object features and the like, and a corresponding magic special effect is matched to the virtual character according to its character prototype, thereby enriching the display scene.
In one possible implementation manner, after identifying that at least one other target guest exists in the real scene image, determining a virtual character corresponding to the other target guest, and determining a magic special effect of the virtual character, including:
According to the historical behavior data of the current tourist, determining the interest characteristics of the current tourist;
and determining the virtual roles corresponding to the other target tourists and the magic special effects matched with the role prototypes corresponding to the virtual roles according to the interest characteristics.
Here, user historical behavior data of the current tourist is extracted (for example, the play items already played, the play duration for each item and the number of times each item was played; e.g. the played item is the Disney castle attraction, the play duration is 5 hours and the number of plays is 5), and the interest features of the current tourist (for example, a liking for Disney cartoon characters) are determined from these data. Virtual characters are then matched to the other tourists according to the determined interest features, and the corresponding magic special effects are matched according to the character prototypes of those virtual characters, so that the current tourist can watch a favorite virtual character and its magic special effect in the scene image through the AR device, meeting the personalized needs of the user.
In one possible implementation, after generating the special effects of the AR scene, the presenting by the AR device includes:
Displaying the virtual roles fused into the real scene through AR equipment;
and triggering the magic special effect for displaying the virtual character after detecting the target gesture triggering operation of the current tourist.
Here, the current tourist can add the corresponding magic special effect to the virtual character through the target gesture action during the visit, which makes the playing process more interesting.
In one possible implementation, detecting the target gesture trigger operation of the current guest includes:
according to a plurality of continuously acquired real scene images, recognizing gesture actions of the current tourist in the plurality of real scene images;
and after determining that the gesture action belongs to a target gesture type, determining that the target gesture triggering operation of the current tourist is detected.
In one possible implementation manner, after detecting the target gesture triggering operation of the current tourist, triggering the magic special effect for displaying the virtual character includes:
when a plurality of other target tourists exist, after the target gesture triggering operation of the current tourist is detected, determining a target virtual role aimed by the target gesture triggering operation according to operation position information corresponding to the target gesture triggering operation;
Triggering the magic special effect for displaying the target virtual character.
Here, when a plurality of other target tourists exist in the real scene image, a magic special effect is added to the corresponding virtual character according to the position at which the current tourist initiates the target gesture action, which enriches the display scene and increases the fun of playing.
In one possible implementation manner, after detecting the target gesture triggering operation of the current tourist, triggering the magic special effect for displaying the virtual character includes:
triggering the magic special effects of the virtual roles to be displayed periodically after the target gesture triggering operation of the current tourist is detected, until the ending triggering operation of the current tourist is detected; or,
triggering the magic special effect of the virtual character once after detecting the triggering operation of the target gesture of the current tourist;
the display time length of each time of the magic special effect is preset according to the characteristics of the magic special effect.
In a second aspect, an embodiment of the present disclosure provides an apparatus for generating an AR scene special effect, the apparatus including:
the acquisition module is used for acquiring a real scene image of a target recreation place shot by a current tourist by using the augmented reality AR equipment;
The virtual object determining module is used for determining virtual roles corresponding to other target tourists after identifying that at least one other target tourist exists in the real scene image, and determining the magic special effects of the virtual roles;
the AR scene special effect generation module is used for generating an AR scene special effect based on the virtual role, the magic special effect of the virtual role and the position information of other target tourists corresponding to the virtual role in a real scene, and displaying the AR scene special effect through AR equipment; in the AR scene special effect, the other target tourists are presented by using the images of the corresponding virtual roles.
In a possible implementation manner, the virtual object determining module is specifically configured to identify a user image of each of the other target tourists in the real scene image; determining attribute characteristics of the other target tourists according to the user image; and determining the virtual roles corresponding to the other target tourists and the magic special effects matched with the role prototypes corresponding to the virtual roles based on the attribute characteristics.
In a possible implementation manner, the virtual object determining module is specifically configured to determine, according to historical behavior data of the current guest, an interest feature of the current guest; and determining the virtual roles corresponding to the other target tourists and the magic special effects matched with the role prototypes corresponding to the virtual roles according to the interest characteristics.
In one possible embodiment, the apparatus further comprises: the AR scene special effect display module is used for displaying the virtual roles fused into the real scene; and triggering the magic special effect for displaying the virtual character after detecting the target gesture triggering operation of the current tourist.
In one possible embodiment, the apparatus further comprises: the gesture triggering operation detection module is used for identifying gesture actions of the current tourist in the plurality of real scene images according to the plurality of continuously acquired real scene images; and after determining that the gesture action belongs to a target gesture type, determining that the target gesture triggering operation of the current tourist is detected.
In a possible implementation manner, the AR scene special effect display module is specifically configured to, when a plurality of other target tourists exist, determine, after detecting a target gesture triggering operation of the current tourist, a target virtual role targeted by the target gesture triggering operation according to operation position information corresponding to the target gesture triggering operation; triggering the magic special effect for displaying the target virtual character.
In a possible implementation manner, the AR scene special effect display module is further specifically configured to trigger, after detecting a target gesture triggering operation of the current guest, a magic special effect that periodically displays the virtual character until an ending triggering operation of the current guest is detected; or triggering the magic special effect showing the virtual character once after detecting the target gesture triggering operation of the current tourist; the display time length of each time of the magic special effect is preset according to the characteristics of the magic special effect.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication over the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the method of AR scene special effect generation as described in the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method for AR scene special effect generation according to the first aspect.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the embodiments are briefly described below, which are incorporated in and constitute a part of the specification, these drawings showing embodiments consistent with the present disclosure and together with the description serve to illustrate the technical solutions of the present disclosure. It is to be understood that the following drawings illustrate only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope, for the person of ordinary skill in the art may admit to other equally relevant drawings without inventive effort.
FIG. 1 illustrates a flow chart of a method for AR scene effect generation provided by embodiments of the present disclosure;
FIG. 2 is a schematic diagram of an interface diagram of an AR scene effect display provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an interface diagram of an AR scene effect display provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of an apparatus for generating special effects of an AR scene according to an embodiment of the present disclosure;
fig. 5 shows a schematic diagram of an electronic device provided by an embodiment of the disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. The components of the embodiments of the present disclosure, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be made by those skilled in the art based on the embodiments of this disclosure without making any inventive effort, are intended to be within the scope of this disclosure.
The term "and/or" is used herein to describe only one relationship, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist together, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Research shows that, at present, when a tourist plays in a recreation place and likes a character there, the tourist can only take a photo or video of that character on site and cannot reproduce the character's dynamic effects. For example, when tourists playing at the Disney castle see their favorite Snow Queen, Princess Elsa, they can only photograph a model of Princess Elsa or a costumed doll in the castle, and cannot capture the classic image of Princess Elsa casting magic as in the film.
On this basis, the present disclosure provides a method and an apparatus for generating an AR scene special effect. By identifying the other target guests in the acquired real scene image, determining the virtual character of each other target guest and the magic special effect corresponding to that character, replacing the other target guests in the real scene image with their virtual characters, and generating an AR scene special effect comprising the virtual characters, their magic special effects and the real scene image, the current guest can use his or her own AR device to watch virtual characters endowed with magic special effects, which enriches the display scene; the current guest can also use the AR device to take photos or videos of a virtual character with dynamic magic special effects, restoring the character's dynamic effects and making the visit more interesting.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
To facilitate understanding of this embodiment, the method for generating an AR scene special effect disclosed in the embodiments of the present disclosure is first described in detail. The execution subject of the method may be a computer device with certain computing capability, specifically a terminal device, a server or another processing device, for example a server connected to an AR device. The AR device may include devices with a display function and data processing capability, such as AR glasses, a tablet computer, a smart phone or a smart wearable device, and may be connected to the server through an application program. The terminal device may be a user equipment (UE), a mobile device, a user terminal, a personal digital assistant (PDA), and the like. In some possible implementations, the method for generating an AR scene special effect may be implemented by a processor invoking computer-readable instructions stored in a memory.
Example 1
The following describes the method for generating an AR scene special effect provided by the present disclosure, taking the case where the execution subject is a server as an example. Referring to fig. 1, which is a flowchart of a method for generating an AR scene special effect according to an embodiment of the present disclosure, the method includes steps S101 to S103, specifically:
s101, acquiring a real scene image of a target recreation place shot by a current tourist by using Augmented Reality (AR) equipment.
The augmented reality (AR) device can be AR smart glasses, an AR-capable mobile phone, or any electronic device with an augmented reality function; the target recreation place is the recreation place where the user is currently playing.
Here, the real scene image may be a scene photograph taken by the user at the entrance and exit of the amusement park, or may be a scene photograph of any amusement item in the amusement park taken by the user during the course of playing.
In a specific implementation, before entering the recreation place a tourist can pick up an AR device (such as AR smart glasses) at the entrance. During the visit, the user can use the AR device to capture real scene images of the recreation place, and the virtual characters matching the other tourists in the real scene image are obtained through analysis. The AR device may perform the matching of virtual characters for the other guests in the real scene image by itself, or may send the captured real scene image to the server, which matches the virtual characters for the other guests.
Alternatively, before entering the recreation place, the user can scan a code at the entrance with his or her own terminal device to download an applet that provides the AR function. During the visit, the user captures real scene images of the recreation place with the terminal device, the captured images are sent to the server through the installed applet, and the server matches virtual characters for the other tourists in the real scene image.
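As an illustration of the applet-to-server flow described above, the following minimal Python sketch uploads one captured frame to a character-matching service; the endpoint URL, field names and response shape are assumptions, since the disclosure does not prescribe any particular transport protocol.

import requests  # assumed HTTP client; the disclosure does not specify a transport

MATCH_URL = "https://example.com/ar/match_characters"  # hypothetical server endpoint

def upload_scene_frame(jpeg_bytes: bytes, guest_id: str) -> list:
    """Send one real-scene frame to the server and return, for each other
    target guest found in it, the matched virtual character and magic effect."""
    resp = requests.post(
        MATCH_URL,
        files={"frame": ("frame.jpg", jpeg_bytes, "image/jpeg")},
        data={"guest_id": guest_id},
        timeout=5,
    )
    resp.raise_for_status()
    # hypothetical response, e.g.
    # [{"bbox": [x, y, w, h], "character": "wizard", "effect": "fireball"}]
    return resp.json()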
S102, after identifying that at least one other target tourist exists in the real scene image, determining the virtual character corresponding to each other target tourist, and determining the magic special effect of that virtual character.
One or more other target guests may be included in the real scene image.
Here, the virtual character may be Mickey Mouse, Donald Duck, Snow White, Princess Jasmine, Princess Elsa, a wizard, a knight, the green giant (Hulk), a Threshold, or the like; the magic special effect is a magic special effect matched with the character prototype corresponding to the virtual character, and may include a fireball effect, a lighting effect, an ice-and-snow effect, a lightning effect and the like; for example, the magic special effect corresponding to Princess Elsa is an ice-and-snow effect.
In a specific implementation, after a user shoots a real scene image of a target amusement place by using an AR device, whether other tourists exist in the real scene image can be determined by detecting a face area image of the real scene image; when a face region image is detected to be present in a real scene image, it is determined that at least one other guest is present in the real scene image.
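A minimal sketch of the face-region gate described above, using an off-the-shelf OpenCV Haar-cascade detector purely for illustration (the disclosure does not name a specific detector):

import cv2  # OpenCV used here only as an example detector

_face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def other_target_guests(frame_bgr) -> list:
    """Return face bounding boxes (x, y, w, h) found in the real-scene frame;
    a non-empty list means at least one other target guest is present."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = _face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return list(faces)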
In an alternative embodiment, after identifying that at least one other target guest exists in the real scene image, the following method may be used to match a corresponding virtual character for each other target guest, and determine a corresponding magic special effect for the virtual character, which is specifically described as follows: identifying a user image of each of the other target guests in the real scene image; determining attribute characteristics of the other target tourists according to the user image; and determining the virtual roles corresponding to the other target tourists and the magic special effects matched with the role prototypes corresponding to the virtual roles based on the attribute characteristics.
The user image comprises a user face region image and a user body region image; the attribute features include face attribute features and accessory attribute features. Here, the face attribute features may include age, gender, emotion, height, weight, etc.; the accessory attribute features may include wearing features and hand-held object features, where the wearing features may include: a dress, glasses, a hat, a Mickey Mouse headband, a crown, a necklace, earrings, etc., and the hand-held object features may include: holding a Spider-Man card, holding a magic wand, and the like.
Specifically, face region detection is performed on the real scene image. When a face region image is detected in the real scene image, the user image corresponding to each detected face region is extracted and feature extraction is performed on it to obtain the face attribute features and accessory attribute features of that user image; the virtual character corresponding to the user image is then determined according to those face attribute features and accessory attribute features, and a corresponding magic special effect is matched to the virtual character according to the character prototype of that virtual character.
For example, when two face image regions exist in the real scene image, it is determined that two other target tourists (tourist a and tourist b) exist in the real scene image, and the user images of tourist a and tourist b are extracted respectively. The attribute features extracted for tourist a are: age: 20; gender: female; emotion: happy; wearing features: an off-the-shoulder long dress, a necklace and a crown. The attribute features extracted for tourist b are: age: 10; gender: male; emotion: happy; height: 150 cm; wearing features: glasses and a windbreaker; hand-held object feature: holding a magic wand. According to the face attribute features and accessory attribute features of tourist a and tourist b, the virtual character matched with tourist a is determined to be the baby princess, and the virtual character matched with tourist b is determined to be a wizard; a magic special effect of dancing and spinning is matched for the baby princess, and a magic special effect of a fireball released when casting magic is matched for the wizard.
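The matching step in this example could be sketched as a simple rule table; the attribute fields, character names and effect names below are illustrative assumptions rather than the actual matching model.

from dataclasses import dataclass, field

@dataclass
class GuestAttributes:
    age: int
    gender: str                                   # "female" / "male"
    emotion: str
    wearing: set = field(default_factory=set)     # e.g. {"crown", "long dress"}
    held_items: set = field(default_factory=set)  # e.g. {"magic wand"}

# hypothetical prototype -> matched magic special effect
EFFECT_BY_PROTOTYPE = {
    "princess": "dancing-and-spinning effect",
    "wizard": "fireball effect",
    "cartoon": "glow effect",
}

def match_character(attrs: GuestAttributes) -> tuple:
    """Rule-based sketch of mapping attribute features to a virtual character
    and the magic effect of its character prototype."""
    if "magic wand" in attrs.held_items:
        character, prototype = "wizard", "wizard"
    elif "crown" in attrs.wearing and attrs.gender == "female":
        character, prototype = "princess", "princess"
    else:
        character, prototype = "Mickey Mouse", "cartoon"
    return character, EFFECT_BY_PROTOTYPE[prototype]

Under these rules, tourist a's crown and dress map to the princess with the dancing effect, while tourist b's magic wand maps to the wizard with the fireball effect.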
In another alternative embodiment, after identifying that at least one other target guest exists in the real scene image, the following method may be further used to match a corresponding virtual character for each other target guest, and determine a corresponding magic special effect for the virtual character, which is specifically described as follows: according to the historical behavior data of the current tourist, determining the interest characteristics of the current tourist; and determining the virtual roles corresponding to the other target tourists and the magic special effects matched with the role prototypes corresponding to the virtual roles according to the interest characteristics.
Wherein, the historical behavior data can comprise user identity information and user play history data; here, the user play history data may include play items that the user has historically played, a play duration corresponding to each play item that has historically played, and a play number corresponding to each play item that has historically played.
Wherein the interest feature is used to indicate the content of interest to the user, may include: like a Disney cartoon character, like a magic man, like a small animal, etc.
Here, the database stores in advance the user image of the history user, the user identity information of the history user, the face attribute characteristics of the history user, and the history behavior data of the history user.
In a specific implementation, face region detection is performed on the real scene image. When a face region image is detected in the real scene image, it is determined that at least one other target tourist exists. The AR device used by the current tourist is then switched to its front camera, which captures a user image of the current tourist; feature extraction is performed on that image to determine the user identity information of the current tourist. The database is queried according to the user identity information to obtain the historical behavior data under that identity, the interest features of the user are determined from the historical behavior data, virtual characters are matched for the other target tourists in the real scene image according to the interest features, and the magic special effects corresponding to those virtual characters are matched according to their character prototypes.
For example, when one face image region is detected in the real scene image, it is determined that one other target tourist (tourist c) exists. The AR device used by the current tourist is switched to its front camera to capture a user image, feature extraction is performed on the user image, and the user identity information of tourist c is determined. According to the user identity information, tourist c is determined to be an existing (historical) user; the database is queried, and the historical behavior data corresponding to that identity shows that the played item is the Superman League ride, with a historical play duration of 3 hours and 5 historical plays. The virtual character matched with tourist c is therefore determined to be Superman, and a take-off magic special effect is matched for Superman.
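A minimal sketch of this interest-feature path, assuming hypothetical field names for the stored play-history records and a hand-written theme table (the real system would use whatever schema the database actually has):

from collections import Counter

THEME_OF_ITEM = {                       # assumed item -> theme mapping
    "Disney castle": "disney",
    "Superman League ride": "superhero",
}

CHARACTER_BY_THEME = {                  # assumed theme -> (character, effect)
    "disney": ("Princess Elsa", "ice-and-snow effect"),
    "superhero": ("Superman", "take-off effect"),
    "generic": ("Mickey Mouse", "glow effect"),
}

def match_by_interest(play_history: list) -> tuple:
    """play_history: records like {"item": "Superman League ride",
    "hours": 3, "visits": 5}. Weight each theme by time and visit count,
    then pick the character/effect of the strongest interest feature."""
    scores = Counter()
    for record in play_history:
        theme = THEME_OF_ITEM.get(record["item"], "generic")
        scores[theme] += record["hours"] + 2 * record["visits"]
    top_theme = scores.most_common(1)[0][0] if scores else "generic"
    return CHARACTER_BY_THEME[top_theme]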
S103, generating an AR scene special effect based on the virtual role, the magic special effect of the virtual role and the position information of other target tourists corresponding to the virtual role in a real scene, and displaying the AR scene special effect through AR equipment; in the AR scene special effect, the other target tourists are presented by using the images of the corresponding virtual roles.
The AR scene special effect is an AR scene image comprising virtual characters, magic special effects and a real scene image; the location information is used to indicate the location and scale information of other target guests in the real scene image.
Specifically, the user image in the real scene image can be replaced in real time with the virtual character, scaled according to the stature of the corresponding real user, and an AR scene special effect comprising the virtual character, the magic special effect corresponding to the virtual character and the real scene image is generated. Here, the virtual character follows the motion posture of the user in real time; that is, whatever motion the user performs, that user's virtual character performs the same motion.
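A minimal compositing sketch for placing the character at the guest's position and scale; it assumes the character sprite is a BGRA image with an alpha channel and that the bounding box lies fully inside the frame.

import cv2
import numpy as np

def composite_character(frame_bgr: np.ndarray,
                        sprite_bgra: np.ndarray,
                        bbox: tuple) -> np.ndarray:
    """Paste a virtual-character sprite over the guest's bounding box,
    scaled to the guest's apparent stature; bbox is (x, y, w, h)."""
    x, y, w, h = bbox
    sprite = cv2.resize(sprite_bgra, (w, h), interpolation=cv2.INTER_AREA)
    alpha = sprite[:, :, 3:4].astype(np.float32) / 255.0
    roi = frame_bgr[y:y + h, x:x + w].astype(np.float32)
    blended = alpha * sprite[:, :, :3].astype(np.float32) + (1.0 - alpha) * roi
    frame_bgr[y:y + h, x:x + w] = blended.astype(np.uint8)
    return frame_bgr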
In a specific implementation, when the special effects of the AR scene are displayed through the AR device, the specific display process may be as follows: displaying the virtual roles fused into the real scene through AR equipment; and triggering the magic special effect for displaying the virtual character after detecting the target gesture triggering operation of the current tourist.
That is, as an alternative way, the AR device first displays a real scene image in the special effects of the AR scene and a virtual character located at a position where another target guest is located, and displays a special magic effect corresponding to the virtual character after detecting the triggering operation of the special magic effect of the current guest.
In addition, in the implementation, the target gesture triggering operation of the current tourist can be detected by the following way, which is specifically described as follows: according to a plurality of continuously acquired real scene images, recognizing gesture actions of the current tourist in the plurality of real scene images; and after determining that the gesture action belongs to a target gesture type, determining that the target gesture triggering operation of the current tourist is detected.
The gesture actions may include a handshake, swinging an arm up, down, left or right, a throwing motion and the like; the target gesture type is the type that triggers a magic special effect; the target gesture triggering operation is an operation that triggers adding a magic special effect to the virtual character, and may be swinging an arm left and right, snapping the fingers, and the like.
Here, the gesture motion features corresponding to gesture actions of the magic-special-effect-triggering type are stored in the database in advance.
Specifically, after the AR device displays the virtual character fused into the real scene, multiple real scene images are continuously acquired again. When it is recognized that the continuously acquired real scene images all contain a hand image of the current tourist, the motion features of the hand image across the multiple real scene images are extracted, and the type of the current tourist's gesture action is determined from the extracted gesture motion features. When the gesture action is recognized as belonging to the type that triggers a magic special effect, it indicates that the current tourist has initiated the operation of adding a magic special effect to the virtual character.
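A sketch of this trigger detection over consecutive frames; `classify_hand_motion` stands in for the unspecified gesture-recognition model and the gesture labels are assumptions.

from collections import deque

TARGET_GESTURE_TYPES = {"swing_arm_up_down", "snap_fingers"}  # assumed labels

class GestureTriggerDetector:
    """Buffer consecutive frames, classify the hand motion across them, and
    report a trigger when the motion belongs to a target gesture type."""

    def __init__(self, classify_hand_motion, window: int = 15):
        self.classify = classify_hand_motion   # placeholder recognition model
        self.frames = deque(maxlen=window)

    def push_frame(self, frame) -> bool:
        self.frames.append(frame)
        if len(self.frames) < self.frames.maxlen:
            return False                       # not enough consecutive frames yet
        gesture = self.classify(list(self.frames))  # e.g. "swing_arm_up_down"
        return gesture in TARGET_GESTURE_TYPES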
In a specific implementation, corresponding magic special effects can be added separately to the virtual characters that replace the multiple other target tourists in the real scene image, as follows: after detecting that the current tourist has initiated the operation of adding a magic special effect to a virtual character displayed by the AR device, the magic special effect is added to the virtual character corresponding to the operation position at which the current tourist initiated the operation, and the AR scene image with the added magic special effect is displayed.
For example, suppose the current real scene image is the Disney castle and the target gesture that triggers the magic special effect is swinging an arm up and down. Using the above method, the virtual characters matched with tourist d and tourist e in the real scene image are Superman and Princess Elsa respectively, and the AR device displays the AR scene image containing Superman and Princess Elsa; a specific display interface is shown in fig. 2 (taking the current tourist's AR device being a mobile phone as an example). When the current tourist is detected performing the up-and-down arm swing at the position of Princess Elsa, the ice-and-snow magic special effect is added to Princess Elsa, and the Disney castle scene image containing Superman and Princess Elsa with the ice-and-snow magic special effect is displayed through the AR device; the specific display interface is shown in fig. 3 (again taking a mobile phone as the AR device). The current tourist can thus observe, through the AR device, the process of adding the special effect to the virtual character, which enriches the display scene and increases the interaction between tourists and the amusement items during the visit.
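Choosing which virtual character the gesture is aimed at can be sketched as a nearest-box lookup from the operation position; the record layout below is an assumption.

def target_character(gesture_xy: tuple, characters: list) -> dict:
    """characters: records like {"name": "Princess Elsa", "bbox": (x, y, w, h),
    "effect": "ice-and-snow effect"}. Return the character whose bounding-box
    centre is closest to where the trigger gesture was performed."""
    gx, gy = gesture_xy

    def sq_distance(c):
        x, y, w, h = c["bbox"]
        cx, cy = x + w / 2.0, y + h / 2.0
        return (cx - gx) ** 2 + (cy - gy) ** 2

    return min(characters, key=sq_distance)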
In a specific implementation, the method for displaying the magic special effect of the virtual character in the AR equipment is as follows: triggering the magic special effects of the virtual roles to be displayed periodically after the target gesture triggering operation of the current tourist is detected, until the ending triggering operation of the current tourist is detected; or triggering the magic special effect showing the virtual character once after detecting the target gesture triggering operation of the current tourist.
The display duration of each time of the magic special effect is preset according to the characteristics of the magic special effect.
Specifically, when the AR device displays the magic special effect corresponding to the virtual character, the magic special effect can be displayed periodically, according to the preset display duration, for as long as the current tourist keeps performing the target gesture that triggers the magic special effect, and the display ends when the current tourist finishes the target gesture; alternatively, the magic special effect corresponding to the virtual character is displayed only once after the target gesture that triggers it is detected.
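The two display modes reduce to a small control loop; `render_effect` and `trigger_active` are placeholder callables, not part of the disclosure.

import time

def show_effect(render_effect, trigger_active=None, period_s: float = 2.0):
    """Periodic mode: replay the effect every `period_s` seconds (the preset
    display duration) while the trigger gesture is still active.
    One-shot mode: pass trigger_active=None to play the effect exactly once."""
    if trigger_active is None:
        render_effect()
        return
    while trigger_active():
        render_effect()
        time.sleep(period_s)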
In the embodiments of the present disclosure, by identifying the other target tourists in the acquired real scene image, determining the virtual character of each other target tourist and the magic special effect corresponding to that character, replacing the other target tourists in the real scene image with their virtual characters, and generating an AR scene special effect comprising the virtual characters, their magic special effects and the real scene image, the current tourist can use the AR device to watch virtual characters endowed with magic special effects, and can also use the AR device to take photos or videos of a virtual character with dynamic magic special effects. The display scene is thereby enriched while reducing facility cost as much as possible and without wasting extra space.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
Based on the same inventive concept, the embodiments of the present disclosure further provide an apparatus for generating an AR scene special effect corresponding to the above method for generating an AR scene special effect. Since the principle by which the apparatus solves the problem is similar to that of the method for generating an AR scene special effect in the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated description is omitted.
Example two
Referring to fig. 4, a schematic diagram of an apparatus for generating special effects of an AR scene according to an embodiment of the present disclosure is shown, where the apparatus includes: an acquisition module 401, a virtual object determination module 402, and an AR scene special effect generation module 403; the acquiring module 401 is configured to acquire a real scene image of a target amusement place shot by a current tourist using the augmented reality AR device.
The virtual object determining module 402 is configured to determine, after identifying that at least one other target guest exists in the real scene image, a virtual character corresponding to the other target guest, and determine a magic special effect of the virtual character.
The AR scene special effect generating module 403 is configured to generate an AR scene special effect based on the virtual character, the magic special effect of the virtual character, and the position information of other target tourists corresponding to the virtual character in the real scene, and display the AR scene special effect through AR equipment; in the AR scene special effect, the other target tourists are presented by using the images of the corresponding virtual roles.
In a possible implementation manner, the virtual object determining module 402 is specifically configured to identify a user image of each of the other target guests in the real scene image; determining attribute characteristics of the other target tourists according to the user image; and determining the virtual roles corresponding to the other target tourists and the magic special effects matched with the role prototypes corresponding to the virtual roles based on the attribute characteristics.
In a possible implementation manner, the virtual object determining module 402 is specifically configured to determine, according to historical behavior data of the current guest, an interest feature of the current guest; and determining the virtual roles corresponding to the other target tourists and the magic special effects matched with the role prototypes corresponding to the virtual roles according to the interest characteristics.
In a possible embodiment, the apparatus further comprises: the AR scene special effect display module is used for displaying the virtual roles fused into the real scene; and triggering the magic special effect for displaying the virtual character after detecting the target gesture triggering operation of the current tourist.
In a possible embodiment, the apparatus further comprises: the gesture triggering operation detection module is used for identifying gesture actions of the current tourist in the plurality of real scene images according to the plurality of continuously acquired real scene images; and after determining that the gesture action belongs to a target gesture type, determining that the target gesture triggering operation of the current tourist is detected.
In a possible implementation manner, the AR scene special effect display module is specifically configured to, when a plurality of other target tourists exist, determine, after detecting a target gesture triggering operation of the current tourist, a target virtual role targeted by the target gesture triggering operation according to operation position information corresponding to the target gesture triggering operation; triggering the magic special effect for displaying the target virtual character.
In a possible implementation manner, the AR scene special effect display module is further specifically configured to trigger, after detecting a target gesture triggering operation of the current guest, a magic special effect that periodically displays the virtual character until an ending triggering operation of the current guest is detected; or triggering the magic special effect showing the virtual character once after detecting the target gesture triggering operation of the current tourist; the display time length of each time of the magic special effect is preset according to the characteristics of the magic special effect.
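A skeleton of how these modules might be laid out in code; the method bodies would reuse the per-step sketches above, and the AR-device API shown is an assumption.

class ARSceneEffectApparatus:
    """Acquisition, virtual-object determination and AR-scene-effect
    generation modules, mirroring the apparatus described above."""

    def acquire(self, ar_device):
        # acquisition module: fetch the current real-scene frame
        return ar_device.capture_frame()          # assumed AR-device API

    def determine_virtual_objects(self, frame, current_guest):
        # virtual-object determination module: characters + magic effects
        raise NotImplementedError("see the matching sketches above")

    def generate_ar_effect(self, frame, characters):
        # AR-scene-effect generation module: composite and return the AR image
        raise NotImplementedError("see the compositing sketch above")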
The process flow of each module in the apparatus and the interaction flow between the modules may be described with reference to the related descriptions in the above method embodiments, which are not described in detail herein.
Corresponding to the method for generating an AR scene special effect in fig. 1, an embodiment of the present disclosure further provides an electronic device 500. As shown in fig. 5, which is a schematic structural diagram of the electronic device 500 provided by an embodiment of the present disclosure, the electronic device includes a processor 501, a memory 502 and a bus 503. The memory 502 is configured to store execution instructions and includes an internal memory 5021 and an external memory 5022; the internal memory 5021 temporarily stores operation data of the processor 501 and data exchanged with the external memory 5022 such as a hard disk, and the processor 501 exchanges data with the external memory 5022 through the internal memory 5021. When the electronic device 500 is running, the processor 501 and the memory 502 communicate with each other through the bus 503, so that the processor 501 executes the following instructions:
acquiring a real scene image of a target recreation place shot by a current tourist by using Augmented Reality (AR) equipment; after identifying that at least one other target tourist exists in the real scene image, determining a virtual role corresponding to the other target tourist, and determining a magic special effect of the virtual role; generating an AR scene special effect based on the virtual character, the magic special effect of the virtual character and the position information of other target tourists corresponding to the virtual character in a real scene, and displaying the AR scene special effect through AR equipment; in the AR scene special effect, the other target tourists are presented by using the images of the corresponding virtual roles.
The specific processing flow of the processor 501 may refer to the description of the above method embodiments, and will not be repeated here.
The disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method for generating an AR scene special effect described in the above method embodiments. Wherein the storage medium may be a volatile or nonvolatile computer readable storage medium.
The computer program product of the method for generating the special effects of the AR scene provided in the embodiments of the present disclosure includes a computer readable storage medium storing program codes, where the instructions included in the program codes may be used to execute the steps of the method for generating the special effects of the AR scene described in the embodiments of the method, and the details of the embodiments of the method may be referred to, which are not described herein.
The disclosed embodiments also provide a computer program which, when executed by a processor, implements any of the methods of the previous embodiments. The computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in essence or a part contributing to the prior art or a part of the technical solution, or in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present disclosure. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Finally, it should be noted that: the foregoing examples are merely specific embodiments of the present disclosure, and are not intended to limit the scope of the disclosure, but the present disclosure is not limited thereto, and those skilled in the art will appreciate that while the foregoing examples are described in detail, it is not limited to the disclosure: any person skilled in the art, within the technical scope of the disclosure of the present disclosure, may modify or easily conceive changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of the technical features thereof; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the disclosure, and are intended to be included within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (8)

1. A method for generating special effects of an AR scene, the method comprising:
acquiring a real scene image of a target recreation place shot by a current tourist by using Augmented Reality (AR) equipment;
after identifying that at least one other target tourist exists in the real scene image, determining the interest features of the current tourist according to the historical behavior data of the current tourist, and determining, according to the interest features, the virtual character corresponding to each other target tourist and the magic special effect matched with the character prototype corresponding to the virtual character; wherein the historical behavior data may comprise user identity information and user play history data;
generating an AR scene special effect based on the virtual character, the magic special effect of the virtual character and the position information of other target tourists corresponding to the virtual character in a real scene, and displaying the AR scene special effect through AR equipment; in the AR scene special effect, the other target tourists are presented by using the images of the corresponding virtual roles, the magic special effect of the virtual roles is triggered and displayed by the target gestures of the current tourists, and the target gestures are determined based on the acquired continuous multiple real scene images.
2. The method of claim 1, wherein the presenting by the AR device after generating the AR scene special effect comprises:
displaying the virtual roles fused into the real scene through AR equipment;
and triggering the magic special effect for displaying the virtual character after detecting the target gesture triggering operation of the current tourist.
3. The method of claim 2, wherein detecting a target gesture trigger operation of the current guest comprises:
according to a plurality of continuously acquired real scene images, recognizing gesture actions of the current tourist in the plurality of real scene images;
and after determining that the gesture action belongs to a target gesture type, determining that the target gesture triggering operation of the current tourist is detected.
4. The method according to claim 2 or 3, wherein, after detecting the target gesture triggering operation of the current tourist, triggering display of the magic special effect of the virtual character comprises:
when a plurality of other target tourists exist, after the target gesture triggering operation of the current tourist is detected, determining, according to operation position information corresponding to the target gesture triggering operation, a target virtual character at which the target gesture triggering operation is aimed; and
triggering display of the magic special effect of the target virtual character.
5. The method according to any one of claims 2 to 4, wherein, after detecting the target gesture triggering operation of the current tourist, triggering display of the magic special effect of the virtual character comprises:
after the target gesture triggering operation of the current tourist is detected, triggering periodic display of the magic special effect of the virtual character until an ending triggering operation of the current tourist is detected; or
after the target gesture triggering operation of the current tourist is detected, triggering a single display of the magic special effect of the virtual character;
wherein a display duration of each display of the magic special effect is preset according to characteristics of the magic special effect.
6. An apparatus for generating a special effect of an AR scene, the apparatus comprising:
an acquisition module, configured to acquire a real scene image of a target recreation place captured by a current tourist using an augmented reality (AR) device;
a virtual object determining module, configured to: after identifying that at least one other target tourist exists in the real scene image, determine an interest characteristic of the current tourist according to historical behavior data of the current tourist, and determine, according to the interest characteristic, a virtual character corresponding to each other target tourist and a magic special effect matching a character prototype corresponding to the virtual character; wherein the historical behavior data may comprise user identity information and user play history data; and
an AR scene special effect generation module, configured to generate an AR scene special effect based on the virtual character, the magic special effect of the virtual character, and position information of the other target tourists corresponding to the virtual character in the real scene, and to display the AR scene special effect through the AR device; wherein, in the AR scene special effect, the other target tourists are presented as the images of their corresponding virtual characters, the magic special effect of the virtual character is triggered for display by a target gesture of the current tourist, and the target gesture is determined based on a plurality of consecutively acquired real scene images.
7. An electronic device, comprising a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate over the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the AR scene special effect generation method according to any one of claims 1 to 5.
8. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, performs the steps of the AR scene special effect generation method according to any one of claims 1 to 5.
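
To make the claimed method easier to follow for implementers, the sketches below illustrate claims 1 to 5 in Python. They are editorial illustrations only, not part of the patent disclosure: every name (detect_other_tourists, INTEREST_TO_CHARACTER, renderer.compose, and so on) is a hypothetical placeholder, and the mapping from interests to characters is invented for the example. The first sketch follows the pipeline of claim 1: acquire a real-scene frame, identify other target tourists, derive the current tourist's interest characteristic from historical behavior data, map it to a virtual character and a matching magic special effect, and compose the AR scene special effect at each detected tourist's position.

```python
from dataclasses import dataclass

# Hypothetical mapping from an interest characteristic to a character
# prototype and its matching magic special effect (illustrative only).
INTEREST_TO_CHARACTER = {
    "wizardry": ("wizard", "lightning_spell"),
    "adventure": ("knight", "flame_sword"),
}

@dataclass
class TouristDetection:
    tourist_id: str
    position: tuple  # (x, y, z) position of the tourist in the real scene

def infer_interest(history: dict) -> str:
    """Derive an interest characteristic from identity info and play history (stub)."""
    rides = history.get("play_history", [])
    return "wizardry" if "magic_castle" in rides else "adventure"

def generate_ar_scene_effect(frame, history: dict, detector, renderer):
    # 1. Identify other target tourists in the real-scene image.
    detections = detector.detect_other_tourists(frame)  # -> list[TouristDetection]
    if not detections:
        return None
    # 2. Determine the current tourist's interest characteristic from historical data.
    interest = infer_interest(history)
    character, magic_effect = INTEREST_TO_CHARACTER[interest]
    # 3. Compose the AR scene special effect: each detected tourist is shown
    #    as the virtual character at that tourist's position in the real scene.
    overlays = [
        {"character": character, "magic_effect": magic_effect,
         "position": d.position, "tourist_id": d.tourist_id}
        for d in detections
    ]
    return renderer.compose(frame, overlays)
```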
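
Claims 2, 3 and 5 trigger the magic special effect from a target gesture recognized over several consecutive real-scene images, and show it either once or periodically until an ending trigger. The sketch below assumes a gesture classifier and an effect player with the interfaces named in the comments; neither interface comes from the patent.

```python
import time

TARGET_GESTURE = "wave"      # assumed target gesture type
EFFECT_DURATION_S = 2.0      # preset per-display duration of the magic effect

def detect_target_gesture(frames, classifier) -> bool:
    """Recognize the current tourist's gesture from consecutive frames and
    check whether it belongs to the target gesture type."""
    gesture = classifier.recognize(frames)  # e.g. a temporal gesture model
    return gesture == TARGET_GESTURE

def trigger_magic_effect(frames, classifier, player, periodic=False):
    if not detect_target_gesture(frames, classifier):
        return
    if periodic:
        # Show the effect repeatedly until an ending trigger is detected.
        while not player.ending_trigger_detected():
            player.show_effect(duration=EFFECT_DURATION_S)
            time.sleep(EFFECT_DURATION_S)
    else:
        # Show the effect a single time.
        player.show_effect(duration=EFFECT_DURATION_S)
```

Whether the effect repeats, and how long each display lasts, are treated here as configuration, in line with claim 5's requirement that the display duration be preset according to the characteristics of the magic special effect.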
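
For claim 4, when several other target tourists are present, the target virtual character can be resolved from the operation position of the gesture triggering operation. One simple strategy, assumed here rather than prescribed by the patent, is nearest-neighbour matching over the rendered overlays, with positions taken as 2D screen coordinates (e.g. the projected positions from the first sketch).

```python
import math

def select_target_character(operation_pos, overlays):
    """Pick the overlay whose rendered character is closest to the gesture's
    operation position (both given as 2D screen coordinates)."""
    def dist(overlay):
        x, y = overlay["position"]
        return math.hypot(x - operation_pos[0], y - operation_pos[1])
    return min(overlays, key=dist) if overlays else None

# Example: resolve the aimed-at character for a gesture at screen point (640, 360):
# target = select_target_character((640, 360), overlays)
```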
CN202010531790.1A 2020-06-11 2020-06-11 AR scene special effect generation method and device Active CN111640202B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010531790.1A CN111640202B (en) 2020-06-11 2020-06-11 AR scene special effect generation method and device

Publications (2)

Publication Number Publication Date
CN111640202A CN111640202A (en) 2020-09-08
CN111640202B true CN111640202B (en) 2024-01-09

Family

ID=72330106

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010531790.1A Active CN111640202B (en) 2020-06-11 2020-06-11 AR scene special effect generation method and device

Country Status (1)

Country Link
CN (1) CN111640202B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112637665B (en) * 2020-12-23 2022-11-04 北京市商汤科技开发有限公司 Display method and device in augmented reality scene, electronic equipment and storage medium
CN113359985A (en) * 2021-06-03 2021-09-07 北京市商汤科技开发有限公司 Data display method and device, computer equipment and storage medium
CN113359986B (en) * 2021-06-03 2023-06-20 北京市商汤科技开发有限公司 Augmented reality data display method and device, electronic equipment and storage medium
CN113282179A (en) * 2021-06-18 2021-08-20 北京市商汤科技开发有限公司 Interaction method, interaction device, computer equipment and storage medium
CN113946210B (en) * 2021-09-16 2024-01-23 武汉灏存科技有限公司 Action interaction display system and method
CN114155605B (en) * 2021-12-03 2023-09-15 北京字跳网络技术有限公司 Control method, device and computer storage medium
CN114327059A (en) * 2021-12-24 2022-04-12 北京百度网讯科技有限公司 Gesture processing method, device, equipment and storage medium
CN115374268B (en) * 2022-10-25 2023-03-24 广州市明道文化产业发展有限公司 Multi-role decentralized collaborative interaction method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006350416A (en) * 2005-06-13 2006-12-28 Tecmo Ltd Information retrieval system using avatar
CN109876450A (en) * 2018-12-14 2019-06-14 深圳壹账通智能科技有限公司 Implementation method, server, computer equipment and storage medium based on AR game
CN110716645A (en) * 2019-10-15 2020-01-21 北京市商汤科技开发有限公司 Augmented reality data presentation method and device, electronic equipment and storage medium
CN110879946A (en) * 2018-09-05 2020-03-13 武汉斗鱼网络科技有限公司 Method, storage medium, device and system for combining gesture with AR special effect
CN111243101A (en) * 2019-12-31 2020-06-05 浙江省邮电工程建设有限公司 Method, system and device for increasing AR environment immersion degree of user based on artificial intelligence

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190151758A1 (en) * 2017-11-22 2019-05-23 International Business Machines Corporation Unique virtual entity creation based on real world data sources


Similar Documents

Publication Publication Date Title
CN111640202B (en) AR scene special effect generation method and device
CN106803057B (en) Image information processing method and device
CN111640200B (en) AR scene special effect generation method and device
CN111638793B (en) Display method and device of aircraft, electronic equipment and storage medium
CN109603151A (en) Skin display methods, device and the equipment of virtual role
CN111652987B (en) AR group photo image generation method and device
CN110021061A (en) Collocation model building method, dress ornament recommended method, device, medium and terminal
CN111627117B (en) Image display special effect adjusting method and device, electronic equipment and storage medium
CN111643900B (en) Display screen control method and device, electronic equipment and storage medium
CN108090968B (en) Method and device for realizing augmented reality AR and computer readable storage medium
CN111694431A (en) Method and device for generating character image
US11673054B2 (en) Controlling AR games on fashion items
CN113487709A (en) Special effect display method and device, computer equipment and storage medium
CN114155605B (en) Control method, device and computer storage medium
CN108525306B (en) Game implementation method and device, storage medium and electronic equipment
CN111639979A (en) Entertainment item recommendation method and device
WO2018135246A1 (en) Information processing system and information processing device
WO2023055825A1 (en) 3d upper garment tracking
CN111639613A (en) Augmented reality AR special effect generation method and device and electronic equipment
CN111638798A (en) AR group photo method, AR group photo device, computer equipment and storage medium
CN111651616B (en) Multimedia resource generation method, device, equipment and medium
CN109200586A (en) Game implementation method and device based on augmented reality
CN111640199B (en) AR special effect data generation method and device
CN111665942A (en) AR special effect triggering display method and device, electronic equipment and storage medium
CN116993432A (en) Virtual clothes information display method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant