WO2020098462A1 - AR virtual character drawing method and apparatus, mobile terminal and storage medium - Google Patents

AR virtual character drawing method and apparatus, mobile terminal and storage medium

Info

Publication number
WO2020098462A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
sound effect
virtual character
audio
virtual
Prior art date
Application number
PCT/CN2019/112729
Other languages
French (fr)
Chinese (zh)
Inventor
朱克智
王健
严锋贵
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Publication of WO2020098462A1 publication Critical patent/WO2020098462A1/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 — Manipulating 3D models or images for computer graphics
    • G06T 19/006 — Mixed reality
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 — Scenes; Scene-specific elements
    • G06V 20/20 — Scenes; Scene-specific elements in augmented reality scenes
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04S — STEREOPHONIC SYSTEMS
    • H04S 7/00 — Indicating arrangements; Control arrangements, e.g. balance control
    • H04S 7/30 — Control circuits for electronic adaptation of the sound field
    • H04S 7/305 — Electronic adaptation of stereophonic audio signals to reverberation of the listening space

Definitions

  • the present application relates to the field of audio technology, and in particular to an AR virtual character drawing method, device, mobile terminal, and storage medium.
  • Augmented reality (AR) is a technology that "seamlessly" integrates real-world information with virtual-world information. Physical information that is difficult to experience within a certain time and space of the real world (visual information, sound, taste, touch, etc.) is simulated and superimposed, and the virtual information is applied to the real world and perceived by the human senses, so as to achieve a sensory experience beyond reality.
  • the real environment and virtual objects are superimposed on the same screen or space in real time.
  • the virtual characters in the AR scene are synthesized according to special effects.
  • the image and position of the virtual character are determined by a fixed algorithm, so the interactive effect of the virtual character in the AR scene is poor.
  • Embodiments of the present application provide an AR virtual character drawing method, device, mobile terminal, and storage medium, which can improve the interactive effect of virtual characters in an AR scene.
  • an embodiment of the present application provides an AR virtual character drawing method, including:
  • the virtual character is drawn at the position of the virtual character in the AR scene.
  • an embodiment of the present application provides an AR virtual character drawing device, including:
  • Capture unit used to capture the real three-dimensional scene picture through the camera
  • a construction unit configured to construct an augmented reality AR scene according to the real three-dimensional scene picture
  • a first obtaining unit configured to obtain at least one sound effect generated in the AR scene
  • a recognition unit configured to recognize whether a target sound effect exists in the at least one sound effect, and the target sound effect is generated by audio generated by an undrawn virtual character in the AR scene;
  • a second obtaining unit configured to obtain the position of the camera in the AR scene when the recognition unit recognizes that the target sound effect exists in the at least one sound effect
  • a determining unit configured to determine a sound effect generation algorithm, and to determine the position of the virtual character in the AR scene according to the audio generated by the virtual character, the target sound effect, the sound effect generation algorithm, and the position of the camera in the AR scene;
  • the drawing unit is configured to draw the virtual character at the position of the virtual character in the AR scene.
  • an embodiment of the present application provides a mobile terminal, including a processor and a memory, where the memory is used to store one or more programs, and the one or more programs are configured to be executed by the processor.
  • the program includes instructions for performing the steps in the first aspect of the embodiments of the present application.
  • an embodiment of the present application provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program for electronic data exchange, and the computer program causes the computer to execute part or all of the steps described in the first aspect of the embodiments of the present application.
  • an embodiment of the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause the computer to execute some or all of the steps described in the first aspect of the embodiments of the present application.
  • the computer program product may be a software installation package.
  • the mobile terminal captures a real three-dimensional scene picture through a camera and constructs an augmented reality AR scene based on the real three-dimensional scene picture; obtains at least one sound effect generated in the AR scene and identifies whether a target sound effect exists in the at least one sound effect, the target sound effect being generated by the audio of an undrawn virtual character in the AR scene; if it exists, obtains the position of the camera in the AR scene and determines a sound effect generation algorithm; determines the position of the virtual character in the AR scene according to the audio generated by the virtual character, the target sound effect, the sound effect generation algorithm, and the position of the camera in the AR scene; and draws the virtual character at the position of the virtual character in the AR scene.
  • in this way, the exact position in the AR scene of the undrawn virtual character corresponding to the target sound effect can be derived in reverse using the sound effect generation algorithm, and the virtual character can be drawn at that accurate position according to its sound effect, improving the interactive effect of virtual characters in AR scenes.
  • FIG. 1 is a schematic flowchart of a method for drawing an AR virtual character disclosed in an embodiment of the present application
  • FIG. 2 is a schematic diagram of simulated audio signal transmission disclosed in an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a principle for determining the position of a virtual character in an AR scene disclosed in an embodiment of the present application
  • FIG. 4 is a schematic flowchart of another method for drawing an AR virtual character disclosed in an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an AR virtual character drawing device disclosed in an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a mobile terminal disclosed in an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of yet another mobile terminal disclosed in an embodiment of the present application.
  • the mobile terminals involved in the embodiments of the present application may include various handheld devices with wireless communication functions, in-vehicle devices, wearable devices, computing devices, or other processing devices connected to a wireless modem, and various forms of user equipment (User Equipment, UE), Mobile Station (MS), terminal equipment, etc.
  • the above-mentioned devices are collectively referred to as mobile terminals.
  • FIG. 1 is a schematic flowchart of an AR virtual character drawing method disclosed in an embodiment of the present application. As shown in FIG. 1, the AR virtual character drawing method includes the following steps.
  • the mobile terminal captures a real three-dimensional scene picture through a camera, and constructs an augmented reality AR scene according to the real three-dimensional scene picture.
  • the mobile terminal may include a camera, a display, and a speaker.
  • the camera is used to capture real three-dimensional scene pictures in real time.
  • the real three-dimensional scene picture can be of an enclosed indoor space or an open outdoor space.
  • the display is used to display the AR picture corresponding to the AR scene.
  • the speaker is used to output sound effects in the AR scene.
  • Mobile terminals may include AR-enabled devices such as mobile phones and tablet computers, and may also include dedicated AR devices such as AR glasses and AR helmets.
  • AR scenes are built on the basis of real three-dimensional scenes.
  • AR scenes can add multiple display controls on the basis of real three-dimensional scenes.
  • the display controls can be used to call different virtual characters, adjust the display effect and position of a virtual character, and turn the three-dimensional (3D) sound effect of the virtual character on or off.
  • the mobile terminal obtains at least one sound effect generated in the AR scene, and identifies whether a target sound effect exists in the at least one sound effect, and the target sound effect is generated by audio generated by an undrawn virtual character in the AR scene.
  • the user can call an AR virtual character to be displayed in the AR scene as needed. Since drawing the virtual character takes a certain amount of time, the audio generated by the virtual character may be output before the image of the virtual character appears. If the 3D sound effect of the virtual character is turned on, the speaker of the mobile terminal outputs the sound effect produced by the 3D sound effect generation algorithm.
  • the user can also randomly call an AR virtual character to display in the AR scene, and the sound effect at this time is the sound effect of the randomly selected virtual character.
  • the AR scene includes not only the sound effects of virtual characters, but also the sound effects of background music, sound effects of virtual animals, and sound effects generated by virtual objects.
  • the virtual character may be a virtual character in a game, a virtual character in a film and television work (such as anime), or a virtual character in a literary work.
  • the audio produced by them has different frequency characteristics.
  • the virtual character can be used as an audio player, and the camera can be used as an audio receiver.
  • the at least one sound effect the mobile terminal obtains in the AR scene is the sound effect received from the viewpoint of the camera. After receiving a sound effect, the mobile terminal can analyze its frequency characteristics to identify which type of virtual character's audio generated it.
  • which sound effect needs to be identified as the target sound effect can be set in advance; the target sound effect is identified in order to locate the virtual character.
  • the number of virtual characters may be one or more, and the number of virtual characters may be determined according to the AR scene.
  • the audio player is playing voice / audio.
  • the audio signal it receives also contains the reflected sound signal after various complex physical reflections.
  • the reflected acoustic signal arrives with a delay relative to the direct acoustic signal, and its energy is attenuated by the physical reflections.
  • when the AR scene differs, the delay and energy attenuation of the reflected sound differ greatly, producing a different listening experience at the audio receiving end. Therefore, different reverberation algorithms can be used for sound effect processing in different AR scenes.
  • FIG. 2 is a schematic diagram of simulated audio signal transmission disclosed in an embodiment of the present application.
  • the audio signal generated by the audio playing end in FIG. 2 can reach the audio receiving end through direct and reflective methods, thereby forming a reverberation effect at the audio receiving end.
  • Two kinds of reflection paths are illustrated in FIG. 2, the first reflection path reaches the audio receiving end after two reflections, and the second reflection path reaches the audio receiving end after one reflection.
  • Figure 2 is only an example of an audio signal transmission.
  • the audio signal can reach the audio receiving end via multiple reflection paths with one, two, or more reflections. Different AR scenes have different numbers of reflections and different reflection paths. Whether the audio signal is direct or reflected, it is attenuated to some degree.
  • the attenuation coefficient is determined according to the distance of the path, the number of reflections, the transmission medium, and the material of the reflection point.
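As an illustrative sketch only (the patent does not specify a formula), such an attenuation coefficient could combine geometric spreading with path length, a fixed loss per reflection, and absorption by the propagation medium; all constants below are hypothetical:

```python
import math

def attenuation_coefficient(path_length_m, num_reflections,
                            reflection_loss=0.7, medium_absorption=0.01):
    """Toy attenuation model: 1/r geometric spreading, a fixed loss
    factor per bounce, and exponential absorption by the medium."""
    spreading = 1.0 / max(path_length_m, 1.0)        # geometric spreading
    reflections = reflection_loss ** num_reflections  # loss at each bounce
    absorption = math.exp(-medium_absorption * path_length_m)
    return spreading * reflections * absorption

# A longer path with more reflections attenuates more strongly.
direct = attenuation_coefficient(2.0, 0)   # short direct path
bounced = attenuation_coefficient(6.0, 2)  # longer path, two bounces
```

Any real implementation would replace these factors with material-specific coefficients, as the patent's discussion of reflection-surface materials suggests.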
  • the mobile terminal identifying whether there is a target sound effect in at least one sound effect includes the following steps:
  • the mobile terminal obtains the audio features generated by the virtual character
  • the mobile terminal recognizes whether there is a sound effect matching the above audio feature in at least one sound effect
  • the mobile terminal determines that the sound effect matching the audio feature among the at least one sound effect is the target sound effect.
  • the audio characteristics include amplitude-frequency characteristics, that is, frequency characteristics and amplitude characteristics of audio.
  • the audio generated by virtual characters generally has fixed frequency characteristics and amplitude characteristics.
  • the frequency and amplitude vary within a certain amplitude, and the frequency and amplitude also have correlations, that is, the amplitude characteristics corresponding to different frequency points are not necessarily the same.
  • the mobile terminal recognizes whether there is a sound effect matching the above audio feature in the at least one sound effect, specifically: the mobile terminal obtains the audio feature of each of the at least one sound effect and calculates the similarity between the audio feature of each sound effect and the audio feature generated by the virtual character; when a sound effect whose similarity is greater than a preset similarity threshold exists in the at least one sound effect, that sound effect is determined to be the target sound effect.
  • the embodiment of the present application can identify whether the target sound effect exists in the at least one sound effect according to the similarity of the audio features. Since the recognition of the audio feature is relatively accurate, it can accurately identify whether the target sound effect exists in the at least one sound effect.
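The matching step described above can be sketched as follows; the feature vectors (per-frequency-band amplitudes), the cosine similarity measure, and the threshold value are illustrative assumptions, not taken from the patent:

```python
def cosine_similarity(a, b):
    """Similarity of two amplitude-frequency feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def find_target_effect(effects, character_feature, threshold=0.9):
    """Return indices of sound effects whose feature vector matches the
    virtual character's audio feature above the preset threshold."""
    return [i for i, f in enumerate(effects)
            if cosine_similarity(f, character_feature) > threshold]

# Hypothetical feature vectors: amplitudes in four frequency bands.
character = [0.9, 0.1, 0.4, 0.0]
effects = [[0.1, 0.8, 0.1, 0.5],      # e.g. background music: low similarity
           [0.88, 0.12, 0.38, 0.02]]  # close match: the target sound effect
```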
  • the mobile terminal obtains the position of the camera in the AR scene, and determines the sound effect generation algorithm.
  • the mobile terminal may determine the position of the camera in the AR scene according to the real three-dimensional scene image captured by the camera. Specifically, the mobile terminal can rotate the camera so that the camera can shoot a complete three-dimensional scene.
  • a complete three-dimensional scene refers to a three-dimensional scene shot as a 360° or 720° panorama.
  • the mobile terminal determines the position of the camera in the AR scene according to the three-dimensional scene captured by the panorama.
  • the sound effect generation algorithm can be determined according to the scene in the real three-dimensional scene picture. For example, the sound effect generation algorithm corresponding to the indoor scene is different from the sound effect generation algorithm corresponding to the outdoor scene.
  • the mobile terminal determines the sound effect generation algorithm, specifically including the following steps:
  • the mobile terminal obtains the scene data corresponding to the AR scene, and obtains the type of the virtual character
  • the mobile terminal determines the sound effect generation algorithm based on the scene data and the type of avatar.
  • the sound effect generation algorithm is related to the scene data corresponding to the AR scene and the type of the virtual character.
  • the scene data can include the geometric dimensions of the real 3D scene on which the AR scene is built (for example, the length, width, height, and volume of the building) and the materials of the real 3D scene (for example, the materials of the floor, walls, and ceiling in the building), etc.
  • the types of virtual characters may include virtual cartoon characters, virtual game characters, and the like.
  • the mobile terminal determines the sound effect generation algorithm based on the scene data and the type of virtual character, specifically:
  • the mobile terminal determines the sound effect algorithm model corresponding to the type of the virtual character according to the correspondence between the type and the sound effect algorithm model;
  • the mobile terminal determines the algorithm parameters of the sound effect algorithm model based on the scene data
  • the sound effect generation algorithm is determined based on the sound effect algorithm model corresponding to the type of the virtual character and the algorithm parameters of the sound effect algorithm model.
  • the sound effect algorithm model corresponding to the virtual cartoon character is different from the sound effect algorithm model corresponding to the virtual game character.
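A minimal sketch of this selection logic, with a hypothetical type-to-model table and a Sabine-style reverberation-time estimate standing in for the unspecified algorithm parameters (the model names, material coefficients, and formula are illustrative assumptions, not from the patent):

```python
# Hypothetical correspondence between character type and sound effect model.
MODEL_BY_TYPE = {
    "cartoon": "bright_reverb",
    "game": "cinematic_reverb",
}

# Hypothetical absorption coefficients per scene material.
ABSORPTION_BY_MATERIAL = {"concrete": 0.02, "carpet": 0.30, "glass": 0.05}

def select_sound_algorithm(character_type, scene):
    """Pick the model from the character type, then derive a parameter
    (reverberation time) from the scene's geometry and material."""
    model = MODEL_BY_TYPE[character_type]
    volume = scene["length"] * scene["width"] * scene["height"]
    surface = 2 * (scene["length"] * scene["width"]
                   + scene["length"] * scene["height"]
                   + scene["width"] * scene["height"])
    absorption = ABSORPTION_BY_MATERIAL[scene["material"]]
    rt60 = 0.161 * volume / (surface * absorption)  # Sabine's estimate
    return {"model": model, "rt60_seconds": round(rt60, 2)}

algo = select_sound_algorithm("game", {"length": 8.0, "width": 5.0,
                                       "height": 3.0, "material": "concrete"})
```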
  • the mobile terminal determines the position of the virtual character in the AR scene according to the audio generated by the virtual character, the target sound effect, the sound effect generation algorithm, and the position of the camera in the AR scene.
  • the target sound effect is determined by the sound effect generation algorithm from the audio generated by the virtual character, the position of the camera in the AR scene, and the position of the virtual character in the AR scene. Therefore, once the mobile terminal knows the audio generated by the virtual character, the target sound effect, and the position of the camera in the AR scene, the position of the virtual character in the AR scene can be derived in reverse.
  • the audio generated by the virtual character can be preset by the AR developer, the sound effect generation algorithm can be determined according to the scene data corresponding to the AR scene and the type of virtual character, the target sound effect can be obtained directly, and the position of the camera in the AR scene can be determined from the panoramically captured three-dimensional scene.
  • if the target sound effect does not exist in the at least one sound effect, step 102 may be continued.
  • FIG. 3 is a schematic diagram of a principle for determining the position of a virtual character in an AR scene disclosed in an embodiment of the present application.
  • the reverberation sound effect P = S1 * R1 + S2 * R2 + S3 * R3, where:
  • S1 is the attenuation coefficient of the first reflection path
  • S2 is the attenuation coefficient of the second reflection path
  • S3 is the attenuation coefficient of the direct path
  • R1 is the first initial audio signal transmitted along the first reflection path
  • R2 is the second initial audio signal transmitted along the second reflection path, and R3 is the third initial audio signal transmitted along the direct path.
  • the first reflection path passes through the first reflection surface, and S1 is related to the material of the first reflection surface, the default propagation medium in the AR scene, and the path length of the first reflection path
  • the second reflection path passes through the second reflection surface, and S2 is related to the material of the second reflection surface, the default propagation medium in the AR scene, and the path length of the second reflection path
  • S3 is related to the default propagation medium in the AR scene and the length of the direct path.
  • R1, R2 and R3 are related to the spatial distribution of the sound field of the audio signal emitted by the virtual character in the real three-dimensional space.
  • when the material of the first reflection surface and the default propagation medium in the AR scene are determined, the larger the path length of the first reflection path, the smaller S1; when the material of the second reflection surface and the default propagation medium in the AR scene are determined, the larger the path length of the second reflection path, the smaller S2; when the default propagation medium in the AR scene is determined, the larger the direct path length, the smaller S3.
  • when the AR scene is determined, the spatial distribution of the sound field of the audio signal emitted by the virtual character in the real three-dimensional space is also determined, as are the materials of the first and second reflection surfaces and the default propagation medium in the AR scene. The sizes of R1, R2, and R3 can then be determined, leaving three variables: the path length of the first reflection path, the path length of the second reflection path, and the length of the direct path.
  • the target sound effect generated in the AR scene can be obtained three times in quick succession at the position of the camera, yielding three equations whose unknowns are S1, S2, and S3. In the three equations, R1, R2, R3, and P are all determined and differ between measurements (because the intensity and frequency distribution of the initial audio emitted by the virtual character change over time), so S1, S2, and S3 can be solved from this ternary system of linear equations. From S1, S2, and S3, the path length of the first reflection path, the path length of the second reflection path, and the length of the direct path can be calculated, and the position of the virtual character relative to the camera is determined from these three lengths. Because the three sets of parameters are acquired in quick succession, the position of the virtual character relative to the camera is almost unchanged, and S1, S2, and S3 remain almost constant.
  • the reverberation sound effect P = S1 * R1 + S2 * R2 + S3 * R3 is only one example; the reverberation algorithm can also be implemented in other ways, which will not be repeated here.
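The ternary system of linear equations described above can be solved as in the following sketch; the numeric values of R and P are fabricated for illustration, and Cramer's rule stands in for whatever solver an implementation would actually use:

```python
def solve_3x3(A, b):
    """Solve the ternary linear system A @ s = b by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    solution = []
    for i in range(3):
        M = [row[:] for row in A]
        for r in range(3):
            M[r][i] = b[r]          # replace column i with b
        solution.append(det(M) / d)
    return solution

# Three snapshots of the initial audio signals (R1, R2, R3); each row is
# one measurement, each differing because the audio changes over time.
R = [[1.0, 0.5, 2.0],
     [0.8, 1.2, 1.5],
     [0.3, 0.9, 2.5]]
true_S = [0.2, 0.5, 0.9]  # hidden attenuation coefficients S1, S2, S3
# Measured reverberation P at the camera: P = S1*R1 + S2*R2 + S3*R3.
P = [sum(r * s for r, s in zip(row, true_S)) for row in R]

S1, S2, S3 = solve_3x3(R, P)  # recover the attenuation coefficients
```

From the recovered coefficients, an implementation would then invert its attenuation model to obtain the three path lengths and hence the character's position relative to the camera.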
  • the mobile terminal draws the virtual character at the position of the virtual character in the AR scene.
  • the mobile terminal may draw the image of the virtual character at the position of the virtual character in the AR scene.
  • the display of the mobile terminal can display virtual characters in the AR scene.
  • the mobile terminal can draw the virtual character according to the pre-set character model.
  • the virtual character may have an animation effect.
  • the exact position in the AR scene of the undrawn virtual character corresponding to the target sound effect can be derived in reverse using the sound effect generation algorithm, and the virtual character can be drawn at that accurate position according to its sound effect, improving the interactive effect of virtual characters in AR scenes.
  • FIG. 4 is a schematic flowchart of another method for drawing an AR virtual character disclosed in an embodiment of the present application.
  • FIG. 4 is further optimized based on FIG. 1.
  • the AR virtual character drawing method includes the following steps.
  • the mobile terminal captures a real three-dimensional scene picture through a camera, and constructs an augmented reality AR scene according to the real three-dimensional scene picture.
  • the mobile terminal acquires at least one sound effect generated in the AR scene, and identifies whether a target sound effect exists in the at least one sound effect, and the target sound effect is generated by audio generated by an undrawn virtual character in the AR scene.
  • the mobile terminal obtains the position of the camera in the AR scene, and determines the sound effect generation algorithm.
  • the mobile terminal determines the position of the virtual character in the AR scene according to the audio generated by the virtual character, the target sound effect, the sound effect generation algorithm, and the position of the camera in the AR scene.
  • the mobile terminal draws the virtual character at the position of the virtual character in the AR scene.
  • for the specific implementation of steps 401 to 405 in the embodiments of the present application, reference may be made to steps 101 to 105 shown in FIG. 1, and details are not described herein again.
  • the mobile terminal adjusts the sound effect corresponding to the virtual character according to the change in the position of the virtual character in the AR scene.
  • the mobile terminal adjusts the sound effect corresponding to the virtual character according to the change in the AR scene.
  • the real three-dimensional scene picture captured by the camera may change, and the corresponding AR scene may also change.
  • the AR scene will change.
  • the position of the virtual character in the AR scene may also change.
  • when the user clicks the display control in the AR scene to adjust the position of the avatar, the position of the avatar in the AR scene will change.
  • the position of the virtual character in the AR scene may change.
  • the mobile terminal needs to adjust the sound effect corresponding to the virtual character according to the position change of the virtual character in the AR scene, and the interaction effect between the user and the virtual character in the AR scene can be increased through the change of the sound effect.
  • the user can send a voice interaction instruction, and the virtual character can move in the AR scene according to the voice interaction instruction, so that different interactive sound effects appear, increasing the interaction effect between the user and the virtual character in the AR scene.
  • the mobile terminal needs to readjust the sound effect corresponding to the virtual character according to the change of the AR scene and the position of the virtual character in the new AR scene, and the interaction effect between the user and the virtual character in the AR scene can be increased through the change of the sound effect.
  • step 406 the mobile terminal adjusts the sound effect corresponding to the virtual character according to the position change of the virtual character in the AR scene, specifically including the following steps:
  • if the position of the virtual character in the AR scene changes from a first position to a second position, the mobile terminal re-determines the sound effect corresponding to the virtual character according to the audio generated by the virtual character, the sound effect generation algorithm, the position of the camera in the AR scene, and the second position.
  • when the position changes, the mobile terminal can recalculate the sound effect corresponding to the avatar from the audio generated by the virtual character, the sound effect generation algorithm, the position of the camera in the AR scene, and the second position.
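A minimal sketch of such a re-computation, assuming a simple distance-based delay-and-gain model rather than the full reverberation algorithm (the function and its parameters are hypothetical):

```python
SPEED_OF_SOUND = 343.0  # m/s in air

def positional_audio_params(camera_pos, character_pos):
    """Re-derive simple playback parameters (arrival delay and gain)
    from the camera-to-character distance after a position change."""
    dx, dy, dz = (c - p for c, p in zip(camera_pos, character_pos))
    distance = max((dx * dx + dy * dy + dz * dz) ** 0.5, 0.1)
    return {"delay_s": distance / SPEED_OF_SOUND,  # later arrival
            "gain": 1.0 / distance}                # quieter when farther

near = positional_audio_params((0, 0, 0), (1.0, 0, 0))
far = positional_audio_params((0, 0, 0), (4.0, 3.0, 0))  # moved 5 m away
```

When the character moves from the first position to the second, recomputing these parameters gives the audible change that signals the movement to the user.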
  • the embodiment of the present application can adjust the sound effect corresponding to the virtual character in time when the position of the virtual character in the AR scene changes, and can increase the interaction effect between the user and the virtual character in the AR scene through the change of the sound effect.
  • the user can change the sound effect of the virtual character in the AR scene by changing the position of the virtual character in the AR scene while the handheld mobile terminal is moving, thereby increasing the interaction effect between the user and the virtual character in the AR scene.
  • step 407 the mobile terminal adjusts the sound effect corresponding to the virtual character according to the change of the AR scene, which specifically includes the following steps:
  • the mobile terminal obtains the position of the virtual character in the second AR scene, obtains the scene data corresponding to the second AR scene, and re-determines a new sound effect generation algorithm based on the scene data corresponding to the second AR scene and the type of the virtual character;
  • the mobile terminal re-determines the sound effect corresponding to the virtual character according to the audio generated by the virtual character, a new sound effect generation algorithm, the position of the camera in the second AR scene, and the position of the virtual character in the second AR scene.
  • when the AR scene changes, the parameters in the corresponding sound effect generation algorithm also change accordingly, and the sound effect of the audio generated by the virtual character as it propagates to the camera also changes. Therefore, the mobile terminal re-determines the sound effect corresponding to the virtual character according to the audio generated by the virtual character, the new sound effect generation algorithm, the position of the camera in the second AR scene, and the position of the virtual character in the second AR scene.
  • the embodiment of the present application can adjust the sound effect corresponding to the virtual character in time when the AR scene changes, and can increase the interaction effect between the user and the virtual character in the AR scene through the change of the sound effect.
  • the mobile terminal can analyze whether the AR scene where the mobile terminal is located changes according to the scene picture taken by the camera. Specifically, it can be analyzed whether the AR scene where the mobile terminal is located changes through elements in the scene picture (for example, buildings, plants, vehicles, roads, etc. in the scene picture). If the first AR scene is changed to the second AR scene, the mobile terminal may determine the position of the camera in the second AR scene according to the three-dimensional scene captured by the camera in a panoramic view.
  • for a specific implementation manner of the mobile terminal re-determining a new sound effect generation algorithm based on the scene data corresponding to the second AR scene and the type of virtual character, reference may be made to the description of step 103 in FIG. 1; for a specific implementation of step (22), reference may be made to the description of step 104 in FIG. 1, and details are not repeated here.
  • the above mainly introduces the solutions of the embodiments of the present application from the perspective of the method-side execution process. It can be understood that, to realize the above functions, the mobile terminal includes a corresponding hardware structure and/or software module for each function.
  • the embodiments of the present application may divide the mobile terminal into functional units according to the above method examples; for example, each functional unit may correspond to one function, or two or more functions may be integrated into one processing unit.
  • the above integrated unit may be implemented in the form of hardware or a software functional unit. It should be noted that the division of units in the embodiments of the present application is schematic and is only a division of logical functions; in actual implementation, there may be another division manner.
  • FIG. 5 is a schematic structural diagram of an AR virtual character drawing device disclosed in an embodiment of the present application.
  • the AR virtual character drawing device 500 includes a capturing unit 501, a construction unit 502, a first acquisition unit 503, a recognition unit 504, a second acquisition unit 505, a determination unit 506, and a drawing unit 507, where:
  • the capturing unit 501 is used to capture a real three-dimensional scene picture through a camera
  • the construction unit 502 is used to construct an augmented reality AR scene according to the real three-dimensional scene picture;
  • the first acquisition unit 503 is configured to obtain at least one sound effect generated in the AR scene;
  • the recognition unit 504 is configured to recognize whether a target sound effect exists in at least one sound effect, and the target sound effect is generated by audio generated by an undrawn virtual character in the AR scene;
  • the second obtaining unit 505 is configured to obtain the position of the camera in the AR scene when the recognition unit 504 recognizes that there is a target sound effect in at least one sound effect;
  • the determining unit 506 is used to determine a sound effect generation algorithm, and determine the position of the virtual character in the AR scene according to the audio generated by the virtual character, the target sound effect, the sound effect generation algorithm, and the position of the camera in the AR scene;
  • the drawing unit 507 is used to draw the virtual character at the position of the virtual character in the AR scene.
  • the recognition unit 504 identifies whether a target sound effect exists in the at least one sound effect specifically by: acquiring an audio feature of the audio generated by the virtual character; identifying whether a sound effect matching the audio feature exists in the at least one sound effect; and if it exists, determining the sound effect in the at least one sound effect that matches the audio feature to be the target sound effect.
  • the determining unit 506 determines the sound effect generation algorithm, specifically: acquiring scene data corresponding to the AR scene; acquiring the type of virtual character; and determining the sound effect generation algorithm based on the scene data and the type of virtual character.
  • the AR virtual character drawing apparatus 500 may further include an adjustment unit 508.
  • the adjusting unit 508 is configured to adjust the sound effect corresponding to the virtual character according to the position change of the virtual character in the AR scene when the position of the virtual character changes in the AR scene and the AR scene does not change;
  • the adjusting unit 508 is also used to adjust the sound effect corresponding to the virtual character according to the change of the AR scene when the AR scene changes.
  • the adjusting unit 508 adjusts the sound effect corresponding to the virtual character according to the position change of the virtual character in the AR scene specifically by: if the position of the virtual character in the AR scene changes from a first position to a second position, re-determining the sound effect corresponding to the virtual character according to the audio generated by the virtual character, the sound effect generation algorithm, the position of the camera in the AR scene, and the second position.
  • the adjusting unit 508 adjusts the sound effect corresponding to the virtual character according to the change of the AR scene specifically by: if the AR scene where the virtual character is located changes from a first AR scene to a second AR scene, obtaining the position of the virtual character in the second AR scene, obtaining the scene data corresponding to the second AR scene, and re-determining a new sound effect generation algorithm based on the scene data corresponding to the second AR scene and the type of the virtual character; and re-determining the sound effect corresponding to the virtual character based on the audio generated by the virtual character, the new sound effect generation algorithm, the position of the camera in the second AR scene, and the position of the virtual character in the second AR scene.
  • the scene data corresponding to the AR scene includes spatial geometric parameters of the real three-dimensional scene and constituent material parameters of the real three-dimensional scene.
  • the capturing unit 501 may specifically be a camera in the mobile terminal, and the construction unit 502, first acquisition unit 503, recognition unit 504, second acquisition unit 505, determination unit 506, drawing unit 507, and adjustment unit 508 may specifically be a processor in the mobile terminal.
  • the exact position in the AR scene of the undrawn virtual character corresponding to the target sound effect can be deduced in reverse from the sound effect generation algorithm, and the virtual character can be drawn at that accurate position according to its sound effect, improving the interactive effect of the virtual character in the AR scene.
  • FIG. 6 is a schematic structural diagram of a mobile terminal disclosed in an embodiment of the present application.
  • the mobile terminal 600 includes a processor 601 and a memory 602. The mobile terminal 600 may further include a bus 603; the processor 601 and the memory 602 may be connected to each other through the bus 603, and the bus 603 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
  • the bus 603 can be divided into an address bus, a data bus, and a control bus. For ease of representation, only a thick line is used in FIG. 6, but it does not mean that there is only one bus or one type of bus.
  • the mobile terminal 600 may further include an input and output device 604, and the input and output device 604 may include a display screen, such as a liquid crystal display screen.
  • the memory 602 is used to store one or more programs containing instructions; the processor 601 is used to call the instructions stored in the memory 602 to perform some or all of the method steps in FIGS. 1 to 4 described above.
  • the exact position in the AR scene of the undrawn virtual character corresponding to the target sound effect can be deduced in reverse from the sound effect generation algorithm, and the virtual character can be drawn at an accurate position in the AR scene according to its sound effect, improving the interactive effect of virtual characters in AR scenes.
  • the embodiment of the present application also provides another mobile terminal. As shown in FIG. 7, for ease of description, only the parts related to the embodiment of the present application are shown. For specific technical details not disclosed, please refer to the method embodiments of the present application.
  • the mobile terminal may be any terminal device, including a mobile phone, a tablet computer, a personal digital assistant (PDA), a point-of-sale (POS) terminal, an in-vehicle computer, etc. The following takes a mobile phone as an example:
  • the mobile phone includes components such as a radio frequency (RF) circuit 910, a memory 920, an input unit 930, a display unit 940, a sensor 950, an audio circuit 960, a wireless fidelity (WiFi) module 970, a processor 980, and a power supply 990.
  • the structure shown in FIG. 7 does not constitute a limitation on the mobile phone, which may include more or fewer components than shown, combine certain components, or use a different component arrangement.
  • the RF circuit 910 can be used for information reception and transmission.
  • the RF circuit 910 includes but is not limited to an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like.
  • the RF circuit 910 can also communicate with other devices through a wireless communication network.
  • the above wireless communication can use any communication standard or protocol, including but not limited to Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Message Service (SMS), etc.
  • the memory 920 may be used to store software programs and modules.
  • the processor 980 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 920.
  • the memory 920 may mainly include a storage program area and a storage data area, where the storage program area may store an operating system, application programs required for at least one function, and the like; the storage data area may store data created according to the use of a mobile phone and the like.
  • the memory 920 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
  • the input unit 930 may be used to receive input numeric or character information, and generate key signal input related to user settings and function control of the mobile phone.
  • the input unit 930 may include a fingerprint recognition module 931 and other input devices 932.
  • the fingerprint recognition module 931 can collect fingerprint data from the user.
  • the input unit 930 may also include other input devices 932.
  • other input devices 932 may include, but are not limited to, one or more of a touch screen, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), trackball, mouse, joystick, and so on.
  • the display unit 940 may be used to display information input by the user or information provided to the user and various menus of the mobile phone.
  • the display unit 940 may include a display screen 941.
  • the display screen 941 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
  • the mobile phone may further include at least one sensor 950, such as a light sensor, a motion sensor, a pressure sensor, a temperature sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor may adjust the backlight brightness of the mobile phone according to the brightness of the ambient light, thereby adjusting the brightness of the display screen 941, and the proximity sensor may turn off the display screen 941 and/or the backlight when the mobile phone is moved to the ear.
  • the accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), and can detect the magnitude and direction of gravity when at rest; it can be used for applications that recognize the posture of the mobile phone (such as horizontal/vertical screen switching and magnetometer attitude calibration), vibration-recognition-related functions (such as a pedometer or tapping), and so on.
  • other sensors that can also be configured on the mobile phone, such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor, are not described in detail here.
  • the audio circuit 960, the speaker 961, and the microphone 962 can provide an audio interface between the user and the mobile phone.
  • the audio circuit 960 can transmit the electrical signal converted from the received audio data to the speaker 961, and the speaker 961 converts it into a sound signal for playback; on the other hand, the microphone 962 converts the collected sound signal into an electrical signal, which the audio circuit 960 receives and converts into audio data; after the audio data is processed by the processor 980, it is sent through the RF circuit 910 to, for example, another mobile phone, or output to the memory 920 for further processing.
  • WiFi is a short-range wireless transmission technology.
  • the mobile phone can help users send and receive emails, browse web pages, and access streaming media through the WiFi module 970. It provides users with wireless broadband Internet access.
  • although FIG. 7 shows the WiFi module 970, it can be understood that it is not a necessary component of the mobile phone and can be omitted as needed without changing the essence of the invention.
  • the processor 980 is the control center of the mobile phone. It connects various parts of the entire mobile phone through various interfaces and lines, and executes various functions of the mobile phone and processes data by running or executing the software programs and/or modules stored in the memory 920 and calling the data stored in the memory 920, so as to monitor the mobile phone as a whole.
  • the processor 980 may include one or more processing units; preferably, the processor 980 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, and application programs, etc.
  • the modem processor mainly handles wireless communication. It can be understood that the foregoing modem processor may not be integrated into the processor 980.
  • the mobile phone also includes a power supply 990 (such as a battery) for powering various components.
  • the power supply can be logically connected to the processor 980 through a power management system, so as to realize functions such as charging, discharging, and power management through the power management system.
  • the mobile phone may further include a camera 9100.
  • the camera 9100 is used to capture images and videos, and transmit the captured images and videos to the processor 980 for processing.
  • the mobile phone can also have a Bluetooth module, etc., which will not be repeated here.
  • the method flow of each step may be implemented based on the structure of the mobile phone.
  • An embodiment of the present application further provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute some or all of the steps of any AR virtual character drawing method described in the above method embodiments.
  • An embodiment of the present application further provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps of any AR virtual character drawing method described in the above method embodiments.
  • the disclosed device may be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the unit is only a logical function division.
  • in actual implementation there may be another division manner; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual couplings or direct couplings or communication connections may be indirect couplings or communication connections through some interfaces, devices or units, and may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware or software functional unit.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable memory.
  • based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the existing technology, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a memory and includes several instructions to enable a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
  • the aforementioned memory includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
  • the program may be stored in a computer-readable memory, and the memory may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, etc.
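The behavior of the adjustment unit 508 described above, re-determining the sound effect when the virtual character's position changes while the AR scene stays the same, can be illustrated with a minimal sketch. The inverse-square attenuation and propagation-delay model below is an assumption for illustration only; the patent does not disclose a concrete sound effect generation algorithm.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, assumed propagation speed in air

def sound_effect(audio_gain, source_pos, camera_pos):
    """Toy 'sound effect generation algorithm': inverse-square attenuation
    plus propagation delay from the virtual character to the camera."""
    d = math.dist(source_pos, camera_pos)
    attenuation = audio_gain / (1.0 + d * d)   # gain heard at the camera
    delay = d / SPEED_OF_SOUND                 # seconds of propagation delay
    return attenuation, delay

def adjust_for_position_change(audio_gain, camera_pos, new_char_pos):
    """When the character moves (scene unchanged), re-run the same
    algorithm with the second position, as unit 508 does."""
    return sound_effect(audio_gain, new_char_pos, camera_pos)

# The character moves farther from the camera: gain drops, delay grows.
near = sound_effect(1.0, (1.0, 0.0, 0.0), (0.0, 0.0, 0.0))
far = adjust_for_position_change(1.0, (0.0, 0.0, 0.0), (3.0, 0.0, 0.0))
assert far[0] < near[0] and far[1] > near[1]
```

Because the same forward model is simply re-evaluated with the new position, moving the character farther from the camera lowers the heard gain and lengthens the delay, which is the kind of change unit 508 applies.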

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephone Function (AREA)

Abstract

An AR virtual character drawing method and apparatus, a mobile terminal and a storage medium. The method comprises: capturing a real three-dimensional scene picture by means of a camera, and building an augmented reality (AR) scene according to the real three-dimensional scene picture (101); acquiring at least one audio effect generated in the AR scene, and identifying whether there is a target audio effect in the at least one audio effect, wherein the target audio effect is generated from an audio generated by a virtual character that is not drawn in the AR scene (102); if so, acquiring the position of the camera in the AR scene, and determining an audio effect generation algorithm (103); determining, according to the audio generated by the virtual character, the target audio effect, the audio effect generation algorithm and the position of the camera in the AR scene, the position of the virtual character in the AR scene (104); and drawing the virtual character at the position of the virtual character in the AR scene (105). The method can improve the interaction effect of the virtual character in the AR scene.

Description

AR virtual character drawing method, device, mobile terminal and storage medium

Technical Field

The present application relates to the field of audio technology, and in particular to an AR virtual character drawing method, device, mobile terminal, and storage medium.

Background

Augmented reality (AR) technology is a new technology that "seamlessly" integrates real-world information and virtual-world information. Physical information that is ordinarily difficult to experience within a certain time and space of the real world (visual information, sound, taste, touch, etc.) is simulated and then superimposed, so that virtual information is applied to the real world and perceived by the human senses, achieving a sensory experience beyond reality. The real environment and virtual objects are superimposed onto the same picture or space in real time. In current AR scenes, virtual characters are synthesized according to special effects, and the image and position of a virtual character are determined by a fixed algorithm, so the interaction effect of virtual characters in the AR scene is poor.
Summary

Embodiments of the present application provide an AR virtual character drawing method, device, mobile terminal, and storage medium, which can improve the interactive effect of virtual characters in an AR scene.

In a first aspect, an embodiment of the present application provides an AR virtual character drawing method, including:

capturing a real three-dimensional scene picture through a camera, and constructing an augmented reality (AR) scene according to the real three-dimensional scene picture;

acquiring at least one sound effect generated in the AR scene, and identifying whether a target sound effect exists in the at least one sound effect, where the target sound effect is generated by audio produced by an undrawn virtual character in the AR scene;

if it exists, obtaining the position of the camera in the AR scene and determining a sound effect generation algorithm, and determining the position of the virtual character in the AR scene according to the audio generated by the virtual character, the target sound effect, the sound effect generation algorithm, and the position of the camera in the AR scene; and

drawing the virtual character at the position of the virtual character in the AR scene.
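Read as a control flow, the method of the first aspect can be sketched as below. Every helper callable here is a hypothetical placeholder injected as a parameter; the patent defines their behavior only at the level of steps 101-105.

```python
def draw_ar_character(capture_scene, build_ar_scene, detect_sound_effects,
                      is_target_effect, camera_position, locate_character,
                      draw_character):
    """Steps 101-105 as one pipeline; all callables are injected placeholders."""
    picture = capture_scene()                      # 101: camera frame
    scene = build_ar_scene(picture)                # 101: build the AR scene
    effects = detect_sound_effects(scene)          # 102: sound effects in scene
    targets = [e for e in effects if is_target_effect(e)]  # 102: target effect?
    if not targets:
        return None                                # nothing to draw
    cam_pos = camera_position(scene)               # 103: camera position
    pos = locate_character(targets[0], cam_pos)    # 104: character position
    draw_character(scene, pos)                     # 105: draw at that position
    return pos

# Minimal run with stub implementations standing in for the real steps.
pos = draw_ar_character(
    capture_scene=lambda: "frame",
    build_ar_scene=lambda pic: {"pic": pic},
    detect_sound_effects=lambda s: ["footsteps"],
    is_target_effect=lambda e: e == "footsteps",
    camera_position=lambda s: (0, 0, 0),
    locate_character=lambda e, cam: (1, 2, 0),
    draw_character=lambda s, p: None)
assert pos == (1, 2, 0)
```

The stubs only exercise the ordering of the steps; the substantive work lives in the injected callables.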
In a second aspect, an embodiment of the present application provides an AR virtual character drawing device, including:

a capturing unit, configured to capture a real three-dimensional scene picture through a camera;

a construction unit, configured to construct an augmented reality (AR) scene according to the real three-dimensional scene picture;

a first acquisition unit, configured to obtain at least one sound effect generated in the AR scene;

a recognition unit, configured to identify whether a target sound effect exists in the at least one sound effect, where the target sound effect is generated by audio produced by an undrawn virtual character in the AR scene;

a second acquisition unit, configured to obtain the position of the camera in the AR scene when the recognition unit identifies that the target sound effect exists in the at least one sound effect;

a determining unit, configured to determine a sound effect generation algorithm, and to determine the position of the virtual character in the AR scene according to the audio generated by the virtual character, the target sound effect, the sound effect generation algorithm, and the position of the camera in the AR scene; and

a drawing unit, configured to draw the virtual character at the position of the virtual character in the AR scene.
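The recognition unit's check for a target sound effect, matching the audio feature of the virtual character's audio against each candidate sound effect, can be sketched as follows. The magnitude-spectrum feature and the 0.9 correlation threshold are illustrative assumptions; the embodiments leave the concrete audio feature unspecified.

```python
import cmath, math

def magnitude_spectrum(samples):
    """Naive DFT magnitude spectrum used as the 'audio feature' (assumed)."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]

def correlation(a, b):
    # Cosine similarity between two feature vectors.
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def find_target_effect(character_audio, candidate_effects, threshold=0.9):
    """Return the first candidate whose spectral feature matches the feature
    of the audio generated by the virtual character, or None."""
    feature = magnitude_spectrum(character_audio)
    for effect in candidate_effects:
        if correlation(feature, magnitude_spectrum(effect)) >= threshold:
            return effect
    return None

n = 64
tone = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]        # character audio
scaled = [0.5 * s for s in tone]                                    # same tone, attenuated
other = [math.sin(2 * math.pi * 13 * t / n) for t in range(n)]      # unrelated effect
assert find_target_effect(tone, [other, scaled]) == scaled
```

The attenuated copy of the tone still matches because the spectral shape is unchanged, while the tone at a different frequency falls below the threshold; this mirrors how a sound effect derived from the character's audio can be recognized even after propagation changes its level.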
In a third aspect, an embodiment of the present application provides a mobile terminal, including a processor and a memory, where the memory is configured to store one or more programs, the one or more programs are configured to be executed by the processor, and the programs include instructions for performing the steps in the first aspect of the embodiments of the present application.

In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute some or all of the steps described in the first aspect of the embodiments of the present application.

In a fifth aspect, an embodiment of the present application provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute some or all of the steps described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
It can be seen that, in the AR virtual character drawing method described in the embodiments of the present application, the mobile terminal captures a real three-dimensional scene picture through a camera and constructs an augmented reality (AR) scene according to the real three-dimensional scene picture; acquires at least one sound effect generated in the AR scene and identifies whether a target sound effect exists in the at least one sound effect, where the target sound effect is generated by audio produced by an undrawn virtual character in the AR scene; if it exists, obtains the position of the camera in the AR scene and determines a sound effect generation algorithm; determines the position of the virtual character in the AR scene according to the audio generated by the virtual character, the target sound effect, the sound effect generation algorithm, and the position of the camera in the AR scene; and draws the virtual character at that position. After the target sound effect is recognized, the embodiments of the present application can deduce, in reverse from the sound effect generation algorithm, the exact position in the AR scene of the undrawn virtual character corresponding to the target sound effect, and can draw the virtual character at an accurate position in the AR scene according to its sound effect, improving the interactive effect of virtual characters in the AR scene.
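The "reverse deduction" described above can be sketched as a search that inverts an assumed sound effect generation algorithm: try candidate positions and keep the one whose simulated effect best matches the observed target sound effect. The grid search and the distance-only forward model below are illustrative assumptions; an effect of this kind constrains only the character-to-camera distance, and a real implementation would also use directional cues (for example, stereo channels), along the lines of the principle of FIG. 3.

```python
import itertools, math

def simulate_effect(gain, src, cam):
    # Assumed forward model: inverse-square attenuation plus propagation delay.
    d = math.dist(src, cam)
    return (gain / (1.0 + d * d), d / 343.0)

def locate_character(gain, target_effect, cam, step=0.5, extent=5.0):
    """Brute-force inversion of the forward model: return the candidate grid
    position whose simulated effect is closest to the observed target effect."""
    axis = [i * step - extent for i in range(int(2 * extent / step) + 1)]
    best, best_err = None, float("inf")
    for src in itertools.product(axis, repeat=3):
        g, t = simulate_effect(gain, src, cam)
        err = (g - target_effect[0]) ** 2 + (t - target_effect[1]) ** 2
        if err < best_err:
            best, best_err = src, err
    return best

cam = (0.0, 0.0, 0.0)
observed = simulate_effect(1.0, (1.0, -2.0, 0.5), cam)
found = locate_character(1.0, observed, cam)
# This toy scalar model pins down only the source-to-camera distance.
assert abs(math.dist(found, cam) - math.dist((1.0, -2.0, 0.5), cam)) < 1e-9
```

Replacing the toy model with a richer one (per-channel gains, reverberation from the scene data) would shrink the set of candidate positions that reproduce the observed effect, which is what lets step 104 recover an accurate position.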
Brief Description of the Drawings

In order to explain the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings based on these drawings without creative work.

FIG. 1 is a schematic flowchart of an AR virtual character drawing method disclosed in an embodiment of the present application;

FIG. 2 is a schematic diagram of simulated audio signal transmission disclosed in an embodiment of the present application;

FIG. 3 is a schematic diagram of a principle for determining the position of a virtual character in an AR scene disclosed in an embodiment of the present application;

FIG. 4 is a schematic flowchart of another AR virtual character drawing method disclosed in an embodiment of the present application;

FIG. 5 is a schematic structural diagram of an AR virtual character drawing device disclosed in an embodiment of the present application;

FIG. 6 is a schematic structural diagram of a mobile terminal disclosed in an embodiment of the present application;

FIG. 7 is a schematic structural diagram of yet another mobile terminal disclosed in an embodiment of the present application.
具体实施方式detailed description
为了使本技术领域的人员更好地理解本发明方案,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。In order to enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present application will be described clearly and completely in conjunction with the drawings in the embodiments of the present application. Obviously, the described embodiments are only It is a part of the embodiments of the present invention, but not all the embodiments. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without making creative efforts fall within the protection scope of the present invention.
本发明的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别不同对象,而不是用于描述特定顺序。此外,术语“包括”和“具有”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其他步骤或单元。The terms "first", "second", etc. in the description and claims of the present invention and the above drawings are used to distinguish different objects, not to describe a specific order. In addition, the terms "including" and "having" and any variations thereof are intended to cover non-exclusive inclusions. For example, a process, method, system, product, or device that includes a series of steps or units is not limited to the listed steps or units, but optionally includes steps or units that are not listed, or optionally also includes Other steps or units inherent to these processes, methods, products, or equipment.
Reference herein to an "embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present invention. The appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor to an independent or alternative embodiment mutually exclusive with other embodiments. Those skilled in the art understand, both explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
The mobile terminals involved in the embodiments of the present application may include various handheld devices with wireless communication functions, in-vehicle devices, wearable devices, computing devices, or other processing devices connected to a wireless modem, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and the like. For convenience of description, the devices mentioned above are collectively referred to as mobile terminals.
The embodiments of the present application are described in detail below.
Please refer to FIG. 1, which is a schematic flowchart of an AR virtual character drawing method disclosed in an embodiment of the present application. As shown in FIG. 1, the AR virtual character drawing method includes the following steps.
101: The mobile terminal captures a real three-dimensional scene picture through a camera, and constructs an augmented reality (AR) scene according to the real three-dimensional scene picture.
In this embodiment of the present application, the mobile terminal may include a camera, a display, and a speaker. The camera is used to capture the real three-dimensional scene picture in real time; the real three-dimensional scene may be an enclosed indoor space or an open outdoor space. The display is used to display the AR picture corresponding to the AR scene, and the speaker is used to output the sound effects in the AR scene. The mobile terminal may be an AR-capable device such as a mobile phone or a tablet computer, or a dedicated AR device such as AR glasses or an AR helmet.
The AR scene is built on the basis of the real three-dimensional scene picture. Multiple display controls may be added on top of the real three-dimensional scene picture; these controls can be used to invoke different virtual characters, to adjust a virtual character's display effect and position, and to turn the virtual character's three-dimensional (3D) sound effect on or off.
102: The mobile terminal acquires at least one sound effect generated in the AR scene and identifies whether a target sound effect exists among the at least one sound effect, where the target sound effect is generated from audio produced by a not-yet-drawn virtual character in the AR scene.
In this embodiment of the present application, after the AR scene is constructed, the user can invoke an AR virtual character to be displayed in the AR scene as needed. Since drawing the virtual character takes a certain amount of time, the audio produced by the virtual character can be heard before the character's image appears. If the virtual character's 3D sound effect is turned on, the speaker of the mobile terminal outputs the sound effect generated by a 3D sound effect generation algorithm. The user can also invoke a random AR virtual character to be displayed in the AR scene, in which case the sound effect heard is that of the randomly selected virtual character.
The AR scene contains not only the sound effects of virtual characters, but may also include background music, sound effects of virtual animals, sound effects produced by virtual objects, and so on. A virtual character may be a character from a game, from a film or television work (for example, an animation), or from a literary work.
Different types of virtual characters produce audio with different frequency characteristics. The virtual character can act as the audio playback end, and the camera can act as the audio receiving end. The at least one sound effect acquired by the mobile terminal is the sound received with the camera as the point of view; after receiving a sound effect, the mobile terminal can analyze its frequency characteristics to identify which type of virtual character produced the audio from which it was generated.
The target sound effect can be set in advance, that is, which sound effect is to be identified as the target; the target sound effect is identified in order to locate the virtual character. There may be one or more virtual characters, and their number may be determined according to the AR scene.
In the AR scene, as in a real three-dimensional scene, the audio playback end plays voice or audio. For the audio receiving end, the received audio signal contains not only the direct sound signal transmitted straight from the playback end, but also reflected sound signals that have undergone various complex physical reflections. A reflected signal arrives later than the direct signal, and its energy is attenuated by the physical reflections. Moreover, the delay and energy attenuation of the reflected sound differ considerably between AR scenes, resulting in different listening experiences at the receiving end. Therefore, different reverberation sound effect algorithms can be used for sound effect processing in different AR scenes.
As shown in FIG. 2, which is a schematic diagram of simulated audio signal transmission disclosed in an embodiment of the present application, the audio signal produced by the playback end can reach the receiving end both directly and by reflection, forming a reverberation effect at the receiving end. FIG. 2 illustrates two reflection paths: the first reaches the receiving end after two reflections, and the second after one reflection. FIG. 2 is only one example of audio signal transmission; an audio signal may reach the receiving end via multiple reflection paths involving one, two, or more reflections. Different AR scenes have different numbers of reflections and different reflection paths. Whether the audio signal travels directly or by reflection, it is attenuated to some degree; the attenuation coefficient is determined by the path length, the number of reflections, the transmission medium, and the material of the reflection points.
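The patent does not give an implementation of the reverberation model of FIG. 2. As a purely illustrative sketch (the delays, attenuation coefficients, and function name are all assumptions, not values prescribed by the embodiment), mixing a direct signal with delayed, attenuated reflections could look like this:

```python
# Hypothetical sketch: form a reverberant mix from a direct signal plus
# delayed, attenuated reflections, as in FIG. 2. All numbers are made up.

def mix_reverb(direct, reflections):
    """direct: list of samples; reflections: list of (delay, attenuation) pairs."""
    n = len(direct)
    out = list(direct)  # the direct sound arrives first, unattenuated here
    for delay, atten in reflections:
        for i in range(n):
            j = i + delay
            if j < n:
                out[j] += atten * direct[i]  # a reflection arrives later, weaker
    return out

# One impulse, two reflection paths (one-bounce and two-bounce, as in FIG. 2).
signal = [1.0, 0.0, 0.0, 0.0, 0.0]
mixed = mix_reverb(signal, [(1, 0.5), (3, 0.2)])
```

A real implementation would derive each delay and attenuation coefficient from the path length, the number of reflections, the medium, and the surface materials, as the paragraph above describes.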
Optionally, in step 102, the mobile terminal identifying whether a target sound effect exists among the at least one sound effect includes the following steps:
(11) The mobile terminal obtains the audio features of the audio produced by the virtual character;
(12) The mobile terminal identifies whether any of the at least one sound effect matches the above audio features;
(13) If so, the mobile terminal determines that the sound effect matching the audio features among the at least one sound effect is the target sound effect.
In this embodiment, the audio features include amplitude-frequency characteristics, that is, the frequency and amplitude characteristics of the audio. The audio produced by a virtual character generally has fixed frequency and amplitude characteristics; the frequency and amplitude vary within a certain range, and the two are correlated, meaning that the amplitude characteristics corresponding to different frequency points are not necessarily the same.
The mobile terminal identifies whether any of the at least one sound effect matches the above audio features as follows: the mobile terminal obtains the audio features of each of the at least one sound effect and computes the similarity between each sound effect's audio features and the audio features produced by the virtual character. When a sound effect whose similarity exceeds a preset similarity threshold exists among the at least one sound effect, that sound effect is determined to be the target sound effect.
In this embodiment, whether a target sound effect exists among the at least one sound effect can be identified based on the similarity of audio features. Since recognition based on audio features is highly accurate, the presence of the target sound effect among the at least one sound effect can be identified accurately.
103: If the target sound effect exists, the mobile terminal obtains the position of the camera in the AR scene and determines a sound effect generation algorithm.
In this embodiment, the mobile terminal may determine the camera's position in the AR scene according to the real three-dimensional scene picture captured by the camera. Specifically, the mobile terminal can rotate the camera so that it captures the complete three-dimensional scene, that is, a three-dimensional scene shot as a 360° or 720° panorama. The mobile terminal then determines the camera's position in the AR scene from the panoramic three-dimensional scene.
The sound effect generation algorithm can be determined according to the scene in the real three-dimensional scene picture. For example, the algorithm corresponding to an indoor scene differs from the algorithm corresponding to an outdoor scene.
Optionally, in step 103, the mobile terminal determining the sound effect generation algorithm specifically includes the following steps:
The mobile terminal obtains the scene data corresponding to the AR scene, and obtains the type of the virtual character;
The mobile terminal determines the sound effect generation algorithm based on the scene data and the type of the virtual character.
In this embodiment, the sound effect generation algorithm is related to the scene data corresponding to the AR scene and to the type of the virtual character. The scene data may include the geometric dimensions of the real three-dimensional scene from which the AR scene is built (for example, a building's length, width, height, and spatial volume) and the materials of the real three-dimensional scene (for example, the materials of a building's floor, walls, and ceiling). Virtual character types may include virtual animation characters, virtual game characters, and the like.
The mobile terminal determines the sound effect generation algorithm based on the scene data and the type of the virtual character as follows:
The mobile terminal determines the sound effect algorithm model corresponding to the virtual character's type according to a correspondence between types and sound effect algorithm models;
The mobile terminal determines the algorithm parameters of that sound effect algorithm model based on the scene data;
The sound effect generation algorithm is determined from the sound effect algorithm model corresponding to the virtual character's type and that model's algorithm parameters.
For example, the sound effect algorithm model corresponding to a virtual animation character differs from the sound effect algorithm model corresponding to a virtual game character.
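The type-to-model correspondence and the parameter derivation above can be sketched as a simple lookup plus a parameter rule. The model names, the volume-based reverberation-time formula, and the scene-data keys below are all illustrative assumptions, not part of the disclosed embodiment:

```python
# Hypothetical sketch: pick a sound effect algorithm model by character type,
# then derive the model's parameters from the scene data.

MODELS_BY_TYPE = {
    "animation_character": "animation_reverb_model",
    "game_character":      "game_reverb_model",
}

def build_sound_algorithm(character_type, scene_data):
    # Step 1: type -> sound effect algorithm model (assumed correspondence table).
    model = MODELS_BY_TYPE[character_type]
    # Step 2: scene data -> algorithm parameters. Assumed rough rule:
    # larger rooms give longer reverberation; surfaces set absorption.
    volume = scene_data["length"] * scene_data["width"] * scene_data["height"]
    params = {
        "reverb_time": 0.05 * volume ** (1 / 3),
        "absorption": scene_data["surface_absorption"],
    }
    # Step 3: model + parameters together define the generation algorithm.
    return model, params

model, params = build_sound_algorithm(
    "game_character",
    {"length": 5.0, "width": 4.0, "height": 3.0, "surface_absorption": 0.3},
)
```

In a real system the parameter rule would come from an acoustic model of the room (the embodiment only requires that the parameters be derived from the scene's geometry and materials).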
104: The mobile terminal determines the position of the virtual character in the AR scene according to the audio produced by the virtual character, the target sound effect, the sound effect generation algorithm, and the position of the camera in the AR scene.
In this embodiment, once the mobile terminal has identified the target sound effect, the virtual character's position can be worked out in reverse: the target sound effect is determined by the sound effect generation algorithm, the audio produced by the virtual character, the camera's position in the AR scene, and the virtual character's position in the AR scene. If the mobile terminal knows the audio produced by the virtual character, the target sound effect, and the camera's position in the AR scene, the virtual character's position in the AR scene can be deduced.
The audio produced by the virtual character can be preset by the AR developer, the sound effect generation algorithm can be determined from the scene data corresponding to the AR scene and the virtual character's type, the target sound effect can be obtained directly, and the camera's position in the AR scene can be determined from the panoramic three-dimensional scene.
Optionally, if no target sound effect exists among the at least one sound effect, execution may continue with step 102.
The method for determining the virtual character's position in the AR scene is explained below with FIG. 3 as an example. FIG. 3 is a schematic diagram of the principle for determining a virtual character's position in an AR scene disclosed in an embodiment of the present application. As shown in FIG. 3, the audio signal emitted by the virtual character reaches the camera's position along three paths and forms a reverberation effect there: P = S1*R1 + S2*R2 + S3*R3, where S1 is the attenuation coefficient of the first reflection path, S2 is the attenuation coefficient of the second reflection path, S3 is the attenuation coefficient of the direct path, R1 is the first initial audio signal emitted along the first reflection path, R2 is the second initial audio signal emitted along the second reflection path, and R3 is the third initial audio signal emitted along the direct path.
The first reflection path passes through a first reflecting surface, so S1 is related to the material of the first reflecting surface, the default propagation medium in the AR scene, and the length of the first reflection path. The second reflection path passes through a second reflecting surface, so S2 is related to the material of the second reflecting surface, the default propagation medium, and the length of the second reflection path. S3 is related to the default propagation medium and the length of the direct path. R1, R2, and R3 are related to the spatial distribution, in real three-dimensional space, of the sound field of the audio signal emitted by the virtual character. With the material of the first reflecting surface and the default propagation medium fixed, the longer the first reflection path, the smaller S1; with the material of the second reflecting surface and the default propagation medium fixed, the longer the second reflection path, the smaller S2; with the default propagation medium fixed, the longer the direct path, the smaller S3.
Once the AR scene is determined, the spatial distribution of the sound field of the audio signal emitted by the virtual character is determined, and the materials of the first and second reflecting surfaces are determined, so R1, R2, and R3 can be determined; the default propagation medium in the AR scene can also be determined. This leaves three unknowns: the length of the first reflection path, the length of the second reflection path, and the length of the direct path. The target sound effect generated in the AR scene can be acquired three times in quick succession at the camera's position, yielding three equations whose variables are S1, S2, and S3. Since R1, R2, R3, and P in the three equations are all determined and differ between the measurements (because the initial audio emitted by the virtual character changes in intensity and frequency distribution over time), S1, S2, and S3 can be solved from this system of three linear equations in three unknowns. The lengths of the first reflection path, the second reflection path, and the direct path are then computed from S1, S2, and S3, and the virtual character's position relative to the camera is determined from these three path lengths. Because the three sets of parameters are acquired within a short time, the virtual character's position relative to the camera is almost unchanged, so S1, S2, and S3 remain almost constant.
The above reverberation algorithm (reverberation effect P = S1*R1 + S2*R2 + S3*R3) is only one possible example. Depending on the AR scene and the sound effects produced by the virtual character, the reverberation algorithm can also be implemented in other ways, which are not described here.
105: The mobile terminal draws the virtual character at the virtual character's position in the AR scene.
In this embodiment, after the virtual character's position in the AR scene is determined in step 104, the mobile terminal can draw the virtual character's image at that position. The display of the mobile terminal can show the virtual character in the AR scene. The mobile terminal can draw the virtual character according to a preset character model, and the virtual character may have animation effects.
In this embodiment of the present application, after the target sound effect is identified, the accurate position in the AR scene of the not-yet-drawn virtual character corresponding to the target sound effect can be deduced from the sound effect generation algorithm, so that the virtual character can be drawn at the accurate position in the AR scene according to its sound effect, improving the interactive effect of virtual characters in the AR scene.
Please refer to FIG. 4, which is a schematic flowchart of another AR virtual character drawing method disclosed in an embodiment of the present application. FIG. 4 is obtained by further optimization on the basis of FIG. 1. As shown in FIG. 4, the AR virtual character drawing method includes the following steps.
401: The mobile terminal captures a real three-dimensional scene picture through a camera, and constructs an augmented reality (AR) scene according to the real three-dimensional scene picture.
402: The mobile terminal acquires at least one sound effect generated in the AR scene and identifies whether a target sound effect exists among the at least one sound effect, where the target sound effect is generated from audio produced by a not-yet-drawn virtual character in the AR scene.
403: If the target sound effect exists, the mobile terminal obtains the position of the camera in the AR scene and determines a sound effect generation algorithm.
404: The mobile terminal determines the position of the virtual character in the AR scene according to the audio produced by the virtual character, the target sound effect, the sound effect generation algorithm, and the position of the camera in the AR scene.
405: The mobile terminal draws the virtual character at the virtual character's position in the AR scene.
For the specific implementation of steps 401 to 405 in this embodiment, reference may be made to steps 101 to 105 shown in FIG. 1, which are not repeated here.
406: When the virtual character's position in the AR scene changes and the AR scene itself does not change, the mobile terminal adjusts the sound effect corresponding to the virtual character according to the change in the virtual character's position in the AR scene.
407: When the AR scene changes, the mobile terminal adjusts the sound effect corresponding to the virtual character according to the change in the AR scene.
In this embodiment, when the user moves while holding the mobile terminal, the real three-dimensional scene picture captured by the camera may change, and the corresponding AR scene may change as well. For example, when the user carries the mobile terminal from one room into another, the AR scene changes. When the user moves while holding the mobile terminal, the virtual character's position in the AR scene may also change. Likewise, when the user taps a display control in the AR scene to adjust the virtual character's position, the virtual character's position in the AR scene changes.
When the virtual character's position in the AR scene changes, the relative position between the virtual character and the camera changes, and the reverberation effect of the character's audio as it propagates to the camera changes as well. The mobile terminal therefore needs to adjust the sound effect corresponding to the virtual character according to the change in its position in the AR scene; the change in sound effect enhances the user's interaction with the virtual character in the AR scene. Optionally, the user can issue a voice interaction instruction, and the virtual character can move in the AR scene according to the instruction, producing different interactive sound effects and further enhancing the interaction between the user and the virtual character.
When the AR scene changes, the virtual character's position in the AR scene necessarily changes, the parameters of the corresponding sound effect generation algorithm change accordingly, and the reverberation effect of the character's audio as it propagates to the camera changes as well. The mobile terminal therefore needs to readjust the sound effect corresponding to the virtual character according to the change in the AR scene and the character's position in the new AR scene; the change in sound effect enhances the user's interaction with the virtual character in the AR scene.
Optionally, in step 406, the mobile terminal adjusting the sound effect corresponding to the virtual character according to the change in the virtual character's position in the AR scene specifically includes the following step:
If the virtual character's position in the AR scene changes from a first position to a second position, the mobile terminal re-determines the sound effect corresponding to the virtual character according to the audio produced by the virtual character, the sound effect generation algorithm, the camera's position in the AR scene, and the second position.
In this embodiment, when the virtual character's position in the AR scene changes from the first position to the second position, both the direct path and the reflection paths from the character's audio to the camera change, so the parameters of the sound effect generation algorithm change accordingly, and the reverberation effect of the audio at the camera changes as well. The mobile terminal can therefore re-determine the sound effect corresponding to the virtual character according to the audio produced by the character, the sound effect generation algorithm, the camera's position in the AR scene, and the second position. This embodiment can promptly adjust the virtual character's sound effect whenever its position in the AR scene changes, and the user, while moving with the mobile terminal, can change the character's sound effect by changing the character's position in the AR scene, enhancing the interaction between the user and the virtual character in the AR scene.
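As a small illustration of the position-change step (the inverse-distance gain law and all coordinates below are assumptions for the sketch, not part of the embodiment), re-deriving at least the direct-path gain from the new camera-to-character distance might look like this:

```python
# Hypothetical sketch: when the character moves from first_pos to second_pos,
# re-derive the direct-path gain from the new camera-to-character distance.
# An inverse-distance law is assumed purely for illustration.

import math

def direct_gain(camera_pos, character_pos):
    d = math.dist(camera_pos, character_pos)
    return 1.0 / max(d, 1.0)  # clamp so a very close source is not amplified

camera = (0.0, 0.0, 0.0)
first_pos = (2.0, 0.0, 0.0)
second_pos = (4.0, 0.0, 3.0)

gain_before = direct_gain(camera, first_pos)
gain_after = direct_gain(camera, second_pos)  # farther away, so quieter
```

A full re-determination would also recompute the reflection-path terms of the reverberation model, exactly as in step 104, but with the second position substituted for the first.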
Optionally, in step 407, the mobile terminal adjusting the sound effect corresponding to the virtual character according to the change in the AR scene specifically includes the following steps:
(21) If the AR scene in which the virtual character is located changes from a first AR scene to a second AR scene, the mobile terminal obtains the virtual character's position in the second AR scene and the scene data corresponding to the second AR scene, and re-determines a new sound effect generation algorithm based on the scene data corresponding to the second AR scene and the type of the virtual character;
(22) The mobile terminal re-determines the sound effect corresponding to the virtual character according to the audio produced by the virtual character, the new sound effect generation algorithm, the camera's position in the second AR scene, and the virtual character's position in the second AR scene.
In this embodiment, when the AR scene in which the virtual character is located changes from the first AR scene to the second AR scene, the parameters of the corresponding sound effect generation algorithm change accordingly, and the reverberation effect of the character's audio as it propagates to the camera changes as well. The mobile terminal therefore re-determines the sound effect corresponding to the virtual character according to the audio produced by the character, the new sound effect generation algorithm, the camera's position in the second AR scene, and the character's position in the second AR scene. This embodiment can promptly adjust the virtual character's sound effect when the AR scene changes, enhancing the interaction between the user and the virtual character in the AR scene through the change in sound effect.
The mobile terminal can analyze whether the AR scene in which it is located has changed according to the scene picture captured by the camera. Specifically, elements in the scene picture (for example, buildings, plants, vehicles, and roads) can be used to determine whether the AR scene has changed. If the first AR scene has changed to the second AR scene, the mobile terminal can determine the position of the camera in the second AR scene according to the three-dimensional scene captured panoramically by the camera.
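One simple way to implement the element-based scene-change check described above is to compare the sets of recognized elements between frames. The Jaccard-overlap heuristic and the 0.5 threshold below are illustrative assumptions only:

```python
def scene_changed(prev_elements, cur_elements, threshold=0.5):
    """Treat the AR scene as changed when the sets of recognized elements
    (buildings, plants, vehicles, roads, ...) in consecutive frames overlap
    too little (Jaccard similarity below the threshold)."""
    if not prev_elements and not cur_elements:
        return False
    overlap = len(prev_elements & cur_elements)
    union = len(prev_elements | cur_elements)
    return overlap / union < threshold
```

In practice the element sets would come from an object detector running on the camera frames; here they are plain Python sets.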
For a specific implementation of how the mobile terminal re-determines the new sound effect generation algorithm based on the scene data corresponding to the second AR scene and the type of the virtual character, refer to the description of step 103 in FIG. 1; for a specific implementation of step (22), refer to the description of step 104 in FIG. 1. Details are not repeated here.

The foregoing mainly describes the solutions of the embodiments of the present application from the perspective of the method-side execution process. It can be understood that, to implement the above functions, the mobile terminal includes corresponding hardware structures and/or software modules for performing each function. Those skilled in the art will readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, the present invention can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
In the embodiments of the present application, the mobile terminal may be divided into functional units according to the foregoing method examples. For example, each functional unit may correspond to one function, or two or more functions may be integrated into one processing unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. It should be noted that the division of units in the embodiments of the present application is illustrative and is merely a division by logical function; other division manners are possible in actual implementation.
Please refer to FIG. 5, which is a schematic structural diagram of an AR virtual character drawing apparatus disclosed in an embodiment of the present application. As shown in FIG. 5, the AR virtual character drawing apparatus 500 includes a capturing unit 501, a construction unit 502, a first acquisition unit 503, a recognition unit 504, a second acquisition unit 505, a determination unit 506, and a drawing unit 507, where:
the capturing unit 501 is configured to capture a real three-dimensional scene picture through a camera;
the construction unit 502 is configured to construct an augmented reality (AR) scene according to the real three-dimensional scene picture;
the first acquisition unit 503 is configured to acquire at least one sound effect generated in the AR scene;
the recognition unit 504 is configured to identify whether a target sound effect exists in the at least one sound effect, the target sound effect being generated from audio produced by an undrawn virtual character in the AR scene;
the second acquisition unit 505 is configured to acquire the position of the camera in the AR scene when the recognition unit 504 recognizes that the target sound effect exists in the at least one sound effect;
the determination unit 506 is configured to determine a sound effect generation algorithm and to determine the position of the virtual character in the AR scene according to the audio generated by the virtual character, the target sound effect, the sound effect generation algorithm, and the position of the camera in the AR scene; and
the drawing unit 507 is configured to draw the virtual character at the position of the virtual character in the AR scene.
Optionally, the recognition unit 504 identifies whether a target sound effect exists in the at least one sound effect specifically by: acquiring an audio feature of the audio generated by the virtual character; identifying whether a sound effect matching the audio feature exists in the at least one sound effect; and, if so, determining the sound effect matching the audio feature in the at least one sound effect as the target sound effect.
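The recognition unit's matching step could be sketched as follows. The cosine-similarity measure, the fixed-length feature vectors, and the 0.9 threshold are assumptions chosen for illustration; the application does not specify a particular feature or metric.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def find_target_effect(character_feature, effects, threshold=0.9):
    """Return the first sound effect whose feature vector matches the audio
    feature of the (undrawn) virtual character, or None if none matches."""
    for effect in effects:
        if cosine_similarity(character_feature, effect["feature"]) >= threshold:
            return effect
    return None
```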
Optionally, the determination unit 506 determines the sound effect generation algorithm specifically by: acquiring scene data corresponding to the AR scene; acquiring the type of the virtual character; and determining the sound effect generation algorithm based on the scene data and the type of the virtual character.
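The mapping from scene data and character type to a sound effect generation algorithm could, for example, be a simple rule-based lookup. Every preset name, field name, and threshold below is hypothetical:

```python
def select_effect_algorithm(scene_data, character_type):
    """Map (scene acoustics, character type) to a named algorithm preset."""
    indoor = scene_data.get("room_size_m", float("inf")) < 50.0
    reflective = scene_data.get("absorption", 1.0) < 0.3
    if indoor and reflective:
        algo = "room_reverb"       # small hard-walled space: strong reverb
    elif indoor:
        algo = "damped_room"       # small absorbent space: short decay
    else:
        algo = "open_air"          # outdoor: attenuation only
    # The character type could additionally select a voice filter.
    voice = "pitch_down" if character_type == "monster" else "neutral"
    return {"algorithm": algo, "voice_filter": voice}
```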
Optionally, the AR virtual character drawing apparatus 500 may further include an adjustment unit 508.
The adjustment unit 508 is configured to, when the position of the virtual character in the AR scene changes and the AR scene itself does not change, adjust the sound effect corresponding to the virtual character according to the change in the position of the virtual character in the AR scene;
the adjustment unit 508 is further configured to, when the AR scene changes, adjust the sound effect corresponding to the virtual character according to the change in the AR scene.
Optionally, the adjustment unit 508 adjusts the sound effect corresponding to the virtual character according to the change in the position of the virtual character in the AR scene specifically by: if the position of the virtual character in the AR scene changes from a first position to a second position, re-determining the sound effect corresponding to the virtual character according to the audio generated by the virtual character, the sound effect generation algorithm, the position of the camera in the AR scene, and the second position.
Optionally, the adjustment unit 508 adjusts the sound effect corresponding to the virtual character according to the change in the AR scene specifically by: if the AR scene where the virtual character is located changes from a first AR scene to a second AR scene, acquiring the position of the virtual character in the second AR scene, acquiring scene data corresponding to the second AR scene, and re-determining a new sound effect generation algorithm based on the scene data corresponding to the second AR scene and the type of the virtual character; and re-determining the sound effect corresponding to the virtual character according to the audio generated by the virtual character, the new sound effect generation algorithm, the position of the camera in the second AR scene, and the position of the virtual character in the second AR scene.
Optionally, the scene data corresponding to the AR scene includes spatial geometry parameters of the real three-dimensional scene and material composition parameters of the real three-dimensional scene.
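A minimal container for such scene data might look as follows. The field names and the per-material absorption coefficients are illustrative only, not part of the application:

```python
from dataclasses import dataclass, field

@dataclass
class ARSceneData:
    """Spatial geometry plus material composition of the real 3D scene."""
    dimensions_m: tuple                     # (width, depth, height) in metres
    surface_materials: dict = field(default_factory=dict)  # surface -> material

    def mean_absorption(self, coefficients):
        """Average absorption over the scene's surfaces, given a table of
        per-material absorption coefficients (0.0 hard .. 1.0 soft);
        unknown materials default to 0.5."""
        if not self.surface_materials:
            return 0.0
        vals = [coefficients.get(m, 0.5) for m in self.surface_materials.values()]
        return sum(vals) / len(vals)
```

A structure like this would supply the inputs a sound effect generation algorithm needs, e.g. room dimensions for echo delays and an aggregate absorption figure for reverberation decay.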
The capturing unit 501 may specifically be a camera in the mobile terminal, and the construction unit 502, the first acquisition unit 503, the recognition unit 504, the second acquisition unit 505, the determination unit 506, the drawing unit 507, and the adjustment unit 508 may specifically be a processor in the mobile terminal.
By implementing the AR virtual character drawing apparatus shown in FIG. 5, after the target sound effect is identified, the accurate position in the AR scene of the undrawn virtual character corresponding to the target sound effect can be inferred inversely from the sound effect generation algorithm, so that the virtual character can be drawn at the accurate position in the AR scene according to its sound effect, improving the interaction effect of the virtual character in the AR scene.
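The inverse inference highlighted above, recovering the undrawn character's position from the observed sound effect, can be illustrated under a simple 1/(1+d) attenuation model. Both the model and the assumption that a unit-length arrival direction is already known (e.g. from a microphone array) are illustrative simplifications:

```python
def infer_distance(source_amp, observed_amp):
    """Invert an assumed 1/(1+d) attenuation model: from the amplitude of
    the audio as produced and as observed in the target sound effect,
    recover the distance d between character and camera."""
    return source_amp / observed_amp - 1.0

def infer_position(cam_pos, direction, source_amp, observed_amp):
    """Place the virtual character along the (assumed known, unit-length)
    arrival direction, at the inferred distance from the camera."""
    d = infer_distance(source_amp, observed_amp)
    return tuple(c + u * d for c, u in zip(cam_pos, direction))
```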
Please refer to FIG. 6, which is a schematic structural diagram of a mobile terminal disclosed in an embodiment of the present application. As shown in FIG. 6, the mobile terminal 600 includes a processor 601 and a memory 602. The mobile terminal 600 may further include a bus 603, through which the processor 601 and the memory 602 are connected to each other. The bus 603 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in FIG. 6, but this does not mean that there is only one bus or only one type of bus. The mobile terminal 600 may further include an input/output device 604, which may include a display screen, such as a liquid crystal display. The memory 602 is configured to store one or more programs containing instructions; the processor 601 is configured to invoke the instructions stored in the memory 602 to perform some or all of the method steps in FIGS. 1 to 4 above.
By implementing the mobile terminal shown in FIG. 6, after the target sound effect is identified, the accurate position in the AR scene of the undrawn virtual character corresponding to the target sound effect can be inferred inversely from the sound effect generation algorithm, so that the virtual character can be drawn at the accurate position in the AR scene according to its sound effect, improving the interaction effect of the virtual character in the AR scene.
An embodiment of the present application further provides another mobile terminal. As shown in FIG. 7, for ease of description, only the parts related to the embodiments of the present application are shown; for specific technical details not disclosed, refer to the method part of the embodiments of the present application. The mobile terminal may be any terminal device, including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, an in-vehicle computer, and the like. The following takes a mobile phone as an example:
FIG. 7 is a block diagram of a partial structure of a mobile phone related to the mobile terminal provided by an embodiment of the present application. Referring to FIG. 7, the mobile phone includes a radio frequency (RF) circuit 910, a memory 920, an input unit 930, a display unit 940, a sensor 950, an audio circuit 960, a Wireless Fidelity (WiFi) module 970, a processor 980, a power supply 990, and other components. Those skilled in the art may understand that the mobile phone structure shown in FIG. 7 does not constitute a limitation on the mobile phone, which may include more or fewer components than shown, a combination of certain components, or a different component arrangement.
Each component of the mobile phone is described in detail below with reference to FIG. 7:
The RF circuit 910 may be used for receiving and sending information. Generally, the RF circuit 910 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 910 may communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 920 may be used to store software programs and modules, and the processor 980 performs various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 920. The memory 920 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, application programs required by at least one function, and the like, and the data storage area may store data created according to the use of the mobile phone, and the like. In addition, the memory 920 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
The input unit 930 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 930 may include a fingerprint recognition module 931 and other input devices 932. The fingerprint recognition module 931 can collect the user's fingerprint data. In addition to the fingerprint recognition module 931, the input unit 930 may include other input devices 932, which may include, but are not limited to, one or more of a touch screen, a physical keyboard, function keys (such as volume control keys and power keys), a trackball, a mouse, a joystick, and the like.
The display unit 940 may be used to display information input by the user or provided to the user, as well as the various menus of the mobile phone. The display unit 940 may include a display screen 941. Optionally, the display screen 941 may be configured in the form of a liquid crystal display (LCD), an organic or inorganic light-emitting diode (OLED) display, or the like.
The mobile phone may further include at least one sensor 950, such as a light sensor, a motion sensor, a pressure sensor, a temperature sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor. The ambient light sensor can adjust the backlight brightness of the mobile phone according to the brightness of the ambient light, thereby adjusting the brightness of the display screen 941, and the proximity sensor can turn off the display screen 941 and/or the backlight when the mobile phone is moved close to the ear. As one type of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally three axes) and, when stationary, the magnitude and direction of gravity; it can be used in applications that recognize the phone's attitude (such as landscape/portrait switching and magnetometer attitude calibration) and in vibration-recognition functions (such as a pedometer and tap detection). Other sensors that may also be configured on the mobile phone, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described here.
The audio circuit 960, the speaker 961, and the microphone 962 may provide an audio interface between the user and the mobile phone. The audio circuit 960 can convert received audio data into an electrical signal and transmit it to the speaker 961, which converts it into a sound signal for playback; conversely, the microphone 962 converts a collected sound signal into an electrical signal, which the audio circuit 960 receives and converts into audio data. The audio data is then processed by the processor 980 and, for example, sent to another mobile phone via the RF circuit 910, or output to the memory 920 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 970, the mobile phone can help the user send and receive e-mail, browse web pages, access streaming media, and so on, providing the user with wireless broadband Internet access. Although FIG. 7 shows the WiFi module 970, it can be understood that it is not an essential component of the mobile phone and may be omitted as needed without changing the essence of the invention.
The processor 980 is the control center of the mobile phone. It connects the various parts of the entire mobile phone through various interfaces and lines, and performs the various functions of the mobile phone and processes data by running or executing the software programs and/or modules stored in the memory 920 and invoking the data stored in the memory 920, thereby monitoring the mobile phone as a whole. Optionally, the processor 980 may include one or more processing units. Preferably, the processor 980 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 980.
The mobile phone further includes a power supply 990 (such as a battery) that supplies power to the various components. Preferably, the power supply may be logically connected to the processor 980 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
The mobile phone may further include a camera 9100, which is used to capture images and videos and transmit them to the processor 980 for processing.
The mobile phone may further include a Bluetooth module and the like, which are not described here.
In the embodiments shown in FIGS. 1 to 4 above, the method flow of each step may be implemented based on the structure of this mobile phone.
An embodiment of the present application further provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform some or all of the steps of any AR virtual character drawing method described in the foregoing method embodiments.
An embodiment of the present application further provides a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps of any AR virtual character drawing method described in the foregoing method embodiments.
It should be noted that, for brevity, the foregoing method embodiments are all described as a series of action combinations. However, those skilled in the art should be aware that the present invention is not limited by the described order of actions, because according to the present invention, certain steps may be performed in another order or simultaneously. Secondly, those skilled in the art should also be aware that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
In the above embodiments, the description of each embodiment has its own emphasis. For a part not detailed in one embodiment, refer to the related descriptions of the other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division of the units is only a division by logical function, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may physically exist alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable memory. Based on such an understanding, the technical solution of the present invention in essence, the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned memory includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
A person of ordinary skill in the art may understand that all or part of the steps in the various methods of the above embodiments may be completed by a program instructing the related hardware. The program may be stored in a computer-readable memory, and the memory may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the descriptions of the above embodiments are only intended to help understand the method and core idea of the present invention. Meanwhile, a person of ordinary skill in the art may, according to the ideas of the present invention, make changes to the specific implementations and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (20)

  1. An AR virtual character drawing method, comprising:
    capturing a real three-dimensional scene picture through a camera, and constructing an augmented reality (AR) scene according to the real three-dimensional scene picture;
    acquiring at least one sound effect generated in the AR scene, and identifying whether a target sound effect exists in the at least one sound effect, the target sound effect being generated from audio produced by an undrawn virtual character in the AR scene;
    if so, acquiring the position of the camera in the AR scene, and determining a sound effect generation algorithm;
    determining the position of the virtual character in the AR scene according to the audio generated by the virtual character, the target sound effect, the sound effect generation algorithm, and the position of the camera in the AR scene; and
    drawing the virtual character at the position of the virtual character in the AR scene.
  2. The method according to claim 1, wherein the identifying whether a target sound effect exists in the at least one sound effect comprises:
    acquiring an audio feature of the audio generated by the virtual character;
    identifying whether a sound effect matching the audio feature exists in the at least one sound effect; and
    if so, determining the sound effect matching the audio feature in the at least one sound effect as the target sound effect.
  3. The method according to claim 1, wherein the determining a sound effect generation algorithm comprises:
    acquiring scene data corresponding to the AR scene, and acquiring the type of the virtual character; and
    determining the sound effect generation algorithm based on the scene data and the type of the virtual character.
  4. The method according to claim 3, wherein after the drawing the virtual character at the position of the virtual character in the AR scene, the method further comprises:
    when the position of the virtual character in the AR scene changes and the AR scene does not change, adjusting the sound effect corresponding to the virtual character according to the change in the position of the virtual character in the AR scene; and
    when the AR scene changes, adjusting the sound effect corresponding to the virtual character according to the change in the AR scene.
  5. 根据权利要求4所述的方法,其特征在于,所述根据所述虚拟人物在所述AR场景中的位置变化调整所述虚拟人物对应的音效,包括:The method according to claim 4, wherein the adjusting the sound effect corresponding to the virtual character according to the position change of the virtual character in the AR scene includes:
    若所述虚拟人物在所述AR场景中的位置从第一位置变为第二位置时,根据所述虚拟人物产生的音频、所述音效生成算法、所述摄像头在所述AR场景中的位置、所述第二位置重新确定所述虚拟人物对应的音效。If the position of the virtual character in the AR scene changes from the first position to the second position, according to the audio generated by the virtual character, the sound effect generation algorithm, and the position of the camera in the AR scene 2. The second position re-determines the sound effect corresponding to the virtual character.
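Claim 5 leaves the form of the re-determined sound effect open. As a minimal sketch only (the parameter set and all names are hypothetical; a real implementation would apply the full sound effect generation algorithm), the camera position can stand in for the listener and the moved-to second position for the source, giving inverse-distance gain and a propagation delay:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate speed of sound in air

def readjust_sound_effect(camera_pos, second_pos, base_gain=1.0):
    """Recompute simple spatial-audio parameters after the virtual
    character moves to `second_pos`, with the camera as the listener."""
    dist = math.dist(camera_pos, second_pos)
    gain = base_gain / max(dist, 1.0)   # clamp so gain never exceeds base_gain
    delay = dist / SPEED_OF_SOUND       # propagation delay in seconds
    return {"gain": gain, "delay_s": delay}
```

A character moving from 1 m to 10 m away would thus have its effect re-determined with roughly one tenth the gain and a correspondingly longer delay.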
  6. The method according to claim 4, wherein the adjusting the sound effect corresponding to the virtual character according to the change in the AR scene comprises:
    if the AR scene where the virtual character is located changes from a first AR scene to a second AR scene, acquiring the position of the virtual character in the second AR scene, acquiring scene data corresponding to the second AR scene, and re-determining a new sound effect generation algorithm based on the scene data corresponding to the second AR scene and the type of the virtual character;
    re-determining the sound effect corresponding to the virtual character according to the audio generated by the virtual character, the new sound effect generation algorithm, the position of the camera in the second AR scene, and the position of the virtual character in the second AR scene.
  7. The method according to any one of claims 3 to 6, wherein the scene data corresponding to the AR scene includes spatial geometry parameters of the real three-dimensional scene and material parameters of the real three-dimensional scene.
  8. The method according to any one of claims 3 to 6, wherein the determining the sound effect generation algorithm based on the scene data and the type of the virtual character comprises:
    determining a sound effect algorithm model corresponding to the type of the virtual character according to a correspondence between virtual character types and sound effect algorithm models;
    determining algorithm parameters of the sound effect algorithm model based on the scene data;
    determining the sound effect generation algorithm based on the sound effect algorithm model corresponding to the type of the virtual character and the algorithm parameters of the sound effect algorithm model.
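Claim 8's two-step selection (model from character type, parameters from scene data) can be sketched as follows. The type-to-model table and parameter derivation are hypothetical illustrations; Sabine's reverberation formula is used here only as one plausible way the scene's geometry and material parameters (claim 7) could parameterize an acoustic model:

```python
def reverb_time_sabine(volume_m3, surface_m2, absorption_coeff):
    # Sabine's formula: RT60 = 0.161 * V / (S * a), with V in m^3, S in m^2.
    return 0.161 * volume_m3 / (surface_m2 * absorption_coeff)

def build_sound_effect_algorithm(character_type, scene_data):
    """Pick a sound effect algorithm model from the character type,
    then derive its parameters from the scene's geometry/material data."""
    models = {                       # hypothetical type -> model correspondence
        "humanoid": "voice_reverb",
        "robot":    "metallic_reverb",
    }
    model = models.get(character_type, "generic_reverb")
    rt60 = reverb_time_sabine(scene_data["volume_m3"],
                              scene_data["surface_m2"],
                              scene_data["absorption"])
    return {"model": model, "params": {"rt60_s": rt60}}
```

The returned model-plus-parameters pair corresponds to the "sound effect generation algorithm" that later steps feed with the character's audio.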
  9. The method according to claim 1, wherein the method further comprises:
    if the target sound effect does not exist in the at least one sound effect, continuing to perform the step of acquiring at least one sound effect generated in the AR scene.
  10. An AR virtual character drawing apparatus, comprising:
    a capture unit, configured to capture a picture of a real three-dimensional scene through a camera;
    a construction unit, configured to construct an augmented reality (AR) scene according to the picture of the real three-dimensional scene;
    a first acquiring unit, configured to acquire at least one sound effect generated in the AR scene;
    a recognition unit, configured to recognize whether a target sound effect exists in the at least one sound effect, the target sound effect being generated from audio produced by an undrawn virtual character in the AR scene;
    a second acquiring unit, configured to acquire the position of the camera in the AR scene when the recognition unit recognizes that the target sound effect exists in the at least one sound effect;
    a determining unit, configured to determine a sound effect generation algorithm, and to determine the position of the virtual character in the AR scene according to the audio generated by the virtual character, the target sound effect, the sound effect generation algorithm, and the position of the camera in the AR scene;
    a drawing unit, configured to draw the virtual character at the position of the virtual character in the AR scene.
  11. The apparatus according to claim 10, wherein the recognition unit recognizes whether a target sound effect exists in the at least one sound effect specifically by: acquiring an audio feature of the audio generated by the virtual character; identifying whether a sound effect matching the audio feature exists in the at least one sound effect; and, if so, determining the sound effect in the at least one sound effect that matches the audio feature as the target sound effect.
  12. The apparatus according to claim 10, wherein the determining unit determines the sound effect generation algorithm specifically by: acquiring scene data corresponding to the AR scene, and acquiring the type of the virtual character; and
    determining the sound effect generation algorithm based on the scene data and the type of the virtual character.
  13. The apparatus according to claim 12, wherein the apparatus further comprises an adjustment unit;
    the adjustment unit is configured to, when the position of the virtual character in the AR scene changes and the AR scene does not change, adjust the sound effect corresponding to the virtual character according to the change in the position of the virtual character in the AR scene;
    the adjustment unit is further configured to, when the AR scene changes, adjust the sound effect corresponding to the virtual character according to the change in the AR scene.
  14. The apparatus according to claim 13, wherein the adjustment unit adjusts the sound effect corresponding to the virtual character according to the change in the position of the virtual character in the AR scene specifically by:
    if the position of the virtual character in the AR scene changes from a first position to a second position, re-determining the sound effect corresponding to the virtual character according to the audio generated by the virtual character, the sound effect generation algorithm, the position of the camera in the AR scene, and the second position.
  15. The apparatus according to claim 13, wherein the adjustment unit adjusts the sound effect corresponding to the virtual character according to the change in the AR scene specifically by:
    if the AR scene where the virtual character is located changes from a first AR scene to a second AR scene, acquiring the position of the virtual character in the second AR scene, acquiring scene data corresponding to the second AR scene, and re-determining a new sound effect generation algorithm based on the scene data corresponding to the second AR scene and the type of the virtual character; and re-determining the sound effect corresponding to the virtual character according to the audio generated by the virtual character, the new sound effect generation algorithm, the position of the camera in the second AR scene, and the position of the virtual character in the second AR scene.
  16. The apparatus according to any one of claims 12 to 15, wherein the scene data corresponding to the AR scene includes spatial geometry parameters of the real three-dimensional scene and material parameters of the real three-dimensional scene.
  17. The apparatus according to any one of claims 12 to 15, wherein the determining unit determines the sound effect generation algorithm based on the scene data and the type of the virtual character specifically by: determining a sound effect algorithm model corresponding to the type of the virtual character according to a correspondence between virtual character types and sound effect algorithm models; determining algorithm parameters of the sound effect algorithm model based on the scene data; and determining the sound effect generation algorithm based on the sound effect algorithm model corresponding to the type of the virtual character and the algorithm parameters of the sound effect algorithm model.
  18. The apparatus according to claim 10, wherein
    the first acquiring unit is further configured to acquire at least one sound effect generated in the AR scene when the recognition unit recognizes that the target sound effect does not exist in the at least one sound effect.
  19. A mobile terminal, comprising a processor and a memory, wherein the memory is configured to store one or more programs, the one or more programs are configured to be executed by the processor, and the programs include instructions for performing the method according to any one of claims 1-9.
  20. A computer-readable storage medium, wherein the computer-readable storage medium is configured to store a computer program for electronic data exchange, and the computer program causes a computer to perform the method according to any one of claims 1-9.
PCT/CN2019/112729 2018-11-16 2019-10-23 Ar virtual character drawing method and apparatus, mobile terminal and storage medium WO2020098462A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811367269.8 2018-11-16
CN201811367269.8A CN109597481B (en) 2018-11-16 2018-11-16 AR virtual character drawing method and device, mobile terminal and storage medium

Publications (1)

Publication Number Publication Date
WO2020098462A1 true WO2020098462A1 (en) 2020-05-22

Family

ID=65957666

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/112729 WO2020098462A1 (en) 2018-11-16 2019-10-23 Ar virtual character drawing method and apparatus, mobile terminal and storage medium

Country Status (2)

Country Link
CN (1) CN109597481B (en)
WO (1) WO2020098462A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111639613A (en) * 2020-06-04 2020-09-08 上海商汤智能科技有限公司 Augmented reality AR special effect generation method and device and electronic equipment
CN113034668A (en) * 2021-03-01 2021-06-25 中科数据(青岛)科技信息有限公司 AR-assisted mechanical simulation operation method and system
CN114356068A (en) * 2020-09-28 2022-04-15 北京搜狗智能科技有限公司 Data processing method and device and electronic equipment
CN117152349A (en) * 2023-08-03 2023-12-01 无锡泰禾宏科技有限公司 Virtual scene self-adaptive construction system and method based on AR and big data analysis
CN117273054A (en) * 2023-09-28 2023-12-22 南京八点八数字科技有限公司 Virtual human interaction method and system applying different scenes

Families Citing this family (9)

Publication number Priority date Publication date Assignee Title
CN109597481B (en) * 2018-11-16 2021-05-04 Oppo广东移动通信有限公司 AR virtual character drawing method and device, mobile terminal and storage medium
CN110211222B (en) * 2019-05-07 2023-08-01 谷东科技有限公司 AR immersion type tour guide method and device, storage medium and terminal equipment
CN110390730B (en) * 2019-07-05 2023-12-29 北京悉见科技有限公司 Method for arranging augmented reality object and electronic equipment
CN113272878A (en) * 2019-11-05 2021-08-17 山东英才学院 Paperless early teaching machine for children based on wireless transmission technology
CN111104927B (en) * 2019-12-31 2024-03-22 维沃移动通信有限公司 Information acquisition method of target person and electronic equipment
CN112308983B (en) * 2020-10-30 2024-03-29 北京虚拟动点科技有限公司 Virtual scene arrangement method and device, electronic equipment and storage medium
CN113220123A (en) * 2021-05-10 2021-08-06 深圳市慧鲤科技有限公司 Sound effect control method and device, electronic equipment and storage medium
CN114565696A (en) * 2022-03-08 2022-05-31 北京玖零时代影视传媒有限公司 Meta universe virtual digital person making method and system
CN114443886A (en) * 2022-04-06 2022-05-06 南昌航天广信科技有限责任公司 Sound effect adjusting method and system of broadcast sound box, computer and readable storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN107534824A (en) * 2015-05-18 2018-01-02 索尼公司 Message processing device, information processing method and program
CN107801120A (en) * 2017-10-24 2018-03-13 维沃移动通信有限公司 A kind of method, device and mobile terminal for determining audio amplifier putting position
CN108594988A (en) * 2018-03-22 2018-09-28 美律电子(深圳)有限公司 Wearable electronic device and its operating method for audio imaging
CN108769535A (en) * 2018-07-04 2018-11-06 腾讯科技(深圳)有限公司 Image processing method, device, storage medium and computer equipment
CN109597481A (en) * 2018-11-16 2019-04-09 Oppo广东移动通信有限公司 AR virtual portrait method for drafting, device, mobile terminal and storage medium

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
US9037468B2 (en) * 2008-10-27 2015-05-19 Sony Computer Entertainment Inc. Sound localization for user in motion
US9563265B2 (en) * 2012-01-12 2017-02-07 Qualcomm Incorporated Augmented reality with sound and geometric analysis
EP3172730A1 (en) * 2014-07-23 2017-05-31 PCMS Holdings, Inc. System and method for determining audio context in augmented-reality applications
WO2018072214A1 (en) * 2016-10-21 2018-04-26 向裴 Mixed reality audio system
CN106485774B (en) * 2016-12-30 2019-11-15 当家移动绿色互联网技术集团有限公司 Drive the expression of person model and the method for posture in real time based on voice
CN107248795A (en) * 2017-08-14 2017-10-13 珠海格力节能环保制冷技术研究中心有限公司 Motor, electric machine assembly and electric equipment
CN108597530B (en) * 2018-02-09 2020-12-11 腾讯科技(深圳)有限公司 Sound reproducing method and apparatus, storage medium and electronic apparatus
CN108762494B (en) * 2018-05-16 2021-06-29 北京小米移动软件有限公司 Method, device and storage medium for displaying information
CN108744516B (en) * 2018-05-29 2020-09-29 腾讯科技(深圳)有限公司 Method and device for acquiring positioning information, storage medium and electronic device

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN107534824A (en) * 2015-05-18 2018-01-02 索尼公司 Message processing device, information processing method and program
CN107801120A (en) * 2017-10-24 2018-03-13 维沃移动通信有限公司 A kind of method, device and mobile terminal for determining audio amplifier putting position
CN108594988A (en) * 2018-03-22 2018-09-28 美律电子(深圳)有限公司 Wearable electronic device and its operating method for audio imaging
CN108769535A (en) * 2018-07-04 2018-11-06 腾讯科技(深圳)有限公司 Image processing method, device, storage medium and computer equipment
CN109597481A (en) * 2018-11-16 2019-04-09 Oppo广东移动通信有限公司 AR virtual portrait method for drafting, device, mobile terminal and storage medium

Cited By (9)

Publication number Priority date Publication date Assignee Title
CN111639613A (en) * 2020-06-04 2020-09-08 上海商汤智能科技有限公司 Augmented reality AR special effect generation method and device and electronic equipment
CN111639613B (en) * 2020-06-04 2024-04-16 上海商汤智能科技有限公司 Augmented reality AR special effect generation method and device and electronic equipment
CN114356068A (en) * 2020-09-28 2022-04-15 北京搜狗智能科技有限公司 Data processing method and device and electronic equipment
CN114356068B (en) * 2020-09-28 2023-08-25 北京搜狗智能科技有限公司 Data processing method and device and electronic equipment
CN113034668A (en) * 2021-03-01 2021-06-25 中科数据(青岛)科技信息有限公司 AR-assisted mechanical simulation operation method and system
CN113034668B (en) * 2021-03-01 2023-04-07 中科数据(青岛)科技信息有限公司 AR-assisted mechanical simulation operation method and system
CN117152349A (en) * 2023-08-03 2023-12-01 无锡泰禾宏科技有限公司 Virtual scene self-adaptive construction system and method based on AR and big data analysis
CN117152349B (en) * 2023-08-03 2024-02-23 无锡泰禾宏科技有限公司 Virtual scene self-adaptive construction system and method based on AR and big data analysis
CN117273054A (en) * 2023-09-28 2023-12-22 南京八点八数字科技有限公司 Virtual human interaction method and system applying different scenes

Also Published As

Publication number Publication date
CN109597481A (en) 2019-04-09
CN109597481B (en) 2021-05-04

Similar Documents

Publication Publication Date Title
WO2020098462A1 (en) Ar virtual character drawing method and apparatus, mobile terminal and storage medium
WO2018192415A1 (en) Data live broadcast method, and related device and system
US10891938B2 (en) Processing method for sound effect of recording and mobile terminal
WO2018077207A1 (en) Viewing angle mode switching method and terminal
US9632683B2 (en) Methods, apparatuses and computer program products for manipulating characteristics of audio objects by using directional gestures
CN107592466B (en) Photographing method and mobile terminal
CN107707817B (en) video shooting method and mobile terminal
CN108924438B (en) Shooting control method and related product
WO2017020663A1 (en) Live-comment video live broadcast method and apparatus, video source device, and network access device
CN111010508B (en) Shooting method and electronic equipment
US20160066119A1 (en) Sound effect processing method and device thereof
CN110166848B (en) Live broadcast interaction method, related device and system
CN107730460B (en) Image processing method and mobile terminal
CN107908765B (en) Game resource processing method, mobile terminal and server
CN109550248B (en) Virtual object position identification method and device, mobile terminal and storage medium
CN108270971B (en) Mobile terminal focusing method and device and computer readable storage medium
CN110297543B (en) Audio playing method and terminal equipment
CN109587552B (en) Video character sound effect processing method and device, mobile terminal and storage medium
CN111182211A (en) Shooting method, image processing method and electronic equipment
CN109361864B (en) Shooting parameter setting method and terminal equipment
CN109529335B (en) Game role sound effect processing method and device, mobile terminal and storage medium
CN108536513B (en) Picture display direction adjusting method and mobile terminal
CN114205701B (en) Noise reduction method, terminal device and computer readable storage medium
WO2021078182A1 (en) Playback method and playback system
CN111367492B (en) Webpage display method and device and storage medium

Legal Events

Code	Description
121	Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 19884793; Country of ref document: EP; Kind code of ref document: A1)
NENP	Non-entry into the national phase (Ref country code: DE)
122	Ep: PCT application non-entry in European phase (Ref document number: 19884793; Country of ref document: EP; Kind code of ref document: A1)