CN115604408A - Video recording method and device under virtual scene, storage medium and electronic equipment - Google Patents


Info

Publication number
CN115604408A
CN115604408A
Authority
CN
China
Prior art keywords
audio
data
virtual
virtual scene
virtual camera
Prior art date
Legal status
Pending
Application number
CN202211185076.7A
Other languages
Chinese (zh)
Inventor
岳豪 (Yue Hao)
史俊杰 (Shi Junjie)
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202211185076.7A
Publication of CN115604408A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/2224: Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/16: Sound input; Sound output
    • G06F 3/162: Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Signal Processing For Digital Recording And Reproducing (AREA)

Abstract

The method controls a first virtual camera to capture first picture data of a virtual scene from the camera's view angle, controls an audio engine to spatialize an original audio signal triggered by a sounding body based on first pose information of the first virtual camera and second pose information of the sounding body in the virtual scene to obtain first audio data, and combines the first audio data with the first picture data to obtain a first recorded video. The sound in the first recorded video therefore matches the picture, reproducing the behavior of a real recording and greatly increasing the realism of videos recorded in a virtual scene.

Description

Video recording method and device under virtual scene, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for recording a video in a virtual scene, a storage medium, and an electronic device.
Background
With the rapid growth of the Virtual Reality (VR) market and its user base, users' demand for VR recording keeps increasing, especially when they want to capture scenes of their own gameplay. In the related art, the virtual scene is mainly shot from a third-person perspective to obtain a picture of the virtual scene, and that picture is then combined with the sound output by the system playback layer of the VR device to obtain a recorded video.
However, this recording method can leave the sound in the video mismatched with the picture: the picture captured from the third-person perspective differs from the picture displayed in the headset, so the sound, which is generated for the picture displayed in the headset, does not correspond to the picture actually captured from the third-person perspective.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, the present disclosure provides a video recording method in a virtual scene, including:
in response to a video recording instruction, determining a first virtual camera in a virtual scene;
in response to a target operation for the first virtual camera, controlling the first virtual camera to acquire first picture data of the virtual scene under a camera view angle;
controlling an audio engine to process an original audio signal of a sounding body in the virtual scene based on first pose information of the first virtual camera and second pose information of the sounding body, to obtain first audio data;
acquiring the first audio data generated by the audio engine and the first picture data acquired by the first virtual camera; and
obtaining a first recorded video based on the first picture data and the first audio data.
In a second aspect, the present disclosure provides a video recording apparatus under a virtual scene, including:
the creating module is configured to respond to a video recording instruction and determine a first virtual camera in a virtual scene;
a picture acquisition module configured to control the first virtual camera to acquire first picture data of the virtual scene under a camera view angle in response to a target operation for the first virtual camera;
an audio generating module configured to control an audio engine to process an original audio signal of a sounding body in the virtual scene based on first pose information of the first virtual camera and second pose information of the sounding body in the virtual scene to obtain first audio data;
the acquisition module is configured to acquire the first audio data generated by the audio engine and acquire the first picture data acquired by the first virtual camera;
a video generation module configured to obtain a first recorded video based on the first picture data and the first audio data.
In a third aspect, the present disclosure provides a computer readable storage medium having stored thereon a computer program which, when executed by a processing apparatus, performs the steps of the method of the first aspect.
In a fourth aspect, the present disclosure provides an electronic device comprising:
a storage device having a computer program stored thereon;
processing means for executing the computer program in the storage means to carry out the steps of the method of the first aspect.
According to the above technical solution, the first virtual camera is controlled to capture first picture data of the virtual scene from the camera view angle, the audio engine is controlled to spatialize the original audio signal triggered by the sounding body based on the first pose information of the first virtual camera and the second pose information of the sounding body in the virtual scene to obtain first audio data, and the first audio data and the first picture data are combined to obtain a first recorded video. The sound in the first recorded video therefore matches the picture, reproducing the behavior of a real recording and greatly increasing the realism of videos recorded in a virtual scene.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale. In the drawings:
fig. 1 is a flow diagram illustrating a method for video recording in a virtual scene, in accordance with some embodiments.
FIG. 2 is a schematic diagram illustrating an interface for controlling a first virtual camera to capture images, according to some embodiments.
Fig. 3 is a schematic diagram illustrating an audio engine generating first audio data and second audio data, according to some embodiments.
Fig. 4 is a schematic diagram illustrating the recording of a first recorded video according to some embodiments.
Fig. 5 is a schematic diagram illustrating the recording of a first recorded video according to still further embodiments.
Fig. 6 is a schematic flow diagram illustrating audio data according to some embodiments.
Fig. 7 is a flow diagram illustrating a method for video recording in a virtual scene according to further embodiments.
Fig. 8 is a block diagram illustrating the connection of modules of a video recording device in a virtual scene, according to some embodiments.
FIG. 9 is a schematic diagram of a structure of an electronic device shown in accordance with some embodiments.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a" and "an" in this disclosure are illustrative rather than limiting, and those skilled in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
It is understood that before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner and in accordance with relevant laws and regulations, of the type, scope of use, and usage scenarios of the personal information involved, and the user's authorization should be obtained.
For example, when recording a video, in response to receiving an active request from a user, prompt information is sent to the user to explicitly inform the user that the requested operation will require acquiring and using the user's personal information. The user can thus autonomously choose, according to the prompt information, whether to provide personal information to the software or hardware, such as an electronic device, application program, server, or storage medium, that performs the operations of the technical solution of the present disclosure.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent, for example, via a pop-up window, in which the prompt information may be presented as text. In addition, the pop-up window may carry a selection control through which the user chooses to "agree" or "disagree" to providing personal information to the electronic device.
It is understood that the above notification and user authorization process is only illustrative and not limiting, and other ways of satisfying relevant laws and regulations may be applied to the implementation of the present disclosure.
Meanwhile, it is understood that the data involved in the present technical solution (including but not limited to the data itself, the acquisition or use of the data) should comply with the requirements of the corresponding laws and regulations and the related regulations.
The embodiment of the disclosure provides a video recording method and device in a virtual scene, a storage medium and electronic equipment. The video recording method under the virtual scene can be used in the field of games and can also be used in the technical field of virtual reality.
In the field of games, a user can enable the video recording function of a game application on an electronic device, so that while the user plays, the function records the posture and surroundings of the game character the user operates in the game scene.
The game application may be a three-dimensional game application, i.e., a three-dimensional electronic game built on three-dimensional computer graphics; compared with a traditional two-dimensional game, it can give the player a more realistic game experience. "Three-dimensional game" here refers to a game made with three-dimensional technology, not to a game whose output picture is stereoscopic.
In the field of virtual reality, when a user wears a VR headset to use a VR application, the user can enable the video recording function of that VR application on the electronic device, so that while the user plays, the function records the posture and surroundings of the virtual character the user embodies in the virtual scene.
A VR application is an interactive application that uses a computer graphics system and various interface devices (for display and control) to provide an immersive experience in an interactive three-dimensional environment generated on a computer. This computer-generated, interactive three-dimensional environment is called a virtual scene (also referred to as a virtual environment). The VR application may provide the user with a human-machine interface through which the user can command the VR device on which the VR application is installed and through which the VR device provides information to the user.
Fig. 1 is a flow diagram illustrating a method for video recording in a virtual scene, in accordance with some embodiments. As shown in fig. 1, an embodiment of the present disclosure provides a video recording method in a virtual scene, where the method may be executed by an electronic device, and specifically, may be executed by a video recording apparatus in a virtual scene, where the apparatus may be implemented by software and/or hardware and configured in the electronic device. As shown in fig. 1, the method may include the following steps.
S110, in response to a video recording instruction, determining a first virtual camera in the virtual scene.
Here, the video recording instruction may be triggered by a preset user operation. For example, the user may trigger it by clicking a record button in the user interface, or by pressing a hardware button of the electronic device; for instance, when the electronic device is a VR device, the user may press a hardware button on the handle. Of course, the video recording instruction may also be triggered in other manners, such as by voice or motion, which are not detailed here.
The electronic device determines a first virtual camera in response to the video recording instruction. The first virtual camera may be a virtual camera already created in the virtual scene, or a virtual camera newly created in the virtual scene. The first virtual camera is a camera simulated in software on the electronic device: a tool for expressing a viewpoint in the three-dimensional virtual environment, used to capture and record picture data of the game scene within the virtual scene. In other words, the first virtual camera is a camera simulated by the video recording program on the electronic device, through which scene pictures can be recorded in the virtual scene. It should be understood that the first virtual camera may be represented by a model in the virtual scene, for example a camera model or a selfie stick model.
In the disclosed embodiments, the virtual scene may be a three-dimensional game scene or a virtual reality scene.
S120, in response to the target operation for the first virtual camera, controlling the first virtual camera to capture the first picture data of the virtual scene under the camera view angle.
Here, the target operation for the first virtual camera may be an operation in which the user moves or rotates the first virtual camera; through the target operation, the first virtual camera can be made to capture pictures at different angles and distances within the virtual scene.
As some examples, the target operation for the first virtual camera may be triggered by the user through virtual buttons in the user interface. FIG. 2 is a schematic diagram illustrating an interface for controlling a first virtual camera to capture images, according to some embodiments. As shown in fig. 2, in the user interface 200, the left side is a control area and the right side shows a virtual scene 201, in which a first virtual camera 202 has been created. The control area of the user interface 200 includes a position control area 203 and a rotation control area 204. The position control area 203 controls the first virtual camera 202 to move in three dimensions (up-down, left-right, front-back), and the rotation control area 204 controls the first virtual camera 202 to rotate in two dimensions (up-down, left-right). It should be understood that the user can control the first virtual camera 202 to capture first picture data of the virtual scene 201 at different angles through touch operations on the position control area 203 and the rotation control area 204.
As other examples, the target operation for the first virtual camera may be triggered by the user through a VR handle. When the video recording function of a VR application is enabled on the electronic device, the user can control the movement and rotation of the first virtual camera by operating the VR handle. For example, the pose of the VR handle is bound to the pose of the first virtual camera, so the camera rotates synchronously as the handle rotates.
It is worth mentioning that the camera view angle is the third-person shooting view angle of the first virtual camera in the virtual scene. The user's target operation on the first virtual camera adjusts the camera's shooting angle, that is, the shooting angle of the camera view. The first picture data may refer to the picture material of the virtual scene presented under the camera view, for example a certain region in a three-dimensional game scene or in a virtual reality scene.
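As a concrete illustration of this control mapping, below is a minimal sketch, assuming a simple yaw/pitch camera model, of how input deltas from the position and rotation control areas of fig. 2 might drive the first virtual camera's pose. All names and conventions are illustrative assumptions, not the patent's implementation.

```cpp
#include <cmath>

// Hypothetical camera pose: position in the virtual scene plus the two
// controllable rotation dimensions of rotation control area 204.
struct CameraPose {
    float x = 0.f, y = 0.f, z = 0.f;  // position in the virtual scene
    float yaw = 0.f, pitch = 0.f;     // radians
};

// Position control area 203: move up-down / left-right / front-back,
// interpreting "forward" relative to the camera's current yaw.
void ApplyPositionControl(CameraPose& cam, float dRight, float dUp, float dForward) {
    cam.x += dForward * std::cos(cam.yaw) - dRight * std::sin(cam.yaw);
    cam.z += dForward * std::sin(cam.yaw) + dRight * std::cos(cam.yaw);
    cam.y += dUp;
}

// Rotation control area 204: rotate left-right (yaw) and up-down (pitch),
// clamping pitch so the camera cannot flip over.
void ApplyRotationControl(CameraPose& cam, float dYaw, float dPitch) {
    cam.yaw += dYaw;
    cam.pitch = std::fmin(1.5f, std::fmax(-1.5f, cam.pitch + dPitch));
}
```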
S130, controlling an audio engine to process an original audio signal of a sounding body in the virtual scene based on first pose information of the first virtual camera and second pose information of the sounding body, to obtain first audio data.
Here, the first audio data is the audio data obtained after the audio engine spatializes the original audio signal emitted by the sounding body, based on the first pose information of the first virtual camera and the second pose information of the sounding body in the virtual scene. The sounding body is an object that produces a specific sound effect in the virtual scene. For example, if a pile of firewood in the virtual scene makes a crackling sound, there is a sounding body at the position of the firewood emitting that crackling sound. It should be understood that the sounding body may be invisible in the virtual scene; it may merely represent the sound event of an object playing a sound effect and need not be a tangible object. The sounding body may also be referred to as an emitter, among other terms.
In some embodiments, while generating the first audio data, the audio engine may also process the original audio signal based on third pose information of the virtual character and the second pose information to generate second audio data.
Here, the virtual character refers to the game character operated by the player, and the original audio signal refers to the original audio file created by a sound designer using a tool such as Audacity. In the audio engine, the virtual character operated by the player corresponds to a second listener, and the audio engine spatializes the original audio signal triggered by the emitter based on the third pose information of the second listener and the second pose information of the emitter to obtain the second audio data. It should be noted that the pose information in the embodiments of the present disclosure (including the first, second, and third pose information) may include position information and orientation information: the position information determines the distance between the second listener and the emitter, and the orientation information determines the relative orientation between them.
Illustratively, spatializing the original audio signal includes distance processing and azimuth processing. Distance processing changes one or more of the loudness, frequency, diffuseness, and degree of focus of the original audio signal according to the distance of the emitter relative to the second listener. Azimuth processing changes the timbre of the original audio signal, among other attributes, according to the orientation of the emitter relative to the second listener.
It should be understood that the spatialized second audio data is effectively a sound signal with six degrees of freedom: forward-backward, left-right, up-down, pitch, roll, and yaw.
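As a rough, self-contained illustration of the distance and azimuth processing just described, the sketch below derives a stereo gain from the emitter's position relative to a listener, assuming inverse-distance attenuation and constant-power panning. Real middleware such as Wwise applies far richer attenuation curves, filtering, and HRTF rendering; every name here is hypothetical.

```cpp
#include <algorithm>
#include <cmath>

struct Pose { float x, y, z; float yaw; };   // position + facing direction
struct StereoGain { float left, right; };

StereoGain Spatialize(const Pose& listener, const Pose& emitter) {
    float dx = emitter.x - listener.x;
    float dz = emitter.z - listener.z;
    float dist = std::sqrt(dx * dx + dz * dz);

    // Distance processing: loudness falls off with distance (clamped near 0).
    float attenuation = 1.0f / std::max(dist, 1.0f);

    // Azimuth processing: angle of the emitter relative to the listener's facing.
    float azimuth = std::atan2(dz, dx) - listener.yaw;

    // Constant-power pan: 0 = fully left channel, 1 = fully right channel.
    float pan = 0.5f * (1.0f + std::sin(azimuth));
    const float kHalfPi = 1.5707963f;
    return { attenuation * std::cos(pan * kHalfPi),
             attenuation * std::sin(pan * kHalfPi) };
}
```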
The second audio data is intended for output through the audio output device; it is the sound the player actually hears through the electronic device. For example, in a virtual reality scene, the second audio data is output through the audio output device of the VR headset and is the sound actually heard by the headset's wearer. When the audio output device used by the player is a pair of headphones, the second audio data undergoes binaural processing and is output through the headphones; when the audio output device is an independent speaker, it undergoes speaker processing and is output through the speaker.
While the electronic device outputs the second audio data through the audio output device, the audio engine also spatializes the original audio signal based on the first pose information of the first virtual camera and the second pose information of the sounding body in the virtual scene to obtain the first audio data.
In the audio engine, a first listener is created, and the fourth pose information of the first listener is bound to the pose of the first virtual camera; that is, the fourth pose information of the first listener equals the first pose information of the first virtual camera. The audio engine spatializes the original audio signal based on the fourth pose information of the first listener and the second pose information to obtain the first audio data. It should be appreciated that the first audio data corresponds to the sound the first virtual camera would hear at its current position and orientation.
Fig. 3 is a schematic diagram illustrating the audio engine generating first audio data and second audio data, according to some embodiments. As shown in fig. 3, when recording a video in a virtual scene, the audio engine 300 produces two audio streams. The first stream: the audio engine 300 spatializes the original audio signal according to the second pose information of the sounding body corresponding to the original audio signal and the third pose information of the second listener to obtain the second audio data; the audio engine 300 sends the second audio data to the audio playback system of the electronic device, which plays it back and outputs it on the audio output device, forming the sound the user actually hears in the virtual scene. The second stream: the audio engine 300 spatializes the original audio signal according to the second pose information of the sounding body and the fourth pose information of the first listener to obtain the first audio data, which is not output through the audio playback system or the audio output device. That is, the first audio data is the sound the first virtual camera hears at its current position and orientation, not the sound the user actually hears in the virtual scene.
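The two audio streams of fig. 3 can then be sketched as rendering the same original signal once per listener, continuing the assumptions of the Spatialize() sketch above: the second listener's mix is forwarded to the playback system, while the first listener's mix, bound to the first virtual camera, is handed only to the recorder.

```cpp
#include <cstddef>
#include <vector>

struct StereoGain { float left, right; };
struct StereoBlock { std::vector<float> left, right; };

// Apply one listener's stereo gain to a block of the original signal.
StereoBlock Render(const std::vector<float>& original, StereoGain g) {
    StereoBlock out{std::vector<float>(original.size()),
                    std::vector<float>(original.size())};
    for (std::size_t i = 0; i < original.size(); ++i) {
        out.left[i]  = original[i] * g.left;
        out.right[i] = original[i] * g.right;
    }
    return out;
}

struct TickOutput {
    StereoBlock firstAudio;   // camera listener's mix -> recorder only
    StereoBlock secondAudio;  // character listener's mix -> playback system
};

// One audio tick; the gains come from Spatialize(listener, emitter) per listener.
TickOutput Tick(const std::vector<float>& original,
                StereoGain cameraListenerGain,
                StereoGain characterListenerGain) {
    return { Render(original, cameraListenerGain),
             Render(original, characterListenerGain) };
}
```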
It is worth noting that in the disclosed embodiments, the audio engine may be Wwise (an audio middleware engine).
S140, the first audio data generated by the audio engine and the first picture data collected by the first virtual camera are obtained.
Here, the electronic device may extract the first audio data from the audio engine and the first picture data captured by the first virtual camera from the game engine.
S150, obtaining a first recorded video based on the first picture data and the first audio data.
Here, the electronic device combines the first picture data and the first audio data according to their shared time axis to form the first recorded video.
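A minimal sketch of this combination step, assuming timestamped frames and chunks and a fixed frame duration, follows; an actual implementation would feed both streams into a container muxer rather than pairing structs, and all structure names here are assumptions.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct VideoFrame { double pts; std::vector<uint8_t> pixels; };  // pts in seconds
struct AudioChunk { double pts; std::vector<float> samples; };

struct RecordedPacket {
    VideoFrame video;
    std::vector<AudioChunk> audio;  // audio covering this frame's interval
};

// Both inputs are assumed sorted by pts, as produced during recording.
std::vector<RecordedPacket> CombineOnTimeAxis(
        const std::vector<VideoFrame>& frames,
        const std::vector<AudioChunk>& chunks,
        double frameDuration) {
    std::vector<RecordedPacket> video;
    std::size_t a = 0;
    for (const VideoFrame& f : frames) {
        RecordedPacket pkt{f, {}};
        // Attach every audio chunk whose timestamp falls inside this frame.
        while (a < chunks.size() && chunks[a].pts < f.pts + frameDuration) {
            pkt.audio.push_back(chunks[a]);
            ++a;
        }
        video.push_back(pkt);
    }
    return video;
}
```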
It should be noted that the sound in the first recorded video is the sound captured from the camera view of the first virtual camera, so it matches the picture data in the first recorded video.
Fig. 4 is a schematic diagram illustrating the recording of a first recorded video according to some embodiments. As shown in fig. 4, in the virtual scene 401, a tank truck 402 serves as the sounding body, emitting a horn sound; the first virtual camera 403 is located on the left side of the tank truck 402, and the virtual character 404 on its right side. Under the camera view of the first virtual camera 403, the horn sound emitted by the tank truck 402 should arrive from the right side of the first virtual camera 403, whereas from the perspective of the virtual character 404 it should arrive from the virtual character's left side. Therefore, the audio engine spatializes the original audio signal (the horn sound) triggered by the sounding body based on the first pose information of the first virtual camera 403 and the second pose information of the sounding body (the tank truck 402) in the virtual scene 401 to obtain the first audio data, so that the sound in the first recorded video matches the first picture data of the virtual scene 401 captured by the first virtual camera 403 under the camera view. Meanwhile, the user still hears realistic sound consistent with the user's own view: the audio engine also spatializes the original audio signal (the horn sound) based on the third pose information of the virtual character 404 and the second pose information of the sounding body (the tank truck 402) in the virtual scene 401 to obtain the second audio data, which is output through the audio output device, so that in the user's current pose the user hears a horn sound consistent with that pose.
Fig. 5 is a schematic diagram illustrating the recording of a first recorded video according to still further embodiments. As shown in fig. 5, when playing a virtual reality game, the user may create a selfie stick 502 (equivalent to a first virtual camera) in the virtual scene 501 to record a video of a fire pile 503 burning in the scene. Because the selfie stick 502 is closer to the fire pile 503 while the virtual character operated by the user is farther from it, the burning sound the selfie stick 502 should actually record is louder than the burning sound the user hears. The audio engine spatializes the original audio signal (the burning sound) triggered by the sounding body based on the first pose information of the selfie stick 502 and the second pose information of the sounding body (the fire pile 503) in the virtual scene 501 to obtain the first audio data, so that the sound in the first recorded video matches the first picture data of the virtual scene 501 captured by the selfie stick 502 under the camera view. Meanwhile, the user still hears realistic sound consistent with the user's own view: the audio engine also spatializes the original audio signal (the burning sound) based on the third pose information of the virtual character (not shown in fig. 5) and the second pose information of the sounding body (the fire pile 503) in the virtual scene 501 to obtain the second audio data, which is output through the audio output device, so that in the user's current pose the user hears a burning sound consistent with that pose.
In this way, the first virtual camera is controlled to capture first picture data of the virtual scene under the camera view angle, the audio engine is controlled to spatialize the original audio signal triggered by the sounding body based on the first pose information of the first virtual camera and the second pose information of the sounding body in the virtual scene to obtain first audio data, and the first audio data and the first picture data are combined to obtain the first recorded video. The sound in the first recorded video therefore matches the picture, reproducing the behavior of a real recording and greatly increasing the realism of videos recorded in a virtual scene.
In some implementations, obtaining the first audio data generated by the audio engine in S140 may include: sampling the first audio data through a first audio output management plug-in to obtain an audio sample signal in pulse code modulation format.
Here, a first Audio output management plug-in (Audio Route) may be provided in the audio engine. The first audio output management plug-in is configured to sample the first audio data generated by the audio engine to obtain an audio sample signal in Pulse Code Modulation (PCM) format, enabling the electronic device to generate the first recorded video based on the sampled audio sample signal and the captured first picture data.
It should be noted that when the first audio output management plug-in samples the first audio data, it actually outputs the first audio data generated by the audio engine to the game engine of the electronic device as an audio stream, so that the first audio data and the first picture data can be combined in the game engine to generate the first recorded video.
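The patent does not spell out the plug-in's interface; the sketch below models the first audio output management plug-in as a capture hook that streams PCM frames to a game-engine sink. The callback shape and all names are assumptions, not the Audio Route plug-in's real API, although audio middleware such as Wwise offers comparable output/capture hooks.

```cpp
#include <functional>
#include <utility>
#include <vector>

struct PcmFrame {
    double pts;                  // position on the recording time axis, seconds
    int sampleRate;              // e.g. 48000
    int channels;                // e.g. 2
    std::vector<float> samples;  // interleaved PCM samples
};

class AudioRoutePlugin {
public:
    using FrameSink = std::function<void(const PcmFrame&)>;

    void SetGameEngineSink(FrameSink sink) { gameEngineSink_ = std::move(sink); }

    // Called by the audio engine each time it finishes mixing a block of the
    // first audio data: the block is forwarded frame-by-frame, not as a file.
    virtual void OnAudioBlockMixed(const PcmFrame& frame) {
        if (gameEngineSink_) gameEngineSink_(frame);
    }

    virtual ~AudioRoutePlugin() = default;

private:
    FrameSink gameEngineSink_;
};
```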
In some implementations, obtaining the first recorded video based on the first picture data and the first audio data in S150 includes:
receiving, through a game engine, the audio sample signal output by the first audio output management plug-in, and combining the audio sample signal and the first picture data to obtain the first recorded video.
Here, the first audio output management plug-in may send the sampled audio sample signal to the game engine, and the game engine combines the audio sample signal and the first picture data according to their time axis to obtain the first recorded video. The first picture data may be captured by the electronic device through the game engine; that is, the game engine creates the first virtual camera, and the first virtual camera captures the first picture data of the virtual scene.
In the game engine, a video recording module may be created, the video recording module configured to control the start and end of video recording, receive the audio sample signal, and combine the audio sample signal with the first picture data, outputting a first recorded video.
In some implementations, the electronic device may further control the first audio output management plug-in to mute the audio sample signal so that the audio sample signal is not sent to an audio playback system.
Here, the mute processing does not adjust the loudness of the audio sample signal sampled by the first audio output management plug-in to 0; rather, it cuts the signal flow itself to 0, so that the first audio output management plug-in simply does not send the audio sample signal to the audio playback system.
It should be understood that when the first audio output management plug-in does not send the audio sample signal to the audio playback system, the audio output device of the electronic device does not output it. In this case, the user hears only the second audio data, not the first audio data.
In this way, muting the audio sample signal prevents the audio output device of the electronic device from outputting the first audio data and the second audio data at the same time, which would cause acoustic interference. Under this arrangement, the sound in the first recorded video recorded by the first virtual camera is the first audio data, while the sound the user actually hears is the second audio data.
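Building on the hypothetical AudioRoutePlugin sketched above, the mute behavior can be illustrated as severing the route to the playback system entirely while the recorder sink keeps receiving full-loudness frames, rather than zeroing the samples:

```cpp
// Sketch only: extends the hypothetical AudioRoutePlugin from the previous
// listing; none of these names come from the patent or a real middleware API.
class MutableAudioRoutePlugin : public AudioRoutePlugin {
public:
    void SetPlaybackSink(FrameSink sink) { playbackSink_ = std::move(sink); }
    void SetMuted(bool muted) { muted_ = muted; }

    void OnAudioBlockMixed(const PcmFrame& frame) override {
        AudioRoutePlugin::OnAudioBlockMixed(frame);  // recorder is always fed
        if (!muted_ && playbackSink_) playbackSink_(frame);
        // Note: no sample is multiplied by zero here. When muted, the frame is
        // simply never sent to the playback system, which is what the patent
        // means by cutting the signal flow rather than the loudness.
    }

private:
    FrameSink playbackSink_;
    bool muted_ = true;  // during recording, mute playback of the camera mix
};
```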
In other implementations, the electronic device may, in response to an audio playing instruction, control the audio output device to output the first audio data and not to output the second audio data, where the second audio data is obtained by the audio engine processing the original audio signal based on the third pose information of the virtual character and the second pose information.
Here, the audio playing instruction is triggered by the user, who can select in the user interface which audio signal to play: for example, the first audio data, the second audio data, or both simultaneously. Outputting the first audio data through the audio output device means that the first audio data is played by the audio playback system of the electronic device and output through the audio output device, forming the sound heard by the user.
The first audio output management plug-in can send the sampled audio sample signal to the audio playback system, so that the audio playback system plays it through the audio output device. When the first audio data is output, the audio output device may be controlled not to output the second audio data, so that the user hears only the first audio data.
It should be noted that, in the above embodiments, the second audio data has been described in detail, and is not described herein again.
In this way, by outputting the first audio data, the user can perceive the actual sound of the recorded video while recording it, and can adjust the recording angle according to that sound to obtain a better first recorded video. In addition, what the first audio output management plug-in sends to the game engine are PCM-format audio samples, i.e., frame-by-frame audio stream data rather than an entire audio file. Therefore, each time the game engine receives one frame of the audio sample signal, it can synchronize that frame with the corresponding first picture data to generate video, which speeds up the generation of the first recorded video.
The above embodiment will be described in detail with reference to fig. 6.
FIG. 6 is a schematic flow diagram illustrating audio data according to some embodiments. As shown in fig. 6, the audio engine 600 spatializes the original audio signal triggered by the sounding body based on the pose information of the first listener (which equals the first pose information of the first virtual camera) and the second pose information of the sounding body in the virtual scene to obtain the first audio data, and processes the original audio signal based on the third pose information of the second listener and the second pose information to obtain the second audio data.
The audio engine 600 outputs the second audio data to the audio playback system, which plays it back and outputs it through the audio output device, forming the sound heard by the user.
For the first audio data, the audio engine 600 samples it through the first audio output management plug-in to form an audio sample signal in PCM format. The first audio output management plug-in mutes the audio sample signal, so it does not send the signal to the audio playback system. It should be understood that without this mute processing, the audio sample signal would necessarily be sent to the audio playback system; that is, any sound signal generated by the audio engine 600 would be output to the audio playback system for playing. In addition, the first audio output management plug-in also sends the audio sample signal to the game engine, which combines the received audio sample signal with the first picture data captured by the first virtual camera to generate the first recorded video.
Fig. 7 is a flow diagram illustrating a method for video recording in a virtual scene according to further embodiments. As shown in fig. 7, the video recording method in the virtual scene may include the following steps:
and S710, controlling the audio engine to split the third audio data into at least one path of audio stream according to the sound effect type of the audio included in the third audio data, wherein each path of audio stream includes the audio of one sound effect type, and the third audio data is obtained by processing the original audio signal of the sound producing body in the virtual scene.
Here, the third audio data may be the first audio data or the second audio data of the above-described embodiment.
The sound effect types of the audio may include background music, skill sounds, speech, horn sounds, vehicle sounds, and the like. The audio engine splits the third audio data into at least one audio stream based on the sound effect types of the audio it contains. For example, if the third audio data includes background music and speech, the audio engine may split it into two audio streams, one for background music and one for speech, where each audio stream includes the audio of one sound effect type.
S720, acquiring a target audio stream belonging to a target sound effect type from the at least one audio stream.
Here, the target sound effect type is the sound effect type that the user wishes to record. For example, when the user needs to record dialogue in a game, the target audio stream belonging to the speech sound effect type can be acquired from the at least one audio stream. There may be one or more target sound effect types, set according to the user's needs.
In some embodiments, the target audio stream in the pulse code modulation format may be obtained by sampling an audio stream belonging to the target sound effect type from the at least one audio stream through a second audio output management plug-in.
The audio engine may be provided with a second audio output management plug-in, and the second audio output management plug-in samples an audio stream belonging to a target sound effect type from at least one audio stream to obtain a target audio stream in a pulse code modulation format.
It should be understood that the functions and principles of the second audio output management plug-in have been described in detail in the above embodiments with respect to the parts of the first audio output management plug-in, and are not described in detail herein.
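Steps S710 and S720 can be sketched together, assuming each audio event carries a sound effect type tag; the enum values and the flat event representation are illustrative assumptions rather than the patent's data model.

```cpp
#include <map>
#include <vector>

enum class SoundType { BackgroundMusic, Skill, Speech, Horn, Vehicle };

struct AudioEvent {
    SoundType type;
    std::vector<float> pcm;  // already-processed third audio data
};

// S710: split the third audio data into one stream per sound effect type.
std::map<SoundType, std::vector<AudioEvent>>
SplitByType(const std::vector<AudioEvent>& thirdAudioData) {
    std::map<SoundType, std::vector<AudioEvent>> streams;
    for (const AudioEvent& e : thirdAudioData) streams[e.type].push_back(e);
    return streams;
}

// S720: keep only the streams matching the user's target sound effect types,
// e.g. recording skill sounds while dropping copyrighted background music.
std::vector<AudioEvent> SelectTargets(
        const std::map<SoundType, std::vector<AudioEvent>>& streams,
        const std::vector<SoundType>& targets) {
    std::vector<AudioEvent> out;
    for (SoundType t : targets) {
        auto it = streams.find(t);
        if (it != streams.end())
            out.insert(out.end(), it->second.begin(), it->second.end());
    }
    return out;
}
```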
S730, generating a second recorded video based on the target audio stream and second picture data, where the second picture data is picture data of the virtual scene captured by a second virtual camera.
Here, the electronic device may, through the game engine, synchronize the target audio stream and the second picture data according to their time axis and generate the second recorded video. Since the target audio stream corresponds to the target sound effect type, the second recorded video includes the sounds the user wishes to record and excludes the sounds the user does not.
For example, when the user records a video of releasing a skill and wants the skill's sound effect to stand out, the user may set the skill sound effect as the target sound effect type for sampling the target audio stream. The generated second recorded video then includes the skill sound effect but excludes other sounds such as background music and ambient sound.
For another example, when the user finds that the background music in the virtual scene is copyrighted, the user may choose not to record it, i.e., exclude background music from the selected target sound effect types. The generated second recorded video then contains no copyrighted background music, avoiding copyright risk when the video is shared or uploaded.
In some embodiments, the game engine may generate the second recorded video based on the PCM-format target audio stream and the second picture data. The second picture data may be captured by a second virtual camera; its capture principle is the same as that of the first picture data and is not repeated here.
The second audio output management plug-in sends the PCM-format target audio stream to the game engine as frame-by-frame audio stream data rather than as an entire audio file. Therefore, each time the game engine receives one frame of the target audio stream, it can synchronize that frame with the corresponding second picture data to generate video, which speeds up the generation of the second recorded video.
In the above embodiment, the electronic device may, in response to a recording instruction, determine a second virtual camera in the virtual scene, control the second virtual camera to capture second picture data of the virtual scene, and control the audio engine to generate the third audio data based on the fourth pose information of the virtual character and the fifth pose information of the sounding body. The electronic device then controls the audio engine to split the third audio data into at least one audio stream according to the sound effect types of the audio it contains, acquires the target audio stream belonging to the target sound effect type from the at least one audio stream, and generates the second recorded video based on the target audio stream and the second picture data.
In this way, by splitting the third audio data and acquiring the target audio stream of the target sound effect type from the resulting audio streams, the user can customize which sounds in the virtual scene are recorded, and copyright risk when sharing videos can be avoided.
Fig. 8 is a block diagram illustrating the connection of modules of a video recording device in a virtual scene, according to some embodiments. As shown in fig. 8, an embodiment of the present disclosure provides an apparatus for recording a video in a virtual scene, where the apparatus 800 includes:
a creating module 801 configured to determine a first virtual camera in a virtual scene in response to a video recording instruction;
a picture acquisition module 802 configured to control the first virtual camera to acquire first picture data of the virtual scene under a camera view angle in response to a target operation for the first virtual camera;
an audio generating module 803, configured to control an audio engine to process an original audio signal of a sounding body in the virtual scene based on first pose information of the first virtual camera and second pose information of the sounding body, so as to obtain first audio data;
an obtaining module 804, configured to obtain the first audio data generated by the audio engine, and obtain the first picture data collected by the first virtual camera;
a video generating module 805 configured to obtain a first recorded video based on the first picture data and the first audio data.
Optionally, the obtaining module 804 is specifically configured to:
sampling the first audio data through a first audio output management plug-in to obtain an audio sample signal in pulse code modulation format.
Optionally, the video generating module 805 is specifically configured to:
receiving, through a game engine, the audio sample signal output by the first audio output management plug-in, and combining the audio sample signal and the first picture data to obtain the first recorded video.
Optionally, the apparatus 800 further comprises:
a mute module configured to control the first audio output management plug-in to mute the audio sample signal so that the audio sample signal is not sent to the audio playback system.
Optionally, the apparatus 800 further comprises:
an output module configured to, in response to an audio playing instruction, control the audio output device to output the first audio data and control the audio output device not to output second audio data, where the second audio data is obtained by the audio engine processing the original audio signal based on the third pose information of the virtual character and the second pose information.
Optionally, the apparatus 800 further comprises:
an audio splitting module configured to control the audio engine to split third audio data into at least one audio stream according to the sound effect types of the audio included in the third audio data, where each audio stream includes the audio of one sound effect type, and the third audio data is obtained by processing an original audio signal of a sounding body in the virtual scene;
the extraction module is configured to acquire a target audio stream belonging to a target sound effect type from the at least one path of audio stream;
a recording module configured to generate a second recorded video based on the target audio stream and second picture data, where the second picture data is picture data of the virtual scene captured by a second virtual camera.
Optionally, the extracting module is specifically configured to:
sampling, through a second audio output management plug-in, the audio stream belonging to the target sound effect type from the at least one audio stream to obtain a target audio stream in pulse code modulation format;
the recording module is specifically configured to:
generating a second recorded video based on the target audio stream in pulse code modulation format and the second picture data.
With respect to the apparatus 800 in the above embodiment, the specific manner in which each module performs operations has been described in detail in the embodiment of the method, and will not be described in detail here.
Referring now to FIG. 9, shown is a block diagram of an electronic device 900 suitable for use in implementing embodiments of the present disclosure. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), VR devices, and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 9 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 9, the electronic device 900 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 901 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 902 or a program loaded from a storage means 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data necessary for the operation of the electronic apparatus 900 are also stored. The processing apparatus 901, the ROM 902, and the RAM 903 are connected to each other through a bus 904. An input/output (I/O) interface 905 is also connected to bus 904.
Generally, the following devices may be connected to the I/O interface 905: input devices 906 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 907 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 908 including, for example, magnetic tape, hard disk, etc.; and a communication device 909. The communication device 909 may allow the electronic apparatus 900 to perform wireless or wired communication with other apparatuses to exchange data. While fig. 9 illustrates an electronic device 900 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 909, or installed from the storage device 908, or installed from the ROM 902. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing apparatus 901.
It should be noted that the computer readable medium of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some implementations, the electronic devices may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determine, in response to a video recording instruction, a first virtual camera in a virtual scene; control, in response to a target operation for the first virtual camera, the first virtual camera to acquire first picture data of the virtual scene under a camera view angle; control an audio engine to process an original audio signal of a sounding body in the virtual scene based on first position and posture information of the first virtual camera and second position and posture information of the sounding body in the virtual scene to obtain first audio data; acquire the first audio data generated by the audio engine and acquire the first picture data acquired by the first virtual camera; and obtain a first recorded video based on the first picture data and the first audio data.
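By way of illustration only, and not as part of the disclosure's embodiments, the following minimal C++ sketch shows how those steps might be orchestrated in a single recording tick; every type and function name here (VirtualCamera, AudioEngine, recordTick, and so on) is hypothetical and not taken from the patent.

    #include <vector>

    struct Pose { float position[3]; float rotation[4]; };  // translation + quaternion

    struct Frame      { std::vector<unsigned char> rgba; }; // one rendered picture
    struct AudioBlock { std::vector<float> samples; };      // a block of audio samples

    class VirtualCamera {
    public:
        Pose pose() const { return pose_; }
        Frame capture() const { return Frame{}; }           // render the scene from this pose
    private:
        Pose pose_{};
    };

    class AudioEngine {
    public:
        // Spatialize the emitter's raw signal for a listener placed at the camera pose.
        AudioBlock process(const Pose& listener, const Pose& emitter,
                           const AudioBlock& raw) const {
            (void)listener; (void)emitter;                  // attenuation/panning omitted here
            return raw;
        }
    };

    struct RecordedVideo {
        std::vector<Frame>      frames;
        std::vector<AudioBlock> audio;
    };

    // One recording tick: grab a picture from the first virtual camera and the
    // matching camera-perspective audio, then append both to the same recording.
    void recordTick(RecordedVideo& video, const VirtualCamera& cam,
                    const AudioEngine& engine, const Pose& emitterPose,
                    const AudioBlock& rawSignal) {
        video.frames.push_back(cam.capture());                                     // first picture data
        video.audio.push_back(engine.process(cam.pose(), emitterPose, rawSignal)); // first audio data
    }

The point of the sketch is only that picture capture and audio spatialization share the same camera pose, which is what keeps sound and picture matched in the resulting recording.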
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including, but not limited to, object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented by software or by hardware. The name of a module does not, in some cases, constitute a limitation on the module itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is merely illustrative of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, a technical solution formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. With regard to the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments relating to the method, and will not be elaborated here.

Claims (10)

1. A video recording method under a virtual scene is characterized by comprising the following steps:
in response to a video recording instruction, determining a first virtual camera in a virtual scene;
in response to a target operation for the first virtual camera, controlling the first virtual camera to acquire first picture data of the virtual scene under a camera view angle;
controlling an audio engine to process an original audio signal of a sounding body in the virtual scene based on first position and posture information of the first virtual camera and second position and posture information of the sounding body in the virtual scene to obtain first audio data;
acquiring the first audio data generated by the audio engine and acquiring the first picture data acquired by the first virtual camera; and
obtaining a first recorded video based on the first picture data and the first audio data.
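By way of illustration only, and not as part of the claim, the following C++ sketch shows one deliberately simple reading of processing an audio signal "based on position and posture information": inverse-distance attenuation plus left/right panning against the camera's right axis. Production audio engines use far richer models (HRTF, occlusion, reverberation); all names here are hypothetical.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Per-channel gains for a listener (the virtual camera) at `cam`, whose
    // unit-length right-hand axis is `camRight`, hearing a sounding body at `src`.
    void stereoGains(Vec3 cam, Vec3 camRight, Vec3 src,
                     float& leftGain, float& rightGain) {
        const Vec3 d{src.x - cam.x, src.y - cam.y, src.z - cam.z};
        const float dist = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        const float atten = 1.0f / (1.0f + dist);        // farther => quieter
        // Bearing of the source: -1 = fully left of the camera, +1 = fully right.
        const float pan = dist > 0.0f
            ? (d.x * camRight.x + d.y * camRight.y + d.z * camRight.z) / dist
            : 0.0f;
        leftGain  = atten * 0.5f * (1.0f - pan);
        rightGain = atten * 0.5f * (1.0f + pan);
    }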
2. The method of claim 1, wherein the acquiring the first audio data generated by the audio engine comprises:
sampling the first audio data through a first audio output management plug-in to obtain an audio sampling signal in a pulse code modulation format.
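By way of illustration only, and not as part of the claim, a hypothetical sketch of the sampling step: the plug-in quantizes the engine's floating-point samples into 16-bit pulse code modulation.

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    std::vector<int16_t> toPcm16(const std::vector<float>& in) {
        std::vector<int16_t> out(in.size());
        for (std::size_t i = 0; i < in.size(); ++i) {
            const float s = std::clamp(in[i], -1.0f, 1.0f); // guard against clipping
            out[i] = static_cast<int16_t>(s * 32767.0f);    // 16-bit quantization
        }
        return out;
    }

The returned buffer would then serve as the audio sampling signal handed onward for muxing.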
3. The method of claim 2, wherein the obtaining a first recorded video based on the first picture data and the first audio data comprises:
receiving, through a game engine, the audio sampling signal output by the first audio output management plug-in, and combining the audio sampling signal and the first picture data to obtain the first recorded video.
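By way of illustration only, and not as part of the claim, one plausible hand-off between the plug-in and the game engine is a thread-safe queue: the plug-in pushes PCM blocks from the audio thread, and the engine's recorder drains them when combining audio with picture data. All names are hypothetical.

    #include <cstdint>
    #include <mutex>
    #include <queue>
    #include <utility>
    #include <vector>

    class PcmQueue {
    public:
        void push(std::vector<int16_t> block) {          // called on the audio thread
            std::lock_guard<std::mutex> lock(m_);
            q_.push(std::move(block));
        }
        bool pop(std::vector<int16_t>& block) {          // called by the engine's recorder
            std::lock_guard<std::mutex> lock(m_);
            if (q_.empty()) return false;
            block = std::move(q_.front());
            q_.pop();
            return true;
        }
    private:
        std::mutex m_;
        std::queue<std::vector<int16_t>> q_;
    };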
4. The method of claim 2, further comprising:
controlling the first audio output management plug-in to mute the audio sampling signal so that the audio sampling signal is not sent to an audio playback system.
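By way of illustration only, and not as part of the claim, muting here can be read as decoupling capture from playback: the same callback keeps feeding samples to the recorder while zeroing what reaches the speakers, so camera-perspective audio is recorded but never heard during recording. A hypothetical sketch:

    #include <cstddef>

    void renderCallback(const float* captured, float* playback, std::size_t n,
                        bool mutePlayback) {
        for (std::size_t i = 0; i < n; ++i) {
            // The recorder path receives `captured` elsewhere; only the
            // playback path is silenced here.
            playback[i] = mutePlayback ? 0.0f : captured[i];
        }
    }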
5. The method of claim 1, further comprising:
in response to an audio playing instruction, controlling an audio output device to output the first audio data and controlling the audio output device not to output second audio data, wherein the second audio data is obtained by the audio engine by processing the original audio signal based on third position and posture information of a virtual character and the second position and posture information.
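By way of illustration only, and not as part of the claim, this implies two parallel mixes: one spatialized for the virtual camera (the recorded first audio data) and one for the virtual character (the second audio data normally heard). A hypothetical selector for which mix reaches the audio output device:

    const float* selectPlaybackMix(const float* cameraMix,
                                   const float* characterMix,
                                   bool audioPlayingInstruction) {
        // On an audio playing instruction, audition the camera mix and
        // suppress the character mix; otherwise play the character mix.
        return audioPlayingInstruction ? cameraMix : characterMix;
    }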
6. The method of claim 1, further comprising:
controlling the audio engine to split third audio data into at least one audio stream according to the sound effect type of the audio included in the third audio data, wherein each audio stream includes the audio of one sound effect type, and the third audio data is obtained by processing the original audio signal of the sounding body in the virtual scene;
acquiring a target audio stream belonging to a target sound effect type from the at least one audio stream; and
generating a second recorded video based on the target audio stream and second picture data, wherein the second picture data is picture data of the virtual scene acquired by a second virtual camera.
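By way of illustration only, and not as part of the claim, the split can be sketched by tagging each sound event with its sound effect type (for example "dialogue", "ambience", "footsteps") and bucketing rendered samples per tag; all names are hypothetical.

    #include <map>
    #include <string>
    #include <vector>

    struct SoundEvent {
        std::string effectType;          // sound effect type of this event
        std::vector<float> samples;      // rendered audio for this event
    };

    // One audio stream per sound effect type.
    std::map<std::string, std::vector<float>>
    splitByEffectType(const std::vector<SoundEvent>& events) {
        std::map<std::string, std::vector<float>> streams;
        for (const SoundEvent& e : events) {
            std::vector<float>& s = streams[e.effectType];
            s.insert(s.end(), e.samples.begin(), e.samples.end());
        }
        return streams;
    }

Under these assumptions, streams["dialogue"] would then be the target audio stream combined with the second virtual camera's picture data.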
7. The method according to claim 6, wherein the acquiring a target audio stream belonging to a target sound effect type from the at least one audio stream comprises:
sampling the audio stream belonging to the target sound effect type from the at least one audio stream through a second audio output management plug-in to obtain the target audio stream in a pulse code modulation format;
and wherein the generating a second recorded video based on the target audio stream and second picture data comprises:
generating the second recorded video based on the target audio stream in the pulse code modulation format and the second picture data.
8. A video recording apparatus under a virtual scene, comprising:
a creating module configured to determine, in response to a video recording instruction, a first virtual camera in a virtual scene;
a picture acquisition module configured to control the first virtual camera to acquire first picture data of the virtual scene under a camera view angle in response to a target operation for the first virtual camera;
an audio generating module configured to control an audio engine to process an original audio signal of a sounding body in the virtual scene based on first position and posture information of the first virtual camera and second position and posture information of the sounding body to obtain first audio data;
an acquisition module configured to acquire the first audio data generated by the audio engine and acquire the first picture data acquired by the first virtual camera; and
a video generation module configured to obtain a first recorded video based on the first picture data and the first audio data.
9. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processing device, implements the steps of the method of any one of claims 1 to 7.
10. An electronic device, comprising:
a storage device having a computer program stored thereon;
a processing device for executing the computer program in the storage device to implement the steps of the method of any one of claims 1 to 7.
CN202211185076.7A 2022-09-27 2022-09-27 Video recording method and device under virtual scene, storage medium and electronic equipment Pending CN115604408A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211185076.7A CN115604408A (en) 2022-09-27 2022-09-27 Video recording method and device under virtual scene, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211185076.7A CN115604408A (en) 2022-09-27 2022-09-27 Video recording method and device under virtual scene, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN115604408A true CN115604408A (en) 2023-01-13

Family

ID=84844354

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211185076.7A Pending CN115604408A (en) 2022-09-27 2022-09-27 Video recording method and device under virtual scene, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN115604408A (en)

Similar Documents

Publication Publication Date Title
EP3590097B1 (en) Virtual and real object recording in mixed reality device
KR102197544B1 (en) Mixed reality system with spatialized audio
CN109874021B (en) Live broadcast interaction method, device and system
EP3440538B1 (en) Spatialized audio output based on predicted position data
WO2022121558A1 (en) Livestreaming singing method and apparatus, device, and medium
JP2020096354A (en) Video headphones, system, platform, methods, apparatuses and media
CN110213616B (en) Video providing method, video obtaining method, video providing device, video obtaining device and video providing equipment
WO2016177296A1 (en) Video generation method and apparatus
CN109660817B (en) Video live broadcast method, device and system
JP5971316B2 (en) INFORMATION PROCESSING SYSTEM, ITS CONTROL METHOD, AND PROGRAM, AND INFORMATION PROCESSING DEVICE, ITS CONTROL METHOD, AND PROGRAM
WO2022134684A1 (en) Interaction method and apparatus based on live streaming application program, and device and storage medium
CN112995759A (en) Interactive service processing method, system, device, equipment and storage medium
CN108600778B (en) Media stream transmitting method, device, system, server, terminal and storage medium
CN112969093A (en) Interactive service processing method, device, equipment and storage medium
CN111628925A (en) Song interaction method and device, terminal and storage medium
EP4113517A1 (en) Method and apparatus for processing videos
CN115604408A (en) Video recording method and device under virtual scene, storage medium and electronic equipment
US11368611B2 (en) Control method for camera device, camera device, camera system, and storage medium
CN111770373B (en) Content synchronization method, device and equipment based on live broadcast and storage medium
CN113709652B (en) Audio play control method and electronic equipment
WO2022220306A1 (en) Video display system, information processing device, information processing method, and program
CN115033201A (en) Audio recording method, device, system, equipment and storage medium
JPWO2018003081A1 (en) Spherical camera captured image display system, method and program
CN115237250A (en) Audio playing method and device, storage medium, client and live broadcasting system
CN118059485A (en) Audio processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination