CN117135393A - Recording processing method and device based on virtual reality and electronic equipment - Google Patents

Recording processing method and device based on virtual reality and electronic equipment

Info

Publication number
CN117135393A
CN117135393A
Authority
CN
China
Prior art keywords
information
virtual reality
user
recording
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210540541.8A
Other languages
Chinese (zh)
Inventor
赵文珲
王璨
吴培培
黄翔宇
施磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202210540541.8A priority Critical patent/CN117135393A/en
Publication of CN117135393A publication Critical patent/CN117135393A/en
Pending legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334Recording operations
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00Speech synthesis; Text to speech systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/91Television signal processing therefor
    • H04N5/92Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N5/9201Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving the multiplexing of an additional signal and the video signal
    • H04N5/9202Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving the multiplexing of an additional signal and the video signal the additional signal being a sound signal

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure relates to a virtual-reality-based recording processing method and apparatus and an electronic device, in the technical field of virtual reality. The method includes: first, acquiring user voice information collected by a microphone device and acquiring media sound information played in a virtual reality scene; mixing and synthesizing the user voice information and the media sound information to obtain audio information; and then generating recorded video information according to the audio information and viewfinder picture information in a virtual reality camera model, wherein the viewfinder picture information is rendered according to the element content corresponding to the shooting range in the virtual reality scene. With this technical solution, a user wearing a VR device in the real environment can experience the feeling of using a camera to record a scene they are participating in, and the recorded video content can contain content the user contributed, such as the user's own voice, improving the user's VR experience.

Description

Recording processing method and device based on virtual reality and electronic equipment
Technical Field
The disclosure relates to the technical field of virtual reality, and in particular relates to a recording processing method and device based on virtual reality and electronic equipment.
Background
With the continuous development of social productivity and science and technology, demand for Virtual Reality (VR) technology is growing across industries. VR technology has made tremendous progress and has gradually become a new field of science and technology.
At present, based on VR technology, users can watch virtual live broadcasts and other video content. For example, after putting on a VR device, a user can enter a virtual concert scene and watch the performance as if present in person.
However, the prior art cannot meet users' need to record the content they participate in while watching VR video, which affects their VR experience.
Disclosure of Invention
In view of the above, the present disclosure provides a virtual-reality-based recording processing method, apparatus and electronic device, mainly aimed at addressing the technical problem that the prior art cannot meet users' need to record the content they participate in while watching VR video, which affects their VR experience.
In a first aspect, the present disclosure provides a recording processing method based on virtual reality, including:
in response to a recording instruction, acquiring user voice information collected by a microphone device and acquiring media sound information played in a virtual reality scene;
mixing and synthesizing the user voice information and the media sound information to obtain audio information;
and generating recorded video information according to the audio information and the view finding picture information in the virtual reality camera model, wherein the view finding picture information is rendered according to element content corresponding to a shooting range in the virtual reality scene.
In a second aspect, the present disclosure provides a recording processing apparatus based on virtual reality, including:
the acquisition module is configured to acquire user voice information acquired by the microphone equipment and acquire media sound information played in the virtual reality scene;
the synthesizing module is configured to carry out audio mixing synthesis on the user voice information and the media sound information to obtain audio information;
the generating module is configured to generate recorded video information according to the audio information and view finding picture information in the virtual reality camera model, wherein the view finding picture information is rendered according to element content corresponding to a shooting range in the virtual reality scene.
In a third aspect, the present disclosure provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the virtual reality based recording processing method of the first aspect.
In a fourth aspect, the present disclosure provides an electronic device, including a storage medium, a processor, and a computer program stored on the storage medium and executable on the processor, where the processor implements the virtual reality-based recording processing method according to the first aspect when executing the computer program.
By means of the above technical solution, the present disclosure provides a virtual-reality-based recording processing method and apparatus and an electronic device that, unlike the prior art, can meet users' need to record the content they themselves participate in while watching VR video. Specifically, when receiving a recording instruction for user voice, the VR device may first obtain the user voice information collected by the microphone device and obtain the media sound information played in the virtual reality scene; mix and synthesize the user voice information and the media sound information to obtain audio information; and then generate recorded video information according to the synthesized audio information and the viewfinder picture information in the virtual reality camera model, where the viewfinder picture information is rendered according to the element content corresponding to the shooting range in the virtual reality scene. With this technical solution, a user wearing a VR device in the real environment can experience the feeling of using a camera to record a scene they are participating in, and the recorded video content can contain content the user contributed, such as the user's own voice, improving the user's VR experience.
The foregoing description is merely an overview of the technical solutions of the present disclosure. In order that the technical means of the present disclosure may be more clearly understood and implemented according to the content of the specification, and in order to make the above and other objects, features and advantages of the present disclosure clearer, specific embodiments of the present disclosure are described below.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings that are required for the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic diagram illustrating a VR device usage procedure provided by an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of a recording processing method based on virtual reality according to an embodiment of the disclosure;
fig. 3 is a schematic flow chart of another recording processing method based on virtual reality according to an embodiment of the disclosure;
fig. 4 is a schematic diagram showing a display example effect of a microphone on state provided by an embodiment of the present disclosure;
fig. 5 is a schematic diagram showing a display example effect of a microphone off state provided by an embodiment of the present disclosure;
fig. 6 shows a schematic structural diagram of a recording processing device based on virtual reality according to an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. It should be noted that, without conflict, the embodiments of the present disclosure and features of the embodiments may be combined with each other.
As shown in fig. 1, a user may enter a virtual reality space through an intelligent terminal device such as a head-mounted VR glasses, and control his or her virtual character (Avatar) in the virtual reality space to perform social interaction, entertainment, learning, remote office, etc. with other user-controlled virtual characters.
The virtual reality space may be a simulation of the real world, a semi-simulated, semi-virtual scene, or a purely virtual scene. The virtual scene may be a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene; this embodiment does not limit the dimension of the virtual scene. For example, a virtual scene may include sky, land, sea, etc.; the land may include environmental elements such as deserts and cities; and the user may control a virtual object to move in the virtual scene.
In one embodiment, in virtual reality space, a user may implement related interactive operations through a controller, which may be a handle, for example, a user performs related operation control through operation of keys of the handle. Of course, in other embodiments, the target object in the virtual reality device may be controlled using a gesture or voice or multi-mode control mode instead of using the controller.
In one embodiment, as virtual reality technology evolves, a performer may use it to stage a virtual reality performance, such as a virtual reality concert, giving viewers who use a virtual reality device an immersive experience similar to a real concert. For example, a virtual reality space model of the performer is built through virtual reality technology, and the virtual reality environment of the concert is computed from that space model. Technologies such as auditory, tactile, motion, taste and olfactory perception can be provided, realizing the fusion of the virtual environment with the simulation of interactive three-dimensional dynamic views and physical behaviors, so that the user is immersed in the simulated virtual reality environment in which the performer performs. When the user puts on the virtual reality device, the user can enter the concert scene, interact with the performer through the related perception technologies, and listen to the music, achieving an immersive, realistic concert experience.
To address the technical problem that the prior art cannot meet users' need to record the content they participate in while watching VR video (such as experiencing and watching a virtual reality concert), which affects users' VR experience, this embodiment provides a virtual-reality-based recording processing method, as shown in fig. 2, which can be applied on the VR device side. The method includes:
step 101, acquiring user voice information acquired by a microphone device and acquiring media sound information played in a virtual reality scene.
A microphone device may be connected to the VR device or may be built into the VR device. During the recording process in this embodiment, the microphone device may collect the voice uttered by the user (the user using the VR device) to obtain the user voice information.
The media sound information played in the virtual reality scene may include: real-time media sounds of the scene and/or chat voices of other users in the scene, etc. For example, a virtual reality concert scene may include the singer's voice, the background music, the sound of the live audience, and the like.
For this embodiment, after a recording instruction is received, the user voice information and the media sound information played in real time in the virtual reality scene may be acquired simultaneously and in parallel, ensuring that both kinds of sound are recorded.
Step 102, mixing and synthesizing the acquired user voice information and the acquired media sound information to obtain audio information.
Before the mixing and synthesis, noise filtering may further be performed on the obtained user voice information and media sound information, for example eliminating background noise from the user voice information collected by the microphone device. The noise-filtered user voice information and media sound information are then mixed and synthesized, reducing the noise that appears in the finally recorded video information.
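The noise-filtering step described above can be sketched as a simple amplitude gate. This is only an illustrative assumption for explanation; the patent does not specify the noise-elimination algorithm, and a real system would more likely use spectral subtraction or a trained noise suppressor.

```python
import numpy as np

def noise_gate(samples: np.ndarray, threshold: float = 0.02) -> np.ndarray:
    """Zero out samples whose amplitude falls below the threshold.

    A crude stand-in for background-noise elimination on the
    microphone signal; the threshold value is hypothetical.
    """
    out = samples.copy()
    out[np.abs(out) < threshold] = 0.0
    return out

# Example: quiet hiss is removed, louder speech samples are kept.
voice = np.array([0.01, -0.015, 0.5, -0.6, 0.005])
cleaned = noise_gate(voice)
```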
And 103, generating recorded video information according to the audio information obtained by mixing and synthesizing and the framing picture information in the virtual reality camera model.
For this embodiment, after entering the virtual reality space, the user may call the camera model when recording the virtual reality scene content is required, and then the camera model may be displayed in the virtual reality space. The camera model may specifically be a preset model related to the photographing device, for example, may be a smart phone model, a self-timer stick camera model, and the like. And the view finding picture information which is rendered according to the element content corresponding to the shooting range in the virtual reality scene can be displayed in the camera model.
For example, a shooting range of a camera model is acquired; then in the virtual reality video, selecting element content corresponding to the shooting range and rendering the element content to textures in real time; finally, the rendered texture map may be placed in a camera model.
The shooting range of the camera refers to the portion of the virtual reality scene the user wants to capture while watching VR video. For this embodiment, parameters controlling the camera's shooting range, such as the field of view (FOV), may be preset. The shooting range can also be adjusted according to the user's needs, so that the desired video content is recorded.
The element content corresponding to the shooting range may include virtual scene content such as character roles, backgrounds, field content, etc. for virtual reality that can be seen in the shooting range. Specifically, a Unity Camera (Camera) tool is used to select element contents corresponding to a shooting range of a Camera model in a virtual reality video and render the element contents to a texture (Render To Texture, RTT). And then placing the rendered texture map in a camera model, so as to display framing picture information in the camera model. The purpose is to allow the user to preview the effect of the selected scene information map before confirming the recording.
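Selecting the element content "corresponding to the shooting range" amounts to a visibility test against the camera's field of view. A minimal 2D sketch of that test is shown below; it is an illustrative simplification, not the Unity Camera frustum culling the text refers to, and the function name and parameters are assumptions.

```python
import math

def in_shooting_range(cam_pos, cam_dir, fov_deg, point):
    """Return True if `point` lies within the camera's horizontal FOV cone.

    cam_pos: camera position (x, y); cam_dir: unit view direction (x, y);
    fov_deg: full field-of-view angle in degrees.
    """
    vx, vy = point[0] - cam_pos[0], point[1] - cam_pos[1]
    norm = math.hypot(vx, vy)
    if norm == 0:
        return True  # element at the camera itself
    cos_angle = (vx * cam_dir[0] + vy * cam_dir[1]) / norm
    return cos_angle >= math.cos(math.radians(fov_deg / 2))

# Camera at the origin looking along +x with a 90-degree FOV:
visible = in_shooting_range((0, 0), (1, 0), 90, (5, 1))
behind = in_shooting_range((0, 0), (1, 0), 90, (-5, 0))
```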
For example, the three-dimensional space position of the camera model is bound with the three-dimensional space position of the virtual role of the user in advance, then the three-dimensional space position currently displayed by the camera model is determined based on the real-time three-dimensional space position of the virtual role of the user, and the camera model is displayed according to the position, so that the effect that the user uses the camera, such as the effect that the user holds the self-timer stick camera by the virtual role of the user, is displayed. In the virtual reality video, element content corresponding to the shooting range of the self-timer rod camera is selected to be rendered to the texture, and then the rendered texture map is placed in the display screen position of the self-timer rod camera, so that a framing picture preview effect similar to that of a real camera before shooting is obtained through simulation.
When the user has framed the content to be recorded through the camera model's viewfinder, the recording function can be triggered; the recorded video information is obtained by recording the real-time viewfinder picture information in the camera model together with the audio information obtained in step 102. The video information may include the user's own voice, a selfie picture captured in the virtual reality scene, and the like.
It should be noted that the recording process in this embodiment differs from VR video recording in the traditional sense. In this virtual recording mode, the VR scene element content within the selected range is rendered to a texture in real time and attached to the camera model, and the recorded video information is then generated from these texture maps together with the user voice and the media sounds played in the virtual reality scene. No physical camera sensors are needed, so the picture quality of the recorded video information can be guaranteed. In addition, as the camera model moves, the VR scene element content within the moving shooting range is presented in the camera model in real time, and the displayed video picture is not affected by factors such as shaking of the camera model. This closely simulates the feel of real-world recording.
Compared with the prior art, this embodiment can meet users' need to record the content they participate in while watching VR video. A user wearing a VR device in the real environment can experience recording a scene with a camera while participating in it, and the recorded video content can contain content the user contributed, such as the user's own voice, improving the user's VR experience.
Further, as a refinement and extension of the foregoing embodiment, in order to fully describe a specific implementation procedure of the method of the present embodiment, the present embodiment provides a specific method as shown in fig. 3, where the method includes:
step 201, acquiring user voice information acquired by a microphone device and acquiring media sound information played in a virtual reality scene.
Optionally, the process of obtaining the media sound information played in the virtual reality scene may specifically include: and acquiring the media audio information played in the virtual reality scene by analyzing the video stream of the virtual reality video. In this alternative, unlike the manner of capturing by the microphone device, the media audio information played in the virtual reality scene can be accurately parsed in real time from the video stream of the virtual reality video without using device hardware such as a microphone for traditional audio capturing. And no mutual interference exists between the process of acquiring the media sound information and the process of acquiring the user voice information.
In practical applications, the media sound information may contain sounds from multiple sound sources, such as the sound of virtual character a, the sound of virtual character b, the venue's background sound, special sound effects in the scene, and so on. Therefore, to meet the user's personalized needs during video recording, obtaining the media sound information played in the virtual reality scene by parsing the video stream of the virtual reality video may specifically include: first, obtaining the audio data in the video stream by parsing the video stream; then extracting the audio information of the target sound sources (such as at least one sound source the user wants to record) from the audio data as the acquired media sound information. For example, at a virtual reality concert, the voice uttered by the user and the sound of the singer can be selected for mixing and synthesis, achieving the effect of the user singing along with the singer.
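The target-sound-source extraction described above can be sketched as selecting and summing per-source tracks. The per-source dictionary layout is a hypothetical simplification of demultiplexing the video stream's audio; real streams would need actual source separation or per-track metadata.

```python
import numpy as np

def extract_sources(tracks: dict, targets: list) -> np.ndarray:
    """Sum only the per-source tracks the user wants in the recording.

    `tracks` maps a source name to an equal-length sample array;
    names and layout are illustrative assumptions.
    """
    selected = [tracks[name] for name in targets if name in tracks]
    return np.sum(selected, axis=0)

tracks = {
    "singer": np.array([0.2, 0.3]),
    "audience": np.array([0.1, 0.1]),
    "effects": np.array([0.05, 0.0]),
}
media = extract_sources(tracks, ["singer"])  # keep only the singer's voice
```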
In practical applications, the user may not need to record his own voice. Thus, to ensure accurate voice recording, step 201 may specifically include: acquiring the starting state of microphone equipment; if the microphone device is turned on, acquiring user voice information acquired by the microphone device and media sound information played in the virtual reality scene, and then executing a process shown in step 202; if the microphone device is not turned on, media sound information played in the virtual reality scene is obtained, and accordingly, in step 202, mixing and synthesizing the user voice information and the media sound information to obtain audio information, which may specifically include: if the microphone device is not started, the audio information is obtained according to the media sound information. By the alternative mode, the actual recording requirement of the user can be met, and whether the sound made by the user is recorded in the video information can be selected.
Further, if the microphone device is not turned on, the user may turn it on during video recording in order to add their voice to the recording. To avoid recording errors, optionally, before step 201, the method further includes: acquiring the current recording state; and if currently in the pre-recording state, allowing the recording instruction for user voice to be input. For example, as shown in fig. 4, whether the microphone is on can be determined from the microphone state shown in the camera model; the microphone is on by default, and the user can control whether to enable it for recording through the function button. If recording is already in progress, the recording instruction for user voice is not allowed to be input. For example, as shown in fig. 5, during video recording, if the microphone was not turned on beforehand, it stays off and the user is not allowed to turn it on mid-recording, avoiding recording errors such as a sonic boom when the microphone is switched during recording. This optional approach ensures that video information containing the user's contribution is recorded accurately.
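The microphone-state gating just described can be sketched as a small branch in the mixing step: when the microphone is off, the audio information is derived from the media sound alone. The 50/50 mixing weights here are a placeholder assumption.

```python
def build_audio(mic_on: bool, user_voice, media_sound):
    """Mix user voice with media sound only when the microphone is on.

    Inputs are equal-length sample sequences; the equal mixing weights
    are illustrative (step 202 uses a preset volume ratio).
    """
    if mic_on:
        return [0.5 * u + 0.5 * m for u, m in zip(user_voice, media_sound)]
    return list(media_sound)  # mic off: record the media sound alone

with_voice = build_audio(True, [1.0], [0.4])
without_voice = build_audio(False, [1.0], [0.4])
```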
Step 202, mixing and synthesizing the user voice information and the media sound information according to a preset volume ratio to obtain audio information.
The preset volume ratio can be configured according to actual requirements. In this embodiment, mixing the user voice information and the media sound information at a preset volume ratio ensures that the media sound and the recorded user voice blend harmoniously. A default ratio may be used, or the user may adjust the ratio according to their own needs, for example to achieve a chorus or sing-along effect.
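The volume-ratio mixing can be sketched as a weighted sum of the two streams, clipped to the valid sample range. This is a minimal sketch; the patent does not specify the mixing formula, and the parameter names are assumptions.

```python
import numpy as np

def mix(user_voice: np.ndarray, media_sound: np.ndarray,
        voice_ratio: float = 0.5) -> np.ndarray:
    """Weighted mix of two equal-length streams, clipped to [-1, 1].

    voice_ratio is the preset volume proportion given to the user's
    voice; the remainder goes to the media sound.
    """
    mixed = voice_ratio * user_voice + (1.0 - voice_ratio) * media_sound
    return np.clip(mixed, -1.0, 1.0)

voice = np.array([0.8, -0.4])
media = np.array([0.2, 0.6])
# Boost the user's voice for a sing-along effect:
audio = mix(voice, media, voice_ratio=0.7)
```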
Step 203, generating recorded video information according to the audio information and the framing picture information in the virtual reality camera model.
Optionally, step 203 may specifically include: in the virtual reality video, element content corresponding to a shooting range is selected to be rendered to textures in real time, and the rendered texture map is placed in a camera model, wherein the shooting range is a static or dynamic shooting range; based on the texture map corresponding to each point in time during the recording period and the audio information (the audio information obtained in step 202), recorded video information is generated.
For example, a user may control the camera model's shooting range with a control handle (or hand motion, etc.): moving the handle left, right, up or down can trigger the camera model, together with its shooting range, to follow left, right, up or down; moving the handle forward or backward can trigger adjustment of the camera tool's shooting focal length; and rotating the handle can trigger the camera model, together with its shooting range, to rotate. This optional approach lets the user control shooting conveniently and improves shooting efficiency.
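The handle-to-camera mapping above can be sketched as updates to a small camera pose record. The field names (`pos`, `focal`, `yaw`) and the motion encoding are illustrative assumptions, not taken from the patent.

```python
def apply_handle_motion(cam: dict, motion: dict) -> dict:
    """Map controller-handle motion onto the camera model's pose.

    `cam` holds 'pos' (x, y, z), 'focal' (mm) and 'yaw' (degrees);
    `motion` may carry 'translate', 'push' and 'rotate' components.
    """
    dx, dy, dz = motion.get("translate", (0.0, 0.0, 0.0))
    x, y, z = cam["pos"]
    cam["pos"] = (x + dx, y + dy, z + dz)    # handle moves: pan the camera
    cam["focal"] += motion.get("push", 0.0)  # forward/back: adjust focal length
    cam["yaw"] += motion.get("rotate", 0.0)  # wrist rotation: turn the camera
    return cam

cam = {"pos": (0.0, 1.5, 0.0), "focal": 35.0, "yaw": 0.0}
cam = apply_handle_motion(cam, {"translate": (0.1, 0.0, 0.0), "push": 5.0})
```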
According to the embodiment, the recorded video picture content can be obtained through the texture mapping information, the recorded video sound content can be obtained through audio information synthesized through audio mixing, and further the recorded video information can be generated.
After step 203, the method of this embodiment further includes: outputting a prompt that recording succeeded, for example prompting that the recorded video has been saved successfully and displaying the directory where it was saved; and/or playing a preview of the recorded video information, for example in response to the user's play selection, so that the user can check whether the recording contains noise or other sound-quality defects and, if needed, record again to obtain better-quality video.
To ensure the preview effect of the recorded video, playing a preview of the recorded video information may include: playing the recorded video information in a preset area of the virtual reality space while reducing the media volume of the currently playing virtual reality scene to a preset volume threshold (for example, 0 or some other threshold, so that an overly loud media volume does not interfere with the preview). The preset area may include: a preset interface area (such as a pop-up window area or another specific interface area) or the preset viewfinder area of the camera model. For example, the viewfinder area used for framing before and during recording can afterwards be used for the video preview, so that the user experiences recording much as on a smartphone, but within the virtual reality environment, further enhancing the user's VR experience.
In step 204, in response to a sharing instruction, the recorded video information is shared to a target platform, shared through a server to a designated user in the contact list, or shared to users corresponding to other virtual objects in the same virtual reality space.
The target platform may be, for example, a social platform, where the user or other users can access the recorded video information; sharing to a designated user means sending the recorded video information, via the server, to friends the user designates.
For example, the user can view the other users currently in the same room and select among them to share the recorded video information; alternatively, the user can select another virtual object in the same VR scene by gaze focus, handle ray, or similar means and share the recorded video information to that virtual object, whereupon the system looks up the corresponding target user by the virtual object's identifier and forwards the shared video, achieving the sharing purpose.
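The avatar-based sharing path, resolving a selected virtual object to its user and forwarding the video, can be sketched server-side as follows. The registry mapping and outbox list are stand-ins I am assuming for the server's user registry and delivery queue:

```python
def share_to_virtual_object(object_to_user: dict, outbox: list,
                            virtual_object_id: str, video: str) -> bool:
    """Resolve a selected virtual object (avatar) to its target user and
    forward the recorded video, as described in the sharing step."""
    target_user = object_to_user.get(virtual_object_id)
    if target_user is None:
        return False  # no user behind that avatar; nothing to share
    outbox.append({"to": target_user, "video": video})
    return True
```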
To illustrate the specific implementation of the above embodiments, the following application scenario is given by way of example and not limitation:
A user wears a VR device, enters a virtual concert scene, and watches the performance. When the user wants to record video, the camera model can be invoked and displayed in the virtual reality space. While the camera model records the picture content of the concert scene, triggering the corresponding recording instruction causes the microphone device to record the user's voice into the video as well.
On the VR device side, the user voice information collected by the microphone device is acquired, and the media sound information played in the virtual reality scene is obtained by parsing the video stream of the virtual reality video; the user voice information and the media sound information are mixed and synthesized according to a preset volume ratio to obtain the audio information; and the recorded video information is then generated from the audio information and the viewfinder picture information in the virtual reality camera model.
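The mix-synthesis at a preset volume ratio can be sketched over raw sample lists. This is a minimal illustration (function name, gains, and the float-sample representation are assumptions; a real pipeline would operate on PCM buffers at a shared sample rate):

```python
def mix_at_ratio(voice: list, media: list, voice_gain: float = 0.6,
                 media_gain: float = 0.4) -> list:
    """Mix user-voice and media-sound samples at a preset volume ratio.
    Samples are floats in [-1.0, 1.0]; the shorter stream is zero-padded,
    and the mixed result is clipped back into the valid range."""
    n = max(len(voice), len(media))
    voice = voice + [0.0] * (n - len(voice))
    media = media + [0.0] * (n - len(media))
    mixed = [voice_gain * v + media_gain * m for v, m in zip(voice, media)]
    return [max(-1.0, min(1.0, s)) for s in mixed]
```

The preset volume ratio of the text corresponds to the `voice_gain` / `media_gain` pair; emphasizing the user's voice over the concert audio is one plausible default.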
In this way, the user can record video content that includes the singer, or content of interactions with other user roles at the virtual concert, meeting the user's need to record content they themselves participate in while watching VR video. A user in the virtual reality environment thus experiences recording a scene they take part in, much as with a real camera, and the recorded video can contain the content the user participated in, improving the user's VR experience.
Further, as a specific implementation of the methods shown in fig. 2 and fig. 3, this embodiment provides a recording processing device based on virtual reality. As shown in fig. 6, the device includes: an obtaining module 31, a synthesizing module 32, and a generating module 33.
An obtaining module 31 configured to obtain user voice information collected by the microphone device and obtain media sound information played in the virtual reality scene;
a synthesizing module 32 configured to mix and synthesize the user voice information and the media sound information to obtain audio information;
the generating module 33 is configured to generate recorded video information according to the audio information and view finding picture information in the virtual reality camera model, where the view finding picture information is rendered according to element content corresponding to a shooting range in the virtual reality scene.
In a specific application scenario, the obtaining module 31 is specifically configured to obtain media audio information played in the virtual reality scenario by parsing a video stream of the virtual reality video.
In a specific application scenario, the obtaining module 31 is specifically further configured to obtain audio data in the video stream by parsing the video stream; and extracting the audio information of the target sound source from the audio data as the media sound information.
In a specific application scenario, the generating module 33 is specifically configured to select, in the virtual reality video, element content corresponding to a shooting range to render to a texture in real time, and place the rendered texture map in the camera model, where the shooting range is a static or dynamic shooting range; and generating the recorded video information based on the texture map and the audio information corresponding to each time point in the recording time period.
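The generating module's final step, pairing the rendered texture map at each time point in the recording period with the audio for the same time point, can be sketched as follows. The function name and the timestamp-keyed dictionaries are assumed stand-ins for the module's internal buffers:

```python
def assemble_recording(frames: dict, audio: dict) -> list:
    """Pair each recording-time point's rendered texture map with the mixed
    audio for that time point, producing the recorded-video timeline.
    Only time points that have both a frame and audio enter the result."""
    timeline = sorted(set(frames) & set(audio))
    return [{"t": t, "frame": frames[t], "audio": audio[t]} for t in timeline]
```

A real implementation would hand these paired streams to a muxer/encoder; the sketch keeps only the time-alignment logic the text describes.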
In a specific application scenario, the synthesizing module 32 is specifically configured to mix and synthesize the user voice information and the media sound information according to a preset volume ratio, so as to obtain audio information.
In a specific application scenario, the obtaining module 31 is further configured to, before acquiring the user voice information collected by the microphone device and the media sound information played in the virtual reality scene, acquire the current recording state; if the current state is the pre-recording state, the recording instruction is allowed to be input; if the current state is the recording state, the recording instruction is not allowed to be input.
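The state gate just described can be sketched as a tiny state machine; the enum and function names are illustrative assumptions:

```python
from enum import Enum

class RecState(Enum):
    PRE_RECORDING = "pre_recording"  # idle, ready to accept a recording instruction
    RECORDING = "recording"          # capture already in progress

def try_start_recording(state: RecState) -> tuple:
    """Allow the recording instruction only in the pre-recording state;
    a repeated start while already recording is rejected."""
    if state is RecState.PRE_RECORDING:
        return RecState.RECORDING, True
    return state, False  # already recording: instruction not allowed
```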
In a specific application scenario, the device further includes: a post-processing module;
the post-processing module is configured to output prompt information of successful recording after the recorded video information is generated according to the audio information and the framing picture information in the virtual reality camera model; and/or playing and previewing the recorded video information.
In a specific application scenario, the post-processing module is specifically configured to play the recorded video information in a preset area in the virtual reality space, and simultaneously reduce the media volume of the currently played virtual reality scenario to a preset volume threshold, where the preset area includes: a preset interface area or a preset viewfinder area of the camera model.
In a specific application scenario, the device further includes: a sharing module;
and the sharing module is configured to respond to a sharing instruction after the recorded video information is generated according to the audio information and the view finding picture information in the virtual reality camera model, and share the recorded video information to a target platform, or share the video information to a designated user in a contact list through a server, or share the video information to users corresponding to other virtual objects in the same virtual reality space.
In a specific application scenario, the obtaining module 31 is specifically further configured to obtain an on state of the microphone device; if the microphone equipment is started, acquiring user voice information acquired by the microphone equipment and acquiring media sound information played in a virtual reality scene; if the microphone equipment is not started, acquiring media sound information played in a virtual reality scene;
correspondingly, the synthesizing module 32 is specifically further configured to obtain the audio information according to the media sound information if the microphone device is not turned on.
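The microphone-state branch handled jointly by the obtaining and synthesizing modules can be sketched in one function (names and the float-sample representation are assumptions, as before):

```python
def capture_audio(mic_on: bool, mic_samples: list, media_samples: list,
                  voice_gain: float = 0.5, media_gain: float = 0.5) -> list:
    """Branch on the microphone's on state: when the microphone is on, mix
    the user's voice with the scene's media sound; when it is off, the
    recording's audio is the media sound information alone."""
    if not mic_on:
        return list(media_samples)
    n = max(len(mic_samples), len(media_samples))
    mic = mic_samples + [0.0] * (n - len(mic_samples))
    media = media_samples + [0.0] * (n - len(media_samples))
    return [voice_gain * v + media_gain * m for v, m in zip(mic, media)]
```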
It should be noted that, for other descriptions of the functional units of the virtual reality-based recording processing device provided in this embodiment, reference may be made to the corresponding descriptions of fig. 2 and fig. 3, which are not repeated here.
Based on the above-mentioned methods shown in fig. 2 and 3, correspondingly, the present embodiment further provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the above-mentioned recording processing method based on virtual reality shown in fig. 2 and 3.
Based on such an understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the method of each implementation scenario of the present disclosure.
Based on the methods shown in fig. 2 and fig. 3 and the virtual device embodiment shown in fig. 6, in order to achieve the above objects, the embodiments of the present disclosure further provide an electronic device, which may specifically be a virtual reality device, such as a VR headset, where the device includes a storage medium and a processor; a storage medium storing a computer program; a processor for executing a computer program to implement the virtual reality-based recording processing method as shown in fig. 2 and 3.
Optionally, the entity device may further include a user interface, a network interface, a camera, a Radio Frequency (RF) circuit, a sensor, an audio circuit, a WI-FI module, and so on. The user interface may include a Display screen (Display), an input unit such as a Keyboard (Keyboard), etc., and the optional user interface may also include a USB interface, a card reader interface, etc. The network interface may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), etc.
It will be appreciated by those skilled in the art that the physical device structure provided in this embodiment does not limit the physical device, which may include more or fewer components, combine certain components, or arrange the components differently.
The storage medium may also include an operating system and a network communication module. The operating system is a program that manages the hardware and software resources of the physical device described above and supports the execution of the information processing program and other software and/or programs. The network communication module is used to implement communication among the components inside the storage medium, as well as communication with other hardware and software in the information processing entity device.
From the above description of the embodiments, it will be apparent to those skilled in the art that the present disclosure may be implemented by means of software plus a necessary general-purpose hardware platform, or by hardware. Compared with the prior art, applying the scheme of this embodiment meets the user's need to record content they themselves participate in while watching VR video. A user in the virtual reality environment experiences recording a scene they take part in, much as with a real camera, and the recorded video content can contain the content the user participated in, such as speech spoken by the user and pictures shot in the virtual reality scene, improving the user's VR experience.
It should be noted that in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely a specific embodiment of the disclosure to enable one skilled in the art to understand or practice the disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown and described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (13)

1. The recording processing method based on virtual reality is characterized by comprising the following steps:
acquiring user voice information acquired by microphone equipment and acquiring media sound information played in a virtual reality scene;
mixing and synthesizing the user voice information and the media sound information to obtain audio information;
and generating recorded video information according to the audio information and the view finding picture information in the virtual reality camera model, wherein the view finding picture information is rendered according to element content corresponding to a shooting range in the virtual reality scene.
2. The method of claim 1, wherein the obtaining media sound information played in the virtual reality scene comprises:
and acquiring the media audio information played in the virtual reality scene by analyzing the video stream of the virtual reality video.
3. The method according to claim 2, wherein the obtaining media audio information played in the virtual reality scene by parsing a video stream of the virtual reality video specifically includes:
acquiring audio data in the video stream by analyzing the video stream;
and extracting the audio information of the target sound source from the audio data as the media sound information.
4. The method of claim 1, wherein generating recorded video information from the audio information and viewfinder information in a virtual reality camera model comprises:
in the virtual reality video, element content corresponding to a shooting range is selected to be rendered to textures in real time, and the rendered texture map is placed in the camera model, wherein the shooting range is a static or dynamic shooting range;
and generating the recorded video information based on the texture map and the audio information corresponding to each time point in the recording time period.
5. The method of claim 1, wherein the mixing and synthesizing the user voice information and the media sound information to obtain audio information comprises:
and mixing and synthesizing the user voice information and the media sound information according to a preset volume proportion to obtain audio information.
6. The method of claim 1, wherein prior to the acquiring the user voice information collected by the microphone device and the media sound information played in the virtual reality scene, the method further comprises:
acquiring the current recording state;
if the current state is in the pre-recording state, allowing the recording instruction of the user voice to be input;
if the recording is in the recording state, the recording instruction of the user voice is not allowed to be input.
7. The method of claim 1, wherein after generating the recorded video information from the audio information and the viewfinder information in the virtual reality camera model, the method further comprises:
outputting prompt information of successful recording; and/or the number of the groups of groups,
and playing and previewing the recorded video information.
8. The method of claim 7, wherein the playing and previewing the recorded video information comprises:
playing the recorded video information in a preset area in the virtual reality space, and simultaneously reducing the media volume of the currently played virtual reality scene to a preset volume threshold, wherein the preset area comprises: a preset interface area or a preset viewfinder area of the camera model.
9. The method of claim 1, wherein after generating the recorded video information from the audio information and the viewfinder information in the virtual reality camera model, the method further comprises:
and responding to the sharing instruction, sharing the recorded video information to a target platform, or sharing the video information to a designated user in a contact list through a server, or sharing the video information to users corresponding to other virtual objects in the same virtual reality space.
10. The method according to any one of claims 1 to 9, wherein the acquiring the user voice information collected by the microphone device and acquiring the media sound information played in the virtual reality scene includes:
acquiring the starting state of the microphone equipment;
if the microphone equipment is started, acquiring user voice information acquired by the microphone equipment and acquiring media sound information played in a virtual reality scene;
if the microphone equipment is not started, acquiring media sound information played in a virtual reality scene;
the step of performing audio mixing synthesis on the user voice information and the media sound information to obtain audio information comprises the following steps:
and if the microphone equipment is not started, obtaining the audio information according to the media sound information.
11. A recording processing apparatus based on virtual reality, comprising:
the acquisition module is configured to acquire user voice information acquired by the microphone equipment and acquire media sound information played in the virtual reality scene;
the synthesizing module is configured to carry out audio mixing synthesis on the user voice information and the media sound information to obtain audio information;
the generating module is configured to generate recorded video information according to the audio information and view finding picture information in the virtual reality camera model, wherein the view finding picture information is rendered according to element content corresponding to a shooting range in the virtual reality scene.
12. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the method of any one of claims 1 to 10.
13. An electronic device comprising a storage medium, a processor and a computer program stored on the storage medium and executable on the processor, characterized in that the processor implements the method of any one of claims 1 to 10 when executing the computer program.
CN202210540541.8A 2022-05-17 2022-05-17 Recording processing method and device based on virtual reality and electronic equipment Pending CN117135393A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210540541.8A CN117135393A (en) 2022-05-17 2022-05-17 Recording processing method and device based on virtual reality and electronic equipment


Publications (1)

Publication Number Publication Date
CN117135393A true CN117135393A (en) 2023-11-28

Family

ID=88858697

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210540541.8A Pending CN117135393A (en) 2022-05-17 2022-05-17 Recording processing method and device based on virtual reality and electronic equipment

Country Status (1)

Country Link
CN (1) CN117135393A (en)

Similar Documents

Publication Publication Date Title
US11030987B2 (en) Method for selecting background music and capturing video, device, terminal apparatus, and medium
WO2019128787A1 (en) Network video live broadcast method and apparatus, and electronic device
CN105450642A (en) Data processing method based on on-line live broadcast, correlation apparatus and system
TW202007142A (en) Video file generation method, device, and storage medium
CN112637622A (en) Live broadcasting singing method, device, equipment and medium
JP2014127987A (en) Information processing apparatus and recording medium
WO2018135343A1 (en) Information processing apparatus, information processing method, and program
CN108449632B (en) Method and terminal for real-time synthesis of singing video
CN114071180A (en) Live broadcast room display method and device
CN111530088B (en) Method and device for generating real-time expression picture of game role
KR20150105058A (en) Mixed reality type virtual performance system using online
JP2013093840A (en) Apparatus and method for generating stereoscopic data in portable terminal, and electronic device
WO2021143574A1 (en) Augmented reality glasses, augmented reality glasses-based ktv implementation method and medium
CN106604147A (en) Video processing method and apparatus
CN106686463A (en) Video role replacing method and apparatus
KR20230021640A (en) Customize soundtracks and hair styles in editable videos for multimedia messaging applications
JP2006039917A (en) Information processing apparatus and method, recording medium, and program
CN114531564A (en) Processing method and electronic equipment
TW201917556A (en) Multi-screen interaction method and apparatus, and electronic device
WO2023174009A1 (en) Photographic processing method and apparatus based on virtual reality, and electronic device
CN105094823B (en) A kind of method and apparatus for generating interface of input method
CN116761009A (en) Video playing method and device in meta-universe panoramic live broadcast scene and live broadcast system
CN115050228B (en) Material collection method and device and electronic equipment
KR20200028830A (en) Real-time computer graphics video broadcasting service system
CN106331525A (en) Realization method for interactive film

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination