CN115242980B - Video generation method and device, video playing method and device and storage medium

Info

Publication number: CN115242980B
Application number: CN202210867273.0A
Authority: CN (China)
Prior art keywords: data, video, shooting, information, scene
Legal status: Active
Inventor: 蒋俊磊
Current Assignee: Ping An Life Insurance Company of China Ltd
Original Assignee: Ping An Life Insurance Company of China Ltd
Other languages: Chinese (zh)
Other versions: CN115242980A

Application CN202210867273.0A filed by Ping An Life Insurance Company of China Ltd
Priority to CN202210867273.0A
Publication of CN115242980A
Application granted
Publication of CN115242980B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The embodiments of the present application provide a video generation method and apparatus, a video playing method and apparatus, and a storage medium, belonging to the technical field of artificial intelligence. The method comprises the following steps: acquiring scene information and azimuth information; extracting control data from a preset control resource library according to the scene information and the azimuth information; controlling a preset virtual object according to the control data to execute a preset operation, the preset operation including at least one of a guiding operation and an explaining operation; acquiring shooting parameters of the shooting process according to the scene information and the azimuth information; combining the shooting data captured under the shooting parameters with recording data to obtain video shooting data, the recording data being data obtained by recording the virtual object executing the preset operation; generating preliminary video data according to the video shooting data and the shooting parameters; and writing the preliminary video data into a preset file according to the scene information to obtain target video data. The method and apparatus simplify the shooting of VR reception scenes and shorten shooting time.

Description

Video generation method and device, video playing method and device and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, and in particular, to a video generating method and apparatus, a video playing method and apparatus, and a storage medium.
Background
Shooting resources for a Virtual Reality (VR) video reception scene requires professionals using professional photographic equipment, and after shooting is completed the video resources must be presented in the form of a web page. Because VR video reception scene resources are produced through professionals, professional equipment, and web page production, they are complex to acquire, time-consuming, and complicated to make.
Disclosure of Invention
The main purpose of the embodiments of the present application is to provide a video generation method and apparatus, a video playing method and apparatus, and a storage medium, aiming to simplify the acquisition and production of VR reception scene resources and reduce production time.
To achieve the above object, a first aspect of the embodiments of the present application proposes a video generation method, including:
acquiring a scene switching request, wherein the scene switching request includes scene information and azimuth information;
extracting control data from a preset control resource library according to the scene information and the azimuth information;
controlling a preset virtual object to execute a preset operation according to the control data, wherein the preset operation includes at least one of a guiding operation and an explaining operation;
acquiring shooting parameters of the shooting process according to the scene information and the azimuth information;
combining shooting data and recording data obtained by shooting according to the shooting parameters to obtain video shooting data, the recording data being data obtained by recording the virtual object executing the preset operation;
generating preliminary video data according to the video shooting data and the shooting parameters; and
writing the preliminary video data into a preset file according to the scene information to obtain target video data.
In some embodiments, the control data includes target scene guide information and target scene explanation content, and the extracting of the control data from the preset control resource library according to the scene information and the azimuth information includes:
extracting original scene guide information and original scene explanation content from the preset control resource library according to the scene information;
screening the original scene guide information according to the azimuth information to obtain the target scene guide information; and
screening the original scene explanation content according to the azimuth information to obtain the target scene explanation content.
In some embodiments, the shooting parameters include a shooting angle and a shooting time, and the generating of the preliminary video data according to the video shooting data and the shooting parameters includes:
extracting video segment data from the video shooting data according to the shooting time;
screening a target angle from the shooting angles according to the shooting time;
screening the video segment data according to the target angle to obtain preliminary video segments; and
merging the preliminary video segments to obtain the preliminary video data.
In some embodiments, the writing of the preliminary video data into a preset file according to the scene information to obtain the target video data includes:
acquiring video information of the preliminary video data;
screening the preliminary video data according to the video information and the scene information to obtain target video segments; and
merging the target video segments and writing them into the preset file to obtain the target video data.
In some embodiments, after the preliminary video data is written into the preset file according to the scene information to obtain the target video data, the method further includes updating the target video data, which specifically includes:
generating pause playing information according to the scene information and the azimuth information;
marking the preliminary video data for pause playing according to the pause playing information to obtain a pause playing node; and
writing the pause playing node into the preset file to update the target video data.
To achieve the above object, a second aspect of the embodiments of the present application provides a video playing method applied to a playing terminal, the video playing method including:
acquiring a play request;
acquiring target video data from a shooting terminal according to the play request, the target video data being obtained by the video generation method of the first aspect;
parsing the target video data to obtain the scene information and the preliminary video data;
receiving play control information; and
playing the preliminary video data according to the play control information and the scene information.
In some embodiments, after the preliminary video data is played according to the play control information and the scene information, the method further includes:
receiving a presentation request sent by a synchronization terminal;
screening a video resource file from the preliminary video data according to the presentation request and the scene information;
sending the video resource file to the synchronization terminal; and
controlling the synchronization terminal to play the video resource file according to the play control information.
To achieve the above object, a third aspect of the embodiments of the present application proposes a video generation apparatus applied to a shooting terminal, the apparatus including:
a request acquisition module, configured to acquire a scene switching request, the scene switching request including scene information and azimuth information;
an extraction module, configured to extract control data from a preset control resource library according to the scene information and the azimuth information;
a control module, configured to control a preset virtual object to execute a preset operation according to the control data, the preset operation including at least one of a guiding operation and an explaining operation;
a parameter acquisition module, configured to acquire shooting parameters of the shooting process according to the scene information and the azimuth information;
an association module, configured to obtain video shooting data from the shooting data and recording data obtained by shooting according to the shooting parameters, the recording data being data obtained by recording the virtual object executing the preset operation;
a generation module, configured to generate preliminary video data according to the video shooting data and the shooting parameters; and
a writing module, configured to write the preliminary video data into a preset file according to the scene information to obtain target video data.
To achieve the above object, a fourth aspect of the embodiments of the present application provides a video playing apparatus applied to a playing terminal, including:
a request receiving module, configured to acquire a play request;
a data acquisition module, configured to acquire target video data from the shooting terminal according to the play request, the target video data being obtained by the video generation apparatus of the third aspect;
a parsing module, configured to parse the target video data to obtain the scene information and the preliminary video data;
an information receiving module, configured to receive play control information; and
a playing module, configured to play the preliminary video data according to the play control information and the scene information.
To achieve the above object, a fifth aspect of the embodiments of the present application proposes a storage medium, which is a computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the method of the first aspect.
According to the video generation method and apparatus, the video playing method and apparatus, and the storage medium provided herein, the virtual object is controlled according to the control data to execute the guiding operation and the explaining operation, so that a user can shoot the reception scene by following the virtual object's guidance and explanation to obtain video data; the shooting operation for the reception scene is therefore simple and requires no professional technician. The video data and the recording data are then combined into video shooting data, preliminary video data is generated from the shooting parameters and the video shooting data, and the preliminary video data is written into a preset file according to the scene information to obtain target video data. A VR reception scene is constructed simply by playing the target video data, without relying on a browser to generate a web page, so the VR reception scene is easy to produce and labor costs are reduced.
Drawings
Fig. 1 is a system architecture diagram of a video generation method provided in an embodiment of the present application;
Fig. 2 is a flowchart of a video generation method provided in an embodiment of the present application;
Fig. 3 is a flowchart of step S202 in Fig. 2;
Fig. 4 is a flowchart of step S206 in Fig. 2;
Fig. 5 is a flowchart of step S207 in Fig. 2;
Fig. 6 is a flowchart of a video generation method provided in another embodiment of the present application;
Fig. 7 is a flowchart of a video playing method provided in an embodiment of the present application;
Fig. 8 is a flowchart of a video playing method provided in another embodiment of the present application;
Fig. 9 is a schematic structural diagram of a video generation apparatus provided in an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a video playing apparatus provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
It should be noted that although functional block division is shown in the device diagrams and a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed with a different block division than in the device diagrams or in a different order than in the flowcharts. The terms "first", "second", and the like in the description, the claims, and the above figures are used to distinguish similar objects and not necessarily to describe a particular sequence or chronological order.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
First, several terms referred to in this application are explained:
Artificial intelligence (AI): a new technical science that studies and develops theories, methods, technologies, and application systems for simulating, extending, and expanding human intelligence. Artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a manner similar to human intelligence; research in this field includes robotics, speech recognition, image recognition, natural language processing, and expert systems. Artificial intelligence can simulate the information processes of human consciousness and thinking. It is also a theory, method, technology, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results.
Virtual Reality (VR): also called virtual technology or virtual environment, virtual reality is a practical technology developed in the 20th century. It is a virtual world generated by computer simulation: a three-dimensional space that provides simulated sensory input, such as vision, so that the user feels immersed in the environment and can observe things in the three-dimensional space instantly and without restriction. With the development of science and technology, virtual reality technology has made great progress and has gradually become a new field of science and technology. Virtual reality integrates a variety of technologies, including real-time three-dimensional computer graphics, wide-angle (wide-field) stereoscopic display, tracking of the observer's head, eyes and hands, haptic/force feedback, stereo sound, network transmission, and voice input and output technologies.
Virtual Human (VH): a virtual human refers to a virtual character with a digitized appearance; it depends on a display device for its presence and possesses a human appearance, human behavior, and human thought.
Rendering: rendering is the last step of the CG process and the stage in which the 3D scene is finally fitted into an image. The English term is "render", which is sometimes also translated as coloring; however, "shade" is generally translated as coloring and "render" as rendering, because the two words denote two distinct concepts in three-dimensional software, their functions being very similar but not the same.
Transition: each paragraph of a video (the smallest unit that constitutes a television picture is a shot, and a series of shots connected together forms a paragraph) has a single, relatively complete meaning, such as representing a course of action or a correlation. A paragraph is a complete narrative level in a video, like an act in a drama or a chapter in a novel; individual paragraphs joined together form a complete video. The paragraph is thus the most basic structural form of a television film, and the film's structural hierarchy of content is expressed through paragraphs. The switches between paragraphs, and between scenes, are called transitions.
To improve the guest experience, a VR reception scene is usually produced, and access to various VR reception scenes during reception improves the user's experience. However, producing a VR reception scene requires a specialized company to shoot with professional equipment and, after shooting, to generate web pages for use during reception. Employing a specialized company with professional equipment increases the cost of VR reception resources and lengthens production time; moreover, the final presentation must be loaded through a web page, which is time-consuming, so using a VR reception scene can delay the reception of the participants, and guidance and explanation are absent during scene switching, reducing the user's experience with the VR reception scene.
Based on this, the embodiments of the present application provide a video generation method and apparatus, a video playing method and apparatus, and a storage medium. Control data is extracted from a preset control resource library according to scene information and azimuth information, and a virtual object is controlled according to the control data to perform a guiding operation and an explaining operation. Shooting parameters are acquired at the same time, and shooting data and recording data are acquired according to the shooting parameters, the recording data being data obtained by recording the virtual object performing the guiding and explaining operations. Preliminary video data is then generated from the recording data and the shooting data, and the preliminary video data is written into a preset file to obtain target video data. Because the virtual object executes the guiding and explaining operations according to the control data, the user can shoot the reception scene by following the virtual object's guidance and explanation to obtain video data, so the shooting operation is simple and requires no professional technician. Finally, the video data and the recording data are combined into video shooting data, the shooting parameters and the video shooting data are used to generate preliminary video data, and the preliminary video data is written into a preset file according to the scene information to obtain target video data; playing the target video data yields the VR reception scene, so there is no need to rely on a browser to generate a web page, the VR reception scene is easy to produce, and labor costs are reduced.
The embodiments of the present application provide a video generation method and apparatus, a video playing method and apparatus, and a storage medium, which are described in detail through the following embodiments.
The embodiments of the present application can acquire and process related data based on artificial intelligence technology. Artificial intelligence (AI) is a theory, method, technology, and application system that uses a digital computer or a digital computer-controlled machine to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technologies mainly include computer vision, robotics, biometric recognition, speech processing, natural language processing, and machine learning/deep learning.
The embodiments of the present application provide a video generation method and a video playing method, relating to the technical field of artificial intelligence. The video generation method and the video playing method may be applied to a terminal, to a server side, or to software running in a terminal or on a server side. In some embodiments, the terminal may be a smart phone, a tablet, a notebook, a desktop computer, etc.; the server side may be configured as an independent physical server, as a server cluster or distributed system composed of multiple physical servers, or as a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, big data, and artificial intelligence platforms; the software may be an application implementing the video generation method or the video playing method, but is not limited to the above forms.
The subject application is operational with numerous general purpose or special purpose computer system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In the embodiments of the present application, when related processing is required according to user information, user behavior data, user history data, user location information, and other data related to user identity or characteristics, permission or consent of the user is obtained first, and the collection, use, processing, and the like of the data comply with related laws and regulations and standards of related countries and regions. In addition, when the embodiment of the application needs to acquire the sensitive personal information of the user, the independent permission or independent consent of the user is acquired through a popup window or a jump to a confirmation page or the like, and after the independent permission or independent consent of the user is explicitly acquired, necessary user related data for enabling the embodiment of the application to normally operate is acquired.
Fig. 1 is a system architecture diagram of a video generation method according to an embodiment of the present application. In the embodiment of Fig. 1, the system architecture includes a shooting terminal, a playing terminal, and a synchronization terminal.
The shooting terminal acts as the video generator and is communicatively connected to the playing terminal. The shooting terminal extracts control data from a preset control resource library according to the scene information and the azimuth information, and controls a preset virtual object according to the control data to execute a preset operation, the preset operation including at least one of a guiding operation and an explaining operation. The user shoots the video following the preset operation executed by the virtual object to obtain shooting parameters and shooting data; during shooting, the virtual object executing the preset operation is recorded to obtain recording data. Finally, the shooting data and the recording data are combined into video shooting data, preliminary video data is generated from the shooting parameters and the video shooting data, and the preliminary video data is written into a preset file to obtain target video data. The shooting terminal thus shoots the reception scene under the guidance or explanation of the virtual object and automatically generates the target video data of the reception scene, making the reception scene easy to produce; the scene can be played directly without generating a web page, which improves the efficiency of reception scene production.
The system architecture shown in Fig. 1 can also implement a video playing method. The playing terminal acts as the video player and is communicatively connected to both the shooting terminal and the synchronization terminal. When the playing terminal receives a play request, it obtains the target video data from the shooting terminal and plays it according to the play control information, realizing automatic presentation of the VR reception scene without producing a web page or loading over the network, so presenting the VR reception scene is simpler. Meanwhile, in response to a presentation request sent by the synchronization terminal, the playing terminal extracts a video resource file from the preliminary video data and controls the synchronization terminal to play the video resource file according to the play control information, so that the playing terminal and the synchronization terminal play the preliminary video data synchronously, the VR reception scenes at both ends stay in step, and the delay of VR reception scene switching is reduced.
The synchronization terminal acts as a synchronized player for the playing terminal and is communicatively connected to it; the synchronization terminal receives the video resource file sent by the playing terminal and plays it according to the play control information of the playing terminal.
Those skilled in the art will appreciate that the system architecture shown in Fig. 1 does not limit the embodiments of the present application, which may include more or fewer components than shown, combine certain components, or arrange components differently.
Referring to Fig. 2, Fig. 2 is an optional flowchart of a video generation method according to an embodiment of the present application. The video generation method is applied to a shooting terminal, and the method in Fig. 2 may include, but is not limited to, steps S201 to S207.
Step S201, acquiring a scene switching request, wherein the scene switching request includes scene information and azimuth information;
step S202, extracting control data from a preset control resource library according to the scene information and the azimuth information;
step S203, controlling a preset virtual object to execute a preset operation according to the control data, wherein the preset operation includes at least one of a guiding operation and an explaining operation;
step S204, acquiring shooting parameters of the shooting process according to the scene information and the azimuth information;
step S205, combining the shooting data and recording data obtained by shooting according to the shooting parameters to obtain video shooting data, the recording data being data obtained by recording the virtual object executing the preset operation;
step S206, generating preliminary video data according to the video shooting data and the shooting parameters;
step S207, writing the preliminary video data into a preset file according to the scene information to obtain target video data.
In steps S201 to S207 illustrated in this embodiment, scene information and azimuth information are acquired, control data is extracted from the preset control resource library according to them, and the virtual object is controlled according to the control data to execute a preset operation that includes at least one of a guiding operation and an explaining operation. The user can therefore shoot the scene following the guiding and explaining operations performed by the virtual object to obtain shooting data and shooting parameters. Meanwhile, the virtual object executing the preset operation is recorded during shooting to obtain recording data; the shooting data and the recording data are then combined into video shooting data, preliminary video data is generated from the shooting parameters and the video shooting data, and the preliminary video data is written into a preset file to obtain target video data. The resource acquisition operation for the VR reception scene is thus simple, and the VR reception scene can be generated simply by playing the target video data, without a browser, which improves the production efficiency of the VR reception scene.
In step S201 of some embodiments, the user switches from an initial scene to a target scene on the operation interface of the shooting terminal to generate a scene switching request, which includes scene information and azimuth information. The scene information is the scene type of the target scene; the azimuth information is the azimuth of the target scene and is any one of: up, down, left, right, front, and rear.
It should be noted that the shooting terminal runs AR camera software connected to the terminal's AR interface. In the AR camera software, the user selects the target scene to generate the scene information and selects the target azimuth to obtain the azimuth information; after pairing the target scene with the target direction one to one, the user shoots the scene, which simplifies the scene shooting operation.
In step S202 of some embodiments, the preset control resource library stores control data matched with scene information and azimuth information, and a preset control mapping relationship table stores the mappings among scene information, azimuth information, and control data. The matching control information is looked up in the preset control mapping relationship table using the scene information and azimuth information, and the control data is then extracted from the preset control resource library according to that control information.
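As a concrete illustration of this lookup, the following is a minimal Python sketch; the table layout, key names, and resource contents are hypothetical assumptions, not the patent's actual data structures:

```python
# Hypothetical sketch of the control-data lookup described above.
# Table keys and the resource-library layout are illustrative assumptions.

CONTROL_MAPPING = {
    # (scene_info, azimuth_info) -> control resource id
    ("conference_room", "left"): "ctrl_001",
    ("conference_room", "front"): "ctrl_002",
}

CONTROL_RESOURCE_LIBRARY = {
    "ctrl_001": {
        "guide": "point to the left direction",
        "explanation": "The current azimuth is the left direction of the conference room.",
    },
    "ctrl_002": {
        "guide": "point straight ahead",
        "explanation": "The current azimuth is the front direction of the conference room.",
    },
}

def extract_control_data(scene_info: str, azimuth_info: str) -> dict:
    """Look up the control info in the mapping table, then fetch the control data."""
    control_id = CONTROL_MAPPING[(scene_info, azimuth_info)]
    return CONTROL_RESOURCE_LIBRARY[control_id]

print(extract_control_data("conference_room", "left"))
```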
In step S203 of some embodiments, shooting starts after the scene switching request is received, a preset virtual object is added to the shooting window, and the virtual object is controlled according to the control data to execute a preset operation, which includes at least one of a guiding operation and an explaining operation. The virtual object performs the guiding and explaining operations so that the user can shoot the corresponding video frames by following the guidance and explanation. Shooting data for the VR reception scene is therefore easy to obtain: no professional technician or professional equipment is needed, which lowers the difficulty of shooting the VR reception scene and the cost of acquiring the shooting data.
It should be noted that the virtual object is a virtual human constructed from predefined three-dimensional face data and three-dimensional body data. Placing the virtual object in the shooting window and having it execute the preset operation to guide the user's shooting simplifies the shooting operation of the VR reception scene.
In step S204 of some embodiments, the shooting parameters of the shooting process are acquired according to the scene information and the azimuth information, so that the shooting parameters are recorded during shooting; later, when the shooting data is played, the shooting data corresponding to the scene information and the azimuth information can be retrieved according to the shooting parameters, making the shooting data easy to find.
In step S205 of some embodiments, shooting is performed according to the shooting parameters to obtain shooting data, the recording data of the virtual object executing the preset operation during shooting is obtained at the same time, and the shooting data and the recording data are combined into video shooting data. Because the video shooting data is synthesized from the shooting data and the recording data of the virtual object executing the preset operation, it is richer, which improves the richness of the VR reception scene's data resources.
In step S206 of some embodiments, preliminary video data is generated from the shooting parameters and the video shooting data; because the shooting parameters correspond to the video shooting data, the corresponding video shooting data can be found in the preliminary video data according to the shooting parameters, making the video shooting data simpler to retrieve.
In step S207 of some embodiments, the preset file stores multiple pieces of target video data; the preliminary video data is written into a subfolder of the preset file according to the scene information to obtain the target video data. Specifically, the subfolder of the preset file is determined according to the scene information, and the preliminary video data is stored in that subfolder to obtain the target video data.
It should be noted that the scene information includes the scene type of the target scene, the scene type including a meeting room type, a meeting living room type, a lecture table type, and a stage type. The preset file has corresponding subfolders for each scene type, and each subfolder carries a scene tag. The target tag is screened from the scene tags according to the scene information, and the preliminary video data is stored into the corresponding subfolder according to the target tag to obtain the target video data, realizing classified storage of the preliminary video data.
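The classified storage described above can be pictured with the following sketch; the folder layout, tag names, and file naming are assumptions for illustration only:

```python
# Illustrative sketch of classified storage by scene tag; the folder layout
# and tag names are assumptions, not the patent's actual file structure.
from pathlib import Path

SCENE_TAGS = {"meeting_room", "meeting_living_room", "lecture_table", "stage"}

def write_target_video(preset_dir: Path, scene_info: str, preliminary_video: bytes) -> Path:
    """Pick the subfolder whose scene tag matches the scene information and
    store the preliminary video data there as target video data."""
    if scene_info not in SCENE_TAGS:
        raise ValueError(f"no subfolder tagged for scene {scene_info!r}")
    subfolder = preset_dir / scene_info
    subfolder.mkdir(parents=True, exist_ok=True)
    target_file = subfolder / "target_video.bin"
    target_file.write_bytes(preliminary_video)
    return target_file

# e.g. write_target_video(Path("preset_file"), "meeting_room", b"...")
```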
Referring to Fig. 3, the control data includes target scene guide information and target scene explanation content, and step S202 may include, but is not limited to, steps S301 to S303:
step S301, extracting original scene guide information and original scene explanation content from the preset control resource library according to the scene information;
step S302, screening the original scene guide information according to the azimuth information to obtain the target scene guide information;
step S303, screening the original scene explanation content according to the azimuth information to obtain the target scene explanation content.
In step S301 of some embodiments, each piece of scene information corresponds to one target scene, and each target scene has six azimuths, so each target scene includes six pieces of azimuth information. Accordingly, the original scene guide information and original scene explanation content are extracted from the preset control resource library according to the scene information: each piece of scene information corresponds to several pieces of original scene guide information and original scene explanation content, and each piece of original scene guide information and original scene explanation content corresponds to one piece of azimuth information.
In step S302 of some embodiments, the original scene guide information is filtered according to the azimuth information, so that the original scene guide information matching the azimuth information is screened out to obtain the target scene guide information.
For example, if the scene information is conference room information, the original scene guide information related to the conference room is extracted from the preset control resource library according to the scene information. If the azimuth information is the left direction, the original scene guide information is filtered by the left direction to obtain target scene guide information pointing to the left; the virtual object is then controlled to point to the left according to the target scene guide information, and the user can shoot the video following the virtual object pointing to the left, which simplifies the video shooting operation of the VR reception scene.
In step S303 of some embodiments, the original scene explanation content matching the azimuth information is screened out of the original scene explanation content according to the azimuth information to obtain the target scene explanation content; the virtual object is then controlled to deliver the target scene explanation content.
For example, if the azimuth information is the left direction, the target scene explanation content screened from the original scene explanation content is "the current azimuth is the left direction of the conference room", and the virtual object is controlled to deliver that explanation. By screening out the target scene explanation content and having the virtual object speak it, the user knows which azimuth is about to be shot, which simplifies the shooting operation of the VR reception scene.
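A minimal sketch of the azimuth screening in steps S301 to S303 follows; the record fields and example contents (taken from the conference room example above) are illustrative assumptions:

```python
# Hypothetical sketch of steps S301-S303: filter the original scene guide
# information and explanation content by azimuth. Record fields are assumed.

def filter_by_azimuth(records: list[dict], azimuth_info: str) -> dict:
    """Return the single record whose azimuth matches the azimuth information."""
    for record in records:
        if record["azimuth"] == azimuth_info:
            return record
    raise LookupError(f"no record for azimuth {azimuth_info!r}")

original_guide = [
    {"azimuth": "left", "guide": "point to the left direction"},
    {"azimuth": "right", "guide": "point to the right direction"},
]
original_explanation = [
    {"azimuth": "left", "text": "The current azimuth is the left direction of the conference room."},
    {"azimuth": "right", "text": "The current azimuth is the right direction of the conference room."},
]

target_guide = filter_by_azimuth(original_guide, "left")
target_explanation = filter_by_azimuth(original_explanation, "left")
print(target_guide, target_explanation)
```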
Referring to Fig. 4, in some embodiments the shooting parameters include a shooting angle and a shooting time, and step S206 may include, but is not limited to, steps S401 to S404:
step S401, extracting video segment data from the video shooting data according to the shooting time;
step S402, screening a target angle from the shooting angles according to the shooting time;
step S403, screening the video segment data according to the target angle to obtain preliminary video segments;
step S404, merging the preliminary video segments to obtain the preliminary video data.
In step S401 of some embodiments, the video shooting data consists of shooting data and recording data captured over a preset period, the shooting data being obtained by shooting the reception scene at the corresponding shooting angle under the guiding and explaining operations of the virtual object. Each piece of video shooting data includes multiple pieces of video segment data, each corresponding to one azimuth. Because the shooting parameters of the shooting device are recorded according to the scene information and the azimuth information, each shooting time has a corresponding shooting angle; video segment data matching a shooting time is therefore extracted from the video shooting data according to that shooting time, so that the pieces of video segment data are distinguished by shooting time.
In step S402 of some embodiments, each shooting time corresponds to one shooting angle, and each piece of azimuth information corresponds to one shooting angle; the shooting angle corresponding to the shooting time is therefore selected from the shooting angles to obtain the target angle, so that video segment data can be extracted from the video shooting data according to the target angle.
In step S403 of some embodiments, the video segment data is filtered according to the target angle, extracting the matching segments from the multiple pieces of video segment data to obtain the preliminary video segments and associating each target angle with a preliminary video segment. With this association, when the preliminary video segment of a target azimuth is needed, the target angle is determined from the target azimuth and the matching preliminary video segment is screened from the video segment data, locating the segment that corresponds to the target azimuth.
In step S404 of some embodiments, the preliminary video segments obtained by associating target angles with video segment data are used to build a video segment mapping relationship table between target angles and video segment data, and the preliminary video segments are then merged to obtain the preliminary video data. Screening the video segment data by target angle distinguishes the segments belonging to each target angle, and merging the resulting preliminary video segments yields preliminary video data whose segments can be located by target angle.
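The segment extraction and merging of steps S401 to S404 might look like the following sketch, assuming each raw segment carries its shooting time and angle and a time-to-angle table records the shooting parameters; all data shapes are hypothetical:

```python
# Sketch of steps S401-S404 under assumed data shapes: each raw segment
# carries its shooting time, and a time->angle table records the shooting
# parameters, so segments can be grouped by target angle and merged.
from dataclasses import dataclass

@dataclass
class VideoSegment:
    shooting_time: float   # seconds from start of shooting
    shooting_angle: int    # degrees, one angle per azimuth
    frames: list           # placeholder for frame data

def build_preliminary_video(video_shooting_data: list[VideoSegment],
                            time_to_angle: dict[float, int]) -> list[VideoSegment]:
    preliminary_segments = []
    for time, target_angle in sorted(time_to_angle.items()):
        # S401/S402: pick segments shot at this time; S403: keep the matching angle
        matches = [seg for seg in video_shooting_data
                   if seg.shooting_time == time and seg.shooting_angle == target_angle]
        preliminary_segments.extend(matches)
    # S404: merging here is simple concatenation in shooting order
    return preliminary_segments
```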
Referring to Fig. 5, in some embodiments step S207 may include, but is not limited to, steps S501 to S503:
step S501, acquiring video information of the preliminary video data;
step S502, screening the preliminary video data according to the video information and the scene information to obtain target video segments;
step S503, merging the target video segments and writing them into the preset file to obtain the target video data.
In step S501 of some embodiments, to make it easy to retrieve the preliminary video data according to the scene information, the pieces of preliminary video data must be written into subfolders of the preset file so that they are stored separately. Accordingly, the video information of the preliminary video data is acquired so that the scene information corresponding to the preliminary video data can be determined from it. In this embodiment the video information includes at least the video file name, from which the data information of the preliminary video data can be identified.
In step S502 of some embodiments, the preliminary video data is screened according to the video information and the scene information: the target scene information is screened from the scene information according to the video information, and the preliminary video data corresponding to the target scene information is taken as the target video segments.
It should be noted that an information mapping relationship table between video information and scene information is constructed in advance; the scene information is then looked up in this table according to the video information to obtain the target scene information, and the corresponding target video segments are obtained according to the target scene information. In this embodiment, the video information is the video file name, and the target scene information is found in the table by the file name. For example, if the video file name is A and the table records the mapping between A and the meeting room information, the target scene information found for file name A is the meeting room information.
In step S503 of some embodiments, the target video segments are merged and written into the preset file: the subfolder is screened from the preset file according to the target scene information, and the merged target video segments are written into that subfolder, so that different target video segments are stored by category and the target video data is obtained. Merging and storing the target video segments into subfolders of the preset file keeps the pieces of target video data separate, and the target video data can be taken directly from its subfolder when called, making it easy to extract.
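A small sketch of the information mapping lookup and merge in steps S501 to S503 is given below, reusing the file name "A" to meeting room mapping from the example above; everything else is an assumption:

```python
# Sketch of the information mapping table in steps S501-S503; the table
# contents mirror the example above (file name "A" -> meeting room) and
# all other names are assumptions.

INFO_MAPPING = {"A": "meeting_room"}  # video file name -> scene information

def select_target_scene(video_file_name: str) -> str:
    """Look up the target scene information by video file name."""
    return INFO_MAPPING[video_file_name]

def write_target_segments(segments_by_scene: dict[str, list[bytes]],
                          video_file_name: str) -> bytes:
    """Filter the preliminary video data by the mapped scene information
    and merge the target video segments into one byte stream."""
    scene_info = select_target_scene(video_file_name)
    return b"".join(segments_by_scene[scene_info])

merged = write_target_segments({"meeting_room": [b"seg1", b"seg2"]}, "A")
print(merged)
```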
In some embodiments, after step S207 the video generation method further includes updating the target video data.
It should be noted that after the target video data is stored in the preset file, pause playing nodes must be set on it so that playback can pause at these nodes: when the playing terminal sends a pause playing instruction, the pause playing node is located in the target video data, playback pauses while switching between different scenes, and transition control during video playback is thereby achieved. Setting pause playing nodes on the target video data to update it thus realizes transition control for scene switching.
Referring to Fig. 6, in some embodiments updating the target video data includes, but is not limited to, steps S601 to S603:
step S601, generating pause playing information according to the scene information and the azimuth information;
step S602, marking the preliminary video data for pause playing according to the pause playing information to obtain a pause playing node;
step S603, writing the pause playing node into the preset file to update the target video data.
In step S601 of some embodiments, pause playing information is generated according to the scene information and the azimuth information, each pair corresponding to one piece of pause playing information. The pause playing information includes a pause instruction and pause time information; the pause instruction is used to instruct the preliminary video data to pause playing. The preliminary video data includes multiple preliminary video segments, and the pause instruction comprises several transition instructions and a stop instruction: the transition instructions control the transitions between preliminary video segments, and the stop instruction stops playback of the preliminary video data. The pause time information includes the transition time information of the preliminary video segments and the stop playing time information of the preliminary video data. Generating pause playing information from the scene information and azimuth information thus allows playback of the preliminary video data to transition in step with azimuth switching.
In step S602 of some embodiments, the preliminary video data is marked for pause playing according to the pause playing information. The pause playing node includes a transition node and a stop playing node: a transition node is set between two preliminary video segments according to the transition time information, and a stop playing node is set at the tail end of the preliminary video data according to the stop playing time information. Because a transition node sits between two preliminary video segments and each segment corresponds to one piece of azimuth information, playback of the preliminary video data transitions when the azimuth is switched and stops after the data finishes playing. The VR reception scene is thus constructed by directly playing the preliminary video data, which simplifies its construction.
In step S603 of some embodiments, after the preliminary video data is marked for pause playing, the resulting pause playing nodes are written into the preset file to update the target video data, so that the target video data can be played directly when constructing the VR reception scene, simplifying its construction and improving the user's reception experience.
For example, because pause playing nodes are set on the preliminary video data, when a VR reception scene of the meeting room type needs to be presented, the target video data is played directly: the preliminary video segment of the main scene is shown first, and azimuth switching then plays the corresponding preliminary video segment, with a transition effect appearing between two segments thanks to the transition node, until all the target video data has been played and playback stops at the stop playing node. By setting pause playing nodes on the preliminary video data, the VR reception scene is constructed merely by playing the target video data, without loading it over the network for rendering, which simplifies construction and reduces the delay of VR reception scene presentation.
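The pause playing marking of steps S601 to S603 can be sketched as follows, assuming one transition node at each boundary between adjacent preliminary video segments and one stop node at the tail end; the node representation and timestamps are illustrative:

```python
# Illustrative sketch of pause-play marking (steps S601-S603): transition
# nodes between adjacent preliminary video segments and one stop node at the
# end. Node layout and timestamps are assumptions.
from dataclasses import dataclass

@dataclass
class PauseNode:
    kind: str        # "transition" or "stop"
    position: float  # playback time in seconds

def mark_pause_nodes(segment_durations: list[float]) -> list[PauseNode]:
    nodes, elapsed = [], 0.0
    for duration in segment_durations[:-1]:
        elapsed += duration
        # a transition node sits at each boundary between two segments
        nodes.append(PauseNode("transition", elapsed))
    elapsed += segment_durations[-1]
    # the stop playing node sits at the tail end of the preliminary video data
    nodes.append(PauseNode("stop", elapsed))
    return nodes

print(mark_pause_nodes([12.0, 9.5, 15.0]))
```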
Referring to Fig. 7, in combination with Fig. 1, an embodiment of the present application further provides a video playing method applied to a playing terminal, the video playing method including, but not limited to, steps S701 to S705:
step S701, acquiring a play request;
step S702, acquiring target video data from the shooting terminal according to the play request, the target video data being obtained by the video generation method above;
step S703, parsing the target video data to obtain the scene information and the preliminary video data;
step S704, receiving play control information;
step S705, playing the preliminary video data according to the play control information and the scene information.
In step S701 of some embodiments, a play request is acquired; the play request includes target scene information, so that the corresponding target video data can be obtained through the target scene information.
In step S702 of some embodiments, the shooting terminal stores the target video data in the preset file, so the target video data is obtained from the shooting terminal according to the play request: the subfolder is selected from the preset file according to the target scene information, and the video data in the subfolder is extracted as the target video data. Because the target video data is obtained by the video generation method above, it includes the recording data and the shooting data, so the scene information of the actual reception scene can be known from them.
In step S703 of some embodiments, because the target video data was written into the preset file according to the scene information, parsing it yields the scene information and the preliminary video data, making it clear which scene the preliminary video data captures so that the preliminary video data of the corresponding scene can be played according to the scene information.
In step S704 of some embodiments, play control information is received. The play control information includes scene switching information and azimuth switching information: the scene switching information switches which target scene's preliminary video data is played, and the azimuth switching information switches which target azimuth's preliminary video segment is played.
In step S705 of some embodiments, the preliminary video data matching the scene information is obtained according to the play control information and the scene information; the corresponding preliminary video segment is then obtained from the preliminary video data according to the azimuth switching information and played. The VR reception scene is thus presented in a targeted manner without producing a web page, which simplifies its presentation.
For example, if the playing terminal needs to play the left direction of the conference room scene, play control information is generated from the user's operation at the playing terminal, and the preliminary video data matching the scene information, whose scene type is the meeting room type, is obtained according to the scene switching information. The preliminary video segment for the left direction, obtained from the shooting data and recording data of the left side of the conference room, is then retrieved according to the azimuth switching information and played: in it, the virtual object points to the left and explains that the current azimuth is the left direction of the conference room. The VR reception scene is thus presented by directly playing the preliminary video data, without relying on a browser; and because pause playing nodes are set in the preliminary video data, the corresponding nodes can be retrieved according to the scene switching information and azimuth switching information, and transitions and pauses during playback are controlled by the pause playing nodes, realizing automatic presentation of the VR reception scene.
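A minimal sketch of the playback flow of steps S703 to S705 follows; the parsed data layout and play control fields are assumptions, not the patent's actual formats:

```python
# Sketch of steps S703-S705 under assumed structures: the parsed target video
# data carries its scene information and one preliminary video segment per
# azimuth; play control information selects the azimuth to play.

def play_preliminary_video(parsed: dict, play_control: dict) -> str:
    """Play the preliminary video segment matching the play control information."""
    scene_info = parsed["scene_info"]
    # azimuth switching information selects which segment of the scene to play
    azimuth = play_control.get("azimuth_switch", "front")
    segment = parsed["segments"][azimuth]
    return f"playing segment {segment!r} for scene {scene_info!r}, azimuth {azimuth!r}"

parsed = {
    "scene_info": "conference_room",
    "segments": {"front": "seg_front", "left": "seg_left"},
}
print(play_preliminary_video(parsed, {"azimuth_switch": "left"}))
```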
Referring to fig. 8, in some embodiments, after step S705, the video playing method further includes, but is not limited to, steps S801 to S804:
step S801, receiving a display request sent by a synchronous terminal;
step S802, a video resource file is screened out from the preliminary video data according to the display request and the scene information;
step S803, the video resource file is sent to a synchronous terminal;
step S804, the synchronous terminal is controlled to play the video resource file according to the play control information.
In step S801 of some embodiments, when the playing terminal and the synchronous terminal receive a guest, the synchronous terminal sends a display request, and the display request includes display scene information. By receiving the display request sent by the synchronous terminal, the playing terminal can prepare the VR guest-receiving scene display that the synchronous terminal has requested.
In step S802 of some embodiments, the preliminary video data matching the display scene information is screened from the preliminary video data according to the display request, and the screened data is packaged to generate a video resource file. Screening by the display scene information ensures that the corresponding preliminary video data can be played according to the synchronous terminal's scene display request.
In step S803 of some embodiments, the video resource file is sent to the synchronous terminal, and the synchronous terminal stores the video resource file of the corresponding scene, so that the VR guest-receiving scene can later be displayed from the video resource file.
It should be noted that the video resource file is obtained by encrypting the preliminary video data. The synchronous terminal cannot play the video resource file directly; it must first decrypt the file to recover the preliminary video data before playing it.
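One way to realize this packaging step is symmetric encryption; the sketch below uses the Fernet recipe from the Python cryptography package as a stand-in, since this application does not specify a cipher.

from cryptography.fernet import Fernet

def package_video_resource(preliminary_bytes: bytes, key: bytes) -> bytes:
    """Playing-terminal side: encrypt preliminary video data into a resource file payload."""
    return Fernet(key).encrypt(preliminary_bytes)

def unpack_video_resource(resource_bytes: bytes, key: bytes) -> bytes:
    """Synchronous-terminal side: decrypt the payload before playback."""
    return Fernet(key).decrypt(resource_bytes)

# The key would be shared between the two terminals out of band (an assumption):
# key = Fernet.generate_key()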
In step S804 of some embodiments, the playing terminal controls the synchronous terminal according to the play control information. Because the synchronous terminal cannot play the video resource file directly, the playing terminal generates a play control instruction from the play control information and sends the instruction to the synchronous terminal, which then decrypts and plays the video resource file according to the instruction. When the VR guest-receiving scene is displayed by the synchronous terminal in this way, the playing terminal only generates and sends a play control instruction on each azimuth switch; the video resource file already downloaded by the synchronous terminal is controlled directly by the instruction, and no video data packet needs to be sent per control. Playback of the video resource file on the playing terminal and the synchronous terminal therefore stays synchronized, and playback delay is reduced.
It should be noted that the playing terminal generates a play control instruction each time it performs play control on the synchronous terminal, and the instruction is sent to the synchronous terminal through a socket message channel; the synchronous terminal receives the instruction, parses the video resource file to obtain the preliminary video data, and plays it according to the instruction. Conventional VR guest-receiving scene synchronization requires the synchronous terminal to load the VR guest-receiving resource file package itself, with the playing terminal generating and sending a resource data package for every control, which easily causes delay in the scene display. By contrast, when the synchronous terminal is controlled to play the video resource file here, the playing terminal only needs to send a play control instruction, which simplifies the VR guest-receiving scene display and keeps the two terminals in closer synchronization.
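A minimal sketch of such a socket message channel is shown below, using a length-prefixed JSON frame to carry the play control instruction; the field names and framing are assumptions, not this application's wire format.

import json
import socket

def send_play_control(sock: socket.socket, scene: str, azimuth: str, action: str) -> None:
    """Playing-terminal side: send one compact play control instruction."""
    payload = json.dumps({"scene": scene, "azimuth": azimuth, "action": action}).encode("utf-8")
    sock.sendall(len(payload).to_bytes(4, "big") + payload)  # 4-byte length prefix

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes, since recv may return a partial chunk."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket message channel closed")
        buf += chunk
    return buf

def recv_play_control(sock: socket.socket) -> dict:
    """Synchronous-terminal side: read one framed instruction and decode it."""
    length = int.from_bytes(_recv_exact(sock, 4), "big")
    return json.loads(_recv_exact(sock, length).decode("utf-8"))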
According to the embodiments of the present application, the shooting terminal extracts control data from the preset control resource library according to the scene information and the azimuth information, and controls the preset virtual object to execute a preset operation according to the control data, the preset operation including at least one of a guiding operation and an explanation operation. The user shoots video following the preset operation executed by the virtual object, yielding shooting parameters and shooting data; during shooting, the virtual object's execution of the preset operation is recorded to obtain recording data. The shooting data and the recording data are then combined into video shooting data, preliminary video data is generated from the shooting parameters and the video shooting data, and the preliminary video data is written into the preset file to obtain the target video data. On receiving a play request, the playing terminal obtains the target video data from the shooting terminal and plays it according to the play control information, realizing automatic playing of the VR guest-receiving scene without making a web page or waiting for network loading, so the scene is displayed faster. Meanwhile, on receiving a display request from the synchronous terminal, the playing terminal extracts a video resource file from the preliminary video data according to the request and controls the synchronous terminal to play the file according to the play control information, so the two terminals play the preliminary video data synchronously, the VR guest-receiving scenes at both ends stay in step, and the delay of scene switching is reduced.
Referring to fig. 9, an embodiment of the present application further provides a video generating apparatus, applied to a shooting terminal, capable of implementing the video generating method, where the apparatus includes:
a request acquisition module 901, configured to acquire a scene switching request; wherein the scene switching request includes scene information and azimuth information;
the extracting module 902 is configured to extract, according to the scene information and the azimuth information, control data from a preset control resource library;
the control module 903 is configured to control a preset virtual object according to the control data to execute a preset operation; wherein the preset operation includes at least one of a guiding operation and an explanation operation;
a parameter obtaining module 904, configured to obtain shooting parameters in a shooting process according to the scene information and the azimuth information;
the association module 905 is configured to obtain video shooting data from the shooting data obtained by shooting according to the shooting parameters and from the recording data; wherein the recording data is obtained by recording the virtual object executing the preset operation;
a generating module 906, configured to generate preliminary video data according to the video capturing data and the capturing parameters;
the writing module 907 is configured to write the preliminary video data into a preset file according to the scene information, so as to obtain target video data.
The specific implementation of the video generating apparatus is substantially the same as that of the video generation method described above and is not repeated here.
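To make the module pipeline of fig. 9 concrete, here is a procedural sketch in which every module is reduced to a stub function; all names and data shapes are illustrative only, not this application's implementation.

def generate_target_video(scene_info: str, azimuth_info: str, control_library: dict) -> dict:
    """End-to-end flow of the video generating apparatus (stubs throughout)."""
    control = control_library[(scene_info, azimuth_info)]      # extracting module 902
    recording = record_virtual_object(control)                 # control module 903
    params = derive_shooting_params(scene_info, azimuth_info)  # parameter module 904
    shooting = shoot(params)                                   # association module 905
    video_shooting = {"shooting": shooting, "recording": recording}
    preliminary = {"params": params, "data": video_shooting}   # generating module 906
    return {"scene_info": scene_info, "preliminary_video": preliminary}  # writing module 907

# Stub implementations so the sketch runs; real ones would drive a camera and renderer.
def record_virtual_object(control: dict) -> dict:
    return {"guide": control.get("guide"), "explanation": control.get("explain")}

def derive_shooting_params(scene_info: str, azimuth_info: str) -> dict:
    return {"angle": azimuth_info, "time": 0.0}

def shoot(params: dict) -> dict:
    return {"frames": [], "angle": params["angle"]}

# Usage sketch:
library = {("conference_room", "left"): {"guide": "point left", "explain": "left side"}}
print(generate_target_video("conference_room", "left", library))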
Referring to fig. 10, an embodiment of the present application further provides a video playing device, which is applied to a playing terminal and can implement the video playing method, where the device includes:
a request receiving module 101, configured to obtain a play request;
a data acquisition module 102, configured to acquire target video data from the shooting terminal according to the play request; wherein the target video data is generated by the video generating apparatus described above;
the parsing module 103 is configured to parse the target video data to obtain scene information and preliminary video data;
an information receiving module 104, configured to receive play control information;
and the playing module 105 is used for playing the preliminary video data according to the playing control information and the scene information.
The specific implementation of the video playing device is substantially the same as that of the video playing method described above and is not repeated here.
In addition, the target video data of the data acquisition module 102 may also be obtained according to the video generation method described above.
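Mirroring the sketch for the generating apparatus, the playing-terminal modules of fig. 10 can be outlined as follows; the class and method names are hypothetical.

class VideoPlayingDevice:
    """Sketch of the playing-terminal modules of fig. 10 (names are illustrative)."""

    def __init__(self, fetch_target_video):
        # fetch_target_video stands in for the data acquisition module 102
        self.fetch_target_video = fetch_target_video

    def handle(self, play_request: dict, play_control: dict) -> None:
        # play_control stands in for the information receiving module 104
        target = self.fetch_target_video(play_request)   # data acquisition module 102
        scene_info = target["scene_info"]                # parsing module 103
        preliminary = target["preliminary_video"]
        self.play(preliminary, play_control, scene_info) # playing module 105

    def play(self, preliminary: dict, control: dict, scene_info: str) -> None:
        print(f"playing scene '{scene_info}' with control {control}")

# Usage sketch, reusing generate_target_video and library from the previous snippet:
# device = VideoPlayingDevice(lambda req: generate_target_video(req["scene"], req["azimuth"], library))
# device.handle({"scene": "conference_room", "azimuth": "left"}, {"action": "play"})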
According to the video generation method and apparatus, the video playing method and apparatus, and the storage medium of the embodiments, the shooting terminal obtains scene information and azimuth information, extracts control data from the preset control resource library according to them, and controls the preset virtual object to perform the guiding operation and the explanation operation according to the control data. At the same time, it obtains the shooting parameters of the shooting process and the shooting data shot according to those parameters, records the virtual object's execution of the preset operation to obtain recording data, combines the shooting data and the recording data into video shooting data, generates preliminary video data from the shooting parameters and the video shooting data, and writes the preliminary video data into the preset file to obtain the target video data. Setting the virtual object to perform the guiding and explanation operations lets the user shoot video by following the virtual object, which simplifies acquiring the resource files of the VR guest-receiving scene. Meanwhile, the playing terminal acquires the target video data from the shooting terminal, parses it into scene information and preliminary video data, and plays the preliminary video data according to the play control information, so the VR guest-receiving scene is displayed directly, without loading a web page, which simplifies the display operation. If the VR guest-receiving scene needs to be displayed synchronously, the playing terminal receives a display request from the synchronous terminal, screens a video resource file from the preliminary video data according to the request, and sends the file to the synchronous terminal; when the playing terminal plays the preliminary video data according to the play control information, the synchronous terminal plays it according to the same control information, so the two terminals display the VR guest-receiving scene synchronously and their synchronicity is improved.
The embodiments described in the embodiments of the present application are for more clearly describing the technical solutions of the embodiments of the present application, and do not constitute a limitation on the technical solutions provided by the embodiments of the present application, and as those skilled in the art can know that, with the evolution of technology and the appearance of new application scenarios, the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems.
It will be appreciated by those skilled in the art that the solutions shown in fig. 1-8 are not limiting to embodiments of the present application and may include more or fewer steps than shown, or may combine certain steps, or different steps.
The above described apparatus embodiments are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, i.e. may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the present application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in this application, "at least one" means one or more, and "a plurality" means two or more. "and/or" for describing the association relationship of the association object, the representation may have three relationships, for example, "a and/or B" may represent: only a, only B and both a and B are present, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the above-described division of units is merely a logical function division, and there may be another division manner in actual implementation, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in part or all of the technical solution or in part in the form of a software product stored in a storage medium, including multiple instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing a program.
Preferred embodiments of the present application are described above with reference to the accompanying drawings, and thus do not limit the scope of the claims of the embodiments of the present application. Any modifications, equivalent substitutions and improvements made by those skilled in the art without departing from the scope and spirit of the embodiments of the present application shall fall within the scope of the claims of the embodiments of the present application.

Claims (10)

1. A video generation method, applied to a photographing terminal, comprising:
acquiring a scene switching request; wherein the scene change request includes: scene information and azimuth information;
extracting control data from a preset control resource library according to the scene information and the azimuth information;
controlling a preset virtual object to execute preset operation according to the control data; wherein the preset operation at least comprises one of the following operations: guiding operation and explaining operation;
acquiring shooting parameters of a shooting process according to the scene information and the azimuth information;
shooting according to the shooting parameters, and recording the preset operation of the virtual object to obtain video shooting data;
generating preliminary video data according to the video shooting data and the shooting parameters;
writing the preliminary video data into a preset file according to the scene information to obtain target video data;
the shooting according to the shooting parameters, and recording the preset operation of the virtual object to obtain video shooting data specifically includes:
recording the preset operation executed by the virtual object to obtain recording data;
shooting according to the preset operation and the shooting parameters to obtain shooting data;
and combining the recorded data with the shooting data to obtain the video shooting data.
2. The method of claim 1, wherein the control data comprises: target scene guide information and target scene explanation content; the extracting the control data from the preset control resource library according to the scene information and the azimuth information comprises the following steps:
extracting original scene guide information and original scene explanation content from the preset control resource library according to the scene information;
screening the original scene guide information according to the azimuth information to obtain the target scene guide information;
and screening the original scene explanation content according to the azimuth information to obtain the target scene explanation content.
3. The method of claim 1, wherein the photographing parameters include: shooting angle and shooting time; the generating preliminary video data according to the video shooting data and the shooting parameters comprises the following steps:
extracting video segment data from the video shooting data according to the shooting time;
screening a target angle from the shooting angle according to the shooting time;
screening the video segment data according to the target angle to obtain a preliminary video segment;
and merging the preliminary video segments to obtain the preliminary video data.
4. The method according to claim 1, wherein writing the preliminary video data into a preset file according to the scene information to obtain target video data comprises:
acquiring video information of the preliminary video data;
screening the preliminary video data according to the video information and the scene information to obtain a target video segment;
and merging the target video segments and writing the merged target video segments into the preset file to obtain the target video data.
5. The method according to any one of claims 1 to 4, wherein after writing the preliminary video data into a preset file according to the scene information to obtain target video data, the method further comprises:
updating the target video data specifically comprises:
generating pause playing information according to the scene information and the azimuth information;
performing pause playing marking on the preliminary video data according to the pause playing information to obtain a pause playing node;
and writing the pause playing node into the preset file to update the target video data.
6. A video playing method, which is applied to a playing terminal, the video playing method comprising:
acquiring a play request;
acquiring target video data from a shooting terminal according to the playing request; the target video data is obtained by the video generation method according to any one of claims 1 to 5;
analyzing the target video data to obtain the scene information and the preliminary video data;
receiving playing control information;
and playing the preliminary video data according to the playing control information and the scene information.
7. The method of claim 6, wherein after the preliminary video data is played according to the play control information and the scene information, the method further comprises:
receiving a display request sent by a synchronous terminal;
screening video resource files from the preliminary video data according to the display request and the scene information;
transmitting the video resource file to the synchronous terminal;
and controlling the synchronous terminal to play the video resource file according to the play control information.
8. A video generating apparatus, which is applied to a photographing terminal, comprising:
the request acquisition module is used for acquiring a scene switching request; wherein the scene change request includes: scene information and azimuth information;
the extraction module is used for extracting control data from a preset control resource library according to the scene information and the azimuth information;
the control module is used for controlling a preset virtual object to execute preset operation according to the control data; wherein the preset operation at least comprises one of the following operations: guiding operation and explaining operation;
the parameter acquisition module is used for acquiring shooting parameters in a shooting process according to the scene information and the azimuth information;
the association module is used for shooting according to the shooting parameters and recording the preset operation of the virtual object to obtain video shooting data; the generation module is used for generating preliminary video data according to the video shooting data and the shooting parameters;
the writing module is used for writing the preliminary video data into a preset file according to the scene information to obtain target video data; the shooting according to the shooting parameters, and recording the preset operation of the virtual object to obtain video shooting data specifically includes:
recording the preset operation executed by the virtual object to obtain recording data;
shooting according to the preset operation and the shooting parameters to obtain shooting data;
and combining the recorded data with the shooting data to obtain the video shooting data.
9. A video playing device, characterized by being applied to a playing terminal, the device comprising:
the request receiving module is used for acquiring a play request;
the data acquisition module is used for acquiring target video data from the shooting terminal according to the playing request; the target video data being generated by the video generating apparatus according to claim 8;
the analysis module is used for analyzing the target video data to obtain the scene information and the preliminary video data;
the information receiving module is used for receiving the playing control information;
and the playing module is used for playing the preliminary video data according to the playing control information and the scene information.
10. A storage medium, which is a computer-readable storage medium, characterized in that the storage medium stores one or more programs, which are executable by one or more processors, to implement the method of any one of claims 1 to 5, or the steps of the method of any one of claims 6 to 7.
CN202210867273.0A 2022-07-22 2022-07-22 Video generation method and device, video playing method and device and storage medium Active CN115242980B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210867273.0A CN115242980B (en) 2022-07-22 2022-07-22 Video generation method and device, video playing method and device and storage medium

Publications (2)

Publication Number Publication Date
CN115242980A CN115242980A (en) 2022-10-25
CN115242980B (en) 2024-02-20

Family

ID=83674537

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210867273.0A Active CN115242980B (en) 2022-07-22 2022-07-22 Video generation method and device, video playing method and device and storage medium

Country Status (1)

Country Link
CN (1) CN115242980B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113784160A (en) * 2021-09-09 2021-12-10 北京字跳网络技术有限公司 Video data generation method and device, electronic equipment and readable storage medium
CN114155322A (en) * 2021-12-01 2022-03-08 北京字跳网络技术有限公司 Scene picture display control method and device and computer storage medium
WO2022095467A1 (en) * 2020-11-06 2022-05-12 北京市商汤科技开发有限公司 Display method and apparatus in augmented reality scene, device, medium and program
CN114615513A (en) * 2022-03-08 2022-06-10 北京字跳网络技术有限公司 Video data generation method and device, electronic equipment and storage medium
CN114745598A (en) * 2022-04-12 2022-07-12 北京字跳网络技术有限公司 Video data display method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN115242980A (en) 2022-10-25

Similar Documents

Publication Publication Date Title
KR102503413B1 (en) Animation interaction method, device, equipment and storage medium
CN110968736B (en) Video generation method and device, electronic equipment and storage medium
US10339715B2 (en) Virtual reality system
TWI752502B (en) Method for realizing lens splitting effect, electronic equipment and computer readable storage medium thereof
KR20210110620A (en) Interaction methods, devices, electronic devices and storage media
CN110809175B (en) Video recommendation method and device
US20190156558A1 (en) Virtual reality system
CN105635712A (en) Augmented-reality-based real-time video recording method and recording equipment
US11847726B2 (en) Method for outputting blend shape value, storage medium, and electronic device
CN108257205B (en) Three-dimensional model construction method, device and system
WO2017157135A1 (en) Media information processing method, media information processing device and storage medium
JP2022500795A (en) Avatar animation
CN112528768A (en) Action processing method and device in video, electronic equipment and storage medium
CN113965773A (en) Live broadcast display method and device, storage medium and electronic equipment
CN114222076B (en) Face changing video generation method, device, equipment and storage medium
WO2017042070A1 (en) A gazed virtual object identification module, a system for implementing gaze translucency, and a related method
KR20160069663A (en) System And Method For Producing Education Cotent, And Service Server, Manager Apparatus And Client Apparatus using therefor
CN108320331B (en) Method and equipment for generating augmented reality video information of user scene
CN115242980B (en) Video generation method and device, video playing method and device and storage medium
KR20160136833A (en) medical education system using video contents
CN115442519A (en) Video processing method, device and computer readable storage medium
CN107368193A (en) Human-machine operation exchange method and system
KR20140136713A (en) Methods and apparatuses of an learning simulation model using images
KR102459198B1 (en) Apparatus for displaying contents
CN114241132B (en) Scene content display control method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant