CN109889914B - Video picture pushing method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN109889914B
CN109889914B (application CN201910176636.4A)
Authority
CN
China
Prior art keywords
video
model
camera
picture
camera information
Prior art date
Legal status
Active
Application number
CN201910176636.4A
Other languages
Chinese (zh)
Other versions
CN109889914A (en)
Inventor
郑旭东 (Zheng Xudong)
孙弢 (Sun Tao)
彭帅 (Peng Shuai)
朱羽 (Zhu Yu)
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910176636.4A
Publication of CN109889914A
Application granted
Publication of CN109889914B

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application relates to a video picture pushing method and apparatus, a computer device, and a storage medium. The method comprises: pushing a video picture of a first video to a terminal, the first video being generated from pictures of a real-scene field shot by a physical camera; when an operation of switching to a target view angle is received, acquiring first camera information, namely the camera information of a virtual camera at the target view angle; acquiring second camera information, namely the camera information of the physical camera when it collected the first video; generating a first switching animation from the first camera information and the second camera information and pushing it to the terminal; and then pushing a video picture of a second video to the terminal, the second video being obtained by the virtual camera shooting a three-dimensional field model of the real-scene field starting from the first camera information. While a video generated from pictures shot by a physical camera is being pushed, the pushed picture can thus be smoothly switched to a video of the three-dimensional field model shot by a virtual camera, which improves the efficiency of switching the view angle of a live picture during live broadcast.

Description

Video picture pushing method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of internet application technologies, and in particular to a video picture pushing method and apparatus, a computer device, and a storage medium.
Background
With the continuous development of internet application technology, live video broadcast is used ever more widely; for example, competitions held on complex and/or large venues are increasingly presented to audiences through live video.
In a live broadcast based on a complex and/or large venue, the broadcast staff set up fixed or movable camera positions in the venue in advance. During the broadcast, when the staff need to switch the live view to a certain target view angle, they manually select a suitable camera picture; if the view angle of the selected picture is unsatisfactory, they can also move a camera to obtain the desired view angle.
However, in the related art, selecting the camera view to switch to and moving the camera position generally consume considerable time, which reduces the efficiency of view switching during live broadcast.
Disclosure of Invention
The embodiments of the application provide a video picture pushing method and apparatus, a computer device, and a storage medium, which can improve the efficiency of switching the view angle of a live picture during live broadcast. The technical solution is as follows:
in one aspect, a video picture pushing method is provided, and the method includes:
pushing a video picture of a first video to a terminal, wherein the first video is generated from pictures obtained by a physical camera shooting a real-scene field;
when an operation of switching to a target view angle is received, acquiring first camera information, wherein the first camera information is the camera information of a virtual camera at the target view angle, and camera information comprises a shooting position and a shooting direction;
acquiring second camera information, wherein the second camera information is the camera information of the physical camera when it collected the pictures corresponding to the first video;
generating a first switching animation according to the first camera information and the second camera information;
pushing the first switching animation to the terminal;
after the first switching animation is pushed, pushing a video picture of a second video to the terminal, wherein the second video is obtained by the virtual camera shooting the three-dimensional field model of the real-scene field starting from the first camera information.
In another aspect, a video picture pushing apparatus is provided, the apparatus comprising:
the first pushing module is used for pushing a video picture of a first video to the terminal, wherein the first video is a video generated according to a picture obtained by shooting a real scene field by an entity camera;
the first acquisition module is used for acquiring first camera information when receiving an operation of switching to a target view angle, wherein the first camera information is the camera information of a virtual camera at the target view angle, and the camera information comprises a shooting position and a shooting direction;
a second obtaining module, configured to obtain second camera information, where the second camera information is camera information obtained when the entity camera collects the first video;
the first animation generation module is used for generating a first switching animation according to the first camera information and the second camera information;
the second pushing module is used for pushing the first switching animation to the terminal;
and the third pushing module is used for pushing a video picture of a second video to the terminal after the first switching animation is pushed, wherein the second video is a video obtained by shooting the three-dimensional field model of the real scene field by the virtual camera from the first camera information.
Optionally, the first animation generation module is specifically configured to obtain N pieces of intermediate camera information, where the N pieces of intermediate camera information are the camera information corresponding respectively to N moments in the process of changing from the second camera information to the first camera information; acquire N pictures obtained by the virtual camera shooting the three-dimensional field model according to the N pieces of intermediate camera information respectively; and generate the first switching animation according to the N pictures, where N is a positive integer.
Optionally, when the first switching animation is generated according to the N pictures, the first animation generation module is specifically configured to sort the N pictures in order from first to last according to the N moments; and combining the N pictures into the first switching animation according to the sequencing result of the N pictures.
Optionally, when the N pictures are combined to generate the first switching animation according to the sorting result of the N pictures, the first animation generation module is specifically configured to sequentially add the N pictures to the target video picture according to the sorting result of the N pictures, and then combine to generate the first switching animation; the target video picture is a video picture which is pushed to the terminal when the operation of switching to the target visual angle is received.
Optionally, the apparatus further comprises:
a position acquisition module, configured to acquire position information of each moving object in the real-scene site at the time when a target video picture is acquired before the first animation generation module acquires N pictures obtained by the virtual camera shooting the three-dimensional site model according to the N pieces of intermediate camera information, respectively; the target video picture is a video picture which is pushed to the terminal when the operation of switching to the target visual angle is received;
the model adding module is used for adding the object model of each activity object in the three-dimensional field model according to the position information of each activity object to obtain the three-dimensional field model after the object model is added;
when acquiring N pictures obtained by the virtual camera shooting the three-dimensional field model according to the N pieces of intermediate camera information, the first animation generation module is specifically configured to perform image acquisition on the three-dimensional field model after the object model is added according to the N pieces of intermediate camera information, so as to acquire the N pictures.
Optionally, the apparatus further comprises:
a third obtaining module, configured to obtain new first camera information when an operation of adjusting a target view angle is received, where the new first camera information is camera information of the virtual camera at the adjusted target view angle;
and the fourth pushing module is used for pushing a new model picture to the terminal, wherein the new model picture is a picture obtained by shooting the three-dimensional field model by the virtual camera according to the new first camera information.
Optionally, the apparatus further comprises:
and the fourth obtaining module is used for shooting the three-dimensional field model after the object model is added from the first camera information before the third pushing module pushes the video picture of the second video to the terminal, so as to obtain the video picture of the second video.
Optionally, the apparatus further comprises:
a fifth obtaining module, configured to obtain third camera information when an operation of resuming video playing is received, where the third camera information is camera information of the virtual camera when the operation of resuming video playing is received;
the second animation generation module is used for generating a second switching animation according to the third camera information and the second camera information;
a fifth pushing module, configured to push the second switching animation to the terminal;
and the sixth pushing module is used for continuously pushing the video picture of the first video to the terminal after the second switching animation is pushed.
Optionally, the apparatus further comprises:
the spliced picture obtaining module is used for obtaining a spliced model picture before the first pushing module pushes a video picture of the first video to a terminal, wherein the spliced model picture is obtained by the virtual camera shooting the three-dimensional field model of the real-scene field according to specified camera information;
and the splicing module is used for splicing the picture obtained by shooting the real scene field by the entity camera with the spliced model picture to obtain the video picture of the first video.
Optionally, the apparatus further comprises:
the parameter model acquisition module is used for acquiring a field parameter model before the first pushing module pushes the video picture of the first video to the terminal, and the field parameter model is used for indicating the field parameter of the real scene field;
the model adding module is used for adding the site parameter model on the three-dimensional site model;
a parameter picture acquiring module, configured to acquire a parameter model picture, where the parameter model picture is a picture obtained by the virtual camera shooting the site parameter model according to the second camera information;
and the superposition module is used for superposing the parameter model picture on a picture obtained by shooting the real scene field by the entity camera to obtain a video picture of the first video.
Optionally, the apparatus further comprises:
a sixth obtaining module, configured to obtain a first model before the first pushing module pushes a video picture of a first video to a terminal, where the first model is generated in a laser scanning manner and is a three-dimensional model of the real-scene site;
a seventh obtaining module, configured to obtain a second model and a position of the second model in the real-scene site, where the second model is generated in a photo synthesis manner and is a three-dimensional model of a site object; the field object is a fixed object in the live-action field;
and the model merging module is used for merging the first model and the second model according to the position of the second model in the real scene field to obtain the three-dimensional field model.
Optionally, the apparatus further comprises:
and the seventh pushing module is used for pushing the three-dimensional field model to the terminal.
In another aspect, a computer device is provided, which includes a processor and a memory, where at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the above-mentioned video picture pushing method.
In yet another aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, which is loaded and executed by a processor to implement the above-mentioned video picture pushing method.
The technical scheme provided by the application can comprise the following beneficial effects:
in the process of pushing to the terminal a first video obtained by a physical camera shooting a real-scene field, the director equipment, upon receiving an operation of switching to a target view angle, acquires the first camera information of the virtual camera at that target view angle, generates a switching animation from the first camera information and the second camera information of the physical camera, and, after pushing the switching animation to the terminal, pushes the model picture obtained by the virtual camera shooting the three-dimensional field model of the real-scene field according to the first camera information. With this scheme, during live broadcast the director can switch the broadcast picture to a view of the three-dimensional field model at the target view angle through a single switching operation; that is, while a video generated from pictures shot by the physical camera is being pushed, the pushed picture can be smoothly switched to a video of the three-dimensional field model shot by the virtual camera. Because adjusting the view angle of a virtual camera is far more convenient and rapid than adjusting the view angle of a physical camera, the director can complete the switch in a very short time, which improves the efficiency of view-angle switching of the live picture during live broadcast.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a schematic block diagram of a video push system according to an exemplary embodiment;
FIG. 2 is a flow diagram illustrating a video picture pushing method in accordance with an exemplary embodiment;
fig. 3 is a schematic block diagram of a director device according to the embodiment shown in fig. 2;
FIG. 4 is a flow diagram illustrating a video picture pushing method in accordance with an exemplary embodiment;
FIG. 5 is a flow diagram of a live scheme provided in accordance with an exemplary embodiment;
FIG. 6 is a schematic view of a rock wall model to which the embodiment shown in FIG. 5 relates;
FIG. 7 is a schematic diagram of a rock point model to which the embodiment of FIG. 5 relates;
fig. 8 is an architectural diagram of a director device to which the embodiment shown in fig. 5 relates;
FIG. 9 is a schematic diagram of a model graphical control interface according to the embodiment shown in FIG. 5;
FIG. 10 is a schematic diagram of a switching animation according to the embodiment shown in FIG. 5;
fig. 11 is a block diagram showing a configuration of a video picture pushing apparatus according to an exemplary embodiment;
FIG. 12 is a block diagram illustrating a configuration of a computer device, according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Before describing the various embodiments shown herein, several concepts related to the present application will be described:
1) live-action field
In the embodiments of the present application, the live-action field is the target field shot by a physical camera. Taking race live broadcast as an example, different types of races use different fields: the live-action field of a rock climbing race is the rock wall used in the race, the live-action field of a car race is the track, and so on.
One live-action field may correspond to a plurality of physical cameras. The pictures shot by these physical cameras are spliced by region or by time into the video that the server pushes to the terminal.
2) Three-dimensional field model
In this embodiment, the three-dimensional field model is a three-dimensional virtual model of the live-action field, which can be regarded as the live-action field scaled by a certain proportion. The three-dimensional field model of the live-action field may be generated and stored in advance.
3) Virtual camera
In the embodiment of the present application, the virtual camera may also be referred to as a camera model, and is a virtual model for capturing a picture in a virtual scene including the three-dimensional field model.
The virtual camera has camera information similar to the physical camera, such as position, attitude, aperture size, focal length, and the like. The pictures acquired by the virtual camera in the virtual scene can be determined by the camera information of the virtual camera in the virtual scene.
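As an illustration of the camera information concept, the following is a minimal Python sketch of a structure holding the attributes listed above; the class and field names are assumptions for this sketch, not names from the application.

```python
from dataclasses import dataclass

# Hypothetical container for the "camera information" shared by the physical
# and the virtual camera: a shooting position, a shooting direction, and
# optional optical attributes such as aperture size and focal length.
@dataclass
class CameraInfo:
    position: tuple[float, float, float]   # shooting position in the shared spatial coordinate system
    direction: tuple[float, float, float]  # shooting direction (unit view vector)
    aperture: float = 2.8                  # aperture size (f-number), optional attribute
    focal_length: float = 35.0             # focal length in mm, optional attribute
```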
Fig. 1 is a schematic diagram illustrating the structure of a video push system according to an exemplary embodiment. The system comprises: a server 120 and a number of terminals 140.
The server 120 may be a single server, a cluster of servers, a virtualization platform, or a cloud computing service center.
The terminal 140 may be a terminal device with a video playing function, for example a mobile phone, a tablet computer, an e-book reader, smart glasses, a smart watch, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, a desktop computer, and the like.
The terminal 140 and the server 120 are connected via a communication network. Optionally, the communication network is a wired network or a wireless network.
In the embodiment of the present application, the server 120 sends the live video frame to the terminal 140, and the terminal 140 plays the video. The video frames may include, but are not limited to, video frames of a live video stream, video frames of a video file, frames of a video animation, and the like.
Optionally, when the video pushed by the server is a live video stream, the video pushing system may further include a director device 160.
Director device 160 may comprise a single computer device or alternatively, may comprise multiple computer devices.
The director device 160 corresponds to an image capture assembly (i.e., a physical camera) and an audio capture assembly. The image and audio capture assemblies may be part of the director device 160, for example its built-in camera and built-in microphone; alternatively, they may be connected to the director device 160 as peripherals, for example a camera and a microphone plugged into it; or they may be partly built in and partly peripheral, for example a built-in camera together with the microphone of a headset connected to the director device 160. The embodiments of the application do not limit the implementation forms of the image and audio capture assemblies.
In this embodiment, the director device 160 may upload the live video stream recorded locally to the server 120, and the server 120 performs related processing such as transcoding on the live video stream and then pushes the live video stream to the terminal 140.
Optionally, the system may further include a management device (not shown in fig. 1), which is connected to the server 120 through a communication network. Optionally, the communication network is a wired network or a wireless network.
Optionally, the wireless or wired network described above uses standard communication techniques and/or protocols. The network is typically the Internet, but may be any network, including but not limited to a Local Area Network (LAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), a mobile, wired, or wireless network, a private network, or any combination of virtual private networks. In some embodiments, data exchanged over the network is represented using techniques and/or formats such as Hypertext Markup Language (HTML) and Extensible Markup Language (XML). All or some of the links may also be encrypted using conventional encryption techniques such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), Virtual Private Network (VPN), or Internet Protocol Security (IPsec). In other embodiments, custom and/or dedicated data communication techniques may be used in place of, or in addition to, the techniques described above.
Fig. 2 is a flowchart illustrating a video picture pushing method that may be used in the video pushing system shown in fig. 1 according to an exemplary embodiment. As shown in fig. 2, the video picture pushing method may include the steps of:
step 21, the director equipment pushes a video picture of a first video to the terminal, and the terminal receives the video picture of the first video pushed by the director equipment; the first video is a video generated from a picture obtained by shooting a real-scene field by a physical camera.
And step 22, the terminal plays the video picture of the first video.
Step 23, when receiving the operation of switching to the target view angle, the director equipment acquires first camera information, where the first camera information is camera information of the virtual camera at the target view angle, and the camera information includes a shooting position and a shooting direction.
Optionally, the camera information may further include information such as an aperture size and a focal length.
The target view angle may be a view angle determined by a free view-angle adjustment operation. That is, in the embodiments of the present application, the target view angle is not limited to a fixed view angle but can be adjusted freely by the director.
And 24, the director equipment acquires second camera information, wherein the second camera information is the camera information when the entity camera acquires the picture corresponding to the first video.
And step 25, the director equipment generates a first switching animation according to the first camera information and the second camera information.
The first switching animation may be an animation formed from the pictures acquired while the camera information of the virtual camera changes from the second camera information to the first camera information.
Step 26, the director equipment pushes the first switching animation to the terminal; and the terminal receives the first switching animation pushed by the director equipment.
Optionally, in another possible implementation, the first switching animation may be generated by the terminal. In this case the terminal stores the three-dimensional field model in advance; when it receives an operation of switching to a target view angle, it determines the target view angle from the operation, acquires the first camera information accordingly, and generates the first switching animation from the first camera information and the second camera information, where the second camera information may be sent to the terminal by the server.
The three-dimensional field model can be an equal-proportion virtual model which is constructed in advance according to a real scene field.
And 27, the terminal plays the first switching animation.
Step 28, after pushing the first switching animation, the director equipment pushes a video picture of a second video to the terminal, and the terminal receives the video picture of the second video; the second video is obtained by the virtual camera shooting the three-dimensional field model of the real-scene field starting from the first camera information.
Optionally, in another possible implementation, the video picture of the second video may also be generated and displayed by the terminal. In this case the terminal stores the three-dimensional field model in advance; when it receives an operation of switching to a target view angle, it determines the target view angle from the operation, acquires the first camera information accordingly, and then shoots the three-dimensional field model of the real-scene field with the virtual camera starting from the first camera information to obtain the video picture of the second video.
And step 29, the terminal plays the video picture of the second video.
In the embodiment of the application, the director equipment pushes pictures or animations to the terminal through the server.
In the system of this embodiment, based on the live-action video pictures shot by the physical camera and a pre-generated three-dimensional field model of the live-action field, the director device 160 can freely insert three-dimensional virtual pictures of the field while live-broadcasting the live-action pictures.
Please refer to fig. 3, which illustrates an architecture diagram of a director device according to an embodiment of the present application. As shown in fig. 3, the director device 30 includes several physical cameras 31, a rendering engine 32, and a director interface 33, and a three-dimensional field model of the real world field is stored in a memory of the director device 30.
The physical camera 31 is configured to collect scene pictures of the live-action field; the director interface 33 is configured to control the view angle from which the virtual camera observes the three-dimensional field model; and the rendering engine 32 renders video pictures and switching animations from the pictures collected by the physical camera 31 and/or the virtual-camera view angle controlled through the director interface 33.
Based on the director device architecture shown in fig. 3, the director controls the rendering engine 32 through the director interface 33: the engine renders the video picture of the first video from the pictures shot by the physical camera 31 and pushes it to the server. When the director wants to switch the broadcast from the live-action picture to a model picture, the director performs a free view-angle switching operation through the director interface 33; for example, on a director interface displaying the three-dimensional field model of the live-action field, the director freely adjusts the view angle for observing the model with a touch screen or mouse, and the adjusted view angle is taken as the target view angle, whose camera information is acquired as the first camera information. From the camera information of the physical camera (i.e., the second camera information) and the first camera information, the rendering engine 32 renders an effect animation (i.e., the first switching animation) of the three-dimensional field model as observed while the camera information changes from the second camera information to the first camera information, appends it to the video picture of the first video, and pushes it to the terminal. After the effect animation has been pushed, the rendering engine 32 begins shooting the three-dimensional field model from the target view angle (corresponding to the first camera information) to produce the video picture of the second video, which is pushed to the terminal after the effect animation.
In summary, in this embodiment, while pushing to the terminal the first video obtained by the physical camera shooting the real-scene field, the director equipment, upon receiving an operation of switching to a target view angle, acquires the first camera information of the virtual camera at that view angle, generates a switching animation from the first camera information and the second camera information of the physical camera, and, after pushing the switching animation to the terminal, pushes the model picture obtained by the virtual camera shooting the three-dimensional field model of the real-scene field according to the first camera information. With this scheme, during live broadcast the director can switch the broadcast picture to a view of the three-dimensional field model at the target view angle through a single switching operation; that is, while a video generated from pictures shot by the physical camera is being pushed, the pushed picture can be smoothly switched to a video of the three-dimensional field model shot by the virtual camera, so the director can freely adjust the view of the field being pushed. Because adjusting the view angle of a virtual camera is far more convenient and rapid than selecting and adjusting the view angle of a physical camera, the director can complete the switch in a very short time, improving the efficiency of view-angle switching of the live picture during live broadcast.
In addition, in this embodiment, before the model picture is pushed, a switching animation is generated and pushed to the terminal for playback. This presents a camera movement effect close to a real one, preserves picture continuity during the switch, and improves the display effect when the view angle changes from the physical camera to the virtual camera.
Fig. 4 is a flowchart illustrating a video picture pushing method that may be used in the video pushing system shown in fig. 1 according to an exemplary embodiment. As shown in fig. 4, the video picture pushing method may include the steps of:
step 401, a director equipment pushes a video picture of a first video to a terminal, and the terminal receives the video picture of the first video pushed by the director equipment; the first video is a video generated from a picture obtained by shooting a real-scene field by a physical camera.
When the video picture is pushed to the terminal by the director equipment, the video picture can be pushed through the server. That is, the director equipment uploads the video pictures to be pushed to the server, and the video pictures are pushed to each accessed terminal by the server.
For example, taking a live video scene of a certain rock climbing competition as an example, a director device shoots an actual rock climbing track (i.e., a live-action field) through an entity camera, obtains a live video stream (i.e., a first video) according to a shot picture, uploads the live video stream to a server, and the server pushes a video picture in the live video stream to a terminal (which may be a director terminal or a user terminal).
In this embodiment of the present application, the video frame of the first video may be an original video frame captured by a physical camera; alternatively, the video frame of the first video may be a frame obtained by further processing on the basis of an original video frame captured by the physical camera.
Optionally, the video frame of the first video may be a frame obtained by splicing an original frame obtained by shooting with a physical camera and a frame of a virtual model.
For example, when generating the video picture of the first video, the director equipment may obtain a spliced model picture, which is obtained by the virtual camera shooting the three-dimensional field model of the real-scene field according to specified camera information; the director equipment then splices the picture obtained by the physical camera shooting the real-scene field with the spliced model picture to obtain the video picture of the first video.
In the embodiments of the present application, the picture obtained by splicing the physical camera's picture of the real-scene field with the spliced model picture may also be referred to as a Computer Graphics (CG) picture.
For example, in one possible implementation, the rendering engine of the director equipment acquires the picture shot by the physical camera, acquires the spliced model picture of the three-dimensional field model shot by the virtual camera according to the specified camera information, splices the model picture onto one side (for example, the right side) of the physical camera's picture, and pushes the spliced picture to the terminal as the video picture of the first video. The splicing position and the edge shape can be set by a director or a developer.
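As a minimal sketch of this splicing step, assuming both pictures are decoded into NumPy arrays of the same height, the model picture can be attached to the right of the live picture as follows; the function name and the fixed left/right layout are illustrative assumptions.

```python
import numpy as np

def splice_frames(live_frame: np.ndarray, model_frame: np.ndarray) -> np.ndarray:
    """Splice the virtual-camera model picture onto the right side of the
    physical-camera picture. Both frames are H x W x 3 uint8 arrays; any
    resizing or edge shaping chosen by the director is omitted here."""
    assert live_frame.shape[0] == model_frame.shape[0], "frames must share a height"
    return np.hstack([live_frame, model_frame])  # live on the left, model on the right
```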
In a possible implementation, the spliced model picture may contain information other than the virtual camera's picture of the three-dimensional field model. Taking a rock climbing race as an example, the spliced model picture may further include a progress indicator bar showing how much of the route a climber has completed.
The specified camera information may be preset camera information corresponding to the current camera information of the physical camera; when the physical camera's current camera information changes, the specified camera information changes accordingly, and the correspondence between the two may be preset by a developer or administrator. Alternatively, the specified camera information may be camera information manually set or adjusted by the director.
Optionally, the video picture of the first video may also be a picture obtained by superimposing the relevant field parameters on the original picture shot by the physical camera.
For example, when generating a video frame of a first video, the director equipment may further obtain a site parameter model, where the site parameter model is used to indicate a site parameter of the real-scene site; adding the site parameter model to the three-dimensional site model; acquiring a parameter model picture, wherein the parameter model picture is a picture obtained by shooting the field parameter model by the virtual camera according to the second camera information; and overlaying the parameter model picture on a picture obtained by shooting the real scene field by the entity camera to obtain a video picture of the first video. And the second camera information is the camera information when the entity camera acquires the picture corresponding to the first video.
In the embodiments of the present application, the picture obtained by superimposing the parameter model picture on the physical camera's picture of the real-scene field may also be referred to as an Augmented Reality (AR) picture.
In this embodiment, the director equipment may add a model indicating field parameters to the three-dimensional field model. The field parameter model may be preset; taking a rock climbing field as an example, it may be a pre-made overlay model of a wall section that shows the angle between the wall and the horizontal plane. Alternatively, the field parameter model may be generated on the fly; for example, in a rock climbing field, it may be a line model that the director equipment generates immediately after detecting that the director has selected two rock points, connecting the two points, with a number indicating the distance between them.
After the field parameter model is added to the three-dimensional field model, the director equipment can set a virtual camera in the virtual space according to the camera information of the current physical camera; the picture of the three-dimensional field model shot by the virtual camera is then synchronized with the picture of the real-scene field shot by the physical camera, and the director equipment collects the picture of the field parameter model through this virtual camera. For example, after adding the field parameter model to the three-dimensional field model in a virtual space, the director equipment may remove the three-dimensional field model, keep only the field parameter model, and shoot it with the virtual camera set according to the physical camera's information to obtain the parameter model picture. Alternatively, the director equipment may determine the coordinates of the field parameter model from the coordinates of the three-dimensional field model, place the field parameter model directly in the virtual scene, and then shoot it with the virtual camera set according to the physical camera's information to obtain the parameter model picture. After the parameter model picture is obtained, the rendering engine superimposes it on top of the picture of the real-scene field shot by the physical camera under the same camera information, yielding the video picture of the first video.
When the parameter model picture is superimposed on the picture of the real-scene field shot by the physical camera, the parameter model picture may be set to a semi-transparent state.
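A minimal sketch of this semi-transparent superposition, assuming both pictures are aligned uint8 arrays and using OpenCV's blending call; the opacity value and the non-black content mask are assumptions for illustration.

```python
import cv2
import numpy as np

def overlay_parameter_picture(live_frame: np.ndarray,
                              param_frame: np.ndarray,
                              alpha: float = 0.4) -> np.ndarray:
    """Blend the parameter model picture semi-transparently onto the
    live-action picture, only where the parameter picture has content."""
    mask = (param_frame.sum(axis=2, keepdims=True) > 0)   # non-black pixels carry content
    blended = cv2.addWeighted(live_frame, 1.0 - alpha, param_frame, alpha, 0.0)
    return np.where(mask, blended, live_frame)
```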
In step 402, the terminal plays the video frame of the first video.
After receiving the video pictures of the live video stream pushed by the director equipment, the terminal plays them in its playback interface.
In step 403, when the director equipment receives an operation of switching to a target view angle, acquiring first camera information, where the first camera information is camera information of the virtual camera at the target view angle, and the camera information includes a shooting position and a shooting direction.
All camera information in the embodiments of the present application is expressed in the same spatial coordinate system; that is, every shooting position is a position in that coordinate system, and every shooting direction is likewise a direction in it.
In this embodiment, when the director wants to change the view angle for observing the real-scene field, the director may perform an operation of switching to a target view angle on the director equipment; the director equipment determines the target view angle from the received operation and then acquires the first camera information.
In the embodiment of the present application, the camera information may include other attribute information related to image acquisition, such as an aperture size, a focal length, an acquisition frame rate, and the like, in addition to the shooting position and the shooting direction.
Alternatively, the camera information may include only the shooting position and the shooting direction, and the attribute information other than the shooting position and the shooting direction may be a fixed default value.
The shooting position and shooting direction may be determined according to the received operation of switching to the target view angle.
In the embodiment of the present application, the operation of controlling the target view angle of the virtual camera may include, but is not limited to, the following two:
1) Touch-screen control: a spherical coordinate system is established in space, and the virtual camera always aims at a chosen sphere center. A free view angle is achieved by controlling rotation (moving on the sphere's surface around the center), translation (moving the center and the virtual camera together along a spatial vector), and zoom (changing the distance to the center), while a separate control area drives the camera's pan, tilt, and zoom. A sketch of this orbit control follows the list below.
2) Peripheral control: the virtual camera can be moved forward, backward, left, right, up, and down, and panned, tilted, and zoomed. Whereas in mode 1) the camera's movement is constrained by the sphere center, peripheral control borrows the free view angle of three-dimensional video games: the camera's position is freed from the sphere-center constraint and moves directly and freely in the virtual space, so an operator can more easily place the camera at a suitable position. Moving relative to the sphere center still meets real needs, for example observing a particular rock point from multiple angles, so the operator can switch between the two modes.
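The sketch below illustrates mode 1) under the stated assumptions: the virtual camera sits on a sphere around a center point and always looks at that center; the angle convention is assumed, not specified by the application.

```python
import math

def orbit_camera(center, radius, theta, phi):
    """Return (position, direction) for a virtual camera orbiting `center`.

    theta is the azimuth and phi the polar angle, in radians (assumed
    convention). Rotation changes theta/phi, zoom changes radius, and
    translation moves `center` and the camera together."""
    cx, cy, cz = center
    px = cx + radius * math.sin(phi) * math.cos(theta)
    py = cy + radius * math.cos(phi)
    pz = cz + radius * math.sin(phi) * math.sin(theta)
    dx, dy, dz = cx - px, cy - py, cz - pz        # always aim at the sphere center
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (px, py, pz), (dx / norm, dy / norm, dz / norm)
```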
In step 404, the director device obtains second camera information.
When the camera information of the physical camera is fixed, the second camera information may be determined and stored at the time the physical camera is set up.
Alternatively, when the camera information of the physical camera is not fixed, the second camera information may be obtained by tracking the physical camera's view: an initial position and shooting direction are set for the physical camera, and while the real-scene field is being shot and the camera's position, shooting direction, focal length, and aperture size are being controlled, the director equipment derives the camera information at each moment by combining the initial position and direction with the control commands issued to the camera.
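A sketch of this view tracking, reusing the CameraInfo sketch above: starting from the stored initial pose, every control command sent to the camera rig is mirrored onto the tracked record, so the director equipment knows the physical camera's information at each moment. The command dictionary format is an assumption for illustration.

```python
def apply_control(info: CameraInfo, command: dict) -> CameraInfo:
    """Update the tracked camera information with one control command."""
    x, y, z = info.position
    dx, dy, dz = command.get("move", (0.0, 0.0, 0.0))          # rig translation
    return CameraInfo(
        position=(x + dx, y + dy, z + dz),
        direction=command.get("direction", info.direction),    # result of pan/tilt
        aperture=command.get("aperture", info.aperture),
        focal_length=command.get("focal_length", info.focal_length),
    )
```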
In step 405, the director device generates a first switching animation according to the first camera information and the second camera information.
The first switching animation may be an animation formed from the pictures acquired while the camera information of the virtual camera changes from the second camera information to the first camera information.
Optionally, when generating the first switching animation according to the first camera information and the second camera information, the director equipment may obtain N pieces of intermediate camera information, where the N pieces of intermediate camera information are camera information respectively corresponding to N times in a process of changing from the second camera information to the first camera information, and N is a positive integer; acquiring N pictures obtained by shooting the three-dimensional field model by the virtual camera according to the N pieces of intermediate camera information; and generating the first switching animation according to the N pictures.
In this embodiment, the director equipment may simulate a process in which the camera information changes gradually from the second camera information to the first camera information, and sample N times at a preset sampling frequency during this change, each sample yielding the intermediate camera information at that sampling moment.
Optionally, when the first switching animation is generated according to the N pictures, the director device sorts the N pictures in order from first to last according to the N times; and combining the N pictures into the first switching animation according to the sequencing result of the N pictures.
Since the N moments lie in the process of gradually changing from the second camera information to the first camera information, once the N pictures are sorted in time order, their sequence corresponds to the pictures the virtual camera would collect while moving from the second camera information to the first camera information, visually creating the continuous picture seen as the view changes from the physical camera's view angle to the target view angle.
Optionally, when the N pictures are combined to generate the first switching animation according to the sorting result of the N pictures, the director device may sequentially add the N pictures to the target video picture according to the sorting result of the N pictures, and then combine to generate the first switching animation; the target video picture is a video picture pushed to the terminal when the operation of switching to the target view angle is received.
In the embodiment of the present application, in order to reduce the obtrusiveness when the angle of view of the physical camera is switched to the angle of view of the virtual camera as much as possible, the last live-action picture before switching may be added before the N pictures.
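Putting the above steps together, a minimal sketch of the animation build, assuming simple linear interpolation between the two camera informations and an assumed `render_model_picture` helper standing in for the rendering engine (a real engine would renormalize the interpolated direction or use spherical interpolation):

```python
def lerp(a, b, t):
    return tuple(av + (bv - av) * t for av, bv in zip(a, b))

def build_switch_animation(second_info, first_info, n, last_live_frame,
                           render_model_picture):
    """Sample N intermediate camera informations from second_info (physical
    camera) to first_info (target view), render one model picture per sample,
    and prepend the last live-action picture so the cut is not abrupt."""
    frames = [last_live_frame]            # the target video picture at switch time
    for i in range(1, n + 1):
        t = i / n                         # sample moments, ordered first to last
        mid = CameraInfo(
            position=lerp(second_info.position, first_info.position, t),
            direction=lerp(second_info.direction, first_info.direction, t))
        frames.append(render_model_picture(mid))
    return frames                         # combined in order: the first switching animation
```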
Optionally, before acquiring the N pictures of the three-dimensional field model shot by the virtual camera according to the N pieces of intermediate camera information, the director equipment may acquire the position information of each moving object in the real-scene field at the moment the target video picture was collected, the target video picture being the video picture pushed to the terminal when the operation of switching to the target view angle is received; it then adds an object model for each moving object to the three-dimensional field model according to that object's position information, obtaining the three-dimensional field model with the object models added.
Correspondingly, when acquiring the N pictures acquired by the virtual camera shooting the three-dimensional field model according to the N pieces of intermediate camera information, the director equipment may acquire the images of the three-dimensional field model to which the object model is added according to the N pieces of intermediate camera information, respectively, to acquire the N pictures.
During a live video broadcast, moving objects (such as people or objects in motion) usually appear in the live-action field and change as the broadcast proceeds, but the pre-generated three-dimensional field model cannot represent them directly. To keep the moving objects of the current live-action field visible when the view switches from the physical camera to the virtual camera, in this embodiment the director equipment may add corresponding object models to the three-dimensional virtual model according to the moving objects actually present in the field. After the view angle is adjusted, the first switching animation then also contains the images of these object models, making the switching process it shows more lifelike.
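A minimal sketch of this object placement, under the assumption of a scene API with a `place_model` call; the scene object, the object record format, and the scale factor are all illustrative, not from the application.

```python
def add_moving_objects(scene, moving_objects, scale=1.0):
    """Place an object model into the three-dimensional field model for every
    moving object detected in the live-action field at the switch moment."""
    for obj in moving_objects:
        x, y, z = obj["position"]          # real-world coordinates at capture time
        scene.place_model(obj["model"],    # pre-built model of this object
                          (x * scale, y * scale, z * scale))
    return scene                           # field model with object models added
```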
Step 406, the director equipment pushes the first switching animation to the terminal; and the terminal receives the first switching animation pushed by the director equipment.
After the director equipment generates the first switching animation, the first switching animation can be pushed to the terminal.
Step 407, the terminal plays the first switching animation.
The terminal can play the first switching animation after receiving the first switching animation.
Step 408, after pushing the first switching animation, the director equipment pushes a video image of a second video to the terminal, and the terminal receives the video image of the second video; the second video is a video obtained by the virtual camera shooting the three-dimensional field model of the real-scene field from the first camera information.
After the first switching animation has been pushed, the director equipment can push the video picture of the virtual model to the terminal, so that the live picture pushed to the user switches from the picture of the real-scene field to the picture of the three-dimensional field model.
Optionally, before pushing the video picture of the second video to the terminal, the director equipment may further shoot the three-dimensional field model after the object model is added according to the first camera information, so as to obtain the video picture of the second video.
In this embodiment of the application, after the angle of view of the physical camera is adjusted to the angle of view of the virtual camera, the director equipment may further control the virtual camera to shoot the three-dimensional virtual model added with the object model to obtain a video picture of the second video, so that the collected three-dimensional model picture may also indicate a moving object actually existing in the real scene field, thereby improving a display effect of the video picture of the second video.
In step 409, the terminal plays the video frame of the second video.
After the terminal plays the first switching animation, the video image of the second video can be played.
Optionally, after pushing the video picture of the second video to the terminal, the director equipment may further monitor for an operation of adjusting the target view angle. When such an operation is received, it acquires new first camera information, i.e., the camera information of the virtual camera at the adjusted target view angle, and pushes to the terminal a new model picture of the second video, obtained by the virtual camera shooting the three-dimensional field model according to the new first camera information.
The operation of adjusting the target viewing angle may be a free viewing angle adjustment operation, that is, the director may adjust the target viewing angle to any reachable viewing angle according to a preset viewing angle adjustment range.
In step 410, the director equipment obtains third camera information when receiving the operation of resuming the video playing, where the third camera information is the camera information of the virtual camera when receiving the operation of resuming the video playing.
When the director wants to resume pushing the picture of the real-scene field, an operation of resuming the first video may be performed on the director equipment.
The third camera information may differ from the first camera information, because the view angle for observing the three-dimensional field model may have been changed while the model was being viewed.
In step 411, the director equipment generates a second switching animation according to the third camera information and the second camera information.
The process of the director equipment generating the second switching animation according to the third camera information and the second camera information is similar to the process of the director equipment generating the first switching animation according to the first camera information and the second camera information, and details are not repeated here.
Step 412, the director equipment pushes the second switching animation to the terminal; and the terminal receives the second switching animation pushed by the director equipment.
In step 413, the terminal plays the second switching animation.
Step 414, after pushing the second switching animation, the director device continues to push the video frame of the first video to the terminal; and the terminal receives the video picture of the first video continuously pushed by the director equipment.
In step 415, the terminal continues to play the video frame of the first video.
Optionally, before pushing the video frame of the first video to the terminal, the director equipment may further obtain a first model, where the first model is generated in a laser scanning manner and is a three-dimensional model of the real-scene site; acquiring a second model and the position of the second model in the real scene field, wherein the second model is generated in a photo synthesis mode and is a three-dimensional model of a field object; the field object is a fixed object in the live-action field; and combining the first model and the second model according to the position of the second model in the real scene field to obtain the three-dimensional field model.
Optionally, after the director equipment generates the three-dimensional field model, the director equipment may also push the three-dimensional field model to the terminal.
In this embodiment of the application, a display interface of the terminal may be divided into at least two display areas: one display area is used to display the picture or animation pushed by the director equipment, another display area may display a picture of the three-dimensional field model, and the user may freely change the viewing angle of the three-dimensional field model in that display area by touch.
The three-dimensional field model can be established by laser scanning or by photo synthesis. Laser scanning is fast and accurate, but the quality of the finished model is relatively poor; photo synthesis is time-consuming and requires manual secondary fine adjustment for precision, but the rendered model quality is good. Therefore, the scheme shown in the embodiment of the present application can combine the laser scanning mode and the photo synthesis mode.
For example, taking a rock climbing track as the live-action field, the rock mass is divided into two parts: the rock wall and the rock points. In this scheme, laser scanning is used to obtain a basic model of the rock wall (without rock points mounted) and the positional relation between the rock wall and the rock points mounted on it; photo synthesis is used to obtain the material of the rock wall and the models of the rock points; finally, the manufactured rock point models are placed on the rock wall according to that positional relation and spliced to form a model of the whole rock mass.
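A minimal sketch of this merging step, assuming each rock point carries a wall-relative position recorded during scanning (Python is used only for illustration, and all type and variable names are assumptions):

    from dataclasses import dataclass, field

    @dataclass
    class Mesh:
        name: str
        vertices: list  # [(x, y, z), ...] in local coordinates

    @dataclass
    class SceneModel:
        base: Mesh                        # laser-scanned rock wall without rock points
        parts: list = field(default_factory=list)

    def place_part(scene: SceneModel, part: Mesh, position: tuple) -> None:
        """Translate a photo-synthesized part to its scanned wall position and attach it."""
        px, py, pz = position
        moved = Mesh(part.name, [(x + px, y + py, z + pz) for x, y, z in part.vertices])
        scene.parts.append(moved)

    # The wall mesh and the hold positions would come from the laser scan;
    # the hold meshes and materials would come from photo synthesis.
    wall = SceneModel(base=Mesh("rock_wall", []))
    place_part(wall, Mesh("hold_01", [(0.0, 0.0, 0.0)]), (1.2, 3.4, 0.1))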
In summary, in the embodiment of the application, in the process of pushing to the terminal the first video obtained by shooting the real-world field with the physical camera, the director equipment, upon receiving an operation of switching to a target view angle, obtains the first camera information of the virtual camera at the target view angle, generates a segment of switching animation according to the first camera information and the second camera information recorded when the physical camera collected the video, and, after pushing the switching animation to the terminal, pushes the model picture obtained by the virtual camera shooting the three-dimensional field model of the real-world field according to the first camera information. Through the scheme shown in this application, during a live broadcast the director can switch the broadcast picture to the three-dimensional field model of the real-scene field at the target view angle through a switching operation; that is, while a video generated from pictures shot by the physical camera is being pushed, the pushed video picture can be smoothly adjusted to a video of the three-dimensional field model shot by the virtual camera. The director can thus freely adjust the view angle of the field to be pushed, and because adjusting the view angle of the virtual camera is more convenient and faster than selecting and adjusting the view angle of a physical camera, the director can complete a view-angle switch in a very short time, thereby improving the efficiency of switching the view angle of the live picture during the live broadcast.
In addition, in the embodiment of the application, before the model picture is pushed, a segment of switching animation is generated and pushed to the terminal for playing, which presents an approximately real camera-movement effect, ensures picture continuity during the switching process, and improves the picture display effect when the view angle of the entity camera is switched to the view angle of the virtual camera.
Taking the application of this scheme to the live broadcast of a rock climbing competition as an example: rock climbing is a competition item rich in ornamental value, but because the rock wall is usually composed of rock faces with large inclination angles to the horizontal plane, and the wall is long and high, cameras cannot capture the details of some rock points well, so users watching the live broadcast on a television screen get an incomplete viewing experience.
In current live broadcasts of rock climbing competitions, the shape of the rock wall and related data such as height and inclination angle are usually presented through clips produced in advance; such pictures are fixed when the competition begins and cannot be changed on the fly as the competition progresses. During a match, the audience can only observe the track from the view angles the cameras can reach; because physical constraints limit the view angles a camera can shoot, the audience can only passively accept the view angle pushed by the director, and if they miss the current view, or the director does not switch to the view they want to see, they cannot obtain the related information.
The above solution of the present application can solve the above two problems by the following three points:
1) By constructing a 3D rock mass model and adding virtual camera control, the director can observe every detail of the rock wall from a free view angle, unconstrained by the physical limitations of a real camera;
2) By embedding the model in the player of the user's mobile terminal, the user can observe the rock wall from a free view angle on the mobile terminal at any time;
3) AR animations, such as showing distances between rock points, the inclination angles of rock walls, the scores of leading players, the competition route, and the difficult sections of the route, can further enrich the program pictures; meanwhile, switching from the real camera to the free view angle of the virtual camera through a fly-through animation makes the television picture more logical and continuous.
The playing of the AR animations can be controlled by dedicated playback-control software or through a touch screen.
For the animation played during the transition from the real camera to the virtual camera (i.e., the switching animation; in this application the process of adjusting from the view angle of the real camera to the view angle of the virtual camera may be referred to as cam chopper), a rendering engine may acquire the parameters (i.e., camera information) of the virtual camera and of the real camera and generate a segment of animation that jumps from the view angle of the real camera to that of the virtual camera, ensuring the continuity and logic of the picture and helping the user understand its content.
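A hedged sketch of how such a jump animation could be derived from the two parameter sets, linearly interpolating the shooting position and spherically interpolating the view direction (the frame count and the smoothstep easing are assumptions, not details of the embodiment):

    import math

    def lerp(a, b, t):
        return tuple(x + (y - x) * t for x, y in zip(a, b))

    def slerp(d0, d1, t):
        """Spherical interpolation between two unit view-direction vectors."""
        dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(d0, d1))))
        theta = math.acos(dot)
        if theta < 1e-6:                   # directions nearly identical
            return d1
        w0 = math.sin((1 - t) * theta) / math.sin(theta)
        w1 = math.sin(t * theta) / math.sin(theta)
        return tuple(w0 * x + w1 * y for x, y in zip(d0, d1))

    def switch_animation(real_cam, virtual_cam, frames=60):
        """Yield intermediate (position, direction) camera states, real -> virtual."""
        for i in range(frames):
            t = i / (frames - 1)
            t = t * t * (3 - 2 * t)        # smoothstep easing
            yield (lerp(real_cam["pos"], virtual_cam["pos"], t),
                   slerp(real_cam["dir"], virtual_cam["dir"], t))

    real_cam = {"pos": (0.0, 1.6, 12.0), "dir": (0.0, 0.0, -1.0)}       # example values
    virtual_cam = {"pos": (6.0, 9.0, 5.0), "dir": (-0.5, -0.5, -0.707)}
    states = list(switch_animation(real_cam, virtual_cam))

Rendering the three-dimensional field model from each intermediate state then yields the frames of the switching animation.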
In addition, a model control assembly can be embedded into the player of the user terminal, so that the user can view the rock mass model from a free view angle in a manner similar to controlling the virtual camera through a touch screen. Meanwhile, function buttons can be added to the playing interface; triggering them pushes competition data to the user terminal, so that the user can obtain the information display of their own preference interactively, instead of depending entirely on the packaged data pushed by the director, thereby improving the user experience.
Taking a rock climbing competition live broadcast as an example, fig. 5 is a flow chart of a live broadcast scheme provided according to an exemplary embodiment. As shown in fig. 5, the whole live broadcasting scheme includes the following steps:
1) Understanding the requirements.
After receiving a live broadcast task, the live broadcast personnel first determine its requirements, such as how many physical camera positions need to be set up and whether a three-dimensional virtual model is needed.
2) Surveying the site.
After determining that a three-dimensional virtual model is needed, the live broadcast personnel survey the live-action site and determine the required resources, such as how many people are needed for model creation and equipment setup.
After the site survey, the live broadcast personnel can carry out the following steps 3) to 5) in parallel.
3) Determining the acquisition time of the model materials according to the work progress of the routesetters; scanning the rock wall at the determined time; shooting rock point pictures according to the competition route set by the routesetters; and producing the model map from the rock wall scanning results and the rock point pictures.
In a rock climbing competition, the routesetters design the competition route before the competition and install the rock points according to it. The times for laser scanning of the rock wall and for shooting the rock point pictures are determined according to the routesetters' work progress: in general, the laser scanning can be done before the routesetters finalize the competition route, and the rock point pictures can be shot after the route is finalized. The live broadcast personnel complete material acquisition and mapping through site scanning and picture shooting.
4) Designing the virtual background, confirming the virtual background, and making the function templates.
The live broadcast personnel also need to design a virtual background and various function templates, such as a view switching function template, a virtual enhancement function template, and an image splicing function template.
5) Determining a shooting scheme, preparing hardware equipment, building field equipment and carrying out field tracking and debugging.
While making the model map, the background, and the function templates, the live broadcast personnel also prepare and set up the physical equipment: for example, formulating the equipment setup scheme, preparing cameras and on-site director equipment, performing the on-site setup, debugging the physical cameras, and tracking the position and view angle of each physical camera.
6) Constructing a virtual scene according to the results of steps 3), 4) and 5), and synthesizing the virtual model in the virtual scene.
In this scheme, the live broadcast personnel construct a virtual scene containing the three-dimensional field model from the model map, the virtual background, and the tracking results of the physical cameras, ensuring synchronization between the virtual scene and the live-action field.
7) After the virtual scene is successfully constructed, confirming the overall effect of the virtual scene, and adjusting the virtual scene if the competition route has changed.
8) During the rock climbing competition, carrying out the live broadcast based on the virtual scene.
The live broadcast process may refer to the steps introduced in the embodiments shown in fig. 2 or fig. 4, and details are not described here.
In the above implementation flow, model generation occupies most of the implementation time; its time consumption roughly determines that of the whole scheme.
For an example from a certain rock climbing competition, refer to fig. 6 and fig. 7: fig. 6 is a schematic diagram of a rock wall model according to an embodiment of the present application, and fig. 7 is a schematic diagram of a rock point model according to an embodiment of the present application. The data scale and generation time of the above models are shown in Tables 1 and 2 below:
TABLE 1

[Table provided as an image in the original publication.]

TABLE 2

[Table provided as an image in the original publication.]
Table 1 above shows the scale of the rock wall and rock point data that need to be scanned/collected, and Table 2 above shows the time required to generate each rock wall or rock point model.
The total time in Table 2 is calculated by linear accumulation; all of the above model generation processes can also be executed in parallel, in which case, ideally, the actual total time is the maximum over all tasks, i.e., 3188 minutes.
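As a small worked illustration of the two schedules (the per-task minutes below are placeholders, since the concrete figures of Table 2 are only available as an image; 3188 is the longest single task mentioned above):

    # Hypothetical per-model generation times in minutes; the real values are in Table 2.
    task_minutes = [3188, 2400, 1750, 960, 520]

    linear_total = sum(task_minutes)    # tasks executed one after another
    parallel_total = max(task_minutes)  # ideal case: all tasks run in parallel

    print(linear_total, parallel_total)  # parallel_total is 3188 minutes here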
Please refer to fig. 8, which illustrates an architecture diagram of director equipment according to an embodiment of the present application. As shown in fig. 8, the director equipment 80 built on site includes several physical cameras 81 (and optionally a microphone/microphone assembly, etc.), a rendering engine 82, and a director interface 83; a memory of the director equipment 80 stores the three-dimensional field model of the climbing track. The director interface 83 may include a plurality of interfaces, such as a live-action image control interface 83a, a model image control interface 83b, a CG image control interface 83c, an AR image control interface 83d, and so on.
The live-action image control interface 83a is used for controlling the physical cameras 81, such as controlling the position, angle, aperture size, and focal length of each physical camera. A director operating the live-action image control interface 83a can control each physical camera through it, and the rendering engine 82 may output the live-action picture captured by each physical camera according to the control operations in the live-action image control interface 83a.
The model image control interface 83b is used to determine the target view angle after switching and to control the virtual view angle to change freely, providing a picture of the three-dimensional field model observed from a free view angle. A director operating the model image control interface 83b can freely adjust the viewing angle from which the three-dimensional field model is observed (including determining the target view angle when switching from the physical camera view angle to the virtual camera view angle, and freely adjusting the target view angle after switching). According to the control operations in the model image control interface 83b, the rendering engine 82 may output the first switching animation for jumping from the view angle of the physical camera to that of the virtual camera, the second switching animation for jumping back from the view angle of the virtual camera to that of the physical camera, and the virtual picture (the video picture of the second video) obtained by the virtual camera shooting the three-dimensional field model.
Please refer to fig. 9, which illustrates a schematic diagram of a model image control interface according to an embodiment of the present application. As shown in fig. 9, the model image control interface 90 includes a control area 91 and a display area 92, wherein the control area 91 is used for adjusting the camera information of the virtual camera, and the display area 92 is used for displaying a model picture obtained by shooting the three-dimensional field model according to the adjusted camera information.
The CG image control interface 83c is used to control the splicing of the live-action picture captured by the physical camera with the model picture of the three-dimensional field model, for example, determining or adjusting the specified camera information corresponding to the spliced model picture, and setting the competition progress of the current competitor so that it can be displayed visually on the spliced model picture. The rendering engine 82 may output a CG picture, in which the live-action picture is spliced with the three-dimensional field model picture, according to the control operations in the CG image control interface 83c.
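As a minimal sketch, one plausible reading of this splicing is a side-by-side composition of the two pictures (the layout and frame sizes are assumptions; the embodiment does not fix them):

    import numpy as np

    def stitch(live_frame: np.ndarray, model_frame: np.ndarray) -> np.ndarray:
        """Place the live-action picture and the model picture side by side."""
        assert live_frame.shape[0] == model_frame.shape[0], "frame heights must match"
        return np.hstack([live_frame, model_frame])

    live = np.zeros((720, 640, 3), dtype=np.uint8)   # placeholder live-action picture
    model = np.zeros((720, 640, 3), dtype=np.uint8)  # placeholder model picture
    cg_picture = stitch(live, model)                 # 720 x 1280 CG picture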
The AR image control interface 83d is used to control the parameter model picture superimposed on the live-action picture captured by the physical camera, for example, superimposing the inclination-angle model picture of a certain rock wall on the actual picture, so that the live-action picture can visually display the data of the rock climbing track. The rendering engine 82 may output an AR picture, on which a picture of track parameters is superimposed, according to the control operations in the AR image control interface 83d.
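A minimal sketch of superimposing a rendered parameter-model picture on the live-action picture using straight alpha blending (the frame shapes and the RGBA convention are assumptions):

    import numpy as np

    def overlay_ar(live_frame: np.ndarray, ar_rgba: np.ndarray) -> np.ndarray:
        """Alpha-blend an RGBA parameter-model rendering onto an RGB live-action frame."""
        alpha = ar_rgba[..., 3:4].astype(np.float32) / 255.0
        blended = (ar_rgba[..., :3].astype(np.float32) * alpha
                   + live_frame.astype(np.float32) * (1.0 - alpha))
        return blended.astype(np.uint8)

    live = np.zeros((720, 1280, 3), dtype=np.uint8)  # placeholder live-action frame
    ar = np.zeros((720, 1280, 4), dtype=np.uint8)    # placeholder AR rendering
    composite = overlay_ar(live, ar)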
Based on the director equipment architecture shown in fig. 8, the rendering engine 82 simultaneously outputs a picture corresponding to each of the live-action image control interface 83a, the model image control interface 83b, the CG image control interface 83c, and the AR image control interface 83d according to their control operations; that is, each of the four control interfaces corresponds to one video picture, and the director can select one of them to push to users.
For example, the director first selects to push the video picture (i.e., the video picture of the first video) output by the rendering engine 82 according to the control operations of any one of the live-action image control interface 83a, the CG image control interface 83c, or the AR image control interface 83d. When switching to the view angle of the virtual camera is needed, the director selects to push the video picture output by the rendering engine 82 according to the control operations in the model image control interface 83b, and at the same time performs the operation of switching to the target view angle through the model image control interface 83b, so that the rendering engine 82 generates, according to that operation, the first switching animation corresponding to the switch from the view angle of the physical camera to the view angle of the virtual camera. Please refer to fig. 10, which shows a switching animation diagram according to an embodiment of the present application; the switching animation shows the process of switching from the picture shot by the physical camera to the picture shot by the virtual camera. After the first switching animation, the rendering engine 82 renders the picture obtained by the virtual camera shooting the three-dimensional field model (i.e., the video picture of the second video), during which the director can freely adjust, through the model image control interface 83b, the camera information with which the virtual camera shoots the three-dimensional field model. When switching back from the view angle of the virtual camera is needed, the director again selects to push the video picture output by the rendering engine 82 according to the control operations of any one of the live-action image control interface 83a, the CG image control interface 83c, or the AR image control interface 83d; the rendering engine 82 then generates the second switching animation corresponding to the switch back from the view angle of the virtual camera to the view angle of the physical camera, and after the second switching animation is pushed, the video picture corresponding to the interface selected by the director continues to be pushed.
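The selection logic itself can be thought of as routing one of four rendered streams to the push output; a minimal sketch (interface names and stream labels are illustrative assumptions):

    # Hypothetical mapping from each control interface to its rendered output stream.
    outputs = {
        "live_action": "live_stream",  # 83a
        "model": "model_stream",       # 83b
        "cg": "cg_stream",             # 83c
        "ar": "ar_stream",             # 83d
    }

    def select_push(interface: str) -> str:
        """Return the video picture the director chose to push to users."""
        return outputs[interface]

    print(select_push("model"))  # -> model_stream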
Fig. 11 is a block diagram illustrating a configuration of a video picture push apparatus according to an exemplary embodiment. The video picture pushing apparatus may be used in a system as shown in fig. 1 to perform all or part of the steps performed by the director equipment in the method provided by the embodiment shown in fig. 2 or fig. 4. The video picture pushing apparatus may include:
the first pushing module 1101 is configured to push a video picture of a first video to a terminal, where the first video is a video generated according to a picture obtained by shooting a real-scene field by an entity camera;
a first obtaining module 1102, configured to obtain first camera information when an operation of switching to a target view angle is received, where the first camera information is camera information of a virtual camera at the target view angle, and the camera information includes a shooting position and a shooting direction;
a second obtaining module 1103, configured to obtain second camera information, where the second camera information is camera information obtained when the entity camera collects the picture corresponding to the first video;
a first animation generation module 1104, configured to generate a first switching animation according to the first camera information and the second camera information;
a second pushing module 1105, configured to push the first switching animation to the terminal;
a third pushing module 1106, configured to push a video picture of a second video to the terminal after the first switching animation is pushed, where the second video is a video obtained by the virtual camera shooting the three-dimensional field model of the real-scene field starting from the first camera information.
Optionally, the first animation generation module 1104 is specifically configured to: acquire N pieces of intermediate camera information, where the N pieces of intermediate camera information are camera information respectively corresponding to N moments in the process of changing from the second camera information to the first camera information; acquire N pictures obtained by the virtual camera shooting the three-dimensional field model according to the N pieces of intermediate camera information respectively; and generate the first switching animation according to the N pictures, where N is a positive integer.
Optionally, when the first switching animation is generated according to the N pictures, the first animation generation module 1104 is specifically configured to sort the N pictures in order from first to last according to the N moments; and combining the N pictures into the first switching animation according to the sequencing result of the N pictures.
Optionally, when the N pictures are combined to generate the first switching animation according to the sorting result of the N pictures, the first animation generation module 1104 is specifically configured to sequentially append the N pictures after a target video picture according to the sorting result of the N pictures and then combine them to generate the first switching animation; the target video picture is the video picture being pushed to the terminal when the operation of switching to the target view angle is received.
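A hedged sketch of this composition step (timestamps and frames are simplified to plain Python values):

    def compose_switch_animation(target_frame, timed_frames):
        """Sort the N captured pictures by capture moment, prepend the target video
        picture, and return the frame sequence of the first switching animation."""
        ordered = [frame for _, frame in sorted(timed_frames, key=lambda tf: tf[0])]
        return [target_frame] + ordered

    # (moment, picture) pairs captured with the N pieces of intermediate camera information.
    captured = [(0.3, "pic_c"), (0.1, "pic_a"), (0.2, "pic_b")]
    animation = compose_switch_animation("target_pic", captured)
    # -> ["target_pic", "pic_a", "pic_b", "pic_c"]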
Optionally, the apparatus further comprises:
a position acquisition module, configured to acquire position information of each moving object in the real-scene site at the time when a target video picture is acquired before the first animation generation module acquires N pictures obtained by the virtual camera shooting the three-dimensional site model according to the N pieces of intermediate camera information, respectively; the target video picture is a video picture which is pushed to the terminal when the operation of switching to the target visual angle is received;
the model adding module is used for adding the object model of each activity object in the three-dimensional field model according to the position information of each activity object to obtain the three-dimensional field model after the object model is added;
when acquiring N pictures obtained by the virtual camera shooting the three-dimensional field model according to the N pieces of intermediate camera information, the first animation generation module is specifically configured to perform image acquisition on the three-dimensional field model after the object model is added according to the N pieces of intermediate camera information, so as to acquire the N pictures.
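A minimal sketch of placing object models at the tracked positions before the N pictures are captured (the coordinate mapping is reduced to identity, and all names are illustrative assumptions):

    from dataclasses import dataclass, field

    @dataclass
    class FieldModel:
        objects: dict = field(default_factory=dict)  # object name -> model-space position

    def add_object_models(model: FieldModel, tracked_positions: dict) -> FieldModel:
        """Place an object model for each moving object at its tracked field position.

        Assumes live-action coordinates map 1:1 onto model coordinates; a real
        system would apply the calibrated field-to-model transform here.
        """
        for name, position in tracked_positions.items():
            model.objects[name] = position
        return model

    field_model = FieldModel()
    # Positions of each moving object at the moment the target video picture was captured.
    add_object_models(field_model, {"climber_1": (1.2, 5.8, 0.3)})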
Optionally, the apparatus further comprises:
a third obtaining module, configured to obtain new first camera information when an operation of adjusting a target view angle is received, where the new first camera information is camera information of the virtual camera at the adjusted target view angle;
and the fourth pushing module is used for pushing a new model picture to the terminal, wherein the new model picture is a picture obtained by shooting the three-dimensional field model by the virtual camera according to the new first camera information.
Optionally, the apparatus further comprises:
and the fourth obtaining module is used for shooting the three-dimensional field model after the object model is added, starting from the first camera information, before the third pushing module pushes the video picture of the second video to the terminal, so as to obtain the video picture of the second video.
Optionally, the apparatus further comprises:
a fifth obtaining module, configured to obtain third camera information when an operation of resuming video playing is received, where the third camera information is camera information of the virtual camera when the operation of resuming video playing is received;
the second animation generation module is used for generating a second switching animation according to the third camera information and the second camera information;
a fifth pushing module, configured to push the second switching animation to the terminal;
and the sixth pushing module is used for continuously pushing the video picture of the first video to the terminal after the second switching animation is pushed.
Optionally, the apparatus further comprises:
the splicing image obtaining module is used for obtaining a spliced model picture before the first pushing module pushes the video picture of the first video to the terminal, wherein the spliced model picture is a picture obtained by the virtual camera shooting the three-dimensional field model of the real-scene field according to the specified camera information;
and the splicing module is used for splicing the picture obtained by shooting the real scene field by the entity camera with the spliced model picture to obtain the video picture of the first video.
Optionally, the apparatus further comprises:
the parameter model acquisition module is used for acquiring a field parameter model before the first pushing module pushes the video picture of the first video to the terminal, and the field parameter model is used for indicating the field parameter of the real scene field;
the model adding module is used for adding the site parameter model on the three-dimensional site model;
a parameter picture acquiring module, configured to acquire a parameter model picture, where the parameter model picture is a picture obtained by the virtual camera shooting the site parameter model according to the second camera information;
and the superposition module is used for superposing the parameter model picture on a picture obtained by shooting the real scene field by the entity camera to obtain a video picture of the first video.
Optionally, the apparatus further comprises:
a sixth obtaining module, configured to obtain a first model before the first pushing module pushes a video picture of a first video to a terminal, where the first model is generated in a laser scanning manner and is a three-dimensional model of the real-scene site;
a seventh obtaining module, configured to obtain a second model and a position of the second model in the real-scene site, where the second model is generated in a photo synthesis manner and is a three-dimensional model of a site object; the field object is a fixed object in the live-action field;
and the model merging module is used for merging the first model and the second model according to the position of the second model in the real scene field to obtain the three-dimensional field model.
Optionally, the apparatus further comprises:
and the seventh pushing module is used for pushing the three-dimensional field model to the terminal.
In summary, in the embodiment of the application, in the process of pushing to the terminal the first video obtained by shooting the real-world field with the entity camera, the director equipment, upon receiving an operation of switching to a target view angle, obtains the first camera information of the virtual camera at the target view angle, generates a segment of switching animation according to the first camera information and the second camera information recorded when the entity camera collected the video, and, after pushing the switching animation to the terminal, pushes the model picture obtained by the virtual camera shooting the three-dimensional field model of the real-world field according to the first camera information. Through the scheme shown in this application, during a live broadcast the director can switch the broadcast picture to the three-dimensional field model of the real-scene field at the target view angle through a switching operation; that is, while a video generated from pictures shot by the entity camera is being pushed, the pushed video picture can be smoothly adjusted to a video of the three-dimensional field model shot by the virtual camera. The director can thus freely adjust the view angle of the field to be pushed, and because adjusting the view angle of the virtual camera is more convenient and faster than selecting and adjusting the view angle of an entity camera, the director can complete a view-angle switch in a very short time, thereby improving the efficiency of switching the view angle of the live picture during the live broadcast.
In addition, in the embodiment of the application, before the model picture is pushed, a segment of switching animation is generated and pushed to the terminal for playing, which presents an approximately real camera-movement effect, ensures picture continuity during the switching process, and improves the picture display effect when the view angle of the entity camera is switched to the view angle of the virtual camera.
Fig. 12 is a block diagram illustrating a structure of a computer device 1200 according to an exemplary embodiment of the present application. The computer device 1200 includes a central processing unit (CPU) 1201, a system memory 1204 including a random access memory (RAM) 1202 and a read-only memory (ROM) 1203, and a system bus 1205 connecting the system memory 1204 and the central processing unit 1201. The computer device 1200 also includes a basic input/output system (I/O system) 1206 for facilitating information transfer between devices within the computer, and a mass storage device 1207 for storing an operating system 1213, application programs 1214, and other program modules 1215. The computer device 1200 further includes a video capture device that includes a physical camera and may further include a microphone/microphone array.
The basic input/output system 1206 includes a display 1208 for displaying information and an input device 1209, such as a mouse, keyboard, etc., for a user to input information. Wherein the display 1208 and input device 1209 are connected to the central processing unit 1201 through an input-output controller 1210 coupled to the system bus 1205. The basic input/output system 1206 may also include an input/output controller 1210 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, input-output controller 1210 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 1207 is connected to the central processing unit 1201 through a mass storage controller (not shown) connected to the system bus 1205. The mass storage device 1207 and its associated computer-readable media provide non-volatile storage for the computer device 1200. That is, the mass storage device 1207 may include a computer-readable medium (not shown) such as a hard disk or CD-ROM drive.
Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 1204 and mass storage device 1207 described above may be collectively referred to as memory.
The computer device 1200 may be connected to the internet or other network devices through a network interface unit 1211 connected to the system bus 1205.
The memory further includes one or more programs, the one or more programs are stored in the memory, and the central processing unit 1201 implements all or part of the steps performed by the director device in the method shown in any one of fig. 2 or fig. 4 by executing the one or more programs.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as a memory including computer programs (instructions), which may be executed by a processor of a computer device to perform the methods performed by the director equipment in the various embodiments of the present application. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (12)

1. A video picture pushing method, the method comprising:
the method comprises the steps that a video picture of a first video is pushed to a terminal, wherein the first video is a video generated according to a picture obtained by shooting a real scene field by an entity camera;
when receiving an operation of switching to a target view angle, acquiring first camera information, wherein the first camera information is the camera information of a virtual camera at the target view angle, and the camera information comprises a shooting position and a shooting direction;
acquiring second camera information, wherein the second camera information is camera information when the entity camera acquires a picture corresponding to the first video;
generating a first switching animation according to the first camera information and the second camera information;
pushing the first switching animation to the terminal;
after the first switching animation is pushed, pushing a video picture of a second video to the terminal, wherein the second video is a video obtained by the virtual camera shooting a three-dimensional field model of the real scene field starting from the first camera information;
wherein the generating a first switching animation according to the first camera information and the second camera information includes:
acquiring N pieces of intermediate camera information, wherein the N pieces of intermediate camera information are respectively corresponding to N moments in the process of changing from the second camera information to the first camera information, and N is a positive integer;
acquiring position information of each moving object in the real scene field at the time when a target video picture is acquired; the target video picture is a video picture which is pushed to the terminal when the operation of switching to the target visual angle is received;
adding the object model of each activity object in a three-dimensional field model according to the position information of each activity object to obtain the three-dimensional field model after the object model is added;
respectively acquiring images of the three-dimensional field model added with the object model according to the N pieces of intermediate camera information to obtain N pictures;
and generating the first switching animation according to the N pictures.
2. The method of claim 1, wherein the generating the first switching animation from the N pictures comprises:
sequencing the N pictures according to the sequence of the N moments from first to last;
and combining the N pictures to generate the first switching animation according to the sequencing result of the N pictures.
3. The method of claim 2, wherein said combining the N pictures to generate the first switching animation according to the results of the ordering of the N pictures comprises:
according to the sequencing result of the N pictures, after the N pictures are sequentially added to a target video picture, the N pictures are combined to generate the first switching animation; the target video picture is a video picture which is pushed to the terminal when the operation of switching to the target visual angle is received.
4. The method of claim 1, further comprising, prior to pushing the video picture of the second video to the terminal:
shooting the three-dimensional field model after the object model is added, starting from the first camera information, to obtain the video picture of the second video.
5. The method of any of claims 1 to 3, further comprising:
when receiving an operation of resuming video playing, acquiring third camera information, wherein the third camera information is the camera information of the virtual camera when receiving the operation of resuming video playing;
generating a second switching animation according to the third camera information and the second camera information;
pushing the second switching animation to the terminal;
and after the second switching animation is pushed, continuously pushing the video picture of the first video to the terminal.
6. The method according to any one of claims 1 to 3, wherein before pushing the video picture of the first video to the terminal, the method further comprises:
acquiring a spliced model picture, wherein the spliced model picture is obtained by shooting a three-dimensional field model of the real scene field according to the specified camera information by the virtual camera;
and carrying out picture splicing on the picture obtained by shooting the real scene field by the entity camera and the spliced model picture to obtain the video picture of the first video.
7. The method according to any one of claims 1 to 3, wherein before pushing the video picture of the first video to the terminal, the method further comprises:
acquiring a site parameter model, wherein the site parameter model is used for indicating site parameters of the live-action site;
adding the site parameter model to the three-dimensional site model;
acquiring a parameter model picture, wherein the parameter model picture is obtained by shooting the field parameter model by the virtual camera according to the second camera information;
and overlaying the parameter model picture on a picture obtained by shooting the real scene field by the entity camera to obtain a video picture of the first video.
8. The method according to any one of claims 1 to 3, further comprising, before pushing the video picture of the first video to the terminal:
acquiring a first model, wherein the first model is generated in a laser scanning mode and is a three-dimensional model of the real scene field;
acquiring a second model and the position of the second model in the real scene field, wherein the second model is generated in a photo synthesis mode and is a three-dimensional model of a field object; the field object is a fixed object in the live-action field;
and combining the first model and the second model according to the position of the second model in the real scene field to obtain the three-dimensional field model.
9. The method of claim 8, further comprising:
and pushing the three-dimensional field model to the terminal.
10. A video picture pushing apparatus, comprising:
the first pushing module is used for pushing a video picture of a first video to the terminal, wherein the first video is a video generated according to a picture obtained by shooting a real scene field by an entity camera;
the first acquisition module is used for acquiring first camera information when receiving an operation of switching to a target view angle, wherein the first camera information is the camera information of a virtual camera at the target view angle, and the camera information comprises a shooting position and a shooting direction;
a second obtaining module, configured to obtain second camera information, where the second camera information is camera information obtained when the entity camera collects the first video;
the first animation generation module is used for generating a first switching animation according to the first camera information and the second camera information;
the second pushing module is used for pushing the first switching animation to the terminal;
a third pushing module, configured to push a video picture of a second video to the terminal after the first switching animation is pushed, where the second video is a video obtained by the virtual camera shooting the three-dimensional field model of the real-scene field starting from the first camera information;
the first animation generation module is further configured to acquire N pieces of intermediate camera information, where the N pieces of intermediate camera information are camera information respectively corresponding to N moments in the process of changing from the second camera information to the first camera information, and N is a positive integer; acquire position information of each moving object in the real scene field at the time when a target video picture is acquired, the target video picture being a video picture which is pushed to the terminal when the operation of switching to the target visual angle is received; add the object model of each moving object in the three-dimensional field model according to the position information of each moving object to obtain the three-dimensional field model after the object model is added; perform image acquisition on the three-dimensional field model after the object model is added according to the N pieces of intermediate camera information respectively to obtain N pictures; and generate the first switching animation according to the N pictures.
11. A computer device comprising a processor and a memory, wherein the memory has stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the video picture pushing method according to any one of claims 1 to 9.
12. A computer-readable storage medium, having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the video picture pushing method according to any one of claims 1 to 9.
CN201910176636.4A 2019-03-08 2019-03-08 Video picture pushing method and device, computer equipment and storage medium Active CN109889914B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910176636.4A CN109889914B (en) 2019-03-08 2019-03-08 Video picture pushing method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910176636.4A CN109889914B (en) 2019-03-08 2019-03-08 Video picture pushing method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109889914A CN109889914A (en) 2019-06-14
CN109889914B true CN109889914B (en) 2021-04-02

Family

ID=66931312

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910176636.4A Active CN109889914B (en) 2019-03-08 2019-03-08 Video picture pushing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109889914B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110225224B (en) * 2019-07-05 2023-05-16 北京星律动科技有限公司 Virtual image guiding and broadcasting method, device and system
CN110290360B (en) * 2019-08-01 2021-02-23 浙江开奇科技有限公司 Image stitching method and terminal equipment for panoramic video image
CN111586317A (en) * 2020-05-27 2020-08-25 上海姝嫣文化传播中心 Picture scene switching method and device, computer equipment and storage medium
CN113784148A (en) * 2020-06-10 2021-12-10 阿里巴巴集团控股有限公司 Data processing method, system, related device and storage medium
CN113301351B (en) * 2020-07-03 2023-02-24 阿里巴巴集团控股有限公司 Video playing method and device, electronic equipment and computer storage medium
CN111711733B (en) * 2020-08-18 2020-11-13 北京理工大学 Live broadcast scheme simulation design verification method
CN113055547A (en) * 2020-08-26 2021-06-29 视伴科技(北京)有限公司 Method and device for previewing event activities
CN113472999B (en) * 2020-09-11 2023-04-18 青岛海信电子产业控股股份有限公司 Intelligent device and control method thereof
CN112543342B (en) * 2020-11-26 2023-03-14 腾讯科技(深圳)有限公司 Virtual video live broadcast processing method and device, storage medium and electronic equipment
CN112770018A (en) * 2020-12-07 2021-05-07 深圳市大富网络技术有限公司 Three-dimensional display method and device for 3D animation and computer readable storage medium
CN112770017A (en) * 2020-12-07 2021-05-07 深圳市大富网络技术有限公司 3D animation playing method and device and computer readable storage medium
CN113157178B (en) * 2021-02-26 2022-03-15 北京五八信息技术有限公司 Information processing method and device
CN113114956A (en) * 2021-03-18 2021-07-13 深圳市博实结科技有限公司 Method and device for video information superposition
CN113041613B (en) * 2021-04-26 2022-08-09 腾讯科技(深圳)有限公司 Method, device, terminal and storage medium for reviewing game
CN113573079B (en) * 2021-09-23 2021-12-24 北京全心数字技术有限公司 Method for realizing free visual angle live broadcast mode
CN115965519A (en) * 2021-10-08 2023-04-14 北京字跳网络技术有限公司 Model processing method, device, equipment and medium
CN113965766A (en) * 2021-10-27 2022-01-21 腾竞体育文化发展(上海)有限公司 Live event broadcasting system, method and device for electric competition, computer equipment and storage medium
CN114745598B (en) * 2022-04-12 2024-03-19 北京字跳网络技术有限公司 Video data display method and device, electronic equipment and storage medium
CN117278731A (en) * 2023-11-21 2023-12-22 启迪数字科技(深圳)有限公司 Multi-video and three-dimensional scene fusion method, device, equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2030661A2 (en) * 2007-08-30 2009-03-04 Kabushiki Kaisha Square Enix (also trading as Square Enix Co., Ltd.) Image generating apparatus, method of generating image, program, and recording medium
CN102857701A (en) * 2012-09-14 2013-01-02 北京东方艾迪普科技发展有限公司 Method for tracking virtual camera in three-dimensional scene
CN103871109A (en) * 2014-04-03 2014-06-18 深圳市德赛微电子技术有限公司 Virtual reality system free viewpoint switching method
CN104243961A (en) * 2013-06-18 2014-12-24 财团法人资讯工业策进会 Display system and method of multi-view image
CN107729673A (en) * 2017-10-30 2018-02-23 中建三局第建设工程有限责任公司 Road and bridge outdoor scene model analysis method, apparatus and its construction method based on BIM
CN107895399A (en) * 2017-10-26 2018-04-10 广州市雷军游乐设备有限公司 A kind of omnibearing visual angle switching method, device, terminal device and storage medium
CN108717733A (en) * 2018-06-07 2018-10-30 腾讯科技(深圳)有限公司 View angle switch method, equipment and the storage medium of virtual environment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4990852B2 (en) * 2008-07-31 2012-08-01 Kddi株式会社 Free viewpoint video generation system and recording medium for three-dimensional movement
US9298283B1 (en) * 2015-09-10 2016-03-29 Connectivity Labs Inc. Sedentary virtual reality method and systems

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2030661A2 (en) * 2007-08-30 2009-03-04 Kabushiki Kaisha Square Enix (also trading as Square Enix Co., Ltd.) Image generating apparatus, method of generating image, program, and recording medium
CN102857701A (en) * 2012-09-14 2013-01-02 北京东方艾迪普科技发展有限公司 Method for tracking virtual camera in three-dimensional scene
CN104243961A (en) * 2013-06-18 2014-12-24 财团法人资讯工业策进会 Display system and method of multi-view image
CN103871109A (en) * 2014-04-03 2014-06-18 深圳市德赛微电子技术有限公司 Virtual reality system free viewpoint switching method
CN107895399A (en) * 2017-10-26 2018-04-10 广州市雷军游乐设备有限公司 A kind of omnibearing visual angle switching method, device, terminal device and storage medium
CN107729673A (en) * 2017-10-30 2018-02-23 中建三局第建设工程有限责任公司 Road and bridge outdoor scene model analysis method, apparatus and its construction method based on BIM
CN108717733A (en) * 2018-06-07 2018-10-30 腾讯科技(深圳)有限公司 View angle switch method, equipment and the storage medium of virtual environment

Also Published As

Publication number Publication date
CN109889914A (en) 2019-06-14

Similar Documents

Publication Publication Date Title
CN109889914B (en) Video picture pushing method and device, computer equipment and storage medium
US9751015B2 (en) Augmented reality videogame broadcast programming
US8885023B2 (en) System and method for virtual camera control using motion control systems for augmented three dimensional reality
KR101713772B1 (en) Apparatus and method for pre-visualization image
JP6894962B2 (en) Image data capture method, device, and program for free-viewpoint video
CN106683195B (en) AR scene rendering method based on indoor positioning
JP5861499B2 (en) Movie presentation device
JP2024050721A (en) Information processing device, information processing method, and computer program
US8885022B2 (en) Virtual camera control using motion control systems for augmented reality
US11232626B2 (en) System, method and apparatus for media pre-visualization
Mase et al. Socially assisted multi-view video viewer
CN114390193B (en) Image processing method, device, electronic equipment and storage medium
JP7254464B2 (en) Information processing device, control method for information processing device, and program
JP7207913B2 (en) Information processing device, information processing method and program
CN114598819A (en) Video recording method and device and electronic equipment
CN110764247A (en) AR telescope
US20160344946A1 (en) Screen System
CN117333644A (en) Virtual reality display picture generation method, device, equipment and medium
Wang et al. Camswarm: Instantaneous smartphone camera arrays for collaborative photography
JP2021068330A (en) Information processor, information processing method, and program
CN116962748A (en) Live video image rendering method and device and live video system
JP2022131777A (en) Information processing device, system having the same, information processing method, and program
JP2023019088A (en) Image processing apparatus, image processing method, and program
CN108475410A (en) 3 D stereo watermark adding method, device and terminal
JP2002271692A (en) Image processing method, image processing unit, studio apparatus, studio system and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant