CN113596544A - Video generation method and device, electronic equipment and storage medium - Google Patents

Video generation method and device, electronic equipment and storage medium

Info

Publication number
CN113596544A
Authority
CN
China
Prior art keywords
target
audience
current
cameras
angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110844901.9A
Other languages
Chinese (zh)
Inventor
王博 (Wang Bo)
刘智美 (Liu Zhimei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202110844901.9A
Publication of CN113596544A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; operations thereof
    • H04N 21/41: Structure of client; structure of client peripherals
    • H04N 21/426: Internal components of the client; characteristics thereof
    • H04N 21/42653: Internal components of the client for processing graphics
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/4402: Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N 21/440263: Reformatting by altering the spatial resolution, e.g. for displaying on a connected PDA
    • H04N 21/442: Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N 21/44213: Monitoring of end-user related data
    • H04N 21/44218: Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his facial expression during a TV program
    • H04N 21/45: Management operations performed by the client for facilitating the reception of or interaction with the content, or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N 21/4508: Management of client data or end-user data
    • H04N 21/4524: Management of client data involving the geographical location of the client
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; control thereof
    • H04N 23/90: Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Computer Graphics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Studio Devices (AREA)

Abstract

The application provides a video generation method, a video generation apparatus, an electronic device, and a storage medium, and relates to the technical field of image processing. The method comprises: acquiring the current viewing angle and current position of a viewer in a target scene; finding at least one target camera oriented in the same direction as the current viewer viewing angle according to the current viewer viewing angle, the current viewer position, the field-of-view directions of the cameras, and the distances between the cameras and a target intersection point; calculating the current view-angle included angle between the target camera and the viewer according to the current viewer viewing angle, the current viewer position, the field-of-view direction of each target camera, the distance between each target camera and the target intersection point, and the position of the target intersection point; shifting the video frame captured by the target camera according to the current view-angle included angle between the target camera and the viewer; and generating a target video frame based on the shifted video frame. The method, apparatus, electronic device, and storage medium enable a video to be switched to any viewing angle.

Description

Video generation method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a video generation method and apparatus, an electronic device, and a storage medium.
Background
With the popularity of live streaming and short video, viewers have higher expectations for how video is watched. Panoramic video is a technology that arose from this demand, but it is mainly used to record an area: during playback the viewer can only watch from the angle at which it was shot and cannot interact with the scene. For example, in a panoramic video showing a study, the viewer cannot move closer to the desk lamp to read its brand, nor walk around to the other side of the desk to see what the lamp looks like from there. The viewing angle is constrained to the position of the capture device; the viewer can neither approach an object of interest nor view the same object from a different angle.
Therefore, providing an effective scheme for switching a video to any viewing angle has become an urgent problem to solve.
Disclosure of Invention
In a first aspect, an embodiment of the present application provides a video generation method, including:
acquiring the current viewing angle and current position of a viewer in a target scene;
finding, from a plurality of cameras, at least one target camera oriented in the same direction as the current viewer viewing angle according to the current viewer viewing angle, the current viewer position, the field-of-view directions of the plurality of cameras, and the distances between the plurality of cameras and a target intersection point;
calculating the current view-angle included angle between the at least one target camera and the viewer according to the current viewer viewing angle, the current viewer position, the field-of-view direction of each target camera, the distance between each target camera and the target intersection point, and the position of the target intersection point;
shifting the video frame captured by the at least one target camera according to the current view-angle included angle between the at least one target camera and the viewer;
and generating a target video frame based on the shifted video frame;
wherein the field-of-view directions of the plurality of cameras face the same region in the target scene, and the target intersection point is the common intersection point of the field-of-view center lines of the plurality of cameras.
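The claimed steps can be sketched end to end in Python. All names, the 2D geometry, the linear angle-to-pixel mapping, and the 60° field of view are illustrative assumptions, not part of the patent:

```python
def angle_diff(a_deg, b_deg):
    """Signed angular difference a - b, wrapped to (-180, 180]."""
    d = (a_deg - b_deg) % 360.0
    return d - 360.0 if d > 180.0 else d

def generate_target_frame(viewer_angle, cam_directions, frames, fov=60.0):
    """Select cameras aligned with the viewer's line of sight, shift each
    camera's frame by its view-angle offset, and concatenate the results.
    For simplicity each frames[i] is a single image row (list of pixels)
    from camera i; `fov` is a hypothetical field-of-view width in degrees."""
    width = len(frames[0])
    shifted = []
    for i, cam_dir in enumerate(cam_directions):
        offset = angle_diff(cam_dir, viewer_angle)
        if abs(offset) > fov / 2.0:          # not a target camera
            continue
        px = round(offset / fov * width)     # linear angle-to-pixel mapping
        row = frames[i]
        if px >= 0:
            shifted.append([0] * px + row[:width - px])
        else:
            shifted.append(row[-px:] + [0] * (-px))
    # One target camera: its shifted frame is the target frame;
    # several: "stitch" (here, naive concatenation).
    return shifted[0] if len(shifted) == 1 else [p for f in shifted for p in f]
```

With one aligned camera the shifted frame is returned directly; with several, they are concatenated, mirroring the single-camera and stitching branches of the claim.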
In one possible design, generating the target video frame based on the shifted video frame includes:
if the at least one target camera is a single camera, using the shifted video frame as the target video frame;
and if the at least one target camera is a plurality of cameras, stitching the plurality of shifted video frames to obtain the target video frame.
In one possible design, before acquiring the current viewing angle and current position of the viewer in the target scene, the method further includes:
adjusting the viewing angle and/or position of the viewer in the target scene in response to an adjustment operation by the user.
In one possible design, before adjusting the viewing angle and/or position of the viewer in the target scene in response to the user's adjustment operation, the method further includes:
initializing the viewing angle and position of the viewer in the target scene.
In one possible design, the method further includes:
calculating the distance between the at least one target camera and the current viewer position;
and before or after shifting the video frame captured by the at least one target camera according to the current view-angle included angle between the at least one target camera and the viewer, the method further includes:
scaling the video frame captured by the at least one target camera according to the distance between the at least one target camera and the current viewer position;
and generating the target video frame based on the shifted video frame includes:
obtaining the target video frame based on the shifted and scaled video frame.
In one possible design, shifting the video frame captured by the at least one target camera according to the current view-angle included angle between the at least one target camera and the viewer includes:
calculating the offset angle of the video frame corresponding to the at least one target camera according to the current view-angle included angle between the at least one target camera and the viewer;
and shifting the video frame captured by the at least one target camera according to the offset angle of its corresponding video frame.
In one possible design, the method further includes:
receiving the field-of-view directions of the plurality of cameras, the distances between the plurality of cameras and the target intersection point, and the identification numbers of the plurality of cameras, all uploaded by a third-party device connected to the plurality of cameras.
In a second aspect, an embodiment of the present application provides a video generation apparatus, including:
an acquisition module, configured to acquire the current viewing angle and current position of a viewer in a target scene;
a search module, configured to find, from a plurality of cameras, at least one target camera oriented in the same direction as the current viewer viewing angle according to the current viewer viewing angle, the current viewer position, the field-of-view directions of the plurality of cameras, and the distances between the plurality of cameras and a target intersection point;
a calculation module, configured to calculate the current view-angle included angle between the at least one target camera and the viewer according to the current viewer viewing angle, the current viewer position, the field-of-view direction of each target camera, the distance between each target camera and the target intersection point, and the position of the target intersection point;
an offset module, configured to shift the video frame captured by the at least one target camera according to the current view-angle included angle between the at least one target camera and the viewer;
and a generation module, configured to generate a target video frame based on the shifted video frame;
wherein the field-of-view directions of the plurality of cameras face the same region in the target scene, and the target intersection point is the common intersection point of the field-of-view center lines of the plurality of cameras.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the bus;
the memory is used for storing a computer program;
the processor is used for executing the program stored in the memory to implement the following process:
acquiring the current viewing angle and current position of a viewer in a target scene;
finding, from a plurality of cameras, at least one target camera oriented in the same direction as the current viewer viewing angle according to the current viewer viewing angle, the current viewer position, the field-of-view directions of the plurality of cameras, and the distances between the plurality of cameras and a target intersection point;
calculating the current view-angle included angle between the at least one target camera and the viewer according to the current viewer viewing angle, the current viewer position, the field-of-view direction of each target camera, the distance between each target camera and the target intersection point, and the position of the target intersection point;
shifting the video frame captured by the at least one target camera according to the current view-angle included angle between the at least one target camera and the viewer;
and generating a target video frame based on the shifted video frame;
wherein the field-of-view directions of the plurality of cameras face the same region in the target scene, and the target intersection point is the common intersection point of the field-of-view center lines of the plurality of cameras.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the following process:
acquiring the current viewing angle and current position of a viewer in a target scene;
finding, from a plurality of cameras, at least one target camera oriented in the same direction as the current viewer viewing angle according to the current viewer viewing angle, the current viewer position, the field-of-view directions of the plurality of cameras, and the distances between the plurality of cameras and a target intersection point;
calculating the current view-angle included angle between the at least one target camera and the viewer according to the current viewer viewing angle, the current viewer position, the field-of-view direction of each target camera, the distance between each target camera and the target intersection point, and the position of the target intersection point;
shifting the video frame captured by the at least one target camera according to the current view-angle included angle between the at least one target camera and the viewer;
and generating a target video frame based on the shifted video frame;
wherein the field-of-view directions of the plurality of cameras face the same region in the target scene, and the target intersection point is the common intersection point of the field-of-view center lines of the plurality of cameras.
The at least one technical solution adopted by one or more embodiments of the present application can achieve the following beneficial effects:
A target camera oriented in the same direction as the viewer's current viewing angle is first found, and the current view-angle included angle between the target camera and the viewer is calculated; the video frame captured by the at least one target camera is then shifted based on that included angle, and a target video frame is generated from the shifted frame. The displayed video frame can therefore change as the viewer's viewing angle changes, facilitating switching of the video to any viewing angle and enabling interaction between the viewer and the scene.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure in any way. In the drawings:
fig. 1 is a schematic application environment diagram of a video generation method, an apparatus, an electronic device, and a storage medium according to an embodiment of the present application.
Fig. 2 is a flowchart of a video generation method according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a video generating apparatus according to an embodiment of the present application.
Detailed Description
In order to facilitate switching a video to any viewing angle, embodiments of the present application provide a video generation method, a video generation apparatus, an electronic device, and a storage medium that enable the displayed video frame to change as the viewer's viewing angle changes.
First, in order to more intuitively understand the scheme provided by the embodiment of the present application, a system architecture of the video generation scheme provided by the embodiment of the present application is described below with reference to fig. 1.
Fig. 1 is a schematic application environment diagram of the video generation method, apparatus, electronic device, and storage medium according to one or more embodiments of the present application. As shown in fig. 1, the third-party device is connected to a plurality of cameras and is communicatively connected to the video playback device. The field-of-view directions of the cameras face the same region in the target scene, and the field-of-view center lines of the cameras share a common intersection point, namely the target intersection point. The third-party device can send the video playback device data such as the field-of-view directions of the cameras, their identification numbers, their distances to the target intersection point, the videos they capture, and the number of cameras. The third-party device may be, but is not limited to, a server, a personal computer, or another device for data aggregation and forwarding; the video playback device may be, but is not limited to, a device with a video playback function, such as a personal computer, a smartphone, a tablet computer, or a smart television; and an identification number may be a number or device code that uniquely identifies a camera.
The video generation method provided by the embodiment of the present application will be described in detail below.
The video generation method provided by the embodiment of the present application can be applied to a video playback device. For convenience of description, unless otherwise specified, the embodiments of the present application are described with the video playback device as the execution subject.
It should be understood that the described execution subject does not constitute a limitation of the embodiments of the present application.
As shown in fig. 2, a video generation method provided in an embodiment of the present application may include the following steps:
step S201, a current viewer viewing angle and a current viewer position of a viewer in a target scene are obtained.
In the embodiment of the application, when the video playing device starts to play the video picture shot by the camera, the vision and the position of the audience in the target scene may be initialized first. The user at the video playing device end can initiate the adjustment operation aiming at the vision and/or the position of the audience, and at the moment, the video playing device responds to the adjustment operation of the user and adjusts the visual angle and/or the position of the audience in the target scene.
After responding to the adjustment operation of the user each time, the video playing device can reacquire the current audience viewing angle and the current audience position of the audience in the target scene.
Step S202: find, from the plurality of cameras, at least one target camera oriented in the same direction as the current viewer viewing angle, according to the current viewer viewing angle, the current viewer position, the field-of-view directions of the plurality of cameras, and the distances between the plurality of cameras and the target intersection point.
In the embodiment of the present application, the third-party device is connected to the plurality of cameras and can send the video playback device data such as the field-of-view directions of the cameras, their identification numbers, their distances to the target intersection point, the videos they capture, and the number of cameras. From these data, together with the current viewer viewing angle and position, the video playback device finds the at least one target camera oriented in the same direction as the current viewer viewing angle.
Here, in the target scene, the center line of the viewer's current viewing angle falls within the field of view of one or more cameras; those cameras are called the target cameras oriented in the same direction as the current viewer viewing angle.
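The selection step can be sketched in 2D. The `camera_position` and `select_target_cameras` names, the dict layout, and the 60° field of view are hypothetical; the patent fixes neither a geometry nor a camera FOV:

```python
import math

def camera_position(intersection, fov_direction_deg, distance):
    """Place a camera `distance` away from the common intersection point of
    the field-of-view center lines, looking toward it along its own
    direction (a 2D simplification of the quantities the patent uses)."""
    rad = math.radians(fov_direction_deg)
    return (intersection[0] - distance * math.cos(rad),
            intersection[1] - distance * math.sin(rad))

def select_target_cameras(viewer_angle_deg, cameras, fov_deg=60.0):
    """A camera is 'in the same direction' as the viewer when the angular
    difference between its view-center line and the viewer's line of sight
    is within half its field of view (hypothetical 60-degree FOV)."""
    targets = []
    for cam in cameras:
        diff = (cam["fov_direction"] - viewer_angle_deg + 180.0) % 360.0 - 180.0
        if abs(diff) <= fov_deg / 2.0:
            targets.append(cam["id"])
    return targets
```

With this criterion a viewer looking at 20° selects cameras whose center lines lie within 30° of that direction.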
It should be noted that when the third-party device sends the videos captured by the plurality of cameras to the video playback device, it must send them synchronously, driven by a unified clock generator, so that the video frames received by the video playback device stay clock-synchronized. This avoids frames that are inconsistent with the actual scene, caused by clock skew, when the frames are later stitched.
Step S203: calculate the current view-angle included angle between the at least one target camera and the viewer according to the current viewer viewing angle, the current viewer position, the field-of-view direction of each target camera, the distance between each target camera and the target intersection point, and the position of the target intersection point.
The included angle between a target camera and the viewer's current viewing angle may be the angle between the camera's field-of-view center line and the center line of the viewer's current viewing angle.
In the embodiment of the present application, the target intersection point is the center point of the target scene, and its position is known. When calculating the included angle, the field-of-view center-line direction of each target camera is determined from its field-of-view direction, and each target camera's position is determined from that center-line direction, its distance to the target intersection point, and the position of the target intersection point. The current view-angle included angle between each target camera and the viewer is then calculated from the camera's position, its field-of-view center-line direction, the center-line direction of the viewer's current viewing angle, and the current viewer position.
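In a 2D sketch, the included angle between a camera's field-of-view center line and the viewer's line of sight reduces to a wrapped angular difference (the function name is hypothetical):

```python
def view_angle_between(camera_dir_deg, viewer_dir_deg):
    """Signed angle between a target camera's field-of-view center line and
    the center line of the viewer's current viewing angle, in (-180, 180].
    Positive means the camera's center line lies counter-clockwise of the
    viewer's line of sight."""
    diff = (camera_dir_deg - viewer_dir_deg) % 360.0
    return diff - 360.0 if diff > 180.0 else diff
```

The wrap step matters near 0°/360°: a camera at 350° and a viewer at 10° differ by 20°, not 340°.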
Step S204: shift the video frame captured by the at least one target camera according to the current view-angle included angle between the at least one target camera and the viewer.
Specifically, the offset angle of the video frame corresponding to each target camera may be calculated from the current view-angle included angle between that camera and the viewer, and the frame captured by each target camera is then shifted by the offset angle of its corresponding frame.
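One way to realize the shifting step, assuming a linear angle-to-pixel mapping and a hypothetical 60° camera FOV (neither specified by the patent):

```python
def offset_pixels(view_angle_deg, frame_width_px, camera_fov_deg=60.0):
    """Map the view-angle offset to a horizontal pixel shift under a
    linear angle-to-pixel assumption."""
    return round(view_angle_deg / camera_fov_deg * frame_width_px)

def shift_frame(frame, shift_px):
    """Shift every row of a frame (a list of pixel rows) horizontally,
    filling the vacated columns with 0 (black)."""
    width = len(frame[0])
    if shift_px >= 0:
        return [[0] * shift_px + row[:width - shift_px] for row in frame]
    return [row[-shift_px:] + [0] * (-shift_px) for row in frame]
```

In practice the vacated region would be covered by an adjacent camera's frame during stitching rather than left black.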
In one possible design, the video playback device may further calculate the distance between each target camera and the current viewer position from the current viewer position and each camera's position. Before or after shifting the frames captured by the at least one target camera, each camera's frame may be scaled according to its distance to the current viewer position. Specifically, the frame may be magnified, with the magnification increasing with the distance between the target camera and the current viewer position. The size of objects in the scaled frame then matches the size they would have if shot from the current viewer position, avoiding distortion of the video frame.
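A pinhole-model reading of this scaling rule is that apparent size is inversely proportional to distance to the subject, so the magnification is the ratio of the two distances. The function names and the nearest-neighbour resampling are assumptions, not taken from the patent:

```python
def zoom_factor(camera_to_subject, viewer_to_subject):
    """Magnification for a frame shot from farther away than the viewer's
    position; grows as the camera-to-viewer gap grows, matching the
    behaviour the patent describes (pinhole-model assumption)."""
    return camera_to_subject / viewer_to_subject

def scale_row(row, factor):
    """Nearest-neighbour horizontal resampling of one pixel row."""
    width = len(row)
    return [row[min(int(i / factor), width - 1)]
            for i in range(round(width * factor))]
```

For example, a camera 4 m from the subject serving a viewer standing 2 m away yields a 2x magnification.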
Step S205: generate the target video frame based on the shifted video frame.
Specifically, if the at least one target camera is a single camera, the shifted video frame is used as the target video frame; if the at least one target camera is a plurality of cameras, the plurality of shifted video frames are stitched to obtain the target video frame.
Stitching video frames is prior art and is not described in detail in the embodiments of the present application.
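For illustration only, a naive stitch that concatenates same-height frames side by side; real stitching, which the patent treats as prior art, would align and blend overlapping regions:

```python
def stitch_frames(frames):
    """Concatenate same-height frames (lists of pixel rows) side by side,
    row by row. No alignment or blending is performed."""
    height = len(frames[0])
    return [[p for f in frames for p in f[r]] for r in range(height)]
```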
In one possible design, if the frames captured by the target cameras have been scaled, the target video frame may be generated from the shifted and scaled frames: if the at least one target camera is a single camera, the shifted and scaled frame is used as the target video frame; if it is a plurality of cameras, the plurality of shifted and scaled frames are stitched to obtain the target video frame.
In summary, the video generation method provided in the embodiment of the present application finds the target camera oriented in the same direction as the viewer's current viewing angle, calculates the current view-angle included angle between the target camera and the viewer, shifts the video frame captured by the at least one target camera based on that included angle, and generates the target video frame from the shifted frame. The displayed video frame therefore changes as the viewer's viewing angle changes, facilitating switching of the video to any viewing angle and enabling interaction between the viewer and the scene. Further, because the third-party device sends the videos captured by the cameras to the video playback device synchronously, the frames received by the video playback device stay clock-synchronized, avoiding frames that are inconsistent with the actual scene when they are later stitched. In addition, the frames captured by the target cameras can be scaled according to their distances to the current viewer position, so that the size of objects in the scaled frames matches the size they would have if shot from the current viewer position, avoiding distortion of the video frame.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Referring to fig. 3, at the hardware level the electronic device includes a processor and, optionally, an internal bus, a network interface, and a memory. The memory may include volatile memory, such as Random-Access Memory (RAM), and may further include non-volatile memory, such as at least one disk storage. Of course, the electronic device may also include hardware required for other services.
The processor, the network interface, and the memory may be connected to each other via an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 3, but this does not indicate only one bus or one type of bus.
The memory is used for storing programs. In particular, a program may include program code comprising computer operation instructions. The memory may include both volatile memory and non-volatile storage, and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile storage into memory and then runs it, forming the video generation apparatus at the logical level. The processor executes the program stored in the memory and is specifically configured to perform the following operations:
acquiring a current audience viewing angle and a current audience position of an audience in a target scene;
finding, from a plurality of cameras, at least one target camera in the same direction as the current audience viewing angle, according to the current audience viewing angle, the current audience position, the field-of-view directions of the plurality of cameras, and the distances between the plurality of cameras and a target intersection point;
calculating a current viewing-angle included angle between the at least one target camera and the audience according to the current audience viewing angle, the current audience position, the field-of-view direction of each target camera, the distance between each target camera and the target intersection point, and the position of the target intersection point;
shifting a video picture captured by the at least one target camera according to the current viewing-angle included angle between the at least one target camera and the audience;
generating a target video picture based on the shifted video picture;
wherein the field-of-view directions of the plurality of cameras face the same region in the target scene, and the target intersection point is the common intersection point of the field-of-view center lines of the plurality of cameras.
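The first three operations above are geometric: each camera's field-of-view center line passes through the target intersection point, so a camera is "in the same direction" as the audience when the angle between its field-of-view direction and the audience viewing direction is small. The following minimal 2-D sketch of that selection and angle computation is an illustration only; the 45-degree threshold and all names are assumptions, not taken from the patent:

```python
import math

def included_angle(view_dir, cam_dir):
    """Angle in degrees between the audience viewing direction and a
    camera's field-of-view direction, both given as 2-D vectors."""
    dot = view_dir[0] * cam_dir[0] + view_dir[1] * cam_dir[1]
    norm = math.hypot(*view_dir) * math.hypot(*cam_dir)
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def find_target_cameras(view_dir, cam_dirs, max_angle=45.0):
    """Indices of cameras whose field-of-view direction lies within
    max_angle of the current audience viewing direction (the threshold
    defining 'same direction' is an assumption)."""
    return [i for i, d in enumerate(cam_dirs)
            if included_angle(view_dir, d) <= max_angle]
```

For an audience looking along (1, 0), a camera also facing (1, 0) is selected while cameras facing (0, 1) or (-1, 0) are rejected.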
The method performed by the video generation apparatus in the embodiment shown in fig. 3 of the present application may be applied to, or implemented by, a processor. The processor may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or execute the methods, steps, and logic blocks disclosed in one or more embodiments of the present application. A general-purpose processor may be a microprocessor or any conventional processor. The steps of a method disclosed in connection with one or more embodiments of the present application may be embodied directly as being executed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or a register. The storage medium is located in the memory; the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The electronic device may also execute the method of fig. 2 and implement the functions of the video generation apparatus in the embodiment shown in fig. 3; details are not repeated here.
Of course, besides a software implementation, the electronic device of the present application does not exclude other implementations, such as a logic device or a combination of software and hardware; that is, the execution subject of the processing flow below is not limited to individual logic units, and may also be hardware or a logic device.
Embodiments of the present application also provide a computer-readable storage medium storing one or more programs. The one or more programs include instructions that, when executed by a portable electronic device comprising a plurality of application programs, enable the portable electronic device to perform the method of the embodiment shown in fig. 2, and are specifically configured for:
acquiring a current audience viewing angle and a current audience position of an audience in a target scene;
finding, from a plurality of cameras, at least one target camera in the same direction as the current audience viewing angle, according to the current audience viewing angle, the current audience position, the field-of-view directions of the plurality of cameras, and the distances between the plurality of cameras and a target intersection point;
calculating a current viewing-angle included angle between the at least one target camera and the audience according to the current audience viewing angle, the current audience position, the field-of-view direction of each target camera, the distance between each target camera and the target intersection point, and the position of the target intersection point;
shifting a video picture captured by the at least one target camera according to the current viewing-angle included angle between the at least one target camera and the audience;
generating a target video picture based on the shifted video picture;
wherein the field-of-view directions of the plurality of cameras face the same region in the target scene, and the target intersection point is the common intersection point of the field-of-view center lines of the plurality of cameras.
Fig. 4 is a schematic structural diagram of a video generation apparatus according to an embodiment of the present application. Referring to fig. 4, in one software implementation, a video generation apparatus includes:
an acquisition module, used for acquiring a current audience viewing angle and a current audience position of an audience in a target scene;
a searching module, used for finding, from a plurality of cameras, at least one target camera in the same direction as the current audience viewing angle, according to the current audience viewing angle, the current audience position, the field-of-view directions of the plurality of cameras, and the distances between the plurality of cameras and a target intersection point;
a calculation module, used for calculating a current viewing-angle included angle between the at least one target camera and the audience according to the current audience viewing angle, the current audience position, the field-of-view direction of each target camera, the distance between each target camera and the target intersection point, and the position of the target intersection point;
an offset module, used for shifting a video picture captured by the at least one target camera according to the current viewing-angle included angle between the at least one target camera and the audience;
a generation module, used for generating a target video picture based on the shifted video picture;
wherein the field-of-view directions of the plurality of cameras face the same region in the target scene, and the target intersection point is the common intersection point of the field-of-view center lines of the plurality of cameras.
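The offset module's shift can be pictured, under a simple pinhole-camera assumption, as a horizontal translation proportional to the tangent of the viewing-angle included angle. The following hypothetical sketch is only an illustration of that relationship; the focal-length parameter and its default value are assumptions, not taken from the patent:

```python
import math

def pixel_offset(angle_deg, focal_px=1000.0):
    """Horizontal pixel shift corresponding to a viewing-angle included
    angle, using the pinhole relation dx = f * tan(theta), where f is the
    focal length expressed in pixels (a hypothetical parameter)."""
    return focal_px * math.tan(math.radians(angle_deg))
```

A zero included angle produces no shift, and the shift grows with the angle until the small-angle approximation breaks down near 90 degrees.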
In short, the above description covers only preferred embodiments of this document and is not intended to limit its scope of protection. Any modification, equivalent replacement, or improvement made within the spirit and principles of this document shall fall within its scope of protection.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
All the embodiments in this document are described in a progressive manner; for identical or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on what differs from the others. In particular, the system embodiment is described relatively briefly because it is substantially similar to the method embodiment; for relevant details, refer to the description of the method embodiment.

Claims (10)

1. A method of video generation, comprising:
acquiring a current audience viewing angle and a current audience position of an audience in a target scene;
finding, from a plurality of cameras, at least one target camera in the same direction as the current audience viewing angle, according to the current audience viewing angle, the current audience position, the field-of-view directions of the plurality of cameras, and the distances between the plurality of cameras and a target intersection point;
calculating a current viewing-angle included angle between the at least one target camera and the audience according to the current audience viewing angle, the current audience position, the field-of-view direction of each target camera, the distance between each target camera and the target intersection point, and the position of the target intersection point;
shifting a video picture captured by the at least one target camera according to the current viewing-angle included angle between the at least one target camera and the audience;
generating a target video picture based on the shifted video picture;
wherein the field-of-view directions of the plurality of cameras face the same region in the target scene, and the target intersection point is the common intersection point of the field-of-view center lines of the plurality of cameras.
2. The method of claim 1, wherein generating the target video picture based on the shifted video picture comprises:
if the at least one target camera is one camera, taking the shifted video picture as the target video picture;
if the at least one target camera is a plurality of cameras, stitching the plurality of shifted video pictures to obtain the target video picture.
3. The method of claim 1, wherein before acquiring the current audience viewing angle and the current audience position of the audience in the target scene, the method further comprises:
adjusting the viewing angle and/or the position of the audience in the target scene in response to an adjustment operation by a user.
4. The method of claim 3, wherein before adjusting the viewing angle and/or the position of the audience in the target scene in response to the user's adjustment operation, the method further comprises:
initializing the viewing angle and the position of the audience in the target scene.
5. The method of claim 1, further comprising:
calculating a distance between the at least one target camera and the current audience position;
wherein, before or after shifting the video picture captured by the at least one target camera according to the current viewing-angle included angle between the at least one target camera and the audience, the method further comprises:
scaling the video picture captured by the at least one target camera according to the distance between the at least one target camera and the current audience position;
and generating the target video picture based on the shifted video picture comprises:
obtaining the target video picture based on the shifted and scaled video picture.
6. The method of claim 1, wherein shifting the video picture captured by the at least one target camera according to the current viewing-angle included angle between the at least one target camera and the audience comprises:
calculating an offset angle of the video picture corresponding to the at least one target camera according to the current viewing-angle included angle between the at least one target camera and the audience;
shifting the video picture captured by the at least one target camera according to the offset angle of the corresponding video picture.
7. The method of claim 1, further comprising:
receiving the field-of-view directions of the plurality of cameras, the distances between the plurality of cameras and the target intersection point, and the identification numbers of the plurality of cameras, as uploaded by a third-party device connected to the plurality of cameras.
8. A video generation apparatus, comprising:
an acquisition module, used for acquiring a current audience viewing angle and a current audience position of an audience in a target scene;
a searching module, used for finding, from a plurality of cameras, at least one target camera in the same direction as the current audience viewing angle, according to the current audience viewing angle, the current audience position, the field-of-view directions of the plurality of cameras, and the distances between the plurality of cameras and a target intersection point;
a calculation module, used for calculating a current viewing-angle included angle between the at least one target camera and the audience according to the current audience viewing angle, the current audience position, the field-of-view direction of each target camera, the distance between each target camera and the target intersection point, and the position of the target intersection point;
an offset module, used for shifting a video picture captured by the at least one target camera according to the current viewing-angle included angle between the at least one target camera and the audience;
a generation module, used for generating a target video picture based on the shifted video picture;
wherein the field-of-view directions of the plurality of cameras face the same region in the target scene, and the target intersection point is the common intersection point of the field-of-view center lines of the plurality of cameras.
9. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another via the communication bus;
a memory for storing a computer program;
the processor is used for executing the program stored in the memory and realizing the following processes:
acquiring a current audience viewing angle and a current audience position of an audience in a target scene;
finding, from a plurality of cameras, at least one target camera in the same direction as the current audience viewing angle, according to the current audience viewing angle, the current audience position, the field-of-view directions of the plurality of cameras, and the distances between the plurality of cameras and a target intersection point;
calculating a current viewing-angle included angle between the at least one target camera and the audience according to the current audience viewing angle, the current audience position, the field-of-view direction of each target camera, the distance between each target camera and the target intersection point, and the position of the target intersection point;
shifting a video picture captured by the at least one target camera according to the current viewing-angle included angle between the at least one target camera and the audience;
generating a target video picture based on the shifted video picture;
wherein the field-of-view directions of the plurality of cameras face the same region in the target scene, and the target intersection point is the common intersection point of the field-of-view center lines of the plurality of cameras.
10. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the following process:
acquiring a current audience viewing angle and a current audience position of an audience in a target scene;
finding, from a plurality of cameras, at least one target camera in the same direction as the current audience viewing angle, according to the current audience viewing angle, the current audience position, the field-of-view directions of the plurality of cameras, and the distances between the plurality of cameras and a target intersection point;
calculating a current viewing-angle included angle between the at least one target camera and the audience according to the current audience viewing angle, the current audience position, the field-of-view direction of each target camera, the distance between each target camera and the target intersection point, and the position of the target intersection point;
shifting a video picture captured by the at least one target camera according to the current viewing-angle included angle between the at least one target camera and the audience;
generating a target video picture based on the shifted video picture;
wherein the field-of-view directions of the plurality of cameras face the same region in the target scene, and the target intersection point is the common intersection point of the field-of-view center lines of the plurality of cameras.
CN202110844901.9A 2021-07-26 2021-07-26 Video generation method and device, electronic equipment and storage medium Pending CN113596544A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110844901.9A CN113596544A (en) 2021-07-26 2021-07-26 Video generation method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN113596544A true CN113596544A (en) 2021-11-02

Family

ID=78249991

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110844901.9A Pending CN113596544A (en) 2021-07-26 2021-07-26 Video generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113596544A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114500846A (en) * 2022-02-12 2022-05-13 北京蜂巢世纪科技有限公司 Method, device and equipment for switching viewing angles of live action and readable storage medium
CN115373571A (en) * 2022-10-26 2022-11-22 四川中绳矩阵技术发展有限公司 Image display device, method, equipment and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103546720A (en) * 2012-07-13 2014-01-29 晶睿通讯股份有限公司 Processing system and processing method for synthesizing virtual visual angle image
CN108174240A (en) * 2017-12-29 2018-06-15 哈尔滨市舍科技有限公司 Panoramic video playback method and system based on user location
US10390007B1 (en) * 2016-05-08 2019-08-20 Scott Zhihao Chen Method and system for panoramic 3D video capture and display
CN113038117A (en) * 2021-03-08 2021-06-25 烽火通信科技股份有限公司 Panoramic playing method and device based on multiple visual angles




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination