CN113014960B - Method, device and storage medium for online video production - Google Patents

Publication number
CN113014960B
Authority
CN
China
Prior art keywords
video
picture
target
canvas
playing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911316898.2A
Other languages
Chinese (zh)
Other versions
CN113014960A (en)
Inventor
李杨
王辉
潘梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201911316898.2A priority Critical patent/CN113014960B/en
Publication of CN113014960A publication Critical patent/CN113014960A/en
Application granted granted Critical
Publication of CN113014960B publication Critical patent/CN113014960B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4314 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features for fitting data in a restricted space on the screen, e.g. EPG data in a rectangular grid
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343 Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/485 End-user interface for client configuration
    • H04N21/4858 End-user interface for client configuration for modifying screen layout parameters, e.g. fonts, size of the windows
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiment of the application provides a method, a device and a storage medium for online video production, wherein the method comprises the following steps: acquiring a file to be played and a video canvas, wherein the file to be played comprises at least two playing pictures; copying a target pixel area in each playing picture to the video canvas; encoding the video canvas onto which the pixel area has been copied to obtain a target video; and sending the target video to a client. The scheme keeps the picture clear while preventing the content of the playing picture from being blocked.

Description

Method, device and storage medium for online video production
Technical Field
The embodiment of the application relates to the technical field of video processing, and in particular to a method, a device, and a storage medium for online video production.
Background
In online video learning, a picture-in-picture mode or a large-television mode is generally adopted to realize real-time online teaching. Specifically, in the picture-in-picture mode, a teacher lecture picture captured by a camera (which may be called the small picture) and a playing picture (which may be called the large picture) are combined into one online teaching picture, and the teacher lecture picture is generally set small (that is, as a small picture), for example, placed at the edge of the online teaching picture. In the large-television mode, the playing picture is shown on a large television, the teacher stands in front of the television, a high-definition camera shoots both the television and the teacher, and the online teaching picture is generated from that footage.
In the research and practice of the prior art, the inventors of the embodiments of the present application found that, in the picture-in-picture mode, if the original playing picture contains much content or the content is distributed at the edge of the online teaching picture, the small picture can block the content of the large picture, so that the content is not fully displayed. In addition, when online lecture pictures are played in sequence, the transition between the large picture and the small picture is unnatural and unsmooth, because the playing pictures and the teacher lecture pictures are simply superimposed and composited. In the large-television mode, the clarity of the playing picture's content is noticeably reduced because the picture is recaptured by a camera.
Therefore, current online video learning modes cannot simultaneously keep the picture clear and prevent the content of the playing picture from being blocked.
Disclosure of Invention
The embodiment of the application provides a method, a device and a storage medium for online video production, which keep the picture clear while preventing the content of the playing picture from being blocked.
In a first aspect, an embodiment of the present application provides a method for making a video online, where the method includes:
acquiring a file to be played and a video canvas, wherein the file to be played comprises at least two playing pictures;
copying a target pixel area in each playing picture to the video canvas;
encoding the video canvas onto which the pixel area has been copied to obtain a target video;
and sending the target video to a client.
In one possible design, the obtaining the video canvas comprises:
acquiring a first aspect ratio of a playing picture and a second aspect ratio of a video picture;
determining the video canvas according to the first aspect ratio and the second aspect ratio.
In one possible design, the determining the video canvas according to the first aspect ratio and the second aspect ratio includes:
when the first aspect ratio is larger than the second aspect ratio, if the width of the playing picture is larger than the width of the video picture, taking the width of the playing picture as the target width of the video canvas, and obtaining the target length of the video canvas according to the width of the playing picture and the second aspect ratio;
and projecting the video picture onto a blank canvas to obtain the video canvas.
In one possible design, the determining the video canvas according to the first aspect ratio and the second aspect ratio includes:
when the first aspect ratio is smaller than the second aspect ratio, if the length of the playing picture is larger than that of the video picture, taking the length of the playing picture as the target length of the video canvas, and obtaining the target width of the video canvas according to the length of the playing picture and the second aspect ratio;
and projecting the video picture onto a blank canvas to obtain the video canvas.
In one possible design, after the file to be played and the video canvas are obtained, and before the target pixel area in each playing picture is copied onto the video canvas, the method further includes:
acquiring the pixel value of each pixel point on each playing picture;
and determining the target pixel area according to the pixel values of the pixel points.
In one possible design, the determining the target pixel area according to the pixel values of the pixels includes:
if the pixel point is determined to be non-white according to the pixel value of the pixel point, taking the non-white pixel point as the target pixel point to obtain the target pixel area;
the copying the target pixel area in each playing picture onto the video canvas comprises:
determining the position information of each target pixel point in the playing picture;
and assigning the target pixel points to corresponding positions of the video canvas according to the position information of each target pixel point on the playing picture.
In one possible design, the target video is stored on a blockchain node.
In a second aspect, an embodiment of the present application provides an online video production apparatus having functions to implement the method for producing a video online corresponding to the first aspect. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions, which may be software and/or hardware.
In one possible design, the apparatus includes:
the processing module is used for acquiring a file to be played and a video canvas, wherein the file to be played comprises at least two playing pictures; copying a target pixel area in each playing picture to the video canvas;
the encoding module is used for encoding the video canvas onto which the processing module has copied the pixel area to obtain a target video;
and the receiving and sending module is used for sending the target video to the client.
In one possible design, the processing module is specifically configured to:
acquiring a first aspect ratio of a playing picture and a second aspect ratio of a video picture;
determining the video canvas according to the first aspect ratio and the second aspect ratio.
In one possible design, the processing module is specifically configured to:
when the first aspect ratio is larger than the second aspect ratio, if the width of the playing picture is larger than the width of the video picture, taking the width of the playing picture as the target width of the video canvas, and obtaining the target length of the video canvas according to the width of the playing picture and the second aspect ratio;
and projecting the video picture onto a blank canvas to obtain the video canvas.
In one possible design, the processing module is specifically configured to:
when the first aspect ratio is smaller than the second aspect ratio, if the length of the playing picture is larger than that of the video picture, taking the length of the playing picture as the target length of the video canvas, and obtaining the target width of the video canvas according to the length of the playing picture and the second aspect ratio;
and projecting the video picture onto a blank canvas to obtain the video canvas.
In one possible design, after the processing module obtains the file to be played and the video canvas, before the processing module copies the target pixel area in each playing picture onto the video canvas, the processing module is further configured to:
acquiring the pixel value of each pixel point on each playing picture;
and determining the target pixel area according to the pixel values of the pixel points.
In one possible design, the target pixel region includes at least two target pixel points, and the processing module is specifically configured to:
if the pixel point is determined to be non-white according to the pixel value of the pixel point, taking the non-white pixel point as the target pixel point to obtain the target pixel area;
determining the position information of each target pixel point in the playing picture;
and assigning the target pixel points to corresponding positions of the video canvas according to the position information of each target pixel point on the playing picture.
In one possible design, the target video is stored on a blockchain node.
In yet another aspect, an embodiment of the present application provides an online video production apparatus, which includes at least one processor, a memory, and a transceiver that are connected to one another, where the memory is used to store a computer program and the processor is used to call the computer program in the memory to execute the method of the first aspect.
In yet another aspect, embodiments of the present application provide a computer-readable storage medium, which includes instructions that, when executed on a computer, cause the computer to perform the method of the first aspect.
Compared with the prior art, in the scheme provided by the embodiment of the application, the target pixel area in each playing picture is copied to the video canvas, and the video canvas onto which the pixel area has been copied is encoded to obtain the target video. The scheme keeps the picture clear while preventing the content of the playing picture from being blocked.
Drawings
FIG. 1 is a schematic flow chart of a method for online production of video in an embodiment of the present application;
fig. 2 is a schematic flow chart illustrating the determination of the video canvas according to the embodiment of the present application;
FIG. 3 is a schematic flow chart illustrating the process of determining the target pixel region according to the embodiment of the present application;
FIG. 4a is a schematic diagram of an interface for enabling PIP functionality according to an embodiment of the present application;
FIG. 4b is a schematic diagram of an interface for obtaining a user explanation of a playing screen according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a blockchain system according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an online video production device according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a physical device for performing the method of online video production according to the embodiment of the present application;
FIG. 8 is a schematic structural diagram of a physical device for performing the method of online video production in the embodiment of the present application.
Detailed Description
The terms "first," "second," and the like in the description and in the claims of the embodiments of the application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprise," "include," and "have," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to those steps or modules expressly listed, but may include other steps or modules not expressly listed or inherent to such process, method, article, or apparatus, such that a division of modules presented in an embodiment of the present application is merely a logical division and may be implemented in a practical application in a different manner, such that multiple modules may be combined or integrated into another system or some features may be omitted or not implemented, such that a shown or discussed coupling or direct coupling or communication between modules may be through some interfaces and an indirect coupling or communication between modules may be electrical or other similar, and such that embodiments are not limited in this application. Moreover, the modules or sub-modules described as separate components may or may not be physically separated, may or may not be physical modules, or may be distributed in a plurality of circuit modules, and some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiments of the present application.
The embodiment of the application provides a method, a device and a storage medium for online video production, which can be applied on a serving side, where the serving side may be a server or a terminal. The serving side can be used to make videos for online education, online teaching, or live-broadcast interaction. In the embodiment of the present application, the serving side deployed in a terminal is taken as an example; the serving side may also be referred to as an online video production apparatus.
It should be noted that the terminal according to the embodiments of the present application may be a device providing voice and/or data connectivity to a user, a handheld device having a wireless connection function, or another processing device connected to a wireless modem. Examples include mobile telephones (or "cellular" telephones) and computers with mobile terminals, such as portable, pocket-sized, handheld, computer-built-in, or vehicle-mounted mobile devices that exchange voice and/or data with a radio access network, for instance Personal Communication Service (PCS) phones, cordless phones, Session Initiation Protocol (SIP) phones, Wireless Local Loop (WLL) stations, and Personal Digital Assistants (PDA).
The scheme provided by the embodiment of the application relates to the Computer Vision (CV) technology of artificial intelligence, which is explained through the following embodiments:
Computer vision is a science that studies how to make a machine "see": it uses a camera and a computer, instead of human eyes, to identify, track, and measure a target, and further processes the image so that it becomes more suitable for human observation or for transmission to an instrument for detection. As a scientific discipline, computer vision research on related theories and techniques attempts to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technologies, virtual reality, augmented reality, and simultaneous localization and mapping, and also include common biometric technologies such as face recognition and fingerprint recognition.
Referring to fig. 1, a method for making a video online according to an embodiment of the present application is described below. The embodiment takes a server making a lecture video (for example, a PPT video) online as an example; other application scenarios, such as a presenter explaining an existing video, may refer to this embodiment and are not described in detail. The embodiment of the application comprises the following steps:
101. Acquire a file to be played and a video canvas.
The file to be played comprises at least two playing pictures, and the playing pictures are played in chronological order. The file to be played can be a PPT file; the display format and content of the file to be played are not limited in the embodiment of the present application. A playing picture may be a picture in which a teacher presents, through the file to be played, at least one of materials, animations, pictures, text, and the like related to the course content. If the file to be played is a PPT file, the playing picture can be a PPT picture.
The video canvas comprises a video picture, which may be a picture of the teacher's lecture collected by computer equipment through a camera. The video picture is generally a portrait of the teacher, capturing the teacher's body movements, voice, facial expressions, and the like during the lecture; it supplements the presentation of the course content in the file to be played. In some implementations, the video canvas may be created based on a canvas element: the canvas element is defined, a canvas object is created, and graphics are then drawn with the canvas object. Materials such as pictures, animations, and text can be imported into the video canvas; the embodiment of the application does not limit how the video canvas is produced.
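For illustration, the following TypeScript sketch shows one way such a video canvas could be created in a browser and the current camera frame projected onto it. It is a minimal sketch under assumptions the patent does not state: the function name createVideoCanvas is hypothetical, the HTML canvas 2D API is only one possible implementation, and the camera is assumed to be bound to an HTMLVideoElement (for example via navigator.mediaDevices.getUserMedia).

```typescript
// Minimal sketch (assumed names): create a blank canvas of the target size
// computed in step 101 and project the current camera frame onto it.
function createVideoCanvas(video: HTMLVideoElement,
                           targetWidth: number,
                           targetHeight: number): HTMLCanvasElement {
  const canvas = document.createElement("canvas");
  canvas.width = targetWidth;    // target width of the video canvas
  canvas.height = targetHeight;  // target length (height) of the video canvas

  const ctx = canvas.getContext("2d");
  if (!ctx) throw new Error("2D canvas context is not available");

  // Project the video picture onto the blank canvas.
  ctx.drawImage(video, 0, 0, targetWidth, targetHeight);
  return canvas;
}
```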
In some embodiments, the obtaining the video canvas comprises:
acquiring a first length-width ratio of a playing picture and a second length-width ratio of a video picture;
determining the video canvas according to the first aspect ratio and the second aspect ratio.
In some embodiments, after the camera captures the video picture, the video picture can be overlaid on a blank picture at its original proportion. To ensure that the content of the playing picture is not blocked, the length and width of the video picture projected onto the blank picture can be calculated from the length or the width of the playing picture. As shown in fig. 2, the two cases are introduced below:
(1) Calculate the length and the width of the video picture projected on the blank picture according to the width of the playing picture.
After creating a blank canvas, in some embodiments, the determining the video canvas according to the first aspect ratio and the second aspect ratio comprises:
determining whether the first aspect ratio is less than the second aspect ratio;
when the first aspect ratio is greater than the second aspect ratio, determining whether a width of the play out picture is greater than a width of the video picture;
if the width of the playing picture is larger than that of the video picture, taking the width of the playing picture as the target width of the video canvas, and obtaining the target length of the video canvas according to the width of the playing picture and the second length-width ratio;
and projecting the video picture onto a blank canvas to obtain the video canvas.
For example, taking the file to be played as a PPT file, P1 = PPT_width / PPT_height and P2 = C_width / C_height.
When P1 is greater than P2, C_width = PPT_width and C_height = C_width / P2, where C_width is the width of the video picture, PPT_width is the width of the PPT picture, C_height is the height of the video picture, and P2 is the second aspect ratio.
(2) Calculate the length and the width of the video picture projected on the blank picture according to the length of the playing picture.
In some embodiments, said determining said video canvas according to said first aspect ratio and said second aspect ratio comprises:
determining whether a length of the play-out picture is greater than a length of the video picture when the first aspect ratio is less than the second aspect ratio;
if the length of the playing picture is larger than that of the video picture, taking the length of the playing picture as the target length of the video canvas, and obtaining the target width of the video canvas according to the length of the playing picture and the second length-width ratio;
and projecting the video picture onto a blank canvas to obtain the video canvas.
For example, when P1 is less than P2, C_height = PPT_height and C_width = C_height * P2, where C_width is the width of the video picture, PPT_height is the length of the PPT picture, C_height is the height of the video picture, and P2 is the second aspect ratio.
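The sizing rule of cases (1) and (2) can be summarized in a short TypeScript sketch. The final fallback branch is an assumption, since the embodiment only specifies the two cases above:

```typescript
interface Size { width: number; height: number; }

// Minimal sketch of the canvas-sizing rule: P1 is the playing picture's
// aspect ratio, P2 is the video picture's aspect ratio.
function computeCanvasSize(ppt: Size, video: Size): Size {
  const p1 = ppt.width / ppt.height;     // first aspect ratio
  const p2 = video.width / video.height; // second aspect ratio

  if (p1 > p2 && ppt.width > video.width) {
    // Case (1): C_width = PPT_width, C_height = C_width / P2.
    return { width: ppt.width, height: ppt.width / p2 };
  }
  if (p1 < p2 && ppt.height > video.height) {
    // Case (2): C_height = PPT_height, C_width = C_height * P2.
    return { width: ppt.height * p2, height: ppt.height };
  }
  // Assumed fallback (not specified in the embodiment): the video picture
  // already fits, so keep its own size.
  return { width: video.width, height: video.height };
}
```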
102. Copy the target pixel area in each playing picture onto the video canvas.
The target pixel area refers to an area formed by non-white pixel points in a playing picture. The target pixel region includes at least two target pixel points.
In some embodiments, as shown in fig. 3, the target pixel region may be determined by:
a. Acquire the pixel value of each pixel point on each playing picture.
A pixel value may be represented by a red, green, and blue (RGB) value, obtained from the red, green, and blue components of the pixel. Pixel values may also be given as grayscale values; the embodiments of the present application do not limit this. The playing picture may be a grayscale image or a color image; the embodiments of the present application are not limited in this respect either.
b. Determine the target pixel area according to the pixel values of the pixel points.
In some embodiments, the determining the target pixel region according to the pixel values of the pixel points includes:
determining whether the pixel point is non-white according to the pixel value of the pixel point;
and if the pixel point is determined to be non-white according to the pixel value of the pixel point, taking the non-white pixel point as the target pixel point to obtain the target pixel area.
Correspondingly, the copying the target pixel area in each playing picture onto the video canvas comprises:
determining the position information of each target pixel point in the playing picture;
and assigning the target pixel points to corresponding positions of the video canvas according to the position information of each target pixel point on the playing picture.
If a pixel point is determined to be white according to its pixel value, the process returns to the step of acquiring the pixel value of each pixel point on each playing picture and continues traversing, determining for each remaining pixel point whether it is non-white according to its pixel value. For example, when R = G = B = 255, the pixel point is white.
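As a concrete illustration of this traversal, the sketch below reads the playing picture's pixels through the canvas ImageData API and assigns every non-white pixel to the corresponding position on the video canvas. Using getImageData/putImageData and a shared coordinate system are implementation assumptions, not requirements of the patent:

```typescript
// Minimal sketch of step 102: copy every non-white pixel of the playing
// picture to the same position on the video canvas.
function copyNonWhitePixels(pptCtx: CanvasRenderingContext2D,
                            canvasCtx: CanvasRenderingContext2D,
                            width: number, height: number): void {
  const src = pptCtx.getImageData(0, 0, width, height);
  const dst = canvasCtx.getImageData(0, 0, width, height);
  const s = src.data; // RGBA bytes, four per pixel
  const d = dst.data;

  for (let i = 0; i < s.length; i += 4) {
    // A pixel is white when R = G = B = 255; skip it, copy everything else.
    const isWhite = s[i] === 255 && s[i + 1] === 255 && s[i + 2] === 255;
    if (!isWhite) {
      d[i] = s[i]; d[i + 1] = s[i + 1]; d[i + 2] = s[i + 2]; d[i + 3] = 255;
    }
  }
  canvasCtx.putImageData(dst, 0, 0);
}
```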
103. Encode the video canvas onto which the pixel area has been copied to obtain a target video.
In some embodiments, the video canvas may be encoded with base64 encoding: the images, text, and the like on the video canvas are converted into base64 character strings to obtain the target video. The embodiment of the present application does not limit the encoding method.
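For a single composed frame, one possible base64 path is the canvas toDataURL method, sketched below; assembling encoded frames into the final target video is assumed to happen elsewhere and is not covered by this sketch:

```typescript
// Minimal sketch: encode one composed canvas frame as a raw base64 string.
function encodeFrame(canvas: HTMLCanvasElement): string {
  // toDataURL returns "data:image/png;base64,...."; strip the prefix
  // to keep only the base64 body.
  return canvas.toDataURL("image/png").split(",")[1];
}
```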
104. Send the target video to a client.
For example, a user (e.g., a lecturer) prepares a file to be played for an online lecture and, at the server, fills in the text, pictures, animations, etc. of the file to be played with non-pure-white colors (so that the white background can be filtered out). The user selects the "picture-in-picture" function, and the interface shown in figure 4a is displayed. After the "picture-in-picture" function is enabled, the server calls the terminal's camera to shoot the user's current scene and obtain the user's explanation video for each playing picture; fig. 4b shows a schematic diagram of this scene.
In this embodiment of the application, the blank canvas may have a regular shape, for example, a rectangle. Before the video picture is projected onto the blank canvas, the beginPath() method may be called to create a new path. The coordinate values and the width and height of the rectangle are set using the rect(x, y, width, height) method, where x and y are the coordinates of the upper-left corner of the rectangle, and width and height are its width and height. The border of the video canvas is then drawn with the fill() or stroke() method.
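A short sketch of those border-drawing calls, with illustrative coordinates rather than values from the patent:

```typescript
// Minimal sketch: draw the rectangular border of the video canvas.
function drawCanvasBorder(ctx: CanvasRenderingContext2D): void {
  ctx.beginPath();            // create a new path
  ctx.rect(0, 0, 1280, 720);  // x, y of the upper-left corner, then width and height
  ctx.strokeStyle = "black";
  ctx.stroke();               // or ctx.fill() for a filled rectangle
}
```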
Finally, all the target pixel points are drawn into the video canvas according to each target pixel point's relative or actual position in the playing picture, and the final target video can then be obtained.
In some embodiments, to keep the content of the playing picture undistorted and faithfully reproduced, the position information of each target pixel point in the playing picture and the position information of each pixel point in the video canvas may be defined in the same coordinate system, and each target pixel point is then drawn into the video canvas at its actual position in the playing picture.
In other embodiments, if the playing picture is small and the captured video is large, then besides setting the video canvas according to the size of the playing picture, the overall size of the playing picture can be adjusted together with the size of the video canvas, so that the playing picture stays sharp while the video picture is distorted as little as possible. The embodiment of the present application does not limit the adjustment manner, the adjustment dimension, and the like.
In the embodiment of the application, the target pixel area in each playing picture is copied to the video canvas, and the video canvas is encoded to obtain the target video. The scheme keeps the picture clear and prevents the content of the playing picture from being blocked.
In this embodiment, the target video may be stored in a blockchain. The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a string of data blocks associated by cryptographic methods, each data block containing information about a batch of network transactions, used to verify the validity (anti-counterfeiting) of the information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, and an application services layer.
The blockchain underlying platform can comprise processing modules such as user management, basic service, smart contract, and operation monitoring. The user management module is responsible for identity information management of all blockchain participants, including maintaining the generation of public and private keys (account management), key management, and maintaining the correspondence between a user's real identity and blockchain address (authority management); with authorization, it supervises and audits the transactions of certain real identities and provides rule configuration for risk control (risk-control audit). The basic service module is deployed on all blockchain node devices and is used to verify the validity of service requests and record valid requests to storage after consensus is reached; for a new service request, the basic service first performs interface adaptation analysis and authentication (interface adaptation), then encrypts the service information through a consensus algorithm (consensus management), transmits it completely and consistently to the shared ledger (network communication) after encryption, and records and stores it. The smart contract module is responsible for registering, issuing, triggering, and executing contracts; developers can define contract logic through a programming language, publish it to the blockchain (contract registration), and have execution triggered by keys or other events according to the logic of the contract clauses to complete the contract logic, with support for upgrading and canceling contracts. The operation monitoring module is mainly responsible for deployment, configuration modification, contract setting, and cloud adaptation during product release, as well as visual output of real-time state during product operation, such as alarms, monitoring network conditions, and monitoring node device health.
An online video production apparatus (which may also be referred to as a server or a terminal) that performs the method of online video production in the embodiments of the present application may be a node in a blockchain system, for example a node in the blockchain system shown in fig. 5.
Any technical feature mentioned in the embodiment corresponding to any one of fig. 1 to 5 is also applicable to the embodiment corresponding to fig. 6 to 8 in the embodiment of the present application, and the details of the subsequent similarities are not repeated.
In the embodiment of the present application, a method for online production of a video is described above, and an apparatus for performing the method for online production of a video is described below.
Referring to fig. 6, which shows a schematic structural diagram of an online video production apparatus, the apparatus can be applied to application scenarios such as online lectures and live-broadcast interaction. The online video production apparatus in the embodiment of the present application can implement the steps of the online video production method performed in the embodiment corresponding to fig. 1. The functions realized by the apparatus can be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions, which may be software and/or hardware. The apparatus may include a processing module, an encoding module, and a transceiver module, which can implement the operations performed in the embodiment corresponding to fig. 1; details are not repeated here.
In some embodiments, the processing module 601 may be configured to obtain a file to be played and a video canvas, where the file to be played includes at least two playing frames; copying a target pixel area in each playing picture to the video canvas;
the encoding module 602 is configured to encode the video canvas onto which the processing module has copied the pixel area to obtain a target video;
the transceiver module 603 is configured to send the target video to a client.
In this embodiment, the processing module 601 copies the target pixel area in each playing picture to the video canvas, and the encoding module 602 encodes the video canvas to obtain the target video. The scheme keeps the picture clear and prevents the content of the playing picture from being blocked.
In some embodiments, the processing module 601 is specifically configured to:
acquiring a first aspect ratio of a playing picture and a second aspect ratio of a video picture;
determining the video canvas according to the first aspect ratio and the second aspect ratio.
In some embodiments, the processing module 601 is specifically configured to:
when the first aspect ratio is larger than the second aspect ratio, if the width of the playing picture is larger than the width of the video picture, taking the width of the playing picture as the target width of the video canvas, and obtaining the target length of the video canvas according to the width of the playing picture and the second aspect ratio;
and projecting the video picture onto a blank canvas to obtain the video canvas.
In some embodiments, the processing module 601 is specifically configured to:
when the first aspect ratio is smaller than the second aspect ratio, if the length of the playing picture is larger than that of the video picture, taking the length of the playing picture as the target length of the video canvas, and obtaining the target width of the video canvas according to the length of the playing picture and the second aspect ratio;
and projecting the video picture onto a blank canvas to obtain the video canvas.
In some embodiments, after the processing module 601 acquires the file to be played and the video canvas, before the processing module copies the target pixel area in each playing screen onto the video canvas, the processing module is further configured to:
acquiring the pixel value of each pixel point on each playing picture;
and determining the target pixel area according to the pixel values of the pixel points.
In some embodiments, the target pixel region includes at least two target pixel points, and the processing module 601 is specifically configured to:
if the pixel point is determined to be non-white according to the pixel value of the pixel point, taking the non-white pixel point as the target pixel point to obtain the target pixel area;
determining the position information of each target pixel point in the playing picture;
and assigning the target pixel points to corresponding positions of the video canvas according to the position information of each target pixel point on the playing picture.
In some embodiments, the target video is stored on a blockchain node.
The online video production apparatus in the embodiment of the present application is described above from the perspective of modular functional entities, and the servers that execute the method of online video production in the embodiment of the present application are described below from the perspective of hardware processing. It should be noted that, in the embodiment shown in fig. 6 of the present application, the entity device corresponding to the transceiver module 603 may be an input/output unit, a transceiver, a radio frequency circuit, a communication module, an output interface, and the like, and the entity devices corresponding to the encoding module 602 and the processing module 601 may be a processor. The apparatus 60 shown in fig. 6 may have the structure shown in fig. 7. When it does, the processor in fig. 7 can implement functions the same as or similar to those of the processing module 601 and the encoding module 602 provided in the apparatus embodiment, the transceiver in fig. 7 can implement functions the same as or similar to those of the transceiver module 603, and the memory in fig. 7 stores a computer program that the processor calls when executing the method of online video production.
As shown in fig. 8, for convenience of description, only the portions related to the embodiments of the present application are shown; for undisclosed technical details, please refer to the method portion of the embodiments of the present application. The terminal device may be any terminal device, including a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), a point-of-sale (POS) terminal, a vehicle-mounted computer, and the like. Taking the terminal as an example:
fig. 8 is a block diagram illustrating a partial structure of a terminal provided in an embodiment of the present application. Referring to fig. 8, the terminal includes: radio Frequency (RF) circuit 88, memory 820, input unit 830, display unit 840, sensor 850, audio circuit 860, wireless fidelity (WiFi) module 870, processor 880, and power supply 890. Those skilled in the art will appreciate that the terminal structure shown in fig. 8 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each constituent element of the terminal in detail with reference to fig. 8:
the RF circuit 88 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, for processing downlink information of a base station after receiving the downlink information to the processor 880; in addition, data for designing uplink is transmitted to the base station. In general, RF circuit 88 includes, but is not limited to, an antenna, at least one Amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuitry 88 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), general Packet Radio Service (GPRS), code Division Multiple Access (CDMA), wideband Code Division Multiple Access (WCDMA), long Term Evolution (LTE), e-mail, short Message Service (SMS), etc.
The memory 820 may be used to store software programs and modules, and the processor 880 executes various functional applications of the terminal and data processing by operating the software programs and modules stored in the memory 820. The memory 820 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the terminal, etc. Further, the memory 820 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The input unit 830 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal. Specifically, the input unit 830 may include a touch panel 831 and other input devices 832. The touch panel 831, also referred to as a touch screen, can collect touch operations performed by a user on or near the touch panel 831 (e.g., operations performed by the user on the touch panel 831 or near the touch panel 831 using any suitable object or accessory such as a finger, a stylus, etc.) and drive the corresponding connection device according to a preset program. Alternatively, the touch panel 831 may include two portions of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts it to touch point coordinates, and sends the touch point coordinates to the processor 880, and can receive and execute commands from the processor 880. In addition, the touch panel 831 may be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 830 may include other input devices 832 in addition to the touch panel 831. In particular, other input devices 832 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 840 may be used to display information input by a user or information provided to the user and various menus of the terminal. The Display unit 840 may include a Display panel 841, and optionally, the Display panel 841 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 831 can cover the display panel 841, and when the touch panel 831 detects a touch operation thereon or nearby, the touch panel can transmit the touch operation to the processor 880 to determine the type of touch event, and then the processor 880 can provide a corresponding visual output on the display panel 841 according to the type of touch event. Although in fig. 8, the touch panel 831 and the display panel 841 are two separate components to implement the input and output functions of the terminal, in some embodiments, the touch panel 831 and the display panel 841 may be integrated to implement the input and output functions of the terminal.
The terminal may also include at least one sensor 850, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel 841 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 841 and/or backlight when the terminal is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the terminal posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured in the terminal, detailed description is omitted here.
The audio circuit 860, speaker 861, and microphone 862 may provide an audio interface between the user and the terminal. The audio circuit 860 can transmit the electrical signal converted from received audio data to the speaker 861, where it is converted into a sound signal and output; on the other hand, the microphone 862 converts a collected sound signal into an electrical signal, which the audio circuit 860 receives and converts into audio data. The audio data is output to the processor 880 for processing and then sent, for example, to another terminal via the RF circuit 88, or output to the memory 820 for further processing.
Wi-Fi belongs to short-distance wireless transmission technology. Through the WiFi module 870, the terminal can help the user receive and send e-mails, browse web pages, access streaming media, and the like, providing wireless broadband internet access for the user. Although fig. 8 shows the WiFi module 870, it is understood that it is not an essential part of the terminal and may be omitted as needed without changing the essence of the application.
The processor 880 is a control center of the terminal, connects various parts of the entire terminal using various interfaces and lines, performs various functions of the terminal and processes data by operating or executing software programs and/or modules stored in the memory 820 and calling data stored in the memory 820, thereby integrally monitoring the terminal. Optionally, processor 880 may include one or more processing units; preferably, the processor 880 may integrate an application processor, which mainly handles operating systems, user interfaces, applications, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 880.
The terminal also includes a power supply 890 (e.g., a battery) for powering the various components, which may be logically coupled to the processor 880 via a power management system that is configured to manage charging, discharging, and power consumption.
Although not shown, the terminal may further include a camera, a bluetooth module, and the like, which will not be described herein.
In the embodiment of the present application, the processor 880 included in the terminal further has a function of controlling and executing the above method procedures executed by the terminal.
For example, the processor 880, by calling instructions in the memory 820, performs the following operations:
acquiring a file to be played and a video canvas through an input unit 830, wherein the file to be played comprises at least two playing pictures; copying a target pixel area in each playing picture to the video canvas;
encoding the video canvas onto which the pixel area has been copied to obtain a target video;
the target video is sent to the client through the RF circuitry 88.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the apparatus and the module described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the embodiments of the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be through some interfaces, indirect coupling or communication connection between devices or modules, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may be stored in a computer readable storage medium.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. The procedures or functions described in accordance with the embodiments of the present application are generated in whole or in part when the computer program is loaded and executed on a computer. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
The technical solutions provided by the embodiments of the present application are introduced in detail above. Specific examples are used herein to explain the principles and implementations of the embodiments, and the descriptions of the embodiments are only intended to help understand the method and core ideas of the embodiments; meanwhile, for a person skilled in the art, the specific implementation and application scope may change according to the ideas of the embodiments of the present application. In summary, the content of this specification should not be construed as limiting the embodiments of the present application.

Claims (6)

1. A method for making a video online, the method comprising:
acquiring a file to be played, wherein the file to be played comprises at least two playing pictures;
acquiring a first aspect ratio of the playing picture and a second aspect ratio of a video picture;
when the first aspect ratio is larger than the second aspect ratio, if the width of the playing picture is larger than the width of the video picture, taking the width of the playing picture as the target width of a canvas onto which the video picture is to be projected, and obtaining the target length of the canvas according to the width of the playing picture and the second aspect ratio;
when the first aspect ratio is smaller than the second aspect ratio, if the length of the playing picture is larger than that of the video picture, taking the length of the playing picture as the target length of the canvas, and obtaining the target width of the canvas according to the length of the playing picture and the second aspect ratio;
projecting the video picture onto a canvas with the width being the target width and the length being the target length to obtain a video canvas;
acquiring the pixel value of each pixel point on the playing picture of the file to be played;
determining a target pixel area in the playing picture according to the pixel value of the pixel point, and determining the position information of the target pixel area in the playing picture;
copying a target pixel area in the playing picture to a corresponding position of the video canvas according to the position information;
encoding the video canvas to which the target pixel area has been copied, to obtain a target video;
and sending the target video to a client.
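For illustration only, and not part of the claim language: the following is a minimal Python sketch of the canvas-sizing rule recited in claim 1, under the assumption that an aspect ratio here means length (height) divided by width and that the canvas always keeps the video picture's aspect ratio. The function name, argument order, and rounding are hypothetical.

def compute_canvas_size(play_w, play_h, video_w, video_h):
    # Assumed convention: aspect ratio = length (height) / width.
    first = play_h / play_w      # first aspect ratio (playing picture)
    second = video_h / video_w   # second aspect ratio (video picture)
    if first > second and play_w > video_w:
        # Canvas width follows the playing picture; the target length is
        # derived from that width and the video picture's aspect ratio.
        return play_w, round(play_w * second)
    if first < second and play_h > video_h:
        # Canvas length follows the playing picture; the target width is
        # derived from that length and the video picture's aspect ratio.
        return round(play_h / second), play_h
    # Neither enlargement condition holds; keep the video picture's size.
    return video_w, video_h

Under these assumptions, a 1600x1440 playing picture over a 1280x720 video picture gives a 1600x900 canvas: the first ratio (0.9) exceeds the second (0.5625) and 1600 > 1280, so the width 1600 is kept and the length becomes 1600 x 0.5625 = 900. In both branches the canvas preserves the second aspect ratio, so the projected video picture is not distorted.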
2. The method of claim 1, wherein the target pixel area comprises at least two target pixel points, and wherein the determining a target pixel area according to the pixel value of the pixel point comprises:
if a pixel point is determined to be non-white according to its pixel value, taking the non-white pixel point as a target pixel point, so as to obtain the target pixel area;
the copying the target pixel area in the playing picture to the corresponding position of the video canvas according to the position information comprises:
determining the position information of each target pixel point in the playing picture;
and assigning the target pixel points to corresponding positions of the video canvas according to the position information of each target pixel point on the playing picture.
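Again for illustration only: a brief NumPy sketch of the non-white test and per-pixel copy recited in claim 2. The white threshold, the three-channel uint8 layout, and the assumption that the canvas is at least as large as the playing picture are assumptions of this sketch, not limitations of the claim.

import numpy as np

def overlay_non_white(play_frame, video_canvas, white_threshold=250):
    # play_frame and video_canvas are H x W x 3 uint8 arrays; the canvas is
    # assumed to be at least as large as the playing picture.
    h, w = play_frame.shape[:2]
    # A pixel counts as white only if every channel exceeds the threshold
    # (assumed test); all remaining pixels form the target pixel area.
    target = ~np.all(play_frame > white_threshold, axis=2)
    # Assign each target pixel point to the same (row, column) position on
    # the video canvas, i.e., copy according to its position information.
    video_canvas[:h, :w][target] = play_frame[target]
    return video_canvas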
3. The method of claim 1, wherein the target video is stored on a blockchain node.
4. An online video production apparatus, comprising:
a processing module configured to:
acquiring a file to be played, wherein the file to be played comprises at least two playing pictures;
acquiring a first aspect ratio of the playing picture and a second aspect ratio of a video picture;
when the first aspect ratio is greater than the second aspect ratio, if the width of the playing picture is greater than the width of the video picture, taking the width of the playing picture as the target width of a canvas onto which the video picture is to be projected, and obtaining the target length of the canvas from the width of the playing picture and the second aspect ratio;
when the first aspect ratio is smaller than the second aspect ratio, if the length of the playing picture is greater than the length of the video picture, taking the length of the playing picture as the target length of the canvas, and obtaining the target width of the canvas from the length of the playing picture and the second aspect ratio; projecting the video picture onto a canvas with the width being the target width and the length being the target length to obtain a video canvas;
acquiring the pixel value of each pixel point on the playing picture of the file to be played;
determining a target pixel area in the playing picture according to the pixel value of the pixel point, and determining the position information of the target pixel area in the playing picture;
copying a target pixel area in the playing picture to a corresponding position of the video canvas according to the position information;
an encoding module, configured to encode the video canvas to which the processing module has copied the target pixel area, to obtain a target video;
and a transceiver module, configured to send the target video to a client.
5. An online video production apparatus, comprising:
at least one processor, a memory, and a transceiver;
wherein the memory is configured to store a computer program, and the processor is configured to call the computer program stored in the memory to perform the method of any one of claims 1-3.
6. A computer-readable storage medium comprising instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1-3.
CN201911316898.2A 2019-12-19 2019-12-19 Method, device and storage medium for online video production Active CN113014960B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911316898.2A CN113014960B (en) 2019-12-19 2019-12-19 Method, device and storage medium for online video production

Publications (2)

Publication Number Publication Date
CN113014960A (en) 2021-06-22
CN113014960B (en) 2023-04-11

Family

ID=76382598

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911316898.2A Active CN113014960B (en) 2019-12-19 2019-12-19 Method, device and storage medium for online video production

Country Status (1)

Country Link
CN (1) CN113014960B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114222149A * 2021-11-17 2022-03-22 武汉斗鱼鱼乐网络科技有限公司 Stream pushing method, device, medium and computer equipment
CN115866315B (en) * 2023-02-14 2023-06-30 深圳市东微智能科技股份有限公司 Data processing method, device, equipment and computer readable storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150143450A1 (en) * 2013-11-21 2015-05-21 Broadcom Corporation Compositing images in a compressed bitstream
CN105744340A (en) * 2016-02-26 2016-07-06 上海卓越睿新数码科技有限公司 Real-time screen fusion method for live broadcast video and presentation file
CN105872418A (en) * 2016-03-30 2016-08-17 浙江大华技术股份有限公司 Method and device for superimposing a GUI (Graphical User Interface) image layer on a digital image
CN105828160B (en) * 2016-04-01 2017-09-12 腾讯科技(深圳)有限公司 Video broadcasting method and device
CN106791937B (en) * 2016-12-15 2020-08-11 广东威创视讯科技股份有限公司 Video image annotation method and system
CN108989830A * 2018-08-30 2018-12-11 广州虎牙信息科技有限公司 Live broadcasting method, device, electronic equipment and storage medium
CN110446110B (en) * 2019-07-29 2022-04-22 深圳市东微智能科技股份有限公司 Video playing method, video playing device and storage medium

Also Published As

Publication number Publication date
CN113014960A (en) 2021-06-22

Similar Documents

Publication Publication Date Title
US11356619B2 (en) Video synthesis method, model training method, device, and storage medium
WO2020192465A1 (en) Three-dimensional object reconstruction method and device
WO2019184889A1 (en) Method and apparatus for adjusting augmented reality model, storage medium, and electronic device
CN111556278B (en) Video processing method, video display device and storage medium
CN103631768B Collaborative data editing and processing system
CN104170318B Communication using interactive avatars
WO2019034142A1 (en) Three-dimensional image display method and device, terminal, and storage medium
CN107566730B Panoramic image shooting method and mobile terminal
WO2021184952A1 (en) Augmented reality processing method and apparatus, storage medium, and electronic device
CN108712603B (en) Image processing method and mobile terminal
CN111417028A (en) Information processing method, information processing apparatus, storage medium, and electronic device
CN107580209A Photographing and imaging method and apparatus for a mobile terminal
WO2018120657A1 (en) Method and device for sharing virtual reality data
CN110458921B (en) Image processing method, device, terminal and storage medium
CN107770454A Image processing method, terminal and computer-readable storage medium
CN109426343B (en) Collaborative training method and system based on virtual reality
US20210152751A1 (en) Model training method, media information synthesis method, and related apparatuses
CN108876878B (en) Head portrait generation method and device
US20230141166A1 (en) Data Sharing Method and Device
CN111368820A Text labeling method, device and storage medium
CN108683850A Shooting reminder method and mobile terminal
CN113014960B (en) Method, device and storage medium for online video production
CN111556337B (en) Media content implantation method, model training method and related device
CN108880975B (en) Information display method, device and system
CN109803110A Image processing method, terminal device and server

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (country of ref document: HK; legal event code: DE; ref document number: 40050061)
GR01 Patent grant