CN115396710A - Method for H5 or small program to project short video and related device - Google Patents


Info

Publication number
CN115396710A
CN115396710A (application CN202210947435.1A)
Authority
CN
China
Prior art keywords
video, short video, picture, large screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210947435.1A
Other languages
Chinese (zh)
Inventor
王志军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Happycast Technology Co Ltd
Original Assignee
Shenzhen Happycast Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Happycast Technology Co Ltd
Priority to CN202210947435.1A
Publication of CN115396710A
Legal status: Pending (current)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85 Assembly of content; Generation of multimedia applications
    • H04N21/854 Content authoring
    • H04N21/8549 Creating video summaries, e.g. movie trailer
    • H04N21/858 Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot

Abstract

The embodiments of the present application disclose a method for casting short videos from an H5 page or an applet (mini-program) and a related apparatus. The method comprises the following steps: splitting the video frames of a short video into a preset number of frame pictures, and selecting the key-frame pictures among them to form an important picture set; uploading the important picture set to a cloud space in picture mode; setting up a video recovery model in the cloud space, and recovering the short video from the important picture set with the video recovery model; and casting the recovered short video to the large-screen end. With the method and apparatus, the efficiency of casting short videos to the large-screen end is greatly improved.

Description

Method for H5 or small program to project short video and related device
Technical Field
The present application belongs to the technical field of information processing, and mainly relates to a method for casting short videos from an H5 page or an applet, and a related apparatus.
Background
At present, watching short videos on a large screen offers increasingly clear advantages, so more and more users like to cast short videos to the large-screen end.
In the prior art, the service model of applet platforms often does not support casting short videos to the large-screen end, and establishing a screen-casting link takes a long time, so the screen-casting efficiency for short videos is low.
Disclosure of Invention
An object of the present application is to provide a method for casting short videos from an H5 page or an applet, and a related apparatus. Its advantage is that casting short videos to the large-screen end through H5 or an applet can greatly improve casting efficiency.
In order to achieve the above object, in a first aspect, an embodiment of the present application provides a method for casting short videos from an H5 page or an applet, where the method includes:
splitting the video frames of a short video into a preset number of frame pictures, and selecting the key-frame pictures among them to form an important picture set;
uploading the important picture set to a cloud space in picture mode;
setting up a video recovery model in the cloud space, and recovering the short video from the important picture set with the video recovery model;
and casting the recovered short video to the large-screen end.
By splitting the video frames of a short video into a preset number of frame pictures, selecting the key-frame pictures among them to form an important picture set, uploading the important picture set to a cloud space in picture mode, setting up a video recovery model in the cloud space, recovering the short video from the important picture set with the video recovery model, and casting the recovered short video to the large-screen end, the efficiency of casting short videos to the large-screen end can be improved.
In one possible example, splitting the video frames of the short video into the preset number of frame pictures includes the following steps:
collecting the video frames of the short video;
and setting the picture format of the frame pictures, and splitting the video frames of the short video into the preset number of frame pictures.
It can be understood that collecting the video frames of the short video, setting the picture format of the frame pictures, and splitting the video frames into those pictures can improve the splitting efficiency.
In one possible example, the video recovery model is trained with a training picture set and optimized according to the training results.
It can be understood that training the video recovery model with the training picture set and optimizing it according to the training results can improve the optimization efficiency of the video recovery model.
In one possible example, optimizing according to the training results includes the following steps:
inputting the training picture set into the video recovery model for recovery training;
and adjusting the parameters of the video recovery model according to the training results.
It can be understood that inputting the training picture set into the video recovery model for recovery training, and adjusting the model's parameters according to the training results, can improve the training efficiency of the video recovery model.
In one possible example, recovering the short video from the important picture set with the video recovery model includes the following steps:
setting the video format of the short video;
and, based on the video format, recovering the short video from the important picture set with the video recovery model.
It can be understood that setting the video format of the short video and, based on that format, recovering the short video from the important picture set with the video recovery model can optimize the recovery efficiency of the short video.
In one possible example, casting the recovered short video to the large-screen end includes the following steps:
establishing a casting link with the large-screen end using an H5 web page or an applet;
and casting the recovered short video to the large-screen end through the casting link.
It can be understood that establishing a casting link with the large-screen end using an H5 web page or an applet, and casting the recovered short video to the large-screen end through that link, optimizes the casting efficiency of the short video.
In one possible example, establishing a casting link with the large-screen end using an H5 web page or an applet includes the following steps:
when the casting link is established with the large-screen end using an H5 web page, uploading the recovered short video to the cloud to generate a video link, and sending the video link to the cloud desktop.
It can be understood that, when the casting link is established with the large-screen end using an H5 web page, uploading the recovered short video to the cloud to generate a video link and sending that link to the cloud desktop can improve the efficiency of establishing the video link.
In a second aspect, an apparatus for casting short videos from an H5 page or an applet includes units for performing the method provided in the first aspect or any implementation of the first aspect.
In a third aspect, an apparatus for casting short videos from an H5 page or an applet comprises a processor, a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the method provided in the first aspect or any implementation of the first aspect.
In a fourth aspect, a computer-readable storage medium stores a computer program that causes a computer to execute the method provided in the first aspect or any implementation of the first aspect.
The embodiment of the application has the following beneficial effects:
the method comprises the steps of splitting a video frame of a short video into preset frame number pictures, selecting key frame pictures of the preset frame number pictures to form an important picture set, uploading the important picture set to a cloud space based on a picture mode, setting a video recovery model by using the cloud space, recovering the short video of the important picture set by using the video recovery model, and projecting the recovered short video to a large-screen end, so that the efficiency of projecting the short video to the large-screen end is greatly improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort. In the drawings:
fig. 1 is an application scenario diagram of casting short videos from H5 or an applet provided in an embodiment of the present application;
fig. 2 is a schematic diagram of a screen-casting applet application provided in an embodiment of the present application;
fig. 3 is a scene schematic diagram of a screen-casting options main interface provided in an embodiment of the present application;
fig. 4 is a screen-casting schematic diagram of casting a short video from H5 or an applet provided in an embodiment of the present application;
fig. 5 is a schematic flow chart of casting a short video from H5 or an applet according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an apparatus for casting short videos from H5 or an applet according to an embodiment of the present disclosure;
fig. 7 is a structural diagram of a device for casting short videos from H5 or an applet according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
The terms "first" and "second" and the like in this application are used to distinguish different objects, not to describe a particular order. Furthermore, the terms "include" and "have," and any variations thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements, but may include other steps or elements that are not listed or that are inherent to such a process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 is an application scenario diagram of casting short videos from H5 or an applet according to an embodiment of the present disclosure. As shown in fig. 1, the application scenario includes a user 101, an electronic device 102, and a server 103. It should be noted that the number of devices, the form of the devices, and the number of users in the system shown in fig. 1 are exemplary and do not limit the embodiments of the present application; one user may use several electronic devices.
The user 101 is the person who actually operates the electronic device 102 and controls it to perform the corresponding operations. The electronic device 102 may be the smartphone shown in fig. 1, or a personal computer (PC), an all-in-one machine, a palmtop computer, a tablet computer (pad), a smart-TV playback terminal, a portable device, or the like. The operating system of a PC-class electronic device, such as an all-in-one machine, may include, but is not limited to, Linux, Unix, and the Windows family (e.g., Windows XP, Windows 7, etc.). The operating system of a mobile electronic device, such as a smartphone, may include, but is not limited to, Android, iOS (Apple's mobile operating system), Windows, and the like.
The following describes the method for casting short videos from H5 or an applet provided by an embodiment of the present application. The method may be performed by a short-video casting apparatus, which may be implemented in software and/or hardware and is generally integrated in an electronic device or a server.
Referring to fig. 2, fig. 2 is a schematic view of a screen-casting applet application provided in an embodiment of the present application. The first electronic device 201 may install the screen-casting applet 202 shown in fig. 2. When the user performs a trigger operation on the screen-casting applet 202 installed on the first electronic device 201 (for example, clicks the icon of the screen-casting applet 202), the first electronic device 201 starts the installed screen-casting applet 202 and enters the screen-casting applet application. After finishing with the application, the user may also click the home page 203 to return to the initial interface of the first electronic device 201.
Referring to fig. 3, fig. 3 is a scene schematic diagram of the screen-casting options main interface provided in an embodiment of the present application. Specifically, the user may go through the process of selecting screen-casting details on the second electronic device 301, thereby causing the second electronic device 301 to cast the short video to the large-screen end. Specifically, after receiving the short-video data, the second electronic device 301 may select resolution adjustment 307, brightness adjustment 306, contrast adjustment 305, casting bit rate 304, and casting frame rate 303 on the screen-casting options main interface 308; at this time, the pop-up box 302 of the second electronic device 301 displays "please complete the processing within 10 minutes".
Referring to fig. 4, fig. 4 is a screen-casting schematic diagram of casting a short video from H5 or an applet according to an embodiment of the present application. The user views the short video 403 through the third electronic device 401, and the pop-up box 402 of the third electronic device 401 displays "please wait, casting to the screen".
Referring to fig. 5, fig. 5 is a schematic flow chart of casting a short video from H5 or an applet according to an embodiment of the present disclosure. The method is described here as applied to the process of casting a short video from H5 or an applet; the apparatus performing it may be a server or an electronic device. The method comprises the following steps S501-S504, wherein,
S501: splitting the video frames of a short video into a preset number of frame pictures, and selecting the key-frame pictures among them to form an important picture set.
In this embodiment of the present application, an applet may use the local device to split the video frames of a short video into a preset number of frame pictures; the specific form is not limited in this embodiment. A short video is a form of internet content distribution, generally a video of no more than 5 minutes distributed on new internet media. It can be played on various new-media platforms and pushed at high frequency. Unlike micro-films and live streaming, short-video production has no specific requirements on expression form or team configuration, and features a simple production process, a low production threshold, and strong participation.
For example, a key frame is a frame that connects two different pieces of content; the video content after this frame changes or transitions, and the frame is marked with a small black dot on the timeline. Ordinary frames measure playing time or transition time; their content cannot be set manually but is filled in automatically during playback from the preceding and following key frames and the transition type. Ordinary frames can be inserted or deleted manually to change the transition time between two key frames.
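As a rough illustration of how key-frame pictures might be selected from the split frames, the sketch below keeps a frame whenever it differs enough from the last kept frame. This is a minimal stand-in, not the method claimed in the application: frames are simplified to flat lists of pixel values, and the difference threshold is an assumed parameter.

```python
def select_key_frames(frames, threshold=30.0):
    """Pick the indices of frames whose mean absolute pixel difference
    from the previously selected key frame exceeds `threshold`."""
    if not frames:
        return []
    key_indices = [0]  # the first frame always enters the important picture set
    last = frames[0]
    for i, frame in enumerate(frames[1:], start=1):
        diff = sum(abs(a - b) for a, b in zip(frame, last)) / len(frame)
        if diff > threshold:  # content changed enough: treat as a key frame
            key_indices.append(i)
            last = frame
    return key_indices

# Three near-identical frames followed by a scene change: frames 0 and 3 survive.
frames = [[10, 10, 10], [12, 11, 10], [11, 10, 12], [200, 210, 205]]
print(select_key_frames(frames))
```

A real implementation would operate on decoded image data, and could instead use histograms or codec-level key-frame flags rather than raw pixel differences.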
In one possible example, step S501 includes the following steps A1-A2, wherein,
a1: video frames of the short video are collected.
For example, a video is composed of still pictures called frames. When the frame rate falls below about 15 frames per second, continuously moving video begins to look jerky. China adopts the PAL television standard, which specifies 25 frames per second (interlaced) and 625 scan lines per frame. Since a larger number of frames means a larger amount of data, lowering the frame rate reduces the data amount. As an example, when the frame rate of a video is reduced from 30 frames per second to 15 frames per second, the data amount is reduced to a certain degree, but the visual effect deteriorates.
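The frame-rate/data-amount trade-off mentioned above can be made concrete with back-of-the-envelope arithmetic for uncompressed frames; the 1280x720 resolution and 24-bit color below are assumed figures for illustration, not values from the application.

```python
def raw_data_rate(width, height, fps, bytes_per_pixel=3):
    """Uncompressed video data rate in bytes per second."""
    return width * height * bytes_per_pixel * fps

rate_30 = raw_data_rate(1280, 720, 30)  # 30 frames per second
rate_15 = raw_data_rate(1280, 720, 15)  # 15 frames per second
print(rate_30, rate_15)  # halving the frame rate halves the raw data amount
```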
A2: setting the picture format of the frame pictures, and splitting the video frames of the short video into the preset number of frame pictures.
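Step A2 can be sketched as follows: sample a preset number of frames from the collected sequence and assign each one a file name in the chosen picture format. The frame representation and the naming pattern are assumptions made for illustration.

```python
def split_into_pictures(video_frames, preset_count, picture_format="jpg"):
    """Sample `preset_count` frames evenly from the frame sequence and
    pair each one with a picture file name in the chosen format."""
    if preset_count <= 0 or not video_frames:
        return []
    step = max(len(video_frames) // preset_count, 1)
    sampled = video_frames[::step][:preset_count]
    return [(f"frame_{i:04d}.{picture_format}", frame)
            for i, frame in enumerate(sampled)]

# 100 collected frames split into 5 pictures: every 20th frame is kept.
pictures = split_into_pictures(list(range(100)), preset_count=5)
print([name for name, _ in pictures])
```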
For example, common picture formats include JPG, PNG, and BMP. The JPG (JPEG) format, established by the Joint Photographic Experts Group, is the most common image file format. It is a lossy compression format that can fit an image into a very small storage space; repeated or unimportant image data is discarded, so image data is easily degraded. In particular, too high a compression ratio will significantly reduce the quality of the image recovered after decompression, so a very high compression ratio is unsuitable when high image quality is sought. Nonetheless, JPEG compression is very advanced: it removes redundant image data by lossy compression and can display rich, vivid images at a very high compression rate; in other words, it achieves good image quality in minimal disk space. JPEG is also a very flexible format with adjustable image quality: files can be compressed at different ratios, and multiple compression levels are supported, typically between 10:1 and 40:1. The higher the compression ratio, the lower the quality; the lower the ratio, the better the quality. A multi-megabyte BMP file, for instance, can be compressed to a few tens of KB, and a balance point between image quality and file size can be found by trading one against the other. The JPEG format mainly compresses high-frequency information and preserves color information well; it is well suited to the internet, reduces image transmission time, supports 24-bit true color, and is commonly used for continuous-tone images.
The PNG format can store grayscale images with a depth of up to 16 bits, color images with a depth of up to 48 bits, and up to 16 bits of alpha-channel data. PNG files are commonly used in JAVA programs, web pages, and S60 programs because they achieve a high compression ratio, produce small files, and support 256-color images. PNG also has the following characteristics:
streaming read/write performance: the image file format allows continuous reading and writing of image data, a feature well suited for generating and displaying images during communication.
Successive approximation display: this feature allows to display the image on the terminal while transmitting the image file over the communication link, displaying the details of the image step by step after the whole outline has been displayed, i.e. displaying the image with a low resolution and then increasing its resolution step by step.
Transparency: this feature may cause certain portions of the image to be hidden from view, creating some distinctive images.
Auxiliary information: this feature may be used to store some text annotation information in the image file.
The BMP format is a hardware-independent image file format in wide use. It uses a bit-mapped storage format and, apart from a selectable image depth, applies no compression, so BMP files occupy a large amount of space. The image depth of a BMP file may be 1, 4, 8, or 24 bits. When storing data, a BMP file scans the image from left to right and from bottom to top. Because the BMP format is the standard for exchanging image data in the Windows environment, graphics software running under Windows supports it. A typical BMP image file consists of four parts:
1: the bitmap file header, a data structure containing information such as the type and display content of the BMP file;
2: the bitmap information data structure, containing the width, height, and compression method of the BMP image, the color definitions, and other information;
3: the palette, which is optional: some bitmaps need a palette while others, such as true-color images (24-bit BMP), do not;
4: the bitmap data, whose content varies with the bit depth: 24-bit bitmaps store RGB values directly, while bitmaps of fewer than 24 bits store index values into the palette.
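The fixed-size headers that make up the first two of these four parts can be decoded with Python's struct module. The sketch below builds a minimal 2x2, 24-bit header by hand and parses it back; a real file would append the pixel data and, below 24 bits, a palette.

```python
import struct

def parse_bmp_headers(data):
    """Decode the 14-byte file header and the start of the 40-byte
    BITMAPINFOHEADER from raw BMP bytes."""
    magic, file_size, _, _, pixel_offset = struct.unpack_from("<2sIHHI", data, 0)
    assert magic == b"BM", "not a BMP file"
    _, width, height, _, bit_count = struct.unpack_from("<IiiHH", data, 14)
    return {"file_size": file_size, "pixel_offset": pixel_offset,
            "width": width, "height": height, "bit_count": bit_count}

# File header: magic, total size (14 + 40 + 16 bytes of padded pixel rows),
# two reserved fields, and the offset where the pixel data starts.
file_header = struct.pack("<2sIHHI", b"BM", 70, 0, 0, 54)
# Info header: size, width, height, planes, bit count, compression,
# image size, x/y resolution, and color counts.
info_header = struct.pack("<IiiHHIIiiII", 40, 2, 2, 1, 24, 0, 16, 2835, 2835, 0, 0)
headers = parse_bmp_headers(file_header + info_header)
print(headers)
```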
S502: uploading the important picture set to the cloud space in picture mode.
S503: setting up a video recovery model in the cloud space, and recovering the short video from the important picture set with the video recovery model.
For example, the video recovery model includes the functionality of a video engine and can offload the computation of video transcoding, storage, and distribution to a cloud server. The cloud server serves a network cluster that is entirely transparent to users and carries the users' application-specific computation tasks, which greatly simplifies the terminal hardware and software users face. The final form is a zero-maintenance software client: user hardware is greatly simplified, the complex and changeable software and hardware live in the cloud, and users' video applications become much faster and more convenient.
In one possible example, step S503 includes the following step B1:
B1: training the video recovery model with the training picture set, and optimizing it according to the training results.
In one possible example, step B1 comprises the following steps b1-b2, wherein,
b1: inputting the training picture set into the video recovery model for recovery training.
b2: adjusting the parameters of the video recovery model according to the training results.
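Steps b1-b2 (running the training picture set through the model and adjusting its parameters from the training result) can be illustrated with a toy one-parameter model trained by gradient descent. This is a deliberately simplified stand-in; the application does not specify the actual structure of the video recovery model.

```python
def train_recovery_model(training_pairs, lr=0.1, epochs=50):
    """Fit a single scale parameter w so that w * degraded ~= original,
    standing in for 'adjust parameters according to the training result'."""
    w = 1.0  # initial model parameter
    for _ in range(epochs):
        for degraded, original in training_pairs:
            pred = w * degraded            # b1: recovery-training forward pass
            grad = 2 * (pred - original) * degraded
            w -= lr * grad                 # b2: adjust the parameter
    return w

# Degraded pixel values are the originals dimmed to 50%; training should
# drive the scale parameter close to 2.0.
pairs = [(0.5, 1.0), (0.25, 0.5), (0.4, 0.8)]
print(round(train_recovery_model(pairs), 2))
```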
In one possible example, step S503 includes the steps of:
setting a video format of the short video;
for ease of understanding, the video format is exemplified here, and Moving Picture Expert Group (MPEG) is a generic term of video formats such as MPEG-1, MPEG-2, etc., and the MPEG format is an international standard for a Moving Picture compression algorithm that employs a lossy compression method to reduce redundant information in a Moving Picture. The MPEG compression method reserves the most identical parts of two adjacent pictures, and removes the redundant parts of the subsequent picture and the previous picture, thereby achieving the purpose of compression. The Audio Video Interleaved format (AVI) was introduced by microsoft corporation in 1992 and is known and well known along with microsoft windows operating system version 3.1. The audio and video interleaving can interleave the video and the audio together for synchronous playing, the video format has the advantages of good image quality and capability of being used across a plurality of platforms, but the video format has the defects of overlarge volume and non-uniform compression standard, so that the AVI format video edited by early coding cannot be played by a high-version window operating system media player, and the AVI format video edited by the latest coding cannot be played by a low-version window operating system media player. The dynamic bit rate multimedia video package format (RMVB) is a new video format extended from the RM video format upgrading, and has the advantages of breaking through the average compression sampling mode of the original RM format, reasonably utilizing bit rate resources on the basis of ensuring the average compression ratio, and adopting a lower coding rate for picture scenes with few static and motion scenes, so that more bandwidth space can be reserved, and the bandwidth can be utilized when fast moving picture scenes appear. 
Thus, on the premise of ensuring the quality of a still picture, the picture quality of a moving picture is greatly improved, and thus the delicate balance between the picture quality and the file size is achieved. In addition, the video format also has the unique advantages of built-in subtitles, no need of plug-in support and the like. In the embodiment of the present application, the type of the video format is not limited.
Based on the video format, recovering the short video of the important picture set by using the video recovery model.
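As a simplified picture of what "recovering the short video from the important picture set" involves, the sketch below linearly interpolates the missing in-between frames from adjacent key-frame pictures. The application does not disclose the recovery model at this level of detail, so this is purely illustrative; frames are again flat lists of pixel values.

```python
def recover_frames(key_frames, gap=2):
    """Rebuild a frame sequence by linearly interpolating `gap` frames
    between each pair of adjacent key-frame pictures."""
    recovered = []
    for a, b in zip(key_frames, key_frames[1:]):
        recovered.append(a)
        for step in range(1, gap + 1):
            t = step / (gap + 1)  # interpolation weight between the key frames
            recovered.append([x + (y - x) * t for x, y in zip(a, b)])
    recovered.append(key_frames[-1])
    return recovered

# Two key frames expand into a four-frame sequence (gap of 2 in between).
video = recover_frames([[0.0, 0.0], [30.0, 90.0]], gap=2)
print(len(video), [round(v, 6) for v in video[1]])
```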
S504: casting the recovered short video to the large-screen end.
In one possible example, step S504 includes the following steps C1-C2, wherein,
c1: establishing a casting link with the large-screen end using an H5 web page or an applet.
For example, an H5 web page is a page written in the fifth generation of HTML (HTML5), and "H5" also refers broadly to digital products made with HTML5. HTML is the abbreviation of "HyperText Markup Language", and most web pages viewed on the internet are written in it. "Hypertext" means that a page may contain non-text elements such as pictures, links, music, and programs. "Markup" means that the hypertext is marked up with paired opening and closing tags carrying attributes; the browser decodes the HTML to display the page content.
c2: casting the recovered short video to the large-screen end through the casting link.
In one possible example, step C1 comprises the following steps C11-C12, wherein,
c11: when the casting link is established with the large-screen end using an H5 web page, uploading the recovered short video to the cloud to generate a video link.
c12: sending the video link to the cloud desktop.
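Steps c11-c12 can be sketched as follows. The upload endpoint, link scheme, and message fields are hypothetical placeholders, since the application does not specify a concrete cloud API; the "upload" is simulated by hashing the video bytes into a stable identifier.

```python
import hashlib
import json

def generate_video_link(video_bytes, cloud_base="https://cloud.example.com/v"):
    """c11 (simulated): derive a shareable link for an uploaded video.
    A real implementation would first upload `video_bytes` to cloud storage."""
    video_id = hashlib.sha256(video_bytes).hexdigest()[:12]
    return f"{cloud_base}/{video_id}"

def build_cloud_desktop_message(video_link):
    """c12 (simulated): package the link as a message for the cloud desktop."""
    return json.dumps({"type": "cast_short_video", "video_link": video_link})

link = generate_video_link(b"recovered short video data")
msg = build_cloud_desktop_message(link)
print(msg)
```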
Referring to fig. 6, fig. 6 is a schematic structural diagram of an apparatus for casting short videos from H5 or an applet according to an embodiment of the present disclosure. Based on the above system architecture, the apparatus 600 for casting short videos from H5 or an applet may be a server or a module in a server. The apparatus 600 comprises at least: a splitting module 601, a processing module 602, and a delivering module 603, wherein,
the splitting module 601 is configured to split the video frames of a short video into a preset number of pictures;
the processing module 602 is configured to select key-frame pictures from the preset number of pictures to form an important picture set; upload the important picture set to a cloud space; set up a video recovery model in the cloud space, and recover the short video from the important picture set using the video recovery model;
the delivery module 603 is configured to deliver the recovered short video to the large-screen end.
In one possible example, for splitting the video frames of the short video into a preset number of pictures, the processing module 602 acquires the video frames of the short video, sets a picture format for the pictures, and splits the video frames into the preset number of pictures.
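The splitting and key-frame selection above can be sketched as follows, assuming the video frames are already decoded into flat pixel arrays (e.g. by OpenCV). The mean-absolute-difference criterion is one simple heuristic, not the method claimed here; real selection might instead use codec I-frames or learned saliency.

```python
def select_key_frames(frames, threshold):
    """Select key frames from decoded frames by inter-frame difference.

    `frames` is a list of flat pixel lists. The first frame is always
    kept; a later frame becomes a key frame when its mean absolute
    difference from the last kept frame exceeds `threshold`.
    """
    keys = [frames[0]]
    for frame in frames[1:]:
        last = keys[-1]
        diff = sum(abs(a - b) for a, b in zip(frame, last)) / len(frame)
        if diff > threshold:
            keys.append(frame)
    return keys
```

Frames that barely change are dropped, so the important picture set uploaded to the cloud stays small.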
In one possible example, the processing module 602 trains the video recovery model on a training picture set and optimizes the model according to the training result.
In one possible example, the processing module 602 inputs the training picture set into the video recovery model for recovery training, and adjusts the parameters of the video recovery model according to the training result.
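The train-and-adjust loop can be illustrated with a deliberately tiny stand-in model. A real video recovery model would be a deep network with many parameters, but the structure of the loop (predict, measure the training result, adjust parameters) is the same.

```python
def train_recovery_model(pairs, lr=0.1, epochs=200):
    """Fit a one-parameter toy 'recovery model' y = w * x by gradient
    descent on (degraded, original) training pairs. Illustrative only;
    it mirrors the predict/measure/adjust loop, not an actual model."""
    w = 0.0
    for _ in range(epochs):
        grad = 0.0
        for x, y in pairs:
            grad += 2 * (w * x - y) * x   # d/dw of the squared error
        w -= lr * grad / len(pairs)       # adjust parameter from training result
    return w
```

On pairs where the original is exactly twice the degraded input, the fitted parameter converges to 2.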
In one possible example, the processing module 602 sets the video format of the short video and, based on that format, recovers the short video from the important picture set using the video recovery model.
In one possible example, the processing module 602 establishes a delivery link with the large-screen end using an H5 web page or an applet, and delivers the recovered short video to the large-screen end through the delivery link.
In one possible example, when a delivery link is established with the large-screen end using an H5 web page, the processing module 602 uploads the recovered short video to the cloud to generate a video link, and sends the video link to the cloud desktop.
Referring to fig. 7, fig. 7 is a structural diagram of a device for casting a short video by H5 or an applet according to an embodiment of the present disclosure. As shown in fig. 7, the device 700 includes a processor 701, a memory 702, a communication interface 704, and at least one program 703. The at least one program 703 is stored in the memory 702 and configured to be executed by the processor 701, the at least one program 703 comprising instructions for:
splitting the video frames of a short video into a preset number of pictures, and selecting key-frame pictures from the preset number of pictures to form an important picture set;
uploading the important picture set to a cloud space in picture mode;
setting up a video recovery model in the cloud space, and recovering the short video from the important picture set using the video recovery model;
delivering the recovered short video to the large-screen end.
In one possible example, the at least one program 703 is specifically for executing the instructions of the following steps:
acquiring the video frames of a short video;
setting a picture format for the preset number of pictures, and splitting the video frames of the short video into the preset number of pictures.
In one possible example, the at least one program 703 is specifically for executing the instructions of the following steps:
training the video recovery model using a training picture set, and optimizing the model according to the training result.
In one possible example, the at least one program 703 is specifically for executing the instructions of the following steps:
inputting the training picture set into the video recovery model for recovery training;
adjusting the parameters of the video recovery model according to the training result.
In one possible example, the at least one program 703 is specifically for executing the instructions of the following steps:
setting the video format of the short video;
based on the video format, recovering the short video from the important picture set using the video recovery model.
In one possible example, the at least one program 703 is specifically for executing the instructions of the following steps:
establishing a delivery link with the large-screen end using an H5 web page or an applet;
delivering the recovered short video to the large-screen end through the delivery link.
In one possible example, the at least one program 703 is specifically for executing the instructions of the following steps:
when a delivery link is established with the large-screen end using an H5 web page, uploading the recovered short video to the cloud to generate a video link;
sending the video link to the cloud desktop.
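Taken together, the instructions above form a pipeline. The sketch below wires hypothetical stage callables in that order; every stage implementation is a placeholder, supplied by the caller, not defined by this application.

```python
def cast_short_video(frames, split, select, upload, recover, deliver):
    """Wire the method's steps together. Each argument after `frames`
    is a callable standing in for one stage; the stage implementations
    themselves are placeholders here."""
    pictures = split(frames)       # split into the preset number of pictures
    important = select(pictures)   # key frames -> important picture set
    handle = upload(important)     # picture-mode upload to the cloud space
    video = recover(handle)        # cloud-side video recovery model
    return deliver(video)          # deliver via the H5/applet link
```

With trivial stubs for each stage, the call chain can be exercised end to end.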
Those skilled in the art will appreciate that only one memory 702 and processor 701 are shown in fig. 7 for ease of illustration. In an actual terminal or server, there may be multiple processors and memories. The memory may also be referred to as a storage medium or a storage device, and the like, which is not limited in this application.
It should be understood that, in the embodiment of the present application, the processor may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and the like. The processor may also be a general-purpose microprocessor, a Graphics Processing Unit (GPU), or one or more integrated circuits configured to execute the relevant programs so as to implement the functions required in the embodiments of the present application.
The processor 701 may also be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the present application may be completed by integrated logic circuits of hardware in the processor 701 or by instructions in the form of software. The processor 701 may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. The steps of the methods disclosed in connection with the embodiments of the present application may be embodied directly in a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software modules may be located in storage media well known in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, and registers. The storage medium is located in the memory 702; the processor 701 reads the information in the memory 702 and, in combination with its hardware, completes the functions to be executed by the units included in the method, apparatus, and storage medium of the embodiments of the present application.
It should also be appreciated that the memory referred to in the embodiments of the application may be volatile memory or nonvolatile memory, or may include both. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DR RAM). The memory may also be, but is not limited to, a Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be self-contained and coupled to the processor via a bus, or integrated with the processor; the memory may store a program which, when executed by the processor, performs the steps of the method of the above embodiments.
It should be noted that when the processor is a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, the memory (memory module) is integrated in the processor. It should be noted that the memory described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
It should be understood that the term "and/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects.
In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The steps of the methods disclosed in connection with the embodiments of the present application may be embodied directly in a hardware processor, or performed by a combination of hardware and software modules in a processor. The software modules may be located in storage media well known in the art, such as random access memory, flash memory, read-only memory, programmable or erasable programmable read-only memory, and registers. The storage medium is located in a memory; a processor reads the information in the memory and, in combination with its hardware, performs the steps of the method, which are not described in detail here to avoid repetition.
Those of ordinary skill in the art will appreciate that the various Illustrative Logical Blocks (ILBs) and steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the above embodiments, all or part of the implementation may be realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a processor, the processes or functions described in the embodiments of the application occur in whole or in part. The computer may be a general-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber) or wirelessly (e.g., infrared, radio, microwave), or transmitted by wire to a mobile phone processor. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device, such as a server or a data center, integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid-state disk), among others.
The above description covers only specific embodiments of the present application, but the scope of the present application is not limited thereto; any change or substitution that a person skilled in the art can readily conceive within the technical scope disclosed herein shall be covered by the scope of the present application.

Claims (10)

1. A method for casting a short video by H5 or an applet, comprising the following steps:
splitting the video frames of a short video into a preset number of pictures, and selecting key-frame pictures from the preset number of pictures to form an important picture set;
uploading the important picture set to a cloud space in picture mode;
setting up a video recovery model in the cloud space, and recovering the short video from the important picture set using the video recovery model;
delivering the recovered short video to a large-screen end.
2. The method according to claim 1, wherein splitting the video frames of the short video into the preset number of pictures comprises the following steps:
acquiring the video frames of the short video;
setting a picture format for the preset number of pictures, and splitting the video frames of the short video into the preset number of pictures.
3. The method of claim 1, wherein the video recovery model is trained using a training picture set and optimized according to the training result.
4. The method according to claim 1 or 3, wherein the optimizing according to the training result comprises the following steps:
inputting the training picture set into the video recovery model for recovery training;
adjusting the parameters of the video recovery model according to the training result.
5. The method according to claim 1, wherein recovering the short video from the important picture set using the video recovery model comprises the following steps:
setting the video format of the short video;
based on the video format, recovering the short video from the important picture set using the video recovery model.
6. The method of claim 1, wherein delivering the recovered short video to the large-screen end comprises the following steps:
establishing a delivery link with the large-screen end using an H5 web page or an applet;
delivering the recovered short video to the large-screen end through the delivery link.
7. The method of claim 6, wherein establishing the delivery link with the large-screen end using an H5 web page or an applet comprises the following step:
when a delivery web-page link is established between the H5 web page and the large-screen end, sending the recovered short video to the large-screen end through the delivery web-page link.
8. An apparatus for casting a short video by H5 or an applet, characterized by being adapted to perform the method of any one of claims 1-7.
9. A device for casting a short video by H5 or an applet, comprising a processor, a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the method of any one of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which causes a computer to implement the method of any one of claims 1-7.
CN202210947435.1A 2022-08-09 2022-08-09 Method for H5 or small program to project short video and related device Pending CN115396710A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210947435.1A CN115396710A (en) 2022-08-09 2022-08-09 Method for H5 or small program to project short video and related device


Publications (1)

Publication Number Publication Date
CN115396710A true CN115396710A (en) 2022-11-25

Family

ID=84118653

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210947435.1A Pending CN115396710A (en) 2022-08-09 2022-08-09 Method for H5 or small program to project short video and related device

Country Status (1)

Country Link
CN (1) CN115396710A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106791273A (en) * 2016-12-07 2017-05-31 重庆大学 A kind of video blind restoration method of combination inter-frame information
CN108111860A (en) * 2018-01-11 2018-06-01 安徽优思天成智能科技有限公司 Video sequence lost frames prediction restoration methods based on depth residual error network
CN110392287A (en) * 2019-06-03 2019-10-29 广东有线广播电视网络有限公司 Processing method, device and the TV that TV throws screen authorization throw screen system
CN111369477A (en) * 2020-05-27 2020-07-03 杭州微帧信息科技有限公司 Method for pre-analysis and tool self-adaptation of video recovery task
CN111583112A (en) * 2020-04-29 2020-08-25 华南理工大学 Method, system, device and storage medium for video super-resolution
CN112135186A (en) * 2020-09-22 2020-12-25 深圳乐播科技有限公司 Screen projection method, device, equipment and storage medium based on small program
CN112541965A (en) * 2020-12-02 2021-03-23 国网重庆市电力公司电力科学研究院 Compressed sensing image and video recovery based on tensor approximation and space-time correlation


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Huiqin: "Digital Television Editing Technology" (数字电视编辑技术), ISBN 9787309059670, Fudan University Press, pages: 106 - 112 *

Similar Documents

Publication Publication Date Title
KR100501173B1 (en) Method for Displaying High-Resolution Pictures in Mobile Communication Terminal, Mobile Communication Terminal and File Format Converting System of the Pictures therefor
US11200426B2 (en) Video frame extraction method and apparatus, computer-readable medium
US20180270496A1 (en) Composite video streaming using stateless compression
CN112235626B (en) Video rendering method and device, electronic equipment and storage medium
CN109840879B (en) Image rendering method and device, computer storage medium and terminal
JP2006085681A (en) File conversion and sharing system and method thereof
CN105518614A (en) Screencasting for multi-screen applications
CN111899322A (en) Video processing method, animation rendering SDK, device and computer storage medium
CN111885346A (en) Picture code stream synthesis method, terminal, electronic device and storage medium
CN113973224A (en) Method for transmitting media information, computing device and storage medium
CN115225615B (en) Illusion engine pixel streaming method and device
KR102273141B1 (en) System for cloud streaming service, method of cloud streaming service using still image compression technique and apparatus for the same
KR100892433B1 (en) System and Method for relaying motion pictures using mobile communication device
CN115396710A (en) Method for H5 or small program to project short video and related device
KR20160131827A (en) System for cloud streaming service, method of image cloud streaming service using alpha level of color bit and apparatus for the same
KR102407477B1 (en) System for cloud streaming service, method of image cloud streaming service using alpha value of image type and apparatus for the same
CN114217758A (en) Image display method, image display device, electronic equipment and computer readable storage medium
CN113423016A (en) Video playing method, device, terminal and server
WO2023193524A1 (en) Live streaming video processing method and apparatus, electronic device, computer-readable storage medium, and computer program product
CN110392296B (en) Online playback technology for aircraft custom format trial flight video image
KR102247887B1 (en) System for cloud streaming service, method of cloud streaming service using source information and apparatus for the same
EP3910596A1 (en) Method for compressing and delivery of an image file
CN115801878A (en) Cloud application picture transmission method, equipment and storage medium
CN117692706A (en) Video processing method, device, equipment, readable storage medium and product
KR20170022599A (en) System for cloud streaming service, method of image cloud streaming service using reduction of color bit and apparatus for the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination