CN114007084B - Video clip cloud storage method and device - Google Patents
- Publication number
- CN114007084B (application CN202210001240.8A)
- Authority
- CN
- China
- Prior art keywords
- video
- preprocessed
- highlight
- cloud server
- real-time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/2181—Source of audio or video content, e.g. local disk arrays comprising remotely distributed storage units, e.g. when movies are replicated over a plurality of video servers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/231—Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
- H04N21/4398—Processing of audio elementary streams involving reformatting operations of audio signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440245—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display the reformatting operation being performed only on part of the stream, e.g. a region of the image or a time segment
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
Abstract
The application provides a video clip cloud storage method, which comprises the following steps: acquiring a real-time video stream in a local area network; segmenting the real-time video stream according to a preset duration threshold to form a preprocessed video, and performing video segmentation, segment sorting, segment discarding, special-effect adding and music adding on the preprocessed video according to a preset clipping rule to generate a highlight video; and sending the highlight video to a cloud server for storage. Because the real-time video is clipped and segmented before being uploaded to the cloud server, unnecessary video segments are removed, reducing the network load while making the transmission insensitive to network fluctuation. The application also provides a video clip cloud storage device.
Description
Technical Field
The application provides cloud storage technology, and particularly relates to a video clip cloud storage method. The application also relates to a video clip cloud storage device.
Background
With the development of internet technology, the large capacity and data security of cloud storage have made it an increasingly common way to store data.
In the prior art, real-time video is uploaded to the cloud by transmitting the real-time video of every camera device over the network and processing the video in the cloud.
Disclosure of Invention
In order to solve the prior-art problem that uploading real-time video to the cloud is unstable under network fluctuation, the application provides a video clip cloud storage method and a video clip cloud storage device.
The application provides a video clip cloud storage method, which comprises the following steps:
acquiring a real-time video stream in a local area network;
segmenting the real-time video stream according to a preset duration threshold to form a preprocessed video, and performing video segmentation, segment sorting, segment discarding, special-effect adding and music adding on the preprocessed video according to a preset clipping rule to generate a highlight video;
and sending the highlight video to a cloud server for storage.
Optionally, the preset clipping rule includes:
scoring the human face image of each frame of the preprocessed video;
segmenting the preprocessed video according to the score to generate video segments, and discarding the video segments with the score lower than a preset score threshold;
and sequencing the video clips according to the scores.
Optionally, the scoring comprises:
and scoring according to the human body action amplitude and the facial expression action amplitude of the human face image.
Optionally, the human body action amplitude and the facial expression action amplitude are obtained by comparing the human face image with a standard human face image.
Optionally, the acquiring the real-time video stream in the local area network includes: and taking over the camera equipment in the local area network, and acquiring the real-time video stream through the camera equipment.
The present application further provides a video clip cloud storage device, including:
the acquisition module is used for acquiring real-time video stream in the local area network;
the editing module is used for segmenting the real-time video stream according to a preset duration threshold to form a preprocessed video, and for performing video segmentation, segment sorting, segment discarding, special-effect adding and music adding on the preprocessed video according to a preset clipping rule to generate a highlight video;
and the sending module is used for sending the highlight video to a cloud server for storage.
Optionally, the clipping module further comprises:
the scoring unit is used for scoring the human face image of each frame of the preprocessed video;
the processing unit is used for segmenting the preprocessed video according to the score to generate a video segment, and discarding the video segment with the score lower than a preset score threshold;
and the sequencing unit is used for sequencing the video clips according to the scores.
Optionally, the scoring comprises:
and scoring according to the human body action amplitude and the facial expression action amplitude of the human face image.
Optionally, the human body action amplitude and the facial expression action amplitude are obtained by comparing the human face image with a standard human face image.
Optionally, the obtaining module further includes:
and the taking-over unit is used for taking over the camera equipment in the local area network and acquiring the real-time video stream through the camera equipment.
The application has the advantages over the prior art that:
the application provides a video clip cloud storage method, which comprises the following steps: acquiring a real-time video stream in a local area network; the real-time video stream is segmented according to a preset time length threshold value to form a preprocessed video, and the preprocessed video is subjected to video segmentation, segment sequencing, segment discarding, special effect adding and music adding according to a preset segmentation rule to generate a brocade video; and sending the brocade video to a cloud server for storage. The video is intercepted and segmented before the real-time video is uploaded to the cloud server, unnecessary video segments are removed, and the influence of network fluctuation on real-time video transmission is not needed to be worried while the network load is reduced.
Drawings
Fig. 1 is a flow chart of a video clip cloud storage in the present application.
FIG. 2 is a flow chart of the pre-processing video clip rules in the present application.
Fig. 3 is a schematic view of a video clip cloud storage device in the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the application can be implemented in many ways other than those described herein, and those skilled in the art can make similar variations without departing from the spirit of the application; the application is therefore not limited to the specific implementations disclosed below.
The application provides a video clip cloud storage method, which comprises the following steps: acquiring a real-time video stream in a local area network; segmenting the real-time video stream according to a preset duration threshold to form a preprocessed video, and performing video segmentation, segment sorting, segment discarding, special-effect adding and music adding on the preprocessed video according to a preset clipping rule to generate a highlight video; and sending the highlight video to a cloud server for storage. Because the real-time video is clipped and segmented before being uploaded to the cloud server, unnecessary video segments are removed, reducing the network load while making the transmission insensitive to network fluctuation.
Fig. 1 is a flow chart of the cloud storage of video clips in the present application.
Referring to fig. 1, S101 obtains a real-time video stream in a local area network.
One or more image pickup devices are connected to the local area network, and images picked up by the image pickup devices can be uploaded to the local area network and transferred to terminal equipment or storage equipment through the local area network. The storage device in this application refers to a cloud server, and the cloud server receives image data of the image pickup device in real time.
An edge computing box is provided for the cloud server; the edge computing box is arranged outside the cloud server and provides edge computing services.
In the application, the edge computing box is connected to the camera devices through the local area network and is simultaneously connected to the cloud service. When a camera device starts shooting real-time video, the edge computing box communicates with the camera device in place of the cloud server and acquires the device's real-time video.
Referring to fig. 1, in S102, the real-time video stream is segmented according to a preset duration threshold to form a preprocessed video, and the preprocessed video is subjected to video segmentation, segment sorting, segment discarding, special-effect adding and music adding according to a preset clipping rule to generate a highlight video.
While receiving the real-time video, the edge computing box segments it according to a preset duration, and each segmented piece of real-time video is used as a preprocessed video. Preferably, the preset duration is 10 minutes.
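The segmentation step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the frame representation (timestamp, payload) and the generator-based interface are assumptions; only the 10-minute threshold comes from the text.

```python
# Sketch: cut a continuous stream of timestamped frames into fixed-duration
# "preprocessed video" segments. The 10-minute threshold matches the patent's
# preferred value; the (timestamp, frame_data) representation is assumed.
from typing import Iterable, List, Tuple

SEGMENT_SECONDS = 10 * 60  # preset duration threshold: 10 minutes

def segment_stream(frames: Iterable[Tuple[float, bytes]]) -> List[List[Tuple[float, bytes]]]:
    """Group (timestamp, frame_data) pairs into consecutive segments,
    starting a new segment whenever the duration threshold is reached."""
    segments: List[List[Tuple[float, bytes]]] = []
    current: List[Tuple[float, bytes]] = []
    start_ts = None
    for ts, data in frames:
        if start_ts is None:
            start_ts = ts
        if ts - start_ts >= SEGMENT_SECONDS:
            segments.append(current)          # close the finished segment
            current, start_ts = [], ts        # this frame opens the next one
        current.append((ts, data))
    if current:
        segments.append(current)              # flush the trailing partial segment
    return segments
```

In a real edge box the frames would arrive from a camera stream rather than a list, but the windowing logic is the same.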
Specifically, the edge computing box has a storage unit: after the preprocessed video is cut according to the preset duration, it is backed up in this storage unit, and if the preprocessed video is lost due to network fluctuation during network transmission, the backup is retransmitted.
Before the preprocessed video is transmitted to the cloud server, it is edited to further reduce the amount of transmitted data; preferably, the editing is performed according to a preset clipping rule.
FIG. 2 is a flow diagram of the pre-processing video clip rules in the present application.
Referring to fig. 2, in S201 the human face image in each frame of the preprocessed video is scored; in S202 the preprocessed video is segmented according to the scores to generate video segments, and segments with scores lower than a preset score threshold are discarded.
Each frame of the preprocessed video is extracted, and the human face image in each frame is scored. When a frame contains no human body image, or the human body image is incomplete or unclear, the frame's score is set to 0 and the frame is discarded.
The scoring is performed according to the human body action amplitude and the facial expression action amplitude of the human face image. Specifically, to score the human face in one frame, the pixels of the whole human image are first extracted, pixel blocks of the facial features and of the four limbs are then extracted separately, and the ratios of these pixel blocks to the pixels of the human image are computed to obtain the score. The score is calculated as:

P = (A·S1 + B·S2 + C·S3) × 100%

where P is the score; A, B and C are manually set scale factors; S1 = V1/V2 is the ratio of the facial-feature pixel blocks V1 to the face pixel blocks V2; S2 = W1/W2 is the ratio of the limb pixel blocks W1 to the human-body pixel blocks W2; and S3 = K1/K2 is the ratio of selected pixel-block distances K1 and K2, such as the ratio of the eye-corner distance to the earlobe distance.
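The claimed formula P = (A·S1 + B·S2 + C·S3) × 100% can be transcribed directly. How V1, V2, W1, W2, K1 and K2 are measured from pixels is not specified by the patent, so they appear here as plain numeric inputs; the default scale factors are placeholders (the patent says A, B and C are set manually).

```python
# Transcription of the claimed score formula. S1 = V1/V2 (facial-feature to
# face pixel blocks), S2 = W1/W2 (limb to body pixel blocks), S3 = K1/K2
# (selected pixel-block distances). A, B, C are manually set scale factors;
# the defaults of 1.0 are placeholders, not values from the patent.
def frame_score(v1: float, v2: float, w1: float, w2: float,
                k1: float, k2: float,
                a: float = 1.0, b: float = 1.0, c: float = 1.0) -> float:
    """Score one frame from its pixel-block measurements."""
    s1, s2, s3 = v1 / v2, w1 / w2, k1 / k2
    return (a * s1 + b * s2 + c * s3) * 100.0
```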
S203, sorting the video clips according to the scores.
As mentioned above, frames that do not meet the condition are discarded, which makes the video incoherent; the remaining video frames are therefore sorted according to their scores. Specifically, video frames whose scores are in a coherent state are first combined into small segments of video, and the small segments are then sorted by their rating scores. The coherent state means that the score difference between two adjacent frames is smaller than a preset score threshold; in the application, the preset score threshold is set manually.
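The grouping-and-sorting step can be sketched as follows. Using each segment's mean score as its rating is an assumption for illustration; the patent does not say how a segment's overall score is aggregated from its frames.

```python
# Sketch: consecutive frames whose score difference stays below the preset
# threshold form one "small segment"; segments are then ordered by score.
# The mean-score aggregate is an assumption, not specified by the patent.
from typing import List

def group_and_sort(scores: List[float], threshold: float) -> List[List[float]]:
    """Split a frame-score sequence into coherent segments, then sort the
    segments by descending mean score."""
    segments: List[List[float]] = [[scores[0]]] if scores else []
    for prev, cur in zip(scores, scores[1:]):
        if abs(cur - prev) < threshold:
            segments[-1].append(cur)   # coherent with the previous frame
        else:
            segments.append([cur])     # score jump: start a new segment
    return sorted(segments, key=lambda s: sum(s) / len(s), reverse=True)
```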
After the sorting is finished, sound effects and special effects are added according to the duration of each small segment to complete the highlight video. Alternatively, a person skilled in the art may score the human body or the human face by other scoring methods.
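The patent does not name a tool for the sound-effect and music step; one common way to mux a music track onto an edited clip is ffmpeg, sketched here by building the command line only. File names are placeholders, and `-shortest` trims the audio to the clip's duration.

```python
# Sketch: build (not run) an ffmpeg command that adds a music track to a
# finished clip. File names are placeholders; -c:v copy keeps the video
# stream untouched, and -shortest stops at the shorter of the two inputs.
def music_mux_command(clip: str, music: str, out: str) -> list:
    """Return an ffmpeg argv that muxes background music onto a video clip."""
    return [
        "ffmpeg", "-y",
        "-i", clip,        # the edited video segment
        "-i", music,       # background music matched to the segment duration
        "-c:v", "copy",    # do not re-encode the video
        "-c:a", "aac",     # encode the new audio track
        "-shortest",       # end output with the shorter input
        out,
    ]
```

In practice this list would be passed to `subprocess.run` on a host with ffmpeg installed.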
Referring to fig. 1, in S103, the highlight video is sent to a cloud server for storage.
After the highlight video is finished, it is packaged and sent to the cloud server for storage; a post-production editor then retrieves the highlight video from the cloud server and performs final editing to obtain the final video. Because the first round of editing has been completed automatically, the editor's post-production time can be greatly reduced.
The application also provides a video clip cloud storage device, which comprises an acquisition module 301, a clipping module 302 and a sending module 303.
Fig. 3 is a schematic view of a video clip cloud storage device in the present application.
Referring to fig. 3, an obtaining module 301 is configured to obtain a real-time video stream in a local area network.
One or more image pickup devices are connected to the local area network, and images picked up by the image pickup devices can be uploaded to the local area network and transmitted to terminal equipment or storage equipment through the local area network. The storage device in the present application refers to a cloud server, and the cloud server receives image data of the image pickup device in real time.
An edge computing box is provided for the cloud server; the edge computing box is arranged outside the cloud server and provides edge computing services.
In this application, the obtaining module 301 further includes a take-over unit, which takes over the camera devices in the local area network and acquires the real-time video stream through them. When a camera device starts shooting real-time video, the edge computing box communicates with the camera device in place of the cloud server and acquires the device's real-time video.
Referring to fig. 3, the clipping module 302 is configured to segment the real-time video stream according to a preset duration threshold to form a preprocessed video, and to perform video segmentation, segment sorting, segment discarding, special-effect adding and music adding on the preprocessed video according to a preset clipping rule to generate a highlight video.
While receiving the real-time video, the edge computing box segments it according to a preset duration, and each segmented piece of real-time video is used as a preprocessed video. Preferably, the preset duration is 10 minutes.
Specifically, the edge computing box has a storage unit: after the preprocessed video is cut according to the preset duration, it is backed up in this storage unit, and if the preprocessed video is lost due to network fluctuation during network transmission, the backup is retransmitted.
Before the preprocessed video is transmitted to the cloud server, it is edited to further reduce the amount of transmitted data; preferably, the editing is performed according to a preset clipping rule.
Referring to fig. 2, in S201 the human face image in each frame of the preprocessed video is scored; in S202 the preprocessed video is segmented according to the scores to generate video segments, and segments with scores lower than a preset score threshold are discarded.
Each frame of the preprocessed video is extracted, and the human face image in each frame is scored. When a frame contains no human body image, or the human body image is incomplete or unclear, the frame's score is set to 0 and the frame is discarded.
The scoring is performed according to the human body action amplitude and the facial expression action amplitude of the human face image. Specifically, to score the human face in one frame, the pixels of the whole human image are first extracted, pixel blocks of the facial features and of the four limbs are then extracted separately, and the ratios of these pixel blocks to the pixels of the human image are computed to obtain the score. The score is calculated as:

P = (A·S1 + B·S2 + C·S3) × 100%

where P is the score; A, B and C are manually set scale factors; S1 = V1/V2 is the ratio of the facial-feature pixel blocks V1 to the face pixel blocks V2; S2 = W1/W2 is the ratio of the limb pixel blocks W1 to the human-body pixel blocks W2; and S3 = K1/K2 is the ratio of selected pixel-block distances K1 and K2, such as the ratio of the eye-corner distance to the earlobe distance.
S203, sorting the video clips according to the scores.
As mentioned above, frames that do not meet the condition are discarded, which makes the video incoherent; the remaining video frames are therefore sorted according to their scores. Specifically, video frames whose scores are in a coherent state are first combined into small segments of video, and the small segments are then sorted by their rating scores. The coherent state means that the score difference between two adjacent frames is smaller than a preset score threshold; in the application, the preset score threshold is set manually.
After the sorting is finished, sound effects and special effects are added according to the duration of each small segment to complete the highlight video. Alternatively, a person skilled in the art may score the human body or the human face by other scoring methods.
Referring to fig. 3, the sending module 303 is configured to send the highlight video to a cloud server for storage.
After the highlight video is finished, it is packaged and sent to the cloud server for storage; a post-production editor then retrieves the highlight video from the cloud server and performs final editing to obtain the final video. Because the first round of editing has been completed automatically, the editor's post-production time can be greatly reduced.
Claims (8)
1. A video clip cloud storage method is characterized by comprising the following steps:
acquiring a real-time video stream in a local area network, wherein the local area network is connected with one or more camera devices, images shot by the camera devices are uploaded to the local area network and transmitted through the local area network to a storage device, the storage device is a cloud server, the cloud server is provided with an edge computing box, and the edge computing box is connected with the camera devices through the local area network and acquires the real-time video stream from the camera devices;
segmenting the real-time video stream according to a preset duration threshold to form a preprocessed video, editing the preprocessed video before it is transmitted to the cloud server, and performing video segmentation, segment sorting, segment discarding, special-effect adding and music adding on the preprocessed video according to a preset clipping rule to generate a highlight video, wherein the edge computing box is provided with a storage unit, the preprocessed video is backed up after being segmented according to the preset duration threshold, and the backup is retransmitted if the preprocessed video is lost due to network fluctuation during transmission; the preset clipping rule comprises scoring the human face image of each frame of the preprocessed video: first extracting the pixels of the whole human image, then extracting pixel blocks of the facial features and of the four limbs respectively, and calculating the ratios of the pixel blocks to the pixels of the human image to obtain the score, wherein the score is calculated as:
P = (A·S1 + B·S2 + C·S3) × 100%

wherein P is the score; A, B and C are manually set scale factors; S1 is the ratio of the facial-feature pixel blocks to the face pixel blocks, V1 and V2 being the corresponding pixel blocks; S2 is the ratio of the limb pixel blocks to the human-body pixel blocks, W1 and W2 being the corresponding pixel blocks; and S3 is the ratio of selected pixel-block distances, K1 and K2 being the corresponding pixel blocks;
and sending the highlight video to a cloud server for storage, wherein after the highlight video is finished it is packaged and sent to the cloud server for storage, and a post-production editor retrieves the highlight video from the cloud server and performs final editing to obtain the final video.
2. The video clip cloud storage method according to claim 1, wherein the preset clipping rule further comprises:
segmenting the preprocessed video according to the score to generate video segments, and discarding the video segments with the score lower than a preset score threshold;
and sequencing the video clips according to the scores.
3. The video clip cloud storage method according to claim 2, wherein the scoring comprises:
and scoring according to the human body action amplitude and the facial expression action amplitude of the human face image.
4. The video clip cloud storage method according to claim 3, wherein the human body action amplitude and the facial expression action amplitude are obtained by comparing the human face image with a standard human face image.
5. A video clip cloud storage device, comprising:
the system comprises an acquisition module, a storage module and a video processing module, wherein the acquisition module is used for acquiring real-time video streams in a local area network, the local area network is connected with one or more camera devices, images shot by the camera devices are uploaded to the local area network and are transmitted to the storage device through the local area network, the storage device is a cloud server, the cloud server is provided with an edge computing box, and the edge computing box is connected with the camera devices through the local area network and acquires real-time videos of the camera devices;
the editing module is used for segmenting the real-time video stream according to a preset duration threshold to form a preprocessed video, editing the preprocessed video before it is transmitted to the cloud server, and performing video segmentation, segment sorting, segment discarding, special-effect adding and music adding on the preprocessed video according to a preset clipping rule to generate a highlight video, wherein the edge computing box is provided with a storage unit which backs up the preprocessed video after it is segmented according to the preset duration threshold, and the backup is retransmitted if the preprocessed video is lost due to network fluctuation during transmission; the preset clipping rule comprises scoring the human face image of each frame of the preprocessed video: first extracting the pixels of the whole human image, then extracting pixel blocks of the facial features and of the four limbs respectively, and calculating the ratios of the pixel blocks to the pixels of the human image to obtain the score, wherein the score is calculated as:
P = (A·S1 + B·S2 + C·S3) × 100%

wherein P is the score; A, B and C are manually set scale factors; S1 is the ratio of the facial-feature pixel blocks to the face pixel blocks, V1 and V2 being the corresponding pixel blocks; S2 is the ratio of the limb pixel blocks to the human-body pixel blocks, W1 and W2 being the corresponding pixel blocks; and S3 is the ratio of selected pixel-block distances, K1 and K2 being the corresponding pixel blocks;
and the sending module is used for sending the highlight video to a cloud server for storage, wherein after the highlight video is finished it is packaged and sent to the cloud server for storage, and a post-production editor retrieves the highlight video from the cloud server and performs final editing to obtain the final video.
6. The video clip cloud storage device of claim 5, wherein said clipping module further comprises:
the processing unit is used for segmenting the preprocessed video according to the score to generate a video segment, and discarding the video segment with the score lower than a preset score threshold;
and the sequencing unit is used for sequencing the video clips according to the scores.
7. The video clip cloud storage device of claim 6, wherein said scoring comprises:
and scoring according to the human body action amplitude and the facial expression action amplitude of the human face image.
8. The video clip cloud storage device according to claim 7, wherein the human body action amplitude and the facial expression action amplitude are obtained by comparing the human face image with a standard human face image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210001240.8A CN114007084B (en) | 2022-01-04 | 2022-01-04 | Video clip cloud storage method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210001240.8A CN114007084B (en) | 2022-01-04 | 2022-01-04 | Video clip cloud storage method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114007084A CN114007084A (en) | 2022-02-01 |
CN114007084B true CN114007084B (en) | 2022-09-09 |
Family
ID=79932584
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210001240.8A Active CN114007084B (en) | 2022-01-04 | 2022-01-04 | Video clip cloud storage method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114007084B (en) |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070212023A1 (en) * | 2005-12-13 | 2007-09-13 | Honeywell International Inc. | Video filtering system |
CN109121021A (en) * | 2018-09-28 | 2019-01-01 | 北京周同科技有限公司 | A kind of generation method of Video Roundup, device, electronic equipment and storage medium |
CN109862388A (en) * | 2019-04-02 | 2019-06-07 | 网宿科技股份有限公司 | Generation method, device, server and the storage medium of the live video collection of choice specimens |
CN109982109B (en) * | 2019-04-03 | 2021-08-03 | 睿魔智能科技(深圳)有限公司 | Short video generation method and device, server and storage medium |
CN110401873A (en) * | 2019-06-17 | 2019-11-01 | 北京奇艺世纪科技有限公司 | Video clipping method, device, electronic equipment and computer-readable medium |
CN112347941B (en) * | 2020-11-09 | 2021-06-08 | 南京紫金体育产业股份有限公司 | Motion video collection intelligent generation and distribution method based on 5G MEC |
CN112445935B (en) * | 2020-11-25 | 2023-07-04 | 开望(杭州)科技有限公司 | Automatic generation method of video selection collection based on content analysis |
CN113676671B (en) * | 2021-09-27 | 2023-06-23 | 北京达佳互联信息技术有限公司 | Video editing method, device, electronic equipment and storage medium |
- 2022-01-04: application CN202210001240.8A filed in China; granted as CN114007084B (Active)
Also Published As
Publication number | Publication date |
---|---|
CN114007084A (en) | 2022-02-01 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||