CN112995760B - Video processing method, device, equipment and computer storage medium - Google Patents


Info

Publication number
CN112995760B
Authority
CN
China
Prior art keywords
data
fmp4
video
edited
browser
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911307742.8A
Other languages
Chinese (zh)
Other versions
CN112995760A (en)
Inventor
梁鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201911307742.8A priority Critical patent/CN112995760B/en
Publication of CN112995760A publication Critical patent/CN112995760A/en
Application granted granted Critical
Publication of CN112995760B publication Critical patent/CN112995760B/en

Classifications

    • H04N21/440218: Processing of video elementary streams involving reformatting operations by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
    • H04N21/44008: Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/47205: End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H04N21/4782: Web browsing, e.g. WebTV

Abstract

The application discloses a video processing method, apparatus, device, and computer storage medium, belonging to the technical field of multimedia. The method comprises the following steps: acquiring at least one video file; converting the at least one video file into at least one piece of fmp4 data; editing the fmp4 data to be edited among the at least one piece of fmp4 data; and bridging the edited fmp4 data to a video tag through a Media Source Extensions (MSE) application programming interface. By converting the acquired video file into fmp4 data, editing the fmp4 data, and handing the edited fmp4 data to the video tag for playback, the browser can edit and play the video without exporting and re-importing the video file. This solves the problem of the cumbersome video processing procedure in the related art, and achieves the effect of processing video in the browser while simplifying the video processing procedure.

Description

Video processing method, device, equipment and computer storage medium
Technical Field
The present application relates to the field of multimedia technologies, and in particular, to a video processing method, apparatus, device, and computer storage medium.
Background
At present, browsers run on a wide range of terminals such as personal computers, tablet computers, and smartphones, and the browsers on these terminals can play videos.
In a related-art video processing method, the video to be edited is first obtained through a browser, then exported from the browser and saved locally, then imported into local video editing software, and finally edited in that software.
However, because the video must be exported from the browser, saved locally, and then imported into video editing software, this video processing procedure is cumbersome.
Disclosure of Invention
The embodiments of the present application provide a video processing method, apparatus, device, and computer storage medium. The technical solutions are as follows:
according to an aspect of the present application, there is provided a video processing method, the method including:
acquiring at least one video file;
converting the at least one video file into at least one fmp4 data;
editing fmp4 data to be edited in the at least one fmp4 data to obtain edited fmp4 data;
and bridging the edited fmp4 data to a video tag through a Media Source Extensions application programming interface.
Optionally, the editing fmp4 data to be edited in the at least one fmp4 data to obtain edited fmp4 data includes:
storing the at least one fmp4 data;
determining the fmp4 data to be edited from the stored at least one fmp4 data;
and editing the fmp4 data to be edited to obtain the edited fmp4 data.
Optionally, the fmp4 data to be edited includes video data and other data, where the other data includes at least one of audio data and subtitle data, and editing the fmp4 data to be edited among the at least one piece of fmp4 data to obtain edited fmp4 data includes:
eliminating the other data in the fmp4 data to be edited to obtain the edited fmp4 data.
Optionally, converting the at least one video file into at least one piece of fmp4 data includes:
converting the at least one video file into the at least one piece of fmp4 data through a re-encapsulation (transmuxing) process.
Optionally, acquiring at least one video file includes:
acquiring the at least one video file through the Hypertext Transfer Protocol, the Hypertext Transfer Protocol over Secure Socket Layer, or the WebSocket protocol.
Optionally, the editing includes one or more of clipping, splicing, audio culling, subtitle culling, and color adjustment.
According to another aspect of the present application, there is provided a video processing apparatus comprising:
a video acquisition module, configured to acquire multiple video file segments;
a conversion module, configured to convert the multiple video file segments into multiple pieces of fmp4 data;
an editing module, configured to edit at least one piece of fmp4 data to be edited among the multiple pieces of fmp4 data, to obtain edited fmp4 data;
and a playing module, configured to bridge the edited fmp4 data to a video tag through a Media Source Extensions application programming interface.
Optionally, the editing module is configured to:
storing the at least one fmp4 data;
determining the fmp4 data to be edited from the stored at least one fmp4 data;
and editing the fmp4 data to be edited to obtain the edited fmp4 data.
According to another aspect of the present application, there is provided a video processing apparatus comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement a video processing method as described above.
According to another aspect of the present application, there is provided a computer storage medium, wherein at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the computer storage medium, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by a processor to implement the above-mentioned video processing method.
The technical solutions provided in the embodiments of the present application bring at least the following beneficial effects:
the acquired video file is converted into fmp4 data, and after the fmp4 data is edited, the edited fmp4 data is handed to a video tag for playback through the Media Source Extensions interface, so that the browser can edit and play the video without exporting and re-importing the video file. This solves the problem of the cumbersome video processing procedure in the related art, and achieves the effect of processing video in the browser while simplifying the video processing procedure.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings used in the description of the embodiments are briefly introduced below. The following drawings show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of a video processing method according to an embodiment of the present application;
fig. 2 is a flow chart of a video processing method according to an embodiment of the present application;
fig. 3 is a flowchart of another video processing method provided in an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating a processing manner of fmp4 data according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a page display of video data according to an embodiment of the present application;
fig. 6 is a data processing flow chart of a video processing method provided by an embodiment of the present application;
fig. 7 is a block diagram of a video processing apparatus according to an embodiment of the present application;
fig. 8 shows a block diagram of a terminal according to an embodiment of the present application.
The above figures illustrate specific embodiments of the present application, which are described in more detail below. These drawings and the written description are not intended to limit the scope of the inventive concept in any way, but to explain it to those skilled in the art by reference to specific embodiments.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the embodiments of the present application are described in detail below with reference to the accompanying drawings.
At present, browsers running on various terminals offer limited video functionality, typically only basic playback. For example, to play two video segments, one video tag usually plays the first segment and another video tag plays the second, which makes seamless playback across the two segments difficult to achieve. In addition, the browser cannot perform editing operations such as cropping on a video. When a user wants to edit a video, the only option is to export the video from the browser, import it into video editing software, and edit it there; the whole procedure is cumbersome, time-consuming, and laborious.
The embodiment of the application provides a video processing method, a video processing device, video processing equipment and a computer storage medium.
Fig. 1 is a schematic diagram of an implementation environment of a video processing method provided in an embodiment of the present application, where the implementation environment may include a server 11 and a terminal 12.
The server 11 may be a server or a cluster of servers.
The terminal 12 may be a desktop computer, a notebook computer, a mobile phone, a tablet computer, a smart wearable device, or any other terminal capable of running a browser. These terminals may run a desktop operating system such as Windows, Mac OS, or Linux, or a mobile operating system such as Android or iOS.
The terminal 12 may be connected to the server 11 by a wired or wireless connection (a wireless connection is shown in Fig. 1).
The browser may be an independent program, or may be embedded in a web browsing (WebView) control or other programs, which is not limited in the embodiments of the present application.
Fig. 2 is a flowchart of a video processing method according to an embodiment of the present application. This embodiment is described with the method applied to a browser in the terminal 12 shown in Fig. 1. The video processing method may include the following steps:
step 201, at least one video file is obtained.
Step 202, converting at least one video file into at least one fmp4 data.
Step 203, the fmp4 data to be edited among the at least one piece of fmp4 data is edited to obtain edited fmp4 data.
Step 204, the edited fmp4 data is bridged to the video tag through the Media Source Extensions application programming interface.
Here, fragmented MP4 (fmp4) is a segmented variant of the MP4 container format in which a video file is organized as a sequence of independently decodable fragments; it is the format expected by the Media Source Extensions (MSE) Application Programming Interface (API). MSE is a browser API that enables plug-in-free, Web-based streaming media: media streams can be created and modified from JavaScript (an interpreted scripting language).
In summary, in the video processing method provided by this embodiment of the application, the acquired video file is converted into fmp4 data, and after the fmp4 data is edited, the edited fmp4 data is handed to the video tag for playback through the Media Source Extensions interface, so that the browser can edit and play the video without exporting and re-importing the video file. This solves the problem of the cumbersome video processing procedure in the related art, and achieves the effect of processing video in the browser while simplifying the video processing procedure.
Fig. 3 is a flowchart of another video processing method according to an embodiment of the present application. This embodiment is described with the method applied to a browser in the terminal 12 shown in Fig. 1. The video processing method may include the following steps:
Step 301, at least one video file is acquired through a communication protocol.
When the video processing method provided by this embodiment of the application is applied, the browser can download at least one video file over a communication protocol, and the at least one video file may be selected by a user. The format of the at least one video file may be MP4 (Moving Picture Experts Group 4), FLV (Flash Video), AVI (Audio Video Interleaved), or the like; the embodiments of the present application are not limited in this respect.
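As a rough illustration of how a browser-side parser might tell these container formats apart before transmuxing, the sketch below sniffs the file's leading magic bytes. The function name and the decision to sniff bytes (rather than trust a served MIME type) are assumptions for illustration, not something specified by the patent.

```javascript
// Identify a container format from its signature bytes:
// MP4 carries 'ftyp' at offset 4; FLV files begin with 'FLV';
// AVI is a RIFF file with 'AVI ' at offset 8.
function sniffContainer(bytes) {
  const ascii = (off, len) =>
    String.fromCharCode(...bytes.slice(off, off + len));
  if (bytes.length >= 8 && ascii(4, 4) === 'ftyp') return 'mp4';
  if (bytes.length >= 3 && ascii(0, 3) === 'FLV') return 'flv';
  if (bytes.length >= 12 && ascii(0, 4) === 'RIFF' && ascii(8, 4) === 'AVI ') return 'avi';
  return 'unknown';
}

// Demonstration on a synthetic FLV header: 'F', 'L', 'V', version byte.
const flvHeader = new Uint8Array([0x46, 0x4c, 0x56, 0x01]);
const kind = sniffContainer(flvHeader);
```

A parsing module could use the sniffed kind to pick the matching demuxer before re-encapsulating the stream as fmp4.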
The communication protocol used to acquire the at least one video file may be the Hypertext Transfer Protocol (HTTP), the Hypertext Transfer Protocol over Secure Socket Layer (HTTPS), or the WebSocket protocol.
In one application scenario, if a user wants to splice the segment from the 1st to the 10th second of video A with the segment from the 5th to the 9th second of video B, the browser may be operated to download video A and video B in full, or to download only those two segments.
Step 302, the at least one video file is converted into at least one piece of fmp4 data through a re-encapsulation (transmuxing) process.
Transmuxing converts the container format of a video file without re-encoding its contents, and the browser can use it to convert the at least one video file into the at least one piece of fmp4 data. The video files and the fmp4 data may correspond one to one, i.e., each video file is converted into one piece of fmp4 data.
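The patent does not spell out the transmuxing step, but any JavaScript implementation has to start by walking the MP4 box structure: per ISO/IEC 14496-12, each top-level box is a 32-bit big-endian size followed by a four-character type code. A minimal, hypothetical sketch (function and helper names are illustrative):

```javascript
// Walk the top-level boxes of an MP4/fMP4 byte stream.
function parseTopLevelBoxes(bytes) {
  const boxes = [];
  let offset = 0;
  while (offset + 8 <= bytes.length) {
    // 32-bit big-endian box size, then a 4-character type code.
    const size = ((bytes[offset] << 24) | (bytes[offset + 1] << 16) |
                  (bytes[offset + 2] << 8) | bytes[offset + 3]) >>> 0;
    const type = String.fromCharCode(
      bytes[offset + 4], bytes[offset + 5], bytes[offset + 6], bytes[offset + 7]);
    if (size < 8 || offset + size > bytes.length) break; // malformed box
    boxes.push({ type, size, offset });
    offset += size;
  }
  return boxes;
}

// Helper to build a synthetic box (header + zeroed payload) for demonstration.
function makeBox(type, payloadLength) {
  const size = 8 + payloadLength;
  const b = new Uint8Array(size);
  b[0] = (size >>> 24) & 0xff; b[1] = (size >>> 16) & 0xff;
  b[2] = (size >>> 8) & 0xff;  b[3] = size & 0xff;
  for (let i = 0; i < 4; i++) b[4 + i] = type.charCodeAt(i);
  return b;
}

// An fmp4 stream is typically: ftyp, moov, then repeating moof+mdat pairs.
const stream = new Uint8Array([
  ...makeBox('ftyp', 16), ...makeBox('moov', 120),
  ...makeBox('moof', 64), ...makeBox('mdat', 512),
]);
const types = parseTopLevelBoxes(stream).map(b => b.type);
```

A transmuxer built on this walk would read the source container's samples and re-emit them as moof/mdat fragments without touching the encoded frames.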
Each fmp4 data may include video data as well as other data, and the other data may include at least one of audio data and subtitle data.
For example, the browser may include a parsing module that converts the at least one video file into the at least one piece of fmp4 data through the re-encapsulation process.
at least one fmp4 data is stored, step 303.
The browser can temporarily store the at least one piece of fmp4 data so that subsequent steps can select the fmp4 data to be edited. For example, the browser may include a dispatch queue module in which the fmp4 data is temporarily stored.
Since fmp4 data may include video data, audio data, and subtitle data, these three kinds of data may be stored separately. Fig. 4 is a schematic diagram of a processing manner of fmp4 data according to an embodiment of the present application. The video data, audio data, and subtitle data may be stored in a video track, an audio track (which may comprise one or more tracks), and a subtitle track, respectively. The track data may then be decoded by a video decoder, an audio decoder, and other decoders, respectively; the output of the video decoder and of the other decoders may be rendered and played by a renderer, while the output of the audio decoder may be bridged to an audio device for playback.
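The per-track storage of Fig. 4 can be sketched as a small routing structure. All class, field, and sample-shape names here are illustrative assumptions, not APIs from the patent:

```javascript
// Route demuxed fmp4 samples into separate tracks so that each track can
// later be handed to its own decoder (video decoder, audio decoder, etc.).
class TrackStore {
  constructor() {
    this.video = [];
    this.audio = [[]];   // the audio track may itself comprise several tracks
    this.subtitle = [];
  }
  push(sample) {
    switch (sample.kind) {
      case 'video':
        this.video.push(sample);
        break;
      case 'audio': {
        const i = sample.trackIndex !== undefined ? sample.trackIndex : 0;
        if (!this.audio[i]) this.audio[i] = [];
        this.audio[i].push(sample);
        break;
      }
      case 'subtitle':
        this.subtitle.push(sample);
        break;
      default:
        throw new Error('unknown sample kind: ' + sample.kind);
    }
  }
}

const store = new TrackStore();
[{ kind: 'video', dts: 0 }, { kind: 'audio', dts: 0 }, { kind: 'video', dts: 40 }]
  .forEach(s => store.push(s));
```

Keeping the tracks apart like this is what later makes single-track edits (such as audio culling) cheap: each edit touches only one list.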
Step 304, the fmp4 data to be edited is determined from the stored at least one piece of fmp4 data.
The fmp4 data to be edited may be one or more pieces of the at least one fmp4 data, and may be selected by the browser according to the user's operation.
Step 305, the fmp4 data to be edited is edited to obtain the edited fmp4 data.
The editing comprises one or more of cropping, splicing, audio culling, subtitle culling, and color adjustment. Cropping may trim the audio data or the video data; splicing may join two segments of video data or audio data; audio culling and subtitle culling remove the audio data and subtitle data from the fmp4 data; and color adjustment may adjust the color of the video (e.g., add a filter).
In one application scenario, the fmp4 data to be edited may include fmp4 data of 1 st to 10 th seconds of video a and fmp4 data of 5 th to 9 th seconds of video B, and the browser may splice the two fmp4 data.
One implementation of step 305 may include: eliminating the other data in the fmp4 data to be edited to obtain the edited fmp4 data.
In some scenarios only the video data is needed, in which case the browser may cull the audio data and the subtitle data, yielding edited fmp4 data that contains only video data. As shown in Fig. 4, removing the audio data and subtitle data inside the dashed frame produces the edited fmp4 data; because the audio and subtitle data no longer need to be processed, picture rendering efficiency and playback performance improve.
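Assuming the demuxed fmp4 data is represented as per-track sample lists (an illustrative shape, not one the patent prescribes), the culling edit reduces to keeping only the video track; a real implementation would additionally drop the audio and subtitle track boxes when re-serializing the fmp4 segment:

```javascript
// Culling edit: keep the video track, drop everything else.
function cullOtherData(fmp4Data) {
  return {
    video: fmp4Data.video,  // kept: what the page actually renders
    audio: [],              // culled: no audio decode work needed
    subtitle: [],           // culled: no subtitle overlay work needed
  };
}

const edited = cullOtherData({
  video: [{ dts: 0 }, { dts: 40 }],
  audio: [{ dts: 0 }],
  subtitle: [{ start: 0, text: 'hello' }],
});
```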
For example, some advertisement pop-up pages may need only video data. In that case the audio data and subtitle data in the original video file can be removed by the method provided in this embodiment, so that only the video data is rendered, improving playback performance.
Step 306, the edited fmp4 data is bridged to a video tag in the browser through the Media Source Extensions application programming interface.
After obtaining the edited fmp4 data, the browser can bridge it to a video tag in the browser through the MSE API. The video tag here is the HTML video element, which embeds a video player into a Hyper Text Markup Language (HTML) page.
When the video tag receives the edited fmp4 data, it can play that data.
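The bridging step can be sketched as follows. In a browser, `SourceBuffer.appendBuffer` is asynchronous and only one append may be in flight at a time, so segments must be queued and fed one by one on the `updateend` event. The feeder below isolates that queueing logic behind a plain callback so it can be exercised outside a browser; the browser-only wiring appears in a comment, and the codec string there is an example only:

```javascript
// Queue edited fmp4 segments and append them one at a time.
function createFeeder(append) {
  const queue = [];
  let busy = false;
  function drain() {
    if (busy || queue.length === 0) return;
    busy = true;
    // `done` stands in for the SourceBuffer 'updateend' event.
    append(queue.shift(), function done() { busy = false; drain(); });
  }
  return { feed(segment) { queue.push(segment); drain(); } };
}

/* Browser-side wiring (not runnable outside a browser; `videoEl` and
   `editedSegments` are placeholders):
const mediaSource = new MediaSource();
videoEl.src = URL.createObjectURL(mediaSource);
mediaSource.addEventListener('sourceopen', () => {
  const sb = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.64001f"');
  const feeder = createFeeder((seg, done) => {
    sb.addEventListener('updateend', done, { once: true });
    sb.appendBuffer(seg);
  });
  editedSegments.forEach(seg => feeder.feed(seg));
});
*/

// Synchronous stand-in for SourceBuffer.appendBuffer, for demonstration.
const appended = [];
const feeder = createFeeder((seg, done) => { appended.push(seg); done(); });
feeder.feed('init-segment');
feeder.feed('moof+mdat #1');
```

Separating the queue from the browser objects keeps the scheduling logic testable and lets the same feeder serve several SourceBuffers.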
For example, Fig. 5 is a schematic diagram of a page displaying video data according to an embodiment of the present application. Because the page contains only the video data, the browser processes it faster, playback performance is higher, and the user experience is better.
In summary, in the video processing method provided by this embodiment of the application, the acquired video file is converted into fmp4 data in the browser, and after the fmp4 data is edited, the edited fmp4 data is handed to the video tag for playback through the Media Source Extensions interface, so that the browser can edit and play the video without exporting and re-importing the video file. This solves the problem of the cumbersome video processing procedure in the related art, and achieves the effect of processing video in the browser while simplifying the video processing procedure.
In an exemplary implementation manner, an application process of the video processing method provided by the embodiment of the present application may include:
suppose a service needs to splice the segment from the 1st to the 10th second of video A with the segment from the 5th to the 9th second of video B. The browser may first download video A and video B through a download module, then convert them into the fmp4 format through a parsing module and the re-encapsulation process, and store the fmp4 data in the dispatch queue module. An editing module in the browser then crops and splices the data, obtaining the fmp4 data for the 1st to 10th second of video A joined with the fmp4 data for the 5th to 9th second of video B, and a processor in the browser bridges the spliced result to a video tag for playback. Because the video data is preloaded and preprocessed, with the video B data already spliced onto the video A data before playback reaches the boundary, seamless multi-video playback is achieved and the user experience is good.
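The splice described above also requires timestamp bookkeeping: the samples of the video B clip must be re-stamped so the combined timeline is continuous (in real fmp4, by rewriting the `baseMediaDecodeTime` in each fragment's `tfdt` box). A hedged sketch with illustrative names, using seconds for readability where real code would use the track's timescale:

```javascript
// Re-stamp the samples of consecutive clips onto one continuous timeline.
// Each clip covers [start, end) of its source video and carries samples
// with source-video timestamps `t`.
function spliceClips(...clips) {
  const out = [];
  let offset = 0; // where the next clip begins on the spliced timeline
  for (const clip of clips) {
    for (const sample of clip.samples) {
      out.push({ t: offset + (sample.t - clip.start) });
    }
    offset += clip.end - clip.start;
  }
  return out;
}

// Video A, seconds 1-10; video B, seconds 5-9 (sample times are examples).
const a = { start: 1, end: 10, samples: [{ t: 1 }, { t: 5 }, { t: 9 }] };
const b = { start: 5, end: 9, samples: [{ t: 5 }, { t: 7 }] };
const spliced = spliceClips(a, b);
// A's samples land on [0, 9) of the spliced timeline, B's on [9, 13).
```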
In addition, the video processing method provided by this embodiment of the application can splice and crop video files and process media tracks entirely with front-end technology. No backend cooperation is required, which is more efficient and saves cost.
Fig. 6 shows the data processing flow of the video processing method in this application scenario.
The following are apparatus embodiments of the present disclosure, which may be used to perform the method embodiments of the present disclosure. For details not disclosed in the apparatus embodiments, refer to the method embodiments of the present disclosure.
Fig. 7 is a block diagram of a video processing apparatus, which may be a part of or all of a browser, according to an embodiment of the present disclosure. The video processing apparatus may include:
the video obtaining module 710 is configured to obtain a plurality of segments of video files.
A converting module 720, configured to convert the multiple-segment video file into multiple fmp4 data.
The editing module 730 is configured to edit at least one fmp4 data to be edited in the multiple fmp4 data, so as to obtain edited fmp4 data.
And a playing module 740, configured to bridge the edited fmp4 data to a video tag in the browser for playing.
Optionally, the editing module 730 is configured to:
storing at least one fmp4 data;
determining fmp4 data to be edited from the stored at least one fmp4 data;
editing the fmp4 data to be edited to obtain edited fmp4 data.
In summary, in the video processing apparatus provided by this embodiment of the application, an acquired video file is converted into fmp4 data in the browser, and after the fmp4 data is edited, the edited fmp4 data is handed to a video tag for playback through the Media Source Extensions interface, so that the browser can edit and play the video without exporting and re-importing the video file. This solves the problem of the cumbersome video processing procedure in the related art, and achieves the effect of processing video in the browser while simplifying the video processing procedure.
Fig. 8 shows a block diagram of a terminal according to an embodiment of the present application. The terminal 800 may be a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 player, a laptop computer, or a desktop computer. The terminal 800 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 800 includes: a processor 801 and a memory 802.
The processor 801 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 801 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 801 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 801 may be integrated with a Graphics Processing Unit (GPU) which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, processor 801 may also include an Artificial Intelligence (AI) processor for processing computational operations related to machine learning.
Memory 802 may include one or more computer-readable storage media, which may be non-transitory. Memory 802 can also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 802 is used to store at least one instruction for execution by processor 801 to implement the video processing methods provided by method embodiments herein.
In some embodiments, the terminal 800 may further optionally include: a peripheral interface 803 and at least one peripheral. The processor 801, memory 802 and peripheral interface 803 may be connected by bus or signal lines. Various peripheral devices may be connected to peripheral interface 803 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 804, a touch screen display 805, a camera 806, an audio circuit 807, and a power supply 809.
The peripheral interface 803 may be used to connect at least one Input/Output (I/O) related peripheral to the processor 801 and the memory 802. In some embodiments, the processor 801, memory 802, and peripheral interface 803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 801, the memory 802, and the peripheral interface 803 may be implemented on separate chips or circuit boards, which are not limited by this embodiment.
The Radio Frequency circuit 804 is used for receiving and transmitting Radio Frequency (RF) signals, also called electromagnetic signals. The radio frequency circuitry 804 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 804 converts an electrical signal into an electromagnetic signal to be transmitted, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 804 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 804 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or Wireless Fidelity (WiFi) networks. In some embodiments, the rf circuit 804 may further include Near Field Communication (NFC) related circuits, which are not limited in this application.
Display 805 is used to display a User Interface (UI). The UI may include graphics, text, icons, video, and any combination thereof. When the display 805 is a touch display, the display 805 also has the ability to capture touch signals on or above the surface of the display 805. The touch signal may be input to the processor 801 as a control signal for processing. At this point, the display 805 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 805 may be one, providing the front panel of the terminal 800; in other embodiments, the display 805 may be at least two, respectively disposed on different surfaces of the terminal 800 or in a folded design; in still other embodiments, the display 805 may be a flexible display disposed on a curved surface or a folded surface of the terminal 800. Even further, the display 805 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The Display 805 may be made of Liquid Crystal Display (LCD), Organic Light-Emitting Diode (OLED), or the like.
The camera assembly 806 is used to capture images or video. Optionally, the camera assembly 806 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, a Virtual Reality (VR) shooting function, or other fusion shooting functions. In some embodiments, the camera assembly 806 may also include a flash. The flash may be a monochrome temperature flash or a dual color temperature flash. A dual color temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuit 807 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 801 for processing or inputting the electric signals to the radio frequency circuit 804 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 800. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 801 or the radio frequency circuit 804 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 807 may also include a headphone jack.
A power supply 809 is used to supply power to the various components in the terminal 800. The power supply 809 may be an alternating current supply, a direct current supply, a disposable battery, or a rechargeable battery. When the power supply 809 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. A wired rechargeable battery is charged through a wired line, and a wireless rechargeable battery is charged through a wireless coil. The rechargeable battery may also support fast charging technology.
In some embodiments, terminal 800 also includes one or more sensors 810. The one or more sensors 810 include, but are not limited to: acceleration sensor 811, gyro sensor 812, pressure sensor 813, optical sensor 815, and proximity sensor 816.
The acceleration sensor 811 may detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the terminal 800. For example, the acceleration sensor 811 may be used to detect the components of the gravitational acceleration in three coordinate axes. The processor 801 may control the touch screen 805 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 811. The acceleration sensor 811 may also be used for acquisition of motion data of a game or a user.
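As a loose illustration of the landscape/portrait decision described above, the sketch below picks an orientation from the gravity components on the device's x and y axes. The function name, axis convention, and comparison rule are illustrative assumptions, not taken from the patent.

```javascript
// Hypothetical sketch: when the terminal is upright, gravity acts mostly
// along the y axis; when it is turned on its side, mostly along the x axis.
// A processor such as 801 could use this to choose the UI orientation.
function orientationFromGravity(gx, gy) {
  return Math.abs(gx) > Math.abs(gy) ? "landscape" : "portrait";
}
```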
The gyro sensor 812 may detect a body direction and a rotation angle of the terminal 800, and the gyro sensor 812 may acquire a 3D motion of the user on the terminal 800 in cooperation with the acceleration sensor 811. The processor 801 may implement the following functions according to the data collected by the gyro sensor 812: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization while shooting, game control, and inertial navigation.
Pressure sensors 813 may be disposed on the side bezel of terminal 800 and/or underneath touch display 805. When the pressure sensor 813 is disposed on the side frame of the terminal 800, the holding signal of the user to the terminal 800 can be detected, and the processor 801 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 813. When the pressure sensor 813 is disposed at a lower layer of the touch display screen 805, the processor 801 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 805. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
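The left-right hand recognition mentioned above could, in the simplest case, compare the pressure reported by the two side frames; a right-handed grip tends to press the palm harder against the left frame than the fingertips press the right. The sketch below is a hypothetical illustration of that heuristic; the function name, inputs, and decision rule are assumptions, not the patent's method.

```javascript
// Hypothetical sketch: infer the holding hand from side-frame pressure
// readings (arbitrary units). Equal readings are treated as inconclusive.
function holdingHand(leftPressure, rightPressure) {
  if (leftPressure === rightPressure) return "unknown";
  return leftPressure > rightPressure ? "right" : "left";
}
```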
The optical sensor 815 is used to collect the ambient light intensity. In one embodiment, the processor 801 may control the display brightness of the touch screen 805 based on the ambient light intensity collected by the optical sensor 815. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 805 is increased; when the ambient light intensity is low, the display brightness of the touch display 805 is turned down. In another embodiment, the processor 801 may also dynamically adjust the shooting parameters of the camera assembly 806 based on the ambient light intensity collected by the optical sensor 815.
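The brightness control described above can be sketched as a mapping from ambient light intensity to a display brightness fraction. The lux range, clamp values, and linear curve below are illustrative assumptions; the patent only states that brightness is turned up in bright light and down in dim light.

```javascript
// Hypothetical sketch: map ambient light (lux) to a brightness fraction in
// [minBrightness, 1], saturating at maxLux. Parameter names are assumptions.
function brightnessForAmbientLight(lux, { minBrightness = 0.1, maxLux = 1000 } = {}) {
  const fraction = Math.min(lux, maxLux) / maxLux;
  return Math.max(minBrightness, fraction);
}
```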
A proximity sensor 816, also known as a distance sensor, is typically provided on the front panel of the terminal 800. The proximity sensor 816 is used to collect the distance between the user and the front surface of the terminal 800. In one embodiment, when the proximity sensor 816 detects that the distance between the user and the front surface of the terminal 800 gradually decreases, the processor 801 controls the touch display 805 to switch from the bright-screen state to the dark-screen state; when the proximity sensor 816 detects that the distance between the user and the front surface of the terminal 800 gradually increases, the processor 801 controls the touch display 805 to switch from the dark-screen state to the bright-screen state.
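The proximity-driven screen switching described above reduces to a small state transition: a shrinking user-to-panel distance darkens the screen (e.g. during a call), a growing distance lights it again. The sketch below is an illustrative assumption of that logic; names and units are not from the patent.

```javascript
// Hypothetical sketch: decide the next screen state from two successive
// distance samples (cm). An unchanged distance keeps the current state.
function nextScreenState(prevDistanceCm, distanceCm, currentState) {
  if (distanceCm < prevDistanceCm) return "dark";
  if (distanceCm > prevDistanceCm) return "bright";
  return currentState;
}
```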
Those skilled in the art will appreciate that the configuration shown in fig. 8 is not intended to be limiting of terminal 800 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
According to another aspect of the present application, there is provided a video processing apparatus comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, the at least one instruction, at least one program, set of codes, or set of instructions being loaded and executed by the processor to implement the video processing method as provided by the above embodiments.
According to another aspect of the present application, there is provided a computer storage medium having at least one instruction, at least one program, code set, or set of instructions stored therein, the at least one instruction, the at least one program, the code set, or the set of instructions being loaded and executed by a processor to implement the video processing method provided by the above-described embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (9)

1. A method of video processing, the method comprising:
acquiring at least two sections of video files through a downloading module of a browser;
converting the at least two segments of video files into at least two fmp4 data through a parsing module of the browser, and storing the at least two fmp4 data in a scheduling queue module of the browser;
removing other data from fmp4 data to be edited among the at least two fmp4 data through an editing module in the browser, and cutting and splicing the fmp4 data to be edited to obtain spliced fmp4 data, wherein the fmp4 data to be edited comprises video data and the other data, and the other data comprises at least one of audio data and subtitle data;
bridging, by a processor in the browser, the spliced fmp4 data to a video tag in the browser through a Media Source Extensions application program interface, so that the spliced fmp4 data can be played in the browser through the video tag.
2. The method of claim 1, further comprising:
storing the at least two fmp4 data;
determining the fmp4 data to be edited from the at least two stored fmp4 data.
3. The method of claim 1, wherein converting the at least two segments of video files into at least two fmp4 data comprises:
converting the at least two segments of video files into the at least two fmp4 data through a decapsulation process.
4. The method of claim 1, wherein obtaining at least two segments of the video file comprises:
acquiring the at least two segments of video files through a hypertext transfer protocol, a hypertext transfer security protocol, or a WebSocket protocol.
5. The method according to claim 1 or 2, characterized in that the method further comprises: performing color adjustment on the fmp4 data to be edited.
6. A video processing apparatus, wherein the video processing apparatus is a part or all of a browser, the video processing apparatus comprising:
the video acquisition module is used for acquiring at least two sections of video files;
the conversion module is used for converting the at least two segments of video files into at least two fmp4 data and storing the at least two fmp4 data in a scheduling queue module of the browser;
the editing module is used for removing the other data from fmp4 data to be edited among the at least two fmp4 data, and then cutting and splicing the fmp4 data to be edited to obtain spliced fmp4 data, wherein the fmp4 data to be edited comprises video data and the other data, and the other data comprises at least one of audio data and subtitle data;
and the playing module is used for bridging the spliced fmp4 data to a video tag in the browser through a Media Source Extensions application program interface, so that the spliced fmp4 data can be played in the browser through the video tag.
7. The video processing apparatus of claim 6, wherein the editing module is configured to:
storing the at least two fmp4 data;
determining the fmp4 data to be edited from the at least two stored fmp4 data.
8. A video processing device comprising a processor and a memory, said memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, said at least one instruction, said at least one program, set of codes, or set of instructions being loaded and executed by said processor to implement a video processing method according to any one of claims 1 to 5.
9. A computer storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the video processing method according to any one of claims 1 to 5.
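The claims above describe converting downloaded video into fmp4 (fragmented MP4) segments, cutting and splicing them, and bridging the result to a video tag through Media Source Extensions. The sketch below is an illustrative first step only, not the patent's implementation: it walks the top-level ISO BMFF boxes of an fmp4 buffer (each box is a 4-byte big-endian size followed by a 4-character type), which any parsing or editing module would need before extracting moof/mdat fragments for splicing. The commented snippet shows the standard MSE bridging pattern; the codec string there is an assumption.

```javascript
// Illustrative sketch: list the top-level ISO BMFF boxes of an fmp4 buffer.
function listTopLevelBoxes(bytes) {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  const boxes = [];
  let offset = 0;
  while (offset + 8 <= bytes.byteLength) {
    const size = view.getUint32(offset); // 4-byte big-endian box size
    const type = String.fromCharCode(    // 4-character box type, e.g. "moof"
      bytes[offset + 4], bytes[offset + 5], bytes[offset + 6], bytes[offset + 7]);
    boxes.push({ type, size, offset });
    if (size < 8) break;                 // malformed or extended-size box: stop
    offset += size;
  }
  return boxes;
}

// In the browser, spliced fmp4 bytes would then be bridged to a <video>
// element through Media Source Extensions, roughly as follows (the codec
// string is an assumption for illustration):
//   const ms = new MediaSource();
//   video.src = URL.createObjectURL(ms);
//   ms.addEventListener("sourceopen", () => {
//     const sb = ms.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
//     sb.appendBuffer(splicedFmp4Bytes);
//   });
```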
CN201911307742.8A 2019-12-18 2019-12-18 Video processing method, device, equipment and computer storage medium Active CN112995760B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911307742.8A CN112995760B (en) 2019-12-18 2019-12-18 Video processing method, device, equipment and computer storage medium


Publications (2)

Publication Number Publication Date
CN112995760A CN112995760A (en) 2021-06-18
CN112995760B true CN112995760B (en) 2022-06-28

Family

ID=76343808

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911307742.8A Active CN112995760B (en) 2019-12-18 2019-12-18 Video processing method, device, equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN112995760B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115086717A (en) * 2022-06-01 2022-09-20 北京元意科技有限公司 Method and system for real-time editing, rendering and synthesizing of audio and video works
CN116962815B (en) * 2023-09-20 2023-11-21 成都华栖云科技有限公司 Method for playing MKV video in original mode by browser

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101740082A (en) * 2009-11-30 2010-06-16 孟智平 Method and system for clipping video based on browser
CN106210451A (en) * 2016-08-02 2016-12-07 成都索贝数码科技股份有限公司 A kind of method and system of multi-track video editing based on html5
CN107613029A (en) * 2017-11-05 2018-01-19 深圳市青葡萄科技有限公司 A kind of virtual desktop remote method and system suitable for mobile terminal or Web ends
CN108718416A (en) * 2018-06-15 2018-10-30 深圳市安佳威视信息技术有限公司 Embedded video camera audio-visual system and its method is broadcast live in HTML5
CN108965397A (en) * 2018-06-22 2018-12-07 中央电视台 Cloud video editing method and device, editing equipment and storage medium
CN109088887A (en) * 2018-09-29 2018-12-25 北京金山云网络技术有限公司 A kind of decoded method and device of Streaming Media
CN110446010A (en) * 2019-08-02 2019-11-12 江西航天鄱湖云科技有限公司 Video monitoring method, device, storage medium, server and system based on web
CN110545470A (en) * 2018-05-29 2019-12-06 北京字节跳动网络技术有限公司 Media file loading method and device and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7769819B2 (en) * 2005-04-20 2010-08-03 Videoegg, Inc. Video editing with timeline representations
US9852761B2 (en) * 2009-03-16 2017-12-26 Apple Inc. Device, method, and graphical user interface for editing an audio or video attachment in an electronic message
US20110030031A1 (en) * 2009-07-31 2011-02-03 Paul Lussier Systems and Methods for Receiving, Processing and Organizing of Content Including Video
CN104135628B (en) * 2013-05-03 2018-01-30 安凯(广州)微电子技术有限公司 A kind of video editing method and terminal
CN110545456B (en) * 2018-05-29 2022-04-01 北京字节跳动网络技术有限公司 Synchronous playing method and device of media files and storage medium
CN109120877B (en) * 2018-10-23 2021-11-02 努比亚技术有限公司 Video recording method, device, equipment and readable storage medium
CN109474855A (en) * 2018-11-08 2019-03-15 北京微播视界科技有限公司 Video editing method, device, computer equipment and readable storage medium storing program for executing




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code: country code HK, legal event code DE, reference document number 40051648

GR01 Patent grant