CN114885192A - Video processing method, video processing apparatus, and storage medium

Video processing method, video processing apparatus, and storage medium

Info

Publication number
CN114885192A
CN114885192A (application number CN202110163787.3A)
Authority
CN
China
Prior art keywords
video
processing
slice
video processing
video slice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110163787.3A
Other languages
Chinese (zh)
Inventor
武小军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202110163787.3A
Publication of CN114885192A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41 Structure of client; Structure of client peripherals
    • H04N 21/4104 Peripherals receiving signals from specially adapted client devices
    • H04N 21/4122 Peripherals receiving signals from specially adapted client devices additional display device, e.g. video projector
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N 21/440218 Processing of video elementary streams involving reformatting operations of video signals, by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
    • H04N 21/440245 Processing of video elementary streams involving reformatting operations of video signals, the reformatting operation being performed only on part of the stream, e.g. a region of the image or a time segment
    • H04N 21/47 End-user applications
    • H04N 21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/47205 End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8456 Structuring of content by decomposing the content in the time domain, e.g. in time segments
    • H04N 21/85 Assembly of content; Generation of multimedia applications
    • H04N 21/854 Content authoring
    • H04N 21/8547 Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computer Security & Cryptography (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present disclosure relates to a video processing method, a video processing apparatus, and a storage medium. The video processing method is applied to a first device and includes: slicing a video to be processed to obtain first video slices, and distributing the first video slices and video processing parameter information to at least one second device; acquiring second video slices processed by the at least one second device, where a second video slice is obtained by the second device processing a received video slice based on the video processing parameter information; and generating a target video based on the second video slices. Through the embodiments of the present disclosure, the first device can perform video processing with the assistance of the at least one second device and share the video processing capability of the at least one second device, so that the video processing speed and efficiency are improved.

Description

Video processing method, video processing apparatus, and storage medium
Technical Field
The present disclosure relates to the field of video processing technologies, and in particular, to a video processing method, a video processing apparatus, and a storage medium.
Background
With the development of technology, intelligent mobile terminals have become increasingly common in daily life; their image capture hardware has improved, their functions have grown more powerful, and the video files they shoot have become larger and larger. Because mobile terminals are simple to operate and convenient to use, and their processing power keeps increasing, users are increasingly willing to use them for photo editing, video editing, and similar processing, such as video clipping, adding filters, and adding special effects.
However, mobile terminals differ in hardware configuration, so their ability to process high-resolution video also differs. For example, if the highest resolution a user's phone supports for video processing is 1080P, the phone cannot satisfy the user's need to process 4K video.
In addition, video processing is computationally expensive, so processing a video on a phone takes a long time; the longer the video, the longer the processing takes, which makes editing video on a phone inconvenient for the user.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a video processing method, a video processing apparatus, and a storage medium.
According to an aspect of the embodiments of the present disclosure, there is provided a video processing method applied to a first device, the video processing method including: slicing a video to be processed to obtain a first video slice, and distributing the first video slice and video processing parameter information to at least one second device; acquiring a second video slice processed by the at least one second device, where the second video slice is obtained by the second device processing the received video slice based on the video processing parameter information; and generating a target video based on the second video slice.
In an embodiment, slicing a video to be processed to obtain a first video slice, and distributing the first video slice and video processing parameter information to at least one second device includes: slicing the video to be processed according to the key frames to obtain first video slices, and recording the time stamp of each first video slice; distributing the first video slice carrying the timestamp to the second device.
In an embodiment, generating the target video based on the second video slice includes: performing data verification on the obtained second video slice; caching the second video slice that passes the data verification, and retransmitting the first video slice corresponding to a second video slice that fails the data verification to the second device so that the second device processes it again; and if all the second video slices pass the data verification, combining the second video slices that pass the data verification into the target video.
In an embodiment, the video processing method further comprises: video processing capability information of a device having video processing capability is acquired and stored.
In an embodiment, the video processing method further comprises: determining that other devices are needed to assist in processing the video to be processed, and determining the at least one second device that meets the video processing requirement based on the video processing capability information.
In an embodiment, the determining the at least one second device that meets video processing requirements comprises: for each of the first video slices, determining the second device that satisfies video processing requirements according to one or a combination of the following conditions: the video processing capability information of the second device is matched with video processing requirement information included in the video processing parameter information corresponding to the first video slice; the state of the second device is an idle state.
According to another aspect of the embodiments of the present disclosure, there is provided a video processing method applied to a second device, the video processing method including: receiving video processing parameter information and a first video slice, where the video processing parameter information and the first video slice are distributed by a first device; and processing the first video slice based on the video processing parameter information, and sending a second video slice obtained by the processing to the first device.
In an embodiment, processing the first video slice based on the video processing parameter information comprises: decoding the first video slice based on video information of the video to be processed; rendering the decoded first video slice based on the video information, and encoding the rendered first video slice to obtain the processed second video slice.
In an embodiment, the video processing method further comprises: establishing a communication connection with the first device in response to a video processing assistance request initiated by the first device; and sending the second device's own video processing capability information based on the communication connection.
According to still another aspect of the embodiments of the present disclosure, there is provided a video processing apparatus applied to a first device, the video processing apparatus including: the processing module is used for carrying out slicing processing on a video to be processed to obtain a first video slice, distributing the first video slice and video processing parameter information to at least one second device, and generating a target video based on the second video slice; and the obtaining module is configured to obtain the second video slice processed by the at least one second device, where the second video slice is obtained by processing the received video slice by the second device based on the video processing parameter information.
In an embodiment, the processing module performs slicing processing on a video to be processed in the following manner to obtain a first video slice, and distributes the first video slice and video processing parameter information to at least one second device: slicing the video to be processed according to the key frames to obtain first video slices, and recording the time stamp of each first video slice; distributing the first video slice carrying the timestamp to the second device.
In an embodiment, the processing module generates the target video based on the second video slice in the following manner: performing data verification on the obtained second video slice; caching the second video slice that passes the data verification, and retransmitting the first video slice corresponding to a second video slice that fails the data verification to the second device so that the second device processes it again; and if all the second video slices pass the data verification, combining the second video slices that pass the data verification into the target video.
In an embodiment, the obtaining module is further configured to obtain and store video processing capability information of a device having video processing capability.
In one embodiment, the video processing apparatus further comprises: a determining module configured to determine that other devices are needed to assist in processing the video to be processed, and to determine the at least one second device that meets the video processing requirement based on the video processing capability information.
In one embodiment, the determining module determines the at least one second device that meets the video processing requirements by: for each of the first video slices, determining the second device that satisfies video processing requirements according to one or a combination of the following conditions: the video processing capability information of the second device is matched with video processing requirement information included in the video processing parameter information corresponding to the first video slice; the state of the second device is an idle state.
According to still another aspect of the embodiments of the present disclosure, there is provided a video processing apparatus applied to a second device, the video processing apparatus including: a receiving module configured to receive video processing parameter information and a first video slice, where the video processing parameter information and the first video slice are distributed by a first device; and a processing module configured to process the first video slice based on the video processing parameter information and send a second video slice obtained by the processing to the first device.
In one embodiment, the processing module processes the first video slice based on the video processing parameter information in the following manner: decoding the first video slice based on video information of the video to be processed; rendering the decoded first video slice based on the video information, and encoding the rendered first video slice to obtain the processed second video slice.
In an embodiment, the video processing apparatus further comprises: a connection module configured to establish a communication connection with a first device in response to a video processing assistance request initiated by the first device; and a sending module configured to send the second device's own video processing capability information based on the communication connection.
According to still another aspect of the embodiments of the present disclosure, there is provided a video processing apparatus including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to: performing the video processing method of any of the preceding claims.
According to yet another aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium having instructions stored thereon, which, when executed by a processor of a mobile terminal, enable the mobile terminal to perform the video processing method of any one of the preceding claims.
The technical solutions provided by the embodiments of the present disclosure can have the following beneficial effects: through the embodiments of the present disclosure, the first device can perform video processing with the assistance of the at least one second device and share the video processing capability of the at least one second device, so that the video processing speed and efficiency are improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a view illustrating an application scenario of a video processing method according to an exemplary embodiment of the present disclosure.
Fig. 2 is a flow chart illustrating a video processing method according to an exemplary embodiment of the present disclosure.
Fig. 3 is a flowchart illustrating a video processing method according to another exemplary embodiment of the present disclosure.
Fig. 4 is a flowchart illustrating a video processing method according to another exemplary embodiment of the present disclosure.
Fig. 5 is a flowchart illustrating a video processing method according to another exemplary embodiment of the present disclosure.
Fig. 6 is a flowchart illustrating a video processing method according to another exemplary embodiment of the present disclosure.
Fig. 7 is a flowchart illustrating a video processing method according to an exemplary embodiment of the present disclosure.
Fig. 8 is a flowchart illustrating a video processing method according to another exemplary embodiment of the present disclosure.
Fig. 9 is a flowchart illustrating a video processing method according to another exemplary embodiment of the present disclosure.
Fig. 10 is a flowchart illustrating a method for a first device to establish a connection with a second device according to an exemplary embodiment of the present disclosure.
Fig. 11 is a flowchart illustrating a method for a second device to establish a connection with a first device according to an exemplary embodiment of the present disclosure.
Fig. 12 is a flowchart illustrating a method of video processing by a first device according to an exemplary embodiment of the present disclosure.
Fig. 13 is a flowchart illustrating a method of video processing by a second device according to an exemplary embodiment of the present disclosure.
Fig. 14 is a block diagram illustrating a video processing apparatus according to an exemplary embodiment of the present disclosure.
Fig. 15 is a block diagram illustrating a video processing apparatus according to yet another exemplary embodiment of the present disclosure.
Fig. 16 is a block diagram illustrating a video processing device according to an exemplary embodiment of the present disclosure.
Fig. 17 is a block diagram illustrating a video processing apparatus according to another exemplary embodiment of the present disclosure.
Fig. 18 is a block diagram illustrating an apparatus for video processing according to an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
As intelligent mobile terminals have become part of everyday life, video processing has increasingly moved from the computer to the terminal itself. Improved image capture hardware produces ever larger video files, and user demand for processing high-resolution and long videos on the terminal keeps growing.
However, mobile terminals differ in hardware configuration, so their ability to process high-resolution video also differs. For example, if the highest resolution a user's phone supports for video processing is 1080P, the phone cannot satisfy the user's need to process 4K video.
In addition, video processing is computationally expensive, so processing a video on a phone takes a long time; the longer the video, the longer the processing takes, which makes editing video on a phone inconvenient for the user.
A Central Processing Unit (CPU) is the computation and control core of a device and mainly interprets instructions and processes data. A Graphics Processing Unit (GPU) is a microprocessor dedicated to image computation. In video processing, operations such as adding special effects, adding filters, and video cropping are mainly performed on the GPU, whose processing speed for such operations is significantly higher than that of the CPU.
In view of this, the present disclosure provides a video processing method: when the master device determines during video processing that slave devices are needed to assist, it slices the video to be processed based on the capability information of the slave devices, distributes the slices to a plurality of slave devices, and has the slave devices process the video slices, thereby sharing the video processing capability of the slave devices.
Fig. 1 shows an application scenario of a video processing method according to an exemplary embodiment of the present disclosure. As shown in fig. 1, the first device is the initiator device that needs other devices to assist with video processing while processing a video to be processed. The first device opens its service port and establishes communication connections with a plurality of second devices that have video processing capability. By performing video processing with the assistance of the at least one second device, the first device shares the video processing capability of the at least one second device.
Fig. 2 is a flowchart illustrating a video processing method applied to a first device according to an exemplary embodiment of the present disclosure. The first device may be, for example, a smartphone, a tablet, or a wearable device. Referring to fig. 2, the video processing method includes the following steps.
In step S101, a video to be processed is sliced to obtain a first video slice, and the first video slice and video processing parameter information are distributed to at least one second device.
In step S102, a second video slice processed by at least one second device is obtained, and the second video slice is obtained by processing the received video slice by the second device based on the video processing parameter information.
In step S103, a target video is generated based on the second video slice.
In this embodiment of the present disclosure, the first device may be the initiator device that needs other devices to assist with video processing while processing the video to be processed. Other devices may be needed, for example, when the video to be processed is long: relying only on the first device's own processing capability would make processing slow and time-consuming. As another example, a first device with a low configuration may support processing video only up to 1080P and cannot process a 4K video on its own.
In the disclosed embodiments, the video processing capability information includes Graphics Processing Unit (GPU) configuration information and/or Central Processing Unit (CPU) configuration information. It can be understood that when the first device and the second device perform video processing, they rely mainly on the GPU, whose processing speed is significantly higher than that of the CPU.
The first device slices the video to be processed to obtain a first video slice, and distributes the video processing parameter information of the video to be processed and the first video slice to at least one second device. In the embodiment of the present disclosure, the video processing parameter information may include video parameter information of the video to be processed, for example its duration, file format, encoding, resolution, and bit rate; it further includes the video processing the first device wants performed on the video, for example adding a special effect, a subtitle, or a filter at a set time point, or clipping. The at least one second device processes the received first video slice based on the video processing parameter information. The first device then acquires the second video slice processed by the at least one second device, packages the second video slices, and generates the target video, completing the processing of the video to be processed.
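To make the shape of this parameter information concrete, the following sketch models it with plain data classes. It is only an illustration: the class and field names (VideoParams, EditOperation, ProcessingParameters, bitrate_kbps, and so on) are assumptions, not structures defined by the patent.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class VideoParams:
        """Parameter information of the video to be processed (assumed fields)."""
        duration_s: float        # total duration in seconds
        file_format: str         # e.g. "mp4"
        codec: str               # e.g. "h264"
        resolution: str          # e.g. "3840x2160"
        bitrate_kbps: int        # code stream / bit rate

    @dataclass
    class EditOperation:
        """One processing step the first device wants applied (assumed)."""
        kind: str                # e.g. "special_effect", "subtitle", "filter", "clip"
        start_s: float           # time point at which the operation applies
        end_s: float
        options: dict = field(default_factory=dict)

    @dataclass
    class ProcessingParameters:
        """Video processing parameter information distributed with each slice."""
        video: VideoParams
        operations: List[EditOperation]

    # Example: ask assisting devices to add a filter to the first ten seconds of a 4K clip.
    params = ProcessingParameters(
        video=VideoParams(duration_s=600.0, file_format="mp4", codec="h264",
                          resolution="3840x2160", bitrate_kbps=20000),
        operations=[EditOperation(kind="filter", start_s=0.0, end_s=10.0,
                                  options={"name": "warm"})],
    )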
For example, the first device may slice the video to be processed into one or more first video slices. In one example there is a single first video slice, and the first device distributes it to at least one second device; different second devices may perform different video processing on that one slice. In another example there are multiple first video slices, and the first device distributes them to at least one second device; different second devices may perform different video processing on different first video slices. In yet another example there are multiple first video slices, the first device distributes all of them to each second device, and different second devices perform different processing on the same slices: with two first video slices (say, first video slice A and first video slice B) and two second devices (say, second device A and second device B), the first device sends both slices to second device A and to second device B; second device A may add special effects to first video slice A and first video slice B, while second device B adds filters to first video slice A and first video slice B.
According to the embodiment of the disclosure, during processing of a video to be processed, the first device slices the video, distributes the first video slices obtained from the slicing together with the video processing parameter information to at least one second device, the at least one second device processes the first video slices, and the first device acquires the processed second video slices to generate the target video. The first device processes the video with the assistance of the at least one second device and shares the second devices' video processing capability, so that the video processing speed and efficiency are improved.
Fig. 3 is a flowchart illustrating a video processing method according to another exemplary embodiment of the present disclosure. As shown in fig. 3, the video processing method includes the following steps.
In step S201, a video to be processed is sliced according to a key frame to obtain first video slices, and a timestamp of each first video slice is recorded.
In step S202, the first video slice carrying the timestamp is distributed to the second device.
In the embodiment of the disclosure, the first device slices the video to be processed at key frames to obtain a plurality of first video slices. A key frame of the video to be processed is, for example, the starting point or turning point of a motion or animation, that is, a frame at which motion changes. The differences between the other frames of the video and the key frames can then be stored relative to the key frames. The timestamp of each first video slice is recorded; the timestamp is used to synchronize time when the video is rendered or decoded, and it determines the temporal order of the plurality of first video slices.
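As a rough illustration of slicing at key frames and recording one timestamp per slice, the sketch below cuts an already-decoded frame index at every key frame. The FrameInfo and FirstVideoSlice types and their fields are assumptions made for the example, not the patent's data format.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class FrameInfo:
        pts_s: float        # presentation timestamp in seconds
        is_key: bool        # True for a key frame (e.g. start/turning point of motion)

    @dataclass
    class FirstVideoSlice:
        index: int
        timestamp_s: float  # timestamp of the slice, used for ordering and sync
        frames: List[FrameInfo]

    def slice_at_keyframes(frames: List[FrameInfo]) -> List[FirstVideoSlice]:
        """Cut the frame sequence at every key frame and record each slice's timestamp."""
        slices: List[FirstVideoSlice] = []
        current: List[FrameInfo] = []
        for frame in frames:
            if frame.is_key and current:
                # A new key frame starts the next slice; close the current one.
                slices.append(FirstVideoSlice(len(slices), current[0].pts_s, current))
                current = []
            current.append(frame)
        if current:
            slices.append(FirstVideoSlice(len(slices), current[0].pts_s, current))
        return slices

    # Example: key frames at 0 s and 2 s produce two slices with timestamps 0.0 and 2.0.
    frames = [FrameInfo(0.0, True), FrameInfo(1.0, False),
              FrameInfo(2.0, True), FrameInfo(3.0, False)]
    for s in slice_at_keyframes(frames):
        print(s.index, s.timestamp_s, len(s.frames))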
The first device distributes the first video slices carrying the timestamps to the second devices in a preset order, and the at least one second device processes the first video slices it receives. The first video slices received by different second devices may differ, that is, different second devices process different first video slices; for example, second device A adds a special effect to first video slice A while second device B adds a filter to first video slice B. Different second devices may also process the same slice; for example, second device A adds subtitles to first video slice A while second device B adds a filter to first video slice A. The preset order may be a user-defined order, the timestamp order, or the like; the slices may be distributed one after another, or a first video slice may be distributed to the at least one second device at the same time, with the at least one second device performing the video processing corresponding to that slice.
According to the embodiment of the disclosure, the first device slices the video to be processed at key frames to obtain the first video slices and distributes the first video slices and the video processing parameter information to the at least one second device, which processes them, so that the at least one second device effectively assists the first device in video processing and the video processing speed is increased.
Fig. 4 is a flowchart illustrating a video processing method according to another exemplary embodiment of the present disclosure. As shown in fig. 4, the video processing method includes the following steps.
In step S301, data verification is performed on the acquired second video slice.
In step S302, the second video slice that passes the data verification is cached, the first video slice corresponding to the second video slice that fails the data verification is retransmitted to the second device, and the second device processes the first video slice that fails the data verification again.
In step S303, if all the second video slices pass the data verification, the second video slices passing the data verification are combined into the target video.
In the embodiment of the disclosure, during processing of the video to be processed, the first device determines that other devices are needed to assist with the video processing, determines at least one second device that meets the video processing requirement based on the video processing capability information, slices the video to be processed to obtain the first video slices, and distributes the first video slices and the video processing parameter information to the at least one second device, which processes the video slices. After a second device finishes processing, it returns the encoded second video slice to the first device, and the first device performs data verification on the obtained second video slice, for example checking whether the second video slice processed by the second device is empty, whether its order is correct, and so on.
The first device caches the second video slices for which the data check passes. For a second video slice that fails the data verification, the first device retransmits the corresponding first video slice to a second device with video processing capability, and that second device processes the first video slice again.
If all the second video slices pass the data verification, the second video slices that pass the data verification are packaged to generate the target video.
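A minimal sketch of the verification-and-merge step, assuming each returned slice carries its payload, its timestamp, and a checksum computed by the sender. The checksum comparison stands in for whatever data verification a real implementation performs, and the byte concatenation stands in for proper remuxing into a container; all names are illustrative.

    import hashlib
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class SecondVideoSlice:
        timestamp_s: float
        payload: bytes            # encoded, processed slice data
        checksum: str             # hex digest sent along with the slice

    def verify(slice_: SecondVideoSlice) -> bool:
        """Data verification: the slice must be non-empty and its checksum must match."""
        if not slice_.payload:
            return False
        return hashlib.sha256(slice_.payload).hexdigest() == slice_.checksum

    def merge(slices: List[SecondVideoSlice]) -> Optional[bytes]:
        """If every slice passes verification, combine them in timestamp order."""
        if not all(verify(s) for s in slices):
            return None   # caller retransmits the corresponding first video slices
        ordered = sorted(slices, key=lambda s: s.timestamp_s)
        # A real implementation would remux these into a container rather than concatenate bytes.
        return b"".join(s.payload for s in ordered)

    payload = b"\x00\x01"
    ok = SecondVideoSlice(0.0, payload, hashlib.sha256(payload).hexdigest())
    print(merge([ok]) is not None)   # True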
According to the embodiment of the disclosure, the first device slices the video to be processed at key frames to obtain the first video slices, distributes the first video slices and the video processing parameter information to the second devices in timestamp order, the second devices process the first video slices, and the first device acquires the processed second video slice data, performs data verification on the second video slices, and combines the second video slices that pass the data verification into the target video.
Fig. 5 is a flowchart illustrating a video processing method according to another exemplary embodiment of the present disclosure. As shown in fig. 5, the video processing method includes the following steps.
In step S401, video processing capability information of a device having video processing capability is acquired and stored.
In step S402, a video to be processed is sliced to obtain a first video slice, and the first video slice and video processing parameter information are distributed to at least one second device.
In step S403, a second video slice processed by at least one second device is obtained, and the second video slice is obtained by processing the received video slice by the second device based on the video processing parameter information.
In step S404, a target video is generated based on the second video slice.
In the embodiment of the present disclosure, video processing capability information of devices is acquired, and each device and its corresponding video processing capability information are stored. The video processing capability information of a device may be acquired directly, or the first device may open its service port and establish communication connections with devices within a preset range; for example, the first device searches for assisting devices within a preset distance and connects to them via Bluetooth, a wireless network, a local area network, or the like. A device acting as an assisting party sends the video processing capability information of its own device, such as CPU and GPU information, to the first device over the communication connection. The first device verifies the identity of the assisting device, confirms that it has video processing capability, and stores the received GPU and CPU information of the assisting device. In this way, when the first device needs other devices to assist with video processing, it determines at least one second device that meets the video processing requirement based on the stored assisting-device information and establishes a communication connection with that second device.
Alternatively, the first device may open its service port only when, during processing of the video to be processed, it needs other devices to assist; it then establishes communication connections with devices within its preset range and acquires the video processing capability information they send. The first device determines, based on the video processing capability information of the assisting devices, at least one second device that meets the video processing requirement, that is, at least one second device that will cooperate with the first device to process the video. The first device disconnects from devices that do not meet the video processing requirements.
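On the first device, the stored capability information can be kept in a simple registry keyed by device identifier, as sketched below. The DeviceCapability fields and the registry interface are assumptions for illustration only.

    from dataclasses import dataclass
    from typing import Dict

    @dataclass
    class DeviceCapability:
        device_id: str
        cpu: str                  # e.g. "8-core @ 2.8 GHz"
        gpu: str                  # e.g. "Adreno 650"
        max_resolution: str       # highest resolution the device can process, e.g. "4K"

    class CapabilityRegistry:
        """First-device store of assisting devices' video processing capability."""
        def __init__(self) -> None:
            self._devices: Dict[str, DeviceCapability] = {}

        def register(self, cap: DeviceCapability) -> None:
            # Called when an assisting device sends its CPU/GPU information
            # over the established communication connection.
            self._devices[cap.device_id] = cap

        def remove(self, device_id: str) -> None:
            # Called when the connection to a device is dropped.
            self._devices.pop(device_id, None)

        def all_devices(self) -> Dict[str, DeviceCapability]:
            return dict(self._devices)

    registry = CapabilityRegistry()
    registry.register(DeviceCapability("tablet-01", cpu="8-core", gpu="Mali-G78",
                                       max_resolution="4K"))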
The first device slices the video to be processed to obtain the first video slices, and distributes the first video slices and the video processing parameter information to the at least one second device. The at least one second device processes the received first video slices based on the video processing parameter information. The first device acquires the second video slices processed by the at least one second device, packages them, generates the target video, and completes the processing of the video to be processed.
According to the embodiment of the disclosure, the first device acquires and stores the video processing capability information of devices with video processing capability, determines at least one second device that meets the video processing requirement based on that information, slices the video to be processed, distributes the first video slices obtained from the slicing and the video processing parameter information to the second devices, the at least one second device processes the first video slices, and the first device acquires the processed second video slices to generate the target video. The first device processes the video with the assistance of the at least one second device and shares the second devices' video processing capability, so that the video processing speed and efficiency are improved.
Fig. 6 is a flowchart illustrating a video processing method according to another exemplary embodiment of the present disclosure. As shown in fig. 6, the video processing method includes the following steps.
In step S501, video processing capability information of a device having video processing capability is acquired and stored.
In step S502, it is determined that other devices are needed to assist in processing the video to be processed, and the video to be processed is sliced to obtain a first video slice.
In step S503, at least one second device that satisfies the video processing requirement is determined based on the video processing capability information.
In step S504, the first video slice and the video processing parameter information are distributed to at least one second device.
In step S505, a second video slice processed by at least one second device is obtained, where the second video slice is obtained by processing the received video slice by the second device based on the video processing parameter information.
In step S506, a target video is generated based on the second video slice.
In the embodiment of the disclosure, in the process of video processing on a video to be processed, a first device performs slicing processing on the video to be processed to obtain a first video slice, and distributes the first video slice and video processing parameter information to at least one second device. When the first device determines that the video processing needs to be assisted by other devices, at least one second device meeting the video processing requirement is determined based on the video processing capability information of the devices, that is, the at least one second device and the first device cooperate to perform video processing on the video to be processed.
In an embodiment, for each first video slice obtained from the slicing, the first device determines, based on the video processing capability information of the other devices, a second device that meets the video processing requirements of that first video slice, that is, a second device whose video processing capability information matches the video processing requirement information included in the video processing parameter information corresponding to the first video slice. For example, if the processing required for the first video slice is adding subtitles, another device with subtitling capability may serve as the second device for that slice. This widens the pool of candidate second devices and makes the cooperation between the first device and the plurality of second devices more flexible and effective.
In an embodiment, the state of a second device that meets the video processing requirement is an idle state; distributing the first video slices and the video processing parameter information to at least one idle second device increases the processing speed and thus the video processing speed. The video processing parameter information of the video to be processed and the first video slices are distributed to the at least one second device, and the at least one second device processes the received first video slices based on the video processing parameter information. The first device acquires the second video slices processed by the at least one second device, packages them, generates the target video, and completes the processing of the video to be processed.
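A sketch of how the two selection conditions (capability match and idle state) might be combined when choosing second devices for a slice. The hypothetical max_resolution field and is_idle flag stand in for whatever requirement and state information an implementation actually compares.

    from dataclasses import dataclass
    from typing import List

    RESOLUTION_ORDER = ["720P", "1080P", "2K", "4K"]

    @dataclass
    class CandidateDevice:
        device_id: str
        max_resolution: str   # highest resolution the device can process
        is_idle: bool         # state of the device

    def meets_requirement(device: CandidateDevice, required_resolution: str) -> bool:
        """Capability information matches the slice's requirement information."""
        return (RESOLUTION_ORDER.index(device.max_resolution)
                >= RESOLUTION_ORDER.index(required_resolution))

    def select_second_devices(devices: List[CandidateDevice],
                              required_resolution: str) -> List[CandidateDevice]:
        """Keep devices that are idle and capable of processing the slice."""
        return [d for d in devices
                if d.is_idle and meets_requirement(d, required_resolution)]

    candidates = [CandidateDevice("phone-02", "1080P", True),
                  CandidateDevice("tablet-01", "4K", True),
                  CandidateDevice("tv-01", "4K", False)]
    print([d.device_id for d in select_second_devices(candidates, "4K")])  # ['tablet-01']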
In an embodiment, in the process of performing video processing on a video to be processed, if it is determined that other devices are needed to assist in performing video processing, at least one second device meeting video processing requirements is determined based on video processing capability information of the devices. The video processing capability information of the device may be directly acquired by the first device, or may be acquired by the first device after the first device establishes a connection with the device. The method comprises the steps of slicing a video to be processed, distributing a first video slice obtained after slicing and video processing parameter information to second equipment, respectively processing the first video slice by at least one second equipment, and acquiring the processed second video slice by the first equipment to generate a target video.
According to the embodiment of the disclosure, the first device slices the video to be processed, determines at least one second device that meets the video processing requirement based on the video processing capability information, and distributes the first video slices obtained from the slicing together with the video processing parameter information to the second devices; the at least one second device processes the first video slices, and the first device acquires the processed second video slices to generate the target video. The first device processes the video with the assistance of the at least one second device and shares the second devices' video processing capability, so that the video processing speed and efficiency are improved.
Fig. 7 is a flowchart illustrating a video processing method according to another exemplary embodiment of the present disclosure. The video processing method is applied to the second device. The second device may be, for example, a smartphone, a tablet, or a wearable device. Referring to fig. 7, the video processing method includes the following steps.
In step S601, video processing parameter information and a first video slice are received, the video processing parameter information and the first video slice having been distributed by the first device.
In step S602, the first video slice is processed based on the video processing parameter information, and a second video slice obtained by the processing is sent to the first device.
In this embodiment of the present disclosure, the second device may be an assisting device used when the first device needs other devices to assist with processing the video to be processed. The second device receives the video processing parameter information and the first video slice sent by the first device. The video processing parameter information includes parameter information of the video to be processed, such as its duration, file format, encoding, resolution, and bit rate, and also includes the processing information describing how the video is to be processed. The second device processes the first video slice based on the video processing parameter information and sends the second video slice obtained by the processing to the first device. The first device then generates the target video based on the second video slices processed by the second devices.
According to the embodiment of the disclosure, the second device receives the video processing parameter information and the first video slice, processes the first video slice based on the video processing parameter information, and sends the processed second video slice to the first device. The second device thus shares its video processing capability with the first device, assists the first device in processing the video to be processed, and helps ensure that the first device can process the video effectively.
Fig. 8 is a flowchart illustrating a video processing method according to another exemplary embodiment of the present disclosure. As shown in fig. 8, the video processing method includes the following steps.
In step S701, a first video slice is decoded based on the video processing parameter information.
In step S702, the decoded first video slice is rendered, and the rendered first video slice is encoded to obtain a processed second video slice.
In the embodiment of the present disclosure, the second device receives the video processing parameter information and the first video slice sent by the first device. The video processing parameter information includes the parameter information of the video to be processed, the video processing information, and so on, and the second device decodes the first video slice based on the parameter information of the video to be processed. It renders the decoded first video slice based on the processing information included in the video processing parameter information, and encodes the rendered first video slice based on the parameter information to obtain the processed second video slice. The second device sends the processed second video slice to the first device, and the first device generates the target video based on the second video slice.
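The second device's decode, render, and encode stages could be organized as in the sketch below. The three stage functions are placeholders for a real decoder, GPU rendering backend, and encoder, so only the control flow is meaningful; every name is illustrative.

    from typing import List

    def decode(slice_payload: bytes, codec: str) -> List[bytes]:
        """Placeholder: decode the first video slice into raw frames."""
        return [slice_payload]          # a real implementation would use a hardware decoder

    def render(frames: List[bytes], operations: List[dict]) -> List[bytes]:
        """Placeholder: apply the requested operations (effects, filters, subtitles)."""
        for op in operations:
            # e.g. run a GPU shader per frame for op["kind"]
            pass
        return frames

    def encode(frames: List[bytes], codec: str, bitrate_kbps: int) -> bytes:
        """Placeholder: re-encode the rendered frames into the second video slice."""
        return b"".join(frames)

    def process_first_video_slice(payload: bytes, params: dict) -> bytes:
        """Decode, render, and encode one slice according to the parameter information."""
        frames = decode(payload, params["codec"])
        frames = render(frames, params["operations"])
        return encode(frames, params["codec"], params["bitrate_kbps"])

    second_slice = process_first_video_slice(
        b"...", {"codec": "h264", "bitrate_kbps": 20000,
                 "operations": [{"kind": "filter", "name": "warm"}]})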
According to the embodiment of the disclosure, the second device receives the video processing parameter information and the first video slice, decodes, renders, and encodes the first video slice based on the video processing parameter information, and sends the encoded second video slice to the first device. The second device thus shares its video processing capability with the first device, assists the first device in processing the video to be processed, and helps ensure that the first device can process the video effectively.
Fig. 9 is a flowchart illustrating a video processing method according to another exemplary embodiment of the present disclosure. As shown in fig. 9, the video processing method includes the following steps.
In step S801, a communication connection is established with a first device in response to a video processing assistance request initiated by the first device.
In step S802, the own video processing capability information is transmitted based on the communication connection.
In step S803, video processing parameter information and a first video slice are received, the video processing parameter information and the first video slice having been distributed by the first device.
In step S804, the first video slice is processed based on the video processing parameter information, and the processed second video slice is sent to the first device.
In the embodiment of the disclosure, the first device opens its service port, searches within a certain distance, and establishes communication connections with second devices that have video processing capability via Bluetooth, a wireless network, a local area network, or the like. The second device establishes a connection with the first device in response to the first device's assistance request. The second device sends the video processing capability information of its own device, such as CPU and GPU information, to the first device over the network connection. The first device verifies the identity of the second device and stores the received GPU and CPU information of the second device.
When the first device initiates a request for assistance with video processing, the second device identifies the first device that initiated the request. The second device then receives the video processing parameter information and the first video slice sent by the first device. The video processing parameter information includes parameter information of the video to be processed, such as its duration, file format, encoding, resolution, and bit rate, and also includes the processing information describing how the video is to be processed. The second device processes the first video slice based on the video processing parameter information and sends the second video slice obtained by the processing to the first device. The first device generates the target video based on the second video slice.
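Putting the second device's role together, the handler sketched below answers an assistance request by reporting its CPU/GPU capability and then processes each first video slice it receives, returning the resulting second video slice. The message format and the send/receive transport (which could be a Bluetooth or LAN socket) are assumptions for illustration.

    from typing import Callable, Dict, Any

    def run_assisting_device(receive: Callable[[], Dict[str, Any]],
                             send: Callable[[Dict[str, Any]], None],
                             capability: Dict[str, str],
                             process_slice: Callable[[bytes, Dict[str, Any]], bytes]) -> None:
        """Second-device loop: announce capability, then process slices until told to stop."""
        send({"type": "capability", "cpu": capability["cpu"], "gpu": capability["gpu"]})
        params: Dict[str, Any] = {}
        while True:
            message = receive()
            if message["type"] == "params":          # video processing parameter information
                params = message["params"]
            elif message["type"] == "first_slice":   # a first video slice to process
                result = process_slice(message["payload"], params)
                send({"type": "second_slice",
                      "timestamp_s": message["timestamp_s"],
                      "payload": result})
            elif message["type"] == "done":          # first device has the target video
                break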
According to the embodiment of the disclosure, the second device receives the video processing parameter information and the first video slice, processes the first video slice based on the video processing parameter information, and sends the processed second video slice to the first device. The second device thus shares its video processing capability with the first device, assists in processing the video to be processed, and helps ensure that the first device can perform video processing effectively.
Fig. 10 is a flowchart illustrating a method for establishing a connection between a first device and a second device according to an exemplary embodiment of the disclosure, where the method for establishing a connection between a first device and a second device includes the following steps, as shown in fig. 10.
In step S1101, a communication connection is established with a device having a video processing capability.
In step S1102, it is determined whether the device satisfies the video processing requirement.
When determining that the device satisfies the video processing requirement, step S1103 is executed, and when determining that the device does not satisfy the video processing requirement, step S1104 is executed.
In step S1103, a communication connection with the device is maintained, and video processing capability information is stored.
In step S1104, the communication connection with the device is disconnected.
In the embodiment of the disclosure, the first device opens its service port and establishes a communication connection with a device that has video processing capability. After the connection is established, the assisting device sends the video processing capability information of its own device, such as CPU and GPU information, to the first device over the network connection. The first device verifies the identity of the assisting device with video processing capability; when the device meets the video processing requirement, the first device keeps the communication connection with the assisting device and stores the received GPU and CPU information. When it is determined that the device does not meet the video processing requirement, the first device disconnects from it.
According to the embodiment of the disclosure, the first device establishes communication connections with devices that have video processing capability, and acquires and stores their video processing capability information. It determines at least one second device that meets the video processing requirement based on that capability information, slices the video to be processed, and distributes the first video slices obtained from the slicing together with the video processing parameter information to the second devices; the at least one second device processes the first video slices, and the first device obtains the processed second video slices and generates the target video based on them. The first device processes the video with the assistance of the at least one second device and shares the second devices' video processing capability, so that the video processing speed and efficiency are improved.
Fig. 11 is a flowchart illustrating a method of a second device establishing a connection with a first device according to an exemplary embodiment of the present disclosure. As shown in fig. 11, the method for establishing connection between the second device and the first device includes the following steps.
In step S1201, a communication connection is established with the first device in response to the video processing assistance request initiated by the first device.
In step S1202, the second device transmits its own video processing capability information to the first device.
In step S1203, it is determined whether the first device has disconnected. When it is determined that the first device has not disconnected, step S1204 is performed.
in step S1204, the communication connection with the first device is maintained.
In this embodiment of the disclosure, the first device opens a service port on the mobile phone, initiates a video processing assistance request, and establishes a communication connection with a second device having video processing capability via Bluetooth, a wireless network, a local area network, or the like. The second device establishes the connection with the first device in response to the assistance request. The second device then sends its own video processing capability information, such as CPU and GPU information, to the first device over the network connection. The first device verifies the identity of the assisting device with video processing capability. When the second device meets the video processing requirement, the first device keeps the communication connection with it and stores the received CPU and GPU information. When the second device is determined not to meet the video processing requirement, the first device disconnects the communication connection with it.
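On the assisting side, the corresponding step could look like the sketch below, which reuses the JSON-over-TCP assumption above; the address and capability fields are placeholders, not values taken from the embodiment.

import json
import socket

def answer_assistance_request(first_device_addr: tuple, capability: dict) -> socket.socket:
    # Connect to the first device and report this device's own processing capability.
    sock = socket.create_connection(first_device_addr)        # e.g. ("192.168.1.10", 50000), placeholder address
    sock.sendall(json.dumps(capability).encode("utf-8"))      # e.g. {"cpu": "...", "gpu": "...", "codecs": ["h264"]}
    return sock                                               # kept open for as long as the first device stays connected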
According to this embodiment of the disclosure, the second device responds to the video processing assistance request initiated by the first device, identifies the first device, receives the video processing parameter information and the first video slice, processes the first video slice based on the video processing parameter information, and sends the resulting second video slice to the first device. By sharing its video processing capability with the first device, the second device assists in processing the video to be processed and ensures that the first device can perform video processing effectively.
Fig. 12 is a flowchart illustrating a method of video processing by a first device according to an exemplary embodiment of the present disclosure. As shown in fig. 12, the method for video processing by the first device includes the following steps.
In step S1301, the video processing parameter information is distributed to at least one second device.
In step S1302, a video to be processed is sliced to obtain a first video slice, and the first video slice is distributed to at least one second device.
In step S1303, a second video slice processed by the at least one second device is obtained.
In step S1304, it is determined whether the second video slice passes the data check. When the second video slice passes the data verification, step S1305 is performed.
When the second video slice does not pass the data verification, step S1302 is executed again to redistribute the first video slice corresponding to that second video slice to the at least one second device.
In step S1305, the second video slice whose data check passes is buffered.
In step S1306, it is determined whether all second video slices have been received. When it is determined that all the second video slices have been received, step S1307 is performed.
In step S1307, a target video is generated.
In this embodiment of the disclosure, the first device slices the video to be processed and distributes the video processing parameter information and the video slices to at least one second device. Each second device processes the video slices it receives based on the video processing parameter information. After a second device finishes processing, it returns the encoded video slice to the first device, and the first device performs a data check on the received video slice to determine whether it passes. When the first device determines that a video slice passes the data check, it caches that slice. When a video slice does not pass the data check, the corresponding slice is redistributed to at least one second device.
The first device then determines whether all video slices have been received; when they have, it packs the video slices that passed the data check to generate the target video.
According to this embodiment of the disclosure, the first device slices the video to be processed at key frames, distributes the resulting video slices to idle second devices in timestamp order, obtains the video slice data processed by the second devices, performs a data check on those slices, and merges the slices that pass the check into the target video.
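The scheduling loop on the first device could be sketched as follows. The slice list, the choice of idle device, and the transport callables are injected as parameters because the embodiment does not fix them; only the MD5-based data check is concrete, and MD5 is itself an assumption about how the data verification is performed.

import hashlib
from typing import Callable, Dict, List, Tuple

def process_with_assist(
    slices: List[Tuple[float, bytes]],                              # (timestamp, slice bytes), cut at key frames
    pick_idle_device: Callable[[], object],                         # returns an idle second device
    send_slice: Callable[[object, dict, float, bytes], None],       # distributes a slice plus parameter information
    receive_result: Callable[[object, float], Tuple[bytes, str]],   # returns (processed slice, reported MD5 digest)
    params: dict,
) -> bytes:
    pending = sorted(slices, key=lambda s: s[0])                    # distribute in timestamp order
    done: Dict[float, bytes] = {}
    while pending:
        ts, data = pending.pop(0)
        device = pick_idle_device()
        send_slice(device, params, ts, data)
        result, digest = receive_result(device, ts)
        if hashlib.md5(result).hexdigest() == digest:               # data check passed
            done[ts] = result                                       # cache the verified second video slice
        else:
            pending.append((ts, data))                              # redistribute the slice that failed the check
    return b"".join(done[ts] for ts in sorted(done))                # merge into the target video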
Fig. 13 is a flowchart illustrating a method of video processing by a second device according to an exemplary embodiment of the present disclosure. As shown in fig. 13, the method for video processing by the second device includes the following steps.
In step S1401, video processing parameter information and a first video slice are received.
In step S1402, the first video slice is decoded based on the video processing parameter information.
In step S1403, the decoded first video slice is rendered.
In step S1404, the rendered first video slice is encoded.
In step S1405, the encoded second video slice is transmitted to the first device.
In this embodiment of the disclosure, the second device receives the video processing parameter information and the first video slice sent by the first device. The video processing parameter information includes parameter information of the video to be processed, video processing information, and the like, and the first video slice is obtained by slicing the video to be processed. The second device decodes the first video slice based on the parameter information, renders the decoded first video slice based on the processing information included in the video processing parameter information, and encodes the rendered first video slice based on the parameter information to obtain a second video slice. The second device sends the processed second video slice to the first device, and the first device generates the target video based on the second video slice.
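One way to realize the decode-render-encode step on the second device is a single ffmpeg invocation run from Python, as sketched below. The scale filter stands in for the "rendering" operation and libx264 for the re-encoding; both are assumptions of this sketch, and the embodiment does not prescribe a particular tool or filter.

import subprocess

def process_slice(in_path: str, out_path: str, resolution: str = "3840x2160", bitrate_kbps: int = 8000) -> None:
    subprocess.run([
        "ffmpeg", "-y",
        "-i", in_path,                                   # decode the received first video slice
        "-vf", "scale=" + resolution.replace("x", ":"),  # "render": rescale according to the processing information
        "-c:v", "libx264",                               # re-encode the rendered frames
        "-b:v", f"{bitrate_kbps}k",
        out_path,                                        # the second video slice to return to the first device
    ], check=True)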
According to this embodiment of the disclosure, the second device receives the video processing parameter information and the first video slice, decodes, renders, and encodes the first video slice based on the video processing parameter information, and sends the encoded second video slice to the first device. By sharing its video processing capability with the first device, the second device assists in processing the video to be processed and ensures that the first device can perform video processing effectively.
Based on the same concept, an embodiment of the disclosure further provides a video processing apparatus.
It is understood that, in order to implement the above functions, the video processing apparatus provided by the embodiments of the present disclosure includes corresponding hardware structures and/or software modules. In combination with the exemplary units and algorithm steps disclosed in the embodiments of the present disclosure, the embodiments can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
Fig. 14 is a block diagram illustrating a video processing apparatus according to an exemplary embodiment of the present disclosure. Referring to fig. 14, the video processing apparatus 100 is applied to a first device and includes a processing module 101 and an obtaining module 102.
The processing module 101 is configured to perform slicing processing on a video to be processed to obtain a first video slice, distribute the first video slice and video processing parameter information to at least one second device, and generate a target video based on the second video slice.
The obtaining module 102 is configured to obtain a second video slice processed by at least one second device, where the second video slice is obtained by processing a received video slice by the second device based on the video processing parameter information.
In an embodiment, the processing module 101 performs slicing processing on a video to be processed in the following manner to obtain a first video slice, and distributes the first video slice and video processing parameter information to at least one second device: slicing a video to be processed according to the key frames to obtain first video slices, and recording a timestamp of each first video slice; and distributing the first video slice carrying the time stamp to the second device.
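As an illustration of slicing at key frames, ffmpeg's segment muxer can be used with stream copy, since with stream copy the muxer can only cut at key frames; the 10-second target slice length and the output naming pattern are assumptions of this sketch rather than values from the embodiment.

import subprocess

def slice_by_keyframe(video_path: str, out_pattern: str = "slice_%03d.ts") -> None:
    subprocess.run([
        "ffmpeg", "-y", "-i", video_path,
        "-c", "copy",                    # no re-encoding, so cuts can only fall on key frames
        "-f", "segment",
        "-segment_time", "10",           # assumed target slice length in seconds
        "-segment_format", "mpegts",
        out_pattern,                     # the slice index preserves the timestamp order
    ], check=True)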
In one embodiment, the processing module 101 generates the target video based on the second video slice in the following manner: performing data verification on the obtained second video slice; caching the second video slice passing the data verification, retransmitting the first video slice corresponding to the second video slice failing the data verification to the second equipment, and processing the first video slice failing the data verification again by the second equipment; and if all the second video slices pass the data verification, combining the second video slices passing the data verification into the target video.
In an embodiment, the obtaining module 102 is further configured to obtain and store video processing capability information of a device having video processing capability.
Fig. 15 is a block diagram illustrating a video processing apparatus according to yet another exemplary embodiment of the present disclosure. Referring to fig. 15, the video processing apparatus 100 further includes: a determination module 103.
The determining module 103 is configured to determine that other devices are needed to assist in processing the video to be processed, and determine at least one second device that meets the video processing requirement based on the video processing capability information.
In one embodiment, the determining module 103 determines the at least one second device that meets the video processing requirements in the following manner: for each first video slice, determining a second device that satisfies video processing requirements according to one or a combination of the following conditions: the video processing capability information of the second device is matched with the video processing requirement information included in the video processing parameter information corresponding to the first video slice; the state of the second device is an idle state.
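A minimal sketch of this selection rule is shown below: a second device qualifies for a given first video slice if its reported capability matches the slice's requirement information and its state is idle. The dictionary fields are the same assumed fields used in the capability sketch above.

def satisfies_requirements(device: dict, requirement: dict) -> bool:
    # Condition 1: the device's capability information matches the slice's requirement information.
    capable = requirement.get("codec") in device.get("codecs", [])
    # Condition 2: the device is currently in an idle state.
    idle = device.get("state") == "idle"
    return capable and idle

# Example with illustrative values: pick the first qualifying device for a slice.
devices = [{"codecs": ["h264"], "state": "busy"}, {"codecs": ["h264"], "state": "idle"}]
chosen = next(d for d in devices if satisfies_requirements(d, {"codec": "h264"}))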
Fig. 16 is a block diagram illustrating a video processing apparatus according to an exemplary embodiment of the present disclosure. Referring to fig. 16, the video processing apparatus 200 is applied to the second device and includes a receiving module 201 and a processing module 202.
The receiving module 201 is configured to receive video processing parameter information and a first video slice, where the video processing parameter information and the first video slice are distributed by the first device.
The processing module 202 is configured to process the first video slice based on the video processing parameter information and send the second video slice obtained by the processing to the first device.
In one embodiment, the processing module 202 processes the first video slice based on the video processing parameter information in the following manner: decoding the first video slice based on the video processing parameter information of the video to be processed, rendering the decoded first video slice based on that information, and encoding the rendered first video slice to obtain the processed second video slice.
Fig. 17 is a block diagram illustrating a video processing apparatus according to still another exemplary embodiment of the present disclosure. Referring to fig. 17, the video processing apparatus 200 further includes: a connection module 203 and a sending module 204.
The connection module 203 is configured to establish a communication connection with the first device in response to the video processing assistance request initiated by the first device.
The sending module 204 is configured to send the second device's own video processing capability information based on the communication connection.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 18 is a block diagram illustrating an apparatus 300 for video processing according to an exemplary embodiment of the present disclosure. For example, the apparatus 300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 18, the apparatus 300 may include one or more of the following components: a processing component 302, a memory 304, a power component 306, a multimedia component 308, an audio component 310, an input/output (I/O) interface 313, a sensor component 314, and a communication component 316.
The processing component 302 generally controls overall operation of the device 300, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 302 may include one or more processors 320 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 302 can include one or more modules that facilitate interaction between the processing component 302 and other components. For example, the processing component 302 may include a multimedia module to facilitate interaction between the multimedia component 308 and the processing component 302.
The memory 304 is configured to store various types of data to support operations at the apparatus 300. Examples of such data include instructions for any application or method operating on device 300, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 304 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 306 provide power to the various components of device 300. The power components 306 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the apparatus 300.
The multimedia component 308 includes a screen that provides an output interface between the device 300 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 308 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 300 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 310 is configured to output and/or input audio signals. For example, audio component 310 includes a Microphone (MIC) configured to receive external audio signals when apparatus 300 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 304 or transmitted via the communication component 316. In some embodiments, audio component 310 also includes a speaker for outputting audio signals.
The I/O interface 313 provides an interface between the processing component 302 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 314 includes one or more sensors for providing various aspects of status assessment for the device 300. For example, sensor assembly 314 may detect an open/closed state of device 300, the relative positioning of components, such as a display and keypad of device 300, the change in position of device 300 or a component of device 300, the presence or absence of user contact with device 300, the orientation or acceleration/deceleration of device 300, and the change in temperature of device 300. Sensor assembly 314 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 314 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 314 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 316 is configured to facilitate communication between the apparatus 300 and other devices in a wired or wireless manner. The device 300 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 316 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 316 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 300 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 304 comprising instructions, executable by the processor 320 of the apparatus 300 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It is understood that "a plurality" in this disclosure means two or more, and other words are analogous. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. The singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It will be further understood that the terms "first," "second," and the like, are used to describe various information and should not be limited by these terms. These terms are only used to distinguish one type of information from another and do not denote a particular order or importance. Indeed, the terms "first," "second," and the like are fully interchangeable. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure.
It is further to be understood that while operations are depicted in the drawings in a particular order, this is not to be understood as requiring that such operations be performed in the particular order shown or in serial order, or that all illustrated operations be performed, to achieve desirable results. In certain environments, multitasking and parallel processing may be advantageous.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (13)

1. A video processing method applied to a first device, the video processing method comprising:
the method comprises the steps that a video to be processed is subjected to slicing processing to obtain a first video slice, and the first video slice and video processing parameter information are distributed to at least one second device;
acquiring a second video slice processed by the at least one second device, wherein the second video slice is obtained by processing the received video slice by the second device based on the video processing parameter information;
generating a target video based on the second video slice.
2. The video processing method according to claim 1, wherein slicing the video to be processed to obtain a first video slice, and distributing the first video slice and the video processing parameter information to at least one second device comprises:
slicing the video to be processed according to the key frames to obtain first video slices, and recording the time stamp of each first video slice;
distributing the first video slice carrying the timestamp to the second device.
3. The video processing method according to claim 1 or 2, wherein generating a target video based on the second video slice comprises:
performing data verification on the obtained second video slice;
caching the second video slice passing the data verification, retransmitting the first video slice corresponding to the second video slice failing the data verification to the second device, and processing the first video slice failing the data verification again by the second device;
and if all the second video slices pass the data verification, combining the second video slices passing the data verification into the target video.
4. The video processing method of claim 1, wherein the method further comprises:
video processing capability information of a device having video processing capability is acquired and stored.
5. The video processing method of claim 4, wherein the method further comprises:
and determining that other equipment is needed to assist in processing the video to be processed, and determining the at least one second equipment meeting the video processing requirement based on the video processing capacity information.
6. The video processing method according to claim 5, wherein said determining the at least one second device that satisfies video processing requirements comprises:
for each of the first video slices, determining the second device that satisfies video processing requirements according to one or a combination of the following conditions:
the video processing capability information of the second device is matched with video processing requirement information included in the video processing parameter information corresponding to the first video slice;
the state of the second device is an idle state.
7. A video processing method, applied to a second device, the method comprising:
receiving video processing parameter information and a first video slice, wherein the video processing parameter information and the first video slice are distributed by the first device;
and processing the first video slice based on the video processing parameter information, and sending a second video slice obtained by the processing to the first device.
8. The video processing method of claim 7, wherein processing the first video slice based on the video processing parameter information comprises:
decoding the first video slice based on the video processing parameter information;
rendering the decoded first video slice, and encoding the rendered first video slice to obtain the processed second video slice.
9. The video processing method according to claim 7 or 8, wherein the method further comprises:
responding to a video processing assistance request initiated by a first device, and establishing a communication connection with the first device;
and sending the second device's own video processing capability information based on the communication connection.
10. A video processing apparatus applied to a first device, the video processing apparatus comprising:
the processing module is used for carrying out slicing processing on a video to be processed to obtain a first video slice, distributing the first video slice and video processing parameter information to at least one second device, and generating a target video based on the second video slice;
and the obtaining module is configured to obtain the second video slice processed by the at least one second device, where the second video slice is obtained by processing the received video slice by the second device based on the video processing parameter information.
11. A video processing apparatus, applied to a second device, the apparatus comprising:
the receiving module is used for receiving video processing parameter information and a first video slice, wherein the video processing parameter information and the first video slice are distributed to first equipment;
and the processing module is used for processing the first video slice based on the video processing parameter information and sending a second video slice obtained by processing to the first equipment.
12. A video processing apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the video processing method of any one of claims 1 to 9.
13. A non-transitory computer readable storage medium having instructions therein which, when executed by a processor of a mobile terminal, enable the mobile terminal to perform the video processing method of any of claims 1 to 9.
CN202110163787.3A 2021-02-05 2021-02-05 Video processing method, video processing apparatus, and storage medium Pending CN114885192A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110163787.3A CN114885192A (en) 2021-02-05 2021-02-05 Video processing method, video processing apparatus, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110163787.3A CN114885192A (en) 2021-02-05 2021-02-05 Video processing method, video processing apparatus, and storage medium

Publications (1)

Publication Number Publication Date
CN114885192A true CN114885192A (en) 2022-08-09

Family

ID=82667883

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110163787.3A Pending CN114885192A (en) 2021-02-05 2021-02-05 Video processing method, video processing apparatus, and storage medium

Country Status (1)

Country Link
CN (1) CN114885192A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116389820A (en) * 2023-03-28 2023-07-04 北京睿芯通量科技发展有限公司 Video processing method, video processing device, electronic device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111294612A (en) * 2020-01-22 2020-06-16 腾讯科技(深圳)有限公司 Multimedia data processing method, system and storage medium
CN111695505A (en) * 2020-06-11 2020-09-22 北京市商汤科技开发有限公司 Video processing method and device, electronic equipment and storage medium
CN111953903A (en) * 2020-08-13 2020-11-17 北京达佳互联信息技术有限公司 Shooting method, shooting device, electronic equipment and storage medium
CN112291590A (en) * 2020-10-30 2021-01-29 北京字节跳动网络技术有限公司 Video processing method and device


Similar Documents

Publication Publication Date Title
US11388453B2 (en) Method for processing live-streaming interaction video and server
EP3046309B1 (en) Method, device and system for projection on screen
US20170304735A1 (en) Method and Apparatus for Performing Live Broadcast on Game
US11388403B2 (en) Video encoding method and apparatus, storage medium, and device
EP3264774B1 (en) Live broadcasting method and device for live broadcasting
CN112114765A (en) Screen projection method and device and storage medium
CN112104807A (en) Control method, system and device for front camera
CN110996117B (en) Video transcoding method and device, electronic equipment and storage medium
CN108829475B (en) UI drawing method, device and storage medium
CN114885192A (en) Video processing method, video processing apparatus, and storage medium
EP3086584A1 (en) Method and device for controlling access
CN112667074A (en) Display method, display device and storage medium
CN111953980A (en) Video processing method and device
US10085050B2 (en) Method and apparatus for adjusting video quality based on network environment
EP3599763A2 (en) Method and apparatus for controlling image display
CN110311961B (en) Information sharing method and system, client and server
CN112153404B (en) Code rate adjusting method, code rate detecting method, code rate adjusting device, code rate detecting device, code rate adjusting equipment and storage medium
EP4007229A1 (en) Bandwidth determination method and apparatus, and terminal, and storage medium
CN110177275B (en) Video encoding method and apparatus, and storage medium
CN110312117B (en) Data refreshing method and device
CN111478914B (en) Timestamp processing method, device, terminal and storage medium
CN117813652A (en) Audio signal encoding method, device, electronic equipment and storage medium
CN113660513A (en) Method, device and storage medium for synchronizing playing time
CN112130787A (en) Electronic equipment, display signal transmission system, method and device
EP2940981B1 (en) Method and device for synchronizing photographs

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination