CN115515008A - Video processing method, terminal and video processing system - Google Patents


Info

Publication number
CN115515008A
Authority
CN
China
Prior art keywords
video
server
terminal
editing
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211136032.5A
Other languages
Chinese (zh)
Other versions
CN115515008B (en)
Inventor
胡游乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NETVIEW TECHNOLOGIES (SHENZHEN) CO LTD
Original Assignee
NETVIEW TECHNOLOGIES (SHENZHEN) CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NETVIEW TECHNOLOGIES (SHENZHEN) CO LTD filed Critical NETVIEW TECHNOLOGIES (SHENZHEN) CO LTD
Priority to CN202211136032.5A priority Critical patent/CN115515008B/en
Publication of CN115515008A publication Critical patent/CN115515008A/en
Application granted granted Critical
Publication of CN115515008B publication Critical patent/CN115515008B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/436Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N21/4363Adapting the video stream to a specific local network, e.g. a Bluetooth® network
    • H04N21/43637Adapting the video stream to a specific local network, e.g. a Bluetooth® network involving a wireless protocol, e.g. Bluetooth, RF or wireless LAN [IEEE 802.11]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440281Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration
    • H04N21/4854End-user interface for client configuration for modifying image parameters, e.g. image brightness, contrast

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Studio Devices (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The application discloses a video processing method, a terminal and a video processing system. The method comprises the following steps: capturing images through a camera to obtain a first video and a second video, wherein the first video and the second video are directed at the same shooting object, the quality of the first video is lower than that of the second video, and the size of the first video is smaller than that of the second video; sending the first video to a server so that the server analyzes the first video and obtains an editing scheme; receiving the editing scheme from the server; and editing the second video by using the editing scheme to obtain an edited third video. The method can improve video processing efficiency and provide a better shooting experience for the user, thereby improving the convenience of using the terminal.

Description

Video processing method, terminal and video processing system
Technical Field
The invention relates to the technical field of electronics, in particular to a video processing method, a terminal and a video processing system.
Background
At present, with the development of video capture and display devices and the expansion of network bandwidth, a large number of high-definition videos are shot, edited, and played on high-quality display devices for presentation to users.
Both the shooting device and the cloud server have video processing capability. However, the shooting device is limited in capability and computing power. One approach is therefore to send the captured video to the cloud server and perform the editing there, taking advantage of the server's high computing power.
However, a high-resolution video file is very large; when network bandwidth is limited, transmitting the file between the shooting device and the cloud server takes too long, which reduces video processing efficiency.
Disclosure of Invention
The embodiment of the application provides a video processing method, a terminal and a video processing system, which can improve the video processing efficiency and provide better shooting experience for a user, thereby improving the convenience of the user in using the terminal.
In a first aspect, an embodiment of the present application provides a video processing method, where the method includes:
acquiring images through a camera to obtain a first video and a second video, wherein the first video and the second video are directed at the same shooting object, the quality of the first video is lower than that of the second video, and the size of the first video is smaller than that of the second video;
sending the first video to a server so that the server analyzes the first video and obtains an editing scheme;
receiving an editing scheme from the server;
and editing the second video by using the editing scheme to obtain an edited third video.
Optionally, after the editing scheme is utilized to edit the second video and obtain an edited third video, the method further includes:
and sending the third video to the server for storage by the server.
Optionally, the editing scheme comprises any one or more of: clipping range, splicing parameter, zooming parameter, image adjusting parameter, picture intelligent adjusting parameter, special effect making parameter and score parameter.
Optionally, before sending the first video to the server, the method further includes:
detecting a network state between the terminal and the server, and/or detecting whether the size of the first video is larger than a preset threshold value;
the sending the first video to a server includes:
and when the network state is lower than a set threshold condition and/or the size of the first video is larger than the preset threshold, sending the first video to a server.
Optionally, after detecting the network status with the server, the method further includes:
when the network state is higher than the set threshold condition or the size of the first video is smaller than the preset threshold, sending the second video to the server, so that the server analyzes the second video to obtain an editing scheme, and editing the second video according to the editing scheme to obtain a fourth video;
receiving the fourth video from the server.
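The branching logic in the optional steps above can be sketched as follows. This is a minimal illustration under assumed thresholds, not the patented implementation; the function and field names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class UploadDecision:
    send_proxy: bool  # True: send the small first video; edit happens on the terminal
    reason: str

def choose_upload(bandwidth_mbps: float, first_video_bytes: int,
                  min_bandwidth_mbps: float = 50.0,
                  size_threshold_bytes: int = 100 * 1024 * 1024) -> UploadDecision:
    """Mirror the claimed conditions: if the network state is below the set
    threshold and/or the first video exceeds the preset size threshold, send
    the first (proxy) video; otherwise send the second video so the server
    can edit it directly. Threshold values here are hypothetical."""
    if bandwidth_mbps < min_bandwidth_mbps or first_video_bytes > size_threshold_bytes:
        return UploadDecision(True, "weak network or large file: upload proxy, edit on terminal")
    return UploadDecision(False, "strong network and small file: upload full video, edit on server")
```

On the "send full video" branch the terminal simply receives the finished fourth video back, so no local editing step is needed.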
In a second aspect, an embodiment of the present application provides a video processing system, where the system includes a terminal and a server, where:
the terminal is used for acquiring images through a camera to obtain a first video and a second video, wherein the first video and the second video are directed at the same shooting object, the quality of the first video is lower than that of the second video, and the size of the first video is smaller than that of the second video;
the terminal is further used for sending the first video to a server;
the server is used for analyzing the first video and obtaining an editing scheme;
the terminal is also used for receiving the editing scheme from the server;
and the terminal is further used for editing the second video by using the editing scheme to obtain an edited third video.
Optionally, the terminal is further configured to send the third video to the server;
the server is further used for storing the third video.
Optionally, the editing scheme comprises any one or more of: clipping range, splicing parameter, zooming parameter, image adjusting parameter, picture intelligent adjusting parameter, special effect making parameter and score parameter.
Optionally, the terminal is further configured to:
detecting a network state between the terminal and the server, and/or detecting whether the size of the first video is larger than a preset threshold value;
and when the network state is lower than a set threshold condition and/or the size of the first video is larger than the preset threshold, sending the first video to a server.
Optionally, the terminal is further configured to: when the network state is higher than the set threshold condition or the size of the first video is smaller than the preset threshold, sending the second video to the server;
the server is further used for analyzing the second video to obtain an editing scheme and editing the second video according to the editing scheme to obtain a fourth video;
the terminal is further configured to receive the fourth video from the server.
In a third aspect, an embodiment of the present application provides a terminal, where the terminal includes: one or more processors, memory, cameras;
the memory is coupled with the one or more processors and is used to store computer program code, the computer program code comprising computer instructions;
the computer instructions, when executed by the one or more processors, cause the terminal to:
acquiring images through the camera to obtain a first video and a second video, wherein the first video and the second video are directed at the same shooting object, the quality of the first video is lower than that of the second video, and the size of the first video is smaller than that of the second video;
sending the first video to a server so that the server analyzes the first video and obtains an editing scheme;
receiving an editing scheme from the server;
and editing the second video by using the editing scheme to obtain an edited third video.
Optionally, the processor is further configured to invoke the computer instruction, so that the terminal performs the following operations:
and sending the third video to the server for storage by the server.
Optionally, the editing scheme comprises any one or more of: clipping range, splicing parameter, zooming parameter, image adjusting parameter, picture intelligent adjusting parameter, special effect making parameter and score parameter.
Optionally, the processor is further configured to invoke the computer instruction, so that the terminal performs the following operations:
detecting a network state between the terminal and the server, and/or detecting whether the size of the first video is larger than a preset threshold value;
the sending the first video to a server includes:
and when the network state is lower than a set threshold condition and/or the size of the first video is larger than the preset threshold, sending the first video to a server.
Optionally, the processor is further configured to invoke the computer instruction, so that the terminal performs the following operations:
when the network state is higher than the set threshold condition or the size of the first video is smaller than the preset threshold, sending the second video to the server, so that the server analyzes the second video to obtain an editing scheme, and editing the second video according to the editing scheme to obtain a fourth video;
receiving the fourth video from the server.
In a fourth aspect, an embodiment of the present application provides a server, including a processor, a memory, and a communication module, where the memory is used to store program code, and the processor is used to call the program code to implement the functions of the server in the second aspect or any optional manner thereof.
In a fifth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program, the computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method of the first aspect and any one of the alternatives thereof.
It can be seen that, according to the video processing method, terminal and video processing system provided by the embodiments of the application, when shooting a video, the terminal can simultaneously capture a low-quality and a high-quality video (namely, the first video and the second video) over the same time period. The low-quality first video is sent over the network connection to a server with high processing capability; the server obtains an editing scheme from the first video and sends the editing scheme back to the terminal. The terminal then edits the high-quality second video according to the editing scheme. In this way, the high-computing-power server analyzes the video to obtain the editing scheme, and only the editing scheme is transmitted to the terminal, which shortens the time spent on data transmission, improves video processing efficiency, and provides a better shooting experience for the user, thereby improving the convenience of using the terminal.
Drawings
Fig. 1 is a schematic architecture diagram of a video processing system according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a video processing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a user interface provided by an embodiment of the present application;
fig. 4 is a schematic flowchart of another video processing method provided in the embodiment of the present application;
fig. 5 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
Technical solutions in some embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided by the present disclosure are within the scope of protection of the present disclosure.
Throughout the specification and claims, the term "comprising" is to be interpreted in an open, inclusive sense, i.e., as "including, but not limited to," unless the context requires otherwise. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus. In the description herein, the terms "one embodiment," "some embodiments," "an example embodiment," "exemplary" or "some examples" or the like are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the disclosure. The schematic representations of the above terms are not necessarily referring to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be included in any suitable manner in any one or more embodiments or examples.
Hereinafter, the terms "first" and "second" are used only for convenience of description and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present disclosure, "a plurality" means two or more unless otherwise specified.
In describing some embodiments, the expressions "coupled" and "connected," along with their derivatives, may be used. For example, the term "connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled" may likewise indicate that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements are not in direct contact with each other but still cooperate or interact with each other. The embodiments disclosed herein are not necessarily limited to the contents herein.
In order to better understand a video processing method, a terminal and a video processing system provided by the embodiments of the present invention, a network architecture used in the embodiments of the present invention is described below.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating an architecture of a video processing system according to an embodiment of the present disclosure. It is to be understood that the system architecture of fig. 1 is illustrated only to explain the embodiments of the present application and should not be construed as limiting. As shown in fig. 1, the video processing system may include, for example, a terminal 100 and a server 200, wherein:
the terminal 100 and the server 200 establish a communication connection. The terminal 200 and the server 300 may also establish a communication connection. The communication connection between the terminal 100 and the server 200 may include, for example, any one or more of the following:
wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (BT), global Navigation Satellite System (GNSS), frequency Modulation (FM), near Field Communication (NFC), infrared (IR), and other solutions for wireless communication.
The communication connection between the terminal 100 and the server 200 may also include, for example, any one or more of the following:
global system for mobile communications (GSM), general Packet Radio Service (GPRS), code Division Multiple Access (CDMA), wideband Code Division Multiple Access (WCDMA), time division code division multiple access (TD-SCDMA), long Term Evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a beidou satellite navigation system (BDS), a quasi-zenith satellite system (QZSS), and/or a Satellite Based Augmentation System (SBAS).
In the embodiment of the present application, the wireless communication technology is not limited to the above examples; it may also be a 5G communication technology or a communication technology that emerges in the future, which is not limited in the embodiment of the present application.
The terminal 100 may be, for example, a three-axis gimbal, which may be integrated into a user's terminal device, including but not limited to a mobile phone, a tablet computer, a multimedia playing device, or a smart wearable device. The terminal 100 may also be a smart watch, a smart bracelet, a head-mounted device (e.g., a virtual reality (VR) helmet, augmented reality (AR) glasses, or other wearable glasses), a mobile phone, a tablet, or a camera. It is understood that the specific product form of the terminal 100 is not limited to the above examples, which are only used to explain the embodiments of the present application.
The server 200 may be any device suitable for image processing, for example a workstation dedicated to processing image and video data, a cluster of processing devices, a personal computer such as a desktop or notebook computer, or even a mobile phone, tablet computer, or internet-of-things device, but is not limited thereto. In general, the server 200 has greater computing power than the terminal 100.
In this embodiment of the present application, the terminal 100 is configured to acquire a first video and a second video through a camera, where the first video and the second video are of the same shooting object, the quality of the first video is lower than that of the second video, and the size of the first video is smaller than that of the second video;
the terminal 100 is further configured to send the first video to the server 200;
the server 200 is configured to analyze the first video and obtain an editing scheme;
the terminal 100 is further configured to receive the editing scheme from the server 200;
the terminal 100 is further configured to edit the second video by using the editing scheme, so as to obtain an edited third video.
The terminal 100 is further configured to send the third video to the server 200;
the server 200 is further configured to store the third video.
Optionally, the editing scheme comprises any one or more of: clipping range, splicing parameter, zooming parameter, image adjusting parameter, picture intelligent adjusting parameter, special effect making parameter and score parameter.
Optionally, the terminal 100 is further configured to:
detecting a network state with the server 200, and/or detecting whether the size of the first video is greater than a preset threshold;
and when the network state is lower than a set threshold condition and/or the size of the first video is larger than the preset threshold, sending the first video to the server 200.
Optionally, the terminal 100 is further configured to: when the network status is higher than the set threshold condition or the size of the first video is smaller than the preset threshold, sending the second video to the server 200;
the server 200 is further configured to analyze the second video to obtain an editing scheme, and edit the second video according to the editing scheme to obtain a fourth video;
the terminal 100 is further configured to receive the fourth video from the server 200.
In the above-described video processing system, the terminal 100 may simultaneously capture a low-quality and a high-quality video (i.e., the first video and the second video) over the same period of time when shooting. The low-quality first video is sent over the network connection to the server 200, which has high processing capability; the server 200 obtains an editing scheme from the first video and sends it back to the terminal. The terminal can then edit the high-quality second video according to the editing scheme. In this way, the high-computing-power server 200 analyzes the video to obtain the editing scheme, and only the editing scheme is transmitted to the terminal 100, which reduces the time spent on data transmission, improves video processing efficiency, and provides a better shooting experience, thereby improving the convenience of using the terminal 100.
A video processing method provided in the embodiment of the present application is described below based on the system architecture shown in fig. 1. In this scenario, the terminal 100 may establish a communication connection with the server 200 through a wireless communication technology. For example, the terminal 100 is connected to a Wi-Fi network or a cellular data network so that it can access the server 200, either directly or through other devices. Referring to fig. 2, fig. 2 is a schematic flowchart illustrating a video processing method according to an embodiment of the present disclosure. As shown in fig. 2, the video processing method may include:
s101, the terminal 100 receives a shooting user operation.
In the embodiment of the present application, the shooting user operation may be, for example, a user operation performed on the terminal 100 to start shooting. Specifically, please refer to fig. 3; fig. 3 is a schematic diagram of a user interface according to an embodiment of the present disclosure. The user interface is, for example, a user interface of the terminal 100. As shown in FIG. 3, the user interface 400 is, for example, a video capture interface, which includes, for example, a "record" option 401 and a capture control 402. The record option 401 is in a selected state, and the capture control 402 can be used to begin capturing video in response to a user operation. In the embodiment of the present application, the shooting user operation is, for example, a user operation acting on the capture control 402.
S102, responding to the operation of a shooting user, the terminal 100 acquires images through a camera to acquire a first video and a second video.
The first video and the second video are directed to the same shooting object, the quality of the first video is lower than that of the second video, and the size of the first video is smaller than that of the second video.
The second video may have a higher resolution, such as 1080P (1920 × 1080 pixels), 4K (4096 × 2160 pixels), or 8K (7680 × 4320 pixels). The first video has a lower resolution relative to the second video. It is understood that the first video and the second video are not limited to those captured in steps S101-S102; they may be any videos, and the present application does not limit their source, type, or subject. For example, the first video and the second video may be movie or television-program footage captured by professional video capture devices such as video cameras, as in S101-S102; they may equally be everyday videos captured by an ordinary user with a mobile phone, tablet computer, or other terminal device.
The quality of the video may further include an acquisition frame rate, an acquisition resolution, and the like, which is not limited in the embodiment of the present application.
In the embodiment of the application, the second video is, for example, an original video captured by the camera, whose quality is higher than that of the first video. The first video is, for example, obtained by processing the original video, e.g., by frame decimation or resolution reduction, to yield a lower-quality video. Alternatively, the second video may be a higher-quality video obtained by processing the original video, for example by resolution enhancement. In other embodiments, the first video and the second video may be recorded simultaneously as low- and high-quality streams: for example, the first video and the second video are captured respectively by a low-resolution camera and a high-resolution camera over the same period of time.
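A proxy of this kind can be derived by frame decimation and resolution reduction. The sketch below operates on a video represented as a plain list of frames (each frame a 2-D list of pixel values) purely for illustration; a real terminal would use its codec pipeline, and the function and parameter names are assumptions.

```python
def make_proxy(frames, keep_every=2, scale=2):
    """Build a low-quality first video from the original second video:
    keep every `keep_every`-th frame (frame decimation) and subsample
    each kept frame's rows and columns by `scale` (resolution reduction)."""
    proxy = []
    for i, frame in enumerate(frames):
        if i % keep_every != 0:
            continue  # frame decimation: drop intermediate frames
        proxy.append([row[::scale] for row in frame[::scale]])  # naive downscale
    return proxy

# 4 frames of 4x4 pixels -> 2 frames of 2x2 pixels
original = [[[f] * 4 for _ in range(4)] for f in range(4)]
proxy = make_proxy(original)
```

Halving both the frame rate and each spatial dimension in this way cuts the raw data volume by roughly a factor of eight, which is what makes the proxy cheap to upload.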
S103, the terminal 100 sends the first video to the server 200.
Wherein the size of the first video is smaller than that of the second video. The first video may be transmitted to the server 200 over a 5G network.
S104, the server 200 analyzes the first video and obtains an editing scheme.
The editing scheme obtained by the server 200 through analysis may be directed at the parts of the video that, as determined by image recognition, are likely to be of more interest to viewers. The server 200 may identify material segments of the video according to predetermined conditions and material tags, and splice the material segments according to the material tags and a preset digital template to generate a target video. By identifying the more wonderful or more interesting portions of the video as material segments, the target video generated by splicing the material segments has a better effect, and an editing scheme is thereby obtained. The server 200 may also determine the editing scheme based on the content quality, rendering visual effect, and the like of the first video.
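As a rough sketch of how a server might turn per-second "interest" scores (produced by some image-recognition model) into clip ranges: the scoring itself, the threshold, and the function name are assumptions, not part of the patent.

```python
def select_highlights(scores, threshold=0.6):
    """Group consecutive seconds whose interest score exceeds `threshold`
    into (start, end) clip ranges — a stand-in for the server identifying
    the 'more interesting' parts of the first video."""
    ranges, start = [], None
    for t, s in enumerate(scores):
        if s > threshold and start is None:
            start = t                       # a highlight segment begins
        elif s <= threshold and start is not None:
            ranges.append((start, t))       # the segment ends at second t
            start = None
    if start is not None:
        ranges.append((start, len(scores)))
    return ranges

# One hypothetical score per second of a 10-second video.
scores = [0.2, 0.1, 0.9, 0.8, 0.7, 0.3, 0.2, 0.9, 0.9, 0.8]
print(select_highlights(scores))            # [(2, 5), (7, 10)]
```

The resulting (start, end) pairs are exactly the kind of clip range the method transmits back to the terminal in place of the video itself.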
Wherein the editing scheme comprises any one or more of: clipping range, splicing parameter, zooming parameter, image adjusting parameter, picture intelligent adjusting parameter, special effect making parameter and score parameter.
The clip range may, for example, comprise the ranges of the segments selected in the videos (the first and second videos). The first video and the second video may cover the same time range; for example, both may be 30 seconds long. Through the server's analysis, the clip range of the first video comprises, say, 2-5 seconds, 7-15 seconds, and 18-25 seconds. The server 200 may send the clip range to the terminal 100, and the terminal 100 may likewise select the video segments of the second video at the positions 2-5 seconds, 7-15 seconds, and 18-25 seconds, based on the clip range.
The splicing parameter may, for example, comprise the splicing order of the video segments at each position. For example, the order in the edited video is: 7-15 seconds, 2-5 seconds, and 18-25 seconds. The server 200 may transmit the splicing parameter to the terminal 100, and the terminal 100 may likewise arrange the video segments of the second video in the order 7-15 seconds, 2-5 seconds, and 18-25 seconds, based on the splicing parameter.
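Applying the clip range and the splicing parameter to the second video can be sketched as below, assuming the video is modeled as a sequence of one-second units; the function name and the index-based encoding of the splice order are assumptions.

```python
def apply_scheme(video, clip_ranges, splice_order):
    """Cut each (start, end) range out of `video` (a per-second sequence),
    then concatenate the segments in the order given by `splice_order`
    (indices into clip_ranges)."""
    segments = [video[start:end] for (start, end) in clip_ranges]
    edited = []
    for i in splice_order:
        edited.extend(segments[i])
    return edited

video = list(range(30))                      # a 30-second video, one unit per second
clip_ranges = [(2, 5), (7, 15), (18, 25)]    # the server's clip range
splice_order = [1, 0, 2]                     # 7-15 s first, then 2-5 s, then 18-25 s
edited = apply_scheme(video, clip_ranges, splice_order)
print(edited[:3], len(edited))               # [7, 8, 9] 18
```

This is the step the terminal performs locally on the high-quality second video, so only the few numbers above, not the video itself, ever have to cross the network.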
The image adjustment parameters may include, for example, any one or more of: brightness, contrast, rotation angle, saturation, exposure parameter, highlight parameter, shadow parameter, color temperature parameter, hue parameter, sharpening parameter, definition and other parameters.
The picture intelligent adjustment parameter may include any one or more of the following: AI retouching parameters, stretch parameters, face beautification parameters, makeup parameters, filter parameters, etc. The AI retouching parameters may include retouching sharpness, retouching hue, AI toning parameters, and the like. The stretch parameters may include, for example, the stretching positions and stretching ratios of the picture. The face beautification parameters may include whitening parameters, skin smoothing parameters, blemish and acne removal parameters, face slimming parameters, eye enlargement parameters, and the like. The makeup parameters and the filter parameters may include, for example, the picture adjustment parameters of a selected corresponding template.
The special effect production parameters may include, for example, special effect parameters for video clip switching, etc.
The score parameter may, for example, comprise the parameters of the background music used.
The server 200 can perform image recognition, semantic recognition, and the like by using its processing capability, and based on the recognized images and semantics, make the target video generated by splicing the material segments have a better effect, thereby obtaining the editing scheme.
The editing scheme may further include, for example, the length range of each video segment, video arrangement habit data (e.g., landscape before portrait, alternating landscape and portrait, preference for landscape footage, etc.), the style of the score, whether the score is beat-aligned, and face and body beautification preference data.
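The patent lists the fields of the editing scheme but does not fix a wire format. A hypothetical JSON encoding (all field names below are assumptions) illustrates why transmitting only the scheme is cheap compared to transmitting video:

```python
import json

# Hypothetical serialization of an editing scheme as the server might
# return it; none of these key names come from the patent itself.
editing_scheme = {
    "clip_ranges": [[2, 5], [7, 15], [18, 25]],   # seconds within the video
    "splice_order": [1, 0, 2],                    # 7-15 s, 2-5 s, 18-25 s
    "image_adjust": {"brightness": 0.1, "contrast": 1.05, "saturation": 1.1},
    "special_effects": {"transition": "crossfade"},
    "score": {"track": "bgm_01", "beat_aligned": True},
}

payload = json.dumps(editing_scheme).encode("utf-8")
print(len(payload) < 1024)   # True: the whole scheme fits in under 1 KB
```

A sub-kilobyte payload like this is what step S105 sends back to the terminal, versus the hundreds of megabytes an edited high-quality video would occupy.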
S105, the server 200 transmits the edit scheme to the terminal 100.
The editing scheme may be transmitted to the terminal 100 via a 5G network.
And S106, the terminal 100 edits the second video by using the editing scheme to obtain an edited third video.
The terminal may edit the second video according to the editing scheme that the server 200 determined from the first video. In this way, the terminal 100 can perform editing using an editing scheme obtained by means of the processing power of the server 200.
In the video processing method provided in fig. 2, when shooting a video, the terminal can simultaneously capture a low-quality video and a high-quality video (i.e., a first video and a second video) in the same time period. The terminal sends the low-quality first video over a network connection to a server with high processing capacity; the server obtains an editing scheme from the first video and sends the editing scheme back to the terminal. The terminal can then edit the high-quality second video according to the editing scheme. In this way, the high-computing-power server analyzes the video to obtain the editing scheme, and only the editing scheme is transmitted to the terminal, which shortens the time occupied by data transmission, improves video processing efficiency, provides the user with a better shooting experience, and makes the terminal more convenient to use.
Another video processing method provided in the embodiment of the present application is described below based on the system architecture shown in fig. 1. In this scenario, the terminal 100 may establish a communication connection with the server 200 through a wireless communication technology. For example, the terminal 100 connects to a WiFi network or a cellular data network so that it can access the server 200. The terminal 100 may also access the server 200 directly, or through other devices. Referring to fig. 4, fig. 4 is a schematic flowchart illustrating another video processing method according to an embodiment of the present disclosure. As shown in fig. 4, the video processing method may include:
S201, the terminal 100 receives a shooting user operation.
S202, in response to the shooting user operation, the terminal 100 captures images through the camera to obtain the first video and the second video.
The descriptions of steps S201 to S202 refer to the descriptions of steps S101 to S102, which are not repeated herein.
S203, the terminal 100 detects a network status with the server, and/or detects whether the size of the first video is greater than a preset threshold.
The terminal 100 may detect whether the network state with the server 200 is below a set threshold condition, for example, whether the current network bandwidth between the terminal 100 and the server 200 is greater than a set threshold, or whether the network connection type between the terminal 100 and the server 200 is a 5G network connection. If it is a 5G network connection, the network state with the server 200 is above the set threshold condition; if it is not a 5G network connection, the network state with the server 200 is below the set threshold condition.
The terminal 100 may also detect whether the size of the first video is greater than a preset threshold, for example, whether it exceeds 100 MB. In other embodiments of the present application, the terminal 100 may further detect whether the size of the second video is smaller than a set threshold, for example, 200 MB.
In some embodiments of the present application, the terminal 100 may perform the steps corresponding to case 1, i.e., steps S204-S208, when it detects that any one or both of the above conditions are satisfied, and in S204 sends the first video to the server 200. That is, the terminal 100 may perform case 1 when any of the following holds:
1. the size of the first video is greater than the preset threshold;
2. the network state with the server 200 is below the set threshold condition;
3. the size of the first video is greater than the preset threshold, or the network state with the server 200 is below the set threshold condition;
4. the size of the first video is greater than the preset threshold, and the network state with the server 200 is below the set threshold condition.
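One possible decision policy covering S203-S209 can be sketched as follows, resolving the overlap between the case-1 and case-2 conditions in favor of case 2; the 100 MB limit and the function name are illustrative assumptions.

```python
def choose_upload(network_ok, first_size, size_limit=100 * 2**20):
    """Decide which video the terminal uploads. With a good network
    connection or a small first video, upload the second (original) video
    so the server edits it directly (case 2); otherwise upload only the
    small first video and receive back just the editing scheme (case 1)."""
    if network_ok or first_size < size_limit:
        return "second"   # case 2 (S209-S211): server edits, returns video
    return "first"        # case 1 (S204-S208): server returns only the scheme

print(choose_upload(network_ok=True, first_size=150 * 2**20))    # second
print(choose_upload(network_ok=False, first_size=150 * 2**20))   # first
print(choose_upload(network_ok=False, first_size=50 * 2**20))    # second
```

Either branch lets the terminal lean on the server's computing power; the policy only chooses between uploading the small proxy or the full-quality original.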
Case 1: S204-S208.
S204, when the network state is below the set threshold condition and/or the size of the first video is greater than the preset threshold, the terminal 100 sends the first video to the server 200.
S205, the server 200 analyzes the first video and obtains an editing scheme.
S206, the server 200 transmits the edit scheme to the terminal 100.
S207, the terminal 100 edits the second video by using the editing scheme, and obtains an edited third video.
The description of steps S205-S207 may refer to S104-S106.
And S208, the terminal 100 sends the third video to the server 200 for the server 200 to store.
The server 200 may store the third video edited by the terminal 100. Other terminals can then obtain the third video from the server 200 over the network by logging in to the same account.
Case 2:
S209, when the network state is above the set threshold condition or the size of the first video is smaller than the preset threshold, the terminal 100 sends the second video to the server 200.
S210, the server 200 analyzes the second video to obtain an editing scheme, and edits the second video according to the editing scheme to obtain a fourth video.
S211, the server 200 transmits the fourth video to the terminal 100.
S209 is not limited to requiring only one of the two conditions; the terminal 100 may also send the second video to the server 200 only when the network state is above the set threshold condition and the size of the first video is smaller than the preset threshold. In other embodiments, the terminal 100 sends the second video to the server 200 when the network state is above the set threshold condition and/or the size of the second video is smaller than a preset threshold.
In the embodiment of the present application, when the captured high-quality video (the second video) is small, i.e., smaller than the preset threshold, and/or the network speed is fast, the terminal 100 may directly send the high-quality video to the server for processing. When the captured high-quality video is large, i.e., larger than the preset threshold, and/or the network speed is slow, the terminal 100 instead sends the smaller first video to the server for processing, obtains the editing scheme, and edits the second video according to the editing scheme. Thus, regardless of network speed and video size, the terminal can edit the video with the help of the server's processing capacity, complete video processing quickly, improve video processing efficiency, provide the user with a better shooting experience, and make the terminal more convenient to use.
In S210, the server 200 may analyze the second video by image recognition to obtain the editing scheme, which may be directed at the parts likely to be of more interest to viewers; refer to the description of step S104. The specific contents of the editing scheme are also as described in step S104. The server 200 may edit the second video according to the editing scheme with reference to the descriptions of steps S104 and S106, which are not repeated herein.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a terminal according to an embodiment of the present disclosure. As shown in fig. 5, the terminal 100 may include at least:
at least one processor 501, at least one network interface 504, a user interface 503, memory 505, and at least one communication bus 502.
Wherein a communication bus 502 is used to enable connective communication between these components.
The user interface 503 may include a display screen (Display) and a camera (Camera); optionally, the user interface 503 may also include a standard wired interface and a wireless interface.
The network interface 504 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface).
The processor 501 may include one or more processing cores. The processor 501 connects the various parts of the terminal 100 using various interfaces and lines, and performs the various functions of the terminal 100 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 505 and by calling the data stored in the memory 505.
Optionally, the processor 501 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 501 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU renders and draws the content to be displayed on the display screen; and the modem handles wireless communication. It is understood that the modem may not be integrated into the processor 501 and may instead be implemented by a separate chip.
The memory 505 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 505 includes a non-transitory computer-readable medium. The memory 505 may be used to store instructions, programs, code sets, or instruction sets. The memory 505 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like; the data storage area may store the data referred to in the above method embodiments. Optionally, the memory 505 may also be at least one storage device located remotely from the processor 501. As shown in fig. 5, the memory 505, as a computer storage medium, may include an operating system, a network communication module, and a user interface module.
In the terminal 100 shown in fig. 5, the user interface 503 mainly serves as an interface for receiving user input and acquiring the data input by the user, while the processor 501 may be used to invoke the applications stored in the memory 505 and perform the specific program operations.
In some embodiments of the present application, the computer instructions, when executed by the one or more processors, cause the terminal to:
acquiring images through a camera to obtain a first video and a second video, wherein the first video and the second video are directed at the same shooting object, the quality of the first video is lower than that of the second video, and the size of the first video is smaller than that of the second video;
sending the first video to a server through a network interface 504, so that the server analyzes the first video and obtains an editing scheme;
receiving an editing scheme from the server;
and editing the second video by using the editing scheme to obtain an edited third video.
The terminal here may be the terminal 100 in the example shown in fig. 2 or fig. 4.
In the embodiment of the application, when shooting a video, the terminal can simultaneously capture a low-quality video and a high-quality video (i.e., the first video and the second video) in the same time period. The terminal sends the low-quality first video over a network connection to a server with high processing capacity; the server obtains an editing scheme from the first video and sends the editing scheme back to the terminal. The terminal can then edit the high-quality second video according to the editing scheme. In this way, the high-computing-power server analyzes the video to obtain the editing scheme, and only the editing scheme is transmitted to the terminal, which shortens the time occupied by data transmission, improves video processing efficiency, provides the user with a better shooting experience, and makes the terminal more convenient to use.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a server according to an embodiment of the present disclosure; the server 200 may be the server 200 shown in fig. 1, fig. 2, and fig. 4. The server shown in fig. 6 includes: one or more processors 601, one or more input devices 602, one or more output devices 603, and a memory 604. The processor 601, the input device 602, the output device 603, and the memory 604 are connected by a bus 605. The memory 604 is used to store instructions, and the processor 601 is used to execute the instructions stored in the memory 604.
Wherein, in the case that the device is used as a server, when the one or more processors 601 execute the application stored in the memory 604, the server is caused to perform the video processing method illustrated in fig. 2 or fig. 4.
According to the video processing method, the terminal, and the video processing system, when shooting a video, the terminal can simultaneously capture a low-quality video and a high-quality video (i.e., the first video and the second video) in the same time period. The terminal sends the low-quality first video over a network connection to a server with high processing capacity; the server obtains an editing scheme from the first video and sends the editing scheme back to the terminal. The terminal can then edit the high-quality second video according to the editing scheme. In this way, the high-computing-power server analyzes the video to obtain the editing scheme, and only the editing scheme is transmitted to the terminal, which shortens the time occupied by data transmission, improves video processing efficiency, provides the user with a better shooting experience, and makes the terminal more convenient to use.
It should be understood that the foregoing examples of specific implementations of the video processing method, the terminal and the video processing system are only used for explaining the embodiments of the present application, and should not be construed as limiting. Other implementations may also be employed.
Embodiments of the present application further provide a computer-readable storage medium, which stores instructions that, when executed on a computer or a processor, cause the computer or the processor to perform one or more steps performed by the terminal in the embodiment shown in fig. 2 or fig. 4. The respective constituent modules of the above-described terminal, if implemented in the form of software functional units and sold or used as independent products, may be stored in the computer-readable storage medium.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted between computer-readable storage media. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a Digital Versatile Disc (DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. And the aforementioned storage medium includes: various media capable of storing program codes, such as a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, and an optical disk. The technical features in the present examples and embodiments may be arbitrarily combined without conflict.
The above-described embodiments are merely preferred embodiments of the present application, and are not intended to limit the scope of the present application, and various modifications and improvements made to the technical solutions of the present application by those skilled in the art without departing from the design spirit of the present application should fall within the protection scope defined by the claims of the present application.
The video processing method, the terminal, and the video processing system disclosed by the embodiments of the invention are described in detail above; specific examples are used herein to explain the principle and embodiments of the invention, and the description of the embodiments is only intended to help understand the method and its core idea. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present invention. In summary, the content of the present specification should not be construed as a limitation of the present invention.

Claims (15)

1. A method of video processing, the method comprising:
acquiring an image through a camera to obtain a first video and a second video, wherein the first video and the second video are directed at the same shooting object, the quality of the first video is lower than that of the second video, and the size of the first video is smaller than that of the second video;
sending the first video to a server so that the server analyzes the first video and obtains an editing scheme;
receiving an editing scheme from the server;
and editing the second video by using the editing scheme to obtain an edited third video.
2. The method according to claim 1, wherein after the editing of the second video by using the editing scheme to obtain an edited third video, the method further comprises:
and sending the third video to the server for storage by the server.
3. The method of claim 1, wherein the editing scheme comprises any one or more of: clipping range, splicing parameter, zooming parameter, image adjusting parameter, picture intelligent adjusting parameter, special effect making parameter and score matching parameter.
4. The method of any of claims 1-3, wherein prior to sending the first video to the server, the method further comprises:
detecting a network state with the server, and/or detecting whether the size of the first video is larger than a preset threshold value;
the sending the first video to a server includes:
and when the network state is lower than a set threshold condition and/or the size of the first video is larger than the preset threshold, sending the first video to a server.
5. The method of claim 4, wherein after detecting the network status with the server, the method further comprises:
when the network state is higher than the set threshold condition or the size of the first video is smaller than the preset threshold, sending the second video to the server, so that the server analyzes the second video to obtain an editing scheme, and editing the second video according to the editing scheme to obtain a fourth video;
receiving the fourth video from the server.
6. A video processing system, the system comprising a terminal and a server, wherein:
the terminal is used for acquiring images through a camera to obtain a first video and a second video, wherein the first video and the second video are directed at the same shooting object, the quality of the first video is lower than that of the second video, and the size of the first video is smaller than that of the second video;
the terminal is further used for sending the first video to a server;
the server is used for analyzing the first video and obtaining an editing scheme;
the terminal is also used for receiving the editing scheme from the server;
the terminal is further configured to edit the second video by using the editing scheme to obtain an edited third video.
7. The video processing system of claim 6, wherein the terminal is further configured to send the third video to the server;
the server is further used for storing the third video.
8. The video processing system of claim 6, wherein the editing scheme comprises any one or more of: clipping range, splicing parameter, zooming parameter, image adjusting parameter, picture intelligent adjusting parameter, special effect making parameter and score parameter.
9. The video processing system according to any of claims 6-8, wherein the terminal is further configured to:
detecting a network state with the server, and/or detecting whether the size of the first video is larger than a preset threshold value;
and when the network state is lower than a set threshold condition and/or the size of the first video is larger than the preset threshold, sending the first video to a server.
10. The video processing system of claim 9, wherein the terminal is further configured to: when the network state is higher than the set threshold condition or the size of the first video is smaller than the preset threshold, sending the second video to the server;
the server is further used for analyzing the second video to obtain an editing scheme and editing the second video according to the editing scheme to obtain a fourth video;
the terminal is further configured to receive the fourth video from the server.
11. A terminal, characterized in that the terminal comprises: one or more processors, memory, cameras;
the memory coupled with the one or more processors, the memory to store computer program code, the computer program code comprising computer instructions;
the computer instructions, when executed by the one or more processors, cause the terminal to:
acquiring images through the camera to obtain a first video and a second video, wherein the first video and the second video are directed at the same shooting object, the quality of the first video is lower than that of the second video, and the size of the first video is smaller than that of the second video;
sending the first video to a server so that the server analyzes the first video and obtains an editing scheme;
receiving an editing scheme from the server;
and editing the second video by using the editing scheme to obtain an edited third video.
12. The terminal of claim 11, wherein the processor is further configured to invoke the computer instructions to cause the terminal to:
and sending the third video to the server for storage by the server.
13. The terminal of claim 11, wherein the editing scheme comprises any one or more of: clipping range, splicing parameter, zooming parameter, image adjusting parameter, picture intelligent adjusting parameter, special effect making parameter and score parameter.
14. The terminal of any of claims 11-13, wherein the processor is further configured to invoke the computer instructions to cause the terminal to:
detecting a network state with the server, and/or detecting whether the size of the first video is larger than a preset threshold value;
the sending the first video to a server includes:
and when the network state is lower than a set threshold condition and/or the size of the first video is larger than the preset threshold, sending the first video to a server.
15. The terminal of claim 14, wherein the processor is further configured to invoke the computer instructions to cause the terminal to:
when the network state is higher than the set threshold condition or the size of the first video is smaller than the preset threshold, sending the second video to the server, so that the server analyzes the second video to obtain an editing scheme, and editing the second video according to the editing scheme to obtain a fourth video;
receiving the fourth video from the server.
CN202211136032.5A 2022-09-19 2022-09-19 Video processing method, terminal and video processing system Active CN115515008B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211136032.5A CN115515008B (en) 2022-09-19 2022-09-19 Video processing method, terminal and video processing system

Publications (2)

Publication Number Publication Date
CN115515008A true CN115515008A (en) 2022-12-23
CN115515008B CN115515008B (en) 2024-02-27

Family

ID=84503489

Country Status (1)

Country Link
CN (1) CN115515008B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150281305A1 (en) * 2014-03-31 2015-10-01 Gopro, Inc. Selectively uploading videos to a cloud environment
US20160163353A1 (en) * 2012-01-26 2016-06-09 Ambarella, Inc. Video editing with connected high-resolution video camera and video cloud server
CN107111620A (en) * 2014-10-10 2017-08-29 三星电子株式会社 Video editing using context data and the content discovery using group
CN108900790A (en) * 2018-06-26 2018-11-27 努比亚技术有限公司 Method of video image processing, mobile terminal and computer readable storage medium
CN112261416A (en) * 2020-10-20 2021-01-22 广州博冠信息科技有限公司 Cloud-based video processing method and device, storage medium and electronic equipment
CN112672170A (en) * 2020-06-18 2021-04-16 体奥动力(北京)体育传播有限公司 Event video centralization method and system
US20210358524A1 (en) * 2020-05-14 2021-11-18 Shanghai Bilibili Technology Co., Ltd. Method and device of editing a video
WO2021237619A1 (en) * 2020-05-28 2021-12-02 深圳市大疆创新科技有限公司 Video file editing method, and device, system and computer-readable storage medium
CN114095755A (en) * 2021-11-19 2022-02-25 上海众源网络有限公司 Video processing method, device and system, electronic equipment and storage medium
WO2022133782A1 (en) * 2020-12-23 2022-06-30 深圳市大疆创新科技有限公司 Video transmission method and system, video processing method and device, playing terminal, and movable platform


Also Published As

Publication number Publication date
CN115515008B (en) 2024-02-27

Similar Documents

Publication Publication Date Title
CN111464761A (en) Video processing method and device, electronic equipment and computer readable storage medium
JP2016537922A (en) Pseudo video call method and terminal
WO2023051185A1 (en) Image processing method and apparatus, and electronic device and storage medium
WO2023125374A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN113810596B (en) Time-delay shooting method and device
WO2022022019A1 (en) Screen projection data processing method and apparatus
WO2019227429A1 (en) Method, device, apparatus, terminal, server for generating multimedia content
CN113012082A (en) Image display method, apparatus, device and medium
US20240129576A1 (en) Video processing method, apparatus, device and storage medium
WO2022148319A1 (en) Video switching method and apparatus, storage medium, and device
WO2023040749A1 (en) Image processing method and apparatus, electronic device, and storage medium
US11893770B2 (en) Method for converting a picture into a video, device, and storage medium
CN113411498A (en) Image shooting method, mobile terminal and storage medium
US11871137B2 (en) Method and apparatus for converting picture into video, and device and storage medium
WO2023160295A9 (en) Video processing method and apparatus
CN114979785B (en) Video processing method, electronic device and storage medium
CN111967397A (en) Face image processing method and device, storage medium and electronic equipment
CN109963106B (en) Video image processing method and device, storage medium and terminal
US20140194152A1 (en) Mixed media communication
KR20150083491A (en) Methed and system for synchronizing usage information between device and server
JP2024502117A (en) Image processing method, image generation method, device, equipment and medium
CN108389165B (en) Image denoising method, device, terminal system and memory
CN110069641B (en) Image processing method and device and electronic equipment
CN115515008B (en) Video processing method, terminal and video processing system
CN116847147A (en) Special effect video determining method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant