CN115515008B - Video processing method, terminal and video processing system - Google Patents


Info

Publication number
CN115515008B
Authority
CN
China
Prior art keywords
video
server
terminal
editing
network state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211136032.5A
Other languages
Chinese (zh)
Other versions
CN115515008A (en)
Inventor
胡游乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NETVIEW TECHNOLOGIES (SHENZHEN) CO LTD
Original Assignee
NETVIEW TECHNOLOGIES (SHENZHEN) CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NETVIEW TECHNOLOGIES (SHENZHEN) CO LTD filed Critical NETVIEW TECHNOLOGIES (SHENZHEN) CO LTD
Priority to CN202211136032.5A priority Critical patent/CN115515008B/en
Publication of CN115515008A publication Critical patent/CN115515008A/en
Application granted granted Critical
Publication of CN115515008B publication Critical patent/CN115515008B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/436Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N21/4363Adapting the video stream to a specific local network, e.g. a Bluetooth® network
    • H04N21/43637Adapting the video stream to a specific local network, e.g. a Bluetooth® network involving a wireless protocol, e.g. Bluetooth, RF or wireless LAN [IEEE 802.11]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440281Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration
    • H04N21/4854End-user interface for client configuration for modifying image parameters, e.g. image brightness, contrast

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Television Signal Processing For Recording (AREA)
  • Studio Devices (AREA)

Abstract

This application discloses a video processing method, a terminal, and a video processing system. The method comprises the following steps: capturing images through a camera to obtain a first video and a second video, where the two videos are of the same shooting subject and the first video is lower in quality and smaller in size than the second video; sending the first video to a server so that the server analyzes the first video and obtains an editing scheme; receiving the editing scheme from the server; and editing the second video with the editing scheme to obtain an edited third video. This improves video processing efficiency and provides the user with a better shooting experience, thereby making the terminal more convenient to use.

Description

Video processing method, terminal and video processing system
Technical Field
The present invention relates to the field of electronic technologies, and in particular, to a video processing method, a terminal, and a video processing system.
Background
Currently, with the development of video capture and display devices and the growth of network bandwidth, large amounts of high-definition video are shot, edited, and played on high-quality display devices for presentation to users.
Both the shooting device and the cloud server have video processing capability. However, the shooting device is constrained by its hardware and has limited computing power. One approach is therefore to send the captured video to the cloud server and edit it there using the server's high computing power.
However, high-definition video files are very large. When such a file is transmitted between the shooting device and the cloud server over limited network bandwidth, the transfer takes too long, which reduces video processing efficiency.
Disclosure of Invention
The embodiments of this application provide a video processing method, a terminal, and a video processing system that can improve video processing efficiency and provide the user with a better shooting experience, thereby making the terminal more convenient to use.
In a first aspect, embodiments of the present application provide a video processing method, where the method includes:
capturing images through a camera to obtain a first video and a second video, where the two videos are of the same shooting subject, and the first video is lower in quality and smaller in size than the second video;
transmitting the first video to a server, so that the server analyzes the first video and obtains an editing scheme;
receiving an editing scheme from the server;
and editing the second video by using the editing scheme to obtain an edited third video.
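Taken together, the four steps of the first aspect amount to a proxy-based workflow: analyze a small stand-in remotely, edit the large original locally. A minimal sketch in Python (all names are illustrative, not from the patent; a video is modeled as a list of timestamped frames):

```python
def apply_scheme(video, scheme):
    """Edit the high-quality video locally: keep only the frames whose
    timestamps fall inside one of the scheme's clip ranges."""
    ranges = scheme["clip_ranges"]
    return [f for f in video if any(a <= f["t"] <= b for a, b in ranges)]

class Server:
    """Stand-in for the remote server; a real server would analyze the
    proxy with image recognition before returning a scheme."""
    def upload(self, proxy):
        self.proxy = proxy

    def fetch_editing_scheme(self):
        return {"clip_ranges": [(2, 5)]}  # fixed answer for the sketch

def process_on_terminal(first_video, second_video, server):
    server.upload(first_video)              # send only the small first video
    scheme = server.fetch_editing_scheme()  # receive the editing scheme
    return apply_scheme(second_video, scheme)  # third video, edited locally
```

Only the small first video and the compact editing scheme cross the network; the large second video never leaves the terminal.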
Optionally, after the second video is edited by using the editing scheme to obtain the edited third video, the method further includes:
and sending the third video to the server for storage by the server.
Optionally, the editing scheme includes any one or more of the following: clip range, splicing parameters, scaling parameters, image adjustment parameters, intelligent picture adjustment parameters, special-effect parameters, and soundtrack parameters.
Optionally, before the sending the first video to the server, the method further includes:
detecting a network state between the terminal and the server, and/or detecting whether the size of the first video is larger than a preset threshold;
the sending the first video to a server includes:
and when the network state is below a set threshold and/or the size of the first video is larger than the preset threshold, sending the first video to the server.
Optionally, after detecting the network state with the server, the method further includes:
when the network state is above the set threshold or the size of the first video is smaller than the preset threshold, sending the second video to the server, so that the server analyzes the second video to obtain an editing scheme and edits the second video according to the editing scheme to obtain a fourth video;
and receiving the fourth video from the server.
In a second aspect, embodiments of the present application provide a video processing system, where the system includes a terminal and a server, where:
the terminal is used for capturing images through a camera to obtain a first video and a second video, where the two videos are of the same shooting subject, and the first video is lower in quality and smaller in size than the second video;
the terminal is further used for sending the first video to a server;
the server is used for analyzing the first video and obtaining an editing scheme;
the terminal is further used for receiving an editing scheme from the server;
the terminal is further configured to edit the second video by using the editing scheme, so as to obtain an edited third video.
Optionally, the terminal is further configured to send the third video to the server;
the server is further configured to store the third video.
Optionally, the editing scheme includes any one or more of the following: clip range, splicing parameters, scaling parameters, image adjustment parameters, intelligent picture adjustment parameters, special-effect parameters, and soundtrack parameters.
Optionally, the terminal is further configured to:
detect a network state with the server and/or detect whether the size of the first video is larger than a preset threshold;
and when the network state is below a set threshold and/or the size of the first video is larger than the preset threshold, send the first video to the server.
Optionally, the terminal is further configured to: when the network state is above the set threshold or the size of the first video is smaller than the preset threshold, send the second video to the server;
the server is further configured to analyze the second video to obtain an editing scheme, and edit the second video according to the editing scheme to obtain a fourth video;
the terminal is further configured to receive the fourth video from the server.
In a third aspect, an embodiment of the present application provides a terminal, including: one or more processors, memory, cameras;
the memory is coupled to the one or more processors, the memory for storing computer program code, the computer program code comprising computer instructions;
the computer instructions, when executed by the one or more processors, cause the terminal to:
capturing images through the camera to obtain a first video and a second video, where the two videos are of the same shooting subject, and the first video is lower in quality and smaller in size than the second video;
transmitting the first video to a server, so that the server analyzes the first video and obtains an editing scheme;
receiving an editing scheme from the server;
and editing the second video by using the editing scheme to obtain an edited third video.
Optionally, the processor is further configured to invoke the computer instructions to cause the terminal to perform the following operations:
and sending the third video to the server for storage by the server.
Optionally, the editing scheme includes any one or more of the following: clip range, splicing parameters, scaling parameters, image adjustment parameters, intelligent picture adjustment parameters, special-effect parameters, and soundtrack parameters.
Optionally, the processor is further configured to invoke the computer instructions to cause the terminal to perform the following operations:
detecting a network state between the terminal and the server, and/or detecting whether the size of the first video is larger than a preset threshold;
the sending the first video to a server includes:
and when the network state is below a set threshold and/or the size of the first video is larger than the preset threshold, sending the first video to the server.
Optionally, the processor is further configured to invoke the computer instructions to cause the terminal to perform the following operations:
when the network state is above the set threshold or the size of the first video is smaller than the preset threshold, sending the second video to the server, so that the server analyzes the second video to obtain an editing scheme and edits the second video according to the editing scheme to obtain a fourth video;
and receiving the fourth video from the server.
In a fourth aspect, an embodiment of the present application provides a server, including a processor, a memory, and a communication module, where the memory is configured to store program code, and the processor is configured to invoke the program code to implement the server in any of the optional manners of the second aspect.
In a fifth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of the first aspect and any of its alternatives described above.
It can be seen that, with the video processing method, terminal, and video processing system provided by the embodiments of this application, the terminal can capture high-quality and low-quality videos (i.e., the second video and the first video) over the same time period when shooting. The terminal sends the low-quality first video to the server; the server derives an editing scheme from it and sends the scheme back; and the terminal then edits the high-quality second video according to that scheme. In this way, the computationally powerful server analyzes the video to obtain the editing scheme, and only the editing scheme is transmitted back to the terminal. This reduces the time spent on data transmission, improves video processing efficiency, and provides the user with a better shooting experience, thereby making the terminal more convenient to use.
Drawings
Fig. 1 is a schematic architecture diagram of a video processing system according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a video processing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a user interface provided by an embodiment of the present application;
fig. 4 is a flowchart of another video processing method according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
The following clearly and completely describes the embodiments of the present disclosure with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure fall within the scope of the present disclosure.
Throughout the specification and claims, unless the context requires otherwise, the term "comprising" is to be interpreted in an open, inclusive sense, i.e., "including, but not limited to". For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to those listed steps or elements, but may include other steps or elements not listed or inherent to such a process, method, article, or apparatus. In this specification, the terms "one embodiment," "some embodiments," "example embodiments," "exemplary," or "some examples," etc., indicate that a particular feature, structure, material, or characteristic associated with the embodiment or example is included in at least one embodiment or example of the present disclosure. Such schematic representations do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.
Hereinafter, the terms "first" and "second" are used for descriptive convenience only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include one or more such features. In the description of the embodiments of the present disclosure, unless otherwise indicated, "a plurality" means two or more.
In describing some embodiments, the expressions "coupled" and "connected" and their derivatives may be used. The term "connected" may indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled" may likewise indicate direct physical or electrical contact, but it may also mean that two or more elements are not in direct contact with each other yet still cooperate or interact with each other. The embodiments disclosed herein are not necessarily limited by these usages.
To better understand the video processing method, terminal, and video processing system provided by the embodiments of this application, the network architecture used by these embodiments is described below.
Referring to fig. 1, fig. 1 is a schematic architecture diagram of a video processing system according to an embodiment of the present application. It should be understood that the system architecture of fig. 1 is used here to explain the embodiments of the present application and should not be construed as limiting. As shown in fig. 1, the video processing system may include, for example, a terminal 100 and a server 200, wherein:
the terminal 100 and the server 200 establish a communication connection. The terminal 200 and the server 300 may also be established with a communication connection. The communication connection between the terminal 100 and the server 200 may comprise, for example, any one or more of the following:
wireless local area network (WLAN) (e.g., a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
The communication connection between the terminal 100 and the server 200 may also include, for example, any one or more of the following:
global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time division-synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
In the embodiments of the present application, the wireless communication technology is not limited to the above examples; it may also be a 5G communication technology or a future new communication technology, which is not limited in the embodiments of the present application.
The terminal 100 is, for example, a three-axis gimbal, and may be integrated into a user's terminal device, including but not limited to a mobile phone, a tablet computer, a multimedia playing device, or a smart wearable device. The terminal 100 may also be a smart watch, a smart bracelet, a headset (e.g., a virtual reality (VR) helmet, augmented reality (AR) glasses, or other wearable glasses), a mobile phone, a tablet, a camera, etc. It should be understood that the specific product form of the terminal 100 is not limited to the above examples, which are given only to explain the embodiments of the present application.
The server 200 may be any device suitable for image processing, for example, but not limited to, a workstation dedicated to processing image and video data, a cluster of processing devices, a personal computer such as a desktop or notebook computer, or a mobile phone, tablet computer, or Internet-of-Things device. The server 200 may be a device or system with greater computing and processing power than the terminal 100.
In this embodiment, the terminal 100 is configured to acquire, by capturing an image by using a camera, a first video and a second video, where the first video and the second video are for the same shooting object, and the quality of the first video is lower than that of the second video, and the size of the first video is smaller than that of the second video;
the terminal 100 is further configured to send the first video to the server 200;
the server 200 is configured to analyze the first video and obtain an editing scheme;
the terminal 100 is further configured to receive an editing scheme from the server 200;
the terminal 100 is further configured to edit the second video by using the editing scheme, so as to obtain an edited third video.
The terminal 100 is further configured to send the third video to the server 200;
the server 200 is further configured to store the third video.
Optionally, the editing scheme includes any one or more of the following: clip range, splicing parameters, scaling parameters, image adjustment parameters, intelligent picture adjustment parameters, special-effect parameters, and soundtrack parameters.
Optionally, the terminal 100 is further configured to:
detect a network state with the server 200 and/or detect whether the size of the first video is larger than a preset threshold;
and when the network state is below a set threshold and/or the size of the first video is larger than the preset threshold, send the first video to the server 200.
Optionally, the terminal 100 is further configured to: when the network state is above the set threshold or the size of the first video is smaller than the preset threshold, send the second video to the server 200;
the server 200 is further configured to analyze the second video to obtain an editing scheme, and edit the second video according to the editing scheme to obtain a fourth video;
the terminal 100 is further configured to receive the fourth video from the server 200.
In the above shooting system, when shooting a video, the terminal 100 can capture high-quality and low-quality videos (i.e., the second video and the first video) over the same time period. It transmits the low-quality first video over the network connection to the server 200, which has high processing capability; the server 200 derives an editing scheme from the first video and sends the scheme back to the terminal. The terminal can then edit the high-quality second video according to the editing scheme. In this way, the computationally powerful server 200 analyzes the video to obtain the editing scheme, and only the editing scheme is transmitted to the terminal 100, which reduces data transmission time, improves video processing efficiency, and provides the user with a better shooting experience, thereby making the terminal 100 more convenient to use.
The following describes a video processing method according to an embodiment of the present application based on the system architecture shown in fig. 1. In this scenario, the terminal 100 may establish a communication connection with the server 200 via a wireless communication technology; for example, the terminal 100 connects to a Wi-Fi network or a cellular data network so that it can access the server 200. The terminal 100 may also access the server 200 through other devices. Referring to fig. 2, fig. 2 is a flowchart of a video processing method according to an embodiment of the present application. As shown in fig. 2, the video processing method may include:
s101, the terminal 100 receives a photographing user operation.
In the embodiment of the present application, the photographing user operation may be, for example, a user operation for starting photographing performed on the terminal 100. Specifically, referring to fig. 3, fig. 3 is a schematic diagram of a user interface according to an embodiment of the present application. The user interface is, for example, a user interface of the terminal 100. As shown in fig. 3, the user interface 400 is, for example, a video capture interface, and the video capture interface 400 includes, for example, a "record" option 401 and a capture control 402. Wherein the recording option 401 is in a selected state. The capture control 402 may be used to begin capturing video in response to a user operation. In this embodiment, the shooting user operation is, for example, a user operation acting on the shooting control 402.
S102, in response to the shooting user operation, the terminal 100 captures images through a camera to obtain a first video and a second video.
The first video and the second video are of the same shooting subject, and the first video is lower in quality and smaller in size than the second video.
The second video may have a higher resolution, such as 1080P (1920×1080 pixels), 4K (4096×2160 pixels), or 8K (7680×4320 pixels); the first video has a lower resolution relative to the second video. It can be understood that the first video and the second video are not limited to video captured in steps S101-S102: they may be any videos, and this application does not limit their source, type, subject, or the like. For example, the first video and the second video may be film, television drama, or television program footage captured by professional equipment such as video cameras, as in S101-S102, or everyday videos captured by ordinary users with terminal devices such as mobile phones and tablet computers.
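To make the size gap concrete, the pixel counts of the resolutions above can be compared directly (a rough, uncompressed per-frame comparison only; actual file sizes also depend on the codec and bitrate):

```python
# Pixels per frame at the resolutions mentioned above.
resolutions = {"1080P": (1920, 1080), "4K": (4096, 2160), "8K": (7680, 4320)}
pixels = {name: w * h for name, (w, h) in resolutions.items()}

# An 8K frame carries 16x the pixels of a 1080P frame, so a low-resolution
# first video can be far smaller than the high-resolution second video.
ratio = pixels["8K"] / pixels["1080P"]
```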
The quality of a video may also be characterized by its capture frame rate, capture resolution, and the like, which is not limited in the embodiments of the present application.
In this embodiment of the present application, the second video is, for example, the original video captured by the camera, and its quality is higher than that of the first video. The first video is, for example, obtained by processing the original video, such as by frame extraction or resolution reduction, to produce a lower-quality video. Alternatively, the second video may itself be a higher-quality video obtained by processing the original video, for example by enhancing its resolution. In other embodiments, the first video and the second video may be high- and low-quality videos recorded simultaneously, for example captured over the same period by a high-resolution camera and a low-resolution camera, respectively.
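One practical way to derive such a proxy is to downscale and drop frames with a transcoding tool such as ffmpeg; the sketch below only builds the command line (the width and frame-rate values are illustrative assumptions, not taken from the patent):

```python
def proxy_command(src, dst, width=640, fps=10):
    """Build an ffmpeg command that produces a small first-video proxy
    from the original: downscale to `width` (keeping the aspect ratio,
    height rounded to an even value) and reduce the frame rate to `fps`."""
    return [
        "ffmpeg", "-i", src,
        "-vf", f"scale={width}:-2",  # resolution reduction
        "-r", str(fps),              # frame dropping via a lower frame rate
        dst,
    ]

cmd = proxy_command("original.mp4", "proxy.mp4")
# Run with e.g. subprocess.run(cmd, check=True) when ffmpeg is installed.
```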
S103, the terminal 100 transmits the first video to the server 200.
Wherein the first video is smaller in size than the second video. The first video may be transmitted to the server 200 through a 5G network.
S104, the server 200 analyzes the first video and obtains an editing scheme.
The editing scheme obtained through analysis by the server 200 may cover the parts of the video, identified through image recognition, that are likely to be of most interest to viewers. The server 200 may identify material segments of the video by setting predetermined conditions and material tags, and splice the material segments according to the material tags and a preset digital template to generate a target video. The server may identify the more prominent or interesting parts of the video as material segments, so that the target video generated by splicing the material segments has a better effect, thereby obtaining the editing scheme. The server 200 may also determine the editing scheme according to the content quality of the first video, the rendered visual effect, and the like.
Wherein the editing scheme comprises any one or more of the following: clip range, splicing parameters, scaling parameters, image adjustment parameters, intelligent picture adjustment parameters, special effect production parameters, and score parameters.
The clip range may indicate, for example, time ranges selected from the videos (the first video and the second video). The first video and the second video may cover corresponding identical time ranges; for example, both the first video and the second video have a duration of 30 seconds. After the server analyzes the first video, the clip range of the first video may, for example, contain 2-5 seconds, 7-15 seconds, and 18-25 seconds. The server 200 may transmit the clip range to the terminal 100, and the terminal 100 may likewise select the video clips of the second video at 2-5 seconds, 7-15 seconds, and 18-25 seconds based on the clip range.
The splicing parameters may include, for example, the order in which the video clips at each position are spliced. For example, the order of the edited video is: 7-15 seconds, 2-5 seconds, 18-25 seconds. The server 200 may transmit the splicing parameters to the terminal 100, and the terminal 100 may likewise arrange the video clips of the second video in the order 7-15 seconds, 2-5 seconds, 18-25 seconds according to the splicing parameters.
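As an illustrative sketch (not part of the claimed method), the clip-range and splicing-parameter example above can be expressed as a small function; the function name, the tuple representation of time ranges, and the index-based splice order are assumptions for demonstration:

```python
def apply_editing_scheme(duration_s, clip_ranges, splice_order):
    """Select clip ranges from a video timeline and splice them in order.

    clip_ranges: list of (start_s, end_s) tuples within [0, duration_s].
    splice_order: indices into clip_ranges giving the final arrangement.
    Returns the spliced timeline as a list of (start_s, end_s) segments.
    """
    for start, end in clip_ranges:
        if not (0 <= start < end <= duration_s):
            raise ValueError(f"clip ({start}, {end}) outside {duration_s}s video")
    return [clip_ranges[i] for i in splice_order]

# The example from the description: a 30-second video with clips at
# 2-5 s, 7-15 s and 18-25 s, spliced in the order 7-15, 2-5, 18-25.
clips = [(2, 5), (7, 15), (18, 25)]
spliced = apply_editing_scheme(30, clips, splice_order=[1, 0, 2])
# → [(7, 15), (2, 5), (18, 25)]
```

Because the first and second videos cover the same time range, the same clip ranges and splice order apply to both; this is what allows the server to derive the scheme from the small first video and the terminal to apply it to the large second video.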
The image adjustment parameters may include, for example, any one or more of the following: brightness, contrast, rotation angle, saturation, exposure parameters, highlight parameters, shading parameters, color temperature parameters, hue parameters, sharpening parameters, sharpness, and the like.
The intelligent picture adjustment parameters may comprise any one or more of the following: AI retouching parameters, stretching parameters, face beautifying parameters, cosmetic effect parameters, filter parameters, and the like. The AI retouching parameters may include retouching sharpness, retouching chromaticity, AI color parameters, and the like. The stretching parameters may include, for example, the stretching position and stretching ratio of the picture. The face beautifying parameters may include whitening parameters, skin smoothing parameters, blemish and acne removal parameters, face slimming parameters, eye enlargement parameters, and the like. The cosmetic effect parameters and filter parameters may include, for example, the picture adjustment parameters of a selected template.
The special effect parameters may include, for example, special effect parameters for video clip switching.
The score parameters include, for example, the background music parameters used.
The server 200 may perform image recognition, semantic recognition, and the like by using its processing capability, and obtain the editing scheme according to the recognized images and semantics, so that the target video generated by splicing the material segments has a better effect.
The editing scheme may also include, for example, a length range for each video clip, video arrangement habit data (for example, landscape before portrait, alternating landscape and portrait, a preference for landscape footage, etc.), a score style, whether the score is beat-matched, and beautification preference data.
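For illustration only, the fields of the editing scheme enumerated above can be gathered into a single data structure; the class name, field names, and default values below are assumptions, not part of the patented method:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class EditingScheme:
    """Illustrative container for the editing-scheme fields named in the text."""
    clip_ranges: List[Tuple[float, float]] = field(default_factory=list)
    splice_order: List[int] = field(default_factory=list)
    scaling: Optional[float] = None
    image_adjust: dict = field(default_factory=dict)   # brightness, contrast, ...
    smart_adjust: dict = field(default_factory=dict)   # AI retouch, beautify, ...
    transition_effect: Optional[str] = None            # clip-switch special effect
    score_track: Optional[str] = None                  # background music
    beat_matched: bool = False                         # whether score is beat-matched

# A scheme such as the server might return for the 30-second example:
scheme = EditingScheme(
    clip_ranges=[(2, 5), (7, 15), (18, 25)],
    splice_order=[1, 0, 2],
    image_adjust={"brightness": 1.1, "saturation": 1.05},
)
```

A compact structure like this is what makes the scheme cheap to transmit back to the terminal compared with transmitting edited video.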
S105, the server 200 transmits the editing scheme to the terminal 100.
The editing scheme may be transmitted to the terminal 100 through a 5G network.
S106, the terminal 100 edits the second video by using the editing scheme to obtain an edited third video.
The terminal may edit the second video according to the editing scheme that the server 200 determined from the first video. In this way, the terminal 100 can edit using an editing scheme obtained through the processing capability of the server 200.
In the video processing method provided in fig. 2, when the terminal shoots a video, it collects low- and high-quality videos (i.e., the first video and the second video) simultaneously over the same time period. The server obtains an editing scheme from the first video and sends the editing scheme back to the terminal. The terminal can then edit the high-quality second video according to the editing scheme. In this way, the video can be analyzed by a server with strong computing power to obtain the editing scheme, and only the editing scheme is transmitted to the terminal, which reduces the time occupied by data transmission, improves video processing efficiency, provides a better shooting experience for the user, and improves the convenience of using the terminal.
Another video processing method provided in an embodiment of the present application is described below based on the system architecture shown in fig. 1. In this scenario the terminal 100 may establish a communication connection with the server 200 via a wireless communication technology. For example, the terminal 100 connects to a WiFi network or a cellular data network so that it can access the server 200. The terminal 100 may also access the server 200 through other devices. Referring to fig. 4, fig. 4 is a flowchart of another video processing method according to an embodiment of the present application. As shown in fig. 4, the video processing method may include:
S201, the terminal 100 receives a photographing user operation.
S202, in response to shooting user operation, the terminal 100 acquires a first video and a second video through a camera acquisition image.
The descriptions of steps S201 to S202 may refer to the descriptions of steps S101 to S102, and are not repeated here.
S203, the terminal 100 detects a network state with the server, and/or detects whether the size of the first video is greater than a preset threshold.
The terminal 100 may detect whether the network state with the server 200 is lower than a set threshold condition, for example whether the current network bandwidth between the terminal 100 and the server 200 is greater than a set threshold, or whether the network connection between the terminal 100 and the server 200 is a 5G network connection. If it is a 5G network connection, the network state with the server 200 is higher than the set threshold condition; if it is not, the network state with the server 200 is lower than the set threshold condition.
The terminal 100 may also detect whether the size of the first video is greater than a preset threshold, for example whether the size of the first video is greater than 100 MB. In other embodiments of the present application, the terminal 100 may instead detect whether the size of the second video is less than a set threshold, for example less than 200 MB.
In some embodiments of the present application, when the terminal 100 detects that any one or both of the above conditions are satisfied, it may perform the steps corresponding to case 1, that is, steps S204-S208, in which the terminal 100 sends the first video to the server 200. That is, the terminal 100 may perform case 1 when:
1. the size of the first video is larger than the preset threshold;
2. the network state with the server 200 is lower than the set threshold condition;
3. the size of the first video is larger than the preset threshold, or the network state with the server 200 is lower than the set threshold condition;
4. the size of the first video is larger than the preset threshold, and the network state with the server 200 is lower than the set threshold condition.
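The case-1/case-2 selection enumerated above can be sketched as a simple decision function. This is an illustration only, not the claimed implementation; the threshold value, the boolean network indicator, and the function name are assumptions:

```python
def choose_upload(first_video_mb, network_ok, size_threshold_mb=100):
    """Decide which video the terminal uploads, per the case-1/case-2 logic.

    network_ok: True when the network state is above the set threshold
    condition (e.g. a 5G connection). size_threshold_mb is illustrative.
    Returns "first" (case 1: upload the small, low-quality first video so the
    server returns only an editing scheme) or "second" (case 2: upload the
    high-quality second video for the server to analyze and edit directly).
    """
    if (not network_ok) or first_video_mb > size_threshold_mb:
        return "first"    # case 1: steps S204-S208
    return "second"       # case 2: steps S209-S211
```

For example, a 150 MB first video over a fast network still falls into case 1, while a 50 MB first video over a 5G connection falls into case 2.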
Case 1: S204-S208.
S204, when the network status is lower than the set threshold condition and/or the size of the first video is greater than the preset threshold, the terminal 100 sends the first video to the server 200.
S205, the server 200 analyzes and obtains an editing scheme for the first video.
S206, the server 200 transmits the editing scheme to the terminal 100.
S207, the terminal 100 edits the second video by using the editing scheme to obtain an edited third video.
The description of steps S205 to S207 may refer to S104 to S106.
S208, the terminal 100 sends the third video to the server 200 for the server 200 to store.
The server 200 may store the third video clipped by the terminal 100. Other terminals can obtain the third video from the server 200 through the network by logging in the same account.
Case 2:
S209, when the network state is higher than the set threshold condition, or the size of the first video is smaller than the preset threshold, the terminal 100 sends the second video to the server 200.
S210, the server 200 analyzes the second video to obtain an editing scheme, and edits the second video according to the editing scheme to obtain a fourth video.
S211, the server 200 transmits the fourth video to the terminal 100.
Step S209 is not limited to satisfying only one of the two conditions; the terminal 100 may also transmit the second video to the server 200 when the network state is higher than the set threshold condition and the size of the first video is smaller than the preset threshold. In other embodiments, the terminal 100 transmits the second video to the server 200 when the network state is above the set threshold condition and/or the size of the second video is less than a preset threshold.
In this embodiment of the present application, when the collected high-quality video (the second video) is small, that is, smaller than a preset threshold, and/or the network is fast, the terminal 100 may send the high-quality video directly to the server for processing. When the collected high-quality video is large, that is, larger than the preset threshold, and/or the network is slow, the terminal 100 sends the smaller first video to the server for processing, obtains the editing scheme, and edits the second video according to the editing scheme. In this way, regardless of network speed and video size, the terminal can edit videos using the processing capability of the server, complete video processing quickly, improve video processing efficiency, and provide a better shooting experience for the user, thereby improving the convenience of using the terminal.
In S210, the server 200 may analyze the second video to obtain the editing scheme, for example by identifying through image recognition the parts likely to be of most interest to viewers; see the description of step S104. The specific contents of the editing scheme may also refer to step S104. The server 200 may edit the second video according to the editing scheme with reference to the descriptions of steps S104 and S106, which are not repeated here.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a terminal according to an embodiment of the present application. As shown in fig. 5, the terminal 100 may include at least:
at least one processor 501, at least one network interface 504, a user interface 503, a memory 505, and at least one communication bus 502.
Wherein a communication bus 502 is used to enable connected communications between these components.
The user interface 503 may include a Display screen (Display) and a Camera (Camera), and the optional user interface 503 may further include a standard wired interface and a standard wireless interface.
The network interface 504 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
Wherein the processor 501 may include one or more processing cores. The processor 501 connects the various parts within the terminal 100 using various interfaces and lines, and performs the various functions of the terminal 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 505 and invoking data stored in the memory 505.
Alternatively, the processor 501 may be implemented in hardware in at least one of a digital signal processor (Digital Signal Processing, DSP), a field-programmable gate array (Field-Programmable Gate Array, FPGA), and a programmable logic array (Programmable Logic Array, PLA). The processor 501 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processor (Graphics Processing Unit, GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is used for rendering and drawing the content to be displayed on the display screen; the modem is used to handle wireless communications. It will be appreciated that the modem may also not be integrated into the processor 501 and may instead be implemented by a separate chip.
The memory 505 may include a random access memory (Random Access Memory, RAM) or a read-only memory (Read-Only Memory, ROM). Optionally, the memory 505 comprises a non-transitory computer-readable storage medium. The memory 505 may be used to store instructions, programs, code sets, or instruction sets. The memory 505 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the above-described method embodiments, etc.; the data storage area may store the data referred to in the above method embodiments. The memory 505 may optionally also be at least one storage device located remotely from the processor 501. As shown in fig. 5, the memory 505, as a computer storage medium, may include an operating system, a network communication module, and a user interface module.
In the terminal 100 shown in fig. 5, the user interface 503 is mainly used for providing an input interface for a user, and acquiring data input by the user; and the processor 501 may be used to invoke the application programs stored in the memory 505 and to specifically perform program operations.
In some embodiments of the present application, the one or more processors, when executing the computer instructions, cause the terminal to perform the following:
acquiring images through a camera to obtain a first video and a second video, wherein the first video and the second video are aimed at the same shooting object, and the quality of the first video is lower than that of the second video and the size of the first video is smaller than that of the second video;
transmitting the first video to a server through a network interface 504 to cause the server to analyze and obtain an editing scheme for the first video;
receiving an editing scheme from the server;
and editing the second video by using the editing scheme to obtain an edited third video.
Here, the terminal may be the terminal 100 or the terminal 100 in the example shown in fig. 2 or fig. 4.
In the embodiment of the application, when the terminal shoots the video, the terminal can simultaneously acquire the high-quality video and the low-quality video (namely the first video and the second video) in the same time period. And the server obtains an editing scheme according to the first video, and sends the editing scheme back to the terminal. The terminal can edit the second video with high quality according to the editing scheme. Therefore, the video can be analyzed by the server with strong calculation power to obtain the editing scheme, and the editing scheme is only transmitted to the terminal, so that the time occupied by data transmission can be reduced, the video processing efficiency can be improved, better shooting experience is provided for the user, and the convenience of the user in using the terminal is improved.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a server according to an embodiment of the present application, and the server 200 may be the server 200 shown in fig. 1, fig. 2, and fig. 4. The server shown in fig. 6 includes: one or more processors 601, one or more input devices 602, one or more output devices 603, and a memory 604. The processor 601, input device 602, output device 603, and memory 604 are connected by a bus 605. The memory 604 is used for storing instructions, and the processor 601 is used for executing the instructions stored in the memory 604.
Where the device is used as a server, the one or more processors 601, when executing the application program stored in the memory 604, cause the server to perform the video processing method shown in fig. 2 or 4.
According to the video processing method, the terminal and the video processing system, when the terminal shoots videos, the terminal can collect high-quality videos and low-quality videos (namely, a first video and a second video) at the same time period. And the server obtains an editing scheme according to the first video, and sends the editing scheme back to the terminal. The terminal can edit the second video with high quality according to the editing scheme. Therefore, the video can be analyzed by the server with strong calculation power to obtain the editing scheme, and the editing scheme is only transmitted to the terminal, so that the time occupied by data transmission can be reduced, the video processing efficiency can be improved, better shooting experience is provided for the user, and the convenience of the user in using the terminal is improved.
It will be appreciated that the above examples of specific implementations of the video processing method, terminal and video processing system are only for explaining the embodiments of the present application and should not be construed as limiting. Other implementations may also be employed.
Embodiments of the present application also provide a computer-readable storage medium having instructions stored therein, which when executed on a computer or processor, cause the computer or processor to perform one or more steps performed by the terminal in the embodiments shown in fig. 2 or fig. 4 described above. The respective constituent modules of the above terminal may be stored in the computer-readable storage medium if implemented in the form of software functional units and sold or used as independent products.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, produces a flow or function in accordance with embodiments of the present application, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in or transmitted across a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital subscriber line (Digital Subscriber Line, DSL)), or wireless (e.g., infrared, wireless, microwave, etc.). The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., a floppy Disk, a hard Disk, a magnetic tape), an optical medium (e.g., a digital versatile Disk (Digital Versatile Disc, DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), or the like.
Those skilled in the art will appreciate that implementing all or part of the above-described embodiment methods may be accomplished by way of a computer program, which may be stored in a computer-readable storage medium, instructing relevant hardware, and which, when executed, may comprise the embodiment methods as described above. And the aforementioned storage medium includes: a Read Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, or the like. The technical features in the present examples and embodiments may be arbitrarily combined without conflict.
The above-described embodiments are merely illustrative of the preferred embodiments of the present application and are not intended to limit the scope of the present application, and various modifications and improvements made by those skilled in the art to the technical solutions of the present application should fall within the protection scope defined by the claims of the present application without departing from the design spirit of the present application.
The video processing method, the terminal and the video processing system disclosed in the embodiments of the present invention are described in detail, and specific examples are applied to illustrate the principles and the implementation of the present invention, and the description of the above embodiments is only used to help understand the method and the core idea of the present invention; meanwhile, as those skilled in the art will have variations in the specific embodiments and application scope in accordance with the ideas of the present invention, the present description should not be construed as limiting the present invention in view of the above.

Claims (13)

1. A method of video processing, the method comprising:
acquiring images through a camera to obtain a first video and a second video, wherein the first video and the second video are aimed at the same shooting object, and the quality of the first video is lower than that of the second video and the size of the first video is smaller than that of the second video;
transmitting the first video to a server, so that the server analyzes the first video and obtains an editing scheme;
receiving an editing scheme from the server;
editing the second video by using the editing scheme to obtain an edited third video;
before the first video is sent to the server, the method further comprises:
detecting a network state with the server, and/or detecting whether the size of the first video is larger than a preset threshold;
the sending the first video to a server includes:
when the network state is lower than a set threshold condition and/or the size of the first video is larger than the preset threshold, sending the first video to a server;
after detecting the network state with the server, the method further comprises:
when the network state is higher than the set threshold condition or the size of the first video is smaller than the preset threshold, sending the second video to the server so that the server analyzes the second video to obtain an editing scheme, and editing the second video according to the editing scheme to obtain a fourth video;
receiving the fourth video from the server;
the server may also determine the editing scheme according to the content quality of the first video and the rendered visual effect.
2. The method of claim 1, wherein the editing the second video using the editing scheme, after obtaining an edited third video, further comprises:
and sending the third video to the server for storage by the server.
3. The method of claim 1, wherein the editing scheme comprises any one or more of the following: clip range, splicing parameters, scaling parameters, image adjustment parameters, intelligent picture adjustment parameters, special effect production parameters, and score parameters.
4. A video processing system, the system comprising a terminal and a server, wherein:
the terminal is used for acquiring images through a camera to obtain a first video and a second video, wherein the first video and the second video are aimed at the same shooting object, and the quality of the first video is lower than that of the second video and the size of the first video is smaller than that of the second video;
the terminal is further used for sending the first video to a server;
before the first video is sent to the server, the method further comprises:
detecting a network state with the server, and/or detecting whether the size of the first video is larger than a preset threshold;
the sending the first video to a server includes:
when the network state is lower than a set threshold condition and/or the size of the first video is larger than the preset threshold, sending the first video to a server;
after detecting the network state with the server, the method further comprises:
when the network state is higher than the set threshold condition or the size of the first video is smaller than the preset threshold, sending the second video to the server so that the server analyzes the second video to obtain an editing scheme, and editing the second video according to the editing scheme to obtain a fourth video;
receiving the fourth video from the server;
the server is used for analyzing the first video and obtaining an editing scheme, and the server can also determine the editing scheme according to the content quality and the rendering visual effect of the first video;
the terminal is further used for receiving an editing scheme from the server;
the terminal is further configured to edit the second video by using the editing scheme, so as to obtain an edited third video.
5. The video processing system of claim 4, wherein the terminal is further configured to send the third video to the server;
the server is further configured to store the third video.
6. The video processing system of claim 4, wherein the editing scheme comprises any one or more of the following: clip range, splicing parameters, scaling parameters, image adjustment parameters, intelligent picture adjustment parameters, special effect production parameters, and score parameters.
7. The video processing system according to any of claims 4-6, wherein the terminal is further configured to:
detecting a network state with the server, and/or detecting whether the size of the first video is larger than a preset threshold;
and when the network state is lower than a set threshold condition and/or the size of the first video is larger than the preset threshold value, sending the first video to a server.
8. The video processing system of claim 7, wherein the terminal is further configured to: when the network state is higher than the set threshold condition or the size of the first video is smaller than the preset threshold, sending the second video to the server;
the server is further configured to analyze the second video to obtain an editing scheme, and edit the second video according to the editing scheme to obtain a fourth video;
the terminal is further configured to receive the fourth video from the server.
9. A terminal, the terminal comprising: one or more processors, memory, cameras;
the memory is coupled to the one or more processors, the memory for storing computer program code, the computer program code comprising computer instructions;
the computer instructions, when executed by the one or more processors, cause the terminal to:
acquiring images through the camera to obtain a first video and a second video, wherein the first video and the second video are aimed at the same shooting object, and the quality of the first video is lower than that of the second video and the size of the first video is smaller than that of the second video;
transmitting the first video to a server, so that the server analyzes the first video and obtains an editing scheme;
before the first video is sent to the server, the method further comprises:
detecting a network state with the server, and/or detecting whether the size of the first video is larger than a preset threshold;
the sending the first video to a server includes:
when the network state is lower than a set threshold condition and/or the size of the first video is larger than the preset threshold, sending the first video to a server;
after detecting the network state with the server, the method further comprises:
when the network state is higher than the set threshold condition or the size of the first video is smaller than the preset threshold, sending the second video to the server so that the server analyzes the second video to obtain an editing scheme, and editing the second video according to the editing scheme to obtain a fourth video;
receiving the fourth video from the server;
receiving an editing scheme from the server;
the server can also determine the editing scheme according to the content quality and the rendering visual effect of the first video;
and editing the second video by using the editing scheme to obtain an edited third video.
10. The terminal of claim 9, wherein the processor is further configured to invoke the computer instructions to cause the terminal to:
send the third video to the server for storage by the server.
11. The terminal of claim 9, wherein the editing scheme comprises any one or more of the following: clip range, splicing parameters, scaling parameters, image adjustment parameters, intelligent picture adjustment parameters, special effect production parameters, and score parameters.
12. The terminal according to any of the claims 9-11, wherein the processor is further configured to invoke the computer instructions to cause the terminal to:
detecting a network state with the server, and/or detecting whether the size of the first video is larger than a preset threshold;
the sending the first video to a server includes:
and when the network state is lower than a set threshold condition and/or the size of the first video is larger than the preset threshold value, sending the first video to a server.
13. The terminal of claim 12, wherein the processor is further configured to invoke the computer instructions to cause the terminal to:
when the network state is higher than the set threshold condition or the size of the first video is smaller than the preset threshold, sending the second video to the server so that the server analyzes the second video to obtain an editing scheme, and editing the second video according to the editing scheme to obtain a fourth video;
and receiving the fourth video from the server.
CN202211136032.5A 2022-09-19 2022-09-19 Video processing method, terminal and video processing system Active CN115515008B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211136032.5A CN115515008B (en) 2022-09-19 2022-09-19 Video processing method, terminal and video processing system


Publications (2)

Publication Number Publication Date
CN115515008A CN115515008A (en) 2022-12-23
CN115515008B true CN115515008B (en) 2024-02-27

Family

ID=84503489

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211136032.5A Active CN115515008B (en) 2022-09-19 2022-09-19 Video processing method, terminal and video processing system

Country Status (1)

Country Link
CN (1) CN115515008B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107111620A (en) * 2014-10-10 2017-08-29 Samsung Electronics Co., Ltd. Video editing using contextual data and content discovery using clusters
CN108900790A (en) * 2018-06-26 2018-11-27 Nubia Technology Co., Ltd. Method of video image processing, mobile terminal and computer readable storage medium
CN112261416A (en) * 2020-10-20 2021-01-22 Guangzhou Boguan Information Technology Co., Ltd. Cloud-based video processing method and device, storage medium and electronic equipment
CN112672170A (en) * 2020-06-18 2021-04-16 Ti'ao Power (Beijing) Sports Communication Co., Ltd. Event video centralization method and system
WO2021237619A1 (en) * 2020-05-28 2021-12-02 SZ DJI Technology Co., Ltd. Video file editing method, and device, system and computer-readable storage medium
CN114095755A (en) * 2021-11-19 2022-02-25 Shanghai Zhongyuan Network Co., Ltd. Video processing method, device and system, electronic equipment and storage medium
WO2022133782A1 (en) * 2020-12-23 2022-06-30 SZ DJI Technology Co., Ltd. Video transmission method and system, video processing method and device, playing terminal, and movable platform

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8768142B1 (en) * 2012-01-26 2014-07-01 Ambarella, Inc. Video editing with connected high-resolution video camera and video cloud server
US20150281710A1 (en) * 2014-03-31 2015-10-01 Gopro, Inc. Distributed video processing in a cloud environment
CN112437342B (en) * 2020-05-14 2022-09-23 上海哔哩哔哩科技有限公司 Video editing method and device


Also Published As

Publication number Publication date
CN115515008A (en) 2022-12-23

Similar Documents

Publication Publication Date Title
KR101768980B1 (en) Virtual video call method and terminal
US20220377259A1 (en) Video processing method and apparatus, electronic device, and non-transitory computer readable storage medium
WO2020078026A1 (en) Image processing method and apparatus, and device
US11587317B2 (en) Video processing method and terminal device
CN110706310B (en) Image-text fusion method and device and electronic equipment
WO2020192692A1 (en) Image processing method and related apparatus
CN114331820A (en) Image processing method, image processing device, electronic equipment and storage medium
US11928152B2 (en) Search result display method, readable medium, and terminal device
CN113810596B (en) Time-delay shooting method and device
US10749923B2 (en) Contextual video content adaptation based on target device
US20160012851A1 (en) Image processing device, image processing method, and program
KR102228457B1 (en) Methed and system for synchronizing usage information between device and server
CN113012082A (en) Image display method, apparatus, device and medium
WO2023160295A1 (en) Video processing method and apparatus
US9325776B2 (en) Mixed media communication
CN114979785B (en) Video processing method, electronic device and storage medium
CN113747240A (en) Video processing method, apparatus, storage medium, and program product
CN109167939B (en) Automatic text collocation method and device and computer storage medium
JP2023538825A (en) Methods, devices, equipment and storage media for picture to video conversion
WO2023241377A1 (en) Video data processing method and device, equipment, system, and storage medium
CN115515008B (en) Video processing method, terminal and video processing system
WO2023182937A2 (en) Special effect video determination method and apparatus, electronic device and storage medium
WO2022160965A1 (en) Video processing method, and electronic device
CN113873319A (en) Video processing method and device, electronic equipment and storage medium
KR20120103363A (en) Virtual hair styling service system and method, and device supporting the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant