CN111683280A - Video processing method and device and electronic equipment - Google Patents


Info

Publication number
CN111683280A
CN111683280A
Authority
CN
China
Prior art keywords
video
performance
video effect
effect element
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010499112.1A
Other languages
Chinese (zh)
Inventor
黄归
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010499112.1A priority Critical patent/CN111683280A/en
Publication of CN111683280A publication Critical patent/CN111683280A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/443OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a video processing method and device and electronic equipment, relating to the field of computer technology. The method includes: displaying a first video effect element on a video processing interface of a target device, the first video effect element being a video effect element matching the performance of the target device; in response to an effect selection operation on a video to be processed, determining a second video effect element corresponding to the effect selection operation from the displayed first video effect elements; and processing the video to be processed according to the second video effect element to obtain a target video having the video effect corresponding to the second video effect element. Because the video effect elements displayed by the target device match its performance, the method can alleviate stuttering during processing and unsmooth playback of the processed video when the target device processes video according to the video effect elements.

Description

Video processing method and device and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a video processing method and apparatus, and an electronic device.
Background
With the rapid development of multimedia technology and the Internet, video sharing applications are increasingly integrated into people's lives. A video publisher may process (e.g., edit or clip) a recorded or stored video through a video sharing application and upload the processed video to a server, so that other users can download and play it through the application.
In the related art, to enrich video content, a video sharing application may obtain a large number of effect elements from a server, such as beauty-filter information, special-effect information, sticker information, and video templates, so that a user can choose to process a video with the corresponding effect elements. However, when a video is processed according to some effect elements, problems often occur such as the processing stuttering or the processed video playing back unsmoothly.
Disclosure of Invention
The application provides a video processing method, a video processing device, and electronic equipment, which can alleviate the problems described above.
In one aspect, an embodiment of the present application provides a video processing method, including: displaying a first video effect element on a video processing interface of a target device, where the first video effect element is a video effect element matching the performance of the target device; in response to an effect selection operation on a video to be processed, determining a second video effect element corresponding to the effect selection operation from the displayed first video effect elements; and processing the video to be processed according to the second video effect element to obtain a target video having the video effect corresponding to the second video effect element.
In another aspect, an embodiment of the present application provides a video processing apparatus, including a display module, an effect selection module, and a video processing module. The display module is used for displaying a first video effect element on a video processing interface of the target device, where the first video effect element is a video effect element matching the performance of the target device. The effect selection module is used for responding to an effect selection operation on the video to be processed and determining a second video effect element corresponding to the effect selection operation from the displayed first video effect elements. The video processing module is used for processing the video to be processed according to the second video effect element to obtain a target video having the video effect corresponding to the second video effect element.
In another aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a memory; one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs operable to perform the methods described above.
In another aspect, an embodiment of the present application provides a computer-readable storage medium, on which program code is stored, and the program code can be called by a processor to execute the method described above.
According to the above scheme, a first video effect element matching the performance of the target device is displayed on the video processing interface of the target device; in response to an effect selection operation on a video to be processed, a second video effect element corresponding to the effect selection operation is determined from the displayed first video effect elements; and the video to be processed is processed according to the second video effect element to obtain a target video having the corresponding video effect, which the user can then share. In this way, the selectable video effect elements the target device presents on the video processing interface match its own performance, and so does the second video effect element the user selects for the video to be processed. This alleviates stuttering while the target device processes the video according to the selected video effect element, and makes the processed video play back more smoothly on the device.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings required for the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 shows a schematic diagram of an application environment suitable for the embodiment of the present application.
Fig. 2 is a schematic flowchart illustrating a video processing method according to an embodiment of the present application.
Fig. 3A is a schematic diagram illustrating a display interface of a client according to an embodiment of the present application.
Fig. 3B is a schematic diagram illustrating a video processing interface of a client according to an embodiment of the present application.
Fig. 3C is a schematic diagram illustrating another video processing interface of the client according to the embodiment of the present application.
Fig. 3D is a schematic diagram illustrating a further video processing interface of the client according to an embodiment of the present application in one scenario.
Fig. 3E shows a schematic view of the video processing interface of fig. 3D in another scenario.
FIG. 3F shows a schematic view of the video processing interface of FIG. 3D in yet another scenario.
FIG. 3G shows a schematic view of the video processing interface of FIG. 3D in yet another scenario.
Fig. 4 shows another flow diagram of the video processing method in the embodiment shown in fig. 2.
Fig. 5 is a flowchart illustrating another video processing method according to an embodiment of the present application.
Fig. 6 shows a schematic diagram of the substeps of step S204 shown in fig. 4.
Fig. 7 shows another flow diagram of the video processing method in the embodiment shown in fig. 2.
Fig. 8 shows an architecture diagram of a test component according to an embodiment of the present application.
Fig. 9 shows a sub-step diagram of step S702 shown in fig. 7.
Fig. 10 shows another sub-step diagram of step S702 shown in fig. 7.
Fig. 11 shows a schematic diagram of a further sub-step of step S702 shown in fig. 7.
Fig. 12 is a schematic flow chart of a video processing method provided by the embodiment shown in fig. 2.
Fig. 13 is a flowchart illustrating a video processing method in an application scenario according to an embodiment of the present application.
Fig. 14 shows a block diagram of a video processing apparatus according to an embodiment of the present application.
Fig. 15 is a block diagram of an electronic device according to an embodiment of the present application for executing a video processing method according to an embodiment of the present application.
Fig. 16 is a storage unit for storing or carrying program codes for implementing a video processing method according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
In practical applications, when a video sharing application is started or enters a video processing interface (such as a video upload, video editing, or video clipping interface), it can obtain available video effect elements from a server. A video effect element, which may also be called an effect component, is a video editing element for implementing a specific effect, such as facial-filter information, sticker information, text information, special-effect information, or a video template, which can be configured in a video or merged with a video to achieve a specific video effect.
In some scenarios, the video effect elements that video sharing applications running on different devices obtain from the server are undifferentiated, while different video effect elements place different demands on device performance. When a user chooses to process a video with a video effect element whose performance requirement exceeds the performance of the device, the processing stutters and the resulting video with the corresponding video effect runs poorly, which degrades the user experience and reduces user stickiness.
Through long-term research, the inventor proposes a video processing method, a video processing device, and electronic equipment that deliver or display video effect elements in tiers based on differences in device performance, thereby alleviating problems such as the video processing stuttering and the processed video running unsmoothly. This is explained in detail below.
Referring to fig. 1, fig. 1 is a schematic diagram of an application environment suitable for the embodiment of the present application. The server 100 may be communicatively connected to the terminal device 200 through a network. The terminal device 200 runs a client 210, which may be a video sharing application — either a dedicated video sharing application (e.g., a short-video application) or an application with a video sharing function, such as a social platform, content interaction platform, or education platform with video sharing. This embodiment does not limit this. Through the client 210, the terminal device 200 may access a service provided by the server 100, for example, a video sharing service.
In this embodiment, the server 100 may be an independent server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud computing, big data, and artificial intelligence platforms. The terminal device 200 may be, but is not limited to, a smartphone, a tablet computer, a personal computer (PC), a smart TV, or a portable wearable device.
The video processing method provided by the embodiment of the application can be applied to an electronic device, where the electronic device can be the server 100 or the terminal device 200 shown in fig. 1.
Referring to fig. 2, fig. 2 is a flowchart illustrating a video processing method according to an embodiment of the present application, where the method can be applied to an electronic device, and the present embodiment takes the electronic device as a terminal device 200 as an example, and describes steps of the method.
S201, displaying a first video effect element on a video processing interface of the target device, wherein the first video effect element is a video effect element corresponding to the performance of the target device.
In this embodiment, the target device may be a terminal device running a video sharing application, where the video sharing application refers to an application having a video sharing function, such as the social platform, the content interaction platform, the education platform, and the like having the video sharing function, and as another example, a short video application. Taking the example that the client 210 shown in fig. 1 is a video sharing application, the terminal device 200 running the client 210 shown in fig. 1 may be regarded as a target device.
The client 210 may have a video processing interface that can be used to display video effect elements for selection by a user.
The video processing interface may be an interface on which the user performs video-related processing or operations, for example, an interface for obtaining a desired video (e.g., a video recording interface or a video selection interface), or a post-processing interface for the user to post-process a recorded or selected video. Post-processing may be, for example, adding a video effect to the video according to video effect elements.
Based on this, in the implementation process, when the client 210 enters the video processing interface, or when a trigger operation for entering the video processing interface is detected, the terminal device 200 may obtain, from the video effect elements provided by the server 100 for display on the video processing interface, the video effect elements matching the device's performance; these are the first video effect elements.
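The filtering step just described can be sketched as follows. This is an illustrative Python sketch only, not the patent's implementation; the integer performance levels and the `min_performance` field are assumptions invented for the example.

```python
from dataclasses import dataclass

@dataclass
class VideoEffectElement:
    name: str
    element_type: str      # e.g. "template", "sticker", "special_effect"
    min_performance: int   # hypothetical: lowest device performance level required

def first_video_effect_elements(elements, device_performance):
    """Keep only the elements whose performance requirement the device meets."""
    return [e for e in elements if device_performance >= e.min_performance]

available = [
    VideoEffectElement("simple filter", "filter", 1),
    VideoEffectElement("3D particle effect", "special_effect", 3),
]
# A mid-range device (level 2) only sees the lightweight element.
print([e.name for e in first_video_effect_elements(available, 2)])
```

A higher-end device (level 3 or above under these assumptions) would additionally see the particle effect; the displayed set thus grows with device performance rather than being undifferentiated.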
In this embodiment, different video processing interfaces of the client 210 display different types of video effect elements, and in one example, the video processing interfaces may be used to recommend a certain type of video effect element.
For example, the video processing interface 301 shown in fig. 3A is an interface for the user to obtain a video. A video template recommendation area 301-1 is displayed on the interface and may be used to recommend video templates; at least one video template may be displayed in it, and the user may trigger the client 210 to update the displayed templates by performing a sliding operation in the area. Each video template displayed in the video template recommendation area 301-1 can be regarded as a first video effect element.
In detail, when the client 210 enters the interface 301 or detects a trigger operation for entering the interface 301, the terminal device 200 may determine that video templates meeting a recommendation condition (e.g., popularity reaching a threshold) need to be displayed in the video template recommendation area 301-1 of the interface 301. From the templates meeting that condition, it may then determine the templates matching the device performance of the terminal device 200 as templates to be recommended; these can be regarded as first video effect elements in S201. Further, the terminal device 200 may display at least one template to be recommended in the area 301-1, for example only one at a time, updating the displayed template when a sliding operation is detected in the area 301-1.
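The two-stage selection in this paragraph — recommendation condition first, device performance second — can be sketched as below. The `heat` scores, the threshold value, and the `min_performance` field are illustrative assumptions, not values from the patent.

```python
def templates_to_recommend(templates, device_performance, heat_threshold):
    """Templates meeting the recommendation condition (popularity/heat reaching
    a threshold) that also match the device's performance."""
    popular = [t for t in templates if t["heat"] >= heat_threshold]
    return [t for t in popular if device_performance >= t["min_performance"]]

templates = [
    {"name": "A", "heat": 90, "min_performance": 1},
    {"name": "B", "heat": 95, "min_performance": 3},
    {"name": "C", "heat": 10, "min_performance": 1},
]
# On a level-2 device, "B" is popular but too demanding; "C" is not popular enough.
print([t["name"] for t in templates_to_recommend(templates, 2, 50)])
```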
It will be appreciated that the triggering operation described above for entry into interface 301 may be detected on interfaces other than interface 301. Such as may be detected on the display interface 302 shown in fig. 3B.
The display interface 302 may be, for example, an interface displayed after the client 210 is started. An option tab L3 is provided on the display interface 302, and when the option tab L3 is operated (e.g., clicked, pressed, etc.), the client 210 may enter a video processing interface for the user to obtain video, such as the video processing interface 301 shown in fig. 3A.
In another example, the video processing interface may be a selection interface for a certain type of video effect element, and correspondingly, the video processing interface may display a plurality of video effect elements of the certain type. Taking the video processing interface 303 shown in fig. 3C as an example, the video processing interface 303 is a video template selecting interface, which can be used to display a plurality of video templates provided by the server 100. In this case, the video templates displayed by the interface 303 may be regarded as the first video effect elements.
In detail, when entering the interface 303 or detecting a trigger operation for entering the interface 303, the terminal device 200 may determine that all video templates provided by the server 100 need to be displayed on the interface 303, and may obtain from them the video templates matching the performance of the terminal device 200 as the video templates to be displayed; these can be regarded as first video effect elements. The terminal device 200 may then display on the interface 303 an element identifier of each video template to be displayed, where the element identifier may include an icon of the video template (e.g., a template cover image). For example, the video templates corresponding to the cover images displayed in the regions 303-1, 303-2, 303-3, and 303-4 shown in fig. 3C are all first video effect elements.
It should be noted that, because the display screen of the terminal device 200 is limited in size, only some of the video templates to be displayed (i.e., the first video effect elements) may be shown on the interface 303 at a time; when a sliding operation or another operation for updating the displayed information is detected on the interface 303, the video templates displayed on the interface 303 are updated.
It is further worth noting that the trigger operation for entering the interface 303 may be detected on an interface other than the interface 303, such as the interface 301 shown in fig. 3A.
Referring again to fig. 3A, a selection entry for video templates, such as an "intelligent template" entry, may also be displayed on the interface 301. When a selection operation (e.g., a click, long press, or double click) on this entry is detected on the interface 301, the selection operation may be determined as a trigger operation for entering the interface 303, so that the video templates to be displayed are obtained and the client 210 is controlled to enter the interface 303 shown in fig. 3C. After the user selects a certain video template on the interface 303, video recording or selection of a local video may be performed.
In addition, video recording or local-video selection may also be triggered through an entry on the interface 301 without selecting a video template. For example, a video recording entry, such as an option tab L1 labeled "capture", may be displayed on the interface 301; when the user operates the option tab L1 on the video processing interface 301, the client 210 may record video or capture images through an image capture device (e.g., a camera) of the terminal device 200.
As another example, the video processing interface 301 may also display an entry for video selection, for example an option tab L2 labeled "local upload". When the user operates the option tab L2 on the video processing interface 301, the client 210 can select a video file from the video files stored on the terminal device 200.
When video recording or video selection is completed and the recorded or selected video is obtained, the client 210 may enter a video processing interface for the user to post-process the video, such as the video processing interface 304 shown in fig. 3D. The video processing interface 304 displays a preview picture 304-1 of the recorded or selected video and an effect selection area 304-2, which may display type identifiers of at least two types of video effect elements; a type identifier may be the type name of the video effect elements, for example the video template I-1, music I-2, text information I-3, sticker I-4, and special effect I-5 displayed on the interface 304.
Further, when the user selects a type identifier of a certain type of video effect element in the interface 304, the effect selection area 304-2 may display various video effect elements of that type. The video effect element displayed in the effect selection area 304-2 can be regarded as the first video effect element.
For example, fig. 3E shows the video processing interface 304 after the user clicks the video template I-1: the effect selection area 304-2 displays a plurality of video templates (e.g., T-1) available for selection, and the displayed templates can be updated by a sliding operation. Each video template displayed in the effect selection area 304-2 can be regarded as a first video effect element.
Referring again to fig. 3F, which shows the video processing interface 304 after the user clicks music I-2: the effect selection area 304-2 displays a plurality of pieces of music information (e.g., T-2) available for selection, and more music information can be displayed by a sliding operation or by opening the music library. The music information displayed in the effect selection area 304-2 can be regarded as first video effect elements.
Referring again to fig. 3G, which shows the video processing interface 304 after the user clicks the special effect I-5: the effect selection area 304-2 displays a plurality of pieces of special effect information (e.g., T-3) available for selection. Similarly, the special effect information displayed in the effect selection area 304-2 may be updated by a sliding operation or the like.
In detail, upon entering the video processing interface 304 or detecting a trigger operation for entering the video processing interface 304, the terminal device 200 may determine that the video processing interface 304 needs to display type identifiers of various types of video effect elements, and may determine, from all the video effect elements of each type provided by the server 100, the video effect elements matching the performance of the terminal device 200.
When a trigger operation on a particular type identifier is detected in the effect selection area 304-2, at least one video effect element of the type indicated by that identifier (i.e., a first video effect element) may be displayed in the effect selection area 304-2. It is worth noting that if no video effect element of a certain type matches the performance of the terminal device 200, the type identifier of that type may not be displayed on the video processing interface 304 at all.
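The per-type behaviour described here — showing a type identifier only when at least one element of that type matches the device — can be sketched as below. The dictionary-based element representation and `min_performance` field are assumptions for illustration.

```python
def type_identifiers_to_display(elements, device_performance):
    """Group compliant elements by type; a type identifier is displayed only
    if at least one element of that type matches the device performance."""
    by_type = {}
    for e in elements:
        if device_performance >= e["min_performance"]:
            by_type.setdefault(e["type"], []).append(e["name"])
    return by_type

elements = [
    {"name": "T-1", "type": "video template", "min_performance": 1},
    {"name": "T-3", "type": "special effect", "min_performance": 4},
]
# On a level-2 device, the "special effect" identifier is not shown at all,
# because its only element exceeds the device's performance.
print(type_identifiers_to_display(elements, 2))
```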
It should be noted that the interfaces shown in fig. 3A to 3G are only examples and are not intended to limit the present application.
S202, responding to the effect selection operation aiming at the video to be processed, and determining a second video effect element corresponding to the effect selection operation from the displayed first video effect elements.
Among the first video effect elements displayed on the video processing interface, the one corresponding to the effect selection operation is the second video effect element. The video to be processed may be the video that currently needs post-processing, for example the video displayed in the preview picture 304-1 in fig. 3D. An effect selection operation on the video to be processed can be understood as selecting a video effect element for the video to be processed. For example, on the interface 304, a selection operation on any content displayed in the effect selection area 304-2 can be regarded as an effect selection operation on the video to be processed in the preview picture 304-1.
The execution of S202 is described by taking the video template selection interface 303 shown in fig. 3C as an example. If the user clicks the icon of the video template 303-1 on the interface 303, the terminal device 200 may determine the video template 303-1 as the second video effect element and, in step S203, post-process the video to be processed according to the video template 303-1. The video to be processed here may be, for example, the video displayed in the preview picture 304-1 of the interface 304 before jumping to the interface 303, or a video the user re-records or selects after choosing the video template 303-1.
Taking the interface 304 shown in fig. 3E as an example, the implementation process of S202 may be: if an effect selection operation for the video template T-1 is detected in the effect selection area 304-2 of the interface 304, the video template T-1 may be determined as the second video effect element, and the video to be processed is post-processed according to the video template T-1 through S203. The video to be processed here may be the video currently displayed on the preview screen 304-1.
Taking the interface 304 shown in fig. 3F as an example, the implementation process of S202 may be: if an effect selection operation for the music information T-2 is detected in the effect selection area 304-2 of the interface 304, the music information T-2 may be determined as a second video effect element, and in S203, the video to be processed displayed in the preview screen 304-1 is processed according to the music information T-2, such as merging the music information T-2 with the video to be processed.
S203: process the video to be processed according to the second video effect element to obtain a target video having the video effect corresponding to the second video effect element.
The target video is the video to be processed with the video effect corresponding to the second video effect element added. In one example, when the second video effect element is a video template, the video content of the video to be processed can be filled into the video template to obtain the target video. In another example, when the second video effect element is dynamic sticker information, the dynamic sticker information may be loaded at a specific position of specific video frames of the video to be processed according to the configuration of the dynamic sticker information, thereby obtaining the target video.
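The dynamic sticker case can be sketched as follows. This is a minimal illustration, not the patent's implementation; the configuration fields (frame range, position) and all names are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class StickerConfig:
    """Hypothetical configuration of a dynamic sticker (second video effect element)."""
    start_frame: int  # first video frame the sticker appears on
    end_frame: int    # last video frame the sticker appears on
    x: int            # horizontal position within the frame
    y: int            # vertical position within the frame

@dataclass
class Frame:
    index: int
    overlays: list = field(default_factory=list)  # stickers attached to this frame

def apply_sticker(frames, sticker_id, cfg):
    """Load the sticker onto the configured frames at the configured position,
    yielding the target video's frames."""
    for f in frames:
        if cfg.start_frame <= f.index <= cfg.end_frame:
            f.overlays.append((sticker_id, cfg.x, cfg.y))
    return frames

frames = [Frame(i) for i in range(5)]
target = apply_sticker(frames, "sticker-42", StickerConfig(1, 3, 100, 200))
# only frames 1..3 carry the sticker overlay
```

A real renderer would composite the sticker pixels during the rendering and on-screen stage; the sketch only records which frames the sticker is bound to.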
With this design, the performance requirement of every first video effect element displayed on the video processing interface matches the device performance of the terminal device; in other words, the candidate first video effect elements that the terminal device presents to the user are all within the device's capability. The performance requirement of the second video effect element that the user selects from them therefore also matches the device performance, which mitigates both stuttering of the terminal device while it adds the video effect to the video to be processed and choppy playback of the processed video on the device.
Referring to fig. 2 and 4 together, the steps shown in fig. 2 will be described in further detail. Optionally, before executing S201, the video processing method provided in this embodiment may further include the steps shown in fig. 4.
S204: determine, from at least two performance gears, a target gear to which the performance of the target device belongs.
The terminal device 200 may obtain a device identifier of the present device when the client 210 (e.g., a video sharing application) enters a video processing interface or detects a trigger operation for triggering the client 210 to enter the video processing interface, so as to obtain the video effect element based on the device identifier. Illustratively, the terminal device 200 may acquire the device identification of the terminal device 200 upon detecting an operation for triggering entry into a video processing interface (e.g., any one of the interfaces 301, 303, and 304). The terminal device 200 may also obtain the device identifier of the terminal device 200 when the client 210 enters any one of the video processing interfaces 301, 303, and 304.
The device identifier may include, for example, device model information and device parameter information of the device. The device model information may be used to uniquely identify a device model and may include, for example, the model name, vendor information, and the like. The device parameter information may indicate the hardware used by the device and may include, for example, the Central Processing Unit (CPU) name, the CPU model, the Graphics Processing Unit (GPU) name, and the like.
In this embodiment, the terminal device 200 or the server 100 may store a correspondence between device identifiers and performance gears, for example as a gear table. The correspondence may be obtained from a designated data system, for example a benchmark system: the benchmark system may compute a comprehensive score for each device model from performance parameters such as the device's CPU, GPU, and memory, together with the resolution, aspect ratio, and number of processing cores used for the images it displays, and divide the device models into at least two gears according to the score, for example 3 or 4 gears. Alternatively, the correspondence may be obtained in advance through performance evaluation and stored in the terminal device 200 or the server 100. This embodiment does not limit how the correspondence is obtained.
It is understood that, in this embodiment, the correspondence relationship between the device identifier and the performance gear may be a data record including the device identifier and the performance gear.
Based on the above correspondence, the terminal device 200 may use the device identifier of the target device as an index, look up the data record containing that device identifier, and determine the performance gear in that record as the gear to which the performance of the target device belongs, that is, the target gear in S204.
S205: determine the video effect elements divided into the target gear as the first video effect elements.
In this embodiment, in the terminal device 200 or the server 100, each available video effect element may be divided according to the performance requirement of the video effect element on the device, so as to determine the corresponding relationship between different video effect elements and different performance levels of the device. Each video effect element here may be, for example, all available video effect elements provided by the server 100. For convenience of description, hereinafter, the correspondence relationship between the device identifier and the performance level is described as a first correspondence relationship, and the correspondence relationship between the video effect element and the performance level is described as a second correspondence relationship. Wherein the second correspondence may be a data record comprising an element identification of the video effect element and the performance level. The element identification may comprise any identifier capable of uniquely identifying a video effect element, such as an element id (identity).
In an implementation process, the terminal device 200 may use the target gear as an index to search for a second corresponding relationship including the target gear, and obtain, according to an element identifier in the searched second corresponding relationship, a video effect element having the element identifier, where the obtained video effect element is a video effect element corresponding to the target gear.
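The two lookups described above (device identifier to target gear via the first correspondence, then target gear to element identifiers via the second correspondence) can be sketched with plain dictionaries. The keys and values below are invented placeholders, not data from the patent:

```python
# First correspondence: device identifier -> performance gear.
first_corr = {
    "vendorA-model1": "gear1",
    "vendorB-model7": "gear3",
}

# Second correspondence: performance gear -> element IDs of the video effect
# elements divided into that gear.
second_corr = {
    "gear1": ["template-001", "template-002", "sticker-010"],
    "gear3": ["template-001"],  # a lower gear gets the lighter subset
}

def first_video_effect_elements(device_id):
    """Index by device identifier to get the target gear, then by target gear
    to get the element IDs of the elements divided into it."""
    target_gear = first_corr.get(device_id)
    if target_gear is None:
        return []  # device not covered; see the similarity fallback in S204-2
    return second_corr.get(target_gear, [])
```

In a real deployment these tables would be backed by the gear table held by the terminal device 200 or the server 100.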
Optionally, in this embodiment, the element identifier of a video effect element may further include an icon of the video effect element. The icon may be, for example, a thumbnail of the video effect element; as another example, when the video effect element is a video template, the icon may be the cover image of the video template. The terminal device 200 may enter a selection interface for any type of video effect element (e.g., type-A video effect elements) in response to a user operation on the selection entry for that type, and display on that effect element selection interface the icons of the type-A video effect elements corresponding to the target gear. For example, in the video template selection interface shown in fig. 3C, 303-1, 303-2, 303-3, and 303-4 may be the cover images of four video templates, respectively. The user may select any video effect element by selecting its icon.
In this embodiment, the terminal device 200 may obtain the video effect elements divided into the target gear in various ways. In one embodiment, after the corresponding element identifiers are found according to the target gear to which the performance of the target device belongs, the video effect elements having those identifiers (i.e., the video effect elements divided into the target gear) may be acquired and stored locally on the terminal device 200 directly. When the user triggers entry into the effect element selection interface, the icons of those video effect elements are displayed, and when the user selects the icon of a video effect element there, the terminal device 200 can read that element's specific content from local storage.
In another embodiment, after finding the corresponding element identifiers according to the target gear, the terminal device 200 may determine the video effect elements having those identifiers and store only their element identifiers (element ID and icon) locally. Correspondingly, when the user triggers entry into the effect element selection interface, the locally stored icons are displayed; when the user selects the icon of a video effect element, the terminal device 200 obtains that element's specific content from the server 100 according to its element ID.
In the embodiment of the present application, the video processing method may also be implemented by the interaction between the server 100 and the terminal device 200. Referring to fig. 5, an interaction flow of the server 100 and the terminal device 200 in implementing the video processing method is exemplarily shown.
S501, when detecting a trigger operation for entering the video processing interface of the client 210, the terminal device 200 obtains a device identifier of the device.
S502, the terminal device 200 transmits the device identification to the server 100.
S503, the server 100 obtains the target gear to which the performance of the terminal device 200 belongs according to the device identifier.
S504, the server 100 determines the video effect element classified into the target gear as the first video effect element.
S505, the server 100 transmits the element identification of the first video effect element to the terminal device 200. Wherein the element identification may include an icon and an element ID of the first video effect element.
S506, the terminal apparatus 200 displays an icon of the first video effect element.
S507: in response to an effect selection operation for the video to be processed, the terminal device 200 determines, from the displayed first video effect elements, a second video effect element corresponding to the operation, and processes the video to be processed according to the second video effect element to obtain a target video having the corresponding video effect.
The terminal device 200 may be regarded as the target device in the foregoing embodiment. The detailed implementation of the flow shown in fig. 5 is similar to the flows shown in fig. 2 and fig. 4, and reference may be made to the foregoing description of those steps.
Through the process shown in fig. 5, the server can deliver video effect elements in tiers according to the device performance of each terminal device, so that the performance required to run the video effect element selected by the user matches the performance of the terminal device. This mitigates severe stuttering while the video is processed according to the selected video effect element, as well as choppy playback of the processed video.
Referring to fig. 2 and fig. 6 together, the video processing method shown in fig. 2 will be described in detail. In this embodiment, S204 may be implemented by the flow shown in fig. 6.
S204-1: search the stored correspondences between device identifiers and performance gears for a target correspondence that includes the device model information of the target device. If none exists, S204-2 may be executed; if one exists, S204-4 may be executed.
The correspondence in S204-1 is the first correspondence described above. Since the first correspondences are predetermined, they may not cover all devices. A device can be uniquely determined by its device parameter information, so after the device identifier of the target device is obtained, the device parameter information in the identifier can be used as an index to search for a first correspondence that includes it. A first correspondence including that device parameter information is the target correspondence in S204-1. In this embodiment, the device parameter information may include device model information, manufacturer information, and the like.
S204-2: from the stored correspondences, find the one whose device parameter information has the greatest similarity to the device parameter information of the target device, and determine the performance gear in the found correspondence as the target gear.
In the implementation process, if the predetermined first correspondences do not cover the target device (e.g., the terminal device 200), no first correspondence containing the target device's parameter information can be found among the stored first correspondences. In this case, the terminal device 200 may determine, based on its own device parameter information, the most similar device among the devices that the predetermined first correspondences do cover. As described above, the device parameter information may include at least two items of parameter information, for example the CPU name, CPU model, GPU name, number of processing cores, and the like.
The terminal device 200 may compute the similarity between the device parameter information in each stored first correspondence and its own device parameter information, where the similarity between two pieces of device parameter information may be the number of items of parameter information they have in common: the more items that match, the greater the similarity. For example, suppose device parameter information data1, data2, and data3 each include 4 items; if data1 and data2 share 2 items while data1 and data3 share 3, then the similarity between data1 and data3 is greater than that between data1 and data2. If the device parameter information of at least two covered devices is exactly the same as that of the terminal device 200, in one embodiment the device whose manufacturer information matches that of the terminal device 200 may be chosen as the most similar device; in another embodiment, one of them may be chosen at random.
In this manner, the terminal apparatus 200 may determine, from among the stored first correspondences, a first correspondence in which the degree of similarity between the included device parameter information and the device parameter information of the present apparatus is the greatest, and a performance stage in the determined first correspondence may be determined as a target stage.
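The similarity fallback of S204-2 (count matching parameter items, tie-break on manufacturer) might be sketched as below. The record layout and parameter names are assumptions; the random tie-break variant is not modeled:

```python
def most_similar_gear(device_params, vendor, records):
    """records: (params_dict, vendor, performance_gear) tuples taken from the
    stored first correspondences. Returns the gear of the most similar device."""
    def score(rec):
        params, rec_vendor, _gear = rec
        # similarity = number of identical items of parameter information
        same_items = sum(1 for k, v in device_params.items() if params.get(k) == v)
        # when similarities tie, prefer a device from the same manufacturer
        return (same_items, 1 if rec_vendor == vendor else 0)
    return max(records, key=score)[2]

records = [
    ({"cpu": "X1", "gpu": "G1", "cores": 8, "cpu_model": "A"}, "vendorA", "gear1"),
    ({"cpu": "X1", "gpu": "G2", "cores": 8, "cpu_model": "A"}, "vendorB", "gear2"),
]
params = {"cpu": "X1", "gpu": "G1", "cores": 8, "cpu_model": "B"}
# three items match the first record but only two match the second
assert most_similar_gear(params, "vendorC", records) == "gear1"
```

Once the gear is found this way, S204-3 would store a new record pairing the uncovered device's identifier with the returned gear.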
S204-3: establish and store a correspondence between the device identifier of the target device and the target gear.
When the stored first correspondences do not cover the target device (e.g., the terminal device 200), the terminal device 200 may, after determining the target gear through S204-2, create a data record containing its device identifier and the target gear. This data record is the correspondence described in S204-3.
S204-4, determining the performance gear in the target corresponding relation as the target gear.
If the stored first corresponding relations cover the target device, the first corresponding relations containing the device parameter information, namely the target corresponding relations, can be found by taking the device parameter information of the target device as an index. The performance gear in the target correspondence may be determined as a gear to which the performance of the target apparatus belongs, i.e., a target gear.
Through the process shown in fig. 6, the gear to which the performance of each terminal device belongs can be automatically determined, and then the video effect elements that need to be issued to the terminal devices are determined based on the gear.
In this embodiment, before determining the corresponding video effect element based on the gear to which the device performance belongs, a second corresponding relationship between different performance gears and different video effect elements may be determined. Correspondingly, before executing S205, the video processing method provided by this embodiment may further include the steps shown in fig. 7.
S701, determining at least two performance gears.
Wherein the at least two performance gears may be determined according to the acquired first corresponding relationships. In one example, each performance gear present in each first correspondence (e.g., gear table) may be determined as the at least two performance gears. For example, if performance gear 1, performance gear 2, performance gear 3, and performance gear 4 occur in each first corresponding relationship, the performance gear 1, performance gear 2, performance gear 3, and performance gear 4 may be determined as the at least two performance gears.
In another example, adjacent performance gears occurring in the first correspondences may be merged into one performance gear; for instance, performance gears 1 and 2 above may be merged into a single gear. The merging does not affect the data processing flow; it only changes how the second correspondence between performance gears and video effect elements is recorded. In this example, a second correspondence is recorded between gear 12 (obtained by merging gears 1 and 2) and video effect elements; in detail, the video effect elements corresponding to gear 12 include those corresponding to gears 1 and 2 respectively.
S702: for each performance level, divide at least one of the video effect elements into that level according to the first performance parameter information of each video effect element running on the test devices of that level and the user value weight of each video effect element.
The first performance parameter information of each video effect element running on the test equipment in the performance level refers to parameter information collected when the video effect element runs on the test equipment in the performance level, and is used for representing the performance requirement of the video effect element on the test equipment in the performance level.
In this embodiment, there may be multiple types of devices belonging to the same performance gear. In other words, one performance gear may correspond to multiple device models. The implementation of S702 will be explained below by taking one performance gear k as an example.
In one embodiment, for each device model belonging to performance gear k, one device of that model can serve as a test device of gear k; the number of test devices then equals the number of device models corresponding to gear k. In another embodiment, a specified proportion (e.g., 8% to 12%, such as 10%) of the device models corresponding to performance gear k may be selected: for example, if 125 commercially available device models belong to gear k, 10 of them may be selected, and one device of each selected model obtained as a test device of gear k. In this way, the amount of test data can be reduced while the accuracy of the first performance parameter information is maintained.
In this embodiment, the video effect elements are all video effect elements that the server 100 can provide, and they may be integrated into a test component. As shown in fig. 8, the test component 800 may include the video effect elements and a run switch K for each of them; before each run of the test component 800, the run switches can control which video effect elements are invoked in that run. In this way, the video effect elements to run can be flexibly selected from the test component 800 as needed, and combinations of video effect elements can be tested flexibly. It is worth mentioning that running a video effect element can be understood as the terminal device rendering the element, putting it on screen, and displaying it.
After the test devices of performance gear k are determined, the test component 800 may be run on each of them; the component runs each video effect element in turn and records the first performance parameter information of that element on that test device. Illustratively, if performance gear k has N test devices (N a positive integer) and there are M video effect elements (M a positive integer), then for each video effect element Ci (1 ≤ i ≤ M, i an integer) the test component 800 can collect N pieces of first performance parameter information, one per test device.
In this embodiment, the first performance parameter information of each video effect element on the test devices of performance gear k may be obtained as follows: collect the CPU occupancy and memory occupancy of the video effect element while it runs on a test device of gear k, together with the time consumed to process each video frame, and take the collected CPU occupancy, memory occupancy, and per-frame time as that element's first performance parameter information.
In other words, the first performance parameter information herein may include, for example, CPU occupancy, memory occupancy, and time consumption for processing one video frame. The time consumed by the test device to process each video frame may be replaced by the video frame rate, which is not limited in this embodiment.
In addition, for each video effect element Ci, a user value weight for that video effect element Ci may also be obtained. In this embodiment, the user value weight of the video effect element Ci may be inversely proportional to the user value of the video effect element Ci, that is, the larger the user value of the video effect element is, the smaller the user value weight of the video effect element may be; conversely, if the user value of a video effect element is smaller, the user value weight for that video effect element may be greater.
In one example, the user value weight of each video effect element in S702 can be obtained as follows: obtain the user amount of the video effect element and determine the user value weight from it, with the weight inversely proportional to the user amount. A larger user amount represents a larger user value of the element, so a weight inversely proportional to the user amount can be considered inversely proportional to the user value.
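A toy version of such a weight, assuming simple inverse proportionality (the scale constant is an invented tuning knob, not specified by the text):

```python
def user_value_weight(user_amount, scale=1.0):
    """Weight inversely proportional to the element's user amount: a popular
    element gets a small weight, so its measured cost counts for less in the
    evaluation of S702-1. Guard against a zero user amount."""
    return scale / max(user_amount, 1)

# a heavily used element is weighted far less than a rarely used one
assert user_value_weight(1000) < user_value_weight(10)
```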
In this embodiment, the first performance parameter information of a video effect element on the test devices of performance gear k represents that element's performance requirement on devices of gear k, and the user value weight reflects the element's user value. Dividing video effect elements into gear k based on both pieces of information therefore maximizes user value as far as possible while keeping the performance requirement satisfied (that is, the elements divided into gear k essentially match the performance of gear-k devices).
In detail, S702 may have various implementations. In one possible implementation, S702 may be implemented by the process shown in fig. 9.
S702-1, obtaining a performance evaluation result of each video effect element on the test equipment in the performance level according to the first performance parameter information of each video effect element running on the test equipment in the performance level and the user value weight of the video effect element.
Taking the video effect element Ci as an example: for each test device Dj of performance gear k (1 ≤ j ≤ N, j an integer), a performance evaluation result can be calculated from the first performance parameter information of Ci running on Dj and the user value weight of Ci. Taking as an example first performance parameter information that includes three performance parameters (CPU occupancy, memory occupancy, and the time consumed to process one video frame), in one possible implementation the result may be calculated as follows:
Compute the products of the user value weight of Ci with, respectively, the CPU occupancy p1, the memory occupancy p2, and the per-frame processing time p3 of Ci running on the test device Dj, obtaining three products, and take their sum. This sum can be taken as the performance evaluation result, denoted Rij.
In another possible implementation, an influence factor may be determined for each performance parameter according to how strongly that parameter affects the running effect of the video effect element. The products of the CPU occupancy p1, the memory occupancy p2, and the per-frame processing time p3 with their respective influence factors are computed first, yielding adjusted values p1', p2', and p3'; the performance evaluation result Rij is then calculated from p1', p2', p3', and the user value weight of Ci following the procedure of the previous implementation.
As described above, the first performance parameter information represents the performance requirements of the video effect elements on the operating device, and therefore, the process of calculating the performance evaluation result based on the user value of the video effect elements and the first performance parameter information can be regarded as adjusting the performance requirements of the video effect elements on the operating device based on the user value of the video effect elements, so that some video effect elements with higher performance requirements but higher user values can be also divided into performance levels k.
It can be understood that the above calculation method of the performance evaluation result is only an example, and in this embodiment, the performance evaluation result may also be calculated in other manners. For example, the performance evaluation result can also be the reciprocal of Rij, 1/Rij, described above.
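The two Rij variants described above fit in a few lines; this is a sketch of the stated formula with illustrative parameter values only:

```python
def evaluate(p1, p2, p3, weight, factors=(1.0, 1.0, 1.0)):
    """Performance evaluation result R_ij of element Ci on test device Dj.

    p1: CPU occupancy, p2: memory occupancy, p3: time to process one frame,
    weight: user value weight of Ci. With the default factors this is the
    first implementation (sum of the three weight * parameter products);
    non-default factors give the influence-factor variant.
    """
    a1, a2, a3 = factors
    return weight * (a1 * p1) + weight * (a2 * p2) + weight * (a3 * p3)

# with identical measured cost, a popular element (small weight) scores lower,
# i.e. its effective performance requirement is reduced
assert evaluate(0.3, 0.2, 16.0, weight=0.1) < evaluate(0.3, 0.2, 16.0, weight=0.5)
```

The reciprocal variant 1/Rij mentioned in the text simply inverts the ordering used in S702-2.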
S702-2, sorting the performance evaluation results of the video effect elements on the test equipment of the performance level according to the size relationship.
In this embodiment, in the implementation process, the M individual performance evaluation results of the M video effect elements on the testing device Dj may be sorted in the order from large to small or from small to large. For example, if the performance evaluation result of the video effect element Ci at the testing device Dj is Rij, the larger the performance evaluation result is, the higher the performance requirement of the video effect element is. For another example, if the performance evaluation result of the video effect element Ci at the testing device Dj is 1/Rij, the lower the performance evaluation result, the higher the performance requirement of the video effect element.
S702-3, selecting the performance evaluation results of the target proportion from the performance evaluation results according to the sequence.
S702-4, dividing the video effect elements corresponding to the performance evaluation results of the target proportion into the performance gears.
In this embodiment, when a larger performance evaluation result means a higher performance requirement of the video effect element, the performance evaluation results of the target proportion may be selected in order of performance requirement from low to high, that is, in ascending order of the evaluation results. For example, with M performance evaluation results in total and a target proportion of 22%, the number of selected results may be 22%×M if that is an integer; otherwise it may be the largest integer smaller than 22%×M or the smallest integer larger than 22%×M, i.e., 22%×M rounded to an integer.
The target proportion can be set flexibly. In one example, it can be fixed and set according to the target gear. For example, with 4 gears, if performance gear k is the gear obtained by merging gears 1 and 2 (the two gears with the best device performance), the target proportion may be 100%; if performance gear k is gear 3, the target proportion may be 60%-80%; if performance gear k is gear 4, it may be 30%-50%. This embodiment does not limit this.
In another example, the target proportion may change dynamically, for example, with a performance requirement of the client 210, where the performance requirement may be a limitation on the time consumption (or frame rate) of processing a video frame by the client 210, for example, that the time consumption is less than a target duration.
In detail, in the implementation process, performance evaluation results may be selected one by one in the order of the performance requirements of the video effect elements from low to high, and the selection may stop when the sum of the time consumed by the video effect elements corresponding to the selected performance evaluation results to process one video frame reaches the target duration. At this time, the proportion of the selected performance evaluation results among all the performance evaluation results is the target proportion.
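The dynamic derivation of the target proportion from a per-frame time budget can be sketched as follows; names and the stopping rule (stop before exceeding the budget) are assumptions for illustration:

```python
def select_within_target_duration(elements, target_duration_ms):
    """Dynamically derive the target proportion from a per-frame time
    budget. `elements` is a list of (element_id, per_frame_cost_ms)
    pairs, assumed already sorted from the lowest to the highest
    performance requirement. Selection stops once adding the next
    element would exceed the target duration.
    """
    selected, total_ms = [], 0.0
    for element_id, cost_ms in elements:
        if total_ms + cost_ms > target_duration_ms:
            break
        total_ms += cost_ms
        selected.append(element_id)
    target_proportion = len(selected) / len(elements)
    return selected, target_proportion
```

For four elements costing 4, 6, 12, and 20 ms per frame and a 16 ms budget, the first two elements are kept, so the target proportion works out to 50%.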
When a larger performance evaluation result indicates a lower performance requirement of the video effect element, the performance evaluation results of the target proportion may be selected in the order of performance evaluation results from large to small.
After the performance evaluation results of the target proportion are selected, the video effect element corresponding to each selected performance evaluation result can be determined as the video effect element corresponding to the performance level k. Based on the processing result of S702-4, a second corresponding relationship between the performance level k and the video effect element may be established, and second corresponding relationships between other performance levels and the video effect element may also be determined and established according to the flow shown in fig. 7, which is not limited in this embodiment.
In another possible embodiment, S702 may be implemented by the process shown in fig. 10, which is described in detail as follows.
S702-5, obtaining a performance evaluation result of each video effect element on each type of test equipment in the performance level according to first performance parameter information of each video effect element running on each type of test equipment in the performance level and the user value weight of the video effect element.
The detailed implementation process of S702-5 is similar to that of S702-1 described above, and is not described herein again.
S702-6, sorting the performance evaluation results of the video effect elements on each type of test equipment of the performance level according to the size relationship to obtain a performance evaluation result sequence.
S702-7, determining the performance evaluation result of the target proportion from the performance evaluation result sequence of each type of test equipment according to the sequence, and determining the video effect elements corresponding to the performance evaluation results of the target proportion to obtain the effect element group corresponding to the type of test equipment.
Still taking performance level k as an example, for each type of testing device Dj and the M video effect elements of performance level k, M performance evaluation results can be obtained, and these M performance evaluation results can be regarded as a group of evaluation data corresponding to the testing device Dj. Correspondingly, for the N types of test equipment, N groups of evaluation data can be obtained, and each group of evaluation data includes M performance evaluation results. In the implementation process, each group of evaluation data can be sorted to obtain a performance evaluation result sequence. It can be understood that the N groups of evaluation data are sorted in the same manner, which may be ascending or descending; this embodiment does not limit this.
Based on S702-6, N performance evaluation result sequences can be obtained. For each performance evaluation result sequence, performance evaluation results of the target proportion can be selected from the sequence in the order of the performance requirements represented by the performance evaluation results from low to high. The selection process is similar to S702-3 described above and is not repeated here.
And S702-8, dividing the same video effect elements in the effect element groups corresponding to the various types of test equipment in the performance level into the performance level.
The video effect elements corresponding to the performance evaluation results of the target proportion selected from the performance evaluation result sequence j corresponding to the test equipment Dj form the effect element group j corresponding to the test equipment Dj of performance level k, and the video effect elements in the effect element group j are suitable for the test equipment Dj of performance level k. In this embodiment, the video effect elements respectively suitable for the N types of test equipment of performance level k may be compared, and the video effect elements shared by the effect element groups corresponding to the N types of test equipment, that is, the video effect elements suitable for all N types of test equipment of performance level k, are determined as the video effect elements corresponding to performance level k.
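The division step S702-8 amounts to intersecting the per-device effect element groups; a minimal sketch, with assumed names, follows:

```python
def elements_for_gear(effect_element_groups):
    """`effect_element_groups` maps each test-device type of a
    performance gear to the set of element ids selected for it
    (S702-7). The elements suitable for every device type of the
    gear are the intersection of all groups (S702-8).
    """
    groups = list(effect_element_groups.values())
    return set.intersection(*groups)
```

For example, if elements b and c are selected for every one of three device types while a and d each appear for only one type, only b and c are divided into the gear.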
Based on the processing result of S702-8, a second correspondence relationship of the performance stage k and the video effect element can be established. It is understood that the second corresponding relationship between other performance gears and video effect elements can also be determined and established according to the flow shown in fig. 10, which is not limited by the embodiment.
In yet another possible implementation, S702 may be implemented by the flow shown in fig. 11, which is described in detail below.
S702-9, determining at least two video effect elements with incidence relation from the video effect elements, and acquiring first performance parameter information of the at least two video effect elements running on the test equipment of the performance level.
Some of the video effect elements provided by the server 100 are single effect elements, some are combined effect elements, and a combined effect element may be regarded as an entity formed by packaging a plurality of single effect elements, for example, a video template may be regarded as a combined effect element, and an entity formed by combining all the beauty filter information may also be regarded as a combined effect element. In practical application, some single effect elements can be used in cooperation with each other. Some single effect elements may then be superimposed onto the combined effect element for use with the combined effect element.
The terminal device 200 or the server 100 may store an association relationship between video effect elements suitable for use therewith. For example, if the video effect elements C1 and C2 are suitable for use in cooperation, the terminal device 200 or the server 100 may have data records stored therein including C1 and C2. When an association relationship exists between any group of video effect elements, first performance parameter information of each of the group of video effect elements respectively running on the test equipment can be acquired. It is to be understood that a set of video effect elements herein may include at least two video effect elements.
S702-10, according to the user value weight of each video effect element, carrying out weighted summation on the first performance parameter information of the at least two video effect elements, and if the obtained sum meets the target condition, dividing the at least two video effect elements into the performance gears.
Taking the above-mentioned group of video effect elements having an association relationship including video effect elements C1 and C2 as an example, assuming that the user value weight corresponding to the video effect element C1 is V1, and the user value weight corresponding to the video effect element C2 is V2, the first performance parameter information of the video effect element C1 running on the testing device Dj and the first performance parameter information of the video effect element C2 on the testing device Dj may be weighted and summed according to V1 and V2.
In this embodiment, for at least two video effect elements that can be used cooperatively, the terminal device 200 or the server 100 may store conditions that need to be satisfied for the performance parameters of the at least two video effect elements running on the test device, that is, the target conditions in S702-10. The target condition can be flexibly set, for example, according to statistical data or experience of a developer. It is understood that the target condition may be less than or equal to a threshold value or may be a range of values. In practice, when the result of the above weighted summation satisfies the target condition, it can be considered that the video effect elements C1 and C2 can be operated in parallel on the test device Dj of the performance stage k. In this case, both of the video effect elements C1 and C2 may be determined as the video effect element corresponding to performance level k.
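The weighted-summation check of S702-10 can be sketched as follows; the reduction of the first performance parameter information to one scalar cost per element, the `<=` form of the target condition, and all names are assumptions (the patent also allows the target condition to be a value range):

```python
def usable_in_parallel(costs, weights, threshold):
    """Weighted summation of the first performance parameter
    information (reduced here to one scalar cost per element, e.g.
    time per frame on test device Dj) with user value weights
    V1, V2, ...; the target condition is modeled as `<= threshold`.
    """
    weighted_sum = sum(w * c for w, c in zip(weights, costs))
    return weighted_sum <= threshold
```

When the weighted sum satisfies the condition, both elements (e.g., C1 and C2) would be divided into performance gear k; otherwise only one of them would be selected, as described below.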
For example, when it is determined that the video effect elements C1 and C2 can be run in parallel on a plurality of test devices, the video effect elements C1 and C2 may be determined as video effect elements corresponding to performance level k. The plurality of test devices may be, for example, more than 3 test devices, and may also be, for example, more than 30% of the test devices in performance level k, which is not limited in this embodiment.
In other scenarios, when at least two video effect elements (e.g., video effect elements C3, C4, C5) are used in cooperation, the first performance parameter information on the test equipment may vary more, rather than being a simple superposition of the first performance parameter information of the video effect elements C3, C4, C5 on the test equipment, respectively. In this case, the video effect elements C3, C4, and C5 may be run in parallel on the testing device through the testing component 800, and the first performance parameter information of the three elements C3, C4, and C5 as a whole may be collected during the run. Correspondingly, the video effect elements C3, C4, C5 may be regarded as one combined effect element at this time, its user value weight may be newly determined based on the sum of the user amounts of the video effect elements C3, C4, C5, and the product of the newly determined user value weight and the first performance parameter information of the video effect elements C3, C4, C5 as a whole may be regarded as the sum obtained by the weighted summation in S702-10. Correspondingly, when the product satisfies the above target condition, the video effect elements C3, C4, C5 may all be determined as video effect elements of the target gear.
Correspondingly, when the result of the weighted summation does not meet the target condition, one of the at least two video effect elements with the association relationship can be selected as the video effect element corresponding to performance level k. The selection may be random or made according to the user amount, for example, selecting the video effect element with the largest user amount.
It should be noted that in this embodiment, S702 may include only the flow shown in any one of fig. 7, fig. 10, and fig. 11, or may include two or more of these flows at the same time, which is not limited in this embodiment.
In practical application, there may be some video effect elements whose performance requirements on the running device are so high that their performance evaluation results never fall within the target proportion of performance evaluation results during the execution of S702. In the implementation process, such a video effect element may be simplified to form a new video effect element, and the new video effect element may subsequently participate, as one of the video effect elements provided by the server 100, in the processing flow shown in fig. 7, so that it can be divided into at least one corresponding performance gear through S702.
In this embodiment, the second correspondence between video effect elements and performance gears is obtained by testing only a part of the devices in each performance gear, that is, the video effect elements to be issued to the devices of each performance gear are determined from test data. Therefore, in the process of actually running a video effect element on the terminal device 200, the performance gear of the video effect element can be corrected based on the performance parameter information of its actual operation. Based on this, after executing S203, the video processing method provided by this embodiment may further include the steps illustrated in fig. 12.
And S206, when the target device runs the second video effect element, monitoring second performance parameter information of the second video effect element running on the target device.
In this embodiment, the second performance parameter information refers to actual performance parameter information of the second video effect element running on the terminal device 200, and the first performance parameter information of the second video effect element is test performance parameter information of the second video effect element running on the test device. The first performance parameter information and the second performance parameter information may include the same parameters.
In S206, the terminal device 200 may process the video to be processed according to the second video effect element selected by the user; in this process, the second video effect element runs on the terminal device 200. At this time, through S206, the terminal device 200 may monitor the CPU occupancy, the memory occupancy, the time consumed for processing one video frame, and the like of the second video effect element running on the terminal device 200 as the second performance parameter information.
And S207, searching target first performance parameter information with the minimum difference value with the second performance parameter information from the first performance parameter information of the second video effect element running on the test equipment of each performance level.
And S208, determining the performance gear corresponding to the target first performance parameter information as the actual gear of the second video effect element.
In this embodiment, taking 4 performance gears as an example, assuming that performance gear 1 has N1 types of test equipment, performance gear 2 has N2 types, performance gear 3 has N3 types, and performance gear 4 has N4 types, the number of pieces of first performance parameter information corresponding to the second video effect element is N1+N2+N3+N4.
Since the first performance parameter information includes a plurality of parameters, for each of the N1+N2+N3+N4 pieces of first performance parameter information, the parameters may be weighted and summed according to the influence factor of each parameter on the running effect. For convenience of description, the sum obtained based on the first performance parameter information is referred to as a test performance value. Thus, N1+N2+N3+N4 test performance values are obtained.
Similarly, the parameters in the second performance parameter information may also be weighted and summed according to the influence factor of each parameter on the operation effect, and the resulting sum is referred to as the actual performance value.
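Collapsing a multi-parameter record into a single performance value might be sketched as below; the parameter names and influence-factor values are purely illustrative assumptions:

```python
def performance_value(params, influence_factors):
    """Collapse a multi-parameter performance record (CPU occupancy,
    memory occupancy, time per frame, ...) into one scalar by
    weighting each parameter with its influence factor on the running
    effect. Applied to first performance parameter information this
    yields a test performance value; applied to second performance
    parameter information, an actual performance value.
    """
    return sum(influence_factors[name] * value
               for name, value in params.items())
```

The same function is applied to both the test records and the monitored record, so the resulting test and actual performance values are directly comparable.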
The terminal device 200 may search, among the N1+N2+N3+N4 test performance values, for the one having the smallest difference from the actual performance value, and determine the performance gear corresponding to the first performance parameter information of that test performance value as the actual gear of the second video effect element.
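The nearest-value search of S207 and S208 can be sketched as follows, with assumed names; the test performance values per gear are those computed from the first performance parameter information on that gear's test devices:

```python
def actual_gear(test_values_by_gear, actual_value):
    """`test_values_by_gear` maps each performance gear to the test
    performance values of the second video effect element on that
    gear's test devices (N1+N2+N3+N4 values in total). Returns the
    gear whose test performance value differs least from the actual
    performance value.
    """
    best_gear, best_diff = None, float("inf")
    for gear, test_values in test_values_by_gear.items():
        for value in test_values:
            diff = abs(value - actual_value)
            if diff < best_diff:
                best_gear, best_diff = gear, diff
    return best_gear
```

If the returned gear differs from the gear recorded for the element identifier, the second correspondence would be updated to this actual gear, as S209 describes.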
S209, if the actual gear of the second video effect element is not consistent with the performance gear corresponding to the element identifier of the second video effect element, updating the performance gear in the second corresponding relationship where the element identifier of the second video effect element is located to the actual gear.
In this embodiment, the performance gear corresponding to the element identifier of the second video effect element is a gear determined for the second video effect element in advance based on the test data, and may be understood as a predicted gear. After determining the actual gear of the second video effect element, the actual gear and the predicted gear of the second video effect element may be compared, and if the two are the same, no processing may be performed. If the two are not consistent, the predicted gear is not accurate, so that the gear corresponding to the element identifier of the second video effect element can be adjusted to the actual gear. Therefore, the corresponding relation between the video effect elements and the gears can be verified and dynamically adjusted, and the matching degree of the performance requirements of the video effect elements issued to the terminal equipment on the running equipment and the actual performance of the terminal equipment is higher.
In order to make the processing flow of the above embodiment more clearly understood by those skilled in the art, the processing flow is further described below with reference to the processing flow diagram shown in fig. 13.
The server may obtain a device performance gear table in which a first correspondence between device identifications of different types of devices and gears to which the performance of the type of device belongs is set. Each device identification may characterize a device of one model, and thus a device identification herein may also be understood as a device model.
Optionally, in some cases, there are many device models in the device performance gear table. At this time, the device models with larger user amounts may be selected; for example, after sorting by user amount from large to small, the top 500 device models (hereinafter referred to as Top500 devices) are kept and the other models are removed. The Top500 devices include devices of multiple models belonging to different performance gears (e.g., performance gears 1, 2, 3, 4). For the device models corresponding to each performance gear i (i = 1, 2, 3, 4), a specified proportion of device models, for example 10%, may be extracted from the multiple device models corresponding to performance gear i, and the devices of the extracted device models are used as the test devices of performance gear i.
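The per-gear extraction of test devices might be sketched as below; taking the head of each list (largest user amounts first) and keeping at least one model per gear are assumptions, since the patent does not fix the extraction rule:

```python
import math

def pick_test_devices(models_by_gear, proportion=0.10):
    """From the Top500 device models grouped by performance gear,
    extract a specified proportion (e.g. 10%) per gear as test
    devices. Lists are assumed sorted by user amount, largest
    first; at least one model per gear is kept.
    """
    return {
        gear: models[:max(1, math.floor(proportion * len(models)))]
        for gear, models in models_by_gear.items()
    }
```

A gear with ten models thus contributes one test device at 10%, while a gear with only three models still contributes one rather than zero.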
Each test device may be configured to run the test component 800 described above to perform statistics on the performance of the effect elements of the test device, so as to obtain the first performance parameter information of each video effect element running in the test device. For the performance level i, the server may obtain a performance evaluation result of each video effect element in the test equipment of the performance level i based on the first performance parameter information of the test equipment of the video effect element in the performance level i and the user value weight of the video effect element. Further, the server may determine, based on the performance evaluation result of each video effect element in the test device of the performance level i, the video effect element corresponding to the performance level i, that is, establish a second correspondence between the performance level and the video effect element. In this embodiment, the second corresponding relationship may be stored in the server or the terminal device in the form of a hierarchical configuration table.
It can be understood that, in some cases, although there is a certain performance difference between the devices in two adjacent gears, when the video effect element is running, the performance difference is not obvious to the user, at this time, the two gears may be merged into one gear in the hierarchical configuration table, for example, in the scenario shown in fig. 13, performance gears 1 and 2 may be merged into one performance gear, and correspondingly, the video effect element divided into the merged performance gears includes the video effect elements divided into performance gears 1 and 2, respectively.
In the implementation process, when a user opens a video sharing application through a target device (for example, opens the client 210 through the terminal device 200) and enters a video processing interface of the video sharing application or performs a trigger operation for entering the video processing interface, the target device sends its device identifier to the server. The server may determine the target gear to which the target device belongs based on the device identifier, further determine the element identifiers of the video effect elements corresponding to the target gear based on the hierarchical configuration table, and send the element identifiers to the video sharing application for display. In this way, the video effect elements available for the user to select on the target device match the performance of the target device, which can avoid problems such as stuttering during processing or the processed video not running smoothly.
Optionally, in this embodiment, a preview tool may be further provided, and based on the preview tool, the terminal device 200 may respond to a preview operation to display the second performance parameter information of the second video effect element executed by the target device. In addition, the video effect element in this embodiment may be designed through a third-party platform, and correspondingly, the preview tool may also be run on the third-party platform, so that a designer of the video effect element may view, on the third-party platform, actual performance parameter information of the video effect element running on the third-party platform through the preview tool.
The preview tool may provide a preview interface, which may show the actual performance parameter information of any video effect element running on the device where the preview tool is located. The actual performance parameter information may include CPU occupancy, memory occupancy, time consumed for processing one video frame, frame rate, width and height of the video frame, and the like, where the time consumed for processing one video frame may be displayed per stage according to the different stages of video frame processing. For example, the processing of one video frame may include three stages: rendering (Render), image decoding (Image), and presenting to screen (Present), and the actual time consumption, average time consumption, and maximum time consumption of each of the three stages may be displayed on the preview interface. Through the preview tool, a designer can determine the devices of the performance gears to which a designed video effect element is applicable, and then adjust the design as required.
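The per-stage statistics the preview interface displays could be computed along these lines; taking the latest sample as the "actual" value and the key names are assumptions:

```python
def stage_time_stats(samples_by_stage):
    """Per-stage time-consumption statistics as the preview interface
    might display them: the latest (actual), average, and maximum
    time for each processing stage of a video frame, following the
    Render / Image / Present example.
    """
    return {
        stage: {
            "actual_ms": samples[-1],
            "average_ms": sum(samples) / len(samples),
            "maximum_ms": max(samples),
        }
        for stage, samples in samples_by_stage.items()
    }
```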
Referring to fig. 14, a block diagram of a video processing apparatus 1400 according to an embodiment of the present disclosure is shown. The apparatus 1400 may include a display module 1410, an effect selection module 1420, and a video processing module 1430.
The display module 1410 is configured to display a first video effect element on the video processing interface of the target device, where the first video effect element is a video effect element corresponding to the capability of the target device.
The effect selecting module 1420 is configured to determine, in response to an effect selecting operation for the video to be processed, a second video effect element corresponding to the effect selecting operation from the displayed first video effect elements.
The video processing module 1430 is configured to process the video to be processed according to the second video effect element, so as to obtain a target video having a video effect corresponding to the second video effect element.
Optionally, the apparatus 1400 may further comprise a determination module.
The determination module may be configured to determine a target gear to which the performance of the target device belongs from among the at least two performance gears, and determine the video effect element classified into the target gear as the first video effect element.
Optionally, the apparatus 1400 may further include a staging module. The grading module is used for dividing at least one of the video effect elements to the performance gear according to first performance parameter information of the video effect elements running on the test equipment of the performance gear and the user value weight of each video effect element for each performance gear before the video effect elements divided to the target gear are determined as the first video effect elements by the determining module.
Optionally, the mode of the grading module dividing at least one of the video effect elements into the performance level according to the first performance parameter information of each video effect element running on the test equipment of the performance level and the user value weight of each video effect element may be:
obtaining a performance evaluation result of each video effect element on the test equipment of the performance level according to first performance parameter information of each video effect element running on the test equipment of the performance level and the user value weight of the video effect element; sequencing the performance evaluation results of the video effect elements on the test equipment of the performance level according to the size relationship; selecting performance evaluation results of a target proportion from the performance evaluation results according to the sequence; and dividing the video effect elements corresponding to the performance evaluation results of the target proportion into the performance gears.
Optionally, the mode that the grading module divides at least one of the video effect elements into the performance level according to the first performance parameter information of each video effect element running on the test equipment of the performance level and the user value weight of each video effect element may also be:
obtaining a performance evaluation result of each video effect element on each type of test equipment at the performance level according to first performance parameter information of each video effect element running on each type of test equipment at the performance level and the user value weight of the video effect element; sequencing the performance evaluation results of the video effect elements on each type of test equipment of the performance level according to the size relationship to obtain a performance evaluation result sequence; according to the sequence, determining the performance evaluation result of the target proportion from the performance evaluation result sequence of each type of test equipment, and determining video effect elements corresponding to the performance evaluation results of the target proportion respectively to obtain an effect element group; and dividing the same video effect elements in the effect element groups corresponding to the various types of test equipment in the performance level into the performance level.
Optionally, the mode that the grading module divides at least one of the video effect elements into the performance level according to the first performance parameter information of each video effect element running on the test equipment of the performance level and the user value weight of each video effect element may also be:
determining at least two video effect elements with an incidence relation from the video effect elements, and acquiring first performance parameter information of the at least two video effect elements running on the test equipment of the performance level;
and according to the user value weight of each video effect element, carrying out weighted summation on the first performance parameter information of the at least two video effect elements, and if the obtained sum meets the target condition, determining the at least two video effect elements as the video effect elements corresponding to the performance gears.
Optionally, the grading module may obtain first performance parameter information of each video effect element running on the test equipment in the performance level by:
acquiring the CPU occupancy rate and the memory occupancy rate of each video effect element running on the test equipment of the performance level and the time consumption for processing each video frame; and determining the acquired CPU occupancy rate, memory occupancy rate and time consumption as the first performance parameter information of the video effect element.
Alternatively, the grading module may obtain the user value weight for each video effect element by: acquiring the user quantity of the video effect elements; determining a user value weight for the video effect element as a function of the user quantity, the user value weight being inversely proportional to the user quantity.
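An inversely proportional user value weight could be realized as simply as the reciprocal; the reciprocal form is one possible choice, since any positive proportionality constant satisfies the stated relationship:

```python
def user_value_weight(user_quantity):
    """User value weight inversely proportional to the user quantity,
    as the grading module describes: the more users a video effect
    element has, the smaller its weight.
    """
    return 1.0 / user_quantity
```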
Optionally, the device identifier of the target device may include device parameter information of the target device. Correspondingly, the manner in which the determination module determines the target gear to which the performance of the target device belongs from the at least two performance gears may be: searching the stored correspondences between device identifiers and performance gears for a target correspondence that includes the device parameter information of the target device; if no such correspondence exists, searching the stored correspondences for the one whose device parameter information has the greatest similarity to the device parameter information of the target device, and determining the performance gear in the found correspondence as the target gear.
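The exact-match-then-most-similar lookup might be sketched as follows; the names, the pair-based table representation, and the caller-supplied similarity function are all assumptions for illustration:

```python
def find_target_gear(correspondences, target_params, similarity):
    """`correspondences` is a list of (device_params, gear) pairs from
    the stored device-identifier/performance-gear table. An exact
    match on the device parameter information wins; otherwise the
    stored entry most similar to the target device's parameters (as
    scored by `similarity`) decides the gear.
    """
    for params, gear in correspondences:
        if params == target_params:
            return gear
    best = max(correspondences,
               key=lambda pg: similarity(pg[0], target_params))
    return best[1]
```

After the fallback path runs, a new correspondence between the target device's identifier and the returned gear could be stored, as the determination module further describes.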
Further, the determining module may be further configured to: and establishing and storing a corresponding relation between the equipment identification of the target equipment and the target gear.
Optionally, the manner of determining the target gear to which the performance of the target device belongs from the at least two performance gears by the determination module may also be: and when a target corresponding relation including the equipment parameter information of the target equipment exists, determining a performance gear in the target corresponding relation as the target gear.
Optionally, the apparatus 1400 may further comprise a preview module. The preview module can be to: when the target device runs a second video effect element, monitoring second performance parameter information of the second video effect element running on the target device; and responding to preview operation and displaying the second performance parameter information.
Optionally, the grading module may be further configured to: searching target first performance parameter information with the minimum difference value with the second performance parameter information from first performance parameter information of the second video effect element running on the test equipment of each gear; determining a performance gear corresponding to the target first performance parameter information as an actual gear of the second video effect element; and if the actual gear of the second video effect element is not consistent with the performance gear corresponding to the element identifier of the second video effect element, updating the performance gear corresponding to the element identifier of the second video effect element to the actual gear.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling, direct coupling, or communication connection between the modules shown or discussed may be implemented through interfaces; the indirect coupling or communication connection between devices or modules may be electrical, mechanical, or in another form.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in hardware or as a software functional module.
Referring to fig. 15, a block diagram of an electronic device according to an embodiment of the present application is shown. The electronic device 1500 may be the terminal device 200 or the server 100 shown in fig. 1. The electronic device 1500 in the present application may include one or more of the following components: a processor 1510, a memory 1520, and one or more applications, where the one or more applications may be stored in the memory 1520 and configured to be executed by the one or more processors 1510, the one or more programs being configured to perform the methods described in the foregoing method embodiments.
Processor 1510 may include one or more processing cores. The processor 1510 connects various parts throughout the electronic device 1500 using various interfaces and lines, and performs various functions of the electronic device 1500 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 1520 and calling data stored in the memory 1520. Optionally, the processor 1510 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 1510 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; the modem handles wireless communication. It is to be appreciated that the modem may also be implemented as a separate communication chip rather than integrated into the processor 1510.
The memory 1520 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 1520 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 1520 may include a stored-program area and a stored-data area, where the stored-program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments, and the like. The stored-data area may store data created by the electronic device 1500 in use (such as video effect elements and correspondences), and so on.
It is to be understood that the configuration shown in fig. 15 is merely exemplary, and that electronic device 1500 may include more or fewer components than shown in fig. 15, or have a completely different configuration than that shown in fig. 15. For example, when the electronic device 1500 is the terminal device 200, it may further include an image capturing device such as a camera.
Referring to fig. 16, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer readable medium 1600 has stored therein program code that can be invoked by a processor to perform the methods described in the method embodiments above.
The computer-readable storage medium 1600 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 1600 includes a non-transitory computer-readable storage medium. The computer-readable storage medium 1600 has storage space for program code 1610 for performing any of the method steps described above. The program code can be read from or written to one or more computer program products. The program code 1610 may be compressed, for example, in a suitable form.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (15)

1. A video processing method, comprising:
displaying a first video effect element on a video processing interface of a target device, wherein the first video effect element is a video effect element corresponding to the performance of the target device;
in response to an effect selection operation for a video to be processed, determining a second video effect element corresponding to the effect selection operation from the displayed first video effect elements;
and processing the video to be processed according to the second video effect element to obtain a target video with a video effect corresponding to the second video effect element.
2. The method of claim 1, wherein prior to said displaying the first video effect element, the method further comprises:
and determining a target gear to which the performance of the target equipment belongs from at least two performance gears, and determining the video effect element divided into the target gear as the first video effect element.
3. The method of claim 2, wherein prior to said determining the video effect element classified into the target gear as the first video effect element, the method further comprises:
for each performance level, classifying at least one of the video effect elements into the performance level according to first performance parameter information of each video effect element running on the test equipment of the performance level and the user value weight of each video effect element.
4. The method according to claim 3, wherein the classifying at least one of the video effect elements into the performance level according to the first performance parameter information of the respective video effect element running on the test equipment of the performance level and the user value weight of each video effect element comprises:
obtaining a performance evaluation result of each video effect element on the test equipment of the performance level according to first performance parameter information of each video effect element running on the test equipment of the performance level and the user value weight of the video effect element;
sorting the performance evaluation results of the video effect elements on the test equipment of the performance level by magnitude;
and selecting the performance evaluation results of a target proportion from the performance evaluation results in the sorted order, and classifying the video effect elements corresponding to the selected performance evaluation results into the performance level.
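The rank-and-select step in claims 3–4 amounts to: score each element on the gear's test device, sort the scores, and keep the top fraction. A minimal sketch under assumed data shapes (lower evaluation result = better performance is an assumption; the claim does not fix the sort direction):

```python
# Hypothetical sketch of claim 4: rank elements by their weighted
# performance evaluation result on a gear's test device and keep the
# top target proportion. "Lower score is better" is an assumption.

def classify_for_gear(scores, target_proportion):
    """scores: dict element_id -> evaluation result (lower is better)."""
    ranked = sorted(scores, key=scores.get)           # sort by magnitude
    keep = max(1, int(len(ranked) * target_proportion))
    return ranked[:keep]                              # elements admitted to this gear

scores = {"blur": 2.0, "beauty": 5.0, "3d_mask": 9.0, "particles": 12.0}
classify_for_gear(scores, 0.5)  # keeps the better-performing half
```

With a target proportion of 0.5, the cheaper half of the elements is admitted to the gear; a lower-end gear would use a smaller proportion or a stricter score.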
5. The method according to claim 3, wherein the classifying at least one of the video effect elements into the performance level according to the first performance parameter information of the respective video effect element running on the test equipment of the performance level and the user value weight of each video effect element comprises:
obtaining a performance evaluation result of each video effect element on each type of test equipment at the performance level according to first performance parameter information of each video effect element running on each type of test equipment at the performance level and the user value weight of the video effect element;
sorting the performance evaluation results of the video effect elements on each type of test equipment of the performance level by magnitude to obtain a performance evaluation result sequence;
according to the sorted order, determining the performance evaluation results of a target proportion from the performance evaluation result sequence of each type of test equipment, and determining the video effect elements respectively corresponding to the performance evaluation results of the target proportion, so as to obtain an effect element group corresponding to that type of test equipment;
and classifying into the performance level the video effect elements that are common to the effect element groups corresponding to the various types of test equipment of the performance level.
6. The method according to claim 3, wherein the classifying at least one of the video effect elements into the performance level according to the first performance parameter information of the respective video effect element running on the test equipment of the performance level and the user value weight of each video effect element comprises:
determining at least two video effect elements with an incidence relation from the video effect elements, and acquiring first performance parameter information of the at least two video effect elements running on the test equipment of the performance level;
and performing weighted summation on the first performance parameter information of the at least two video effect elements according to the user value weight of each video effect element, and if the obtained sum meets a target condition, classifying the at least two video effect elements into the performance gear.
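Claim 6's joint check for associated elements can be sketched as a weighted sum tested against a budget. The weights, measurement values, and "sum not exceeding a budget" reading of the target condition are all illustrative assumptions.

```python
# Hypothetical sketch of claim 6: associated video effect elements are
# admitted to a gear only if the weighted sum of their test measurements
# meets a target condition (here read as: stays within a budget).

def jointly_fits(elements, weights, measurements, budget):
    total = sum(weights[e] * measurements[e] for e in elements)
    return total <= budget

weights = {"filter": 0.3, "sticker": 0.7}            # user value weights
measurements = {"filter": 10.0, "sticker": 20.0}     # e.g. ms on test device
jointly_fits(["filter", "sticker"], weights, measurements, budget=20.0)
```

The point of the joint test is that associated elements (e.g. a filter that is always applied together with a sticker) are graded as a group rather than individually.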
7. The method of any of claims 3-6, wherein the user value weight for each video effect element is obtained by:
acquiring the user quantity of the video effect element;
determining the user value weight of the video effect element according to the user quantity, the user value weight being inversely proportional to the user quantity.
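One way to realize claim 7's inverse relation is w = c / (n + 1); the constant c and the +1 guard against zero users are assumptions, since the claim only requires inverse proportionality.

```python
# Hypothetical sketch of claim 7: a user value weight inversely
# proportional to the element's user quantity, so widely used elements
# weigh less. The constant c and the +1 guard are assumptions.

def user_value_weight(user_quantity, c=1.0):
    return c / (user_quantity + 1)   # +1 avoids division by zero

user_value_weight(99)
```

Under this choice, an element with many users contributes a small weight to the weighted sums of claims 4–6, while a niche element contributes a large one.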
8. The method according to any one of claims 2-6, wherein the device identification of the target device comprises device parameter information of the target device, and wherein determining the target gear to which the performance of the target device belongs from the at least two performance gears comprises:
searching the stored correspondences between device identifications and performance gears for a target correspondence that includes the device parameter information of the target device;
and if no such target correspondence exists, searching the stored correspondences for the correspondence whose device parameter information has the greatest similarity to the device parameter information of the target device, and determining the performance gear in the found correspondence as the target gear.
9. The method of claim 8, further comprising:
establishing and storing a correspondence between the device identification of the target device and the target gear.
10. The method of claim 8, wherein the determining a target gear to which the performance of the target device belongs from among at least two performance gears, further comprises:
and if a target correspondence including the device parameter information of the target device exists, determining the performance gear in the target correspondence as the target gear.
11. The method according to any one of claims 1-6, wherein the method further comprises:
when the target device runs a second video effect element, monitoring second performance parameter information of the second video effect element running on the target device;
and displaying the second performance parameter information in response to a preview operation.
12. The method of claim 11, further comprising:
searching the first performance parameter information of the second video effect element running on the test equipment of each performance gear for target first performance parameter information with the smallest difference from the second performance parameter information;
determining a performance gear corresponding to the target first performance parameter information as an actual gear of the second video effect element;
and if the actual gear of the second video effect element is not consistent with the performance gear corresponding to the element identifier of the second video effect element, updating the performance gear corresponding to the element identifier of the second video effect element to the actual gear.
13. A video processing apparatus, comprising:
the display module is used for displaying a first video effect element on a video processing interface of a target device, wherein the first video effect element is a video effect element corresponding to the performance of the target device;
the effect selection module is used for determining, in response to an effect selection operation for a video to be processed, a second video effect element corresponding to the effect selection operation from the displayed first video effect elements;
and the video processing module is used for processing the video to be processed according to the second video effect element to obtain a target video with a video effect corresponding to the second video effect element.
14. An electronic device, comprising:
one or more processors;
a memory;
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of any of claims 1-12.
15. A computer-readable storage medium, characterized in that a program code is stored in the computer-readable storage medium, which program code can be called by a processor to perform the method according to any of claims 1-12.
CN202010499112.1A 2020-06-04 2020-06-04 Video processing method and device and electronic equipment Pending CN111683280A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010499112.1A CN111683280A (en) 2020-06-04 2020-06-04 Video processing method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN111683280A true CN111683280A (en) 2020-09-18

Family

ID=72434601

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010499112.1A Pending CN111683280A (en) 2020-06-04 2020-06-04 Video processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111683280A (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105578395A (en) * 2015-12-28 2016-05-11 中国联合网络通信集团有限公司 Method and device for updating terminal attributes in terminal information database
CN107770626A (en) * 2017-11-06 2018-03-06 腾讯科技(深圳)有限公司 Processing method, image synthesizing method, device and the storage medium of video material
CN107823881A (en) * 2017-11-28 2018-03-23 杭州电魂网络科技股份有限公司 Special display effect method and device
WO2018127091A1 (en) * 2017-01-09 2018-07-12 腾讯科技(深圳)有限公司 Image processing method and apparatus, relevant device and server
CN108289185A (en) * 2017-01-09 2018-07-17 腾讯科技(深圳)有限公司 A kind of video communication method, device and terminal device
CN109284417A (en) * 2018-08-27 2019-01-29 广州飞磨科技有限公司 Video pushing method, device, computer equipment and storage medium
CN110163050A (en) * 2018-07-23 2019-08-23 腾讯科技(深圳)有限公司 A kind of method for processing video frequency and device, terminal device, server and storage medium
CN110475158A (en) * 2019-08-30 2019-11-19 北京字节跳动网络技术有限公司 Providing method, device, electronic equipment and the readable medium of video study material
CN110688270A (en) * 2019-09-27 2020-01-14 北京百度网讯科技有限公司 Video element resource processing method, device, equipment and storage medium
WO2020038128A1 (en) * 2018-08-23 2020-02-27 Oppo广东移动通信有限公司 Video processing method and device, electronic device and computer readable medium


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112532896A (en) * 2020-10-28 2021-03-19 北京达佳互联信息技术有限公司 Video production method, video production device, electronic device and storage medium
CN114286181A (en) * 2021-10-25 2022-04-05 腾讯科技(深圳)有限公司 Video optimization method and device, electronic equipment and storage medium
CN114286181B (en) * 2021-10-25 2023-08-15 腾讯科技(深圳)有限公司 Video optimization method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108921221B (en) User feature generation method, device, equipment and storage medium
CN106326391B (en) Multimedia resource recommendation method and device
US10719769B2 (en) Systems and methods for generating and communicating application recommendations at uninstall time
CN108090208A (en) Fused data processing method and processing device
CN104813256A (en) Gathering and organizing content distributed via social media
CN111541917B (en) Determination method of recommended video, video playing method, device and equipment
CN110413867B (en) Method and system for content recommendation
CN114490375B (en) Performance test method, device, equipment and storage medium of application program
CN111683280A (en) Video processing method and device and electronic equipment
CN111159563A (en) Method, device and equipment for determining user interest point information and storage medium
CN105844107B (en) Data processing method and device
CN111241381A (en) Information recommendation method and device, electronic equipment and computer-readable storage medium
CN113412481B (en) Resource pushing method, device, server and storage medium
CN106452808A (en) Data processing method and data processing device
CN110377821A (en) Generate method, apparatus, computer equipment and the storage medium of interest tags
CN111263241B (en) Method, device and equipment for generating media data and storage medium
CN111435369A (en) Music recommendation method, device, terminal and storage medium
CN114020960A (en) Music recommendation method, device, server and storage medium
KR101976816B1 (en) APPARATUS AND METHOD FOR PROVIDING MASH-UP SERVICE OF SaaS APPLICATIONS
CN113010790A (en) Content recommendation method, device, server and storage medium
CN112565902B (en) Video recommendation method and device and electronic equipment
CN111641868A (en) Preview video generation method and device and electronic equipment
CN110569447A (en) network resource recommendation method and device and storage medium
CN114491093B (en) Multimedia resource recommendation and object representation network generation method and device
CN116089490A (en) Data analysis method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40028933

Country of ref document: HK

SE01 Entry into force of request for substantive examination