CN110489572B - Multimedia data processing method, device, terminal and storage medium - Google Patents


Info

Publication number
CN110489572B
Authority
CN
China
Prior art keywords
audio data
target
motion
data type
multimedia data
Prior art date
Legal status
Active
Application number
CN201910785581.7A
Other languages
Chinese (zh)
Other versions
CN110489572A (en)
Inventor
宁小东
郑云飞
宋玉岩
章佳杰
李马丁
于冰
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201910785581.7A
Publication of CN110489572A
Application granted
Publication of CN110489572B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 — Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43 — Querying
    • G06F16/432 — Query formulation
    • G06F16/433 — Query formulation using audio data
    • G06F16/434 — Query formulation using image data, e.g. images, photos, pictures taken by a user
    • G06F16/48 — Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/483 — Retrieval characterised by using metadata automatically derived from the content

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Mathematical Physics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The disclosure relates to a multimedia data processing method, apparatus, terminal, and storage medium, in the field of network technology. The method first obtains motion degree information of the multimedia data based on its inter-frame motion information, then determines a target audio data type based on the motion degree information and a plurality of audio data types, and finally determines an audio file corresponding to the target audio data type as the target audio data of the multimedia data. Because both the inter-frame motion information and the motion degree information are obtained by simple computations, the method reduces the time cost of processing the multimedia data, improves processing efficiency, and improves the user's audio-visual experience.

Description

Multimedia data processing method, device, terminal and storage medium
Technical Field
The present disclosure relates to the field of network technologies, and in particular, to a method, an apparatus, a terminal, and a storage medium for processing multimedia data.
Background
With the development of multimedia technology, the variety of multimedia data processing software, such as image processing software and video processing software, keeps growing, and to offer users novel experiences, the functions of such software, such as intelligent beautification and automatic music matching, keep growing as well.
Taking a method that automatically scores images with music as an example, the current implementation is as follows: perform shooting-scene analysis and color analysis on the image to be scored to obtain shooting-scene information and color information, then search the Internet based on that information to obtain matching music.
This example method relies on content identification, namely the shooting-scene analysis and color analysis of the image, which is difficult to implement and takes a long time. The processing efficiency of the multimedia data is therefore low, which degrades the user's audio-visual experience.
Disclosure of Invention
The present disclosure provides a multimedia data processing method, apparatus, terminal and storage medium, so as to at least solve the problems of long processing time and low processing efficiency in the related art. The technical scheme of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a multimedia data processing method, including:
acquiring motion degree information of the multimedia data based on inter-frame motion information of the multimedia data, wherein the inter-frame motion information represents the motion change between any two frames;
determining a target audio data type based on the motion degree information and a plurality of audio data types, wherein the music beat of the target audio data type is matched with the motion degree information of the multimedia data;
and determining the audio file corresponding to the target audio data type as the target audio data of the multimedia data.
According to a second aspect of the embodiments of the present disclosure, there is provided a multimedia data processing apparatus including:
a motion degree information acquisition unit configured to acquire motion degree information of the multimedia data based on inter-frame motion information of the multimedia data, the inter-frame motion information representing the motion change between any two frames;
a target audio data type determination unit configured to perform determination of a target audio data type whose music tempo matches the motion degree information of the multimedia data based on the motion degree information and a plurality of audio data types;
and the target audio data determining unit is configured to determine the audio file corresponding to the target audio data type as the target audio data of the multimedia data.
In one possible implementation, the target audio data type determination unit is configured to perform:
and determining the audio data type corresponding to the music beat as a target audio data type when the matching degree of the motion degree information and the music beat accords with the target matching degree based on the motion degree information and the music beats of the plurality of audio data types.
In one possible implementation, the apparatus further includes:
a mapping unit configured to perform determining a motion score of the multimedia data based on the motion degree information; and mapping the motion scores in the motion score range corresponding to the plurality of audio data types to determine the target audio data type.
In one possible implementation, the apparatus further includes:
the adjusting unit is configured to determine the proportion of picture files in the multimedia data relative to the total number of files; when the proportion is larger than a target proportion, determine, from the plurality of audio data types, an audio data type whose music tempo is one level slower than that of the target audio data type, and adjust the target audio data type to the newly determined audio data type; and when the proportion is smaller than or equal to the target proportion, determine, from the plurality of audio data types, an audio data type whose music tempo is one level faster than that of the target audio data type, and adjust the target audio data type to the newly determined audio data type.
In one possible implementation, the apparatus further includes:
the inter-frame motion information determining unit is configured to determine motion information of a specified object in two adjacent image frames as inter-frame motion information of the multimedia data based on the position of the specified object included in the image frame in the video file included in the multimedia data.
In one possible implementation, the apparatus further includes:
a downloading unit configured to execute sending a downloading request to a target server, the downloading request being used for instructing downloading of the target audio data; and when the target audio data fails to be downloaded from the target server, determining a local audio file as the target audio data.
According to a third aspect of the embodiments of the present disclosure, there is provided a terminal, comprising a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the multimedia data processing method described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a storage medium including instructions that, when executed by a processor of a terminal, enable the terminal to perform the above-described multimedia data processing method.
The technical solution provided by the embodiments of the present disclosure brings at least the following beneficial effects: motion degree information of the multimedia data is first obtained based on inter-frame motion information of the multimedia data, a target audio data type is then determined based on the motion degree information and a plurality of audio data types, and an audio file corresponding to the target audio data type is finally determined as the target audio data of the multimedia data. Because both the inter-frame motion information and the motion degree information are obtained by simple computations, the method reduces the time cost of processing the multimedia data, improves processing efficiency, and improves the user's audio-visual experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a block diagram illustrating an audio service system according to an exemplary embodiment.
Fig. 2 is a flow chart illustrating a method of multimedia data processing according to an exemplary embodiment.
Fig. 3 is a flow chart illustrating a method of multimedia data processing according to an exemplary embodiment.
Fig. 4 is a block diagram illustrating a multimedia data processing apparatus according to an example embodiment.
Fig. 5 is a schematic diagram illustrating a structure of a terminal according to an exemplary embodiment.
Fig. 6 is a schematic diagram illustrating a configuration of a server according to an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a block diagram illustrating the construction of an audio service system according to an exemplary embodiment. The audio service system includes: a terminal 110 and an audio service platform 120.
The terminal 110 is connected to the audio service platform 120 through a wireless network or a wired network. The terminal 110 may be a mobile terminal, for example, the mobile terminal may be at least one of a smartphone, a game console, a desktop computer, a tablet computer, an e-book reader, an MP3 player, an MP4 player, and a laptop portable computer. The terminal 110 is installed and operated with an application program supporting an audio service. The application program can be any one of a multimedia data processing program, a social application program, an instant messaging application program and an information sharing program. Illustratively, the terminal 110 is a terminal used by a user, and an application running in the terminal 110 may have a user account registered therein.
The audio service platform 120 may include at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. Optionally, the audio service platform 120 comprises an audio server, a user information database, an audio database, and the like. The audio server provides the audio service to the terminal 110. There may be one or more audio servers. When there are multiple audio servers, at least two of them may provide different services, and/or at least two of them may provide the same service, for example in a load-balancing manner, which is not limited by the embodiments of the present disclosure. The user information database stores user information of the audio service platform, and the audio database provides audio data so that audio data can subsequently be determined for the terminal. Of course, the audio service platform 120 may also include other functional servers to provide more comprehensive and diversified services.
The terminal 110 may be generally referred to as one of a plurality of terminals, and the embodiment is only illustrated by the terminal 110. Those skilled in the art will appreciate that the number of terminals described above may be greater or fewer. For example, the number of the terminal may be only one, or several tens or hundreds, or more, in which case the audio service system further includes other terminals. The number of terminals and the type of the device are not limited in the embodiments of the present disclosure.
Fig. 2 is a flowchart illustrating a multimedia data processing method according to an exemplary embodiment. As shown in fig. 2, the method is performed by a terminal and includes the following steps.
In step 201, the terminal obtains motion degree information of the multimedia data based on inter-frame motion information of the multimedia data, wherein the inter-frame motion information is used for indicating motion change between any two frames.
In step 202, the terminal determines a target audio data type whose music tempo matches the motion degree information of the multimedia data based on the motion degree information and a plurality of audio data types.
In step 203, the terminal determines the audio file corresponding to the target audio data type as the target audio data of the multimedia data.
According to the method provided by the embodiment of the disclosure, motion degree information of multimedia data is acquired based on inter-frame motion information of the multimedia data, a target audio data type is determined based on the motion degree information and a plurality of audio data types, and finally an audio file corresponding to the target audio data type is determined as the target audio data of the multimedia data. In the multimedia data processing method, the inter-frame motion information is obtained in a simple process, and the motion degree information is calculated in a simple process, so that the time cost in the multimedia data processing process is saved, the multimedia data processing efficiency is improved, and the audio-visual experience of a user is improved.
Optionally, the determining the target audio data type based on the motion level information and a plurality of audio data types includes:
and determining the audio data type corresponding to the music beat as the target audio data type when the matching degree of the motion degree information and the music beat accords with the target matching degree based on the motion degree information and the music beats of the plurality of audio data types.
Optionally, the determining the target audio data type based on the motion level information and a plurality of audio data types includes:
determining a motion score of the multimedia data based on the motion degree information;
and mapping the motion score in a motion score range corresponding to the plurality of audio data types to determine the target audio data type.
Optionally, after determining the target audio data type, the method further includes:
determining the proportion of picture files in the multimedia data relative to the total number of files;
when the proportion is larger than a target proportion, determining, from the plurality of audio data types, an audio data type whose music tempo is one level slower than that of the target audio data type, and adjusting the target audio data type to the newly determined audio data type;
when the proportion is less than or equal to the target proportion, determining, from the plurality of audio data types, an audio data type whose music tempo is one level faster than that of the target audio data type, and adjusting the target audio data type to the newly determined audio data type.
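The one-level tempo adjustment above can be sketched as follows. The integer type labels, the clamping at the first and last types, and the 0.5 default threshold are illustrative assumptions, not details fixed by the disclosure (the description later marks audio data types with positive integers where a larger integer means a faster tempo):

```python
def adjust_type_by_picture_ratio(target_type, num_types, picture_count, total_count,
                                 target_ratio=0.5):
    """Shift the target audio data type one tempo level based on the proportion
    of picture files in the multimedia data. Types are assumed numbered
    1..num_types, larger meaning a faster tempo; 0.5 is a hypothetical threshold."""
    proportion = picture_count / total_count
    if proportion > target_ratio:
        # Mostly still pictures: move one tempo level slower, if one exists.
        return max(1, target_type - 1)
    # Otherwise: move one tempo level faster, if one exists.
    return min(num_types, target_type + 1)
```

For instance, with five types and four pictures out of five files, a type-3 selection would be adjusted down to type 2.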
Optionally, before obtaining the motion degree information of the multimedia data based on the inter-frame motion information of the multimedia data, the method further includes:
and determining the motion information of the specified object in two adjacent image frames as the inter-frame motion information of the multimedia data based on the position of the specified object included in the image frame in the video file contained in the multimedia data.
Optionally, after the determining the audio file corresponding to the target audio data type as the target audio data of the multimedia data, the method further includes:
sending a downloading request to a target server, wherein the downloading request is used for indicating the downloading of the target audio data;
when downloading the target audio data from the target server fails, the local audio file is determined as the target audio data.
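The download-with-fallback behavior can be sketched as follows; `download_from_server` and `read_local_audio` are hypothetical callables standing in for the real network request and local file access:

```python
def fetch_target_audio(download_from_server, read_local_audio):
    """Try to download the target audio data from the target server; when the
    download fails, fall back to a local audio file as the target audio data."""
    try:
        return download_from_server()
    except Exception:
        # Download failed: determine a local audio file as the target audio data.
        return read_local_audio()
```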
Fig. 3 is a flowchart illustrating a multimedia data processing method according to an exemplary embodiment. As shown in fig. 3, the method is performed by a terminal and includes the following steps.
In step 301, the terminal determines the motion information of a specified object in two adjacent image frames as the inter-frame motion information of the multimedia data based on the position of the specified object included in the image frame in the video file included in the multimedia data.
The multimedia data may be data obtained by mixing and editing picture files and video files, or by editing video files alone. The multimedia data may be data that has not yet been exported to a video format, so that compression and other processing can be applied after further editing.
The specified object may be an object capable of representing the content that the multimedia data is meant to express, and may be at least one of a person, an animal, and a thing. The number of specified objects may be one or more.
The determination method of the designated target object may be: the terminal determines the specified object based on at least one of the frequency of appearance, the time length of appearance, the position displayed on the screen, the proportion of the display size to the screen, and the like of the object in the video file of the multimedia data in the image frame.
The process of determining the specified object based on different factors may include at least one of the following:
Perform target detection on the image frames of the video file, determine the number of times each object appears in the image frames, and determine the object whose appearance count meets a target condition, for example the largest count, as the specified object. For example, if the content of the video file is a person running and the person appears most often in the image frames, the person is taken as the specified object.
Perform target detection on the image frames of the video file, determine how long each object appears in the image frames, and determine the object whose appearance duration meets a target condition, for example the longest duration, as the specified object. For example, if the content of the video file is a person running and the person appears for the longest time in the image frames, the person is taken as the specified object.
Perform target detection on the image frames of the video file, determine the position at which each object is displayed on the screen, and determine the object whose on-screen position meets a target condition, for example being in the middle of the screen, as the specified object. For example, if the content of the video file is a person running and the person is displayed at the center of the screen, the person is taken as the specified object.
Perform target detection on the image frames of the video file, determine the proportion of the screen occupied by each object's display size, and determine the object whose screen proportion meets a target condition, for example the largest proportion, as the specified object. For example, if the content of the video file is a person running and the person's display size occupies 80% of the screen, the largest proportion, the person is taken as the specified object.
It should be noted that when the specified object is determined based on multiple factors, the determination may be made based on a weight for each factor. For example, suppose the weight of the on-screen position is 0.6, the weight of the appearance count is 0.3, and the weight of the appearance duration is 0.1, and the content shows a person running with a tree in the background. Even if the tree's appearance count and duration in the image frames are greater than the person's, the person is displayed at the center of the screen while the tree is displayed on the left, so based on the weights of the three factors the person may be taken as the specified object.
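The weighted multi-factor selection can be sketched as follows; representing each factor as a score normalized to [0, 1] is an illustrative assumption:

```python
def pick_specified_object(candidates, weights):
    """candidates maps each object name to its per-factor scores, normalized to
    [0, 1]; weights maps each factor name to its weight. Returns the object with
    the highest weighted sum of factor scores."""
    def weighted(scores):
        return sum(weights[factor] * scores.get(factor, 0.0) for factor in weights)
    return max(candidates, key=lambda name: weighted(candidates[name]))
```

In the person-vs-tree example, the person's centered position outweighs the tree's higher appearance count and duration, so the person is selected.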
In addition, the specified object may be determined based on the degree of coincidence between the object and the title, for example, if the title of the video file is "run", the content is that a person is running, and the person may be determined as the specified object.
In one possible implementation of the process involved in step 301, the inter-frame motion vector between two adjacent image frames is determined by: the terminal determines the position coordinates of the specified object in a first frame of a video file of multimedia data, then determines the position coordinates of the specified object in a second frame adjacent to the first frame, and obtains an inter-frame motion vector of the specified object pointing to the second frame from the first frame based on the position coordinates of the specified object in the first frame and the second frame, namely the inter-frame motion vector between the two adjacent image frames. An inter-frame motion vector can be determined between every two adjacent image frames in the video file, and all the inter-frame motion vectors are determined as the inter-frame motion information of the multimedia data. The inter-frame motion information is used for indicating the motion change situation between any two frames.
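The vector computation above can be sketched as follows, assuming the position coordinates of the specified object have already been found in each frame:

```python
def inter_frame_motion_vectors(positions):
    """positions: (x, y) coordinates of the specified object, one per image
    frame, in temporal order. Each returned vector points from a frame to the
    adjacent next frame; together they form the inter-frame motion information."""
    return [(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(positions, positions[1:])]
```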
The position coordinates of the specified object in two adjacent image frames mentioned above may be determined with an x265 motion vector estimation algorithm, a core algorithm of the video encoding and decoding process. It divides the specified object into a plurality of blocks or macroblocks and searches for the position coordinates of each block or macroblock in the adjacent frame, that is, the position coordinates of the specified object in the adjacent frame.
Specifically, in the above process the inter-frame motion vector may be obtained by matching pixels in adjacent frames: a target region is determined around a given pixel in the first frame, and within the corresponding region of the second frame a pixel whose similarity to the first-frame pixel reaches a target similarity is searched for. This completes one match, and the inter-frame motion vector is determined from the position of the similar pixel in the adjacent frame.
In a possible implementation manner, when the number of the designated objects included in the multimedia data is multiple, the terminal determines inter-frame motion vectors of each designated object, determines an average value of the inter-frame motion vectors of all the designated objects as an average motion vector, and determines the average motion vector as inter-frame motion information of the multimedia data. For example, if the content of the video file is that a plurality of vehicles are traveling, the number of occurrences, the length of occurrences, and the display size of the plurality of vehicles are all the same, and the display positions are arranged in a line in the middle of the screen, the plurality of vehicles may be determined as the specified object, the inter-frame motion vector of each vehicle may be determined separately, the average value of the inter-frame motion vectors of all the vehicles may be determined as the average motion vector, and the average motion vector may be determined as the inter-frame motion information of the multimedia data. It should be noted that, by determining an average motion vector to replace a plurality of inter-frame motion vectors, the calculation amount of the subsequent motion level information determination process can be reduced, the calculation time can be shortened, and the multimedia data processing efficiency can be improved.
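The averaging step can be sketched as:

```python
def average_motion_vector(vectors):
    """Collapse the per-object inter-frame motion vectors into one average
    vector, so later steps process a single sequence instead of one per object,
    reducing the computation of the motion degree determination."""
    n = len(vectors)
    return (sum(x for x, _ in vectors) / n, sum(y for _, y in vectors) / n)
```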
It should be noted that step 301 determines the inter-frame motion information of the multimedia data by first determining the specified object it contains. This keeps the inter-frame motion information consistent with the content of the multimedia data, makes the target audio data acquired in the subsequent process more suitable, and improves the audio-visual experience of the user.
In step 302, the terminal acquires motion level information of the multimedia data based on inter-frame motion information of the multimedia data.
The motion degree information indicates the motion intensity of the specified object in the picture, reflecting how the object's movement changes over time within a unit time length. One possible implementation of step 302 is: the terminal calculates the module length of each inter-frame motion vector in the inter-frame motion information and the x/y components of that module length, arranges them in time order to obtain a sequence, calculates the mean module length of the sequence and the dispersion of the sequence about its mean vector, performs a weighted sum of the two statistics, and takes the result as the motion degree information.
For example, denote the inter-frame motion information as BMV. The feature of each inter-frame motion vector in BMV consists of three values: its module length and the x/y components of that module length. The module length is BMV_norm = |BMV|, where |·| denotes the vector module length. The x/y components are BMV_x = BMV_norm · cos(Theta) and BMV_y = BMV_norm · sin(Theta), where Theta is the angle of the inter-frame motion vector, calculated as Theta = Atan(BMV_y / BMV_x), Atan being the arctangent function. After obtaining the features (BMV_norm, BMV_x, BMV_y), they are sorted in the time order of the frames and recorded as a sequence vec. Two statistics are then computed over the sequence: score_amplitude = average(|vec_t|), the mean module length, and score_shake = average(|vec_t − vec_mean|), where vec_t is the t-th vector of the sequence and vec_mean is the average of all vectors. score_amplitude and score_shake may be divided by 7.5 and 45.0, respectively, for normalization, and the motion degree information of the video is score_motion = w1 · score_amplitude + w2 · score_shake, where w1 and w2 are parameters that may be taken as w1 = 0.5 and w2 = 0.5. It should be noted that the normalization of score_amplitude and score_shake may be omitted, in which case the target matching degree used in the subsequent matching between the music tempo and the motion degree information is changed accordingly.
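A sketch of the motion degree computation reconstructed from the description above. Treating score_shake as the mean deviation of each vector from the average vector is an assumption about the garbled original, and 7.5 and 45.0 are the normalizers stated in the text:

```python
import math

def motion_degree(vectors, w1=0.5, w2=0.5):
    """Compute score_motion from a time-ordered sequence of inter-frame motion
    vectors: a weighted sum of the normalized mean magnitude (score_amplitude)
    and the normalized mean deviation from the mean vector (score_shake)."""
    n = len(vectors)
    mean_x = sum(x for x, _ in vectors) / n
    mean_y = sum(y for _, y in vectors) / n
    # score_amplitude: average module length of the motion vectors.
    score_amplitude = sum(math.hypot(x, y) for x, y in vectors) / n
    # score_shake: average distance of each vector from the mean vector.
    score_shake = sum(math.hypot(x - mean_x, y - mean_y) for x, y in vectors) / n
    # Normalize by the constants 7.5 and 45.0 given in the description.
    return w1 * (score_amplitude / 7.5) + w2 * (score_shake / 45.0)
```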
It should be noted that, by obtaining the motion degree information of the multimedia data based on the inter-frame motion information of the multimedia data, the resource consumption of the calculation process can be reduced, the processing time of the calculation process can be shortened, and the efficiency of processing the multimedia data can be improved.
In step 303, the terminal determines a target audio data type whose music tempo matches the motion level information of the multimedia data based on the motion level information and a plurality of audio data types.
The audio data types are divisions of audio files based on how fast the music tempo is. In a possible implementation manner, the terminal may mark each audio data type with a positive integer; the larger the positive integer, the faster the music beat. For example, when the music beat of the audio files contained in an audio data type is 40-45 beats per minute, the audio data type is marked as 1, and when the music beat of the audio files contained in an audio data type is 46-51 beats per minute, the audio data type is marked as 2.
One possible implementation of the process involved in step 303 is: based on the motion degree information and the music beats of the multiple audio data types, the terminal determines as the target audio data type the audio data type whose music beat matches the motion degree information to the target matching degree. Here, the motion degree information represents how the action of the specified target object changes over time within a unit time length, and the music beat refers to the number of beats per unit time length of an audio file. For example, if the motion degree information is 42 and the music tempo of audio data type 1 is 40-45 beats per minute, the matching degree between the motion degree information and the music beat is considered to conform to the target matching degree, and the target audio data type is audio data type 1.
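A minimal sketch of this tempo matching, using the example type marks and beats-per-minute ranges from this section; the dictionary layout and function name are assumptions:

```python
# Hypothetical tempo ranges per audio data type, following the
# 40-45 / 46-51 beats-per-minute example above.
TEMPO_RANGES = {1: (40, 45), 2: (46, 51)}

def match_audio_type(motion_degree_info):
    """Return the audio data type whose tempo range contains the
    motion degree information, or None if no range matches."""
    for audio_type, (low, high) in TEMPO_RANGES.items():
        if low <= motion_degree_info <= high:
            return audio_type
    return None
```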
In one possible implementation, the process involved in step 303 may also be: determining a motion score of the multimedia data based on the motion degree information, and mapping the motion score into the motion score ranges corresponding to the plurality of audio data types to determine the target audio data type. The motion degree information may be divided into different ranges, each range corresponding to a motion score. For example, suppose multimedia data whose motion degree information is 20 to 29 is assigned a motion score of 12 points and multimedia data whose motion degree information is 30 to 39 is assigned 13 points. If the motion score of the multimedia data is 12 points, the motion score range corresponding to audio data type 1 is 5 to 10, and the motion score range corresponding to audio data type 2 is 11 to 15, then audio data type 2 is determined as the target audio data type.
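The score mapping above can be sketched as follows, using the example ranges from this paragraph; the function names and the dictionary layouts are assumptions:

```python
def motion_score(info):
    """Map motion degree information to a motion score, using the
    illustrative ranges from this section."""
    if 20 <= info <= 29:
        return 12
    if 30 <= info <= 39:
        return 13
    return None

# Hypothetical motion score ranges per audio data type.
SCORE_RANGES = {1: (5, 10), 2: (11, 15)}

def map_score_to_type(score):
    """Return the audio data type whose score range contains the score."""
    for audio_type, (low, high) in SCORE_RANGES.items():
        if low <= score <= high:
            return audio_type
    return None
```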
In a possible implementation manner, when the multimedia data includes both the video file and the picture file, the target audio data type may be adjusted based on a ratio of the picture file in the multimedia data to the total number of files, and the specific implementation manner is as follows, in steps 303A to 303C:
303a. the terminal determines the proportion of the picture files in the multimedia data to the total number of files.
The total number of files refers to the total number of the picture files and the video files in the multimedia data.
For example, if 4 picture files and 3 video files are imported into the multimedia data, the total number of files in the multimedia data is 7, and the ratio of the picture files to the total number of files is 4/7.
And 303B, when the ratio is larger than the target ratio, the terminal determines an audio data type with the music beat one level slower than the target audio data type from the plurality of audio data types, and adjusts the target audio data type to the newly determined audio data type.
A ratio larger than the target ratio indicates that the picture files account for a larger share of the multimedia data. Since the motion degree of a picture file is slower than that of a video file, the real motion degree is slower than the acquired motion degree information, and the music tempo is therefore adjusted to the audio data type one level slower than the target audio data type, i.e., the type whose beats-per-minute range is one level lower. For example, if the target audio data type is audio data type 2, whose audio files have a music tempo of 46-51 beats per minute, then the audio data type one level slower is audio data type 1, whose audio files have a music tempo of 40-45 beats per minute.
And 303C, when the ratio is less than or equal to the target ratio, the terminal determines an audio data type with a music beat one level faster than the target audio data type from the plurality of audio data types, and adjusts the target audio data type to the newly determined audio data type.
A ratio less than or equal to the target ratio indicates that the video files account for a larger share of the multimedia data. Since the motion degree of a video file is more intense than that of a picture file, the real motion degree is more intense than the acquired motion degree information, and the music tempo is therefore adjusted to the audio data type one level faster than the target audio data type, i.e., the type whose beats-per-minute range is one level higher. For example, if the target audio data type is audio data type 1, whose audio files have a music tempo of 40-45 beats per minute, then the audio data type one level faster is audio data type 2, whose audio files have a music tempo of 46-51 beats per minute.
In addition, before the adjustment process of the above steps 303A to 303C is performed, whether to adjust the target audio data type may be decided by generating a random number. The specific implementation is as follows: the terminal compares the generated random number with a target threshold, performs the adjustment process of steps 303B to 303C when the random number is smaller than the target threshold, and does not perform it when the random number is greater than or equal to the target threshold. Using a random number to decide whether to perform the adjustment saves computing resources and improves the efficiency of the multimedia processing process.
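Steps 303A to 303C, together with the random-number gate, can be sketched as follows; the target ratio, the threshold, and the bounds on the type marks are illustrative assumptions, not values fixed by this section:

```python
import random

def adjust_audio_type(target_type, n_pictures, n_videos,
                      target_ratio=0.5, threshold=0.5,
                      slowest=1, fastest=2):
    """Optionally shift the target audio data type one tempo level,
    gated by a random number as described above."""
    # Random-number gate: skip the adjustment entirely when the
    # generated number is at or above the target threshold.
    if random.random() >= threshold:
        return target_type
    # Step 303A: proportion of picture files in the total file count.
    ratio = n_pictures / (n_pictures + n_videos)
    if ratio > target_ratio:
        # Step 303B: more pictures -> one tempo level slower.
        return max(slowest, target_type - 1)
    # Step 303C: more videos -> one tempo level faster.
    return min(fastest, target_type + 1)
```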
It should be noted that the target audio data type is determined by matching the music tempo with the motion degree information of the multimedia data or based on the motion score, so as to improve the matching degree between the subsequently determined target audio data and the multimedia data.
In step 304, the terminal determines the audio file corresponding to the target audio data type as the target audio data of the multimedia data.
One possible implementation of the process involved in step 304 is: the terminal assigns each audio file in the target audio data type an equal selection probability based on the number of audio files the type contains. For example, when the target audio data type contains 5 audio files, the probability that each audio file is selected is set to 20%. One audio file is then randomly selected in equal proportion from the target audio data type, and that audio file is determined as the target audio data of the multimedia data. Randomly selecting audio files in equal proportion is simple, saves computing resources, and improves the efficiency of processing the multimedia data.
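The equal-proportion random selection can be sketched in one line; with 5 audio files, each is selected with probability 1/5:

```python
import random

def pick_audio_file(files):
    """Pick one audio file uniformly at random, giving every file in
    the target audio data type the same selection probability."""
    return random.choice(files)
```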
In a possible implementation manner, the specific implementation manner of the process involved in step 304 may further be: the terminal obtains at least one of the download count and the comment information of each audio file in the target audio data type, and determines as the target audio data the audio file whose download count meets a target condition and/or whose number of positive evaluations in the comment information meets the target condition. In this way, the determined target audio file conforms to the current popular trend and meets the user's requirements more accurately.
In a possible implementation manner, the specific implementation manner of the process involved in step 304 may also be: the terminal acquires the duration of each audio file in the target audio data type, determines the deviation value between the duration of each audio file and the duration of the multimedia data, and determines an audio file whose deviation value is within the target deviation range as the target audio data. When the duration of the audio file is equal to the duration of the multimedia data, the audio file may be determined as the target audio data; when the duration of the audio file is greater than the duration of the multimedia data, the deviation value is positive; and when the duration of the audio file is less than the duration of the multimedia data, the deviation value is negative. The target deviation range may be ±5 seconds, which is not limited by the present disclosure. It should be noted that, when the deviation values between the duration of the multimedia data and the durations of all single audio files exceed the target deviation range, the deviation value between the sum of the durations of multiple audio files and the duration of the multimedia data may be obtained, and the multiple audio files whose combined deviation value is within the target deviation range are determined as the target audio data; and when the deviation values between the duration of the multimedia data and the durations of all the audio files fall below the target deviation range, the audio file with the shortest duration in the audio data type can be used as the target audio file. In this way, the target audio file is determined according to the durations of the multimedia data and the audio files, so that the determined target audio file fits the multimedia data better, the user is spared a second selection, the user's time is saved, and user satisfaction is improved.
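The duration-based selection for a single audio file can be sketched as follows; the (name, duration) representation and the ±5-second default tolerance are assumptions, and the multi-file combination fallback is omitted for brevity:

```python
def pick_by_duration(files, media_duration, tolerance=5.0):
    """Return the audio file whose duration deviates least from the
    multimedia duration, provided the deviation is within the
    target deviation range (±tolerance seconds); None otherwise."""
    best = None
    for name, duration in files:
        deviation = duration - media_duration  # signed offset value
        if abs(deviation) <= tolerance:
            if best is None or abs(deviation) < abs(best[1]):
                best = (name, deviation)
    return best[0] if best else None
```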
According to the method provided by the embodiment of the disclosure, motion degree information of multimedia data is acquired based on inter-frame motion information of the multimedia data, a target audio data type is determined based on the motion degree information and a plurality of audio data types, and finally an audio file corresponding to the target audio data type is determined as the target audio data of the multimedia data. In the multimedia data processing method, the inter-frame motion information is obtained in a simple process, and the motion degree information is calculated in a simple process, so that the time cost in the multimedia data processing process is saved, the multimedia data processing efficiency is improved, and the audio-visual experience of a user is improved.
In step 305, the terminal transmits a download request to the target server, the download request indicating downloading of the target audio data.
Wherein, the download request may carry an audio identifier of the target audio data. And when the server receives the downloading request, the target audio data is sent to the terminal, and the downloading of the target audio data is completed.
In a possible implementation manner, after the target audio data is downloaded, the terminal may add a sound track to the multimedia data, import the target audio data into the sound track, export mixed multimedia data containing the target audio data, display the mixed multimedia data on the terminal, and wait for a user to select a next operation. In addition, if the user is not satisfied with the target audio data in the mixed multimedia data, the process of determining the target audio data in step 304 may be performed again, or the audio file included in the target audio type may be displayed on the terminal, and the user may perform trial listening and select a suitable audio file as the target audio data, which is not limited in this disclosure. By the method, when the user is not satisfied with the automatically selected target audio data, the user can reselect, different preferences of different users can be met, and user experience is improved.
In one possible implementation, when downloading the target audio data from the target server fails, the terminal determines a local audio file as the target audio data. The specific implementation manner is as follows: the target audio data may be determined by randomly selecting, in equal proportion, from the local audio files, or the local audio files may be displayed on the terminal so that the user can listen to them and select a suitable local audio file as the target audio data, which is not limited in this disclosure.
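The download-with-fallback behavior can be sketched as follows; the callable interface is a hypothetical stand-in for the terminal's download request:

```python
import random

def get_target_audio(download, local_files):
    """Try to download the target audio data; on failure, fall back to
    a randomly selected local audio file (equal probability each)."""
    try:
        return download()
    except Exception:
        # Download failed: determine a local audio file as the target.
        return random.choice(local_files)
```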
It should be noted that, by determining a local audio file as the target audio data when downloading from the target server fails, the situation in which the target audio data cannot be obtained from the server is avoided, and user satisfaction is improved.
Fig. 4 is a block diagram illustrating a multimedia data processing apparatus according to an example embodiment. Referring to fig. 4, the apparatus includes a motion degree information acquisition unit 401, a target audio data type determination unit 402, and a target audio data determination unit 403.
A motion degree information obtaining unit 401 configured to obtain motion degree information of the multimedia data based on inter-frame motion information of the multimedia data, the inter-frame motion information being used for representing motion change conditions between any two frames;
a target audio data type determining unit 402 configured to perform determining a target audio data type whose music tempo matches the motion degree information of the multimedia data based on the motion degree information and a plurality of audio data types;
a target audio data determining unit 403 configured to determine an audio file corresponding to the target audio data type as target audio data of the multimedia data.
In one possible implementation, the target audio data type determination unit is configured to perform:
and determining the audio data type corresponding to the music beat as the target audio data type when the matching degree of the motion degree information and the music beat accords with the target matching degree based on the motion degree information and the music beats of the plurality of audio data types.
In one possible implementation, the apparatus further includes:
a mapping unit configured to perform determining a motion score of the multimedia data based on the motion degree information; and mapping the motion score in a motion score range corresponding to the plurality of audio data types to determine the target audio data type.
In one possible implementation, the apparatus further includes:
the adjusting unit is configured to determine the proportion of the picture files in the multimedia data to the total number of the files; when the ratio is larger than a target ratio, determining an audio data type with a music beat one level slower than the target audio data type from the plurality of audio data types, and adjusting the target audio data type to the newly determined audio data type; when the first ratio is less than or equal to a target ratio, an audio data type having a music tempo one level faster than the target audio data type is determined from the plurality of audio data types, and the target audio data type is adjusted to the newly determined audio data type.
In one possible implementation, the apparatus further includes:
and the inter-frame motion information determining unit is configured to determine the motion information of the specified object in two adjacent image frames as the inter-frame motion information of the multimedia data based on the position of the specified object included in the image frame in the video file contained in the multimedia data.
In one possible implementation, the apparatus further includes:
a downloading unit configured to perform sending a download request to a target server, the download request being for instructing downloading of the target audio data; when downloading the target audio data from the target server fails, the local audio file is determined as the target audio data.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 5 is a schematic diagram illustrating a structure of a terminal according to an exemplary embodiment. The terminal 500 may be any mobile terminal, for example a smart phone, a tablet computer, a Moving Picture Experts Group Audio Layer III (MP3) player, a Moving Picture Experts Group Audio Layer IV (MP4) player, a notebook computer, or a desktop computer. Terminal 500 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
In general, the terminal 500 includes: a processor 501 and a memory 502.
The processor 501 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 501 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 501 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 501 may be integrated with a Graphics Processing Unit (GPU) which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, processor 501 may also include an Artificial Intelligence (AI) processor for processing computational operations related to machine learning.
Memory 502 may include one or more computer-readable storage media, which may be non-transitory. Memory 502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 502 is used to store at least one program code for execution by the processor 501 to implement the multimedia data processing method provided by the method embodiments in the present disclosure.
In some embodiments, the terminal 500 may further optionally include: a peripheral interface 503 and at least one peripheral. The processor 501, memory 502 and peripheral interface 503 may be connected by a bus or signal lines. Each peripheral may be connected to the peripheral interface 503 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a communication circuit 504, a display screen 505, and an audio circuit 506.
The peripheral interface 503 may be used to connect at least one Input/Output (I/O) related peripheral to the processor 501 and the memory 502. In some embodiments, the processor 501, memory 502, and peripheral interface 503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 501, the memory 502, and the peripheral interface 503 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The communication circuit 504 is used for receiving and transmitting Radio Frequency (RF) signals, also called electromagnetic signals. The communication circuit 504 communicates with a communication network and other communication devices via electromagnetic signals. The communication circuit 504 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the communication circuit 504 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The communication circuit 504 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or Wireless Fidelity (WiFi) networks. In some embodiments, the Communication circuit 504 may further include Near Field Communication (NFC) related circuits, which are not limited by this disclosure.
The display screen 505 is used to display a User Interface (UI). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 505 is a touch display screen, the display screen 505 also has the ability to capture touch signals on or over the surface of the display screen 505. The touch signal may be input to the processor 501 as a control signal for processing. At this point, the display screen 505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 505 may be one, providing the front panel of the terminal 500; in other embodiments, the display screens 505 may be at least two, respectively disposed on different surfaces of the terminal 500 or in a folded design; in still other embodiments, the display 505 may be a flexible display disposed on a curved surface or on a folded surface of the terminal 500. Even more, the display screen 505 can be arranged in a non-rectangular irregular figure, i.e. a shaped screen. The Display screen 505 may be made of Liquid Crystal Display (LCD), Organic Light-Emitting Diode (OLED), or the like.
The audio circuitry 506 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 501 for processing, or inputting the electric signals to the communication circuit 504 to realize voice communication. The microphones may be plural for stereo sound collection or noise reduction purposes, may be respectively provided at different portions of the terminal 500 or, when the terminal is an in-vehicle terminal, the microphones thereof may be arranged at different portions of the vehicle. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 501 or the communication circuit 504 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 506 may also include a headphone jack.
The terminal 500 may be powered by an onboard power source. Those skilled in the art will appreciate that the configuration shown in Fig. 5 is not intended to limit the terminal 500, which may include more or fewer components than those shown, combine some components, or use a different arrangement of components.
Fig. 6 is a schematic structural diagram of a server according to an exemplary embodiment, where the server 600 may generate a relatively large difference due to different configurations or performances, and may include one or more processors (CPUs) 601 and one or more memories 602, where at least one instruction is stored in the one or more memories 602, and the at least one instruction is loaded and executed by the one or more processors 601 to implement the methods provided by the method embodiments. Of course, the server 600 may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface, so as to perform input and output, and the server 600 may also include other components for implementing the functions of the device, which is not described herein again.
In an exemplary embodiment, there is also provided a computer-readable storage medium, such as a memory, including program code, which is executable by a processor in a terminal to perform the multimedia data processing method in the above-described embodiments. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method for processing multimedia data, comprising:
determining motion information of a specified object in two adjacent image frames as inter-frame motion information of the multimedia data based on the position of the specified object included in the image frame in a video file contained in the multimedia data, wherein the inter-frame motion information is used for representing the motion change condition between any two adjacent frames;
acquiring motion degree information of the multimedia data based on the inter-frame motion information of the multimedia data;
determining a target audio data type based on the motion degree information and a plurality of audio data types, wherein the music beat of the target audio data type is matched with the motion degree information of the multimedia data;
determining the proportion of the picture files in the multimedia data to the total number of the files;
when the ratio is larger than a target ratio, determining an audio data type with a music beat one level slower than the target audio data type from the plurality of audio data types, and adjusting the target audio data type to the newly determined audio data type;
when the proportion is smaller than or equal to a target proportion, determining an audio data type with a music beat one level faster than the target audio data type from the plurality of audio data types, and adjusting the target audio data type to the newly determined audio data type;
and determining the audio file corresponding to the adjusted target audio data type as the target audio data of the multimedia data.
2. The method of claim 1, wherein determining a target audio data type based on the motion level information and a plurality of audio data types comprises:
and determining the audio data type corresponding to the music beat as a target audio data type when the matching degree of the motion degree information and the music beat accords with the target matching degree based on the motion degree information and the music beats of the plurality of audio data types.
3. The method of claim 1, wherein determining a target audio data type based on the motion level information and a plurality of audio data types comprises:
determining a motion score of the multimedia data based on the motion degree information;
and mapping the motion scores in the motion score range corresponding to the plurality of audio data types to determine the target audio data type.
4. The method according to claim 1, wherein after determining the audio file corresponding to the adjusted target audio data type as the target audio data of the multimedia data, the method further comprises:
sending a downloading request to a target server, wherein the downloading request is used for indicating the downloading of the target audio data;
and when the target audio data fails to be downloaded from the target server, determining a local audio file as the target audio data.
5. A multimedia data processing apparatus, comprising:
the inter-frame motion information determining unit is configured to determine motion information of a specified object in two adjacent image frames as inter-frame motion information of the multimedia data based on the position of the specified object included in the image frame in a video file contained in the multimedia data, wherein the inter-frame motion information is used for representing motion change conditions between any two adjacent frames;
a motion degree information acquisition unit configured to perform acquiring motion degree information of the multimedia data based on inter-frame motion information of the multimedia data;
a target audio data type determination unit configured to perform determination of a target audio data type whose music tempo matches the motion degree information of the multimedia data based on the motion degree information and a plurality of audio data types;
the adjusting unit is configured to determine the proportion of the picture files in the multimedia data to the total number of the files; when the ratio is larger than a target ratio, determining an audio data type with a music beat one level slower than the target audio data type from the plurality of audio data types, and adjusting the target audio data type to the newly determined audio data type; when the proportion is smaller than or equal to a target proportion, determining an audio data type with a music beat one level faster than the target audio data type from the plurality of audio data types, and adjusting the target audio data type to the newly determined audio data type;
and the target audio data determining unit is configured to determine the audio file corresponding to the adjusted target audio data type as the target audio data of the multimedia data.
6. The apparatus according to claim 5, wherein the target audio data type determination unit is configured to perform:
and determining the audio data type corresponding to the music beat as a target audio data type when the matching degree of the motion degree information and the music beat accords with the target matching degree based on the motion degree information and the music beats of the plurality of audio data types.
7. The apparatus of claim 5, further comprising:
a mapping unit configured to determine a motion score of the multimedia data based on the motion degree information, and to map the motion score into the motion score ranges corresponding to the plurality of audio data types to determine the target audio data type.
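The mapping unit of claim 7 can be pictured as an interval lookup: each audio data type owns a motion score range, and the computed score selects the type whose range contains it. The ranges below are illustrative assumptions:

```python
# Hypothetical [low, high) motion score range per audio data type.
SCORE_RANGES = {
    "slow":   (0.0, 30.0),
    "medium": (30.0, 70.0),
    "fast":   (70.0, 100.0),
}

def map_score_to_type(score):
    """Map a motion score into the score ranges to pick the target audio
    data type, clamping scores that fall outside every range."""
    for audio_type, (low, high) in SCORE_RANGES.items():
        if low <= score < high:
            return audio_type
    return "fast" if score >= 100.0 else "slow"
```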
8. The apparatus of claim 5, further comprising:
a downloading unit configured to send a download request to a target server, the download request instructing download of the target audio data, and, when downloading the target audio data from the target server fails, to determine a local audio file as the target audio data.
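The downloading unit of claim 8 is a download-with-fallback pattern: request the target audio from the server, and fall back to a local audio file if the download fails. A sketch using Python's standard library; the URL and file paths are hypothetical:

```python
import urllib.request

def fetch_target_audio(url, local_fallback, timeout=5.0):
    """Return the target audio bytes from the server, or the bytes of a
    local audio file if the download fails (claim-8 fallback)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()
    except OSError:  # urllib.error.URLError subclasses OSError
        with open(local_fallback, "rb") as f:
            return f.read()
```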
9. A terminal, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the multimedia data processing method of any one of claims 1 to 4.
10. A storage medium in which instructions, when executed by a processor of a terminal, enable the terminal to perform the multimedia data processing method of any one of claims 1 to 4.
CN201910785581.7A 2019-08-23 2019-08-23 Multimedia data processing method, device, terminal and storage medium Active CN110489572B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910785581.7A CN110489572B (en) 2019-08-23 2019-08-23 Multimedia data processing method, device, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910785581.7A CN110489572B (en) 2019-08-23 2019-08-23 Multimedia data processing method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN110489572A CN110489572A (en) 2019-11-22
CN110489572B true CN110489572B (en) 2021-10-08

Family

ID=68553452

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910785581.7A Active CN110489572B (en) 2019-08-23 2019-08-23 Multimedia data processing method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN110489572B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106503034A (en) * 2016-09-14 2017-03-15 厦门幻世网络科技有限公司 A kind of method and device for motion picture soundtrack
CN108805171A (en) * 2018-05-07 2018-11-13 广东数相智能科技有限公司 Image is to the conversion method of music rhythm, device and computer readable storage medium
CN109309862A (en) * 2018-07-26 2019-02-05 浠诲嘲 Multi-medium data editing system

Also Published As

Publication number Publication date
CN110489572A (en) 2019-11-22

Similar Documents

Publication Publication Date Title
CN109166593B (en) Audio data processing method, device and storage medium
WO2019114514A1 (en) Method and apparatus for displaying pitch information in live broadcast room, and storage medium
CN111031386B (en) Video dubbing method and device based on voice synthesis, computer equipment and medium
CN109784351B (en) Behavior data classification method and device and classification model training method and device
CN110992963B (en) Network communication method, device, computer equipment and storage medium
JP7361890B2 (en) Call methods, call devices, call systems, servers and computer programs
CN109640125A (en) Video content processing method, device, server and storage medium
CN111445901A (en) Audio data acquisition method and device, electronic equipment and storage medium
WO2021114808A1 (en) Audio processing method and apparatus, electronic device and storage medium
CN110750734A (en) Weather display method and device, computer equipment and computer-readable storage medium
CN112581358A (en) Training method of image processing model, image processing method and device
CN114996168A (en) Multi-device cooperative test method, test device and readable storage medium
CN111432245A (en) Multimedia information playing control method, device, equipment and storage medium
CN110493635A (en) Video broadcasting method, device and terminal
CN113344776A (en) Image processing method, model training method, device, electronic device and medium
CN113038232A (en) Video playing method, device, equipment, server and storage medium
CN112133319A (en) Audio generation method, device, equipment and storage medium
CN110489572B (en) Multimedia data processing method, device, terminal and storage medium
CN113763932B (en) Speech processing method, device, computer equipment and storage medium
CN115086888B (en) Message notification method and device and electronic equipment
CN112151017B (en) Voice processing method, device, system, equipment and storage medium
CN115665504A (en) Event identification method and device, electronic equipment and storage medium
CN114328815A (en) Text mapping model processing method and device, computer equipment and storage medium
CN114332709A (en) Video processing method, video processing device, storage medium and electronic equipment
CN111062709B (en) Resource transfer mode recommendation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant