CN110570841A - Multimedia playing interface processing method, device, client and medium

Multimedia playing interface processing method, device, client and medium

Info

Publication number
CN110570841A
CN110570841A (application CN201910868451.XA)
Authority
CN
China
Prior art keywords
data
target
interface
feature
playing interface
Legal status
Pending
Application number
CN201910868451.XA
Other languages
Chinese (zh)
Inventor
黄小凤
曹超利
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201910868451.XA
Publication of CN110570841A

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/02 - Feature extraction for speech recognition; Selection of recognition unit
    • G10L 15/26 - Speech to text systems

Abstract

The embodiment of the invention discloses a method, an apparatus, a client and a medium for processing a multimedia playing interface. The method includes: acquiring playing data from a multimedia playing interface, and determining voice data to be analyzed from the playing data; performing voice recognition processing on the voice data to be analyzed, and extracting a feature data set of the voice data to be analyzed; determining target feature data from the feature data set, and determining a target decoration element corresponding to the target feature data; and decorating the multimedia playing interface based on the target decoration element. This can improve the update efficiency of the multimedia playing interface and make the interface more interesting for the watching user.

Description

Multimedia playing interface processing method, device, client and medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a client, and a medium for processing a multimedia playing interface.
Background
With the continuous development of client technology, various kinds of application software are being developed to enrich users' daily lives. Live-broadcast software, for example, allows users to interact with a live-broadcast user in real time while watching, which has greatly changed the way users watch live broadcasts. Meanwhile, speech recognition technology has matured with the development of Internet technology, and current speech recognition technology can effectively recognize volume characteristics and timbre characteristics in speech data.
At present, however, speech recognition is not widely applied and its potential is not fully exploited: existing clients cannot present different animation effects on the user interface according to differences in the feature data of the speech data. Moreover, with the continuous development of live-broadcast technology, how to adjust the playing interface so as to attract more users to watch live broadcasts has become a current research focus.
Disclosure of Invention
The embodiment of the invention provides a method, an apparatus, a client and a medium for processing a multimedia playing interface, which can improve the update efficiency of the multimedia playing interface and, at the same time, make the multimedia playing interface more interesting for the user to watch.
In one aspect, an embodiment of the present invention provides a method for processing a multimedia playing interface, where the method includes:
acquiring playing data from a multimedia playing interface, and determining voice data to be analyzed from the playing data;
performing voice recognition processing on the voice data to be analyzed, and extracting a feature data set of the voice data to be analyzed;
determining target feature data from the feature data set, and determining a target decoration element corresponding to the target feature data;
and performing decoration processing on the multimedia playing interface based on the target decoration element.
In another aspect, an embodiment of the present invention provides an apparatus for processing a multimedia playing interface, where the apparatus includes:
an acquisition unit, configured to acquire playing data from a multimedia playing interface;
a determining unit, configured to determine voice data to be analyzed from the playing data;
a recognition unit, configured to perform voice recognition processing on the voice data to be analyzed;
an extraction unit, configured to extract a feature data set of the voice data to be analyzed;
the determining unit being further configured to determine target feature data from the feature data set and to determine a target decoration element corresponding to the target feature data;
and a processing unit, configured to perform decoration processing on the multimedia playing interface based on the target decoration element.
In another aspect, an embodiment of the present invention provides a client, including a processor, an input device, an output device and a memory, which are connected to one another, where the memory is used to store a computer program supporting the terminal in performing the foregoing method, the computer program including program instructions, and the processor is configured to call the program instructions and perform the following steps:
acquiring playing data from a multimedia playing interface, and determining voice data to be analyzed from the playing data;
performing voice recognition processing on the voice data to be analyzed, and extracting a feature data set of the voice data to be analyzed;
determining target feature data from the feature data set, and determining a target decoration element corresponding to the target feature data;
and performing decoration processing on the multimedia playing interface based on the target decoration element.
In still another aspect, an embodiment of the present invention provides a computer-readable storage medium storing computer program instructions which, when executed by a processor, perform the processing method of the multimedia playing interface according to the first aspect.
In the embodiment of the present invention, after the client acquires playing data from the multimedia playing interface while a live-broadcast user is broadcasting, it can determine voice data to be analyzed from the playing data, perform voice recognition processing on that voice data, and extract a feature data set from it. After determining target feature data from the feature data set, the client can determine a target decoration element for decorating the multimedia playing interface based on the target feature data and decorate the interface accordingly. Because the decoration follows changes in the voice data, the client can update the multimedia playing interface in real time, which improves the update efficiency of the interface, makes the playing interface more interesting, and enhances the viewing experience of the multimedia playing interface.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1a is a schematic flowchart of a processing method of a multimedia playing interface according to an embodiment of the present invention;
Fig. 1b is a schematic diagram of a processing method of a multimedia playing interface according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a processing method of a multimedia playing interface according to another embodiment of the present invention;
Fig. 3a is a schematic diagram of processing an audio playing interface according to an embodiment of the present invention;
Fig. 3b is a schematic view of a decoration image provided by an embodiment of the present invention;
Fig. 3c is a schematic diagram of an audio playing interface according to another embodiment of the present invention;
Fig. 4 is a schematic flowchart of a processing method of a multimedia playing interface according to another embodiment of the present invention;
Fig. 5 is a schematic flowchart of a processing method of a multimedia playing interface according to another embodiment of the present invention;
Fig. 6 is a schematic diagram of processing a video playing interface according to an embodiment of the present invention;
Fig. 7a is a diagram of a playlist interface according to an embodiment of the present invention;
Fig. 7b is a diagram of a playlist interface according to another embodiment of the present invention;
Fig. 7c is a diagram of an information query interface according to an embodiment of the present invention;
Fig. 8 is a schematic block diagram of a processing apparatus of a multimedia playing interface according to an embodiment of the present invention;
Fig. 9 is a schematic block diagram of a client according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a processing method for a multimedia playing interface that can flexibly dress up the interface on which multimedia playing data is shown, thereby enriching the playing experience of the user watching the data. The multimedia playing interface may be a live playing interface or a recorded playing interface (i.e. a playback interface), and includes an audio playing interface for playing audio, a video playing interface for playing video, and the like. When the multimedia playing interface is an audio playing interface, it contains a user icon of the live-broadcast user, the live-broadcast user being the user performing the audio live broadcast, and the user icon may be, for example, the live-broadcast user's avatar icon. When the multimedia playing data is shown on a video playing interface, the corresponding video playing interface contains the live-broadcast user (or anchor). In the embodiments of the present invention, the multimedia playing interface is mainly described in detail as a live interface (an interface for audio live broadcasting or video live broadcasting); when the playing interface is a recorded playing interface or another playing interface, reference may be made to the embodiments of the present invention. In one embodiment, the processing method of the multimedia playing interface may be applied to a client, where the client may be an intelligent terminal or an application program (APP) running in the intelligent terminal; an APP is software installed in the intelligent terminal to remedy the deficiencies of the original system and to meet personalized requirements.
In one embodiment, referring to the schematic flowchart of the processing method of the multimedia playing interface shown in Fig. 1a, the method is mainly performed based on the interaction between a live-broadcast user and the client. In the embodiments of the present invention, the live-broadcast user is described as an audio live-broadcast user; when the live-broadcast user is a video live-broadcast user, reference may be made to these embodiments. Specifically, when the live-broadcast user wants to broadcast, the user may send a start-live request to the client. After receiving the start-live request, the client may open a receiving entry for receiving voice data, such as a microphone, and receive the live-broadcast user's voice data through that entry; the voice data received by the client includes the live-broadcast user's own voice and the music played by the live-broadcast user.
After the client receives the live-broadcast user's voice data, it may perform voice recognition on the voice data to determine the voice data to be analyzed: the client matches the received voice data against a preset feature database and takes, as the voice data to be analyzed, the received voice data that has a corresponding feature type in the feature database. The client may then determine a feature data set from the voice data to be analyzed, determine target feature data from the feature data set, and, once the target feature data is determined, decorate the multimedia playing interface based on the target decoration element corresponding to the target feature data. The client determines the target decoration element from a combination of one or more of color elements, graphic elements and animation elements, so that the decoration of the multimedia playing interface based on the target decoration element is more interesting and the viewing experience of watching users can be improved.
In one embodiment, after the client determines the target feature data, it may further apply a feature mark, based on the target feature data, to the play entry corresponding to the live-broadcast user's multimedia playing interface. Based on these feature marks, other watching users can perform feature searching: a watching user may send query information to the client, and based on the feature keyword contained in the query information, the client may search the database for multimedia playing interfaces matching the keyword and display the query results to the watching user. The searching user can then open a target multimedia playing interface from the query results and watch the live broadcast, which improves query efficiency for watching users.
In one embodiment, referring to Fig. 1b, the client may determine the voice data to be analyzed based on the playing data obtained from the multimedia playing interface, and may then recognize the voice data to be analyzed using voice recognition technology to extract the corresponding feature data set. When extracting the feature data of the voice data with voice recognition technology, the voice data can be matched against a preset feature database, where the preset feature database includes one or more of a preset volume feature library, a preset timbre feature library and a preset music feature library. As shown in Fig. 1b, the voice data may be matched against the preset volume feature library, the preset timbre feature library and the preset music feature library respectively, so as to extract from the voice data the target feature data matching each library; accordingly, the target feature data is extracted from one or more of the volume feature data, the timbre feature data and the background-music feature data.
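As a rough illustration of this matching step, the sketch below builds a feature data set by checking a clip of voice data against three hypothetical preset feature libraries; the decibel ranges, feature labels and clip fields are assumptions made for the example, not values taken from the patent.

```python
# Hypothetical preset feature libraries; the dB ranges and labels are illustrative.
VOLUME_FEATURES = {
    "quiet and comfortable": (0, 40),
    "natural sound room": (40, 60),
    "anechoic": (60, 80),
    "hi full field": (80, 120),
}
TIMBRE_FEATURES = ["Roly voice", "Miss voice", "warm man voice", "Copeno voice"]
MUSIC_FEATURES = ["balladry", "jazz", "rock", "pure music"]


def extract_feature_set(voice_clip):
    """Match one clip of voice data against the preset feature libraries and
    return its feature data set (volume, timbre and music features)."""
    feature_set = {}

    # Volume: map the measured sound-pressure level onto a preset range.
    for label, (low, high) in VOLUME_FEATURES.items():
        if low <= voice_clip["volume_db"] < high:
            feature_set["volume"] = label
            break

    # Timbre / music type: taken here from an upstream recognizer; a real client
    # would run classifiers against the preset libraries instead.
    if voice_clip.get("timbre") in TIMBRE_FEATURES:
        feature_set["timbre"] = voice_clip["timbre"]
    if voice_clip.get("music_type") in MUSIC_FEATURES:
        feature_set["music"] = voice_clip["music_type"]

    return feature_set


# Example: a clip of the live user speaking over jazz background music.
print(extract_feature_set({"volume_db": 55, "timbre": "Miss voice", "music_type": "jazz"}))
```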
In one embodiment, volume, also called loudness or sound intensity, refers to the human ear's subjective perception of how loud a sound is; its objective measure is the amplitude of the sound. The perception derives from the pressure produced when an object vibrates, i.e. the sound pressure, and the vibrating object conducts its vibration energy away through different media. To quantify this perception into a measurable index, sound pressure is divided into levels, i.e. the sound pressure level, so as to represent loudness objectively; the unit is the decibel (dB). Timbre refers to the fact that sounds of different frequencies always have distinctive characteristics in their waveforms; different objects vibrate with different characteristics, and different sounding bodies have different timbres because of their different materials and structures. For example, a piano, a violin and a human voice have different timbres, and different human voices also differ in timbre. The music type is the music style, which covers song style, melody, genre and the like, and is the distinctive overall character that a piece of music presents. Like other artistic styles, a musical style reflects, through the relatively stable, inherent and profound expression of the songs, external marks of the inherent characteristics of an era, a nation or a musician, such as its thought, aesthetic ideals and spiritual qualities. The formation of a style marks that the era, nation or musician has outgrown immaturity in understanding and realizing music and has shaken off various formulaic constraints, so that the music tends towards, or reaches, maturity.
In one embodiment, the preset volume feature library includes one or more volume features. As shown in the figure, volume feature A may be "quiet and comfortable", volume feature B "anechoic", volume feature C "natural sound room", and volume feature D "hi full field". The "hi full field" volume feature means that the voice data is perceived by the human ear as very loud, the "anechoic" feature as relatively loud, the "natural sound room" feature as relatively soft, and the "quiet and comfortable" feature as soft; loud and soft here refer to the amplitude range of the sound. The preset timbre feature library includes one or more timbre features: timbre A may be a "Roly" voice, timbre B a "Miss" voice, timbre C a "warm man" voice, and timbre D a "Copeno" voice. In one embodiment, the "Roly" voice refers to a soft, girlish female voice, the "Miss" voice to a mature female voice as opposed to the "Roly" voice, the "warm man" voice to a warm-hearted and approachable male voice, and the "Copeno" voice to a warm male voice. The preset music feature library includes one or more music types; for example, music type A may be balladry, music type B jazz, music type C rock, and music type D pure music.
In one embodiment, after determining the target feature data, the client may further transform the target feature data according to a preset transformation rule to obtain the corresponding target decoration element, so that the multimedia playing interface can be decorated based on that element. The client presets the rule for transforming target feature data; the transformation rule specifies the decoration elements corresponding to different feature data. For example, the client may preset that volume feature data corresponds to color elements, timbre feature data corresponds to graphic elements, and music feature data corresponds to animation elements. Further, the client may preset the target colors corresponding to different volume feature data, the target graphics corresponding to different timbre feature data, and the target animations corresponding to different music feature data: for example, the target color corresponding to volume feature A may be red, the target graphic corresponding to timbre A may be small flowers, and the target animation corresponding to music type A may be animation 1, where animation 1 may, for instance, rotate clockwise. This enriches the background of the playing interface so that the viewer's visual experience better matches the scene of the voice data currently being played, and it also increases the ornamental and entertainment value of the playing interface. Furthermore, by analysing the feature data of the voice data during playback, different playing interfaces can be classified by feature, realizing automatic classification of playing interfaces, so that watching users can conveniently find interfaces of interest among the classified playing interfaces; here the playing interface may be a live interface during live broadcasting. In one embodiment, live broadcasting refers to a mode of information distribution in which information is produced and distributed on site, synchronously with the occurrence and development of an event, with a two-way flow of information.
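The following sketch illustrates one possible form of such a transformation rule, mapping feature types to decoration-element types and concrete feature values to concrete elements; the table contents and function names are illustrative assumptions.

```python
DECORATION_RULE = {
    # feature type -> (decoration-element type, value table); contents are illustrative
    "volume": ("color", {"quiet and comfortable": "red", "anechoic": "orange"}),
    "timbre": ("graphic", {"Roly voice": "small flowers", "Miss voice": "rose"}),
    "music": ("animation", {"balladry": "animation 1", "jazz": "animation 2"}),
}


def to_decoration_elements(target_features):
    """Transform target feature data into the corresponding target decoration elements."""
    elements = {}
    for feature_type, value in target_features.items():
        element_type, table = DECORATION_RULE[feature_type]
        if value in table:
            elements[element_type] = table[value]
    return elements


print(to_decoration_elements({"volume": "quiet and comfortable",
                              "timbre": "Miss voice",
                              "music": "balladry"}))
# -> {'color': 'red', 'graphic': 'rose', 'animation': 'animation 1'}
```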
After determining the target decoration element, the client may determine a corresponding decoration image based on the target decoration element, so as to decorate the multimedia playing interface (i.e. the live room) with that image. In one embodiment, the decoration image is an image obtained by combining one or more of color elements, graphic elements and animation elements.
After the client determines the target feature data, it may further apply a feature mark to the play entry corresponding to the live room based on the target feature data; that is, the play entry may be marked based on one or more of the feature types corresponding to the volume feature data, the timbre feature data and the background-music feature data. In one embodiment, marking the play entries of different multimedia playing interfaces achieves automatic classification of the different live rooms, and when the voice data played on a multimedia playing interface changes, the corresponding feature mark can be updated automatically. Furthermore, the client can display the determined feature mark at the play entry, so that watching users can quickly find live rooms of interest based on the feature marks shown at the play entries of the live rooms. This improves the viewing experience: by searching for text matching the feature marks, watching users can quickly find, on the playlist interface, live rooms of interest, broadcasting users, or recorded multimedia data, and can thus quickly locate the live rooms they care about among the vast number of rooms listed on the playlist interface.
Referring to Fig. 2, a schematic flowchart of a processing method of a multimedia playing interface according to an embodiment of the present invention is shown, where the method includes:
S201, obtaining playing data from a multimedia playing interface, and determining voice data to be analyzed from the playing data.
In one embodiment, the multimedia playing interface includes a live interface currently performing video or audio live broadcasting and a playing interface for recorded video or audio. When the multimedia playing interface is a live interface for video live broadcasting, it shows the live-broadcast user and all or part of the user's surroundings; when it is an interface for audio live broadcasting, the corresponding multimedia playing interface shows a user icon of the live-broadcast user. In the embodiments of the present invention, the multimedia playing interface is mainly described as the interface on which a live-broadcast user performs video live broadcasting or audio live broadcasting; when the multimedia playing interface is another playing interface, reference may be made to these embodiments.
In one embodiment, the client may obtain, through different receiving entries, the live-broadcast user's own voice and the background music played during the broadcast, thereby collecting the playing data of the multimedia playing interface: the client may obtain the live-broadcast user's voice through a first receiving entry, which may be, for example, a microphone, and obtain the background music through a second receiving entry, which may be, for example, an earphone or the like. After acquiring the playing data from the multimedia playing interface, the client can determine the voice data to be analyzed based on the receiving-entry information corresponding to each piece of voice data in the playing data, where the voice data to be analyzed is the live-broadcast user's voice acquired through the microphone and the background music played during the broadcast. The voice data acquired by the client from the multimedia playing interface may correspond to one or more live-broadcast users, and one or more pieces of background music may also be acquired; that is, the playing data acquired by the client from the multimedia playing interface contains one or more pieces of voice data.
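A minimal sketch of this entry-based selection is given below, assuming play data arrives as a list of records tagged with the name of the receiving entry; the entry names and record fields are assumptions for illustration only.

```python
FIRST_ENTRY = "microphone"   # carries the live user's own voice
SECOND_ENTRY = "earphone"    # carries the background music played during the broadcast


def select_voice_data_to_analyze(play_data):
    """Keep only the voice data whose receiving-entry information indicates the
    first or second receiving entry."""
    return [item for item in play_data
            if item["entry"] in (FIRST_ENTRY, SECOND_ENTRY)]


play_data = [
    {"entry": "microphone", "kind": "live user's voice"},
    {"entry": "earphone", "kind": "background music"},
    {"entry": "system", "kind": "notification sound"},  # not analyzed
]
print(select_voice_data_to_analyze(play_data))
```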
S202, carrying out voice recognition processing on the voice data to be analyzed, and extracting a feature data set of the voice data to be analyzed.
In one embodiment, after determining the voice data to be analyzed, the client may perform voice recognition processing on it. Specifically, the client may match the voice data to be analyzed against the preset feature database: match it against the preset volume feature library to determine the volume feature data matching that library, match it against the preset timbre feature library to determine the timbre feature data matching that library, and match it against the preset music feature library to determine the music feature data matching that library. The volume feature data, the timbre feature data and the music feature data together form the feature data set corresponding to the voice data to be analyzed.
In one embodiment, the feature data set includes feature data of the voice data to be analyzed under at least one feature type, where the feature types include the volume type corresponding to the preset volume feature library, the timbre type corresponding to the preset timbre feature library, the music type corresponding to the preset music feature library, and so on.
In one embodiment, the client may determine one or more pieces of feature data corresponding to the voice data to be analyzed based on the result of matching the voice data against the preset feature database, thereby obtaining the feature data set, and may then determine the core feature data (i.e. the target feature data) from the feature data set, that is, perform step S203.
S203, determining target feature data from the feature data set, and determining a target decoration element corresponding to the target feature data.
In one embodiment, the feature data set includes one or more of volume feature data, timbre feature data and music feature data. When determining the target feature data from the feature data set, the client determines target volume feature data from the volume feature data, target timbre feature data from the timbre feature data, and target music feature data from the music feature data; that is, the target feature data determined by the client includes one or more of the target volume feature data, the target timbre feature data and the target music feature data. The target volume feature data determined by the client may be the volume feature whose corresponding voice data occupies the longest time in the feature data set, or the one whose corresponding voice data has the highest volume, and so on; the target timbre feature data may be the timbre feature whose corresponding voice data occupies the longest time, and the target music feature data may likewise be the music feature whose corresponding voice data occupies the longest time.
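The sketch below shows one way the target feature data might be picked from a set of candidate features, following the longest-duration rule described above (with the highest-volume variant as an option); the candidate records are illustrative.

```python
def pick_target_feature(candidates, key="duration"):
    """Return the candidate whose voice data occupies the longest time, or, when
    key='volume_db', the one whose voice data has the highest volume."""
    return max(candidates, key=lambda c: c[key]) if candidates else None


volume_candidates = [
    {"label": "quiet and comfortable", "duration": 12.0, "volume_db": 35},
    {"label": "hi full field", "duration": 48.0, "volume_db": 90},
]
print(pick_target_feature(volume_candidates))                   # longest duration
print(pick_target_feature(volume_candidates, key="volume_db"))  # highest volume
```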
In one embodiment, the client presets the decoration type to which the decoration elements of each feature type belong: if the feature type of the feature data is the volume type, the corresponding decoration elements are of the color-element type; if it is the timbre type, the corresponding decoration elements are of the graphic-element type; and if it is the music type, the corresponding decoration elements are of the animation-element type. When decorating the multimedia playing interface based on the color-element type, the client fills the playing background with color; when decorating based on the graphic-element type, it displays the corresponding graphics on the color-filled playing background; and when decorating based on the animation-element type, it animates the graphics displayed on the playing background according to the determined animation element.
In one embodiment, the client also presets the specific decoration element for each piece of feature data under each feature type. As shown in Fig. 1b, under the volume type the decoration element for volume feature A may be red and that for volume feature B orange; under the timbre type the decoration element for timbre A may be small flowers and that for timbre B a rose; and the decoration element for music type A may be animation 1 and that for music type B animation 2, and so on. For example, when the target feature data consists of volume feature A, timbre feature B and music type A, the target decoration elements determined by the client are red, a rose and animation 1. After determining the target decoration element corresponding to the target feature data, the client may proceed to perform step S204.
S204, decorating the multimedia playing interface based on the target decoration element.
In one embodiment, the target decoration element determined by the client includes one or more of a target color element corresponding to the target volume feature data, a target graphic element corresponding to the target timbre feature data, and a target animation element corresponding to the target music feature data. When decorating the multimedia playing interface based on the target color element, target graphic element and target animation element, the client may fill the interface with the target color and place the determined target graphic on the color-filled interface; further, if the target decoration element also includes a target animation element, the client may animate the target graphic according to the indication of the target animation element after placing it. Specifically, if the multimedia playing interface is the audio playing interface labelled 30 in Fig. 3a, and the client determines from the target volume feature data that the target color element is gray, the target graphic element a rose, and the target animation element animation 1 (whose indicated effect is a left-right swing), the client may fill the interface with gray, place the rose on the gray-filled interface, and then swing the rose left and right, obtaining the audio playing interface labelled 31 in Fig. 3a.
In one embodiment, when decorating the multimedia playing interface based on the target color element, target graphic element and target animation element, the client may generate, from these elements, a decoration image used to fill the multimedia playing interface, and then fill the decoration image into the fill area of the interface. When the multimedia playing interface is an audio playing interface, the fill area is the interface area other than the live-broadcast user's icon, the user icon being the avatar of the live-broadcast user; when it is a video playing interface, the fill area is the interface area other than the area where the live-broadcast user appears. Specifically, if the client determines from the target volume feature data that the target color element is gray, the target graphic element a rose, and the target animation element animation 1 (a left-right swing), the decoration image determined by the client is the gray, rose-patterned, left-right-swinging image shown in Fig. 3b, and the client can fill this decoration image into the fill area of the multimedia playing interface. Alternatively, for the audio playing interface shown in Fig. 3c, if the client determines that the target color element is gray, the target graphic element the sun, and the target animation element animation 2 (a left-right rotation), the client may fill the interface with gray, place the sun on the gray-filled interface, and then rotate it left and right, obtaining the audio playing interface shown on the right of Fig. 3c.
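As a loose illustration of this step, the sketch below combines the target elements into a decoration-image description and applies it to an interface object; the data structures stand in for real rendering, which the patent text does not specify.

```python
def build_decoration_image(color=None, graphic=None, animation=None):
    """Combine the target color, graphic and animation elements into a single
    decoration-image description (only the elements that are present)."""
    return {k: v for k, v in
            {"color": color, "graphic": graphic, "animation": animation}.items()
            if v is not None}


def decorate_interface(interface, decoration_image):
    """Fill the decoration image into the interface's fill area, i.e. the region
    outside the live user's icon (audio) or the live user's region (video)."""
    interface["background"] = decoration_image
    return interface


audio_interface = {"type": "audio", "user_icon": "anchor_avatar.png"}
image = build_decoration_image(color="gray", graphic="rose", animation="swing left and right")
print(decorate_interface(audio_interface, image))
```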
In the embodiment of the present invention, after the client acquires playing data from the multimedia playing interface while the live-broadcast user is broadcasting, it can determine voice data to be analyzed from the playing data, perform voice recognition processing on that voice data to extract a feature data set, determine target feature data from the feature data set, determine a target decoration element based on the target feature data, and decorate the multimedia playing interface accordingly. Because the decoration follows changes in the voice data, the client can update the multimedia playing interface in real time, which improves the update efficiency of the interface, makes the playing interface more interesting, and enhances the viewing experience of the multimedia playing interface.
Referring to Fig. 4, a schematic flowchart of a processing method of a multimedia playing interface according to another embodiment of the present invention is shown, where the method includes:
S401, obtaining playing data from a multimedia playing interface, and determining voice data to be analyzed from the playing data.
In one embodiment, the playing data acquired by the client from the multimedia playing interface includes voice data received from a plurality of receiving entries. When determining the voice data to be analyzed from the playing data, the client may first determine the source information of each piece of voice data in the playing data, where the source information includes the receiving-entry information of each piece of voice data; the receiving-entry information indicates through which receiving entry of the client the voice data was acquired and may be, for example, the number of the corresponding receiving entry in the client. The client may then take, as the voice data to be analyzed, the voice data whose receiving-entry information indicates the first or the second receiving entry. Specifically, the voice data received through the first receiving entry is the first type of voice data; the first receiving entry may be, for example, a microphone, and the corresponding first type of voice data is the live-broadcast user's own voice. The voice data received through the second receiving entry is the second type of voice data; the second receiving entry may be, for example, an earpiece, and the corresponding second type of voice data is the music played during the live broadcast.
In one embodiment, the client may acquire playing data from the multimedia playing interface at a preset time interval, so that the multimedia playing interface can be decorated based on the playing data acquired at different moments. This keeps the multimedia playing interface updated in real time, presents a background image that better matches the playing scene, and improves the viewer's visual experience.
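A minimal sketch of such a timed update loop is shown below; the interval value and the stand-in callables are assumptions, not details taken from the patent.

```python
import time

UPDATE_INTERVAL_SECONDS = 10  # preset time interval (illustrative value)


def run_update_loop(get_play_data, decorate, rounds=3):
    """Periodically acquire play data and re-decorate the playing interface."""
    for _ in range(rounds):
        play_data = get_play_data()
        decorate(play_data)
        time.sleep(UPDATE_INTERVAL_SECONDS)


# Example with stand-in callables: two update rounds.
run_update_loop(lambda: ["voice clip"],
                lambda data: print("re-decorated based on", data),
                rounds=2)
```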
S402, carrying out voice recognition processing on the voice data to be analyzed, and extracting a feature data set of the voice data to be analyzed.
In an embodiment, the specific implementation of step S402 can refer to the specific implementation of step S202 in the above embodiments, and is not described herein again.
S403, determining target feature data from the feature data set, and determining a target decoration element corresponding to the target feature data.
In one embodiment, after determining the target feature data, when determining the corresponding target decoration element the client may first search the feature database for the standard feature data associated with the target feature data. When the target feature data is target volume feature data, the standard feature data associated with it is the volume feature data in the preset volume feature library whose volume value differs from the target volume value indicated by the target feature data by no more than a preset volume threshold; for example, if the difference between the volume value of volume feature data A in the volume feature library and the target volume value is less than or equal to the preset volume threshold, volume feature data A is determined to be the standard feature data associated with the target volume feature data.
When the target feature data is timbre feature data, the standard feature data associated with it is the timbre feature data in the preset timbre feature library whose timbre value differs from the target timbre value indicated by the target feature data by no more than a preset timbre threshold; for example, if the difference between the timbre value of timbre feature data B in the timbre feature library and the target timbre value is less than or equal to the preset timbre threshold, timbre feature data B is determined to be the standard feature data associated with the target timbre feature data. Alternatively, when the target feature data is music feature data, the standard feature data associated with it is the music feature data in the preset music feature library whose music type is the same as the target music type indicated by the target music feature data; for example, if the music type of music feature data C in the music feature library is the same as the target music type, music feature data C is determined to be the standard feature data associated with the target music feature data.
In one embodiment, based on the one or more pieces of standard feature data so determined, the client may take the decoration elements preset for the standard feature data in the feature database as the target decoration elements corresponding to the target feature data. For example, if the standard feature data determined by the client are volume feature data A, timbre feature data B and music feature data A, and the client presets red as the decoration element for volume feature data A, a rose for timbre feature data B and animation 1 for music feature data A, then the target decoration elements determined by the client based on the target feature data are red, a rose and animation 1.
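The sketch below illustrates the threshold-based lookup of standard feature data described above; the reference databases, threshold values and labels are all illustrative assumptions.

```python
VOLUME_DB = {"A volume": 35, "D volume": 90}           # label -> reference volume value (dB)
TIMBRE_DB = {"B timbre": 220.0}                        # label -> reference timbre value
MUSIC_DB = {"A music": "balladry", "C music": "rock"}  # label -> music type

VOLUME_THRESHOLD = 5      # preset volume threshold (illustrative)
TIMBRE_THRESHOLD = 10.0   # preset timbre threshold (illustrative)


def find_standard_volume(target_volume):
    """Volume feature data whose value is within the preset volume threshold of the target."""
    for label, value in VOLUME_DB.items():
        if abs(value - target_volume) <= VOLUME_THRESHOLD:
            return label
    return None


def find_standard_timbre(target_timbre):
    """Timbre feature data whose value is within the preset timbre threshold of the target."""
    for label, value in TIMBRE_DB.items():
        if abs(value - target_timbre) <= TIMBRE_THRESHOLD:
            return label
    return None


def find_standard_music(target_type):
    """Music feature data whose music type is identical to the target music type."""
    for label, music_type in MUSIC_DB.items():
        if music_type == target_type:
            return label
    return None


print(find_standard_volume(88))      # -> 'D volume'
print(find_standard_timbre(215.0))   # -> 'B timbre'
print(find_standard_music("rock"))   # -> 'C music'
```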
In one embodiment, as shown in Fig. 5, the client may also send the voice data to be analyzed to an analysis server, so that the analysis server performs the voice recognition processing and determines the target feature data corresponding to the voice data to be analyzed. The analysis server may then determine the target decoration element corresponding to the target feature data and send it to the client, and the client can receive the target decoration element and decorate the multimedia playing interface based on it. Specifically, the client may send a notification message carrying the feature data set to the analysis server, so that the analysis server determines the target feature data from the feature data in the set; the client then receives the target decoration element determined by the analysis server based on the target feature data.
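As a rough sketch of this client and analysis-server exchange, the example below models the notification message as a plain dictionary and the server as an in-process function; the message format, selection rule and decoration mapping are assumptions chosen for illustration, since the patent does not define a wire protocol.

```python
def client_send_notification(feature_data_set, server):
    """Client side: send a notification message carrying the feature data set and
    receive back the target decoration element determined by the analysis server."""
    message = {"type": "notification", "feature_data_set": feature_data_set}
    return server(message)


def analysis_server(message):
    """Server side: determine target feature data from the set and map it to a
    decoration element (longest clip wins; the mapped value is illustrative)."""
    features = message["feature_data_set"]
    target = max(features, key=lambda f: f["duration"])
    return {"target_feature": target["label"], "decoration_element": "red"}


result = client_send_notification(
    [{"label": "hi full field", "duration": 40},
     {"label": "quiet and comfortable", "duration": 5}],
    analysis_server,
)
print(result)  # -> {'target_feature': 'hi full field', 'decoration_element': 'red'}
```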
S404, decorating the multimedia playing interface based on the target decoration element.
In one embodiment, when decorating the multimedia playing interface based on the target decoration element, the client may generate a decoration image, such as the one shown in Fig. 3b, based on one or more of the target color element, target graphic element and target animation element contained in the target decoration element. Specifically, when decorating the multimedia playing interface with a decoration image, if the multimedia playing interface is an audio playing interface such as the one labelled 30 in Fig. 3a, the client may, after determining the target decoration element corresponding to the target feature data, determine the interface area occupied by the live-broadcast user's icon and fill the decoration image into the interface area outside that icon area; the live interface obtained after filling is the playing interface labelled 31 in Fig. 3a.
In one embodiment, if the multimedia playing interface is a video playing interface such as the one labelled 60 in Fig. 6, and the target decoration elements determined by the client based on the target feature data are gray, a rose and animation 1, the client may determine the user area occupied by the live-broadcast user in the multimedia playing interface and fill the decoration image into the interface area outside that user area; the filled video playing interface may be the playing interface labelled 61 in Fig. 6.
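The short sketch below shows how the fill area might be chosen by interface type, as described above; the region representation is an assumption made for illustration.

```python
def fill_area_for(interface):
    """Audio interface: exclude the live user's avatar icon region.
    Video interface: exclude the region where the live user appears."""
    if interface["type"] == "audio":
        return {"exclude": interface["user_icon_region"]}
    if interface["type"] == "video":
        return {"exclude": interface["live_user_region"]}
    raise ValueError("unknown interface type")


print(fill_area_for({"type": "audio", "user_icon_region": (10, 10, 64, 64)}))
print(fill_area_for({"type": "video", "live_user_region": (0, 120, 480, 600)}))
```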
S405, generating a feature mark according to the target feature data.
S406, displaying the feature mark on a live broadcast entrance corresponding to the multimedia playing interface on a playing list interface.
In steps S405 and S406, after the client determines the target feature data from the feature data set, it may generate a feature mark based on the target feature data. The feature mark is displayed on the playlist interface at the live entry corresponding to the multimedia playing interface, and includes one or more of a volume mark generated from the volume feature data, a timbre mark generated from the timbre feature data, and a music mark generated from the background-music feature data. Specifically, as shown in Fig. 7a, the playlist interface may show the feature marks of the different multimedia playing interfaces (i.e. live rooms); for example, the feature marks displayed at the live entry labelled 70 cover volume, timbre and music, meaning that, for the live room corresponding to play entry 70, the volume feature of the voice data the client acquired from the corresponding multimedia playing interface is "quiet and comfortable", the timbre feature is the "Miss" voice, and the music feature is balladry. In one embodiment, the live entry is the trigger window that receives a watching user's command to enter the corresponding live room. In one embodiment, when adding the feature marks to the live entries on the playlist interface, the client may also fill each live entry with the color element determined from the target feature data; the color-filled playlist interface may be as shown in Fig. 7b.
In one embodiment, based on the feature marks displayed at the live entries of the playlist interface, the client may also obtain query information from the playlist interface, as shown in Fig. 7c. The client can obtain the query information through the query button of the playlist interface: when the client detects a click on the query button, it may output the query interface labelled 71 in Fig. 7c to obtain the feature keyword the watching user wants to query. The client may then extract the feature keyword from the query information, the keyword describing the target feature mark indicated by the query, determine the set of target play entries containing that target feature mark, and present the play entries of that set on the playlist interface. For example, if the feature keyword the client detects in the query window is music A, it may search the playlist interface of rooms currently live for the set of live entries whose feature marks include music A, and display that set on the playlist interface.
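A minimal sketch of this playlist query step is given below: a feature keyword is extracted from the query and only the play entries whose feature marks contain it are kept; the entry data and keyword handling are illustrative assumptions.

```python
PLAYLIST = [
    {"room": "live room 1", "marks": {"quiet and comfortable", "Miss voice", "balladry"}},
    {"room": "live room 2", "marks": {"hi full field", "warm man voice", "rock"}},
]


def query_play_entries(feature_keyword, playlist=PLAYLIST):
    """Return the play entries whose feature marks contain the queried keyword."""
    return [entry for entry in playlist if feature_keyword in entry["marks"]]


print(query_play_entries("rock"))  # -> only the entry for live room 2
```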
In the embodiment of the present invention, after acquiring the playing data from the multimedia playing interface, the client may determine the voice data to be analyzed from the playing data, perform voice recognition processing on it to extract its feature data set, determine the target feature data from the feature data set, and determine the corresponding target decoration element based on the target feature data. The client can then decorate the multimedia playing interface based on the target decoration element, generate a feature mark based on the target feature data, and display the feature mark at the live entry corresponding to the multimedia playing interface on the playlist interface, so that watching users can search for live interfaces based on the feature marks, which effectively improves the efficiency with which watching users can query playing interfaces.
Based on the description of the foregoing processing method of a multimedia playing interface, an embodiment of the present invention further provides a processing apparatus for a multimedia playing interface, which may be a computer program (including program code) running in the aforementioned client. The processing apparatus of the multimedia playing interface can perform the processing methods described with reference to Fig. 2 and Fig. 4. Referring to Fig. 8, the apparatus includes: an acquisition unit 801, a determining unit 802, a recognition unit 803, an extraction unit 804 and a processing unit 805.
An acquisition unit 801, configured to acquire playing data from a multimedia playing interface;
A determining unit 802, configured to determine voice data to be analyzed from the playing data;
A recognition unit 803, configured to perform voice recognition processing on the voice data to be analyzed;
An extraction unit 804, configured to extract a feature data set of the voice data to be analyzed;
The determining unit 802 is further configured to determine target feature data from the feature data set and to determine a target decoration element corresponding to the target feature data;
The processing unit 805 is configured to perform decoration processing on the multimedia playing interface based on the target decoration element.
In one embodiment, the target feature data includes one or more of target volume feature data, target timbre feature data and target music feature data;
the target decoration elements include one or more of a target color element corresponding to the target volume feature data, a target graphic element corresponding to the target timbre feature data, and a target animation element corresponding to the target music feature data;
the processing unit 805 is specifically configured to:
generate a decoration image based on one or more of the target color element, the target graphic element and the target animation element;
and decorate the multimedia playing interface based on the decoration image.
In one embodiment, the processing unit 805 is specifically configured to:
if the multimedia playing interface is an audio playing interface, determine the interface area corresponding to the live-broadcast user's icon in the multimedia playing interface and fill the decoration image into the interface area outside that icon area;
and if the multimedia playing interface is a video playing interface, determine the live user area corresponding to the live-broadcast user in the multimedia playing interface and fill the decoration image into the interface area outside the live user area.
In one embodiment, the apparatus further comprises:
a generating unit 806, configured to generate feature marks according to the target feature data, the feature marks including one or more of a volume mark generated from volume feature data, a timbre mark generated from timbre feature data, and a music mark generated from background-music feature data;
and a display unit 807, configured to display the feature mark, on the playlist interface, at the live entry corresponding to the multimedia playing interface.
In one embodiment, the acquisition unit 801 is further configured to acquire query information from the playlist interface;
the extraction unit 804 is further configured to extract a feature keyword from the query information, the feature keyword describing the target feature mark indicated by the query information;
and the determining unit 802 is further configured to determine a target play entry set containing the target feature mark and to display each play entry of the target play entry set on the playlist interface.
In one embodiment, the determining unit 802 is specifically configured to:
search a feature database for the standard feature data associated with the target feature data;
and take the decoration elements preset for the standard feature data in the feature database as the target decoration elements corresponding to the target feature data.
In one embodiment, the playing data includes voice data received from a plurality of receiving entries, and the determining unit 802 is specifically configured to:
acquire playing data from a multimedia playing interface and determine the source information corresponding to each piece of voice data in the playing data, the source information including the receiving-entry information of each piece of voice data;
and take, as the voice data to be analyzed, the voice data in the playing data whose receiving-entry information indicates the first receiving entry or the second receiving entry.
In one embodiment, the playing data received from the first receiving entry is a first type of voice data, and the playing data received from the second receiving entry is a second type of voice data, where the first type of voice data and the second type of voice data are different.
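The filtering by receiving entry can be sketched as below: only voice data whose source information names the first or second receiving entry is kept for analysis. The entry identifiers and the record layout are assumptions; the disclosure does not state what the two entries correspond to.

```python
# Minimal sketch: keep only the voice data whose source information says it
# arrived through the first or second receiving entry.

FIRST_ENTRY = "entry-1"
SECOND_ENTRY = "entry-2"

def select_voice_to_analyze(playing_data: list) -> list:
    """playing_data: records like {'entry': 'entry-1', 'pcm': b'...'}."""
    wanted = {FIRST_ENTRY, SECOND_ENTRY}
    return [item for item in playing_data if item.get("entry") in wanted]

if __name__ == "__main__":
    playing_data = [
        {"entry": "entry-1", "pcm": b"\x00\x01"},   # first type of voice data
        {"entry": "entry-2", "pcm": b"\x02\x03"},   # second type of voice data
        {"entry": "entry-9", "pcm": b"\x04\x05"},   # ignored
    ]
    print(len(select_voice_to_analyze(playing_data)))  # -> 2
```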
In one embodiment, the determining unit 802 is specifically configured to:
send a notification message to an analysis server, where the notification message carries the feature data set, so that the analysis server determines the target feature data from the feature data in the feature data set;
determining the target decoration element corresponding to the target feature data then includes:
receiving the target decoration element determined by the analysis server based on the target feature data.
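To illustrate the client/analysis-server split, the sketch below packages the feature data set into a notification message, lets a server-side handler pick the target feature data and a matching decoration element, and returns the element to the client. The JSON message format, the server's selection rule, and the direct function call standing in for the network transport are all assumptions made for the example.

```python
# Minimal sketch of the client / analysis-server exchange described above.

import json

def analysis_server_handle(message: str) -> str:
    """Server side: choose the loudest sample as the target feature data."""
    feature_set = json.loads(message)["feature_data_set"]
    target = max(feature_set, key=lambda f: f["volume_db"])
    element = "color:red" if target["volume_db"] > -15 else "color:blue"
    return json.dumps({"target_feature": target, "decoration_element": element})

def client_request_decoration(feature_set: list) -> str:
    """Client side: send the notification message and read back the element."""
    notification = json.dumps({"feature_data_set": feature_set})
    reply = analysis_server_handle(notification)   # stands in for the network hop
    return json.loads(reply)["decoration_element"]

if __name__ == "__main__":
    print(client_request_decoration([{"volume_db": -30}, {"volume_db": -8}]))
```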
In this embodiment of the present invention, after the obtaining unit 801 obtains playing data from the multimedia playing interface while a live broadcast user is streaming, the determining unit 802 determines the voice data to be analyzed from that playing data. The recognizing unit 803 then performs voice recognition processing on the voice data to be analyzed, so that the extracting unit 804 can extract a feature data set from it. Once the determining unit 802 has determined target feature data from the feature data set on the client, the processing unit 805 determines a target decoration element based on the target feature data and decorates the multimedia playing interface with it. Because the interface is updated in real time as the voice data changes, the update efficiency of the multimedia playing interface is improved, the playing interface becomes more interesting, and the viewing experience of the multimedia playing interface is enhanced.
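The whole flow summarized above can be sketched end to end as follows. The volume-only feature extraction, the thresholds, and the data shapes are assumptions built on the standard library for illustration; this is not the client's actual code.

```python
# End-to-end sketch: obtain playing data, pick the voice data to analyze,
# extract a (volume-only) feature data set, choose target feature data,
# map it to a decoration element, and decorate the interface.

import math

def extract_volume_db(pcm_samples):
    """Crude volume feature: RMS of the samples converted to dB."""
    rms = math.sqrt(sum(s * s for s in pcm_samples) / len(pcm_samples))
    return 20 * math.log10(max(rms, 1e-9))

def process_playing_interface(playing_data):
    # 1. determine the voice data to be analyzed
    voice = [d for d in playing_data if d["kind"] == "voice"]
    # 2. extract a feature data set (one volume figure per clip)
    feature_set = [extract_volume_db(d["samples"]) for d in voice]
    # 3. determine the target feature data (loudest clip)
    target = max(feature_set)
    # 4. determine the corresponding decoration element
    element = "color:red" if target > -15 else "color:blue"
    # 5. decorate the multimedia playing interface
    return {"background": element}

if __name__ == "__main__":
    data = [
        {"kind": "voice", "samples": [0.2, -0.3, 0.25]},
        {"kind": "music", "samples": [0.1, 0.1]},
    ]
    print(process_playing_interface(data))
```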
Referring to fig. 9, which is a schematic block diagram of an intelligent terminal according to an embodiment of the present invention, the intelligent terminal shown in fig. 9 may include: one or more processors 901, one or more input devices 902, one or more output devices 903, and a memory 904. The processor 901, the input device 902, the output device 903, and the memory 904 are connected by a bus 905. The memory 904 is used to store a computer program comprising program instructions, and the processor 901 is used to execute the program instructions stored in the memory 904.
The memory 904 may include a volatile memory (volatile memory), such as a random-access memory (RAM); the memory 904 may also include a non-volatile memory (non-volatile memory), such as a flash memory (flash memory) or a solid-state drive (SSD); the memory 904 may also comprise a combination of the above types of memory.
The processor 901 may be a Central Processing Unit (CPU). The processor 901 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or the like. The PLD may be a field-programmable gate array (FPGA), a General Array Logic (GAL), or the like. The processor 901 may also be a combination of the above structures.
In one embodiment, the processor 901 calls the program instructions to perform:
acquiring playing data from a multimedia playing interface, and determining voice data to be analyzed from the playing data;
carrying out voice recognition processing on the voice data to be analyzed, and extracting a feature data set of the voice data to be analyzed;
determining target feature data from the feature data set, and determining a target decoration element corresponding to the target feature data;
and carrying out decoration processing on the multimedia playing interface based on the target decoration element.
In one embodiment, the target feature data comprises one or more of target volume feature data, target timbre feature data, and target music feature data;
the target decoration elements comprise one or more of a target color element corresponding to the target volume feature data, a target graphic element corresponding to the target timbre feature data, and a target animation element corresponding to the target music feature data; the processor 901 calls the program instructions to further perform:
generating a decoration image based on one or more of the target color element, the target graphic element, and the target animation element;
and carrying out decoration processing on the multimedia playing interface based on the decoration image.
In one embodiment, the processor 901 calls the program instructions to perform:
if the multimedia playing interface is an audio playing interface, determining the interface area corresponding to a live user icon in the multimedia playing interface, and filling the decoration image into the interface areas other than the area corresponding to the live user icon;
and if the multimedia playing interface is a video playing interface, determining the live user area corresponding to the live user in the multimedia playing interface, and filling the decoration image into the interface areas other than the live user area.
In one embodiment, the processor 901 calls the program instructions to perform:
generating feature marks according to the target feature data, where the feature marks include one or more of a volume mark generated based on volume feature data, a timbre mark generated based on timbre feature data, and a music mark generated based on background music feature data;
and displaying the feature marks, on a playlist interface, at the live broadcast entry corresponding to the multimedia playing interface.
In one embodiment, the processor 901 calls the program instructions to perform:
acquiring query information from the playlist interface;
extracting a feature keyword from the query information, where the feature keyword describes the target feature mark that the query information indicates to query for;
and determining a set of target play entries containing the target feature mark, and displaying each play entry in the set on the playlist interface.
In one embodiment, the processor 901 calls the program instructions to perform:
searching a feature database for standard feature data associated with the target feature data;
and taking the decoration element preset for that standard feature data in the feature database as the target decoration element corresponding to the target feature data.
In one embodiment, the playing data includes voice data received from a plurality of receiving entries, and the processor 901 calls the program instructions to perform:
acquiring playing data from the multimedia playing interface, and determining the source information corresponding to each piece of voice data in the playing data, where the source information includes the receiving entry information of each piece of voice data;
and taking, as the voice data to be analyzed, the voice data in the playing data whose receiving entry information indicates that it was received through a first receiving entry or a second receiving entry.
In one embodiment, the playing data received from the first receiving entry is a first type of voice data, and the playing data received from the second receiving entry is a second type of voice data, where the first type of voice data and the second type of voice data are different.
In one embodiment, the processor 901 calls the program instructions to perform:
sending a notification message to an analysis server, where the notification message carries the feature data set, so that the analysis server determines the target feature data from the feature data in the feature data set;
determining the target decoration element corresponding to the target feature data then includes:
receiving the target decoration element determined by the analysis server based on the target feature data.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
While the invention has been described with reference to a particular embodiment, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. A method for processing a multimedia playing interface, characterized by comprising the following steps:
acquiring playing data from a multimedia playing interface, and determining voice data to be analyzed from the playing data;
carrying out voice recognition processing on the voice data to be analyzed, and extracting a feature data set of the voice data to be analyzed;
determining target feature data from the feature data set, and determining a target decoration element corresponding to the target feature data;
and carrying out decoration processing on the multimedia playing interface based on the target decoration element.
2. The method of claim 1, wherein the target feature data comprises one or more of target volume feature data, target timbre feature data, and target music feature data;
the target decoration elements comprise one or more of a target color element corresponding to the target volume feature data, a target graphic element corresponding to the target timbre feature data, and a target animation element corresponding to the target music feature data;
and the decorating of the multimedia playing interface based on the target decoration element comprises the following steps:
generating a decoration image based on one or more of the target color element, the target graphic element, and the target animation element;
and carrying out decoration processing on the multimedia playing interface based on the decoration image.
3. The method of claim 2, wherein the decorating of the multimedia playing interface based on the decoration image comprises:
if the multimedia playing interface is an audio playing interface, determining the interface area corresponding to a live user icon in the multimedia playing interface, and filling the decoration image into the interface areas other than the area corresponding to the live user icon;
and if the multimedia playing interface is a video playing interface, determining the live user area corresponding to a live user in the multimedia playing interface, and filling the decoration image into the interface areas other than the live user area.
4. The method of claim 1, further comprising:
generating feature marks according to the target feature data, wherein the feature marks comprise one or more of a volume mark generated based on volume feature data, a timbre mark generated based on timbre feature data, and a music mark generated based on background music feature data;
and displaying the feature marks, on a playlist interface, at the live broadcast entry corresponding to the multimedia playing interface.
5. The method of claim 4, further comprising:
acquiring query information from the playlist interface;
extracting a feature keyword from the query information, wherein the feature keyword describes the target feature mark that the query information indicates to query for;
and determining a set of target play entries containing the target feature mark, and displaying each play entry in the set on the playlist interface.
6. The method of claim 1, wherein the determining of the target decoration element corresponding to the target feature data comprises:
searching a feature database for standard feature data associated with the target feature data;
and taking the decoration element preset for that standard feature data in the feature database as the target decoration element corresponding to the target feature data.
7. The method of claim 1, wherein the playing data comprises voice data received from a plurality of receiving entries, and wherein the acquiring of playing data from the multimedia playing interface and the determining of voice data to be analyzed from the playing data comprise:
acquiring playing data from the multimedia playing interface, and determining the source information corresponding to each piece of voice data in the playing data, wherein the source information comprises the receiving entry information of each piece of voice data;
and taking, as the voice data to be analyzed, the voice data in the playing data whose receiving entry information indicates that it was received through a first receiving entry or a second receiving entry.
8. The method of claim 7, wherein the playing data received from the first receiving entry is a first type of voice data and the playing data received from the second receiving entry is a second type of voice data, the first type of voice data and the second type of voice data being different.
9. The method of claim 1, wherein the determining of target feature data from the feature data set comprises:
sending a notification message to an analysis server, wherein the notification message carries the feature data set, so that the analysis server determines the target feature data from the feature data in the feature data set;
and the determining of the target decoration element corresponding to the target feature data comprises:
receiving the target decoration element determined by the analysis server based on the target feature data.
10. An apparatus for processing a multimedia playing interface, comprising:
an acquisition unit, configured to acquire playing data from a multimedia playing interface;
a determining unit, configured to determine voice data to be analyzed from the playing data;
a recognition unit, configured to perform voice recognition processing on the voice data to be analyzed;
an extraction unit, configured to extract a feature data set of the voice data to be analyzed;
the determining unit being further configured to determine target feature data from the feature data set and to determine a target decoration element corresponding to the target feature data;
and a processing unit, configured to decorate the multimedia playing interface based on the target decoration element.
11. A client, comprising a processor, an input device, an output device, and a memory, the processor, the input device, the output device, and the memory being interconnected, wherein the memory is configured to store a computer program comprising program instructions, and the processor is configured to invoke the program instructions to execute the method for processing a multimedia playing interface according to any one of claims 1 to 9.
12. A computer-readable storage medium storing computer program instructions which, when executed by a processor, cause the processor to perform the method for processing a multimedia playing interface according to any one of claims 1 to 9.
CN201910868451.XA 2019-09-12 2019-09-12 Multimedia playing interface processing method, device, client and medium Pending CN110570841A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910868451.XA CN110570841A (en) 2019-09-12 2019-09-12 Multimedia playing interface processing method, device, client and medium

Publications (1)

Publication Number Publication Date
CN110570841A true CN110570841A (en) 2019-12-13

Family

ID=68779962

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910868451.XA Pending CN110570841A (en) 2019-09-12 2019-09-12 Multimedia playing interface processing method, device, client and medium

Country Status (1)

Country Link
CN (1) CN110570841A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106575424A (en) * 2014-07-31 2017-04-19 三星电子株式会社 Method and apparatus for visualizing music information
CN105930035A (en) * 2016-05-05 2016-09-07 北京小米移动软件有限公司 Interface background display method and apparatus
CN106649586A (en) * 2016-11-18 2017-05-10 腾讯音乐娱乐(深圳)有限公司 Playing method of audio files and device of audio files
CN108668017A (en) * 2018-04-19 2018-10-16 Oppo广东移动通信有限公司 Volume reminding method, mode switching method, device, terminal and storage medium
CN108986842A (en) * 2018-08-14 2018-12-11 百度在线网络技术(北京)有限公司 Music style identifying processing method and terminal
CN109739354A (en) * 2018-12-28 2019-05-10 广州励丰文化科技股份有限公司 A kind of multimedia interaction method and device based on sound

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111309961A (en) * 2020-02-12 2020-06-19 深圳市腾讯计算机系统有限公司 Song cover generation method and device, computer readable storage medium and equipment
CN111309961B (en) * 2020-02-12 2024-04-02 深圳市腾讯计算机系统有限公司 Song cover generation method, device, computer readable storage medium and equipment
CN111343493A (en) * 2020-02-20 2020-06-26 北京达佳互联信息技术有限公司 Live broadcast interface processing method and device, electronic equipment and storage medium
CN111596841A (en) * 2020-04-28 2020-08-28 维沃移动通信有限公司 Image display method and electronic equipment
CN111596841B (en) * 2020-04-28 2021-09-07 维沃移动通信有限公司 Image display method and electronic equipment
CN113885829A (en) * 2021-10-25 2022-01-04 北京字跳网络技术有限公司 Sound effect display method and terminal equipment
CN113885829B (en) * 2021-10-25 2023-10-31 北京字跳网络技术有限公司 Sound effect display method and terminal equipment

Similar Documents

Publication Publication Date Title
CN110570841A (en) Multimedia playing interface processing method, device, client and medium
US20230185847A1 (en) Audio Identification During Performance
US9159338B2 (en) Systems and methods of rendering a textual animation
CN107172449A (en) Multi-medium play method, device and multimedia storage method
CN101557483B (en) Methods and systems for generating a media program
US10506268B2 (en) Identifying media content for simultaneous playback
JP2007531903A (en) Feature extraction in mobile devices connected to a network
JP2007531903A5 (en)
CN107864410B (en) Multimedia data processing method and device, electronic equipment and storage medium
CN109299318A (en) Method, apparatus, storage medium and the terminal device that music is recommended
KR20070070217A (en) Data-processing device and method for informing a user about a category of a media content item
CN104574453A (en) Software for expressing music with images
CN101996627A (en) Speech processing apparatus, speech processing method and program
CN110675886A (en) Audio signal processing method, audio signal processing device, electronic equipment and storage medium
Lavengood What makes it sound’80s? The Yamaha DX7 electric piano sound
KR20060020114A (en) System and method for providing music search service
CN113691909B (en) Digital audio workstation with audio processing recommendations
CN110691271A (en) News video generation method, system, device and storage medium
CN104038774B (en) Generate the method and device of ring signal file
Lindborg Interactive sonification of weather data for the locust wrath, a multimedia dance performance
CN103562909A (en) Methods and systems for identifying content in data stream by client device
CN106775567B (en) Sound effect matching method and system
CN110503991A (en) Voice broadcast method, device, electronic equipment and storage medium
JP6568351B2 (en) Karaoke system, program and karaoke audio playback method
Furmanovsky American country music in Japan: lost piece in the popular music history puzzle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination