CN107590197B - Music cover generation method and mobile terminal - Google Patents


Info

Publication number
CN107590197B
CN107590197B (grant of application CN201710698721.8A)
Authority
CN
China
Prior art keywords
vocabulary
frequency
cover
music
mobile terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710698721.8A
Other languages
Chinese (zh)
Other versions
CN107590197A (en)
Inventor
林雄周
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority claimed from CN201710698721.8A
Publication of CN107590197A
Application granted
Publication of CN107590197B
Legal status: Active

Landscapes

  • Telephone Function (AREA)

Abstract

The invention provides a music cover generation method and a mobile terminal, relating to the field of computer technology. The method comprises the following steps: acquiring the lyric text of the music currently played by the mobile terminal; extracting specific high-frequency words from the lyric text; acquiring, from an album of the mobile terminal, a cover picture matching the specific high-frequency words, the cover picture being a picture in the album that carries a vocabulary tag; and generating the music cover of the currently played music from the cover picture. This solves the technical problems that existing music covers are relatively fixed, cannot be personalized for different mobile terminals, and therefore appear monotonous and insufficiently rich. The music cover of the currently played music can thus be personalized for different mobile terminals, improving the richness of music covers and meeting users' personalized visual requirements for them.

Description

Music cover generation method and mobile terminal
Technical Field
The invention relates to the technical field of computers, in particular to a music cover generation method and a mobile terminal.
Background
With the rapid development and popularization of mobile terminals, people often use them to listen to music for leisure and entertainment. To improve the user's visual experience while listening, common music players now display cover information corresponding to the currently played music, enhancing the attractiveness and richness of the playing interface.
However, existing music cover pictures are generally fixed: the same song shows essentially the same cover on different mobile terminals and cannot be personalized per terminal, so the presentation is monotonous and insufficiently rich.
Disclosure of Invention
In order to solve the problems that the existing music cover is relatively fixed, cannot be adjusted individually according to different mobile terminals, and causes monotonous pictures and deficient richness, the embodiment of the invention provides a music cover generation method and a mobile terminal.
In order to solve the technical problem, the invention is realized as follows: a music cover generation method is applied to a mobile terminal and is characterized by comprising the following steps:
acquiring a lyric text of music currently played by the mobile terminal;
extracting specific high-frequency words in the lyric text;
acquiring a cover picture matched with the specific high-frequency vocabulary from an album of the mobile terminal; the cover picture is a picture containing a vocabulary label in the album;
and generating the music cover of the currently played music from the cover picture.
An embodiment of the present invention further provides a mobile terminal, including:
the lyric text acquisition module is used for acquiring the lyric text of the music currently played by the mobile terminal;
the high-frequency vocabulary extraction module is used for extracting specific high-frequency vocabularies in the lyric text;
the cover picture acquiring module is used for acquiring a cover picture matched with the specific high-frequency vocabulary from an album of the mobile terminal; the cover picture is a picture containing a vocabulary label in the album;
and the music cover generation module is used for generating the music cover of the currently played music by using the cover picture.
An embodiment of the present invention further provides a mobile terminal, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the music cover generation method as described above.
In the embodiment of the invention, the lyric text of the music currently played by the mobile terminal is obtained; extracting specific high-frequency words in the lyric text; acquiring a cover picture matched with the specific high-frequency vocabulary from an album of the mobile terminal; the cover picture is a picture containing a vocabulary label in the album; and generating the music cover of the current playing music by the cover picture. Therefore, the music front cover of the currently played music can be adjusted in a personalized manner according to different mobile terminals, the richness of the music front cover is improved, and the personalized visual requirements of users are met.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without inventive labor.
FIG. 1 is a flowchart illustrating steps of a method for generating a music cover according to a first embodiment of the present invention;
FIG. 2 is a flowchart illustrating steps of a method for generating a music cover according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a mobile terminal according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a mobile terminal in a fourth embodiment of the present invention;
fig. 5 is a block diagram of a mobile terminal according to a fifth embodiment of the present invention;
fig. 6 is a schematic structural diagram of a mobile terminal in a sixth embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
The embodiment of the invention provides a music cover generation method.
Referring to fig. 1, a flow chart of steps of a music cover generation method in an embodiment of the present invention is shown.
Step 110, acquiring a lyric text of the music currently played by the mobile terminal.
The invention determines the cover based on the lyrics of the currently played music, so the lyric text of that music must first be acquired. In practical applications, when the music player of the mobile terminal plays a song, it can automatically retrieve the corresponding lyric text from the music information, such as the song title, the performing singer, the lyricist and the composer. If the currently played music was downloaded, the lyric text is generally downloaded along with it, and the corresponding lyric text can then be obtained from the preset lyric download path; and so on. In this embodiment of the application, the lyric text of the currently played music may be obtained in any available manner, which is not limited here.
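As an illustration, lyrics are often distributed in the common LRC format, where each line carries one or more [mm:ss.xx] time tags. A minimal sketch of stripping those tags to obtain a plain lyric text follows; the format assumption and the helper name are illustrative, not part of the patent:

```python
import re

def lrc_to_text(lrc: str) -> str:
    """Strip [mm:ss.xx] timestamps from LRC lyric data, keeping plain text."""
    lines = []
    for line in lrc.splitlines():
        # Remove time tags such as [01:23.45] or [1:5] anywhere on the line
        text = re.sub(r"\[\d{1,2}:\d{1,2}(?:\.\d{1,3})?\]", "", line).strip()
        if text:
            lines.append(text)
    return "\n".join(lines)

sample = "[00:12.30]I walk by the lake\n[00:18.10]The lake remembers"
print(lrc_to_text(sample))
```

The resulting plain text can then feed the word-frequency analysis described below.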
And 120, extracting specific high-frequency words in the lyric text.
To give the chosen cover picture a stronger relevance to the currently played music, and thereby improve the user's visual experience, high-frequency words can be extracted from the lyric text and the cover picture determined from them. In practice, however, the lyric text may contain many function words such as "yes" or "oh" (mood auxiliary words, i.e. modal particles), or abstract nouns that do not convey the imagery of the music, such as "patience" or "life". Therefore, in this embodiment, to ensure the validity of the extracted high-frequency words, the extracted words may be restricted to specific high-frequency words of particular types. The word types that qualify may be preset before this step, or before any earlier step, as required; this is not limited here. For example, the extracted specific high-frequency words may be set to exclude the aforementioned modal particles and abstract nouns. In addition, one or more specific high-frequency words may be extracted, and the number to extract may likewise be preset before this step, or before any earlier step, as required.
To extract the specific high-frequency words from the lyric text, text analysis can be performed on the lyric text, the occurrence frequency of each word in the lyric text counted, and the specific high-frequency words determined based on the part of speech and the occurrence frequency of each word.
For example, suppose the preset number of specific high-frequency words to extract is three, and the extraction is set to exclude modal particles and abstract nouns. The three most frequent words in the lyric text are obtained first, say "yes", "lake" and "patience", and each is checked against the exclusion rules: "yes" is a modal particle and "patience" is an abstract noun, so neither qualifies as a specific high-frequency word. The words ranked fourth and fifth by occurrence frequency are then obtained and checked in the same way, and so on, until three specific high-frequency words are obtained.
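The walkthrough above can be sketched as follows. The exclusion word lists are hypothetical placeholders; a real implementation would rely on proper part-of-speech resources:

```python
from collections import Counter

MODAL_PARTICLES = {"yes", "oh", "ah"}        # hypothetical exclusion lists
ABSTRACT_NOUNS = {"patience", "life", "history"}

def specific_high_frequency_words(words, k=3):
    """Return the k most frequent words that are neither modal particles
    nor abstract nouns, walking the frequency ranking from the top."""
    ranked = [w for w, _ in Counter(words).most_common()]
    picked = []
    for w in ranked:
        if w in MODAL_PARTICLES or w in ABSTRACT_NOUNS:
            continue                          # excluded; take the next rank
        picked.append(w)
        if len(picked) == k:
            break
    return picked

lyric_words = ["yes", "yes", "lake", "lake", "patience",
               "moon", "lake", "yes", "patience", "moon", "sea"]
print(specific_high_frequency_words(lyric_words))  # → ['lake', 'moon', 'sea']
```

Here "yes" and "patience" are skipped despite their high frequency, matching the example in the text.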
Optionally, in the embodiment of the present application, the specific high-frequency vocabulary includes an emotional vocabulary and/or a non-abstract noun.
In this embodiment, considering the effectiveness of picture recognition, the specific high-frequency words may be set to include emotion words and/or non-abstract nouns. Emotion words are words related to human emotion, and may include, without limitation, words expressing joy, anger, sorrow, fear, and the like. Non-abstract nouns are the nouns remaining after abstract nouns are excluded, where abstract nouns denote actions, states, qualities, or other abstractions, such as "danger", "patience", "life", "work", or "history".
Step 130, acquiring a cover picture matched with the specific high-frequency vocabulary from an album of the mobile terminal; the cover picture is a picture containing a vocabulary label in the photo album.
After the specific high-frequency words of the lyric text are determined, a matching cover picture can be obtained from the album of the mobile terminal playing the music. In this embodiment, the album stores pictures associated with that particular terminal, such as pictures taken by the user, received pictures, and pictures saved locally. Consequently, different mobile terminals can yield different cover pictures for the same specific high-frequency word. Moreover, a user can store pictures meeting personal preferences in the album, for example by shooting personalized pictures or by finding pictures online and saving them, thereby satisfying the personalized needs of different users.
To determine a cover picture from the album according to a specific high-frequency word, the cover picture is a picture in the album that carries a vocabulary tag; that is, in this embodiment the album contains pictures with vocabulary tags. When acquiring a cover picture matching a specific high-frequency word, a picture whose vocabulary tag matches that word can be taken as the corresponding cover picture. The match may be exact or approximate: for example, if a specific high-frequency word is "sad" and the album contains a picture whose vocabulary tag is a near-synonym of "sad", the two may still be associated, and that picture used as the cover picture matching "sad". The matching criterion between a picture's vocabulary tag and a specific high-frequency word may also be preset; this is not limited here.
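A minimal sketch of this tag matching follows, with a hypothetical hand-built synonym table standing in for whatever matching criterion is preset; a real implementation might use a thesaurus or word embeddings:

```python
# Hypothetical synonym groups used as the approximate-match criterion.
SYNONYMS = {
    "sad": {"sad", "sorrowful", "unhappy"},
    "lake": {"lake", "pond"},
}

def matches(word: str, tag: str) -> bool:
    """A tag matches a word if identical or within the word's synonym group."""
    return tag == word or tag in SYNONYMS.get(word, set())

def find_cover_pictures(high_freq_words, album):
    """album: list of (filename, tags) pairs. Returns the pictures whose
    vocabulary tags match any specific high-frequency word."""
    return [name for name, tags in album
            if any(matches(w, t) for w in high_freq_words for t in tags)]

album = [("pic1.jpg", {"lake", "sky"}),
         ("pic2.jpg", {"sorrowful"}),
         ("pic3.jpg", {"dog"})]
print(find_cover_pictures(["lake", "sad"], album))  # → ['pic1.jpg', 'pic2.jpg']
```

Note that "pic2.jpg" matches "sad" only through the synonym group, illustrating the approximate matching described above.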
In addition, in practical application, the album may contain no picture matching any specific high-frequency word. To avoid the currently played music having no cover at all, which would affect the user's visual experience, at least one default cover picture may be preset on the mobile terminal in this embodiment; when no picture matching any of the specific high-frequency words of the current music can be found in the album, the default cover picture is used directly as the cover. Alternatively, the user may be prompted, in any available form such as a pop-up or floating box, that the currently played music has no matching cover picture, and an interface may be provided for the user to set or change the cover picture for the currently played music; this is not limited here.
Step 140, generating a music cover of the currently played music by using the cover picture.
After determining the cover picture of the currently playing music, the music cover of the currently playing music can be generated by the cover picture. In the embodiment of the present application, the generation manner of the music cover may be set before this step or before any step before this step according to requirements, and this embodiment of the present application is not limited.
For example, if only one cover picture is matched, it can be used directly to generate and display the music cover of the currently played music. If several cover pictures are matched, they can be displayed on the song playing interface as a carousel when the music cover is generated, ordered by the occurrence frequency of their corresponding specific high-frequency words from high to low. The display duration of each cover picture may be set to be the same, or may differ; for example, the cover picture corresponding to a more frequent specific high-frequency word may be displayed longer, and so on.
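One way to realize unequal display times is to split a carousel cycle in proportion to word frequency. This sketch assumes the frequencies are already known; the cycle length and the proportional rule are illustrative choices, not prescribed by the patent:

```python
def carousel_durations(freqs, total_seconds=12.0):
    """Split a carousel cycle among cover pictures in proportion to the
    occurrence frequency of their matched words; most frequent shown first."""
    order = sorted(freqs.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(f for _, f in order)
    return [(word, total_seconds * f / total) for word, f in order]

# A 12-second cycle over three matched words with frequencies 3, 2 and 1.
print(carousel_durations({"lake": 3, "moon": 2, "sea": 1}))
# → [('lake', 6.0), ('moon', 4.0), ('sea', 2.0)]
```

Setting every frequency equal reduces this to the uniform-duration case also mentioned in the text.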
In the embodiment of the invention, the lyric text of the music currently played by the mobile terminal is obtained; extracting specific high-frequency words in the lyric text; acquiring a cover picture matched with the specific high-frequency vocabulary from an album of the mobile terminal; the cover picture is a picture containing a vocabulary label in the album; and generating the music cover of the current playing music by the cover picture. Therefore, the music front cover of the currently played music can be adjusted in a personalized manner according to different mobile terminals, the richness of the music front cover is improved, and the personalized visual requirements of users are met.
Example two
The embodiment of the invention provides a music cover generation method.
Referring to fig. 2, a flow chart of the steps of a music cover generation method in the second embodiment of the present invention is shown.
Step 210, obtaining a lyric text of the music currently played by the mobile terminal.
Step 220, extracting the vocabulary with the highest frequency of occurrence in the lyric text.
As described above, if the specific high-frequency words are set to include emotion words and non-abstract nouns, a word extracted from the lyric text must satisfy two conditions: a high frequency of occurrence, and a part of speech that is an emotion word and/or a non-abstract noun. Since frequency is the operable criterion during extraction, the word with the highest frequency of occurrence in the lyric text may be extracted first. Specifically, text analysis can be performed on the lyric text to segment it into words, the occurrence frequency of each word in the lyric text counted, and the most frequent word obtained. Of course, in this embodiment the most frequent word may also be extracted in any other available manner, which is not limited here.
Step 230, determining whether the vocabulary currently extracted is at least one of an emotion vocabulary and a non-abstract noun.
After a high-frequency word is extracted, a part-of-speech judgment can be performed on it. Specifically, when the specific high-frequency words include emotion words and non-abstract nouns, it must be determined whether the currently extracted word is at least one of the two. A judgment order can be set, and once the word matches either category, no further judgment is needed. For example, if the order is set to emotion word first and then non-abstract noun, a word judged to be an emotion word need not be further checked for being a non-abstract noun; if it is not an emotion word, it must be further judged whether it is a non-abstract noun.
Of course, in the embodiment of the present application, it may also be determined whether the currently extracted vocabulary is at least one of an emotion vocabulary and a non-abstract noun in a random order without setting a determination order, or whether the currently extracted vocabulary is at least one of an emotion vocabulary and a non-abstract noun at the same time, which is not limited in this application.
Step 240, if the vocabulary is at least one of an emotion vocabulary and a non-abstract noun, confirming that the vocabulary is a specific high-frequency vocabulary.
Obviously, if the currently extracted vocabulary is at least one of emotional vocabulary and non-abstract nouns, the vocabulary is indicated as a specific high-frequency vocabulary.
Step 250, counting whether the number of specific high-frequency words has reached the preset number.
In the embodiment of the present application, a user may set, according to a requirement, a preset number of specific high-frequency words to be extracted before the step, or before any step before the step, which is not limited in this embodiment of the present application. Moreover, the preset number of the specific high-frequency words corresponding to different mobile terminals may be different, and may also be the same, and the embodiment of the present application is not limited thereto.
To ensure that the preset number of specific high-frequency words is obtained from the lyric text, the count of specific high-frequency words must be checked against the preset number. Before the first word is extracted from the lyric text and judged, the count of specific high-frequency words is zero; thereafter, each time an extracted word is confirmed as a specific high-frequency word, the count is incremented by one, and the updated count is compared with the preset number.
For example, for the lyric text of the currently played music, the count of specific high-frequency words defaults to zero before extraction begins; once extraction starts, the count is incremented by one each time an extracted word is confirmed as a specific high-frequency word.
Step 260, if the number of specific high-frequency words has not reached the preset number, extracting from the lyric text the most frequent word among those not yet extracted, and then returning to step 230, until the number of specific high-frequency words reaches the preset number.
If the number of specific high-frequency words has not reached the preset number, the remaining high-frequency words of the lyric text must continue to be judged. To ensure that the words taken from the lyric text remain the relatively frequent ones, the most frequent word not yet extracted is taken next, and it is again judged whether the currently extracted word is at least one of an emotion word and a non-abstract noun. If it is, it is confirmed as a specific high-frequency word; if it is neither an emotion word nor a non-abstract noun, it is determined not to be a specific high-frequency word. Once the number of confirmed specific high-frequency words reaches the preset number, no further words need be extracted from the lyric text.
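Steps 220 through 260 can be sketched as a single loop over the frequency ranking. The emotion and abstract-noun word lists are hypothetical, and the part-of-speech check is deliberately simplified (every word outside the abstract-noun list is treated as a non-abstract noun):

```python
from collections import Counter

EMOTION_WORDS = {"happy", "sad", "lonely"}    # hypothetical word lists
ABSTRACT_NOUNS = {"patience", "life", "history"}

def extract_until_preset(words, preset_count):
    """Steps 220-260: repeatedly take the next most frequent unextracted word,
    keep it if it is an emotion word or a non-abstract noun, and stop once
    the preset number of specific high-frequency words is reached."""
    ranking = (w for w, _ in Counter(words).most_common())
    specific = []
    for word in ranking:
        is_emotion = word in EMOTION_WORDS
        is_non_abstract = word not in ABSTRACT_NOUNS  # simplified noun check
        if is_emotion or is_non_abstract:
            specific.append(word)                     # step 240: confirm
        if len(specific) == preset_count:             # step 250: count check
            break
    return specific

print(extract_until_preset(
    ["sad", "sad", "patience", "lake", "lake", "patience", "patience"], 2))
# → ['sad', 'lake']  ("patience" is most frequent but excluded)
```

The counter of step 250 is implicit in `len(specific)`, which starts at zero and grows by one per confirmed word, as described above.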
Step 270, setting vocabulary labels for the pictures contained in the photo album of the mobile terminal.
In the embodiment of the application, a cover picture is obtained from an album of the mobile terminal based on a specific high-frequency vocabulary, and the cover picture is a picture containing a vocabulary tag in the album. Therefore, before acquiring the cover picture, the vocabulary tags of the pictures contained in the current album of the mobile terminal need to be determined. In the embodiment of the present application, a user may set vocabulary tags of pictures in an album of a mobile terminal used by the user in advance according to a personalized requirement of the user, which is not limited in the embodiment of the present application.
In practical applications, the photo album of the mobile terminal may be subdivided into one or more folders, for example by person, by acquisition path, or by shooting location. Moreover, when setting the music cover, a user may want only the pictures in a particular folder to be used. In this embodiment, the user may therefore preset that cover pictures matching the specific high-frequency words are acquired only from a preset folder in the album, where the preset folder can be chosen as required; in that case, vocabulary labels need only be set for the pictures in the preset folder.
In the embodiment of the present application, step 270 may be performed before step 280, or before any step before step 280, and this embodiment of the present application is not limited.
Optionally, in this embodiment of the present application, the step 270 further includes:
Substep 271, performing object scene recognition and/or character expression recognition on each picture contained in the album of the mobile terminal.
In order to determine the vocabulary labels of the pictures in the album to match with the specific high-frequency vocabulary, in the embodiment of the application, object scene recognition and/or character expression recognition can be carried out on the pictures. In the embodiment of the present application, any available method or device may be used to perform object scene recognition and/or character expression recognition on a picture, which is not limited in this embodiment of the present application.
In the embodiments of the present application, object scene recognition may include, but is not limited to, the following categories:
plant: trees, flowers, vines, grasses, fruits, shrubs, aquatic plants, forests;
animals: cats, dogs, birds, fish, insects;
Non-man-made objects: rain, fog, fire, geological rock and soil;
Non-natural images: screenshots, historical black-and-white pictures, design drawings, bills, whiteboards, blackboards, posters, maps, art paintings, cartoons, books, certificate cards, forms and two-dimensional codes;
building: street roads, along street pavements, home rooms, sculptures, catering places, building night scenes, park gardens, hotels, bridges, commercial buildings, railway tracks, swimming pools, traditional buildings, construction sites, squares;
People and activities in which people participate: firework night scenes, crowd gatherings, exhibition activities, ceremonial activities, meeting and learning activities, ID photos, infants, literary and artistic activities, sports activities, figures, selfies, flag raising, and commemorative photos;
man-made object: automobiles, ships, airplanes, bicycles, motorcycles, public facilities, wearing, musical instruments, electronic devices, drinks, foods, machine equipment, raw materials, personal care cosmetics, cases, toys, stationery, mobile phone digital products, computer office, household appliances, furniture, household goods;
natural landscape: lakes, sky, cave caverns, grasslands, rivers, waterfalls, deserts, oceans, ice and snow, and mountains; and so on.
Character expression recognition is mainly realized by recognizing the facial expression features of people in the picture, such as happiness, sadness, fear, anger, and the like.
In this embodiment, the user may set the granularity of object scene recognition and/or character expression recognition as required; this is not limited here. Moreover, since a picture may contain many scenes or objects, its object scene recognition result may include one or more of the above categories. For example, for a picture containing people and a dog, recognition may yield "people" and "crowd gathering" under the activities category, and "dog" under animals; alternatively, if recognition is accurate only to the coarse classes, the result for the picture may be "people and activities in which people participate" and "animals".
Substep 272, setting the vocabulary label of the picture based on the recognition result.
After object scene recognition and/or character expression recognition is performed on each picture, the vocabulary labels of the corresponding picture can be set based on the recognition result, for example by using the recognition result directly as the labels. For a picture A whose object scene result is "people" and "crowd gathering" and whose expression result is "happy", the labels may be set as "crowd gathering" and "happy", or as "people and activities in which people participate" and "happy". Each picture may thus have one or more vocabulary labels, and some pictures may have none; this is not limited here. For a picture whose vocabulary labels cannot be determined automatically, the mobile terminal may be controlled to prompt the user to add labels manually.
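Setting labels directly from recognition results, as just described, can be sketched as follows; the data shapes (label lists per recognizer, a per-file mapping) are assumptions for illustration:

```python
def tags_from_recognition(scene_labels, expression_labels):
    """Combine object-scene and facial-expression recognition results into
    a picture's vocabulary labels; a picture may end up with several labels,
    or none if both recognizers returned nothing."""
    return set(scene_labels) | set(expression_labels)

def tag_album(album_recognition):
    """album_recognition: {filename: (scene_labels, expression_labels)}.
    Returns {filename: vocabulary_labels}."""
    return {name: tags_from_recognition(scenes, exprs)
            for name, (scenes, exprs) in album_recognition.items()}

tags = tag_album({
    "a.jpg": (["crowd gathering", "people"], ["happy"]),
    "b.jpg": ([], []),          # would trigger the manual-labeling prompt
})
print(sorted(tags["a.jpg"]))
```

Pictures like "b.jpg", which receive no labels, correspond to the manual-labeling fallback mentioned above.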
In the embodiment of the present application, other available manners may also be used to set the vocabulary tags of the pictures based on the recognition result, which is not limited in the embodiment of the present application.
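Sub-steps 271 and 272 can be sketched roughly as follows. Note that `recognize_scene`, `recognize_expression`, and the dictionary-based picture records are hypothetical stand-ins for real classifiers and album storage, used only to illustrate how recognition results become vocabulary labels:

```python
# Hypothetical sketch of sub-steps 271-272: run object scene and facial-expression
# recognition on each album picture and store the results as vocabulary labels.

def recognize_scene(picture):
    # Assumed classifier: returns zero or more scene/object categories.
    return picture.get("scene_labels", [])

def recognize_expression(picture):
    # Assumed classifier: returns zero or more facial-expression categories.
    return picture.get("expression_labels", [])

def set_vocabulary_tags(album):
    """Attach vocabulary labels to each picture; pictures with no result get none."""
    for picture in album:
        tags = recognize_scene(picture) + recognize_expression(picture)
        if tags:
            picture["vocabulary_tags"] = tags
        # A picture whose labels cannot be determined automatically could instead
        # prompt the user to add labels manually, as the embodiment suggests.
    return album

album = [
    {"name": "a.jpg", "scene_labels": ["crowd"], "expression_labels": ["happy"]},
    {"name": "b.jpg"},
]
set_vocabulary_tags(album)
print(album[0]["vocabulary_tags"])  # -> ['crowd', 'happy']
```

As in the embodiment, a picture may end up with one label, several labels, or none at all.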
Step 280, acquiring a cover picture matched with the specific high-frequency vocabulary from an album of the mobile terminal; the cover picture is a picture containing a vocabulary label in the photo album.
Step 290, generating the music cover of the currently played music by the cover picture.
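Steps 280 and 290 can be sketched as a simple tag-matching lookup. All function names and the tag structure here are illustrative assumptions, not the patent's implementation:

```python
# Minimal sketch of steps 280-290: pick album pictures whose vocabulary labels
# overlap the specific high-frequency words, then use them as the music cover.

def find_cover_pictures(album, high_frequency_words):
    """Return pictures whose vocabulary labels match any specific high-frequency word."""
    wanted = set(high_frequency_words)
    return [p for p in album
            if wanted & set(p.get("vocabulary_tags", []))]

def generate_music_cover(cover_pictures):
    # In a real terminal this would render the picture(s) as the player cover;
    # with several matches they could be shown in a carousel ordered by the
    # frequency of the matched words.
    return [p["name"] for p in cover_pictures]

album = [
    {"name": "beach.jpg", "vocabulary_tags": ["sea", "happy"]},
    {"name": "desk.jpg", "vocabulary_tags": ["office"]},
]
print(generate_music_cover(find_cover_pictures(album, ["sea", "love"])))
# -> ['beach.jpg']
```

Because the album contents differ from terminal to terminal, the same high-frequency words yield different covers on different devices, which is the personalization effect the embodiment describes.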
In the embodiment of the invention, the lyric text of the music currently played by the mobile terminal is obtained; specific high-frequency words in the lyric text are extracted; a cover picture matched with the specific high-frequency vocabulary is acquired from an album of the mobile terminal, the cover picture being a picture in the album that contains a vocabulary label; and the music cover of the currently played music is generated from the cover picture. In this way, the music cover of the currently played music can be adjusted in a personalized manner for different mobile terminals, the richness of the music cover is improved, and the personalized visual requirements of users are met.
Moreover, in the embodiment of the application, when the specific high-frequency vocabulary includes emotion vocabularies and non-abstract nouns, the word with the highest frequency of occurrence in the lyric text can be extracted; whether the currently extracted word is at least one of an emotion vocabulary and a non-abstract noun is judged; if so, the word is confirmed as a specific high-frequency vocabulary; whether the number of specific high-frequency words has reached a preset number is counted; and if not, the word with the highest frequency of occurrence among the words not yet extracted is taken from the lyric text, and the judging step is repeated until the number of specific high-frequency words reaches the preset number. Therefore, the preset number of specific high-frequency words can be accurately acquired, further improving the accuracy and efficiency of acquiring the specific high-frequency words.
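The extraction loop described above can be sketched as follows. The lexicon sets, function names, and sample lyrics are all hypothetical illustrations, not part of the patent:

```python
# Hypothetical sketch of the specific high-frequency vocabulary extraction loop:
# repeatedly take the most frequent remaining word and keep it only if it is an
# emotion word or a non-abstract (concrete) noun, until the preset number is reached.
from collections import Counter

EMOTION_WORDS = {"love", "happy", "lonely", "sad"}      # assumed emotion lexicon
NON_ABSTRACT_NOUNS = {"sea", "dog", "city", "flower"}   # assumed concrete-noun lexicon

def is_emotion_or_concrete(word):
    return word in EMOTION_WORDS or word in NON_ABSTRACT_NOUNS

def extract_specific_high_frequency_words(lyric_text, preset_number=3):
    """Scan words from highest to lowest frequency until enough qualify."""
    counts = Counter(lyric_text.lower().split())
    selected = []
    # most_common() yields words in descending frequency, so each iteration
    # examines the most frequent word not yet considered.
    for word, _freq in counts.most_common():
        if is_emotion_or_concrete(word):
            selected.append(word)
        if len(selected) >= preset_number:
            break
    return selected

lyrics = "love love the sea the sea the lonely city night night night"
print(extract_specific_high_frequency_words(lyrics, preset_number=2))
# -> ['love', 'sea']
```

Frequent but non-qualifying words ("the", "night") are skipped, matching the judging step in the embodiment.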
In addition, in the embodiment of the application, vocabulary labels can be set for the pictures contained in the album of the mobile terminal: object scene recognition and/or facial expression recognition is performed on the pictures contained in the album, and the vocabulary labels of the pictures are set on the basis of the recognition results. Therefore, the personalization of the music cover can be further improved, and the personalized visual requirements of the user can be better met.
Example Three
A mobile terminal according to an embodiment of the present invention is described in detail.
Referring to fig. 3, a schematic structural diagram of a mobile terminal in the embodiment of the present invention is shown.
The mobile terminal 300 of the embodiment of the present invention includes: a lyric text acquisition module 310, a high frequency vocabulary extraction module 320, a cover picture acquisition module 330, and a music cover generation module 340.
The functions of the modules and the interaction relationship between the modules are described in detail below.
A lyric text obtaining module 310, configured to obtain a lyric text of music currently played by the mobile terminal.
A high-frequency vocabulary extraction module 320, configured to extract specific high-frequency words from the lyric text.
A cover picture obtaining module 330, configured to obtain a cover picture matched with the specific high-frequency vocabulary from an album of the mobile terminal; the cover picture is a picture containing a vocabulary label in the photo album.
A music cover generation module 340, configured to generate a music cover of the currently played music from the cover picture.
In the embodiment of the invention, the lyric text of the music currently played by the mobile terminal is obtained; specific high-frequency words in the lyric text are extracted; a cover picture matched with the specific high-frequency vocabulary is acquired from an album of the mobile terminal, the cover picture being a picture in the album that contains a vocabulary label; and the music cover of the currently played music is generated from the cover picture. In this way, the music cover of the currently played music can be adjusted in a personalized manner for different mobile terminals, the richness of the music cover is improved, and the personalized visual requirements of users are met.
Example Four
A mobile terminal according to an embodiment of the present invention is described in detail.
Referring to fig. 4, a schematic structural diagram of a mobile terminal in the embodiment of the present invention is shown.
The mobile terminal 400 of the embodiment of the present invention includes: a lyric text acquisition module 410, a high frequency vocabulary extraction module 420, a vocabulary tag setting module 430, a cover picture acquisition module 440, and a music cover generation module 450.
The functions of the modules and the interaction relationship between the modules are described in detail below.
A lyric text obtaining module 410, configured to obtain a lyric text of music currently played by the mobile terminal.
A high-frequency vocabulary extraction module 420, configured to extract specific high-frequency words from the lyric text.
Optionally, in the embodiment of the present application, the specific high-frequency vocabulary includes an emotional vocabulary and/or a non-abstract noun.
Optionally, in this embodiment of the application, when the specific high-frequency vocabulary includes emotion vocabularies and non-abstract nouns, the high-frequency vocabulary extracting module 420 includes:
A first vocabulary extraction submodule 421, configured to extract the word with the highest frequency of occurrence in the lyric text.
A part-of-speech judging submodule 422, configured to judge whether the currently extracted word is at least one of an emotion vocabulary and a non-abstract noun.
A specific high-frequency vocabulary confirming submodule 423, configured to confirm the word as a specific high-frequency vocabulary if it is at least one of an emotion vocabulary and a non-abstract noun.
A number counting submodule 424, configured to count whether the number of the specific high-frequency words has reached a preset number.
A second vocabulary extraction submodule 425, configured to, if the number of the specific high-frequency words has not reached the preset number, extract from the lyric text the word with the highest frequency of occurrence among the words not yet extracted, and then return to the part-of-speech judging submodule 422, until the number of the specific high-frequency words reaches the preset number.
A vocabulary label setting module 430, configured to set vocabulary labels for the pictures contained in the album of the mobile terminal.
Optionally, in this embodiment of the present application, the vocabulary tag setting module 430 may further include:
and the picture identification submodule is used for respectively carrying out object scene identification and/or character expression identification on the pictures contained in the photo album of the mobile terminal.
A vocabulary label setting submodule, configured to set the vocabulary labels of the pictures on the basis of the recognition results.
A cover picture obtaining module 440, configured to obtain a cover picture matched with the specific high-frequency vocabulary from an album of the mobile terminal; the cover picture is a picture containing a vocabulary label in the photo album.
The music cover generating module 450 is configured to generate a music cover of the currently played music according to the cover picture.
In the embodiment of the invention, the lyric text of the music currently played by the mobile terminal is obtained; specific high-frequency words in the lyric text are extracted; a cover picture matched with the specific high-frequency vocabulary is acquired from an album of the mobile terminal, the cover picture being a picture in the album that contains a vocabulary label; and the music cover of the currently played music is generated from the cover picture. In this way, the music cover of the currently played music can be adjusted in a personalized manner for different mobile terminals, the richness of the music cover is improved, and the personalized visual requirements of users are met.
Moreover, in the embodiment of the application, when the specific high-frequency vocabulary includes emotion vocabularies and non-abstract nouns, the word with the highest frequency of occurrence in the lyric text can be extracted; whether the currently extracted word is at least one of an emotion vocabulary and a non-abstract noun is judged; if so, the word is confirmed as a specific high-frequency vocabulary; whether the number of specific high-frequency words has reached a preset number is counted; and if not, the word with the highest frequency of occurrence among the words not yet extracted is taken from the lyric text, and the judging step is repeated until the number of specific high-frequency words reaches the preset number. Therefore, the preset number of specific high-frequency words can be accurately acquired, further improving the accuracy and efficiency of acquiring the specific high-frequency words.
In addition, in the embodiment of the application, vocabulary labels can be set for the pictures contained in the album of the mobile terminal: object scene recognition and/or facial expression recognition is performed on the pictures contained in the album, and the vocabulary labels of the pictures are set on the basis of the recognition results. Therefore, the personalization of the music cover can be further improved, and the personalized visual requirements of the user can be better met.
The mobile terminal provided in the embodiment of the present invention can implement each process implemented by the mobile terminal in the method embodiments of fig. 1 to fig. 2, and is not described herein again to avoid repetition.
Example Five
A mobile terminal according to an embodiment of the present invention is described in detail.
Referring to fig. 5, a block diagram of a mobile terminal in an embodiment of the present invention is shown.
The mobile terminal 500 shown in fig. 5 includes: at least one processor 501, memory 502, at least one network interface 504, and a user interface 503. The various components in the mobile terminal 500 are coupled together by a bus system 505. It is understood that the bus system 505 is used to enable connection communications between these components. The bus system 505 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 505 in FIG. 5.
The user interface 503 may include, among other things, a display, a keyboard, or a pointing device (e.g., a mouse, trackball, touch pad, or touch screen).
It is to be understood that the memory 502 in embodiments of the present invention may be volatile memory or nonvolatile memory, or may include both. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 502 of the systems and methods described in connection with the embodiments of the invention is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, memory 502 stores elements, executable modules or data structures, or a subset thereof, or an expanded set thereof as follows: an operating system 5021 and application programs 5022.
The operating system 5021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, and is used for implementing various basic services and processing hardware-based tasks. The application 5022 includes various applications, such as a Media Player (Media Player), a Browser (Browser), and the like, for implementing various application services. The program for implementing the method according to the embodiment of the present invention may be included in the application program 5022.
In the embodiment of the present invention, the processor 501 is configured to obtain a lyric text of music currently played by the mobile terminal by calling a program or an instruction stored in the memory 502, specifically, a program or an instruction stored in the application 5022; extracting specific high-frequency words in the lyric text; acquiring a cover picture matched with the specific high-frequency vocabulary from an album of the mobile terminal; the cover picture is a picture containing a vocabulary label in the album; and generating the music cover of the current playing music by the cover picture.
The method disclosed by the above-mentioned embodiments of the present invention may be applied to the processor 501, or implemented by the processor 501. The processor 501 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 501. The processor 501 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory 502, and the processor 501 reads the information in the memory 502 and completes the steps of the method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the Processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described in this disclosure may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described in this disclosure. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Optionally, as another embodiment, the processor 501 is further configured so that the specific high-frequency vocabulary includes emotion vocabularies and/or non-abstract nouns.
Optionally, as another embodiment, the processor 501 is further configured to: when the specific high-frequency vocabulary includes emotion vocabularies and non-abstract nouns, extract the word with the highest frequency of occurrence in the lyric text; judge whether the currently extracted word is at least one of an emotion vocabulary and a non-abstract noun; if so, confirm the word as a specific high-frequency vocabulary; count whether the number of specific high-frequency words has reached a preset number; and if not, extract from the lyric text the word with the highest frequency of occurrence among the words not yet extracted, and repeat the judging step until the number of specific high-frequency words reaches the preset number.
Optionally, the processor 501 is further configured to: and setting a vocabulary label of the picture contained in the photo album of the mobile terminal.
Optionally, the processor 501 is further configured to: perform object scene recognition and/or facial expression recognition on the pictures contained in the album of the mobile terminal, and set the vocabulary labels of the pictures on the basis of the recognition results.
The mobile terminal 500 can implement the processes implemented by the mobile terminal in the foregoing embodiments, and in order to avoid repetition, the detailed description is omitted here.
Example Six
Fig. 6 is a schematic structural diagram of a mobile terminal according to another embodiment of the present invention. Specifically, the mobile terminal in fig. 6 may be a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), or a vehicle-mounted computer.
The mobile terminal in fig. 6 includes a Radio Frequency (RF) circuit 610, a memory 620, an input unit 630, a display unit 640, a processor 660, an audio circuit 670, a Wi-Fi (Wireless Fidelity) module 680, and a power supply 690.
The input unit 630 may be used to receive numeric or character information input by a user and to generate signal inputs related to user settings and function control of the mobile terminal. Specifically, in the embodiment of the present invention, the input unit 630 may include a touch panel 631. The touch panel 631, also referred to as a touch screen, may collect touch operations of a user on or near it (e.g., operations performed by the user on the touch panel 631 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 631 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user and the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 660, and can receive and execute commands sent by the processor 660. The touch panel 631 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 631, the input unit 630 may also include other input devices 632, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick.
Among them, the display unit 640 may be used to display information input by a user or information provided to the user and various menu interfaces of the mobile terminal. The display unit 640 may include a display panel 641, and optionally, the display panel 641 may be configured in the form of an LCD or an Organic Light-Emitting Diode (OLED).
It should be noted that the touch panel 631 may cover the display panel 641 to form a touch display screen, and when the touch display screen detects a touch operation thereon or nearby, the touch display screen is transmitted to the processor 660 to determine the type of the touch event, and then the processor 660 provides a corresponding visual output on the touch display screen according to the type of the touch event.
The touch display screen comprises an application program interface display area and a common control display area. The arrangement of the two display areas is not limited; they may be arranged in any way that distinguishes them, such as one above the other or side by side. The application program interface display area may be used to display the interface of an application. Each interface may contain at least one interface element, such as an icon and/or a widget desktop control of an application. The application program interface display area may also be an empty interface that does not contain any content. The common control display area is used for displaying controls with a high usage rate, such as setting buttons, interface numbers, scroll bars, and phone book icons.
The processor 660 is a control center of the mobile terminal, connects various parts of the whole mobile phone by using various interfaces and lines, and executes various functions and processes data of the mobile terminal by running or executing software programs and/or modules stored in the first memory 621 and calling data stored in the second memory 622, thereby performing overall monitoring on the mobile terminal. Optionally, processor 660 may include one or more processing units.
In an embodiment of the present invention, the processor 660 is configured, by calling a software program and/or module stored in the first memory 621 and/or data stored in the second memory 622, to acquire the lyric text of the music currently played by the mobile terminal; extract specific high-frequency words in the lyric text; acquire a cover picture matched with the specific high-frequency vocabulary from an album of the mobile terminal, the cover picture being a picture in the album that contains a vocabulary label; and generate the music cover of the currently played music from the cover picture.
Optionally, as another embodiment, the processor 660 is further configured so that the specific high-frequency vocabulary includes emotion vocabularies and/or non-abstract nouns.
Optionally, as another embodiment, the processor 660 is further configured to: when the specific high-frequency vocabulary includes emotion vocabularies and non-abstract nouns, extract the word with the highest frequency of occurrence in the lyric text; judge whether the currently extracted word is at least one of an emotion vocabulary and a non-abstract noun; if so, confirm the word as a specific high-frequency vocabulary; count whether the number of specific high-frequency words has reached a preset number; and if not, extract from the lyric text the word with the highest frequency of occurrence among the words not yet extracted, and repeat the judging step until the number of specific high-frequency words reaches the preset number.
Optionally, the processor 660 is further configured to: and setting a vocabulary label of the picture contained in the photo album of the mobile terminal.
Optionally, the processor 660 is further configured to: perform object scene recognition and/or facial expression recognition on the pictures contained in the album of the mobile terminal, and set the vocabulary labels of the pictures on the basis of the recognition results.
Thus, in the embodiment of the invention, the lyric text of the music currently played by the mobile terminal is obtained; specific high-frequency words in the lyric text are extracted; a cover picture matched with the specific high-frequency vocabulary is acquired from an album of the mobile terminal, the cover picture being a picture in the album that contains a vocabulary label; and the music cover of the currently played music is generated from the cover picture. In this way, the music cover of the currently played music can be adjusted in a personalized manner for different mobile terminals, the richness of the music cover is improved, and the personalized visual requirements of users are met.
Moreover, in the embodiment of the application, when the specific high-frequency vocabulary includes emotion vocabularies and non-abstract nouns, the word with the highest frequency of occurrence in the lyric text can be extracted; whether the currently extracted word is at least one of an emotion vocabulary and a non-abstract noun is judged; if so, the word is confirmed as a specific high-frequency vocabulary; whether the number of specific high-frequency words has reached a preset number is counted; and if not, the word with the highest frequency of occurrence among the words not yet extracted is taken from the lyric text, and the judging step is repeated until the number of specific high-frequency words reaches the preset number. Therefore, the preset number of specific high-frequency words can be accurately acquired, further improving the accuracy and efficiency of acquiring the specific high-frequency words.
In addition, in the embodiment of the application, vocabulary labels can be set for the pictures contained in the album of the mobile terminal: object scene recognition and/or facial expression recognition is performed on the pictures contained in the album, and the vocabulary labels of the pictures are set on the basis of the recognition results. Therefore, the personalization of the music cover can be further improved, and the personalized visual requirements of the user can be better met.
An embodiment of the present invention further provides a mobile terminal, including a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements each process of the above music cover generation method embodiment and can achieve the same technical effect, which is not described herein again to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program implements each process of the embodiment of the music cover generation method, and can achieve the same technical effect, and in order to avoid repetition, the details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (7)

1. A music cover generation method, applied to a mobile terminal, characterized by comprising the following steps:
acquiring a lyric text of music currently played by the mobile terminal;
extracting specific high-frequency words in the lyric text;
acquiring, from an album of the mobile terminal, a cover picture matching the specific high-frequency vocabulary; wherein the cover picture is a picture in the album that carries a vocabulary label; the album stores pictures associated with the mobile terminal, so that different mobile terminals obtain non-identical cover pictures based on the same specific high-frequency vocabulary;
generating a music cover of the currently played music according to the cover picture;
wherein generating the music cover of the currently played music from the cover picture comprises: if a plurality of frames of cover pictures are matched, displaying the cover pictures in a carousel mode in descending order of the occurrence frequency of the specific high-frequency vocabulary corresponding to each cover picture;
wherein generating the music cover of the currently played music from the cover picture further comprises: if a plurality of frames of cover pictures are matched, setting a display duration for each corresponding cover picture according to the descending order of the occurrence frequency of the specific high-frequency vocabulary corresponding to the plurality of frames of cover pictures, wherein the display duration is directly proportional to the occurrence frequency;
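As an illustration only (not part of the claim language), the proportional display-duration rule above can be sketched in Python; the cycle length `total_seconds` is an assumed parameter, not something the patent specifies:

```python
def carousel_display_times(frequencies, total_seconds=12.0):
    """Allocate carousel display time per cover picture.

    Covers are ordered by the occurrence frequency of their matched
    high-frequency word (highest first), and each display duration is
    directly proportional to that frequency, as the claim requires.
    """
    ordered = sorted(frequencies, reverse=True)  # high-to-low carousel order
    total = sum(ordered)
    return [total_seconds * f / total for f in ordered]
```

For example, with word frequencies 3, 2 and 1 and a 12-second cycle, the three covers would be shown for 6, 4 and 2 seconds respectively.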
when the specific high-frequency vocabulary comprises emotion vocabularies and non-abstract nouns, the step of extracting the specific high-frequency vocabulary in the lyric text comprises the following steps:
extracting the vocabulary with the highest occurrence frequency in the lyric text;
judging whether the vocabulary obtained by current extraction is at least one of emotion vocabulary and non-abstract nouns;
if the vocabulary is at least one of emotion vocabulary and non-abstract nouns, confirming that the vocabulary is a specific high-frequency vocabulary;
counting whether the number of the specific high-frequency words reaches a preset number or not;
and if the number of the specific high-frequency vocabularies does not reach the preset number, extracting from the lyric text the vocabulary with the next highest occurrence frequency after the vocabularies already extracted, and returning to the step of judging whether the currently extracted vocabulary is at least one of an emotion vocabulary and a non-abstract noun, until the number of the specific high-frequency vocabularies reaches the preset number.
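Outside the claim language, the extraction loop of claim 1 can be sketched as follows. The word sets and the whitespace tokenization are illustrative placeholders: a real implementation would use an emotion lexicon and a part-of-speech tagger rather than the hard-coded lists assumed here.

```python
from collections import Counter

# Hypothetical stand-ins for an emotion lexicon and a noun classifier;
# these sets are NOT from the patent, they only make the sketch runnable.
EMOTION_WORDS = {"love", "lonely", "happy", "tears"}
NON_ABSTRACT_NOUNS = {"rain", "city", "sea", "moon"}

def extract_specific_high_frequency_words(lyric_text, preset_number=3):
    """Return up to `preset_number` specific high-frequency words.

    Candidate words are tried in descending order of occurrence
    frequency; a word qualifies if it is an emotion word and/or a
    non-abstract noun, mirroring the claimed "extract the
    next-highest-frequency word and re-judge" loop.
    """
    counts = Counter(lyric_text.lower().split())
    specific = []
    for word, _freq in counts.most_common():
        if word in EMOTION_WORDS or word in NON_ABSTRACT_NOUNS:
            specific.append(word)
            if len(specific) >= preset_number:
                break
    return specific
```

For the toy lyric "love love rain rain rain the the the the city", the most frequent qualifying words come back in descending-frequency order: `["rain", "love", "city"]`.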
2. The method according to claim 1, wherein before the step of obtaining the cover picture matching the specific high-frequency vocabulary in the album of the mobile terminal, the method further comprises:
and setting a vocabulary label of the picture contained in the photo album of the mobile terminal.
3. The method according to claim 2, wherein the step of setting the vocabulary tags of the pictures contained in the album of the mobile terminal comprises:
respectively carrying out object/scene recognition and/or facial expression recognition on the pictures contained in the album of the mobile terminal;
and setting the vocabulary label of the picture on the basis of the recognition result.
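For illustration, the tagging step of claims 2 and 3 can be sketched as a thin wrapper around any recognition backend. The `recognize` callback and the picture dictionaries below are hypothetical stand-ins for an on-device object/scene or facial-expression recognizer, not an API the patent defines:

```python
def set_vocabulary_tags(album, recognize):
    """Attach vocabulary tags to each picture in the album.

    `album` is a list of picture records (dicts); `recognize` is any
    callable returning a list of label words for a picture, e.g. a
    wrapper around a vision model's object/scene or expression output.
    """
    for picture in album:
        picture["tags"] = list(recognize(picture))
    return album
```

Matching a cover picture then reduces to checking whether a specific high-frequency word appears among a picture's tags.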
4. A mobile terminal, comprising:
the lyric text acquisition module is used for acquiring the lyric text of the music currently played by the mobile terminal;
the high-frequency vocabulary extraction module is used for extracting specific high-frequency vocabularies in the lyric text;
the cover picture acquiring module is used for acquiring, from an album of the mobile terminal, a cover picture matching the specific high-frequency vocabulary; wherein the cover picture is a picture in the album that carries a vocabulary label; the album stores pictures associated with the mobile terminal, so that different mobile terminals obtain non-identical cover pictures based on the same specific high-frequency vocabulary;
the music cover generation module is used for generating a music cover of the currently played music by using the cover picture;
the music cover generation module is further used for displaying, when a plurality of frames of cover pictures are matched, the cover pictures in a carousel mode in descending order of the occurrence frequency of the specific high-frequency vocabulary corresponding to each cover picture;
the music cover generation module is further used for setting, if a plurality of frames of cover pictures are matched, a display duration for each corresponding cover picture in descending order of the occurrence frequency of the specific high-frequency vocabulary corresponding to the plurality of frames of cover pictures, wherein the display duration is directly proportional to the occurrence frequency;
when the specific high-frequency vocabulary includes emotion vocabularies and non-abstract nouns, the high-frequency vocabulary extraction module includes:
the first vocabulary extraction submodule is used for extracting the vocabulary with the highest occurrence frequency in the lyric text;
the part of speech judging submodule is used for judging whether the vocabulary obtained by current extraction is at least one of emotion vocabulary and non-abstract nouns;
the specific high-frequency vocabulary confirming submodule is used for confirming that the vocabulary is the specific high-frequency vocabulary if the vocabulary is at least one of emotion vocabulary and non-abstract nouns;
the number counting submodule is used for counting whether the number of the specific high-frequency words reaches a preset number or not;
and the second vocabulary extraction submodule is used for extracting, from the lyric text, the vocabulary with the next highest occurrence frequency after the vocabularies already extracted if the number of the specific high-frequency vocabularies does not reach the preset number, and then triggering the part of speech judging submodule again, until the number of the specific high-frequency vocabularies reaches the preset number.
5. The mobile terminal of claim 4, further comprising:
and the vocabulary label setting module is used for setting vocabulary labels of pictures contained in the photo album of the mobile terminal.
6. The mobile terminal of claim 5, wherein the vocabulary tag setting module comprises:
the picture identification submodule is used for respectively carrying out object/scene recognition and/or facial expression recognition on the pictures contained in the album of the mobile terminal;
and the vocabulary label setting submodule is used for setting the vocabulary labels of the pictures on the basis of the recognition result.
7. A mobile terminal, comprising: memory, processor and computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the music cover generation method of any one of claims 1 to 3.
CN201710698721.8A 2017-08-15 2017-08-15 Music cover generation method and mobile terminal Active CN107590197B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710698721.8A CN107590197B (en) 2017-08-15 2017-08-15 Music cover generation method and mobile terminal

Publications (2)

Publication Number Publication Date
CN107590197A CN107590197A (en) 2018-01-16
CN107590197B true CN107590197B (en) 2021-01-08

Family

ID=61042464

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710698721.8A Active CN107590197B (en) 2017-08-15 2017-08-15 Music cover generation method and mobile terminal

Country Status (1)

Country Link
CN (1) CN107590197B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110389759A (en) * 2018-04-17 2019-10-29 北京搜狗科技发展有限公司 A kind of target interface generation method and device
CN108614656B (en) * 2018-04-28 2020-11-10 网易传媒科技(北京)有限公司 Information processing method, medium, device and computing equipment
CN108921923A (en) * 2018-05-04 2018-11-30 北京物灵智能科技有限公司 A kind of method, apparatus and storage equipment based on music animation
CN110895576A (en) * 2018-09-13 2020-03-20 沈阳美行科技有限公司 Display method and device for terminal screen protection
CN109471930B (en) * 2018-11-08 2021-09-14 浙江大学城市学院 Emotional board interface design method for user emotion

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003316805A (en) * 2002-04-24 2003-11-07 Konica Minolta Holdings Inc Image processing method, program, recording medium and information recording terminal
CN103049564A (en) * 2012-12-31 2013-04-17 广东欧珀移动通信有限公司 Method for batch setting of music album picture covers
CN103793446A (en) * 2012-10-29 2014-05-14 汤晓鸥 Music video generation method and system
CN105930485A (en) * 2016-04-28 2016-09-07 深圳市金立通信设备有限公司 Audio media playing method, communication device and network system

Similar Documents

Publication Publication Date Title
CN107590197B (en) Music cover generation method and mobile terminal
CN104461318B (en) Reading method based on augmented reality and system
CN105159639B (en) Audio cover display methods and device
US20160041981A1 (en) Enhanced cascaded object-related content provision system and method
CN106790120B (en) Terminal equipment and video stream associated information live broadcast control and interaction method
CN109599079A (en) A kind of generation method and device of music
CN107864410B (en) Multimedia data processing method and device, electronic equipment and storage medium
CN105022487A (en) Reading method and apparatus based on augmented reality
CN109194644A (en) Sharing method, device, server and the storage medium of network works
CN112988002A (en) Method and device for processing picture book, electronic equipment and storage medium
US20190110110A1 (en) Recording medium recording content display program, content display method, and information processing apparatus
CN112269898A (en) Background music obtaining method and device, electronic equipment and readable storage medium
US20200090656A1 (en) Sensor Based Semantic Object Generation
CN112911052A (en) Information sharing method and device
US20240118787A1 (en) Video generating method and apparatus, and terminal device and storage medium
CN110414001A (en) Sentence generation method and device, storage medium and electronic device
Travis et al. Routledge handbook of the digital environmental humanities
CN105956963A (en) Habit formation method and device
US20170018203A1 (en) Systems and methods for teaching pronunciation and/or reading
TWI698835B (en) Image processing method and device and computer-readable storage medium
KR101995271B1 (en) Method and appratus for making personalized contents
Manoharan et al. The Soundscape of "Bermulanya Di Sini… Kedah Tua"!
CN105630333B (en) Display device and display methods
Steeds Project Earth and Art’s Exposability
CN110089095A (en) For the method for content-label and the electronic equipment of support the method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant