CN117763224A - Music recommendation method, device, equipment, storage medium and vehicle - Google Patents


Info

Publication number
CN117763224A
CN117763224A
Authority
CN
China
Prior art keywords
music
user
target music
target
style
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211171823.1A
Other languages
Chinese (zh)
Inventor
仇彬
蒙越
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Co Wheels Technology Co Ltd
Original Assignee
Beijing Co Wheels Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Co Wheels Technology Co Ltd filed Critical Beijing Co Wheels Technology Co Ltd
Priority to CN202211171823.1A priority Critical patent/CN117763224A/en
Publication of CN117763224A publication Critical patent/CN117763224A/en
Pending legal-status Critical Current


Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a music recommendation method, a device, equipment, a storage medium and a vehicle. The method comprises the following steps: acquiring a facial image of a user; determining the expression type of the user according to the facial image; determining a target music style corresponding to the expression type from a plurality of music styles; and recommending at least one target music corresponding to the target music style to the user. The music recommendation method can improve the accuracy of music recommendation and thereby improve user satisfaction.

Description

Music recommendation method, device, equipment, storage medium and vehicle
Technical Field
The application belongs to the technical field of music recommendation, and particularly relates to a music recommendation method, device, equipment, storage medium and vehicle.
Background
With the rapid development and growing popularity of Internet technology, music websites and music applications are increasingly numerous, and more and more users listen to music through the Internet.
In the prior art, music is mainly recommended to users from different dimensions based on their historical play records. The recommendation dimensions may be, for example, the same style, the same author, the same album, and the like.
In practice, as the user's emotion changes, the music the user wants to listen to in different time periods may differ in style, author, album, etc. from the music in the historical play record. Music recommended in the above manner is therefore not accurate, resulting in lower user satisfaction.
Disclosure of Invention
The embodiment of the application provides a music recommendation method, device, equipment, storage medium and vehicle, which can improve the accuracy of music recommendation and the satisfaction of users.
In a first aspect, an embodiment of the present application provides a music recommendation method, including:
acquiring a facial image of a user;
determining the expression type of the user according to the facial image;
determining a target music style corresponding to the expression type from a plurality of music styles;
and recommending at least one target music corresponding to the target music style to the user.
In one possible implementation, the method further includes:
playing the at least one target music;
and returning to execute the step of acquiring the facial image of the user within a first preset time period before the playing of the at least one target music is completed.
In one possible implementation manner, the determining the expression type of the user according to the facial image includes:
identifying facial features of the user from the facial image;
and determining the expression type of the user according to the facial features.
In one possible implementation, the acquiring the facial image of the user includes:
acquiring a plurality of facial images of a user in a second preset time period;
the determining the expression type of the user according to the facial image comprises the following steps:
identifying a plurality of facial features of the user from the plurality of facial images, respectively;
and comprehensively analyzing the facial features to determine the expression type of the user.
In one possible implementation, before determining the target music style corresponding to the expression type from among a plurality of music styles, the method further includes:
establishing a corresponding relation between the expression type and the music style;
the determining a target music style corresponding to the expression type from a plurality of music styles comprises the following steps:
and determining a target music style corresponding to the expression type from a plurality of music styles according to the corresponding relation between the expression type and the music style.
In one possible implementation, the number of the target music is a plurality;
the recommending at least one target music corresponding to the target music style to the user comprises:
acquiring a target music recommendation list corresponding to the target music style;
determining recommendation sequences of a plurality of target music according to the target music recommendation list;
and sequentially recommending a plurality of target music corresponding to the target music style to the user according to the recommendation sequence.
In one possible implementation manner, before determining the recommendation order of the plurality of target music according to the target music recommendation list, the method further includes:
counting the playing times of a plurality of pieces of music corresponding to each of the plurality of music styles in a third preset time period according to a historical playing record, wherein the historical playing record comprises at least one of the historical playing record of the user and the historical playing record of the whole network user;
and respectively sequencing the plurality of music corresponding to each music style according to the playing times to obtain a music recommendation list corresponding to each music style.
In one possible implementation, the method further includes:
acquiring the playing progress of each target music in a plurality of target music;
under the condition that the playing progress reaches a preset progress, advancing the recommendation sequence of the target music in the target music recommendation list;
and removing the target music from the target music recommendation list under the condition that the playing progress does not reach the preset progress.
In a second aspect, an embodiment of the present application provides a music recommendation apparatus, including:
the acquisition module is used for acquiring the facial image of the user;
a first determining module, configured to determine an expression type of the user according to the facial image;
a second determining module, configured to determine a target music style corresponding to the expression type from a plurality of music styles;
and the recommending module is used for recommending at least one target music corresponding to the target music style to the user.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the method of any one of the possible implementation methods of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, implement a method according to any one of the possible implementation methods of the first aspect.
In a fifth aspect, embodiments of the present application provide a vehicle comprising at least one of:
a music recommendation device as in any one of the embodiments of the second aspect;
an electronic device as in any of the embodiments of the third aspect;
a computer readable storage medium as in any one of the embodiments of the fourth aspect.
According to the music recommendation method, the device, the equipment, the storage medium and the vehicle, the current emotion of the user can be determined by acquiring the facial image of the user and determining the expression type of the user according to the facial image. Therefore, by recommending at least one target music which accords with the emotion of the user to the user, the accuracy of music recommendation can be improved, and the satisfaction degree of the user can be improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below; a person of ordinary skill in the art may derive other drawings from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of a music recommendation method according to an embodiment of the present application;
fig. 2 is a flowchart of another music recommendation method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a music recommendation device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order that the above objects, features and advantages of the present application may be more clearly understood, a further description of the aspects of the present application will be provided below. It should be noted that, in the case of no conflict, the embodiments of the present application and the features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application; however, the present application may also be practiced otherwise than as described herein. It will be apparent that the embodiments in the specification are only some, rather than all, of the embodiments of the application.
It should be noted that in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
As described in the background section, in order to solve the problems in the prior art, the embodiments of the present application provide a music recommendation method, apparatus, device, storage medium, and vehicle.
The following first describes a music recommendation method provided in the embodiment of the present application.
Fig. 1 shows a flowchart of a music recommendation method according to an embodiment of the present application. As shown in fig. 1, the music recommendation method provided in the embodiment of the present application includes the following steps:
s110, acquiring a facial image of a user;
s120, determining the expression type of the user according to the facial image;
s130, determining a target music style corresponding to the expression type from a plurality of music styles;
s140, recommending at least one target music corresponding to the target music style to the user.
According to the music recommendation method, the current emotion of the user can be determined by acquiring the facial image of the user and determining the expression type of the user according to the facial image. Therefore, by recommending at least one target music which accords with the emotion of the user to the user, the accuracy of music recommendation can be improved, and the satisfaction degree of the user can be improved.
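The four steps S110–S140 can be sketched as a minimal pipeline. All names below are illustrative assumptions rather than anything specified in the patent, and the expression classifier is a stub standing in for the unspecified recognition model:

```python
# Illustrative sketch of S110–S140; all names and data are assumptions.

# Assumed mapping from expression types to candidate music styles (S130).
EXPRESSION_TO_STYLES = {
    "happy":   ["cheerful", "inspirational", "rap"],
    "serious": ["soothing", "light music"],
    "sad":     ["contemplative", "light music"],
}

def classify_expression(facial_image):
    # Stub for S120: a real system would run an expression-recognition
    # model on the image; here the "image" is a dict carrying a label.
    return facial_image.get("label", "serious")

def recommend(facial_image, library):
    expression = classify_expression(facial_image)               # S120
    target_style = EXPRESSION_TO_STYLES[expression][0]           # S130
    return [t["title"] for t in library
            if t["style"] == target_style]                       # S140

library = [
    {"title": "Track A", "style": "cheerful"},
    {"title": "Track B", "style": "soothing"},
]
print(recommend({"label": "happy"}, library))  # ['Track A']
```

The first style in each list is picked here only to keep the sketch deterministic; the patent leaves the choice among multiple matching styles open.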
A specific implementation of each of the above steps is described below.
In some embodiments, in S110, the facial image of the user may be acquired by a camera, and the facial image acquired by the camera may be in the form of a video or a picture.
In some embodiments, in S120, the expression type of the user may be, for example, happy, sad, or serious; that is, the current emotion of the user can be determined from the expression type.
Based on this, in some embodiments, the S120 may specifically include:
identifying facial features of the user from the facial image;
the expression type of the user is determined according to the facial features.
Here, the facial features may be, for example, the curvature of the corners of the mouth, the distance between the eyes and the eyebrows, whether the teeth are exposed, and the like. The facial features of the user can be identified from the facial image by a deep-learning object detection algorithm, for example the Faster R-CNN algorithm or the YOLO algorithm.
In addition, the correspondence between expression types and facial features may be set in advance. On this basis, after the facial features are obtained, the expression type corresponding to the facial features can be determined using a classification algorithm.
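As a sketch, the feature-to-expression correspondence could take the form of a simple rule-based classifier. The feature names and thresholds below are invented for illustration; the patent specifies neither the features' numeric form nor the classification algorithm:

```python
def classify_from_features(mouth_curvature, eyes_to_brow_gap):
    # Hypothetical rules: an upturned mouth suggests "happy"; a downturned
    # mouth together with a small eye-to-eyebrow gap (furrowed brow)
    # suggests "sad"; everything else falls back to "serious".
    if mouth_curvature > 0.3:
        return "happy"
    if mouth_curvature < -0.2 and eyes_to_brow_gap < 1.0:
        return "sad"
    return "serious"

print(classify_from_features(0.5, 1.2))   # happy
print(classify_from_features(-0.4, 0.8))  # sad
```

A production system would more plausibly learn this mapping (e.g. with a trained classifier) rather than hand-code thresholds.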
Based on this, in order to improve the accuracy of determining the expression type, in some embodiments, the step S110 may specifically include:
acquiring a plurality of facial images of the user in a second preset time period.
Based on this, S120 may specifically include:
identifying a plurality of facial features of the user from the plurality of facial images, respectively;
and comprehensively analyzing the facial features to determine the expression type of the user.
Here, the second preset time period may be, for example, 1 minute, 2 minutes, or 5 minutes. The plurality of facial images of the user may be a plurality of images obtained by extracting frames from a video captured within the second preset time period, or a plurality of images obtained by photographing the user within the second preset time period. Each facial image may correspond to one facial feature.
As an example, since each facial feature may correspond to one expression type, a plurality of expression types can be obtained from the plurality of facial features. These expression types may be the same or different. The expression type of the user can then be determined by counting the frequency of each expression type and selecting the most frequent one.
In this way, by determining the expression type of the user based on the plurality of face images in the second preset period, the accuracy of determining the expression type can be improved.
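A minimal sketch of this majority-vote aggregation over the per-frame results (function name assumed):

```python
from collections import Counter

def aggregate_expressions(per_frame_expressions):
    # Count how often each expression type occurs across the frames and
    # keep the most frequent one as the user's expression type.
    counts = Counter(per_frame_expressions)
    return counts.most_common(1)[0][0]

frames = ["happy", "happy", "serious", "happy", "sad"]
print(aggregate_expressions(frames))  # happy
```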
In some embodiments, in S130, a correspondence between expression types and music styles may be preset. The music style may be, for example, cheerful, light music, inspirational, rap, contemplative, soothing, etc. The target music style may be any one of the plurality of music styles.
Based on this, in order to determine a target music style corresponding to the expression type from among the plurality of music styles, in some embodiments, before S130 described above, it may further include:
establishing a corresponding relation between the expression type and the music style;
based on this, S130 may specifically include:
and determining a target music style corresponding to the expression type from the plurality of music styles according to the corresponding relation between the expression type and the music style.
Here, the correspondence between expression types and music styles may be many-to-many: one expression type may correspond to a plurality of music styles, and one music style may correspond to a plurality of expression types. For example, the music styles corresponding to the happy expression type may be cheerful, inspirational, rap, etc.; the music styles corresponding to the serious expression type may be soothing, light music, etc.; and the music styles corresponding to the sad expression type may be contemplative, light music, etc.
In this way, by establishing the correspondence between the expression type and the music style, the target music style corresponding to the expression type can be determined from among the plurality of music styles based on the correspondence.
In some embodiments, in S140, the correspondence between the target music style and the target music may be preset. The correspondence between the target music style and the target music may be a many-to-many relationship, i.e., one target music may correspond to a plurality of target music styles, and one target music style may also correspond to a plurality of target music.
As an example, the correspondence between the target music style and the target music may be directly obtained from internet data, or a deep learning model may be used to determine the target music style to which each target music corresponds.
In order to preferentially recommend the target music with higher matching degree to the user, and improve the user satisfaction, in some embodiments, the number of target music may be plural, and based on this, S140 may specifically include:
acquiring a target music recommendation list corresponding to a target music style;
determining the recommendation sequence of a plurality of target music according to the target music recommendation list;
and sequentially recommending a plurality of target music corresponding to the target music style to the user according to the recommendation sequence.
Here, each target music recommendation list may include a plurality of target music and a recommendation order for each target music. Target music ranked higher in the recommendation order may be target music with a higher degree of matching with the user.
As an example, all target music in the target music recommendation list may be recommended and played to the user in sequence according to the recommendation order. After all the target music has been played, playback may start again from the beginning of the list.
As another example, a preset number of target music may be sequentially recommended to the user according to the recommendation order. For example, the first n pieces of music in the target music recommendation list are recommended to the user, where n may be a positive integer.
In this way, by sequentially recommending a plurality of target music corresponding to the target music style to the user according to the recommendation order, the target music with higher matching degree can be preferentially recommended to the user, and the user satisfaction can be improved.
Based on this, in order to obtain a music recommendation list corresponding to each music style, in some embodiments, before determining the recommendation order of the plurality of target music according to the target music recommendation list, the method may further include:
counting the playing times of a plurality of pieces of music corresponding to each of a plurality of music styles in a third preset time period according to a historical playing record, wherein the historical playing record comprises at least one of a historical playing record of a user and a historical playing record of a whole network user;
and respectively sequencing the plurality of music corresponding to each music style according to the playing times to obtain a music recommendation list corresponding to each music style.
Here, the third preset time period may be, for example, the week, two weeks, or month immediately preceding the establishment of the music recommendation list, and the music recommendation list may be updated once every third preset time period. Each music style may correspond to one music recommendation list or to a plurality of music recommendation lists. If the historical play record is the historical play record of the user, the music recommendation list may be the list corresponding to the user personally. If the historical play record is the historical play record of the whole-network users, the music recommendation list may be the list corresponding to the whole-network users. If the historical play records include both, the music recommendation list may include both a list corresponding to the user and a list corresponding to the whole-network users.
As an example, after counting the number of plays of the plurality of music corresponding to a certain music style, the plurality of music may be sorted in descending order of the number of plays to obtain the music recommendation list corresponding to that music style. The music styles may include the target music style.
In this way, by sorting the plurality of pieces of music corresponding to each music style according to the number of times of play, a music recommendation list corresponding to each music style can be obtained.
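The list-building step above might be sketched as follows, assuming (hypothetically) that the history is a sequence of (track, style) plays recorded within the third preset time period:

```python
from collections import Counter

def build_recommendation_list(history, style):
    # Count plays per track for the given style, then sort by play count
    # in descending order to form the recommendation list.
    counts = Counter(track for track, s in history if s == style)
    return [track for track, _ in counts.most_common()]

history = [
    ("Song X", "cheerful"), ("Song Y", "cheerful"),
    ("Song X", "cheerful"), ("Song Z", "soothing"),
]
print(build_recommendation_list(history, "cheerful"))  # ['Song X', 'Song Y']
```

Running this once per style yields one recommendation list per music style, as the embodiment describes.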
Based on this, in order to improve accuracy of music recommendation and improve user satisfaction, in some embodiments, the method may further include:
acquiring the playing progress of each target music in a plurality of target music;
under the condition that the playing progress reaches the preset progress, advancing the recommendation sequence of the target music in the target music recommendation list;
and removing the target music from the target music recommendation list under the condition that the playing progress does not reach the preset progress.
Here, the preset progress may be, for example, 90%, 95%, or 100%. In addition, the target music recommendation list may include a music recommendation list corresponding to the user personally and a music recommendation list corresponding to the whole-network users.
As an example, if the target music belongs to the music recommendation list corresponding to the user personally and its playing progress has reached the preset progress, its recommendation order in the target music recommendation list may be advanced by one place. If, on the other hand, the target music belongs to the music recommendation list corresponding to the whole-network users and its playing progress has reached the preset progress, it may be moved directly to a preset position in the target music recommendation list, for example the third or fourth position, which is not limited herein.
As another example, if the playing progress of the target music does not reach the preset progress, the target music may be removed from the target music recommendation list, i.e., it is no longer recommended to the user. However, if the user later actively plays the target music, it may participate in the ranking as a candidate when the target music recommendation list is next updated.
Therefore, the target music recommendation list is updated in real time according to the playing progress of the target music, so that the accuracy of music recommendation can be improved, and the satisfaction degree of a user can be improved.
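One possible reading of this update rule, with "advancing the recommendation order" interpreted as moving the track up one place (the function name and the 90% threshold are assumptions):

```python
def update_recommendation_list(rec_list, track, progress, threshold=0.9):
    # If the track was played past the threshold, swap it one place up;
    # otherwise remove it from the recommendation list entirely.
    updated = list(rec_list)
    i = updated.index(track)
    if progress >= threshold:
        if i > 0:
            updated[i - 1], updated[i] = updated[i], updated[i - 1]
    else:
        updated.remove(track)
    return updated

print(update_recommendation_list(["a", "b", "c"], "b", 0.95))  # ['b', 'a', 'c']
print(update_recommendation_list(["a", "b", "c"], "b", 0.40))  # ['a', 'c']
```

For the whole-network list the patent instead describes moving a fully played track directly to a preset position, which would replace the one-place swap with an insert at that index.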
Based on this, to further improve user satisfaction, in some embodiments, it may further include:
playing at least one target music;
and returning to execute the step of acquiring the facial image of the user within a first preset time period before playback of the at least one target music is completed.
Here, the first preset time period may be the same as or different from the second preset time period. Since it takes some time to acquire the facial image of the user and to determine the expression type from it, the step of acquiring the facial image may be executed again within the first preset time period before the at least one target music finishes playing.
On this basis, as an example, the total playing time of the at least one target music may be limited to a certain range. That is, in order to recommend appropriate music to the user according to the user's expression type in real time, the at least one target music may be limited to, for example, 5 or 6 pieces of target music; the specific number is not limited herein.
In some specific examples, if 5 pieces of target music have been recommended to the user, then after the 4th piece finishes playing, the step of acquiring the facial image of the user may be executed again, so that target music corresponding to the current expression type can be recommended immediately once the 5th piece finishes playing.
In other specific examples, if 5 pieces of target music have been recommended to the user, then 15 minutes after the first piece starts playing, the step of acquiring the facial image of the user may be executed again, so that target music corresponding to the current expression type can be recommended immediately once the 5th piece finishes playing.
In this way, in the process of playing the target music, the expression type of the user can be determined again in the preset time period, so that the target music corresponding to the latest expression type is recommended to the user, and the user satisfaction can be further improved.
In order to better describe the whole solution, some specific examples are given based on the above embodiments.
For example, a flowchart of a music recommendation method as shown in fig. 2. The music recommendation method may include S210-S280, which will be explained in detail below.
S210, acquiring a facial image of a user, and determining the expression type of the user based on the facial image;
s220, acquiring a first music recommendation list and a second music recommendation list corresponding to the expression type;
s230, acquiring target music of the first two ranks and one target music of the random ranks in the first music recommendation list, and determining target music of the first two ranks and one target music of the random ranks in the second music recommendation list;
s240, playing the target music according to a preset playing sequence;
s250, determining whether the playing progress of each target music is larger than a preset progress, if so, executing S260, and if not, executing S270;
s260, advancing the recommendation sequence of the target music in the music recommendation list;
s270, removing the target music from the music recommendation list;
s280, determining whether the target music is played completely, if yes, executing S210, and if not, executing S250.
In some specific examples, the first music recommendation list may be, for example, a personal music recommendation list of the user, and the second music recommendation list may be, for example, a music recommendation list of the whole network user.
If the first-ranked target music in the first music recommendation list is denoted A1, the second-ranked A2, and one randomly ranked target music (excluding the top two) A3, and likewise B1, B2 and B3 for the second music recommendation list, the preset play order may be, for example: A1-B1-A2-B2-A3-B3.
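The A1-B1-A2-B2-A3-B3 order can be produced by interleaving the selections from the two lists (a sketch; the function and parameter names are assumptions):

```python
def interleave(personal_picks, network_picks):
    # Alternate tracks from the personal list and the whole-network list,
    # yielding A1-B1-A2-B2-... for equal-length inputs.
    order = []
    for a, b in zip(personal_picks, network_picks):
        order.extend([a, b])
    return order

print(interleave(["A1", "A2", "A3"], ["B1", "B2", "B3"]))
# ['A1', 'B1', 'A2', 'B2', 'A3', 'B3']
```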
Further, if the recommendation order of a target music in the music recommendation list is advanced, it can be determined that the target music has been played. Likewise, if a target music is removed from the music recommendation list, it can also be determined that the target music has been played. Here, the music recommendation list may include the first music recommendation list and the second music recommendation list.
On this basis, after all the target music has been played, the step of acquiring the facial image of the user may be executed again. Alternatively, that step may be executed again within a first preset time period before the target music finishes playing.
Thus, by acquiring the facial image of the user and determining the expression type of the user from the facial image, the current emotion of the user can be determined. Therefore, by recommending at least one target music which accords with the emotion of the user to the user, the accuracy of music recommendation can be improved, and the satisfaction degree of the user can be improved. In addition, in the process of playing the target music, the expression type of the user can be determined again in a preset time period, so that the target music corresponding to the latest expression type is recommended to the user, and the user satisfaction can be further improved.
Based on the music recommendation method provided by the embodiment, correspondingly, the application also provides a specific implementation mode of the music recommendation device. Please refer to the following examples.
As shown in fig. 3, the music recommendation device 300 provided in the embodiment of the present application includes the following modules:
an acquisition module 310 for acquiring a face image of a user;
a first determining module 320, configured to determine an expression type of the user according to the facial image;
a second determining module 330, configured to determine a target music style corresponding to the expression type from the plurality of music styles;
a recommending module 340, configured to recommend at least one target music corresponding to the target music style to the user.
The music recommendation device 300 will be described in detail, specifically as follows:
in some of these embodiments, the music recommendation device 300 may further include:
the playing module is used for playing at least one target music;
and the execution module is used for returning to execute the acquisition of the facial image of the user in a first preset time period before the completion of playing of at least one piece of target music.
In some of these embodiments, the first determining module 320 may specifically include:
a first recognition sub-module for recognizing facial features of the user from the facial image;
and the first determining submodule is used for determining the expression type of the user according to the facial features.
In some of these embodiments, the acquisition module 310 may specifically include:
a first acquisition sub-module, configured to acquire a plurality of facial images of the user within a second preset time period.
Based on this, the first determining module 320 may further include:
a second recognition sub-module, configured to recognize a plurality of facial features of the user from the plurality of facial images, respectively;
and an analysis module, configured to comprehensively analyze the plurality of facial features and determine the expression type of the user.
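The patent does not say how the plurality of facial features are "comprehensively analyzed"; one plausible sketch is a majority vote over per-frame expression labels:

```python
from collections import Counter

def aggregate_expressions(frame_expressions):
    """Combine per-frame expression labels (from the images captured over the
    second preset time period) into a single expression type.

    A simple majority vote; this aggregation rule is an assumption, as the
    patent only requires that the features be comprehensively analyzed."""
    if not frame_expressions:
        raise ValueError("need at least one classified frame")
    # most_common(1) returns [(label, count)] for the most frequent label
    return Counter(frame_expressions).most_common(1)[0][0]
```

Averaging over several frames makes the result robust to a single mis-classified or blinking frame.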
In some of these embodiments, the music recommendation device 300 may further include:
an establishing module, configured to establish a correspondence between expression types and music styles before the target music style corresponding to the expression type is determined from the plurality of music styles.
based on this, the second determining module 330 may specifically include:
a second determining sub-module, configured to determine the target music style corresponding to the expression type from the plurality of music styles according to the correspondence between expression types and music styles.
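A minimal sketch of the established correspondence, assuming it is a simple lookup table; the concrete pairings and the fallback style are illustrative, not taken from the patent:

```python
def build_correspondence(pairs):
    """Establishing module: build the expression-type → music-style
    correspondence from (expression, style) pairs; later pairs override
    earlier ones."""
    return dict(pairs)

def target_style(correspondence, expression, default="classical"):
    """Second determining sub-module: look up the target music style,
    falling back to an assumed default style when the expression type has
    no configured correspondence."""
    return correspondence.get(expression, default)
```

Keeping the mapping as data rather than code means the pairing can be tuned per user or per market without changing the recommendation logic.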
In some of these embodiments, there may be a plurality of pieces of target music;
based on this, the recommendation module 340 may specifically include:
a second acquisition sub-module, configured to acquire a target music recommendation list corresponding to the target music style;
a third determining sub-module, configured to determine a recommendation order of the plurality of target music according to the target music recommendation list;
and a recommending sub-module, configured to recommend, in the recommendation order, the plurality of target music corresponding to the target music style to the user.
In some of these embodiments, the recommendation module 340 may specifically further include:
a statistics sub-module, configured to count, according to a history play record and before the recommendation order of the plurality of target music is determined from the target music recommendation list, the number of times each of a plurality of pieces of music corresponding to each of the plurality of music styles was played within a third preset time period, wherein the history play record comprises at least one of the user's own history play record and the history play records of users across the whole network;
and a sorting sub-module, configured to sort, by play count, the plurality of pieces of music corresponding to each music style, obtaining a music recommendation list corresponding to each music style.
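The statistics and sorting sub-modules can be sketched as follows, under an assumed record schema of (timestamp, style, track); the patent specifies only that play counts within the third preset time period are counted and then sorted:

```python
from collections import defaultdict

def build_recommendation_lists(history, window_start, window_end):
    """history: iterable of (timestamp, style, track) play records, which may
    mix the user's own records with all-network records (the patent allows
    either or both). Returns {style: tracks sorted by play count, descending}.
    The field layout is an assumed schema, not specified by the patent."""
    counts = defaultdict(lambda: defaultdict(int))
    for ts, style, track in history:
        if window_start <= ts <= window_end:   # third preset time period
            counts[style][track] += 1
    return {
        style: [t for t, _ in sorted(tracks.items(),
                                     key=lambda kv: kv[1], reverse=True)]
        for style, tracks in counts.items()
    }
```

Records outside the time window are ignored, so stale listening habits do not dominate the list.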
In some of these embodiments, the recommendation module 340 may specifically further include:
a third acquisition sub-module, configured to acquire the playing progress of each of the plurality of target music;
a forward-moving sub-module, configured to move the target music forward in the recommendation order of the target music recommendation list when its playing progress reaches a preset progress;
and a removing sub-module, configured to remove the target music from the target music recommendation list when its playing progress does not reach the preset progress.
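A sketch of the forward-moving and removing rules applied to one just-played target music; the 80% threshold and the one-position step are assumptions, since the patent fixes neither the preset progress nor how far the order moves forward:

```python
def update_recommendation_list(rec_list, title, progress, threshold=0.8):
    """Apply the two rules to one just-played target music:
    if the user played past the preset progress `threshold` (fraction of the
    track), move it one position forward in the recommendation list;
    otherwise remove it. Threshold and step size are illustrative."""
    updated = list(rec_list)
    i = updated.index(title)
    if progress >= threshold:
        if i > 0:  # already first: nothing to move past
            updated[i - 1], updated[i] = updated[i], updated[i - 1]
    else:
        updated.pop(i)
    return updated
```

Over repeated sessions this implicitly learns the user's preferences: skipped tracks drop out while fully played ones climb toward the top.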
With the music recommendation device above, the user's current emotion can be determined by acquiring a facial image of the user and determining the user's expression type from that image. Recommending at least one piece of target music that matches this emotion therefore improves the accuracy of music recommendation and the user's satisfaction.
Based on the music recommendation method provided by the above embodiments, the embodiments of the present application further provide a specific implementation of an electronic device. Fig. 4 is a schematic diagram of an electronic device 400 according to an embodiment of the present application.
The electronic device 400 may include a processor 410 and a memory 420 storing computer program instructions.
In particular, the processor 410 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
Memory 420 may include mass storage for data or instructions. By way of example, and not limitation, memory 420 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 420 may include removable or non-removable (or fixed) media, where appropriate. Memory 420 may be internal or external to the electronic device 400, where appropriate. In a particular embodiment, the memory 420 is a non-volatile solid-state memory.
The memory may include read-only memory (ROM), random-access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, or electrical, optical, or other physical/tangible memory storage devices. Thus, in general, the memory includes one or more tangible (non-transitory) computer-readable storage media (e.g., memory devices) encoded with software comprising computer-executable instructions which, when executed (e.g., by one or more processors), are operable to perform the operations described with reference to the methods of the present application.
The processor 410 implements any of the music recommendation methods of the above embodiments by reading and executing computer program instructions stored in the memory 420.
In one example, electronic device 400 may also include communication interface 430 and bus 440. As shown in fig. 4, the processor 410, the memory 420, and the communication interface 430 are connected and communicate with each other through a bus 440.
The communication interface 430 is mainly used to implement communication between each module, apparatus, unit and/or device in the embodiments of the present application.
Bus 440 includes hardware, software, or both that couple the components of the electronic device to one another. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, another suitable bus, or a combination of two or more of the above. Bus 440 may include one or more buses, where appropriate. Although the embodiments of the present application describe and illustrate a particular bus, the present application contemplates any suitable bus or interconnect.
By way of example, the electronic device 400 may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle electronic device, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or the like.
The electronic device can execute the music recommendation method of the embodiments of the present application, thereby implementing the music recommendation method and device described with reference to Figs. 1 to 3.
In addition, in combination with the music recommendation method of the above embodiments, the embodiments of the present application may be implemented by providing a computer storage medium. The computer storage medium stores computer program instructions; when executed by a processor, the computer program instructions implement any of the music recommendation methods of the above embodiments.
In addition, the embodiment of the application also provides a vehicle, which may include at least one of the following:
a music recommendation device as in any one of the embodiments of the second aspect;
an electronic device as in any of the embodiments of the third aspect;
a computer-readable storage medium as in any one of the embodiments of the fourth aspect. Details are not repeated here.
It should be clear that the present application is not limited to the particular arrangements and processes described above and illustrated in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications, and additions, or change the order between steps, after appreciating the spirit of the present application.
The functional blocks shown in the above structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an application-specific integrated circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the present application are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuits, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber-optic media, Radio Frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the Internet or an intranet.
It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be different from the order in the embodiments, or several steps may be performed simultaneously.
Aspects of the present application are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to being, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware which performs the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In the foregoing, only the specific embodiments of the present application are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, which are intended to be included in the scope of the present application.

Claims (12)

1. A music recommendation method, comprising:
acquiring a facial image of a user;
determining the expression type of the user according to the facial image;
determining a target music style corresponding to the expression type from a plurality of music styles;
and recommending at least one target music corresponding to the target music style to the user.
2. The method according to claim 1, wherein the method further comprises:
playing the at least one target music;
and returning to the step of acquiring the facial image of the user within a first preset time period before the playing of the at least one target music is completed.
3. The method of claim 1, wherein the determining the type of expression of the user from the facial image comprises:
identifying facial features of the user from the facial image;
and determining the expression type of the user according to the facial features.
4. A method according to claim 3, wherein said acquiring a facial image of a user comprises:
acquiring a plurality of facial images of the user within a second preset time period;
the determining the expression type of the user according to the facial image comprises:
identifying a plurality of facial features of the user from the plurality of facial images, respectively;
and comprehensively analyzing the facial features to determine the expression type of the user.
5. The method of claim 1, wherein prior to determining a target musical style corresponding to the expression type from a plurality of musical styles, the method further comprises:
establishing a corresponding relation between the expression type and the music style;
the determining a target music style corresponding to the expression type from the plurality of music styles comprises:
and determining a target music style corresponding to the expression type from a plurality of music styles according to the corresponding relation between the expression type and the music style.
6. The method of claim 1, wherein the number of target music is a plurality;
the recommending at least one target music corresponding to the target music style to the user comprises:
acquiring a target music recommendation list corresponding to the target music style;
determining recommendation sequences of a plurality of target music according to the target music recommendation list;
and sequentially recommending a plurality of target music corresponding to the target music style to the user according to the recommendation sequence.
7. The method of claim 6, wherein prior to determining the recommendation order for the plurality of target music from the target music recommendation list, the method further comprises:
counting the playing times of a plurality of pieces of music corresponding to each of the plurality of music styles in a third preset time period according to a historical playing record, wherein the historical playing record comprises at least one of the historical playing record of the user and the historical playing record of the whole network user;
and respectively sequencing the plurality of music corresponding to each music style according to the playing times to obtain a music recommendation list corresponding to each music style.
8. The method of claim 6, wherein the method further comprises:
acquiring the playing progress of each target music in a plurality of target music;
under the condition that the playing progress reaches a preset progress, advancing the recommendation sequence of the target music in the target music recommendation list;
and removing the target music from the target music recommendation list under the condition that the playing progress does not reach the preset progress.
9. A music recommendation device, the device comprising:
the acquisition module is used for acquiring the facial image of the user;
a first determining module, configured to determine an expression type of the user according to the facial image;
a second determining module, configured to determine a target music style corresponding to the expression type from a plurality of music styles;
and the recommending module is used for recommending at least one target music corresponding to the target music style to the user.
10. An electronic device, the electronic device comprising: a processor and a memory storing computer program instructions;
wherein the computer program instructions, when executed by the processor, implement the music recommendation method according to any one of claims 1-8.
11. A computer readable storage medium, having stored thereon computer program instructions, which when executed by a processor, implement a music recommendation method according to any of claims 1-8.
12. A vehicle, comprising at least one of:
the music recommendation device of claim 9;
the electronic device of claim 10;
the computer readable storage medium of claim 11.
CN202211171823.1A 2022-09-26 2022-09-26 Music recommendation method, device, equipment, storage medium and vehicle Pending CN117763224A (en)


Publications (1)

Publication: CN117763224A, published 2024-03-26



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination