CN111061908A - Recommendation method and system for movie and television soundtrack author - Google Patents

Recommendation method and system for movie and television soundtrack author

Info

Publication number: CN111061908A
Application number: CN201911275715.7A
Authority: CN (China)
Other languages: Chinese (zh)
Other versions: CN111061908B (granted publication)
Prior art keywords: music, attribute information, movie, video, pair
Inventors: 刘杉, 张嘉媛, 刘明洋, 郭佳琪, 郭宇, 蔡嘉诚
Applicant and current assignee: Communication University of China
Legal status: application granted; active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/63Querying
    • G06F16/635Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually

Abstract

The disclosure relates to a recommendation method and system for movie and television soundtrack authors. The method includes: determining the degree of matching between first movie attribute information of a first movie fragment to be dubbed and a plurality of pieces of second movie attribute information, where the plurality of pieces of second movie attribute information are the movie attribute information corresponding to a plurality of preset first music pieces; determining third movie attribute information from the plurality of pieces of second movie attribute information, where the third movie attribute information is the at least one piece of movie attribute information with the highest degree of matching with the first movie attribute information; and determining the author of the second music piece corresponding to the third movie attribute information as the soundtrack author of the first movie fragment. The embodiments of the disclosure can improve the pertinence and accuracy of soundtrack author recommendation, and can thereby improve the production efficiency of the whole movie and television work.

Description

Recommendation method and system for movie and television soundtrack author
Technical Field
The disclosure relates to the technical field of information processing, and in particular to a recommendation method and system for movie and television soundtrack authors.
Background
A good movie or television work cannot do without an excellent soundtrack. At present, however, finding a suitable soundtrack author for a movie or television work is usually based on experience or manual recommendation, which takes a long time and reduces the production efficiency of movie and television works.
Disclosure of Invention
In view of the above, the present disclosure provides a recommendation method and system for movie and television soundtrack authors.
According to an aspect of the present disclosure, there is provided a recommendation method for a movie soundtrack author, the method comprising:
determining the matching degree of first video attribute information of a first video segment to be dubbed music and a plurality of second video attribute information, wherein the plurality of second video attribute information are video attribute information corresponding to a plurality of preset first music;
determining third movie attribute information from the plurality of second movie attribute information, wherein the third movie attribute information is at least one movie attribute information with the highest matching degree with the first movie attribute information in the plurality of second movie attribute information;
and determining the author of the second music piece corresponding to the third movie attribute information as the soundtrack author of the first movie fragment.
In one possible implementation, the method further includes:
establishing a movie and television music library, wherein the movie and television music library comprises a plurality of music-movie pairs, and each music-movie pair comprises a first music and a second movie and television fragment corresponding to the first music;
determining a first fitting coefficient according to a preset fitting model, the music attribute information of a first music in the plurality of music-video pairs and the fourth video attribute information of a second video fragment;
and respectively determining second film and television attribute information corresponding to each first music according to the fitting model, the first fitting coefficient and the music attribute information of each first music.
In one possible implementation, the movie library comprises N track-movie pairs, N being a positive integer,
determining a first fitting coefficient according to a preset fitting model, music attribute information of a first music in a plurality of music-video pairs and fourth video attribute information of a second video fragment, wherein the first fitting coefficient comprises the following steps:
determining a second fitting coefficient corresponding to the 1 st music-video pair according to a preset fitting model, the music attribute information of the first music in the 1 st music-video pair and the fourth video attribute information of the second video fragment, wherein the 1 st music-video pair is one of the N music-video pairs;
determining fifth video attribute information corresponding to the first music in the ith music-video pair according to the fitting model, the second fitting coefficient corresponding to the (i-1) th music-video pair and the music attribute information of the first music in the ith music-video pair, wherein i is a positive integer and is more than or equal to 2 and less than or equal to N;
determining a second fitting coefficient corresponding to the ith music-video pair according to fourth video attribute information of a second video fragment in the ith music-video pair, music attribute information of a first music in the ith music-video pair, fifth video attribute information corresponding to the first music in the ith music-video pair and a second fitting coefficient corresponding to the (i-1) th music-video pair;
and determining a second fitting coefficient corresponding to the Nth music-video pair as the first fitting coefficient.
In one possible implementation manner, determining a second fitting coefficient corresponding to the ith music-video pair according to fourth video attribute information of a second video segment in the ith music-video pair, music attribute information of a first music in the ith music-video pair, fifth video attribute information corresponding to the first music in the ith music-video pair, and a second fitting coefficient corresponding to the ith-1 music-video pair includes:
determining a first coefficient deviation value corresponding to the ith music-video pair according to fourth video attribute information of a second video segment in the ith music-video pair, music attribute information of a first music in the ith music-video pair, fifth video attribute information corresponding to the first music in the ith music-video pair and preset weight;
and determining the difference value of the second fitting coefficient corresponding to the i-1 th music-video pair and the first coefficient deviation value as the second fitting coefficient corresponding to the i-th music-video pair.
In one possible implementation manner, determining a first coefficient bias value corresponding to an ith music-video pair according to fourth video attribute information of a second video segment in the ith music-video pair, music attribute information of a first music in the ith music-video pair, fifth video attribute information corresponding to the first music in the ith music-video pair, and a preset weight includes:
determining the difference value of fifth movie attribute information corresponding to the first music in the ith music-movie pair and fourth movie attribute information of the second movie fragment in the ith music-movie pair as a movie attribute information deviation;
determining the product of the movie and television attribute information deviation and the music attribute information of the first music in the ith music-movie pair as a second coefficient deviation value corresponding to the ith music-movie pair;
and determining the product of the second coefficient deviation value and the preset weight as a first coefficient deviation value corresponding to the ith music-video pair.
In one possible implementation manner, determining second movie attribute information corresponding to each first music according to the fitting model, the first fitting coefficient, and music attribute information of each first music respectively includes:
and for any first music, inputting the first fitting coefficient and the music attribute information of the first music into the fitting model for processing to obtain second film and television attribute information corresponding to the first music.
In a possible implementation manner, determining a matching degree between first movie attribute information and a plurality of second movie attribute information of a first movie fragment to be dubbed includes:
determining the Euclidean distance between the first movie attribute information of the first movie fragment to be dubbed and any one piece of the second movie attribute information;
and determining the matching degree of the first video attribute information and the second video attribute information according to the Euclidean distance.
In one possible implementation manner, the first movie attribute information includes a movie type, an emotion type, a plot effect, and a shooting method of the first movie fragment.
In one possible implementation manner, the music attribute information includes the rhythm, mode type, harmony progression type, orchestration, emotion type, style type, and sound-picture relationship of the first music piece, where the rhythm includes the beat and the tempo.
According to another aspect of the present disclosure, there is provided a recommendation system for movie and television soundtrack authors, the system comprising:
the matching module is used for determining the matching degree of first video attribute information of a first video segment to be dubbed and a plurality of second video attribute information, wherein the plurality of second video attribute information are video attribute information corresponding to a plurality of preset first music;
the selecting module is used for determining third movie attribute information from the plurality of second movie attribute information, wherein the third movie attribute information is at least one movie attribute information with the highest matching degree with the first movie attribute information in the plurality of second movie attribute information;
and the determining module is used for determining the author of the second music piece corresponding to the third movie attribute information as the soundtrack author of the first movie fragment.
According to the embodiments of the disclosure, at least one music piece with the highest degree of matching with the movie fragment to be dubbed can be selected from the plurality of music pieces according to the degree of matching between the movie attribute information of the movie fragment to be dubbed and the preset movie attribute information corresponding to the plurality of music pieces, and the author of that music piece is determined as the soundtrack author of the movie fragment. In this way, a soundtrack author can be recommended for a movie fragment according to the degree of matching of movie attribute information, which improves the pertinence and accuracy of soundtrack author recommendation and can further improve the production efficiency of the whole movie and television work.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of a recommendation method of a movie soundtrack author according to an embodiment of the present disclosure.
Fig. 2 shows a block diagram of a recommendation system for movie soundtrack authors according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of a recommendation method of a movie soundtrack author according to an embodiment of the present disclosure. As shown in fig. 1, the method includes:
step S100, determining the matching degree of first film and television attribute information of a first film and television segment to be dubbed and a plurality of second film and television attribute information, wherein the plurality of second film and television attribute information are film and television attribute information corresponding to a plurality of preset first music;
step S200, determining third movie attribute information from the plurality of second movie attribute information, wherein the third movie attribute information is at least one movie attribute information with the highest matching degree with the first movie attribute information in the plurality of second movie attribute information;
step S300, determining the author of the second music piece corresponding to the third movie attribute information as the soundtrack author of the first movie fragment.
According to the embodiments of the disclosure, at least one music piece with the highest degree of matching with the movie fragment to be dubbed can be selected from the plurality of music pieces according to the degree of matching between the movie attribute information of the movie fragment to be dubbed and the preset movie attribute information corresponding to the plurality of music pieces, and the author of that music piece is determined as the soundtrack author of the movie fragment. In this way, a soundtrack author can be recommended for a movie fragment according to the degree of matching of movie attribute information, which improves the pertinence and accuracy of soundtrack author recommendation and can further improve the production efficiency of the whole movie and television work.
In one possible implementation, the movie or television work described in the embodiments of the present disclosure may refer to a work that is filmed on a substance (e.g., a film, a storable medium, etc.), is composed of a series of pictures with or without accompanying sound, and is shown and played by a suitable device (e.g., a player), and may include movie works and works created by a method similar to filming (e.g., a tv show, a video work, etc.). The present disclosure does not limit the details of the movie work.
In a possible implementation manner, when determining the soundtrack author of a movie and television work, the work may be divided into a plurality of movie fragments, soundtrack author recommendation is performed for each movie fragment to be dubbed, and one or more soundtrack authors of the whole work may then be determined according to, for example, the recommendation frequency of each soundtrack author and the weight of each movie fragment. The present disclosure does not limit the manner of determining the soundtrack authors of a movie and television work or their specific number.
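As a minimal sketch of this aggregation step (the weighting scheme, the function, and all names are illustrative assumptions; the disclosure leaves the exact manner open), in Python:

from collections import defaultdict

def aggregate_authors(fragment_recommendations, fragment_weights, top_n=1):
    # fragment_recommendations: one list of recommended authors per movie fragment.
    # fragment_weights: one weight per movie fragment.
    # Each author is scored by the summed weights of the fragments for which the
    # author was recommended; the top_n authors are returned.
    scores = defaultdict(float)
    for authors, weight in zip(fragment_recommendations, fragment_weights):
        for author in authors:
            scores[author] += weight
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_n]

# Example: three fragments, the middle one weighted most heavily.
print(aggregate_authors([["A"], ["B"], ["A", "B"]], [0.2, 0.5, 0.3], top_n=1))  # ['B']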
In a possible implementation manner, before the soundtrack author of the first movie fragment to be dubbed is determined, the plurality of preset first music pieces and the second movie attribute information corresponding to each first music piece can be determined. The recommendation method for movie and television soundtrack authors may further include the following steps:
establishing a movie and television music library, wherein the movie and television music library comprises a plurality of music-movie pairs, and each music-movie pair comprises a first music and a second movie and television fragment corresponding to the first music;
determining a first fitting coefficient according to a preset fitting model, the music attribute information of a first music in the plurality of music-video pairs and the fourth video attribute information of a second video fragment;
and respectively determining second film and television attribute information corresponding to each first music according to the fitting model, the first fitting coefficient and the music attribute information of each first music.
In a possible implementation, a plurality of soundtrack movie fragments can be selected by choosing classic soundtracks, classic movie fragments, or a combination of the two, and a movie music library comprising a plurality of music-movie pairs can then be established from the plurality of soundtrack movie fragments. Each music-movie pair comprises a first music piece and a second movie fragment corresponding to that first music piece.
For example, when the movie music library is established, a plurality of classic soundtrack movie fragments can be selected from a plurality of released movie and television works; a plurality of music-movie pairs are then determined from the plurality of soundtrack movie fragments: for any one of the soundtrack movie fragments, the soundtrack of that fragment can be determined as the first music piece of a music-movie pair, and the fragment itself can be determined as the second movie fragment corresponding to that first music piece. For example, a music-movie pair can be determined from the soundtrack movie fragment of "XX Earth" 2:39-6:52: the soundtrack "global XX and XX earth plan" of this fragment is determined as the first music piece of the music-movie pair, and the fragment of "XX Earth" 2:39-6:52 is determined as the second movie fragment corresponding to the first music piece in the music-movie pair.
In a possible implementation manner, after the movie and television music library is established, the music attribute information of the first music and the fourth movie and television attribute information of the second movie and television segment in each music-movie pair can be respectively determined.
In one possible implementation, the music attribute information of the first music piece may include the rhythm, mode type, harmony progression type, orchestration, emotion type, style type, and sound-picture relationship of the first music piece.
In one possible implementation, the rhythm of the first music piece may include the beat and the tempo. The beat indicates the total note length of each bar of the first music piece and may be represented by a time signature, denoted N/M (where N represents the number of beats per bar and M represents the basic note value of one beat), for example, 2/4, 3/4, 4/4, 6/8, 7/8, 9/8, etc.; the tempo may be expressed in beats per minute (BPM), e.g., BPM 90.
In one possible implementation, in determining the rhythm of the first music piece, all the different rhythms appearing in the first music piece may be analyzed; rhythms whose duration from appearance to end is less than or equal to a first preset duration (e.g., the duration of 4 bars) are discarded, and the remaining rhythms are determined as the rhythm of the first music piece.
The time length of each bar can be determined from the beat and the tempo of the first music piece. For example, the time length t of each bar can be determined by the following formula (1):
t = 60 × N / BPM    (1)
where N is the number of beats per bar given by the beat and BPM is the tempo in beats per minute.
For example, the rhythm of the first music piece "global XX and XX earth plan" includes the beat 4/4 and the tempo BPM 90.
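As a worked illustration (assuming the reconstruction of formula (1) above), the length of one bar of this first music piece would be t = 60 × 4 / 90 ≈ 2.67 seconds, so the first preset duration of 4 bars mentioned above corresponds to roughly 10.7 seconds of music.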
It is to be understood that the rhythm (including the beat and the tempo) can be determined by a person skilled in the art from the score of the first music piece in combination with the actual situation, and the present disclosure does not limit this.
In one possible implementation manner, mode analysis may be performed on the first music piece according to preset mode types, so as to determine one or more main modes of the first music piece. The preset mode types may include: natural major (i.e., the Ionian mode, e.g., C major), natural minor (i.e., the Aeolian mode), harmonic minor, melodic minor, the Chinese pentatonic modes (gong, shang, jue, zhi, yu), other church modes (church modes other than Ionian and Aeolian), and others (including Japanese scales, the major Phrygian scale, blues scales, various hexatonic scales, whole-tone scales, etc.).
In one possible implementation, when performing mode analysis on the first music piece, the initial mode and the modes confirmed after modulation may be determined as the main modes of the first music piece. A mode after modulation can be confirmed in the following manner: a modulation is considered to have occurred only when an established key is explicitly abandoned (e.g., for longer than a second preset duration) and another key is re-established by its harmony and cadence; other appearances of non-scale tones that do not belong to the current key are regarded as temporary departures within a region of the key rather than as modulations. For example, a new key whose duration from appearance to end is less than or equal to 5% of the duration of the whole piece, or less than or equal to 2 bars, may be discarded, and the other new keys are taken as the modes confirmed after modulation. Those skilled in the art can confirm the modes after modulation according to the actual situation, and the present disclosure does not limit this.
For example, for the first music, "global XX and" XX earth plan ", the first music 0:00-1:40 is b natural minor and 1: 40-music is c natural minor, then the key style of the first music is natural minor.
In one possible implementation, the harmony progression type of the first music piece may be determined according to the complexity of its harmony progression. The complexity of the harmony progression may include the frequency of harmonic change, the aggressiveness of the off-key chords (i.e., the distance between the harmonic field reached by the off-key and the natural form of the current key), and the frequency of the off-key chords. The harmony progression types may include conservative, neutral, and aggressive.
In one possible implementation, the frequency of harmonic change, the aggressiveness of the off-key, and the frequency of the off-key can each be classified into three levels: 0, 1, and 2.
The level of the frequency of harmonic change of the first music piece may be determined according to its average harmonic change frequency. For example, when the average harmonic change frequency of the first music piece is less than or equal to 1 change per bar, the level of the frequency of harmonic change is 0; when the average harmonic change frequency is greater than or equal to 2 changes per bar, the level is 2; in other cases, the level is 1.
In one possible implementation, the level of the aggressiveness of the off-key of the first music piece may be determined in the following manner: when the first music piece uses only the natural chords of the current key, secondary dominant chords (for example, II7, II9, #IVdim7, etc.), and IVm (i.e., borrowed from the harmonic major), the level of the aggressiveness of the off-key is 0; when the chords used in the first music piece include not only the level-0 chords but also subordinate chords borrowed from other modes and artificial chords such as augmented triads, diminished triads, and half-diminished sevenths built on the natural degrees of the key, the level of the aggressiveness of the off-key is 1; when the first music piece uses chords other than the above (for example, other chords in the minor subdominant field, various wandering chords, and the like), the level of the aggressiveness of the off-key is 2.
In one possible implementation, the level of the frequency of the off-key of the first music piece may be determined in the following manner: when the proportion of chords containing non-scale tones in the harmony progression of the first music piece is greater than or equal to a first preset proportion (e.g., 30%), the level of the frequency of the off-key is 2; when that proportion is less than or equal to a second preset proportion (e.g., 5%), the level of the frequency of the off-key is 0, where the second preset proportion is less than the first preset proportion; in other cases, the level of the frequency of the off-key is 1.
After the level of the frequency of harmonic change, the level of the aggressiveness of the off-key, and the level of the frequency of the off-key of the first music piece are determined, the harmony progression type of the first music piece may be determined according to the sum of the three levels. For example, when the sum of the three levels is 0, 1, or 2, the harmony progression type of the first music piece is conservative; when the sum is 3, the harmony progression type is neutral; when the sum is 4, 5, or 6, the harmony progression type is aggressive.
For example, for the first music piece "global XX and XX earth plan", analysis shows that the level of the frequency of harmonic change is 0; the level of the aggressiveness of the off-key is 2, since at 1:31 it uses a harmony with the nature of the Neapolitan sixth chord (bII) to assist the off-key; and the level of the frequency of the off-key is 0. The sum of the three levels is 2, so the harmony progression type of the first music piece can be determined to be conservative.
It should be understood that the specific levels of the frequency of harmonic change, the aggressiveness of the off-key, and the frequency of the off-key can be determined by those skilled in the art according to the actual situation, and the present disclosure is not limited thereto.
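The three-level grading and the summation rule described above can be sketched as follows (a minimal Python illustration assuming the thresholds given in this section; the function and parameter names are hypothetical):

def harmony_progression_type(avg_changes_per_bar, off_key_aggressiveness, off_key_chord_ratio):
    # Level of the frequency of harmonic change (0, 1 or 2).
    if avg_changes_per_bar <= 1:
        change_level = 0
    elif avg_changes_per_bar >= 2:
        change_level = 2
    else:
        change_level = 1
    # Level of the aggressiveness of the off-key (0, 1 or 2), assumed to be
    # graded beforehand according to the chord types used.
    aggressiveness_level = off_key_aggressiveness
    # Level of the frequency of the off-key (0, 1 or 2), from the proportion
    # of chords containing non-scale tones in the harmony progression.
    if off_key_chord_ratio >= 0.30:      # first preset proportion
        off_key_level = 2
    elif off_key_chord_ratio <= 0.05:    # second preset proportion
        off_key_level = 0
    else:
        off_key_level = 1
    total = change_level + aggressiveness_level + off_key_level
    if total <= 2:
        return "conservative"
    if total == 3:
        return "neutral"
    return "aggressive"

# Example from the text: levels 0 + 2 + 0 = 2, hence conservative.
print(harmony_progression_type(1.0, 2, 0.0))  # conservative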
In one possible implementation, the instrumentation of the first music piece may be analyzed according to preset instrument types, and at least one instrument in the orchestration of the first music piece may be determined. The preset instrument types may include: Accordion (Organ), Bass, Brass Ensemble, Trumpet, Trombone, Horn, Tuba, Timpani, Bell, other percussion instruments (Mallet), Woodwind Ensemble, Drum Set, Acoustic Guitar, Electric Guitar, Harp, Electric Piano, Piano, Pipe Organ, String Ensemble, Violin, Violoncello solo, Harmonica, Vocals, Saxophone, Flute, and the like.
For example, after analyzing the instrumentation of the first music piece "global XX and XX earth plan", it is determined that its orchestration includes the string ensemble, the brass ensemble, the piano, and the timpani.
In one possible implementation, the emotion type of the first music piece may be determined according to the preset emotion types and the mode, orchestration, rhythm, and the like of the first music piece. The preset emotion types may include sadness, calmness, thriller, excitement, and joy.
For example, for the first music, "global XX and" XX earth plan ", the emotion type is determined to be" excited "according to the preset emotion type and the mode and rhythm of the first music.
In one possible implementation, the style of the first music piece may be determined according to a preset style type. The preset style types may include symphony, rock, jazz, rap, electronic music, New century music (New age), Vocal singing (Vocal), pop, ballad, and flip (Cover).
In a possible implementation manner, the sound-picture relationship between the first music piece and the second movie fragment can be determined according to the relationship between the first music piece and the pictures of the second movie fragment in the music-movie pair. The sound-picture relationship includes sound-picture parallelism, sound-picture synchronization, and sound-picture counterpoint. Sound-picture parallelism means that the first music piece and the pictures of the second movie fragment are unrelated and exist independently without influencing each other; sound-picture synchronization means that the first music piece and the pictures of the second movie fragment are closely combined, with consistent emotion and synchronized rhythm; sound-picture counterpoint means that the emotion, rhythm, and the like of the first music piece are inconsistent with, or opposite to, the pictures of the second movie fragment.
In one possible implementation manner, the fourth movie attribute information of the second movie fragment may include a movie type, an emotion type, a plot effect, and a shooting method of the second movie fragment.
The type of the second film and television segment can be one or more, for example, XX earth belongs to science fiction type, and war X belongs to action, war and military type.
In one possible implementation, all the movie types of movie and television works can be preset; for example, they can include history, war, love, science fiction, military, action, comedy, tragedy, horror, suspense, and ethics. The movie type of the second movie fragment is then determined according to the actual plot, scenes, and the like of the second movie fragment. Those skilled in the art can preset all the movie types of movie and television works according to the actual situation, and the present disclosure is not limited thereto.
In a possible implementation manner, at least one emotion type of the second movie fragment may be determined according to a preset emotion type, a scenario, a scene, and the like of the second movie fragment. Wherein, the preset emotion types can comprise sadness, calmness, thriller, excitement and joy.
For example, the movie fragment of XX Earth 2:39-6:52 states that the various countries constitute a joint government that jointly resists the upcoming global disaster and that plans for "pilots" and "migrations in underground cities" are made. From this scenario, it can be determined that the emotional type of movie & TV segments of XX Earth 2:39-6:52 is "calm".
For another example, for the movie fragment of "XX Earth" 49:30-53:32, the tense atmosphere before the grandfather's sacrifice can correspond to the "thrilling" emotion, and the grief of the protagonist and others over losing a family member after the grandfather's sacrifice can correspond to the "sad" emotion; therefore, the emotion types of the movie fragment of "XX Earth" 49:30-53:32 can be determined to be "thrilling" and "sad".
Also for example, in the movie & TV section of the "XX Earth" 1:17:55-1:21:53, hero et al found a method of saving the Earth and made a plan, the corresponding emotion was "excited", and thus, the type of emotion in the movie & TV section of the "XX Earth" 1:17:55-1:21:53 was determined to be "excited".
It should be understood that the specific emotion type of the second movie fragment can be determined by those skilled in the art according to actual situations, and the present disclosure is not limited thereto.
In a possible implementation manner, the plot effect of the second movie fragment in the movie and television work can be determined according to preset plot effects and the plot, scenes, and the like of the second movie fragment. The preset plot effects may include: hinting at clues or promoting the development of the storyline; setting suspense to arouse the audience's interest; making the story fluctuate and engaging; carrying on from the preceding movie fragment; foreshadowing the development of the subsequent plot; presenting the environment in which the characters of the story act; portraying the characters; expressing or deepening the theme; and leaving sufficient room for the audience's imagination. The plot effects of movie fragments in movie and television works can be preset by those skilled in the art according to the actual situation, and the present disclosure does not limit this.
For example, the video segment of "XX Earth" 2:39-6:52, through explaining the beginning and end of the wandering Earth plan, i.e. the earth is about to encounter the great disaster of annihilation by the sun, the countries constitute the united government, decide to carry out the wandering Earth plan, and speak "goodbye, solar system" in more than ten languages, and simultaneously, the beginning and end of separation of the father and father of the owner and the passing of the father and grandfather into the underground city are described through pictures, so that the subsequent scenarios are more compact and complete. Therefore, the action of the scenario of the movie fragment can be determined as "laying a cushion or burying a pen for the development of the following scenario".
As another example, the movie fragment of "XX Earth" 49:30-53:32 depicts the plot of the sacrifice of the protagonist's grandfather and serves as a fuse for the subsequent development of the story, so the plot effect of this movie fragment is "promoting the development of the storyline".
For example, the movie fragment of "XX Earth" 1:17:55-1:21:53 shows the protagonist and others devising and preparing to execute a plan to save the Earth. The whole story begins to turn here: after the despair over the Earth's impending destruction, new hope appears, so this fragment is a turning point in the development of the whole story and in the emotions of the characters and the audience, and the story moves toward its climax. Therefore, the plot effect of this movie fragment can be determined as "setting suspense to arouse the audience's interest" and "promoting the development of the storyline".
In a possible implementation manner, the second movie fragment can be analyzed according to preset shooting methods of movie and television works to determine at least one shooting method of the second movie fragment. The preset shooting methods of movie and television works may include multiple types, for example: cutting techniques (including the straight cut, cross cut, jump cut, match cut, smash cut, invisible cut, and L-shaped and J-shaped cuts), transition with techniques, transition without techniques, Magen techniques, montage, long take, scene depiction, detail depiction, plain-sketch techniques, symbolic techniques, absurdist techniques, realistic techniques, spatial transformation, and shot switching. The preset shooting methods of movie and television works can be set by those skilled in the art according to the actual situation, and the present disclosure does not limit this.
For example, the video segment of XX Earth 2:39-6:52 introduces the beginning and end of the wandering Earth's plan, the beginning and end of the separation of the father and the beginning and end of the grandfather entering the underground city with the grander to the audience through the space transformation of a plurality of scenes, and the broadcasting of news is not only non-abrupt, but also can connect the subsequent scenes. Therefore, the video clip adopts a spatial conversion shooting method.
As another example, the movie and television segment of XX Earth 49:30-53:32, which shapes the tension atmosphere of whether grandfathers can be successfully rescued, promotes the development of stories by switching between memories and reality. Therefore, the movie fragment adopts a shot method of lens switching.
Also for example, in the movie fragment of "XX Earth" 1:17:55-1:21:53, close-ups and push-in shots are used repeatedly to portray the characters, reflecting their firm determination to execute the plan and their eager hope for its success; panoramic shots with pulling and panning of the camera are used repeatedly to portray the scene, comprehensively showing the critical situation at the time and the grim circumstance of most teams beginning to evacuate; and the contrast between shooting the characters from a high angle and shooting Jupiter from a low angle conveys the insignificance of human beings in the face of the disaster. Therefore, this movie fragment adopts three shooting methods: scene depiction, detail depiction, and spatial transformation.
It should be understood that those skilled in the art can determine the specific shooting method according to the actual situation of the movie and television segment, and the present disclosure does not limit the specific shooting method and the number of shooting methods of the movie and television segment.
In a possible implementation manner, after determining the music attribute information of the first music and the fourth movie attribute information of the second movie fragment in each music-movie pair, the first fitting coefficient may be determined according to a preset fitting model, the music attribute information of the first music in the plurality of music-movie pairs, and the fourth movie attribute information of the second movie fragment.
In one possible implementation, the music attribute information of the first music and the fourth movie attribute information of the second movie fragment in each music-movie pair may be quantized.
The fourth movie attribute information of the second movie fragment can be represented as a binary sequence Y, where the positions of the sequence correspond to all possible attribute values of the movie attributes. When the second movie fragment has the k-th attribute value, the element y_k corresponding to the k-th attribute value in the binary sequence of the fourth movie attribute information takes the value 1; when the second movie fragment does not have the k-th attribute value, y_k takes the value 0. Here k = 1, 2, …, p, and p denotes the total number of all possible attribute values of the movie attributes.
For example, as described above, the movie attributes may include the movie type, emotion type, plot effect, and shooting method, with 11 possible attribute values for the movie type, 5 for the emotion type, 9 for the plot effect, and 14 for the shooting method, so that there are 39 possible attribute values in total. When the second movie fragment is the fragment of "XX Earth" 2:39-6:52, the binary sequence of its fourth movie attribute information is: (0,0,0,1,0,0,0,0,0,0,0, 0,1,0,0,0, 0,0,0,0,1,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,1,0). Reading the sequence from left to right, y_1 to y_11 correspond to the movie type, and y_4 = 1 indicates that the movie type of the second movie fragment is "science fiction"; y_12 to y_16 correspond to the emotion type, and y_13 = 1 indicates that the emotion type of the second movie fragment is "calm"; y_17 to y_25 correspond to the plot effect, and y_21 = 1 indicates that the plot effect of the second movie fragment is "foreshadowing the development of the subsequent plot"; y_26 to y_39 correspond to the shooting method, and y_38 = 1 indicates that the shooting method of the second movie fragment is "spatial transformation".
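A minimal Python sketch of the binary-sequence encoding described above (only the group sizes 11 + 5 + 9 + 14 = 39 and the attribute values quoted in the text come from the disclosure; the full orderings and the placeholder value names are illustrative assumptions):

# Hypothetical ordered list of all 39 possible movie attribute values.
MOVIE_ATTRIBUTE_VALUES = (
    ["history", "war", "love", "science fiction", "military", "action",
     "comedy", "tragedy", "horror", "suspense", "ethics"]            # movie type (11)
    + ["sad", "calm", "thrilling", "excited", "joyful"]              # emotion type (5)
    + ["plot_effect_%d" % i for i in range(1, 10)]                   # plot effect (9)
    + ["shooting_method_%d" % i for i in range(1, 15)]               # shooting method (14)
)

def encode_movie_attributes(present_values):
    # Binary sequence Y: y_k = 1 if the k-th possible attribute value is
    # present in the movie fragment, otherwise 0.
    present = set(present_values)
    return [1 if value in present else 0 for value in MOVIE_ATTRIBUTE_VALUES]

# A fragment that is science fiction and calm, and uses one plot effect and
# one shooting method (placeholder positions):
y = encode_movie_attributes(["science fiction", "calm", "plot_effect_5", "shooting_method_13"])
print(len(y), sum(y))  # 39 4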
In one possible implementation, the music attribute information of the first music piece may be represented as a q-dimensional vector X, where q denotes the number of music attributes and the vector element x_j denotes the value of the j-th music attribute, j = 1, 2, …, q.
In one possible implementation, when a music attribute has a single attribute value, x_j can directly take that value. For discrete attribute values, different numerical values may be set respectively. For example, the attribute values of the harmony progression type include conservative, neutral, and aggressive; the value of "conservative" may be set to 0, "neutral" to 1, and "aggressive" to 2.
In one possible implementation, when a music attribute has multiple attribute values, the value of x_j may be determined in the following manner: first, a binary fraction is initialized, whose integer part is 0 and whose fractional part has a number of digits equal to the total number of all possible attribute values of that music attribute plus 1; the fractional digits corresponding to all possible attribute values default to 0, and the last digit is set to 1. Then, the fractional digits corresponding to the attribute values that the music attribute actually takes are set to 1, yielding the binary value of x_j; this binary number is converted to a decimal number to obtain the value of x_j.
For example, as described above, the music attribute information of the first music piece "global XX and XX earth plan" may be represented as an 8-dimensional vector whose elements take the following values: beat x_1 = 2; tempo x_2 = 90; mode type x_3 = 0.01000001(2) = 0.25390625(10); harmony progression type x_4 = 0; orchestration x_5 = 0.0010000100000000101000000000001(2) = 0.12891578720882535(10); emotion type x_6 = 3; style type x_7 = 0; sound-picture relationship x_8 = 1.
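The binary-fraction encoding of a multi-valued music attribute described above can be sketched as follows (a minimal Python illustration; the function name and the 1-based position convention are assumptions):

def encode_multivalued_attribute(selected_positions, total_values):
    # A binary fraction with (total_values + 1) fractional digits: the last
    # digit is always 1, and the k-th digit is 1 for each selected value k.
    value = 2.0 ** -(total_values + 1)          # mandatory trailing 1
    for k in selected_positions:                # 1-based positions of the values taken
        value += 2.0 ** -k
    return value

# Example from the text: the mode type has 7 possible values and the piece
# uses only the 2nd one, giving 0.01000001 in binary = 0.25390625 in decimal.
print(encode_multivalued_attribute([2], 7))     # 0.25390625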
In a possible implementation manner, after the music attribute information of the first music in each music-video pair is represented as a vector X and the fourth video attribute information of the second video segment is represented as a binary sequence Y, the first fitting coefficient may be determined through a preset fitting model. The fitting model can be used for predicting the fitting degree of the attribute values of the music attribute information and the movie and television attributes.
In one possible implementation, the fitting model may be represented by the following equation (2):
y* = 1 / (1 + e^(-(w^T X + b)))    (2)
where X is the vector representation of the music attribute information of the first music piece, y is a known value representing an attribute value of the movie attributes, y* is a predicted value representing the degree of fit between the music attribute information X and the attribute value of the movie attributes corresponding to w, w is the fitting coefficient corresponding to y and is a vector of the same dimension as X, w^T denotes the transpose of w, and b is a fixed coefficient (which can be set according to actual conditions, for example to 0, and remains unchanged after being set).
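A minimal Python sketch of one plausible reading of formula (2), assuming (as in the reconstruction above) a logistic function of the linear combination w^T X + b; the exact functional form used in the original filing cannot be recovered from the text, so this is an illustrative assumption:

import math

def predict_fit(w, x, b=0.0):
    # Predicted fitting degree y* between the music attribute vector x and the
    # movie attribute value associated with the coefficient vector w.
    # A logistic form is assumed, consistent with 0 < y* < 1 stated later.
    z = sum(w_j * x_j for w_j, x_j in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def predict_movie_attributes(W, x, b=0.0):
    # For one music piece, the p coefficient vectors w_k give p predictions
    # y*_k, one per possible movie attribute value.
    return [predict_fit(w_k, x, b) for w_k in W]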
In a possible implementation manner, for any music-video pair, when the vector X of the music attribute information of the first music piece, the binary sequence Y of the fourth movie attribute information, and the fixed coefficient b are known, the p fitting coefficients w_k corresponding to the p elements y_k can be determined according to the fitting model.
In one possible implementation, the music attribute information of the first music piece and the fourth movie attribute information of the second video fragment in a plurality of music-video pairs in the movie music library can be used to adjust, in the above manner, the p fitting coefficients w_k corresponding to the p elements y_k multiple times, and the adjusted p fitting coefficients w_k are determined as the first fitting coefficient W.
In a possible implementation manner, after the first fitting coefficient is determined, second movie attribute information corresponding to each first music can be respectively determined according to the fitting model, the first fitting coefficient and the music attribute information of each first music.
For example, for any first music piece, the determined first fitting coefficient W (including the p fitting coefficients w_k) and the vector X of the music attribute information of that first music piece are input as known values into the fitting model described in formula (2) for processing, yielding p predicted values y*_k; the sequence Y* formed by the p values y*_k is determined as the second movie attribute information corresponding to that first music piece.
In this embodiment, the fitting coefficients in the fitting model can be adjusted through a plurality of music-video pairs in the video music library to obtain first fitting coefficients, and the second video attribute information corresponding to each first music is determined through the fitting model according to the first fitting coefficients and the music attribute information of the first music, so that the accuracy of the second video attribute information corresponding to the first music can be improved.
In one possible implementation, the movie library comprises N track-movie pairs, N being a positive integer,
determining a first fitting coefficient according to a preset fitting model, music attribute information of a first music in the plurality of music-video pairs and fourth video attribute information of the second video segment, which may include:
determining a second fitting coefficient corresponding to the 1 st music-video pair according to a preset fitting model, the music attribute information of the first music in the 1 st music-video pair and the fourth video attribute information of the second video fragment, wherein the 1 st music-video pair is one of the N music-video pairs;
determining fifth video attribute information corresponding to the first music in the ith music-video pair according to the fitting model, the second fitting coefficient corresponding to the (i-1) th music-video pair and the music attribute information of the first music in the ith music-video pair, wherein i is a positive integer and is more than or equal to 2 and less than or equal to N;
determining a second fitting coefficient corresponding to the ith music-video pair according to fourth video attribute information of a second video fragment in the ith music-video pair, music attribute information of a first music in the ith music-video pair, fifth video attribute information corresponding to the first music in the ith music-video pair and a second fitting coefficient corresponding to the (i-1) th music-video pair;
and determining a second fitting coefficient corresponding to the Nth music-video pair as the first fitting coefficient.
That is, when the movie library includes N track-movie pairs, the first fitting coefficient may be determined by:
and randomly selecting one music-video pair from the N music-video pairs as a 1 st music-video pair, and determining a second fitting coefficient corresponding to the 1 st music-video pair through a fitting model according to the music attribute information of the first music in the 1 st music-video pair and the fourth video attribute information of the second video segment. For example, the music attribute information of the first music in the 1 st music-movie pair may be represented as a vector X1Representing the fourth film attribute information of the second film segment as a binary sequence Y1(including p y1,k) Then, p y are determined by fitting the model described in equation (2)1,kCorresponding p w1,kP is equal to w1,kDetermining a second fitting coefficient W corresponding to the 1 st music-video pair1
One music-video pair is randomly selected from the remaining N-1 music-video pairs as the 2nd music-video pair, and the fifth movie attribute information corresponding to the first music piece in the 2nd music-video pair is then determined according to the fitting model, the second fitting coefficient corresponding to the 1st music-video pair, and the music attribute information of the first music piece in the 2nd music-video pair. The values of the fifth movie attribute information are predicted values. For example, the music attribute information of the first music piece in the 2nd music-video pair may be represented as a vector X_2; the second fitting coefficient W_1 corresponding to the 1st music-video pair and the vector X_2 are input into the fitting model described in formula (2) to determine p predicted values y*_{2,k}, and the sequence Y_2'' formed by the p values y*_{2,k} is determined as the fifth movie attribute information corresponding to the first music piece in the 2nd music-video pair, where 0 < y*_{2,k} < 1;
Then, the second fitting coefficient corresponding to the 2nd music-video pair may be determined according to the fourth movie attribute information of the second video fragment in the 2nd music-video pair, the music attribute information of the first music piece in the 2nd music-video pair, the fifth movie attribute information corresponding to the first music piece in the 2nd music-video pair, and the second fitting coefficient corresponding to the 1st music-video pair. That is, the fourth movie attribute information of the second video fragment in the 2nd music-video pair (represented as the binary sequence Y_2), the music attribute information of the first music piece in the 2nd music-video pair (represented as the vector X_2), and the fifth movie attribute information corresponding to the first music piece in the 2nd music-video pair (represented as the sequence Y_2'') can be used to adjust the second fitting coefficient W_1 corresponding to the 1st music-video pair, and the adjusted coefficient is determined as the second fitting coefficient W_2 corresponding to the 2nd music-video pair.
In a manner similar to the determination of the second fitting coefficient W_2, the second fitting coefficient W_i corresponding to the ith music-video pair is determined, where i is a positive integer and 2 ≤ i ≤ N; and the second fitting coefficient W_N corresponding to the Nth music-video pair is determined as the first fitting coefficient W.
In this embodiment, the second fitting coefficient can be adjusted iteratively using the N music-video pairs in the movie music library until the second fitting coefficient W_N corresponding to the Nth music-video pair is determined, and W_N is determined as the first fitting coefficient W, so that the accuracy of the first fitting coefficient can be improved.
In a possible implementation manner, determining a second fitting coefficient corresponding to the ith music-video pair according to fourth video attribute information of the second video segment in the ith music-video pair, music attribute information of the first music in the ith music-video pair, fifth video attribute information corresponding to the first music in the ith music-video pair, and a second fitting coefficient corresponding to the ith-1 music-video pair may include:
determining a first coefficient deviation value corresponding to the ith music-video pair according to fourth video attribute information of a second video segment in the ith music-video pair, music attribute information of a first music in the ith music-video pair, fifth video attribute information corresponding to the first music in the ith music-video pair and preset weight;
and determining the difference value of the second fitting coefficient corresponding to the i-1 th music-video pair and the first coefficient deviation value as the second fitting coefficient corresponding to the i-th music-video pair.
In a possible implementation manner, when determining the second fitting coefficient corresponding to the ith music-video pair, the first coefficient bias value corresponding to the ith music-video pair may be determined according to the fourth video attribute information of the second video segment in the ith music-video pair, the music attribute information of the first music in the ith music-video pair, the fifth video attribute information corresponding to the first music in the ith music-video pair, and the preset weight. The preset weight can be used for expressing the influence degree of the ith music-video pair on the second fitting coefficient corresponding to the ith-1 music-video pair. The value of the preset weight can be set by a person skilled in the art according to practical situations, and the disclosure does not limit this.
After the first coefficient deviation value corresponding to the ith music-video pair is determined, the difference between the second fitting coefficient corresponding to the (i-1)th music-video pair and the first coefficient deviation value may be determined as the second fitting coefficient corresponding to the ith music-video pair. For example, if the second fitting coefficient corresponding to the (i-1)th music-video pair is denoted W_{i-1} and the first coefficient deviation value corresponding to the ith music-video pair is denoted B_i, then the second fitting coefficient corresponding to the ith music-video pair is W_i = W_{i-1} - B_i.
In this embodiment, the first coefficient deviation value corresponding to the ith music-video pair can be determined according to the related attribute information of the ith music-video pair and the preset weight, and the difference between the second fitting coefficient corresponding to the (i-1)th music-video pair and the first coefficient deviation value is determined as the second fitting coefficient corresponding to the ith music-video pair. In this way, the related attribute information and the weight of the ith music-video pair are comprehensively considered when determining the second fitting coefficient corresponding to the ith music-video pair, which improves the accuracy of the second fitting coefficient.
In one possible implementation manner, determining a first coefficient deviation value corresponding to the ith music-video pair according to fourth video attribute information of the second video segment in the ith music-video pair, music attribute information of the first music in the ith music-video pair, fifth video attribute information corresponding to the first music in the ith music-video pair, and a preset weight includes:
determining the difference value of fifth movie attribute information corresponding to the first music in the ith music-movie pair and fourth movie attribute information of the second movie fragment in the ith music-movie pair as a movie attribute information deviation;
determining the product of the movie and television attribute information deviation and the music attribute information of the first music in the ith music-movie pair as a second coefficient deviation value corresponding to the ith music-movie pair;
and determining the product of the second coefficient deviation value and the preset weight as a first coefficient deviation value corresponding to the ith music-video pair.
That is, the first coefficient deviation value Bi corresponding to the ith music-video pair may be determined in the following manner:
The fifth movie attribute information corresponding to the first music in the ith music-movie pair may be represented as a sequence Yi″, and the fourth movie attribute information of the second movie fragment in the ith music-movie pair as a sequence Yi; the difference of Yi″ and Yi is determined as the movie attribute information deviation Yi″′, i.e. Yi″′ = Yi″ - Yi.
Then, the music attribute information of the first music in the ith music-video pair is represented as a vector Xi, and the product of the movie attribute information deviation Yi″′ and Xi is determined as the second coefficient deviation value B′i, i.e. B′i = Yi″′ × Xi.
Finally, the second coefficient deviation value B′i is multiplied by the preset weight a to determine the first coefficient deviation value Bi corresponding to the ith music-video pair, i.e. Bi = a × B′i = a × Yi″′ × Xi = a × (Yi″ - Yi) × Xi.
In this embodiment, the movie attribute information deviation can be determined according to the difference between the predicted fifth movie attribute information and the fourth movie attribute information, and the first coefficient deviation value can then be determined from the product of the movie attribute information deviation and the music attribute information together with the preset weight, so that the pertinence and accuracy of the first coefficient deviation value can be improved.
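For illustration only, the iterative adjustment described above can be sketched in code. The sketch below assumes a linear fitting model in which the fifth movie attribute information is predicted as Yi″ = W · Xi, with W an m×p matrix; the disclosure does not specify the form of the fitting model, and the function and parameter names (fit_first_coefficient, weight_a, w_init) are hypothetical.

```python
import numpy as np

def fit_first_coefficient(music_attrs, movie_attrs, weight_a=0.01, w_init=None):
    """Iteratively adjust the second fitting coefficient over the N music-video
    pairs and return W_N as the first fitting coefficient W (illustrative sketch)."""
    X = [np.asarray(x, dtype=float) for x in music_attrs]   # music attribute vectors X_i (p-dimensional)
    Y = [np.asarray(y, dtype=float) for y in movie_attrs]   # fourth movie attribute sequences Y_i (m-dimensional)
    p, m = X[0].shape[0], Y[0].shape[0]

    # Second fitting coefficient W_1 for the 1st music-video pair; the disclosure
    # leaves its initialisation open, so zeros are used here as a placeholder.
    W = np.zeros((m, p)) if w_init is None else np.asarray(w_init, dtype=float)

    for i in range(1, len(X)):                 # ith pair, i = 2 .. N
        y_pred = W @ X[i]                      # fifth movie attribute info Y_i''
        deviation = y_pred - Y[i]              # Y_i''' = Y_i'' - Y_i
        b_second = np.outer(deviation, X[i])   # B_i' = Y_i''' x X_i
        b_first = weight_a * b_second          # B_i  = a * B_i'
        W = W - b_first                        # W_i  = W_{i-1} - B_i
    return W                                   # W_N, determined as the first fitting coefficient
```

Under this linear assumption the update coincides with a least-mean-squares style adjustment, which is one way to read the iterative scheme; the disclosure itself does not name it.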
In a possible implementation manner, after the plurality of first music pieces and the second movie attribute information corresponding to each first music piece are determined, the matching degree of the first movie attribute information of the first movie fragment to be dubbed and the plurality of second movie attribute information is determined in step S100. The matching degree of the first movie attribute information and the second movie attribute information can be determined in various ways, for example, by a distance measure (Euclidean distance, Manhattan distance, etc.), the Pearson correlation coefficient, and so on. The present disclosure does not limit the manner of determining the matching degree.
In one possible implementation, step S100 may include: determining the Euclidean distance between the first movie attribute information of the first movie fragment to be dubbed and any one of the second movie attribute information; and determining the matching degree of the first movie attribute information and the second movie attribute information according to the Euclidean distance.
In a possible implementation manner, the first movie attribute information of the first movie fragment to be dubbed and any second movie attribute information may each be regarded as a p-dimensional vector, and the Euclidean distance between the two vectors is calculated; the matching degree between the first movie attribute information and the second movie attribute information is then determined according to the Euclidean distance. For example, the Euclidean distance may be directly determined as the matching degree between the first movie attribute information and the second movie attribute information; or the Euclidean distance may be normalized to obtain a relative value, and the relative value determined as the matching degree between the first movie attribute information and the second movie attribute information; or the matching degree between the first movie attribute information and the second movie attribute information may be determined in another way, which is not limited by the present disclosure.
Determining the matching degree between the first movie attribute information and the second movie attribute information through the Euclidean distance is simple and quick, and can improve the processing efficiency.
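As an illustrative sketch only, the Euclidean-distance-based matching degree might be computed as follows; the conversion 1 / (1 + distance) is just one possible normalisation and is not prescribed by the disclosure, and the function name matching_degree is hypothetical.

```python
import numpy as np

def matching_degree(first_attr, second_attr):
    """Matching degree of two p-dimensional movie attribute vectors: the shorter
    the Euclidean distance, the higher the returned value (in (0, 1])."""
    first = np.asarray(first_attr, dtype=float)
    second = np.asarray(second_attr, dtype=float)
    distance = np.linalg.norm(first - second)   # Euclidean distance
    return 1.0 / (1.0 + distance)               # illustrative normalisation only
```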
In a possible implementation manner, after the matching degree between the first movie attribute information of the first movie fragment and each of the plurality of second movie attribute information is determined, in step S200 at least one piece of movie attribute information with the highest matching degree with the first movie attribute information may be selected from the plurality of second movie attribute information as the third movie attribute information. For example, assume that the number of pieces of third movie attribute information to be selected is 5. When the matching degree is expressed by the Euclidean distance, a shorter Euclidean distance means a higher matching degree; therefore, the Euclidean distance between the first movie attribute information and each second movie attribute information may be determined, the resulting Euclidean distances may be compared or sorted, the 5 shortest Euclidean distances may be selected, and the 5 pieces of second movie attribute information corresponding to these 5 Euclidean distances may be determined as the third movie attribute information.
In one possible implementation manner, in step S300, a second music corresponding to the third movie attribute information may be determined from the plurality of first music, and an author of the second music may be determined as a dubbing author of the first movie fragment.
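Steps S200 and S300 can then be sketched as a simple top-k selection, reusing the hypothetical matching_degree helper above; the dictionary keys 'movie_attr' and 'author' are assumptions made for this sketch and do not appear in the disclosure.

```python
def recommend_soundtrack_authors(first_attr, music_library, k=5):
    """Select the k second movie attribute entries best matching the movie
    fragment to be dubbed and return the authors of the corresponding music."""
    ranked = sorted(
        music_library,
        key=lambda entry: matching_degree(first_attr, entry['movie_attr']),
        reverse=True,                          # highest matching degree first
    )
    return [entry['author'] for entry in ranked[:k]]

# Example usage (toy data):
# library = [{'movie_attr': [0.1, 0.9], 'author': 'composer_a'},
#            {'movie_attr': [0.8, 0.2], 'author': 'composer_b'}]
# recommend_soundtrack_authors([0.2, 0.8], library, k=1)  # -> ['composer_a']
```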
According to this embodiment of the disclosure, at least one piece of music with the highest matching degree with the movie fragment to be dubbed can be selected from the plurality of pieces of music according to the matching degree between the movie attribute information of the movie fragment to be dubbed and the preset movie attribute information corresponding to the plurality of pieces of music, and the author of that music is determined as the soundtrack author of the movie fragment. A soundtrack author can thus be recommended for a movie fragment according to the matching degree of the movie attribute information, which improves the pertinence and accuracy of the recommendation and, in turn, the production efficiency of the whole movie and television work.
Fig. 2 shows a block diagram of a recommendation system for movie soundtrack authors according to an embodiment of the present disclosure. As shown in fig. 2, the recommendation system for movie and television soundtrack authors includes:
the matching module 21 is configured to determine matching degrees of first movie attribute information of a first movie fragment to be dubbed and a plurality of second movie attribute information, where the plurality of second movie attribute information are movie attribute information corresponding to a plurality of preset first music pieces;
the selecting module 22 is configured to determine third movie attribute information from the plurality of second movie attribute information, where the third movie attribute information is at least one movie attribute information with a highest matching degree with the first movie attribute information in the plurality of second movie attribute information;
the determining module 23 is configured to determine an author of the second music corresponding to the third movie attribute information as a music partner of the first movie fragment.
According to this embodiment of the disclosure, at least one piece of music with the highest matching degree with the movie fragment to be dubbed can be selected from the plurality of pieces of music according to the matching degree between the movie attribute information of the movie fragment to be dubbed and the preset movie attribute information corresponding to the plurality of pieces of music, and the author of that music is determined as the soundtrack author of the movie fragment. A soundtrack author can thus be recommended for a movie fragment according to the matching degree of the movie attribute information, which improves the pertinence and accuracy of the recommendation and, in turn, the production efficiency of the whole movie and television work.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or improvements over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A recommendation method for movie and television dubbing authors is characterized by comprising the following steps:
determining the matching degree of first video attribute information of a first video segment to be dubbed music and a plurality of second video attribute information, wherein the plurality of second video attribute information are video attribute information corresponding to a plurality of preset first music;
determining third movie attribute information from the plurality of second movie attribute information, wherein the third movie attribute information is at least one movie attribute information with the highest matching degree with the first movie attribute information in the plurality of second movie attribute information;
and determining the author of the second music corresponding to the third movie and television attribute information as the soundtrack author of the first movie and television segment.
2. The method of claim 1, further comprising:
establishing a movie and television music library, wherein the movie and television music library comprises a plurality of music-movie pairs, and each music-movie pair comprises a first music and a second movie and television fragment corresponding to the first music;
determining a first fitting coefficient according to a preset fitting model, the music attribute information of a first music in the plurality of music-video pairs and the fourth video attribute information of a second video fragment;
and respectively determining second film and television attribute information corresponding to each first music according to the fitting model, the first fitting coefficient and the music attribute information of each first music.
3. The method of claim 2, wherein the library of movie music comprises N music-movie pairs, N being a positive integer,
and determining the first fitting coefficient according to the preset fitting model, the music attribute information of the first music in the plurality of music-video pairs and the fourth video attribute information of the second video fragment comprises:
determining a second fitting coefficient corresponding to the 1 st music-video pair according to a preset fitting model, the music attribute information of the first music in the 1 st music-video pair and the fourth video attribute information of the second video fragment, wherein the 1 st music-video pair is one of the N music-video pairs;
determining fifth video attribute information corresponding to the first music in the ith music-video pair according to the fitting model, the second fitting coefficient corresponding to the (i-1)th music-video pair and the music attribute information of the first music in the ith music-video pair, wherein i is a positive integer and 2 ≤ i ≤ N;
determining a second fitting coefficient corresponding to the ith music-video pair according to fourth video attribute information of a second video fragment in the ith music-video pair, music attribute information of a first music in the ith music-video pair, fifth video attribute information corresponding to the first music in the ith music-video pair and a second fitting coefficient corresponding to the (i-1) th music-video pair;
and determining a second fitting coefficient corresponding to the Nth music-video pair as the first fitting coefficient.
4. The method of claim 3, wherein determining the second fitting coefficient corresponding to the ith music-video pair according to the fourth video attribute information of the second video segment in the ith music-video pair, the music attribute information of the first music in the ith music-video pair, the fifth video attribute information corresponding to the first music in the ith music-video pair, and the second fitting coefficient corresponding to the (i-1)th music-video pair comprises:
determining a first coefficient deviation value corresponding to the ith music-video pair according to fourth video attribute information of a second video segment in the ith music-video pair, music attribute information of a first music in the ith music-video pair, fifth video attribute information corresponding to the first music in the ith music-video pair and preset weight;
and determining the difference value of the second fitting coefficient corresponding to the i-1 th music-video pair and the first coefficient deviation value as the second fitting coefficient corresponding to the i-th music-video pair.
5. The method of claim 4, wherein determining the first coefficient deviation value corresponding to the ith music-video pair according to the fourth video attribute information of the second video segment in the ith music-video pair, the music attribute information of the first music in the ith music-video pair, the fifth video attribute information corresponding to the first music in the ith music-video pair and the preset weight comprises:
determining the difference value of fifth movie attribute information corresponding to the first music in the ith music-movie pair and fourth movie attribute information of the second movie fragment in the ith music-movie pair as a movie attribute information deviation;
determining the product of the movie and television attribute information deviation and the music attribute information of the first music in the ith music-movie pair as a second coefficient deviation value corresponding to the ith music-movie pair;
and determining the product of the second coefficient deviation value and the preset weight as a first coefficient deviation value corresponding to the ith music-video pair.
6. The method according to claim 2, wherein determining second movie attribute information corresponding to each first music from the fitting model, the first fitting coefficient, and music attribute information of each first music, respectively, comprises:
and for any first music, inputting the first fitting coefficient and the music attribute information of the first music into the fitting model for processing to obtain second film and television attribute information corresponding to the first music.
7. The method of claim 1, wherein determining a matching degree of the first movie attribute information and the plurality of second movie attribute information of the first movie fragment to be dubbed comprises:
determining the Euclidean distance between the first video attribute information of the first video segment to be dubbed and any one of the second video attribute information;
and determining the matching degree of the first video attribute information and the second video attribute information according to the Euclidean distance.
8. The method according to claim 1 or 7, wherein the first video attribute information includes a movie type, an emotion type, a plot effect and a shooting technique of the first video segment.
9. The method according to any one of claims 2 to 6, wherein the music attribute information includes a rhythm, a keynote style, a harmony progression type, an orchestration, an emotion type, a style type and a sound-picture relationship of the first music, wherein the rhythm includes a beat and a tempo.
10. A recommendation system for movie and television dubbing authors, the system comprising:
the matching module is used for determining the matching degree of first video attribute information of a first video segment to be dubbed and a plurality of second video attribute information, wherein the plurality of second video attribute information are video attribute information corresponding to a plurality of preset first music;
the selecting module is used for determining third movie attribute information from the plurality of second movie attribute information, wherein the third movie attribute information is at least one movie attribute information with the highest matching degree with the first movie attribute information in the plurality of second movie attribute information;
and the determining module is used for determining the author of the second music corresponding to the third movie attribute information as the soundtrack author of the first movie fragment.
CN201911275715.7A 2019-12-12 2019-12-12 Recommendation method and system for movie and television soundtrack authors Active CN111061908B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911275715.7A CN111061908B (en) 2019-12-12 2019-12-12 Recommendation method and system for movie and television soundtrack authors

Publications (2)

Publication Number Publication Date
CN111061908A true CN111061908A (en) 2020-04-24
CN111061908B CN111061908B (en) 2023-11-21

Family

ID=70300720

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911275715.7A Active CN111061908B (en) 2019-12-12 2019-12-12 Recommendation method and system for movie and television soundtrack authors

Country Status (1)

Country Link
CN (1) CN111061908B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120118127A1 (en) * 2010-11-12 2012-05-17 Yasushi Miyajima Information processing apparatus, musical composition section extracting method, and program
CN106708894A (en) * 2015-11-17 2017-05-24 腾讯科技(深圳)有限公司 Method and device of configuring background music for electronic book
CN105718510A (en) * 2016-01-12 2016-06-29 海信集团有限公司 Multimedia data recommendation method and device
CN110377840A (en) * 2019-07-29 2019-10-25 电子科技大学 A kind of music list recommended method and system based on user's shot and long term preference

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113744763A (en) * 2021-08-18 2021-12-03 北京达佳互联信息技术有限公司 Method and device for determining similar melody
CN113744763B (en) * 2021-08-18 2024-02-23 北京达佳互联信息技术有限公司 Method and device for determining similar melodies

Also Published As

Publication number Publication date
CN111061908B (en) 2023-11-21

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant