CN106503034A - A method and device for scoring an animation - Google Patents

A method and device for scoring an animation

Info

Publication number
CN106503034A
CN106503034A (application CN201610824071.2A)
Authority
CN
China
Prior art keywords
animation
keyword
scored
music
background music
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610824071.2A
Other languages
Chinese (zh)
Other versions
CN106503034B (en)
Inventor
吴松城
陈军宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Black Mirror Technology Co., Ltd.
Original Assignee
XIAMEN HUANSHI NETWORK TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by XIAMEN HUANSHI NETWORK TECHNOLOGY Co Ltd
Priority to CN201610824071.2A (patent CN106503034B)
Publication of CN106503034A
Priority to PCT/CN2017/099626 (WO2018049982A1)
Application granted
Publication of CN106503034B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 - Information retrieval of audio data
    • G06F16/68 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/686 - Retrieval using information manually generated, e.g. tags, keywords, comments, title or artist information, time, location or usage information, user ratings
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 - File systems; File servers
    • G06F16/14 - Details of searching files based on file metadata
    • G06F16/148 - File search processing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 - Animation
    • G06T13/20 - 3D [Three Dimensional] animation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

This application discloses a method for scoring an animation, including: determining a first feature vector of an animation fragment according to the fragment, where the fragment is extracted from the animation to be scored according to the motion features of that animation; determining, according to the first feature vector of the fragment, a first keyword corresponding to the animation to be scored; and determining, according to the first keyword, a music resource that matches the first keyword, and establishing a correspondence between the animation to be scored and the matching music resource. Also disclosed is a device for scoring an animation, including a feature vector determining module, a first keyword determining module, and a music resource matching module. Because the application determines keywords from the motion features of the animation, the characteristics of the animation can be reflected more truthfully, accurately, and comprehensively, laying the foundation for establishing a suitable correspondence. Moreover, the whole process can be completed by a computer according to preset algorithms, which helps to improve the efficiency of scoring animations.

Description

A method and device for scoring an animation
Technical field
The present application relates to the field of computer technology, and in particular to a method and device for scoring an animation.
Background art
Three-dimensional animation, also known as 3D animation, is an emerging technology that has developed alongside computer hardware. Thanks to its outstanding realism, vividness, precision, operability, and controllability, animation produced with 3D techniques is widely used in fields such as medicine, education, military applications, and entertainment.
To enhance the expressiveness of a 3D animation, suitable background music can be added to it. In the prior art, textual information such as the roles, objects, and scenes in an animation can be abstracted into an animation text, that is, a textual description of the animation; a corresponding audio file is then located according to the animation text and associated with the animation, which can improve the efficiency of producing animation audio to some extent.
However, the above prior art has the following defects:
(1) Describing an animation by abstracting textual information such as its roles, objects, and scenes is often inaccurate and incomplete, which impairs the lookup of audio files and the resulting correspondence.
(2) When the correspondence between an animation and an audio file is established, the textual description of the animation serves as the intermediary, so the gain in the efficiency of producing animation audio is very limited.
Summary of the invention
Embodiments of the present application provide a method for scoring an animation, intended to select matching music for an animation accurately, comprehensively, and efficiently.
Embodiments of the present application also provide a device for scoring an animation, with the same aim of selecting matching music accurately, comprehensively, and efficiently.
Embodiments of the present application adopt the following technical solutions:
The method for scoring an animation provided by an embodiment of the present application includes:
determining a first feature vector of an animation fragment according to the fragment, where the fragment is extracted from the animation to be scored according to the motion features of that animation;
determining, according to the first feature vector of the fragment, a first keyword corresponding to the animation to be scored; and
determining, according to the first keyword, a music resource that matches the first keyword, and establishing a correspondence between the animation to be scored and the matching music resource.
Optionally, in the method provided by an embodiment, determining the first keyword corresponding to the animation to be scored according to the first feature vector of the fragment includes:
determining a second feature vector of the animation to be scored according to the first feature vector of the fragment; and
building a first neural network with the second feature vector as its input layer and a third feature vector as its output layer, and taking the predetermined number of keywords with the highest probabilities in the output layer as the first keywords corresponding to the animation to be scored;
where each component of the third feature vector represents the probability that the animation to be scored corresponds to the keyword associated with that component, the components of the third feature vector correspond one-to-one to the keywords in a first keyword bank, and the first keyword bank contains at least one keyword.
Optionally, in the method provided by an embodiment, determining the music resource that matches the first keyword according to the first keyword includes:
obtaining a second keyword corresponding to the music resource; and
matching the first keyword against the second keyword; if they match, the music resource corresponding to the second keyword matches the first keyword.
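As a minimal sketch, the matching step above can be modeled as a keyword intersection. All names below are hypothetical, and the "any shared keyword counts as a match" criterion is an assumption, since the text does not spell out the exact matching test:

```python
def match_music(first_keywords, music_library):
    """Return ids of music resources whose second keywords share at
    least one keyword with the animation's first keywords (assumed rule)."""
    matches = []
    for music_id, second_keywords in music_library.items():
        if set(first_keywords) & set(second_keywords):
            matches.append(music_id)
    return matches

# Hypothetical data: first keywords of one animation, and a tiny library
# mapping music ids to their second keywords.
first_kw = ["excited", "children", "football"]
library = {
    "track_a": ["calm", "piano"],
    "track_b": ["excited", "sports"],
}
print(match_music(first_kw, library))  # ['track_b']
```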
Optionally, in the method provided by an embodiment, obtaining the second keyword corresponding to the music resource includes:
extracting the Mel-frequency cepstral coefficients (MFCCs) of the music resource;
determining a fourth feature vector of the music resource according to its MFCCs; and
building a second neural network with the fourth feature vector as its input layer and a fifth feature vector as its output layer, and taking the predetermined number of keywords with the highest probabilities in the output layer as the second keywords corresponding to the music resource;
where each component of the fifth feature vector represents the probability that the music resource corresponds to the keyword associated with that component, the components of the fifth feature vector correspond one-to-one to the keywords in a second keyword bank, and the second keyword bank contains at least one keyword.
Optionally, in the method provided by an embodiment, after the correspondence between the animation to be scored and the matching music resource is established, the method further includes:
fusing audio in the matching music resource according to the first feature vector of the animation fragment.
Optionally, in the method provided by an embodiment, the animation fragment is extracted from the animation to be scored as follows:
calculating, for the animation to be scored, the inter-frame variation between two frames, where the two frames are separated by a first preset number of frames; and
if the inter-frame variation reaches a preset threshold, extracting the animation frames comprising the two frames and the first preset number of frames between them as the animation fragment.
Optionally, in the method provided by an embodiment, the animation fragment is extracted from the animation to be scored as follows:
calculating, for the animation to be scored, the inter-frame variation between two frames, where the two frames are separated by the first preset number of frames; and
sorting the inter-frame variations by magnitude, and, for the predetermined number of largest variations, extracting the animation frames comprising the two frames and the first preset number of frames between them as the animation fragments.
Optionally, in the method provided by an embodiment, the first feature vector of the animation fragment includes: skeleton-space coordinate data of the animation and/or inter-frame bone acceleration.
The device for scoring an animation provided by an embodiment of the present application includes:
a feature vector determining module, configured to determine the first feature vector of an animation fragment according to the fragment, where the fragment is extracted from the animation to be scored;
a first keyword determining module, configured to determine, according to the first feature vector of the fragment, the first keyword corresponding to the animation to be scored; and
a music resource matching module, configured to determine, according to the first keyword, the music resource that matches the first keyword, and to establish the correspondence between the animation to be scored and the matching music resource.
Optionally, in the device provided by an embodiment, the first keyword determining module includes a first neural network, which takes the second feature vector as its input layer and the third feature vector as its output layer and is used to determine the first keyword corresponding to the animation to be scored; the second feature vector is determined from the first feature vector, each component of the third feature vector represents the probability that the animation to be scored corresponds to the keyword associated with that component, the components correspond one-to-one to the keywords in the first keyword bank, and the first keyword bank contains at least one keyword.
At least one of the above technical solutions adopted by the embodiments of the present application can achieve the following beneficial effects:
The embodiments extract animation fragments according to the motion features of an animation, determine corresponding keywords on that basis, determine matching music resources from the keywords, and then establish the correspondence between the animation to be scored and the music resources. Because the keywords are determined from the motion features of the animation, the characteristics of the animation are reflected more truthfully, accurately, and comprehensively, laying the foundation for establishing a suitable correspondence. Moreover, the whole process can be completed by a computer according to preset algorithms, which helps to improve the efficiency of scoring animations.
Brief description of the drawings
The accompanying drawings described here provide a further understanding of the present application and constitute a part of it; the schematic embodiments and their descriptions explain the application and do not unduly limit it. In the drawings:
Fig. 1 is a schematic flowchart of the method for scoring an animation in an embodiment of the present application;
Fig. 2 is a schematic diagram of the composition of an animation fragment in an embodiment of the present application;
Fig. 3 is a second schematic flowchart of the method for scoring an animation in an embodiment of the present application;
Fig. 4 is a schematic diagram of the neural network built in the method for scoring an animation in an embodiment of the present application;
Fig. 5 is a fourth schematic flowchart of the method for scoring an animation in an embodiment of the present application;
Fig. 6 is a schematic structural diagram of the device for scoring an animation in an embodiment of the present application.
Detailed description of embodiments
To make the purpose, technical solutions, and advantages of the present application clearer, the technical solutions are described clearly and completely below in conjunction with specific embodiments and the corresponding drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the application; all other embodiments obtained by those of ordinary skill in the art based on these embodiments without creative effort fall within the scope of protection of the application.
The technical solutions provided by the embodiments of the application are described in detail below with reference to the drawings.
Embodiment 1
An embodiment of the present application provides a method for scoring an animation, shown in Fig. 1, including:
S101: determining a first feature vector of an animation fragment according to the fragment, where the fragment is extracted from the animation to be scored according to the motion features of that animation;
S102: determining, according to the first feature vector of the fragment, a first keyword corresponding to the animation to be scored;
S103: determining, according to the first keyword, a music resource that matches the first keyword, and establishing a correspondence between the animation to be scored and the matching music resource.
This embodiment extracts animation fragments according to the motion features of the animation, determines corresponding keywords on that basis, determines matching music resources from the keywords, and then establishes the correspondence between the animation to be scored and the music resources. Because the keywords are determined from the motion features of the animation, the characteristics of the animation are reflected more truthfully, accurately, and comprehensively, laying the foundation for establishing a suitable correspondence; and the whole process can be completed by a computer according to preset algorithms, improving scoring efficiency.
Before step S101 determines the first feature vector of the animation fragment, the fragment must first be extracted from the animation to be scored according to its motion features. Specifically, for the animation to be scored, the inter-frame variation between two frames separated by a first preset number of frames can be calculated. If the variation reaches a preset threshold, the animation frames comprising the two frames and the first preset number of frames between them are extracted as the animation fragment. Alternatively, after the variations are calculated, they can be sorted by magnitude, and, for the predetermined number of largest variations, the frames comprising the two frames and the first preset number of frames between them are extracted as the animation fragments.
When calculating the inter-frame variation, two frames separated by a certain number of frames (denoted the first preset number) are chosen; the interval may be 1 frame, 5 frames, 10 frames, and so on. The first preset number may be a fixed preset value: for example, the animation to be scored can be roughly classified, with fast-paced types such as sports, dance, and action given a smaller first preset number, and slow-paced types such as lyrical or dramatic animation given a larger one. The first preset number may also be an adjustable value adapted to the motion features of the animation to be scored. For example, suppose its initial value is 10, so the variation between frames 10 frames apart is calculated; if that variation is very large, the animation has significant or frequent motion within those 10 frames, so, to avoid missing motion details and to reflect the motion features more comprehensively and accurately, the first preset number can be reduced to 5 and the variation between frames 5 frames apart calculated; and so on, until the two frames separated by the first preset number are considered to reflect only a single independent action of the animation.
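The two extraction variants described above (threshold-based, and keeping the largest variations) can be sketched as follows, assuming the inter-frame variations have already been computed; the function names and the (start, end) frame-range representation are illustrative, not from the patent:

```python
def extract_fragments(variations, threshold, frame_gap):
    """First variant: variations[i] is the inter-frame variation between
    frame i and frame i + frame_gap; keep ranges reaching the threshold."""
    fragments = []
    for i, v in enumerate(variations):
        if v >= threshold:
            fragments.append((i, i + frame_gap))
    return fragments

def extract_top_k(variations, k, frame_gap):
    """Second variant: keep the k largest inter-frame variations."""
    ranked = sorted(range(len(variations)), key=lambda i: variations[i],
                    reverse=True)
    return [(i, i + frame_gap) for i in sorted(ranked[:k])]

vars_ = [0.1, 2.5, 0.3, 3.1, 0.2]
print(extract_fragments(vars_, 2.0, 10))  # [(1, 11), (3, 13)]
print(extract_top_k(vars_, 1, 10))        # [(3, 13)]
```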
When calculating the inter-frame variation, the skeleton-space coordinate data in the animation frames can be extracted and used. Typically an animation frame contains on the order of 100 skeleton points; the coordinates of each point in skeleton space embody the pose of the animation in that frame, and the change of each point's coordinates between frames embodies the motion of the animation. Therefore the coordinate change of the same skeleton point in skeleton space can serve as the inter-frame variation and reflect the motion features of the animation, and the larger the variation, the stronger the motion.
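A sketch of the variation measure itself, under the assumption that the per-skeleton-point coordinate changes are aggregated by a plain sum of Euclidean displacements (the patent does not fix the aggregation rule, so this is an illustrative choice):

```python
import math

def interframe_variation(frame_a, frame_b):
    """Sum of per-joint Euclidean displacement between two frames.
    Each frame is a list of (x, y, z) skeleton-point coordinates."""
    total = 0.0
    for pa, pb in zip(frame_a, frame_b):
        total += math.dist(pa, pb)
    return total

f1 = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]   # two skeleton points
f2 = [(0.0, 3.0, 4.0), (1.0, 0.0, 0.0)]   # first point moved by 5 units
print(interframe_variation(f1, f2))  # 5.0
```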
When extracting the animation fragment according to the inter-frame variation, as stated above, the fragment can be composed of animation frames whose variation reaches the preset threshold, or of frames with relatively maximal variation; the variations reaching the threshold can also be further sorted by magnitude, with the fragment composed of the frames with the largest variation. A fragment comprises: two frames whose inter-frame variation satisfies the preset condition, plus the animation frames of the first preset number between them. In a specific implementation, the fragment can additionally be extended forward and/or backward beyond those two frames by a preset number of frames (e.g. 2 frames, 5 frames), which together with the first preset number of frames between the two frames form the fragment. Fig. 2 gives a schematic diagram of such a fragment: t denotes the first preset number of frames between animation frame 11 and animation frame 12; t1 denotes the number of frames by which the t-frame segment (whose inter-frame variation between frames 11 and 12 satisfies the preset condition) is extended forward, and t2 the number of frames by which it is extended backward. t1 and t2 are natural numbers greater than or equal to zero, may be equal or different, and should generally be smaller than t. In the fragment shown in Fig. 2, the start frame is animation frame 10 and the end frame is animation frame 13, so the fragment contains (t1 + t + t2) animation frames. If the animation to be scored is a known key-frame animation, a key frame can also serve directly as the start or end frame of a fragment, so that fragments are extracted from the animation to be scored more efficiently.
After the animation fragment has been extracted from the animation to be scored according to its motion features, step S101 can be performed: determining the first feature vector of the fragment according to the fragment. The first feature vector may include: the skeleton-space coordinate data of the animation and/or the inter-frame bone acceleration. The skeleton-space coordinate data characterizes the variation amplitude of the skeleton points in the fragment, and the inter-frame bone acceleration characterizes how fast the skeleton points change; the first feature vector therefore expresses the motion features of the fragment.
Taking the fragment shown in Fig. 2 as an example, the calculation of the inter-frame bone acceleration is illustrated. Compute the difference in skeleton-space coordinate data between start frame 10 and end frame 13 as the variation amplitude T of each skeleton point in the fragment, and compute the time s between start frame 10 and end frame 13. Assuming the skeleton points move with uniform acceleration, the inter-frame bone acceleration a is obtained from the uniform-acceleration formula (the formula itself is not legible in this copy of the text; under a start-from-rest assumption, T = a·s²/2, so a = 2T/s²). Note that when calculating the bone acceleration, the chosen motion time must correspond to the variation amplitude: for example, the coordinate difference and elapsed time between key frame 11 and key frame 12 can be used to calculate the acceleration, or the animation frames 5 frames apart can be used, in which case the acceleration follows from the variation amplitude of each skeleton point between those two frames and the square of the corresponding time.
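Under the stated uniform-acceleration assumption, the computation reduces to one line. The reconstruction a = 2T/s² (start from rest) is an assumption, since the formula did not survive in the source text:

```python
def bone_acceleration(amplitude, seconds):
    """Uniform acceleration from rest: T = a * s**2 / 2, so a = 2T / s**2.
    amplitude is the coordinate change T of a skeleton point over the
    fragment, seconds is the elapsed time s between the chosen frames."""
    return 2.0 * amplitude / seconds ** 2

# A point that moves 8 units in 2 seconds under uniform acceleration.
print(bone_acceleration(8.0, 2.0))  # 4.0
```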
When composing the first feature vector of the fragment from the skeleton-space coordinate data and/or the inter-frame bone acceleration, any rule can be used, as long as every fragment of the same animation to be scored follows the same rule. For example, if the first feature vector is composed of coordinate data, each component can be the x-axis, y-axis, or z-axis coordinate of the i-th skeleton point (of I points) in the j-th frame (of J frames). If it is composed of inter-frame bone acceleration, each component can be the x-, y-, or z-axis bone acceleration between two adjacent frames, or between the start frame and the end frame, or between two key animation frames within the fragment. If it is composed of both, the x-, y-, and z-coordinates of each skeleton point in each frame and the per-axis bone accelerations between adjacent frames can be arranged as components in a fixed order. The particular position of each component in the first feature vector is not limited, provided that, across the fragments of the same animation to be scored, the component formed from the corresponding frame, skeleton point, and direction occupies the same position in the first feature vector.
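One possible fixed ordering, coordinates first and accelerations appended, is sketched below; the specific order is arbitrary, which is exactly why the text requires only that every fragment of the same animation follow the same rule:

```python
def first_feature_vector(coords, accels):
    """Flatten J frames x I skeleton points x (x, y, z) coordinates,
    then append per-point (ax, ay, az) bone accelerations, in a fixed
    order (illustrative choice, not mandated by the patent)."""
    vec = []
    for frame in coords:        # J frames
        for point in frame:     # I skeleton points
            vec.extend(point)   # x, y, z
    for point_acc in accels:    # accelerations per skeleton point
        vec.extend(point_acc)   # ax, ay, az
    return vec

coords = [[(0, 0, 0), (1, 1, 1)], [(0, 1, 0), (1, 2, 1)]]  # 2 frames, 2 points
accels = [(0.0, 0.5, 0.0), (0.0, 0.5, 0.0)]
v = first_feature_vector(coords, accels)
print(len(v))  # 18  (2*2*3 coordinates + 2*3 accelerations)
```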
Referring to Fig. 3, after S101 determines the first feature vector of the animation fragment, S102 determines the first keyword corresponding to the animation to be scored according to that vector; methods such as decision trees or neural networks can be used. Taking a neural network as the example, the step may specifically include:
S1021: determining the second feature vector of the animation to be scored according to the first feature vector of the fragment.
Specifically, the components of the second feature vector can directly reuse the components of each fragment's first feature vector, transferred in a certain order. For example, suppose 2 fragments are extracted from the animation to be scored, each with 5 components in its first feature vector: fragment one {x0, x1, x2, x3, x4} and fragment two {y0, y1, y2, y3, y4}. The second feature vector can then be composed in fragment order and component order, e.g. {x0, x1, x2, x3, x4, y0, y1, y2, y3, y4}, or according to some rule, e.g. interleaving corresponding components of each fragment, giving {x0, y0, x1, y1, x2, y2, x3, y3, x4, y4}. In addition, the components of the first feature vectors can be combined by calculation, for example a weighted calculation, with the results used as components of the second feature vector.
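The two component-ordering schemes in the example (plain concatenation, and {x0, y0, x1, y1, ...} interleaving) can be sketched as:

```python
def concat_order(fragment_vectors):
    """Concatenate fragment feature vectors in appearance order."""
    out = []
    for v in fragment_vectors:
        out.extend(v)
    return out

def interleave_order(fragment_vectors):
    """Interleave corresponding components across fragments
    (assumes equal-length fragment vectors)."""
    out = []
    for comps in zip(*fragment_vectors):
        out.extend(comps)
    return out

f1 = ["x0", "x1", "x2"]
f2 = ["y0", "y1", "y2"]
print(concat_order([f1, f2]))      # ['x0', 'x1', 'x2', 'y0', 'y1', 'y2']
print(interleave_order([f1, f2]))  # ['x0', 'y0', 'x1', 'y1', 'x2', 'y2']
```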
S1022: building a first neural network with the second feature vector as the input layer and a third feature vector as the output layer, and taking the predetermined number of keywords with the highest probabilities in the output layer as the first keywords corresponding to the animation to be scored; where each component of the third feature vector represents the probability that the animation to be scored corresponds to the keyword associated with that component, the components of the third feature vector correspond one-to-one to the keywords in the first keyword bank, and the first keyword bank contains at least one keyword.
Suppose the animation to be scored is divided into l fragments, each fragment has J frames, each frame contains I skeleton points, and each point has coordinate data in 3 directions (x-axis, y-axis, z-axis) and bone acceleration in 3 directions; then the first feature vector of a fragment has J*I*(3+3) dimensions and the second feature vector of the animation to be scored has l*J*I*(3+3) dimensions.
Further, when step S1022 builds the neural network and determines the first keyword, the second feature vector serves as the input layer, so the input layer has l*J*I*(3+3) input variables. In the network diagram of Fig. 4, the input-layer nodes {x0, x1, …, xN-1} correspond one-to-one to the components of the second feature vector, and the circle marked "+1" is the bias node (intercept term) of the input layer, so the input-layer dimension is N = l*J*I*(3+3) + 1. The output layer of the network in Fig. 4 is the third feature vector representing the probabilities that the animation to be scored corresponds to the respective keywords; the number of output nodes equals the number of keywords in the first keyword bank, namely M, and the values output represent the probability that the animation corresponds to each keyword in the bank. The network in Fig. 4 may have one or more hidden layers, and the number of nodes per hidden layer (the value of K in Fig. 4) is also selectable; the number of hidden layers and of nodes per layer can be set from empirical values obtained by experiment. The weights w between the input layer, each hidden layer, and the output layer are adjustable. The calculation of each component of the output-layer third feature vector is illustrated below, taking one hidden layer as an example.
Input layer { x0,x1,…,xN-1, hidden layer is delivered to, the input of hidden layer is { h0,h1,…,hK-1, hidden layer It is output as { a0,a1,…,aK-1, wherein, each component of input layer is:
h0=x0·w00+x1·w01+x2·w02+…+xN-1·w0(N-1)+w0N
h1=x0·w10+x1·w11+x2·w12+…+xN-1·w1(N-1)+w1N
h2=x0·w20+x1·w21+x2·w22+…+xN-1·w2(N-1)+w2N
……
hK-1=x0·w(K-1)0+x1·w(K-1)1+x2·w(K-1)2+…+xN-1·w(K-1)(N-1)+w(K-1)N
Let f be the activation function of each hidden node; the hidden-node outputs are then:
a0=f (h0)
a1=f (h1)
a2=f (h2)
……
aK-1=f (hK-1)
Here, the activation function represents the functional relation between the input and output of a single neuron (including hidden nodes and output-layer nodes). The activation function f can be chosen as a continuous, differentiable, bounded function, for example the sigmoid function f(h) = 1/(1 + e^(-h)) or the hyperbolic tangent function f(h) = (e^h - e^(-h))/(e^h + e^(-h)), the latter being symmetric about the origin.
If there is only one hidden layer, the output of the hidden layer serves as the input of the output layer, and each output-layer node computes its output through the activation function, yielding the output result of the output layer, i.e. the components of the third feature vector. If there are multiple hidden layers, the output of each hidden layer serves as the input of the next, and the computation proceeds layer by layer until the output of the last hidden layer serves as the input of the output layer, from which the output result of the output layer, i.e. the components of the third feature vector, is computed.
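The layer-by-layer computation described above can be sketched as follows (a minimal single-hidden-layer forward pass; the sigmoid activation, layer sizes and weight values are illustrative choices, not values from Fig. 4):

```python
import math

def sigmoid(h):
    return 1.0 / (1.0 + math.exp(-h))

def layer(inputs, weights):
    """weights: one row per node; each row holds len(inputs) weights
    followed by a trailing bias term, as in h_k = sum(x_i * w_ki) + w_kN."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, row[:-1])) + row[-1])
            for row in weights]

def forward(x, hidden_weights, output_weights):
    """Propagate the second feature vector x to the third feature vector."""
    a = layer(x, hidden_weights)       # hidden-layer outputs {a0..aK-1}
    return layer(a, output_weights)    # one probability-like value per keyword

# Toy dimensions: N=4 inputs, K=3 hidden nodes, M=2 keywords.
x = [0.1, -0.2, 0.3, 0.4]
hidden_w = [[0.5, -0.1, 0.2, 0.0, 0.1]] * 3   # 4 weights + bias per node
output_w = [[0.3, 0.3, 0.3, -0.1], [-0.2, 0.1, 0.4, 0.2]]
y = forward(x, hidden_w, output_w)
assert len(y) == 2 and all(0.0 < v < 1.0 for v in y)
```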
Once the components of the third feature vector have been computed, the probability that the animation to be scored corresponds to the keyword associated with each component is obtained. Since the components of the third feature vector correspond one-to-one with the keywords in the first keyword database, the predetermined number of keywords with the highest probabilities can be taken as the first keywords corresponding to the animation to be scored. A single animation to be scored may correspond to several keywords that characterize it from different angles: for example, an animation of a schoolchild playing football may exhibit the role "schoolchild", an excited mood, and action types such as running, jumping and kicking, so its first keywords may be determined as "excited", "child", "playing football", "running", and so on.
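The top-probability selection described above can be sketched as follows (the keyword names and probabilities are invented for illustration):

```python
# Pick the predetermined number of highest-probability keywords from the
# output layer, relying on the one-to-one mapping between output
# components and keywords in the first keyword database.
def top_keywords(probabilities, keywords, predetermined_number):
    ranked = sorted(zip(probabilities, keywords), reverse=True)
    return [kw for _, kw in ranked[:predetermined_number]]

probs = [0.91, 0.12, 0.76, 0.40]
kws = ["excited", "sad", "running", "dancing"]
assert top_keywords(probs, kws, 2) == ["excited", "running"]
```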
The keywords in the first keyword database may all be divided from the same angle, for example by mood, by role, or by action type. In that case, to describe the animation to be scored from multiple angles with multiple keywords, several neural networks can be built, each using a first keyword database divided from a different angle, and the single keyword with the highest probability from each network (the predetermined number being set to 1) is taken as a first keyword corresponding to the animation to be scored. Alternatively, the first keyword database may contain keywords divided from different angles, for example keywords divided by mood, by role and by action type all listed in one first keyword database; at output time, the several keywords with the highest probabilities (the predetermined number being set to the number of division angles) are taken as the first keywords corresponding to the animation to be scored.
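The multiple-network variant, one network per division angle with the predetermined number set to 1, can be sketched as follows (the keyword databases and probabilities are invented for illustration, and the networks are stood in for by precomputed output vectors):

```python
# One (keywords, probabilities) pair per division angle; take the
# top-1 keyword from each angle's network output.
def first_keywords_multi(angle_outputs):
    """angle_outputs: list of (keywords, probabilities) pairs,
    one per keyword database divided from a different angle."""
    return [kws[probs.index(max(probs))] for kws, probs in angle_outputs]

mood = (["excited", "calm"], [0.9, 0.1])
role = (["child", "adult"], [0.7, 0.3])
action = (["running", "dancing"], [0.2, 0.8])
assert first_keywords_multi([mood, role, action]) == ["excited", "child", "dancing"]
```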
After executing S102, determining, according to the first feature vector of the animation fragment, the first keyword corresponding to the animation to be scored, step S103 can further be executed: determining, according to the first keyword, the music resource that matches the first keyword. This includes:
obtaining a second keyword corresponding to a music resource;
matching the first keyword against the second keyword; if they match, the music resource corresponding to that second keyword matches the first keyword.
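The matching step can be sketched as follows (literal equality of keywords is assumed here, and the resource names and keyword sets are invented for illustration):

```python
# A music resource matches when one of its second keywords equals one of
# the animation's first keywords; a real system might also use synonyms.
def match_music(first_keywords, music_library):
    """music_library: mapping of resource name -> set of second keywords."""
    wanted = set(first_keywords)
    return [name for name, second_keywords in music_library.items()
            if wanted & second_keywords]

library = {
    "track_a.mp3": {"excited", "playing football"},
    "track_b.mp3": {"calm", "rain"},
}
assert match_music(["excited", "child"], library) == ["track_a.mp3"]
assert match_music(["calm"], library) == ["track_b.mp3"]
```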
Further, when obtaining the second keyword corresponding to a music resource: if the music resource already has calibrated keywords, its second keywords can be matched directly against the first keywords of the animation to be scored, so as to establish the correspondence between the animation to be scored and the matching music resource. If the music resource has not yet been assigned keywords, its second keywords can be obtained by the following steps:
extracting the Mel-frequency cepstral coefficients of the music resource;
determining, according to the Mel-frequency cepstral coefficients of the music resource, the fourth feature vector of the music resource;
building a second neural network with the fourth feature vector as the input layer and the fifth feature vector as the output layer, and taking the predetermined number of keywords with the highest probabilities in the output layer as the second keywords corresponding to the music resource. Here, each component of the fifth feature vector represents the probability that the music resource corresponds to the keyword associated with that component; the components of the fifth feature vector correspond one-to-one with the keywords in a second keyword database; and the second keyword database contains at least one keyword.
Mel-frequency cepstral coefficients (MFCCs) are the coefficients that make up a Mel-frequency cepstrum. They are derived from the cepstrum of an audio fragment (a nonlinear "spectrum-of-a-spectrum"). The difference between the ordinary cepstrum and the Mel-frequency cepstrum is that the frequency bands of the latter are divided at equal intervals on the Mel scale, which approximates the human auditory system more closely than the linearly spaced bands of the ordinary cepstrum. The features of a music resource can therefore be better captured with Mel-frequency cepstral coefficients. With the fourth feature vector, determined from the MFCCs of the music resource, as the input layer of a neural network with an architecture similar to Fig. 4, the value of each component of the fifth feature vector is obtained at the output layer, and the predetermined number of keywords with the highest probabilities can then be taken, according to the magnitudes of those components, as the second keywords corresponding to the music resource. This is not repeated here.
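The Mel scale underlying the equidistant band division can be sketched with the commonly used conversion formula mel = 2595·log10(1 + f/700) (this particular constant choice is a widespread convention, not something stated in the patent):

```python
import math

def hz_to_mel(f):
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_band_edges(f_min, f_max, n_bands):
    """Edges of n_bands bands spaced equally on the Mel scale."""
    lo, hi = hz_to_mel(f_min), hz_to_mel(f_max)
    step = (hi - lo) / (n_bands + 1)
    return [mel_to_hz(lo + i * step) for i in range(n_bands + 2)]

# The scale is constructed so that 1000 Hz maps to roughly 1000 mel.
assert abs(hz_to_mel(1000.0) - 1000.0) < 2.0
edges = mel_band_edges(0.0, 8000.0, 26)
# Equidistant in mel means band widths grow with frequency in Hz,
# mimicking the coarser frequency resolution of human hearing.
assert edges[1] - edges[0] < edges[-1] - edges[-2]
```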
Embodiment 2
On the basis of Embodiment 1, after the correspondence between the animation to be scored and the matching music resource has been established, step S104 can also be executed: merging sound effects into the matching music resource according to the first feature vector of the animation fragment, as shown in Fig. 5.
Since the first feature vector contains components that embody motion features, such as the spatial coordinate data of the animation skeleton and/or the inter-frame bone acceleration, further merging sound effects according to the first feature vector of the animation fragment, after the matching music resource has been found, allows the motion features of the animation to be reflected more vividly, intuitively and accurately.
For example, for a hand skeleton point, the acceleration of that skeleton point can be monitored in real time from the components of the first feature vector that describe the hand skeleton point across different animation frames. When the acceleration reaches a preset threshold, a music sound effect suited to that acceleration threshold of the hand skeleton point can be added for the duration of the acceleration and blended into the matching music resource in fade-in/fade-out fashion.
As another example, for a foot skeleton point, if the animation belongs to the dance category (in which case one of its keywords is likely a related word such as "dance" or "dancing"), then, from the components of the first feature vector that describe the foot skeleton point across different animation frames, whenever the foot skeleton point is detected touching the floor at a speed exceeding a preset speed threshold, a suitable instantaneous dance or tap sound effect can be added and blended into the matching music resource.
For each music type, multiple sound effects can be merged in combination with the motion characteristics of the animation. From the movement patterns and motion features of each animation skeleton point, the most suitable sound effect can be decided upon and added to the original music, thereby enhancing the expressive effect.
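The threshold-and-fade behavior described in this embodiment can be sketched as follows (the per-frame gain-envelope representation, threshold value and fade length are illustrative assumptions):

```python
# Find frames where a skeleton point's acceleration exceeds a threshold
# and build a fade-in/fade-out gain envelope for the sound effect.
def effect_envelope(accelerations, threshold, fade_frames=2):
    """Return per-frame effect gain in [0, 1]: 1 while the acceleration
    stays above threshold, ramping linearly over fade_frames at the edges."""
    gains = [0.0] * len(accelerations)
    for i, a in enumerate(accelerations):
        if a >= threshold:
            gains[i] = 1.0
            for d in range(1, fade_frames + 1):          # fade edges
                for j in (i - d, i + d):
                    if 0 <= j < len(gains):
                        gains[j] = max(gains[j], 1.0 - d / (fade_frames + 1))
    return gains

acc = [0.1, 0.2, 5.0, 5.2, 0.3, 0.1]
env = effect_envelope(acc, threshold=4.0, fade_frames=1)
assert env[2] == 1.0 and env[3] == 1.0               # above threshold: full gain
assert 0.0 < env[1] < 1.0 and 0.0 < env[4] < 1.0     # fades at the edges
assert env[0] == 0.0 and env[5] == 0.0
```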
Embodiment 3
Corresponding to Embodiment 1 or Embodiment 2 above, the present invention also provides a device for scoring animation, as shown in Fig. 6, including:
a feature vector determining module 101, configured to determine, according to an animation fragment, the first feature vector of the animation fragment, the animation fragment being obtained by extraction from the animation to be scored;
a first keyword determining module 102, configured to determine, according to the first feature vector of the animation fragment, the first keyword corresponding to the animation to be scored;
a music resource matching module 103, configured to determine, according to the first keyword, the music resource that matches the first keyword, and to establish the correspondence between the animation to be scored and the matching music resource.
The first keyword determining module may further include a first neural network, which takes the second feature vector as its input layer and the third feature vector as its output layer and is used to determine the first keyword corresponding to the animation to be scored. Here, the second feature vector is determined from the first feature vector; each component of the third feature vector represents the probability that the animation to be scored corresponds to the keyword associated with that component; the components of the third feature vector correspond one-to-one with the keywords in the first keyword database; and the first keyword database contains at least one keyword.
Since this embodiment is the device embodiment corresponding to the method for scoring animation, the explanations of the method in Embodiments 1 and 2 apply to this embodiment and are not repeated here.
Those skilled in the art should understand that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical memory, and the like) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operation steps is executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, a network interface, and memory.
The memory may include volatile memory in computer-readable media, random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can realize information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media, which can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "include", "comprise", and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical memory, and the like) containing computer-usable program code.
The foregoing is merely embodiments of the present application and is not intended to limit the present application. For those skilled in the art, the present application may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall fall within the scope of the claims of the present application.

Claims (10)

1. A method for scoring animation, characterized by comprising:
determining, according to an animation fragment, a first feature vector of the animation fragment, the animation fragment being obtained from an animation to be scored by extraction according to motion features of the animation to be scored;
determining, according to the first feature vector of the animation fragment, a first keyword corresponding to the animation to be scored;
determining, according to the first keyword, a music resource that matches the first keyword, and establishing a correspondence between the animation to be scored and the matching music resource.
2. The method according to claim 1, characterized in that determining, according to the first feature vector of the animation fragment, the first keyword corresponding to the animation to be scored comprises:
determining, according to the first feature vector of the animation fragment, a second feature vector of the animation to be scored;
building a first neural network with the second feature vector as an input layer and a third feature vector as an output layer, and taking a predetermined number of keywords with the highest probabilities in the output layer as the first keywords corresponding to the animation to be scored;
wherein each component of the third feature vector represents a probability that the animation to be scored corresponds to the keyword associated with that component, the components of the third feature vector correspond one-to-one with keywords in a first keyword database, and the first keyword database contains at least one keyword.
3. The method according to claim 1, characterized in that determining, according to the first keyword, the music resource that matches the first keyword comprises:
obtaining a second keyword corresponding to a music resource;
matching the first keyword against the second keyword, wherein, if they match, the music resource corresponding to the second keyword matches the first keyword.
4. The method according to claim 3, characterized in that obtaining the second keyword corresponding to the music resource comprises:
extracting Mel-frequency cepstral coefficients of the music resource;
determining, according to the Mel-frequency cepstral coefficients of the music resource, a fourth feature vector of the music resource;
building a second neural network with the fourth feature vector as an input layer and a fifth feature vector as an output layer, and taking a predetermined number of keywords with the highest probabilities in the output layer as the second keywords corresponding to the music resource;
wherein each component of the fifth feature vector represents a probability that the music resource corresponds to the keyword associated with that component, the components of the fifth feature vector correspond one-to-one with keywords in a second keyword database, and the second keyword database contains at least one keyword.
5. The method according to claim 1, characterized by further comprising, after establishing the correspondence between the animation to be scored and the matching music resource:
merging sound effects into the matching music resource according to the first feature vector of the animation fragment.
6. The method according to claim 1, characterized in that the animation fragment is obtained from the animation to be scored by extraction in the following manner:
calculating, for the animation to be scored, an inter-frame variation between two frames, the two frames being separated by a first preset number of frames;
if the inter-frame variation reaches a preset threshold, extracting the animation frames comprising the two frames and the first preset number of frames between them as the animation fragment.
7. The method according to claim 1, characterized in that the animation fragment is obtained from the animation to be scored by extraction in the following manner:
calculating, for the animation to be scored, inter-frame variations between pairs of frames, each pair being separated by a first preset number of frames;
sorting the inter-frame variations by magnitude, and extracting, for a predetermined number of the largest inter-frame variations, the animation frames comprising the two frames and the first preset number of frames between them as the animation fragment.
8. The method according to claim 1, characterized in that the first feature vector of the animation fragment comprises spatial coordinate data of the animation skeleton and/or inter-frame bone acceleration.
9. A device for scoring animation, characterized by comprising:
a feature vector determining module, configured to determine, according to an animation fragment, a first feature vector of the animation fragment, the animation fragment being obtained by extraction from an animation to be scored;
a first keyword determining module, configured to determine, according to the first feature vector of the animation fragment, a first keyword corresponding to the animation to be scored;
a music resource matching module, configured to determine, according to the first keyword, a music resource that matches the first keyword, and to establish a correspondence between the animation to be scored and the matching music resource.
10. The device according to claim 9, characterized in that the first keyword determining module comprises a first neural network, the first neural network taking a second feature vector as an input layer and a third feature vector as an output layer and being used to determine the first keyword corresponding to the animation to be scored; wherein the second feature vector is determined according to the first feature vector, each component of the third feature vector represents a probability that the animation to be scored corresponds to the keyword associated with that component, the components of the third feature vector correspond one-to-one with keywords in a first keyword database, and the first keyword database contains at least one keyword.
CN201610824071.2A 2016-09-14 2016-09-14 A kind of method and device for motion picture soundtrack Active CN106503034B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201610824071.2A CN106503034B (en) 2016-09-14 2016-09-14 A kind of method and device for motion picture soundtrack
PCT/CN2017/099626 WO2018049982A1 (en) 2016-09-14 2017-08-30 Method and device for soundtracking animation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610824071.2A CN106503034B (en) 2016-09-14 2016-09-14 A kind of method and device for motion picture soundtrack

Publications (2)

Publication Number Publication Date
CN106503034A true CN106503034A (en) 2017-03-15
CN106503034B CN106503034B (en) 2019-07-19

Family

ID=58290432

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610824071.2A Active CN106503034B (en) 2016-09-14 2016-09-14 A kind of method and device for motion picture soundtrack

Country Status (2)

Country Link
CN (1) CN106503034B (en)
WO (1) WO2018049982A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018049982A1 (en) * 2016-09-14 2018-03-22 厦门幻世网络科技有限公司 Method and device for soundtracking animation
CN109309863A (en) * 2018-08-01 2019-02-05 李春莲 Movie contents matching mechanism for seedlings
CN109672927A (en) * 2018-08-01 2019-04-23 李春莲 Movie contents matching process
CN110278484A (en) * 2019-05-15 2019-09-24 北京达佳互联信息技术有限公司 Video is dubbed in background music method, apparatus, electronic equipment and storage medium
CN110392302A (en) * 2018-04-16 2019-10-29 北京陌陌信息技术有限公司 Video is dubbed in background music method, apparatus, equipment and storage medium
CN110489572A (en) * 2019-08-23 2019-11-22 北京达佳互联信息技术有限公司 Multimedia data processing method, device, terminal and storage medium
CN110767201A (en) * 2018-07-26 2020-02-07 Tcl集团股份有限公司 Score generation method, storage medium and terminal equipment
CN111596918A (en) * 2020-05-18 2020-08-28 网易(杭州)网络有限公司 Animation interpolator construction method, animation playing method and device and electronic equipment
CN112153460A (en) * 2020-09-22 2020-12-29 北京字节跳动网络技术有限公司 Video dubbing method and device, electronic equipment and storage medium
CN113032619A (en) * 2019-12-25 2021-06-25 北京达佳互联信息技术有限公司 Music recommendation method and device, electronic equipment and storage medium
CN118283343A (en) * 2024-03-27 2024-07-02 北京度友信息技术有限公司 Video generation method, device and equipment based on music

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101727943A (en) * 2009-12-03 2010-06-09 北京中星微电子有限公司 Method and device for dubbing music in image and image display device
CN102314702A (en) * 2011-08-31 2012-01-11 上海华勤通讯技术有限公司 Mobile terminal and animation editing method
US8717367B2 (en) * 2007-03-02 2014-05-06 Animoto, Inc. Automatically generating audiovisual works
CN103793447A (en) * 2012-10-26 2014-05-14 汤晓鸥 Method and system for estimating semantic similarity among music and images
CN105096989A (en) * 2015-07-03 2015-11-25 北京奇虎科技有限公司 Method and apparatus for processing background music
CN105447896A (en) * 2015-11-14 2016-03-30 华中师范大学 Animation creation system for young children

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106503034B (en) * 2016-09-14 2019-07-19 厦门黑镜科技有限公司 A kind of method and device for motion picture soundtrack

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8717367B2 (en) * 2007-03-02 2014-05-06 Animoto, Inc. Automatically generating audiovisual works
CN101727943A (en) * 2009-12-03 2010-06-09 北京中星微电子有限公司 Method and device for dubbing music in image and image display device
CN102314702A (en) * 2011-08-31 2012-01-11 上海华勤通讯技术有限公司 Mobile terminal and animation editing method
CN103793447A (en) * 2012-10-26 2014-05-14 汤晓鸥 Method and system for estimating semantic similarity among music and images
CN105096989A (en) * 2015-07-03 2015-11-25 北京奇虎科技有限公司 Method and apparatus for processing background music
CN105447896A (en) * 2015-11-14 2016-03-30 华中师范大学 Animation creation system for young children

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018049982A1 (en) * 2016-09-14 2018-03-22 厦门幻世网络科技有限公司 Method and device for soundtracking animation
CN110392302A (en) * 2018-04-16 2019-10-29 北京陌陌信息技术有限公司 Video is dubbed in background music method, apparatus, equipment and storage medium
CN110767201B (en) * 2018-07-26 2023-09-05 Tcl科技集团股份有限公司 Music score generation method, storage medium and terminal equipment
CN110767201A (en) * 2018-07-26 2020-02-07 Tcl集团股份有限公司 Score generation method, storage medium and terminal equipment
CN109309863A (en) * 2018-08-01 2019-02-05 李春莲 Movie contents matching mechanism for seedlings
CN109672927A (en) * 2018-08-01 2019-04-23 李春莲 Movie contents matching process
CN110278484B (en) * 2019-05-15 2022-01-25 北京达佳互联信息技术有限公司 Video dubbing method and device, electronic equipment and storage medium
CN110278484A (en) * 2019-05-15 2019-09-24 北京达佳互联信息技术有限公司 Video is dubbed in background music method, apparatus, electronic equipment and storage medium
CN110489572A (en) * 2019-08-23 2019-11-22 北京达佳互联信息技术有限公司 Multimedia data processing method, device, terminal and storage medium
CN110489572B (en) * 2019-08-23 2021-10-08 北京达佳互联信息技术有限公司 Multimedia data processing method, device, terminal and storage medium
CN113032619A (en) * 2019-12-25 2021-06-25 北京达佳互联信息技术有限公司 Music recommendation method and device, electronic equipment and storage medium
CN113032619B (en) * 2019-12-25 2024-03-19 北京达佳互联信息技术有限公司 Music recommendation method, device, electronic equipment and storage medium
CN111596918A (en) * 2020-05-18 2020-08-28 网易(杭州)网络有限公司 Animation interpolator construction method, animation playing method and device and electronic equipment
CN111596918B (en) * 2020-05-18 2024-03-22 网易(杭州)网络有限公司 Method for constructing animation interpolator, method and device for playing animation, and electronic equipment
CN112153460A (en) * 2020-09-22 2020-12-29 北京字节跳动网络技术有限公司 Video dubbing method and device, electronic equipment and storage medium
CN118283343A (en) * 2024-03-27 2024-07-02 北京度友信息技术有限公司 Video generation method, device and equipment based on music

Also Published As

Publication number Publication date
WO2018049982A1 (en) 2018-03-22
CN106503034B (en) 2019-07-19

Similar Documents

Publication Publication Date Title
CN106503034B (en) A kind of method and device for motion picture soundtrack
Ofli et al. Learn2dance: Learning statistical music-to-dance mappings for choreography synthesis
Zhao et al. Interactive authoring of simulation-ready plants
Lopes et al. Modelling affect for horror soundscapes
US20140087871A1 (en) Character model animation using stored recordings of player movement interface data
TWI740315B (en) Sound separation method, electronic and computer readable storage medium
Farnell Behaviour, Structure and Causality in Procedural Audio1
Eigenfeldt et al. Negotiated Content: Generative Soundscape Composition by Autonomous Musical Agents in Coming Together: Freesound.
CN109670623A (en) Neural net prediction method and device
WO2022231824A1 (en) Audio reactive augmented reality
CN109670590A (en) Neural net prediction method and device
CN109670621A (en) Neural net prediction method and device
CN109670567A (en) Neural net prediction method and device
US9401684B2 (en) Methods, systems, and computer readable media for synthesizing sounds using estimated material parameters
CN109523614A (en) A kind of 3D animation deriving method, 3D animation playing method and device
Bogaers et al. Music-driven animation generation of expressive musical gestures
Manovich et al. Visualizing change: Computer graphics as a research method
CN109670572A (en) Neural net prediction method and device
JP2024522115A (en) Selection of supplemental audio segments based on video analysis
CN109670571A (en) Neural net prediction method and device
Choi et al. Can We Find Neurons that Cause Unrealistic Images in Deep Generative Networks?
Oliveira et al. Towards a comprehensive classification for procedural content generation techniques
CN110047118A (en) Video generation method, device, computer equipment and storage medium
Hamilton Perceptually coherent mapping schemata for virtual space and musical method
US20240282130A1 (en) Qualifying labels automatically attributed to content in images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20190326

Address after: 361012 3F-A193, Innovation Building C, Software Park, Xiamen Torch High-tech Zone, Xiamen City, Fujian Province

Applicant after: Xiamen Black Mirror Technology Co., Ltd.

Address before: 9th Floor, Maritime Building, 16 Haishan Road, Huli District, Xiamen City, Fujian Province, 361000

Applicant before: XIAMEN HUANSHI NETWORK TECHNOLOGY CO., LTD.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant