CN105404681A - Live broadcast sentiment classification method and apparatus - Google Patents

Live broadcast sentiment classification method and apparatus

Info

Publication number
CN105404681A
Authority
CN
China
Prior art keywords
affective
information
characteristics information
style
matching degree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510834112.1A
Other languages
Chinese (zh)
Inventor
韦传毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Kugou Computer Technology Co Ltd
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Kugou Computer Technology Co Ltd filed Critical Guangzhou Kugou Computer Technology Co Ltd
Priority to CN201510834112.1A priority Critical patent/CN105404681A/en
Publication of CN105404681A publication Critical patent/CN105404681A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7834Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using audio features

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Library & Information Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention discloses a live broadcast sentiment classification method and apparatus, and belongs to the field of network technologies. The method includes: acquiring sound information of a live broadcast user during a live broadcast process of the live broadcast user; performing feature extraction on the sound information to obtain sentiment feature information of the sound information; and determining a sentiment type of the live broadcast user according to a matching degree between the sentiment feature information and preset sentiment feature information of each sentiment type. Because the sound information is acquired during the live broadcast, the sentiment type of the live broadcast user can be determined automatically from the sound information, without the live broadcast user selecting it manually, which simplifies the user's operations. Moreover, when the live broadcast content of the live broadcast user changes, a sentiment type that matches the sentiment of the changed content can still be determined, avoiding a mismatch between the changed sentiment and a manually selected sentiment type and improving classification accuracy.

Description

Live broadcast sentiment classification method and apparatus
Technical field
The present invention relates to the field of network technologies, and in particular, to a live broadcast sentiment classification method and apparatus.
Background technology
With the development of network technologies, online live broadcasting has become increasingly widespread. Many live broadcast users stream in channels, and other users can enter a channel to watch the content broadcast by the live broadcast user. Because different live broadcast users broadcast different content, the sentiments contained in the content also differ. To manage multiple live broadcast users in a unified way, the sentiment type of each live broadcast user can be determined according to the sentiment contained in that user's broadcast content.
A live broadcast server may provide multiple sentiment types for live broadcast users to choose from. Based on their own content, a live broadcast user can select, from the multiple sentiment types, the type that matches the sentiment of the content as their own sentiment type, and each audience user can then select live broadcast users of interest according to those sentiment types. For example, if a live broadcast user usually sings sad songs, the user may set their sentiment type to "sad songs", and an audience user who selects the "sad songs" type can watch that user's broadcast content.
In the process of implementing the present invention, the inventor found that the prior art has at least the following problems:
The live broadcast user has to select a sentiment type manually, which is cumbersome. Furthermore, the sentiment contained in the user's broadcast content may change, and the changed sentiment may no longer match the previously selected sentiment type, which makes the classification inaccurate.
Summary of the invention
To solve the problems in the prior art, embodiments of the present invention provide a live broadcast sentiment classification method and apparatus. The technical solutions are as follows:
In one aspect, a live broadcast sentiment classification method is provided, the method comprising:
acquiring sound information of a live broadcast user during a live broadcast process of the live broadcast user;
performing feature extraction on the sound information to obtain sentiment feature information of the sound information; and
determining a sentiment type of the live broadcast user according to a matching degree between the sentiment feature information and preset sentiment feature information of each sentiment type.
Optionally, the performing feature extraction on the sound information to obtain sentiment feature information of the sound information comprises:
converting the sound information into text information; and
performing feature extraction on the text information to obtain text sentiment feature information, the text sentiment feature information comprising at least one sentiment keyword.
Optionally, the determining a sentiment type of the live broadcast user according to a matching degree between the sentiment feature information and preset sentiment feature information of each sentiment type comprises:
acquiring preset text sentiment feature information of each sentiment type, the preset text sentiment feature information of each sentiment type comprising at least one sentiment keyword;
calculating a matching degree between the text sentiment feature information and the preset text sentiment feature information of each sentiment type;
selecting a preset number of matching degrees in descending order of matching degree; and
determining the sentiment type corresponding to each selected matching degree as the sentiment type of the live broadcast user.
Optionally, the performing feature extraction on the sound information to obtain sentiment feature information of the sound information comprises:
performing feature extraction on the sound information by using a preset feature extraction algorithm, to obtain sound sentiment feature information of the sound information.
Optionally, the determining a sentiment type of the live broadcast user according to a matching degree between the sentiment feature information and preset sentiment feature information of each sentiment type comprises:
acquiring preset sound sentiment feature information of each sentiment type;
calculating a matching degree between the sound sentiment feature information and the preset sound sentiment feature information of each sentiment type;
selecting a preset number of matching degrees in descending order of matching degree; and
determining the sentiment type corresponding to each selected matching degree as the sentiment type of the live broadcast user.
Optionally, the performing feature extraction on the sound information to obtain sentiment feature information of the sound information comprises:
acquiring user voice information contained in the sound information, and performing feature extraction on the user voice information to obtain first sentiment feature information; or
acquiring environment sound information contained in the sound information, and performing feature extraction on the environment sound information to obtain second sentiment feature information.
Optionally, the sentiment feature information comprises text sentiment feature information and sound sentiment feature information; and
the determining a sentiment type of the live broadcast user according to a matching degree between the sentiment feature information and preset sentiment feature information of each sentiment type comprises:
acquiring preset text sentiment feature information and preset sound sentiment feature information of each sentiment type;
calculating a matching degree between the text sentiment feature information and the preset text sentiment feature information of each sentiment type, and calculating a matching degree between the sound sentiment feature information and the preset sound sentiment feature information of each sentiment type;
selecting a preset number of matching degrees in descending order of the calculated matching degrees; and
determining the sentiment type corresponding to each selected matching degree as the sentiment type of the live broadcast user.
Optionally, after the determining a sentiment type of the live broadcast user according to a matching degree between the sentiment feature information and preset sentiment feature information of each sentiment type, the method further comprises:
establishing a correspondence between an identity identifier of the live broadcast user and the determined sentiment type.
Optionally, the method further comprises:
receiving a query request sent by an audience user, the query request comprising a specified sentiment type;
querying, according to the established correspondences between the identity identifiers of live broadcast users and sentiment types, the identity identifier corresponding to the specified sentiment type; and
sending the found identity identifier to the audience user.
In another aspect, a live broadcast sentiment classification apparatus is provided, the apparatus comprising:
an acquisition module, configured to acquire sound information of a live broadcast user during a live broadcast process of the live broadcast user;
an extraction module, configured to perform feature extraction on the sound information to obtain sentiment feature information of the sound information; and
a determination module, configured to determine a sentiment type of the live broadcast user according to a matching degree between the sentiment feature information and preset sentiment feature information of each sentiment type.
Optionally, the extraction module is further configured to convert the sound information into text information, and perform feature extraction on the text information to obtain text sentiment feature information, the text sentiment feature information comprising at least one sentiment keyword.
Optionally, the determination module is further configured to acquire preset text sentiment feature information of each sentiment type, the preset text sentiment feature information of each sentiment type comprising at least one sentiment keyword; calculate a matching degree between the text sentiment feature information and the preset text sentiment feature information of each sentiment type; select a preset number of matching degrees in descending order of matching degree; and determine the sentiment type corresponding to each selected matching degree as the sentiment type of the live broadcast user.
Optionally, the extraction module is further configured to perform feature extraction on the sound information by using a preset feature extraction algorithm, to obtain sound sentiment feature information of the sound information.
Optionally, the determination module is further configured to acquire preset sound sentiment feature information of each sentiment type; calculate a matching degree between the sound sentiment feature information and the preset sound sentiment feature information of each sentiment type; select a preset number of matching degrees in descending order of matching degree; and determine the sentiment type corresponding to each selected matching degree as the sentiment type of the live broadcast user.
Optionally, the acquisition module is further configured to acquire user voice information contained in the sound information, and perform feature extraction on the user voice information to obtain first sentiment feature information; or
the acquisition module is further configured to acquire environment sound information contained in the sound information, and perform feature extraction on the environment sound information to obtain second sentiment feature information.
Optionally, the sentiment feature information comprises text sentiment feature information and sound sentiment feature information; and
the determination module is further configured to acquire preset text sentiment feature information and preset sound sentiment feature information of each sentiment type; calculate a matching degree between the text sentiment feature information and the preset text sentiment feature information of each sentiment type, and calculate a matching degree between the sound sentiment feature information and the preset sound sentiment feature information of each sentiment type; select a preset number of matching degrees in descending order of the calculated matching degrees; and determine the sentiment type corresponding to each selected matching degree as the sentiment type of the live broadcast user.
Optionally, the apparatus further comprises a relationship establishment module, configured to establish a correspondence between an identity identifier of the live broadcast user and the determined sentiment type.
Optionally, the apparatus further comprises a query module, configured to receive a query request sent by an audience user, the query request comprising a specified sentiment type; query, according to the established correspondences between the identity identifiers of live broadcast users and sentiment types, the identity identifier corresponding to the specified sentiment type; and send the found identity identifier to the audience user.
The technical solutions provided by the embodiments of the present invention have the following beneficial effects:
Feature extraction is performed on the sound information of a live broadcast user to obtain the sentiment feature information of the sound information, and the sentiment type of the live broadcast user is determined according to the matching degree between the sentiment feature information and the preset sentiment feature information of each sentiment type. The sentiment type of the live broadcast user can therefore be determined automatically from the sound information captured during the live broadcast, without the live broadcast user selecting it manually, which simplifies the user's operations. Moreover, when the live broadcast content changes, a sentiment type that matches the sentiment of the changed content can still be determined from the sound information acquired during the broadcast, avoiding a mismatch between the changed sentiment and a manually selected sentiment type and improving classification accuracy.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the following briefly describes the accompanying drawings required for the embodiments. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of a live broadcast sentiment classification method according to an embodiment of the present invention;
Fig. 2 is a flowchart of a live broadcast sentiment classification method according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a live broadcast sentiment classification apparatus according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a live broadcast sentiment classification apparatus according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a live broadcast sentiment classification apparatus according to an embodiment of the present invention.
Embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the following describes the embodiments of the present invention in further detail with reference to the accompanying drawings.
Fig. 1 is a flowchart of a live broadcast sentiment classification method according to an embodiment of the present invention. The method is performed by a server. Referring to Fig. 1, the method includes the following steps:
101. Acquire sound information of a live broadcast user during a live broadcast process of the live broadcast user.
102. Perform feature extraction on the sound information to obtain sentiment feature information of the sound information.
103. Determine a sentiment type of the live broadcast user according to a matching degree between the sentiment feature information and preset sentiment feature information of each sentiment type.
In the method provided by this embodiment of the present invention, feature extraction is performed on the sound information of the live broadcast user to obtain the sentiment feature information of the sound information, and the sentiment type of the live broadcast user is determined according to the matching degree between the sentiment feature information and the preset sentiment feature information of each sentiment type. The sentiment type can therefore be determined automatically from the sound information captured during the live broadcast, without the live broadcast user selecting it manually, which simplifies the user's operations. Moreover, when the live broadcast content changes, a sentiment type that matches the sentiment of the changed content can still be determined from the sound information acquired during the broadcast, avoiding a mismatch between the changed sentiment and a manually selected sentiment type and improving classification accuracy.
Optionally, the performing feature extraction on the sound information to obtain sentiment feature information of the sound information includes:
converting the sound information into text information; and
performing feature extraction on the text information to obtain text sentiment feature information, the text sentiment feature information including at least one sentiment keyword.
Optionally, the determining a sentiment type of the live broadcast user according to a matching degree between the sentiment feature information and preset sentiment feature information of each sentiment type includes:
acquiring preset text sentiment feature information of each sentiment type, the preset text sentiment feature information of each sentiment type including at least one sentiment keyword;
calculating a matching degree between the text sentiment feature information and the preset text sentiment feature information of each sentiment type;
selecting a preset number of matching degrees in descending order of matching degree; and
determining the sentiment type corresponding to each selected matching degree as the sentiment type of the live broadcast user.
Optionally, the performing feature extraction on the sound information to obtain sentiment feature information of the sound information includes:
performing feature extraction on the sound information by using a preset feature extraction algorithm, to obtain sound sentiment feature information of the sound information.
Optionally, the determining a sentiment type of the live broadcast user according to a matching degree between the sentiment feature information and preset sentiment feature information of each sentiment type includes:
acquiring preset sound sentiment feature information of each sentiment type;
calculating a matching degree between the sound sentiment feature information and the preset sound sentiment feature information of each sentiment type;
selecting a preset number of matching degrees in descending order of matching degree; and
determining the sentiment type corresponding to each selected matching degree as the sentiment type of the live broadcast user.
Optionally, the performing feature extraction on the sound information to obtain sentiment feature information of the sound information includes:
acquiring user voice information contained in the sound information, and performing feature extraction on the user voice information to obtain first sentiment feature information; or
acquiring environment sound information contained in the sound information, and performing feature extraction on the environment sound information to obtain second sentiment feature information.
Optionally, the sentiment feature information includes text sentiment feature information and sound sentiment feature information; and
the determining a sentiment type of the live broadcast user according to a matching degree between the sentiment feature information and preset sentiment feature information of each sentiment type includes:
acquiring preset text sentiment feature information and preset sound sentiment feature information of each sentiment type;
calculating a matching degree between the text sentiment feature information and the preset text sentiment feature information of each sentiment type, and calculating a matching degree between the sound sentiment feature information and the preset sound sentiment feature information of each sentiment type;
selecting a preset number of matching degrees in descending order of the calculated matching degrees; and
determining the sentiment type corresponding to each selected matching degree as the sentiment type of the live broadcast user.
Optionally, after the determining a sentiment type of the live broadcast user according to a matching degree between the sentiment feature information and preset sentiment feature information of each sentiment type, the method further includes:
establishing a correspondence between an identity identifier of the live broadcast user and the determined sentiment type.
Optionally, the method further includes:
receiving a query request sent by an audience user, the query request including a specified sentiment type;
querying, according to the established correspondences between the identity identifiers of live broadcast users and sentiment types, the identity identifier corresponding to the specified sentiment type; and
sending the found identity identifier to the audience user.
Any combination of the foregoing optional technical solutions may be used to form an optional embodiment of the present invention, and details are not described herein again.
Fig. 2 is a flowchart of a live broadcast sentiment classification method according to an embodiment of the present invention. The method is performed by a server. Referring to Fig. 2, the method includes the following steps:
201. Acquire sound information of a live broadcast user during a live broadcast process of the live broadcast user.
In this embodiment of the present invention, the server may create one or more channels. Any user may act as a live broadcast user and broadcast in a channel, and other users may, as audience users, watch that user's broadcast content in the channel. During the live broadcast, the terminal currently used by the live broadcast user captures the user's sound information and sends it to the server; the server receives the sound information and broadcasts it in the channel.
When the live broadcast content of the live broadcast user changes, the sentiment contained in the content may also change. To ensure that the sentiment type corresponding to the changed content can still be determined accurately, the server may acquire the sound information of the live broadcast user during the live broadcast, so as to determine, according to the sound information, the sentiment type that matches the sentiment contained in the current content.
Specifically, the server may intercept sound information of a preset duration from the sound information sent by the live broadcast terminal, and determine the matching sentiment type according to the intercepted sound information. The preset duration may be 5 seconds, 10 seconds, 30 seconds, 1 minute, or the like; this embodiment of the present invention does not limit the preset duration.
In addition, the server may acquire the sound information periodically. The acquisition period may be 1 minute, 5 minutes, 10 minutes, or the like; this embodiment of the present invention does not limit the acquisition period either.
For example, if the preset duration is 10 seconds and the acquisition period is 1 minute, the server intercepts 10 seconds of sound information from the sound information sent by the live broadcast terminal every minute during the live broadcast.
In practice, the sound information may include user voice information and environment sound information, and the server may acquire both at the same time. The user voice information refers to the speech of the live broadcast user, and the environment sound information refers to background sound other than the user's speech, such as accompaniment; this embodiment of the present invention does not limit this.
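For illustration only, the following sketch shows one way step 201 could be realized: a clip of the preset duration is intercepted from the incoming audio stream once per acquisition period. The names (`read_stream_chunk`, `on_clip`) and the parameter values are assumptions of this sketch, not part of the disclosed method.

```python
import time

PRESET_DURATION_S = 10     # length of each intercepted clip, e.g. 10 seconds
ACQUISITION_PERIOD_S = 60  # how often a clip is intercepted, e.g. every minute
SAMPLE_RATE = 16000        # assumed sample rate of the incoming PCM stream

def intercept_clips(read_stream_chunk, on_clip):
    """Periodically intercept a preset-duration clip from the live audio stream."""
    while True:
        samples_needed = PRESET_DURATION_S * SAMPLE_RATE
        clip = []
        # Collect PRESET_DURATION_S seconds of samples from the stream.
        while len(clip) < samples_needed:
            clip.extend(read_stream_chunk())
        on_clip(clip)                      # hand the clip to steps 202 and 203
        time.sleep(ACQUISITION_PERIOD_S)   # wait for the next acquisition period
```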
202. Perform feature extraction on the sound information to obtain sentiment feature information of the sound information.
The server may perform feature extraction on the acquired sound information to obtain the sentiment feature information of the sound information. The sentiment feature information represents the sentiment contained in the live broadcast content of the live broadcast user.
In this embodiment of the present invention, different feature extraction methods yield different types of sentiment feature information. That is, step 202 may include at least one of the following steps 2021 and 2022:
2021. Convert the sound information into text information, and perform feature extraction on the text information to obtain text sentiment feature information.
The text sentiment feature information represents the sentiment contained in the text information and may include at least one sentiment keyword, such as "happy", "sad", or "angry"; this embodiment of the present invention does not limit the keywords.
In practice, the server may convert the sound information into text information by using a technology such as speech recognition, perform word segmentation on the text information by using a preset word segmentation algorithm to obtain the words in the text information, and select at least one sentiment keyword from those words as the text sentiment feature information.
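As a non-authoritative sketch of step 2021, the snippet below converts a clip to text with an unspecified speech recognition engine (represented by a `speech_to_text` callback), segments the text with the jieba library as one possible word segmentation algorithm, and keeps the words found in a small illustrative sentiment lexicon. The callback and the lexicon are assumptions of the sketch.

```python
import jieba  # one possible Chinese word segmentation library

# Illustrative sentiment-keyword lexicon; the actual keyword set is not specified.
SENTIMENT_KEYWORDS = {"happy", "joyful", "sad", "angry", "excited"}

def extract_text_sentiment_features(sound_clip, speech_to_text):
    """Step 2021 sketch: sound -> text -> segmented words -> sentiment keywords."""
    text = speech_to_text(sound_clip)   # speech recognition engine, assumed
    words = jieba.lcut(text)            # word segmentation
    # Keep only the words that appear in the sentiment lexicon.
    return [w for w in words if w in SENTIMENT_KEYWORDS]
```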
2022. Perform feature extraction on the sound information by using a preset feature extraction algorithm, to obtain sound sentiment feature information of the sound information.
The server may perform feature extraction on the sound information by using the preset feature extraction algorithm, and use the extracted feature information as the sound sentiment feature information, which represents the sentiment contained in the sound information.
The preset feature extraction algorithm is used to extract the feature information of the sound information and may be PCA (Principal Component Analysis), FDA (Fisher Discriminant Analysis, also known as linear discriminant analysis), ICA (Independent Component Analysis), or the like; this embodiment of the present invention does not limit the preset feature extraction algorithm.
Because the sound information contains different kinds of feature information, such as pitch, tonality, loudness, and timbre features, the server may apply different preset feature extraction algorithms to the sound information to obtain different kinds of sound sentiment feature information; this embodiment of the present invention does not limit the kinds of sound sentiment feature information.
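The sketch below illustrates what step 2022 might look like if MFCC frames were used as the raw acoustic features and PCA as the preset feature extraction algorithm. The use of librosa, scikit-learn, and MFCCs is an assumption of this sketch, not part of the disclosure.

```python
import librosa
import numpy as np
from sklearn.decomposition import PCA

def extract_sound_sentiment_features(wav_path, n_components=8):
    """Step 2022 sketch: frame-level acoustic features reduced with PCA."""
    y, sr = librosa.load(wav_path, sr=None)
    # One possible kind of acoustic feature information: MFCCs (frames x coefficients).
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).T
    # PCA is one of the preset feature extraction algorithms mentioned above.
    reduced = PCA(n_components=n_components).fit_transform(mfcc)
    # Summarise the clip as the mean of the reduced frames (one fixed-length vector).
    return np.mean(reduced, axis=0)
```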
203. Calculate a matching degree between the sentiment feature information and the preset sentiment feature information of each sentiment type, select a preset number of matching degrees in descending order of the calculated matching degrees, and determine the sentiment type corresponding to each selected matching degree as the sentiment type of the live broadcast user.
To determine the sentiment type that matches the sentiment contained in the live broadcast content, the server may predetermine the preset sentiment feature information of each sentiment type. After obtaining the sentiment feature information from the sound information of the live broadcast user, the server may acquire the preset sentiment feature information of each sentiment type, match the obtained sentiment feature information against it, and calculate the matching degree between the sentiment feature information and the preset sentiment feature information of each sentiment type.
The matching degree represents the similarity between the sentiment feature information and the preset sentiment feature information. A higher matching degree indicates that the two are more similar, that is, the sound information is more likely to belong to the sentiment type corresponding to that preset sentiment feature information.
Therefore, after calculating the matching degrees between the sentiment feature information and the preset sentiment feature information of the multiple sentiment types, the server may sort the calculated matching degrees in descending order, select a preset number of matching degrees according to the order, and determine the sentiment type corresponding to each selected matching degree as the sentiment type of the live broadcast user. The preset number may be set in advance or set according to the number of calculated matching degrees; this embodiment of the present invention does not limit this.
For example, if the preset number is 2, the server selects the two largest matching degrees after the multiple matching degrees are calculated. If the sentiment types corresponding to the selected matching degrees are "cheerful" and "passionate", the server determines that the sentiment types of the live broadcast user are "cheerful" and "passionate".
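A minimal sketch of the selection logic in step 203, assuming the matching degrees have already been computed as a mapping from sentiment type to score:

```python
def pick_sentiment_types(match_degrees, preset_number=2):
    """Sort matching degrees in descending order and keep the top preset number.

    match_degrees: dict mapping sentiment type -> matching degree, e.g.
    {"cheerful": 0.8, "passionate": 0.7, "sad": 0.1}.
    """
    ranked = sorted(match_degrees.items(), key=lambda kv: kv[1], reverse=True)
    return [sentiment_type for sentiment_type, _ in ranked[:preset_number]]

# Example: with preset_number = 2 and the scores above, this returns
# ["cheerful", "passionate"].
```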
In this embodiment of the present invention, the specific process of determining the sentiment type of the live broadcast user differs for different types of sentiment feature information. Based on steps 2021 and 2022, determining the sentiment type of the live broadcast user according to the sentiment feature information may include at least one of the following steps 2031 to 2033:
2031. Acquire the preset text sentiment feature information of each sentiment type, calculate a matching degree between the text sentiment feature information and the preset text sentiment feature information of each sentiment type, select a preset number of matching degrees in descending order of matching degree, and determine the sentiment type corresponding to each selected matching degree as the sentiment type of the live broadcast user.
Based on step 2021, the server may predetermine the preset text sentiment feature information of each sentiment type, which includes at least one sentiment keyword. For example, the sentiment types may be "sad", "cheerful", "passionate", "angry", and so on; the preset text sentiment feature information corresponding to "cheerful" may include sentiment keywords such as "happy", "joyful", "glad", and "delighted".
After obtaining the text sentiment feature information of the sound information, the server may acquire the preset text sentiment feature information of each sentiment type; calculate, according to the sentiment keywords in the text sentiment feature information and the sentiment keywords in the preset text sentiment feature information of each sentiment type, the matching degree between the two; sort the calculated matching degrees in descending order; select a preset number of matching degrees according to the order; and determine the sentiment type corresponding to each selected matching degree as the sentiment type of the live broadcast user.
Specifically, when calculating the matching degree between the text sentiment feature information and the preset text sentiment feature information of a given sentiment type, the server may select, from the sentiment keywords in the text sentiment feature information, the keywords that are similar to any sentiment keyword corresponding to that sentiment type, and use the number of selected sentiment keywords as the matching degree between the text sentiment feature information and the preset text sentiment feature information of that sentiment type; or use the ratio of the number of selected sentiment keywords to the total number of sentiment keywords corresponding to that sentiment type as the matching degree.
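The following sketch shows the keyword-overlap matching degree of step 2031 under the simplifying assumption that "similar" means "identical"; a production system would likely also consult a synonym table, which the description leaves unspecified.

```python
def text_match_degree(text_keywords, preset_keywords, as_ratio=True):
    """Matching degree between extracted keywords and one sentiment type's
    preset keyword set, as either a count or a ratio."""
    matched = [w for w in text_keywords if w in preset_keywords]
    if as_ratio:
        # Ratio of matched keywords to the total number of preset keywords.
        return len(matched) / len(preset_keywords) if preset_keywords else 0.0
    return len(matched)  # alternatively, use the raw count as the matching degree
```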
2032. Acquire the preset sound sentiment feature information of each sentiment type, calculate a matching degree between the sound sentiment feature information and the preset sound sentiment feature information of each sentiment type, select a preset number of matching degrees in descending order of matching degree, and determine the sentiment type corresponding to each selected matching degree as the sentiment type of the live broadcast user.
Based on step 2022, the server may predetermine the preset sound sentiment feature information of each sentiment type. After obtaining the sound sentiment feature information of the sound information, the server may acquire the preset sound sentiment feature information of each sentiment type, calculate the matching degree between the sound sentiment feature information and the preset sound sentiment feature information of each sentiment type, sort the calculated matching degrees in descending order, select a preset number of matching degrees according to the order, and determine the sentiment type corresponding to each selected matching degree as the sentiment type of the live broadcast user.
In addition, because the sound information contains different kinds of feature information, the server may predetermine multiple kinds of preset sound sentiment feature information for each sentiment type. If multiple kinds of sound sentiment feature information are obtained from the sound information of the live broadcast user, the server may match the sound sentiment feature information of each kind against the preset sound sentiment feature information of the same kind, obtaining a matching degree for each of the multiple kinds.
2033. Acquire the preset text sentiment feature information and preset sound sentiment feature information of each sentiment type; calculate a matching degree between the text sentiment feature information and the preset text sentiment feature information of each sentiment type, and a matching degree between the sound sentiment feature information and the preset sound sentiment feature information of each sentiment type; select a preset number of matching degrees in descending order of the calculated matching degrees; and determine the sentiment type corresponding to each selected matching degree as the sentiment type of the live broadcast user.
Combining steps 2021 and 2022, the server may obtain both the text sentiment feature information and the sound sentiment feature information of the sound information, acquire the preset text sentiment feature information and preset sound sentiment feature information of each sentiment type, calculate the matching degree between the text sentiment feature information and the preset text sentiment feature information of each sentiment type as well as the matching degree between the sound sentiment feature information and the preset sound sentiment feature information of each sentiment type, sort all the calculated matching degrees in descending order, select a preset number of them according to the order, and determine the sentiment type corresponding to each selected matching degree as the sentiment type of the live broadcast user.
Further, after calculating the multiple matching degrees of the text sentiment feature information and the multiple matching degrees of the sound sentiment feature information, the server may weight the matching degrees of the text sentiment feature information by a preset text weight and weight the matching degrees of the sound sentiment feature information by a preset sound weight to obtain multiple weighted matching degrees, sort the weighted matching degrees in descending order, select a preset number of them according to the order, and determine the sentiment type corresponding to each selected weighted matching degree as the sentiment type of the live broadcast user.
The text weight and the sound weight may be set by the server by default, or set according to the influence of the text and the sound in the live broadcast content on the sentiment; this embodiment of the present invention does not limit this. The sum of the preset text weight and the preset sound weight may be a preset threshold; for example, if the preset threshold is 1, the text weight and the sound weight may be 0.6 and 0.4 respectively.
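A sketch of the weighting in step 2033, using the example weights 0.6 and 0.4 from the description: each group of matching degrees is scaled by its weight, and the weighted matching degrees are pooled and sorted in descending order so that a preset number of them can be selected, as in the earlier selection sketch.

```python
TEXT_WEIGHT = 0.6   # preset text weight (example value from the description)
SOUND_WEIGHT = 0.4  # preset sound weight; the two sum to the preset threshold of 1

def weighted_match_degrees(text_degrees, sound_degrees):
    """Weight both groups of matching degrees, then pool and rank them.

    Each argument maps sentiment type -> matching degree; the pooled, sorted
    list is what the preset number of entries is selected from.
    """
    pooled = [(t, TEXT_WEIGHT * d) for t, d in text_degrees.items()]
    pooled += [(t, SOUND_WEIGHT * d) for t, d in sound_degrees.items()]
    return sorted(pooled, key=lambda td: td[1], reverse=True)
```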
In addition, the sound information may include user voice information and environment sound information. For either kind of sound information, feature extraction may be performed by using steps 2021 and 2022 to obtain sentiment feature information, and the sentiment type of the live broadcast user may then be determined according to the sentiment feature information by using at least one of steps 2031 to 2033.
Specifically, during feature extraction the server may acquire the user voice information contained in the sound information and perform feature extraction on it to obtain first sentiment feature information; or acquire the environment sound information contained in the sound information and perform feature extraction on it to obtain second sentiment feature information. The first sentiment feature information includes at least one of the text sentiment feature information and the sound sentiment feature information of the user voice information, and the second sentiment feature information includes at least one of the text sentiment feature information and the sound sentiment feature information of the environment sound information.
To facilitate management of the preset sentiment feature information of the multiple sentiment types, the preset sentiment feature information of each sentiment type may be stored in a sentiment database. For example, the preset sentiment feature information of all sentiment types may be stored in the same sentiment database, or the preset sentiment feature information of different sentiment types may be stored in different sentiment databases; this embodiment of the present invention does not limit this. Further, the preset sentiment feature information of each sentiment type may include preset text sentiment feature information and preset sound sentiment feature information, and these two kinds of sentiment feature information may be stored in the same sentiment database or in different sentiment databases; this embodiment of the present invention does not limit this either.
The sentiment database may be MySQL (a relational database management system), Oracle (a relational database management system), or the like; this embodiment of the present invention does not limit the type of the sentiment database.
204. Establish a correspondence between an identity identifier of the live broadcast user and the determined sentiment type.
The identity identifier of the live broadcast user represents the identity of the live broadcast user and may be the user account, the user nickname, or the like of the live broadcast user. The identity identifier may be determined by the live broadcast user or assigned by the server; this embodiment of the present invention does not limit this.
After determining the sentiment type of the live broadcast user, the server may acquire the identity identifier of the live broadcast user, establish the correspondence between the identity identifier and the sentiment type, and store the correspondence.
Subsequently, when the live broadcast content of the live broadcast user changes, the server may re-determine the sentiment type according to the sound information of the live broadcast user. If the newly determined sentiment type differs from the previous sentiment type of the live broadcast user, the server may replace, in the correspondence, the previous sentiment type corresponding to the identity identifier of the live broadcast user with the newly determined sentiment type, thereby updating the correspondence and hence the sentiment type.
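For illustration, step 204 can be sketched as a simple mapping from identity identifier to the determined sentiment type(s), replaced whenever a newly determined type differs from the stored one. An in-memory dictionary stands in here for the database the description mentions; both the variable names and the storage choice are assumptions of the sketch.

```python
# Hypothetical in-memory correspondence table:
# identity identifier -> list of determined sentiment types.
identity_to_sentiment = {}

def update_correspondence(identity_id, new_sentiment_types):
    """Store or refresh the correspondence between a live broadcast user's
    identity identifier and the determined sentiment type(s)."""
    old = identity_to_sentiment.get(identity_id)
    if old != new_sentiment_types:
        # Replace the previous sentiment type with the newly determined one.
        identity_to_sentiment[identity_id] = new_sentiment_types
```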
Steps 201 to 204 above take the determination of the sentiment type of one live broadcast user as an example. In practice, the server may perform steps 201 to 203 for each of multiple live broadcast users, determine the sentiment type of each live broadcast user, and establish the correspondence between the identity identifier of each live broadcast user and the determined sentiment type, thereby classifying the sentiment of the multiple live broadcast users.
Subsequently, while the multiple live broadcast users are broadcasting, an audience user can query for matching live broadcast users according to a sentiment type of interest.
Specifically, when an audience user triggers, on the terminal in use, a query request for a specified sentiment type, the server may receive the query request sent by the audience user, query, according to the established correspondences between the identity identifiers of the live broadcast users and their sentiment types, the identity identifier corresponding to the specified sentiment type, and send the found identity identifier to the audience user. The audience user may select an identity identifier from those sent by the server, enter the channel of the corresponding live broadcast user, and watch that user's broadcast content.
The specified sentiment type may be determined according to a search keyword entered by the audience user. That is, the audience user may enter a search keyword on the search page provided by the server and click the search button to trigger the query request; the server then uses the search keyword as the specified sentiment type and returns the identity identifier matching the search keyword.
In addition, the server may assign a channel identifier, such as a channel number or a channel name, to each channel, and establish a correspondence between identity identifiers and channel identifiers according to the live broadcast user currently broadcasting in each channel. In this way, when an audience user queries the identity identifier corresponding to a specified sentiment type, the server may acquire the channel identifier corresponding to that identity identifier and send it to the audience user, so that the audience user can enter the channel of the corresponding live broadcast user according to the channel identifier.
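The audience query can likewise be sketched as a lookup over the stored correspondences; the function signature and the data in the usage comment are hypothetical.

```python
def query_by_sentiment(specified_sentiment_type, identity_to_sentiment, identity_to_channel):
    """Return (identity identifier, channel identifier) pairs for every live
    broadcast user whose sentiment types include the specified sentiment type."""
    return [
        (identity_id, identity_to_channel.get(identity_id))
        for identity_id, types in identity_to_sentiment.items()
        if specified_sentiment_type in types
    ]

# Hypothetical usage:
# query_by_sentiment("sad songs",
#                    {"user_1": ["sad songs"], "user_2": ["cheerful"]},
#                    {"user_1": "channel_101", "user_2": "channel_102"})
# -> [("user_1", "channel_101")]
```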
In the method provided by this embodiment of the present invention, feature extraction is performed on the sound information of a live broadcast user to obtain the sentiment feature information of the sound information, and the sentiment type of the live broadcast user is determined according to the matching degree between the sentiment feature information and the preset sentiment feature information of each sentiment type. The sentiment type can therefore be determined automatically from the sound information captured during the live broadcast, without the live broadcast user selecting it manually, which simplifies the user's operations. Moreover, when the live broadcast content changes, the sentiment type that matches the sentiment of the changed content can still be determined by acquiring sound information during the broadcast, avoiding a mismatch between the changed sentiment and a manually selected sentiment type, ensuring that the determined sentiment type accurately reflects the sentiment contained in the current content, and improving classification accuracy. Furthermore, audience users can accurately find matching live broadcast users according to the determined sentiment types, which improves search efficiency and audience retention.
Fig. 3 is a schematic structural diagram of a live broadcast sentiment classification apparatus according to an embodiment of the present invention. The apparatus may be a server. Referring to Fig. 3, the apparatus includes an acquisition module 310, an extraction module 320, and a determination module 330.
The acquisition module 310 is configured to acquire sound information of a live broadcast user during a live broadcast process of the live broadcast user;
the extraction module 320 is configured to perform feature extraction on the sound information to obtain sentiment feature information of the sound information; and
the determination module 330 is configured to determine a sentiment type of the live broadcast user according to a matching degree between the sentiment feature information and preset sentiment feature information of each sentiment type.
The apparatus provided by this embodiment of the present invention performs feature extraction on the sound information of a live broadcast user to obtain the sentiment feature information of the sound information, and determines the sentiment type of the live broadcast user according to the matching degree between the sentiment feature information and the preset sentiment feature information of each sentiment type. The sentiment type can therefore be determined automatically from the sound information captured during the live broadcast, without the live broadcast user selecting it manually, which simplifies the user's operations. Moreover, when the live broadcast content changes, a sentiment type that matches the sentiment of the changed content can still be determined from the sound information acquired during the broadcast, avoiding a mismatch between the changed sentiment and a manually selected sentiment type and improving classification accuracy.
Optionally, the extraction module 320 is further configured to convert the sound information into text information, and perform feature extraction on the text information to obtain text sentiment feature information, the text sentiment feature information including at least one sentiment keyword.
Optionally, the determination module 330 is further configured to acquire preset text sentiment feature information of each sentiment type, the preset text sentiment feature information of each sentiment type including at least one sentiment keyword; calculate a matching degree between the text sentiment feature information and the preset text sentiment feature information of each sentiment type; select a preset number of matching degrees in descending order of matching degree; and determine the sentiment type corresponding to each selected matching degree as the sentiment type of the live broadcast user.
Optionally, the extraction module 320 is further configured to perform feature extraction on the sound information by using a preset feature extraction algorithm, to obtain sound sentiment feature information of the sound information.
Optionally, the determination module 330 is further configured to acquire preset sound sentiment feature information of each sentiment type; calculate a matching degree between the sound sentiment feature information and the preset sound sentiment feature information of each sentiment type; select a preset number of matching degrees in descending order of matching degree; and determine the sentiment type corresponding to each selected matching degree as the sentiment type of the live broadcast user.
Optionally, the acquisition module 310 is further configured to acquire user voice information contained in the sound information, and perform feature extraction on the user voice information to obtain first sentiment feature information; or
the acquisition module 310 is further configured to acquire environment sound information contained in the sound information, and perform feature extraction on the environment sound information to obtain second sentiment feature information.
Optionally, the sentiment feature information includes text sentiment feature information and sound sentiment feature information; and the determination module 330 is further configured to acquire preset text sentiment feature information and preset sound sentiment feature information of each sentiment type; calculate a matching degree between the text sentiment feature information and the preset text sentiment feature information of each sentiment type, and calculate a matching degree between the sound sentiment feature information and the preset sound sentiment feature information of each sentiment type; select a preset number of matching degrees in descending order of the calculated matching degrees; and determine the sentiment type corresponding to each selected matching degree as the sentiment type of the live broadcast user.
Referring to Fig. 4, optionally, the apparatus further includes a relationship establishment module 340, configured to establish a correspondence between an identity identifier of the live broadcast user and the determined sentiment type.
Referring to Fig. 4, optionally, the apparatus further includes a query module 350, configured to receive a query request sent by an audience user, the query request including a specified sentiment type; query, according to the established correspondences between the identity identifiers of live broadcast users and sentiment types, the identity identifier corresponding to the specified sentiment type; and send the found identity identifier to the audience user.
Any combination of the foregoing optional technical solutions may be used to form an optional embodiment of the present invention, and details are not described herein again.
Fig. 5 is the structural representation of a kind of live emotional semantic classification device 500 that the embodiment of the present invention provides.Such as, device 500 may be provided in a server.With reference to Fig. 5, device 500 comprises processing components 522, and it comprises one or more processor further, and the memory resource representated by storer 532, can such as, by the instruction of the execution of processing element 522, application program for storing.The application program stored in storer 532 can comprise each module corresponding to one group of instruction one or more.In addition, processing components 522 is configured to perform instruction, to perform all or part of step in the arbitrary shown method of above-mentioned Fig. 1 or Fig. 2.
The device 500 may further comprise a power supply component 526 configured to perform power management of the device 500, a wired or wireless network interface 550 configured to connect the device 500 to a network, and an input/output (I/O) interface 558. The device 500 may operate an operating system stored in the memory 532, for example Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™ or the like.
One of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (18)

1. A live broadcast sentiment classification method, characterized in that the method comprises:
during a live broadcast performed by a live user, obtaining acoustic information of the live user;
performing feature extraction on the acoustic information to obtain affective characteristics information of the acoustic information;
determining an affective style of the live user according to a matching degree between the affective characteristics information and preset affective characteristics information of each affective style.
2. The method according to claim 1, characterized in that performing feature extraction on the acoustic information to obtain the affective characteristics information of the acoustic information comprises:
converting the acoustic information into text information;
performing feature extraction on the text information to obtain text affective characteristics information, the text affective characteristics information comprising at least one emotion keyword.
3. The method according to claim 2, characterized in that determining the affective style of the live user according to the matching degree between the affective characteristics information and the preset affective characteristics information of each affective style comprises:
obtaining preset text affective characteristics information of each affective style, the preset text affective characteristics information of each affective style comprising at least one emotion keyword;
calculating a matching degree between the text affective characteristics information and the preset text affective characteristics information of each affective style;
selecting a preset number of matching degrees in descending order of matching degree;
determining the affective style corresponding to each selected matching degree as the affective style of the live user.
4. The method according to claim 1, characterized in that performing feature extraction on the acoustic information to obtain the affective characteristics information of the acoustic information comprises:
performing feature extraction on the acoustic information by using a preset feature extraction algorithm to obtain sound affective characteristics information of the acoustic information.
5. The method according to claim 4, characterized in that determining the affective style of the live user according to the matching degree between the affective characteristics information and the preset affective characteristics information of each affective style comprises:
obtaining preset sound affective characteristics information of each affective style;
calculating a matching degree between the sound affective characteristics information and the preset sound affective characteristics information of each affective style;
selecting a preset number of matching degrees in descending order of matching degree;
determining the affective style corresponding to each selected matching degree as the affective style of the live user.
6. The method according to claim 1, characterized in that performing feature extraction on the acoustic information to obtain the affective characteristics information of the acoustic information comprises:
obtaining user voice information comprised in the acoustic information, and performing feature extraction on the user voice information to obtain first affective characteristics information; or,
obtaining environmental voice information comprised in the acoustic information, and performing feature extraction on the environmental voice information to obtain second affective characteristics information.
7. The method according to claim 1, characterized in that the affective characteristics information comprises text affective characteristics information and sound affective characteristics information;
determining the affective style of the live user according to the matching degree between the affective characteristics information and the preset affective characteristics information of each affective style comprises:
obtaining preset text affective characteristics information and preset sound affective characteristics information of each affective style;
calculating a matching degree between the text affective characteristics information and the preset text affective characteristics information of each affective style, and calculating a matching degree between the sound affective characteristics information and the preset sound affective characteristics information of each affective style;
selecting a preset number of matching degrees from the calculated matching degrees in descending order;
determining the affective style corresponding to each selected matching degree as the affective style of the live user.
8. The method according to claim 1, characterized in that after determining the affective style of the live user according to the matching degree between the affective characteristics information and the preset affective characteristics information of each affective style, the method further comprises:
establishing a corresponding relation between an identity identifier of the live user and the determined affective style.
9. The method according to claim 8, characterized in that the method further comprises:
receiving a query request sent by an audience user, the query request comprising a specified affective style;
querying, according to the established corresponding relations between identity identifiers of live users and affective styles, the identity identifier corresponding to the specified affective style;
sending the queried identity identifier to the audience user.
10. A live broadcast sentiment classification device, characterized in that the device comprises:
an acquisition module, configured to obtain acoustic information of a live user during a live broadcast performed by the live user;
an extraction module, configured to perform feature extraction on the acoustic information to obtain affective characteristics information of the acoustic information;
a determination module, configured to determine an affective style of the live user according to a matching degree between the affective characteristics information and preset affective characteristics information of each affective style.
11. The device according to claim 10, characterized in that the extraction module is further configured to convert the acoustic information into text information, and perform feature extraction on the text information to obtain text affective characteristics information, the text affective characteristics information comprising at least one emotion keyword.
12. The device according to claim 11, characterized in that the determination module is further configured to obtain preset text affective characteristics information of each affective style, the preset text affective characteristics information of each affective style comprising at least one emotion keyword; calculate a matching degree between the text affective characteristics information and the preset text affective characteristics information of each affective style; select a preset number of matching degrees in descending order of matching degree; and determine the affective style corresponding to each selected matching degree as the affective style of the live user.
13. The device according to claim 10, characterized in that the extraction module is further configured to perform feature extraction on the acoustic information by using a preset feature extraction algorithm to obtain sound affective characteristics information of the acoustic information.
14. The device according to claim 13, characterized in that the determination module is further configured to obtain preset sound affective characteristics information of each affective style; calculate a matching degree between the sound affective characteristics information and the preset sound affective characteristics information of each affective style; select a preset number of matching degrees in descending order of matching degree; and determine the affective style corresponding to each selected matching degree as the affective style of the live user.
15. The device according to claim 10, characterized in that the acquisition module is further configured to obtain user voice information comprised in the acoustic information, and perform feature extraction on the user voice information to obtain first affective characteristics information; or,
the acquisition module is further configured to obtain environmental voice information comprised in the acoustic information, and perform feature extraction on the environmental voice information to obtain second affective characteristics information.
16. The device according to claim 10, characterized in that the affective characteristics information comprises text affective characteristics information and sound affective characteristics information;
the determination module is further configured to obtain preset text affective characteristics information and preset sound affective characteristics information of each affective style; calculate a matching degree between the text affective characteristics information and the preset text affective characteristics information of each affective style, and calculate a matching degree between the sound affective characteristics information and the preset sound affective characteristics information of each affective style; select a preset number of matching degrees from the calculated matching degrees in descending order; and determine the affective style corresponding to each selected matching degree as the affective style of the live user.
17. The device according to claim 10, characterized in that the device further comprises a relation establishing module, configured to establish a corresponding relation between an identity identifier of the live user and the determined affective style.
18. The device according to claim 17, characterized in that the device further comprises a query module, configured to receive a query request sent by an audience user, the query request comprising a specified affective style; query, according to established corresponding relations between identity identifiers of live users and affective styles, the identity identifier corresponding to the specified affective style; and send the queried identity identifier to the audience user.
CN201510834112.1A 2015-11-25 2015-11-25 Live broadcast sentiment classification method and apparatus Pending CN105404681A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510834112.1A CN105404681A (en) 2015-11-25 2015-11-25 Live broadcast sentiment classification method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510834112.1A CN105404681A (en) 2015-11-25 2015-11-25 Live broadcast sentiment classification method and apparatus

Publications (1)

Publication Number Publication Date
CN105404681A true CN105404681A (en) 2016-03-16

Family

ID=55470170

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510834112.1A Pending CN105404681A (en) 2015-11-25 2015-11-25 Live broadcast sentiment classification method and apparatus

Country Status (1)

Country Link
CN (1) CN105404681A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005182368A (en) * 2003-12-18 2005-07-07 Seiko Epson Corp Expression image estimating device, expression image estimating method and its program
CN102723078A (en) * 2012-07-03 2012-10-10 武汉科技大学 Emotion speech recognition method based on natural language comprehension
CN103456314A (en) * 2013-09-03 2013-12-18 广州创维平面显示科技有限公司 Emotion recognition method and device
CN104200804A (en) * 2014-09-19 2014-12-10 合肥工业大学 Various-information coupling emotion recognition method for human-computer interaction

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10685058B2 (en) 2015-01-02 2020-06-16 Gracenote, Inc. Broadcast profiling system
US11397767B2 (en) 2015-01-02 2022-07-26 Gracenote, Inc. Broadcast profiling system
CN106341694A (en) * 2016-08-29 2017-01-18 广州华多网络科技有限公司 Method and device for obtaining live streaming operation data
CN106341694B (en) * 2016-08-29 2019-08-02 广州华多网络科技有限公司 A kind of method and apparatus obtaining live streaming operation data
CN109788345A (en) * 2019-03-29 2019-05-21 广州虎牙信息科技有限公司 Live-broadcast control method, device, live streaming equipment and readable storage medium storing program for executing
CN109788345B (en) * 2019-03-29 2020-03-10 广州虎牙信息科技有限公司 Live broadcast control method and device, live broadcast equipment and readable storage medium
CN110866147A (en) * 2019-10-14 2020-03-06 北京达佳互联信息技术有限公司 Method, apparatus and storage medium for classifying live broadcast application
CN111583968A (en) * 2020-05-25 2020-08-25 桂林电子科技大学 Speech emotion recognition method and system

Similar Documents

Publication Publication Date Title
US11030412B2 (en) System and method for chatbot conversation construction and management
US11631123B2 (en) Voice shopping method, device and computer readable storage medium
EP3577610B1 (en) Associating meetings with projects using characteristic keywords
JP6894534B2 (en) Information processing method and terminal, computer storage medium
CN105404681A (en) Live broadcast sentiment classification method and apparatus
US20190369958A1 (en) Method and system for providing interface controls based on voice commands
CN109086026B (en) Broadcast voice determination method, device and equipment
US20180053261A1 (en) Automated Compatibility Matching Based on Music Preferences of Individuals
JP2019501466A (en) Method and system for search engine selection and optimization
CN105488135A (en) Live content classification method and device
CN104794122A (en) Position information recommending method, device and system
WO2020253064A1 (en) Speech recognition method and apparatus, and computer device and storage medium
CN109451147B (en) Information display method and device
JP7204801B2 (en) Man-machine interaction method, device and medium based on neural network
CN107992523B (en) Function option searching method of mobile application and terminal equipment
WO2017032084A1 (en) Information output method and apparatus
CN110287364B (en) Voice search method, system, device and computer readable storage medium
CN114490975B (en) User question labeling method and device
CN116595150A (en) Dialogue recommendation method, device, equipment and storage medium
CN112328905A (en) Online marketing content pushing method and device, computer equipment and storage medium
CN112417996B (en) Information processing method and device for industrial drawing, electronic equipment and storage medium
CN110717095B (en) Service item pushing method and device
KR102337800B1 (en) Project workflow design method using online work-based database of previously performed projects and apparatus of the same
CN113377775B (en) Information processing method and device
JP2017068782A (en) Time series data processing device and time series data processing method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 510660 Guangzhou City, Guangzhou, Guangdong, Whampoa Avenue, No. 315, self - made 1-17

Applicant after: Guangzhou KuGou Networks Co., Ltd.

Address before: 510000 B1, building, No. 16, rhyme Road, Guangzhou, Guangdong, China 13F

Applicant before: Guangzhou KuGou Networks Co., Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20160316