CN110992982A - Audio classification method and device and readable storage medium - Google Patents

Audio classification method and device and readable storage medium Download PDF

Info

Publication number
CN110992982A
CN110992982A
Authority
CN
China
Prior art keywords
audio
playlist
classification
broadcasting
generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911033406.9A
Other languages
Chinese (zh)
Inventor
吴海旭
任娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Lizhi Network Technology Co ltd
Original Assignee
Guangzhou Lizhi Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Lizhi Network Technology Co ltd filed Critical Guangzhou Lizhi Network Technology Co ltd
Priority to CN201911033406.9A priority Critical patent/CN110992982A/en
Publication of CN110992982A publication Critical patent/CN110992982A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/27 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G10L25/30 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks

Abstract

The embodiment of the invention discloses an audio classification method, an audio classification device, and a readable storage medium, wherein the method comprises the following steps: constructing an audio playlist representation system according to playlist log information; and training an audio classification generation model according to the audio playlist representation system to complete audio classification. In this embodiment, the audio playlist representation system can automatically classify new audio in a playlist according to the historical behavior of the playlist, realizing rapid processing of massive audio resources; at the same time, the embodiment is self-learning and saves both manpower and server resources.

Description

Audio classification method and device and readable storage medium
Technical Field
The invention relates to the technical field of intelligent audio data processing, and in particular to an audio classification method, an audio classification device, and a readable storage medium.
Background
With the rapid development of the internet industry, people increasingly acquire information from the major internet platforms. These platforms carry information in various forms such as text, audio, and video, meeting users' diverse information needs. Audio plays an increasingly important role as a major information carrier and one of the most important channels through which people obtain external information. Because sound is not visible, collecting audio into playlists is the most common way for people to organize and access audio information, and the judgment and classification of audio content is one of the most important processing tasks on every large audio platform.
At present, audio classification generally follows one of two schemes:
First: manual classification, in which audio-related categorization is completed entirely by hand;
Second: algorithmic, intelligent classification of audio content, which generally proceeds as follows:
(1): preprocess the audio content to remove noise and other interference;
(2): extract features from the audio content;
(3): classify the audio according to the extracted features.
These two methods have three main disadvantages:
First: they consume large amounts of manpower as well as computer resources such as computing and storage resources;
Second: feature-based classification algorithms lack extensibility: they handle only the predesigned categories and adapt poorly to new ones;
Third: the system as a whole has no self-learning or self-expanding capability.
Disclosure of Invention
The invention aims to provide an audio classification method, an audio classification device, and a readable storage medium. In use, the embodiment of the invention can automatically classify new audio in a playlist through the playlist representation system, according to the historical behavior of the playlist, thereby realizing rapid processing of massive audio resources; at the same time, it is self-learning and saves manpower and server resources.
In order to solve the technical problem, the embodiment of the invention adopts the following technical scheme:
there is provided an audio classification method including:
constructing an audio playlist representation system according to playlist log information;
and training an audio classification generation model according to the audio playlist representation system to complete audio classification.
Optionally, training the audio classification generation model according to the audio playlist representation system and completing the audio classification include: misclassified data is manually marked and manually classified, and the corrected data is updated into the audio playlist representation system.
Optionally, constructing the audio playlist representation system according to the playlist log information includes: collecting log information from the audio playlist server; mining audio playlist data information from the log information using a data mining method; and merging and clustering the audio playlist data information to form the audio playlist representation system.
Optionally, the audio playlist data information includes: the creation time of the playlist and/or the originator of the playlist and/or the upload time of the individual audio in the playlist and/or the category to which the individual audio in the playlist belongs.
Optionally, training the audio classification generation model according to the audio playlist representation system and completing the audio classification include:
generating feature vectors according to the data information of the audio playlist representation system;
adopting a fusion model of naive Bayes and a long short-term memory network as the classification model;
the classification model predicts the probability distribution over audio classes, and the class with the highest probability is taken as the class of the audio.
Optionally, generating feature vectors according to the data information of the audio playlist representation system includes: generating the sequence feature vectors of each audio type in the playlist according to the data information of the audio playlist representation system, and generating the probability distribution feature vectors of the audio types in the playlist according to the same data information.
Optionally, the process of generating the sequence feature vectors of each audio type in the playlist according to the data information of the audio playlist representation system includes:
sorting the audio in the playlist by time to generate the audio upload time sequence of the playlist;
setting a time series window as a parameter of the model;
and generating training feature data from the playlist audio category time sequence according to the time series window, and converting the feature data into feature vectors.
Optionally, the process of generating the audio type probability distribution feature vector of the playlist according to the audio playlist representation system data information is as follows: counting the frequency of each category of audio uploaded to the playlist before each time point of the playlist's new audio sequence.
An embodiment of the present invention further provides an audio classification apparatus, including:
the audio playlist representation system generating module, used for constructing an audio playlist representation system according to playlist log information;
and the audio classification generation model training module, used for training an audio classification generation model according to the audio playlist representation system to complete audio classification.
The embodiment of the invention also provides a computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the audio classification method are realized.
Embodiments of the present invention provide an audio classification method, an audio classification device, and a readable storage medium. A playlist is the most common way people organize audio in daily life, and from the perspective of the playlist's creator, the audio categories within a playlist do not change over a short period: a crosstalk-comedy (xiangsheng) playlist, for example, should consist mostly of crosstalk content, and a music playlist mostly of music. Through the audio playlist representation system, new audio in a playlist can be automatically classified according to the historical behavior of the playlist, realizing rapid processing of massive audio resources; the system is also self-learning and saves manpower and server resources.
Drawings
In order to more clearly illustrate the technical solution of the present embodiment, the drawings used in the prior art and in the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and a person skilled in the art could obtain other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating an audio classification method according to an embodiment of the present invention.
Fig. 2 is a schematic algorithm flow diagram of an audio classification method according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of an audio classification apparatus according to an embodiment of the present invention.
Detailed Description
All of the features disclosed in this specification, or all of the steps in any method or process so disclosed, may be combined in any combination, except combinations of features and/or steps that are mutually exclusive.
Any feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving equivalent or similar purposes, unless expressly stated otherwise. That is, unless expressly stated otherwise, each feature is only an example of a generic series of equivalent or similar features.
The following detailed description of embodiments of the invention refers to the accompanying drawings and examples.
Referring to fig. 1 and fig. 2, fig. 1 is a flowchart illustrating an audio classification method according to an embodiment of the present invention. Fig. 2 is a schematic algorithm flow diagram of an audio classification method according to an embodiment of the present invention.
The method comprises the following steps:
s11: constructing an audio broadcasting image system according to the broadcasting log information;
specifically, an audio playlist representation system is constructed based on the playlist log information, the representation system describing the detailed information of the audio within the playlist and the relationship between the playlist and the audio content.
It should be noted that, the process of constructing the audio playlist representation system according to the playlist log information is as follows:
s111: collecting log information of a broadcasting list related server;
s112: based on the collected log information, a data mining method is adopted to mine at least the following information:
(1): the creation time of the playlist;
(2): the originator of the play order;
(3): uploading time of each audio in the play list;
(4): the category of each audio in the playlist;
s113: and merging and clustering the mined information according to the unique identifier of the playlist to form an audio playlist portrait system.
S12: training an audio classification generation model according to the audio playlist representation system to complete audio classification;
In this embodiment, training the audio classification generation model according to the audio playlist representation system to complete audio classification is based on that representation system and adopts a deep learning algorithm; its main features are the programs and category information historically uploaded to the playlist and the audio upload frequency within the playlist, which can also be handled with a KNN algorithm.
Specifically, training the audio classification generation model according to the audio playlist representation system and completing the audio classification comprise the following steps:
S121, generating feature vectors according to the information of the audio playlist representation system. It should be noted that the algorithm in this embodiment uses the following two kinds of feature vectors:
(1): the sequence characteristics of each audio type in the playlist;
(2): the probability distribution characteristics of the audio types in the playlist;
Specifically, the audio type sequence feature vectors and labels of a playlist are generated as follows:
(1): sort the audio in the playlist by time to generate the program upload time sequence of the playlist;
(2): set a time series window M, where M is a hyperparameter of the model;
(3): generate training feature data from the playlist program category time sequence according to the time window, converting the features into feature vectors in the following manner:
Ti is the time point at which the i-th program was uploaded to the playlist, Xi is the i-th program of the playlist, and L(Xi) is the category of program Xi.
Sequence data:

Time point    Program category
T1            L(X1)
T2            L(X2)
...           ...
Tn            L(Xn)

Feature data:

Feature1    Feature2    ...    Featurek    label
L(X1)       L(X2)       ...    L(Xk)       L(Xk+1)
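The sliding-window construction in the table above can be sketched as follows; the function name and toy categories are illustrative, not from the patent:

```python
def window_features(categories, k):
    """Slide a window of size k over the playlist's category time
    series: each run of k consecutive categories is a feature row,
    and the category that immediately follows it is the label."""
    rows = []
    for i in range(len(categories) - k):
        rows.append((categories[i:i + k], categories[i + k]))
    return rows

seq = ["music", "music", "talk", "music", "talk"]
rows = window_features(seq, k=2)
# first row: features ["music", "music"], label "talk"
```

Each feature row would then be one-hot encoded before being fed to the sequence model.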
Specifically, the playlist audio probability distribution features and labels are generated as follows:
count the frequency of each category of audio uploaded to the playlist before each time point of the playlist's new audio sequence; the label is the category of the program at that time point.
The format of a playlist audio sequence's probability feature row is as follows:

Feature1    Feature2    ...    Featurek    label
N1/S        N2/S        ...    Nk/S        O(X)

where Ni is the number of programs of category i among all programs uploaded to the playlist before the upload time point of program X, S is the total number of programs in the playlist before that time point, and O(X) is the one-hot code of the category of X.
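The Ni/S counting above can be sketched as follows; the function name and toy data are assumptions for illustration:

```python
from collections import Counter

def prob_features(categories, all_cats):
    """For each position in the upload sequence, compute Ni/S: the
    fraction of earlier uploads in each category; the label is the
    category at that position. Positions with no history are skipped."""
    rows = []
    seen = Counter()
    for cat in categories:
        s = sum(seen.values())  # S: total earlier uploads
        if s > 0:
            rows.append(([seen[c] / s for c in all_cats], cat))
        seen[cat] += 1
    return rows

rows = prob_features(["music", "talk", "music"], ["music", "talk"])
# second row: history has one "music" and one "talk" -> [0.5, 0.5]
```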
S122, a fusion model of naive Bayes and an LSTM (long short-term memory network) is adopted as the audio classification training model, fused by stacking. The Bayesian model extracts the class probability distribution features, and the LSTM extracts the fluctuation pattern of the time series.
Specifically, the method for training the audio classification model in this embodiment includes:
and S1221, averagely dividing all training data generated by the time sequence into two groups, wherein the first group is lstm model training data, and the second group is DNN model training data. The lstm model is trained using the first set of data.
S1222, the LSTM model is trained as follows: the sequence data is converted into sequence feature vectors; for each training sequence, each feature vector in the sequence is input to the network one by one; the network finally maps the sequence into a two-dimensional feature vector, which is mapped by a softmax function into a probability distribution over program categories; this distribution is compared with the true distribution, the loss is computed by a loss function, and the parameters are updated iteratively by backpropagating the loss.
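The final softmax-and-loss mapping described in S1222 can be illustrated in isolation with numpy (the LSTM itself is omitted, and the logits are toy values, not from the patent):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def cross_entropy(p_pred, p_true):
    # loss between the predicted category distribution and the true
    # (one-hot) distribution; gradients of this loss drive the
    # backpropagation step of the LSTM training described above
    return -np.sum(p_true * np.log(p_pred + 1e-12))

logits = np.array([2.0, 0.5, 0.1])   # network output for 3 categories
p_true = np.array([1.0, 0.0, 0.0])   # true category, one-hot encoded
loss = cross_entropy(softmax(logits), p_true)
```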
S1223, the DNN model is trained using the second group of training data. For each piece of training data, class probability distribution features and sequence features are generated. The class probability distribution features are input to the Bayesian model (which is fitted on the first group of training data), and the sequence features are input to the LSTM model trained in step S1222. The one-hot outputs of the Bayesian model and of the LSTM are concatenated, and the DNN model is trained using the concatenated features as new features.
It should be noted that, in this embodiment, the Stacking formula is as follows:
D1 = {(xi, yi)}, i = [1, m]
zi = h1(t1(xi)) ⊕ h2(t2(xi))    (concatenation of the two base-model outputs)
D2 = {(zi, yi)}, i = [1, m], and h3 is trained on D2
where t1(xi) is the probability distribution feature vector of data xi, t2(xi) is the sequence feature vector of data xi, h1 is the Bayesian model, h2 is the LSTM, and h3 is the DNN.
S123, the prediction procedure of the classification training model in this embodiment is as follows:
and S1231, generating a Bayesian model by using all historical data. And calculating the probability distribution vector of the program in the current playlist. And predicting the category probability distribution of the next program in the playlist according to the data through a Bayesian model.
S1232, a time series feature vector is generated using the data of the latest k time points, where k takes the same value as in step S121. This vector is input to the LSTM trained in step S1222 to obtain the category probability distribution of the next program.
S1233, the output feature vectors of the Bayesian model and the LSTM model are concatenated in the same order as in step S122, and the concatenated feature vector is input to the DNN model trained in step S1223. The final one-hot output encodes the probability distribution over the predicted classes, and the class with the highest probability is the model's prediction for the next audio.
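The prediction steps S1231 to S1233 reduce to concatenating the two base-model outputs and taking the arg-max of the meta-model's output. In this sketch a toy stand-in replaces the trained DNN, so the numbers and class names are purely illustrative:

```python
import numpy as np

def stacked_predict(bayes_probs, lstm_probs, dnn_forward, classes):
    """Concatenate the two base-model outputs in a fixed order (as in
    step S1233) and feed them to the trained DNN; the class with the
    highest predicted probability is the predicted category."""
    z = np.concatenate([bayes_probs, lstm_probs])
    probs = dnn_forward(z)
    return classes[int(np.argmax(probs))]

# toy stand-in for a trained DNN: average the two base distributions
toy_dnn = lambda z: (z[:3] + z[3:]) / 2.0
pred = stacked_predict(np.array([0.7, 0.2, 0.1]),
                       np.array([0.5, 0.4, 0.1]),
                       toy_dnn, ["music", "talk", "news"])
# pred is "music" (averaged probability 0.6 is the highest)
```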
It should be noted that, in order to give the audio classification training model better self-learning ability and to save human and server resources, this embodiment further comprises the following step:
and S124, performing iterative optimization on the audio classification training model. Specifically, the method comprises the following steps:
and S1241, for each broadcast list, taking the whole amount of the historical uploaded program data of the broadcast list, wherein the fields comprise the time of the uploaded program and the program category. And generating the probability distribution characteristic of the broadcast program and the sequence characteristic of the broadcast program. The lstm and dnn models were trained using the full-scale data.
S1242, when a playlist receives new programs, they are classified manually; whenever the manual classification differs from the model's classification, the data is stored in a database as an error sample. The model is periodically fine-tuned using the error samples in the database; once the model's error rate falls below a threshold, manual classification can be dropped and the model used automatically.
Wherein, step S1242 includes:
s12421, the product and the user feed back to the system through a feedback channel, and the audio frequency with the wrong classification is obtained.
S12422, the system automatically updates the updated audio classification to the broadcasting portrait, and finishes the correction of classification errors.
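The threshold logic of S1242 can be sketched as follows; the 5% threshold and the function name are illustrative assumptions, as the patent does not fix a value:

```python
def needs_manual_review(error_samples, total_predictions, threshold=0.05):
    """Per step S1242: keep manual classification in the loop until the
    model's observed error rate drops below a threshold, after which
    the model runs automatically."""
    if total_predictions == 0:
        return True  # no history yet: keep humans in the loop
    return len(error_samples) / total_predictions >= threshold
```

Error samples accumulated while this returns True would periodically be used to fine-tune the LSTM and DNN models.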
Of course, the embodiment of the present invention is not limited to the foregoing method for intelligently classifying audio categories, and may also be implemented by other methods. The embodiment of the present invention is not limited to specific methods.
On the basis of the foregoing embodiments, an audio classification device is correspondingly provided in an embodiment of the present invention, with reference to fig. 3. The device includes:
the audio playlist representation system generating module, used for constructing an audio playlist representation system according to playlist log information;
and the audio classification generation model training module, used for training an audio classification generation model according to the audio playlist representation system to complete audio classification.
It should be noted that the embodiment of the present invention has the same beneficial effects as the audio classification method in the foregoing embodiment, and for the specific description of the audio classification method related in the embodiment of the present invention, please refer to the foregoing embodiment, which is not described herein again.
On the basis of the foregoing embodiments, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the audio classification method.
It should be noted that the embodiment of the present invention has the same beneficial effects as the audio classification method in the foregoing embodiment, and for the specific description of the audio classification method related to the foregoing embodiment of the present invention, please refer to the foregoing embodiment, which is not described herein again.
It is further noted that, in the present specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of audio classification, comprising:
constructing an audio playlist representation system according to playlist log information;
and training an audio classification generation model according to the audio playlist representation system to complete audio classification.
2. The audio classification method of claim 1, wherein the training of the audio classification generation model according to the audio playlist representation system to complete audio classification comprises: misclassified data is manually marked and manually classified, and the corrected data is updated into the audio playlist representation system.
3. The audio classification method of claim 1, wherein said constructing an audio playlist representation system based on playlist log information comprises: collecting log information from the audio playlist server; mining audio playlist data information from the log information using a data mining method; and merging and clustering the audio playlist data information to form the audio playlist representation system.
4. The audio classification method according to claim 3, characterized in that the audio playlist data information comprises: the creation time of the playlist and/or the originator of the playlist and/or the upload time of the individual audio in the playlist and/or the category to which the individual audio in the playlist belongs.
5. The audio classification method of claim 1, wherein the training of the audio classification generation model according to the audio playlist representation system comprises:
generating feature vectors according to the data information of the audio playlist representation system;
adopting a fusion model of naive Bayes and a long short-term memory network as the classification model;
the classification model predicting the probability distribution over audio classes, the class with the highest probability being taken as the class of the audio.
6. The audio classification method of claim 5, wherein the generating feature vectors according to the data information of the audio playlist representation system comprises: and generating sequence characteristic vectors of various audio types in the playlist according to the data information of the audio playlist representation system and generating probability distribution characteristic vectors of the audio types in the playlist according to the data information of the audio playlist representation system.
7. The method of claim 6, wherein the generating of the sequence feature vectors of each audio type in the playlist from the audio playlist representation system data information comprises:
sorting the audio in the playlist by time to generate the audio upload time sequence of the playlist; setting a time series window as a parameter of the model;
and generating training feature data from the playlist audio category time sequence according to the time series window, and converting the feature data into feature vectors.
8. The audio classification method of claim 6, wherein the process of generating the audio type probability distribution feature vector in the playlist based on the audio playlist representation system data information comprises: and counting the frequency of each category of audio uploaded in each playlist before each time point of each playlist new audio sequence.
9. An audio classification apparatus, comprising:
the audio playlist representation system generating module, used for constructing an audio playlist representation system according to playlist log information; and the audio classification generation model training module, used for training an audio classification generation model according to the audio playlist representation system to complete audio classification.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the audio classification method according to any one of claims 1 to 8.
CN201911033406.9A 2019-10-28 2019-10-28 Audio classification method and device and readable storage medium Pending CN110992982A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911033406.9A CN110992982A (en) 2019-10-28 2019-10-28 Audio classification method and device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911033406.9A CN110992982A (en) 2019-10-28 2019-10-28 Audio classification method and device and readable storage medium

Publications (1)

Publication Number Publication Date
CN110992982A true CN110992982A (en) 2020-04-10

Family

ID=70082476

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911033406.9A Pending CN110992982A (en) 2019-10-28 2019-10-28 Audio classification method and device and readable storage medium

Country Status (1)

Country Link
CN (1) CN110992982A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20070048484A (en) * 2005-11-04 2007-05-09 KT Corporation Apparatus and method for classifying signal features of music files, and apparatus and method for automatically creating playlists using the same
CN1998044A (en) * 2004-04-29 2007-07-11 Koninklijke Philips Electronics N.V. Method of and system for classification of an audio signal
CN107943865A (en) * 2017-11-10 2018-04-20 Archimedes (Shanghai) Media Co., Ltd. Audio classification and labeling method and system suitable for multiple scenarios and multiple types
CN109784961A (en) * 2017-11-13 2019-05-21 Alibaba Group Holding Ltd. Data processing method and device

Similar Documents

Publication Publication Date Title
CN110796190B (en) Exponential modeling with deep learning features
CN109408731B (en) Multi-target recommendation method, multi-target recommendation model generation method and device
CN107423442B (en) Application recommendation method and system based on user portrait behavior analysis, storage medium and computer equipment
CN109408665B (en) Information recommendation method and device and storage medium
CN106651542B (en) Article recommendation method and device
Giordani et al. Adaptive independent Metropolis–Hastings by fast estimation of mixtures of normals
CN111507419B (en) Training method and device of image classification model
Wang Bankruptcy prediction using machine learning
CN108665148B (en) Electronic resource quality evaluation method and device and storage medium
CN105701191A (en) Push information click rate estimation method and device
CN112765480B (en) Information pushing method and device and computer readable storage medium
CN112231584A (en) Data pushing method and device based on small sample transfer learning and computer equipment
CN111859967B (en) Entity identification method and device and electronic equipment
Asadi et al. Creating discriminative models for time series classification and clustering by HMM ensembles
CN109241261A (en) User's intension recognizing method, device, mobile terminal and storage medium
CN111967971A (en) Bank client data processing method and device
CN114330659A (en) BP neural network parameter optimization method based on improved ASO algorithm
CN110956277A (en) Interactive iterative modeling system and method
Kamruzzaman Text classification using artificial intelligence
CN116843388B (en) Advertisement delivery analysis method and system
CN103744958A (en) Webpage classification algorithm based on distributed computation
Llerena et al. On using sum-product networks for multi-label classification
CN110992982A (en) Audio classification method and device and readable storage medium
CN110689040B (en) Sound classification method based on anchor portrait
CN109815474B (en) Word sequence vector determination method, device, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20200410