CN104268279A - Query method and device of corpus data - Google Patents
Query method and device of corpus data
- Publication number
- CN104268279A CN104268279A CN201410549904.XA CN201410549904A CN104268279A CN 104268279 A CN104268279 A CN 104268279A CN 201410549904 A CN201410549904 A CN 201410549904A CN 104268279 A CN104268279 A CN 104268279A
- Authority
- CN
- China
- Prior art keywords
- voiceprint model
- corpus data
- object identity
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/68—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/683—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/04—Training, enrolment or model building
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Library & Information Science (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Acoustics & Sound (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses a query method and device for corpus data. The query method includes: obtaining a first voiceprint model of a user; searching the voiceprint models prestored in a corpus database for a voiceprint model that matches the first voiceprint model, to obtain a second voiceprint model; obtaining, according to the association between the prestored voiceprint models and the corpus data in the corpus database, first corpus data associated with the second voiceprint model; and sending the first corpus data to the user. The query method and device solve the prior-art problem of low efficiency in searching for corpus data and thereby improve the efficiency of corpus data retrieval.
Description
Technical field
The present invention relates to the multimedia field, and in particular to a query method and device for corpus data.
Background art

With the development and progress of multimedia technology, more and more corpus data is produced and stored. When this corpus data needs to be retrieved, the stored files must be compared and searched one by one according to their file names. For a large amount of corpus data, searching by comparing file names one by one is clearly inefficient. In addition, when file naming is not standardized, searching by file name may fail to locate the corresponding corpus data accurately, or may fail to find it at all.

For the prior-art problem of low efficiency in searching for corpus data, no effective solution has yet been proposed.
Summary of the invention
The main object of the present invention is to provide a query method and device for corpus data, so as to solve the prior-art problem of low efficiency in searching for corpus data.

To achieve this object, according to one aspect of the present invention, a query method for corpus data is provided. The query method for corpus data according to the present invention comprises: obtaining a first voiceprint model of a user; searching the voiceprint models prestored in a corpus database for a voiceprint model that matches the first voiceprint model, to obtain a second voiceprint model; obtaining, according to the association between the prestored voiceprint models and the corpus data prestored in the corpus database, first corpus data associated with the second voiceprint model; and sending the first corpus data to the user.

Further, obtaining the first corpus data associated with the second voiceprint model comprises: searching the corpus database for an object identity having a mapping relation with the second voiceprint model; obtaining second corpus data associated with the object identity; and taking the second corpus data associated with the object identity as the first corpus data.

Further, sending the first corpus data to the user comprises: obtaining object information of the object identity; and sending the object information to the user when the first corpus data is sent to the user.

Further, searching the voiceprint models prestored in the corpus database for the voiceprint model that matches the first voiceprint model to obtain the second voiceprint model comprises: calculating, using posterior probabilities, the similarity between the first voiceprint model and each of the prestored voiceprint models, to obtain multiple similarities; comparing the multiple similarities to obtain a maximum similarity; and taking the prestored voiceprint model having the maximum similarity as the second voiceprint model.

Further, before searching the voiceprint models prestored in the corpus database for the voiceprint model that matches the first voiceprint model, the method further comprises: collecting corpus data of multiple objects indicated by multiple object identities; obtaining voiceprint models of the corpus data of the multiple objects, to obtain the prestored voiceprint models; and establishing a correspondence between the prestored voiceprint models and the multiple object identities.

Further, obtaining the voiceprint models of the corpus data of the multiple objects to obtain the prestored voiceprint models comprises: associating the corpus data with the identities of the multiple objects; extracting a speech characteristic parameter of each frame of the speech signal in all corpus data associated with each of the multiple object identities; training the extracted speech characteristic parameters of each object identity to obtain a voiceprint model belonging to that object identity; and taking the voiceprint models belonging to the object identities as the prestored voiceprint models.

Further, after establishing the correspondence between the prestored voiceprint models and the multiple object identities, and before obtaining the first voiceprint model of the user, the method further comprises: obtaining corpus data of a first object; identifying the voiceprint model in the corpus data of the first object; searching the prestored voiceprint models for a voiceprint model that matches the voiceprint model of the first object; and associating the corpus data of the first object with the object identity corresponding to the voiceprint model found.

To achieve the above object, according to another aspect of the present invention, a query device for corpus data is provided. The query device for corpus data according to the present invention comprises: a first acquiring unit for obtaining a first voiceprint model of a user; a first search unit for searching the voiceprint models prestored in a corpus database for a voiceprint model that matches the first voiceprint model, to obtain a second voiceprint model; a second acquiring unit for obtaining, according to the association between the prestored voiceprint models and the corpus data prestored in the corpus database, first corpus data associated with the second voiceprint model; and a sending unit for sending the first corpus data to the user.

Further, the first acquiring unit comprises: a search module for searching the corpus database for an object identity having a mapping relation with the second voiceprint model; a first acquisition module for obtaining second corpus data associated with the object identity; and a first determination module for taking the second corpus data associated with the object identity as the first corpus data.

Further, the sending unit comprises: a second acquisition module for obtaining object information of the object identity; and a sending module for sending the object information to the user when the first corpus data is sent to the user.

Further, the first search unit comprises: a calculation module for calculating, using posterior probabilities, the similarity between the first voiceprint model and each of the prestored voiceprint models, to obtain multiple similarities; a comparison module for comparing the multiple similarities to obtain a maximum similarity; and a second determination module for taking the prestored voiceprint model having the maximum similarity as the second voiceprint model.

Further, the device further comprises: a collecting unit for collecting, before the voiceprint model matching the first voiceprint model is searched for among the voiceprint models prestored in the corpus database, corpus data of multiple objects indicated by multiple object identities; a third acquiring unit for obtaining voiceprint models of the corpus data of the multiple objects, to obtain the prestored voiceprint models; and an establishing unit for establishing a correspondence between the prestored voiceprint models and the multiple object identities.

Further, the third acquiring unit comprises: an association module for associating the corpus data with the identities of the multiple objects; an extraction module for extracting a speech characteristic parameter of each frame of the speech signal in all corpus data associated with each of the multiple object identities; a training module for training the extracted speech characteristic parameters of each object identity to obtain a voiceprint model belonging to that object identity; and a third determination module for taking the voiceprint models belonging to the object identities as the prestored voiceprint models.

Further, the device further comprises: a fourth acquiring unit for obtaining corpus data of a first object after the correspondence between the prestored voiceprint models and the multiple object identities is established and before the first voiceprint model of the user is obtained; a recognition unit for identifying the voiceprint model in the corpus data of the first object; a second search unit for searching the prestored voiceprint models for a voiceprint model that matches the voiceprint model of the first object; and an associating unit for associating the corpus data of the first object with the object identity corresponding to the voiceprint model found.

Through the present invention, the second voiceprint model and the mapping relation between the second voiceprint model and the corpus data are stored in the corpus database. Once the second voiceprint model matching the first voiceprint model is found in the corpus database, the corpus data associated with the second voiceprint model can be found as well, so that all corpus data matching the user's first voiceprint model is located. This solves the prior-art problem of low efficiency in searching for corpus data and thereby improves the efficiency of corpus data retrieval.
Brief description of the drawings

The accompanying drawings, which form a part of this application, are provided for a further understanding of the present invention. The schematic embodiments of the present invention and their descriptions are used to explain the present invention and do not constitute an improper limitation of it. In the drawings:

Fig. 1 is a flowchart of a query method for corpus data according to an embodiment of the present invention;

Fig. 2 is a flowchart of a query method for corpus data according to another embodiment of the present invention;

Fig. 3 is a schematic diagram of associating the corpus data table with the object information table through the object identity according to an embodiment of the present invention; and

Fig. 4 is a schematic diagram of a query device for corpus data according to an embodiment of the present invention.
Detailed description of the embodiments

It should be noted that, where no conflict arises, the embodiments of this application and the features in the embodiments may be combined with each other. The present invention is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.

To enable those skilled in the art to better understand the solution of the present invention, the technical solution in the embodiments of the present invention is described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the scope of protection of the present invention.

It should be noted that the terms "first", "second", and the like in the description, the claims, and the above drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way may be interchanged where appropriate, so that the embodiments of the present invention described herein can be implemented in orders other than those illustrated or described here. Moreover, the terms "comprise" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that contains a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units that are not explicitly listed or that are inherent to the process, method, product, or device.

The present invention provides a query method for corpus data. Fig. 1 is a flowchart of the query method for corpus data according to an embodiment of the present invention. As shown in the figure, the query method for corpus data comprises the following steps:
Step S102: obtain a first voiceprint model of the user;

Step S104: search the voiceprint models prestored in the corpus database for a voiceprint model that matches the first voiceprint model, to obtain a second voiceprint model;

Step S106: obtain, according to the association between the prestored voiceprint models and the corpus data prestored in the corpus database, first corpus data associated with the second voiceprint model; and

Step S108: send the first corpus data to the user.

The second voiceprint model and the mapping relation between the second voiceprint model and the corpus data are stored in the corpus database. Once the second voiceprint model matching the first voiceprint model is found in the corpus database, the corpus data associated with the second voiceprint model can be found as well, so that all corpus data matching the user's first voiceprint model is located. This improves the efficiency of searching for corpus data and accurately matches the corpus data that needs to be found. A minimal sketch of this flow is given below.
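The following Python sketch illustrates steps S102–S108 under simplifying assumptions: voiceprint models are reduced to fixed-length feature vectors, matching is reduced to cosine similarity, and the names query_corpus_data, prestored_models, and model_to_corpus are illustrative only and are not terms defined by the patent.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def query_corpus_data(first_model: np.ndarray,
                      prestored_models: dict,   # model id -> feature vector
                      model_to_corpus: dict):   # model id -> associated corpus data
    # Step S104: find the prestored model closest to the first voiceprint model.
    second_model_id = max(prestored_models,
                          key=lambda mid: cosine_similarity(first_model,
                                                            prestored_models[mid]))
    # Step S106: look up the corpus data associated with the second voiceprint model.
    return model_to_corpus.get(second_model_id, [])

# Step S102 would normally produce first_model from the user's speech; toy
# vectors are used here purely to exercise the lookup, and step S108 would
# send the returned corpus data to the user.
prestored = {"M1": np.array([1.0, 0.2, 0.1]), "M2": np.array([0.1, 0.9, 0.3])}
corpus = {"M1": ["Y11", "Y12"], "M2": ["Y2"]}
print(query_corpus_data(np.array([0.9, 0.25, 0.1]), prestored, corpus))  # ['Y11', 'Y12']
```

In the embodiments below, the similarity function and the in-memory dictionaries are refined into posterior-probability scoring and database tables, respectively.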
The present embodiment is described below with reference to Table 1.
Table 1

Object identity | Prestored voiceprint model | Corpus data |
---|---|---|
ID1 | M1 | Y11, Y12 |
ID2 | M2 | Y2 |
As shown in Table 1, after the first voiceprint model of the user is obtained, it is matched one by one against the prestored voiceprint models M1 and M2. If the first voiceprint model matches the prestored voiceprint model M1, the corpus data Y11 and Y12 related to the prestored voiceprint model M1 is obtained and sent to the user. That is, the prestored voiceprint models and the mapping relations between the prestored voiceprint models and the corpus data are stored in the corpus database; once the second voiceprint model matching the first voiceprint model is found among the prestored voiceprint models, the corpus data can be found according to the mapping relations, so that the corpus data matching the first voiceprint model can be found quickly.
Specifically, searching the voiceprint models prestored in the corpus database for the voiceprint model that matches the first voiceprint model to obtain the second voiceprint model comprises: calculating, using posterior probabilities, the similarity between the first voiceprint model and each of the prestored voiceprint models, to obtain multiple similarities; comparing the multiple similarities to obtain a maximum similarity; and taking the prestored voiceprint model having the maximum similarity as the second voiceprint model.

After the user's speech signal is obtained, the input speech signal is pre-processed to remove non-speech signal, the speech signal is divided into frames, and the speech characteristic parameter of each frame is extracted and saved, yielding the voiceprint model of this user, i.e. the first voiceprint model. Then the similarity between the first voiceprint model and each prestored voiceprint model in the database is calculated using posterior probabilities, and the prestored voiceprint model corresponding to the maximum similarity is taken as the second voiceprint model. A sketch of this matching step follows.
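The patent does not fix the feature type, the model family, or the exact posterior computation, so the sketch below assumes MFCC frame features, per-speaker Gaussian mixture models, and the average per-frame log-likelihood as the similarity score; the function names are illustrative only.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def first_voiceprint_features(path: str, sr: int = 16000) -> np.ndarray:
    """Pre-process the user's speech and return one feature row per frame."""
    signal, _ = librosa.load(path, sr=sr)
    # Keep only non-silent intervals as a stand-in for removing non-speech signal.
    voiced = np.concatenate([signal[s:e]
                             for s, e in librosa.effects.split(signal, top_db=30)])
    # Framing and per-frame speech characteristic parameters (13 MFCCs per frame).
    return librosa.feature.mfcc(y=voiced, sr=sr, n_mfcc=13).T

def match_second_model(frames: np.ndarray,
                       prestored: dict[str, GaussianMixture]) -> str:
    """Score the first voiceprint against every prestored model and keep the best."""
    scores = {model_id: gmm.score(frames)      # average log-likelihood per frame
              for model_id, gmm in prestored.items()}
    return max(scores, key=scores.get)         # model with the maximum similarity
```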
Specifically, since each prestored voiceprint model in the database is trained from all corpus data belonging to the same object identity, obtaining the first corpus data associated with the second voiceprint model comprises: searching the corpus database for an object identity having a mapping relation with the second voiceprint model; obtaining second corpus data associated with the object identity; and taking the second corpus data associated with the object identity as the first corpus data.

For example, the object identity ID1 is the unique identity of Passerby A, the corpus data Y11 and Y12 corresponding to the object identity ID1 are speech files belonging to Passerby A, and the prestored voiceprint model M1 is extracted from the corpus data Y11 and Y12 and can characterize the speech features of Passerby A. The mapping relation between the object identity ID1 of Passerby A and the prestored voiceprint model M1 is determined when the prestored voiceprint model M1 is obtained from the corpus data Y11 and Y12. Therefore, after the second voiceprint model M1 matching the first voiceprint model is obtained, the corpus data associated with the second voiceprint model M1 can be determined to be Y11 and Y12 according to the mapping relation between the second voiceprint model M1 and the object identity ID1 and the association between the object identity ID1 and the corpus data Y11 and Y12. This indirection is sketched below.
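The two-step lookup just described, with the Table 1 values as toy data; the dictionary and function names are assumptions made for illustration.

```python
# Mapping relation: prestored voiceprint model -> object identity.
model_to_object = {"M1": "ID1", "M2": "ID2"}
# Association: object identity -> corpus data.
object_to_corpus = {"ID1": ["Y11", "Y12"], "ID2": ["Y2"]}

def corpus_for_model(second_model_id: str) -> list[str]:
    object_id = model_to_object[second_model_id]   # object identity of the matched model
    return object_to_corpus[object_id]             # its second corpus data, used as the first corpus data

print(corpus_for_model("M1"))   # ['Y11', 'Y12']
```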
Optionally, sending the first corpus data to the user comprises: obtaining object information of the object identity; and sending the object information to the user when the first corpus data is sent to the user.

To provide more information to the user, the database also stores object information for each object identity, as shown in Table 2.
Table 2
Compared with Table 1, Table 2 additionally records object information, and the corpus data additionally records contents such as audio, video, and text. The object information may be the object's name, such as Passerby A or Passerby B shown in Table 2, and may also include a photo of the object. If there are many kinds of object information, a separate object information table may be established to store the object information and be associated with the corpus data through the object identity; after the object identity is determined, the object information associated with it is looked up in the object information table according to the object identity and sent to the user.

The audio and video in the corpus data in Table 2 are associated with the object information; that is, the audio and video belong to the object corresponding to the associated object information. For example, audio A1, video V1, and text T1 all belong to the corpus data of Passerby A: the voice in audio A1 and video V1 comes from Passerby A, and text T1 is the written text corresponding to the speech in audio A1 and video V1, such as the lines of a television drama. When the requested data is sent to the user, audio A1, video V1, and text T1 can all be sent, and the user can listen to audio A1 or watch video V1 while following the text.

The above audio and video files may be files such as films, dramas, and operas, which the user can use for imitation, dubbing, learning, and so on. For example, the object information corresponding to the object identity ID2 is Mei Lanfang, audio A2 is an audio file of Mei Lanfang's "Farewell My Concubine", video V2 is a video file of Mei Lanfang's "Farewell My Concubine", text T2 contains the lines of "Farewell My Concubine", and the prestored voiceprint model M2 is trained from audio A2 and video V2 and embodies Mei Lanfang's voiceprint features. Suppose the user now wants to learn Mei Lanfang's "Farewell My Concubine": the first voiceprint model is extracted from a speech file provided by the user, the second voiceprint model M2 matching this first voiceprint model is found, and the video file and audio file of "Farewell My Concubine" are thereby found. The user studies and imitates Mei Lanfang's "Farewell My Concubine" through its audio and video files, thus achieving the goal of learning from existing multimedia files. Likewise, audio or video files of film and television works can be used for dubbing and the like in a similar way, which is not repeated here.
Optionally, before the voiceprint model matching the first voiceprint model is searched for, the database storing the corpus data is established. Before searching the voiceprint models prestored in the corpus database for the voiceprint model that matches the first voiceprint model, the method further comprises the following steps shown in Fig. 2:

Step S202: collect corpus data of multiple objects indicated by multiple object identities.

Step S204: obtain voiceprint models of the corpus data of the multiple objects, to obtain the prestored voiceprint models.

Step S206: establish a correspondence between the prestored voiceprint models and the multiple object identities.
Corpus data of the multiple objects indicated by the multiple object identities is collected, such as the corpus data Y11, Y12, and Y2 in Table 1. To make it easy to look up and store data in the database, an object information table and a corpus data table are established separately. The object information table stores information such as names and portraits, the corpus data table stores files such as audio, video, and text, and the object information table and the corpus data table are associated through the object identity, as shown in Fig. 3. One possible layout of these two tables is sketched below.
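A possible realisation of the two tables in Fig. 3 using SQLite; the table and column names are assumptions made for illustration and are not specified by the patent.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE object_info (
    object_id   TEXT PRIMARY KEY,   -- object identity, e.g. ID1
    name        TEXT,               -- object information such as the name
    portrait    BLOB                -- optional photo / portrait
);
CREATE TABLE corpus_data (
    corpus_id   TEXT PRIMARY KEY,   -- e.g. Y11
    object_id   TEXT REFERENCES object_info(object_id),
    audio_path  TEXT,
    video_path  TEXT,
    text_path   TEXT
);
""")
conn.execute("INSERT INTO object_info VALUES ('ID1', 'Passerby A', NULL)")
conn.execute("INSERT INTO corpus_data VALUES ('Y11', 'ID1', 'A1.wav', 'V1.mp4', 'T1.txt')")

# The two tables are associated only through the object identity, so the
# corpus data and object information for an object come from a single join.
rows = conn.execute("""
    SELECT c.corpus_id, o.name FROM corpus_data c
    JOIN object_info o ON o.object_id = c.object_id
    WHERE o.object_id = 'ID1'
""").fetchall()
print(rows)   # [('Y11', 'Passerby A')]
```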
After the prestored voiceprint model is obtained from all corpus data belonging to the same object, a correspondence is established between the prestored voiceprint models and the multiple object identities, i.e. each object identity corresponds to one prestored voiceprint model. In this way, an object identity can be determined from a prestored voiceprint model, and the corpus data can then be found according to the object identity.

After the correspondence between the prestored voiceprint models and the object identities is established, the multiple corpus data items associated with an object identity can be looked up through that object identity, which avoids separately establishing an association between the prestored voiceprint models and the corpus data. The mapping relations stored in the database are therefore simpler, the efficiency of computing the mapping relations is improved, the efficiency of searching for the second voiceprint model in the database is improved, and the efficiency of searching for corpus data is improved in turn.
Specifically, obtaining the voiceprint models of the corpus data of the multiple objects to obtain the prestored voiceprint models comprises: associating the corpus data with the identities of the multiple objects; extracting the speech characteristic parameter of each frame of the speech signal in all corpus data associated with each of the multiple object identities; training the extracted speech characteristic parameters of each object identity to obtain a voiceprint model belonging to that object identity; and taking the voiceprint models belonging to the object identities as the prestored voiceprint models.

As in Table 1, the corpus data is associated with the identities of the multiple objects, and one object identity can be associated with several corpus data items; for example, the object identity ID1 in Table 1 is associated with corpus data Y11 and corpus data Y12. The corpus data is pre-processed to remove non-speech signal, the speech signals in corpus data Y11 and Y12 are divided into frames, speech characteristic parameters are extracted from every frame of the speech signals in corpus data Y11 and Y12, and the extracted speech characteristic parameters are trained to obtain the voiceprint model M1 belonging to the object identity ID1, which embodies the voiceprint features of the object corresponding to ID1. The same operations are performed on the corpus data associated with every object identity, so that the voiceprint model of each object identity is obtained, and the voiceprint models of all object identities constitute the prestored voiceprint models, as in the training sketch below.
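A training sketch for one object identity under stated assumptions: the patent only requires that the speech characteristic parameters of all corpus data associated with an identity be trained into one voiceprint model, so the 16-component diagonal-covariance GMM and the random toy frames below are illustrative choices, not part of the claimed method.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_prestored_model(frames_per_file: list[np.ndarray]) -> GaussianMixture:
    """Train one voiceprint model from every frame of every associated corpus file."""
    all_frames = np.vstack(frames_per_file)
    gmm = GaussianMixture(n_components=16, covariance_type="diag", random_state=0)
    gmm.fit(all_frames)
    return gmm

# Toy frames standing in for the speech characteristic parameters of Y11 and Y12.
rng = np.random.default_rng(0)
model_M1 = train_prestored_model([rng.normal(size=(200, 13)),
                                  rng.normal(size=(150, 13))])
```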
Preferably, after the correspondence between the prestored voiceprint models and the multiple object identities is established, and before the first voiceprint model of the user is obtained, the method further comprises: obtaining corpus data of a first object; identifying the voiceprint model in the corpus data of the first object; searching the prestored voiceprint models for a voiceprint model that matches the voiceprint model of the first object; and associating the corpus data of the first object with the object identity corresponding to the voiceprint model found.

Once the correspondence between the prestored voiceprint models and the multiple object identities is established, a database of voiceprint models is available. When new corpus data is added, the prestored voiceprint model matching the voiceprint model of the new corpus data is searched for, and the new corpus data is associated with the object identity corresponding to the prestored voiceprint model found. New corpus data is thereby stored automatically and associated with an object identity that already exists in the database, which makes database maintenance easier.

For example, suppose Mei Lanfang's "Farewell My Concubine" is stored in the database and an audio file of Mei Lanfang's "Drunken Concubine" needs to be added. The voiceprint model of the "Drunken Concubine" audio file is extracted, the matching model found among the prestored voiceprint models is the voiceprint model M2, and the "Drunken Concubine" audio is then associated with the object identity ID2 corresponding to the voiceprint model M2, completing the addition of "Drunken Concubine" to the database and its association with the object identity. If the user needs to extract all of Mei Lanfang's videos, the corpus data associated with Mei Lanfang is found according to the speech matching Mei Lanfang's prestored voiceprint model, and the corpus data found includes not only "Farewell My Concubine" but also "Drunken Concubine". This enrollment step is sketched below.
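A sketch of adding new corpus data, reusing the GMM-based scoring assumed earlier; the helper name and the in-memory dictionaries are illustrative and not part of the patented device.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def add_corpus_file(new_frames: np.ndarray,
                    prestored: dict[str, GaussianMixture],
                    model_to_object: dict[str, str],
                    object_to_corpus: dict[str, list[str]],
                    corpus_id: str) -> str:
    """Match the new file's voiceprint and attach it to the existing object identity."""
    matched_model = max(prestored, key=lambda mid: prestored[mid].score(new_frames))
    object_id = model_to_object[matched_model]
    object_to_corpus.setdefault(object_id, []).append(corpus_id)
    return object_id
```

For the example above, new_frames would be the frame features of the "Drunken Concubine" audio file, and the returned object identity would be ID2.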
The above embodiments of the present invention achieve the following effects:

1. Voiceprint recognition is used: the prestored voiceprint model matching the received speech signal of the user is looked up, and the corpus data corresponding to that prestored voiceprint model is quickly found according to the correspondence between object identities, prestored voiceprint models, and corpus data, which improves the efficiency of searching for corpus data.

2. By comparing voiceprint models against the prestored voiceprint models, new corpus data can be quickly added to the database and mapped to the corresponding object identity, which makes database maintenance easier.
An embodiment of the present invention further provides a query device for corpus data. The query method for corpus data of the embodiments of the present invention can be executed by the query device for corpus data provided by the embodiments of the present invention, and the query device for corpus data of the embodiments of the present invention can also be used to execute the query method for corpus data provided by the embodiments of the present invention.
Fig. 4 is a schematic diagram of the query device for corpus data according to an embodiment of the present invention. As shown in the figure, the query device for corpus data comprises: a first acquiring unit 10, a first search unit 30, a second acquiring unit 50, and a sending unit 70.

The first acquiring unit 10 is configured to obtain a first voiceprint model of the user;

the first search unit 30 is configured to search the voiceprint models prestored in the corpus database for a voiceprint model that matches the first voiceprint model, to obtain a second voiceprint model;

the second acquiring unit 50 is configured to obtain, according to the association between the prestored voiceprint models and the corpus data prestored in the corpus database, first corpus data associated with the second voiceprint model; and

the sending unit 70 is configured to send the first corpus data to the user. A class-based sketch of these units follows.
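A minimal object-oriented rendering of the units in Fig. 4; the patent leaves open how the units are realised, so the vector-based models, dot-product matching, and print-based sending unit below are purely illustrative assumptions. The first acquiring unit, which would extract the first voiceprint model from the user's speech, is omitted and assumed to produce a feature vector.

```python
import numpy as np

class CorpusQueryDevice:
    def __init__(self, prestored_models: dict, model_to_corpus: dict):
        self.prestored_models = prestored_models   # prestored voiceprint models
        self.model_to_corpus = model_to_corpus     # association with corpus data

    def first_search_unit(self, first_model: np.ndarray) -> str:
        # Match the first voiceprint model against the prestored models.
        return max(self.prestored_models,
                   key=lambda m: float(np.dot(first_model, self.prestored_models[m])))

    def second_acquiring_unit(self, second_model_id: str) -> list:
        # Obtain the first corpus data associated with the second voiceprint model.
        return self.model_to_corpus.get(second_model_id, [])

    def sending_unit(self, first_corpus_data: list) -> None:
        print("sending to user:", first_corpus_data)

device = CorpusQueryDevice({"M1": np.array([1.0, 0.0]), "M2": np.array([0.0, 1.0])},
                           {"M1": ["Y11", "Y12"], "M2": ["Y2"]})
device.sending_unit(device.second_acquiring_unit(
    device.first_search_unit(np.array([0.9, 0.1]))))
```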
The second voiceprint model and the mapping relation between the second voiceprint model and the corpus data are stored in the corpus database. Once the second voiceprint model matching the first voiceprint model is found in the corpus database, the corpus data associated with the second voiceprint model can be found as well, so that all corpus data matching the user's first voiceprint model is located; this improves the efficiency of searching for corpus data and accurately matches the corpus data that needs to be found.

The present embodiment is described below with reference to Table 1.

As shown in Table 1, after the first voiceprint model of the user is obtained, it is matched one by one against the prestored voiceprint models M1 and M2. If the first voiceprint model matches the prestored voiceprint model M1, the corpus data Y11 and Y12 related to the prestored voiceprint model M1 is obtained and sent to the user. That is, the prestored voiceprint models and the mapping relations between the prestored voiceprint models and the corpus data are stored in the corpus database; once the second voiceprint model matching the first voiceprint model is found among the prestored voiceprint models, the corpus data can be found according to the mapping relations, so that the corpus data matching the first voiceprint model can be found quickly.

Specifically, the first search unit comprises: a calculation module for calculating, using posterior probabilities, the similarity between the first voiceprint model and each of the prestored voiceprint models, to obtain multiple similarities; a comparison module for comparing the multiple similarities to obtain a maximum similarity; and a second determination module for taking the prestored voiceprint model having the maximum similarity as the second voiceprint model.

After the user's speech signal is obtained, the input speech signal is pre-processed to remove non-speech signal, the speech signal is divided into frames, and the speech characteristic parameter of each frame is extracted and saved, yielding the voiceprint model of this user, i.e. the first voiceprint model. Then the similarity between the first voiceprint model and each prestored voiceprint model in the database is calculated using posterior probabilities, and the prestored voiceprint model corresponding to the maximum similarity is taken as the second voiceprint model.
Specifically, since each prestored voiceprint model in the database is trained from all corpus data belonging to the same object identity, the first acquiring unit comprises: a search module for searching the corpus database for an object identity having a mapping relation with the second voiceprint model; a first acquisition module for obtaining second corpus data associated with the object identity; and a first determination module for taking the second corpus data associated with the object identity as the first corpus data.

For example, the object identity ID1 is the unique identity of Passerby A, the corpus data Y11 and Y12 corresponding to the object identity ID1 are speech files belonging to Passerby A, and the prestored voiceprint model M1 is extracted from the corpus data Y11 and Y12 and can characterize the speech features of Passerby A. The mapping relation between the object identity ID1 of Passerby A and the prestored voiceprint model M1 is determined when the prestored voiceprint model M1 is obtained from the corpus data Y11 and Y12. Therefore, after the second voiceprint model M1 matching the first voiceprint model is obtained, the corpus data associated with the second voiceprint model M1 can be determined to be Y11 and Y12 according to the mapping relation between the second voiceprint model M1 and the object identity ID1 and the association between the object identity ID1 and the corpus data Y11 and Y12.

Optionally, the sending unit comprises: a second acquisition module for obtaining object information of the object identity; and a sending module for sending the object information to the user when the first corpus data is sent to the user.
To provide more information to the user, the database also stores object information for each object identity, as shown in Table 2. Compared with Table 1, Table 2 additionally records object information, and the corpus data additionally records contents such as audio, video, and text. The object information may be the object's name, such as Passerby A or Passerby B shown in Table 2, and may also include a photo of the object. If there are many kinds of object information, a separate object information table may be established to store the object information and be associated with the corpus data through the object identity; after the object identity is determined, the object information associated with it is looked up in the object information table according to the object identity and sent to the user.

The audio and video in the corpus data in Table 2 are associated with the object information; that is, the audio and video belong to the object corresponding to the associated object information. For example, audio A1, video V1, and text T1 all belong to the corpus data of Passerby A: the voice in audio A1 and video V1 comes from Passerby A, and text T1 is the written text corresponding to the speech in audio A1 and video V1, such as the lines of a television drama. When the requested data is sent to the user, audio A1, video V1, and text T1 can all be sent, and the user can listen to audio A1 or watch video V1 while following the text.

The above audio and video files may be files such as films, dramas, and operas, which the user can use for imitation, dubbing, learning, and so on. For example, the object information corresponding to the object identity ID2 is Mei Lanfang, audio A2 is an audio file of Mei Lanfang's "Farewell My Concubine", video V2 is a video file of Mei Lanfang's "Farewell My Concubine", text T2 contains the lines of "Farewell My Concubine", and the prestored voiceprint model M2 is trained from audio A2 and video V2 and embodies Mei Lanfang's voiceprint features. Suppose the user now wants to learn Mei Lanfang's "Farewell My Concubine": the first voiceprint model is extracted from a speech file provided by the user, the second voiceprint model M2 matching this first voiceprint model is found, and the video file and audio file of "Farewell My Concubine" are thereby found. The user studies and imitates Mei Lanfang's "Farewell My Concubine" through its audio and video files, thus achieving the goal of learning from existing multimedia files. Likewise, audio or video files of film and television works can be used for dubbing and the like in a similar way, which is not repeated here.
Optionally, before the voiceprint model matching the first voiceprint model is searched for, the database storing the corpus data is established. The device further comprises: a collecting unit for collecting, before the voiceprint model matching the first voiceprint model is searched for among the voiceprint models prestored in the corpus database, corpus data of multiple objects indicated by multiple object identities; a third acquiring unit for obtaining voiceprint models of the corpus data of the multiple objects, to obtain the prestored voiceprint models; and an establishing unit for establishing a correspondence between the prestored voiceprint models and the multiple object identities.

Corpus data of the multiple objects indicated by the multiple object identities is collected, such as the corpus data Y11, Y12, and Y2 in Table 1. To make it easy to look up and store data in the database, an object information table and a corpus data table are established separately. The object information table stores information such as names and portraits, the corpus data table stores files such as audio, video, and text, and the object information table and the corpus data table are associated through the object identity, as shown in Fig. 3.

After the prestored voiceprint model is obtained from all corpus data belonging to the same object, a correspondence is established between the prestored voiceprint models and the multiple object identities, i.e. each object identity corresponds to one prestored voiceprint model. In this way, an object identity can be determined from a prestored voiceprint model, and the corpus data can then be found according to the object identity.

After the correspondence between the prestored voiceprint models and the object identities is established, the multiple corpus data items associated with an object identity can be looked up through that object identity, which avoids separately establishing an association between the prestored voiceprint models and the corpus data. The mapping relations stored in the database are therefore simpler, the efficiency of computing the mapping relations is improved, the efficiency of searching for the second voiceprint model in the database is improved, and the efficiency of searching for corpus data is improved in turn.
Specifically, the third acquiring unit comprises: an association module for associating the corpus data with the identities of the multiple objects; an extraction module for extracting the speech characteristic parameter of each frame of the speech signal in all corpus data associated with each of the multiple object identities; a training module for training the extracted speech characteristic parameters of each object identity to obtain a voiceprint model belonging to that object identity; and a third determination module for taking the voiceprint models belonging to the object identities as the prestored voiceprint models.

As in Table 1, the corpus data is associated with the identities of the multiple objects, and one object identity can be associated with several corpus data items; for example, the object identity ID1 in Table 1 is associated with corpus data Y11 and corpus data Y12. The corpus data is pre-processed to remove non-speech signal, the speech signals in corpus data Y11 and Y12 are divided into frames, speech characteristic parameters are extracted from every frame of the speech signals in corpus data Y11 and Y12, and the extracted speech characteristic parameters are trained to obtain the voiceprint model M1 belonging to the object identity ID1, which embodies the voiceprint features of the object corresponding to ID1. The same operations are performed on the corpus data associated with every object identity, so that the voiceprint model of each object identity is obtained, and the voiceprint models of all object identities constitute the prestored voiceprint models.

Preferably, the device further comprises: a fourth acquiring unit for obtaining corpus data of a first object after the correspondence between the prestored voiceprint models and the multiple object identities is established and before the first voiceprint model of the user is obtained; a recognition unit for identifying the voiceprint model in the corpus data of the first object; a second search unit for searching the prestored voiceprint models for a voiceprint model that matches the voiceprint model of the first object; and an associating unit for associating the corpus data of the first object with the object identity corresponding to the voiceprint model found.

Once the correspondence between the prestored voiceprint models and the multiple object identities is established, a database of voiceprint models is available. When new corpus data is added, the prestored voiceprint model matching the voiceprint model of the new corpus data is searched for, and the new corpus data is associated with the object identity corresponding to the prestored voiceprint model found. New corpus data is thereby stored automatically and associated with an object identity that already exists in the database, which makes database maintenance easier.

For example, suppose Mei Lanfang's "Farewell My Concubine" is stored in the database and an audio file of Mei Lanfang's "Drunken Concubine" needs to be added. The voiceprint model of the "Drunken Concubine" audio file is extracted, the matching model found among the prestored voiceprint models is the voiceprint model M2, and the "Drunken Concubine" audio is then associated with the object identity ID2 corresponding to the voiceprint model M2, completing the addition of "Drunken Concubine" to the database and its association with the object identity. If the user needs to extract all of Mei Lanfang's videos, the corpus data associated with Mei Lanfang is found according to the speech matching Mei Lanfang's prestored voiceprint model, and the corpus data found includes not only "Farewell My Concubine" but also "Drunken Concubine".
The above embodiments of the present invention achieve the following effects:

1. Voiceprint recognition is used: the prestored voiceprint model matching the received speech signal of the user is looked up, and the corpus data corresponding to that prestored voiceprint model is quickly found according to the correspondence between object identities, prestored voiceprint models, and corpus data, which improves the efficiency of searching for corpus data.

2. By comparing voiceprint models against the prestored voiceprint models, new corpus data can be quickly added to the database and mapped to the corresponding object identity, which makes database maintenance easier.
If the integrated units in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, the essence of the technical solution of the present invention, the part that contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing one or more computer devices (which may be personal computers, servers, network devices, or the like) to execute all or part of the steps of the methods described in the embodiments of the present invention.

In the above embodiments of the present invention, each embodiment has its own emphasis; for parts not described in detail in a given embodiment, reference may be made to the related descriptions of the other embodiments.

The above are only preferred embodiments of the present invention and are not intended to limit the present invention. For those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Claims (14)
1. A query method for corpus data, characterized by comprising:

obtaining a first voiceprint model of a user;

searching voiceprint models prestored in a corpus database for a voiceprint model that matches the first voiceprint model, to obtain a second voiceprint model;

obtaining, according to an association between the prestored voiceprint models and corpus data prestored in the corpus database, first corpus data associated with the second voiceprint model; and

sending the first corpus data to the user.

2. The method according to claim 1, characterized in that obtaining the first corpus data associated with the second voiceprint model comprises:

searching the corpus database for an object identity having a mapping relation with the second voiceprint model;

obtaining second corpus data associated with the object identity; and

taking the second corpus data associated with the object identity as the first corpus data.

3. The method according to claim 2, characterized in that sending the first corpus data to the user comprises:

obtaining object information of the object identity; and

sending the object information to the user when the first corpus data is sent to the user.
4. The method according to claim 1, characterized in that searching the voiceprint models prestored in the corpus database for the voiceprint model that matches the first voiceprint model to obtain the second voiceprint model comprises:

calculating, using posterior probabilities, the similarity between the first voiceprint model and each of the prestored voiceprint models, to obtain multiple similarities;

comparing the multiple similarities to obtain a maximum similarity; and

taking the prestored voiceprint model having the maximum similarity as the second voiceprint model.

5. The method according to claim 1, characterized in that, before searching the voiceprint models prestored in the corpus database for the voiceprint model that matches the first voiceprint model, the method further comprises:

collecting corpus data of multiple objects indicated by multiple object identities;

obtaining voiceprint models of the corpus data of the multiple objects, to obtain the prestored voiceprint models; and

establishing a correspondence between the prestored voiceprint models and the multiple object identities.
6. The method according to claim 5, characterized in that obtaining the voiceprint models of the corpus data of the multiple objects to obtain the prestored voiceprint models comprises:

associating the corpus data with the identities of the multiple objects;

extracting a speech characteristic parameter of each frame of the speech signal in all corpus data associated with each of the multiple object identities;

training the extracted speech characteristic parameters of each object identity to obtain a voiceprint model belonging to that object identity; and

taking the voiceprint models belonging to the object identities as the prestored voiceprint models.

7. The method according to claim 5, characterized in that, after establishing the correspondence between the prestored voiceprint models and the multiple object identities, and before obtaining the first voiceprint model of the user, the method further comprises:

obtaining corpus data of a first object;

identifying the voiceprint model in the corpus data of the first object;

searching the prestored voiceprint models for a voiceprint model that matches the voiceprint model of the first object; and

associating the corpus data of the first object with the object identity corresponding to the voiceprint model found.
8. A query device for corpus data, characterized by comprising:

a first acquiring unit for obtaining a first voiceprint model of a user;

a first search unit for searching voiceprint models prestored in a corpus database for a voiceprint model that matches the first voiceprint model, to obtain a second voiceprint model;

a second acquiring unit for obtaining, according to an association between the prestored voiceprint models and corpus data prestored in the corpus database, first corpus data associated with the second voiceprint model; and

a sending unit for sending the first corpus data to the user.
9. The device according to claim 8, characterized in that the first acquiring unit comprises:

a search module for searching the corpus database for an object identity having a mapping relation with the second voiceprint model;

a first acquisition module for obtaining second corpus data associated with the object identity; and

a first determination module for taking the second corpus data associated with the object identity as the first corpus data.

10. The device according to claim 9, characterized in that the sending unit comprises:

a second acquisition module for obtaining object information of the object identity; and

a sending module for sending the object information to the user when the first corpus data is sent to the user.

11. The device according to claim 8, characterized in that the first search unit comprises:

a calculation module for calculating, using posterior probabilities, the similarity between the first voiceprint model and each of the prestored voiceprint models, to obtain multiple similarities;

a comparison module for comparing the multiple similarities to obtain a maximum similarity; and

a second determination module for taking the prestored voiceprint model having the maximum similarity as the second voiceprint model.
12. The device according to claim 8, characterized in that the device further comprises:

a collecting unit for collecting, before the voiceprint model matching the first voiceprint model is searched for among the voiceprint models prestored in the corpus database, corpus data of multiple objects indicated by multiple object identities;

a third acquiring unit for obtaining voiceprint models of the corpus data of the multiple objects, to obtain the prestored voiceprint models; and

an establishing unit for establishing a correspondence between the prestored voiceprint models and the multiple object identities.

13. The device according to claim 12, characterized in that the third acquiring unit comprises:

an association module for associating the corpus data with the identities of the multiple objects;

an extraction module for extracting a speech characteristic parameter of each frame of the speech signal in all corpus data associated with each of the multiple object identities;

a training module for training the extracted speech characteristic parameters of each object identity to obtain a voiceprint model belonging to that object identity; and

a third determination module for taking the voiceprint models belonging to the object identities as the prestored voiceprint models.

14. The device according to claim 12, characterized in that the device further comprises:

a fourth acquiring unit for obtaining corpus data of a first object after the correspondence between the prestored voiceprint models and the multiple object identities is established and before the first voiceprint model of the user is obtained;

a recognition unit for identifying the voiceprint model in the corpus data of the first object;

a second search unit for searching the prestored voiceprint models for a voiceprint model that matches the voiceprint model of the first object; and

an associating unit for associating the corpus data of the first object with the object identity corresponding to the voiceprint model found.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410549904.XA CN104268279B (en) | 2014-10-16 | 2014-10-16 | The querying method and device of corpus data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104268279A (en) | 2015-01-07
CN104268279B CN104268279B (en) | 2018-04-20 |
Family
ID=52159800
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410549904.XA Expired - Fee Related CN104268279B (en) | 2014-10-16 | 2014-10-16 | The querying method and device of corpus data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104268279B (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070033044A1 (en) * | 2005-08-03 | 2007-02-08 | Texas Instruments, Incorporated | System and method for creating generalized tied-mixture hidden Markov models for automatic speech recognition |
CN102404278A (en) * | 2010-09-08 | 2012-04-04 | 盛乐信息技术(上海)有限公司 | Song requesting system based on voiceprint recognition and application method thereof |
CN102831890A (en) * | 2011-06-15 | 2012-12-19 | 镇江佳得信息技术有限公司 | Method for recognizing text-independent voice prints |
CN103853778A (en) * | 2012-12-04 | 2014-06-11 | 大陆汽车投资(上海)有限公司 | Methods for updating music label information and pushing music, as well as corresponding device and system |
CN103035247A (en) * | 2012-12-05 | 2013-04-10 | 北京三星通信技术研究有限公司 | Method and device of operation on audio/video file based on voiceprint information |
CN103077713A (en) * | 2012-12-25 | 2013-05-01 | 青岛海信电器股份有限公司 | Speech processing method and device |
CN103956168A (en) * | 2014-03-29 | 2014-07-30 | 深圳创维数字技术股份有限公司 | Voice recognition method and device, and terminal |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105679296A (en) * | 2015-12-28 | 2016-06-15 | 百度在线网络技术(北京)有限公司 | Instrumental performance assessment method and device |
WO2018214663A1 (en) * | 2017-05-26 | 2018-11-29 | 北京搜狗科技发展有限公司 | Voice-based data processing method and apparatus, and electronic device |
CN108962253A (en) * | 2017-05-26 | 2018-12-07 | 北京搜狗科技发展有限公司 | A kind of voice-based data processing method, device and electronic equipment |
CN108364654B (en) * | 2018-01-30 | 2020-10-13 | 网易乐得科技有限公司 | Voice processing method, medium, device and computing equipment |
CN108922543A (en) * | 2018-06-11 | 2018-11-30 | 平安科技(深圳)有限公司 | Model library method for building up, audio recognition method, device, equipment and medium |
CN108922543B (en) * | 2018-06-11 | 2022-08-16 | 平安科技(深圳)有限公司 | Model base establishing method, voice recognition method, device, equipment and medium |
CN108986825A (en) * | 2018-07-02 | 2018-12-11 | 北京百度网讯科技有限公司 | Context acquisition methods and equipment based on interactive voice |
CN109129509A (en) * | 2018-09-17 | 2019-01-04 | 金碧地智能科技(珠海)有限公司 | A kind of endowment based on screen intelligent interaction is accompanied and attended to robot |
CN111368191A (en) * | 2020-02-29 | 2020-07-03 | 重庆百事得大牛机器人有限公司 | User portrait system based on legal consultation interaction process |
CN113327622A (en) * | 2021-06-02 | 2021-08-31 | 云知声(上海)智能科技有限公司 | Voice separation method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN104268279B (en) | 2018-04-20 |
Similar Documents
Publication | Title
---|---
CN104268279A (en) | Query method and device of corpus data
CN100414548C (en) | Search system and technique comprehensively using information of graphy and character
CN105120304B (en) | Information display method, apparatus and system
JP5466119B2 (en) | Optimal viewpoint estimation program, apparatus, and method for estimating viewpoints of attributes of viewers interested in the same shared content
US11966404B2 | Media names matching and normalization
CN105045818B (en) | A kind of recommendation methods, devices and systems of picture
CN102760169A (en) | Method for detecting advertising slots in television direct transmission streams
CN105426550B (en) | Collaborative filtering label recommendation method and system based on user quality model
CN110430476A (en) | Direct broadcasting room searching method, system, computer equipment and storage medium
CN102549603A (en) | Relevance-based image selection
CA2817103A1 (en) | Learning tags for video annotation using latent subtags
SG194442A1 (en) | In-video product annotation with web information mining
US9606975B2 (en) | Apparatus and method for automatically generating visual annotation based on visual language
CN103593356A (en) | Method and system for information searching on basis of multimedia information fingerprint technology and application
Jeong et al. | Ontology-based automatic video annotation technique in smart TV environment
CN102855317A (en) | Multimode indexing method and system based on demonstration video
CN105450778A (en) | Information push system
JP6397378B2 (en) | Feature value generation method, feature value generation device, and feature value generation program
CN105631461A (en) | Image recognition system and method
CN102959539A (en) | Method and system for item recommendation in service crossing situation
CN107818183A (en) | A kind of Party building video pushing method based on three stage combination recommended technologies
US20170134806A1 (en) | Selecting content based on media detected in environment
CN118035489A (en) | Video searching method and device, storage medium and electronic equipment
CN113762040B (en) | Video identification method, device, storage medium and computer equipment
Cui et al. | Content-enriched classifier for web video classification
Legal Events
Code | Title | Description
---|---|---
C06 | Publication |
PB01 | Publication |
C10 | Entry into substantive examination |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |
TR01 | Transfer of patent right | Effective date of registration: 20200409. Address after: Room 522, floor 5, chuangji Building 1, No. 10 yard, Longyu North Street, Huilongguan, Changping District, Beijing 100085. Patentee after: Weizhen Technology (Beijing) Co.,Ltd. Address before: 100193 Beijing city Haidian District Dongbeiwang West Road No. 8 Zhongguancun Software Park Building 5, building 2 207 Hanvon. Patentee before: MOFUNSKY TECHNOLOGY (BEIJING) Co.,Ltd.
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20180420