CN111159535A - Resource acquisition method and device - Google Patents

Resource acquisition method and device

Info

Publication number
CN111159535A
CN111159535A (application CN201911235441.9A)
Authority
CN
China
Prior art keywords
entity
named entity
scene
resource
target
Prior art date
Legal status
Pending
Application number
CN201911235441.9A
Other languages
Chinese (zh)
Inventor
胡晓慧
苏少炜
陈孝良
常乐
Current Assignee
Beijing SoundAI Technology Co Ltd
Original Assignee
Beijing SoundAI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing SoundAI Technology Co Ltd
Priority to CN201911235441.9A
Publication of CN111159535A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/953 Querying, e.g. by the use of web search engines
    • G06F 16/9535 Search customisation based on user profiles and personalisation
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The disclosure provides a resource acquisition method, a resource acquisition apparatus, an electronic device, and a computer-readable storage medium. The method comprises the following steps: determining a corresponding scene according to a user voice instruction, and filling the slots of the scene according to the instruction; determining a corresponding category according to the scene; when the category indicates that the scene is resource-related, acquiring an initial named entity related to the resource from the slots; searching a named entity library through a search server, and determining an entity name that matches the initial named entity as the target named entity according to the search results; and acquiring the resource according to the target named entity. By first performing scene recognition on the user voice instruction to obtain an initial named entity, then searching the named entity library through the search server and determining the matching entity name as the target named entity, the disclosure identifies named entities in user voice instructions more accurately.

Description

Resource acquisition method and device
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, and in particular, to a resource acquisition method, device, and computer-readable storage medium.
Background
In the field of intelligent devices, recognizing named entities in a user's voice instruction during interaction is one of the difficult problems to be solved. For example, when a user issues the instruction "play Qilixiang by Zhou Jielun" to a smart speaker, the speaker must correctly recognize that "Zhou Jielun" is a singer and "Qilixiang" is a song title in order to play the right song.
In the prior art, a vocabulary (word-list) method is generally used to recognize named entities in a user voice instruction: a word list recording resource names is constructed, each user voice instruction is scanned for resource names appearing in the list, and any that appear are extracted as named entities.
However, the vocabulary method has a drawback: because users word their voice instructions in diverse and imprecise ways, it may perform poorly in application scenes that depend strongly on resources. Taking the music playing scene as an example, the following may occur:
for example, a user's voice instruction asks to play "Milk and Bread" by a particular singer, but that singer has no song with exactly that title; the word list contains only a song with a slightly different title, so the corresponding song cannot be found and playback fails.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The technical problem solved by the present disclosure is to provide a resource acquisition method to at least partially solve the technical problem of inaccurate identification of named entities in the prior art. In addition, a resource acquisition device, a resource acquisition hardware device, a computer readable storage medium and a resource acquisition terminal are also provided.
In order to achieve the above object, according to one aspect of the present disclosure, the following technical solutions are provided:
a method of resource acquisition, comprising:
determining a corresponding scene according to a user voice instruction, and filling slot positions of the scene according to the user voice instruction;
determining a corresponding category according to the scene;
when the scene is determined to be related to the resource according to the category, acquiring an initial named entity related to the resource according to the slot;
searching a named entity library through a search server, and determining an entity name matched with the initial named entity as a target named entity according to a search result;
and acquiring the resource according to the target named entity.
Further, the searching the named entity library through the search server, and determining an entity name matched with the initial named entity as a target named entity according to a search result includes:
searching a resource data table corresponding to the scene through a search server; the resource data table comprises an entity name set corresponding to at least one preset attribute;
and taking the entity name matched with the initial named entity in the entity name set as a target named entity.
Further, the searching the resource data table corresponding to the scene through the search server includes:
determining a slot position attribute according to a slot position corresponding to the initial named entity;
selecting a preset attribute matched with the slot position attribute from the at least one preset attribute as a target preset attribute;
and searching an entity name set corresponding to the target preset attribute through a search server.
Further, the searching the resource data table corresponding to the scene through the search server includes:
acquiring a retrieval template; wherein, the retrieval template comprises an entity name matching rule;
and searching a resource data table corresponding to the scene through a search server according to the entity name matching rule.
Further, the taking the entity name in the entity name set matching the initial named entity as a target named entity includes:
acquiring a retrieval template; wherein, the retrieval template comprises an entity name screening rule;
and when the entity names matched with the initial named entity are multiple, selecting the entity name which is most matched with the initial named entity from the multiple entity names as a target named entity according to the entity name screening rule.
Further, the retrieval template corresponds to the resource data table one to one.
Further, the method further comprises:
determining at least one preset slot position attribute according to the slot position of the scene related to the preset resource;
and determining at least one preset attribute contained in the resource data table according to the at least one preset slot position attribute.
Further, the method further comprises:
and when the resources are successfully acquired according to the target named entity, updating the named entity library by using the initial named entity.
Further, the determining the corresponding category according to the scene includes:
selecting an instruction matched with the user voice instruction from preset user voice instructions as a target user voice instruction;
and taking the category of the preset scene corresponding to the target user voice instruction as the category of the scene.
In order to achieve the above object, according to an aspect of the present disclosure, the following technical solutions are also provided:
a resource acquisition apparatus, comprising:
the slot filling module is used for determining a corresponding scene according to a user voice instruction and filling a slot of the scene according to the user voice instruction;
the scene type determining module is used for determining a corresponding type according to the scene;
an initial named entity obtaining module, configured to obtain an initial named entity related to the resource according to the slot when it is determined that the scene is related to the resource according to the category;
the search module is used for searching the named entity library through the search server and determining an entity name matched with the initial named entity as a target named entity according to a search result;
and the resource acquisition module is used for acquiring the resource according to the target named entity.
Further, the search module comprises:
the searching unit is used for searching the resource data table corresponding to the scene through a searching server; the resource data table comprises an entity name set corresponding to at least one preset attribute;
and the matching unit is used for taking the entity name matched with the initial named entity in the entity name set as a target named entity.
Further, the search unit is specifically configured to: determining a slot position attribute according to a slot position corresponding to the initial named entity; selecting a preset attribute matched with the slot position attribute from the at least one preset attribute as a target preset attribute; and searching an entity name set corresponding to the target preset attribute through a search server.
Further, the search unit is specifically configured to: acquiring a retrieval template; wherein, the retrieval template comprises an entity name matching rule; and searching a resource data table corresponding to the scene through a search server according to the entity name matching rule.
Further, the matching unit is specifically configured to: acquiring a retrieval template; wherein, the retrieval template comprises an entity name screening rule; and when the entity names matched with the initial named entity are multiple, selecting the entity name which is most matched with the initial named entity from the multiple entity names as a target named entity according to the entity name screening rule.
Further, the retrieval template corresponds to the resource data table one to one.
Further, the apparatus further comprises:
the slot position attribute determining module is used for determining at least one preset slot position attribute according to the slot position of the scene related to the preset resource;
and the resource table attribute determining module is used for determining at least one preset attribute contained in the resource data table according to the at least one preset slot position attribute.
Further, the apparatus further comprises:
and the named entity library updating module is used for updating the named entity library by using the initial named entity when the resource is successfully acquired according to the target named entity.
Further, the scene category determination module is specifically configured to: selecting an instruction matched with the user voice instruction from preset user voice instructions as a target user voice instruction; and taking the category of the preset scene corresponding to the target user voice instruction as the category of the scene.
In order to achieve the above object, according to one aspect of the present disclosure, the following technical solutions are provided:
an electronic device, comprising:
a memory for storing non-transitory computer readable instructions; and
a processor configured to execute the computer readable instructions, so that the processor implements the resource obtaining method according to any one of the above embodiments when executing the computer readable instructions.
In order to achieve the above object, according to one aspect of the present disclosure, the following technical solutions are provided:
a computer-readable storage medium storing non-transitory computer-readable instructions that, when executed by a computer, cause the computer to perform any of the resource acquisition methods described above.
In order to achieve the above object, according to still another aspect of the present disclosure, the following technical solutions are also provided:
a resource acquisition terminal comprises any one of the resource acquisition devices.
According to the disclosure, scene recognition is first performed on the user voice instruction to obtain an initial named entity; a named entity library is then searched through a search server, and an entity name matching the initial named entity is determined as the target named entity according to the search results, so that the determined named entity is more accurate.
The foregoing is a summary of the present disclosure, provided to promote a clear understanding of its technical means; the disclosure may be embodied in other specific forms without departing from its spirit or essential attributes.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is a schematic flow chart diagram of a resource acquisition method according to one embodiment of the present disclosure;
FIG. 2 is a schematic structural diagram of a resource acquisition apparatus according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
Example one
In order to solve the technical problem of inaccurate identification of named entities in the prior art, the embodiment of the disclosure provides a resource acquisition method. As shown in fig. 1, the resource acquiring method mainly includes the following steps S11 to S15.
Step S11: and determining a corresponding scene according to a user voice instruction, and filling slot positions of the scene according to the user voice instruction.
The user voice instruction is related to a scene, and the corresponding scene can be determined from it. Specifically, scenes may be predefined according to the skills currently supported by the smart device; for example, the following scenes may be predefined: checking the weather, setting an alarm clock, playing music, playing movies, reciting ancient poetry, and so on. For each scene, its slots are further defined: for example, for the weather-checking scene the defined slots are place and time, while for the music playing scene the defined slots are singer, song, genre, and album. Based on the scene's slots, slot filling is performed according to the user voice instruction.
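The scene-and-slot definitions above can be sketched as a simple schema. This is an illustrative stand-in, not the patent's implementation; all scene and slot names follow the examples in the text.

```python
# Hypothetical scene/slot schema for a voice assistant; scene names follow
# the examples in the text (check weather, set alarm, play music).
SCENE_SLOTS = {
    "check_weather": ["place", "time"],
    "set_alarm":     ["time"],
    "play_music":    ["singer", "song", "genre", "album"],
}

def empty_slots(scene):
    """Create an unfilled slot dict for a scene."""
    return {slot: None for slot in SCENE_SLOTS[scene]}

def fill_slots(scene, extracted):
    """Fill a scene's slots from values extracted from the voice instruction;
    values that do not correspond to a defined slot are ignored."""
    slots = empty_slots(scene)
    for name, value in extracted.items():
        if name in slots:
            slots[name] = value
    return slots
```

For instance, `fill_slots("play_music", {"singer": "Zhou Jielun", "song": "Qilixiang"})` fills the singer and song slots and leaves genre and album unfilled.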
Further, before scene classification, the user voice instruction is preprocessed, which mainly includes defining a user dictionary, word segmentation, removing special symbols, removing stop words, and the like. The user dictionary contains inseparable words added manually for use during segmentation; it prevents the segmentation algorithm from splitting fixed collocations, which would otherwise harm the smart device's recognition.
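A minimal preprocessing sketch follows. A real system would use a proper Chinese word segmenter (the user dictionary exists precisely to constrain such a segmenter); this stdlib-only version merely illustrates protecting user-dictionary phrases and dropping stop words, with hypothetical dictionary and stop-word entries.

```python
import re

# Illustrative user dictionary and stop-word list (hypothetical entries).
USER_DICT = {"milk and bread"}          # fixed phrases that must not be split
STOP_WORDS = {"please", "the", "a", "of"}

def preprocess(text):
    """Lower-case, strip special symbols, keep user-dictionary phrases whole,
    tokenize on whitespace, and drop stop words."""
    text = text.lower()
    text = re.sub(r"[^\w\s]", " ", text)   # remove special symbols
    tokens = []
    for phrase in USER_DICT:               # protect fixed collocations
        if phrase in text:
            tokens.append(phrase)
            text = text.replace(phrase, " ")
    tokens += [t for t in text.split() if t not in STOP_WORDS]
    return tokens
```

Here "milk and bread" survives as one token instead of being split into three, which is exactly the failure mode the user dictionary is meant to prevent.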
The preprocessed user voice instruction is then classified against the scenes currently supported by the smart device and assigned to one of them. Scene classification may use a mature machine learning method such as Logistic Regression (LR) or a Support Vector Machine (SVM), or a deep learning method such as a text convolutional neural network (TextCNN); the specific method can be chosen according to how each performs on the data.
For example, if the user voice command is in the following mode:
the title of the singer is played.
Please play [ song name ].
It can be determined that the scene to which the user voice command belongs is a music playing scene.
As another example, if the user voice command is in the following mode:
i want to book a hotel from [ time to stay ] to [ time to leave ] in [ city ].
I want to define a hotel in city.
I want to specify a room from [ store name ] to [ away time ].
It may be determined that the scenario to which the user voice instruction belongs is a hotel booking scenario.
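A production classifier would use LR, SVM, or TextCNN as described above; as a minimal stand-in, the utterance patterns can be expressed as regex templates (patterns paraphrased from the examples above, scene names hypothetical):

```python
import re

# Regex templates paraphrased from the sample utterance patterns in the text.
SCENE_TEMPLATES = {
    "play_music": [r"^play (?P<song>.+?) by (?P<singer>.+)$",
                   r"^please play (?P<song>.+)$"],
    "book_hotel": [r"^i want to book a hotel in (?P<city>.+)$"],
}

def classify(utterance):
    """Return (scene, extracted slot values) for the first matching template,
    or (None, {}) if no template matches."""
    for scene, patterns in SCENE_TEMPLATES.items():
        for pattern in patterns:
            m = re.match(pattern, utterance.lower())
            if m:
                return scene, m.groupdict()
    return None, {}
```

Template matching like this doubles as crude slot filling, since the named groups already carry the slot values; a trained classifier plus a dedicated slot recognition model would replace it in practice.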
Step S12: and determining a corresponding category according to the scene.
Scene categories include resource-related scenes and resource-unrelated scenes. A resource-related scene depends on resources; a resource-unrelated scene does not. For example, the music playing scene depends on music resources (songs) and is therefore resource-related, while the hotel booking and alarm clock scenes do not depend on resources and are resource-unrelated, so the following steps S13-S14 need not be performed for them.
Step S13: and when the scene is determined to be related to the resource according to the category, acquiring an initial named entity related to the resource according to the slot.
Resources may be text, audio, or video data that must be obtained from a local database or the internet, such as music, movies, stories, and poems.
Specifically, slot recognition may be performed on the scene with an existing slot recognition model to determine the initial named entity. The initial named entity may exactly match the entity name corresponding to the resource (e.g., a correct song or movie title), or only fuzzily match it (e.g., part of the correct song or movie title).
The scene may be playing music, playing movies, telling stories, and so on, and there may be one or more initial named entities. For example, if the user voice instruction is "play a song by Zhou Jielun", the scene can be determined to be the music playing scene; since the instruction contains only the singer name "Zhou Jielun", the singer slot of the music playing scene is filled accordingly, and the initial named entity obtained through slot recognition is the singer "Zhou Jielun". As another example, if the instruction is "play Qilixiang by Zhou Jielun", the instruction contains both the singer name "Zhou Jielun" and the song name "Qilixiang", so the singer slot and the song slot are filled, and the initial named entities obtained through slot recognition are the singer "Zhou Jielun" and the song "Qilixiang".
Step S14: and searching a named entity library through a search server, and determining an entity name matched with the initial named entity as a target named entity according to a search result.
The search server may be Elasticsearch, a Lucene-based search server that provides a distributed, multi-tenant full-text search engine with a RESTful web interface. Resources may be stored in a database local to the search server or obtained over the internet. After the target named entity is determined, resources can be acquired from the local database or the internet based on it.
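Assuming Elasticsearch as the search server, a fuzzy match against the named entity library might be built as follows; the index and field names are assumptions, and `"fuzziness": "AUTO"` is Elasticsearch's standard option for tolerating near-miss spellings:

```python
def build_song_query(initial_entity, size=5):
    """Build an Elasticsearch query body that fuzzy-matches the initial
    named entity against the 'song' field of a hypothetical index."""
    return {
        "size": size,
        "query": {
            "match": {
                "song": {
                    "query": initial_entity,
                    "fuzziness": "AUTO",   # tolerate small spelling differences
                }
            }
        },
    }

# The body would be sent with an HTTP client or the official client, e.g.:
#   es.search(index="music_entities", body=build_song_query("qilixiang"))
# (index name "music_entities" is hypothetical)
```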
In addition to resource-related content, the search server's named entity library may store and maintain a high-frequency resource table containing resources with higher search frequency. For example, each resource's search count may be updated periodically (e.g., every day or every 7 hours) according to how often it is searched.
Meanwhile, high-frequency resources may also be loaded into memory, where a snapshot of each is maintained: for example, a short leading segment of the resource (such as the audio of a song's first 5 seconds) or a thumbnail (such as the video of a movie's first 5 seconds) is cached. After the target named entity is determined, the high-frequency resources in memory and those on the search server are searched and matched simultaneously; if a high-frequency resource in memory is hit, it is returned directly, which can greatly improve search efficiency.
When matching high-frequency resources in memory, the corresponding resource is obtained according to how often the target named entity is searched, the time period in which it is searched, and the usage history of the corresponding resource (e.g., for music, the number of times a song has been played). Moreover, because of constraints such as differing device storage and utilization efficiency, the resource cache occupies no more than 50% of memory. The priority of a resource for admission to the cache can be assigned as a weighted value of its historical usage, its retrieval frequency, and the number of times it was retrieved in the recent period, with the weight on historical usage being the most important factor.
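The weighted cache-admission priority described above might be sketched like this; the weight values are illustrative assumptions, chosen only so that historical usage dominates as the text requires:

```python
def cache_priority(history_uses, retrieval_freq, recent_retrievals,
                   w_history=0.6, w_freq=0.25, w_recent=0.15):
    """Weighted cache-admission priority; per the text, historical usage
    carries the largest weight. The weight values are illustrative."""
    return (w_history * history_uses
            + w_freq * retrieval_freq
            + w_recent * recent_retrievals)

def pick_for_cache(resources, capacity):
    """Keep the highest-priority resources within the cache capacity.
    Each resource is (name, history_uses, retrieval_freq, recent_retrievals)."""
    ranked = sorted(resources, key=lambda r: cache_priority(*r[1:]), reverse=True)
    return [name for name, *_ in ranked[:capacity]]
```

A real implementation would also enforce the 50%-of-memory bound by summing cached snapshot sizes rather than counting items.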
The search server may also be any other software or hardware facility (a device or module with search service functions) that implements fuzzy or non-exact matching search. It may perform semantic analysis or semantic understanding of the search keywords based on a neural network algorithm or model, carry out fuzzy or non-exact search according to that analysis or understanding, and thereby achieve generalized word-sense matching retrieval.
The named entity library may be stored in a database local to the search server, or may be stored in another server.
When there is one initial named entity, it corresponds to one search result; when there are multiple, each corresponds to its own search result. When a search result contains multiple entity names, they may all be taken together as target named entities, or one of them may be selected.
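When a search result contains several candidate entity names, one way to select a single target is a string-similarity screen; this uses the standard library's `difflib` as a stand-in for the patent's entity name screening rules:

```python
import difflib

def pick_target(initial_entity, candidates):
    """Among several matching entity names, pick the one closest to the
    initial named entity by difflib similarity ratio; return None if there
    are no candidates."""
    if not candidates:
        return None
    return max(candidates,
               key=lambda c: difflib.SequenceMatcher(None, initial_entity, c).ratio())
```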
Step S15: and acquiring the resource according to the target named entity.
In particular, the resources may be obtained locally or from the internet.
According to the embodiment, firstly, a scene of a user voice instruction is identified, an initial named entity is obtained, then a named entity library is searched through a search server, an entity name matched with the initial named entity is determined as a target named entity according to a search result, and the determined named entity is more accurate.
In an optional embodiment, step S14 specifically includes:
step S141: searching a resource data table corresponding to the scene through a search server; and the resource data table comprises an entity name set corresponding to at least one preset attribute.
Step S142: and taking the entity name matched with the initial named entity in the entity name set as a target named entity.
Specifically, resource data tables may be established in advance for resource-related scenes, with one resource-related scene corresponding to one resource data table. For example, the music playing scene corresponds to a music resource data table, the movie playing scene to a movie resource data table, the story telling scene to a story resource data table, and so on.
Resource-related scenes are scenes that depend on resources, such as playing music, playing movies, and telling stories. The resource of the music playing scene is music, that of the movie playing scene is movies, and that of the story telling scene is stories.
The resource data table contains an entity name set corresponding to at least one preset attribute. For example, for the music playing scene, the preset attributes may include singer, song, genre, album, and so on. Each preset attribute corresponds to an entity name set containing at least one entity name; for example, the entity name set for the song attribute contains the name of at least one song.
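The resource data table for the music playing scene might be represented as a mapping from preset attributes to entity name sets (all entries hypothetical):

```python
# Hypothetical resource data table for the music playing scene: each
# preset attribute maps to a set of entity names, as described above.
MUSIC_RESOURCE_TABLE = {
    "singer": {"Zhou Jielun"},
    "song":   {"Qilixiang", "Dao Xiang"},
    "genre":  {"pop", "rock"},
    "album":  {"Common Jasmine Orange"},
}

def names_for(table, preset_attribute):
    """Return the entity name set for one preset attribute (empty if unknown)."""
    return table.get(preset_attribute, set())

def match_entity(table, initial_entity):
    """Scan every preset attribute's name set for an exact match, mirroring
    the 'search all preset attributes' variant described in the text."""
    for attribute, names in table.items():
        if initial_entity in names:
            return attribute, initial_entity
    return None, None
```

Restricting `match_entity` to a subset of attributes would correspond to the other variant, where only the entity name sets of some preset attributes are searched.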
Specifically, during a search, the matched entity name may be selected from the entity name sets of all preset attributes, or only from the entity name sets of some of them.
In an optional embodiment, when the matched entity name is selected by searching the entity name sets of only some preset attributes, step S141 includes:
step S1411: and determining the slot position attribute according to the slot position corresponding to the initial named entity.
The slot attributes may be, for example, celebrity (singers, movie stars, historical figures, scientists, etc.), music (songs, albums, etc.), or film (movies, short videos, television dramas, etc.).
Specifically, if the user voice instruction is "play Qilixiang", the corresponding slot can be determined to be song from the initial named entity "Qilixiang", and the corresponding slot attribute can then be determined to be music from the song slot.
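The slot-to-slot-attribute mapping described here can be sketched as a lookup table (entries illustrative, following the celebrity/music examples in the text):

```python
# Hypothetical mapping from slots to slot attributes: the 'singer' slot maps
# to the 'celebrity' attribute, 'song' and 'album' map to 'music', and so on.
SLOT_TO_ATTRIBUTE = {
    "singer": "celebrity",
    "song":   "music",
    "album":  "music",
    "movie":  "film",
}

def slot_attribute(slot):
    """Look up the slot attribute for a slot; None if the slot is unknown."""
    return SLOT_TO_ATTRIBUTE.get(slot)

def target_preset_attributes(slot_attr, preset_attributes):
    """Select the preset attributes of a resource data table that match the
    slot attribute (simple equality as a stand-in for the patent's matching)."""
    return [p for p in preset_attributes if p == slot_attr]
```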
Step S1412: and selecting a preset attribute matched with the slot position attribute from the at least one preset attribute as a target preset attribute.
Specifically, the corresponding relationship between the slot attribute and the preset attribute is pre-established, which is specifically referred to the related description of step S1401 and step S1402 in the following optional embodiments, and details are not described here again.
Step S1413: and searching an entity name set corresponding to the target preset attribute through a search server.
In an optional embodiment, before the step S1411 is executed, the method further includes a method for determining a resource data table, which specifically includes:
step S1401: and determining at least one preset slot position attribute according to the slot position of the scene related to the preset resource.
There may be multiple preset resource-related scenes, and at least one preset slot attribute is determined for each. One slot may correspond to one preset slot attribute, or multiple slots may correspond to one preset slot attribute.
The preset resource-related scenes may be playing music, playing movies, telling stories, reciting ancient poetry, and so on. For the music playing scene, the corresponding slots include singer, music type, song, and album; the slot attribute corresponding to the singer slot can be determined to be celebrity, that of the music type slot to be music type, and that of the song and album slots to be music.
Step S1402: determining at least one preset attribute contained in the resource data table according to the at least one preset slot attribute.
Each preset resource-related scene corresponds to one resource data table.
One preset slot attribute may correspond to one preset attribute in the resource data table, or multiple preset slot attributes may correspond to one preset attribute in the resource data table. For example, following steps S1401 and S1402 above, if the slot attributes include celebrity, music type, and music, then when determining the preset attributes of the resource data table, celebrity may be used as one preset attribute, while music type and music may be grouped into another preset attribute.
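A minimal sketch of the grouping just described, under the assumption (illustrative names only) that "music_type" and "music" are merged into a single preset attribute of the resource data table:

```python
# Illustrative grouping of slot attributes into the preset attributes of
# one resource data table: "celebrity" keeps its own preset attribute,
# while "music_type" and "music" share a single preset attribute.
SLOT_ATTRIBUTE_TO_PRESET = {
    "celebrity": "celebrity",
    "music_type": "music_content",
    "music": "music_content",
}

def table_preset_attributes(slot_attributes):
    """Distinct preset attributes derived from a scene's slot attributes."""
    return sorted({SLOT_ATTRIBUTE_TO_PRESET[a] for a in slot_attributes})
```

Three slot attributes thus yield only two preset attributes in the table, the many-to-one case of step S1402.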
In an optional embodiment, step S141 specifically includes:
step S1411: acquiring a retrieval template; wherein, the retrieval template comprises an entity name matching rule.
The entity name matching rule includes fuzzy matching or exact matching. In fuzzy matching, the search uses not only the initial named entity itself but also similar keywords (such as synonyms, similarly shaped words, partial words of the initial named entity, or keywords containing the initial named entity). For example, if the initial named entity is "Qilixiang", the search may also use other keywords containing "Qilixiang". As another example, suppose the user wants to watch the drama "Daming Dynasty 1566", says "play 1566" in a voice instruction, and then supplements it with "Daming Dynasty 1566"; in this case, "1566" can be saved in the search server as a match for "Daming Dynasty 1566" for the current user. Such a supplement may come from the user within a period of time, or after the speaker device feeds back that the resource could not be retrieved. Later, when the initial named entity is "1566", it can be used directly as the target named entity in the search. In other words, under a preset time rule (for example, over the history search records of one week or one month), the keyword usage habits of different users in the history search records, or keywords a user has used before, serve as matching rules.
The search server may also be dynamically updated according to the rules described above.
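A rough sketch of this fuzzy-matching rule (the shorthand table is an assumption about how the search server might store learned per-user matches; all names are illustrative):

```python
# A candidate entity name matches if it equals the query, contains the
# initial named entity as a substring, or is the saved expansion of a
# previously learned per-user shorthand (e.g. "1566" supplemented by the
# user as "Daming Dynasty 1566").
USER_SHORTHANDS = {"1566": "Daming Dynasty 1566"}

def fuzzy_match(initial_entity, candidates):
    expanded = USER_SHORTHANDS.get(initial_entity, initial_entity)
    return [name for name in candidates
            if name == expanded or initial_entity in name or expanded in name]
```

For the query "1566", only "Daming Dynasty 1566" would survive from a candidate list, while a query "Qilixiang" would also match candidates that merely contain "Qilixiang".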
Step S1412: searching, through the search server, the resource data table corresponding to the scene according to the entity name matching rule.
In an optional embodiment, step S142 specifically includes:
step S1421: acquiring a retrieval template; wherein, the retrieval template comprises an entity name screening rule.
The entity name screening rule may be predetermined; for example, entity names are scored according to a predetermined rule. When the entity name is a singer, the singer may be scored according to popularity or attention; when the entity name is a song, the song may be scored according to its click volume. When multiple entity names match the initial named entity, the entity name with the highest score can be selected as the target named entity.
For example, if the user voice instruction is to play "Qilixiang", the search results may include "Qilixiang" by zhonglun and "Qilixiang" by jongqiao. Each result is scored according to the respective singer's popularity and/or the click volume of that singer's version, and when jongqiao's "Qilixiang" scores higher than zhonglun's, jongqiao's "Qilixiang" is taken as the target named entity.
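The screening step can be sketched as follows (the `popularity` and `clicks` fields are assumed, illustrative scoring signals; the disclosure does not fix a concrete formula):

```python
# When several entity names match the initial named entity, score each
# candidate (here: singer popularity plus song click volume) and keep
# the highest-scoring one as the target named entity.
def pick_target_named_entity(candidates):
    """candidates: list of dicts with 'name', 'popularity', 'clicks' keys."""
    return max(candidates, key=lambda c: c["popularity"] + c["clicks"])
```

With two "Qilixiang" candidates scored 170 and 180, the second would be selected as the target named entity.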
Step S1422: when multiple entity names match the initial named entity, selecting, according to the entity name screening rule, the entity name that best matches the initial named entity from the multiple entity names as the target named entity.
In an alternative embodiment, to facilitate subsequent maintenance of the resource data tables and retrieval templates, each scene's resource data table may correspond to one retrieval template. In addition, to determine the target named entity quickly and accurately, the retrieval template can be updated according to historical search results.
In an optional embodiment, the method further includes an updating method of the named entity library, specifically including:
when the resource is successfully acquired according to the target named entity, updating the named entity library with the initial named entity.
Specifically, if the resource can be successfully acquired according to the target named entity, the initial named entity corresponding to the target named entity is a valid named entity. Storing this valid named entity in the named entity library improves retrieval efficiency when the same initial named entity is used in subsequent searches.
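A minimal sketch of this library update (the set-based library is an assumption about the storage structure):

```python
# After a successful resource acquisition via the target named entity,
# record the initial named entity in the named entity library so that
# identical later queries resolve against the library directly.
named_entity_library = {"Daming Dynasty 1566"}

def on_resource_acquired(initial_entity, success):
    if success:
        named_entity_library.add(initial_entity)
```

Only successful acquisitions add entries, so invalid queries never pollute the library.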
In an optional embodiment, step S12 specifically includes:
step S121: and selecting an instruction matched with the user voice instruction from preset user voice instructions as a target user voice instruction.
Step S122: and taking the category of the preset scene corresponding to the target user voice instruction as the category of the scene.
Specifically, the preset scenes may be classified in advance into resource-related scenes and resource-independent scenes. The preset scenes may include checking the weather, setting an alarm, playing music, playing movies, telling stories, reciting ancient poetry, and the like. According to each scene's dependence on resources, music playing (which depends on music resources), movie playing (movie resources), story telling (story resources), and ancient poetry reciting (ancient poetry resources) are classified as resource-related scenes, while checking the weather and setting an alarm depend on no resources and are classified as resource-independent scenes. The preset user voice instructions are then divided by scene, and the correspondence between preset user voice instructions and preset scenes is determined. The preset user voice instructions include at least one user voice instruction.
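The classification above can be sketched as a simple lookup (scene names follow the examples in the text; the data structures are assumptions for illustration):

```python
# Resource-related scenes map to the resource type they depend on;
# resource-independent scenes depend on no resource.
RESOURCE_RELATED = {
    "play music": "music",
    "play movie": "movie",
    "tell story": "story",
    "recite poetry": "ancient poetry",
}
RESOURCE_INDEPENDENT = {"check weather", "set alarm"}

def scene_category(scene):
    if scene in RESOURCE_RELATED:
        return "resource-related"
    if scene in RESOURCE_INDEPENDENT:
        return "resource-independent"
    raise ValueError(f"unknown scene: {scene}")
```

Only scenes in the first table trigger the named-entity search and resource acquisition of steps S13-S15.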
It will be appreciated by those skilled in the art that obvious modifications (for example, combinations of the enumerated modes) or equivalent substitutions may be made to the above-described embodiments.
Although the steps in the embodiment of the resource acquisition method are described above in the stated order, it should be clear to those skilled in the art that the steps of the embodiments of the present disclosure are not necessarily performed in that order; they may also be performed in other orders, such as reversed, in parallel, or interleaved. Moreover, those skilled in the art may add other steps on the basis of the above. These obvious modifications or equivalent substitutions also fall within the protection scope of the present disclosure and are not described again here.
For convenience of description, only the parts relevant to the embodiments of the present disclosure are shown; for details of specific techniques not disclosed here, please refer to the method embodiments of the present disclosure.
Example two
In order to solve the technical problem of inaccurate identification of named entities in the prior art, the embodiment of the disclosure provides a resource acquisition device. The apparatus may perform the steps in the resource obtaining method described in the first embodiment. As shown in fig. 2, the apparatus mainly includes: the system comprises a slot filling module 21, a scene category determining module 22, an initial named entity obtaining module 23, a searching module 24 and a resource obtaining module 25; wherein,
the slot filling module 21 is configured to determine a corresponding scene according to a user voice instruction, and fill a slot of the scene according to the user voice instruction;
the scene type determining module 22 is configured to determine a corresponding type according to the scene;
the initial named entity obtaining module 23 is configured to obtain an initial named entity related to the resource according to the slot when it is determined that the scene is related to the resource according to the category;
the search module 24 is configured to search the named entity library through the search server, and determine an entity name matching the initial named entity as a target named entity according to the search result.
The resource obtaining module 25 is configured to obtain the resource according to the target named entity.
Further, the search module 24 includes: a search unit 241 and a matching unit 242; wherein,
the searching unit 241 is configured to search the resource data table corresponding to the scene through a search server; the resource data table comprises an entity name set corresponding to at least one preset attribute;
the matching unit 242 is configured to use the entity name in the entity name set that matches the initial named entity as a target named entity.
Further, the search unit 241 is specifically configured to: determining a slot position attribute according to a slot position corresponding to the initial named entity; selecting a preset attribute matched with the slot position attribute from the at least one preset attribute as a target preset attribute; and searching an entity name set corresponding to the target preset attribute through a search server.
Further, the search unit 241 is specifically configured to: acquiring a retrieval template; wherein, the retrieval template comprises an entity name matching rule; and searching a resource data table corresponding to the scene through a search server according to the entity name matching rule.
Further, the matching unit 242 is specifically configured to: acquiring a retrieval template; wherein, the retrieval template comprises an entity name screening rule; and when the entity names matched with the initial named entity are multiple, selecting the entity name which is most matched with the initial named entity from the multiple entity names as a target named entity according to the entity name screening rule.
Further, the retrieval template corresponds to the resource data table one to one.
Further, the apparatus further comprises: a slot attribute determination module 26 and a resource table attribute determination module 27; wherein,
the slot position attribute determining module 26 is configured to determine at least one preset slot position attribute according to a slot position of a scene related to a preset resource;
the resource table attribute determining module 27 is configured to determine at least one preset attribute included in the resource data table according to the at least one preset slot attribute.
Further, the apparatus further comprises: named entity library update module 28; wherein,
the named entity repository updating module 28 is configured to update the named entity repository with the initial named entity when the resource is successfully acquired according to the target named entity.
Further, the scene category determining module 22 is specifically configured to: selecting an instruction matched with the user voice instruction from preset user voice instructions as a target user voice instruction; and taking the category of the preset scene corresponding to the target user voice instruction as the category of the scene.
For detailed descriptions of the working principle, the technical effect of implementation, and the like of the embodiment of the resource obtaining apparatus, reference may be made to the description of the embodiment of the resource obtaining method, and details are not repeated here.
EXAMPLE III
Referring now to FIG. 3, a block diagram of an electronic device 300 suitable for use in implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 3 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 3, the electronic device 300 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)302 or a program loaded from a storage means 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data necessary for the operation of the electronic apparatus 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 308 including, for example, magnetic tape, hard disk, etc.; and a communication device 309. The communication means 309 may allow the electronic device 300 to communicate wirelessly or by wire with other devices to exchange data. While fig. 3 illustrates an electronic device 300 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 309, or installed from the storage means 308, or installed from the ROM 302. The computer program, when executed by the processing device 301, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determining a corresponding scene according to a user voice instruction, and filling slot positions of the scene according to the user voice instruction; determining a corresponding category according to the scene; when the scene is determined to be related to the resource according to the category, acquiring an initial named entity related to the resource according to the slot; searching a named entity library through a search server, and determining an entity name matched with the initial named entity as a target named entity according to a search result; and acquiring the resource according to the target named entity.
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to the particular combinations of the features described above, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure — for example, solutions formed by interchanging the above features with (but not limited to) features with similar functions disclosed herein.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (12)

1. A resource acquisition method, comprising:
determining a corresponding scene according to a user voice instruction, and filling slot positions of the scene according to the user voice instruction;
determining a corresponding category according to the scene;
when the scene is determined to be related to the resource according to the category, acquiring an initial named entity related to the resource according to the slot;
searching a named entity library through a search server, and determining an entity name matched with the initial named entity as a target named entity according to a search result;
and acquiring the resource according to the target named entity.
2. The method of claim 1, wherein searching a named entity repository through a search server, and determining an entity name matching the initial named entity as a target named entity according to a search result comprises:
searching a resource data table corresponding to the scene through a search server; the resource data table comprises an entity name set corresponding to at least one preset attribute;
and taking the entity name matched with the initial named entity in the entity name set as a target named entity.
3. The method according to claim 2, wherein the searching the resource data table corresponding to the scene through the search server comprises:
determining a slot position attribute according to a slot position corresponding to the initial named entity;
selecting a preset attribute matched with the slot position attribute from the at least one preset attribute as a target preset attribute;
and searching an entity name set corresponding to the target preset attribute through a search server.
4. The method according to claim 2, wherein the searching the resource data table corresponding to the scene through the search server comprises:
acquiring a retrieval template; wherein, the retrieval template comprises an entity name matching rule;
and searching a resource data table corresponding to the scene through a search server according to the entity name matching rule.
5. The method of claim 2, wherein the using, as the target named entity, the entity name in the set of entity names that matches the initial named entity comprises:
acquiring a retrieval template; wherein, the retrieval template comprises an entity name screening rule;
and when the entity names matched with the initial named entity are multiple, selecting the entity name which is most matched with the initial named entity from the multiple entity names as a target named entity according to the entity name screening rule.
6. The method according to claim 4 or 5, wherein the search template has a one-to-one correspondence with the resource data table.
7. The method of claim 2, further comprising:
determining at least one preset slot position attribute according to the slot position of the scene related to the preset resource;
and determining at least one preset attribute contained in the resource data table according to the at least one preset slot position attribute.
8. The method of claim 1, further comprising:
and when the resources are successfully acquired according to the target named entity, updating the named entity library by using the initial named entity.
9. The method of any of claims 1-5 and 7-8, wherein determining the corresponding category from the scene comprises:
selecting an instruction matched with the user voice instruction from preset user voice instructions as a target user voice instruction;
and taking the category of the preset scene corresponding to the target user voice instruction as the category of the scene.
10. A resource acquisition apparatus, comprising:
the slot filling module is used for determining a corresponding scene according to a user voice instruction and filling a slot of the scene according to the user voice instruction;
the scene type determining module is used for determining a corresponding type according to the scene;
an initial named entity obtaining module, configured to obtain an initial named entity related to the resource according to the slot when it is determined that the scene is related to the resource according to the category;
the search module is used for searching the named entity library through the search server and determining an entity name matched with the initial named entity as a target named entity according to a search result;
and the resource acquisition module is used for acquiring the resources according to the target named entity.
11. An electronic device, comprising:
a memory for storing non-transitory computer readable instructions; and
a processor for executing the computer readable instructions such that the processor when executing implements the resource acquisition method of any of claims 1-9.
12. A computer-readable storage medium storing non-transitory computer-readable instructions that, when executed by a computer, cause the computer to perform the resource acquisition method of any one of claims 1-9.
CN201911235441.9A 2019-12-05 2019-12-05 Resource acquisition method and device Pending CN111159535A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911235441.9A CN111159535A (en) 2019-12-05 2019-12-05 Resource acquisition method and device

Publications (1)

Publication Number Publication Date
CN111159535A true CN111159535A (en) 2020-05-15


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112800769A (en) * 2021-02-20 2021-05-14 深圳追一科技有限公司 Named entity recognition method and device, computer equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130166303A1 (en) * 2009-11-13 2013-06-27 Adobe Systems Incorporated Accessing media data using metadata repository
CN109117233A (en) * 2018-08-22 2019-01-01 百度在线网络技术(北京)有限公司 Method and apparatus for handling information
CN109344401A (en) * 2018-09-18 2019-02-15 深圳市元征科技股份有限公司 Named Entity Extraction Model training method, name entity recognition method and device
CN110309507A (en) * 2019-05-30 2019-10-08 深圳壹账通智能科技有限公司 Testing material generation method, device, computer equipment and storage medium

Similar Documents

Publication Title
CN107832434B (en) Method and device for generating multimedia play list based on voice interaction
CN110971969B (en) Video dubbing method and device, electronic equipment and computer readable storage medium
CN109165302B (en) Multimedia file recommendation method and device
US10838746B2 (en) Identifying parameter values and determining features for boosting rankings of relevant distributable digital assistant operations
US20150046418A1 (en) Personalized content tagging
CN112037792B (en) Voice recognition method and device, electronic equipment and storage medium
US10885107B2 (en) Music recommendation method and apparatus
US10109273B1 (en) Efficient generation of personalized spoken language understanding models
CN111324700A (en) Resource recall method and device, electronic equipment and computer-readable storage medium
CN110990598B (en) Resource retrieval method and device, electronic equipment and computer-readable storage medium
CN111428011B (en) Word recommendation method, device, equipment and storage medium
WO2023016349A1 (en) Text input method and apparatus, and electronic device and storage medium
CN109325180B (en) Article abstract pushing method and device, terminal equipment, server and storage medium
CN112364235A (en) Search processing method, model training method, device, medium and equipment
CN111414512A (en) Resource recommendation method and device based on voice search and electronic equipment
CN111078849B (en) Method and device for outputting information
CN111274819A (en) Resource acquisition method and device
US20140372455A1 (en) Smart tags for content retrieval
US9361289B1 (en) Retrieval and management of spoken language understanding personalization data
WO2021098175A1 (en) Method and apparatus for guiding speech packet recording function, device, and computer storage medium
CN111159535A (en) Resource acquisition method and device
CN110765357A (en) Method, device and equipment for searching online document and storage medium
CN114357205A (en) Candidate word mining method and device, electronic equipment and storage medium
CN113420723A (en) Method and device for acquiring video hotspot, readable medium and electronic equipment
CN113918801A (en) Information recommendation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination