WO2021051565A1 - Machine learning-based semantic parsing method and apparatus, electronic device, and computer non-volatile readable storage medium - Google Patents
- Publication number
- WO2021051565A1, PCT/CN2019/117680 (CN2019117680W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- input information
- processed
- semantic
- machine learning
- template
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
Definitions
- This application relates to the field of machine learning application technology, and in particular to a machine learning-based semantic analysis method, apparatus, electronic device, and computer non-volatile readable storage medium.
- Semantic analysis parses, from a piece of information output by a user, the semantic analysis result that the user intends to convey to a certain receiving object.
- The initial information output by different users usually differs, and even the same user may use various expressions at different times. For example, when the air conditioner needs to be turned on, different users, or even the same user, may phrase the request in many ways. A single piece of initial information can therefore correspond to multiple semantic analysis results, and efficiency is difficult to guarantee.
- The inventor of the present application realized that the prior art, which uses a fixed analysis method to analyze user semantics, suffers from low efficiency and low accuracy.
- one purpose of the present application is to provide a semantic analysis method, device, electronic device, and computer non-volatile readable storage medium based on machine learning.
- A semantic analysis method based on machine learning includes: when input information to be processed is received, converting the input information to be processed into pre-input information; inputting the pre-input information into a pre-trained machine learning model to obtain the predicted semantic template corresponding to the input information to be processed; obtaining the semantic template constraint information at the time the input information to be processed is received, inputting it together with the predicted semantic template into a predicted-semantic-template constraint model, and outputting the constrained semantic template, where the semantic template constraint information is real-time environmental information related to the input information; converting the input information to be processed into pre-parsed data according to the constrained semantic template; and obtaining the semantic analysis result of the input information to be processed according to the pre-parsed data.
- A semantic analysis device based on machine learning includes: a pre-processing module, configured to convert the input information to be processed into pre-input information when the input information to be processed is received; a template analysis module, configured to input the pre-input information into a pre-trained machine learning model to obtain the predicted semantic template corresponding to the input information to be processed; a template constraint module, configured to obtain the semantic template constraint information when the input information to be processed is received, input it together with the predicted semantic template into a predicted-semantic-template constraint model, and output the constrained semantic template, where the semantic template constraint information is real-time environmental information related to the input information; a conversion module, configured to convert the input information to be processed into pre-parsed data according to the constrained semantic template; and an acquisition module, configured to obtain the semantic analysis result of the input information to be processed according to the pre-parsed data.
- An electronic device includes: a processing unit; and a storage unit for storing a machine learning-based semantic analysis program executable by the processing unit; wherein the processing unit is configured to execute the machine learning-based semantic analysis program, which, when executed, performs the above-mentioned semantic analysis method based on machine learning.
- A computer non-volatile readable storage medium stores a semantic analysis program based on machine learning; when the program is executed by a processing unit, it performs the machine learning-based semantic analysis method described above.
- The predicted semantic template is obtained by this analysis, thereby effectively ensuring the accuracy and efficiency of semantic analysis.
- Fig. 1 schematically shows a flowchart of a semantic parsing method based on machine learning.
- Fig. 2 schematically shows a flow chart of a method for obtaining pre-input information.
- Fig. 3 schematically shows a flow chart of a method for obtaining input information supplementary instructions.
- Fig. 4 schematically shows a block diagram of a semantic parsing device based on machine learning.
- Fig. 5 schematically shows an example block diagram of an electronic device for implementing the above-mentioned semantic analysis method based on machine learning.
- Fig. 6 schematically shows a schematic diagram of a computer non-volatile readable storage medium for implementing the above-mentioned machine learning-based semantic analysis method.
- a semantic analysis method based on machine learning is first provided.
- the semantic analysis method based on machine learning can be run on a server, a server cluster or a cloud server, etc.
- The method of the present application may also be run on other platforms, which is not particularly limited in this exemplary embodiment.
- the semantic parsing method based on machine learning may include the following steps:
- Step S110: when the input information to be processed is received, convert the input information to be processed into pre-input information;
- Step S120: input the pre-input information into a pre-trained machine learning model to obtain the predicted semantic template corresponding to the input information to be processed;
- Step S130: obtain the semantic template constraint information at the time the input information to be processed is received, input it together with the predicted semantic template into the predicted-semantic-template constraint model, and output the constrained semantic template, where the semantic template constraint information is real-time environmental information related to the input information;
- Step S140: convert the input information to be processed into pre-parsed data according to the constrained semantic template;
- Step S150: obtain the semantic analysis result of the input information to be processed according to the pre-parsed data.
- The input information to be processed is converted into pre-input information; in this way, various forms of input information to be processed can be converted into pre-input information suitable for the machine learning model. Inputting the pre-input information into a pre-trained machine learning model then makes it possible to accurately and efficiently obtain the predicted semantic template corresponding to the input information to be processed.
- The semantic template constraint information at the time the input information to be processed is received is obtained and input, together with the predicted semantic template, into the predicted-semantic-template constraint model, which outputs the constrained semantic template. Because the semantic template constraint information is real-time environmental information related to the input information, the predicted semantic template can be constrained by the real-time environment, further ensuring the accuracy of the semantic template.
- According to the constrained semantic template, the input information to be processed is converted into pre-parsed data; in this way, the input information to be processed can be parsed into pre-parsed data that meets the acceptance requirements of the accepting object. From the pre-parsed data, the semantic analysis result of the input information to be processed is obtained, i.e., a result that the accepting object can directly accept. Obtaining the predicted semantic template by this analysis thereby effectively ensures the accuracy and efficiency of semantic analysis.
- In step S110, when the input information to be processed is received, the input information to be processed is converted into pre-input information.
- The input information to be processed is information with which a user expresses an inner intention in a certain application environment. This information must be analyzed to parse out the intention it contains in the specific application, and that intention must in turn be parsed into an analysis result that can be implemented in the specific application environment, i.e., the semantic analysis result. For example, when an insurance product is to be purchased in an insurance purchase application, the user inputs: "I want to know about insurance package A"; here, "I want to know about insurance package A" is the input information to be processed.
- the input information to be processed needs to be parsed into a semantic analysis result that can be recognized by the insurance app, for example: "get-the Insurance a package-data".
- After the input information to be processed is received, semantic analysis must be performed in the subsequent steps. Converting the input information to be processed into pre-input information ensures that it is accurately represented, and facilitates calculation and analysis in the subsequent steps, improving efficiency.
- the pre-input information can be obtained, for example, by converting the input information to be processed into a vector form.
- converting the input information to be processed into pre-input information includes:
- The word vector of each word corresponding to the text of the input information to be processed is looked up, and the word vectors are concatenated into a word vector string as the pre-input information.
- The word vector dictionary stores the vectors corresponding to various words. The word vector of each word in the text of the input information to be processed can be looked up there, and the word vectors concatenated into a word vector string as the pre-input information, on which the machine learning model then performs its calculations.
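As a minimal sketch of the lookup-and-concatenate step described above (the dictionary contents, vector width, and tokens below are invented assumptions, not taken from this application):

```python
# Hypothetical word-vector dictionary; a real deployment would use a
# trained embedding table with far higher dimensionality.
WORD_VECTOR_DICT = {
    "open": [0.1, 0.3],
    "the": [0.0, 0.2],
    "air": [0.5, 0.1],
    "conditioner": [0.4, 0.9],
}
UNK_VECTOR = [0.0, 0.0]  # fallback for words absent from the dictionary

def to_pre_input(words):
    """Look up each word's vector and concatenate them into one vector string."""
    vector_string = []
    for word in words:
        vector_string.extend(WORD_VECTOR_DICT.get(word, UNK_VECTOR))
    return vector_string

pre_input = to_pre_input(["open", "the", "air", "conditioner"])
# 4 words x 2 dimensions each -> a flat 8-element word vector string
```

The resulting flat vector is what the text calls the pre-input information for the machine learning model.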
- Concatenating the word vectors into a word vector string as the pre-input information may include: concatenating the word vectors, in the order in which the corresponding words appear in the text, into a word vector string as the pre-input information.
- Alternatively, concatenating the word vectors into a word vector string as the pre-input information may include: concatenating the word vectors in a random order to form a word vector string as the pre-input information.
- Converting the input information to be processed into pre-input information may also include: the character vector of each character and the word vector of each word are concatenated into a vector string as the pre-input information.
- Text segmentation uses an existing segmenter to decompose, for example, "I want to know about children's insurance" into "I", "want to", "know about", "children's" and "insurance". Looking up the character vector of each character and the word vector of each word in the vector dictionary effectively preserves the initial semantic connotation of the input information to be processed.
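A sketch of this character-plus-word-vector variant; the segmenter output and the one-dimensional vectors are invented for illustration:

```python
# Invented one-dimensional vectors, for demonstration only.
CHAR_VECTORS = {"a": [0.2], "c": [0.1]}           # character-level vectors
WORD_VECTORS = {"ac": [0.9], "insurance": [0.5]}  # word-level vectors
UNK = [0.0]                                        # out-of-dictionary fallback

def to_vector_string(segmented_words):
    """For each segmented word, append its characters' vectors, then the
    word's own vector, preserving both character- and word-level meaning."""
    out = []
    for word in segmented_words:
        for ch in word:
            out.extend(CHAR_VECTORS.get(ch, UNK))
        out.extend(WORD_VECTORS.get(word, UNK))
    return out

vec = to_vector_string(["ac"])  # characters "a", "c", then the word "ac"
```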
- converting the input information to be processed into pre-input information includes:
- Step S210: convert the input information to be processed in non-text form into text form;
- Step S220: convert the input information to be processed in text form into pre-input information.
- the input information to be processed in non-text form is the voice information input by the user.
- Voice information can be converted into textual input information to be processed through voice recognition.
- In this way, input information to be processed that is in speech form rather than text form can also be handled throughout the process.
- In step S120, the pre-input information is input into a pre-trained machine learning model to obtain the predicted semantic template corresponding to the input information to be processed.
- The predicted semantic template is the semantic template predicted for the input information to be processed in a given application environment; it includes the implementation elements required to carry out the inner intention expressed by the input information to be processed.
- The predicted semantic template can be, for example: "acquisition" + "insurance A package" + "data", where "acquisition" is the implementation action element, "insurance A package" is the implementation object element, and "data" is the implementation object attribute element.
- Matching the input information to be processed against preset predicted semantic templates to analyze its inner intention is severely limited by the form of the user's input. For example, when the user enters "I'm so hot, do you know the air conditioner", matching against preset predicted semantic templates may parse out only the implementation object element "air conditioner", failing to achieve accurate analysis.
- The machine learning model is trained on a large number of input information samples to be processed, collected in various expressions, with the pre-input information obtained by transforming each sample as its input. The trained machine learning model can then automatically and accurately process new input information.
- The predicted semantic template corresponding to the input information to be processed is obtained with high accuracy and efficiency. For example, after the feature vector data of "I'm so hot, do you know the air conditioner" is input into the machine learning model, the predicted semantic template "open" + "air conditioner" is accurately obtained. The pre-trained machine learning model thus effectively guarantees the efficiency and accuracy of obtaining the predicted semantic template.
- The training method of the machine learning model is:
- Input the pre-input information obtained by converting each input information sample to be processed into the machine learning model, to obtain the predicted semantic template corresponding to each sample;
- Adjust the coefficients of the machine learning model until, for each input information sample to be processed, the predicted semantic template output by the model is consistent with the predicted semantic template previously calibrated for that sample.
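The training loop described above can be sketched roughly as follows; the linear scorer, template set, and samples are illustrative assumptions standing in for the application's unspecified model:

```python
# Illustrative perceptron-style stand-in for the unspecified model:
# adjust coefficients until the template predicted for every sample
# matches the template previously calibrated for that sample.
TEMPLATES = ["open+air_conditioner", "get+insurance_a+data"]

def predict(weights, x):
    """Score each candidate template and return the highest-scoring one."""
    scores = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
    return TEMPLATES[scores.index(max(scores))]

def train(samples, dims=3, lr=0.1, epochs=50):
    weights = [[0.0] * dims for _ in TEMPLATES]
    for _ in range(epochs):
        converged = True
        for x, calibrated in samples:
            predicted = predict(weights, x)
            if predicted != calibrated:        # mismatch: adjust coefficients
                converged = False
                t = TEMPLATES.index(calibrated)
                f = TEMPLATES.index(predicted)
                for i in range(dims):
                    weights[t][i] += lr * x[i]
                    weights[f][i] -= lr * x[i]
        if converged:   # model output now consistent with calibrated templates
            break
    return weights

samples = [([1.0, 0.0, 0.2], "open+air_conditioner"),
           ([0.0, 1.0, 0.8], "get+insurance_a+data")]
weights = train(samples)
```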
- When a machine learning model suitable for each application environment type is trained, inputting the pre-input information into the pre-trained machine learning model to obtain the predicted semantic template corresponding to the input information to be processed includes: inputting the pre-input information into the machine learning model corresponding to the application environment type, to obtain the predicted semantic template corresponding to the input information to be processed.
- the application environment type is the acceptance environment of the input information to be processed.
- the acceptance environment is various environments such as air-conditioning terminals, mobile phones, and televisions.
- Training a machine learning model suited to each application environment type allows a model to be selected according to requirements, ensuring the accuracy of the predicted semantic template obtained for the input information to be processed.
- Alternatively, when a single machine learning model suitable for all application environment types is trained, inputting the pre-input information into the pre-trained machine learning model to obtain the predicted semantic template corresponding to the input information to be processed includes: inputting the pre-input information into the machine learning model applicable to all application environment types, to obtain the predicted semantic template corresponding to the input information to be processed.
- In step S130, the semantic template constraint information at the time the input information to be processed is received is obtained and input, together with the predicted semantic template, into the predicted-semantic-template constraint model, which outputs the constrained semantic template; the semantic template constraint information is real-time environmental information related to the input information.
- the semantic template constraint information is real-time environmental information related to the input information, including at least one of the following three levels of related environmental information.
- Level one: the user's voice and voiceprint information (e.g., the user's voice and audio characteristics when inputting information by voice).
- Level two: the use-environment information of the device receiving the input information (such as counter machines, portable terminals, home machines, etc.).
- Level three: weather-related information at the time the input information is received (such as real-time temperature, whether it is raining, etc.).
- The above three levels of information can be easily obtained through networking or direct reception. The more of the three levels of information that is obtained, the better the constraint effect on the semantic template.
- The predicted semantic template is: "acquisition" + "insurance A package" + "information";
- the constrained semantic template is: <pagination>"acquisition" + "insurance A package" + <transportable>"information".
- The predicted semantic template is: "open" + "air conditioning";
- the constrained semantic template is: <immediately><dry>"open" + "air conditioning".
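The constraint step above might be sketched as simple rules mapping real-time environmental information onto constraint markers; the thresholds and marker names here are assumptions echoing the examples in the text, not the application's actual constraint model:

```python
# Hypothetical rule-based stand-in for the constraint model: real-time
# environmental information adds constraint markers in front of the
# predicted semantic template.
def constrain(template, env):
    markers = []
    if template == "open+air_conditioner":
        if env.get("temperature", 0) >= 30:
            markers.append("<immediately>")  # very hot: act at once
        if env.get("raining", False):
            markers.append("<dry>")          # rainy: dehumidify as well
    return markers + [template]

constrained = constrain("open+air_conditioner",
                        {"temperature": 33, "raining": True})
```

With a mild, dry environment, no markers would be added and the constrained template would equal the predicted template.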
- The predicted semantic template can thus be further constrained into a constrained semantic template by the real-time environment, thereby improving the flexibility of semantic analysis.
- the training method of the predictive semantic template constraint model is:
- In step S140, the input information to be processed is converted into pre-parsed data according to the constrained semantic template.
- The data receiving form of one receiving device is: "implementation object attribute-implementation object-implementation action";
- the data receiving form of another receiving device is: "implementation action@implementation object-implementation object attribute".
- Generating pre-parsed data means obtaining, in advance, pre-parsed data in the form matching the data-receiving form of the device that accepts the input information to be processed. In the subsequent steps, only a language-mode conversion is then required to accurately obtain the semantic analysis result applicable to that device.
- converting the input information to be processed into pre-parsed data according to the post-constrained semantic template includes:
- According to the data receiving requirement corresponding to the application environment type, the input information to be processed is converted into pre-parsed data.
- The data receiving requirement corresponding to the application environment type is, for example: the data receiving form of one accepting device is "<A>implementation object attribute-<B>implementation object-<C>implementation action", and the data receiving form of another accepting device is "<A>implementation action@implementation object<B>-<C>implementation object attribute".
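The device-specific data-receiving forms can be sketched as format templates; the device names and format strings below are hypothetical, modeled on the two examples in the text:

```python
# Hypothetical per-device format templates echoing the two example
# data-receiving forms.
DEVICE_FORMATS = {
    "device_1": "<A>{attr}-<B>{obj}-<C>{action}",
    "device_2": "<A>{action}@{obj}<B>-<C>{attr}",
}

def to_pre_parsed(device, action, obj, attr):
    """Fill the accepting device's data-receiving form with the
    constrained template's implementation elements."""
    return DEVICE_FORMATS[device].format(action=action, obj=obj, attr=attr)

pre_parsed = to_pre_parsed("device_2", "get", "insurance_a_package", "data")
```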
- This embodiment can accurately convert the input information to be processed into pre-parsed data according to different application environment types.
- In step S150, the semantic analysis result of the input information to be processed is obtained according to the pre-parsed data.
- the semantic analysis result is data that can be recognized by the device that accepts the input information to be processed, such as machine language.
- The pre-parsed data is data that has already been converted to the corresponding format requirements, and the semantic analysis result of the input information to be processed can be obtained by performing a simple language-instruction conversion on it. For example, "<pagination>Get@Insurance A package-<transportable>data" is converted into a form the insurance purchase app can recognize, that is, the semantic analysis result, according to the app's pre-specified instruction data format, such as "(fy)Gain@insur-a(ts)pk".
- The semantic analysis result of the input information to be processed may be obtained by pre-storing, in a database, the parsed data blocks of various pre-parsed data together with their corresponding analysis results. The analysis result of each parsed data block can then be queried according to this correspondence, and the results composed into the semantic analysis result of the entire input information to be processed.
- The pre-parsed data is "<pagination>Get@Insurance A package-<transportable>data";
- the parsed data blocks are <pagination>"Get", "Insurance A package", <transportable>"data";
- the corresponding analysis results are (fy)"Gain", "insur-a" and (ts)"pk".
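The database lookup described above can be sketched as a mapping from parsed data blocks to pre-stored analysis results; the table contents, and joining the results with "@", are assumptions based on the example:

```python
# Hypothetical correspondence table: parsed data block -> analysis result.
RESULT_TABLE = {
    "<pagination>Get": "(fy)Gain",
    "Insurance A package": "insur-a",
    "<transportable>data": "(ts)pk",
}

def parse_blocks(blocks):
    """Query each block's pre-stored result and compose the semantic
    analysis result of the whole input information to be processed."""
    return "@".join(RESULT_TABLE[b] for b in blocks)

result = parse_blocks(["<pagination>Get", "Insurance A package",
                       "<transportable>data"])
```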
- Obtaining the semantic analysis result of the input information to be processed according to the pre-parsed data includes:
- The sub-semantic analysis results are combined into the semantic analysis result of the input information to be processed.
- The pre-parsed data is "<pagination>Get@Insurance A package-<transportable>data";
- the sub-pre-parsed data are respectively <pagination>"Get", "Insurance A package", <transportable>"data";
- the corresponding sub-semantic analysis results are (fy)"Gain", "insur-a" and (ts)"pk".
- The method also includes:
- Step S310: obtain the semantic blocks that make up the predicted semantic template;
- Step S320: judge whether the semantic blocks composing the predicted semantic template lack a necessary semantic block;
- Step S330: if a necessary semantic block is missing, issue to the user a necessary-input-information supplement instruction corresponding to the type of the missing semantic block.
- A necessary semantic block is a semantic block that is indispensable for the predicted semantic template to express the inner intention of the input information to be processed. For example, if the semantic block "insurance A package" is missing from the predicted semantic template "acquisition" + "insurance A package" + "information", the implementation object of the corresponding input information to be processed cannot be known.
- When the predicted semantic template output by the machine learning model lacks a necessary semantic block, the originally input information to be processed lacks necessary input information; the necessary input information can then be accurately obtained through a supplement instruction, ensuring the completeness and practicability of the semantic analysis result.
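Steps S310 to S330 can be sketched as a slot check; the slot names and prompt wording below are hypothetical:

```python
# Assumed required slot types; the actual necessary semantic blocks
# would depend on the application environment.
NECESSARY_SLOTS = {"action", "object"}

def check_template(blocks):
    """blocks maps slot type -> semantic block (or None if absent).
    Returns a supplement instruction naming the missing necessary
    block types, or None when the template is complete."""
    missing = [s for s in NECESSARY_SLOTS if not blocks.get(s)]
    if missing:
        return "Please supplement: " + ", ".join(sorted(missing))
    return None

prompt = check_template({"action": "get", "object": None, "attr": "data"})
```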
- the method further includes:
- The necessary input information to be processed is converted, together with the previously received input information to be processed, into pre-input information;
- according to the predicted semantic template, the necessary input information to be processed and the previously received input information to be processed are converted into pre-parsed data.
- the application also provides a semantic analysis device based on machine learning.
- The semantic analysis device based on machine learning may include a pre-processing module 410, a template analysis module 420, a template constraint module 430, a conversion module 440, and an acquisition module 450. Among them:
- the pre-processing module 410 may be used to convert the input information to be processed into pre-input information when the input information to be processed is received;
- the template analysis module 420 may be used to input the pre-input information into a pre-trained machine learning model to obtain a predicted semantic template corresponding to the input information to be processed;
- The template constraint module 430 may be used to obtain the semantic template constraint information when the input information to be processed is received, input it together with the predicted semantic template into the predicted-semantic-template constraint model, and output the constrained semantic template, where the semantic template constraint information is real-time environmental information related to the input information;
- the conversion module 440 may be configured to convert the input information to be processed into pre-parsed data according to the constrained semantic template;
- the acquisition module 450 may be configured to obtain the semantic analysis result of the input information to be processed according to the pre-parsed data.
- Although modules or units of the device for action execution are mentioned in the above detailed description, this division is not mandatory.
- the features and functions of two or more modules or units described above may be embodied in one module or unit.
- the features and functions of a module or unit described above can be further divided into multiple modules or units to be embodied.
- The example embodiments described here can be implemented by software, or by software combined with the necessary hardware. The technical solution according to the embodiments of the present application can therefore be embodied in the form of a software product, which can be stored in a non-volatile storage medium (such as a CD-ROM, USB flash drive, or removable hard disk) or on a network, and includes several instructions that cause a computing device (such as a personal computer, server, mobile terminal, or network device) to execute the method according to the embodiments of the present application.
- an electronic device capable of implementing the above method is also provided.
- the electronic device 500 according to this embodiment of the present invention will be described below with reference to FIG. 5.
- the electronic device 500 shown in FIG. 5 is only an example, and should not bring any limitation to the function and application scope of the embodiment of the present invention.
- the electronic device 500 is represented in the form of a general-purpose computing device.
- the components of the electronic device 500 may include, but are not limited to: the aforementioned at least one processing unit 510, the aforementioned at least one storage unit 520, and a bus 530 connecting different system components (including the storage unit 520 and the processing unit 510).
- The storage unit stores program code executable by the processing unit 510, so that the processing unit 510 executes the steps of the various exemplary embodiments described in the "Exemplary Method" section of this specification.
- For example, the processing unit 510 may perform the steps shown in Fig. 1:
- Step S110: when receiving input information to be processed, convert the input information to be processed into pre-input information;
- Step S120: input the pre-input information into the pre-trained machine learning model to obtain the predicted semantic template corresponding to the input information to be processed;
- Step S130: obtain the semantic template constraint information when the input information to be processed is received, input it together with the predicted semantic template into the predicted-semantic-template constraint model, and output the constrained semantic template, where the semantic template constraint information is real-time environmental information related to the input information;
- Step S140: convert the input information to be processed into pre-parsed data according to the constrained semantic template;
- Step S150: obtain the semantic analysis result of the input information to be processed according to the pre-parsed data.
- the storage unit 520 may include a readable medium in the form of a volatile storage unit, such as a random access storage unit (RAM) 5201 and/or a cache storage unit 5202, and may further include a read-only storage unit (ROM) 5203.
- the storage unit 520 may also include a program/utility tool 5204 having a set (at least one) program module 5205.
- The program module 5205 includes, but is not limited to: an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination of them, may include an implementation of a network environment.
- The bus 530 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, a graphics acceleration port, a processing unit, or a local bus using any of a variety of bus structures.
- The electronic device 500 can also communicate with one or more external devices 700 (such as keyboards, pointing devices, Bluetooth devices, etc.), with one or more devices that enable users to interact with the electronic device 500, and/or with any device (such as a router or modem) that enables the electronic device 500 to communicate with one or more other computing devices. Such communication can be performed through an input/output (I/O) interface 550.
- the electronic device 500 may also communicate with one or more networks (for example, a local area network (LAN), a wide area network (WAN), and/or a public network, such as the Internet) through the network adapter 560. As shown in the figure, the network adapter 560 communicates with other modules of the electronic device 500 through the bus 530.
- The example embodiments described here can be implemented by software, or by software combined with the necessary hardware. The technical solution according to the embodiments of the present application can therefore be embodied in the form of a software product, which can be stored in a non-volatile storage medium (such as a CD-ROM, USB flash drive, or removable hard disk) or on a network, and includes several instructions that cause a computing device (such as a personal computer, server, terminal device, or network device) to execute the method according to the embodiments of the present application.
- A computer non-volatile readable storage medium is also provided, on which is stored a program product capable of implementing the above method of this specification.
- Various aspects of the present invention may also be implemented in the form of a program product, which includes program code; when the program product runs on a terminal device, the program code causes the terminal device to execute the steps according to the various exemplary embodiments of the present invention described in the above-mentioned "Exemplary Method" section of this specification.
- A program product 600 for implementing the above method according to an embodiment of the present invention can adopt a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer.
- the program product of the present invention is not limited to this.
- the readable storage medium can be any tangible medium that contains or stores a program, and the program can be used by or combined with an instruction execution system, device, or device.
- the program product can use any combination of one or more readable media.
- the readable medium may be a readable signal medium or a readable storage medium.
- The readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- the computer-readable signal medium may include a data signal propagated in baseband or as a part of a carrier wave, and readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
- the readable signal medium may also be any readable medium other than a readable storage medium, and the readable medium may send, propagate, or transmit a program for use by or in combination with the instruction execution system, apparatus, or device.
- the program code contained on the readable medium can be transmitted by any suitable medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the above.
- the program code used to perform the operations of the present invention can be written in any combination of one or more programming languages.
- The programming languages include object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- The program code can be executed entirely on the client computing device, partly on the client device, as an independent software package, partly on the client computing device and partly on a remote computing device, or entirely on a remote computing device or server.
- The remote computing device can be connected to a client computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computing device (for example, through the Internet using an Internet service provider).
Abstract
A machine learning-based semantic parsing method and apparatus, an electronic device, and a computer non-volatile readable storage medium. The method comprises: upon receiving input information to be processed, converting the input information to be processed into pre-input information (S110); feeding the pre-input information into a pre-trained machine learning model to obtain a predicted semantic template corresponding to the input information to be processed (S120); obtaining semantic-template constraint information and feeding it, together with the predicted semantic template, into a semantic-template constraint model, which outputs a constrained semantic template (S130); converting, according to the constrained semantic template, the input information to be processed into pre-parsed data (S140); and obtaining, according to the pre-parsed data, a semantic parsing result of the input information to be processed (S150). The method can improve the accuracy and efficiency of semantic parsing.
Description
This application claims priority to Chinese patent application No. 201910879338.1, filed on September 18, 2019 and entitled "Machine learning-based semantic parsing method, apparatus, medium, and electronic device", the entire contents of which are incorporated herein by reference.

This application relates to the technical field of machine learning applications, and in particular to a machine learning-based semantic parsing method and apparatus, an electronic device, and a computer non-volatile readable storage medium.

Semantic parsing is the process of extracting, from a piece of information output by a user, the semantic parsing result that the user intends to convey to some target object.

At present, when performing semantic parsing, the initial information output by different users for the same semantic parsing result usually differs, and the same user may phrase the same intent in various ways at different times. For example, when an air conditioner needs to be turned on, different users, or even the same user, may use many different expressions. Parsing the initial information output by a user can therefore yield multiple candidate semantic parsing results, and efficiency is difficult to guarantee.

Summary of the Invention

The inventors of the present application have realized that the prior art, which parses user semantics with a fixed parsing method, suffers from low efficiency and low accuracy.

Solution to the Problem

To solve the above technical problem, one object of the present application is to provide a machine learning-based semantic parsing method and apparatus, an electronic device, and a computer non-volatile readable storage medium.

The technical solutions adopted by the present application are as follows:

In one aspect, a machine learning-based semantic parsing method comprises: upon receiving input information to be processed, converting the input information to be processed into pre-input information; feeding the pre-input information into a pre-trained machine learning model to obtain a predicted semantic template corresponding to the input information to be processed; obtaining semantic-template constraint information at the time the input information to be processed is received, feeding it together with the predicted semantic template into a semantic-template constraint model, and outputting a constrained semantic template, the semantic-template constraint information being real-time environment information related to the input information; converting, according to the constrained semantic template, the input information to be processed into pre-parsed data; and obtaining, according to the pre-parsed data, a semantic parsing result of the input information to be processed.

In another aspect, a machine learning-based semantic parsing apparatus comprises: a preprocessing module configured to, upon receiving input information to be processed, convert the input information to be processed into pre-input information; a template parsing module configured to feed the pre-input information into a pre-trained machine learning model to obtain a predicted semantic template corresponding to the input information to be processed; a template constraint module configured to obtain semantic-template constraint information at the time the input information to be processed is received, feed it together with the predicted semantic template into a semantic-template constraint model, and output a constrained semantic template, the semantic-template constraint information being real-time environment information related to the input information; a conversion module configured to convert, according to the constrained semantic template, the input information to be processed into pre-parsed data; and an obtaining module configured to obtain, according to the pre-parsed data, a semantic parsing result of the input information to be processed.

In another aspect, an electronic device comprises: a processing unit; and a storage unit for storing a machine learning-based semantic parsing program of the processing unit, wherein the processing unit is configured to perform the above machine learning-based semantic parsing method by executing the machine learning-based semantic parsing program.

In another aspect, a computer non-volatile readable storage medium stores a machine learning-based semantic parsing program which, when executed by a processing unit, implements the above machine learning-based semantic parsing method.

In the above technical solutions, a predicted semantic template is parsed from various kinds of input information based on a preset machine learning model, thereby effectively ensuring the accuracy and efficiency of semantic parsing.

It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present application.
Advantageous Effects of the Invention

Brief Description of the Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the specification, serve to explain the principles of the present application.

Fig. 1 schematically shows a flowchart of a machine learning-based semantic parsing method.

Fig. 2 schematically shows a flowchart of a method for obtaining pre-input information.

Fig. 3 schematically shows a flowchart of a method for issuing an input-information supplement instruction.

Fig. 4 schematically shows a block diagram of a machine learning-based semantic parsing apparatus.

Fig. 5 schematically shows an example block diagram of an electronic device for implementing the above machine learning-based semantic parsing method.

Fig. 6 schematically shows a computer non-volatile readable storage medium for implementing the above machine learning-based semantic parsing method.

The above drawings show explicit embodiments of the present application, which are described in more detail below. These drawings and the accompanying text are not intended to limit the scope of the inventive concept in any way, but rather to illustrate the concept of the present application to those skilled in the art by reference to specific embodiments.
Embodiments of the Invention

Example embodiments will now be described more fully with reference to the accompanying drawings. However, the example embodiments can be implemented in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this application will be thorough and complete, and will fully convey the concepts of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a full understanding of the embodiments of the present application. However, those skilled in the art will appreciate that the technical solutions of the present application may be practiced while omitting one or more of the specific details, or other methods, components, apparatuses, steps, and so on may be employed. In other instances, well-known technical solutions are not shown or described in detail to avoid obscuring aspects of the present application.

Furthermore, the drawings are merely schematic illustrations of the present application and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and their repeated description will be omitted. Some of the block diagrams shown in the drawings are functional entities and do not necessarily correspond to physically or logically independent entities. These functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor apparatuses and/or microcontroller apparatuses.

This example embodiment first provides a machine learning-based semantic parsing method. The method may run on a server, a server cluster, a cloud server, or the like; of course, those skilled in the art may also run the method of the present invention on other platforms as required, and this is not specially limited in this exemplary embodiment. Referring to Fig. 1, the machine learning-based semantic parsing method may include the following steps:

Step S110: upon receiving input information to be processed, converting the input information to be processed into pre-input information;

Step S120: feeding the pre-input information into a pre-trained machine learning model to obtain a predicted semantic template corresponding to the input information to be processed;

Step S130: obtaining semantic-template constraint information at the time the input information to be processed is received, feeding it together with the predicted semantic template into a semantic-template constraint model, and outputting a constrained semantic template, the semantic-template constraint information being real-time environment information related to the input information;

Step S140: converting, according to the constrained semantic template, the input information to be processed into pre-parsed data;

Step S150: obtaining, according to the pre-parsed data, a semantic parsing result of the input information to be processed.
In the above machine learning-based semantic parsing method, first, upon receiving input information to be processed, the input information to be processed is converted into pre-input information; in this way, input information to be processed in various forms can be converted into pre-input information that can be fed into a machine learning model. Then, by feeding the pre-input information into a pre-trained machine learning model, the predicted semantic template corresponding to the input information to be processed can be obtained accurately and efficiently. Next, semantic-template constraint information at the time the input information to be processed is received is obtained and fed together with the predicted semantic template into a semantic-template constraint model, which outputs a constrained semantic template, the semantic-template constraint information being real-time environment information related to the input information; in this way, the predicted semantic template is constrained by the real-time environment information, further ensuring the accuracy of the semantic template. Then, according to the constrained semantic template, the input information to be processed is converted into pre-parsed data; the input information to be processed can thus be parsed on the basis of the constrained semantic template, yielding pre-parsed data that satisfies the acceptance requirements of the accepting object. Finally, according to the pre-parsed data, the semantic parsing result of the input information to be processed is obtained, that is, a result that can be directly accepted by the accepting object. In this way, a predicted semantic template is parsed from various kinds of input information based on a preset machine learning model, effectively ensuring the accuracy and efficiency of semantic parsing.

The steps of the above machine learning-based semantic parsing method in this example embodiment will now be explained and described in detail with reference to the accompanying drawings.
In step S110, upon receiving input information to be processed, the input information to be processed is converted into pre-input information.

In this example embodiment, the input information to be processed is information in which, in some application environment, a user has expressed an inner intent. In a concrete application, the inner intent contained in the current input information needs to be parsed out, and that intent must in turn be parsed into a result that can be acted upon in the specific application environment, that is, the semantic parsing result. For example, in an insurance-purchasing application, when a user who wants to buy an insurance product enters "I would like to learn about insurance package A", then "I would like to learn about insurance package A" is the input information to be processed; it needs to be parsed into a semantic parsing result that the insurance app can recognize, such as "get-the Insurance a package-data". After the input information to be processed is received, semantic parsing is performed in the subsequent steps. Converting the input information to be processed into pre-input information facilitates computation and parsing in the subsequent steps and improves efficiency, while ensuring that the input information is represented accurately. The pre-input information may, for example, be obtained by converting the input information to be processed into vector form.

In one implementation of this example, when the input information to be processed is in text form, converting the input information to be processed into pre-input information upon receiving it comprises:

looking up a character-word vector dictionary according to the text of the input information to be processed, to obtain a character vector for each character in the text;

concatenating the character vectors into a character-vector string as the pre-input information.

The character-word vector dictionary stores the vectors corresponding to various characters and words; the character vector of each character in the text of the input information to be processed can be looked up, and the character vectors concatenated into a character-vector string serve as pre-input information usable for computation by the machine learning model.
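A minimal sketch of this lookup-and-concatenate step. The dictionary contents, vector dimension, and function name below are illustrative assumptions, not part of the application:

```python
# Hypothetical character-word vector dictionary: each character maps to a
# fixed-dimension vector (3-dimensional here for brevity).
CHAR_VECTORS = {
    "我": [0.1, 0.2, 0.3],
    "好": [0.4, 0.1, 0.0],
    "热": [0.9, 0.5, 0.2],
}
UNK = [0.0, 0.0, 0.0]  # fallback for characters missing from the dictionary

def to_pre_input(text):
    """Convert text-form input into pre-input information: look up the
    vector of each character and concatenate them, in text order, into
    one flat character-vector string."""
    vector_string = []
    for ch in text:
        vector_string.extend(CHAR_VECTORS.get(ch, UNK))
    return vector_string

print(to_pre_input("我好热"))  # nine numbers: three 3-d vectors concatenated
```

The same structure supports the random-order variant described below by shuffling the per-character vectors before concatenation.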
In one implementation of this example, concatenating the character vectors into a character-vector string as the pre-input information comprises:

concatenating the character vectors into a character-vector string in the order in which the corresponding characters appear in the text, as the pre-input information.

In another implementation of this example, concatenating the character vectors into a character-vector string as the pre-input information comprises:

concatenating the character vectors into a character-vector string in random order, as the pre-input information.

In one implementation of this example, when the input information to be processed is in text form, converting the input information to be processed into pre-input information upon receiving it comprises:

segmenting the text of the input information to be processed into the characters and words that compose the text;

looking up, in the character-word vector dictionary, the character vector of each character and the word vector of each word composing the text;

concatenating the character vectors and word vectors into a vector string as the pre-input information.

Text segmentation uses an existing tokenizer to split, for example, "我想了解儿童保险" ("I want to learn about children's insurance") into "我" ("I"), "想" ("want"), "了解" ("learn about"), "儿童" ("children"), "保险" ("insurance"). Looking up the character vector of each character and the word vector of each word in the character-word vector dictionary effectively preserves the original semantic content of the input information to be processed.
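A sketch of the segmentation step, using a toy greedy longest-match segmenter over a hypothetical vocabulary; a production system would use an off-the-shelf Chinese tokenizer instead:

```python
# Toy greedy longest-match segmenter: at each position, prefer the longest
# vocabulary match, falling back to a single character. The vocabulary is
# an illustrative assumption.
VOCAB = {"我", "想", "了解", "儿童", "保险"}

def segment(text, max_len=4):
    """Split text into the words and characters that compose it."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in VOCAB or j == i + 1:
                tokens.append(text[i:j])
                i = j
                break
    return tokens

print(segment("我想了解儿童保险"))  # ['我', '想', '了解', '儿童', '保险']
```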
In one implementation of this example, referring to Fig. 2, if the received input information to be processed is in non-text form,

converting the input information to be processed into pre-input information upon receiving it comprises:

Step S210: converting the non-text-form input information to be processed into text form;

Step S220: converting the text-form input information to be processed into pre-input information.

Non-text-form input information to be processed is, for example, voice information input by the user. Speech recognition can convert the voice information into text-form input information to be processed, making it convenient to handle non-text-form input such as speech.
In step S120, the pre-input information is fed into a pre-trained machine learning model to obtain the predicted semantic template corresponding to the input information to be processed.

In this example embodiment, a predicted semantic template corresponds to input information to be processed in a given application environment and comprises the implementation elements required to carry out the inner intent expressed by that input information. In one application environment, a predicted semantic template may be, for example: "get" + "insurance package A" + "data", where "get" is the action element, "insurance package A" is the object element, and "data" is the object-attribute element.

In a given application environment, users expressing the same intent may phrase the input information to be processed in a great many individualized ways. The related-art approach of presetting semantic templates and matching the user's input information against them to parse the inner intent is severely limited by the form of the user's input. For example, when a user inputs "我好热啊，空调你知道吗" ("I'm so hot — air conditioner, you know?"), matching against preset semantic templates would only parse out the object element "air conditioner", failing to achieve accurate parsing.

In this embodiment, the pre-input information obtained by converting the input information to be processed is fed into a machine learning model trained on a large collection of samples of input information expressed in a variety of ways, so that the predicted semantic template corresponding to the input can be obtained automatically and accurately, with high accuracy and high efficiency. For example, after the feature-vector data of "我好热啊，空调你知道吗" is fed into the machine learning model, the predicted semantic template "turn on" + "air conditioner" can be obtained accurately. A pre-trained machine learning model effectively guarantees the efficiency and accuracy of obtaining the predicted semantic template.
In one implementation of this example, the training method of the machine learning model is:

collecting a sample set of input information to be processed, each sample being labeled in advance with its corresponding predicted semantic template;

converting each sample of input information to be processed into pre-input information;

feeding the pre-input information converted from each sample into the machine learning model to obtain the predicted semantic template corresponding to each sample;

if the predicted semantic template output by the machine learning model for a sample is inconsistent with the template labeled in advance for that sample, adjusting the coefficients of the machine learning model until the predicted semantic template output for the sample is consistent with the labeled template;

if the predicted semantic templates output by the machine learning model for all samples are consistent with the templates labeled in advance for each sample, ending the training.
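The adjust-until-consistent training procedure above can be sketched as follows. The samples, template labels, and linear scorer are illustrative assumptions standing in for the unspecified model:

```python
# Perceptron-style sketch of the described training loop: a linear scorer
# over character features has its coefficients adjusted on every mismatch,
# until the template predicted for each sample matches its label.
from collections import defaultdict

SAMPLES = [
    ("我好热啊，空调你知道吗", "turn_on+air_conditioner"),
    ("帮我打开空调",           "turn_on+air_conditioner"),
    ("我想了解保险A套餐",       "get+insurance_a+data"),
]
TEMPLATES = sorted({label for _, label in SAMPLES})
weights = defaultdict(float)  # (character, template) -> coefficient

def predict(text):
    return max(TEMPLATES, key=lambda t: sum(weights[(ch, t)] for ch in text))

for _ in range(100):  # repeat until every output matches its label
    mismatched = False
    for text, label in SAMPLES:
        guess = predict(text)
        if guess != label:  # adjust coefficients on a mismatch
            mismatched = True
            for ch in text:
                weights[(ch, label)] += 1.0
                weights[(ch, guess)] -= 1.0
    if not mismatched:
        break  # training ends: all samples consistent with their labels

assert all(predict(text) == label for text, label in SAMPLES)
```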
In one implementation of this example, a machine learning model suited to each application environment type is trained according to the application environment type of each piece of input information to be processed. Feeding the pre-input information into a pre-trained machine learning model to obtain the predicted semantic template corresponding to the input information to be processed then comprises:

obtaining the application environment type corresponding to the pre-input information;

looking up, according to the application environment type, the machine learning model corresponding to that application environment type;

feeding the pre-input information into the machine learning model corresponding to the application environment type, to obtain the predicted semantic template corresponding to the input information to be processed.

The application environment type is the accepting environment of the input information to be processed, for example an air-conditioner terminal, a mobile phone, a television, or other environments. Training a machine learning model suited to each application environment type allows a model to be selected as required, ensuring the accuracy of the predicted semantic template obtained for the input information to be processed.

In another implementation of this example, a single machine learning model suited to all the application environment types is trained according to the application environment types of all the input information to be processed; feeding the pre-input information into the pre-trained machine learning model to obtain the predicted semantic template corresponding to the input information to be processed then comprises:

feeding the pre-input information into the machine learning model suited to all the application environment types, to obtain the predicted semantic template corresponding to the input information to be processed.
In step S130, semantic-template constraint information at the time the input information to be processed is received is obtained and fed together with the predicted semantic template into the semantic-template constraint model, which outputs a constrained semantic template, the semantic-template constraint information being real-time environment information related to the input information.

The semantic-template constraint information is real-time environment information related to the input information and includes at least one of the following three levels of related environment information. Level 1: the user's voiceprint information (for example, the voice audio when the user inputs information by speech). Level 2: usage-environment information of the accepting device associated with the input information (for example, a counter machine, a portable terminal, or a home appliance). Level 3: weather-related information at the time the input information is received (for example, real-time temperature or whether it is raining). All three levels of information can conveniently be obtained over the network or received directly; the more of the three levels that is obtained, the better the constraining effect on the semantic template. For example, when the predicted semantic template is "get" + "insurance package A" + "data", the constrained semantic template may be <paginated>"get" + "insurance package A" + <transmittable>"data"; or, when the predicted semantic template is "turn on" + "air conditioner", the constrained semantic template may be <immediately><dry>"turn on" + "air conditioner".

In this way, on top of the prediction made from the received input information, the predicted semantic template can be further constrained into a constrained semantic template in harmony with the real-time environment, improving the flexibility of semantic parsing.
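To illustrate the constraint step's input and output shape, here is a rule-based stand-in for the (machine-learned) constraint model; the tag names mirror the examples in the text, while the decision rules themselves are hypothetical:

```python
# Rule-based stand-in for the semantic-template constraint model: it
# annotates the predicted template with tags derived from real-time
# environment information (device type, weather).
def constrain(template, env):
    """template: list of elements, e.g. ["turn on", "air conditioner"];
    env: dict of real-time environment information."""
    tags = []
    if env.get("device") == "counter_machine":
        tags.append("<paginated>")    # counter screens page their output
    if env.get("humidity", 100) < 30:
        tags.append("<dry>")          # dry weather -> dry-mode annotation
    if env.get("temperature", 0) > 30:
        tags.append("<immediately>")  # hot weather -> act at once
    return tags + template

print(constrain(["turn on", "air conditioner"],
                {"temperature": 34, "humidity": 20}))
# ['<dry>', '<immediately>', 'turn on', 'air conditioner']
```

In the application itself this mapping is learned by the constraint model described next, not hand-written.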
In one implementation of this example, the training method of the semantic-template constraint model is:

collecting a sample set of semantic-template constraint information and predicted semantic templates, each sample being labeled in advance with its corresponding constrained semantic template;

feeding the samples into a machine learning model to obtain the constrained semantic template predicted for each sample;

if the constrained semantic template output by the machine learning model for a sample is inconsistent with the constrained semantic template labeled in advance for that sample, adjusting the coefficients of the machine learning model until the constrained semantic template output for the sample is consistent with the labeled template;

if the constrained semantic templates output by the machine learning model for all samples are consistent with the templates labeled in advance for each sample, ending the training.
In step S140, the input information to be processed is converted into pre-parsed data according to the constrained semantic template.

In this example embodiment, the devices that accept the input information to be processed usually receive data in many different forms. For example, one accepting device receives data in the form "object attribute - object - action", while another receives data in the form "action @ object - object attribute".

Generating the pre-parsed data means producing, in advance, data in the form matched to how the device accepting the input information receives data; the subsequent steps then only need to perform a language-pattern conversion to obtain accurately the semantic parsing result suited to the device handling the input information.

In one implementation of this example, converting the input information to be processed into pre-parsed data according to the constrained semantic template comprises:

obtaining the data-reception requirement corresponding to the application environment type of the input information to be processed;

converting the input information to be processed into pre-parsed data according to the constrained semantic template and in accordance with the data-reception requirement.

A data-reception requirement corresponding to an application environment type is, for example, that one accepting device receives data in the form "<A>object attribute-<B>object-<C>action" while another receives data in the form "<A>action@object<B>-<C>object attribute". This embodiment can accurately convert the input information to be processed into pre-parsed data according to the different application environment types.
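A sketch of this format-dependent conversion; the environment-type names and format strings are hypothetical, mirroring the two reception forms given above:

```python
# The same constrained template elements are laid out according to the
# data-reception requirement of the accepting device's environment type.
RECEPTION_FORMATS = {
    "device_1": "<A>{attr}-<B>{obj}-<C>{action}",
    "device_2": "<A>{action}@{obj}<B>-<C>{attr}",
}

def to_pre_parsed(elements, env_type):
    """elements: dict with 'action', 'obj', and 'attr' taken from the
    constrained semantic template."""
    return RECEPTION_FORMATS[env_type].format(**elements)

elems = {"action": "get", "obj": "insurance package A", "attr": "data"}
print(to_pre_parsed(elems, "device_1"))  # <A>data-<B>insurance package A-<C>get
print(to_pre_parsed(elems, "device_2"))  # <A>get@insurance package A<B>-<C>data
```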
In step S150, the semantic parsing result of the input information to be processed is obtained according to the pre-parsed data.

In this example embodiment, the semantic parsing result is data that the device accepting the input information can recognize, such as machine language. The pre-parsed data has already been converted into the required format, so a simple language-instruction conversion of the pre-parsed data yields the semantic parsing result of the input information to be processed. For example, "<paginated>get@insurance package A-<transmittable>data" is converted into a result that the insurance-purchasing app can recognize, that is, a semantic parsing result in the data format required in advance by the app's instructions, such as "(fy)Gain@insur-a(ts)pk". One way to obtain the semantic parsing result of the input information to be processed is to store, in advance, the parse-data blocks of the various pre-parsed data together with their corresponding parse results in a database according to their correspondence; the corresponding parse result can then be queried from each parse-data block, and the results assembled into the semantic parsing result of the whole input information to be processed. For instance, if the pre-parsed data is "<paginated>get@insurance package A-<transmittable>data", the parse-data blocks are <paginated>"get", "insurance package A", and <transmittable>"data", and the corresponding parse results are (fy)"Gain", "insur-a", and (ts)"pk".

In one implementation of this example, obtaining the semantic parsing result of the input information to be processed according to the pre-parsed data comprises:

obtaining each sub-pre-parsed datum composing the pre-parsed data;

looking up, in a database, the sub-semantic-parsing result corresponding to each sub-pre-parsed datum;

combining the sub-semantic-parsing results into the semantic parsing result of the input information to be processed.

For example, if the pre-parsed data is "<paginated>get@insurance package A-<transmittable>data", the sub-pre-parsed data are <paginated>"get", "insurance package A", and <transmittable>"data", and the corresponding sub-semantic-parsing results are (fy)"Gain", "insur-a", and (ts)"pk".
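A sketch of the lookup-and-combine step; an in-memory dict stands in for the database, with entries mirroring the "(fy)Gain@insur-a(ts)pk" example above:

```python
# Stored correspondence between sub-pre-parsed data blocks and their
# sub-semantic-parsing results; a real system would query a database.
SUB_RESULTS = {
    "<paginated>get":      "(fy)Gain",
    "insurance package A": "insur-a",
    "<transmittable>data": "(ts)pk",
}

def to_result(sub_blocks):
    """Look up each sub-pre-parsed datum and combine the sub-results; the
    '@' between action and object follows the example instruction format."""
    action, obj, attr = (SUB_RESULTS[b] for b in sub_blocks)
    return f"{action}@{obj}{attr}"

blocks = ["<paginated>get", "insurance package A", "<transmittable>data"]
print(to_result(blocks))  # (fy)Gain@insur-a(ts)pk
```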
In one implementation of this example, referring to Fig. 3, after the pre-input information is fed into the pre-trained machine learning model to obtain the predicted semantic template corresponding to the input information to be processed, the method further comprises:

Step S310: obtaining the semantic blocks composing the predicted semantic template;

Step S320: determining whether any necessary semantic block is missing from the semantic blocks composing the predicted semantic template;

Step S330: if a necessary semantic block is missing, issuing to the user a supplement instruction for the necessary input information corresponding to the type of the missing necessary semantic block.

If the predicted semantic template is "get" + "insurance package A" + "data", its semantic blocks are "get", "insurance package A", and "data". A necessary semantic block is one that is indispensable for the predicted semantic template to express the inner intent of the input information to be processed. For example, if the semantic block "insurance package A" were missing from the predicted semantic template "get" + "insurance package A" + "data", the object of the input information corresponding to the template would be unknown. In this embodiment, a predicted semantic template output by the machine learning model that lacks a necessary semantic block indicates that the originally input information itself lacked necessary input information; through the supplement instruction, the necessary input information can be obtained accurately, thereby ensuring the completeness and practicality of the semantic parsing result.
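The completeness check can be sketched as follows; the block-type names and the wording of the supplement instruction are illustrative assumptions:

```python
# Sketch of the Fig. 3 flow: a template missing a necessary semantic block
# triggers a supplement instruction naming the missing block type.
NECESSARY_TYPES = {"action", "object"}  # blocks the template cannot lack

def check_template(blocks):
    """blocks: dict mapping block type -> semantic block, e.g.
    {"action": "get", "object": "insurance package A"}."""
    missing = NECESSARY_TYPES - set(blocks)
    if missing:
        # issue one supplement instruction per missing necessary type
        return [f"please supplement the {t} of your request"
                for t in sorted(missing)]
    return []  # template is complete; no instruction needed

print(check_template({"action": "get"}))
# ['please supplement the object of your request']
print(check_template({"action": "get", "object": "insurance package A"}))  # []
```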
In one implementation of this example, after the supplement instruction for the necessary input information corresponding to the type of the missing necessary semantic block is issued to the user, the method further comprises:

upon receiving the supplementary necessary input information corresponding to the type of the missing necessary semantic block, converting the necessary input information together with the previous input information to be processed into pre-input information;

feeding the pre-input information into the pre-trained machine learning model to obtain the predicted semantic template corresponding to the necessary input information and the previous input information to be processed;

converting the necessary input information and the previous input information to be processed into pre-parsed data according to the predicted semantic template;

obtaining, according to the pre-parsed data, the semantic parsing result of the input information to be processed.
The present application also provides a machine learning-based semantic parsing apparatus. Referring to Fig. 4, the apparatus may include a preprocessing module 410, a template parsing module 420, a template constraint module 430, a conversion module 440, and an obtaining module 450, where:

the preprocessing module 410 may be configured to, upon receiving input information to be processed, convert the input information to be processed into pre-input information;

the template parsing module 420 may be configured to feed the pre-input information into a pre-trained machine learning model to obtain the predicted semantic template corresponding to the input information to be processed;

the template constraint module 430 may be configured to obtain semantic-template constraint information at the time the input information to be processed is received, feed it together with the predicted semantic template into the semantic-template constraint model, and output a constrained semantic template, the semantic-template constraint information being real-time environment information related to the input information;

the conversion module 440 may be configured to convert the input information to be processed into pre-parsed data according to the constrained semantic template;

the obtaining module 450 may be configured to obtain the semantic parsing result of the input information to be processed according to the pre-parsed data.

The specific details of each module of the above machine learning-based semantic parsing apparatus have already been described in detail in the corresponding machine learning-based semantic parsing method and are therefore not repeated here.

It should be noted that although several modules or units of a device for action execution are mentioned in the above detailed description, such division is not mandatory. In fact, according to embodiments of the present application, the features and functions of two or more modules or units described above may be embodied in one module or unit; conversely, the features and functions of one module or unit described above may be further divided and embodied by multiple modules or units.

Furthermore, although the steps of the method of the present application are depicted in a particular order in the drawings, this does not require or imply that the steps must be performed in that particular order, or that all of the steps shown must be performed, to achieve the desired result. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps, and so on.

From the description of the above embodiments, those skilled in the art will readily understand that the example embodiments described here can be implemented in software, or in software combined with necessary hardware. Thus, the technical solutions according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (such as a CD-ROM, USB flash drive, or portable hard disk) or on a network, and which includes several instructions to cause a computing device (which may be a personal computer, a server, a mobile terminal, a network device, etc.) to execute the method according to the embodiments of the present application.
An exemplary embodiment of the present application also provides an electronic device capable of implementing the above method.

Those skilled in the art will appreciate that various aspects of the present invention may be implemented as a system, a method, or a program product. Accordingly, various aspects of the present invention may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may collectively be referred to herein as a "circuit", "module", or "system".

An electronic device 500 according to such an embodiment of the present invention is described below with reference to Fig. 5. The electronic device 500 shown in Fig. 5 is merely an example and should not impose any limitation on the functions or scope of use of the embodiments of the present invention.

As shown in Fig. 5, the electronic device 500 takes the form of a general-purpose computing device. Its components may include, but are not limited to: the at least one processing unit 510, the at least one storage unit 520, and a bus 530 connecting different system components (including the storage unit 520 and the processing unit 510).
The storage unit stores program code executable by the processing unit 510, causing the processing unit 510 to perform the steps according to various exemplary embodiments of the present invention described in the "Exemplary Method" section above. For example, the processing unit 510 may perform step S110 as shown in Fig. 1: upon receiving input information to be processed, converting the input information to be processed into pre-input information; step S120: feeding the pre-input information into a pre-trained machine learning model to obtain the predicted semantic template corresponding to the input information to be processed; step S130: obtaining semantic-template constraint information at the time the input information to be processed is received, feeding it together with the predicted semantic template into the semantic-template constraint model, and outputting a constrained semantic template, the semantic-template constraint information being real-time environment information related to the input information; step S140: converting the input information to be processed into pre-parsed data according to the constrained semantic template; step S150: obtaining the semantic parsing result of the input information to be processed according to the pre-parsed data.
The storage unit 520 may include readable media in the form of volatile storage units, such as a random access storage unit (RAM) 5201 and/or a cache storage unit 5202, and may further include a read-only storage unit (ROM) 5203.

The storage unit 520 may also include a program/utility 5204 having a set of (at least one) program modules 5205, such program modules 5205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.

The bus 530 may represent one or more of several types of bus structures, including a storage-unit bus or storage-unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus structures.

The electronic device 500 may also communicate with one or more external devices 700 (such as a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 500, and/or with any device (such as a router or modem) that enables the electronic device 500 to communicate with one or more other computing devices. Such communication can take place through an input/output (I/O) interface 550. The electronic device 500 may also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through the network adapter 560. As shown in the figure, the network adapter 560 communicates with the other modules of the electronic device 500 through the bus 530. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the electronic device 500, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the description of the above embodiments, those skilled in the art will readily understand that the example embodiments described here can be implemented in software, or in software combined with necessary hardware. Thus, the technical solutions according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (such as a CD-ROM, USB flash drive, or portable hard disk) or on a network, and which includes several instructions to cause a computing device (which may be a personal computer, a server, a terminal apparatus, a network device, etc.) to execute the method according to the embodiments of the present application.

An exemplary embodiment of the present application also provides a computer non-volatile readable storage medium on which is stored a program product capable of implementing the above method of this specification. In some possible embodiments, various aspects of the present invention may also be implemented in the form of a program product comprising program code which, when the program product runs on a terminal device, causes the terminal device to perform the steps according to the various exemplary embodiments of the present invention described in the "Exemplary Method" section above.

Referring to Fig. 6, a program product 600 for implementing the above method according to an embodiment of the present invention is described; it may take the form of a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product of the present invention is not limited thereto; in this document, a readable storage medium may be any tangible medium that contains or stores a program usable by, or in combination with, an instruction execution system, apparatus, or device.

The program product may use any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.

A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device.

The program code contained on a readable medium may be transmitted by any suitable medium, including but not limited to wireless, wired, optical cable, RF, and the like, or any suitable combination of the above.

The program code for performing the operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a standalone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server. Where a remote computing device is involved, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).

Furthermore, the above drawings are merely schematic illustrations of the processing included in the methods according to exemplary embodiments of the present invention and are not intended to be limiting. It is easy to understand that the processing shown in the above drawings does not indicate or limit the temporal order of these processes; moreover, these processes may, for example, be executed synchronously or asynchronously in multiple modules.

Those skilled in the art will readily conceive of other embodiments of the present application after considering the specification and practicing the invention disclosed herein. The present application is intended to cover any variations, uses, or adaptations of the present application that follow its general principles and include common knowledge or customary technical means in the art not disclosed by the present application. The specification and examples are to be regarded as exemplary only, with the true scope and spirit of the present application being indicated by the claims.
Claims (22)
- A machine learning-based semantic parsing method, comprising: upon receiving input information to be processed, converting the input information to be processed into pre-input information; feeding the pre-input information into a pre-trained machine learning model to obtain a predicted semantic template corresponding to the input information to be processed; obtaining semantic-template constraint information at the time the input information to be processed is received, feeding it together with the predicted semantic template into a semantic-template constraint model, and outputting a constrained semantic template, the semantic-template constraint information being real-time environment information related to the input information; converting, according to the constrained semantic template, the input information to be processed into pre-parsed data; and obtaining, according to the pre-parsed data, a semantic parsing result of the input information to be processed.
- The method according to claim 1, wherein converting the input information to be processed into pre-input information upon receiving it comprises: when the input information to be processed is received and is in text form, looking up a character-word vector dictionary according to the text of the input information to be processed to obtain a character vector for each character in the text; and concatenating the character vectors into a character-vector string as the pre-input information.
- The method according to claim 1, wherein, if the received input information to be processed is in non-text form, converting the input information to be processed into pre-input information upon receiving it comprises: converting the non-text-form input information to be processed into text form; and converting the text-form input information to be processed into pre-input information.
- The method according to claim 1, wherein the training method of the machine learning model is: collecting a sample set of input information to be processed, each sample being labeled in advance with its corresponding semantic template; converting each sample of input information to be processed into pre-input information; feeding the pre-input information converted from each sample into the machine learning model to obtain the semantic template corresponding to each sample; if the semantic template output by the machine learning model for a sample is inconsistent with the semantic template labeled in advance for that sample, adjusting the coefficients of the machine learning model until the semantic template output for the sample is consistent with the labeled template; and if the semantic templates output by the machine learning model for all samples are consistent with the templates labeled in advance for each sample, ending the training.
- The method according to claim 1, wherein a machine learning model suited to each application environment type is trained according to the application environment type of each piece of input information to be processed, and feeding the pre-input information into a pre-trained machine learning model to obtain the predicted semantic template corresponding to the input information to be processed comprises: obtaining the application environment type corresponding to the pre-input information; looking up, according to the application environment type, the machine learning model corresponding to that application environment type; and feeding the pre-input information into the machine learning model corresponding to the application environment type to obtain the predicted semantic template corresponding to the input information to be processed.
- The method according to claim 1, wherein, after feeding the pre-input information into the pre-trained machine learning model to obtain the predicted semantic template corresponding to the input information to be processed, the method further comprises: obtaining the semantic blocks composing the predicted semantic template; determining whether any necessary semantic block is missing from the semantic blocks composing the predicted semantic template; and, if a necessary semantic block is missing, issuing to the user a supplement instruction for the necessary input information corresponding to the type of the missing necessary semantic block.
- The method according to claim 6, wherein, after issuing to the user the supplement instruction for the necessary input information corresponding to the type of the missing necessary semantic block, the method further comprises: upon receiving the supplementary necessary input information corresponding to the type of the missing necessary semantic block, converting the necessary input information together with the previous input information to be processed into pre-input information; feeding the pre-input information into the pre-trained machine learning model to obtain the predicted semantic template corresponding to the necessary input information and the previous input information to be processed; converting the necessary input information and the previous input information to be processed into pre-parsed data according to the predicted semantic template; and obtaining, according to the pre-parsed data, the semantic parsing result of the input information to be processed.
- A machine learning-based semantic parsing apparatus, comprising: a preprocessing module configured to, upon receiving input information to be processed, convert the input information to be processed into pre-input information; a template parsing module configured to feed the pre-input information into a pre-trained machine learning model to obtain a predicted semantic template corresponding to the input information to be processed; a template constraint module configured to obtain semantic-template constraint information at the time the input information to be processed is received, feed it together with the predicted semantic template into a semantic-template constraint model, and output a constrained semantic template, the semantic-template constraint information being real-time environment information related to the input information; a conversion module configured to convert, according to the constrained semantic template, the input information to be processed into pre-parsed data; and an obtaining module configured to obtain, according to the pre-parsed data, a semantic parsing result of the input information to be processed.
- The apparatus according to claim 8, wherein the preprocessing module is configured to: when the input information to be processed is received and is in text form, look up a character-word vector dictionary according to the text of the input information to be processed to obtain a character vector for each character in the text; and concatenate the character vectors into a character-vector string as the pre-input information.
- The apparatus according to claim 8, wherein, if the received input information to be processed is in non-text form, the preprocessing module is configured to: convert the non-text-form input information to be processed into text form; and convert the text-form input information to be processed into pre-input information.
- The apparatus according to claim 8, further configured to: collect a sample set of input information to be processed, each sample being labeled in advance with its corresponding semantic template; convert each sample of input information to be processed into pre-input information; feed the pre-input information converted from each sample into the machine learning model to obtain the semantic template corresponding to each sample; if the semantic template output by the machine learning model for a sample is inconsistent with the semantic template labeled in advance for that sample, adjust the coefficients of the machine learning model until the semantic template output for the sample is consistent with the labeled template; and if the semantic templates output by the machine learning model for all samples are consistent with the templates labeled in advance for each sample, end the training.
- The apparatus according to claim 8, wherein a machine learning model suited to each application environment type is trained according to the application environment type of each piece of input information to be processed, and the template parsing module is configured to: obtain the application environment type corresponding to the pre-input information; look up, according to the application environment type, the machine learning model corresponding to that application environment type; and feed the pre-input information into the machine learning model corresponding to the application environment type to obtain the predicted semantic template corresponding to the input information to be processed.
- The apparatus according to claim 8, further configured to: after the pre-input information is fed into the pre-trained machine learning model to obtain the predicted semantic template corresponding to the input information to be processed, obtain the semantic blocks composing the predicted semantic template; determine whether any necessary semantic block is missing from the semantic blocks composing the predicted semantic template; and, if a necessary semantic block is missing, issue to the user a supplement instruction for the necessary input information corresponding to the type of the missing necessary semantic block.
- The apparatus according to claim 13, further configured to: after the supplement instruction for the necessary input information corresponding to the type of the missing necessary semantic block is issued to the user, upon receiving the supplementary necessary input information corresponding to the type of the missing necessary semantic block, convert the necessary input information together with the previous input information to be processed into pre-input information; feed the pre-input information into the pre-trained machine learning model to obtain the predicted semantic template corresponding to the necessary input information and the previous input information to be processed; convert the necessary input information and the previous input information to be processed into pre-parsed data according to the predicted semantic template; and obtain, according to the pre-parsed data, the semantic parsing result of the input information to be processed.
- An electronic device, comprising: a processing unit; and a storage unit for storing a machine learning-based semantic parsing program of the processing unit, wherein the processing unit is configured to perform the following processing by executing the machine learning-based semantic parsing program: upon receiving input information to be processed, converting the input information to be processed into pre-input information; feeding the pre-input information into a pre-trained machine learning model to obtain a predicted semantic template corresponding to the input information to be processed; obtaining semantic-template constraint information at the time the input information to be processed is received, feeding it together with the predicted semantic template into a semantic-template constraint model, and outputting a constrained semantic template, the semantic-template constraint information being real-time environment information related to the input information; converting, according to the constrained semantic template, the input information to be processed into pre-parsed data; and obtaining, according to the pre-parsed data, a semantic parsing result of the input information to be processed.
- The electronic device according to claim 15, wherein converting the input information to be processed into pre-input information upon receiving it comprises: when the input information to be processed is received and is in text form, looking up a character-word vector dictionary according to the text of the input information to be processed to obtain a character vector for each character in the text; and concatenating the character vectors into a character-vector string as the pre-input information.
- The electronic device according to claim 15, wherein, if the received input information to be processed is in non-text form, converting the input information to be processed into pre-input information upon receiving it comprises: converting the non-text-form input information to be processed into text form; and converting the text-form input information to be processed into pre-input information.
- The electronic device according to claim 15, wherein the training method of the machine learning model is: collecting a sample set of input information to be processed, each sample being labeled in advance with its corresponding semantic template; converting each sample of input information to be processed into pre-input information; feeding the pre-input information converted from each sample into the machine learning model to obtain the semantic template corresponding to each sample; if the semantic template output by the machine learning model for a sample is inconsistent with the semantic template labeled in advance for that sample, adjusting the coefficients of the machine learning model until the semantic template output for the sample is consistent with the labeled template; and if the semantic templates output by the machine learning model for all samples are consistent with the templates labeled in advance for each sample, ending the training.
- The electronic device according to claim 15, wherein a machine learning model suited to each application environment type is trained according to the application environment type of each piece of input information to be processed, and feeding the pre-input information into a pre-trained machine learning model to obtain the predicted semantic template corresponding to the input information to be processed comprises: obtaining the application environment type corresponding to the pre-input information; looking up, according to the application environment type, the machine learning model corresponding to that application environment type; and feeding the pre-input information into the machine learning model corresponding to the application environment type to obtain the predicted semantic template corresponding to the input information to be processed.
- The electronic device according to claim 15, wherein, after feeding the pre-input information into the pre-trained machine learning model to obtain the predicted semantic template corresponding to the input information to be processed, the processing further comprises: obtaining the semantic blocks composing the predicted semantic template; determining whether any necessary semantic block is missing from the semantic blocks composing the predicted semantic template; and, if a necessary semantic block is missing, issuing to the user a supplement instruction for the necessary input information corresponding to the type of the missing necessary semantic block.
- The electronic device according to claim 20, wherein, after issuing to the user the supplement instruction for the necessary input information corresponding to the type of the missing necessary semantic block, the processing further comprises: upon receiving the supplementary necessary input information corresponding to the type of the missing necessary semantic block, converting the necessary input information together with the previous input information to be processed into pre-input information; feeding the pre-input information into the pre-trained machine learning model to obtain the predicted semantic template corresponding to the necessary input information and the previous input information to be processed; converting the necessary input information and the previous input information to be processed into pre-parsed data according to the predicted semantic template; and obtaining, according to the pre-parsed data, the semantic parsing result of the input information to be processed.
- A computer non-volatile readable storage medium on which a machine learning-based semantic parsing program is stored, wherein, when the machine learning-based semantic parsing program is executed by a processing unit, the method according to any one of claims 1 to 7 is performed.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201910879338.1 | 2019-09-18 | |
CN201910879338.1A CN110688859B (zh) | 2019-09-18 | 2019-09-18 | Machine learning-based semantic parsing method, apparatus, medium, and electronic device
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021051565A1 true WO2021051565A1 (zh) | 2021-03-25 |
Family
ID=69109664
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/117680 WO2021051565A1 (zh) | 2019-09-18 | 2019-11-12 | 基于机器学习的语义解析方法、装置、电子设备及计算机非易失性可读存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110688859B (zh) |
WO (1) | WO2021051565A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111353292B (zh) * | 2020-02-26 | 2023-06-16 | Alipay (Hangzhou) Information Technology Co., Ltd. | Parsing method and apparatus for user operation instructions |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080196016A1 (en) * | 2007-02-13 | 2008-08-14 | International Business Machines Corporation | Processing of Expressions |
CN104575501A (zh) * | 2015-01-19 | 2015-04-29 | Beijing Unisound Information Technology Co., Ltd. | Radio voice control instruction parsing method and system |
CN106874259A (zh) * | 2017-02-23 | 2017-06-20 | Tencent Technology (Shenzhen) Co., Ltd. | State machine-based semantic parsing method, apparatus, and device |
CN110147490A (zh) * | 2017-08-07 | 2019-08-20 | SoundHound, Inc. | Natural language recommendation feedback |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150310862A1 (en) * | 2014-04-24 | 2015-10-29 | Microsoft Corporation | Deep learning for semantic parsing including semantic utterance classification |
CN110209831A (zh) * | 2018-02-13 | 2019-09-06 | Beijing Jingdong Shangke Information Technology Co., Ltd. | Method, system, device, and storage medium for model generation and semantic recognition |
2019
- 2019-09-18 CN CN201910879338.1A patent/CN110688859B/zh active Active
- 2019-11-12 WO PCT/CN2019/117680 patent/WO2021051565A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN110688859B (zh) | 2024-09-06 |
CN110688859A (zh) | 2020-01-14 |
Legal Events

Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 19945849; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: PCT application non-entry in European phase | Ref document number: 19945849; Country of ref document: EP; Kind code of ref document: A1