CN109033053B - Scene-based knowledge editing method and device - Google Patents


Info

Publication number
CN109033053B
CN109033053B
Authority
CN
China
Prior art keywords
scene
editing
knowledge
question
user
Prior art date
Legal status
Active
Application number
CN201810754779.4A
Other languages
Chinese (zh)
Other versions
CN109033053A (en)
Inventor
陈家威
陈海勤
Current Assignee
Guangzhou Giantan Information Technology Co ltd
Original Assignee
Guangzhou Giantan Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Giantan Information Technology Co ltd filed Critical Guangzhou Giantan Information Technology Co ltd
Priority to CN201810754779.4A priority Critical patent/CN109033053B/en
Publication of CN109033053A publication Critical patent/CN109033053A/en
Application granted granted Critical
Publication of CN109033053B publication Critical patent/CN109033053B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/166Editing, e.g. inserting or deleting

Abstract

The invention discloses a scene-based knowledge editing method and device. The scene-based knowledge editing device comprises: a scene classification editing module for acquiring target business knowledge and editing the scene classifications related to the target business knowledge; a scene creating module for creating all scenes corresponding to each scene category; an interactive scene construction module for editing interactive question-and-answer content according to business requirements so as to construct interactive scenes; and a generating module for generating a scene knowledge interaction model when the interactive scenes meet the business requirements. With the embodiments of the invention, editing becomes more intuitive, editing complexity is reduced, and maintenance cost is lowered.

Description

Scene-based knowledge editing method and device
Technical Field
The invention relates to the field of computer technology, and in particular to a scene-based knowledge editing method and device.
Background
An intelligent question-answering system organizes accumulated, unordered corpus information in an orderly and scientific way and establishes a knowledge-based classification model. These classification models can guide newly added corpus consultation and service information, saving human resources, improving the automation of information processing, and reducing website operation costs.
Currently, most question-answering systems adopt a question-answer knowledge organization mode: business knowledge is arranged into a standard question-answer library to support intelligent answering of various types of questions. Knowledge for multi-round dialogue question answering has a disordered structure, is complex to edit, and places high demands on editors; when multi-round dialogue question answering is built with a decision-tree method, maintenance is difficult and costly. If the decision tree has many branches, every end point needs its own answer, so the editing process is not intuitive and editing errors occur easily.
Disclosure of Invention
The embodiments of the invention provide a scene-based knowledge editing method and device, which make editing more intuitive, reduce editing complexity, and lower maintenance cost.
An embodiment of the invention provides a scene-based knowledge editing device, comprising:
the scene classification editing module is used for acquiring target business knowledge and editing scene classification related to the target business knowledge;
the scene creating module is used for creating all scenes corresponding to each scene category;
the interactive scene construction module is used for editing interactive question-and-answer content according to business requirements so as to construct an interactive scene; and the generating module is used for generating a scene knowledge interaction model when the interactive scene meets the business requirements.
Preferably, the scene classification editing module specifically includes:
the analysis unit is used for analyzing the business target and the business knowledge and acquiring the target business knowledge;
and the defining unit is used for editing the scene knowledge related to the target business knowledge and defining the scene classification corresponding to the scene knowledge.
Preferably, the interactive scene construction module specifically includes:
the entry editing unit is used for identifying and matching user questions, and for determining and triggering the corresponding scene according to the identified user question;
the system response editing unit is used for editing the questions sent to the user according to the user question in the entry editing unit, so as to acquire scene variables from the user's reply content, wherein the scene variables include global scene variables generated or referenced within the scene;
the condition editing unit is used for guiding the user to provide the necessary scene variables and for extracting and storing the scene variables in the user's reply content, where a scene variable is used for determining the next scene variable; the condition editing unit can be configured as a single-choice condition editing unit, a multi-choice condition editing unit, or a user-input condition editing unit;
a result editing unit for editing the answers to the user questions in the entry editing unit so as to construct the interactive scene;
and the jump editing unit is used for jumping to other editing units or other scenes so as to complete the interaction or cross between scenes.
Preferably, the system response editing unit sets the scene variables of a user; the condition editing unit sets condition types and option contents; the result editing unit matches and associates the scene variables in the scene, edits logical or mathematical operation expressions over any one or more scene variables, and verifies the operation expressions; and the result editing unit integrates the results of the operation expressions.
An embodiment of the invention also provides a scene-based knowledge editing method, comprising the following steps:
acquiring target business knowledge, and editing scene classification related to the target business knowledge;
creating all scenes corresponding to each scene category;
editing interactive question-and-answer content according to business requirements to construct an interactive scene;
and when the interactive scene meets the business requirements, generating a scene knowledge interaction model.
Preferably, the acquiring of the target business knowledge and the editing of the scene classification related to the target business knowledge specifically include:
analyzing the business target and business knowledge to acquire the target business knowledge;
editing the scene knowledge related to the target business knowledge, and defining the scene classification corresponding to the scene knowledge.
Preferably, the editing of interactive question-and-answer content according to business requirements to construct an interactive scene specifically includes:
selecting the scenes in each scene category one by one;
editing the user question at the entry of the selected scene;
editing the follow-up questions associated with the user question and the corresponding condition options;
and editing the user question and, in combination with the answers corresponding to the follow-up questions, constructing the interactive scene.
Preferably, the editing of the follow-up questions associated with the user question and the corresponding condition options specifically includes:
acquiring the scene variables that influence the answer to the user question, and editing follow-up questions about the scene variables;
editing the answer mode and conditions of each follow-up question;
editing the answer prompt information of each follow-up question.
Preferably, the editing of the user question and, in combination with the answers corresponding to the follow-up questions, constructing the interactive scene specifically includes:
judging whether the edited follow-up questions satisfy the answer conditions of the user question;
if yes, editing the answer and judging whether to jump to another scene or another follow-up question; if a jump is needed, judging whether the scene jumped to forms a complete question-answer interaction; if no jump is needed, or the scene jumped to forms a complete question-answer interaction, constructing the interactive scene; if the scene jumped to does not form a complete question-answer interaction, continuing to edit the follow-up questions of that scene;
if not, continuing to edit follow-up questions.
The embodiments of the invention have the following beneficial effects:
the method comprises the steps of dividing scene categories of target business knowledge to create scenes aiming at each scene category, further editing question-answer content of each scene, constructing interactive scenes, generating a scene knowledge question-answer model when the interactive scenes meet business requirements, completing editing, effectively simplifying editing complexity, reducing maintenance cost and enabling editing to be more visual.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating one embodiment of a method for scenario-based knowledge editing provided by the present invention;
FIG. 2 is a schematic flow chart diagram illustrating another embodiment of a scene-based knowledge editing method provided by the present invention;
FIG. 3 is a schematic flow chart diagram illustrating one embodiment of step S203 in FIG. 2;
FIG. 4 is a schematic structural diagram of an embodiment of a scene-based knowledge editing apparatus provided by the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
Referring to FIG. 1, which is a schematic flow chart of an embodiment of the scene-based knowledge editing method provided by the present invention, the method includes:
S1, acquiring target business knowledge, and editing the scene classifications related to the target business knowledge;
S2, creating all scenes corresponding to each scene category;
S3, editing interactive question-and-answer content according to business requirements to construct interactive scenes;
S4, generating a scene knowledge interaction model when the interactive scenes meet the business requirements.
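To make steps S1 to S4 concrete, the following Python sketch strings the four steps together. Every class and function name here is an illustrative assumption added for this description, not part of the patented implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Scene:
    name: str
    category: str
    entry_questions: List[str] = field(default_factory=list)   # user questions that trigger the scene (S3)
    follow_ups: List[dict] = field(default_factory=list)        # follow-up questions collecting scene variables
    answers: Dict[str, str] = field(default_factory=dict)       # condition combination -> answer template

def edit_scene_categories(target_business_knowledge: Dict[str, list]) -> List[str]:
    """S1: derive the scene categories related to the target business knowledge."""
    return list(target_business_knowledge.keys())

def create_scenes(categories: List[str]) -> List[Scene]:
    """S2: create an (initially empty) scene for every scene category."""
    return [Scene(name=f"{c} scene", category=c) for c in categories]

def build_interactive_scene(scene: Scene, qa_content: dict) -> Scene:
    """S3: fill a scene with the edited question-and-answer content."""
    scene.entry_questions = qa_content.get("entry_questions", [])
    scene.follow_ups = qa_content.get("follow_ups", [])
    scene.answers = qa_content.get("answers", {})
    return scene

def generate_model(scenes: List[Scene]) -> Dict[str, Scene]:
    """S4: package the finished scenes into a scene knowledge interaction model."""
    return {s.name: s for s in scenes}
```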
In step S1, the acquiring of the target business knowledge and the editing of the scene classification related to the target business knowledge specifically include:
analyzing the business target and business knowledge to acquire the target business knowledge;
editing the scene knowledge related to the target business knowledge, and defining the scene categories corresponding to the scene knowledge.
It should be noted that the target business knowledge is obtained by analyzing the business target and the business knowledge, which specifically includes: analyzing the project requirements; analyzing the business knowledge system and business knowledge classification, including but not limited to the enterprise business system, enterprise business lines, and supporting lines; and analyzing the business target, the service scope of the business knowledge, and the service objects of the business knowledge. After the target business knowledge is obtained, the scene knowledge is analyzed, determined, and selected from the target business knowledge, the existing business knowledge system and business classification are inherited, a scene knowledge classification structure is defined, and the scene categories are obtained.
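As a minimal sketch of this classification step, the category tree below inherits an existing business classification; the concrete categories are taken from the automobile example later in this description, and the helper function name is an assumption.

```python
# Hypothetical scene category tree inheriting the business knowledge classification.
scene_categories = {
    "buy a new car": [],
    "buy a used car": [],
    "buy parts": ["buy tires", "buy wipers", "buy a driving recorder"],
    "buy insurance": [],
}

def flatten_categories(tree: dict) -> list:
    """List every first-level category and its sub-categories as scene categories."""
    flat = []
    for top_level, sub_categories in tree.items():
        flat.append(top_level)
        flat.extend(sub_categories)
    return flat

print(flatten_categories(scene_categories))
```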
In step S3, the editing of interactive question-and-answer content according to business requirements to construct an interactive scene specifically includes:
selecting the scenes in each scene category one by one;
editing the user question at the entry of the selected scene;
editing the follow-up questions associated with the user question and the corresponding condition options;
and editing the user question and, in combination with the answers corresponding to the follow-up questions, constructing the interactive scene.
During construction, the user questions at the scene entry, that is, the questions users frequently ask in that scene, are edited through the entry editing unit; the follow-up questions and the corresponding condition options are then edited based on the user questions; finally, the scene exit is determined, that is, the corresponding answers are edited, completing the construction of the interactive scene.
Specifically, the editing of the follow-up questions associated with the user question and the corresponding condition options includes:
acquiring the scene variables that influence the answer to the user question, and editing follow-up questions about the scene variables; editing the answer mode, the answer content format, and the conditions of each follow-up question;
and editing, through the system response editing unit, the answer prompt information of each follow-up question.
After the entry question of the scene is edited, the system response editing unit edits the follow-up questions about the scene variables; the answer mode of each follow-up question and the format and conditions of its answer content, that is, the types of scene variables expected in the user's reply, are edited; and the prompts for the user's answers are edited, for example by setting several answers for the user to choose from for each follow-up question, or by prompting the user when an answer does not meet the requirements of the follow-up question.
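The follow-up questions described above can be sketched as a small data structure. The field names, the answer-mode labels, and the check_reply helper below are assumptions used only for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FollowUp:
    """One follow-up question (condition node) that collects a single scene variable."""
    variable: str                       # scene variable to extract, e.g. "tire brand"
    question: str                       # follow-up text sent to the user
    answer_mode: str                    # "single-choice", "multi-choice" or "text"
    options: List[str] = field(default_factory=list)   # preset answers for the choice modes
    retry_prompt: str = "Sorry, that answer does not match this question."  # shown on a bad reply

def check_reply(follow_up: FollowUp, reply: str) -> Optional[str]:
    """Return the extracted scene variable value, or None if the reply fails the condition."""
    if follow_up.answer_mode in ("single-choice", "multi-choice"):
        return reply if reply in follow_up.options else None
    return reply.strip() or None
```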
Specifically, the editing of the user question and, in combination with the answers corresponding to the follow-up questions, constructing the interactive scene includes:
judging whether the edited follow-up questions satisfy the answer conditions of the user question;
if yes, editing the answer and judging whether to jump to another scene or another follow-up question; if a jump is needed, judging whether the scene jumped to forms a complete question-answer interaction; if no jump is needed, or the scene jumped to forms a complete question-answer interaction, constructing the interactive scene; if the scene jumped to does not form a complete question-answer interaction, continuing to edit the follow-up questions of that scene;
if not, continuing to edit follow-up questions.
It should be noted that, when determining a scene exit, it is judged whether the follow-up questions for the scene variables are all satisfied, that is, whether the edited follow-up questions satisfy the answer conditions of the entry question; if so, the final answer is set, and if not, the follow-up questions for the scene variables continue to be edited.
When the scene needs to jump to another scene, the jump is configured, and whether the scene jumped to is complete scene knowledge is checked; if it is not complete scene knowledge, the follow-up questions for the scene variables in that scene continue to be edited. Complete scene knowledge means that every follow-up question in the scene points to user feedback, and all user feedback points to follow-up questions, answers, or other scenes.
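A possible way to encode this completeness check is sketched below; the node dictionary layout is an assumption, but the rule it enforces is the one stated above.

```python
def is_complete_scene(scene: dict) -> bool:
    """
    Completeness rule from the description: every follow-up question must lead to user
    feedback, and every piece of user feedback must point to another follow-up question,
    an answer, or a jump to another scene.  `scene` maps node ids to dicts of the form
    {"type": "follow_up" | "answer" | "jump", "options": {user_reply: next_node_id}}.
    """
    for node in scene.values():
        if node["type"] != "follow_up":
            continue
        options = node.get("options", {})
        if not options:                 # a follow-up with no user feedback is incomplete
            return False
        for target in options.values():
            if target not in scene:     # feedback that points nowhere is incomplete
                return False
    return True
```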
In step S4, when the interactive scene meets the business requirements, the scene knowledge interaction model is generated; when the interactive scene is applied, questions and responses with the user can be carried out according to the scene knowledge interaction model. When the interactive scene does not meet the business requirements, construction of the interactive scene continues.
Referring to FIG. 2, which is a schematic flow chart of another embodiment of the scene-based knowledge editing method provided by the present invention, the method includes:
S201, editing the scene categories;
S202, creating scenes;
S203, editing an interactive scene;
S204, judging whether the edited scene meets the business requirements; if yes, executing step S205, otherwise returning to step S203;
S205, generating the scene knowledge interaction model and ending.
Referring to FIG. 3, which is a schematic flow chart of an embodiment of step S203 in FIG. 2, the step includes:
S301, selecting one scene in the scene category;
S302, setting the entry question;
S303, setting a follow-up question;
S304, setting the user answer mode and conditions;
S305, judging whether an answer is output; if yes, executing step S308, otherwise executing step S306;
S306, judging whether to jump to another scene; if yes, executing step S307;
S307, setting the jump scene, and executing step S310;
S308, setting an answer;
S309, judging whether all condition items point to the next stage; if yes, executing step S310, otherwise returning to step S303;
S310, editing the answers;
S311, judging whether to enter the next scene in the scene category; if yes, returning to step S301; if not, ending.
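The flow of steps S301 to S311 can be read as the editing loop below. The `editor` object and every method on it are assumptions standing in for the editing units of the device, so this is only a sketch of the control flow, not of the patented implementation.

```python
def edit_interactive_scene(scene, editor) -> None:
    """Editing loop for one scene, mirroring steps S302-S310 of FIG. 3."""
    editor.set_entry_question(scene)                            # S302
    while True:
        follow_up = editor.add_follow_up(scene)                 # S303
        editor.set_answer_mode_and_conditions(follow_up)        # S304
        if editor.outputs_answer(follow_up):                    # S305: an answer can be output
            editor.set_answer(follow_up)                        # S308
            if editor.all_conditions_point_to_next(scene):      # S309
                break                                           # go on to S310
        elif editor.should_jump(follow_up):                     # S306: jump instead of answering
            editor.set_jump_scene(follow_up)                    # S307
            break                                               # go on to S310
    editor.edit_answers(scene)                                  # S310

def edit_all_scenes(scenes, editor) -> None:
    """Outer loop over the scenes in a category (S301 and S311)."""
    for scene in scenes:
        edit_interactive_scene(scene, editor)
```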
The scene-based knowledge editing method provided by the present invention is described in detail below, taking an automobile knowledge service website as the project service object.
Step 1: Edit the scene categories.
The project service object is determined to be an automobile knowledge service website; the classification and system of automobile knowledge services are acquired, and the automobile service knowledge is analyzed and screened. The first-level scene categories are defined as buying a new car, buying a used car, buying parts, and buying insurance, where the sub-categories of buying parts are buying tires, buying wipers, and buying a driving recorder.
Step 2: Edit the "buy tires" scene under the "buy parts" category.
Step 2.1: Determine the scene variables involved in buying tires: brand, model, size, and whether the tire is explosion-proof.
Step 2.2: Edit the interaction process.
Step 2.2.1: and selecting a scene.
And selecting a scene of buying tires in the purchased parts.
Step 2.2.2: set up the entrance problem.
The user asks questions: "how much money I want to change tires. "
Step 2.2.3: setting the answer mode and condition of the user.
Acquiring scene variables through question hunting, and setting question hunting: "which brand of tire you want to buy? "set the user answer mode to text.
Step 2.2.4: a scene exit is determined.
After the user answers with the brand, it is judged that the answer conditions are not yet satisfied, so return to step 2.2.3 and set a follow-up question for the next scene variable, the model: "Which model of tire do you want to buy?" with the options model A, model B, and model C. After the user answers with the model, for example by choosing model A, the follow-up question about whether the tire is explosion-proof can be skipped and the answer given directly.
If further scene variables are still needed, return to step 2.2.3.
Step 2.2.5: Edit the answer.
Edit the answer so that, when the user selects a model A tire of the chosen brand, the answer is: "Based on the information you provided, buying a [scene variable: tire brand] [scene variable: tire model] tire will cost [result: tire price]."
Step 2.3: Generate the scene knowledge question-answer model for buying tires.
In the embodiments of the invention, the scene categories of the target business knowledge are divided, scenes are created for each scene category, the question-and-answer content of each scene is further edited to construct interactive scenes, and when the interactive scenes meet the business requirements a scene knowledge question-answer model is generated and editing is complete, which effectively reduces editing complexity, lowers maintenance cost, and makes editing more intuitive.
Correspondingly, the invention also provides a scene-based knowledge editing device, which can implement all the processes of the scene-based knowledge editing method described above.
Referring to FIG. 4, which is a schematic structural diagram of an embodiment of the scene-based knowledge editing apparatus provided by the present invention, the apparatus includes:
the scene classification editing module 1 is used for acquiring target business knowledge and editing scene classification related to the target business knowledge;
a scene creating module 2, configured to create all scenes corresponding to each scene category;
the interactive scene construction module 3 is used for editing interactive question-and-answer content according to business requirements so as to construct an interactive scene;
and the generating module 4 is used for generating a scene knowledge interaction model when the interactive scene meets the business requirements.
Further, the scene classification editing module specifically includes:
the analysis unit is used for analyzing the business target and the business knowledge and acquiring the target business knowledge;
and the defining unit is used for editing the scene knowledge related to the target business knowledge and defining the scene classification corresponding to the scene knowledge.
Further, the interactive scene construction module specifically includes: an entry editing unit, also called an entry node, used for identifying and matching user questions and for determining and triggering the corresponding scene according to the identified user question.
A system response editing unit: the system sends questions to the user according to the previous node so as to acquire scene variables from the user's reply content, where the scene variables may be expressed as text, pictures, voice, and other forms.
A condition editing unit: also called a user input editing unit, a condition node, or an option node, used for guiding the user to provide the necessary scene variables and for extracting and storing the scene variables in the user's input, where a scene variable is used for determining the next scene variable; the user may provide one or more scene variables.
A result editing unit: also called a result node, which edits the answers to the questions in the entry editing unit so as to construct the interactive scene.
A jump editing unit: also called a jump node, used for jumping to question nodes or other scenes so as to complete the interaction or cross between scenes.
Further, the system response editing unit can set the scene variables of a user; the condition editing unit can set condition types and option contents; the result editing unit can match and associate the scene variables in the scene, can edit logical or mathematical operation expressions over any one or more scene variables, and can verify the operation expressions; and the result editing unit can integrate the results of the operation expressions.
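One way such expression editing and verification could look is sketched below. The operator whitelist and the function name are assumptions, since the description only states that logical or mathematical expressions over scene variables can be edited, verified, and their results integrated.

```python
import operator

# Assumed whitelist of operators a result node may apply to scene variables.
_OPS = {
    "+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv,
    "==": operator.eq,
    "and": lambda a, b: bool(a) and bool(b), "or": lambda a, b: bool(a) or bool(b),
}

def evaluate_expression(lhs, op, rhs, scene_variables: dict):
    """Verify the operator, resolve scene-variable names to their values, then evaluate."""
    if op not in _OPS:
        raise ValueError(f"unsupported operator: {op}")
    left = scene_variables.get(lhs, lhs)     # names fall back to literal values
    right = scene_variables.get(rhs, rhs)
    return _OPS[op](left, right)

# e.g. a result node computing a tire price from a collected scene variable:
print(evaluate_expression("unit price", "*", 4, {"unit price": 225}))   # -> 900
```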
By dividing the scene categories of the target business knowledge with the scene classification editing module, creating scenes for each scene category with the scene creating module, editing the interactive question-and-answer content of each scene with the interactive scene construction module to construct interactive scenes, and generating the scene interaction model with the generating module when the interactive scenes meet the business requirements, editing is completed. This effectively reduces editing complexity, lowers maintenance cost, and makes editing more intuitive.
An embodiment of the invention also provides a scene-based knowledge editing device. The scene-based knowledge editing device of this embodiment includes a processor, a memory, and a computer program stored in the memory and executable on the processor, such as a scene-based knowledge editing program. The processor implements the steps in the scene-based knowledge editing method embodiments described above when executing the computer program, or implements the functions of the modules/units in the device embodiments described above when executing the computer program.
Illustratively, the computer program may be partitioned into one or more modules/units that are stored in the memory and executed by the processor to implement the invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program in the scene-based knowledge editing apparatus.
The scene-based knowledge editing device can be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or another computing device. The scene-based knowledge editing device may include, but is not limited to, a processor and a memory. Those skilled in the art will appreciate that the above components are merely an example of a scene-based knowledge editing device and do not constitute a limitation of the scene-based knowledge editing device, which may include more or fewer components than those shown, combine some components, or use different components; for example, the scene-based knowledge editing device may also include input-output devices, network access devices, buses, and the like.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor; the processor is the control center of the scene-based knowledge editing device and uses various interfaces and lines to connect the parts of the entire scene-based knowledge editing device.
The memory may be used to store the computer program and/or modules, and the processor implements the various functions of the scene-based knowledge editing device by running or executing the computer program and/or modules stored in the memory and by calling the data stored in the memory. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to the use of the device (such as audio data or a phone book). In addition, the memory may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash memory card (Flash Card), at least one magnetic disk storage device, a Flash memory device, or another non-volatile solid-state storage device.
If the integrated modules/units of the scene-based knowledge editing device are implemented in the form of software functional units and sold or used as an independent product, they can be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods in the above embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium and which, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.
It should be noted that the above-described device embodiments are merely illustrative, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiment of the apparatus provided by the present invention, the connection relationship between the modules indicates that there is a communication connection between them, and may be specifically implemented as one or more communication buses or signal lines. One of ordinary skill in the art can understand and implement it without inventive effort.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (6)

1. A scene-based knowledge editing apparatus, comprising:
the scene classification editing module is used for acquiring target business knowledge and editing scene classification related to the target business knowledge;
the scene creating module is used for creating all scenes corresponding to each scene category;
the interactive scene construction module is used for editing interactive question-and-answer content according to business requirements so as to construct an interactive scene; the interactive scene construction module comprises: an entry editing unit, used for identifying and matching user questions and for determining and triggering the corresponding scene according to the identified user question; a system response editing unit, used for editing the questions sent to the user according to the user question in the entry editing unit so as to acquire scene variables from the user's reply content, wherein the scene variables comprise global scene variables generated or referenced within the scene, and the scene variables may be expressed as text, pictures, and voice; a condition editing unit, used for guiding the user to provide the necessary scene variables and for extracting and storing the scene variables in the user's reply content, where a scene variable is used for determining the next scene variable, the condition editing unit being configured as a single-choice condition editing unit, a multi-choice condition editing unit, or a user-input condition editing unit; a result editing unit, used for editing the answers to the user questions in the entry editing unit so as to construct the interactive scene; and a jump editing unit, used for jumping to other editing units or other scenes so as to complete the interaction or cross between scenes;
and the generating module is used for generating a scene knowledge interaction model when the interactive scene meets the business requirements.
2. The scene-based knowledge editing apparatus of claim 1, wherein the scene classification editing module specifically comprises:
the analysis unit is used for analyzing the business target and the business knowledge and acquiring the target business knowledge;
and the defining unit is used for editing the scene knowledge related to the target business knowledge and defining the scene classification corresponding to the scene knowledge.
3. The scene-based knowledge editing apparatus according to claim 1, wherein the system response editing unit sets the scene variables of a user; the condition editing unit sets condition types and option contents; the result editing unit matches and associates the scene variables in the scene, edits logical or mathematical operation expressions over any one or more scene variables, and verifies the operation expressions; and the result editing unit integrates the results of the operation expressions.
4. A scene-based knowledge editing method is characterized by comprising the following steps:
acquiring target business knowledge, and editing scene classification related to the target business knowledge;
creating all scenes corresponding to each scene category;
editing interactive question-and-answer content according to business requirements to construct an interactive scene, which specifically comprises: selecting the scenes in each scene category one by one; editing the user question at the entry of the selected scene; editing the follow-up questions associated with the user question and the corresponding condition options; and editing the user question and, in combination with the answers corresponding to the follow-up questions, constructing the interactive scene; wherein the editing of the user question and, in combination with the answers corresponding to the follow-up questions, constructing the interactive scene specifically comprises: judging whether the edited follow-up questions satisfy the answer conditions of the user question; if yes, editing the answer and judging whether to jump to another scene or another follow-up question; if a jump is needed, judging whether the scene jumped to forms a complete question-answer interaction; if no jump is needed, or the scene jumped to forms a complete question-answer interaction, constructing the interactive scene; if the scene jumped to does not form a complete question-answer interaction, continuing to edit the follow-up questions of that scene; if not, continuing to edit follow-up questions;
and generating a scene knowledge interaction model when the interactive scene meets the business requirements.
5. The scene-based knowledge editing method according to claim 4, wherein the acquiring of the target business knowledge and the editing of the scene classification related to the target business knowledge specifically comprise:
analyzing the business target and business knowledge to acquire the target business knowledge;
editing the scene knowledge related to the target business knowledge, and defining the scene classification corresponding to the scene knowledge.
6. The scene-based knowledge editing method according to claim 4, wherein the editing of the follow-up questions associated with the user question and the corresponding condition options specifically comprises:
acquiring the scene variables that influence the answer to the user question, and editing follow-up questions about the scene variables, wherein the scene variables may be expressed as text, pictures, and voice;
editing the answer mode and conditions of each follow-up question;
editing the answer prompt information of each follow-up question.
CN201810754779.4A 2018-07-10 2018-07-10 Scene-based knowledge editing method and device Active CN109033053B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810754779.4A CN109033053B (en) 2018-07-10 2018-07-10 Scene-based knowledge editing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810754779.4A CN109033053B (en) 2018-07-10 2018-07-10 Scene-based knowledge editing method and device

Publications (2)

Publication Number Publication Date
CN109033053A CN109033053A (en) 2018-12-18
CN109033053B true CN109033053B (en) 2022-05-17

Family

ID=64640725

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810754779.4A Active CN109033053B (en) 2018-07-10 2018-07-10 Scene-based knowledge editing method and device

Country Status (1)

Country Link
CN (1) CN109033053B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109726277B (en) * 2018-12-30 2021-08-17 联想(北京)有限公司 Data processing method and device
CN109684461A (en) * 2018-12-30 2019-04-26 联想(北京)有限公司 A kind of data processing method and device
CN109857910B (en) * 2019-01-07 2024-03-26 平安科技(深圳)有限公司 XML file generation method and device, computer equipment and storage medium
CN111797856B (en) * 2019-04-09 2023-12-12 Oppo广东移动通信有限公司 Modeling method and device, storage medium and electronic equipment
CN111325006B (en) * 2020-03-17 2023-05-05 北京百度网讯科技有限公司 Information interaction method and device, electronic equipment and storage medium
CN114924666A (en) * 2022-05-12 2022-08-19 上海云绅智能科技有限公司 Interaction method and device for application scene, terminal equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107247726A (en) * 2017-04-28 2017-10-13 北京神州泰岳软件股份有限公司 Suitable for the implementation method and device of the intelligent robot of multi-service scene
CN107357855A (en) * 2017-06-29 2017-11-17 北京神州泰岳软件股份有限公司 Support the intelligent answer method and device of scene relating
CN107491555A (en) * 2017-09-01 2017-12-19 北京纽伦智能科技有限公司 Knowledge mapping construction method and system
CN108090177A (en) * 2017-12-15 2018-05-29 上海智臻智能网络科技股份有限公司 The generation methods of more wheel question answering systems, equipment, medium and take turns question answering system more

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160148097A1 (en) * 2013-07-10 2016-05-26 Ifthisthen, Inc. Systems and methods for knowledge management

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107247726A (en) * 2017-04-28 2017-10-13 北京神州泰岳软件股份有限公司 Suitable for the implementation method and device of the intelligent robot of multi-service scene
CN107357855A (en) * 2017-06-29 2017-11-17 北京神州泰岳软件股份有限公司 Support the intelligent answer method and device of scene relating
CN107491555A (en) * 2017-09-01 2017-12-19 北京纽伦智能科技有限公司 Knowledge mapping construction method and system
CN108090177A (en) * 2017-12-15 2018-05-29 上海智臻智能网络科技股份有限公司 The generation methods of more wheel question answering systems, equipment, medium and take turns question answering system more

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on the Interaction Process in Question Answering for Socialized Software Development; Wang Hai et al.; Computer Applications and Software; 2017-05-15; Vol. 34, No. 05; pp. 1-11 *

Also Published As

Publication number Publication date
CN109033053A (en) 2018-12-18

Similar Documents

Publication Publication Date Title
CN109033053B (en) Scene-based knowledge editing method and device
US10558335B2 (en) Information providing system, information providing method, and non-transitory recording medium
US11450318B2 (en) Speech skill creating method and system
CN101859425A (en) Method and device for providing application list
US10339222B2 (en) Information providing system, information providing method, non-transitory recording medium, and data structure
CN104484363A (en) Search result display method and device
CN108924598A (en) Video caption display methods and device
KR101691554B1 (en) Apparatus and Method for Educational Content Management
JP2022028881A (en) Method of automatically generating advertisements, apparatus, device, and computer-readable storage medium
CN112579757A (en) Intelligent question and answer method and device, computer readable storage medium and electronic equipment
US10387125B2 (en) Dynamically building mobile applications
CN113592535A (en) Advertisement recommendation method and device, electronic equipment and storage medium
CN116701662A (en) Knowledge graph-based supply chain data management method, device, equipment and medium
US11544582B2 (en) Predictive modelling to score customer leads using data analytics using an end-to-end automated, sampled approach with iterative local and global optimization
CN112905451A (en) Automatic testing method and device for application program
CN110070385A (en) Advertising commentary method, apparatus, electronic equipment and storage medium
CN116228153A (en) Engineering project design change price management method, system, equipment and medium
CN115658063A (en) Page information generation method, device, equipment and storage medium
CN115422439A (en) Automatic learning resource organization and display method for online education platform
CN114625914A (en) Vehicle-mounted interactive music recommendation device, equipment and method
KR20220121290A (en) Apparatus and method determining vehicle purchase for vehicle subscription service
CN113419957A (en) Rule-based big data offline batch processing performance capacity scanning method and device
CN105915601A (en) Resource downloading control method and terminal
CN111401395A (en) Data processing method, terminal equipment and storage medium
US20200013211A1 (en) Automated virtual artifact generation through natural language processing

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant