CN112989013A - Conversation processing method and device, electronic equipment and storage medium - Google Patents

Conversation processing method and device, electronic equipment and storage medium

Info

Publication number
CN112989013A
Authority
CN
China
Prior art keywords
time
sentence
current
determining
time information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110478592.8A
Other languages
Chinese (zh)
Other versions
CN112989013B (en)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Longjin Science And Technology Inc
Wuhan University WHU
Original Assignee
Wuhan Longjin Science And Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Longjin Science And Technology Inc filed Critical Wuhan Longjin Science And Technology Inc
Priority to CN202110478592.8A priority Critical patent/CN112989013B/en
Publication of CN112989013A publication Critical patent/CN112989013A/en
Application granted granted Critical
Publication of CN112989013B publication Critical patent/CN112989013B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/38Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/383Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/289Phrasal analysis, e.g. finite state techniques or chunking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • G06F40/35Discourse or dialogue representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Machine Translation (AREA)

Abstract

The embodiment of the invention discloses a conversation processing method, a device, electronic equipment and a storage medium, wherein the method is applied to the electronic equipment with a conversation function and comprises the following steps: acquiring a current conversation sentence of a user; determining time information contained in the current conversation sentence; determining calendar time corresponding to the time information from a calendar library of the electronic equipment, wherein the calendar library is used for storing standard calendar time; the standard calendar time is used for representing the time expressed according to a set format in the electronic equipment; and generating a reply sentence corresponding to the current dialogue sentence based on the question-answer model of the electronic equipment, the current dialogue sentence, the time information and the calendar time.

Description

Conversation processing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of information processing technologies, and in particular, to a method and an apparatus for processing a dialog, an electronic device, and a storage medium.
Background
With the development of human-computer interaction technology, more and more intelligent products based on human-computer interaction technology have emerged, such as chat robots (chatbots). These smart products can chat with the user and generate answer information based on the user's questions. However, the answer information of existing chat robots is often inaccurate and detached from reality; such answers have little practical significance and impair the user's experience.
Disclosure of Invention
In view of the above, the present invention provides a dialog processing method, a dialog processing apparatus, an electronic device, and a storage medium, which detect time information in a received dialog sentence and, based on the detected time information and a question-answer model preset in the electronic device system, obtain diverse answers that fit reality, thereby improving the user's application experience.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a dialog processing method, where the method is applied to an electronic device including a dialog function, and the method includes:
acquiring a current conversation sentence of a user;
determining time information contained in the current conversation sentence;
determining calendar time corresponding to the time information from a calendar library of the electronic equipment, wherein the calendar library is used for storing standard calendar time; the standard calendar time is used for representing the time expressed according to a set format in the electronic equipment;
and generating a reply sentence corresponding to the current dialogue sentence based on the question-answer model of the electronic equipment, the current dialogue sentence, the time information and the calendar time.
In the above scheme, before the determining the time information included in the current conversational sentence, the method further includes:
judging the format of the current conversation statement;
under the condition that the format of the current dialogue statement is judged to be a non-text format, converting the format of the current dialogue statement into a text format according to a specific technology; wherein the particular technique is associated with a format of the current conversational sentence.
In the above scheme, before the determining the time information included in the current conversational sentence, the method further includes:
judging whether the current dialogue statement contains time components or not;
under the condition that the current dialogue statement contains time components, obtaining a time phrase contained in the current dialogue statement; determining time information contained in the current conversational sentence based on the contained time phrase.
In the above scheme, the determining whether the current dialogue statement includes a time component includes:
performing keyword matching processing on the current conversation sentence to obtain a matching result; and judging whether the current dialogue statement contains a time component or not based on the matching result.
In the above scheme, the performing keyword matching processing on the current dialog statement to obtain a matching result includes:
determining the association degree of each keyword contained in the current conversation sentence and a time phrase in a time resource library; obtaining a matching result based on each of the association degrees;
wherein, the time resource library is used for storing various standard time phrases; the standard time phrase is used to characterize a time expression of known form.
In the foregoing solution, the obtaining a matching result based on each of the relevancy degrees includes:
judging whether each correlation degree meets a preset threshold value;
obtaining a matching result under the condition that at least one of the association degrees meets a preset threshold value; the matching result is that the current dialogue statement contains time components;
under the condition that each association degree is judged not to meet a preset threshold value, a matching result is obtained; and the matching result is that the current dialogue statement does not contain a time component.
In the above scheme, the obtaining the time phrase included in the current dialog sentence includes: determining a first keyword of which the association degree meets a preset threshold; the first keyword is a keyword contained in the current conversation sentence; and obtaining the time phrase contained in the current dialogue sentence based on the first keyword.
In the above solution, the determining time information included in the current conversational sentence based on the included time phrase includes: performing semantic analysis on the first keyword to obtain an analysis result; determining time information contained in the current dialogue sentence based on the analysis result.
In the above scheme, after the determining the time information included in the current conversational sentence based on the analysis result, the method further includes:
judging whether the time information can reflect complete time or not;
under the condition that the time information cannot reflect the complete time, acquiring the previous dialogue sentences of the user; determining a time phrase contained in the previous dialog sentence; and completing the time information based on the time phrase contained in the previous dialog sentence and the time phrase contained in the current dialog sentence so that the time information can reflect the complete time.
In the foregoing solution, the determining the calendar time corresponding to the time information from the calendar library of the electronic device includes:
identifying time phrases representing years, time phrases representing months and time phrases representing days contained in the time information;
determining from the calendar repository a time combination corresponding to a time phrase based on the characterizing year, a time phrase characterizing month, and a time phrase characterizing day; the time combination is calendar time corresponding to the time information.
In the foregoing solution, in a case that the question-answer model is a text semantic matching model, the determining a reply sentence corresponding to the current dialogue sentence based on the question-answer model of the electronic device, the current dialogue sentence, the time information, and the calendar time includes:
determining a first candidate reply sentence corresponding to the current dialogue sentence from the question-answer model based on the time information and the current dialogue sentence;
determining a second candidate reply sentence corresponding to the current dialogue sentence from the question-answer model based on the calendar time and the current dialogue sentence;
generating a reply sentence corresponding to the current dialogue sentence based on the first candidate reply sentence and the second candidate reply sentence;
correspondingly, in a case where the question-answer model is a generative model, the determining a reply sentence corresponding to the current dialogue sentence based on the question-answer model of the electronic device, the current dialogue sentence, the time information, and the calendar time includes:
obtaining a time sequence which can be identified by the question-answering model based on the time information and the calendar time;
and generating a reply sentence corresponding to the current dialogue sentence through a reply sentence generation network of the question-answer model based on the time sequence and the current dialogue sentence.
In a second aspect, an embodiment of the present invention further provides a dialog processing apparatus, which is applied to an electronic device including a dialog function, where the dialog processing apparatus includes an acquiring unit, a first determining unit, a second determining unit, and a generating unit;
the acquiring unit is used for acquiring the current conversation sentence of the user;
the first determining unit is configured to determine time information included in the current dialogue statement;
the second determining unit is used for determining the calendar time corresponding to the time information from a calendar library of the electronic equipment, and the calendar library is used for storing standard calendar time; the standard calendar time is used for representing the time expressed according to a set format in the electronic equipment;
the generating unit is used for generating a reply sentence corresponding to the current dialogue sentence based on the question-answer model of the electronic equipment, the current dialogue sentence, the time information and the calendar time.
In the above scheme, the dialog processing apparatus further includes a first judging unit, configured to judge the format of the current dialog statement; under the condition that the format of the current dialogue statement is judged to be a non-text format, converting the format of the current dialogue statement into a text format according to a specific technology; wherein the particular technique is associated with a format of the current conversational sentence.
In the foregoing solution, the dialog processing apparatus further includes a second judging unit, configured to judge whether the current dialog statement includes a time component; under the condition that the current dialogue statement contains time components, obtaining a time phrase contained in the current dialogue statement; determining time information contained in the current conversational sentence based on the contained time phrase.
In the above scheme, the second judging unit includes a matching subunit and a judging subunit, where the matching subunit is configured to perform keyword matching processing on the current dialog statement to obtain a matching result; and the judging subunit is used for judging whether the current dialogue statement contains a time component or not based on the matching result.
In the above scheme, the matching subunit is specifically configured to determine a degree of association between each keyword included in the current dialog statement and a time phrase in a time resource library; obtaining a matching result based on each of the association degrees; wherein, the time resource library is used for storing various standard time phrases; the standard time phrase is used to characterize a time expression of known form.
In the foregoing solution, the determining subunit is specifically configured to: judging whether each correlation degree meets a preset threshold value; obtaining a matching result under the condition that at least one of the association degrees meets a preset threshold value; the matching result is that the current dialogue statement contains time components; under the condition that each association degree is judged not to meet a preset threshold value, a matching result is obtained; and the matching result is that the current dialogue statement does not contain a time component.
In the foregoing solution, the second judging unit further includes an obtaining subunit, specifically configured to determine a first keyword of which the association degree meets a preset threshold; the first keyword is a keyword contained in the current conversation sentence; and obtaining the time phrase contained in the current dialogue sentence based on the first keyword.
In the above scheme, the second judging unit further includes a determining subunit, configured to perform semantic analysis on the first keyword to obtain an analysis result; determining time information contained in the current dialogue sentence based on the analysis result.
In the foregoing aspect, the dialog processing apparatus further includes a third determining unit configured to: judging whether the time information can reflect complete time or not; under the condition that the time information cannot reflect the complete time, acquiring the previous dialogue sentences of the user; determining a time phrase contained in the previous dialog sentence; and completing the time information based on the time phrase contained in the previous dialog sentence and the time phrase contained in the current dialog sentence so that the time information can reflect the complete time.
In the foregoing solution, the second determining unit is specifically configured to identify a time phrase representing a year, a time phrase representing a month, and a time phrase representing a day included in the time information; determining from the calendar repository a time combination corresponding to a time phrase based on the characterizing year, a time phrase characterizing month, and a time phrase characterizing day; the time combination is calendar time corresponding to the time information.
In the foregoing solution, the generating unit is specifically configured to: determining a first candidate reply sentence corresponding to the current dialogue sentence from the question-answer model based on the time information and the current dialogue sentence under the condition that the question-answer model is a text semantic matching model; determining a second candidate reply sentence corresponding to the current dialogue sentence from the question-answer model based on the calendar time and the current dialogue sentence; generating a reply sentence corresponding to the current dialogue sentence based on the first candidate reply sentence and the second candidate reply sentence;
or, the generating unit is specifically configured to, when the question-answer model is a generative model, obtain a time series that can be recognized by the question-answer model based on the time information and the calendar time; and generating a reply sentence corresponding to the current dialogue sentence through a reply sentence generation network of the question-answer model based on the time sequence and the current dialogue sentence.
In a third aspect, the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the method described above.
In a fourth aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes: a processor and a memory for storing a computer program operable on the processor, wherein the processor is operable to perform the steps of the method when executing the computer program.
The embodiment of the invention provides a conversation processing method, a conversation processing device, electronic equipment and a storage medium. The method is applied to the electronic equipment containing the conversation function, and comprises the following steps: acquiring a current conversation sentence of a user; determining time information contained in the current conversation sentence; determining calendar time corresponding to the time information from a calendar library of the electronic equipment, wherein the calendar library is used for storing standard calendar time; the standard calendar time is used for representing the time expressed according to a set format in the electronic equipment; and generating a reply sentence corresponding to the current dialogue sentence based on the question-answer model of the electronic equipment, the current dialogue sentence, the time information and the calendar time. According to the embodiment of the invention, the received current conversation sentence is subjected to time information detection, and then a more diversified reply sentence which is fit with the reality is obtained according to the detected time information and the question-answer model preset in the electronic equipment system, so that the reply sentence of the electronic equipment is accurate and does not deviate from the reality, and the application experience of a user can be improved.
Drawings
Fig. 1 is a schematic flow chart of a dialog processing method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a dialog processing apparatus according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the following describes specific technical solutions of the present invention in further detail with reference to the accompanying drawings in the embodiments of the present invention. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a schematic flowchart of a dialog processing method according to an embodiment of the present invention. As shown in fig. 1, the method is applied to an electronic device including a dialog function, and includes the following specific steps:
s101: and acquiring the current conversation sentence of the user.
It should be noted that the user may be a person, or may be another electronic device having a conversation function; as long as it can provide a conversation sentence to the electronic device of the embodiment of the present invention, it may be referred to as a user in the embodiments of the present invention.
In practical application, for S101, the following steps may be included: the electronic equipment acquires a current dialogue sentence of a user through a user interface of the electronic equipment, wherein the user interface can comprise a display, a keyboard, a mouse, a track ball, a click wheel, a key, a button, a touch pad or a touch screen and the like; the electronic equipment can also acquire the current dialogue sentences of the user through the voice module. It should be understood that the format of the current dialog sentence may be various, for example, the current dialog sentence may be in a text format, a voice format, a picture format, and the like. In the dialog processing method provided by the embodiment of the present invention, the current dialog sentences in other formats need to be converted into text formats.
In this case, that is, before the determining of the time information contained in the current dialogue sentence, the method may further include:
judging the format of the current conversation statement;
under the condition that the format of the current dialogue statement is judged to be a non-text format, converting the format of the current dialogue statement into a text format according to a specific technology; wherein the particular technique is associated with a format of the current conversational sentence.
It should be noted that the format of the current dialog statement may be determined based on an input manner of the user, for example, the electronic device obtains the current dialog statement of the user through a voice module, and then the format of the current dialog statement is a voice format, and further needs to be converted into a text format.
Here, the specific technology being associated with the format of the current dialogue sentence means that which conversion technology is used is determined according to the specific format of the current dialogue sentence. For example, when the format of the current dialogue sentence is voice, the specific technology is a speech-to-text technology, that is, the current dialogue sentence in voice format is converted into a dialogue sentence in text format by using the speech-to-text technology. It should be understood that in the case where the format of the current dialogue sentence is determined to be a text format, no conversion is required.
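As an illustration only, the format check and conversion described above could be organized as in the following Python sketch; the `speech_to_text` and `image_to_text` helpers are hypothetical placeholders for whatever speech recognition or image-to-text technology the electronic device actually uses, and are not part of the patent.

```python
def normalize_to_text(current_sentence, sentence_format,
                      speech_to_text=None, image_to_text=None):
    """Convert the current dialogue sentence into text format when needed.

    speech_to_text / image_to_text are hypothetical converter callables supplied
    by the electronic device; which one is used depends on the input format.
    """
    if sentence_format == "text":
        # Already in text format: no conversion is required.
        return current_sentence
    if sentence_format == "voice" and speech_to_text is not None:
        return speech_to_text(current_sentence)
    if sentence_format == "picture" and image_to_text is not None:
        return image_to_text(current_sentence)
    raise ValueError(f"Unsupported or unconvertible format: {sentence_format}")
```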
S102: and determining the time information contained in the current dialogue statement.
It should be noted that the idea of the embodiment of the present invention is to obtain accurate reply sentences without departing from the actual reply sentences based on the time information in the conversational sentences, and then before determining the time information included in the current conversational sentence, it needs to be determined whether the current conversational sentence includes a time component, that is: prior to the determining the time information contained in the current conversational sentence, the method may further comprise:
judging whether the current dialogue statement contains time components or not;
under the condition that the current dialogue statement contains time components, obtaining a time phrase contained in the current dialogue statement; determining time information contained in the current conversational sentence based on the contained time phrase.
It should be understood that, in the embodiment of the present invention, in a case where it is determined that the current conversational sentence includes a time component, it is only necessary to acquire the included time phrase and then determine the time information contained in the current conversational sentence according to that time phrase. In a case where it is determined that the current dialogue sentence does not include a time component, an alternative embodiment replies in a conventional reply manner according to a reply model preset in the electronic device.
In an actual application process, the determining whether the current dialog statement includes a time component may include:
performing keyword matching processing on the current conversation sentence to obtain a matching result; and judging whether the current dialogue statement contains a time component or not based on the matching result.
Specifically, the performing keyword matching processing on the current dialog statement to obtain a matching result includes:
determining the association degree of each keyword contained in the current conversation sentence and a time phrase in a time resource library; obtaining a matching result based on each of the association degrees;
wherein, the time resource library is used for storing various standard time phrases; the standard time phrase is used to characterize a time expression of known form.
It should be noted that the time resource base can be understood as storing various standard time phrases, and the standard time phrases are used for representing time expressions in known forms. The forms are various, such as holidays (e.g., New Year, National Day); specific time points (e.g., eight in the morning, half past eight, three in the afternoon); common spoken expressions (e.g., morning, noon, evening); or phrases that, viewed alone, do not express a complete time (e.g., the fifth day, the third day); and so on.
The meaning expressed here is: determining the association degree of each keyword contained in the current dialogue sentence with a time phrase in a time resource library, then determining a matching result based on each association degree, and further judging whether the current dialogue sentence contains a time component.
In an actual application process, the obtaining a matching result based on each of the relevancy degrees includes:
judging whether each correlation degree meets a preset threshold value;
obtaining a matching result under the condition that at least one of the association degrees meets a preset threshold value; the matching result is that the current dialogue statement contains time components;
under the condition that each association degree is judged not to meet a preset threshold value, a matching result is obtained; and the matching result is that the current dialogue statement does not contain a time component.
It should be noted that the preset threshold may be an empirical value, and its value range is between 0 and 1. The meaning to be expressed is: as long as the association degree of at least one keyword meets the preset threshold, the matching result is that the current dialogue sentence contains a time component; only when none of the keywords meets the preset threshold is the matching result that the current dialogue sentence does not contain a time component.
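A minimal sketch of this keyword matching is given below; the time library contents, the character-overlap association degree, and the threshold of 0.5 are all illustrative assumptions rather than values from the invention.

```python
# Illustrative time resource library of standard time phrases (not from the patent).
TIME_LIBRARY = ["National Day", "New Year", "Mid-Autumn Festival",
                "this year", "last year", "the third day", "eight in the morning"]

def association_degree(keyword, phrase):
    """A simple character-overlap association degree in [0, 1] (an assumed metric)."""
    union = set(keyword) | set(phrase)
    return len(set(keyword) & set(phrase)) / max(len(union), 1)

def match_time_components(keywords, threshold=0.5):
    """Return (matching result, first keywords whose association degree meets the threshold)."""
    first_keywords = [kw for kw in keywords
                      if max(association_degree(kw, p) for p in TIME_LIBRARY) >= threshold]
    # The sentence contains a time component if at least one keyword matched.
    return bool(first_keywords), first_keywords
```

With these toy values, `match_time_components(["hello", "the third day"])` reports that a time component is present and returns `["the third day"]` as the first keyword.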
Based on the above description, correspondingly, the obtaining of the time phrase included in the current dialog sentence includes: determining a first keyword of which the association degree meets a preset threshold; the first keyword is a keyword contained in the current conversation sentence; and obtaining the time phrase contained in the current dialogue sentence based on the first keyword.
Correspondingly, the determining the time information contained in the current dialog sentence based on the contained time phrase includes:
performing semantic analysis on the first keyword to obtain an analysis result; determining time information contained in the current dialogue sentence based on the analysis result.
What is meant here is: all keywords whose association degrees meet the preset threshold are extracted from the current dialogue sentence, and each such keyword is called a first keyword; the first keywords are essentially the time phrases contained in the current dialogue sentence, so the time information contained in the current dialogue sentence is determined based on the first keywords.
For example, after the processing of the above steps, all the first keywords included in the current dialogue sentence are obtained as: "2020", "National Day", and "the third day"; then the time information contained in the current dialogue sentence is the third day of National Day in 2020. For another example, after the processing of the above steps, all the first keywords included in the current dialogue sentence are obtained as: "last year" and "Mid-Autumn Festival"; then the time information contained in the current dialogue sentence is the Mid-Autumn Festival of last year.
In practical application, the time components included in the current dialogue sentence include explicit time and implicit time. Explicit time can be understood as a description that completely expresses a time, for example, the aforementioned third day of National Day in 2020. Vocabulary with an implicit time attribute also includes news and hot events, such as "duhui" and "american college": although these are events, they can also serve as conditions for time perception; for example, "duhui" generally takes place around March in spring and differs across levels (country, province, city), so the corresponding time can be adopted according to the specific information of the interlocutor. Implicit time can be understood as a description that cannot completely express a time by itself, for example, "the third day", from which an accurate time cannot be obtained when viewed alone.
Based on the fact that the time information obtained based on the foregoing steps is also classified into explicit and implicit, in some embodiments, after the determining the time information included in the current conversational sentence based on the analysis result, the method further includes:
judging whether the time information can reflect complete time or not;
under the condition that the time information cannot reflect the complete time, acquiring the previous dialogue sentences of the user; determining a time phrase contained in the previous dialog sentence; and completing the time information based on the time phrase contained in the previous dialog sentence and the time phrase contained in the current dialog sentence so that the time information can reflect the complete time.
It should be noted that what is expressed here is: when the time information obtained through the previous steps is an implicit time, the time information contained in the current dialogue sentence is completed by combining it with the time phrases appearing in the preceding dialogue, so that the completed time information can reflect the complete time.
For example, the time information included in the obtained current dialog sentence is the third day, and the time phrase obtained in the previous dialog sentence is the national day of this year, so the time information after completion is the third day of the national day of this year.
It should be understood that if the previous dialogue sentence does not carry time information, in an alternative embodiment the electronic device may ask actively. For example, if the time information contained in the current dialogue sentence is "the third day" and the previous dialogue sentence has no time information, the electronic device may ask which time the "third day" refers to, and then a reply sentence is provided according to the method of the embodiment of the invention. In another alternative embodiment, the reply is performed in a conventional reply manner according to a reply model preset in the electronic device.
S103: determining calendar time corresponding to the time information from a calendar library of the electronic equipment, wherein the calendar library is used for storing standard calendar time; the standard calendar time is used for representing the time expressed according to a set format in the electronic equipment.
It should be noted that the calendar library described here may be the system time in the electronic device; the set format is a preset format of the system time in the electronic device. For example, the set format may be xxx year xxx month xxx day, or xxx (year)-xxx (month)-xxx (day), or other formats, which are not described in detail herein.
Based on this, for S103 may include:
identifying time phrases representing years, time phrases representing months and time phrases representing days contained in the time information;
determining from the calendar repository a time combination corresponding to a time phrase based on the characterizing year, a time phrase characterizing month, and a time phrase characterizing day; the time combination is calendar time corresponding to the time information.
This means that a time phrase representing a year, a time phrase representing a month, and a time phrase representing a day are identified from the time information, and the time information is then expressed in the set format of the calendar library, thereby obtaining the time information expressed in the set format, that is: the calendar time.
For example, assuming that the time information is the third day of National Day of this year, the identified time phrase representing the year is "this year", the time phrase representing the month is "National Day", and the time phrase representing the day is "the third day"; and assuming that the set format of the system time of the electronic device is xxx year xxx month xxx day, the obtained calendar time is: 2021 year 10 month 3 day, i.e., October 3, 2021.
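A sketch of mapping the completed time information onto a standard calendar time follows; the lookup tables and the format string are illustrative assumptions (a real calendar library would cover far more expressions and festivals).

```python
import datetime

# Illustrative lookup tables; a real calendar library would be far richer.
FESTIVAL_START = {"National Day": (10, 1), "New Year": (1, 1)}   # festival -> (month, day)
RELATIVE_YEAR = {"this year": 0, "last year": -1}
ORDINAL_OFFSET = {"the first day": 0, "the second day": 1, "the third day": 2}

def to_calendar_time(year_phrase, month_phrase, day_phrase,
                     today=None, fmt="%Y year %m month %d day"):
    """Combine year/month/day time phrases into a calendar time in the set format."""
    today = today or datetime.date.today()
    year = int(year_phrase) if year_phrase.isdigit() \
        else today.year + RELATIVE_YEAR.get(year_phrase, 0)
    month, start_day = FESTIVAL_START[month_phrase]
    offset = ORDINAL_OFFSET.get(day_phrase, 0)
    date = datetime.date(year, month, start_day) + datetime.timedelta(days=offset)
    return date.strftime(fmt)

# e.g. to_calendar_time("this year", "National Day", "the third day",
#                       today=datetime.date(2021, 4, 29))
#   -> "2021 year 10 month 03 day"
```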
S104: and generating a reply sentence corresponding to the current dialogue sentence based on the question-answer model of the electronic equipment, the current dialogue sentence, the time information and the calendar time.
Here, the S104 may include:
in a case that the question-answer model is a text semantic matching model, the determining, based on the question-answer model of the electronic device, the current dialogue statement, the time information, and the calendar time, a reply statement corresponding to the current dialogue statement includes:
determining a first candidate reply sentence corresponding to the current dialogue sentence from the question-answer model based on the time information and the current dialogue sentence;
determining a second candidate reply sentence corresponding to the current dialogue sentence from the question-answer model based on the calendar time and the current dialogue sentence;
generating a reply sentence corresponding to the current dialogue sentence based on the first candidate reply sentence and the second candidate reply sentence;
correspondingly, in a case where the question-answer model is a generative model, the determining a reply sentence corresponding to the current dialogue sentence based on the question-answer model of the electronic device, the current dialogue sentence, the time information, and the calendar time includes:
obtaining a time sequence which can be identified by the question-answering model based on the time information and the calendar time;
and generating a reply sentence corresponding to the current dialogue sentence through a reply sentence generation network of the question-answer model based on the time sequence and the current dialogue sentence.
It should be noted that the answer sentence corresponding to the current dialogue sentence is generated according to different question-answer models preset in the electronic device and the obtained time information.
For example, if the question-answer model is a text semantic matching model, two candidate answer matches are performed in the QA (Question-Answer) library. For the first match, the implicit time in the current dialogue sentence is replaced with a complete time phrase (explicit time is not changed), and a first candidate reply sentence is selected from the QA library based on the complete time phrase and the current dialogue sentence; for the second match, a second candidate reply sentence is retrieved from the QA library based on the calendar time and the current dialogue sentence. Finally, the first candidate reply sentence and the second candidate reply sentence are input into a deep text matching network, and the one with the highest score is selected as the reply sentence corresponding to the current dialogue sentence.
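For the text-semantic-matching branch, the two candidate retrievals and the final scoring could be wired up roughly as follows; `retrieve_from_qa` and `deep_match_score` are placeholders standing in for the QA-library retrieval and the deep text matching network, which the patent does not specify.

```python
def answer_by_matching(query_with_full_time, query_with_calendar_time,
                       retrieve_from_qa, deep_match_score):
    """Select a reply sentence via two candidate matches against a QA library.

    retrieve_from_qa(query) returns a candidate reply sentence for a query, and
    deep_match_score(query, reply) returns a relevance score; both are assumed
    callables for the QA-library retrieval and the deep text matching network.
    """
    # First candidate: query in which the implicit time was replaced by the full time phrase.
    first_candidate = retrieve_from_qa(query_with_full_time)
    # Second candidate: query combined with the standard calendar time.
    second_candidate = retrieve_from_qa(query_with_calendar_time)
    scored = [
        (deep_match_score(query_with_full_time, first_candidate), first_candidate),
        (deep_match_score(query_with_calendar_time, second_candidate), second_candidate),
    ]
    # The highest-scoring candidate becomes the reply to the current dialogue sentence.
    return max(scored, key=lambda pair: pair[0])[1]
```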
For another example, if the question-answer model is a generative question-answer model: the implicit time words in the current dialogue sentence are first replaced with complete time phrases (explicit time is not changed), and then the specific year, month, and day are appended in brackets, e.g., "the fifth day of the first lunar month (February 16, 2021)", which is a time sequence that the question-answer model can recognize. The current dialogue sentence supplemented with the time sequence is input into the neural network model for generating reply sentences, so as to obtain the reply sentence to the question in the current dialogue sentence.
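For the generative branch, the time-sequence augmentation amounts to string rewriting before calling the generation network, along the lines of the sketch below; `generate_reply` is an assumed stand-in for the reply-sentence generation network, and the example phrases mirror the one in the text.

```python
def build_generative_input(current_sentence, implicit_time,
                           full_time_phrase, calendar_time):
    """Replace the implicit time words and append the concrete date in brackets,
    producing a time sequence the generative model can recognize, e.g.
    '... the fifth day of the first lunar month (February 16, 2021) ...'."""
    explicit = current_sentence.replace(implicit_time, full_time_phrase)
    return explicit.replace(full_time_phrase, f"{full_time_phrase} ({calendar_time})", 1)

def answer_by_generation(current_sentence, implicit_time, full_time_phrase,
                         calendar_time, generate_reply):
    """generate_reply stands in for the reply-sentence generation network."""
    augmented = build_generative_input(current_sentence, implicit_time,
                                       full_time_phrase, calendar_time)
    return generate_reply(augmented)
```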
According to the dialogue processing method provided by the embodiment of the invention, time information detection is performed on the received current dialogue sentence, and then reply sentences that are more diverse and fit reality are obtained according to the detected time information and the question-answer model preset in the electronic device system, so that the reply sentences of the electronic device are accurate and do not deviate from reality, and the user's application experience can be improved. In other words, the invention aims to solve the problem that a chat robot cannot perceive the real time node and therefore answers the question in the current dialogue sentence inaccurately. First, the time information in the current dialogue sentence is matched and completed, and then different measures are adopted according to different models to obtain the answer. The advantages of the invention are as follows: first, using both the time information of the original text (the current dialogue sentence) and the supplemented system time information increases the time semantic weight, and the obtained answers are more diverse and fit reality; second, whether the original system of the electronic device is based on semantic matching or on a generative network, the present method can be embedded directly without major modification, which is convenient and fast.
Based on the same inventive concept, the embodiment of the present invention further provides a dialog processing apparatus, which is applied to an electronic device including a dialog function, where the dialog processing apparatus 20 includes an acquiring unit 201, a first determining unit 202, a second determining unit 203, and a generating unit 204, wherein:
the acquiring unit 201 is configured to acquire a current dialog statement of a user;
the first determining unit 202 is configured to determine time information included in the current dialog statement;
the second determining unit 203 is configured to determine a calendar time corresponding to the time information from a calendar library of the electronic device, where the calendar library is configured to store standard calendar times; the standard calendar time is used for representing the time expressed according to a set format in the electronic equipment;
the generating unit 204 is configured to generate a reply sentence corresponding to the current dialogue sentence based on the question-answer model of the electronic device, the current dialogue sentence, the time information, and the calendar time.
In some embodiments, the dialogue processing apparatus further includes a first judgment unit configured to judge a format of the current dialogue sentence; under the condition that the format of the current dialogue statement is judged to be a non-text format, converting the format of the current dialogue statement into a text format according to a specific technology; wherein the particular technique is associated with a format of the current conversational sentence.
In some embodiments, the dialogue processing apparatus further includes a second judging unit configured to judge whether the current dialogue sentence includes a time component; under the condition that the current dialogue statement contains time components, obtaining a time phrase contained in the current dialogue statement; determining time information contained in the current conversational sentence based on the contained time phrase.
In some embodiments, the second judging unit includes a matching subunit and a judging subunit, where the matching subunit is configured to perform keyword matching processing on the current dialog statement to obtain a matching result; and the judging subunit is used for judging whether the current dialogue statement contains a time component or not based on the matching result.
In some embodiments, the matching subunit is specifically configured to determine a degree of association between each keyword included in the current dialog sentence and a time phrase in a time resource library; obtaining a matching result based on each of the association degrees; wherein, the time resource library is used for storing various standard time phrases; the standard time phrase is used to characterize a time expression of known form.
In some embodiments, the determining subunit is specifically configured to: judging whether each correlation degree meets a preset threshold value; obtaining a matching result under the condition that at least one of the association degrees meets a preset threshold value; the matching result is that the current dialogue statement contains time components; under the condition that each association degree is judged not to meet a preset threshold value, a matching result is obtained; and the matching result is that the current dialogue statement does not contain a time component.
In some embodiments, the second judging unit further includes an obtaining subunit, specifically configured to determine the first keyword whose association degree satisfies a preset threshold; the first keyword is a keyword contained in the current conversation sentence; and obtaining the time phrase contained in the current dialogue sentence based on the first keyword.
In some embodiments, the second judging unit further includes a determining subunit, configured to perform semantic analysis on the first keyword to obtain an analysis result; determining time information contained in the current dialogue sentence based on the analysis result.
In some embodiments, the dialog processing apparatus further includes a third determination unit configured to: judging whether the time information can reflect complete time or not; under the condition that the time information cannot reflect the complete time, acquiring the previous dialogue sentences of the user; determining a time phrase contained in the previous dialog sentence; and completing the time information based on the time phrase contained in the previous dialog sentence and the time phrase contained in the current dialog sentence so that the time information can reflect the complete time.
In some embodiments, the second determining unit 203 is specifically configured to identify a time phrase representing a year, a time phrase representing a month, and a time phrase representing a day included in the time information; determining from the calendar repository a time combination corresponding to a time phrase based on the characterizing year, a time phrase characterizing month, and a time phrase characterizing day; the time combination is calendar time corresponding to the time information.
In some embodiments, the generating unit 204 is specifically configured to: determining a first candidate reply sentence corresponding to the current dialogue sentence from the question-answer model based on the time information and the current dialogue sentence under the condition that the question-answer model is a text semantic matching model; determining a second candidate reply sentence corresponding to the current dialogue sentence from the question-answer model based on the calendar time and the current dialogue sentence; generating a reply sentence corresponding to the current dialogue sentence based on the first candidate reply sentence and the second candidate reply sentence;
or, the generating unit 204 is specifically configured to, when the question-answer model is a generative model, obtain a time series that can be recognized by the question-answer model based on the time information and the calendar time; and generating a reply sentence corresponding to the current dialogue sentence through a reply sentence generation network of the question-answer model based on the time sequence and the current dialogue sentence.
The dialog processing device provided by the embodiment of the invention is based on the same inventive concept as the dialog processing method, and the terms appearing here are clearly described in the method, and are not described again here.
Embodiments of the present invention further provide a readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the foregoing method embodiments. The foregoing storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program code. The readable storage medium provided by the embodiment of the invention is a computer-readable storage medium.
An embodiment of the present invention further provides an electronic device, where the electronic device includes: a processor and a memory for storing a computer program capable of running on the processor, wherein the processor is configured to execute the steps of the above-described method embodiments stored in the memory when running the computer program.
Fig. 3 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention, where the electronic device 30 includes: at least one processor 301, a memory 302, and at least one communication interface 303. The various components of the electronic device 30 are coupled together by a bus system 304; it is understood that the bus system 304 is used to enable connection and communication between these components. In addition to a data bus, the bus system 304 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are labeled as bus system 304 in fig. 3. The communication interface 303 may include a network interface and/or a user interface, and the user interface may include a display, a keyboard, a mouse, a trackball, a click wheel, a key, a button, a touch pad, a touch screen, or the like.
It will be appreciated that the memory 302 can be either volatile memory or nonvolatile memory, and can include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferromagnetic Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disk, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk storage or tape storage. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 302 described in connection with the embodiments of the invention is intended to comprise, without being limited to, these and any other suitable types of memory.
The memory 302 in embodiments of the present invention is used to store various types of data to support the operation of the electronic device 30. Examples of such data include: any computer program for operating on the electronic device 30; a program implementing the method of the embodiment of the present invention may be contained in the memory 302.
The method disclosed in the above embodiments of the present invention may be applied to the processor 301, or implemented by the processor 301. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed by the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium, and the storage medium is located in the memory; the processor reads the information in the memory and completes the steps of the foregoing method in combination with its hardware.
In an exemplary embodiment, the electronic Device 30 may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, Micro Controllers (MCUs), microprocessors (microprocessors), or other electronic components for performing the above-described methods.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment. In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A dialog processing method applied to an electronic device including a dialog function, the method comprising:
acquiring a current conversation sentence of a user;
determining time information contained in the current conversation sentence;
determining calendar time corresponding to the time information from a calendar library of the electronic equipment, wherein the calendar library is used for storing standard calendar time; the standard calendar time is used for representing the time expressed according to a set format in the electronic equipment;
and generating a reply sentence corresponding to the current dialogue sentence based on the question-answer model of the electronic equipment, the current dialogue sentence, the time information and the calendar time.
2. The method of claim 1, wherein prior to said determining the time information contained in the current dialogue sentence, the method further comprises:
determining the format of the current dialogue sentence;
in a case where the format of the current dialogue sentence is determined to be a non-text format, converting the format of the current dialogue sentence into a text format according to a specific technique; wherein the specific technique is associated with the format of the current dialogue sentence.
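A minimal sketch of the format conversion of claim 2, assuming a simple dispatch table: the to_text function and the converters mapping are illustrative names, and the concrete speech-to-text or OCR technique behind each entry is deliberately left abstract, since the claim only requires that the technique be associated with the input format.

def to_text(dialogue_input, input_format: str, converters: dict) -> str:
    """Convert a non-text dialogue input to text with a converter chosen by its format."""
    if input_format == "text":
        return dialogue_input
    try:
        return converters[input_format](dialogue_input)
    except KeyError:
        raise ValueError(f"no converter registered for format: {input_format}")

# Trivial stand-in converters; in practice these would wrap a speech recognizer,
# an OCR engine, etc., chosen according to the input format.
converters = {
    "audio": lambda payload: "remind me tomorrow at three",
    "image": lambda payload: "meeting 2021-05-01",
}
print(to_text(b"\x00fake-audio-bytes", "audio", converters))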
3. The method of claim 1, wherein prior to said determining the time information contained in the current dialogue sentence, the method further comprises:
determining whether the current dialogue sentence contains a time component;
in a case where the current dialogue sentence contains a time component, obtaining a time phrase contained in the current dialogue sentence; and determining the time information contained in the current dialogue sentence based on the contained time phrase;
wherein the determining whether the current dialogue sentence contains a time component comprises: performing keyword matching processing on the current dialogue sentence to obtain a matching result; and determining, based on the matching result, whether the current dialogue sentence contains a time component.
4. The method of claim 3, wherein the performing keyword matching processing on the current dialogue sentence to obtain a matching result comprises:
determining an association degree between each keyword contained in the current dialogue sentence and the time phrases in a time resource library; and obtaining the matching result based on each of the association degrees;
wherein the time resource library is used for storing various standard time phrases; a standard time phrase is used to characterize a time expression of known form.
5. The method of claim 4, wherein the obtaining a matching result based on each of the association degrees comprises: determining whether each association degree meets a preset threshold; in a case where at least one of the association degrees meets the preset threshold, obtaining a matching result indicating that the current dialogue sentence contains a time component; and in a case where none of the association degrees meets the preset threshold, obtaining a matching result indicating that the current dialogue sentence does not contain a time component;
correspondingly, the obtaining a time phrase contained in the current dialogue sentence comprises: determining a first keyword whose association degree meets the preset threshold, the first keyword being a keyword contained in the current dialogue sentence; and obtaining the time phrase contained in the current dialogue sentence based on the first keyword;
correspondingly, the determining the time information contained in the current dialogue sentence based on the contained time phrase comprises: performing semantic analysis on the first keyword to obtain an analysis result; and determining the time information contained in the current dialogue sentence based on the analysis result.
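Claims 3 to 5 describe detecting a time component by scoring each keyword of the sentence against a time resource library and comparing the association degrees with a preset threshold. The sketch below uses a plain string-similarity ratio as a stand-in for the unspecified association measure; TIME_RESOURCE_LIBRARY, PRESET_THRESHOLD and match_time_component are illustrative names only.

from difflib import SequenceMatcher

# Illustrative time resource library of standard time phrases (claim 4). The similarity
# ratio below is an arbitrary stand-in for the unspecified "association degree".
TIME_RESOURCE_LIBRARY = ["today", "tomorrow", "next week", "at noon", "on monday"]
PRESET_THRESHOLD = 0.8

def association_degree(keyword: str, standard_phrase: str) -> float:
    return SequenceMatcher(None, keyword.lower(), standard_phrase.lower()).ratio()

def match_time_component(sentence: str):
    """Return (contains_time_component, matched_first_keywords) as in claims 3-5."""
    keywords = sentence.lower().replace(",", " ").split()
    first_keywords = []
    for keyword in keywords:
        degrees = [association_degree(keyword, phrase) for phrase in TIME_RESOURCE_LIBRARY]
        if any(degree >= PRESET_THRESHOLD for degree in degrees):   # at least one degree meets the threshold
            first_keywords.append(keyword)                          # a "first keyword" in the sense of claim 5
    return bool(first_keywords), first_keywords

print(match_time_component("Call the client tomorrow, please"))
# -> (True, ['tomorrow'])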
6. The method of claim 5, wherein after the determining the time information contained in the current dialogue sentence based on the analysis result, the method further comprises:
determining whether the time information reflects a complete time;
in a case where the time information cannot reflect a complete time, acquiring a previous dialogue sentence of the user; determining a time phrase contained in the previous dialogue sentence; and completing the time information based on the time phrase contained in the previous dialogue sentence and the time phrase contained in the current dialogue sentence, so that the time information reflects a complete time.
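Claim 6 completes time information that does not reflect a complete time by borrowing the missing pieces from the previous dialogue sentence. A minimal sketch of that completion step, assuming the time information is held as a small field dictionary (the field names "date" and "hour" are assumptions for this example, not a schema fixed by the claim):

# Field names are assumptions for this sketch only; the patent does not fix a schema.
REQUIRED_FIELDS = ("date", "hour")

def is_complete(time_info: dict) -> bool:
    """Check whether the time information reflects a complete time."""
    return all(field in time_info for field in REQUIRED_FIELDS)

def complete_time_info(current_info: dict, previous_info: dict) -> dict:
    """Fill the fields missing from the current turn with those of the previous turn."""
    if is_complete(current_info):
        return current_info
    merged = dict(previous_info)
    merged.update(current_info)        # the current dialogue sentence takes precedence
    return merged

previous_turn = {"date": "2021-05-01"}   # e.g. "Let's meet on May 1st."
current_turn = {"hour": 15}              # e.g. "Make it three in the afternoon."
print(complete_time_info(current_turn, previous_turn))
# -> {'date': '2021-05-01', 'hour': 15}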
7. The method according to claim 1, wherein, in a case where the question-answer model is a text semantic matching model, the generating a reply sentence corresponding to the current dialogue sentence based on the question-answer model of the electronic device, the current dialogue sentence, the time information and the calendar time comprises:
determining a first candidate reply sentence corresponding to the current dialogue sentence from the question-answer model based on the time information and the current dialogue sentence;
determining a second candidate reply sentence corresponding to the current dialogue sentence from the question-answer model based on the calendar time and the current dialogue sentence;
generating a reply sentence corresponding to the current dialogue sentence based on the first candidate reply sentence and the second candidate reply sentence;
correspondingly, in a case where the question-answer model is a generative model, the generating a reply sentence corresponding to the current dialogue sentence based on the question-answer model of the electronic device, the current dialogue sentence, the time information and the calendar time comprises:
obtaining, based on the time information and the calendar time, a time sequence recognizable by the question-answer model;
and generating a reply sentence corresponding to the current dialogue sentence through a reply sentence generation network of the question-answer model based on the time sequence and the current dialogue sentence.
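Claim 7 distinguishes a retrieval-style text semantic matching model (two candidate replies, one driven by the time information and one by the calendar time) from a generative model fed with a model-readable time sequence. The control flow might be sketched as below; MatchingQAModel, GenerativeQAModel and reply_for are toy stand-ins, and the rule for combining the two candidate replies is an assumption, since the claim leaves it open.

class MatchingQAModel:
    """Toy text semantic matching model: stores (question, reply) pairs."""
    def __init__(self, qa_pairs):
        self.qa_pairs = qa_pairs

    def best_reply(self, query: str) -> str:
        # Crude "semantic" matching: count words shared with each stored question.
        query_words = set(query.lower().split())
        _, reply = max(self.qa_pairs,
                       key=lambda pair: len(set(pair[0].lower().split()) & query_words))
        return reply

class GenerativeQAModel:
    """Toy generative model: a real one would run a reply-sentence generation network."""
    def generate(self, time_sequence, sentence: str) -> str:
        return f"Scheduled for {time_sequence[-1]}."

def reply_for(model, sentence: str, time_info: str, calendar_time: str) -> str:
    if isinstance(model, MatchingQAModel):
        first = model.best_reply(f"{time_info} {sentence}")        # candidate from the time information
        second = model.best_reply(f"{calendar_time} {sentence}")   # candidate from the calendar time
        # How the two candidates are combined is not fixed by claim 7; preferring the
        # calendar-time candidate when they differ is just one possible choice.
        return first if first == second else second
    time_sequence = [time_info, calendar_time]                     # a model-readable time sequence
    return model.generate(time_sequence, sentence)

matcher = MatchingQAModel([("what is on 2021-05-01", "You have a design review."),
                           ("what is on tomorrow", "Nothing scheduled yet.")])
print(reply_for(matcher, "what is on", "tomorrow", "2021-05-01"))   # -> You have a design review.
print(reply_for(GenerativeQAModel(), "Remind me tomorrow", "tomorrow", "2021-05-01"))
# -> Scheduled for 2021-05-01.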
8. A conversation processing apparatus applied to an electronic device including a conversation function, wherein the conversation processing apparatus comprises an acquisition unit, a first determining unit, a second determining unit and a generating unit;
the acquisition unit is configured to acquire a current dialogue sentence of a user;
the first determining unit is configured to determine time information contained in the current dialogue sentence;
the second determining unit is configured to determine a calendar time corresponding to the time information from a calendar library of the electronic device, wherein the calendar library is used for storing standard calendar times; a standard calendar time is used for representing a time expressed in a set format in the electronic device;
the generating unit is configured to generate a reply sentence corresponding to the current dialogue sentence based on a question-answer model of the electronic device, the current dialogue sentence, the time information and the calendar time.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
10. An electronic device, characterized in that the electronic device comprises: a processor and a memory for storing a computer program operable on the processor, wherein the processor is configured to perform the steps of the method of any one of claims 1 to 7 when executing the computer program.
CN202110478592.8A 2021-04-30 2021-04-30 Conversation processing method and device, electronic equipment and storage medium Active CN112989013B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110478592.8A CN112989013B (en) 2021-04-30 2021-04-30 Conversation processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110478592.8A CN112989013B (en) 2021-04-30 2021-04-30 Conversation processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112989013A true CN112989013A (en) 2021-06-18
CN112989013B CN112989013B (en) 2021-08-24

Family

ID=76336836

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110478592.8A Active CN112989013B (en) 2021-04-30 2021-04-30 Conversation processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112989013B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160217206A1 (en) * 2015-01-26 2016-07-28 Panasonic Intellectual Property Management Co., Ltd. Conversation processing method, conversation processing system, electronic device, and conversation processing apparatus
US20160328469A1 (en) * 2015-05-04 2016-11-10 Shanghai Xiaoi Robot Technology Co., Ltd. Method, Device and Equipment for Acquiring Answer Information
CN107493353A (en) * 2017-10-11 2017-12-19 宁波感微知著机器人科技有限公司 A kind of intelligent robot cloud computing method based on contextual information
CN107612814A (en) * 2017-09-08 2018-01-19 北京百度网讯科技有限公司 Method and apparatus for generating candidate's return information
CN107885756A (en) * 2016-09-30 2018-04-06 华为技术有限公司 Dialogue method, device and equipment based on deep learning
CN108121799A (en) * 2017-12-21 2018-06-05 广东欧珀移动通信有限公司 Recommendation method, apparatus, storage medium and the mobile terminal of revert statement
CN109359211A (en) * 2018-11-13 2019-02-19 平安科技(深圳)有限公司 Data-updating method, device, computer equipment and the storage medium of interactive voice
CN110347817A (en) * 2019-07-15 2019-10-18 网易(杭州)网络有限公司 Intelligent response method and device, storage medium, electronic equipment
CN110472033A (en) * 2019-08-16 2019-11-19 北京一链数云科技有限公司 Answering method, device and server based on NLP model
CN111813900A (en) * 2019-04-10 2020-10-23 北京猎户星空科技有限公司 Multi-turn conversation processing method and device, electronic equipment and storage medium
CN112527962A (en) * 2020-12-17 2021-03-19 云从科技集团股份有限公司 Intelligent response method and device based on multi-mode fusion, machine readable medium and equipment

Also Published As

Publication number Publication date
CN112989013B (en) 2021-08-24

Similar Documents

Publication Publication Date Title
CN110462730B (en) Facilitating end-to-end communication with automated assistants in multiple languages
CN110069608B (en) Voice interaction method, device, equipment and computer storage medium
US9805718B2 (en) Clarifying natural language input using targeted questions
EP3183728B1 (en) Orphaned utterance detection system and method
CN110223695B (en) Task creation method and mobile terminal
Bunt et al. Towards an ISO standard for dialogue act annotation
JP6909832B2 (en) Methods, devices, equipment and media for recognizing important words in audio
WO2021000497A1 (en) Retrieval method and apparatus, and computer device and storage medium
CN111026319B (en) Intelligent text processing method and device, electronic equipment and storage medium
CN109343696B (en) Electronic book commenting method and device and computer readable storage medium
CN111026320B (en) Multi-mode intelligent text processing method and device, electronic equipment and storage medium
US20180246954A1 (en) Natural language content generator
KR101677859B1 (en) Method for generating system response using knowledgy base and apparatus for performing the method
CN115769220A (en) Document creation and editing via automated assistant interaction
CN116796857A (en) LLM model training method, device, equipment and storage medium thereof
CN110020429B (en) Semantic recognition method and device
US20220188525A1 (en) Dynamic, real-time collaboration enhancement
JP6095487B2 (en) Question answering apparatus and question answering method
CN112989013B (en) Conversation processing method and device, electronic equipment and storage medium
CN113901193A (en) Man-machine conversation processing method, device, equipment and medium based on dynamic code
CN114242047A (en) Voice processing method and device, electronic equipment and storage medium
CN115408500A (en) Question-answer consistency evaluation method and device, electronic equipment and medium
CN110647627B (en) Answer generation method and device, computer equipment and readable medium
EP3552114A1 (en) Natural language content generator
Kim A dialogue-based NLIDB system in a schedule management domain

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right
Effective date of registration: 20231212
Address after: 430014 Building 2, Guannan Industrial Park, Donghu New Technology Development Zone, Wuhan City, Hubei Province
Patentee after: WUHAN LONGJIN SCIENCE AND TECHNOLOGY Inc.
Patentee after: WUHAN University
Address before: 430014 Building 2, Guannan Industrial Park, Donghu New Technology Development Zone, Wuhan City, Hubei Province
Patentee before: WUHAN LONGJIN SCIENCE AND TECHNOLOGY Inc.