CN110895657B - Semantic logic expression and analysis method based on spoken language dialogue features - Google Patents

Semantic logic expression and analysis method based on spoken language dialogue features

Info

Publication number
CN110895657B
CN110895657B
Authority
CN
China
Prior art keywords
dialogue
spoken
dialog
expression
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811054040.9A
Other languages
Chinese (zh)
Other versions
CN110895657A (en)
Inventor
张宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huijie Shanghai Technology Co ltd
Original Assignee
Huijie Shanghai Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huijie Shanghai Technology Co ltd filed Critical Huijie Shanghai Technology Co ltd
Priority to CN201811054040.9A priority Critical patent/CN110895657B/en
Publication of CN110895657A publication Critical patent/CN110895657A/en
Application granted granted Critical
Publication of CN110895657B publication Critical patent/CN110895657B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a semantic logic expression and analysis method based on spoken dialogue features. For the dialogue texts of dialogue roles entered in spoken-dialogue order, the method defines keyword sequence rules and combines them into logical expressions built from and, or, not and brackets, thereby achieving comprehensive logic-rule expression of multiple semantic expressions and enabling information-feature extraction and analysis of arbitrary spoken dialogue texts. The semantic logic expression and analysis method based on spoken dialogue features can fully exploit contextual features in specific business dialogue scenarios and, using a logic and approach closer to human thinking, can provide complex logic expression and analysis tools for specific business scenarios and purposes, and has broad application prospects.

Description

Semantic logic expression and analysis method based on spoken language dialogue features
Technical Field
The invention belongs to the technical field of dialogue text analysis, and relates to a semantic logic expression and analysis method based on spoken dialogue features.
Background
In an enterprise call center or customer service center there are large volumes of conversational recordings between agents and customers, and with advances in speech recognition technology most of these recordings have already been converted to text. Together with the text-based customer service dialogues that are now widespread, this constitutes the largest body of customer-related communication data in an enterprise. Such data is of great value for understanding customers, understanding the market, and analyzing a company's own products and service quality.
However, spoken language is often informal and non-standard, and the speech recognition system inevitably introduces noise such as recognition errors during transcription. Extracting and analyzing information from spoken dialogue text is therefore difficult, and when the information to be extracted involves enterprise- or industry-specific features, rules and professional phrasing, the impact on extraction quality becomes even more pronounced.
Currently known methods based on regular expressions, machine learning, deep learning and the like often perform poorly in specific scenarios involving high-noise, non-standard spoken dialogue text.
Disclosure of Invention
The invention aims to provide a semantic logic expression and analysis method based on spoken dialogue features, in order to solve the difficulty that existing text analysis methods have in extracting specific target information from noisy, short-text spoken dialogue scenarios.
The semantic logic expression and analysis method based on spoken dialogue features provided by the invention applies, to the dialogue texts of dialogue roles entered in spoken-dialogue order, a definition scheme of keyword sequence rules KS combined with logical expressions built from and, or, not and brackets, thereby realizing comprehensive logic-rule expression of multiple semantic expressions and enabling information-feature extraction and analysis of arbitrary spoken dialogue (including text dialogue) texts.
The invention provides a semantic logic expression and analysis method based on spoken dialog features, which specifically comprises the following steps:
step one, recognizing a spoken dialogue, and obtaining the text information content of the spoken dialogue;
the spoken dialogue speech is recognized by the speech recognition system to obtain spoken dialogue text information content. The spoken dialog text message content is entered in dialog sequence.
In the invention, the voice recognition system refers to a system for recognizing the content of a voice dialogue from voice to words in the prior art. Due to problems of non-normative and accents of human spoken language, the text content translated by speech recognition systems often contains a large amount of erroneous and non-normative information.
In the present invention, the spoken dialog text refers to spoken text dialog content formed after the spoken speech dialog content is translated by the speech recognition system.
In the first step, the spoken dialog text information content includes: the content of each spoken dialog text, the dialog role of each spoken dialog text, the starting time point and the ending time point of each spoken dialog, etc.
The dialogue roles may be two or more. Preferably, there are two dialogue roles, dialogue role A and dialogue role B.
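For illustration only, the per-sentence record described in step one could be represented as in the following minimal Python sketch; the type and field names (Utterance, text, role, start_time, end_time) are hypothetical and not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    """One recognized spoken-dialogue sentence from step one."""
    text: str          # recognized text content of the sentence
    role: str          # dialogue role, e.g. "A" (agent) or "B" (customer)
    start_time: float  # start time point, in seconds
    end_time: float    # end time point, in seconds

# A toy transcript entered in dialogue order (content is invented):
transcript = [
    Utterance("hello welcome to call xx bank", "A", 0.0, 2.1),
    Utterance("hi I want to ask about my account", "B", 2.5, 4.8),
]
```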
Step two, defining a keyword sequence rule;
a keyword sequence rule KS is defined, containing a keyword sequence rule of arbitrary length in the format K1-K2-K3-…-Kn, which is a text feature expression for any number of keywords over any interval range. Here K1, K2, …, Kn are arbitrary key words or key phrases. The intervals between these keywords are arbitrary character intervals or time intervals, and the arbitrary interval range is an arbitrary character interval range or time interval range. In this form any semantic expression format can be described.
Here, a keyword element comprises a key word and/or a key phrase.
The keyword sequence rule comprises the following (a minimal code sketch of this rule format is given after the list):
(1) K1, K2, K3, …, Kn respectively represent n key words (or key phrases).
(2) K1-K2 defines the maximum interval (character interval or time interval) between key words (or key phrases) K1 and K2.
(3) By analogy, K2-K3, …, Kn-1-Kn define the maximum interval (character interval or time interval) between each pair of adjacent key words (or key phrases).
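The following sketch shows one possible reading of the K1-K2-…-Kn format using character intervals; the names KSRule and ks_matches are illustrative, time intervals and the role restrictions of step three are not handled, and the greedy first-occurrence scan is a simplification.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class KSRule:
    """Keyword sequence rule KS in the form K1-K2-...-Kn."""
    keywords: List[str]   # K1, K2, ..., Kn
    max_gaps: List[int]   # max character interval between Ki and Ki+1 (length n-1)

def ks_matches(rule: KSRule, text: str) -> bool:
    """True if all keywords occur in order and every gap is within its limit.

    Greedy scan: each keyword is matched at its first admissible occurrence,
    so a full implementation would also consider later occurrences.
    """
    pos = text.find(rule.keywords[0])
    if pos < 0:
        return False
    prev_end = pos + len(rule.keywords[0])
    for kw, gap in zip(rule.keywords[1:], rule.max_gaps):
        nxt = text.find(kw, prev_end)
        if nxt < 0 or nxt - prev_end > gap:
            return False
        prev_end = nxt + len(kw)
    return True
```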
Step three, limiting dialogue roles to which the keyword belongs;
the above-mentioned K1, K2, … … Kn keywords (or keywords) may define their assigned conversational roles, for example, they may be defined by conversational roles a or B, or may not be defined by conversational roles, so as to represent the descriptive capability of the conversational scene, and may provide an analysis method for information that can be confirmed only by mutual authentication of both parties of the spoken conversation.
Fourthly, realizing semantic logic expression and analysis;
by defining a plurality of keyword sequences KS1, KS2 … KSn and using and, or, not and bracket () combinations, the keyword sequence rule KS is used as a unit to form a logic relation expression of any level, so that the comprehensive logic rule expression capability of a plurality of semantic expressions can be realized, and further, the information feature extraction and analysis can be carried out on any spoken dialog text.
The logical operation Model over such a logical relation expression works as follows: if all the keywords of a given KSn appear in the actual spoken dialogue text information content in the defined order and with intervals that satisfy the definition of that KSn, the value of that KSn is True; conversely, if the keyword sequence defined by a KSn does not appear in a piece of spoken dialogue text information content, or appears but does not satisfy the interval definition, the value of that KSn is False. By evaluating the truth value (true or false) of every keyword sequence rule KS against a dialogue text and substituting these values into the Model's logical formula, the Model's truth value (true or false) for that dialogue text can be computed, which indicates whether the semantics expressed by the logical Model appear in the text dialogue information content.
In particular embodiments, the logical relation expression, i.e. the Model's logical formula, may include one or more keyword sequence rules. For example, two rules may be combined with the and relationship, such as "KS1 and KS2"; with the or relationship, such as "KS1 or KS2"; or with the not relationship, such as "KS1 and not KS2". With multiple rules KS1, KS2, …, KSn, multi-level complex logic can be formed through and/or/not/brackets, such as ((KS1 and KS2) or (KS3 and not KS4)) and (KS5 or KS6), satisfying the definition of any business semantic rule.
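Because the and, or, not and bracket notation above happens to coincide with Python's boolean operators, the Model can be evaluated with a restricted eval once every KS rule has been reduced to true or false. This is only a sketch under that assumption (a production system would use a dedicated expression parser), and the function name and example values are illustrative.

```python
def evaluate_model(expr: str, ks_values: dict) -> bool:
    """Evaluate a Model expression against per-rule truth values."""
    # The expression only refers to KS names combined with and/or/not/brackets,
    # so evaluating it with the KS truth values as the only visible names
    # is sufficient for this sketch.
    return bool(eval(expr, {"__builtins__": {}}, dict(ks_values)))

ks_values = {"KS1": True, "KS2": True, "KS3": False,
             "KS4": True, "KS5": False, "KS6": True}
model = "((KS1 and KS2) or (KS3 and not KS4)) and (KS5 or KS6)"
print(evaluate_model(model, ks_values))  # True for these example values
```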
In step four, further, semantic feature expressions based on dialogue roles may also be used as units and combined with and, or, not and brackets () to form semantic logic rules for the spoken dialogue scenario.
The invention thus provides keyword sequence rules KSn, keywords Kn and logical operations, where KSn and Kn each have their own composition rules and definition requirements. Each keyword sequence rule KSn (n = 1, 2, 3, …) contains a keyword sequence definition of the form K1-K2-K3-…-Kn, in which each K represents one or more consecutive characters, each K may be restricted to a dialogue role (A, B or any), and the connector between Kn and Kn+1 denotes the maximum interval (number of characters or amount of time) between those two keywords. A logical operation is an expression composed of keyword sequence rules KSn and logical operators (multi-level logic built from and/or/not/brackets), such as ((KS1 and KS2) or (KS3 and not KS4)) and (KS5 or KS6); its result is True or False, indicating whether the corresponding semantics appear.
The innovative advantages and beneficial effects of the invention are as follows: by applying a keyword-sequence-rule definition scheme combined with logical expressions built from and, or, not and brackets () to the dialogue texts of dialogue roles entered in spoken-dialogue order, the method achieves comprehensive logic-rule expression of multiple semantic expressions and enables information-feature extraction and analysis of arbitrary dialogue texts.
The method follows the pattern human analysts typically use when examining high-noise interactive dialogue texts: judging the intention and key information of a spoken dialogue from a limited set of contextually related keywords and phrases and from the interaction between the two parties, and on this basis defining more complex multi-level logical expressions that serve business rules and information-processing goals in specific scenarios. Because the definition scheme is flexible, it is suitable for information extraction across various speech and text communication channels, easy to configure, and convenient to maintain.
The semantic logic expression and analysis method based on spoken dialogue features can fully exploit contextual features in specific business spoken dialogue scenarios and, using a logic and approach closer to human thinking, can provide complex logic expression and analysis tools for specific business scenarios and applications. The method can be used on its own as a dialogue analysis method and tool, or combined with other existing methods as their front end or back end, forming a more comprehensive dialogue semantic analysis pipeline and system. The invention has broad application prospects.
Drawings
FIG. 1 is a schematic diagram of the text content of dialogue role A formed after transcription by a speech recognition system according to the present invention.
FIG. 2 is a schematic diagram of the text content of dialogue role B formed after transcription by a speech recognition system according to the present invention.
FIG. 3 is a flow chart of the semantic logic expression and analysis method based on spoken dialog features of the present invention.
Detailed Description
The present invention is described in further detail below with reference to the following examples and drawings, so that its objects, technical solutions and advantages can be understood more clearly. It should be understood that the specific embodiments described here merely illustrate the invention and do not limit it. The principles of the invention are further described with reference to the drawings and specific examples.
Example 1
The semantic logic expression and analysis method based on spoken dialogue features provided by the invention applies, to the dialogue texts of individual dialogue roles entered in spoken-dialogue order, a keyword/key-phrase sequence rule definition scheme combined with logical expressions built from and, or, not and brackets, thereby realizing information-feature extraction and analysis of arbitrary spoken dialogue texts. As shown in fig. 3, the method comprises the following steps:
s11, recognizing the spoken dialogue, and obtaining the text information content of the spoken dialogue.
When the semantics of spoken dialogue speech need to be logically analyzed, the spoken dialogue speech to be analyzed is first obtained, then recognized and converted into the target spoken dialogue text information, and the dialogue text information content is entered in dialogue order.
The spoken dialogue is recognized by the speech recognition system, and each sentence of the recognized spoken dialogue text is entered in dialogue order. Together with each sentence of spoken dialogue text, the corresponding dialogue role information is also entered, such as dialogue role A or dialogue role B, as well as the start time point and end time point of each spoken dialogue sentence.
The spoken dialog text content information obtained through the S11 step includes: the content of each spoken dialog text, the dialog role of each spoken dialog text, and the start time point and end time point of each spoken dialog.
Fig. 1 and 2 show illustrative text content formed after transcription by a speech recognition system, where R0 in fig. 1 and R1 in fig. 2 represent the two parties of the conversation (dialogue role A and dialogue role B) respectively, and each sentence has an explicit start and end time stamp.
S12, defining a keyword sequence rule.
A keyword sequence rule KS is defined, containing a keyword rule of arbitrary length in a format such as K1-K2-K3-…-Kn, where K1, K2, K3, …, Kn are arbitrary key words or key phrases, and the intervals between adjacent keywords may be defined as arbitrary character intervals or time intervals.
For example, the rule "Hello-welcome to call-xx bank" can describe a telephone customer service agent's greeting to a calling customer: as long as these keywords appear in the actual dialogue and the intervals between them are within the ranges set by the rule, the corresponding semantics are deemed to have appeared.
For example, the rule may be defined as follows (a code sketch expressing this rule is given after the list):
(1) K1 = "Hello", role = A;
(2) K2 = "welcome to call", role = A;
(3) K3 = "xx bank", role = A;
(4) K1-K2 are at most 5 characters apart;
(5) K2-K3 are at most 10 characters apart.
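Continuing the KSRule/ks_matches sketch from step two of the summary, rules (1)-(5) could be written roughly as below; the agent utterance is invented and the English keywords stand in for the originals.

```python
# Rules (1)-(3) give the keywords; rules (4)-(5) give the maximum gaps.
# The role = A restriction from step S13 would limit matching to the agent's
# utterances; here the text is already a single agent utterance.
greeting_rule = KSRule(
    keywords=["Hello", "welcome to call", "xx bank"],
    max_gaps=[5, 10],
)
agent_text = "Hello, welcome to call xx bank, how can I help you today"
print(ks_matches(greeting_rule, agent_text))  # True for this toy utterance
```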
S13, limiting the dialogue roles correspondingly attributed to the keyword.
The keywords K1, K2, …, Kn may each be restricted to the dialogue role A or B to which they belong. This reflects the descriptive power required for dialogue scenarios, allows semantic information to be extracted for a specific role, and provides an analysis method for information that can only be confirmed through mutual acknowledgement by both parties of the dialogue.
For example, with the rule "Your investment carries risk; you are aware of this, right" (agent, dialogue role A) - "know" (customer, dialogue role B), it can be confirmed both that the agent provided the risk disclosure and that the customer acknowledged receiving this information.
For example, the rule may be defined as follows (a cross-role matching sketch is given after the list):
(6) K1 = "Your investment carries risk; you are aware of this, right", dialogue role = A;
(7) K2 = "know", dialogue role = B;
(8) K1-K2 are at most 15 characters apart.
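A minimal sketch of cross-role matching for rules (6)-(8) follows; the transcript content, the shortened keyword stand-ins and the helper name find_end are all illustrative, and a full implementation would reuse the interval logic of step S12.

```python
# Toy transcript: (role, text) pairs in dialogue order; content is invented.
transcript = [
    ("A", "your investment carries risk you are aware of this right"),
    ("B", "yes I know"),
]
k1, k1_role = "aware of this", "A"   # shortened stand-in for rule (6)'s K1
k2, k2_role = "know", "B"            # rule (7)'s K2
max_gap = 15                         # rule (8): at most 15 characters apart

# Flatten into (role, global_start, text) spans over one character axis.
flat, offset = [], 0
for role, text in transcript:
    flat.append((role, offset, text))
    offset += len(text) + 1          # +1 for a separator between sentences

def find_end(keyword, wanted_role, not_before):
    """Global end offset of `keyword` spoken by `wanted_role`, or -1."""
    for role, begin, text in flat:
        if role != wanted_role:
            continue
        pos = text.find(keyword)
        while pos >= 0:
            if begin + pos >= not_before:
                return begin + pos + len(keyword)
            pos = text.find(keyword, pos + 1)
    return -1

k1_end = find_end(k1, k1_role, 0)
k2_end = find_end(k2, k2_role, k1_end)
matched = (k1_end >= 0 and k2_end >= 0
           and (k2_end - len(k2)) - k1_end <= max_gap)
print(matched)  # True for this toy transcript
```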
S14, achieving semantic logic expression and analysis.
By defining multiple keyword sequence rules KS1, KS2, …, KSn and combining them, with the keyword rule as the unit, using and, or, not and brackets (), a logical operation Model is formed. If all the keywords of a given KSn appear in the actual spoken dialogue text in the defined order and with intervals that satisfy the definition of that KSn, the value of that KSn is True; conversely, if the keyword sequence defined by a KSn does not appear in a dialogue, or appears but does not satisfy the interval definition, the value of that KSn is False. By evaluating the truth value (true or false) of every keyword sequence rule KS against a dialogue text and substituting these values into the Model's logical formula, the Model's truth value (true or false) for that dialogue text is computed, which indicates whether the semantics expressed by the logical Model appear in the text dialogue. Through this keyword-level semantics and the logical-expression modelling built on top of it, comprehensive logic-rule expression of multiple semantics is achieved.
Further, semantic feature expressions based on dialogue roles can also be used as units and combined with and, or, not and brackets to form semantic logic rules for dialogue scenarios. For example, if a customer must be informed of several notices and must indicate understanding of each, a keyword rule of arbitrary length in the K1-K2-K3-…-Kn format is established for each notification statement and each customer response. Suppose rule KS1 represents informing the customer of the risk, and rule KS2 represents explicit confirmation that the transaction is definite and irrevocable; combining the two with the and relationship, i.e. "KS1 and KS2", yields a logical expression that, when true, indicates that the points of attention of the agent (dialogue role A) and the customer (dialogue role B) are consistent and the semantics hold; otherwise they do not. For complex dialogue content and business rule scenarios, the logical expression may include multiple rules KS1, KS2, …, KSn and form multi-level complex logic through and/or/not/brackets, such as ((KS1 and KS2) or (KS3 and not KS4)) and (KS5 or KS6), satisfying the definition of any business semantic rule. A short usage example continuing the earlier evaluation sketch is given below.
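As a short usage example continuing the evaluate_model sketch from the summary, the "KS1 and KS2" model for this scenario could be evaluated as follows, with the truth values assumed to come from matching the two rules against the dialogue text.

```python
# Both rules matched in the dialogue (assumed result of the matching step):
print(evaluate_model("KS1 and KS2", {"KS1": True, "KS2": True}))   # True
# The customer never confirmed, so the combined semantics do not hold:
print(evaluate_model("KS1 and KS2", {"KS1": True, "KS2": False}))  # False
```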
The foregoing description is merely illustrative of the present invention and is not intended to limit it; any modifications, equivalents and alternatives made within the spirit and principles of the invention fall within its scope.

Claims (6)

1. A semantic logic expression and analysis method based on spoken dialogue features, characterized in that, through a definition scheme of keyword sequence rules KS combined with logical expressions formed from and, or, not and brackets, the method realizes comprehensive logic-rule expression of multiple semantic expressions and thereby extracts and analyzes the information features of arbitrary spoken dialogue texts, the method specifically comprising the following steps:
step one, recognizing a spoken dialogue, and obtaining the text information content of the spoken dialogue;
the speech recognition system recognizes the spoken dialogue speech to obtain the spoken dialogue text information content, entered in dialogue order;
step two, defining a keyword sequence rule;
defining a keyword sequence rule KS which comprises a keyword sequence rule of arbitrary length in the format K1-K2-K3-…-Kn, wherein K1, K2, …, Kn are arbitrary keywords, and the interval between keywords may be set as a character interval or time interval of arbitrary length;
step three, limiting dialogue roles to which the keyword belongs;
respectively restricting the dialogue roles to which the keywords K1, K2, …, Kn belong;
fourthly, realizing semantic logic expression and analysis;
by defining multiple keyword sequence rules KS1, KS2, …, KSn and, with the keyword sequence rule KS as the unit, combining them using and, or, not and brackets to form a logical relation expression, any logic-rule expression of any semantic expression is realized, and information-feature extraction and analysis is then performed on arbitrary spoken dialogue texts; in step four, the logical relation expression is evaluated as follows: when all the keywords of a given KSn appear in the spoken dialogue text information content in the defined order and with intervals satisfying the definition of that KSn, the value of that KSn is true; otherwise, if the keyword sequence defined by a KSn does not appear in a piece of spoken dialogue text information content, or appears but does not satisfy the interval definition, the value of that KSn is false; by evaluating the true or false values of all keyword sequence rules KS against a dialogue text and substituting them into the model's logical formula, the model's true or false value for that dialogue text is calculated, thereby determining whether the semantics expressed by the logical model appear in the text dialogue information content.
2. The method for semantic logic expression and analysis based on spoken dialog features of claim 1, wherein in the first step, the spoken dialog text information content includes: the content of each spoken dialog text, the dialog role of each spoken dialog text, and the start time point and end time point of each spoken dialog.
3. The semantic logic expression and analysis method based on spoken dialog features according to claim 2, wherein the dialog roles include dialog role a, dialog role B.
4. The method for semantic logic expression and analysis based on spoken dialog features of claim 1, wherein in the second step, the keyword sequence rule of arbitrary length is a spoken dialog text feature expression form for an arbitrary plurality of keywords within an arbitrary interval range.
5. The semantic logic expression and analysis method based on spoken dialog features according to claim 1, wherein in the third step, the dialogue roles A or B are used to distinguish spoken dialogue features in a spoken dialogue scenario.
6. The method for expressing and analyzing semantic logic based on spoken dialog features according to claim 1, wherein in the fourth step, the semantic feature expression based on spoken dialog roles is also used as a unit, and the units are combined by using and, or, not and brackets, so as to form a semantic logic rule for dialog scenes.
CN201811054040.9A 2018-09-11 2018-09-11 Semantic logic expression and analysis method based on spoken language dialogue features Active CN110895657B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811054040.9A CN110895657B (en) 2018-09-11 2018-09-11 Semantic logic expression and analysis method based on spoken language dialogue features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811054040.9A CN110895657B (en) 2018-09-11 2018-09-11 Semantic logic expression and analysis method based on spoken language dialogue features

Publications (2)

Publication Number Publication Date
CN110895657A CN110895657A (en) 2020-03-20
CN110895657B true CN110895657B (en) 2023-05-26

Family

ID=69784798

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811054040.9A Active CN110895657B (en) 2018-09-11 2018-09-11 Semantic logic expression and analysis method based on spoken language dialogue features

Country Status (1)

Country Link
CN (1) CN110895657B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2698105A1 (en) * 2007-08-31 2009-03-05 Microsoft Corporation Identification of semantic relationships within reported speech
CA2914398A1 (en) * 2007-08-31 2009-03-05 Microsoft Technology Licensing, Llc Identification of semantic relationships within reported speech
CN105912607A (en) * 2016-04-06 2016-08-31 普强信息技术(北京)有限公司 Grammar rule based classification method
CN107679042A (en) * 2017-11-15 2018-02-09 北京灵伴即时智能科技有限公司 A kind of multi-layer dialog analysis method towards Intelligent voice dialog system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
张婕; 王丹力. Multi-channel semantic fusion based on context. Computer Engineering and Design. 2007, (01), full text. *
张彦楠; 黄小红; 马严; 丛群. Recording-text classification method based on deep learning. Journal of Zhejiang University (Engineering Science). (07), full text. *
赵阳洋; 王振宇; 王佩; 杨添; 张睿; 尹凯. A survey of task-oriented dialogue systems. Chinese Journal of Computers. (10), full text. *

Also Published As

Publication number Publication date
CN110895657A (en) 2020-03-20

Similar Documents

Publication Publication Date Title
CN108962282B (en) Voice detection analysis method and device, computer equipment and storage medium
CN110266899B (en) Client intention identification method and customer service system
CN109388701A (en) Minutes generation method, device, equipment and computer storage medium
US8135579B2 (en) Method of analyzing conversational transcripts
US11954140B2 (en) Labeling/names of themes
CN109065052B (en) Voice robot
CN111128241A (en) Intelligent quality inspection method and system for voice call
CN110377726B (en) Method and device for realizing emotion recognition of natural language text through artificial intelligence
CN111865752A (en) Text processing device, method, electronic device and computer readable storage medium
CN112818109A (en) Intelligent reply method, medium, device and computing equipment for mail
CN114818649A (en) Service consultation processing method and device based on intelligent voice interaction technology
CN117441165A (en) Reducing bias in generating language models
CN110895657B (en) Semantic logic expression and analysis method based on spoken language dialogue features
CN109408621B (en) Dialogue emotion analysis method and system
CN112270166A (en) Method for quickly making and creating 5G message
CN116501844A (en) Voice keyword retrieval method and system
CN107645613A (en) The method and apparatus of service diverting search
CN115831125A (en) Speech recognition method, device, equipment, storage medium and product
CN116303951A (en) Dialogue processing method, device, electronic equipment and storage medium
CN113505606B (en) Training information acquisition method and device, electronic equipment and storage medium
KR102370437B1 (en) Virtual Counseling System and counseling method using the same
CN114328867A (en) Intelligent interruption method and device in man-machine conversation
CN113744742A (en) Role identification method, device and system in conversation scene
CN114171063A (en) Real-time telephone traffic customer emotion analysis assisting method and system
CN111177343A (en) Method and system for automatically constructing medical and American inquiry guide logic

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 201112 Room 204, South Building, hatching Building 1, no.1588, LIANHANG Road, Minhang District, Shanghai

Patentee after: Huijie (Shanghai) Technology Co.,Ltd.

Address before: 200234 Building 2B, No. 398 Tianlin Road, Xuhui District, Shanghai

Patentee before: Huijie (Shanghai) Technology Co.,Ltd.