CN111862279A - Interaction processing method and device


Info

Publication number
CN111862279A
Authority
CN
China
Prior art keywords
interactive
customer service
data
virtual customer
expression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010719783.4A
Other languages
Chinese (zh)
Inventor
李德强
高园
罗涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd (ICBC)
Priority to CN202010719783.4A
Publication of CN111862279A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 - Animation
    • G06T 13/20 - 3D [Three Dimensional] animation
    • G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 - Querying
    • G06F 16/332 - Query formulation
    • G06F 16/3329 - Natural language query formulation or dialogue systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/35 - Clustering; Classification
    • G06F 16/353 - Clustering; Classification into predefined classes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 - Animation
    • G06T 13/20 - 3D [Three Dimensional] animation
    • G06T 13/205 - 3D [Three Dimensional] animation driven by audio data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 - Classification, e.g. identification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 - Facial expression recognition
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/441 - Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
    • H04N 21/4415 - Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card using biometric characteristics of the user, e.g. by voice recognition or fingerprint scanning
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 - End-user applications
    • H04N 21/478 - Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788 - Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides an interaction processing method. The method includes: obtaining interaction data between a user and a virtual customer service; determining, according to the interaction data, expression features suitable for the virtual customer service; generating an interactive response video of the virtual customer service according to the interaction data and the expression features; and displaying the interactive response video to the user. The disclosure also provides an interaction processing apparatus, an electronic device, and a computer-readable storage medium.

Description

Interaction processing method and device
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, and in particular, to an interaction processing method and apparatus.
Background
With the rapid development of artificial intelligence technology, intelligent services are being applied in an increasingly wide range of scenarios. In such services, a virtual customer service can interact with users in place of human agents and handle part of their service requests.
In the course of developing the disclosed concept, the inventors found that in intelligent services of the related art, the virtual customer service interacts with the user only through text and machine-synthesized voice. This mode of interaction makes the virtual customer service feel unrealistic and leads to low interaction efficiency and a poor interaction experience.
Disclosure of Invention
One aspect of the present disclosure provides an interaction processing method. The method includes: obtaining interaction data between a user and a virtual customer service; determining, according to the interaction data, expression features suitable for the virtual customer service; generating an interactive response video of the virtual customer service according to the interaction data and the expression features; and displaying the interactive response video to the user.
Optionally, determining, according to the interaction data, the expression features suitable for the virtual customer service includes: determining, according to the interaction data, an emotion feature matching the interaction data; and determining, according to the emotion feature, the expression features suitable for the virtual customer service.
Optionally, obtaining the interaction data between the user and the virtual customer service includes obtaining interactive text data between the user and the virtual customer service. Determining, according to the interaction data, the emotion feature matching the interaction data then includes: performing feature extraction on the interactive text data to obtain at least one text feature of the interactive text data; and determining, using a preset emotion recognition model, the emotion feature matching the at least one text feature as the emotion feature matching the interaction data.
Optionally, the determining, according to the emotional feature, an expression feature applicable to the virtual customer service includes determining, according to a preset association relationship between the emotional feature and an expression category, a target expression category associated with the emotional feature; and determining the pose parameters of at least one preset facial feature point of the virtual customer service according to the target expression category to obtain the expression features.
Optionally, the generating an interactive response video of the virtual customer service according to the interactive data and the expression features includes generating an interactive response audio according to interactive response data in the interactive data; controlling the virtual customer service to generate a target expression according to the expression characteristics; and generating the interactive response video according to the interactive response audio, the target expression and the preset initial video of the virtual customer service.
Optionally, the obtaining of the interaction data between the user and the virtual customer service includes obtaining interaction image data between the user and the virtual customer service; determining emotion characteristics matched with the interactive data according to the interactive data, wherein the emotion characteristics comprise the steps of performing characteristic extraction on the interactive image data to obtain at least one image characteristic in the interactive image data; and determining the emotion characteristics matched with the at least one image characteristic by using a preset emotion recognition model to serve as the emotion characteristics matched with the interactive data.
Optionally, the method further includes determining a sound feature suitable for the virtual customer service according to the emotion feature, where the sound feature includes at least one of volume, speaking speed, and timbre. In this case, generating the interactive response video of the virtual customer service according to the interaction data and the expression features includes generating the interactive response video of the virtual customer service according to the interaction data, the expression features, and the sound feature.
Another aspect of the present disclosure provides an interaction processing apparatus. The apparatus includes an obtaining module, a first processing module, a second processing module, and a display module. The obtaining module is configured to obtain interaction data between a user and a virtual customer service; the first processing module is configured to determine, according to the interaction data, expression features suitable for the virtual customer service; the second processing module is configured to generate an interactive response video of the virtual customer service according to the interaction data and the expression features; and the display module is configured to display the interactive response video to the user.
Optionally, the first processing module includes a first processing sub-module, configured to determine, according to the interaction data, an emotional characteristic that matches the interaction data; and the second processing submodule is used for determining the expression characteristics suitable for the virtual customer service according to the emotion characteristics.
Optionally, the obtaining module includes a first obtaining sub-module, configured to obtain interactive text data between the user and the virtual customer service; the first processing sub-module comprises a first processing unit, which is used for extracting the characteristics of the interactive text data to obtain at least one text characteristic in the interactive text data; and the second processing unit is used for determining the emotion characteristics matched with the at least one text characteristic by using a preset emotion recognition model to serve as the emotion characteristics matched with the interactive data.
Optionally, the second processing sub-module includes a third processing unit, configured to determine, according to a preset association relationship between the emotional feature and the expression category, a target expression category associated with the emotional feature; and the fourth processing unit is used for determining the pose parameters of at least one preset facial feature point of the virtual customer service according to the target expression category so as to obtain the expression features.
Optionally, the second processing module includes a third processing sub-module, configured to generate an interactive response audio according to interactive response data in the interactive data; the fourth processing submodule is used for controlling the virtual customer service to generate a target expression according to the expression characteristics; and the fifth processing submodule is used for generating the interactive response video according to the interactive response audio, the target expression and the preset initial video of the virtual customer service.
Optionally, the obtaining module includes a second obtaining sub-module, configured to obtain interactive image data between the user and the virtual customer service; the first processing sub-module comprises a fifth processing unit, and is used for performing feature extraction on the interactive image data to obtain at least one image feature in the interactive image data; and the sixth processing unit is used for determining the emotion characteristics matched with the at least one image characteristic by using a preset emotion recognition model to serve as the emotion characteristics matched with the interactive data.
Optionally, the apparatus further includes a third processing module, configured to determine a sound feature suitable for the virtual customer service according to the emotion feature, where the sound feature includes at least one of volume, speaking speed, and timbre. The second processing module includes a sixth processing sub-module, configured to generate the interactive response video of the virtual customer service according to the interaction data, the expression features, and the sound feature.
Another aspect of the present disclosure provides an electronic device including: one or more processors; a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the methods of embodiments of the present disclosure.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions that, when executed, implement the method of embodiments of the present disclosure.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which,
fig. 1 schematically shows a system architecture of an interaction processing method and apparatus according to an embodiment of the present disclosure;
FIG. 2A schematically illustrates a flow diagram of an interaction processing method according to an embodiment of the present disclosure;
FIG. 2B schematically illustrates a schematic diagram of virtual customer service, in accordance with an embodiment of the disclosure;
FIG. 3A schematically illustrates a flow diagram of an interaction processing method according to another embodiment of the present disclosure;
FIG. 3B schematically illustrates a schematic diagram of a three-dimensional face model for virtual customer service, in accordance with an embodiment of the disclosure;
FIG. 4 schematically shows a flow chart of an interaction processing method according to a further embodiment of the present disclosure;
FIG. 5 schematically shows a block diagram of an interaction processing arrangement according to an embodiment of the present disclosure; and
fig. 6 schematically shows a block diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It is to be understood that such description is merely illustrative and not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, operations, and/or components, but do not preclude the presence or addition of one or more other features, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have a alone, B alone, C alone, a and B together, a and C together, B and C together, and/or A, B, C together, etc.).
Some block diagrams and/or flow diagrams are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, which execute via the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks. The techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). In addition, the techniques of this disclosure may take the form of a computer program product on a computer-readable storage medium having instructions stored thereon for use by or in connection with an instruction execution system.
Embodiments of the present disclosure provide an interaction processing method and an apparatus capable of being used to perform the interaction processing method, which may include, for example, the following operations. The method comprises the steps of obtaining interactive data between a user and a virtual customer service, determining expression characteristics suitable for the virtual customer service according to the interactive data, generating an interactive response video of the virtual customer service according to the interactive data and the expression characteristics, and displaying the interactive response video to the user.
Fig. 1 schematically shows a system architecture of an interaction processing method and apparatus according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, the system architecture 100 includes at least one terminal (a plurality of which are shown, e.g., terminals 101, 102, 103) and a server 104 (which may also be a server cluster, not shown). In the system architecture 100, a user interacts with a virtual customer service through a terminal (e.g., terminals 101, 102, 103), a server 104 obtains interaction data between the user and the virtual customer service, then determines an expression feature suitable for the virtual customer service according to the interaction data, then generates an interactive response video of the virtual customer service according to the interaction data and the expression feature, and displays the interactive response video to the user through the terminal.
The present disclosure will be described in detail below with reference to the drawings and specific embodiments.
Fig. 2A schematically shows a flow chart of an interaction processing method according to an embodiment of the present disclosure.
As shown in fig. 2A, the method may include operations S210 to S240.
In operation S210, interaction data between a user and a virtual customer service is acquired.
In the embodiment of the present disclosure, specifically, the virtual customer service is an artificial-intelligence customer service implemented with artificial intelligence technology, and can provide services such as consultation, chatting, and business handling for the user. The virtual customer service may be a virtual human or a digital human, and its appearance may be a realistic two-dimensional or three-dimensional human figure, a cartoon figure, a stylized two-dimensional figure, and the like. FIG. 2B schematically shows a virtual customer service according to an embodiment of the present disclosure; as shown in FIG. 2B, it is a digital human with the appearance of a real human customer service agent. The user involved in the embodiments of the present disclosure may be any party that interacts with the virtual customer service.
In terms of data content, the interaction data between the user and the virtual customer service may include interaction trigger data input when the user initiates an interaction with the virtual customer service, and may also include the interactive response data of the virtual customer service for that interaction trigger data. The interaction trigger data is data input by the user to initiate a service request to the virtual customer service; it may be, for example, a question entered by the user or the business the user requests to handle. The interactive response data is the response data provided by the virtual customer service for the interaction trigger data; it may be, for example, answers to questions, consultation results, product introductions, business handling procedures, or business handling results provided to the user.
In terms of data form, the interaction data between the user and the virtual customer service may include interactive audio data, interactive text data, and interactive image data.
Obtaining the interaction data between the user and the virtual customer service may include obtaining the interaction trigger data and/or the interactive response data between the user and the virtual customer service. The interaction trigger data may be received passively through a receiver of the virtual customer service, or obtained actively by a detector of the virtual customer service that monitors the user's input.
Then, in operation S220, according to the interaction data, an expressive feature suitable for the virtual customer service is determined.
In the embodiment of the present disclosure, specifically, determining the expression features suitable for the virtual customer service according to the interaction data may include determining, according to the interaction data, an emotion feature matching the interaction data, and then determining, according to the emotion feature, the expression features suitable for the virtual customer service. The emotion feature indicates the emotion category of the virtual customer service, and the expression features indicate the expressive actions of the virtual customer service. The emotion categories that the emotion feature can indicate may include, for example, normal, happy, angry, startled, surprised, fearful, disgusted, and sad. The expressive actions that the expression features can indicate may include, for example, raising the mouth corners, lowering the mouth corners, frowning, lowering the eye corners, widening the eyes, and the like.
Determining, according to the interaction data, expression features suited to the virtual customer service enables the virtual customer service to interact with the user using a variety of facial expressions. This helps improve the realism of the virtual customer service and improves the efficiency and effect of the interaction between the virtual customer service and the user.
Next, in operation S230, an interactive response video of the virtual customer service is generated according to the interactive data and the expression features.
In the embodiment of the present disclosure, optionally, generating the interactive response video of the virtual customer service according to the interaction data and the expression features may include: generating interactive response audio according to the interactive response data in the interaction data; controlling the virtual customer service to produce a target expression according to the expression features; and generating the interactive response video according to the interactive response audio, the target expression, and the preset initial video of the virtual customer service. Generating the interactive response audio according to the interactive response data in the interaction data may include generating the interactive response audio according to the interactive response data and preset sound features. The sound features may include, for example, volume, speaking speed, timbre, pitch range, and the like.
The interactive response data is response data determined, according to the acquired interaction trigger data of the user, for display to the user; for example, it may be answer data determined for display according to the user's question text. The interactive response data may include audio data and text data. Illustratively, the interactive response data may be fixed audio recorded in advance for a specific interaction scenario: when the similarity between the acquired interaction trigger data of the user and preset interaction trigger data is greater than a preset threshold, the prerecorded fixed audio is used as the interactive response data. As another example, the interactive response data may be answer text data associated with the question text, determined from an answer text library according to the user's question text.
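As a rough illustration of the selection logic just described, the sketch below matches the user's trigger text against preset triggers and falls back to an answer-text lookup. The similarity measure (difflib), the threshold value, and the preset entries are assumptions made for the example, not details taken from the patent.

```python
# Hypothetical sketch of choosing interactive response data: reuse prerecorded
# audio when the trigger text is similar enough to a preset trigger, otherwise
# look up associated answer text. Measure, threshold and entries are examples.
import difflib

PRESET_RESPONSES = {  # preset trigger text -> prerecorded audio file (illustrative)
    "How do I reset my password?": "audio/reset_password.wav",
}

def select_response(trigger_text: str, answer_library: dict, threshold: float = 0.8):
    for preset, audio_file in PRESET_RESPONSES.items():
        similarity = difflib.SequenceMatcher(None, trigger_text, preset).ratio()
        if similarity > threshold:
            return {"type": "audio", "data": audio_file}
    # Fall back to answer text associated with the question in an answer library.
    return {"type": "text", "data": answer_library.get(trigger_text, "")}
```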
When the virtual customer service is controlled to produce the target expression according to the determined expression features, the pose parameters of at least one preset facial feature point of the virtual customer service may be adjusted according to the determined expression features to control the virtual customer service to produce the target expression. Alternatively, a facial expression associated with the determined expression features may be obtained from a database or a server to obtain the target expression of the virtual customer service.
When the interactive response video is generated from the interactive response audio, the target expression, and the preset initial video of the virtual customer service, the pose parameters of at least one preset facial feature point in a three-dimensional face model of the virtual customer service are adjusted according to the target expression, so that the virtual customer service performs the target expression and a rendered face image of the virtual customer service is obtained; the rendered face image, the interactive response audio, and the preset initial video of the virtual customer service are then fused to obtain the interactive response video. The preset initial video may be a prerecorded video of a real human agent.
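The three steps of operation S230 can be pictured as the following minimal sketch. All function and type names here (synthesize_speech, render_expression_frames, ResponseVideo) are placeholders invented for illustration; the patent does not specify concrete interfaces, rendering tools, or muxing tools.

```python
# Minimal, self-contained sketch of S230 with placeholder implementations.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ResponseVideo:          # hypothetical container for the fused result
    frames: List[bytes]
    audio: bytes

def synthesize_speech(text: str, volume: float, speed: float, timbre: str) -> bytes:
    """Stand-in for TTS: generate the interactive response audio (step 1)."""
    return f"<audio v={volume} s={speed} t={timbre}>{text}".encode()

def render_expression_frames(pose_params: Dict[str, List[float]], n: int) -> List[bytes]:
    """Stand-in for driving the 3D face model's feature points (step 2)."""
    return [repr(pose_params).encode()] * n

def generate_response_video(response_text: str,
                            pose_params: Dict[str, List[float]],
                            initial_frames: List[bytes],
                            sound: dict) -> ResponseVideo:
    """Fuse rendered face, response audio and the prerecorded initial video (step 3)."""
    audio = synthesize_speech(response_text, **sound)
    face_frames = render_expression_frames(pose_params, len(initial_frames))
    fused = [bg + fg for bg, fg in zip(initial_frames, face_frames)]  # stand-in for image fusion
    return ResponseVideo(frames=fused, audio=audio)
```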
Next, in operation S240, the interactive response video is displayed to the user.
In the embodiment of the present disclosure, specifically, by displaying the interactive response video to the user, the interactive response data is presented to the user in video form. Because the interactive response video contains the target expression of the virtual customer service, and the target expression is adapted to the acquired interaction data, this helps make the interaction with the virtual customer service more flexible and engaging and improves the realism of the virtual customer service.
According to the embodiment of the present disclosure, after the interaction data between the user and the virtual customer service is acquired, the expression features suitable for the virtual customer service are determined according to the interaction data; the interactive response video of the virtual customer service is then generated according to the interaction data and the expression features, and the generated interactive response video is displayed to the user. Determining the expression features suitable for the virtual customer service according to the acquired interaction data, and then generating and displaying the interactive response video according to the determined expression features, effectively enhances the realism of the virtual customer service and makes its interactions more engaging and flexible.
Fig. 3A schematically shows a flow chart of an interaction processing method according to another embodiment of the present disclosure.
As shown in fig. 3A, the method may include operations S310 to S340, operation S230, and operation S240.
In operation S310, interactive text data between a user and a virtual customer service is acquired.
In the embodiment of the present disclosure, specifically, obtaining the interactive text data between the user and the virtual customer service includes obtaining interaction trigger text input by the user, and may also include obtaining the determined interactive response text of the virtual customer service. Illustratively, answer text data determined for a user question is obtained.
Next, in operation S320, feature extraction is performed on the interactive text data to obtain at least one text feature in the interactive text data.
In the embodiment of the present disclosure, specifically, before feature extraction is performed on the interactive text data, the interactive text data may be preprocessed. The preprocessing may include deletion and replacement: deletion removes words or phrases that appear repeatedly in the interactive text data, and replacement substitutes placeholders for words or phrases that are irrelevant to emotion.
Feature extraction on the interactive text data to obtain the at least one text feature may be implemented with an existing algorithm. For example, features may be extracted from the interactive text data using the pre-trained language model BERT to obtain a set of sequence vectors as the at least one text feature. Alternatively, feature extraction on the interactive text data may be carried out with algorithms such as TF-IDF, TextRank, the CBOW model, or the skip-gram model.
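For concreteness, the following sketch extracts token-level text features with a pretrained BERT model. The patent names BERT but no specific toolkit; this example assumes the Hugging Face transformers library and the bert-base-chinese checkpoint.

```python
# Text feature extraction with a pretrained BERT encoder (assumed toolkit:
# Hugging Face transformers; assumed checkpoint: bert-base-chinese).
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
encoder = BertModel.from_pretrained("bert-base-chinese")

def extract_text_features(interactive_text: str) -> torch.Tensor:
    inputs = tokenizer(interactive_text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = encoder(**inputs)
    # One vector per token of the interactive text: the "sequence vector set".
    return outputs.last_hidden_state.squeeze(0)
```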
Next, in operation S330, an emotion feature matching at least one text feature is determined using a preset emotion recognition model.
In the embodiment of the present disclosure, specifically, the emotion recognition model is an artificial neural network model with emotion recognition capability that is trained in advance. Training the emotion recognition model may include: obtaining a large amount of sample interactive text data; performing feature extraction on the sample interactive text data to obtain its text features; labeling those text features with emotion features; and inputting the labeled text features into an artificial neural network model for training to obtain the emotion recognition model.
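A toy version of this training procedure is sketched below. The patent describes training an artificial neural network; as a deliberately simplified stand-in, this sketch fits a scikit-learn logistic-regression classifier over mean-pooled text-feature vectors, and the emotion label set is only an example.

```python
# Simplified stand-in for training the emotion recognition model: a linear
# classifier over pooled text features instead of the patent's neural network.
import numpy as np
from sklearn.linear_model import LogisticRegression

EMOTIONS = ["normal", "happy", "angry", "surprised", "fearful", "disgusted", "sad"]

def pool(text_features) -> np.ndarray:
    """Mean-pool a sequence of token vectors into one sample vector."""
    return np.asarray(text_features).mean(axis=0)

def train_emotion_recognizer(sample_features, sample_labels) -> LogisticRegression:
    X = np.stack([pool(f) for f in sample_features])   # labelled sample text features
    y = np.asarray(sample_labels)                      # indices into EMOTIONS
    model = LogisticRegression(max_iter=1000)
    model.fit(X, y)
    return model
```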
After the interactive text data is obtained, feature extraction is performed on it to obtain at least one text feature of the interactive text data. The at least one text feature is input into the trained emotion recognition model, which outputs the emotion feature matching the at least one text feature, thereby giving the emotion feature matching the interaction data.
Alternatively, the emotion recognition model may also employ a Conditional Random Field (CRF) model or a Long Short-Term Memory network (LSTM) model. Specifically, when the emotion feature matched with the at least one text feature is determined by using a preset emotion recognition model, the at least one text feature may be input into the preset emotion recognition model, then an emotion feature probability distribution set for the at least one text feature is output, and then the emotion feature with the highest probability in the emotion feature probability distribution set is determined by using a maximum score algorithm and is used as the emotion feature matched with the at least one text feature, so as to obtain the emotion feature matched with the interactive data.
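Continuing the training sketch above, the matching step can then be written as follows: the model yields a probability distribution over emotion categories, and the highest-probability category is taken as the emotion feature matched with the interaction data.

```python
# Inference sketch (reuses pool() and EMOTIONS from the training sketch above).
import numpy as np

def recognize_emotion(model, text_features):
    x = pool(text_features).reshape(1, -1)
    probs = model.predict_proba(x)[0]     # probability distribution over emotions
    best = int(np.argmax(probs))          # maximum-score selection
    return EMOTIONS[best], float(probs[best])
```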
Next, in operation S340, according to the emotional characteristics, an expressive characteristic suitable for the virtual customer service is determined.
In the embodiment of the disclosure, specifically, determining the expression features suitable for the virtual customer service according to the emotion features may include determining a target expression category associated with the emotion features according to a preset association relationship between the emotion features and the expression categories; and then, determining the pose parameters of at least one preset facial feature point of the virtual customer service according to the target expression category to obtain the expression features.
Different expression categories correspond to different expressive actions, and different expressive actions can reflect different emotion categories; since the emotion features indicate the emotion category, there is a preset association between emotion features and expression categories. After the emotion feature suitable for the virtual customer service is determined according to the acquired interactive text data, the target expression category suitable for the virtual customer service is determined according to this preset association. The target expression category is realized through a target expressive action, so the action parameters with which the virtual customer service performs that action are determined according to the target expression category; that is, the pose parameters of at least one preset facial feature point of the virtual customer service are determined, and the expression features are obtained.
By adjusting the pose parameters of the preset facial feature points of the virtual customer service, the virtual customer service can be controlled to perform a specific expressive action or a specific mouth-shape action. The preset facial feature points may include, for example, feature points of the eyes, mouth, nose, eyebrows, and face contour. The preset facial feature points may be determined by inputting sample images containing different expression categories into an artificial neural network model and using the model to identify the facial feature points that can represent each expression category.
Optionally, when the pose parameters of the at least one preset facial feature point of the virtual customer service are determined according to the target expression category, the six degrees of freedom of each preset feature point in the three-dimensional face model of the virtual customer service may be determined according to the target expression category to obtain the pose parameters. FIG. 3B schematically shows a three-dimensional face model for a virtual customer service according to an embodiment of the present disclosure. As shown in FIG. 3B, the three-dimensional face model is a face model constructed from the standard expression of the virtual customer service and contains a plurality of preset facial feature points. The six degrees of freedom of each preset facial feature point in the three-dimensional face model are determined according to the target expression category, yielding the pose parameters of the preset facial feature points with which the virtual customer service performs the target expressive action.
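The two-stage mapping just described (emotion feature to expression category, then expression category to feature-point pose parameters) could look like the sketch below. The association table and all numeric six-degree-of-freedom values are illustrative assumptions, not values taken from the patent.

```python
# Illustrative emotion -> expression category -> pose-parameter mapping.
from typing import Dict, List

EMOTION_TO_EXPRESSION: Dict[str, str] = {
    "happy": "smile",      # e.g. mouth corners raised
    "sad": "frown",        # e.g. mouth corners lowered
    "normal": "neutral",
}

# Pose of a preset facial feature point: (x, y, z, roll, pitch, yaw).
EXPRESSION_TO_POSES: Dict[str, Dict[str, List[float]]] = {
    "smile":   {"mouth_corner_left":  [0.0, 0.3, 0.0, 0.0, 0.0, 0.0],
                "mouth_corner_right": [0.0, 0.3, 0.0, 0.0, 0.0, 0.0]},
    "frown":   {"mouth_corner_left":  [0.0, -0.2, 0.0, 0.0, 0.0, 0.0],
                "brow_inner_left":    [0.0, -0.1, 0.0, 0.0, 0.0, 0.0]},
    "neutral": {},
}

def expression_features_for(emotion: str) -> Dict[str, List[float]]:
    category = EMOTION_TO_EXPRESSION.get(emotion, "neutral")
    return EXPRESSION_TO_POSES[category]
```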
Optionally, a sound feature suitable for the virtual customer service is determined according to the determined emotion feature, and the interactive response video of the virtual customer service is then generated according to the interaction data, the expression features, and the sound feature, where the sound feature includes at least one of volume, speaking speed, and timbre. Determining sound features matched with the emotion feature and using them when generating the interactive response video improves the realism of the virtual customer service from both the expression and the sound angle, makes the interaction between the virtual customer service and the user more engaging and flexible, and effectively improves the interaction effect.
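Analogously, the emotion-to-sound mapping might be a small lookup table such as the one below; the categories and numeric values are assumptions made for the sketch.

```python
# Illustrative mapping from emotion features to sound features.
SOUND_BY_EMOTION = {
    "happy":  {"volume": 1.1, "speed": 1.1, "timbre": "bright"},
    "sad":    {"volume": 0.9, "speed": 0.9, "timbre": "soft"},
    "normal": {"volume": 1.0, "speed": 1.0, "timbre": "neutral"},
}

def sound_features_for(emotion: str) -> dict:
    return SOUND_BY_EMOTION.get(emotion, SOUND_BY_EMOTION["normal"])
```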
Next, in operation S230, an interactive response video of the virtual customer service is generated according to the interactive data and the expression features.
Next, in operation S240, the interactive response video is displayed to the user.
Operations S230 and S240 are similar to the previous embodiments and are not described herein.
According to the embodiments of the present disclosure, after the interactive text data between the user and the virtual customer service is obtained, feature extraction is first performed on the interactive text data to obtain at least one text feature; the emotion feature matching the at least one text feature is then determined using the preset emotion recognition model, from which the expression features matching the interaction data are obtained; and the interactive response video of the virtual customer service is then generated according to the interaction data and the expression features and displayed to the user. Determining the emotion features indicated by the interactive text data with a preset emotion recognition model makes it possible to determine the expression features suitable for the virtual customer service automatically and effectively, which suits intelligent service scenarios and helps raise the degree of intelligence of the interaction between the virtual customer service and the user. Generating the interactive response video according to the interactive text data and the determined expression features also improves how well the facial expressions of the virtual customer service fit the content of the interaction data, which helps effectively improve the realism of the virtual customer service and, in turn, the efficiency and effect of its interaction with the user.
Fig. 4 schematically shows a flow chart of an interaction processing method according to a further embodiment of the present disclosure.
As shown in fig. 4, the method may include operations S410 to S430, operation S340, operation S230, and operation S240.
In operation S410, interactive image data between a user and a virtual customer service is acquired.
In the embodiment of the present disclosure, specifically, obtaining the interactive image data between the user and the virtual customer service may include obtaining picture data or video data generated while the user interacts with the virtual customer service. When interactive video data between the user and the virtual customer service is acquired, frame extraction is performed on the video data to obtain discrete image frames of the interactive video data.
Next, in operation S420, feature extraction is performed on the interactive image data to obtain at least one image feature in the interactive image data.
In the embodiment of the present disclosure, specifically, feature extraction is performed on the interactive image data; in particular, feature extraction may be performed on the user's face image in the interactive image data to obtain at least one image feature of the user's face image. Feature extraction on the interactive image data may be implemented with an ASM or AAM algorithm.
Illustratively, when feature extraction is performed on the interactive image data to obtain at least one image feature, feature points may be extracted from the user's face image in the interactive image data using an ASM algorithm model to obtain at least one facial feature point of the user; the pose parameters of those feature points in the user's face image are then determined to obtain at least one image feature of the face image. The at least one image feature can indicate the shape of the user's face contour, eyebrows, eyes, nose, and mouth.
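The patent names ASM/AAM for feature-point extraction; as a concrete, commonly available stand-in, the sketch below uses dlib's 68-point facial landmark predictor (an assumed dependency; the model file is dlib's published shape_predictor_68_face_landmarks.dat).

```python
# Facial feature-point extraction with dlib's 68-point landmark predictor,
# used here as a stand-in for the ASM/AAM extraction described in the patent.
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_image_features(image: np.ndarray) -> np.ndarray:
    faces = detector(image, 1)
    if not faces:
        return np.empty((0, 2), dtype=np.float32)
    shape = predictor(image, faces[0])
    # One (x, y) position per facial feature point: contour, brows, eyes, nose, mouth.
    return np.array([[p.x, p.y] for p in shape.parts()], dtype=np.float32)
```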
Next, in operation S430, an emotion feature matching at least one image feature is determined using a preset emotion recognition model.
In the embodiment of the present disclosure, specifically, before the emotion feature matching the at least one image feature is determined using the preset emotion recognition model, the interactive image data may be aligned according to the at least one image feature. Specifically, when the user's face image in the interactive image data shows a profile or an oblique face, an alignment operation is performed on the face image to obtain a frontal face image; the alignment may be implemented by affine transformation or interpolation-based alignment. Determining the user's emotion during the interaction from the frontal face image improves the accuracy of emotion recognition, and thus how well the determined expression features of the virtual customer service match the user, which improves the interaction effect between the virtual customer service and the user.
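One way to realize the affine alignment step is sketched below with OpenCV: a similarity transform is estimated from the detected landmark positions to a frontal reference layout and then applied to the whole image. The frontal reference landmarks are an assumed input, not something the patent specifies.

```python
# Sketch of affine face alignment with OpenCV (assumed dependency: opencv-python).
import cv2
import numpy as np

def align_face(image: np.ndarray,
               landmarks: np.ndarray,
               reference_landmarks: np.ndarray) -> np.ndarray:
    # Estimate rotation/scale/translation mapping detected landmarks onto the
    # frontal reference layout, then warp the whole face image with it.
    matrix, _ = cv2.estimateAffinePartial2D(landmarks, reference_landmarks)
    height, width = image.shape[:2]
    return cv2.warpAffine(image, matrix, (width, height))
```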
Next, in operation S340, according to the emotional characteristics, an expressive characteristic suitable for the virtual customer service is determined.
Next, in operation S230, an interactive response video of the virtual customer service is generated according to the interactive data and the expression features.
Next, in operation S240, the interactive response video is displayed to the user.
Operation S340, operation S230, and operation S240 are similar to the previous embodiments, and are not described herein again.
According to the embodiments of the present disclosure, the interactive image data between the user and the virtual customer service is first obtained, and feature extraction is performed on it to obtain at least one image feature; the emotion feature matching the at least one image feature is then determined using a preset emotion recognition model to obtain the emotion feature matching the interaction data; the expression features suitable for the virtual customer service are then determined according to the emotion feature; and the interactive response video of the virtual customer service is generated according to the interaction data and the expression features and displayed to the user. In this technical solution, the expression features of the virtual customer service are determined from the image features of the interactive image data. Because the interactive image data contains the user's face image, and its image features indicate the user's emotion during the interaction, the determined expression features of the virtual customer service are adapted to the user's emotion, which helps improve the realism of the virtual customer service and the efficiency and effect of its interaction with the user.
Fig. 5 schematically shows a block diagram of an interaction processing device according to an embodiment of the present disclosure.
As shown in fig. 5, the interaction processing apparatus 500 includes an obtaining module 501, a first processing module 502, a second processing module 503, and a display module 504. The interactive processing device may perform the method described above with reference to the method embodiment, and is not described herein again.
Specifically, the obtaining module 501 is configured to obtain interaction data between a user and a virtual customer service; the first processing module 502 is configured to determine an expression feature suitable for the virtual customer service according to the interaction data; the second processing module 503 is configured to generate an interactive response video of the virtual customer service according to the interactive data and the expression features; a display module 504 for displaying the interactive response video to the user.
According to the embodiment of the present disclosure, after the interaction data between the user and the virtual customer service is acquired, the expression features suitable for the virtual customer service are determined according to the interaction data; the interactive response video of the virtual customer service is then generated according to the interaction data and the expression features, and the generated interactive response video is displayed to the user. Determining the expression features suitable for the virtual customer service according to the acquired interaction data, and then generating and displaying the interactive response video according to the determined expression features, effectively enhances the realism of the virtual customer service and makes its interactions more engaging and flexible.
As an alternative embodiment, the first processing module includes a first processing sub-module, configured to determine, according to the interaction data, an emotional characteristic that matches the interaction data; and the second processing submodule is used for determining the expression characteristics suitable for the virtual customer service according to the emotion characteristics.
As an alternative embodiment, the obtaining module includes a first obtaining sub-module, configured to obtain interactive text data between the user and the virtual customer service. The first processing sub-module comprises a first processing unit and a second processing unit, wherein the first processing unit is used for extracting the characteristics of the interactive text data to obtain at least one text characteristic in the interactive text data; and the second processing unit is used for determining the emotion characteristics matched with the at least one text characteristic by using a preset emotion recognition model so as to obtain the emotion characteristics matched with the interactive data.
As an optional embodiment, the second processing sub-module includes a third processing unit, configured to determine, according to a preset association relationship between the emotional feature and the expression category, a target expression category associated with the emotional feature; and the fourth processing unit is used for determining the pose parameters of at least one preset facial feature point of the virtual customer service according to the target expression category so as to obtain the expression features.
As an optional embodiment, the second processing module includes a third processing sub-module, configured to generate an interactive response audio according to interactive response data in the interactive data; the fourth processing submodule is used for controlling the virtual customer service to generate a target expression according to the expression characteristics; and the fifth processing submodule is used for generating an interactive response video according to the interactive response audio, the target expression and the preset initial video of the virtual customer service.
As an optional embodiment, the obtaining module further includes a second obtaining sub-module, configured to obtain interactive image data between the user and the virtual customer service. The first processing submodule also comprises a fifth processing unit, which is used for extracting the characteristics of the interactive image data to obtain at least one image characteristic in the interactive image data; and the sixth processing unit is used for determining the emotion characteristics matched with the at least one image characteristic by using a preset emotion recognition model so as to obtain the emotion characteristics matched with the interactive data.
As an optional embodiment, the apparatus further includes a third processing module, configured to determine a sound feature suitable for the virtual customer service according to the emotion feature, where the sound feature includes at least one of volume, speaking speed, and timbre. The second processing module further includes a sixth processing sub-module for generating the interactive response video of the virtual customer service according to the interaction data, the expression features, and the sound feature.
Any of the modules according to the embodiments of the present disclosure, or at least part of the functionality of any of them, may be implemented in one module, and any one or more of them may be split into multiple modules. Any one or more of the modules according to the embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system in a package, or an Application Specific Integrated Circuit (ASIC), or by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or by any one of, or a suitable combination of, software, hardware, and firmware. Alternatively, one or more of the modules according to the embodiments of the present disclosure may be implemented at least partly as computer program modules which, when executed, perform the corresponding functions.
For example, any number of the obtaining module 501, the first processing module 502, the second processing module 503 and the display module 504 may be combined and implemented in one module, or any one of the modules may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the obtaining module 501, the first processing module 502, the second processing module 503 and the display module 504 may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or implemented by any one of three implementations of software, hardware and firmware, or implemented by a suitable combination of any of them. Alternatively, at least one of the obtaining module 501, the first processing module 502, the second processing module 503 and the display module 504 may be at least partially implemented as a computer program module, which when executed may perform a corresponding function.
Fig. 6 schematically shows a block diagram of an electronic device according to an embodiment of the disclosure. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device 600 includes a processor 610, a computer-readable storage medium 620. The electronic device 600 may perform a method according to an embodiment of the present disclosure.
In particular, the processor 610 may comprise, for example, a general purpose microprocessor, an instruction set processor and/or related chip set and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. The processor 610 may also include onboard memory for caching purposes. The processor 610 may be a single processing module or a plurality of processing modules for performing different actions of a method flow according to embodiments of the disclosure.
Computer-readable storage medium 620, for example, may be a non-volatile computer-readable storage medium, specific examples including, but not limited to: magnetic storage devices, such as magnetic tape or Hard Disk Drives (HDDs); optical storage devices, such as compact disks (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and so on.
The computer-readable storage medium 620 may include a computer program 621, which computer program 621 may include code/computer-executable instructions that, when executed by the processor 610, cause the processor 610 to perform a method according to an embodiment of the disclosure, or any variation thereof.
The computer program 621 may comprise, for example, computer program code organized into computer program modules. For example, in an example embodiment, the code in the computer program 621 may include one or more program modules, for example modules 621A, 621B, and so on. It should be noted that the division and the number of modules are not fixed; those skilled in the art may use suitable program modules or combinations of program modules according to the actual situation, so that when these program modules are executed by the processor 610, the processor 610 can carry out the method according to the embodiments of the present disclosure or any variation thereof.
According to an embodiment of the present disclosure, at least one of the obtaining module 501, the first processing module 502, the second processing module 503 and the display module 504 may be implemented as a computer program module described with reference to fig. 6, which, when executed by the processor 610, may implement the respective operations described above.
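By way of a non-limiting sketch only, the program modules 621A, 621B and so on can be pictured as ordinary callables that the processor executes in turn; the module contents and names below are assumptions for illustration, not the patent's implementation:

from typing import Callable, Dict, List

def module_621a(context: Dict) -> None:
    # e.g. acquire the interaction data and place it in a shared context
    context["interaction_data"] = {"user_text": "hello"}

def module_621b(context: Dict) -> None:
    # e.g. derive an expression feature from the previously acquired data
    context["expression"] = "smile"

program_621: List[Callable[[Dict], None]] = [module_621a, module_621b]

context: Dict = {}
for program_module in program_621:  # the processor runs each program module in order
    program_module(context)
print(context)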
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments, or may exist separately without being assembled into that apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the present disclosure.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
While the present disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the present disclosure as defined by the appended claims and their equivalents. Accordingly, the scope of the present disclosure should not be limited to the above-described embodiments, but should be defined by the appended claims as well as their equivalents.

Claims (10)

1. An interaction processing method, comprising:
acquiring interactive data between a user and a virtual customer service;
determining expression characteristics suitable for the virtual customer service according to the interaction data;
generating an interactive response video of the virtual customer service according to the interactive data and the expression characteristics;
displaying the interactive response video to a user.
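As a non-limiting illustration only, the four steps of claim 1 could be sketched in Python roughly as follows; every helper below is a hypothetical stub, not the claimed implementation:

def acquire_interaction_data(session: dict) -> dict:
    return {"user_text": session.get("last_message", "")}

def determine_expression_features(data: dict) -> dict:
    # stand-in for emotion recognition plus expression-feature selection
    return {"expression": "smile" if "thanks" in data["user_text"].lower() else "neutral"}

def generate_response_video(data: dict, expression: dict) -> bytes:
    # stand-in for audio synthesis, facial animation and video composition
    return f"video<{expression['expression']}>".encode()

def display_to_user(session: dict, video: bytes) -> None:
    print(f"showing {len(video)} bytes of response video to {session['user_id']}")

def interaction_processing(session: dict) -> None:
    data = acquire_interaction_data(session)            # acquire interaction data
    expression = determine_expression_features(data)    # determine expression characteristics
    video = generate_response_video(data, expression)   # generate interactive response video
    display_to_user(session, video)                     # display the video to the user

interaction_processing({"user_id": "u1", "last_message": "thanks a lot"})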
2. The method of claim 1, wherein the determining, according to the interaction data, of the expression characteristics suitable for the virtual customer service comprises:
according to the interaction data, determining emotion characteristics matched with the interaction data;
and determining expression characteristics suitable for the virtual customer service according to the emotion characteristics.
3. The method of claim 2, wherein,
the acquiring of the interaction data between the user and the virtual customer service includes:
acquiring interactive text data between a user and a virtual customer service;
the determining, according to the interaction data, of the emotion characteristics matched with the interaction data comprises:
performing feature extraction on the interactive text data to obtain at least one text feature in the interactive text data;
and determining the emotion characteristics matched with the at least one text characteristic by using a preset emotion recognition model to serve as the emotion characteristics matched with the interaction data.
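A minimal, purely hypothetical sketch of claim 3 might extract token features from the interactive text and pass them to a stand-in "emotion recognition model" (here just a keyword lookup; a real system would use a trained classifier):

from typing import List

def extract_text_features(interactive_text: str) -> List[str]:
    # toy feature extraction: lower-cased word tokens
    return interactive_text.lower().split()

EMOTION_LEXICON = {"angry": "anger", "refund": "anger", "thanks": "joy", "great": "joy"}

def recognise_emotion(features: List[str]) -> str:
    for token in features:
        if token in EMOTION_LEXICON:
            return EMOTION_LEXICON[token]
    return "neutral"

features = extract_text_features("I want a refund right now")
print(recognise_emotion(features))  # -> "anger"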
4. The method of claim 2, wherein the determining, according to the emotion characteristics, of the expression characteristics suitable for the virtual customer service comprises:
determining a target expression category associated with the emotion characteristics according to a preset association relationship between the emotion characteristics and the expression categories;
and determining the pose parameters of at least one preset facial feature point of the virtual customer service according to the target expression category so as to obtain the expression features.
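Claim 4 above can be pictured, again as a non-limiting sketch with made-up values, as a two-stage table lookup: a preset mapping from emotion characteristics to an expression category, followed by per-category pose parameters for preset facial feature points:

EMOTION_TO_EXPRESSION = {"joy": "smile", "anger": "apologetic", "neutral": "neutral"}

# pose parameters per facial feature point (all values are made up for illustration)
EXPRESSION_POSES = {
    "smile":      {"mouth_corner_left": 0.8, "mouth_corner_right": 0.8, "brow_inner": 0.1},
    "apologetic": {"mouth_corner_left": 0.2, "mouth_corner_right": 0.2, "brow_inner": 0.6},
    "neutral":    {"mouth_corner_left": 0.5, "mouth_corner_right": 0.5, "brow_inner": 0.3},
}

def expression_features_for(emotion: str) -> dict:
    category = EMOTION_TO_EXPRESSION.get(emotion, "neutral")  # target expression category
    return EXPRESSION_POSES[category]                         # pose parameters of feature points

print(expression_features_for("joy"))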
5. The method of claim 1, wherein the generating an interactive response video of the virtual customer service from the interaction data and the expressive features comprises:
generating interactive response audio according to interactive response data in the interactive data;
controlling the virtual customer service to generate a target expression according to the expression characteristics;
and generating the interactive response video according to the interactive response audio, the target expression and the preset initial video of the virtual customer service.
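One hypothetical way to read claim 5, sketched with stub functions standing in for real text-to-speech, facial animation and video composition components:

def synthesize_audio(response_text: str) -> bytes:
    return f"audio<{response_text}>".encode()               # stand-in for a TTS engine

def apply_expression(initial_video: bytes, pose: dict) -> bytes:
    return initial_video + f"|expr{sorted(pose)}".encode()  # stand-in for facial animation

def compose_video(animated_video: bytes, audio: bytes) -> bytes:
    return animated_video + b"|" + audio                    # stand-in for muxing audio and video

initial_video = b"preset_initial_video"
pose = {"mouth_corner_left": 0.8, "mouth_corner_right": 0.8}
response_video = compose_video(apply_expression(initial_video, pose),
                               synthesize_audio("Happy to help!"))
print(response_video)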
6. The method of claim 2, wherein,
the acquiring of the interaction data between the user and the virtual customer service includes:
acquiring interactive image data between a user and a virtual customer service;
the determining, according to the interaction data, of the emotion characteristics matched with the interaction data comprises:
performing feature extraction on the interactive image data to obtain at least one image feature in the interactive image data;
and determining the emotion characteristics matched with the at least one image characteristic by using a preset emotion recognition model to serve as the emotion characteristics matched with the interaction data.
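Claim 6 parallels claim 3 but starts from interactive image data; a deliberately trivial sketch (the landmark values and the decision rule are made up, and a real system would use a trained recognition model) could look like this:

def extract_image_feature(landmarks: dict) -> float:
    # toy image feature: how far the mouth corners sit above the mouth centre
    return landmarks["mouth_corner_y"] - landmarks["mouth_center_y"]

def recognise_emotion_from_image(mouth_curvature: float) -> str:
    return "joy" if mouth_curvature > 0.02 else "neutral"  # placeholder decision rule

landmarks = {"mouth_corner_y": 0.55, "mouth_center_y": 0.50}
print(recognise_emotion_from_image(extract_image_feature(landmarks)))  # -> "joy"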
7. The method of any of claims 1 to 6, further comprising:
determining sound characteristics suitable for the virtual customer service according to the emotion characteristics, wherein the sound characteristics comprise at least one of sound volume, speaking speed and pitch;
generating an interactive response video of the virtual customer service according to the interactive data and the expression characteristics, wherein the interactive response video comprises:
and generating an interactive response video of the virtual customer service according to the interactive data, the expression characteristics and the sound characteristics.
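Finally, claim 7 adds sound characteristics; as an assumed, non-limiting sketch, these can be read as a per-emotion table of voice parameters (volume, speaking speed, pitch) handed to the audio generation step:

EMOTION_TO_VOICE = {
    "joy":     {"volume": 0.8, "rate": 1.1, "pitch": 1.05},
    "anger":   {"volume": 0.6, "rate": 0.9, "pitch": 0.95},  # calmer, slower, lower reply
    "neutral": {"volume": 0.7, "rate": 1.0, "pitch": 1.00},
}

def voice_features_for(emotion: str) -> dict:
    return EMOTION_TO_VOICE.get(emotion, EMOTION_TO_VOICE["neutral"])

print(voice_features_for("anger"))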
8. An interaction processing apparatus comprising:
the acquisition module is used for acquiring interactive data between the user and the virtual customer service;
the first processing module is used for determining expression characteristics suitable for the virtual customer service according to the interaction data;
the second processing module is used for generating an interactive response video of the virtual customer service according to the interactive data and the expression characteristics;
and the display module is used for displaying the interactive response video to the user.
9. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-7.
10. A computer-readable storage medium storing computer-executable instructions for implementing the method of any one of claims 1 to 7 when executed.
CN202010719783.4A 2020-07-23 2020-07-23 Interaction processing method and device Pending CN111862279A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010719783.4A CN111862279A (en) 2020-07-23 2020-07-23 Interaction processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010719783.4A CN111862279A (en) 2020-07-23 2020-07-23 Interaction processing method and device

Publications (1)

Publication Number Publication Date
CN111862279A true CN111862279A (en) 2020-10-30

Family

ID=72949434

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010719783.4A Pending CN111862279A (en) 2020-07-23 2020-07-23 Interaction processing method and device

Country Status (1)

Country Link
CN (1) CN111862279A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110874137A (en) * 2018-08-31 2020-03-10 阿里巴巴集团控股有限公司 Interaction method and device
CN110647636A (en) * 2019-09-05 2020-01-03 深圳追一科技有限公司 Interaction method, interaction device, terminal equipment and storage medium
CN110688911A (en) * 2019-09-05 2020-01-14 深圳追一科技有限公司 Video processing method, device, system, terminal equipment and storage medium
CN110688008A (en) * 2019-09-27 2020-01-14 贵州小爱机器人科技有限公司 Virtual image interaction method and device
CN111223498A (en) * 2020-01-10 2020-06-02 平安科技(深圳)有限公司 Intelligent emotion recognition method and device and computer readable storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112233690A (en) * 2020-12-21 2021-01-15 北京远鉴信息技术有限公司 Double recording method, device, terminal and storage medium
CN112233690B (en) * 2020-12-21 2021-03-16 北京远鉴信息技术有限公司 Double recording method, device, terminal and storage medium
CN112785667A (en) * 2021-01-25 2021-05-11 北京有竹居网络技术有限公司 Video generation method, device, medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination