CN111723577A - Method, device and equipment for interrogation based on preset interrogation mode - Google Patents

Method, device and equipment for interrogation based on preset interrogation mode

Info

Publication number
CN111723577A
CN111723577A (application number CN202010407248.5A)
Authority
CN
China
Prior art keywords
emotion
interrogation
preset
expected
session
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010407248.5A
Other languages
Chinese (zh)
Inventor
张宗奇 (Zhang Zongqi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202010407248.5A priority Critical patent/CN111723577A/en
Publication of CN111723577A publication Critical patent/CN111723577A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393Score-carding, benchmarking or key performance indicator [KPI] analysis

Abstract

The embodiments of the application disclose a method, an apparatus, a device, and a storage medium for interrogation based on a preset interrogation mode, belonging to the technical field of interrogation applications. The method comprises: starting the interrogation mode and interrogating the interrogated individual; respectively obtaining the emotion indexes of the interrogated individual for the current session topic in the current interrogation session topic set at three points, namely when the interrogator asks the question, when the interrogated individual answers, and when the interrogator responds; judging whether an expected emotion appears in the emotion indexes and whether the intensity of the expected emotion reaches an emotion intensity threshold; if an expected emotion appears in the emotion indexes and its intensity reaches the emotion intensity threshold, obtaining the next session topic of the current session topic set in the interrogation session table for interrogation; otherwise, obtaining the current session topic in the current interrogation session topic set for interrogation, and completing the interrogation based on an iteration model and a cycle model. The method and apparatus help to improve the success rate and efficiency of interrogation.

Description

Method, device and equipment for interrogation based on preset interrogation mode
Technical Field
The application relates to the technical field of interrogation applications, and in particular, to a method, an apparatus, and a device for interrogation based on a preset interrogation mode.
Background
Interrogation is a criminal investigation measure whose purpose is to collect case information more quickly. In the traditional interrogation process, the sessions during interrogation are scattered and unordered, and the interrogation topics are unstructured, which increases both the difficulty and the workload of interrogation.
Traditional interrogation relies mainly on the subjective judgment of the interrogator. Because the interrogator often misjudges the current reaction of the interrogated person, or fails to adopt the correct interrogation countermeasure, the interrogation frequently proves ineffective, which seriously reduces its success rate and efficiency. In the prior art, therefore, interrogation efficiency and success rate suffer from the lack of a corresponding auxiliary method.
Disclosure of Invention
An object of the embodiments of the present application is to provide a method, an apparatus, a device, and a storage medium for interrogation based on a preset interrogation mode, so as to solve the prior-art problem that interrogation efficiency and success rate are reduced by the lack of a corresponding auxiliary method.
In order to solve the above technical problem, an embodiment of the present application provides a method for performing interrogation based on a preset interrogation mode, which adopts the following technical scheme:
a method for interrogation based on a preset interrogation mode comprises the following steps:
starting a preset interrogation mode, obtaining a session topic from a preset interrogation session topic set based on a preset interrogation session table, and interrogating an interrogated individual, wherein the preset interrogation mode comprises: a religious trial mode;
respectively obtaining, based on a preset multi-modal emotion analysis model, the emotion indexes of the interrogated individual when the interrogator asks the current session topic in the current interrogation session topic set, when the interrogated individual answers, and when the interrogator responds;
respectively judging, based on a preset expected-emotion change table and a preset emotion intensity table, whether an expected emotion appears in the emotion indexes of the interrogated individual when the interrogator asks the current session topic, when the interrogated individual answers, and when the interrogator responds, and judging whether the intensity of the expected emotion reaches a preset emotion intensity threshold;
if the expected emotion appears in the emotion indexes and the intensity of the expected emotion reaches a preset emotion intensity threshold value, acquiring a next session theme after the current session theme in the current interrogation session theme set in a preset interrogation session table for interrogation; otherwise, continuing to acquire the current session theme in the current interrogation session theme set for interrogation, continuing to execute the steps 202, 203 and 204 based on a preset iteration model until a preset condition when a next session theme after the current session theme in the current interrogation session theme set is started is met, and stopping iteration;
and, based on a preset cyclic interrogation model, cyclically executing steps 202, 203, and 204: when interrogation of the last session topic in the current interrogation session topic set of the preset interrogation session table is completed, an expected emotion appears in the emotion indexes, and the intensity of the expected emotion reaches the preset emotion intensity threshold, obtaining the first session topic of the next interrogation session topic set for interrogation; and when interrogation of the last session topic in the last interrogation session topic set is completed, an expected emotion appears in the emotion indexes, and the intensity of the expected emotion reaches the preset emotion intensity threshold, ending the interrogation.
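The iteration and cycle logic described in the steps above can be sketched in Python. Every name here (`run_interrogation`, `get_emotion_indexes`, and so on) is an illustrative assumption; the patent specifies no implementation.

```python
# Illustrative sketch of the iteration/cycle control flow described above.
# All function and variable names are assumptions, not from the patent.

def expected_emotion_reached(indexes, expected_emotions, threshold):
    """True if any observed emotion is expected and at threshold intensity."""
    return any(
        emotion in expected_emotions and intensity >= threshold
        for emotion, intensity in indexes.items()
    )

def run_interrogation(session_table, get_emotion_indexes,
                      expected_emotions, threshold):
    """Walk every topic set and every topic in order; repeat a topic until
    the expected emotion appears at sufficient intensity (iteration model),
    then advance to the next topic (cycle model)."""
    transcript = []
    for topic_set in session_table:          # ordered interrogation topic sets
        for topic in topic_set:              # ordered session topics in a set
            while True:                      # iteration model: repeat topic
                indexes = get_emotion_indexes(topic)  # {emotion: intensity}
                transcript.append((topic, indexes))
                if expected_emotion_reached(indexes, expected_emotions, threshold):
                    break                    # cycle model: advance to next topic
    return transcript
```

In this sketch, `get_emotion_indexes` stands in for the whole multi-modal emotion analysis step, and the expected-emotion set and threshold stand in for the expected-change and intensity tables.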
Further, in the method for interrogation based on the preset interrogation mode, the preset interrogation session table includes:
the system comprises a plurality of preset interrogation session topic sets which are classified according to a preset classification format, wherein each interrogation session topic set comprises: a plurality of preset conversation themes.
Further, in the method for interrogation based on the preset interrogation mode, the preset multi-modal emotion analysis model includes:
based on a preset emotion analysis sub-model, obtaining a plurality of multivariate correlation index values of the interrogated individual, and generating a multivariate index set;
obtaining emotion comprehensive indexes of the elements in the multivariate index set in a cluster mode, and generating an emotion comprehensive index set;
wherein the cluster comprises a plurality of identical preset emotion algorithm models, and each preset emotion algorithm model processes the elements in the multivariate index set to obtain one emotion comprehensive index.
Further, in the method for interrogation based on the preset interrogation mode, respectively obtaining the emotion indexes of the interrogated individual when the interrogator asks the current session topic in the current interrogation session topic set, when the interrogated individual answers, and when the interrogator responds comprises:
obtaining, based on a preset frequency judgment model, the number of occurrences of each distinct emotion in the emotion comprehensive index set, keeping the emotions whose number of occurrences is not 0, and generating a frequency index set;
judging the intensity index of each element in the frequency index set based on a preset emotion intensity judgment model;
and taking the elements of the frequency index set, together with the intensity index of each element, as the emotion indexes of the interrogated individual when the interrogator asks the current session topic, when the interrogated individual answers, and when the interrogator responds.
Further, in the method for interrogation based on the preset interrogation mode, respectively judging whether an expected emotion appears in the emotion indexes of the interrogated individual when the interrogator asks the current session topic, when the interrogated individual answers, and when the interrogator responds includes:
obtaining the elements in the frequency index set, comparing them with the expected emotions in the preset expected-emotion change table, and judging whether an expected emotion appears among the elements of the frequency index set;
wherein the preset expected-emotion change table records, for each session topic, a plurality of preset expected emotions for each of the three points: when the interrogator asks, when the interrogated individual answers, and when the interrogator responds.
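As an illustration, the expected-emotion lookup just described might look like the sketch below. The table layout, topic names, and phase labels are assumptions; the patent states only that each session topic has preset expected emotions for each of the three situations.

```python
# Hypothetical layout of the expected-emotion change table: one entry per
# (session topic, phase), holding the emotions expected in that phase.
# All keys and values here are invented for illustration.

EXPECTED_EMOTION_TABLE = {
    ("topic_1", "interrogator_asks"):     {"anxiety"},
    ("topic_1", "individual_answers"):    {"fear", "anxiety"},
    ("topic_1", "interrogator_responds"): {"fear"},
}

def expected_emotions_present(topic, phase, frequency_index_set):
    """Return the emotions that are both observed (in the frequency index
    set) and expected for this topic and phase."""
    expected = EXPECTED_EMOTION_TABLE.get((topic, phase), set())
    return expected & set(frequency_index_set)
```

A nonempty return value corresponds to the judgment "an expected emotion appears among the elements of the frequency index set".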
Further, in the method for interrogation based on a preset interrogation mode, the determining whether the intensity of the expected emotion reaches a preset emotion intensity threshold includes:
if an expected emotion appears among the elements of the frequency index set, obtaining the frequency value corresponding to that expected emotion, comparing it with the frequency value corresponding to the emotion intensity level preset for that emotion in the preset emotion intensity table, and determining the intensity of the expected emotion;
comparing the intensity of the expected emotion with a preset emotion intensity threshold, and judging whether the intensity of the expected emotion reaches the preset emotion intensity threshold;
wherein the preset emotion intensity table assigns emotion intensity levels in advance based on the frequency value corresponding to each emotion.
Further, in the method for interrogation based on the preset interrogation mode, the preset iterative model includes:
the preset iteration model is started when, in all three situations, namely when the interrogator asks the session topic, when the interrogated individual answers, and when the interrogator responds, either no expected emotion appears in the emotions of the interrogated individual, or an expected emotion appears but its intensity does not reach the preset emotion intensity threshold.
Further, in the method for interrogation based on the preset interrogation mode, the preset cyclic interrogation model includes:
the preset cyclic interrogation model is started when, in at least one of the three situations, namely when the interrogator asks the session topic, when the interrogated individual answers, or when the interrogator responds, the emotion of the interrogated individual reaches the expected emotion and the intensity of the expected emotion reaches the preset emotion intensity threshold.
In order to solve the above technical problem, an embodiment of the present application further provides a device for interrogation based on a preset interrogation mode, and the following technical scheme is adopted:
an apparatus for interrogation based on a predetermined interrogation mode, comprising:
the interrogation starting module is configured to start a preset interrogation mode, obtain a session topic from a preset interrogation session topic set based on a preset interrogation session table, and interrogate the interrogated individual;
the emotion index obtaining module is configured to respectively obtain, based on a preset multi-modal emotion analysis model, the emotion indexes of the interrogated individual when the interrogator asks the current session topic in the current interrogation session topic set, when the interrogated individual answers, and when the interrogator responds;
the system comprises an expected judgment module, a judging module and a judging module, wherein the expected judgment module is used for respectively judging whether the emotion indexes of the audited single body have expected emotion when the current conversation theme is asked by the auditor, the audited single body answers and the auditor answers and judging whether the intensity of the expected emotion reaches a preset emotion intensity threshold value or not based on a preset emotion expected change table and a preset emotion intensity table;
the iterative interrogation module is configured to obtain, from the preset interrogation session table, the session topic that follows the current session topic in the current interrogation session topic set for interrogation if an expected emotion appears in the emotion indexes and its intensity reaches the preset emotion intensity threshold; otherwise, to continue obtaining the current session topic in the current interrogation session topic set for interrogation and, based on a preset iteration model, to continue executing steps 202, 203, and 204 until the preset condition for starting the next session topic in the current interrogation session topic set is met, at which point the iteration stops;
and the cyclic interrogation module is configured to cyclically execute steps 202, 203, and 204 based on a preset cyclic interrogation model: when interrogation of the last session topic in the current interrogation session topic set of the preset interrogation session table is completed, an expected emotion appears in the emotion indexes, and the intensity of the expected emotion reaches the preset emotion intensity threshold, the module obtains the first session topic of the next interrogation session topic set for interrogation; when interrogation of the last session topic in the last interrogation session topic set is completed under the same conditions, it terminates the interrogation.
In order to solve the above technical problem, an embodiment of the present application further provides a computer device, which adopts the following technical solutions:
a computer device includes a memory and a processor, where the memory stores a computer program, and the processor implements, when executing the computer program, the steps of the method for interrogation based on a preset interrogation mode provided in the embodiment of the present application.
In order to solve the above technical problem, an embodiment of the present application further provides a non-transitory computer-readable storage medium, which adopts the following technical scheme:
a non-transitory computer-readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements the steps of a method for interrogation based on a preset interrogation mode as set forth in an embodiment of the present application.
Compared with the prior art, the embodiment of the application mainly has the following beneficial effects:
The embodiments of the application disclose a method, an apparatus, a device, and a storage medium for interrogation based on a preset interrogation mode. The interrogated individual is interrogated by starting the interrogation mode; the emotion indexes of the interrogated individual for the current session topic in the current interrogation session topic set are respectively obtained when the interrogator asks the question, when the interrogated individual answers, and when the interrogator responds; whether an expected emotion appears in the emotion indexes, and whether the intensity of the expected emotion reaches an emotion intensity threshold, are judged; if an expected emotion appears and its intensity reaches the emotion intensity threshold, a session topic in the next preset interrogation session topic set in the interrogation session table is obtained for interrogation; otherwise, the current session topic in the current interrogation session topic set continues to be obtained for interrogation, and the interrogation is completed based on the iteration model and the cycle model.
Drawings
In order to more clearly illustrate the solution of the present application, the drawings needed for describing the embodiments of the present application will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and that other drawings can be obtained by those skilled in the art without inventive effort.
FIG. 1 is a diagram of an exemplary system architecture to which embodiments of the present application may be applied;
FIG. 2 is a flowchart of an embodiment of a method for interrogation based on a predetermined interrogation mode in an embodiment of the present application;
FIG. 3 is a flow chart of a process of a multi-modal emotion analysis model in an embodiment of the present application;
FIG. 4 is a flow chart of the implementation of a multi-modal emotion analysis model in an embodiment of the present application;
FIG. 5 is a flowchart of determining the emotion indexes of the interrogated individual when the interrogator asks the current session topic, when the interrogated individual answers, and when the interrogator responds in an embodiment of the application;
FIG. 6 is an execution flowchart of judging the emotion indexes of the interrogated individual when the interrogator asks the current session topic, when the interrogated individual answers, and when the interrogator responds in an embodiment of the application;
FIG. 7 is a flowchart illustrating an implementation of a predetermined iterative model in an embodiment of the present application;
FIG. 8 is a flowchart illustrating the execution of a predetermined loop model in an embodiment of the present application;
FIG. 9 is a flowchart of an embodiment of a method for interrogation based on a predetermined interrogation mode in accordance with an embodiment of the present application;
FIG. 10 is a schematic structural diagram illustrating an embodiment of an apparatus for interrogation based on a predetermined interrogation mode according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of an emotion index acquisition module in an embodiment of the present application;
FIG. 12 is a schematic structural diagram of a multi-modal emotion analysis model in an embodiment of the present application;
FIG. 13 is a schematic structural diagram of an anticipation determination module in an embodiment of the present application;
fig. 14 is a schematic structural diagram of an embodiment of a computer device in an embodiment of the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions. The terms "first," "second," and the like in the description and claims of this application or in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various communication client applications installed thereon, such as a web browser application, a shopping application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background server providing support for pages displayed on the terminal devices 101, 102, 103.
It should be noted that, the method for performing interrogation based on the preset interrogation mode provided in the embodiment of the present application is generally executed by a server/terminal device, and accordingly, the apparatus for performing interrogation based on the preset interrogation mode is generally disposed in the server/terminal device.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flowchart of an embodiment of the method for interrogation based on a preset interrogation mode of the present application is shown, where the method for interrogation based on a preset interrogation mode includes the following steps:
step 201, starting a preset interrogation mode, acquiring a session theme concentrated in the preset interrogation session theme based on a preset interrogation session table, and interrogating a single body to be interrogated, wherein the preset interrogation mode includes: a clandestine trial mode.
In this embodiment, the preset interrogation session table includes a plurality of preset interrogation session topic sets classified according to a preset classification format, wherein each interrogation session topic set includes a plurality of preset session topics.
For example, the preset interrogation session table includes three parts, and the three parts serve as three preset interrogation session topic sets.
The topics of the first part aim to establish rapport with the interrogated individual and include several session topics, for example: "How long has it been since you were brought in by the investigation department? Is there anything urgent that needs handling, or anything you need help with? Feel free to say so! How is your physical condition? If you have any illness, tell us and we will think about a solution together!"
The topics of the second part communicate case information with the interrogated individual and include several session topics, for example: "Do you know how you were found to be involved in the case? Do you know the techniques, means, and channels for discovering and collecting criminal evidence? We found loopholes in your account of the criminal process. Do you know what those loopholes are, and what they mean for you?"
The topics of the third part aim to reach consensus with the interrogated individual and include several session topics, for example: "What do you expect your final outcome to be now? How do you intend to obtain the outcome you hope for? Do you think a repentant attitude can affect the outcome of the case? Do you know the main components of a judgment? Do you know which factors determine the leniency earned by a repentant attitude?"
In this embodiment, the preset interrogation mode includes a religious trial mode, which is a structured interrogation mode for criminal interrogation.
Step 202, respectively obtaining, based on a preset multi-modal emotion analysis model, the emotion indexes of the interrogated individual when the interrogator asks the current session topic in the current interrogation session topic set, when the interrogated individual answers, and when the interrogator responds.
In some embodiments of the present application, the preset multi-modal emotion analysis model includes: obtaining, based on a preset emotion analysis sub-model, a plurality of multivariate correlation index values of the interrogated individual, and generating a multivariate index set; obtaining emotion comprehensive indexes of the elements in the multivariate index set in a cluster mode, and generating an emotion comprehensive index set; wherein the cluster comprises a plurality of identical preset emotion algorithm models, and each preset emotion algorithm model processes the elements in the multivariate index set to obtain one emotion comprehensive index.
In some embodiments of the present application, respectively obtaining the emotion indexes of the interrogated individual when the interrogator asks the current session topic in the current interrogation session topic set, when the interrogated individual answers, and when the interrogator responds includes: obtaining, based on a preset frequency judgment model, the number of occurrences of each distinct emotion in the emotion comprehensive index set, keeping the emotions whose number of occurrences is not 0, and generating a frequency index set; judging the intensity index of each element in the frequency index set based on a preset emotion intensity judgment model; and taking the elements of the frequency index set, together with the intensity index of each element, as the emotion indexes of the interrogated individual at the three points above.
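The frequency-judgment step just described (count how often each emotion appears in the emotion comprehensive index set, keep only emotions with a nonzero count, and pair each with an intensity index) can be sketched as follows. The function names and the pluggable intensity model are assumptions for illustration.

```python
from collections import Counter

def build_frequency_index_set(comprehensive_index_set):
    """Count occurrences of each emotion in the emotion comprehensive index
    set. Counter naturally omits emotions that never occur (count 0)."""
    return Counter(comprehensive_index_set)

def build_emotion_indexes(comprehensive_index_set, intensity_of):
    """Return {emotion: intensity index} using a caller-supplied intensity
    judgment model (a function from occurrence count to intensity)."""
    freq = build_frequency_index_set(comprehensive_index_set)
    return {emotion: intensity_of(count) for emotion, count in freq.items()}
```

Here `intensity_of` stands in for the preset emotion intensity judgment model, so the same counting code works with either intensity-setting scheme described later in the document.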
Specifically, referring to fig. 3 and fig. 4, fig. 3 is a flowchart illustrating processing of a multi-modal emotion analysis model in an embodiment of the present application, and fig. 4 is a flowchart illustrating execution of the multi-modal emotion analysis model in the embodiment of the present application.
Fig. 3 shows that the processing steps of the multi-modal emotion analysis model in the embodiment of the present application include:
301, acquiring a plurality of multivariate correlation index values of the interrogated individual based on a preset emotion analysis sub-model, and generating a multivariate index set;
302, acquiring emotion comprehensive indexes of the elements in the multivariate index set in a cluster mode, and generating an emotion comprehensive index set;
303, acquiring the number of occurrences of each distinct emotion in the emotion comprehensive index set based on a preset frequency judgment model, keeping the emotions whose number of occurrences is not 0, and generating a frequency index set;
304, judging the intensity index of each element in the frequency index set based on a preset emotion intensity judgment model;
and 305, taking the elements of the frequency index set, together with the intensity index of each element, as the emotion indexes of the interrogated individual when the interrogator asks the current session topic, when the interrogated individual answers, and when the interrogator responds.
Fig. 4 shows the execution flow of the multi-modal emotion analysis model in the embodiment of the present application, which is as follows: start the preset emotion analysis sub-model, obtain a plurality of multivariate correlation index values of the interrogated individual, and generate a multivariate index set; obtain emotion comprehensive indexes of the elements in the multivariate index set in a cluster mode, and generate an emotion comprehensive index set; start the preset frequency judgment model, obtain the number of occurrences of each distinct emotion in the emotion comprehensive index set, keep the emotions whose number of occurrences is not 0, and generate a frequency index set; start the preset emotion intensity judgment model and judge the intensity index of each element in the frequency index set; and take the elements of the frequency index set, together with the intensity index of each element, as the emotion indexes of the interrogated individual when the interrogator asks the current session topic, when the interrogated individual answers, and when the interrogator responds.
The preset emotion analysis sub-model can obtain indexes related to emotion judgment based on different models, such as heart rate, facial expression, and voice.
The cluster mode means that the preset multi-modal emotion analysis model comprises a plurality of algorithm models for obtaining emotion comprehensive indexes; these algorithm models separately process the elements of the multivariate index set to obtain a plurality of emotion comprehensive indexes.
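The multi-modal pipeline above can be sketched as follows: an emotion-analysis sub-model yields multivariate index values (e.g. heart rate, expression, voice), and a cluster of models each turns that index set into one emotion comprehensive index. All names, the signal channels, and the stand-in model behavior are assumptions, not specified by the patent.

```python
# Illustrative sketch of the sub-model plus model-cluster structure.

def collect_multivariate_indexes(subject_signals):
    """Sub-model stand-in: keep the emotion-related channels (the channel
    names are assumptions based on the examples in the text)."""
    return {k: subject_signals[k] for k in ("heart_rate", "expression", "voice")
            if k in subject_signals}

def run_model_cluster(models, index_set):
    """Cluster step: every model processes the same multivariate index set
    and each emits one emotion comprehensive index (an emotion label)."""
    return [model(index_set) for model in models]
```

The resulting list of emotion labels is the emotion comprehensive index set that the frequency judgment model then counts.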
For example, the preset emotion types include anger, contempt, disgust, fear, joy, sadness, anxiety, passivity, urgency, and the like; the emotion comprehensive indexes are these different emotion types.
The preset frequency judgment model obtains the elements in the emotion comprehensive index set and counts the number of occurrences of each identical emotion in the set.
The preset emotion intensity judgment model judges the intensity index of each element in the frequency index set based on a preset intensity setting mode. For example, suppose the frequency index set contains two different elements, anger and fear, the total number of elements is 11, the count of anger is 9, and the count of fear is 2. One preset intensity setting mode maps the occurrence count directly to a level: a count of 1 is first-level intensity, 2 is second-level, 3 is third-level, and so on up to 10 being tenth-level intensity, while a count above 10 is full-level intensity; under this mode, anger has ninth-level intensity and fear has second-level intensity. Another preset intensity setting mode is coarser: an emotion whose occurrence count is 1 to 5 is set to the intensity "calm", and one whose count is 6 to 10 is set to "excited"; under this mode, anger in the above frequency index set has the intensity "excited" and fear the intensity "calm".
Step 203: based on the preset emotion expected change table and the preset emotion intensity table, respectively judge whether the expected emotion appears in the emotion indexes of the interrogated individual when the interrogator asks the current session topic, when the interrogated individual answers, and when the interrogator responds, and judge whether the intensity of the expected emotion reaches the preset emotion intensity threshold.
In some embodiments of the present application, judging whether the expected emotion appears in the emotion indexes of the interrogated individual when the interrogator asks the current session topic, when the interrogated individual answers, and when the interrogator responds includes: acquiring the elements in the frequency index set, comparing them with the expected emotions in the preset emotion expected change table, and judging whether any element of the frequency index set is an expected emotion. The preset emotion expected change table records, for each session topic, a plurality of preset expected emotions for each of the three phases: when the interrogator asks, when the interrogated individual answers, and when the interrogator responds.
In some embodiments of the present application, judging whether the intensity of the expected emotion reaches the preset emotion intensity threshold includes: if an expected emotion exists among the elements of the frequency index set, acquiring the frequency value corresponding to that expected emotion, comparing it with the frequency value corresponding to the emotion intensity level preset for that emotion in the preset emotion intensity table, and determining the intensity of the expected emotion; then comparing the intensity of the expected emotion with the preset emotion intensity threshold to judge whether the threshold is reached. The preset emotion intensity table records emotion intensity levels set in advance according to the frequency value corresponding to each emotion.
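A minimal sketch of the two checks above — expected-emotion lookup, then intensity against the threshold. The table contents, the numeric level encoding, and all names are assumptions made for illustration, not values from the patent.

```python
# Hypothetical tables: the expected-change table maps (topic, phase) to the set of
# expected emotions; the intensity table maps an emotion's frequency value to a level.
EXPECTED_CHANGE_TABLE = {("topic-1", "asked"): {"fear", "anxiety"}}
INTENSITY_TABLE = {"fear": {2: 2, 5: 5, 9: 9}}  # frequency value -> intensity level

def intensity_reaches_threshold(topic, phase, frequency_index, threshold):
    """frequency_index: emotion -> occurrence count, as produced by the frequency model."""
    expected = EXPECTED_CHANGE_TABLE.get((topic, phase), set())
    for emotion, count in frequency_index.items():
        if emotion in expected:                                      # expected emotion present?
            level = INTENSITY_TABLE.get(emotion, {}).get(count, count)  # look up its level
            if level >= threshold:                                   # compare with threshold
                return True
    return False

print(intensity_reaches_threshold("topic-1", "asked", {"fear": 5, "anger": 1}, 3))  # True
print(intensity_reaches_threshold("topic-1", "asked", {"anger": 7}, 3))             # False
```

When the check returns True, the method advances to the next session topic; otherwise the iteration model keeps the current topic.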
Specifically, referring to fig. 5 and fig. 6: fig. 5 is a processing flowchart, and fig. 6 a logic flowchart, of judging the emotion indexes of the interrogated individual when the interrogator asks the current session topic, when the interrogated individual answers, and when the interrogator responds in the embodiment of the present application.
Fig. 5 shows that, in the embodiment of the present application, the processing steps of judging the emotion indexes of the interrogated individual when the interrogator asks the current session topic, when the interrogated individual answers, and when the interrogator responds include:
501, acquiring the elements in the frequency index set, comparing them with the expected emotions in the preset emotion expected change table, and judging whether any element of the frequency index set is an expected emotion;
502, if an expected emotion exists among the elements of the frequency index set, acquiring the frequency value corresponding to that expected emotion, comparing it with the frequency value corresponding to the emotion intensity level preset for that emotion in the preset emotion intensity table, and determining the intensity of the expected emotion;
503, comparing the intensity of the expected emotion with the preset emotion intensity threshold, and judging whether the intensity of the expected emotion reaches the preset emotion intensity threshold.
Fig. 6 shows the execution flow of judging the emotion indexes of the interrogated individual when the interrogator asks the current session topic, when the interrogated individual answers, and when the interrogator responds in the embodiment of the present application, specifically as follows: obtain the elements in the frequency index set and compare them with the expected emotions in the preset emotion expected change table to judge whether any element is an expected emotion; if so, obtain the frequency value corresponding to the expected emotion, compare it with the frequency value corresponding to the emotion intensity level preset for that emotion in the preset emotion intensity table, and determine the intensity of the expected emotion; finally, compare that intensity with the preset emotion intensity threshold and judge whether the threshold is reached.
Step 204: if the expected emotion appears in the emotion indexes and its intensity reaches the preset emotion intensity threshold, acquire the next session topic after the current session topic in the current interrogation session topic set of the preset interrogation session table for interrogation; otherwise, continue to interrogate with the current session topic of the current interrogation session topic set, and continue to execute the steps 202, 203 and 204 based on the preset iteration model until the preset condition for starting the next session topic after the current one in the current interrogation session topic set is met, whereupon the iteration stops.
Referring specifically to fig. 7, fig. 7 is an execution flowchart of the iteration model preset in the embodiment of the present application, specifically as follows: when, in all three phases of the session topic (the interrogator asking, the interrogated individual answering, and the interrogator responding), no expected emotion appears in the interrogated individual, or an expected emotion appears but its intensity does not reach the preset emotion intensity threshold, start the iteration model; continue to obtain, based on the preset multi-modal emotion analysis model, the emotion indexes for the current session topic in the current interrogation session topic set in the three phases; judge, based on the preset emotion expected change table and the preset emotion intensity table, whether the expected emotion appears in those emotion indexes and whether its intensity reaches the preset emotion intensity threshold; and if both are reached, stop the iteration.
In some embodiments of the present application, the preset iteration model is started when, in all three phases of the session topic (the interrogator asking, the interrogated individual answering, and the interrogator responding), no expected emotion appears in the interrogated individual, or an expected emotion appears but its intensity does not reach the preset emotion intensity threshold.
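The iteration model's retry-until-condition behavior can be sketched as follows. The function names and the bounded round limit are assumptions for illustration; the patent itself describes an open-ended iteration that stops only when the preset condition is met.

```python
def interrogate_topic(topic, analyze, check_expected, max_rounds=10):
    """Repeat the current topic until the expectation check passes.
    analyze(topic) -> frequency index for the three phases (step 202);
    check_expected(topic, freq) -> bool, the judgment of steps 203-204."""
    for _ in range(max_rounds):        # bounded guard in place of an open-ended loop
        freq = analyze(topic)          # re-acquire the emotion indexes
        if check_expected(topic, freq):
            return True                # condition met: advance to the next topic
    return False                       # not met within this sketch's round limit

# Toy demo: the expected emotion only appears on the second round of questioning.
rounds = iter([{"anger": 1}, {"fear": 3}])
result = interrogate_topic(
    "topic-1",
    analyze=lambda t: next(rounds),
    check_expected=lambda t, f: f.get("fear", 0) >= 3,
)
print(result)  # True
```

The same topic is thus re-asked until the expected emotional response is observed, which is exactly the stop condition step 204 describes.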
Step 205: based on the preset cyclic interrogation model, cyclically execute the steps 202, 203 and 204 until interrogation of the last session topic in the current interrogation session topic set of the preset interrogation session table is completed with the expected emotion appearing in the emotion indexes and its intensity reaching the preset emotion intensity threshold, then acquire the first session topic in the next interrogation session topic set for interrogation; the interrogation ends once the last session topic in the last interrogation session topic set has been completed with the expected emotion appearing in the emotion indexes and its intensity reaching the preset emotion intensity threshold.
Referring specifically to fig. 8, fig. 8 is an execution flowchart of the cyclic model preset in the embodiment of the present application, specifically as follows: when, in at least one of the three phases of the session topic (the interrogator asking, the interrogated individual answering, or the interrogator responding), the emotion of the interrogated individual reaches the expected emotion and the intensity of that expected emotion reaches the preset emotion intensity threshold, start the cyclic model; obtain, based on the preset multi-modal emotion analysis model, the emotion indexes of the interrogated individual for the current session topic in the current interrogation session topic set in the three phases; judge, based on the preset emotion expected change table and the preset emotion intensity table, whether the expected emotion appears in those emotion indexes and whether its intensity reaches the preset emotion intensity threshold; if so, acquire the next session topic after the current one in the current session topic set for interrogation, until interrogation of the last session topic in the current interrogation session topic set of the preset interrogation session table is completed with the expected emotion appearing at the required intensity; then acquire the first session topic in the next interrogation session topic set after the current one for interrogation, until interrogation of the last session topic in the last interrogation session topic set of the preset interrogation session table is completed with the expected emotion appearing in the emotion indexes and its intensity reaching the preset emotion intensity threshold, whereupon the interrogation ends.
Wherein the next session topic and the next interrogation session topic set are determined based on preset sequence numbers.
In some embodiments of the present application, the preset cyclic interrogation model is started when, in at least one of the three phases of the session topic (the interrogator asking, the interrogated individual answering, or the interrogator responding), the emotion of the interrogated individual reaches the expected emotion and the intensity of that expected emotion reaches the preset emotion intensity threshold.
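The overall cyclic structure — topic sets in order, topics within each set, and the iteration model retrying a topic until its check passes — can be sketched as below. The session-table layout, the per-topic callback, and the bounded retry limit are assumptions for illustration.

```python
def run_interrogation(session_table, try_topic, max_rounds=100):
    """session_table: ordered list of topic sets, each an ordered list of topics.
    try_topic(topic) returns True once the expected emotion reaches the threshold."""
    for topic_set in session_table:              # outer cycle over topic sets
        for topic in topic_set:                  # inner cycle over topics in a set
            for _ in range(max_rounds):          # iteration model: retry same topic
                if try_topic(topic):
                    break                        # advance to the next topic
    return "interrogation finished"

table = [["t1", "t2"], ["t3"]]                   # toy session table
attempts = {"t1": 2, "t2": 1, "t3": 3}           # rounds each topic needs (toy data)
def try_topic(t):
    attempts[t] -= 1
    return attempts[t] == 0
print(run_interrogation(table, try_topic))       # interrogation finished
```

The nesting mirrors step 205: only after the last topic of the last set passes its check does the interrogation end.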
Referring specifically to fig. 9, fig. 9 is an execution flowchart of an embodiment of the method for interrogation based on the preset interrogation mode in the embodiment of the present application, specifically as follows: start the preset interrogation mode, acquire a session topic from a preset interrogation session topic set based on the preset interrogation session table, and interrogate the interrogated individual; based on the preset multi-modal emotion analysis model, respectively acquire the emotion indexes of the interrogated individual when the interrogator asks the current session topic of the current interrogation session topic set, when the interrogated individual answers, and when the interrogator responds; based on the preset emotion expected change table and the preset emotion intensity table, judge whether the expected emotion appears in those emotion indexes and whether its intensity reaches the preset emotion intensity threshold; if both hold, acquire the next session topic after the current one in the current interrogation session topic set of the preset interrogation session table for interrogation, and, based on the preset cyclic interrogation model, cyclically execute the steps 202, 203 and 204 until interrogation of the last session topic in the current interrogation session topic set is completed with the expected emotion appearing at the required intensity; then acquire the first session topic in the next interrogation session topic set after the current one for interrogation, and end the interrogation once the last session topic in the last interrogation session topic set of the preset interrogation session table has been completed with the expected emotion appearing in the emotion indexes and its intensity reaching the preset emotion intensity threshold; otherwise, acquire other session topics in the current interrogation session topic set for interrogation and continue to execute the steps 202, 203 and 204 based on the preset iteration model until the preset condition for starting the next session topic after the current one in the current interrogation session topic set is met, whereupon the iteration stops.
According to the method for interrogation based on the preset interrogation mode provided by the embodiments of the present application, an interrogated individual can be interrogated by starting the interrogation mode; the emotion indexes of the interrogated individual when the interrogator asks the current session topic of the current interrogation session topic set, when the interrogated individual answers, and when the interrogator responds are respectively acquired; whether the expected emotion appears in the emotion indexes and whether its intensity reaches the emotion intensity threshold are judged; if both hold, a session topic in the next preset interrogation session topic set of the interrogation session table is acquired for interrogation; otherwise, the current session topic of the current interrogation session topic set continues to be used for interrogation, and the interrogation is completed based on the iteration model and the cyclic model. The method and the device help to improve the success rate and the efficiency of interrogation.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the computer program is executed. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least a portion of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and whose execution order is not necessarily sequential; they may be performed in turn or alternately with other steps, or with at least a portion of the sub-steps or stages of other steps.
With further reference to fig. 10, as an implementation of the method shown in fig. 2, the present application provides an embodiment of an apparatus for performing interrogation based on a preset interrogation mode, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be applied to various electronic devices.
As shown in fig. 10, the apparatus 10 for interrogation based on the preset interrogation mode according to this embodiment includes: an interrogation starting module 10a, an emotion index obtaining module 10b, an expectation judging module 10c, an iterative interrogation module 10d, and a cyclic interrogation module 10e. Wherein:
the interrogation starting module 10a is configured to start a preset interrogation mode, obtain a session topic from a preset interrogation session topic set based on a preset interrogation session table, and interrogate the interrogated individual, where the preset interrogation mode includes: a religious trial mode;
the emotion index acquisition module 10b is configured to respectively acquire, based on the preset multi-modal emotion analysis model, the emotion indexes of the interrogated individual when the interrogator asks the current session topic of the current interrogation session topic set, when the interrogated individual answers, and when the interrogator responds;
the expectation judging module 10c is configured to respectively judge, based on the preset emotion expected change table and the preset emotion intensity table, whether the expected emotion appears in the emotion indexes of the interrogated individual when the interrogator asks the current session topic, when the interrogated individual answers, and when the interrogator responds, and whether the intensity of the expected emotion reaches the preset emotion intensity threshold;
the iterative interrogation module 10d is configured to, if the expected emotion appears in the emotion indexes and its intensity reaches the preset emotion intensity threshold, obtain the next session topic after the current session topic in the current interrogation session topic set of the preset interrogation session table for interrogation; otherwise, continue to interrogate with the current session topic of the current interrogation session topic set, and continue to execute the steps 202, 203 and 204 based on the preset iteration model until the preset condition for starting the next session topic after the current one is met, whereupon the iteration stops;
and the cyclic interrogation module 10e is configured to cyclically execute the steps 202, 203 and 204 based on the preset cyclic interrogation model until interrogation of the last session topic in the current interrogation session topic set of the preset interrogation session table is completed with the expected emotion appearing in the emotion indexes and its intensity reaching the preset emotion intensity threshold, then acquire the first session topic in the next interrogation session topic set for interrogation, and end the interrogation once the last session topic in the last interrogation session topic set has been completed with the expected emotion appearing in the emotion indexes and its intensity reaching the preset emotion intensity threshold.
In some embodiments of the present application, the interrogation session table preset in the interrogation starting module 10a includes a plurality of preset interrogation session topic sets classified according to a predetermined classification format, and each of the preset interrogation session topic sets includes: a plurality of preset conversation themes.
In some embodiments of the present application, as shown in fig. 11, fig. 11 is a schematic structural diagram of the emotion index obtaining module in an embodiment of the present application, where the emotion index obtaining module 10b includes a multivariate index set obtaining unit 10b1, an emotion comprehensive index set obtaining unit 10b2, a frequency index set judging unit 10b3, an intensity index judging unit 10b4, and an emotion index obtaining unit 10b5.
In some embodiments of the present application, the multivariate index set obtaining unit 10b1 is configured to obtain a plurality of multivariate correlation index values of the interrogated individual based on the preset emotion analysis sub-model, and generate a multivariate index set.
In some embodiments of the present application, the emotion comprehensive index set obtaining unit 10b2 is configured to obtain emotion comprehensive indexes for the elements of the multivariate index set in the clustering mode, and generate an emotion comprehensive index set.
In some embodiments of the present application, the frequency index set determination unit 10b3 is configured to obtain, based on a preset frequency determination model, the number of occurrences of different emotions in the emotion integration index set, obtain an emotion of which the number of occurrences is not 0, and generate a frequency index set.
In some embodiments of the present application, the intensity index determination unit 10b4 is configured to determine the intensity index of each element in the frequency index set based on a preset emotional intensity determination model.
In some embodiments of the present application, the emotion index obtaining unit 10b5 is configured to take the elements in the frequency index set and the intensity index of each element as the emotion indexes of the interrogated individual when the interrogator asks the current session topic, when the interrogated individual answers, and when the interrogator responds.
In some embodiments of the present application, as shown in fig. 12, fig. 12 is a schematic structural diagram of the multi-modal emotion analysis model in the embodiments of the present application, where the multi-modal emotion analysis model includes an emotion analysis sub-model 12a, a plurality of emotion algorithm models 12b, a frequency judgment model 12c, and an emotion intensity judgment model 12d.
In some embodiments of the present application, the emotion analysis sub-model 12a is configured to obtain a plurality of multivariate correlation index values of the interrogated individual and generate a multivariate index set.
In some embodiments of the present application, the emotion algorithm models 12b are used for performing algorithm processing on elements in the multivariate index set to obtain an emotion comprehensive index, and generate an emotion comprehensive index set.
In some embodiments of the present application, the frequency determination model 12c is configured to obtain the number of occurrences of different emotions in the emotion integration index set, obtain an emotion of which the number of occurrences is not 0, and generate a frequency index set.
In some embodiments of the present application, the emotion intensity determination model 12d is configured to determine an intensity indicator for each element in the set of frequency indicators.
In some embodiments of the present application, as shown in fig. 13, fig. 13 is a schematic structural diagram of the expectation judgment module in an embodiment of the present application, and the expectation judgment module 10c includes an expectation judgment unit 10c1 and an emotion intensity judgment unit 10c2.
In some embodiments of the present application, the expectation judgment unit 10c1 is configured to obtain the elements in the frequency index set, compare them with the expected emotions in the preset emotion expected change table, and judge whether any element of the frequency index set is an expected emotion; the preset emotion expected change table records, for each session topic, a plurality of preset expected emotions for the phases when the interrogator asks, when the interrogated individual answers, and when the interrogator responds.
In some embodiments of the present application, the emotion intensity judgment unit 10c2 is configured to, if an expected emotion exists among the elements of the frequency index set, obtain the frequency value corresponding to that expected emotion, compare it with the frequency value corresponding to the emotion intensity level preset for that emotion in the preset emotion intensity table, and determine the intensity of the expected emotion; then compare the intensity of the expected emotion with the preset emotion intensity threshold and judge whether the threshold is reached; the preset emotion intensity table records emotion intensity levels set in advance according to the frequency value corresponding to each emotion.
In some embodiments of the present application, the iteration model in the iterative interrogation module 10d is started when, in all three phases of the session topic (the interrogator asking, the interrogated individual answering, and the interrogator responding), no expected emotion appears in the interrogated individual, or an expected emotion appears but its intensity does not reach the preset emotion intensity threshold.
In some embodiments of the present application, the cyclic interrogation model in the cyclic interrogation module 10e is started when, in at least one of the three phases of the session topic (the interrogator asking, the interrogated individual answering, or the interrogator responding), the emotion of the interrogated individual reaches the expected emotion and the intensity of that expected emotion reaches the preset emotion intensity threshold.
According to the apparatus for interrogation based on the preset interrogation mode provided by the embodiments of the present application, the interrogated individual is interrogated by starting the interrogation mode; the emotion indexes of the interrogated individual when the interrogator asks the current session topic of the current interrogation session topic set, when the interrogated individual answers, and when the interrogator responds are respectively acquired; whether the expected emotion appears in the emotion indexes and whether its intensity reaches the emotion intensity threshold are judged; if both hold, a session topic in the next preset interrogation session topic set of the interrogation session table is acquired for interrogation; otherwise, the current session topic of the current interrogation session topic set continues to be used for interrogation, and the interrogation is completed based on the iteration model and the cyclic model. The method and the device help to improve the success rate and the efficiency of interrogation.
In order to solve the technical problem, an embodiment of the present application further provides a computer device. Referring to fig. 14, fig. 14 is a block diagram of a basic structure of a computer device according to the present embodiment.
The computer device 14 includes a memory 14a, a processor 14b, and a network interface 14c communicatively coupled to each other via a system bus. It should be noted that only a computer device 14 having components 14a-14c is shown, but it should be understood that not all of the illustrated components must be implemented; more or fewer components may be implemented instead. As will be understood by those skilled in the art, the computer device here is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device can be a desktop computer, a notebook, a palm computer, a cloud server and other computing devices. The computer equipment can carry out man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch panel or voice control equipment and the like.
The memory 14a includes at least one type of readable storage medium including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, etc. In some embodiments, the storage 14a may be an internal storage unit of the computer device 14, such as a hard disk or a memory of the computer device 14. In other embodiments, the memory 14a may also be an external storage device of the computer device 14, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the computer device 14. Of course, the memory 14a may also include both internal and external storage devices of the computer device 14. In this embodiment, the memory 14a is generally used for storing an operating system and various application software installed on the computer device 14, such as a program code of a method for performing an interrogation based on a preset interrogation mode. In addition, the memory 14a may also be used to temporarily store various types of data that have been output or are to be output.
The processor 14b may be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or other data Processing chip in some embodiments. The processor 14b is typically used to control the overall operation of the computer device 14. In this embodiment, the processor 14b is configured to run a program code stored in the memory 14a or process data, for example, run a program code of the method for performing interrogation based on a preset interrogation mode.
The network interface 14c may comprise a wireless network interface or a wired network interface, and the network interface 14c is generally used to establish a communication link between the computer device 14 and other electronic devices.
The present application further provides another embodiment: a non-transitory computer-readable storage medium storing a program for interrogation based on a preset interrogation mode, the program being executable by at least one processor so as to cause the at least one processor to perform the steps of the method for interrogation based on a preset interrogation mode described above.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software running on a necessary general-purpose hardware platform, and certainly also by hardware alone, though in many cases the former is the better implementation. Based on such understanding, the technical solution of the present application may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disk) and including instructions for enabling a terminal device (such as a mobile phone, computer, server, air conditioner, or network device) to execute the method according to the embodiments of the present application.
It is to be understood that the above-described embodiments are merely illustrative of some, but not all, embodiments of the invention, and that the appended drawings illustrate preferred embodiments without limiting the scope of the invention. This application can be embodied in many different forms; these embodiments are provided so that the disclosure of the application will be thorough. Although the present application has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that the solutions described in the foregoing embodiments may still be modified, or some of their features may be replaced by equivalents. All equivalent structures made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, fall within the protection scope of the present application.

Claims (10)

1. A method for interrogation based on a preset interrogation mode, characterized by comprising the following steps:
starting a preset interrogation mode, acquiring a session topic in a preset interrogation session topic set based on a preset interrogation session table, and interrogating the interrogated individual;
based on a preset multimodal emotion analysis model, respectively acquiring emotion indexes of the interrogated individual when the current session topic in the current interrogation session topic set is asked by the interrogator, when it is answered by the interrogated individual, and when it is answered by the interrogator;
based on a preset expected-emotion change table and a preset emotion intensity table, respectively judging whether an expected emotion appears in the emotion indexes of the interrogated individual when the current session topic is asked by the interrogator, answered by the interrogated individual, and answered by the interrogator, and judging whether the intensity of the expected emotion reaches a preset emotion intensity threshold;
if the expected emotion appears in the emotion indexes and its intensity reaches the preset emotion intensity threshold, acquiring the next session topic after the current session topic in the current interrogation session topic set in the preset interrogation session table for interrogation; otherwise, continuing to acquire the current session topic in the current interrogation session topic set for interrogation and, based on a preset iteration model, continuing to execute steps 202, 203 and 204 until the preset condition for starting the next session topic after the current session topic in the current interrogation session topic set is met, whereupon the iteration stops;
and, based on a preset cyclic interrogation model, cyclically executing steps 202, 203 and 204 until interrogation of the last session topic in the current interrogation session topic set in the preset interrogation session table is completed and the expected emotion appears in the emotion indexes with an intensity reaching the preset emotion intensity threshold, then acquiring the first session topic in the next interrogation session topic set for interrogation, until interrogation of the last session topic in the last interrogation session topic set is completed and the expected emotion appears in the emotion indexes with an intensity reaching the preset emotion intensity threshold, whereupon the interrogation ends.
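The stepwise control flow described in claim 1 can be sketched as a nested loop. The sketch below is illustrative only: the function `get_emotion_indicators`, the emotion label, and the numeric threshold are assumptions for demonstration, not values disclosed in the patent.

```python
# Illustrative sketch of the claim-1 control flow: a session topic is repeated
# until the expected emotion appears at or above a preset intensity threshold,
# after which the next topic (and, eventually, the next topic set) is started.
# EXPECTED, THRESHOLD and get_emotion_indicators are hypothetical.

EXPECTED = "nervous"   # expected emotion for the topic (hypothetical)
THRESHOLD = 0.7        # preset emotion intensity threshold (hypothetical)

def interrogate(topic_sets, get_emotion_indicators):
    """Walk every interrogation session topic set in order."""
    asked = []
    for topic_set in topic_sets:
        for topic in topic_set:
            while True:  # preset iteration model: repeat the current topic
                asked.append(topic)
                # mapping emotion -> intensity, merged over the three moments
                # (asked by the interrogator / answered by the individual /
                # answered by the interrogator)
                indicators = get_emotion_indicators(topic)
                if indicators.get(EXPECTED, 0.0) >= THRESHOLD:
                    break  # condition met: start the next session topic
    return asked
```

In this sketch the inner `while` loop plays the role of the preset iteration model and the two `for` loops play the role of the preset cyclic interrogation model.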
2. The method for interrogation based on the preset interrogation mode according to claim 1, characterized in that the preset multimodal emotion analysis model comprises:
acquiring, based on a preset emotion analysis sub-model, a plurality of multivariate related index values of the interrogated individual and generating a multivariate index set;
acquiring composite emotion indexes of the elements in the multivariate index set by means of a cluster, and generating a composite emotion index set;
wherein the cluster comprises a plurality of identical preset emotion algorithm models, each of which algorithmically processes an element of the multivariate index set to obtain one composite emotion index.
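A minimal sketch of the claim-2 cluster, assuming each identical preset emotion algorithm model is a pure function applied to one element of the multivariate index set. The model itself (here a simple mean) is a placeholder, not the patent's algorithm:

```python
from statistics import mean

def composite_emotion_indices(multivariate_index_set, model=mean):
    # the cluster: an identical copy of the preset emotion algorithm model
    # processes each element of the multivariate index set, yielding one
    # composite emotion index per element
    return [model(element) for element in multivariate_index_set]
```

For example, `composite_emotion_indices([[1, 2], [3, 5]])` yields `[1.5, 4]` under the placeholder model.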
3. The method for interrogation based on the preset interrogation mode according to claim 2, characterized in that the acquiring of the emotion indexes of the interrogated individual when the current session topic in the current interrogation session topic set is asked by the interrogator, answered by the interrogated individual, and answered by the interrogator comprises:
acquiring, based on a preset frequency judgment model, the number of occurrences of the different emotions in the composite emotion index set, acquiring the emotions whose number of occurrences is not 0, and generating a frequency index set;
judging the intensity index of each element in the frequency index set based on a preset emotion intensity judgment model;
and taking the elements in the frequency index set, together with the intensity index of each element, as the emotion indexes of the interrogated individual when the current session topic is asked by the interrogator, answered by the interrogated individual, and answered by the interrogator.
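The frequency judgment step of claim 3 amounts to counting emotion labels and keeping those that occur at least once. A sketch, with hypothetical label values:

```python
from collections import Counter

def frequency_index_set(composite_emotion_index_set):
    # number of occurrences of each emotion in the composite emotion index
    # set; emotions occurring 0 times are naturally absent from the Counter,
    # which matches the "number of occurrences is not 0" filter
    return dict(Counter(composite_emotion_index_set))
```

For example, `frequency_index_set(["fear", "fear", "calm"])` returns `{"fear": 2, "calm": 1}`.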
4. The method for interrogation based on the preset interrogation mode according to claim 3, characterized in that the respectively judging whether an expected emotion appears in the emotion indexes of the interrogated individual when the current session topic is asked by the interrogator, answered by the interrogated individual, and answered by the interrogator comprises:
acquiring the elements in the frequency index set, comparing them with the expected emotions in the preset expected-emotion change table, and judging whether the expected emotion appears among the elements in the frequency index set;
wherein the preset expected-emotion change table associates each session topic with a plurality of preset expected emotions for when it is asked by the interrogator, answered by the interrogated individual, and answered by the interrogator.
5. The method for interrogation based on the preset interrogation mode according to claim 4, characterized in that the judging whether the intensity of the expected emotion reaches the preset emotion intensity threshold comprises:
if the expected emotion appears among the elements in the frequency index set, acquiring the frequency value corresponding to the expected emotion, comparing it with the frequency values corresponding to the emotion intensity levels preset for the expected emotion in the preset emotion intensity table, and determining the intensity of the expected emotion;
comparing the intensity of the expected emotion with the preset emotion intensity threshold, and judging whether the intensity of the expected emotion reaches the preset emotion intensity threshold;
wherein the preset emotion intensity table assigns emotion intensity levels in advance based on the frequency value corresponding to each emotion.
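One way to read the table lookup of claim 5, with a hypothetical intensity table that maps minimum frequency values to levels. All table contents, level names, and the threshold are assumptions for the sketch:

```python
# hypothetical preset emotion intensity table: (minimum frequency, level)
INTENSITY_TABLE = {"fear": [(1, "low"), (3, "medium"), (5, "high")]}
LEVEL_RANK = {"low": 1, "medium": 2, "high": 3}
THRESHOLD_LEVEL = "medium"  # preset emotion intensity threshold (assumption)

def intensity_of(emotion, frequency):
    """Map the expected emotion's frequency value to an intensity level."""
    level = None
    for min_freq, name in INTENSITY_TABLE.get(emotion, []):
        if frequency >= min_freq:
            level = name  # keep the highest level whose minimum is reached
    return level

def reaches_threshold(emotion, frequency):
    """Compare the determined intensity against the preset threshold."""
    level = intensity_of(emotion, frequency)
    return level is not None and LEVEL_RANK[level] >= LEVEL_RANK[THRESHOLD_LEVEL]
```

Under these assumed values, a frequency of 4 for "fear" maps to "medium" and therefore reaches the threshold, while a frequency of 1 maps to "low" and does not.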
6. The method for interrogation based on the preset interrogation mode according to claim 5, characterized in that the preset iteration model is started when, in any of the three cases of the session topic being asked by the interrogator, answered by the interrogated individual, or answered by the interrogator, no expected emotion appears in the emotion of the interrogated individual, or the expected emotion appears but its intensity does not reach the preset emotion intensity threshold.
7. The method for interrogation based on the preset interrogation mode according to claim 6, characterized in that the preset cyclic interrogation model is started when, in at least one of the three cases of the session topic being asked by the interrogator, answered by the interrogated individual, or answered by the interrogator, the emotion of the interrogated individual reaches the expected emotion and the intensity of the expected emotion reaches the preset emotion intensity threshold.
8. A device for interrogation based on a preset interrogation mode, characterized by comprising:
an interrogation starting module, configured to start a preset interrogation mode, acquire a session topic in a preset interrogation session topic set based on a preset interrogation session table, and interrogate the interrogated individual, wherein the preset interrogation mode comprises: a religious interrogation mode;
an emotion index acquisition module, configured to respectively acquire, based on a preset multimodal emotion analysis model, emotion indexes of the interrogated individual when the current session topic in the current interrogation session topic set is asked by the interrogator, answered by the interrogated individual, and answered by the interrogator;
an expectation judgment module, configured to respectively judge, based on a preset expected-emotion change table and a preset emotion intensity table, whether an expected emotion appears in the emotion indexes of the interrogated individual when the current session topic is asked by the interrogator, answered by the interrogated individual, and answered by the interrogator, and whether the intensity of the expected emotion reaches a preset emotion intensity threshold;
an iterative interrogation module, configured to acquire, if the expected emotion appears in the emotion indexes and its intensity reaches the preset emotion intensity threshold, the next session topic after the current session topic in the current interrogation session topic set in the preset interrogation session table for interrogation; otherwise, to continue acquiring the current session topic in the current interrogation session topic set for interrogation and, based on a preset iteration model, to continue executing steps 202, 203 and 204 until the preset condition for starting the next session topic after the current session topic in the current interrogation session topic set is met, whereupon the iteration stops;
and a cyclic interrogation module, configured to cyclically execute steps 202, 203 and 204 based on a preset cyclic interrogation model until interrogation of the last session topic in the current interrogation session topic set in the preset interrogation session table is completed and the expected emotion appears in the emotion indexes with an intensity reaching the preset emotion intensity threshold, then to acquire the first session topic in the next interrogation session topic set for interrogation, until interrogation of the last session topic in the last interrogation session topic set is completed and the expected emotion appears in the emotion indexes with an intensity reaching the preset emotion intensity threshold, whereupon the interrogation ends.
9. A computer device comprising a memory in which a computer program is stored and a processor which, when executing the computer program, carries out the steps of the method for interrogation based on a preset interrogation mode according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium, having a computer program stored thereon, which, when being executed by a processor, implements the steps of the method for interrogation based on a preset interrogation mode as claimed in any one of claims 1 to 7.
CN202010407248.5A 2020-05-14 2020-05-14 Method, device and equipment for interrogation based on preset interrogation mode Pending CN111723577A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010407248.5A CN111723577A (en) 2020-05-14 2020-05-14 Method, device and equipment for interrogation based on preset interrogation mode

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010407248.5A CN111723577A (en) 2020-05-14 2020-05-14 Method, device and equipment for interrogation based on preset interrogation mode

Publications (1)

Publication Number Publication Date
CN111723577A true CN111723577A (en) 2020-09-29

Family

ID=72564437

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010407248.5A Pending CN111723577A (en) 2020-05-14 2020-05-14 Method, device and equipment for interrogation based on preset interrogation mode

Country Status (1)

Country Link
CN (1) CN111723577A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106484093A (en) * 2015-09-01 2017-03-08 卡西欧计算机株式会社 Session control, dialog control method
CN109543658A (en) * 2018-12-25 2019-03-29 中国政法大学 Intelligence hearing householder method and device
US20190149494A1 (en) * 2017-11-16 2019-05-16 International Business Machines Corporation Emotive tone adjustment based cognitive management


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhihu user: "How to interrogate: a long discussion of 'interrogation and counter-interrogation'", pages 1 - 10, Retrieved from the Internet <URL:https://www.zhihu.com/question/45486535> *

Similar Documents

Publication Publication Date Title
CN103646646B (en) A kind of sound control method and electronic equipment
CN111368043A (en) Event question-answering method, device, equipment and storage medium based on artificial intelligence
CN107863108B (en) Information output method and device
CN107015964B (en) Intelligent robot development-oriented custom intention implementation method and device
CN112836521A (en) Question-answer matching method and device, computer equipment and storage medium
CN113160819B (en) Method, apparatus, device, medium, and product for outputting animation
CN112395391B (en) Concept graph construction method, device, computer equipment and storage medium
CN112446209A (en) Method, equipment and device for setting intention label and storage medium
CN112418059A (en) Emotion recognition method and device, computer equipment and storage medium
CN114090792A (en) Document relation extraction method based on comparison learning and related equipment thereof
CN112669850A (en) Voice quality detection method and device, computer equipment and storage medium
CN112100491A (en) Information recommendation method, device and equipment based on user data and storage medium
CN111639360A (en) Intelligent data desensitization method and device, computer equipment and storage medium
CN111311374A (en) University student-based idle commodity exchange method, device, equipment and storage medium
CN114548114B (en) Text emotion recognition method, device, equipment and storage medium
CN111723577A (en) Method, device and equipment for interrogation based on preset interrogation mode
CN115373634A (en) Random code generation method and device, computer equipment and storage medium
CN114637831A (en) Data query method based on semantic analysis and related equipment thereof
CN114398466A (en) Complaint analysis method and device based on semantic recognition, computer equipment and medium
CN112949317B (en) Text semantic recognition method and device, computer equipment and storage medium
CN113157896B (en) Voice dialogue generation method and device, computer equipment and storage medium
CN115409559A (en) Target enterprise user screening method and device, computer equipment and storage medium
CN112220479A (en) Genetic algorithm-based examined individual emotion judgment method, device and equipment
CN117251631A (en) Information recommendation method, device, equipment and storage medium based on artificial intelligence
CN117217684A (en) Index data processing method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination