CN115101151A - Character testing method and device based on man-machine conversation and electronic equipment - Google Patents
- Publication number
- CN115101151A CN115101151A CN202211022595.1A CN202211022595A CN115101151A CN 115101151 A CN115101151 A CN 115101151A CN 202211022595 A CN202211022595 A CN 202211022595A CN 115101151 A CN115101151 A CN 115101151A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/167—Personality evaluation
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/74—Details of notification to user or communication with user or patient ; user input means
- A61B5/742—Details of notification to user or communication with user or patient ; user input means using visual displays
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/74—Details of notification to user or communication with user or patient ; user input means
- A61B5/7475—User input or interface means, e.g. keyboard, pointing device, joystick
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
- G06F40/35—Discourse or dialogue representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
The embodiment of the application discloses a personality testing method based on man-machine conversation, which comprises the following steps: acquiring dialogue data of a target user; extracting target dialogue features from the dialogue data, the target dialogue features characterizing personality characteristics of the target user; and determining the personality attribute corresponding to the target dialogue features according to a preset personality attribute standard, and outputting the personality attribute as the personality test result of the target user. Performing the personality test in conversational form makes the test more engaging; determining personality attributes against a personality attribute standard grounded in professional psychological theory makes the test result professional and reliable; and further optimizing the model according to user feedback improves the accuracy of the test.
Description
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a personality testing method and device based on man-machine conversation and electronic equipment.
Background
Most existing personality testing methods rely on a scale: a series of preset questions, each with several candidate answers, where each answer under each question represents a personality characterization. The test-taker selects an answer for every question in the scale according to his or her own habits, and the personality test result is derived from all of the selected answers. However, the scale format is monotonous and the testing process is tedious, so the test-taker's experience is poor.
Another personality testing method analyzes the test-taker's personality by recognizing products the test-taker has photographed. However, this method is not supported by relevant psychological theory, so the accuracy of the resulting personality test is low.
Disclosure of Invention
The embodiments of the present application provide a personality testing method and device based on man-machine conversation, and an electronic device, which can address the tedium and low accuracy of existing personality testing processes.
In a first aspect, an embodiment of the present application provides a personality testing method based on human-computer conversation, where the method includes:
obtaining dialogue data of a target user;
extracting target dialog features from the dialog data, the target dialog features characterizing personality characteristics of the target user;
and determining the personality attribute corresponding to the target dialogue characteristic according to a preset personality attribute standard to serve as the personality test result of the target user.
In an optional design, determining a personality attribute corresponding to the target dialog feature according to a preset personality attribute standard includes:
and inputting the target dialogue features into a pre-trained test model to obtain character attributes corresponding to the target dialogue features.
In an alternative design, the pre-trained test model is trained by:
obtaining training sample data and a scale test result corresponding to each group of data in the training sample data, and taking all scale test results as the preset character attribute standard;
acquiring a conversation feature set from the training sample data to serve as a training sample feature set;
selecting a target training rule from a plurality of pre-deployed training rules according to the training sample feature set;
and training a test network to be trained by using the training sample feature set, the preset character attribute standard and the target training rule to obtain the pre-trained test model.
In an alternative design, any of the dialog features in the dialog feature set includes at least one of:
topic keywords, tone word habit characteristics, punctuation mark habit characteristics, field length of unit text, conversation time interval and habit change characteristics.
In an alternative design, the selecting a target training rule from a plurality of pre-deployed training rules according to the training sample feature set includes:
running the training sample feature set through each of the plurality of pre-deployed training rules to obtain a training result for each rule;
and taking the training rule whose training result is closest to a preset value as the target training rule.
In an alternative design, the plurality of pre-deployed training rules include at least two of the following rules:
logistic regression, K-nearest neighbor KNN, random forest and decision tree.
In an optional design, after determining a personality attribute corresponding to the target dialog feature according to a preset personality attribute standard as a personality test result of the target user, the method further includes:
obtaining the test feedback of the target user;
and optimizing the pre-trained test model according to the test feedback.
In a second aspect, an embodiment of the present application provides a personality testing device based on human-computer conversation, where the device includes:
the acquisition module is used for acquiring the dialogue data of the target user;
an extraction module for extracting target dialogue features from the dialogue data, wherein the target dialogue features represent character features of the target user;
and the processing module is used for determining the personality attribute corresponding to the target conversation characteristic according to a preset personality attribute standard and outputting the personality attribute as a personality test result of the target user.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory and one or more processors; wherein the memory is to store computer program code comprising computer instructions; the computer instructions, when executed by the processor, cause the electronic device to perform some or all of the steps of the human-machine dialog based personality testing method of the first aspect or various possible implementations of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer storage medium, where instructions are stored, and when the instructions are executed on a computer, the instructions cause the computer to perform part or all of the steps of the human-computer dialogue based personality testing method in the first aspect or various possible implementations of the first aspect.
The embodiment of the application provides a personality testing method based on man-machine conversation, which comprises the following steps: acquiring dialogue data of a target user; extracting target dialogue features from the dialogue data, the target dialogue features characterizing personality characteristics of the target user; and determining the personality attribute corresponding to the target dialogue features according to a preset personality attribute standard, and outputting the personality attribute as the personality test result of the target user. Conducting the test as a conversation between the target user and the device makes the test more engaging, and because the preset personality attribute standard combines psychology-related scales with language characteristics, the method has professional psychological-theory support and the accuracy of the test result is improved.
Drawings
In order to explain the technical solutions of the present application more clearly, the drawings needed in the embodiments are briefly described below; it will be apparent to those skilled in the art that other drawings can be derived from these drawings without creative effort.
Fig. 1 is a flowchart of a personality testing method based on human-computer conversation according to an embodiment of the present application;
FIG. 2 is a flowchart of a training and testing method provided by an embodiment of the present application;
fig. 3 is a structural diagram of a personality testing device based on human-computer interaction according to an embodiment of the present application;
fig. 4 is a structural diagram of a personality testing device based on human-computer interaction according to an embodiment of the present application.
Detailed Description
The following describes technical solutions of embodiments of the present application with reference to the drawings in the embodiments of the present application.
The terminology used in the following embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to limit the present application. As used in the specification of this application and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that although the terms first, second, etc. may be used in the following embodiments to describe a class of objects, the objects should not be limited by these terms; these terms are only used to distinguish particular objects within that class.
The embodiments of the present application provide a personality testing method and device based on man-machine conversation, and an electronic device.
The personality testing method based on man-machine conversation related to the embodiment of the application is described below through several implementation modes.
As shown in fig. 1, fig. 1 illustrates a personality testing method 100 (hereinafter referred to as method 100) based on man-machine conversation, where the method 100 includes the following steps:
step S101, session data of the target user is acquired.
Step S102, extracting target dialogue characteristics from the dialogue data, wherein the target dialogue characteristics represent the character characteristics of the target user.
And step S103, determining the personality attribute corresponding to the target dialogue characteristic according to a preset personality attribute standard, and outputting the personality attribute as the personality test result of the target user.
In this embodiment, the personality test is performed through man-machine conversation: the target user converses with the device for no fewer than forty turns, so that enough dialogue data is obtained for the personality test. Dialogue features capable of representing the target user's personality attributes are then extracted from the dialogue data, such as the topic keywords the target user uses during the conversation, the target user's use of tone words and punctuation marks, the input length of each of the target user's dialogue turns, and the reply time between different turns. The corresponding personality attribute is then determined from the preset personality attribute standard according to these dialogue features to obtain the test result.
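The three steps of method 100 can be sketched as a minimal pipeline. This is an illustrative sketch only: the function names, the specific features, and the nearest-match lookup against the attribute standard are assumptions for exposition, not the implementation disclosed in this application.

```python
def extract_target_dialog_features(dialog_turns):
    """Extract simple, personality-indicative features from a list of
    (user_text, reply_seconds) dialogue turns. Feature names are illustrative."""
    texts = [t for t, _ in dialog_turns]
    joined = " ".join(texts)
    n = max(len(texts), 1)
    return {
        "avg_turn_length": sum(len(t) for t in texts) / n,
        "exclamation_rate": joined.count("!") / n,
        "question_rate": joined.count("?") / n,
        "avg_reply_interval": sum(s for _, s in dialog_turns) / n,
    }

def determine_personality_attribute(features, attribute_standard):
    """Map extracted features to the closest preset personality attribute.
    attribute_standard: {attribute_name: reference feature dict} — a toy
    stand-in for the scale-derived standard described in the text."""
    def distance(ref):
        return sum((features[k] - ref[k]) ** 2 for k in ref)
    return min(attribute_standard, key=lambda name: distance(attribute_standard[name]))
```

A user whose turns carry many exclamation marks and short reply intervals would, under this toy standard, land closer to an "outgoing" reference profile than a "reserved" one.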
In this embodiment, performing the personality test through man-machine conversation keeps the testing process engaging rather than tedious. In addition, the preset personality attribute standard is obtained from users' results on professional psychological scale tests, with different dialogue characteristics corresponding to different personality attributes, so the corresponding personality attribute can be looked up from the personality attribute standard. The test result is therefore supported by professional psychological theory, and the result is more accurate.
In an optional implementation manner, determining a personality attribute corresponding to the target dialog feature according to a preset personality attribute standard includes:
and inputting the target dialogue features into a pre-trained test model to obtain character attributes corresponding to the target dialogue features.
In an alternative embodiment, the pre-trained test model is obtained by training the following method:
obtaining training sample data and a scale test result corresponding to each group of data in the training sample data, and taking all scale test results as the preset character attribute standard;
acquiring a conversation feature set from the training sample data to serve as a training sample feature set;
selecting a target training rule from a plurality of pre-deployed training rules according to the training sample feature set;
and training a test network to be trained by using the training sample feature set, the preset character attribute standard and the target training rule to obtain the pre-trained test model.
In this embodiment, the correspondence from dialogue data to personality attributes is established by building a pre-trained test model. A specific implementation is shown in fig. 2, which shows a flowchart of a training and testing method. Training sample data is acquired by recruiting a number of subjects, randomly pairing them for one-on-one conversations, and collecting the dialogue data as training sample data (although the training sample data may also be acquired in other ways). Meanwhile, each subject takes a professional psychological scale test; the resulting scale test result serves as that subject's personality label, and all personality labels together form the preset personality attribute standard. Training sample feature sets of multiple dimensions are then obtained from the training sample data, mainly comprising features that characterize the user's personality, such as topic keywords, tone-word habits, and punctuation habits. Several machine learning methods are then trained and evaluated with k-fold cross-validation to determine the optimal target training rule, and the pre-trained test model is obtained by training on that basis. Because the personality label of each subject is known, once the training sample feature set is obtained, the personality label of the subject corresponding to particular features can be determined from the preset personality attribute standard, achieving the goal of determining personality attributes from dialogue features.
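The pairing of each subject's dialogue-derived features with that subject's scale test result can be sketched as follows; the function name and data layout are illustrative assumptions, not the disclosed implementation.

```python
def build_training_set(subject_dialogs, scale_results):
    """Pair each subject's dialogue-derived feature dict with the personality
    label obtained from that subject's psychological scale test.
    subject_dialogs: {subject_id: feature_dict}
    scale_results:   {subject_id: personality_label}
    Returns (features, labels, attribute_standard), where attribute_standard
    is the set of all observed labels — a stand-in for the 'preset
    personality attribute standard' described in the text."""
    features, labels = [], []
    for sid, feats in sorted(subject_dialogs.items()):
        if sid not in scale_results:
            continue  # skip subjects who have no scale test result
        features.append(feats)
        labels.append(scale_results[sid])
    return features, labels, set(labels)
```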
In this embodiment, sufficient training sample data, together with the scale test result corresponding to each user, is obtained to ensure that the resulting training sample feature set covers the full range of human behavioral characteristics, such as personality traits, temporary states, and emotions, and that the preset personality attribute standard can cover all human personality attributes, so that the corresponding user and features can be determined. The test model can therefore test the target user's dialogue data comprehensively and consistently and output the personality attribute.
In an alternative embodiment, any of the set of dialog features includes at least one of the following features:
topic keywords, tone word habit characteristics, punctuation mark habit characteristics, field length of unit text, conversation time interval and habit change characteristics.
In this embodiment, the training sample feature set and the target dialogue features include, but are not limited to, those listed above, and may also include other features capable of characterizing the user's personality attributes. For example, key fields representing the subject of the current conversation can be extracted from the training sample data to serve as keywords for the current dialogue topic, and the user's use of tone words can be counted, such as whether each sentence contains sentence-final modal particles and how often. The punctuation usage category and frequency of each sentence can also be extracted from the training sample data, for example whether a sentence contains many exclamation marks or question marks, or whether each sentence is short and contains no punctuation at all. The lengths of dialogue in different turns, the number of words and fields in each message sent, the reply time interval between different turns, and changes in the user's habits during the conversation can also be extracted. For example, if a user in a certain group of data habitually repeats a particular word and has long reply intervals, the corresponding personality type can be determined from these characteristics by comparison against the preset personality attribute standard. Through these features, the personality attributes corresponding to a person's different dialogue characteristics can be comprehensively analyzed, realizing the personality test of the user.
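Two of the habit features mentioned above — tone-word frequency and a repeated-word habit cue — can be sketched as follows. The English tone-word list is a placeholder assumption (the original deals with Chinese modal particles), and the function names are illustrative.

```python
from collections import Counter

# Placeholder tone-word list; the original text concerns Chinese modal particles.
TONE_WORDS = {"ah", "oh", "hmm", "haha"}

def tone_word_habit(sentences):
    """Average number of tone words per sentence (tone-word habit feature)."""
    total = sum(
        sum(1 for w in s.lower().split() if w.strip(",.!?") in TONE_WORDS)
        for s in sentences
    )
    return total / max(len(sentences), 1)

def repeated_word_habit(sentences):
    """Most-repeated word and its count across the dialogue — a cue for the
    habit described in the text of a user repeating a particular word."""
    counts = Counter(
        w.strip(",.!?").lower() for s in sentences for w in s.split()
    )
    return counts.most_common(1)[0]
```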
In an alternative embodiment, the selecting a target training rule from a plurality of pre-deployed training rules according to the training sample feature set includes:
running the training sample feature set through each of the plurality of pre-deployed training rules to obtain a training result for each rule;
and taking the training rule whose training result is closest to a preset value as the target training rule.
In an alternative embodiment, the plurality of pre-deployed training rules include at least two of the following rules:
logistic regression, K-nearest neighbor KNN, random forest and decision tree.
In this embodiment, the plurality of training rules are evaluated by cross-validation to determine the optimal training rule, which greatly improves the efficiency of testing.
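The cross-validation selection step can be sketched as follows. To keep the example self-contained, two toy stand-in "training rules" (1-nearest-neighbour and a majority-class predictor) are used in place of the logistic regression, KNN, random forest, and decision tree rules named in the text; "closest to the preset value" is interpreted here as highest accuracy relative to an ideal value of 1.0 — an assumption, since the patent does not define the preset value.

```python
def k_fold_indices(n, k):
    """Yield (train_indices, val_indices) splits for simple k-fold CV."""
    fold = n // k
    for i in range(k):
        val = list(range(i * fold, (i + 1) * fold if i < k - 1 else n))
        val_set = set(val)
        yield [j for j in range(n) if j not in val_set], val

def one_nn_rule(train_x, train_y, x):
    """Stand-in rule: 1-nearest-neighbour over numeric feature vectors."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(zip(train_x, train_y), key=lambda p: dist(p[0], x))[1]

def majority_rule(train_x, train_y, x):
    """Stand-in rule: always predict the most common training label."""
    return max(set(train_y), key=train_y.count)

def select_target_rule(rules, xs, ys, k=3):
    """Evaluate each pre-deployed rule under k-fold cross-validation and
    return the rule whose accuracy is closest to the ideal value 1.0."""
    def cv_accuracy(rule):
        hits = total = 0
        for tr, va in k_fold_indices(len(xs), k):
            for i in va:
                pred = rule([xs[j] for j in tr], [ys[j] for j in tr], xs[i])
                hits += pred == ys[i]
                total += 1
        return hits / total
    return max(rules, key=cv_accuracy)
```

On well-separated data, the nearest-neighbour stand-in scores higher under cross-validation than the majority predictor and is selected as the target rule.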
In an optional implementation manner, after determining, according to a preset personality attribute standard, a personality attribute corresponding to the target dialog feature, as a personality test result of the target user, the method further includes:
obtaining the test feedback of the target user;
and optimizing the pre-trained test model according to the test feedback.
In this embodiment, as shown in fig. 2, after the personality test result is output to the user, the user can give feedback on the result based on his or her own situation: accurate parts are retained and inaccurate parts are corrected, so that the test model is continuously optimized and the accuracy of the test increases.
In summary, the personality testing method based on man-machine conversation makes the test more engaging by conducting it as a conversation, determines personality attributes according to a personality attribute standard supported by professional psychological theory so that the test result is professional and reliable, and further optimizes the model according to feedback to improve the accuracy of the test.
Corresponding to the method 100, an apparatus for performing the method is also provided in the embodiments of the present application.
As shown in fig. 3, fig. 3 illustrates a personality testing device 300 based on human-computer conversation, which includes:
an obtaining module 301, configured to obtain session data of a target user.
An extracting module 302, configured to extract a target dialog feature from the dialog data, where the target dialog feature represents a personality feature of the target user.
And the processing module 303 is configured to determine a personality attribute corresponding to the target dialog feature according to a preset personality attribute standard, and output the personality attribute as a personality test result of the target user.
It should be understood that the apparatus 300 is also used for performing part or all of the steps of the corresponding method in fig. 1 and 2, and the specific implementation process is described with reference to the above-mentioned embodiment illustrated in fig. 1 and 2 and is not described in detail here.
It can be understood that the above division into modules/units is only a division of logical functions; in actual implementation, the functions of the above modules may be integrated into a hardware entity. For example, the functions of the extraction module and the processing module may be integrated into a processor, the functions of the acquisition module may be integrated into a transceiver, and the programs and instructions implementing the functions of the above modules may be maintained in a memory. For example, fig. 4 provides an electronic device 41, which may include a processor 411, a transceiver 412, and a memory 413. The transceiver 412 is used for transmitting and receiving the data and signals in the method 100, and the memory 413 may be used to store the programs/code needed by the processor 411 to perform the method 100.
The specific implementation process is described with reference to the embodiments illustrated in fig. 1 and fig. 2, and is not described in detail here.
In specific implementation, corresponding to the foregoing electronic device 41, an embodiment of the present application further provides a computer storage medium, where the computer storage medium provided in the electronic device 41 may store a program, and when the program is executed, part or all of the steps in the embodiments of the method 100 may be implemented. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a Random Access Memory (RAM), or the like.
It can be understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed method, apparatus and system may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a control device of a cloud game, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
While alternative embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
The above-mentioned embodiments further describe the objects, technical solutions and advantages of the present application in detail. It should be understood that the above-mentioned embodiments are only examples of the present application and are not intended to limit its scope; any modifications, equivalent substitutions, improvements and the like made on the basis of the technical solutions of the present application shall be included in the protection scope of the present application.
Claims (10)
1. A character testing method based on man-machine conversation is characterized in that the method comprises the following steps:
acquiring dialogue data of a target user;
extracting target dialog features from the dialog data, the target dialog features characterizing personality characteristics of the target user;
and determining the personality attribute corresponding to the target dialogue features according to a preset personality attribute standard, and outputting the personality attribute as the personality test result of the target user.
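The three-step method of claim 1 can be sketched as a minimal pipeline. The concrete feature definitions, the threshold table standing in for the "preset personality attribute standard", and the attribute labels below are illustrative assumptions for the sketch, not part of the claim:

```python
# Minimal sketch of the claimed pipeline: acquire dialogue data,
# extract dialogue features, map them to a personality attribute.
# Feature set, thresholds, and labels are hypothetical examples.

def extract_dialog_features(messages):
    """Extract simple features assumed to characterize the user."""
    text = " ".join(messages)
    return {
        "avg_len": sum(len(m) for m in messages) / len(messages),
        "exclaim_ratio": text.count("!") / max(len(text), 1),
    }

def personality_from_standard(features):
    """Map features to an attribute via a preset standard (toy thresholds)."""
    if features["exclaim_ratio"] > 0.01 and features["avg_len"] < 40:
        return "extraverted"
    return "introverted"

messages = ["Hi!", "Let's go out tonight!", "That sounds great!"]
result = personality_from_standard(extract_dialog_features(messages))
print(result)  # short, exclamation-heavy messages -> "extraverted"
```

In a real system the hand-written threshold table would be replaced by the trained test model of claim 2.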
2. The method of claim 1, wherein determining the personality attribute corresponding to the target dialogue features according to a preset personality attribute standard comprises:
inputting the target dialogue features into a pre-trained test model to obtain the personality attribute corresponding to the target dialogue features.
3. The method of claim 2, wherein the pre-trained test model is trained by:
obtaining training sample data and a scale test result corresponding to each group of data in the training sample data, and taking all the scale test results as the preset personality attribute standard;
extracting a dialogue feature set from the training sample data as a training sample feature set;
selecting a target training rule from a plurality of pre-deployed training rules according to the training sample feature set;
and training a test network to be trained by using the training sample feature set, the preset personality attribute standard, and the target training rule, to obtain the pre-trained test model.
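The training flow of claim 3 might be sketched as follows. The scale test results serve as labels, and the "trained model" here is a deliberately trivial majority-label stand-in, since the claim does not fix a concrete network:

```python
# Illustrative sketch of claim 3's training flow: scale test results act
# as labels (the preset personality attribute standard), dialogue features
# are extracted per sample, and a stand-in model is fitted.
from collections import Counter

def train_test_model(training_samples, scale_results):
    # Feature extraction mirrors the claim; the toy model below ignores it.
    feature_set = [{"n_msgs": len(msgs)} for msgs in training_samples]
    # The claim selects a target rule from several candidates; here a
    # single stand-in rule (majority label) is used for brevity.
    majority_label = Counter(scale_results).most_common(1)[0][0]
    return lambda features: majority_label  # stand-in trained model

samples = [["hi", "how are you"], ["ok"], ["great!", "see you", "bye"]]
labels = ["agreeable", "agreeable", "conscientious"]
model = train_test_model(samples, labels)
print(model({"n_msgs": 2}))  # prints the most common scale label
```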
4. The method of claim 3, wherein any dialogue feature in the dialogue feature set comprises at least one of:
topic keywords, tone-word habit characteristics, punctuation-mark habit characteristics, text length per unit message, dialogue time interval, and habit-change characteristics.
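Several of the dialogue features listed in claim 4 can be computed directly from messages and their timestamps; the message/timestamp structure and the punctuation set below are assumptions for the sketch:

```python
# Illustrative extraction of three features from claim 4: punctuation-mark
# habits, text length per message, and dialogue time interval.
import statistics

def dialog_feature_set(messages, timestamps):
    lengths = [len(m) for m in messages]
    total = sum(lengths) or 1
    # Count habit-revealing punctuation marks (an assumed set).
    punct = sum(m.count(c) for m in messages for c in "!?~")
    # Gaps between consecutive message timestamps, in seconds.
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {
        "punct_per_char": punct / total,           # punctuation-mark habit
        "mean_msg_len": statistics.mean(lengths),  # text length per message
        "mean_gap_s": statistics.mean(gaps) if gaps else 0.0,  # time interval
    }

feats = dialog_feature_set(["ok", "really?!", "see you soon"], [0, 30, 90])
```

Habit-change characteristics would compare such values across time windows, which this sketch omits.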
5. The method of claim 3, wherein selecting a target training rule from a plurality of pre-deployed training rules according to the training sample feature set comprises:
processing the training sample feature set according to each of the plurality of pre-deployed training rules to obtain a respective training result;
and taking the training rule whose training result is closest to a preset value as the target training rule.
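The selection step of claim 5 amounts to an argmin over the candidate rules' distance to a preset value. The candidate "rules" below are stand-in scoring functions (the claim's real candidates are learners such as logistic regression or KNN), and the preset value is an assumed target accuracy:

```python
# Sketch of claim 5's rule selection: run each pre-deployed training rule
# on the sample feature set and keep the one whose training result is
# closest to a preset value. Rules here are toy stand-ins, not learners.
PRESET_VALUE = 1.0  # assumed target, e.g. ideal validation accuracy

def rule_a(features): return 0.72  # stand-in for logistic regression
def rule_b(features): return 0.81  # stand-in for KNN
def rule_c(features): return 0.78  # stand-in for random forest

def select_target_rule(rules, feature_set, preset=PRESET_VALUE):
    results = {name: rule(feature_set) for name, rule in rules.items()}
    return min(results, key=lambda name: abs(results[name] - preset))

best = select_target_rule({"lr": rule_a, "knn": rule_b, "rf": rule_c}, [])
print(best)  # "knn": 0.81 is closest to the preset value 1.0
```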
6. The method of claim 3, wherein the plurality of pre-deployed training rules include at least two of the following rules:
logistic regression, K-nearest neighbor KNN, random forest and decision tree.
7. The method of claim 3, wherein after determining the personality attribute corresponding to the target dialogue features according to the preset personality attribute standard as the personality test result of the target user, the method further comprises:
obtaining the test feedback of the target user;
and optimizing the pre-trained test model according to the test feedback.
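The feedback loop of claim 7 could be sketched as collecting confirmed or corrected labels and queuing them as new training samples. The class, batch threshold, and labels below are hypothetical, not specified by the claim:

```python
# Hypothetical sketch of claim 7: collect the target user's feedback on a
# test result and keep confirmed/corrected labels as new training samples
# for later re-optimization of the pre-trained test model.
class FeedbackOptimizer:
    def __init__(self):
        self.new_samples = []

    def record_feedback(self, features, predicted, user_feedback):
        # If the user corrects the result, keep the corrected label;
        # otherwise treat the prediction as confirmed.
        label = user_feedback if user_feedback else predicted
        self.new_samples.append((features, label))

    def ready_to_retrain(self, batch_size=2):
        # Retrain once enough feedback has accumulated (assumed policy).
        return len(self.new_samples) >= batch_size

opt = FeedbackOptimizer()
opt.record_feedback({"avg_len": 12}, "extraverted", None)
opt.record_feedback({"avg_len": 55}, "extraverted", "introverted")
print(opt.ready_to_retrain())  # True: two feedback samples collected
```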
8. A personality testing device based on human-machine dialogue, characterized in that the device comprises:
the acquisition module is used for acquiring the dialogue data of the target user;
the extraction module is used for extracting target dialogue features from the dialogue data, the target dialogue features characterizing personality characteristics of the target user;
and the processing module is used for determining the personality attribute corresponding to the target dialogue features according to a preset personality attribute standard, and outputting the personality attribute as the personality test result of the target user.
9. An electronic device, characterized in that the electronic device comprises a memory and one or more processors; the memory is configured to store computer program code comprising computer instructions; the computer instructions, when executed by the one or more processors, cause the electronic device to perform the human-machine dialogue based personality testing method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that it comprises a computer program which, when run on a computer, causes the computer to execute the human-machine dialogue based personality testing method of any one of claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211022595.1A CN115101151A (en) | 2022-08-25 | 2022-08-25 | Character testing method and device based on man-machine conversation and electronic equipment |
CN202310861533.8A CN116898441B (en) | 2022-08-25 | 2023-07-13 | Character testing method and device based on man-machine conversation and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211022595.1A CN115101151A (en) | 2022-08-25 | 2022-08-25 | Character testing method and device based on man-machine conversation and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115101151A true CN115101151A (en) | 2022-09-23 |
Family
ID=83301355
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211022595.1A Withdrawn CN115101151A (en) | 2022-08-25 | 2022-08-25 | Character testing method and device based on man-machine conversation and electronic equipment |
CN202310861533.8A Active CN116898441B (en) | 2022-08-25 | 2023-07-13 | Character testing method and device based on man-machine conversation and electronic equipment |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310861533.8A Active CN116898441B (en) | 2022-08-25 | 2023-07-13 | Character testing method and device based on man-machine conversation and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN115101151A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110175227A (en) * | 2019-05-10 | 2019-08-27 | 神思电子技术股份有限公司 | Dialogue assistance system based on team learning and hierarchical reasoning |
CN110689078A (en) * | 2019-09-29 | 2020-01-14 | 浙江连信科技有限公司 | Man-machine interaction method and device based on personality classification model and computer equipment |
CN113377938A (en) * | 2021-06-24 | 2021-09-10 | 北京小米移动软件有限公司 | Conversation processing method and device |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110990530A (en) * | 2019-11-28 | 2020-04-10 | 北京工业大学 | Microblog blogger personality analysis method based on deep learning |
CN111694940A (en) * | 2020-05-14 | 2020-09-22 | 平安科技(深圳)有限公司 | User report generation method and terminal equipment |
2022
- 2022-08-25 CN CN202211022595.1A patent/CN115101151A/en not_active Withdrawn
2023
- 2023-07-13 CN CN202310861533.8A patent/CN116898441B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110175227A (en) * | 2019-05-10 | 2019-08-27 | 神思电子技术股份有限公司 | Dialogue assistance system based on team learning and hierarchical reasoning |
CN110689078A (en) * | 2019-09-29 | 2020-01-14 | 浙江连信科技有限公司 | Man-machine interaction method and device based on personality classification model and computer equipment |
CN113377938A (en) * | 2021-06-24 | 2021-09-10 | 北京小米移动软件有限公司 | Conversation processing method and device |
Also Published As
Publication number | Publication date |
---|---|
CN116898441A (en) | 2023-10-20 |
CN116898441B (en) | 2024-03-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6819990B2 (en) | Dialogue system and computer programs for it | |
CN111177359A (en) | Multi-turn dialogue method and device | |
CN110795913B (en) | Text encoding method, device, storage medium and terminal | |
US11531693B2 (en) | Information processing apparatus, method and non-transitory computer readable medium | |
CN114547274B (en) | Multi-turn question and answer method, device and equipment | |
CN108304387B (en) | Method, device, server group and storage medium for recognizing noise words in text | |
JP4668621B2 (en) | Automatic evaluation of excessive repeated word usage in essays | |
CN110569354A (en) | Barrage emotion analysis method and device | |
KR101410601B1 (en) | Spoken dialogue system using humor utterance and method thereof | |
CN112685550B (en) | Intelligent question-answering method, intelligent question-answering device, intelligent question-answering server and computer readable storage medium | |
CN112667796A (en) | Dialog reply method and device, electronic equipment and readable storage medium | |
KR20210056114A (en) | Device for automatic question answering | |
CN112287085B (en) | Semantic matching method, system, equipment and storage medium | |
JP6942759B2 (en) | Information processing equipment, programs and information processing methods | |
CN117370190A (en) | Test case generation method and device, electronic equipment and storage medium | |
CN110263346B (en) | Semantic analysis method based on small sample learning, electronic equipment and storage medium | |
CN112559711A (en) | Synonymous text prompting method and device and electronic equipment | |
CN116821290A (en) | Multitasking dialogue-oriented large language model training method and interaction method | |
CN112199958A (en) | Concept word sequence generation method and device, computer equipment and storage medium | |
CN115101151A (en) | Character testing method and device based on man-machine conversation and electronic equipment | |
US20220108071A1 (en) | Information processing device, information processing system, and non-transitory computer readable medium | |
CN115221303A (en) | Dialogue processing method and dialogue processing device | |
CN113901793A (en) | Event extraction method and device combining RPA and AI | |
CN113642334A (en) | Intention recognition method and device, electronic equipment and storage medium | |
CN113658609B (en) | Method and device for determining keyword matching information, electronic equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | Application publication date: 20220923 |