CN116776861A - Cue generation and display method, device, computer equipment and storage medium - Google Patents

Cue generation and display method, device, computer equipment and storage medium

Info

Publication number
CN116776861A
CN116776861A
Authority
CN
China
Prior art keywords
information
clue
model
training
sample
Prior art date
Legal status
Pending
Application number
CN202310799770.6A
Other languages
Chinese (zh)
Inventor
张政统
吴辉扬
马金龙
佟佳奇
吴文亮
邓其春
黎子骏
盘子圣
黄祥康
曾锐鸿
兰翔
廖艳冰
马飞
熊佳
徐志坚
谢睿
陈光尧
Current Assignee
Guangzhou Quyan Network Technology Co ltd
Original Assignee
Guangzhou Quyan Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Quyan Network Technology Co ltd
Priority to CN202310799770.6A
Publication of CN116776861A
Legal status: Pending

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application relates to a cue generation and display method, a cue generation and display device, a computer device, a storage medium, and a computer program product. The method comprises the following steps: when a user account is detected entering a virtual space, obtaining answer information of the user account, the answer information being the answers input by the user account for a target questionnaire before entering the virtual space; determining persona-description cue information for the user account according to the answer information; and displaying the persona-description cue information in the virtual space in response to the virtual space meeting a trigger condition for displaying cue information. By adopting the method, communication efficiency between mic-connected users can be improved.

Description

Cue generation and display method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of internet technologies, and in particular, to a method, an apparatus, a computer device, a storage medium, and a computer program product for generating and displaying cues.
Background
With the development of internet technology, online friend-making platforms have become increasingly widespread. On such platforms, users can make friends by interacting in a live streaming room.
In conventional approaches, a user finds mic-connected users who have a preset matching relationship with the current user by chatting with them in the live streaming room; this mode of interaction through users' autonomous chat suffers from low communication efficiency.
Disclosure of Invention
Accordingly, there is a need for a cue generation and display method, apparatus, computer device, computer-readable storage medium, and computer program product that can improve communication efficiency between mic-connected users.
In a first aspect, the present application provides a cue generation and display method. The method comprises the following steps:
when a user account is detected entering a virtual space, obtaining answer information of the user account, the answer information being the answers input by the user account for a target questionnaire before entering the virtual space;
determining persona-description cue information for the user account according to the answer information; and
in response to the virtual space meeting a trigger condition for displaying cue information, displaying the persona-description cue information in the virtual space.
In one embodiment, determining the persona-description cue information for the user account according to the answer information includes:
taking, as a target analysis, the analysis corresponding to the answer option selected by the user account as reflected by the answer information; and
outputting, by a pre-trained cue generation model, a target cue matched with the target analysis as the persona-description cue information.
In one embodiment, the method further comprises:
acquiring sample data, the sample data comprising sample analyses and sample cues;
constructing model training data for a generative model according to the sample analyses and the sample cues; and
training the generative model with the model training data to obtain the pre-trained cue generation model.
In one embodiment, constructing the model training data for the generative model according to the sample analyses and the sample cues includes:
determining a training instruction statement for the generative model according to the sample analyses and the sample cues; and
using the training instruction statement as the model training data for the generative model.
In one embodiment, determining the training instruction statement for the generative model according to the sample analyses and the sample cues includes:
combining the sample analyses and the sample cues according to their correspondence to obtain model training examples, the correspondence comprising the correspondence between each sample analysis and its sample cue; and
generating the training instruction statement according to the model training examples.
In one embodiment, generating the training instruction statement according to the model training examples includes:
acquiring training requirement information, the training requirement information being information that accompanies the model training examples and imposes specific requirements on the model's output content, and comprising requirement description information and point-of-attention description information; and
combining the training requirement information and the model training examples to obtain the training instruction statement.
In a second aspect, the application further provides a cue generation and display device. The device comprises:
an answer acquisition module, configured to acquire answer information of a user account when the user account is detected entering a virtual space, the answer information being the answers input by the user account for a target questionnaire before entering the virtual space;
a cue determination module, configured to determine persona-description cue information for the user account according to the answer information; and
a cue display module, configured to display the persona-description cue information in the virtual space in response to the virtual space meeting a trigger condition for displaying cue information.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which, when executing the computer program, implements the following steps:
when a user account is detected entering a virtual space, obtaining answer information of the user account, the answer information being the answers input by the user account for a target questionnaire before entering the virtual space;
determining persona-description cue information for the user account according to the answer information; and
in response to the virtual space meeting a trigger condition for displaying cue information, displaying the persona-description cue information in the virtual space.
In a fourth aspect, the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the following steps:
when a user account is detected entering a virtual space, obtaining answer information of the user account, the answer information being the answers input by the user account for a target questionnaire before entering the virtual space;
determining persona-description cue information for the user account according to the answer information; and
in response to the virtual space meeting a trigger condition for displaying cue information, displaying the persona-description cue information in the virtual space.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the following steps:
when a user account is detected entering a virtual space, obtaining answer information of the user account, the answer information being the answers input by the user account for a target questionnaire before entering the virtual space;
determining persona-description cue information for the user account according to the answer information; and
in response to the virtual space meeting a trigger condition for displaying cue information, displaying the persona-description cue information in the virtual space.
According to the cue generation and display method, apparatus, computer device, storage medium, and computer program product, the answer information of the user account is obtained when the user account is detected entering the virtual space, persona-description cue information for the user account is then determined according to the answer information, and the persona-description cue information is displayed when the virtual space meets the trigger condition for displaying cue information. Because the persona-description cue information accurately describes the user's persona and guides users to communicate in the form of prompt statements, communication efficiency between users during mic-connected interaction is greatly improved, and users can more easily meet their friend-making goals by following the prompts about their interaction partners.
Drawings
FIG. 1 is a flow chart of a cue generation and display method in one embodiment.
FIG. 2 is a flow chart of a cue generation and display method in another embodiment.
FIG. 3 is a flow chart of a cue generation and display method in another embodiment.
FIG. 4 is a block diagram of a cue generation and display device in one embodiment.
FIG. 5 is an internal structure diagram of a computer device in one embodiment.
FIG. 6 is an internal structure diagram of a computer device in another embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In one embodiment, as shown in FIG. 1, a cue generation and display method is provided, which includes the following steps:
S110, when it is detected that a user account enters a virtual space, answer information of the user account is obtained.
Here, the user account is an account of a user on an online friend-making platform; the virtual space is a virtual space used for friend-making interaction on the platform, such as a live streaming room that supports mic connection; and the answer information is the answers input by the user account for a target questionnaire before entering the virtual space.
When a user wants to make friends on the platform through an account in a client, the user first fills in a questionnaire provided by the client. The questionnaire contains questions and corresponding answer options, and the user selects the options that match his or her own situation as the answer information. When the user later enters a live streaming room through the client to make friends, the system queries the answer information associated with the user's account and uses it in the subsequent steps, for example as in the sketch below.
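A minimal sketch of this lookup is given below; the in-memory store answer_store, the handler on_user_enter_room, and the data layout are illustrative assumptions and are not part of the application.

```python
# Hypothetical in-memory store: account id -> {question id: selected option}.
answer_store = {
    "user_a": {"q1": "A", "q2": "C"},
    "user_b": {"q1": "C", "q2": "B"},
}

def on_user_enter_room(account_id: str, room_id: str) -> dict:
    """Called when a user account is detected entering a virtual space (live room).

    Looks up the questionnaire answers the account submitted before entering,
    so they can later be converted into persona-description cue information.
    """
    answers = answer_store.get(account_id)
    if answers is None:
        # The user never filled in the target questionnaire; no cues can be generated.
        return {}
    print(f"{account_id} entered room {room_id} with answers {answers}")
    return answers

if __name__ == "__main__":
    on_user_enter_room("user_a", "room_42")
```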
S120, persona-description cue information for the user account is determined according to the answer information.
Here, persona-description cue information is prompt information that describes the persona of a respondent, a respondent being a user who answered the questions in the questionnaire. In practical applications, the persona-description cue information may be the cue information corresponding to the answer options the respondent selected in the questionnaire.
The system queries the answer information associated with the user's account and converts it into persona-description cue information. The conversion may, for example (but not exclusively), extract keywords that characterize the user's persona from the answer information and rephrase the original sentences around those keywords to form the cue information, as in the sketch below.
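A toy illustration of such a keyword-based conversion, under the assumption of a hand-written keyword table; the table and phrasing are illustrative only, and a model-based conversion is described further below.

```python
# Illustrative mapping from persona keywords found in an answer analysis to cue phrases.
KEYWORD_PHRASES = {
    "introverted": "he may be rather introverted",
    "quiet": "relatively quiet",
    "unique": "and likes to be unique",
    "outgoing": "he may be outgoing",
    "open": "open and easy to talk to",
}

def analysis_to_cue(analysis_text: str) -> str:
    """Extract persona keywords from an answer analysis and rephrase them as a cue."""
    phrases = [phrase for keyword, phrase in KEYWORD_PHRASES.items() if keyword in analysis_text]
    return ", ".join(phrases) if phrases else "no obvious persona cue"

print(analysis_to_cue("corresponds to an introverted, unique, quiet personality type"))
```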
S130, in response to the virtual space meeting the trigger condition for displaying cue information, the persona-description cue information is displayed in the virtual space.
Here, the trigger condition for displaying cue information is the condition under which the client displays the cue information to users; in a specific application, the trigger condition is that the user account enters a mic connection with another user in the virtual space.
For example, the system converts the answer information of user A and user B into persona-description cue information in advance. When user A and user B successfully connect mics in a live streaming room of the same friend-making platform, the cue information is displayed in the room: user A sees user B's persona-description cue information on the room's display page, and user B likewise sees user A's. With the mic connection in the live streaming room as the interaction channel, each user then uses the other's cue description as an auxiliary prompt for the friend-making conversation, as in the event-handling sketch below.
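A minimal sketch of this trigger, assuming hypothetical helpers get_cue_info and show_in_room; the names and signatures are illustrative and not defined by the application.

```python
def get_cue_info(account_id: str) -> str:
    """Hypothetical lookup of previously generated persona-description cue information."""
    cues = {
        "user_a": "He may be rather introverted, relatively quiet, and likes to be unique.",
        "user_b": "He may be outgoing and likes to communicate.",
    }
    return cues.get(account_id, "")

def show_in_room(room_id: str, viewer: str, text: str) -> None:
    """Hypothetical UI call that renders text on the viewer's display page of the room."""
    print(f"[room {room_id}] shown to {viewer}: {text}")

def on_mic_connected(room_id: str, account_a: str, account_b: str) -> None:
    """Trigger condition: two user accounts enter a mic connection in the virtual space."""
    show_in_room(room_id, viewer=account_a, text=get_cue_info(account_b))
    show_in_room(room_id, viewer=account_b, text=get_cue_info(account_a))

on_mic_connected("room_42", "user_a", "user_b")
```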
According to the cue generation and display method, the answer information of the user account is obtained when the user account is detected entering the virtual space, persona-description cue information for the user account is then determined according to the answer information, and the persona-description cue information is displayed when the virtual space meets the trigger condition for displaying cue information. Because the persona-description cue information accurately describes the user's persona and guides users to communicate in the form of prompt statements, communication efficiency between users during mic-connected interaction is greatly improved, and users can more easily meet their friend-making goals by following the prompts about their interaction partners.
In one embodiment, S120 includes: taking, as a target analysis, the analysis corresponding to the answer option selected by the user account as reflected by the answer information; and outputting, by a pre-trained cue generation model, a target cue matched with the target analysis, the target cue being used as the persona-description cue information.
Here, the target analysis is the analysis corresponding to the answer option in the user's answer information.
Specifically, suppose the question the user answers is "If you were waiting for a friend in a coffee shop, which of the following would you do?", with the answer options: "A. Browse social media or play games on a mobile phone; B. Listen to the surrounding music or read a book; C. Actively chat with strangers or make new friends; D. Rest quietly or think things over." The answer analyses corresponding to these options are: "A. corresponds to an introverted, unique, quiet personality type; B. corresponds to an easy-going, life-enjoying, self-disciplined personality type; C. corresponds to an outgoing, sociable, open personality type; D. corresponds to a thoughtful, perceptive, creative personality type." If the user's answer information is "the user selected option A", the user's target analysis for this question is "corresponds to an introverted, unique, quiet personality type". A sketch of such an option-to-analysis table is given below.
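A sketch of how such an option-to-analysis table might be stored; the layout is purely illustrative, and the analysis wording follows the example above.

```python
# Illustrative analysis table for one questionnaire question.
OPTION_ANALYSES = {
    "A": "corresponds to an introverted, unique, quiet personality type",
    "B": "corresponds to an easy-going, life-enjoying, self-disciplined personality type",
    "C": "corresponds to an outgoing, sociable, open personality type",
    "D": "corresponds to a thoughtful, perceptive, creative personality type",
}

def target_analysis(selected_option: str) -> str:
    """Return the analysis corresponding to the answer option the user selected."""
    return OPTION_ANALYSES[selected_option]

print(target_analysis("A"))
```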
The pre-trained cue generation model is a model trained in advance to output the target cue corresponding to each target analysis; it may be, but is not limited to, a generative adversarial network (GAN) or a large language model (LLM).
The target cues are the cues generated by the cue generation model for each target analysis. For example, if the user's target analysis for the question is "corresponds to an introverted, unique, quiet personality type", the target cue may be "He may be rather introverted, relatively quiet, and likes to be unique."
In this embodiment, the analysis corresponding to the answer option selected by the user account is taken as the target analysis, the pre-trained cue generation model outputs a target cue matched with the target analysis, and the output target cue is used as the persona-description cue information. Because the cue associated with the user's persona is obtained by converting the analysis corresponding to each answer option in the question, the persona-description cue information describes the user more accurately.
In one embodiment, the method further comprises: acquiring sample data, the sample data including sample analyses and sample cues; constructing model training data for a generative model according to the sample analyses and the sample cues; and training the generative model with the model training data to obtain the pre-trained cue generation model.
Here, the sample data is the data used to train the model: a sample analysis is the analysis corresponding to an answer option in the sample data, and a sample cue is the cue corresponding to that sample analysis, representing the persona characteristics of a user who selects the corresponding answer option.
The generative model is the neural network model before training; it may be, but is not limited to, a generative adversarial network (GAN) or a large language model (LLM).
The model training data is used to train the generative model into the trained cue generation model, which then performs the subsequent cue output task.
For example, the way the model training data is combined differs with the type of generative model, and the data is used for training in the manner natural to that model. Taking a large language model as the generative model, the training modes include, but are not limited to, a prompt-based mode.
In this embodiment, sample data is obtained first, model training data is then constructed from the sample analyses and sample cues in the sample data, and the generative model is trained with the model training data to obtain the pre-trained cue generation model. Pre-training the cue generation model on preset sample data ensures that persona-description cue information can be generated in real time, which in turn improves communication efficiency between users during mic-connected interaction.
In one embodiment, constructing the model training data for the generative model according to the sample analyses and the sample cues comprises: determining a training instruction statement for the generative model according to the sample analyses and the sample cues; and using the training instruction statement as the model training data for the generative model.
When the generative model is a large language model, the training instruction statement is equivalent to the model training data: the statement carries the model training data and, in the form of an instruction, controls how the model is trained.
In this embodiment, a training instruction statement for the generative model is determined from the sample analyses and sample cues and used as the model training data. Introducing the training instruction statement gives control over the training process and tightly couples the training mode with the training data, which improves model training efficiency.
In one embodiment, determining the training instruction statement for the generative model according to the sample analyses and the sample cues comprises: combining the sample analyses and the sample cues according to their correspondence to obtain model training examples, the correspondence comprising the correspondence between each sample analysis and its sample cue; and generating the training instruction statement according to the model training examples.
The model training examples are examples used to construct the training instruction statement; each example expresses the correspondence between a sample analysis and its sample cue.
Illustratively, one model training example is:
Sample analysis: corresponds to an introverted, unique, quiet personality type.
Corresponding sample cue: He may be rather introverted, relatively quiet, and likes to be unique.
Another model training example is:
Sample analysis: corresponds to an outgoing, sociable, open personality type.
Corresponding sample cue: He may be outgoing and likes to communicate.
In this embodiment, the sample analyses and the sample cues are combined according to their correspondence to obtain the model training examples, and the training instruction statement is then generated from those examples. Because the information within each example is tightly related, the resulting training instruction statement gives the model a stronger basis, which improves the accuracy of the cues output by the cue generation model.
In one embodiment, generating the training instruction statement according to the model training examples includes: acquiring training requirement information, the training requirement information being information that accompanies the model training examples and imposes specific requirements on the model's output content, and comprising requirement description information and point-of-attention description information; and combining the training requirement information and the model training examples to obtain the training instruction statement.
Here, the training requirement information is information that accompanies the model training examples and imposes specific requirements on the model's output content. It comprises requirement description information and point-of-attention description information; in general, the point-of-attention description information is a harder, more specific requirement than the requirement description information, such as a word-count limit.
Illustratively, one piece of training requirement information is: "Please convert the analysis of the answer into a cue with the given format. Note: the cue should be within 20 words." Here, the requirement description information is "Please convert the analysis of the answer into a cue with the given format", and the point-of-attention description information is "Note: the cue should be within 20 words."
In this embodiment, the training requirement information is obtained first, and the training instruction statement is then obtained by combining the training requirement information with the model training examples. The training requirement information states specific requirements on the output of the cue generation model, and the model can be tuned by gradually refining the training requirement information during the training stage, which improves the training efficiency of the generative model. A sketch of assembling such an instruction statement is given below.
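A minimal sketch of combining the requirement description, the examples, and the point of attention into one instruction statement; the string layout is an assumption for illustration, since the application does not prescribe an exact format.

```python
def build_instruction(requirement: str, examples: list[tuple[str, str]], attention: str) -> str:
    """Combine training requirement information and model training examples
    into a single training instruction statement."""
    lines = [requirement, "", "Examples:"]
    for i, (analysis, cue) in enumerate(examples, start=1):
        lines.append(f"Analysis {i}: {analysis}")
        lines.append(f"Cue {i}: {cue}")
    lines += ["", attention]
    return "\n".join(lines)

instruction = build_instruction(
    "Please convert the analysis of the answer into a cue with the given format.",
    [
        ("corresponds to an introverted, unique, quiet personality type",
         "He may be rather introverted, relatively quiet, and likes to be unique."),
        ("corresponds to an outgoing, sociable, open personality type",
         "He may be outgoing and likes to communicate."),
    ],
    "Note: the cue should be within 20 words.",
)
print(instruction)
```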
In one embodiment, training the generative model with the model training data to obtain the pre-trained cue generation model comprises:
inputting the training requirement information into the generative model in the form of a question; and
inputting the model training examples corresponding to the training requirement information into the generative model in the form of answers to the question.
Illustratively, the training requirement information in question form may be: "Please convert the analysis of the answer into a cue with the given format. Note: the cue should be within 20 words." The corresponding model training example, given as the answer to that question, may be: "Sample analysis: corresponds to an introverted, unique, quiet personality type. Corresponding sample cue: He may be rather introverted, relatively quiet, and likes to be unique."
In this embodiment, the training requirement information is input into the generative model as a question and the model training examples are input as answers to that question, so that the requirement information and the examples correspond to each other and are tightly combined. Training of the model is thus guided by specific requirements while the output content is guided by specific examples, which improves training efficiency and helps ensure the accuracy of the model's output. A sketch of this question-and-answer formatting follows.
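A minimal sketch of packaging the requirement as the "question" and the example as its "answer", using a generic chat-style message list; the role names follow common LLM chat conventions and are an assumption, not something specified by the application.

```python
# Requirement information posed as a question, training example posed as its answer.
requirement = ("Please convert the analysis of the answer into a cue with the given format. "
               "Note: the cue should be within 20 words.")
example_answer = ("Sample analysis: corresponds to an introverted, unique, quiet personality type. "
                  "Corresponding sample cue: He may be rather introverted, relatively quiet, "
                  "and likes to be unique.")

training_messages = [
    {"role": "user", "content": requirement},          # the "question"
    {"role": "assistant", "content": example_answer},  # the "answer"
]

for message in training_messages:
    print(f"{message['role']}: {message['content']}")
```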
In one embodiment, as shown in FIG. 2, training the generative model with the model training data to obtain the pre-trained cue generation model comprises:
S210, the sample analyses are used as input information.
S220, the sample cues are used as output information.
S230, the input information and the output information are combined, and the combined data is used as the model training data.
The training method in this embodiment is a label-based neural network training method: the input information is the data fed into the generative model, and the output information is the target the generative model is set to output.
In this embodiment, the sample analyses are used as input information and the sample cues as output information; the two are combined, and the combined data is used as the model training data. That is, the data format is constrained by specifying the input information and output information, which realizes model training and improves training efficiency. A sketch of such <input, output> pairs is given below.
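A minimal sketch of writing the <input information, output content> pairs as supervised training records; the JSONL layout and field names are illustrative assumptions, and any fine-tuning data format could be used instead.

```python
import json

# Each record pairs a sample analysis (input) with its sample cue (output).
pairs = [
    ("corresponds to an introverted, unique, quiet personality type",
     "He may be rather introverted, relatively quiet, and likes to be unique."),
    ("corresponds to an outgoing, sociable, open personality type",
     "He may be outgoing and likes to communicate."),
]

with open("cue_training_data.jsonl", "w", encoding="utf-8") as f:
    for analysis, cue in pairs:
        record = {"input": analysis, "output": cue}
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```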
In another embodiment, as shown in FIG. 3, a cue generation and display method is provided, which includes the following steps:
S301, sample data is acquired.
S302, the sample analyses and sample cues are combined according to their correspondence to obtain model training examples.
S303, training requirement information is acquired.
S304, the training requirement information and the model training examples are combined to obtain a training instruction statement.
S305, the training instruction statement is used as the model training data for the generative model.
S306, the training requirement information is input into the generative model in the form of a question.
S307, the model training examples corresponding to the training requirement information are input into the generative model in the form of answers to the question.
S308, the generative model is trained on the input data to obtain the pre-trained cue generation model.
S309, the pre-trained cue generation model outputs the target cue matched with the target analysis as the persona-description cue information.
S310, in response to the virtual space meeting the trigger condition for displaying cue information, the persona-description cue information is displayed in the virtual space.
It should be noted that, for specific limitations of the above steps, reference may be made to the limitations of the cue generation and display method described above, which are not repeated here.
In some schemes, cues are generated by manual editing or from a keyword library. Such generation is inefficient, lacks a dynamic generation mechanism, cannot quickly produce cues that meet users' needs, and has difficulty describing users' characteristics accurately, so the subsequent matching effect between users is not ideal. The cue generation and display method is therefore provided to output persona-description cue information accurately and efficiently, realize dynamic cue generation, meet users' continually changing needs, improve users' communication efficiency during mic-connected interaction, and further improve matching accuracy during the mic-connection process.
The cue generation and display method is described in detail below with reference to one specific embodiment. It should be understood that the following description is exemplary only and is not intended to limit the application.
In this embodiment, model training needs to be performed in advance.
The model training stage includes a general model training method and a large-model training method. General model training trains the generative model on data in the form of <input information, output content>. The large-model training method can use a prompt-based few-shot approach of the latest large models: a prompt is assembled in the form of <requirement description, added examples, point-of-attention description> and the content is generated directly by inference. Alternatively, the persona-description cue information can be generated by inference after the large model is fine-tuned with domain-specific data.
In the present application, based on the answer information input by the answering user and the persona-related wording used to generate the questions, the user's characteristics are first analyzed in real time by a deep learning model. During model training, data in the form of <input information, output cue content> can be adopted, or training can be performed in a prompt-based mode of a large model.
For example, in a large-model prompt-based approach, the prompt template may be:
Please convert the analysis of the answer into a cue with the given format, as in the examples below:
Examples:
Analysis 1: corresponds to an introverted, unique, quiet personality type;
Cue 1: He may be rather introverted, relatively quiet, and likes to be unique;
Analysis 2: corresponds to an outgoing, sociable, open personality type;
Cue 2: He may be outgoing and likes to communicate;
Note: the cue should be within 20 words.
Finally, the system automatically generates cue information describing the characteristics of the answering user from the result of the model's prediction, and other users can search against it. The whole process realizes dynamic generation of cues that meet users' needs and improves matching efficiency and effect. A sketch of assembling the prompt above and running inference is given below.
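In the sketch, a generic complete(prompt) call stands in for whichever large language model client is actually deployed; the client, the canned return value, and the exact prompt layout are illustrative assumptions rather than parts of the application.

```python
PROMPT_TEMPLATE = """Please convert the analysis of the answer into a cue with the given format, as in the examples below:
Analysis 1: corresponds to an introverted, unique, quiet personality type;
Cue 1: He may be rather introverted, relatively quiet, and likes to be unique;
Analysis 2: corresponds to an outgoing, sociable, open personality type;
Cue 2: He may be outgoing and likes to communicate;
Note: the cue should be within 20 words.
Analysis: {analysis}
Cue:"""

def complete(prompt: str) -> str:
    """Stand-in for the deployed large language model; returns a canned cue here.
    In a real system this would call whatever LLM client is actually used."""
    return "He may be rather introverted, relatively quiet, and likes to be unique."

def generate_cue(target_analysis: str) -> str:
    """Assemble the prompt for one target analysis and run inference."""
    prompt = PROMPT_TEMPLATE.format(analysis=target_analysis)
    return complete(prompt).strip()

print(generate_cue("corresponds to an introverted, unique, quiet personality type"))
```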
It should be understood that, although the steps in the flowcharts related to the embodiments described above are sequentially shown as indicated by arrows, these steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages.
Based on the same inventive concept, an embodiment of the application also provides a cue generation and display device for implementing the cue generation and display method described above. The solution implemented by the device is similar to that of the method above, so for specific limitations of the device embodiments below, reference may be made to the limitations of the cue generation and display method above, which are not repeated here.
In one embodiment, as shown in FIG. 4, a cue generation and display device is provided, including an answer acquisition module 401, a cue determination module 402, and a cue display module 403, wherein:
the answer acquisition module is configured to acquire answer information of a user account when the user account is detected entering a virtual space, the answer information being the answers input by the user account for a target questionnaire before entering the virtual space;
the cue determination module is configured to determine persona-description cue information for the user account according to the answer information; and
the cue display module is configured to display the persona-description cue information in the virtual space in response to the virtual space meeting a trigger condition for displaying cue information.
In one embodiment, the cue determination module is further configured to:
take, as a target analysis, the analysis corresponding to the answer option selected by the user account as reflected by the answer information; and
output, by a pre-trained cue generation model, a target cue matched with the target analysis as the persona-description cue information.
In one embodiment, the cue generation and display device further comprises a model training module configured to:
acquire sample data, the sample data comprising sample analyses and sample cues;
construct model training data for a generative model according to the sample analyses and the sample cues; and
train the generative model with the model training data to obtain the pre-trained cue generation model.
In one embodiment, the model training module is further configured to:
determine a training instruction statement for the generative model according to the sample analyses and the sample cues; and
use the training instruction statement as the model training data for the generative model.
In one embodiment, the model training module is further configured to:
combine the sample analyses and the sample cues according to their correspondence to obtain model training examples, the correspondence comprising the correspondence between each sample analysis and its sample cue; and
generate the training instruction statement according to the model training examples.
In one embodiment, the model training module is further configured to:
acquire training requirement information, the training requirement information being information that accompanies the model training examples and imposes specific requirements on the model's output content, and comprising requirement description information and point-of-attention description information; and
combine the training requirement information and the model training examples to obtain the training instruction statement.
The modules in the cue generation and display device may be implemented in whole or in part by software, hardware, or a combination thereof. Each module may be embedded, in hardware form, in or independently of a processor in the computer device, or stored, in software form, in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the module.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in FIG. 5. The computer device includes a processor, a memory, an input/output (I/O) interface, and a communication interface. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store related data such as cue information. The input/output interface of the computer device is used to exchange information between the processor and external devices. The communication interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements a cue generation and display method.
In one embodiment, a computer device is provided, which may be a terminal, the internal structure of which may be as shown in FIG. 6. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface, the display unit, and the input device are connected to the system bus through the input/output interface. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The input/output interface of the computer device is used to exchange information between the processor and external devices. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless communication may be realized through WIFI, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements a cue generation and display method. The display unit of the computer device is used to form a visual picture and may be a display screen, a projection device, or a virtual reality imaging device; the display screen may be a liquid crystal display screen or an electronic ink display screen. The input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad, mouse, or the like.
It will be appreciated by persons skilled in the art that the structures shown in fig. 5-6 are block diagrams of only portions of structures associated with aspects of the application and are not intended to limit the computer device to which aspects of the application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or may have a different arrangement of components.
In one embodiment, a computer device includes a memory having a computer program stored therein and a processor that when executing the computer program performs the steps of the method embodiments described above.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high density embedded nonvolatile Memory, resistive random access Memory (ReRAM), magnetic random access Memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric Memory (Ferroelectric Random Access Memory, FRAM), phase change Memory (Phase Change Memory, PCM), graphene Memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can be in the form of a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like. The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processor referred to in the embodiments provided in the present application may be a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, a data processing logic unit based on quantum computing, or the like, but is not limited thereto.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination contains no contradiction, it should be considered within the scope of this description.
The foregoing examples illustrate only a few embodiments of the application; although they are described in detail, they should not be construed as limiting the scope of the application. It should be noted that those skilled in the art can make several variations and modifications without departing from the concept of the application, all of which fall within the protection scope of the application. Accordingly, the protection scope of the application shall be subject to the appended claims.

Claims (10)

1. A cue generation and display method, the method comprising:
when a user account is detected entering a virtual space, obtaining answer information of the user account, the answer information being the answers input by the user account for a target questionnaire before entering the virtual space;
determining persona-description cue information for the user account according to the answer information; and
in response to the virtual space meeting a trigger condition for displaying cue information, displaying the persona-description cue information in the virtual space.
2. The method of claim 1, wherein determining the persona-description cue information for the user account according to the answer information comprises:
taking, as a target analysis, the analysis corresponding to the answer option selected by the user account as reflected by the answer information; and
outputting, by a pre-trained cue generation model, a target cue matched with the target analysis as the persona-description cue information.
3. The method according to claim 2, wherein the method further comprises:
acquiring sample data, the sample data comprising sample analyses and sample cues;
constructing model training data for a generative model according to the sample analyses and the sample cues; and
training the generative model with the model training data to obtain the pre-trained cue generation model.
4. The method according to claim 3, wherein constructing the model training data for the generative model according to the sample analyses and the sample cues comprises:
determining a training instruction statement for the generative model according to the sample analyses and the sample cues; and
using the training instruction statement as the model training data for the generative model.
5. The method of claim 4, wherein determining the training instruction statement for the generative model according to the sample analyses and the sample cues comprises:
combining the sample analyses and the sample cues according to their correspondence to obtain model training examples, the correspondence comprising the correspondence between each sample analysis and its sample cue; and
generating the training instruction statement according to the model training examples.
6. The method of claim 5, wherein generating the training instruction statement according to the model training examples comprises:
acquiring training requirement information, the training requirement information being information that accompanies the model training examples and imposes specific requirements on the model's output content, and comprising requirement description information and point-of-attention description information; and
combining the training requirement information and the model training examples to obtain the training instruction statement.
7. A cue generation and display device, the device comprising:
an answer acquisition module, configured to acquire answer information of a user account when the user account is detected entering a virtual space, the answer information being the answers input by the user account for a target questionnaire before entering the virtual space;
a cue determination module, configured to determine persona-description cue information for the user account according to the answer information; and
a cue display module, configured to display the persona-description cue information in the virtual space in response to the virtual space meeting a trigger condition for displaying cue information.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 6 when executing the computer program.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
CN202310799770.6A 2023-06-30 2023-06-30 Thread generation and display method, device, computer equipment and storage medium Pending CN116776861A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310799770.6A CN116776861A (en) 2023-06-30 2023-06-30 Thread generation and display method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310799770.6A CN116776861A (en) 2023-06-30 2023-06-30 Thread generation and display method, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116776861A true CN116776861A (en) 2023-09-19

Family

ID=88007860

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310799770.6A Pending CN116776861A (en) 2023-06-30 2023-06-30 Thread generation and display method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116776861A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination