CN117539371A - Text content display method, apparatus, device and computer readable storage medium - Google Patents


Info

Publication number
CN117539371A
CN117539371A (application CN202311754949.6A)
Authority
CN
China
Prior art keywords
text
text content
feature vector
prompt word
content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311754949.6A
Other languages
Chinese (zh)
Inventor
于鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing SoundAI Technology Co Ltd
Original Assignee
Beijing SoundAI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing SoundAI Technology Co Ltd
Priority to CN202311754949.6A
Publication of CN117539371A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • G06N3/0442Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]

Abstract

The application discloses a text content display method, apparatus, and device, and a computer-readable storage medium, belonging to the field of computer technology. The method is applied to a collaborative office client that includes a text generation model, and comprises the following steps: displaying a text interaction page in which a text editing area is displayed; in response to a trigger operation on a content generation function, displaying a prompt word page in which at least one prompt word is displayed; in response to a trigger operation on a target prompt word among the at least one prompt word, invoking the text generation model to process the target prompt word to obtain first text content; and displaying the first text content in the text editing area. The method improves both the flexibility and the efficiency of text content display.

Description

Text content display method, apparatus, device and computer readable storage medium
Technical Field
Embodiments of the present application relate to the field of computer technology, and in particular, to a text content display method, apparatus, device, and computer-readable storage medium.
Background
With the continuous development of computer technology, more and more applications provide functions for generating text content. A method for displaying the generated text content is therefore needed, one that makes the display more flexible and thereby improves display efficiency.
Disclosure of Invention
Embodiments of the present application provide a text content display method, apparatus, device, and computer-readable storage medium, which can improve the flexibility and efficiency of text content display. The technical scheme is as follows:
In one aspect, an embodiment of the present application provides a text content display method, applied to a collaborative office client that includes a text generation model, the method comprising:
displaying a text interaction page in which a text editing area is displayed;
in response to a trigger operation on a content generation function, displaying a prompt word page in which at least one prompt word is displayed;
in response to a trigger operation on a target prompt word among the at least one prompt word, invoking the text generation model to process the target prompt word to obtain first text content; and
displaying the first text content in the text editing area.
In a possible implementation, a text display area is further displayed in the text interaction page, and second text content is displayed in the text display area;
the method further comprises:
acquiring the second text content;
and the invoking the text generation model to process the target prompt word to obtain first text content comprises:
invoking the text generation model to process the target prompt word and the second text content to obtain the first text content.
In one possible implementation, the invoking the text generation model to process the target prompt word and the second text content to obtain the first text content comprises:
obtaining a first feature vector according to the target prompt word and the second text content, the first feature vector being used to represent the target prompt word and the second text content;
inputting the first feature vector into the text generation model, the text generation model obtaining a plurality of feature words according to the first feature vector and generating the first text content according to the plurality of feature words, the first text content comprising the plurality of feature words; and
acquiring the first text content output by the text generation model.
In one possible implementation, the obtaining a first feature vector according to the target prompt word and the second text content comprises:
acquiring a second feature vector corresponding to the target prompt word, the second feature vector being used to represent the target prompt word;
acquiring a third feature vector corresponding to the second text content, the third feature vector being used to represent the second text content; and
obtaining the first feature vector according to the second feature vector and the third feature vector.
In a possible implementation, the obtaining the first feature vector according to the second feature vector and the third feature vector comprises:
concatenating the second feature vector and the third feature vector to obtain the first feature vector; or
adding the values of the same dimension in the second feature vector and the third feature vector to obtain the first feature vector; or
multiplying the values of the same dimension in the second feature vector and the third feature vector to obtain the first feature vector.
In one possible implementation, after the first text content is displayed in the text editing area, the method further comprises:
displaying a feedback page in which a plurality of feedback controls are displayed, each feedback control corresponding to one satisfaction level; and
in response to a trigger operation on any feedback control among the plurality of feedback controls, taking the satisfaction level corresponding to that feedback control as the satisfaction level for the first text content.
In one possible implementation, the method further comprises:
updating the text generation model in response to the satisfaction level for the first text content not meeting a satisfaction requirement.
In another aspect, an embodiment of the present application provides a text content display apparatus, comprising:
a display module, configured to display a text interaction page in which a text editing area is displayed;
the display module being further configured to display, in response to a trigger operation on a content generation function, a prompt word page in which at least one prompt word is displayed;
an acquisition module, configured to invoke, in response to a trigger operation on a target prompt word among the at least one prompt word, a text generation model to process the target prompt word to obtain first text content;
the display module being further configured to display the first text content in the text editing area.
In a possible implementation, a text display area is further displayed in the text interaction page, and second text content is displayed in the text display area;
the acquisition module is further configured to acquire the second text content;
and the acquisition module is configured to invoke the text generation model to process the target prompt word and the second text content to obtain the first text content.
In a possible implementation, the acquisition module is configured to obtain a first feature vector according to the target prompt word and the second text content, the first feature vector being used to represent the target prompt word and the second text content; input the first feature vector into the text generation model, the text generation model obtaining a plurality of feature words according to the first feature vector and generating the first text content according to the plurality of feature words, the first text content comprising the plurality of feature words; and acquire the first text content output by the text generation model.
In a possible implementation, the acquisition module is configured to acquire a second feature vector corresponding to the target prompt word, the second feature vector being used to represent the target prompt word; acquire a third feature vector corresponding to the second text content, the third feature vector being used to represent the second text content; and obtain the first feature vector according to the second feature vector and the third feature vector.
In a possible implementation, the acquisition module is configured to concatenate the second feature vector and the third feature vector to obtain the first feature vector; or add the values of the same dimension in the second feature vector and the third feature vector to obtain the first feature vector; or multiply the values of the same dimension in the second feature vector and the third feature vector to obtain the first feature vector.
In one possible implementation, the display module is further configured to display a feedback page in which a plurality of feedback controls are displayed, each feedback control corresponding to one satisfaction level;
the acquisition module is further configured to take, in response to a trigger operation on any feedback control among the plurality of feedback controls, the satisfaction level corresponding to that feedback control as the satisfaction level for the first text content.
In one possible implementation, the apparatus further comprises:
an update module, configured to update the text generation model in response to the satisfaction level for the first text content not meeting a satisfaction requirement.
In another aspect, an embodiment of the present application provides a computer device comprising a processor and a memory, the memory storing at least one piece of program code, the at least one piece of program code being loaded and executed by the processor to cause the computer device to implement any of the above text content display methods.
In another aspect, a computer-readable storage medium is provided, in which at least one piece of program code is stored, the at least one piece of program code being loaded and executed by a processor to cause a computer to implement any of the above text content display methods.
In another aspect, a computer program or computer program product is provided, in which at least one computer instruction is stored, the at least one computer instruction being loaded and executed by a processor to cause a computer to implement any of the above text content display methods.
The technical scheme provided by the embodiments of the present application brings at least the following beneficial effects:
In this scheme, the collaborative office software includes the text generation model, so the text generation model is invoked to generate the first text content, and the generated first text content is displayed directly in the text editing area. The user does not need to manually copy and paste the first text content into the text editing area, so the first text content is displayed flexibly and efficiently, and its security is also improved.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; a person skilled in the art may derive other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of an implementation environment of a method for displaying text content according to an embodiment of the present application;
Fig. 2 is a flowchart of a method for displaying text content according to an embodiment of the present application;
Fig. 3 is a schematic display diagram of a home page of a collaborative office client according to an embodiment of the present application;
Fig. 4 is a schematic display diagram of a text interaction page according to an embodiment of the present application;
Fig. 5 is a schematic display diagram of a prompt word page according to an embodiment of the present application;
Fig. 6 is a schematic display diagram of a feedback page according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a display apparatus for text content according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an implementation environment of a text content display method according to an embodiment of the present application. As shown in fig. 1, the implementation environment includes a terminal device 101. A collaborative office client is installed and runs in the terminal device 101; the collaborative office client includes a text generation model and is used to execute the text content display method provided in the embodiments of the present application.
Optionally, the terminal device 101 may be any electronic product that can interact with a user through one or more of a keyboard, a touch pad, a remote controller, voice interaction, or a handwriting device, for example a PC (Personal Computer), a mobile phone, a smart phone, a PDA (Personal Digital Assistant), a wearable device, a PPC (Pocket PC), a tablet computer, a smart in-vehicle device, a smart television, a smart speaker, or a smart watch.
The terminal device 101 may broadly refer to one of a plurality of terminal devices; this embodiment is illustrated with the terminal device 101 only. Those skilled in the art will appreciate that the number of terminal devices may be greater or smaller: there may be only one terminal device 101, or there may be tens, hundreds, or more. The number and device type of the terminal devices 101 are not limited in the embodiments of the present application.
It will be appreciated by those skilled in the art that the above terminal device 101 is merely illustrative; other existing or future terminal devices, where applicable to the present application, are also intended to fall within the scope of protection of the present application and are incorporated herein by reference.
An embodiment of the present application provides a text content display method that may be applied to the implementation environment shown in fig. 1. Taking the flowchart shown in fig. 2 as an example, the method may be performed by the collaborative office client installed and running in the terminal device 101 in fig. 1. As shown in fig. 2, the method includes the following steps 201 to 204.
In step 201, a text interaction page is displayed in which a text editing area is displayed.
In an exemplary embodiment of the present application, a collaborative office client is installed and runs in the terminal device, and the collaborative office client includes a text generation model. The collaborative office client may be any collaborative office client, and the text generation model may be a generative dialogue model or any other model capable of generating text content; this is not limited in the embodiments of the present application. A generative dialogue model can perform language generation and dialogue interaction. Built on deep learning technology, it can automatically generate language for various natural language processing tasks. Such a model can be applied in fields such as robots, intelligent customer service, and virtual characters, providing personalized services and problem-solving through dialogue with users. It can also be applied to text creation, automatic translation, automatic summarization, and similar fields, providing users with more efficient text processing services. In summary, as a vertical application the generative dialogue model has broad application prospects across many industries, offering users more efficient and intelligent services and solutions.
The display interface of the terminal device displays related information of the collaborative office client, which may be the name of the collaborative office client, its icon, or any other information that uniquely identifies it. When a user wants to operate the collaborative office client, the user triggers this related information; the terminal device receives the trigger operation and displays the home page of the collaborative office client, in which object information of at least one object is displayed. An object may be a meeting, a task, or a document. Object information of an object includes, but is not limited to, the name, introduction information, and type of the object. The user may join any meeting, task, or document. Triggering the related information of the collaborative office client may be, for example, clicking it.
Fig. 3 is a schematic display diagram of the home page of a collaborative office client according to an embodiment of the present application. Fig. 3 shows object information of three objects: the first object is named "object one", its introduction information is "XXXX", and its type is "meeting"; the second object is named "object two", its introduction information is "AAAAAA", and its type is "document"; the third object is named "object three", its introduction information is "BBBBBB", and its type is "task".
It should be noted that the number of objects whose information is displayed in the home page of the collaborative office client may be greater or smaller; this is not limited in the embodiments of the present application.
When the user triggers the object information of a target object, this indicates that the user wants to join the target object, and a text interaction page is then displayed, in which a text editing area is displayed. The target object is any one of the at least one object.
Fig. 4 is a schematic display diagram of a text interaction page according to an embodiment of the present application, showing a text editing area 401.
In one possible implementation, a creation control for creating objects is also displayed in the home page of the collaborative office client, such as control 301 in fig. 3. When the user does not want to join an already created object, the user can trigger the creation control; in response to this trigger operation, at least one selectable type is displayed, and in response to a trigger operation on any selectable type, a text interaction page is displayed. Displaying the text interaction page indicates that an object of the selected type has been created, and the text interaction page is the one corresponding to that object.
In step 202, in response to a trigger operation on the content generation function, a prompt word page is displayed in which at least one prompt word is displayed.
In one possible implementation, the prompt word page may be displayed superimposed on the text interaction page, or displayed separately.
A generation control used to trigger the text generation function is also displayed in the text interaction page, such as control 402 in fig. 4. When the user wants to generate text content, the user triggers the generation control; the collaborative office client receives the trigger operation on the content generation function and, in response, displays a prompt word page in which at least one prompt word is displayed. Fig. 5 is a schematic display diagram of a prompt word page according to an embodiment of the present application; the page shown in fig. 5 displays ten prompt words, which are not described here in detail.
Triggering the generation control may be clicking it, or triggering it in another manner; this is not limited in the embodiments of the present application.
In step 203, in response to a trigger operation on a target prompt word among the at least one prompt word, the text generation model is invoked to process the target prompt word to obtain first text content.
The trigger operation on the target prompt word may be a click on the target prompt word or another operation; this is not limited in the embodiments of the present application. When the user triggers the target prompt word, this indicates that the first text content the user wants to generate is text content associated with the target prompt word. In response to the trigger operation on the target prompt word, invoking the text generation model to process the target prompt word to obtain the first text content includes: acquiring a second feature vector corresponding to the target prompt word, the second feature vector being used to represent the target prompt word; inputting the second feature vector into the text generation model, which obtains a plurality of feature words according to the second feature vector and generates the first text content according to the plurality of feature words, the first text content comprising the plurality of feature words; and acquiring the first text content output by the text generation model.
The process of obtaining the second feature vector corresponding to the target prompt word is described later and is not repeated here. The process of inputting the second feature vector into the text generation model and acquiring the first text content it outputs is similar to the corresponding process for the first feature vector, and is likewise not described in detail here.
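The prompt-only path just described — encode the target prompt word into a second feature vector, feed that vector to the text generation model, and fetch the model's output — can be sketched as follows. This is a minimal illustration, not the patent's implementation: the character-hashing encoder and the placeholder model are hypothetical stand-ins for the feature vector acquisition model and the text generation model.

```python
import numpy as np

def encode(text: str, dim: int = 8) -> np.ndarray:
    """Toy stand-in for the feature vector acquisition model:
    hashes characters into a fixed-size, L2-normalized vector."""
    vec = np.zeros(dim)
    for ch in text:
        vec[hash(ch) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def text_generation_model(feature_vector: np.ndarray) -> str:
    """Placeholder: a real model would derive feature words from the
    vector and compose the first text content from them."""
    return f"<generated text for feature vector of dim {feature_vector.shape[0]}>"

def generate_first_text_content(target_prompt_word: str) -> str:
    second_feature_vector = encode(target_prompt_word)   # obtain second feature vector
    return text_generation_model(second_feature_vector)  # obtain the model's output
```

Swapping `encode` for a trained CNN/DNN/LSTM encoder and `text_generation_model` for a real generative model yields the flow of step 203.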
In one possible implementation, a text display area is further displayed in the text interaction page, and second text content is displayed in the text display area; the second text content is the text content corresponding to the target object. The text display area and the text editing area do not overlap. In fig. 4, area 403 is a text display area, and the "xxxxxxxxxxxxxxxxxxxx" displayed in the text display area 403 is second text content.
Before the text generation model is invoked to process the target prompt word to obtain the first text content, the second text content is acquired. The process of invoking the text generation model to process the target prompt word to obtain the first text content then includes: invoking the text generation model to process the target prompt word and the second text content to obtain the first text content.
In one possible implementation, the process of invoking the text generation model to process the target prompt word and the second text content to obtain the first text content includes: obtaining a first feature vector according to the target prompt word and the second text content, the first feature vector being used to represent the target prompt word and the second text content; inputting the first feature vector into the text generation model, which obtains a plurality of feature words according to the first feature vector and generates the first text content according to the plurality of feature words, the first text content comprising the plurality of feature words; and acquiring the first text content output by the text generation model.
The process of obtaining the first feature vector according to the target prompt word and the second text content includes: acquiring a second feature vector corresponding to the target prompt word, the second feature vector being used to represent the target prompt word; acquiring a third feature vector corresponding to the second text content, the third feature vector being used to represent the second text content; and obtaining the first feature vector according to the second feature vector and the third feature vector.
In one possible implementation, the collaborative office client further includes a feature vector acquisition model for acquiring feature vectors. Optionally, the feature vector acquisition model may be any model capable of acquiring feature vectors; it is not limited in the embodiments of the present application. For example, the feature vector acquisition model may be a convolutional neural network (CNN) model, a deep neural network (DNN) model, or a long short-term memory (LSTM) model.
The second feature vector corresponding to the target prompt word and the third feature vector corresponding to the second text content are both acquired by invoking the feature vector acquisition model. Optionally, the target prompt word is input into the feature vector acquisition model, and the feature vector it outputs is taken as the second feature vector corresponding to the target prompt word; the second text content is input into the feature vector acquisition model, and the feature vector it outputs is taken as the third feature vector corresponding to the second text content.
The embodiment of the application provides three methods for acquiring a first feature vector according to a second feature vector and a third feature vector.
The first method is to splice the second feature vector and the third feature vector to obtain a first feature vector.
Alternatively, the third feature vector may be spliced before the second feature vector to obtain the first feature vector, or the third feature vector may be spliced after the second feature vector to obtain the first feature vector, which is not limited in the embodiment of the present application.
Illustratively, if the second feature vector is (A, B, C, D) and the third feature vector is (E, F, G, H), then the first feature vector is (A, B, C, D, E, F, G, H) or (E, F, G, H, A, B, C, D).
The second method is to add the numerical values of the same dimension in the second feature vector and the third feature vector to obtain the first feature vector.
Illustratively, if the second feature vector is (A, B, C, D) and the third feature vector is (E, F, G, H), then the first feature vector is (A+E, B+F, C+G, D+H).
The third method is to multiply the numerical values of the same dimension in the second feature vector and the third feature vector to obtain the first feature vector.
Illustratively, if the second feature vector is (A, B, C, D) and the third feature vector is (E, F, G, H), then the first feature vector is (AE, BF, CG, DH).
It should be noted that any of the above methods may be selected to determine the first feature vector, which is not limited in the embodiments of the present application.
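The three combination methods above can be sketched with plain Python lists standing in for the feature vectors:

```python
def splice(second: list, third: list, third_first: bool = False) -> list:
    """Method 1: concatenate the two vectors, in either order."""
    return third + second if third_first else second + third

def add_same_dim(second: list, third: list) -> list:
    """Method 2: add the values of the same dimension."""
    return [a + b for a, b in zip(second, third)]

def multiply_same_dim(second: list, third: list) -> list:
    """Method 3: multiply the values of the same dimension."""
    return [a * b for a, b in zip(second, third)]

second = [1.0, 2.0, 3.0, 4.0]  # (A, B, C, D)
third = [5.0, 6.0, 7.0, 8.0]   # (E, F, G, H)
splice(second, third)             # (A, B, C, D, E, F, G, H)
add_same_dim(second, third)       # (A+E, B+F, C+G, D+H) -> [6.0, 8.0, 10.0, 12.0]
multiply_same_dim(second, third)  # (AE, BF, CG, DH) -> [5.0, 12.0, 21.0, 32.0]
```

Note that splicing doubles the dimensionality of the first feature vector, while the elementwise methods preserve it; which is acceptable depends on the input dimension the text generation model expects.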
The process of inputting the first feature vector into the text generation model and acquiring the first text content output by the text generation model includes the following steps: the text generation model includes a plurality of candidate words, and feature vectors corresponding to the candidate words are obtained, where the feature vector corresponding to any candidate word is used to characterize that candidate word. The matching degree of each candidate word with the target prompt word and the second text content is determined according to the first feature vector and the feature vector corresponding to each candidate word; candidate words whose matching degree with the target prompt word and the second text content meets the matching requirement are taken as feature words; and the first text content is generated according to the feature words, the first text content including the plurality of feature words.
The process of obtaining the feature vector corresponding to each candidate word is similar to the process of obtaining the second feature vector corresponding to the target prompt word, and will not be described herein. The matching degree meeting the matching requirement may mean that the matching degree is greater than a matching threshold, which is set based on experience or adjusted according to the implementation environment; this is not limited in the embodiments of the present application. Illustratively, the matching threshold is 80.
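A hedged sketch of this matching step follows. The matching degree is computed here as cosine similarity scaled to a 0-100 range — an assumption, since the embodiment does not specify the metric — and candidate words above the example threshold of 80 become feature words:

```python
import math

def matching_degree(first_vec: list, cand_vec: list) -> float:
    """Illustrative matching degree on a 0-100 scale (cosine similarity;
    the actual metric is not specified in the embodiment)."""
    dot = sum(a * b for a, b in zip(first_vec, cand_vec))
    norms = math.sqrt(sum(a * a for a in first_vec)) * \
            math.sqrt(sum(b * b for b in cand_vec))
    return 100.0 * dot / norms if norms else 0.0

def select_feature_words(first_vec: list, candidates: dict,
                         threshold: float = 80.0) -> list:
    """Keep candidate words whose matching degree exceeds the threshold."""
    return [word for word, vec in candidates.items()
            if matching_degree(first_vec, vec) > threshold]

# Hypothetical candidate words with pre-computed feature vectors.
candidates = {"schedule": [1.0, 0.9, 0.0], "weather": [0.0, 0.1, 1.0]}
select_feature_words([1.0, 1.0, 0.0], candidates)  # -> ["schedule"]
```

In a real text generation model the candidate scoring is typically part of the model's output layer rather than an external loop; this sketch only makes the thresholding logic concrete.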
Optionally, an input box is also displayed in the prompt word page, where the input box is used to obtain the reference prompt word, and 501 in fig. 5 is an input box. The user inputs content in the input box, and a reference prompt word is determined according to the content input by the user. After the reference prompt word is obtained, the text generation model can be called to process the reference prompt word to obtain first text content, or the text generation model can be called to process the reference prompt word and second text content to obtain first text content. The process of calling the text generation model to process the reference prompt word to obtain the first text content is similar to the process of calling the text generation model to process the target prompt word to obtain the first text content; the process of calling the text generation model to process the reference prompt word and the second text content to obtain the first text content is similar to the process of calling the text generation model to process the target prompt word and the second text content to obtain the first text content, and detailed description is omitted here.
Optionally, the process of determining the reference prompt word according to the content input by the user includes: and taking the content input by the user as a reference prompt word, or taking the content corresponding to the content input by the user as the reference prompt word.
Illustratively, the content input by the user is A, and A is taken as the reference prompt word. For another example, the content input by the user is A and the content corresponding to A is B, so B is taken as the reference prompt word.
In step 204, first text content is displayed in a text editing area.
In one possible implementation, after the first text content is obtained in step 203, the first text content is displayed in a text editing area so that the user can see the first text content. The user can edit or modify the first text content displayed in the text editing area, so that the edited or modified first text content meets the requirements of the user.
Optionally, a send control, such as send control 404 in fig. 4, is also displayed in the text interaction page. When the first text content is not displayed in the text editing area, the send control is in a non-triggerable state; when the first text content is displayed in the text editing area, the send control is in a triggerable state. When the send control is in the non-triggerable state, the text interaction page does not change even if the user triggers the send control. When the send control is in the triggerable state and the user triggers it, the first text content is no longer displayed in the text editing area and is instead displayed in the text display area.
The send control being in the non-triggerable state may be indicated by displaying the send control in gray, and the send control being in the triggerable state may be indicated by displaying it in white. Of course, the display manners of the two states may be otherwise, which is not limited in the embodiments of the present application.
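The send-control behavior described above reduces to a small state function. This is a hypothetical sketch (function and state names are illustrative, not from the embodiment):

```python
def send_control_state(text_editing_area: str) -> str:
    """Non-triggerable (e.g., gray) when the editing area is empty;
    triggerable (e.g., white) once first text content is displayed."""
    return "triggerable" if text_editing_area else "non-triggerable"

def on_send(text_editing_area: str, text_display_area: list) -> tuple:
    """Triggering send moves content from the editing area to the display area;
    in the non-triggerable state the page does not change."""
    if send_control_state(text_editing_area) != "triggerable":
        return text_editing_area, text_display_area
    return "", text_display_area + [text_editing_area]
```

For example, `on_send("hello", [])` clears the editing area and appends "hello" to the display area, while `on_send("", [])` leaves both unchanged.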
In one possible implementation, after the first text content is displayed in the text editing area, a feedback page may also be displayed. A plurality of feedback controls are displayed in the feedback page; any feedback control corresponds to one satisfaction degree, and the satisfaction degrees corresponding to the plurality of feedback controls are different. The feedback page may be displayed superimposed on the text interaction page or displayed separately; the display mode of the feedback page is not limited in the embodiments of the present application.
Optionally, when the first text content is displayed in the text editing area and the display duration of the first text content reaches a duration threshold, the feedback page is displayed. The duration threshold is set based on experience, adjusted according to the implementation environment, or determined based on the word count of the first text content, which is not limited in the embodiments of the present application. When the duration threshold is determined based on the word count of the first text content, the duration threshold is proportional to the word count. That is, the more words the first text content has, the greater the duration threshold; conversely, the fewer words, the smaller the duration threshold. Illustratively, the duration threshold is 3 seconds; that is, when the display duration of the first text content in the text editing area reaches 3 seconds, the feedback page is displayed.
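When the duration threshold is derived from the word count, the proportional rule might be sketched as follows; the per-word rate and the floor are assumptions for illustration, since the embodiment only gives 3 seconds as an example value:

```python
def duration_threshold(word_count: int,
                       seconds_per_word: float = 0.25,
                       floor: float = 3.0) -> float:
    """More words -> larger threshold; fewer words -> smaller threshold,
    with a floor matching the 3-second example (constants are assumed)."""
    return max(floor, word_count * seconds_per_word)

duration_threshold(10)   # short content: the floor applies -> 3.0
duration_threshold(100)  # longer content -> 25.0
```

Any monotonically increasing mapping from word count to seconds would satisfy the proportionality described above; the linear form here is just the simplest choice.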
The feedback page is displayed only when the display duration of the first text content reaches the duration threshold, so that the user has time to read the first text content completely and fully understand it, making the user's satisfaction with the first text content more accurate.
Fig. 6 is a schematic display diagram of a feedback page according to an embodiment of the present application. In fig. 6, three feedback controls are shown, feedback control 601, feedback control 602, feedback control 603, respectively. Wherein the satisfaction degree corresponding to the feedback control 601 is "dissatisfaction", the satisfaction degree corresponding to the feedback control 602 is "satisfaction", and the satisfaction degree corresponding to the feedback control 603 is "very satisfaction".
In response to a triggering operation on any feedback control of the plurality of feedback controls, the satisfaction degree corresponding to that feedback control is taken as the satisfaction degree of the first text content. Illustratively, the user triggering the feedback control 602 indicates that the user's satisfaction with the first text content is "satisfied".
After determining the satisfaction of the user with the first text content, the text generation model may also be updated in response to the satisfaction of the user with the first text content not meeting the satisfaction requirement. Illustratively, the parameters of the text generation model are updated.
Optionally, the feature vector acquisition model may also be updated in response to satisfaction with the first text content not meeting the satisfaction requirement. Illustratively, the parameters of the feature vector acquisition model are updated.
The satisfaction degree not meeting the satisfaction requirement may mean that the satisfaction degree is "dissatisfied"; other definitions of not meeting the satisfaction requirement are also possible, which is not limited in the embodiments of the present application.
In one possible implementation manner, after the first text content is displayed in the text editing area, a scoring page may be displayed, where a plurality of scores are displayed in the scoring page, and the scoring page may be displayed on the text interaction page in a superimposed manner or may be displayed separately. Any score is taken as a score for the first text content in response to a triggering operation for any score of the plurality of scores. The text generation model is updated based on the score for the first text content being below the score threshold. Wherein the scoring threshold is set based on experience or adjusted according to the implementation environment, which is not limited in the embodiments of the present application. Illustratively, the scoring threshold is 6.
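Both feedback paths — satisfaction controls and numeric scores — reduce to the same update decision. A minimal sketch with the example values from above (the "dissatisfied" rule is one possible satisfaction requirement, not the only one):

```python
def should_update_models(satisfaction: str = None, score: float = None,
                         score_threshold: float = 6.0) -> bool:
    """Decide whether to update the text generation model (and optionally the
    feature vector acquisition model): update when satisfaction is
    "dissatisfied" or when the score falls below the threshold."""
    if satisfaction is not None and satisfaction == "dissatisfied":
        return True
    if score is not None and score < score_threshold:
        return True
    return False

should_update_models(satisfaction="satisfied")  # -> False
should_update_models(score=5)                   # -> True (5 < 6)
```

The update itself (adjusting model parameters) is outside the scope of this sketch; only the trigger condition is shown.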
In the above method, since the collaborative office software includes the text generation model, the text generation model is called to generate the first text content, and the generated first text content is directly displayed in the text editing area. The user does not need to manually copy and paste the first text content into the text editing area, so the flexibility of displaying the first text content in the text editing area is higher, the display efficiency of the first text content is higher, and the security of the first text content is also improved.
Moreover, when the first text content is generated, not only the target prompt word selected by the user but also the second text content displayed in the text display area are considered, and the first text content is generated according to the target prompt word and the second text content, so that the generated first text content is not only the content related to the target prompt word, but also the content related to the second text content, namely, the generated first text content meets the text generation requirement of the user and is also related to the context.
Fig. 7 is a schematic structural diagram of a display device for text content according to an embodiment of the present application, where, as shown in fig. 7, the device includes:
The display module 701 is configured to display a text interaction page, where a text editing area is displayed;
the display module 701 is further configured to display a prompt word page in response to a triggering operation of the content generating function, where at least one prompt word is displayed in the prompt word page;
the obtaining module 702 is configured to respond to a triggering operation for a target prompt word in the at least one prompt word, and call a text generation model to process the target prompt word, so as to obtain first text content;
the display module 701 is further configured to display the first text content in the text editing area.
In one possible implementation, a text display area is also displayed in the text interaction page, and second text content is displayed in the text display area;
the obtaining module 702 is further configured to obtain second text content;
and the obtaining module 702 is configured to invoke the text generation model to process the target prompt word and the second text content, so as to obtain the first text content.
In one possible implementation, the obtaining module 702 is configured to obtain a first feature vector according to the target prompt word and the second text content, where the first feature vector is used to characterize the target prompt word and the second text content; inputting a first feature vector into a text generation model, wherein the first feature vector is used for the text generation model to acquire a plurality of feature words according to the first feature vector, and generating first text content according to the plurality of feature words, and the first text content comprises the plurality of feature words; and acquiring the first text content output by the text generation model.
In one possible implementation manner, the obtaining module 702 is configured to obtain a second feature vector corresponding to the target prompt word, where the second feature vector is used to characterize the target prompt word; acquiring a third feature vector corresponding to the second text content, wherein the third feature vector is used for representing the second text content; and acquiring a first feature vector according to the second feature vector and the third feature vector.
In a possible implementation manner, the obtaining module 702 is configured to splice the second feature vector and the third feature vector to obtain a first feature vector; or, adding the numerical values of the same dimension in the second characteristic vector and the third characteristic vector to obtain a first characteristic vector; or multiplying the numerical values of the same dimension in the second feature vector and the third feature vector to obtain the first feature vector.
In a possible implementation manner, the display module 701 is further configured to display a feedback page, where a plurality of feedback controls are displayed, and any feedback control corresponds to a satisfaction degree;
the obtaining module 702 is further configured to respond to a triggering operation for any feedback control of the plurality of feedback controls, and take satisfaction corresponding to any feedback control as satisfaction to the first text content.
In one possible implementation, the apparatus further includes:
and the updating module is used for updating the text generation model in response to the satisfaction degree of the first text content not meeting the satisfaction requirement.
In the above device, since the collaborative office software includes the text generation model, the text generation model is called to generate the first text content, and the generated first text content is directly displayed in the text editing area. The first text content is displayed in the text editing area without the user manually copying and pasting it, so the flexibility of displaying the first text content in the text editing area is higher, the display efficiency of the first text content is higher, and the security of the first text content is also improved.
It should be understood that, in implementing the functions of the apparatus provided above, only the division of the above functional modules is illustrated, and in practical application, the above functional allocation may be implemented by different functional modules, that is, the internal structure of the device is divided into different functional modules, so as to implement all or part of the functions described above. In addition, the apparatus and the method embodiments provided in the foregoing embodiments belong to the same concept, and specific implementation processes of the apparatus and the method embodiments are detailed in the method embodiments and are not repeated herein.
Fig. 8 shows a block diagram of a terminal device 800 according to an exemplary embodiment of the present application. The terminal device 800 may be a portable mobile terminal such as: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal device 800 may also be referred to by other names such as user device, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal device 800 includes: a processor 801 and a memory 802.
The processor 801 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 801 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 801 may also include a main processor and a coprocessor. The main processor, also referred to as a CPU (Central Processing Unit), is a processor for processing data in an awake state; the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 801 may integrate a GPU (Graphics Processing Unit) responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 801 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 802 may include one or more computer-readable storage media, which may be non-transitory. Memory 802 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 802 is used to store at least one instruction for execution by processor 801 to implement a method of displaying text content provided by method embodiments in the present application.
In some embodiments, the terminal device 800 may further optionally include: a peripheral interface 803, and at least one peripheral. The processor 801, the memory 802, and the peripheral interface 803 may be connected by a bus or signal line. Individual peripheral devices may be connected to the peripheral device interface 803 by buses, signal lines, or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 804, a display 805, a camera assembly 806, audio circuitry 807, and a power supply 809.
Peripheral interface 803 may be used to connect at least one Input/Output (I/O) related peripheral to processor 801 and memory 802. In some embodiments, processor 801, memory 802, and peripheral interface 803 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 801, the memory 802, and the peripheral interface 803 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The radio frequency circuit 804 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 804 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 804 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 804 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 804 may communicate with other terminal devices via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: the world wide web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 804 may also include NFC (Near Field Communication) related circuitry, which is not limited in this application.
The display 805 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 805 is a touch display, the display 805 also has the ability to collect touch signals at or above the surface of the display 805. The touch signal may be input as a control signal to the processor 801 for processing. At this time, the display 805 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards. In some embodiments, the display 805 may be one, and disposed on a front panel of the terminal device 800; in other embodiments, the display 805 may be at least two, and disposed on different surfaces of the terminal device 800 or in a folded design; in other embodiments, the display 805 may be a flexible display disposed on a curved surface or a folded surface of the terminal device 800. Even more, the display 805 may be arranged in an irregular pattern other than rectangular, i.e., a shaped screen. The display 805 may be made of LCD (Liquid Crystal Display ), OLED (Organic Light-Emitting Diode) or other materials.
The camera assembly 806 is used to capture images or video. Optionally, the camera assembly 806 includes a front camera and a rear camera. Typically, the front camera is provided on the front panel of the terminal device 800, and the rear camera is provided on the rear of the terminal device 800. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to realize a background blurring function by fusing the main camera and the depth-of-field camera, panoramic shooting and VR (Virtual Reality) shooting by fusing the main camera and the wide-angle camera, or other fusion shooting functions. In some embodiments, the camera assembly 806 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
Audio circuitry 807 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and the environment, converting the sound waves into electric signals, inputting the electric signals to the processor 801 for processing, or inputting the electric signals to the radio frequency circuit 804 for voice communication. For stereo acquisition or noise reduction purposes, a plurality of microphones may be respectively disposed at different portions of the terminal device 800. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 801 or the radio frequency circuit 804 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, audio circuit 807 may also include a headphone jack.
The power supply 809 is used to power the various components in the terminal device 800. The power supply 809 may be an alternating current, direct current, disposable battery, or rechargeable battery. When the power supply 809 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal device 800 also includes one or more sensors 810. The one or more sensors 810 include, but are not limited to: acceleration sensor 811, gyroscope sensor 812, pressure sensor 813, optical sensor 815, and proximity sensor 816.
The acceleration sensor 811 can detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal apparatus 800. For example, the acceleration sensor 811 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 801 may control the display screen 805 to display a user interface in a landscape view or a portrait view based on the gravitational acceleration signal acquired by the acceleration sensor 811. Acceleration sensor 811 may also be used for the acquisition of motion data of a game or user.
The gyro sensor 812 may detect a body direction and a rotation angle of the terminal device 800, and the gyro sensor 812 may collect a 3D motion of the user to the terminal device 800 in cooperation with the acceleration sensor 811. The processor 801 may implement the following functions based on the data collected by the gyro sensor 812: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 813 may be disposed at a side frame of the terminal device 800 and/or at a lower layer of the display 805. When the pressure sensor 813 is provided at a side frame of the terminal device 800, a grip signal of the terminal device 800 by a user can be detected, and the processor 801 performs left-right hand recognition or quick operation according to the grip signal acquired by the pressure sensor 813. When the pressure sensor 813 is disposed at the lower layer of the display screen 805, the processor 801 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 805. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 815 is used to collect the ambient light intensity. In one embodiment, the processor 801 may control the display brightness of the display screen 805 based on the intensity of ambient light collected by the optical sensor 815. Specifically, when the intensity of the ambient light is high, the display brightness of the display screen 805 is turned up; when the ambient light intensity is low, the display brightness of the display screen 805 is turned down. In another embodiment, the processor 801 may also dynamically adjust the shooting parameters of the camera module 806 based on the ambient light intensity collected by the optical sensor 815.
A proximity sensor 816, also called a distance sensor, is typically provided at the front panel of the terminal device 800. The proximity sensor 816 is used to collect the distance between the user and the front face of the terminal device 800. In one embodiment, when the proximity sensor 816 detects that the distance between the user and the front face of the terminal device 800 gradually decreases, the processor 801 controls the display 805 to switch from the bright screen state to the off screen state; when the proximity sensor 816 detects that the distance between the user and the front surface of the terminal device 800 gradually increases, the processor 801 controls the display 805 to switch from the off-screen state to the on-screen state.
It will be appreciated by those skilled in the art that the structure shown in Fig. 8 is not limiting; more or fewer components than shown may be included, certain components may be combined, or a different arrangement of components may be employed.
In an exemplary embodiment, there is also provided a computer-readable storage medium having stored therein at least one piece of program code loaded and executed by a processor to cause a computer to implement a method of displaying text content of any one of the above.
Alternatively, the above-mentioned computer-readable storage medium may be a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program or a computer program product having at least one computer instruction stored therein, the at least one computer instruction being loaded and executed by a processor to cause a computer to implement a method of displaying text content of any of the above.
It should be noted that, information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals referred to in this application are all authorized by the user or are fully authorized by the parties, and the collection, use, and processing of relevant data is required to comply with relevant laws and regulations and standards of relevant countries and regions. For example, at least one hint term and some information referred to in this application are obtained with sufficient authorization.
It should be understood that references herein to "a plurality" are to two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects.
The foregoing embodiment numbers of the present application are merely for describing, and do not represent advantages or disadvantages of the embodiments.
The foregoing description is merely of exemplary embodiments of the present application and is not intended to limit the present application. Any modification, equivalent replacement, or improvement made within the principles of the present application shall fall within the protection scope of the present application.

Claims (10)

1. A method for displaying text content, the method being applied to a collaborative office client, the collaborative office client including a text generation model, the method comprising:
displaying a text interaction page, wherein a text editing area is displayed in the text interaction page;
in response to a triggering operation on a content generation function, displaying a prompt word page, wherein at least one prompt word is displayed in the prompt word page;
in response to a triggering operation on a target prompt word among the at least one prompt word, invoking the text generation model to process the target prompt word to obtain first text content;
and displaying the first text content in the text editing area.
2. The method of claim 1, wherein a text display area is further displayed in the text interaction page, the text display area having second text content displayed therein;
the method further comprises:
obtaining the second text content; and
the invoking the text generation model to process the target prompt word to obtain the first text content comprises:
invoking the text generation model to process the target prompt word and the second text content to obtain the first text content.
3. The method of claim 2, wherein the invoking the text generation model to process the target prompt word and the second text content to obtain the first text content comprises:
obtaining a first feature vector according to the target prompt word and the second text content, the first feature vector being used to represent the target prompt word and the second text content;
inputting the first feature vector into the text generation model, the first feature vector being used by the text generation model to obtain a plurality of feature words and to generate the first text content according to the plurality of feature words, the first text content comprising the plurality of feature words; and
obtaining the first text content output by the text generation model.
4. The method of claim 3, wherein the obtaining the first feature vector according to the target prompt word and the second text content comprises:
obtaining a second feature vector corresponding to the target prompt word, the second feature vector being used to represent the target prompt word;
obtaining a third feature vector corresponding to the second text content, the third feature vector being used to represent the second text content; and
obtaining the first feature vector according to the second feature vector and the third feature vector.
5. The method of claim 4, wherein the obtaining the first feature vector according to the second feature vector and the third feature vector comprises:
concatenating the second feature vector and the third feature vector to obtain the first feature vector; or
adding the values of the same dimensions in the second feature vector and the third feature vector to obtain the first feature vector; or
multiplying the values of the same dimensions in the second feature vector and the third feature vector to obtain the first feature vector.
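The three combination operations recited in claim 5 can be sketched in a few lines of NumPy. The vectors and their dimensionality below are illustrative assumptions (the claim does not fix concrete values); in practice the second and third feature vectors would come from a text encoder.

```python
import numpy as np

# Hypothetical encodings: "second feature vector" (target prompt word)
# and "third feature vector" (second text content) from claim 4.
prompt_vec = np.array([0.2, 0.5, 0.1])
context_vec = np.array([0.4, 0.1, 0.3])

# Option 1: concatenation ("splicing") - the result has the combined dimensionality.
concat_vec = np.concatenate([prompt_vec, context_vec])  # shape (6,)

# Option 2: adding the values of the same dimensions.
sum_vec = prompt_vec + context_vec  # shape (3,)

# Option 3: multiplying the values of the same dimensions.
prod_vec = prompt_vec * context_vec  # shape (3,)
```

Note that options 2 and 3 require the two vectors to share the same dimensionality, whereas option 1 does not.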
6. The method of any one of claims 1 to 5, wherein after the displaying the first text content in the text editing area, the method further comprises:
displaying a feedback page, wherein a plurality of feedback controls are displayed in the feedback page, and each feedback control corresponds to one satisfaction degree; and
in response to a triggering operation on any feedback control of the plurality of feedback controls, taking the satisfaction degree corresponding to the feedback control as the satisfaction degree of the first text content.
7. The method of claim 6, wherein the method further comprises:
updating the text generation model in response to the satisfaction degree of the first text content not meeting a satisfaction requirement.
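The feedback logic of claims 6 and 7 amounts to a mapping from feedback controls to satisfaction degrees plus a threshold check. The control identifiers, score values, and threshold below are illustrative assumptions, since the claims do not specify concrete values:

```python
# Hypothetical mapping from feedback control to satisfaction degree (claim 6).
FEEDBACK_CONTROLS = {
    "very_satisfied": 5,
    "satisfied": 4,
    "neutral": 3,
    "dissatisfied": 2,
    "very_dissatisfied": 1,
}

SATISFACTION_REQUIREMENT = 3  # assumed threshold for claim 7

def record_feedback(control_id: str) -> int:
    """Return the satisfaction degree bound to the triggered feedback control."""
    return FEEDBACK_CONTROLS[control_id]

def needs_model_update(satisfaction: int) -> bool:
    """Per claim 7, the text generation model is updated when the
    satisfaction degree does not meet the satisfaction requirement."""
    return satisfaction < SATISFACTION_REQUIREMENT
```

How the model update itself is performed (e.g. fine-tuning on the dissatisfying examples) is left open by the claims.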
8. A display device for text content, the device comprising:
a display module, used for displaying a text interaction page, wherein a text editing area is displayed in the text interaction page;
the display module being further used for displaying a prompt word page in response to a triggering operation on a content generation function, wherein at least one prompt word is displayed in the prompt word page;
an acquisition module, used for invoking a text generation model to process a target prompt word among the at least one prompt word, in response to a triggering operation on the target prompt word, to obtain first text content; and
the display module is further configured to display the first text content in the text editing area.
9. A computer device, comprising a processor and a memory, wherein at least one piece of program code is stored in the memory, and the at least one piece of program code is loaded and executed by the processor to cause the computer device to implement the text content display method according to any one of claims 1 to 7.
10. A computer readable storage medium, wherein at least one piece of program code is stored in the computer readable storage medium, and the at least one piece of program code is loaded and executed by a processor to cause a computer to implement the text content display method according to any one of claims 1 to 7.
CN202311754949.6A 2023-12-19 2023-12-19 Text content display method, apparatus, device and computer readable storage medium Pending CN117539371A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311754949.6A CN117539371A (en) 2023-12-19 2023-12-19 Text content display method, apparatus, device and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN117539371A true CN117539371A (en) 2024-02-09

Family

ID=89786274

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311754949.6A Pending CN117539371A (en) 2023-12-19 2023-12-19 Text content display method, apparatus, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN117539371A (en)

Similar Documents

Publication Publication Date Title
CN110471858B (en) Application program testing method, device and storage medium
CN112163406B (en) Interactive message display method and device, computer equipment and storage medium
CN109948581B (en) Image-text rendering method, device, equipment and readable storage medium
CN112581358B (en) Training method of image processing model, image processing method and device
CN111126958B (en) Schedule creation method, schedule creation device, schedule creation equipment and storage medium
CN112870697B (en) Interaction method, device, equipment and medium based on virtual relation maintenance program
CN112764600B (en) Resource processing method, device, storage medium and computer equipment
CN112860046B (en) Method, device, electronic equipment and medium for selecting operation mode
CN110852093A (en) Text information generation method and device, computer equipment and storage medium
CN112311652B (en) Message sending method, device, terminal and storage medium
CN114595019A (en) Theme setting method, device and equipment of application program and storage medium
CN117539371A (en) Text content display method, apparatus, device and computer readable storage medium
CN112487162A (en) Method, device and equipment for determining text semantic information and storage medium
CN112560472B (en) Method and device for identifying sensitive information
CN110795465B (en) User scale prediction method, device, server and storage medium
CN117055788A (en) Method, device and equipment for displaying and interacting multimedia resources
CN117032471A (en) Text content generation method, device and equipment
CN117524227A (en) Voice control method, device, equipment and computer readable storage medium
CN117942570A (en) Virtual object interaction method, device, equipment and computer readable storage medium
CN117591746A (en) Resource recommendation method, device, computer equipment and storage medium
CN116820318A (en) Content display method, device, equipment and computer readable storage medium
CN116320582A (en) Video display method and device, electronic equipment and storage medium
CN116684371A (en) Service function access method, device, equipment and computer readable storage medium
CN114817796A (en) Information content commenting method, device, equipment and readable storage medium
CN117668095A (en) Data display method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination