CN113590247B - Text creation method and computer program product - Google Patents

Info

Publication number
CN113590247B
Authority
CN
China
Prior art keywords
text
description
display interface
video
script
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110825484.3A
Other languages
Chinese (zh)
Other versions
CN113590247A (en)
Inventor
丁建栋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Alibaba Cloud Feitian Information Technology Co., Ltd.
Original Assignee
Hangzhou Alibaba Cloud Feitian Information Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Alibaba Cloud Feitian Information Technology Co., Ltd.
Priority to CN202110825484.3A
Publication of CN113590247A
Application granted
Publication of CN113590247B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30: Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/34: Browsing; Visualisation therefor
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30: Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/36: Creation of semantic tools, e.g. ontology or thesauri
    • G06F 16/367: Ontology
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/10: Text processing
    • G06F 40/103: Formatting, i.e. changing of presentation of documents
    • G06F 40/106: Display of layout of documents; Previewing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/20: Natural language analysis
    • G06F 40/205: Parsing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application provide a text creation method and a computer program product, where the text creation method is used to generate a text describing an object and includes the following steps: obtaining object information; in response to a text generation operation, generating a description text describing the object according to the object information, where the description text includes at least one shot script, and the shot script includes a title and the lines corresponding to the title; displaying a description text display interface, where the description text display interface includes at least one shot display area for displaying the shot script, and the shot display area includes a title area for displaying the title and a lines area for displaying the lines; and in response to a rewrite operation on the description text displayed in the description text display interface, obtaining rewrite information, rewriting the lines in the description text according to the rewrite information, and displaying the rewritten lines in the lines area. The scheme of the present application greatly simplifies the production process of description texts.

Description

Text creation method and computer program product
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to a text creation method and a computer program product.
Background
With the maturation of internet technology and the development of 5G, network speeds have improved further, and content on the network is increasingly presented as video. Compared with traditional image-and-text content, video is more complex to produce: a script file for the video is written first, and the finished video is then obtained through shooting, editing, and other steps carried out according to that script file.
Among the steps of video production, the script file determines elements such as the shots, the lines, and the on-screen text; it is a precondition for the subsequent shooting and editing steps and the basis for guaranteeing video quality. However, writing a video script file requires weighing many kinds of content, such as the lines of the video and the objects appearing in it, which makes script writing time-consuming and laborious.
Disclosure of Invention
In view of the above, the present application provides a text authoring solution to at least partially address the above-mentioned problems.
According to a first aspect of the present application, there is provided a text authoring method for generating text describing an object, comprising: obtaining object information; in response to a text generation operation, generating a description text describing the object according to the object information, where the description text includes at least one shot script, and the shot script includes a title and the lines corresponding to the title; displaying a description text display interface, where the description text display interface includes at least one shot display area for displaying the shot script, and the shot display area includes a title area for displaying the title and a lines area for displaying the lines; and in response to a rewrite operation on the description text displayed in the description text display interface, obtaining rewrite information, rewriting the lines in the description text according to the rewrite information, and displaying the rewritten lines in the lines area.
According to a second aspect of the present application, there is provided a computer program product having a computer program stored thereon, which when executed by a processor implements the text authoring method of the first aspect.
According to the above scheme, in response to a text generation operation, a description text describing the object is generated from the object information and displayed through the description text display interface, and the description text is then rewritten according to a rewrite operation on the displayed text, yielding the rewritten description text. A rewritten description text expressing the content the user desires can therefore be generated automatically based on the object information, which greatly simplifies the production of description texts. When the scheme is applied to the field of video shooting, the generated description text can be used directly as the video script, simplifying the production of video scripts.
Drawings
To illustrate the technical solutions of the present application or the prior art more clearly, the drawings required by the embodiments or the prior-art descriptions are briefly introduced below. Evidently, the drawings described below cover only some of the embodiments of the present application; a person of ordinary skill in the art may obtain other drawings from these drawings without creative effort.
FIG. 1A is a flowchart of the steps of a text authoring method according to an embodiment of the present application;
FIG. 1B is a schematic diagram of an example scenario for the embodiment of FIG. 1A;
FIG. 2A is a flowchart of the steps of a text authoring method according to an embodiment of the present application;
FIG. 2B is a schematic diagram of a description item display interface;
FIG. 2C is a schematic diagram of an interface with multiple shot display areas;
FIG. 2D is a schematic diagram of an interface including video templates;
FIG. 3A is a schematic structural diagram of a model according to an embodiment of the present disclosure;
FIG. 3B is a flowchart of the steps of a model training method according to an embodiment of the present disclosure;
FIG. 4 is a structural block diagram of a text rewriting device according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions in the present application better understood by those skilled in the art, the technical solutions in the present application will be clearly and completely described below with reference to the drawings in the present application, and it is obvious that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application shall fall within the scope of protection of the present application.
The specific implementation of the present application is further described below with reference to the drawings of the present application.
FIG. 1A is a flowchart of the steps of a text authoring method according to one embodiment of the present application. The method is used to generate text describing an object and includes:
s101, obtaining object information.
In this embodiment, the object information may include any data that can locate the object, such as the object's name, ID, link, or picture; this embodiment places no limitation on it.
S102, in response to a text generation operation, generating a description text describing the object according to the object information, where the description text includes at least one shot script, and the shot script includes a title and the lines corresponding to the title.
In this embodiment, the description text may be generated from data mined from the object's detail information, comment information, and the like, which this embodiment does not limit. For example, a title template corresponding to the object may be determined first, the template including a plurality of titles, and the lines corresponding to each title generated afterwards; alternatively, the titles and their corresponding lines may be generated from the object at the same time; or the lines may be generated from the object first, and the titles corresponding to the lines determined afterwards.
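For illustration only, the following Python sketch shows one possible in-memory representation of such a description text and its generation from a title template; all class, field, and function names are assumptions rather than part of the disclosed interface:

    from dataclasses import dataclass, field

    @dataclass
    class ShotScript:
        title: str                   # e.g. "item introduction", "guide conversion"
        lines: str                   # the spoken lines corresponding to the title
        materials: list = field(default_factory=list)   # optional shot materials

    @dataclass
    class DescriptionText:
        object_id: str               # name, ID, or link locating the object
        shots: list                  # at least one ShotScript

    def from_title_template(object_id, titles, generate_lines):
        # generate_lines is a hypothetical callback that produces the lines
        # for one title, e.g. by querying a generation model.
        shots = [ShotScript(t, generate_lines(object_id, t)) for t in titles]
        return DescriptionText(object_id, shots)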
S103, displaying a description text display interface, where the description text display interface includes at least one shot display area for displaying the shot script, and the shot display area includes a title area for displaying the title and a lines area for displaying the lines.
The description text may be presented through the description text display interface so that the user can review the content of the description text.
As stated, the interface provides one shot display area per shot script, each divided into a title area and a lines area.
In this embodiment, the titles included in the description text may come from a title template determined according to the object; for example, if the object is a food commodity, the titles may include the production process and the like. The lines in the description text may be generated by a generation model based on the detail information of the object and the like. Of course, the foregoing is illustrative and does not limit the present application.
S104, in response to a rewrite operation on the description text displayed in the description text display interface, obtaining rewrite information, rewriting the lines in the description text according to the rewrite information, and displaying the rewritten lines in the lines area.
In this embodiment, the rewrite information may include a rewrite emotion feature, a rewrite character feature, and the like, which this embodiment does not limit.
When the description text is displayed through the description text display interface, the interface may further include a rewrite button, which the user can trigger after entering the rewrite information. Once the rewrite button is triggered, the lines are rewritten according to the rewrite information to obtain the rewritten lines, which are displayed in the lines area of the description text display interface.
If the rewritten description text meets the user's expectations, it can be output directly. If the rewriting result does not meet the user's expectations, step S104 can be performed again until it does; alternatively, the user can directly edit the content of the lines area to adjust the specific lines.
In the solution provided by this embodiment, a description text describing an object is generated from the object information in response to a text generation operation, and rewritten in response to a rewrite operation on the description text presented in the description text display interface, yielding the rewritten description text. A rewritten description text expressing the content the user desires can therefore be generated automatically based on the object information, which greatly simplifies the production of description texts. When the scheme is applied to the field of video shooting, the generated description text can be used directly as the video script, simplifying the production of video scripts.
Referring to FIG. 1B, a schematic diagram of an example scenario of this embodiment is shown, taking as the object a commodity on an e-commerce platform.
As shown at the top of FIG. 1B, graphical user interface A includes an object input box. The user may enter the link or ID of a commodity on the e-commerce platform into the box and click the OK button, whereupon the description text corresponding to the commodity can be determined from that link or ID.
Another graphical user interface B (a description text display interface) may then be presented. Its upper part may show the name of the entered commodity, and the left side of the interface shows the shot scripts making up the description text determined for the commodity, such as shots for introducing the item, the item introduction itself, and guiding conversion. The user can adjust the order of the individual shot scripts by dragging them up or down.
The right side of the interface may show the lines of each shot script; for example, the lines under 'introducing the item' might read 'this beef is simply amazing, love it'. The 'item introduction' may cover introductions from a variety of angles, such as introductions of the ingredients, the production process, the commodity's benefits, and so on.
The user can also edit the resulting description text through graphical user interface B. The upper right of the interface includes a 'Rewrite' button; triggering it pops up an input box for the rewrite information. After entering content such as 'happy', 'surprised', or 'objective statement' through the input box, the user determines the emotion rewrite feature serving as the rewrite information, and the description text is rewritten according to that feature.
The description text obtained after rewriting can likewise be presented through a graphical user interface C (a description text display interface), whose layout may be the same as that of graphical user interface B. The user can also edit the rewritten description text through graphical user interface C. Once the user confirms that the rewritten description text meets expectations, clicking the download button in the upper right corner of the interface downloads the rewritten description text locally.
In the scheme provided by this embodiment, a rewritten description text expressing the content the user expects can be generated automatically based on the object information, greatly simplifying the production of description texts. When the scheme is applied to the field of video shooting, the generated description text can be used directly as the video script, simplifying the production of video scripts.
The text authoring method of the present embodiment may be performed by any suitable electronic device having data processing capabilities, including but not limited to: servers, mobile terminals (such as mobile phones, PADs, etc.), and PCs, etc.
Fig. 2A is a flowchart of steps of a text authoring method according to an embodiment of the present application, where, as shown in the drawing, the method includes:
s201, obtaining object information.
The specific implementation of this step is similar to that of step S101, and will not be described here again.
S202, obtaining at least one description item corresponding to the object according to the object information, and displaying the description item in a description item display interface.
In this embodiment, a description item describes a feature of the object. For example, if the object is a commodity, the description items may be the user interest points of the commodity; if the object is a knowledge point, the description items may be the key points of the knowledge point.
In this embodiment, the object may be mined in advance using a mining model for interest points, generating a knowledge graph corresponding to the object. When step S202 is executed, the preset knowledge graph may be queried according to the object information to obtain at least one description item corresponding to the object, as in the sketch below.
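The following sketch assumes the mined graph has been reduced to a simple mapping from object identifiers to interest points; the actual graph store is not specified in this embodiment:

    def get_description_items(object_info, knowledge_graph):
        # knowledge_graph is assumed to have been built offline by the
        # interest-point mining model, flattened here to a dict from an
        # object identifier to its mined description items.
        return list(knowledge_graph.get(object_info, []))

    # Example (all values illustrative):
    # get_description_items("nut-gift-box",
    #                       {"nut-gift-box": ["nutritious", "scientific proportions"]})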
FIG. 2B is a schematic diagram of a description item display interface, showing a plurality of description items corresponding to 'nuts', such as 'nutritious', 'full of vitality', 'original nut aroma', and 'scientific proportions'. The lower right of the interface includes a cancel button 'Cancel' for cancelling the description item acquisition and a confirmation button 'OK' for confirming that the generated description items meet expectations.
Optionally, in any embodiment of the present application, the description item display interface displays the plurality of description items as text, and the text creation method may further include: in response to a received selection operation on a description item, changing the selected description item to an editable state; and receiving the user's text editing operation on the description item, then determining and displaying the edited description item. The edited description items can thus better match the user's expectations, so that the description text generated from them also better matches those expectations.
For example, the description item 'three' may be double-clicked with the mouse to make it editable, after which the user can edit its text via an input device such as a keyboard, for example changing it to 'more than three'. After the modification is completed, clicking anywhere outside the description item with the mouse exits the editing state, and the edited description item 'more than three' is displayed.
Optionally, in any embodiment of the present application, the description item display interface may further include a description item add option, and the method may further include: receiving the user's selection operation on the add option, and adding a candidate display area to the description item display interface, where a plurality of candidate description items determined according to the object are displayed in the candidate display area; and in response to a selection operation on the candidate description items, hiding the candidate display area and displaying the selected candidates as description items on the description item display interface. The user can thus add description items as needed, so that the description text generated from them better matches the user's expectations.
For example, as shown in FIG. 2B, a description item add option '+' may be displayed in the interface. After a selection operation on '+' is received, a candidate display area is added to the description item display interface, and a plurality of candidate description items are displayed in it for the user to select; the selected candidates become newly added description items.
Optionally, in this embodiment, if the object is a commodity, the candidate description items may be determined from the commodity's review information, detail-page information, and the like, or from the user interest points of the category to which the commodity belongs; if the object is a knowledge point, the candidate description items may be the key points of that knowledge point, or other knowledge points extending from it. Of course, the foregoing is only an example and does not limit the candidate description items.
S203, generating description text for describing the object according to the displayed description item in response to the text generation operation.
In this embodiment, a text generation model may be trained in advance, and step S203 may specifically include: in response to the text generation operation, inputting the displayed description items into the text generation model, and outputting the description text describing the object through the text generation model.
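A minimal sketch of this step with a generic sequence-to-sequence model; the checkpoint name and the prompt format are assumptions, as the embodiment does not prescribe a specific model:

    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    tokenizer = AutoTokenizer.from_pretrained("script-generator")   # hypothetical checkpoint
    model = AutoModelForSeq2SeqLM.from_pretrained("script-generator")

    def generate_description_text(description_items):
        # Condition the generation on the confirmed description items.
        prompt = "; ".join(description_items)
        inputs = tokenizer(prompt, return_tensors="pt")
        ids = model.generate(**inputs, max_new_tokens=256, num_beams=4)
        return tokenizer.decode(ids[0], skip_special_tokens=True)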
S204, in response to a rewrite operation on the description text presented in the description text display interface, obtaining rewrite information.
In this embodiment, the description text display interface further includes a rewrite option, and the rewrite operation includes a selection operation on the rewrite option. Step S204 may further include: in response to the selection operation on the rewrite option, displaying a plurality of rewrite choices in the description text display interface; and receiving a selection operation on the plurality of rewrite choices, determining the selected rewrite choice, and obtaining the rewrite information according to the selected choice.
Optionally, in any embodiment of the present application, the rewrite information includes a rewrite emotion feature, and the rewritten description text is a text that describes the object in the rewritten emotional style.
S205, rewriting the lines in the description text according to the rewrite information.
In this embodiment, step S205 may further include: inputting the rewrite information and the lines of the description text into a text rewrite model, and outputting the rewritten lines through the text rewrite model.
Specifically, the text rewrite model may be trained in advance on sample data corresponding to the plurality of rewrite choices. After a selection operation on the rewrite choices is received, the description text and the selected rewrite choice can be input into the text rewrite model, so that the lines of the description text are rewritten by the model.
In this embodiment, the text rewrite model may be a Transformer model; of course, it may also be another kind of model, which this embodiment does not limit.
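One common way to condition such a Transformer rewrite model on the selected rewrite choice is a control prefix; the sketch below assumes that scheme, and the checkpoint name is hypothetical:

    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    tokenizer = AutoTokenizer.from_pretrained("line-rewriter")   # hypothetical checkpoint
    model = AutoModelForSeq2SeqLM.from_pretrained("line-rewriter")

    def rewrite_lines(lines, rewrite_info):
        # Prepend the rewrite information (e.g. "happy", "surprised",
        # "objective statement") as a control prefix that steers the style.
        prompt = f"<style={rewrite_info}> {lines}"
        inputs = tokenizer(prompt, return_tensors="pt")
        ids = model.generate(**inputs, max_new_tokens=256)
        return tokenizer.decode(ids[0], skip_special_tokens=True)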
In addition, if the description text before or after rewriting does not meet the user's expectations, the user can directly edit the lines displayed in the shot display area to obtain a description text that does. If it is determined that the user has modified the lines in the description text, the modified description text can be used to optimize the text rewrite model.
Optionally, in any embodiment of the present application, the shot display area may further include a material area, and before step S205 the method may further include: in response to a selection operation on the material area, switching the description text display interface to a material display interface that displays a plurality of shot materials; determining the selected shot material in response to the user's selection operation; and switching the interface back to the description text display interface, displaying the selected shot material in the triggered material area. The user can thus pick shot materials from the material library as needed, which enriches the content of the video script and further improves the efficiency of its generation.
FIG. 2C shows a schematic interface with multiple shot display areas, the material area being the '+' to the right of each shot display area. After a selection operation on a '+' in the interface is received, the material display interface can be shown, displaying a plurality of shot materials. The shot materials may include video materials, picture materials, and the like, and may come from the network or be uploaded by the user, which this embodiment does not limit.
Optionally, in any embodiment of the present application, the description text display interface includes a video generation button, and the method further includes: in response to a selection operation on the video generation button, generating the video file corresponding to the description text according to at least one shot script displayed in the description text display interface and the shot material corresponding to the shot script. Video can thus be generated directly from a script that meets the requirements, further improving the efficiency of video creation.
For example, generating the video file corresponding to the description text in response to the selection operation on the video generation button includes: in response to the selection operation, switching the interface to a video template display interface in which a plurality of video templates are displayed, each video template including a display scheme for the titles and lines of the shot scripts or the shot materials corresponding to them; determining the selected video template from the user's selection operation on the video template display interface; and generating the video file according to the selected video template, the at least one shot script displayed in the description text display interface, and the shot materials corresponding to the shot scripts. The user can thus generate video files from existing video templates together with the shot scripts and their shot materials, further improving the efficiency of video file generation; moreover, the same video template can be used to generate different video files, keeping a given user's video style consistent.
As shown in FIG. 2C, the right side of the interface includes a video generation button ('one-click generation'). After a selection operation on it is received, the interface of FIG. 2D may be shown, the lower part of which lists a plurality of video templates; the user can filter the templates by 'industry', 'style', and the like at their upper left and select a template that meets their needs. A video template may include: the video background, the display area and display mode of the shot materials, the display position and display format of the titles or lines, the artistic effects of the titles or lines, and the like. In addition, contents such as the video size can also be set through FIG. 2D.
Illustratively, the video template display interface further includes a sound editing area containing sound editing options, and the method further includes: displaying the sound editing area in the video template display interface, where the sound editing options include at least one of a background music editing option, a sound effect editing option, a timbre editing option, and a speech speed editing option; receiving an editing operation on the sound editing options, and determining the editing result of the options according to the editing operation; and converting the lines included in the description text into audio data according to that editing result. After the video file is generated according to the selected video template, the at least one shot script displayed in the description text display interface, and the corresponding shot materials, the method further includes: merging the video file and the audio data to obtain the audio-video data corresponding to the object. An audio file can thus be generated from the lines on the basis of the sound editing options, and the video file and audio file can be merged into one audio-video file based on the times at which each line occurs in both, simplifying the dubbing process and improving video generation efficiency.
As shown in FIG. 2D, the upper part of the interface includes a plurality of sound editing options, including at least one of a background music editing option, a sound effect editing option, a timbre editing option, and a speech speed editing option.
The background music editing option may be used to select the background music and its volume; the sound effect editing option may be used to edit the effect and volume of some prompt sounds; and the timbre editing option may be used to set the voice used for the lines, such as a standard male voice, a standard female voice, or a gentle female voice, and also to edit the speech speed, volume, and the like of the spoken lines.
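As a sketch of the merging step, assuming the lines have already been synthesized into an audio track by some TTS backend (synthesize_lines below is a hypothetical helper), the video and audio can be muxed with ffmpeg:

    import subprocess

    def merge_audio_video(video_path, audio_path, out_path):
        # Copy the video stream untouched, encode the narration to AAC, and
        # trim to the shorter track; requires the ffmpeg binary on PATH.
        subprocess.run(
            ["ffmpeg", "-y", "-i", video_path, "-i", audio_path,
             "-c:v", "copy", "-c:a", "aac", "-shortest", out_path],
            check=True,
        )

    # audio = synthesize_lines(lines, timbre="standard female", speed=1.0)  # hypothetical TTS call
    # merge_audio_video("script_video.mp4", audio, "object_audio_video.mp4")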
According to the scheme provided by this embodiment, a plurality of description items are generated automatically from the input object information, the description text is generated automatically from those description items and can be rewritten according to the rewrite information, the shot material corresponding to each shot script in the description text is selected, and the video file is generated from the shot scripts and their shot materials, greatly simplifying the production of both the description text and the video.
The text authoring method of the present embodiment may be performed by any suitable electronic device having data processing capabilities, including but not limited to: servers, mobile terminals (such as mobile phones, PADs, etc.), and PCs, etc.
FIG. 3A is a schematic structural diagram of a model provided by an embodiment of the present application, and FIG. 3B is a flowchart of the steps of a model training method provided by that embodiment. The model trained in this embodiment can be used as the text rewrite model of the preceding embodiments. Here the rewrite information is taken to be an emotion rewrite feature by way of example; schemes in which the rewrite information is any other content also fall within the scope of protection of the present application.
As shown in FIG. 3A, the generative adversarial network (GAN) model may include: a first generator, a second generator, and a discriminator. As shown in FIG. 3B, the model training method includes:
S301, obtaining a first description text sample describing a sample object, a first sample emotion expressed by the first description text sample, and a second sample emotion.
S302, taking the first description text sample and the first sample emotion as inputs to the first generator of the GAN model, and outputting a first emotion description text through the first generator.
In this step, the first generator may strengthen the first emotion in the first description text sample.
S303, calculating a first loss value from the first emotion description text, and judging, through the discriminator of the GAN model, the expression score with which the first emotion description text expresses the first sample emotion, generating a first discrimination result.
S304, taking the first emotion description text and the second sample emotion as inputs to the second generator of the GAN model, and outputting a second emotion description text through the second generator.
In this step, the second generator may change the emotion in the text to the second emotion.
Through steps S302 and S304, a first emotion description text containing the first emotion and a second emotion description text containing the second emotion are obtained as adversarial samples, so that the model parameters of the GAN model can be adjusted according to them.
S305, calculating a second loss value from the second emotion description text, and judging, through the discriminator of the GAN model, the expression score with which the second emotion description text expresses the second emotion, generating a second discrimination result.
Optionally, in this embodiment, the first generator or the second generator includes an encoder, a shifter, and a decoder. The encoder encodes the description text input to the generator to obtain a first feature vector corresponding to the text; the shifter re-encodes the first feature vector according to the input emotion feature to generate a second feature vector; and the decoder decodes the second feature vector, the decoding result being the description text output by the generator. Placing the shifter between the encoder and the decoder preserves the original structure of the Transformer model as far as possible, without major modification; therefore, in this embodiment a pretrained Transformer model may be adopted as the backbone, and the shifter added on top of it, yielding the first generator and the second generator of the present application. Similarly, pretrained models such as GPT-2 and T5 may also be employed and fall within the scope of protection of the present application.
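A minimal PyTorch sketch of this encoder-shifter-decoder arrangement; the encoder and decoder stand in for the corresponding halves of a pretrained Transformer, and the dimensions and fusion scheme are assumptions:

    import torch
    import torch.nn as nn

    class Shifter(nn.Module):
        # Re-encodes the first feature vector according to the input emotion
        # feature, producing the second feature vector.
        def __init__(self, hidden, num_emotions, nhead=8):
            super().__init__()
            self.emotion_embed = nn.Embedding(num_emotions, hidden)
            self.fuse = nn.TransformerEncoderLayer(d_model=hidden, nhead=nhead,
                                                   batch_first=True)

        def forward(self, enc_out, emotion_id):
            emo = self.emotion_embed(emotion_id).unsqueeze(1)   # (B, 1, H)
            fused = self.fuse(torch.cat([emo, enc_out], dim=1))
            return fused[:, 1:]                                 # drop the emotion slot

    class Generator(nn.Module):
        def __init__(self, encoder, decoder, hidden, num_emotions):
            super().__init__()
            self.encoder, self.decoder = encoder, decoder       # pretrained halves
            self.shifter = Shifter(hidden, num_emotions)

        def forward(self, src, emotion_id, tgt):
            first_vec = self.encoder(src)                       # first feature vector
            second_vec = self.shifter(first_vec, emotion_id)    # second feature vector
            return self.decoder(tgt, memory=second_vec)         # decoded rewrite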
In this embodiment, the discriminator may comprise a plurality of discriminators, for example two, which judge the first emotion description text and the second emotion description text respectively. When judging, a discriminator may score not only how well the first or second emotion description text expresses its corresponding emotion but also the fluency of the text and the like.
Optionally, in this embodiment, taking the first description text sample and the first sample emotion feature as inputs to the first generator of the GAN model and outputting the first emotion description text through the first generator includes: inputting the first description text sample into the second generator, rewriting it through the second generator, and outputting the rewritten text corresponding to the first description text sample; and then taking the rewritten text and the first sample emotion feature as inputs to the first generator and outputting the first emotion description text through the first generator.
The rewriting process may be: encoding the first description text sample through the encoder to obtain a feature vector, and decoding that feature vector through the decoder to obtain the rewritten text.
Optionally, in this embodiment, the loss value for a description text is calculated as follows: calculating an initial loss value corresponding to the description text based on an initial loss function; and then calculating the loss value corresponding to the description text from the initial loss value and a diversity bias value. Setting the diversity bias increases the degree of freedom of the generation process, so that the rewriting results of the GAN model are more diverse and its generative vitality improves.
In this embodiment, the initial loss function may be a cross-entropy loss function, and the loss function may, for example, take the form L(x, x') = |CE(x, x') - b| + b, where CE is the cross-entropy loss function, x is the input text, x' is the output text, and b is the diversity bias; clamping the loss at the bias value keeps the generator from collapsing onto a single lowest-loss rewrite.
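Under the same assumption about how the bias enters the loss, the computation could look like this:

    import torch.nn.functional as F

    def diversity_biased_loss(logits, target_ids, b=0.1):
        # Cross-entropy offset by the diversity bias b: the loss never falls
        # below b, so the generator is not pushed to reproduce one single
        # lowest-loss rewrite. The exact combination of CE and b is an
        # assumption; the embodiment only names the two ingredients.
        ce = F.cross_entropy(logits.view(-1, logits.size(-1)), target_ids.view(-1))
        return (ce - b).abs() + b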
S306, adjusting the parameters of the GAN model according to at least the first loss value, the first discrimination result, the second loss value, and the second discrimination result.
Optionally, in this embodiment, when the rewriting is performed, the method further includes: calculating a third loss value from the rewritten text, and judging, through the discriminator of the GAN model, the expression score with which the rewritten text expresses the first sample emotion feature, generating a third discrimination result. Adjusting the parameters of the GAN model according to at least the first loss value, the first discrimination result, the second loss value, and the second discrimination result then includes: adjusting the parameters according to at least the first loss value, the first discrimination result, the second loss value, the second discrimination result, the third loss value, and the third discrimination result. By performing the rewriting and adjusting the parameters of the GAN model based on the rewriting result, collapse of the GAN model can be avoided as far as possible.
When the rewriting is performed, the total loss function can be written as a combination of the three loss values and the three discrimination results, for example
L_total = L1 + L2 + L3 - λ(D1 + D2 + D3),
where L1 is the first loss value calculated from the first emotion description text, L2 is the second loss value calculated from the second emotion description text, and L3 is the third loss value calculated from the rewritten text; D1 is the first discrimination result corresponding to the first emotion description text (its score for expressing the first emotion E1), D2 is the second discrimination result corresponding to the second emotion description text (its score for expressing the second emotion E2), and D3 is the third discrimination result corresponding to the rewritten text (its score for expressing E1); λ is a weighting coefficient.
The parameters of the GAN model may be adjusted based on the total loss function. For the specific method of adjusting the parameters, refer to the related art, which is not repeated here.
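Putting steps S302 through S306 together, a single generator update might be sketched as follows; the recon_loss helper, the loss weights, and the sign conventions are assumptions:

    def train_generators(g1, g2, disc, opt_g, x, e1, e2):
        # S302: strengthen the first emotion e1 in the sample x.
        x_e1 = g1(x, e1)
        # S304: transfer the text to the second emotion e2.
        x_e2 = g2(x_e1, e2)

        # S303/S305: loss values plus the discriminator's expression scores;
        # recon_loss is a hypothetical stand-in for the diversity-biased
        # cross-entropy defined above.
        loss = (recon_loss(x_e1, x) + recon_loss(x_e2, x)
                - disc(x_e1, e1).log().mean()    # first discrimination result
                - disc(x_e2, e2).log().mean())   # second discrimination result

        # S306: adjust the generator parameters.
        opt_g.zero_grad()
        loss.backward()
        opt_g.step()
        return loss.item()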
In this embodiment, the trained second generator may be used to rewrite a description text according to the rewrite emotion feature, generating a description text that expresses the rewritten emotion.
According to the scheme provided by this embodiment, the trained second generator can automatically rewrite the description text according to the emotion information input by the user, and the rewritten description text expresses the rewritten emotion; a description text expressing the emotion the user expects can therefore be generated automatically based on the object information, greatly simplifying the production of description texts.
The model training method of the present embodiment may be performed by any suitable electronic device having data processing capabilities, including, but not limited to: servers, mobile terminals (such as mobile phones, PADs, etc.), and PCs, etc.
Referring to FIG. 4, a structural diagram of a text rewriting device according to a fourth embodiment of the present application is shown. As shown in FIG. 4, the device includes:
an input module 401, configured to obtain the object information and the input operations corresponding to the description text display interface;
a display module 402, configured to display the description text display interface and to display the rewritten lines in the lines area, where the description text display interface includes at least one shot display area for displaying a shot script, and the shot display area includes a title area for displaying the title and a lines area for displaying the lines; and
a processing module 403, configured to receive the input operations; in response to a text generation input operation, generate a description text describing the object according to the object information; and in response to an input operation for rewriting the description text, obtain the rewrite information, rewrite the lines in the description text according to the rewrite information, and transmit the rewritten lines to the display module, where the description text includes at least one shot script, and the shot script includes a title and the lines corresponding to the title.
Optionally, the display module is further configured to display a description item display interface containing the description items; the processing module is further configured to obtain at least one description item corresponding to the object according to the object information and, in response to the text generation operation, generate the description text describing the object according to the description items.
Optionally, the description item display interface displays a plurality of description items as text; the input module is further configured to receive selection operations on the description items and text editing operations on them; the processing module is further configured to change a selected description item to an editable state in response to a received selection operation and to determine the edited description item; the display module is further configured to display the edited description item.
Optionally, the input module is further configured to receive an input operation on a description item add option and a selection operation on a plurality of candidate description items; the description item display interface further includes a candidate display area in which a plurality of candidate description items determined according to the object are displayed; in response to the selection operation on the candidate description items, the candidate display area is hidden and the selected candidates are displayed as description items on the description item display interface.
Optionally, the input module is further configured to receive a text generation input operation; the processing module is further configured to input the displayed description items into a text generation model in response to the text generation operation and to output, through the model, the description text describing the object.
Optionally, the description text display interface further includes a rewrite option, and the rewrite operation includes a selection operation on that option; the input module is further configured to receive the selection operation on the rewrite option and a selection operation on the plurality of rewrite choices; the processing module is further configured to control the display module to display the plurality of rewrite choices in the description text display interface in response to the selection operation on the rewrite option, to receive the selection operation on the choices, to determine the selected rewrite choice, and to obtain the rewrite information according to it.
Optionally, the processing module is further configured to input the rewrite information and the lines of the description text into a text rewrite model and to output the rewritten lines through the model.
Optionally, the shot display area further includes a material area; the input module is further configured to receive selection operations on the material area and on the shot materials; the display module is further configured to switch the description text display interface to a material display interface showing a plurality of shot materials in response to the selection operation on the material area, and to switch back to the description text display interface, displaying the selected shot material in the triggered material area; the processing module is further configured to determine the selected shot material in response to the user's selection operation.
Optionally, the description text display interface includes a video generation button; the input module is further configured to receive the selection operation on the video generation button; the processing module is further configured to generate, in response to that operation, the video file corresponding to the description text according to at least one shot script displayed in the description text display interface and the shot material corresponding to it.
Optionally, the input module is further configured to receive the user's selection operation on the video template display interface; the display module is further configured to switch the interface to the video template display interface, which shows a plurality of video templates, each including a display scheme for the titles and lines of the shot scripts or the shot materials corresponding to them; the processing module is further configured to determine the selected video template from the user's selection and to generate the video file according to the selected template, the at least one shot script displayed in the description text display interface, and the shot materials corresponding to the shot scripts.
Optionally, the display module is further configured to display a sound editing area in the video template display interface, the area including sound editing options comprising at least one of a background music editing option, a sound effect editing option, a timbre editing option, and a speech speed editing option; the input module is further configured to receive editing operations on the sound editing options; the processing module is further configured to determine the editing result of the options according to the editing operation, convert the lines included in the description text into audio data according to that result, and merge the video file and the audio data to obtain the audio-video data corresponding to the object.
The text rewriting device of this embodiment is used to implement the corresponding text creation methods of the foregoing method embodiments and has the beneficial effects of those embodiments, which are not repeated here. In addition, for the functional implementation of each module of the text rewriting device, reference may be made to the description of the corresponding parts of the foregoing method embodiments, which is likewise not repeated here.
Referring to fig. 5, a schematic structural diagram of an electronic device according to an embodiment of the present application is shown, and embodiments of the present application are not limited to specific implementations of the electronic device.
As shown in fig. 5, the electronic device may include: a processor 502, a communication interface (Communications Interface) 504, a memory 506, and a communication bus 508.
Wherein:
processor 502, communication interface 504, and memory 506 communicate with each other via communication bus 508.
A communication interface 504 for communicating with other electronic devices or servers.
The processor 502 is configured to execute the program 510, and may specifically perform relevant steps in the above-described text authoring method embodiment.
In particular, program 510 may include program code including computer-operating instructions.
The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application. The one or more processors included in the smart device may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs together with one or more ASICs.
The memory 506 is configured to store the program 510. The memory 506 may comprise high-speed RAM and may also include non-volatile memory, such as at least one disk memory.
The specific implementation of each step in the program 510 may refer to the corresponding descriptions in the corresponding steps and units in the above text creation method embodiment, which are not repeated herein. It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus and modules described above may refer to corresponding procedure descriptions in the foregoing method embodiments, which are not repeated herein.
Another embodiment of the present application also provides a computer program product having a computer program stored thereon, which when executed by a processor implements the text authoring method provided by the above embodiment.
It should be noted that, according to implementation requirements, each component/step described in the embodiments of the present application may be split into more components/steps, and two or more components/steps or part of operations of the components/steps may be combined into new components/steps, so as to achieve the purposes of the embodiments of the present application.
The above methods according to the embodiments of the present application may be implemented in hardware or firmware, or as software or computer code that can be stored in a recording medium such as a CD-ROM, RAM, floppy disk, hard disk, or magneto-optical disk, or as computer code originally stored in a remote recording medium or a non-transitory machine-readable medium, downloaded over a network, and stored in a local recording medium, so that the methods described herein can be processed by such software, stored on a recording medium, using a general-purpose computer, a special-purpose processor, or programmable or special-purpose hardware such as an ASIC or FPGA. It is understood that a computer, processor, microprocessor controller, or programmable hardware includes a storage component (e.g., RAM, ROM, flash memory, etc.) that can store or receive software or computer code which, when accessed and executed by the computer, processor, or hardware, implements the text authoring method described herein. Further, when a general-purpose computer accesses code for implementing the text authoring method shown herein, execution of the code converts the general-purpose computer into a special-purpose computer for executing that method.
Those of ordinary skill in the art will appreciate that the elements and method steps of the examples described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
The above embodiments are only for illustrating the embodiments of the present application, but not for limiting the embodiments of the present application, and various changes and modifications can be made by one skilled in the relevant art without departing from the spirit and scope of the embodiments of the present application, so that all equivalent technical solutions also fall within the scope of the embodiments of the present application, and the scope of the embodiments of the present application should be defined by the claims.

Claims (13)

1. A text authoring method for generating text describing an object, comprising:
obtaining object information;
generating a description text for describing the object according to the object information in response to a text generation operation, wherein the description text comprises at least one shot script, and the shot script comprises a title and a line corresponding to the title;
displaying a description text display interface, wherein the description text display interface comprises at least one storyboard display area for displaying a shot script, the storyboard display area comprises a title area for displaying the title and a line area for displaying the line, and the description text display interface further comprises a rewrite option;
in response to a selection operation on the rewrite option, displaying a plurality of rewrite options in the description text display interface;
receiving a selection operation on the plurality of rewrite options, determining the selected rewrite option, and acquiring rewrite information according to the selected rewrite option;
rewriting the lines in the description text according to the rewrite information, and displaying the rewritten lines in the line area.
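As a reading aid, the following is a minimal Python sketch of the data structure that claim 1 implies. The names ShotScript, DescriptionText, and rewrite_lines are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ShotScript:
    title: str  # shown in the title area of a storyboard display area
    line: str   # the spoken line shown in the line area

@dataclass
class DescriptionText:
    shots: List[ShotScript] = field(default_factory=list)

    def rewrite_lines(self, rewrite_fn: Callable[[str], str]) -> None:
        # Apply a rewrite function (e.g., one backed by a text rewriting
        # model, cf. claim 6) to every line, as in the final step of claim 1.
        for shot in self.shots:
            shot.line = rewrite_fn(shot.line)
```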
2. The method of claim 1, wherein generating the description text for describing the object according to the object information in response to the text generation operation comprises:
obtaining, according to the object information, at least one description item corresponding to the object, and displaying the description item in a description item display interface;
generating the description text for describing the object according to the displayed description item in response to the text generation operation.
3. The method of claim 2, wherein the description item display interface displays a plurality of description items in text form, and the method further comprises:
in response to a received selection operation on a description item, changing the selected description item to an editable state;
receiving a text editing operation of a user on the description item, and determining and displaying the edited description item.
4. The method of claim 2, wherein the description item display interface further comprises a description item add option, and the method further comprises:
receiving a selection operation of a user on the description item add option, and adding a candidate display area to the description item display interface, wherein a plurality of candidate description items are displayed in the candidate display area, and the plurality of candidate description items are determined according to the object;
in response to a selection operation on the plurality of candidate description items, hiding the candidate display area and displaying the selected candidate description items as description items in the description item display interface.
5. The method of claim 2, wherein generating the description text for describing the object according to the displayed description item in response to the text generation operation comprises:
in response to the text generation operation, inputting the displayed description items into a text generation model, and outputting the description text for describing the object through the text generation model.
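The patent does not name a particular text generation model, so the following sketch of claim 5's generation step is a hedged illustration: the Hugging Face pipeline, the "t5-small" checkpoint, and the prompt format are all assumptions.

```python
from transformers import pipeline

# A generic text-to-text model standing in for the claimed text generation model.
generator = pipeline("text2text-generation", model="t5-small")

def generate_description_text(description_items):
    # Join the displayed description items into one conditioning prompt.
    prompt = "describe the product: " + "; ".join(description_items)
    return generator(prompt, max_length=128)[0]["generated_text"]

# e.g. generate_description_text(["wireless earbuds", "24 h battery", "IPX5"])
```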
6. The method of claim 1, wherein rewriting the lines in the description text according to the rewrite information comprises:
inputting the rewrite information and the lines in the description text into a text rewriting model, and outputting the rewritten lines through the text rewriting model.
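One plausible realization of claim 6's rewriting step is to encode the rewrite information (for instance an emotional style, cf. claim 12) as a control prefix on the model input. The prefix convention and model below are assumptions; the patent only requires some text rewriting model.

```python
from transformers import pipeline

rewriter = pipeline("text2text-generation", model="t5-small")

def rewrite_line(line, rewrite_info):
    # Prepend the rewrite information as a control prefix before the line.
    prompt = f"rewrite in a {rewrite_info} style: {line}"
    return rewriter(prompt, max_length=128)[0]["generated_text"]

# e.g. rewrite_line("This blender crushes ice in seconds.", "enthusiastic")
```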
7. The method of claim 1, wherein the storyboard display area further comprises a material area, and the method further comprises:
in response to a selection operation on the material area, switching the description text display interface to a material display interface, wherein the material display interface displays a plurality of shot materials;
in response to a selection operation of a user on the shot materials, determining the selected shot material;
switching the interface back to the description text display interface, and displaying the selected shot material in the triggered material area.
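An illustrative sketch of the state handling in claim 7: the choice made in the material display interface is remembered per storyboard area before the description text display interface is restored. All names here are placeholders, not from the patent.

```python
selected_materials = {}  # storyboard area index -> chosen material id/path

def on_material_chosen(shot_index, material):
    # Record the shot material the user picked for this storyboard area.
    selected_materials[shot_index] = material
    switch_to_description_interface()

def switch_to_description_interface():
    # Stand-in for restoring the description text display interface,
    # which then shows each chosen material in its material area.
    print("back to description text display interface with:", selected_materials)
```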
8. The method of claim 7, wherein the description text display interface comprises a video generation button, and the method further comprises:
in response to a selection operation on the video generation button, generating a video file corresponding to the description text according to at least one shot script displayed in the description text display interface and the shot material corresponding to the shot script.
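A hedged sketch of the assembly step in claim 8, using moviepy as a stand-in (the patent names no library; the 3-second per-shot duration and image materials are assumptions for illustration).

```python
from moviepy.editor import ImageClip, concatenate_videoclips

def build_video(material_paths, out_path="description.mp4"):
    # One clip per shot material, played in shot-script order.
    clips = [ImageClip(p).set_duration(3) for p in material_paths]
    concatenate_videoclips(clips, method="compose").write_videofile(out_path, fps=24)
```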
9. The method of claim 8, wherein generating, in response to the selection operation on the video generation button, the video file corresponding to the description text according to the at least one shot script displayed in the description text display interface and the shot material corresponding to the shot script comprises:
in response to the selection operation on the video generation button, switching the interface to a video template display interface, wherein a plurality of video templates are displayed in the video template display interface, and each video template comprises a display scheme for the titles and lines of the shot scripts or for the shot materials corresponding to the shot scripts;
determining the selected video template according to a selection operation of a user on the video template display interface;
generating the video file according to the selected video template, the at least one shot script displayed in the description text display interface, and the shot materials corresponding to the shot script.
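Claim 9's video template can be read as plain data fixing how each shot's title, line, and material are presented. The field names and values in this sketch are illustrative assumptions.

```python
# One plausible data representation of a video template.
SIMPLE_TEMPLATE = {
    "name": "clean-captions",
    "title": {"position": "top-center", "fontsize": 48},
    "line": {"position": "bottom-center", "fontsize": 32},
    "material": {"fit": "cover", "transition": "crossfade"},
}

def layout_for(template, element):
    # Look up how a given element ("title", "line", or "material")
    # should be rendered under the selected template.
    return template[element]
```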
10. The method of claim 9, wherein the video template display interface further comprises a sound editing area, and the method further comprises:
displaying the sound editing area in the video template display interface, wherein the sound editing area comprises sound editing options, and the sound editing options comprise at least one of a background music editing option, a sound effect editing option, a tone editing option, and a speech speed editing option;
receiving an editing operation on the sound editing options, and determining an editing result of the sound editing options according to the editing operation;
converting the lines included in the description text into audio data according to the editing result of the sound editing options;
and, after generating the video file according to the selected video template, the at least one shot script displayed in the description text display interface, and the shot material corresponding to the shot script:
merging the video file and the audio data to obtain audio-video data corresponding to the object.
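A hedged sketch of the claim-10 flow: synthesize the lines into audio with an offline TTS engine, then merge the audio with the generated video. pyttsx3 and moviepy are stand-ins chosen for illustration; the speech rate maps to the speech speed editing option, and file paths are assumed.

```python
import pyttsx3
from moviepy.editor import AudioFileClip, VideoFileClip

def lines_to_audio(lines, rate=170, wav_path="lines.wav"):
    # Convert the script lines into audio data at the edited speech speed.
    engine = pyttsx3.init()
    engine.setProperty("rate", rate)
    engine.save_to_file(" ".join(lines), wav_path)
    engine.runAndWait()
    return wav_path

def merge_audio_video(video_path, audio_path, out_path="with_audio.mp4"):
    # Merge the video file and the audio data into audio-video data.
    video = VideoFileClip(video_path)
    video.set_audio(AudioFileClip(audio_path)).write_videofile(out_path)
```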
11. The method of claim 1, wherein the object information is at least one of: a name of the object, an ID of the object, a link to the object, or an image of the object.
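Claim 11's alternatives map naturally onto a record with optional fields; this Python sketch is illustrative only, with assumed names.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ObjectInfo:
    name: Optional[str] = None        # name of the object
    object_id: Optional[str] = None   # ID of the object
    link: Optional[str] = None        # link to the object
    image_path: Optional[str] = None  # image of the object
```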
12. The method of claim 1, wherein the rewrite information comprises an emotional characteristic, and the rewritten description text is text that describes the object in the corresponding emotional style.
13. A computer program product having a computer program stored thereon which, when executed by a processor, implements the text authoring method of any one of claims 1-12.
CN202110825484.3A 2021-07-21 2021-07-21 Text creation method and computer program product Active CN113590247B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110825484.3A CN113590247B (en) 2021-07-21 2021-07-21 Text creation method and computer program product

Publications (2)

Publication Number Publication Date
CN113590247A (en) 2021-11-02
CN113590247B (en) 2024-04-05

Family ID: 78248847

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110825484.3A Active CN113590247B (en) 2021-07-21 2021-07-21 Text creation method and computer program product

Country Status (1)

Country Link
CN (1) CN113590247B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115052201A (en) * 2022-05-17 2022-09-13 阿里巴巴(中国)有限公司 Video editing method and electronic equipment
CN114860995B (en) * 2022-07-05 2022-09-06 北京百度网讯科技有限公司 Video script generation method and device, electronic equipment and medium

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103839562A (en) * 2014-03-17 2014-06-04 杨雅 Video creation system
CN103970892A (en) * 2014-05-23 2014-08-06 无锡清华信息科学与技术国家实验室物联网技术中心 Method for controlling multidimensional film-watching system based on intelligent home device
CN109144628A (en) * 2018-07-05 2019-01-04 厦门微芽互娱科技有限公司 Poster generation method, medium, terminal device and device
CN110007827A (en) * 2018-12-13 2019-07-12 阿里巴巴集团控股有限公司 Select edit methods, device, electronic equipment and computer readable storage medium
CN110059309A (en) * 2018-01-18 2019-07-26 北京京东尚科信息技术有限公司 The generation method and device of information object title
CN110825912A (en) * 2019-10-30 2020-02-21 北京达佳互联信息技术有限公司 Video generation method and device, electronic equipment and storage medium
CN111048215A (en) * 2019-12-13 2020-04-21 北京纵横无双科技有限公司 CRM-based medical video production method and system
CN111401045A (en) * 2020-03-16 2020-07-10 腾讯科技(深圳)有限公司 Text generation method and device, storage medium and electronic equipment
CN111629269A (en) * 2020-05-25 2020-09-04 厦门大学 Method for automatically shooting and generating mobile terminal short video advertisement based on mechanical arm
WO2021073315A1 (en) * 2019-10-14 2021-04-22 北京字节跳动网络技术有限公司 Video file generation method and device, terminal and storage medium
CN112732977A (en) * 2021-01-21 2021-04-30 网娱互动科技(北京)股份有限公司 Method for quickly generating short video based on template
CN112866796A (en) * 2020-12-31 2021-05-28 北京字跳网络技术有限公司 Video generation method and device, electronic equipment and storage medium
CN113010645A (en) * 2021-03-25 2021-06-22 腾讯科技(深圳)有限公司 Text generation method, device, equipment and storage medium
CN113010062A (en) * 2021-03-18 2021-06-22 阿里巴巴新加坡控股有限公司 Method and device for generating design scheme and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160378738A1 (en) * 2015-06-29 2016-12-29 International Business Machines Corporation Editing one or more text files from an editing session for an associated text file

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 2022-03-24
Address after: Room 516, Floor 5, Building 3, No. 969 Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province, 310023
Applicant after: Alibaba Dharma Institute (Hangzhou) Technology Co.,Ltd.
Applicant after: Aliyun Computing Co.,Ltd.
Address before: Room 516, Floor 5, Building 3, No. 969 Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province, 310023
Applicant before: Alibaba Dharma Institute (Hangzhou) Technology Co.,Ltd.
TA01 Transfer of patent application right
Effective date of registration: 2024-02-27
Address after: Room 553, 5th Floor, Building 3, No. 969 Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province, 311121
Applicant after: Hangzhou Alibaba Cloud Feitian Information Technology Co.,Ltd.
Country or region after: China
Address before: Room 516, Floor 5, Building 3, No. 969 Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province, 310023
Applicant before: Alibaba Dharma Institute (Hangzhou) Technology Co.,Ltd.
Applicant before: Aliyun Computing Co.,Ltd.
Country or region before: China
GR01 Patent grant