CN117274991A - Resource information generation method and device, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN117274991A
Authority
CN
China
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310757505.1A
Other languages
Chinese (zh)
Inventor
丁超凡
王玉
徐博磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202310757505.1A
Publication of CN117274991A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/809 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
    • G06V10/811 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data the classifiers operating on different input data, e.g. multi-modal recognition

Abstract

The application discloses a resource information generation method and device, an electronic device, and a readable storage medium, relating to the technical field of data processing. The method comprises the following steps: determining a data modality of a target resource; determining feature data of the target resource according to its data modality; acquiring demand prompt information for the target resource; and generating tag text for the target resource according to the demand prompt information and the feature data of the target resource. By combining a resource's own characteristics with a demand prompt, the method generates customized tag text specific to that resource, so that the tag text of each resource is distinguishable from that of other resources. When a large number of resources need to be named or described, the method can generate, for resources of multiple modalities, distinctive names or descriptions reflecting each resource's own characteristics, thereby reducing the manpower and time spent consulting references and brainstorming, producing resource tag text rapidly, and improving the efficiency of tag text generation.

Description

Resource information generation method and device, electronic equipment and readable storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method and apparatus for generating resource information, an electronic device, and a computer readable storage medium.
Background
In application project development, a large number of project resources are typically produced, such as prop resources and character resources in game applications, or texture resources in photo processing applications. As an application operates over the long term, the number of resources keeps growing, and naming or describing a large number of resources requires considerable manpower and time. When, for example, an event is held within an application, a large number of resources are developed for the event, and their naming or description must likewise be completed manually, so the efficiency of generating resource information is low.
Disclosure of Invention
The application provides a resource information generation method, a resource information generation device, an electronic device, and a computer readable storage medium, so as to solve, or at least partially solve, the above problems. The specifics are as follows.
In a first aspect, the present application provides a resource information generating method, where the method includes:
determining a data mode of the target resource;
determining characteristic data of the target resource according to the data mode of the target resource;
acquiring demand prompt information aiming at the target resource;
and generating a tag text of the target resource according to the demand prompt information aiming at the target resource and the characteristic data of the target resource.
In a second aspect, an embodiment of the present application further provides a resource information generating device, where the device includes:
the first determining module is used for determining the data mode of the target resource;
the second determining module is used for determining characteristic data of the target resource according to the data mode of the target resource;
the acquisition module is used for acquiring the demand prompt information aiming at the target resource;
and the generation module is used for generating a tag text of the target resource according to the demand prompt information aiming at the target resource and the characteristic data of the target resource.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a processor, a memory, and computer program instructions stored on the memory and executable on the processor;
the processor, when executing the computer program instructions, implements the resource information generation method as described in the first aspect above.
In a fourth aspect, embodiments of the present application further provide a computer readable storage medium, where computer program instructions are stored, where the computer program instructions are executed by a processor to implement a resource information generating method as described in the first aspect above.
Compared with the prior art, the application has the following advantages:
in the resource information generating method provided by the embodiment of the application, the data modality of the target resource for which tag text is to be generated is first determined. Feature data of the target resource is then determined according to that data modality, yielding the target resource's own characteristics. Demand prompt information for the target resource, which describes the requirements on the tag text, can also be acquired, and the tag text of the target resource can then be generated according to the demand prompt information and the feature data of the target resource. In the embodiment of the application, a resource's own characteristics can be determined according to its data modality and combined with its demand prompt to generate customized tag text dedicated to that resource, so that the tag text of each resource is distinguishable from that of other resources.
Drawings
Fig. 1 is a flowchart of a resource information generating method provided in an embodiment of the present application;
FIG. 2 is a flowchart of another resource information generating method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a model architecture for generating resource information according to an embodiment of the present application;
FIG. 4 is a flowchart of a method for generating a resource name based on a resource description according to an embodiment of the present application;
FIG. 5 is a flowchart of a method for generating a resource description based on a resource name, provided by an embodiment of the present application;
FIG. 6 is a flowchart of a method for filtering tag text according to an embodiment of the present application;
fig. 7 is a block diagram of a resource information generating device provided in an embodiment of the present application;
fig. 8 is a schematic logic structure diagram of an electronic device for implementing resource information generation according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. Based on the embodiments of the present application, every other embodiment that a person skilled in the art would obtain without making any inventive effort is within the scope of protection of the present application.
The terms "a," "an," "the," and "said" are used in this specification to denote the presence of one or more elements/components/etc.; the terms "comprising" and "having" are intended to be inclusive and mean that there may be additional elements/components/etc. in addition to the listed elements/components/etc.; the terms "first" and "second" and the like are used merely as labels, and are not intended to limit the number of their objects.
It should be understood that in embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates an "or" relationship between the surrounding objects. "Comprising A, B and/or C" means comprising any one, any two, or all three of A, B, and C.
It should be understood that in the embodiments of the present application, "B corresponding to a", "a corresponding to B", or "B corresponding to a", means that B is associated with a, from which B may be determined. Determining B from a does not mean determining B from a alone, but may also determine B from a and/or other information.
The resource information generating method in one embodiment of the present disclosure may be executed on a local terminal device or a server. When the resource information generating method is operated on the server, the method can be realized and executed based on a cloud interaction system, wherein the cloud interaction system comprises a cloud server and client equipment.
In an alternative embodiment, a cloud application implementing the resource information generation method may run under the cloud interaction system. In the cloud-computing-based operation mode, the entity that runs the cloud application program is separated from the entity that presents the application screen: storage and execution of the resource information generation method are completed on the cloud server, while the role of the client device is to send and receive data and present the application screen. For example, the client device may be a display device with a data transmission function close to the user side, such as a mobile terminal, a television, a computer, or a palmtop computer, while the information processing is performed by the cloud server. When the cloud application implementing the resource information generation method is used, a user operates the client device to send an operation instruction to the cloud server; the cloud server runs the cloud application program according to the operation instruction, encodes and compresses data such as the application screen, and returns the data to the client device over the network; finally, the client device decodes the data and outputs the application screen.
In a possible implementation manner, the embodiment of the invention provides a resource information generating method, and a graphical user interface for implementing an application of the resource information generating method is provided through a terminal device, wherein the terminal device may be the aforementioned local terminal device or the aforementioned client device in the cloud interaction system.
The application provides a resource information generating method, and an execution subject of the method may be an electronic device, where the electronic device may be a desktop computer, a notebook computer, a game console, a smart watch, a tablet computer, a mobile phone, a television, or other electronic devices.
As shown in fig. 1, the method includes the following steps S10 to S40.
Step S10: a data modality of the target resource is determined.
A data modality may be understood as the form in which data exists, such as text, image, video, or audio. A resource, as a kind of data, likewise has a corresponding data modality, e.g., text resources, image resources, video resources, and audio resources.
The target resource may be a type of resource, such as a weapon resource in a game, or a specific resource under a certain type, such as a bow and arrow resource under a weapon class in a game. The above-mentioned resource types may be artificially preset types, for example.
In the embodiment of the application, for the target resource of the label text to be generated, the data mode of the target resource can be determined first. Because different data modalities have different data characteristics, when corresponding tag texts are generated for the resources, the characteristics of the resources can be determined according to the data characteristics of the resources so as to generate the tag texts conforming to the characteristics of the resources, namely the exclusive tag texts of the resources.
Step S20: and determining the characteristic data of the target resource according to the data mode of the target resource.
In the embodiment of the present application, the data modalities of resources are mainly divided into two major categories: the image modality and the non-image modality. Since a video resource mainly consists of image frames, it may be classified into either the image modality or the non-image modality, which is not limited in this embodiment of the application.
The image features of the target resource may be extracted if the target resource is in an image modality, and the target resource features may be extracted or the input target resource features may be received if the target resource is in a non-image modality.
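The branch described above can be sketched as a small dispatcher. This is a hypothetical illustration, not the patent's implementation: `extract_image_features` is a placeholder for the model-based pipeline described later, and the non-image branch simply passes through keywords received from a keyword instruction (step S23).

```python
from typing import List, Optional

def determine_feature_data(modality: str,
                           keywords: Optional[List[str]] = None) -> List[str]:
    """Hypothetical dispatcher for step S20: image-modality resources go
    through feature extraction, while non-image resources rely on
    keywords received from a keyword instruction (step S23)."""
    if modality == "image":
        return extract_image_features()
    if keywords is None:
        raise ValueError("non-image resources require input keywords")
    return keywords

def extract_image_features() -> List[str]:
    # Placeholder for the visual/expression feature pipeline described
    # below; a real system would call pretrained models here.
    return ["white series", "skirt", "romantic mood"]
```

For example, `determine_feature_data("text", ["bow", "weapon"])` returns the supplied keywords unchanged.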
Step S30: and acquiring demand prompt information aiming at the target resource.
In the embodiment of the application, besides the characteristics of the target resource, the requirement prompt information for the target resource can be utilized, and the requirement prompt information can be used for describing the requirement on the target resource label text, so that the label text meeting the customized requirement can be generated based on the requirement prompt information.
The requirements indicated by the requirement prompting information may include requirements in terms of materials, requirements in terms of contents, requirements in terms of formats, requirements in terms of quantity, and the like, but are not limited thereto.
Step S40: and generating a tag text of the target resource according to the demand prompt information aiming at the target resource and the characteristic data of the target resource.
In the step, the self characteristics of the target resource and the demand prompt are combined, and the label text meeting the customized demand can be generated aiming at the characteristics of the target resource.
In the resource information generating method provided by the embodiment of the application, firstly, the data mode of the target resource of the label text to be generated is determined, then, the characteristic data of the target resource is determined according to the data mode of the target resource, so that the self characteristics of the target resource are obtained, the demand prompt information aiming at the target resource can be obtained, the demand of the label text is described, and the label text of the target resource can be generated according to the demand prompt information aiming at the target resource and the characteristic data of the target resource. In the embodiment of the application, the self characteristics of the resources can be determined according to the data modes of the resources, and the customized label text dedicated to the resources is generated by combining the demand prompt of the resources, so that the label text of each resource has the characteristic of being different from other resource label texts.
Fig. 2 shows a flowchart of another resource information generating method provided by the embodiment of the present application, and as shown in fig. 2, the embodiment of the present application may implement the following resource information generating tasks, including: a resource name/description generation task based on the resource keyword, a resource name generation task based on the resource description, and a resource name/description generation task based on the image feature.
In an alternative embodiment of the present application, the foregoing step S20 may include the following steps S21 to S22.
Step S21: and under the condition that the data modality of the target resource is an image, extracting visual features and expression features of the target resource, wherein the visual features of the target resource are used for representing visual objects and/or appearance descriptions of the visual objects in the target resource, and the expression features of the target resource are used for representing content meanings of the target resource.
An image resource includes not only visible appearance content, such as the color, style, and pattern of a garment, but also the content meaning the image resource intends to express, such as an ancient style or a regal mood. Therefore, for an image resource, visual features representing the visible content and expression features representing the content meaning can both be extracted, so that the image resource is understood from multiple dimensions.
Fig. 3 shows a schematic diagram of a model architecture for generating resource information according to an embodiment of the present application, as shown in fig. 3, in an alternative embodiment, visual features and expression features of a target resource may be extracted by:
generating the visual features of the target resource through a preset image-to-text model;
and determining the expression characteristics matched with the target resources through a preset image-text matching model.
The advantage of image-to-text generation is the ability to generate text from an image; therefore, an image-to-text model can be employed to generate the visual features of an image, accurately summarizing the visible objects in the image and their appearance. Manual identification can hardly refine the distinguishing points among the visible features of similar images (such as a series of image resources on the same theme), but this embodiment exploits the strengths of the image-to-text model to extract the visible features of an image accurately, so the efficiency of analyzing image features can be improved.
The advantage of an image-text matching model is that it gives the similarity between an image and a text. Based on training on a large number of image-text data pairs (image, content-meaning text), the image-text matching model can retrieve, through semantic matching, the content meaning that best fits an image. It can therefore be used to retrieve the expression features that best fit an image, accurately summarizing the meaning expressed by the image content. In current resource information generation schemes, this interpretation usually has to be done by people, but in this embodiment the meaning expressed by an image resource is interpreted using the strengths of the image-text matching model, so the efficiency of analyzing image features can be improved.
By way of example, the image-to-text model may specifically be a BLIP model (Bootstrapping Language-Image Pre-training).
The BLIP model provides a new VLP (Vision-Language Pre-training) framework that can cover a wider range of downstream tasks. The BLIP model introduces two innovations, from the model perspective and the data perspective respectively:
1. The Multimodal mixture of Encoder-Decoder (MED) is a new model architecture that can effectively perform multi-task pre-training and flexible transfer learning. An MED can operate as a unimodal encoder, an image-grounded text encoder, or an image-grounded text decoder. The model is pre-trained with three vision-language objectives: image-text contrastive learning, image-text matching, and image-conditioned language modeling.
2. Captioning and Filtering (CapFilt) is a new dataset bootstrapping method that can be used to learn from noisy image-text pairs. The pre-trained MED is fine-tuned into two modules: a Captioner, which produces synthetic captions for a given web image, and a Filter, which removes noisy captions from both the original web text and the synthetic text.
The image-text matching model may specifically be a CLIP model (Contrastive Language-Image Pre-Training), for example.
The main structure of the CLIP model includes a text encoder, which converts content-meaning text into text vectors (text embeddings), and an image encoder, which converts images into image vectors (image embeddings). The cosine similarity between a text vector and an image vector is then computed to predict whether the image is paired with the content-meaning text, i.e., whether the image matches the content-meaning text. The CLIP model thus breaks the paradigm of requiring a fixed set of categories.
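The matching step can be sketched as follows, assuming precomputed embeddings. The vectors below are toy values, not real CLIP outputs; the expression feature is chosen as the candidate text whose embedding has the highest cosine similarity to the image embedding.

```python
import numpy as np

def cosine_similarity(a, b) -> float:
    """Cosine similarity between two embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_expression_match(image_vec, candidates: dict) -> str:
    """candidates maps content-meaning text to its text embedding;
    returns the text whose embedding is closest to the image embedding."""
    return max(candidates,
               key=lambda text: cosine_similarity(image_vec, candidates[text]))
```

With a toy image embedding `[1.0, 0.0]` and candidates `{"ancient style": [0.9, 0.1], "sci-fi mood": [0.0, 1.0]}`, the function selects "ancient style".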
Step S22: and extracting keywords from the visual features and the expression features of the target resource to obtain feature data of the target resource.
In this step, after the visual features and the expression features of the target resource are obtained, keywords may be extracted therefrom, and these keywords may cover the characteristics of the visible object type, appearance, style, mood, etc. of the image, so that these keywords may be used as feature data of the target resource.
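A minimal sketch of this keyword-extraction step, under the assumption that the visual and expression features arrive as short English phrases. A production system might use POS tagging or TF-IDF weighting instead of a stopword filter; the stopword list here is illustrative.

```python
import re

STOPWORDS = {"a", "an", "the", "and", "with", "of", "in", "is", "are"}

def extract_keywords(feature_text: str) -> list:
    """Toy keyword extraction: lowercase word tokens minus stopwords,
    de-duplicated while preserving order of first appearance."""
    seen, keywords = set(), []
    for token in re.findall(r"[a-z]+", feature_text.lower()):
        if token not in STOPWORDS and token not in seen:
            seen.add(token)
            keywords.append(token)
    return keywords
```

For example, `extract_keywords("A white skirt with broken flowers")` yields `["white", "skirt", "broken", "flowers"]`.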
In an alternative embodiment of the present application, the foregoing step S20 may further include the following step S23.
Step S23: and under the condition that the data modality of the target resource is a non-image, receiving a keyword instruction aiming at the target resource, and determining the keyword indicated by the keyword instruction as characteristic data of the target resource.
In the embodiment of the application, for the non-image resource, the keyword indicated by the keyword instruction may be determined as the feature data of the target resource in response to the keyword instruction for the target resource.
The keyword instruction may be triggered through a graphical user interface provided by the terminal, and the keyword indicated by the keyword instruction may be obtained through input, automatic combination, or the like, which is not specifically limited in the embodiments of the present application.
Of course, in practical applications, the image resource may also determine the feature data by adopting the manner provided in step S23, which is not limited in this embodiment of the present application.
It should be noted that when a video resource is classified into the image modality, the video resource is also treated as an image resource. In an alternative embodiment, key frames may first be extracted from the video resource, visual features and expression features may then be extracted from the key frames, and keywords may finally be extracted from those features. This embodiment is suitable for video resources with a short duration and a small number of key frames.
When a video resource is classified into the non-image modality, its feature data may be determined in the manner described in step S23. This embodiment is suitable for video resources with a long duration and many key frames: with more key frames, the extracted features become more cluttered and the core features are easily blurred, so such video resources can be classified into the non-image modality.
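For the short-video case on the image path, key frames can be sampled uniformly before feature extraction. This is a hypothetical sketch; the patent does not fix a sampling scheme, and the `max_keys` budget is an assumed heuristic.

```python
def sample_key_frames(total_frames: int, max_keys: int = 8) -> list:
    """Uniformly sample at most max_keys frame indices from a video of
    total_frames frames. max_keys is an assumed heuristic threshold,
    not something specified by the method itself."""
    if total_frames <= 0:
        return []
    step = max(1, -(-total_frames // max_keys))  # ceiling division
    return list(range(0, total_frames, step))
```

For a 16-frame clip with the default budget of 8, every second frame is selected; a 100-frame clip still yields at most 8 indices.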
In addition, the characteristic data of the target resource may also include a description of the target resource, i.e. the name of the resource may be generated from the known description of the resource.
So far, for multi-modal resources covering both image and non-image resources, the feature data of the resources can be determined in the manner described above.
In an alternative embodiment of the present application, a graphical user interface may be displayed through the terminal, and accordingly, the foregoing step S30 may be specifically implemented by the following manner: and receiving demand prompt information for the target resource, which is input through a graphical user interface.
The demand prompt information for the target resource comprises one or more of a subject prompt word of the target resource, a resource type prompt word of the target resource, a word prompt word of the tag text, a format prompt word of the tag text and a quantity prompt word of the tag text.
The subject matter prompt word of the target resource can be used for indicating the background of the target resource, such as mountain and sea subject matter, three-country subject matter, science fiction subject matter and the like.
The resource type hint word of the target resource may be used to indicate a type of the target resource, which may be, for example, a type preset by human, such as a clothing category, a weapon category, a skill category, a character category, etc. in the game.
The word prompt word of the tag text can be used to indicate keywords that the tag text is expected to reflect or contain. For example, for a virtual skirt in a game, the tag text may be expected to include the word "skirt", so the tag text of the virtual skirt could be a resource name containing that word, such as "Feiyan skirt" or "Bishui orchid skirt".
The format prompt word of the tag text can be used to indicate the format of the tag text, such as word number requirement, fixed words for words at a certain position, etc.
The quantity prompt word of the tag text may be used to indicate the number of tag texts to be generated for the target resource; for example, 20 candidate tag texts may be generated in a batch for the target resource.
In an alternative embodiment of the present application, the foregoing step S40 may be specifically implemented by the following steps S41 to S42.
Step S41: and organizing the demand prompt information aiming at the target resource and the characteristic data of the target resource according to a preset format to obtain a demand text of the target resource.
In an alternative embodiment, the preset format may be the format of "data name: data content". For example, for a fashion resource in a game, organizing the data in this format may yield the following demand text for the fashion resource:
background material: the Chinese ancient wind game takes mountain and sea meridians as story backgrounds;
resource type: fashionable dress;
visual characteristics: female wear, white series, skirt, broken flowers and multi-light yarn materials;
expression characteristics: lovely style, romantic atmosphere.
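The organization step above can be sketched as a small formatting helper, assuming the demand prompt information and feature data have been collected into a dict (field names are illustrative, mirroring the fashion-resource example).

```python
def build_demand_text(fields: dict) -> str:
    """Organize demand prompt information and feature data into the
    'data name: data content' format (step S41). List values are joined
    with commas, mirroring the fashion-resource example above."""
    lines = []
    for name, content in fields.items():
        if isinstance(content, (list, tuple)):
            content = ", ".join(content)
        lines.append(f"{name}: {content}")
    return "\n".join(lines)
```

The resulting string is what gets fed to the downstream generative model as its input.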
Step S42: and taking the required text of the target resource as input data, and inputting the required text into a preset generation type pre-training converter model so that the generation type pre-training converter model outputs the label text of the target resource.
The demand text of the target resource obtained in the previous step is input into a preset generative pre-trained transformer model (Generative Pre-Trained Transformer, GPT), and the generative pre-trained transformer model outputs the tag text of the target resource.
In an alternative embodiment, the output end of the image-to-text model is docked to the input end of the generative pre-trained transformer model, the output end of the image-text matching model is likewise docked to the input end of the generative pre-trained transformer model, and a module for organizing the data format is arranged between the docked models to implement step S41. In this way, the risk of leaking resource-related data as it flows between the models can be avoided.
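The docked pipeline can be sketched end-to-end as a composition of callables. This is a schematic illustration only: every model argument is a stand-in function, not the patent's literal implementation, and the field names are assumptions.

```python
def label_text_pipeline(image, caption_model, match_model,
                        gpt_model, demand_fields: dict) -> str:
    """Sketch of the docked pipeline: image-to-text and image-text
    matching outputs feed a format-organizing module, and the resulting
    demand text becomes the generative model's input (steps S21-S42)."""
    fields = dict(demand_fields)                      # demand prompt info (step S30)
    fields["visual features"] = caption_model(image)  # visual features (step S21)
    fields["expression features"] = match_model(image)
    # organize in "data name: data content" format (step S41)
    demand_text = "\n".join(f"{k}: {v}" for k, v in fields.items())
    return gpt_model(demand_text)                     # tag text out (step S42)
```

Each stage can be swapped independently, which is one motivation for docking the models rather than exchanging intermediate data through external storage.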
In an alternative embodiment of the present application, the tag text of the target resource includes the name and/or description of the target resource.
By means of the resource information generation method, the name of a resource can be generated, accomplishing the task of naming the resource.
By means of the resource information generation method, the description of a resource can be generated, providing an explanation of or a feature introduction to the resource.
By way of example, the resource information generation method provided by the embodiment of the application can also generate the name and the description of a resource at the same time.
Fig. 4 is a flowchart of a method for generating a resource name based on a resource description according to an embodiment of the present application. As shown in fig. 4, in an alternative implementation of the present application, in the case of generating the description of the target resource according to the demand prompt information and the feature data for the target resource, the method may further include the following steps S51 to S52.
Step S51: extracting core words from the description of the target resource;
step S52: and generating the name of the target resource according to the core word and a preset naming strategy.
The description of a resource usually contains more words, while the name contains fewer words and is more refined. Therefore, the description of the resource can be generated first; core words, such as one or more of nouns, verbs and important adjectives, are then extracted from the description; and the name of the resource is generated using the core words and/or their synonyms or near-synonyms.
The naming policy may include a selection policy of the core word (such as selecting a noun or a verb), collocation of the core word (such as a combination of noun+adjective/verb+noun+adjective/verb), a location policy of the core word (such as where the core word is placed in a name), and the like, which are not specifically limited in the embodiment of the present application.
For example, for a virtual apparel resource in a game, the generated resource description is "Xian-Zi Bai Xuepei Qing Xia, Yue-Xun-Chun-Yi-Xun-Ji". Core nouns representing the scene, such as "snow", "Xia" and "Yue", may be extracted from the description; "Xia" and "Yue" are then selected to generate a five-character name in which they occupy the first and third characters respectively, and the name ends with "yi" (clothes), a character representing the apparel attribute. In this way, the resource name "Charyuzi Yueyi" can be obtained from the resource description.
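Steps S51 to S52 may be sketched as follows, where the noun lexicon, the filler character and the suffix are illustrative assumptions rather than part of the embodiment:

```python
def extract_core_words(description: str, noun_lexicon: set) -> list:
    """Step S51: keep the description words found in a lexicon of core nouns."""
    return [word for word in description.split() if word in noun_lexicon]

def apply_naming_strategy(core_words: list, filler: str, suffix: str) -> str:
    """Step S52: a five-part name with the core words in the first and third
    positions and a suffix marking the resource attribute (e.g. apparel)."""
    first, second = core_words[0], core_words[1]
    return f"{first}{filler}{second}{filler}{suffix}"

core = extract_core_words("snow moon spring robe", {"snow", "moon"})
name = apply_naming_strategy(core, "-", "yi")  # "yi" stands for the apparel suffix
```

In practice the selection, collocation and location policies described above would decide which core words are kept and where each one is placed.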
Fig. 5 shows a flowchart of a method for generating a resource description based on a resource name. As shown in fig. 5, in an alternative implementation of the present application, in the case of generating the name of the target resource according to the demand prompt information and the feature data for the target resource, the method may further include the following steps S61 to S62.
Step S61: searching a target sentence matched with the name of the target resource from a preset literary composition library;
step S62: the target statement is determined as a description of the target resource.
Referring to the virtual apparel example above, in some situations there may be literary requirements on the description of a resource: for example, it may be desirable for the description to take a poetic form (five-character or seven-character verse, etc.) or to allude to a particular literary work, which places higher requirements on the generation of the resource tag text. In the related art, meeting such a demand requires the related personnel to have a certain literary level, a capability requirement that is often difficult to satisfy.
However, in the embodiment of the present application, a literary composition library can be preset for this requirement. The library may record literary compositions of specific genres (such as poetry and prose) or specific subjects (such as Chinese and foreign story subjects), so that sentences whose imagery, style and the like match the generated resource name can be searched from the library and used as the resource description. In this way, the resource description acquires a certain literary quality.
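A minimal sketch of steps S61 to S62 is given below; word overlap is an assumed stand-in for matching by imagery and style, and the library sentences are placeholders:

```python
def find_matching_sentence(name: str, library: list) -> str:
    """Step S61: return the library sentence sharing the most words with the
    resource name (word overlap approximates imagery/style matching)."""
    name_words = set(name.split())
    return max(library, key=lambda sentence: len(name_words & set(sentence.split())))

literary_library = [
    "the cold moon rises over the quiet mountain",
    "white snow settles on the green pines",
    "spring water flows past the old stone bridge",
]
# Step S62: the best-matching sentence becomes the resource description.
description = find_matching_sentence("cold moon", literary_library)
```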
Fig. 6 shows a flowchart of a method for filtering tag text according to an embodiment of the present application, as shown in fig. 6, in an alternative implementation manner of the present application, the method may further include the following steps S71 to S72.
Step S71: determining the similarity between a first tag text and a second tag text, wherein the first tag text and the second tag text are two different tag texts generated for the same or different resources;
step S72: if the similarity between the first label text and the second label text is larger than the preset similarity, deleting one of the first label text and the second label text.
In practical applications, resources of the same type often have similar features, so the names generated for same-type resources may be very similar or even identical; this easily occurs when the number of resources is large or when tag texts are generated in batches. Therefore, after a plurality of tag texts are generated for the same resource or for different resources, duplicate checking can be performed according to the similarity between the tag texts: when the similarity between two tag texts is high, one of them is deleted and the other is retained, thereby avoiding highly similar or identical tag texts being produced for the same resource or for different resources.
For example, of two tag texts whose similarity exceeds the preset similarity, the tag text with fewer words may be deleted, or the tag text that is similar to more of the other tag texts may be deleted, or the tag text generated earlier may be deleted; this is not specifically limited in the embodiment of the present application.
If the similarity between the first label text and the second label text is smaller than or equal to the preset similarity, the first label text and the second label text can be reserved.
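The duplicate check of steps S71 to S72 may be sketched as follows; difflib's sequence ratio serves as an assumed similarity measure (the embodiment does not prescribe a particular one), and the sample tag texts are illustrative:

```python
import difflib

def filter_tag_texts(tag_texts: list, preset_similarity: float = 0.8) -> list:
    """Steps S71-S72: retain a tag text only if its similarity to every
    already retained text is at or below the preset similarity; otherwise
    delete it (here the earlier-generated text of a pair is retained)."""
    retained = []
    for text in tag_texts:
        if all(difflib.SequenceMatcher(None, text, kept).ratio() <= preset_similarity
               for kept in retained):
            retained.append(text)
    return retained

unique = filter_tag_texts([
    "Lily Xianzong skirt",
    "Lily Xianzong skirts",  # near-duplicate of the first text, dropped
    "Crane dance skirt",
])
```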
In an alternative embodiment of the present application, the target resource may comprise a game resource. Resources in a game are various, and games are usually developed over a long period: props, roles (including player roles and non-player roles), skills, gift packs, dungeons and other resources are continuously developed along with version iterations, holidays, events and the like, and the number of resources can reach tens of thousands. The resource information generation method provided by the embodiment of the present application can generate resource names/descriptions for tens of thousands of game resources in a short time.
In one embodiment, the game may include, but is not limited to, a Multiplayer Online Battle Arena (MOBA) game, a Real-Time Strategy (RTS) game, a shooting virtual game, or a chess-and-card virtual game.
The following provides an example of generating resource names and descriptions for a game resource, in which virtual skirt resources need to be generated in batches. The generation requirements include: generate the names and corresponding descriptions of 7 skirts; the skirt is a short skirt in the ancient style, white in color, lovely in style, decorated with broken flowers, and made of light yarn; the name is in the ancient style and is associated to some extent with the Classic of Mountains and Seas. For the above requirements, the generated names and descriptions of the virtual skirt resources may be as follows:
1. Biwenxianzi skirt: a skirt made of white light yarn, as pure and lovely as the Biwen fairy; the broken flowers interspersed on it recall the beautiful scenery in the Classic of Mountains and Seas.
2. Lily Xianzong skirt: a lovely white skirt whose broken flowers symbolize lilies, recalling the beautiful lily fairies in the Classic of Mountains and Seas.
3. Feiyan flower skirt: a white ancient-style skirt of light yarn that calls the delphinium to mind; its broken-flower decoration looks like falling petals, echoing the beautiful scenery in the Classic of Mountains and Seas.
4. Jade Rabbit flower-meaning skirt: a white ancient-style short skirt; the broken flowers on it are arranged playfully in a jade-rabbit pattern, recalling the beautiful fairies in the Classic of Mountains and Seas.
5. Crane dance skirt: a lovely white light-yarn skirt dotted with broken flowers, calling to mind an ancient-style fairy dancing with cranes.
6. Lotus flower fairy skirt: a white ancient-style short skirt of light outer material; its broken-flower decoration resembles the lotus in a fairy's hand, recalling the beautiful fairies in the Classic of Mountains and Seas.
7. Blue water orchid skirt: a white ancient-style skirt, lovely in style and made of light yarn; the broken flowers on it symbolize the blue-green waters surrounding the fairies in the Classic of Mountains and Seas.
In the embodiment of the application, unused resource names/descriptions can also serve as reserve assets. Planners can draw a great deal of inspiration from the automatically generated resource names/descriptions, which helps them extend their thinking laterally and creates considerable intangible value. In application development, for daily updates and version updates of online projects, tag texts for new application resources can be produced quickly, reducing the time spent on conception and data consultation; for new projects, tag texts for tens of thousands of new resources can be mass-produced by the platform, freeing the corresponding planning manpower.
Corresponding to the resource information generation method provided by the embodiment of the application, the embodiment of the application also provides a resource information generation device. As shown in fig. 7, the apparatus 700 includes:
a modality determining module 701, configured to determine a data modality of the target resource;
a feature determining module 702, configured to determine feature data of the target resource according to a data modality of the target resource;
a prompt acquisition module 703, configured to acquire requirement prompt information for the target resource;
and the tag text generation module 704 is configured to generate a tag text of the target resource according to the requirement prompt information for the target resource and the feature data of the target resource.
Optionally, the feature determining module 702 includes:
the image feature extraction unit is used for extracting visual features and expression features of the target resource under the condition that the data modality of the target resource is an image, wherein the visual features of the target resource are used for representing visible objects in the target resource and/or appearance descriptions of the visible objects, and the expression features of the target resource are used for representing content meanings of the target resource;
And the keyword extraction unit is used for extracting keywords from the visual features and the expression features of the target resource to obtain feature data of the target resource.
Optionally, the image feature extraction unit includes:
the visual characteristic generation subunit is used for generating a text model through a preset image and generating visual characteristics of the target resource;
and the expression characteristic matching subunit is used for determining the expression characteristics matched with the target resources through a preset image-text matching model.
Optionally, the feature determining module 702 includes:
and the non-image feature determining unit is used for receiving a keyword instruction aiming at the target resource and determining a keyword indicated by the keyword instruction as feature data of the target resource under the condition that the data modality of the target resource is non-image.
Optionally, a graphical user interface is displayed through the terminal, and the prompt obtaining module 703 includes:
the receiving unit is used for receiving the demand prompt information aiming at the target resource and input through the graphical user interface;
the demand prompt information aiming at the target resource comprises one or more of a subject prompt word of the target resource, a resource type prompt word of the target resource, a word use prompt word of a tag text, a format prompt word of the tag text and a quantity prompt word of the tag text.
Optionally, the tag text generating module 704 includes:
the data organization unit is used for organizing the demand prompt information aiming at the target resource and the characteristic data of the target resource according to a preset format to obtain a demand text of the target resource;
and the model control unit is used for taking the required text of the target resource as input data, inputting a preset generation type pre-training converter model, and enabling the generation type pre-training converter model to output the label text of the target resource.
Optionally, the tag text of the target resource includes a name and/or description of the target resource.
Optionally, the apparatus 700 further includes a description-based generation name module, the description-based generation name module including:
a core word extraction unit for extracting core words from the description of the target resource;
and the name generation unit is used for generating the name of the target resource according to the core word and a preset naming strategy.
Optionally, the apparatus 700 further includes a name-based generation description module, the name-based generation description module including:
the sentence searching unit is used for searching a target sentence matched with the name of the target resource from a preset literary composition library;
And the description generating unit is used for determining the target statement as the description of the target resource.
Optionally, the apparatus 700 further comprises a screening module, the screening module comprising:
a similarity determining unit, configured to determine a similarity between a first tag text and a second tag text, where the first tag text and the second tag text are two different tag texts generated for the same or different resources;
and the deleting unit is used for deleting one of the first label text and the second label text if the similarity between the first label text and the second label text is larger than the preset similarity.
Optionally, the target resource comprises a game resource.
Next, referring to fig. 8, fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 800 may be configured with the resource information generating apparatus described in the embodiments of the present application, so as to implement the functions in the embodiments of the present application. Specifically, the electronic device 800 includes: a receiver 801, a transmitter 802, a processor 803, and a memory 804 (where the number of processors 803 in the electronic device 800 may be one or more, and one processor is taken as an example in fig. 8), and the processor 803 may include an application processor 8031 and a communication processor 8032. In some embodiments of the present application, the receiver 801, transmitter 802, processor 803, and memory 804 may be connected by a bus or other means.
Memory 804 may include read-only memory and random access memory, and provides instructions and data to the processor 803. A portion of the memory 804 may also include non-volatile random access memory (NVRAM). The memory 804 stores operating instructions executable by the processor, executable modules or data structures, or a subset or an extended set thereof, where the operating instructions may include various operating instructions for performing various operations.
The processor 803 controls the operation of the electronic device. In a specific application, the individual components of the electronic device are coupled together by a bus system, which may include, in addition to a data bus, a power bus, a control bus, a status signal bus, etc. For clarity of illustration, however, the various buses are referred to in the figures as the bus system.
The methods disclosed in the embodiments of the present application may be applied to the processor 803 or implemented by the processor 803. The processor 803 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuitry of hardware or instructions in software form in the processor 803. The processor 803 may be a general purpose processor, a Digital Signal Processor (DSP), a microprocessor, or a microcontroller, and may further include an Application Specific Integrated Circuit (ASIC), a field-programmable gate array (FPGA), or other programmable logic device, discrete gate or transistor logic device, or discrete hardware components. The processor 803 may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly in hardware, in a decoded processor, or in a combination of hardware and software modules in a decoded processor. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, or electrically erasable programmable memory, registers, etc. as well known in the art. The storage medium is located in the memory 804, and the processor 803 reads information in the memory 804, and in combination with the hardware, performs the steps of the above method.
The receiver 801 may be used to receive input numeric or character information and to generate signal inputs related to performing relevant settings and function control of the device. The transmitter 802 may be used to output numeric or character information through a first interface; the transmitter 802 may also be configured to send instructions to the disk group through the first interface to modify data in the disk group; the transmitter 802 may also include a display device such as a display screen.
In the embodiment of the present application, the application processor 8031 in the processor 803 is configured to perform the resource information generating method in the embodiment of the present application. It should be noted that, the specific manner in which the application processor 8031 performs each step is based on the same concept as that of each method embodiment in the present application, so that the technical effects brought by the specific manner are the same as those brought by each method embodiment in the present application, and the specific details can be referred to the descriptions in the foregoing method embodiments shown in the present application, and are not repeated herein.
The embodiment of the application also provides a chip for running the instruction, which is used for executing the technical scheme of the resource information generation method in the embodiment.
The embodiment of the application also provides a computer readable storage medium, in which computer instructions are stored, and when the computer instructions run on a processor, the processor is caused to execute the technical scheme of the resource information generating method in the above embodiment.
The embodiment of the application also provides a computer program product, which comprises a computer program that, when executed by a processor, implements the technical scheme of the resource information generation method in the above embodiment.
The computer readable storage medium described above may be implemented by any type of volatile or nonvolatile memory device or combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk. A readable storage medium can be any available medium that can be accessed by a general purpose or special purpose computer.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
While the preferred embodiment has been described, it is not intended to limit the invention thereto, and any person skilled in the art may make variations and modifications without departing from the spirit and scope of the present invention, so that the scope of the present invention shall be defined by the claims of the present application.

Claims (14)

1. A method for generating resource information, the method comprising:
determining a data mode of the target resource;
determining characteristic data of the target resource according to the data mode of the target resource;
acquiring demand prompt information aiming at the target resource;
and generating a tag text of the target resource according to the demand prompt information aiming at the target resource and the characteristic data of the target resource.
2. The method of claim 1, wherein determining the characteristic data of the target resource according to the data modality of the target resource comprises:
extracting visual features and expression features of the target resource under the condition that the data modality of the target resource is an image, wherein the visual features of the target resource are used for representing visible objects in the target resource and/or appearance descriptions of the visible objects, and the expression features of the target resource are used for representing content meanings of the target resource;
and extracting keywords from the visual features and the expression features of the target resource to obtain feature data of the target resource.
3. The method of claim 2, wherein the extracting visual and expressive features of the target resource comprises:
Generating a text model through a preset image, and generating visual characteristics of the target resource;
and determining the expression characteristics matched with the target resources through a preset image-text matching model.
4. The method of claim 1, wherein determining the characteristic data of the target resource according to the data modality of the target resource comprises:
and under the condition that the data modality of the target resource is non-image, receiving a keyword instruction aiming at the target resource, and determining the keyword indicated by the keyword instruction as the characteristic data of the target resource.
5. The method according to claim 1, wherein a graphical user interface is displayed through a terminal, and the obtaining demand prompt information for the target resource comprises:
receiving demand prompt information for the target resource, which is input through the graphical user interface;
the demand prompt information aiming at the target resource comprises one or more of a subject prompt word of the target resource, a resource type prompt word of the target resource, a word use prompt word of a tag text, a format prompt word of the tag text and a quantity prompt word of the tag text.
6. The method according to claim 1, wherein generating the tag text of the target resource according to the demand hint information for the target resource and the feature data of the target resource comprises:
organizing the demand prompt information aiming at the target resource and the characteristic data of the target resource according to a preset format to obtain a demand text of the target resource;
and taking the required text of the target resource as input data, and inputting a preset generation type pre-training converter model so that the generation type pre-training converter model outputs the label text of the target resource.
7. The method of claim 1, wherein the tag text of the target resource includes a name and/or description of the target resource.
8. The method of claim 1, wherein in the event that the tag text of the target resource includes a description of the target resource, the method further comprises:
extracting core words from the description of the target resource;
and generating the name of the target resource according to the core word and a preset naming strategy.
9. The method of claim 1, wherein in the event that the tag text of the target resource includes a name of the target resource, the method further comprises:
Searching a target sentence matched with the name of the target resource from a preset literary composition library;
and determining the target statement as the description of the target resource.
10. The method according to claim 1, wherein the method further comprises:
determining a similarity between a first tag text and a second tag text, wherein the first tag text and the second tag text are two different tag texts generated for the same or different resources;
and deleting one of the first label text and the second label text if the similarity between the first label text and the second label text is greater than the preset similarity.
11. The method of claim 1, wherein the target resource comprises a game resource.
12. A resource information generating apparatus, characterized in that the apparatus comprises:
the mode determining module is used for determining the data mode of the target resource;
the characteristic determining module is used for determining characteristic data of the target resource according to the data mode of the target resource;
the prompt acquisition module is used for acquiring the demand prompt information aiming at the target resource;
And the tag text generation module is used for generating tag text of the target resource according to the demand prompt information aiming at the target resource and the characteristic data of the target resource.
13. An electronic device, comprising: a processor, a memory, and computer program instructions stored on the memory and executable on the processor;
the processor, when executing the computer program instructions, implements the resource information generating method according to any of the preceding claims 1 to 11.
14. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein computer program instructions for implementing the resource information generating method according to any of the preceding claims 1 to 11 when executed by a processor.
CN202310757505.1A 2023-06-25 2023-06-25 Resource information generation method and device, electronic equipment and readable storage medium Pending CN117274991A (en)

Publication: CN117274991A, published 2023-12-22.


Legal events: PB01 Publication; SE01 Entry into force of request for substantive examination.