CN111429543B - Material generation method and device, electronic equipment and medium

Material generation method and device, electronic equipment and medium

Info

Publication number
CN111429543B
Authority
CN
China
Prior art keywords
clothing
target
data object
environment
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010129195.5A
Other languages
Chinese (zh)
Other versions
CN111429543A (en)
Inventor
姚润昊
徐杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Wendie Network Technology Co.,Ltd.
Original Assignee
Suzhou Diezhi Network Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Diezhi Network Technology Co., Ltd.
Priority to CN202010129195.5A priority Critical patent/CN111429543B/en
Publication of CN111429543A publication Critical patent/CN111429543A/en
Application granted granted Critical
Publication of CN111429543B publication Critical patent/CN111429543B/en
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/001: Texturing; Colouring; Generation of texture or colour
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Hardware Design (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a material generation method and apparatus, an electronic device, and a medium. The method comprises the following steps: acquiring initial data indicating a real scene in response to a material generation instruction; determining a clothing-class data object based on the initial data; determining clothing attribute information of the clothing data object; acquiring clothing-associated information associated with the clothing attribute information; and fusing the clothing data object and the clothing-associated information to obtain a target clothing material. Because the target clothing material that exists in the virtual world is derived from initial data indicating a real scene, the method establishes a channel between the real scene and the virtual world. The information carried by the target clothing material reflects the environment the user is currently in, which improves the timeliness of the information carried by the material.

Description

Material generation method and device, electronic equipment and medium
Technical Field
The present invention relates to the field of internet communication technologies, and in particular, to a method and an apparatus for generating a material, an electronic device, and a medium.
Background
An electronic game (hereinafter referred to simply as a game) is an interactive game that runs on an electronic device platform; games are mainly classified into network games and stand-alone games. A network game generally uses the internet as the transmission medium, a game operator's server and the user's computer as the processing terminals, and game client software as the information interaction window, with the aim of providing entertainment, leisure, communication, virtual achievement, and related content.
With the rapid development of society, science, and technology, users' expectations for in-game materials have risen steadily. The related art can address, to a degree, aspects such as the color richness of a material and its display pixel accuracy. However, the materials obtained with such solutions are usually preset and controlled by the game operator; accordingly, users lack involvement in the generation of the materials.
Disclosure of Invention
To solve the problems that arise when the prior art is applied to material generation, namely that the information carried by the obtained material lacks real-time relevance and cannot meet users' personalized requirements, the present invention provides a material generation method and apparatus, an electronic device, and a medium:
in one aspect, the present invention provides a method for generating materials, including:
acquiring initial data indicating a real scene in response to a material generation instruction;
determining a clothing class data object based on the initial data;
determining clothing attribute information of the clothing data object;
acquiring clothing related information related to the clothing attribute information;
and fusing the clothing data object and the clothing associated information to obtain a target clothing material.
Another aspect provides a material generation apparatus, including:
an initial data acquisition module, configured to acquire initial data indicating a real scene in response to a material generation instruction;
a clothing-class data object determination module, configured to determine a clothing-class data object based on the initial data;
a clothing attribute information determination module, configured to determine clothing attribute information of the clothing data object;
a clothing-associated information acquisition module, configured to acquire clothing-associated information associated with the clothing attribute information; and
a clothing fusion module, configured to fuse the clothing data object and the clothing-associated information to obtain a target clothing material.
Another aspect provides an electronic device, which includes a processor and a memory, where at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded by the processor and executed to implement the material generation method as described above.
Another aspect provides a computer-readable storage medium having at least one instruction or at least one program stored therein, the at least one instruction or the at least one program being loaded and executed by a processor to implement the material generation method as described above.
The invention provides a material generation method, a material generation device, electronic equipment and a medium, which have the following technical effects:
the method determines the clothing data object based on the initial data, then determines the clothing attribute information, and then acquires the clothing associated information, and further fuses the clothing data object and the clothing associated information to obtain the target clothing material. The target clothes materials existing in the virtual world are obtained from the initial data indicating the real scene, and a channel between the real scene and the virtual world is established. The obtained information carried by the target clothing material can reflect the current environment of the user, so that the real-time performance of the information carried by the material can be improved; meanwhile, the participation degree of the user and the sense of reality of the game can be increased, and the personalized requirements of the user can be better met.
Drawings
To illustrate the embodiments of the present invention and the technical solutions of the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description show only some embodiments of the present invention; other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a schematic diagram of an application environment provided by an embodiment of the invention;
fig. 2 is a schematic flow chart of a material generation method according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of fusing the clothing data object and the clothing related information to obtain a target clothing material according to the embodiment of the present invention;
FIG. 4 is a schematic flowchart of a reloading process for a target virtual object according to an embodiment of the present invention;
fig. 5 is a schematic flow chart of a material generation method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a UI displaying a target virtual object and target environment materials according to an embodiment of the invention;
fig. 7 is a block diagram showing the components of a material generation apparatus according to an embodiment of the present invention;
fig. 8 is a block diagram showing the components of a material generation apparatus according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
It should be noted that the terms "comprises" and "comprising," and any variations thereof, in the description and claims of the present invention and the above drawings, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such a process, method, article, or server.
Referring to fig. 1, fig. 1 is a schematic diagram of an application environment according to an embodiment of the present invention, which may include a client 01 and a server 02, where the client and the server are connected through a network. The user sends initial data indicating a real scene to the server through the client. The server determines clothing attribute information based on the received initial data, acquires clothing associated information, and then fuses the clothing data object and the clothing associated information to obtain a target clothing material. It should be noted that fig. 1 is only an example.
Specifically, the client 01 may include physical devices such as a smart phone, a desktop computer, a tablet computer, a notebook computer, an Augmented Reality (AR)/Virtual Reality (VR) device, a digital assistant, or a smart wearable device, and may also include software running on such devices, such as a computer program. The operating system running on the client 01 may include, but is not limited to, Android, iOS (a mobile operating system developed by Apple Inc.), Linux, Microsoft Windows, and the like.
Specifically, the server 02 may include a server operating independently, or a distributed server, or a server cluster composed of a plurality of servers. The server 02 may comprise a network communication unit, a processor and a memory, etc. The server 02 may provide background services for the clients.
In the embodiment of the present invention, the material generation scheme may utilize augmented reality (AR) technology. AR is a technology that seamlessly integrates real-world information with virtual-world information: entity information that would otherwise be difficult to experience within a certain time and space range of the real world (visual information, sound, taste, touch, and the like) is simulated by computers and related technologies and then superimposed, so that virtual information is applied to the real world and perceived by human senses, achieving a sensory experience beyond reality. The real environment and virtual objects are superimposed onto the same picture or space in real time and exist simultaneously.
The material generation scheme may also utilize artificial intelligence (AI) technology. Artificial intelligence is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the capabilities of perception, reasoning, and decision-making. For example, a plurality of geographic information data may be input into a data processing model for classification, and the geographic information data corresponding to each class is then stored, where the data processing model is obtained through machine-learning training on a plurality of sample data.
A specific embodiment of a material generation method according to the present invention is described below. Fig. 2 is a schematic flow chart of a material generation method according to an embodiment of the present invention. This specification provides the operation steps of the method as described in the embodiment or the flow chart, but more or fewer steps may be included based on conventional or non-creative labor. The order of steps recited in the embodiments is merely one of many possible execution orders and does not represent the only order of execution. In practice, the system or server product may execute the steps sequentially or in parallel (for example, in a parallel-processor or multi-threaded environment) according to the embodiments or the methods shown in the figures. Specifically, as shown in fig. 2, the method may include:
s201: acquiring initial data indicating a real scene in response to a material generation instruction;
in the embodiment of the present invention, the initial data indicating the real scene may be video data, image data, or the like. Initial data indicating a real scene may be collected by a client corresponding to the user side. For example, the server obtains an original video stream indicating a real scene from a video input module (such as a camera) carried by an entity device corresponding to the user side; and the server decodes the original video stream to obtain a video frame and takes the video frame as initial data. Or, a video input module (such as a camera) carried by an entity device corresponding to the user side acquires an original video stream indicating a real scene; and the server acquires the decoded video frame from the client corresponding to the user side and takes the decoded video frame as initial data.
Of course, the initial data indicating the real scene may also be collected by an AR camera carried by the entity device corresponding to the user side, so that the client corresponding to the user side executes the material generation method provided by the present invention by using the AR camera.
In a particular embodiment, the material generation instruction may be triggered by the user via a target button (such as a "take picture" button) on a user interaction interface provided by the client. Of course, the trigger form of the material generation instruction may also include a sound trigger (a microphone collects sound data, and the instruction is triggered by extracting speech with a specific meaning) or an image trigger (a camera collects image data, and the instruction is triggered by extracting an expression or gesture with a specific meaning).
In practical application, a user can directly acquire a 'whole body' image in a real scene by using a camera of the entity device, and the 'whole body' image is used as initial data. The user may also collect initial data in different regions using a camera of the physical device, such as collecting "shoes" image in a real scene, and "jacket" image in a real scene, respectively.
S202: determining a clothing class data object based on the initial data;
in embodiments of the present invention, determining a clothing class data object based on initial data may be viewed as extracting the clothing class data object from the initial data. For example, when the initial data corresponds to an initial image containing a "fitting model", a "one-piece dress" and a "fitting room", the "one-piece dress" may be segmented from the initial image to obtain the dress-like data object. Therefore, the influence of the interference information carried by the initial data on the accuracy of the subsequent determination of the clothing attribute information can be reduced.
Specifically, a clothing data object recognition model with high generalization capability can be obtained by utilizing neural network model training, and then the clothing data object recognition model is utilized to recognize the clothing data object in the initial data, so that the reliability and effectiveness of recognition can be ensured.
The training process of the clothing data object recognition model comprises the following steps. First, a plurality of sample images are obtained, where each sample image carries at least one label, the label indicates a region to be segmented in the corresponding sample image, and the region to be segmented corresponds to a clothing object. Next, the sample images are input into a neural network model for image segmentation training; during training, the model parameters of the neural network model are adjusted until the image output by the neural network model matches the image corresponding to the labeled to-be-segmented region of the input sample image. Finally, the neural network model with the adjusted model parameters is taken as the clothing data object recognition model.
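The training loop described above can be condensed into the following sketch, assuming PyTorch; the dataset and model are placeholders for the labeled sample images and the neural network model named in the text, so this is an assumption-laden sketch rather than the patented implementation.

```python
# Assumption-laden sketch of the segmentation training described above,
# using PyTorch. The dataset yields (image, mask) tensors where the mask
# marks the labeled region to be segmented (the clothing object).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train_recognition_model(model: nn.Module, dataset, epochs: int = 10):
    loader = DataLoader(dataset, batch_size=8, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.BCEWithLogitsLoss()  # per-pixel mask loss
    model.train()
    for _ in range(epochs):
        for image, mask in loader:
            optimizer.zero_grad()
            loss = criterion(model(image), mask)
            loss.backward()
            optimizer.step()  # adjust model parameters toward matching masks
    return model  # the adjusted model serves as the recognition model
```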
In practical applications, the association relationship between human body parts and clothing can be used to extract the clothing data object from the initial data, for example the relationship between "ears" as a body part and "earrings" as clothing, or between "lower limbs" as a body part and "pants," "skirts," "socks," or "shoes" as clothing. Accordingly, the body part region can be located based on the initial image (initial data), and the clothing data object can then be determined by combining the association relationship with clothing features. Clothing features are distinct from clothing attribute information: clothing features correspond to broader classification characteristics (for example, jeans belong to the pants classification), while clothing attribute information corresponds to finer-grained specific attributes (for example, the jeans have the attribute information sky blue, leggings, and embroidery patterns).
Further, for locating the body part region: whether the initial image contains a human body can be determined based on a neural network model for human body detection. If a human body is present, the body part region can be cropped from the initial image, and the position and size of the body part region as well as a human body attribute analysis result can be returned, where the human body attribute analysis result may include sex, height, weight, and the like.
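A hedged sketch of this body-region positioning step follows; the detector interface (a predict() method returning boxes with coordinates and attribute results) is an assumption for illustration, and the initial image is assumed to be a numpy-style array.

```python
# Hedged sketch of body-part positioning. The detector's predict() interface
# and its box/attribute fields are assumptions; the image is a numpy array.
def locate_body_regions(detector, initial_image):
    result = detector.predict(initial_image)  # neural-network human detection
    regions = []
    for box in getattr(result, "boxes", []):  # empty if no human is present
        x0, y0, x1, y1 = box.coords
        regions.append({
            "crop": initial_image[y0:y1, x0:x1],  # cut out the region
            "position": (x0, y0),
            "size": (x1 - x0, y1 - y0),
            "attributes": box.attributes,         # sex, height, weight, ...
        })
    return regions
```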
Of course, the association relationship between human body parts and clothing can also be introduced when training the clothing data object recognition model with the neural network model. In this way, the accuracy and practicability of determining the clothing data object can be improved by attending to the body parts that are strongly correlated with clothing.
It should be noted that the clothing data object may point to one clothing object (e.g., corresponding to "a jacket"), and the clothing data object may also point to at least two clothing objects (e.g., corresponding to "a jacket" + "a scarf" + "a pair of trousers").
S203: determining clothing attribute information of the clothing data object;
in the embodiment of the present invention, in combination with the aforementioned descriptions of "clothing features" and "clothing attribute information", the "clothing features" involved in the "determination of clothing class data object based on the initial data" point to a "rough classification", and the "clothing attribute information" involved in the "determination of clothing attribute information of the clothing class data object point to a" fine classification ". Correspondingly, the clothing characteristics and the clothing attribute information are relative, and the identification efficiency and the identification accuracy can be comprehensively adjusted.
The apparel attribute information includes at least one of: dress color information, dress style information, dress texture information, and dress material information. The apparel color information may indicate hue, lightness, saturation, and the like. The garment style information may indicate a size, a collar type (round collar, stand collar, lapel-hat collar, etc.), a sleeve type (close-up sleeves, screw sleeves, split sleeves, etc.), a waist type (middle waist, high waist, low waist, etc.), a skirt type (one-step skirt, scalloped skirt, full-swing skirt, princess skirt, pleated skirt, etc.). The apparel texture information may indicate stripes, lattices, solid colors, etc. The apparel material information may indicate cotton-type fabric, hemp-type fabric, wool-type fabric, silk-type fabric, purified fiber fabric, leather, metal, and the like.
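One possible in-memory representation of this attribute information is sketched below; the class and field names are illustrative assumptions, not terms defined by the patent.

```python
# Illustrative structure for clothing attribute information; the class and
# field names are assumptions for this sketch, not terms from the patent.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ApparelAttributes:
    color: str = ""                                 # hue/lightness/saturation
    style: List[str] = field(default_factory=list)  # collar/sleeve/waist/skirt types
    texture: str = ""                               # stripes, lattices, solid colors
    material: str = ""                              # cotton, hemp, wool, silk, ...

white_dress = ApparelAttributes(
    color="white", style=["high waist", "large skirt"], material="hemp fabric"
)
```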
It should be noted that, when the clothing data object points to at least two clothing objects (for example, corresponding to "a jacket" + "a scarf" + "a pair of trousers"), the clothing attribute information of each clothing object, that is, the clothing attribute information of "a jacket", the clothing attribute information of "a scarf", and the clothing attribute information of "a pair of trousers" may be determined respectively.
S204: acquiring clothing related information related to the clothing attribute information;
in the embodiment of the present invention, the clothing related information may be acquired based on the classification setting of the clothing attribute information. For example, the clothing attribute information points to "white", "high waist", "large skirt" and "hemp fabric", so that a "white" target clothing color template, a "large skirt" target clothing style template and a "hemp fabric" target clothing material template are determined from the clothing template library, and further the "white" template clothing color template, the "large skirt" target clothing style template and the "hemp fabric" target clothing material template are used as clothing related information. Each template in the clothing template library is preset, so that the reduction degree of clothing attribute information and even clothing data objects cannot be ensured. Meanwhile, the types of decoration elements attached to the main body clothes (such as the jacket, the trousers, the skirt and the like) are numerous, and the decoration elements similar to the high waist are not suitable for constructing an independent template diagram, so that the target clothes target and the clothes data object need to be fused in subsequent steps, and the effective reduction of the clothes data object is realized.
It should be noted that when the clothing data object points to one clothing object (e.g., "a jacket"), the corresponding clothing-associated information is acquired based on the classification of the clothing attribute information of "a jacket." When the clothing data object points to at least two clothing objects (e.g., "a jacket" + "a scarf" + "a pair of trousers"), the corresponding clothing-associated information is acquired based on the classifications of the clothing attribute information of "a jacket," of "a scarf," and of "a pair of trousers," respectively.
S205: and fusing the clothing data object and the clothing associated information to obtain a target clothing material.
In the embodiment of the present invention, if the clothing data object points to "a white + high-waist + hemp + irregular-hem skirt with drawstring," the "white" target clothing color template, the "large skirt" target clothing style template, and the "hemp fabric" target clothing material template may be used as the clothing-associated information; accordingly, "high waist" (clothing attribute information), "irregular hem," and "with drawstring" in the clothing data object have no corresponding target clothing templates. That is, the clothing information contained in the clothing data object is more comprehensive, so the target clothing material obtained by fusing the clothing data object and the clothing-associated information can reflect the clothing in the real scene more accurately.
In a specific embodiment, as shown in fig. 3, the fusing the clothing data object and the clothing related information to obtain a target clothing material includes:
s301: generating candidate clothing materials according to the target clothing template and the clothing attribute information;
s302: when the similarity between the candidate clothing material and the clothing data object is larger than or equal to a similarity threshold value, taking the candidate clothing material as the target clothing material;
s303: and when the similarity between the candidate clothing material and the clothing data object is smaller than a similarity threshold value, determining clothing data matched with the clothing data object from a clothing database, and obtaining the target clothing material based on the matched clothing data.
Considering both the speed and the fidelity of material generation, the process may proceed as follows. First, candidate clothing materials are generated according to the target clothing templates and the clothing attribute information. For example, a decorative element such as "high waist" may have corresponding clothing attribute information but no corresponding target clothing template; therefore, on the basis of the "white" target clothing color template, the "large skirt" target clothing style template, and the "hemp fabric" target clothing material template, "high waist" is introduced as clothing attribute information to obtain the candidate clothing material. Compared with processing the clothing data object containing all clothing information, constructing a candidate clothing material from the four main pieces of clothing information (white, high waist, large skirt, hemp fabric) consumes less processing. Next, similarity matching is performed between the candidate clothing material and the clothing data object; if the similarity (for example, 87%) is greater than or equal to the similarity threshold (for example, 80%), the candidate clothing material is taken as the target clothing material. If instead the similarity (for example, 61%) is smaller than the similarity threshold (for example, 80%), clothing data matching the clothing data object is determined from the clothing database, and the target clothing material is obtained based on the matching clothing data. Of course, the similarity threshold here can be set flexibly.
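Steps S301 to S303 can be condensed into the following control-flow sketch; generate_candidate, similarity, and match_from_database are placeholder callables standing in for the operations described in the text, not APIs defined by the patent.

```python
# Control-flow sketch of steps S301-S303. The three callables are
# placeholders for operations described in the text, not patent-defined APIs.
SIMILARITY_THRESHOLD = 0.8  # flexibly settable, per the text

def fuse_to_target_material(templates, attributes, clothing_object,
                            generate_candidate, similarity,
                            match_from_database):
    candidate = generate_candidate(templates, attributes)          # S301
    if similarity(candidate, clothing_object) >= SIMILARITY_THRESHOLD:
        return candidate                                           # S302
    return match_from_database(clothing_object)                    # S303
```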
Where "apparel database" is different from the "apparel template library" described above, the "apparel database" may indicate apparel within the game that is already owned by the account corresponding to the user (such as by "purchasing") or may indicate apparel within the game that is owned by the "apparel store" (including that owned by an account not corresponding to the user).
In a specific embodiment, after the target apparel material is obtained by fusing the apparel data object and the apparel related information, as shown in fig. 4, the method further includes:
s401: responding to the processing instruction, and acquiring a target virtual object;
the processing instruction may be triggered by a user via a target button (such as a "change-over" button) on a user interaction interface provided by the client. Of course, the triggering form of the processing instruction may also include a sound trigger (a microphone collects sound data and triggers by extracting voice with a specific meaning), an image trigger (a camera collects image data and triggers by extracting an expression and a gesture with a specific meaning).
The target virtual object may be constructed based on a two-dimensional model or a three-dimensional model. The client may obtain the data source of the target virtual object according to the received processing instruction and then load and render the data source in a page provided by the client so as to display the target virtual object. Alternatively, the server obtains the data source of the target virtual object according to the received processing instruction and sends it to the client, and the client loads and renders the data source in the page it provides so as to display the target virtual object.
S402: and performing augmented reality processing on the target virtual object and the target clothing material.
First, a first layer and a second layer are created.
The target virtual object is then drawn on the first layer, based on the data source of the target virtual object. When the data source points to a three-dimensional model, the model may include a model base and a skeleton set bound together by skinning, plus a map loaded onto the model base; the target clothing material subsequently drawn on the second layer can therefore be viewed as such a "map." Of course, the target virtual object drawn on the first layer corresponds to a base underlay map (e.g., the game's default basic clothing material, the clothing material corresponding to "try-on go-back," or the clothing material corresponding to "my highest score").
The target clothing material is then drawn on the second layer.
Finally, the first layer and the second layer are superimposed. At this point, the target virtual object is drawn on the first layer and the target clothing material is drawn on the second layer. The model base and skeleton set corresponding to the target virtual object are relatively fixed; that is, if the target virtual object points to a virtual character, the character's figure (including height, weight, and body proportions) is relatively fixed, and correspondingly the garment size for that virtual character is relatively fixed. Therefore, the obtained target clothing material can be scaled to fit the virtual character, or a target clothing template suited to the virtual character can be created directly. When the two layers are superimposed, the "reloading area" corresponding to the model base can be determined based on the position where the target clothing material, as a "map," needs to be loaded onto the model base, and transparency adjustment can be performed on that area of the first layer.
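A minimal compositing sketch for this superposition, assuming Pillow (PIL), is given below; the simple alpha compositing stands in for the transparency adjustment of the "reloading area," which the text describes at the level of the model base.

```python
# Minimal superposition sketch assuming Pillow (PIL). Alpha compositing
# approximates the transparency adjustment of the "reloading area": where
# the clothing material is opaque, it covers the base clothing underneath.
from PIL import Image

def superimpose(virtual_object_img, clothing_material_img, position):
    first_layer = virtual_object_img.convert("RGBA")  # target virtual object
    second_layer = Image.new("RGBA", first_layer.size, (0, 0, 0, 0))
    second_layer.paste(clothing_material_img.convert("RGBA"), position)
    return Image.alpha_composite(first_layer, second_layer)
```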
Clothing in the real scene is thus presented in the virtual world in the form of a target clothing material, and using the target clothing material to change the clothes of a target virtual object brings a sense of reality to the game. Combined with the evaluation system of an in-game "outfit change" task, the user's current outfit can be commented on, improving the user's sense of involvement.
After the acquiring initial data indicating a real scene, as shown in fig. 5, the method further includes:
s501: determining an environment class data object based on the initial data;
s502: acquiring environment associated information associated with the environment type data object;
s503: and fusing the environment data object and the environment associated information to obtain a target environment material.
In the embodiment of the present invention, environmental features may correspond to people, animals, plants, roads, buildings, logos, and the like in the real scene. Determining an environment-class data object based on the initial data involves, on the one hand, extracting the environment-class data object from the initial data based on environmental features and, on the other hand, determining the type of the environment-class data object, such as an indoor or outdoor environment. The environment-associated information may include geographic location information and weather information, and may also include multimedia information (e.g., audio or video). The target environment material, obtained by fusing the environment-class data object and the environment-associated information, can be used to update the virtual scene in the game, thereby bridging the boundary between the real scene and the virtual world and improving the user's sense of immersion.
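A hedged sketch of this environment fusion is shown below; the EnvironmentMaterial structure and the dict-like environment object are illustrative assumptions about data shapes the text leaves open.

```python
# Hedged sketch of environment fusion; EnvironmentMaterial and the dict-like
# environment object are assumptions about data shapes the text leaves open.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class EnvironmentMaterial:
    scene_type: str        # e.g. "indoor" or "outdoor"
    location: str          # geographic location information
    weather: str           # weather information
    media: List[str]       # multimedia information (audio/video references)

def fuse_environment(env_object: dict, location: str, weather: str,
                     media: Optional[List[str]] = None) -> EnvironmentMaterial:
    return EnvironmentMaterial(
        scene_type=env_object.get("type", "outdoor"),
        location=location,
        weather=weather,
        media=media or [],
    )
```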
Specifically, an environment-class data object recognition model with high generalization capability can be obtained by utilizing neural network model training, and then the environment-class data object recognition model is utilized to recognize the environment-class data object in the initial data, so that the reliability and the effectiveness of recognition can be ensured.
In a specific embodiment, after the target environment material is obtained by fusing the environment-class data object and the environment-associated information, the method further includes: acquiring a target virtual object in response to an environment transformation instruction; and performing augmented reality processing on the target virtual object and the target environment material, as shown in fig. 6. For the "target virtual object," refer to the description of steps S401 to S402 above, which is not repeated here. Performing augmented reality processing on the target virtual object and the target environment material presents the environment of the real scene in the virtual world in the form of the target environment material, and using the target environment material as the scene for the target virtual object brings a sense of reality to the game.
In practical applications, users can interact with each other in the game: users at the same or similar geographic locations can be grouped into a team, and users within the team can share target environment materials to construct a shared scene. In the shared scene, a user may trigger the display of the corresponding target virtual object. After other users in the team (such as user B) trigger the display of their corresponding target virtual objects, user A can also see those target virtual objects in the shared scene. Further, in addition to changing the clothes of target virtual object A (corresponding to user A), user A may also change the clothes of target virtual object B (corresponding to user B).
According to the technical solution provided by this embodiment of the specification, a clothing-class data object is determined based on the initial data, clothing attribute information is then determined, clothing-associated information is then acquired, and the clothing data object and the clothing-associated information are fused to obtain a target clothing material. Because the target clothing material that exists in the virtual world is derived from initial data indicating a real scene, a channel between the real scene and the virtual world is established. The information carried by the target clothing material reflects the environment the user is currently in, so the timeliness of the information carried by the material is improved; at the same time, user participation and the realism of the game are increased, and users' personalized requirements are better met.
An embodiment of the present invention further provides a material generation apparatus, as shown in fig. 7, the apparatus includes:
an initial data acquisition module 710, configured to acquire initial data indicating a real scene in response to a material generation instruction;
a clothing-class data object determination module 720, configured to determine a clothing-class data object based on the initial data;
a clothing attribute information determination module 730, configured to determine clothing attribute information of the clothing data object;
a clothing-associated information acquisition module 740, configured to acquire clothing-associated information associated with the clothing attribute information; and
a clothing fusion module 750, configured to fuse the clothing data object and the clothing-associated information to obtain a target clothing material.
In a specific embodiment, as shown in fig. 8, the apparatus further comprises:
the environment class data object determination module 760: for determining an environment class data object based on the initial data;
environment-associated information acquisition module 770: the environment association information is used for acquiring the environment association information associated with the environment class data object;
the environment fusion module 780: and the system is used for fusing the environment data object and the environment associated information to obtain a target environment material.
It should be noted that the device and method embodiments in the device embodiment are based on the same inventive concept.
An embodiment of the present invention provides an electronic device, where the electronic device includes a processor and a memory, where at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the material generation method provided in the foregoing method embodiment.
Further, fig. 9 shows a hardware structure diagram of an electronic device for implementing the material generation method provided by the embodiment of the present invention; the electronic device may form part of, or include, the material generation apparatus provided by the embodiment of the present invention. As shown in fig. 9, the electronic device 90 may include one or more processors 902 (shown here as 902a, 902b, ..., 902n; the processors 902 may include, but are not limited to, processing devices such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 904 for storing data, and a transmission device 906 for communication functions. In addition, the electronic device may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power supply, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 9 is only an illustration and does not limit the structure of the electronic device. For example, the electronic device 90 may include more or fewer components than shown in fig. 9, or have a different configuration than shown in fig. 9.
It should be noted that the one or more processors 902 and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry." The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuitry may be a single stand-alone processing module, or incorporated in whole or in part into any of the other elements in the electronic device 90 (or mobile device). As referred to in the embodiments of the application, the data processing circuitry acts as a kind of processor control (e.g., selection of a variable resistance termination path connected to the interface).
The memory 904 can be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the methods described in the embodiments of the present invention; the processor 902 executes various functional applications and data processing by running the software programs and modules stored in the memory 904, thereby implementing the material generation method described above. The memory 904 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 904 may further include memory located remotely from the processor 902, which may be connected to the electronic device 90 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 906 is used to receive or send data via a network. Specific examples of the network described above may include a wireless network provided by the communication provider of the electronic device 90. In one example, the transmission device 906 includes a network interface controller (NIC) that can be connected to other network devices through a base station so as to communicate with the internet. In one embodiment, the transmission device 906 can be a radio frequency (RF) module used to communicate with the internet wirelessly.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the electronic device 90 (or mobile device).
Embodiments of the present invention also provide a storage medium that can be disposed in an electronic device to store at least one instruction or at least one program for implementing the material generation method of the method embodiments, the at least one instruction or the at least one program being loaded and executed by the processor to implement the material generation method provided by the method embodiments.
Alternatively, in this embodiment, the storage medium may be located in at least one network server of a plurality of network servers of a computer network. Optionally, in this embodiment, the storage medium may include, but is not limited to: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the device and electronic apparatus embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference may be made to some descriptions of the method embodiments for relevant points.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk, or an optical disc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (9)

1. A method for generating material, the method comprising:
acquiring initial data indicating a real scene in response to a material generation instruction;
determining a clothing class data object based on the initial data;
determining clothing attribute information of the clothing data object;
determining a target clothing template associated with the clothing attribute information from a clothing template library according to the clothing attribute information;
generating candidate clothing materials according to the associated target clothing templates and the clothing attribute information;
when the similarity between the candidate clothing material and the clothing data object is larger than or equal to a similarity threshold value, taking the candidate clothing material as a target clothing material;
and when the similarity between the candidate clothing material and the clothing data object is smaller than a similarity threshold value, determining clothing data matched with the clothing data object from a clothing database, and generating a target clothing material based on the matched clothing data.
2. The method of claim 1, wherein after the obtaining initial data indicative of a real scene, the method further comprises:
determining an environment class data object based on the initial data;
acquiring environment associated information associated with the environment type data object;
and fusing the environment data object and the environment associated information to obtain a target environment material.
3. The method of any of claims 1 or 2, wherein after obtaining the target apparel material, the method further comprises:
responding to the processing instruction, and acquiring a target virtual object;
and performing augmented reality processing on the target virtual object and the target clothing material.
4. The method of claim 3, wherein the augmented reality processing of the target virtual object and the target apparel material comprises:
creating a first image layer and a second image layer;
drawing the target virtual object on the first image layer;
drawing the target clothing material on the second image layer;
and superposing the first image layer and the second image layer.
5. The method of any of claims 1 or 2, wherein the apparel attribute information comprises at least one of: dress color information, dress style information, dress texture information, and dress material information.
6. A material generation apparatus, characterized in that the apparatus comprises:
an initial data acquisition module, configured to acquire initial data indicating a real scene in response to a material generation instruction;
a clothing-class data object determination module, configured to determine a clothing-class data object based on the initial data;
a clothing attribute information determination module, configured to determine clothing attribute information of the clothing data object;
a clothing template determination module, configured to determine, from a clothing template library according to the clothing attribute information, a target clothing template associated with the clothing attribute information; and
a material obtaining module, configured to generate candidate clothing materials according to the associated target clothing template and the clothing attribute information; take a candidate clothing material as a target clothing material when the similarity between the candidate clothing material and the clothing data object is greater than or equal to a similarity threshold; and, when the similarity between the candidate clothing material and the clothing data object is smaller than the similarity threshold, determine clothing data matching the clothing data object from a clothing database and generate the target clothing material based on the matching clothing data.
7. The apparatus of claim 6, further comprising:
an environment-class data object determination module, configured to determine an environment-class data object based on the initial data;
an environment-associated information acquisition module, configured to acquire environment-associated information associated with the environment-class data object; and
an environment fusion module, configured to fuse the environment-class data object and the environment-associated information to obtain a target environment material.
8. An electronic device, comprising a processor and a memory, wherein at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded by the processor and executed to implement the material generation method according to any one of claims 1 to 5.
9. A computer-readable storage medium, having stored therein at least one instruction or at least one program, the at least one instruction or the at least one program being loaded and executed by a processor to perform a method of generating material as claimed in any one of claims 1 to 5.
CN202010129195.5A 2020-02-28 2020-02-28 Material generation method and device, electronic equipment and medium Active CN111429543B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010129195.5A CN111429543B (en) 2020-02-28 2020-02-28 Material generation method and device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010129195.5A CN111429543B (en) 2020-02-28 2020-02-28 Material generation method and device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN111429543A CN111429543A (en) 2020-07-17
CN111429543B (en) 2020-10-30

Family

ID=71547411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010129195.5A Active CN111429543B (en) 2020-02-28 2020-02-28 Material generation method and device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN111429543B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112121436B (en) * 2020-09-18 2024-02-09 网易(杭州)网络有限公司 Game data processing method and device
CN112370779B (en) * 2020-11-27 2022-10-14 上海米哈游天命科技有限公司 Clothing change method and device, electronic equipment and storage medium
CN112337093B (en) * 2021-01-08 2021-05-25 成都完美时空网络技术有限公司 Virtual object clustering method and device, storage medium and electronic device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140119981A (en) * 2013-04-01 2014-10-13 주홍찬 Apparatus and method for virtual reality clothes or virtual reality accessory wear by using augmented reality
US9176704B2 (en) * 2012-07-18 2015-11-03 Bandai Co., Ltd. Generating augmented reality image using mobile terminal device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102054121B (en) * 2009-11-04 2012-12-05 沈阳迅景科技有限公司 Method for building 3D (three-dimensional) panoramic live-action online game platform
US9536352B2 (en) * 2014-03-27 2017-01-03 Intel Corporation Imitating physical subjects in photos and videos with augmented reality virtual objects
CN106529413A (en) * 2016-10-13 2017-03-22 北京小米移动软件有限公司 Information acquisition method and device
CN106807088A (en) * 2017-02-15 2017-06-09 成都艾维拓思科技有限公司 The method and device that game data updates
CN106823375A (en) * 2017-02-15 2017-06-13 成都艾维拓思科技有限公司 Method, apparatus and system that game dress ornament updates
CN107213642A (en) * 2017-05-12 2017-09-29 北京小米移动软件有限公司 Virtual portrait outward appearance change method and device
CN108986199B (en) * 2018-06-14 2023-05-16 北京小米移动软件有限公司 Virtual model processing method and device, electronic equipment and storage medium
CN108905208A (en) * 2018-06-21 2018-11-30 珠海金山网络游戏科技有限公司 A kind of electronic gaming method and device based on augmented reality

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9176704B2 (en) * 2012-07-18 2015-11-03 Bandai Co., Ltd. Generating augmented reality image using mobile terminal device
KR20140119981A (en) * 2013-04-01 2014-10-13 주홍찬 Apparatus and method for virtual reality clothes or virtual reality accessory wear by using augmented reality

Also Published As

Publication number Publication date
CN111429543A (en) 2020-07-17

Similar Documents

Publication Publication Date Title
US11790589B1 (en) System and method for creating avatars or animated sequences using human body features extracted from a still image
CN111429543B (en) Material generation method and device, electronic equipment and medium
US10109051B1 (en) Item recommendation based on feature match
CN106803057B (en) Image information processing method and device
US20220327709A1 (en) Garment segmentation
CN117897734A (en) Interactive fashion control based on body gestures
CN105426462A (en) Image searching method and device based on image element
US11636662B2 (en) Body normal network light and rendering control
CN111862116A (en) Animation portrait generation method and device, storage medium and computer equipment
US11854069B2 (en) Personalized try-on ads
CN110909746A (en) Clothing recommendation method, related device and equipment
WO2023039183A1 (en) Controlling interactive fashion based on facial expressions
US20240096040A1 (en) Real-time upper-body garment exchange
US11651572B2 (en) Light and rendering of garments
CN113763440A (en) Image processing method, device, equipment and storage medium
CN116630500A (en) Virtual article generation method, virtual clothing generation method and electronic device
CN108010038B (en) Live-broadcast dress decorating method and device based on self-adaptive threshold segmentation
WO2023121896A1 (en) Real-time motion and appearance transfer
WO2023121897A1 (en) Real-time garment exchange
CN113350788A (en) Virtual character reloading method, device and medium
CN108040296B (en) Live-broadcast dress decorating method and device based on self-adaptive tracking frame segmentation
CN113553633A (en) Data generation method and device, electronic equipment and computer storage medium
CN109035177A (en) A kind of photo processing method and device
CN114125271B (en) Image processing method and device and electronic equipment
US20230316666A1 (en) Pixel depth determination for object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211229

Address after: Room 201, No. 38, Zhenggao Road, Yangpu District, Shanghai 200082

Patentee after: Shanghai Wendie Network Technology Co.,Ltd.

Address before: 215000 unit 15-306, creative industry park, 328 Xinghu street, Suzhou Industrial Park, Jiangsu Province

Patentee before: Suzhou Diezhi Network Technology Co.,Ltd.