CN116778034A - Virtual object editor, virtual object editing method and related device - Google Patents
Virtual object editor, virtual object editing method and related device
- Publication number
- CN116778034A (application number CN202310612106.6A)
- Authority
- CN
- China
- Prior art keywords
- virtual object
- virtual
- module
- sub
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/30—Creation or generation of source code
- G06F8/34—Graphical or visual programming
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
The application provides a virtual object editor, a virtual object editing method and a related device, which are used for configuring object information of a virtual object, wherein the object information comprises image information and sound information of the virtual object. The virtual object editor comprises: a virtual object configuration module for configuring the object information; and a preview module for previewing the object image of the virtual object determined based on the object information. The editor is dedicated to the configuration of virtual objects, so that the configuration personnel's need to customize virtual objects is met, the use experience and usability of the virtual object are improved, and the editing efficiency of the virtual object is improved.
Description
Technical Field
The present application relates to the field of virtual objects, and in particular, to a virtual object editor, a virtual object editing method, and a related device.
Background
The virtual objects include virtual humans, virtual animals, virtual cartoon figures, and the like. A virtual human is an anthropomorphic figure constructed with CG technology and operated in code form, and supports multiple interaction modes such as language communication, expression, and action display. Virtual human technology has developed rapidly in the field of artificial intelligence and has been applied in many technical fields such as video, media, games, finance, travel, education and medical care.
Most of the existing virtual object editors are applicable to the traditional field, and cannot well meet the use requirements of configuration personnel for customizing virtual objects. Based on this, the present application provides a virtual object editor, a virtual object editing method, an electronic device, and a computer-readable storage medium to improve the prior art.
Disclosure of Invention
The application aims to provide a virtual object editor, a virtual object editing method, electronic equipment and a computer readable storage medium, which are specially used for configuring a virtual object so as to meet the use requirement of configuration personnel for customizing the virtual object, improve the use experience and usability of the virtual object and improve the editing efficiency of the virtual object.
The application adopts the following technical scheme:
in a first aspect, the present application provides a virtual object editor for configuring object information of a virtual object, the object information including image information and sound information of the virtual object, the virtual object editor comprising:
the virtual object configuration module is used for configuring the object information;
and the preview module is used for previewing the object image of the virtual object determined based on the object information.
The beneficial effect of this technical scheme lies in: a virtual object editor dedicated to virtual objects is provided, so as to meet the configuration personnel's need to customize virtual objects, improve the use experience and usability of the virtual object, and improve the editing efficiency of the virtual object. The virtual object editor comprises a virtual object configuration module and a preview module. The virtual object configuration module is a module for configuring object information of the virtual object, mainly comprising configuration of the image information and sound information of the virtual object. The avatar information may include the appearance, posture, clothing, etc. of the virtual object. The sound information may include the timbre, language, etc. of the virtual object. The configurator can perform detailed configuration of the virtual object through this module, including adding, modifying and deleting information of the virtual object. The preview module is used for previewing the virtual object image; it can generate an object image corresponding to the virtual object according to the object information configured by the virtual object configuration module and display the object image in the preview module. The configurator can observe through the preview module whether the avatar information of the virtual object is consistent with what is expected. On the one hand, through the virtual object configuration module, configuration personnel can easily configure the information of the virtual object, including image information and sound information, so that the degree of customization and personalization of the virtual object is improved. On the other hand, through the preview module, configuration personnel can view the image information of the virtual object in real time, discover and adjust problems with the virtual object in time, and improve the configuration efficiency and accuracy of the virtual object. In yet another aspect, the virtual object editor can be widely applied to fields such as virtual reality, game development, artificial intelligence and human-computer interaction, provides a convenient tool for configuration personnel, and can greatly improve working efficiency and quality.
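The cooperation between the two modules described above can be illustrated with a minimal, non-limiting sketch. The sketch below is not the patented implementation; all class names, fields and the plain-text "rendering" are illustrative assumptions used only to show how a configuration module and a preview module work around shared object information.

```python
# A minimal sketch (assumed names, not the patented implementation) of the
# two-module editor structure: a configuration module holding object information
# and a preview module that turns that information into a displayable image.
from dataclasses import dataclass, field


@dataclass
class ObjectInfo:
    avatar: dict = field(default_factory=dict)   # avatar information (figure, posture, clothing, ...)
    sound: dict = field(default_factory=dict)    # sound information (timbre, language, ...)


class VirtualObjectConfigModule:
    """Configures the object information of a virtual object."""
    def __init__(self) -> None:
        self.info = ObjectInfo()

    def configure(self, **attributes) -> None:
        # add or modify attributes of the virtual object
        for key, value in attributes.items():
            if key in ("timbre", "language"):
                self.info.sound[key] = value
            else:
                self.info.avatar[key] = value


class PreviewModule:
    """Previews the object image determined from the configured object information."""
    def preview(self, info: ObjectInfo) -> str:
        # real rendering is application-specific; a textual summary stands in for it here
        return f"avatar={info.avatar}, sound={info.sound}"


class VirtualObjectEditor:
    def __init__(self) -> None:
        self.config_module = VirtualObjectConfigModule()
        self.preview_module = PreviewModule()


if __name__ == "__main__":
    editor = VirtualObjectEditor()
    editor.config_module.configure(figure="female host", clothing="formal wear", timbre="sweet")
    print(editor.preview_module.preview(editor.config_module.info))
```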
In some possible implementations, the virtual object configuration module includes at least one of the following sub-modules: the device comprises an image sub-module, a language sub-module, a sound sub-module, a gesture sub-module, a hairstyle sub-module, a dressing sub-module, a clothing sub-module, a scene sub-module, a location sub-module, a picture allocation sub-module, a prop sub-module and a lens sub-module.
The beneficial effect of this technical scheme lies in: the virtual object configuration module is provided with a plurality of sub-modules which are used for configuring the virtual object in different dimensions, including sub-modules of images, languages, timbres, postures, hairstyles, makeup, clothes, scenes, places, configuration drawings, props, lenses and the like. Each sub-module can be edited and adjusted in detail for specific configuration details to meet the requirements of different configuration personnel.
For example, the image sub-module may allow a configurator to set the appearance of the virtual object, including information about the shape, size, color, etc. of features such as the body, face, eyes and lips. The language sub-module allows configuration personnel to add language information for the virtual object, that is, to configure the virtual object to use Chinese, English, Russian, etc. The makeup sub-module allows configuration personnel to add detailed information such as makeup and tattoos for the virtual object. The clothing sub-module allows configuration personnel to adjust information such as the clothing, shoes and hats worn by the virtual object.
The virtual object editor provides a plurality of sub-modules, so that configuration personnel can configure and adjust the virtual object more finely according to specific requirements; the degree of customization and flexibility of the virtual object is improved, and the sense of reality of the virtual object and the experience of configuration personnel are also improved.
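As an illustration of the sub-module idea, the hedged sketch below registers one configuration function per dimension under sub-module names taken from the list above; the registry mechanism itself is an assumption, not part of the application.

```python
# Illustrative sketch: per-dimension configuration functions registered under
# sub-module names; the decorator-based registry is an assumed mechanism.
from typing import Callable, Dict

SUB_MODULES: Dict[str, Callable[[dict, str], None]] = {}


def sub_module(name: str):
    """Register a configuration function for one dimension of the virtual object."""
    def register(fn: Callable[[dict, str], None]):
        SUB_MODULES[name] = fn
        return fn
    return register


@sub_module("language")
def set_language(info: dict, value: str) -> None:
    info["language"] = value          # e.g. Chinese, English, Russian


@sub_module("clothing")
def set_clothing(info: dict, value: str) -> None:
    info["clothing"] = value          # e.g. formal wear, sportswear


if __name__ == "__main__":
    info: dict = {}
    SUB_MODULES["language"](info, "English")
    SUB_MODULES["clothing"](info, "sportswear")
    print(info)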
In some possible implementations, the preview module includes an object display area and an object update button;
the preview module is configured to implement the steps of:
and after receiving the operation of the object updating button, previewing the object image in the object display area.
The beneficial effect of this technical scheme lies in: the preview module includes an object display area and an object update button. The configuration personnel can configure the object information of the virtual object in the virtual object configuration module, and then preview the object image of the virtual object through the object display area and the object update button in the preview module, so as to ensure that the configuration effect meets the requirements.
Specifically, after the configurator completes the configuration of the virtual object, clicking an object update button, receiving a corresponding operation signal by the preview module, and then displaying the object image of the virtual object in the object display area. If the configuration personnel is not satisfied with the configuration of the virtual object, editing and adjusting can be continuously performed in the virtual object configuration module, and the object update button is clicked again to preview until the satisfied effect is achieved. The preview module is convenient for configuration personnel to preview and modify the configuration effect of the virtual object in time, reduces the error probability of the configuration of the virtual object and improves the working efficiency. Meanwhile, through the design of the object display area and the object update button, configuration personnel can more intuitively know the information such as the image, the gesture, the clothes and the like of the virtual object, and the sense of reality of the virtual object and the experience of the configuration personnel are improved.
In some possible implementations, the virtual object editor further includes:
the name configuration module is used for configuring the object name of the virtual object;
and the resolution configuration module is used for configuring the image resolution of the object image.
The beneficial effect of this technical scheme lies in: the virtual object editor also includes a name configuration module and a resolution configuration module. The name configuration module is used for configuring the object name of the virtual object, namely, giving a name to the virtual object, so that configuration personnel can manage and identify the virtual object conveniently. The resolution configuration module is used for configuring the image resolution of the virtual object image, namely setting the display effect, such as definition and detail, of the virtual object on a display or other devices. By the configuration of the module, the object images of the virtual objects can show the best effect on different interfaces to be displayed. Therefore, on one hand, the realization of the name configuration module and the resolution configuration module provides a more comprehensive virtual object editing function for configuration personnel, so that the requirements of the configuration personnel are better met. On the other hand, through the name configuration module, configuration personnel can conveniently name the virtual object, and management and recognition are better carried out. On the other hand, through the resolution configuration module, a configurator can adjust the image resolution of the object image of the virtual object according to the characteristics and requirements of different devices, so that the optimal effect is shown on different interfaces to be displayed, and the sense of reality of the virtual object and the experience of configurators are improved.
In some possible implementations, the virtual objects include one or more of a virtual host, a virtual anchor, a virtual idol, a virtual customer service, a virtual lawyer, a virtual financial advisor, a virtual teacher, a virtual doctor, a virtual lecturer, a virtual assistant.
The beneficial effect of this technical scheme lies in: the virtual object editor may configure a variety of virtual objects including virtual hosts, virtual anchor, virtual idol, virtual customer service, virtual lawyers, virtual financial advisors, virtual teacher, virtual doctors, virtual instructors, virtual assistants, and the like.
For example, for application scenarios such as virtual customer service and virtual assistants, the virtual object can provide services for users 24 hours a day without interruption and without the limitations of human resources, so service efficiency can be effectively improved. For application scenarios such as virtual hosts, virtual anchors and virtual idols, more users can be attracted to follow and participate, and the flexibility of program production can be increased.
Therefore, through the virtual object editor, configuration personnel can conveniently customize the virtual objects meeting the requirements of the configuration personnel, and the requirements of various application scenes are realized.
In a second aspect, the present application provides a virtual object editing method, configured to configure object information of a virtual object using the virtual object editor described in any one of the above; the virtual object editor comprises: a virtual object configuration module and a preview module;
The virtual object editing method comprises the following steps:
configuring object information of the virtual object by utilizing the virtual object configuration module;
and previewing the object image of the virtual object determined based on the object information by utilizing the previewing module.
The beneficial effect of this technical scheme lies in: the virtual object editor comprises a virtual object configuration module and a preview module. The virtual object configuration module is used for configuring object information of the virtual object, and the preview module is used for previewing an object image of the virtual object determined based on the object information. In practical applications, the configurator may input object information of the virtual object, such as an avatar, a sound, etc., using the virtual object configuration module. Through the object information, the virtual object editor may generate an object image of the virtual object. And then, the configurator can utilize the preview module to view the object image of the virtual object, so that the configurator can be helped to more intuitively see the object image of the configured virtual object, and further edit and adjust the virtual object. Therefore, on one hand, the configurator can easily create, edit and adjust the virtual object through the virtual object configuration module, so that the difficulty of creating and editing the virtual object is reduced. On the other hand, through the preview module, the configurator can more intuitively see the object image of the configured virtual object, thereby improving the configuration efficiency and quality of the virtual object.
In some possible implementations, the virtual object configuration module includes at least one of the following sub-modules: the system comprises an image sub-module, a language sub-module, a sound sub-module, a gesture sub-module, a hairstyle sub-module, a dressing sub-module, a clothing sub-module, a scene sub-module, a location sub-module, a picture allocation sub-module, a prop sub-module and a lens sub-module;
the image sub-module comprises one or more image controls, and each image control corresponds to an object image;
the language sub-module comprises one or more language controls, and each language control corresponds to an object language;
the tone color submodule comprises one or more tone color controls, and each tone color control corresponds to one object tone color;
the gesture submodule comprises one or more gesture controls, and each gesture control corresponds to an object gesture;
the hairstyle submodule comprises one or more hairstyle controls, and each hairstyle control corresponds to an object hairstyle;
the makeup submodule comprises one or more makeup controls, and each makeup control corresponds to one object makeup;
the clothing submodule comprises one or more clothing controls, and each clothing control corresponds to one object clothing;
The scene submodule comprises one or more scene controls, and each scene control corresponds to an object scene;
the place sub-module comprises one or more place controls, and each place control corresponds to an object place;
the map matching sub-module comprises one or more map matching controls, and each map matching control corresponds to one object map matching;
the prop submodule comprises one or more prop controls, and each prop control corresponds to an object prop;
the lens submodule comprises one or more lens controls, and each lens control corresponds to one object lens;
the configuring the object information of the virtual object by using the virtual object configuration module includes:
responding to a selection operation for one of the avatar controls, taking the object avatar corresponding to the selected avatar control as the object avatar of the virtual object; and/or,
responding to a selection operation for one of the language controls, taking the object language corresponding to the selected language control as the object language of the virtual object; and/or,
responding to a selection operation for one of the tone controls, taking the object tone corresponding to the selected tone control as the object tone of the virtual object; and/or,
responding to a selection operation for one of the gesture controls, taking the object gesture corresponding to the selected gesture control as the object gesture of the virtual object; and/or,
responding to a selection operation for one of the hairstyle controls, taking the object hairstyle corresponding to the selected hairstyle control as the object hairstyle of the virtual object; and/or,
responding to a selection operation for one of the makeup controls, taking the object makeup corresponding to the selected makeup control as the object makeup of the virtual object; and/or,
responding to a selection operation for one of the clothing controls, taking the object clothing corresponding to the selected clothing control as the object clothing of the virtual object; and/or,
responding to a selection operation for one of the scene controls, taking the object scene corresponding to the selected scene control as the object scene of the virtual object; and/or,
responding to a selection operation for one of the place controls, taking the object place corresponding to the selected place control as the object place of the virtual object; and/or,
responding to a selection operation for one of the map matching controls, taking the object map matching corresponding to the selected map matching control as the object map matching of the virtual object; and/or,
responding to a selection operation for one of the prop controls, taking the object prop corresponding to the selected prop control as the object prop of the virtual object; and/or,
responding to a selection operation for one of the lens controls, taking the object lens corresponding to the selected lens control as the object lens of the virtual object.
The beneficial effect of this technical scheme lies in: the virtual object configuration module comprises at least one of the following sub-modules: an image sub-module, a language sub-module, a sound sub-module, a gesture sub-module, a hairstyle sub-module, a dressing sub-module, a clothing sub-module, a scene sub-module, a location sub-module, a picture allocation sub-module, a prop sub-module and a lens sub-module, where each sub-module is used for configuring one attribute of the object information of the virtual object, such as language, sound, gesture, scene or prop, so that the virtual object has rich forms of expression. Each sub-module includes one or more controls through which a configurator can adjust and modify each attribute of the object information to create a unique, personalized virtual object; for example, when the configurator selects one avatar control, the avatar of the virtual object will be set to the avatar corresponding to that control. Similarly, when a configurator selects a language control, tone control, gesture control, hairstyle control, makeup control, apparel control, scene control, location control, map matching control, prop control or lens control, the corresponding object language, object tone, object gesture, object hairstyle, object makeup, object apparel, object scene, object location, object map matching, object prop or object lens will be configured into the virtual object. Thus, a configurator can configure different object information by selecting different controls, so as to create a wide variety of virtual objects. At the same time, these controls can be flexibly combined to achieve diverse scene and context simulations, such as conference lectures, news broadcasts, movie characters, game characters and online customer service. The virtual object can therefore appear freely in various scenes, meeting the diversified customization requirements of configuration personnel.
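The selection behaviour just described (selecting a control writes the bound attribute into the virtual object) can be sketched as follows; the control identifiers and the catalog structure are illustrative assumptions.

```python
# Minimal sketch of "selecting a control assigns the corresponding attribute";
# control identifiers and attribute names are assumed for illustration.
CONTROL_CATALOG = {
    "tone_control_1": ("tone", "sweet female voice"),
    "scene_control_3": ("scene", "park scene"),
    "prop_control_2": ("prop", "notebook"),
}


def on_control_selected(virtual_object: dict, control_id: str) -> None:
    """Write the attribute bound to the selected control into the virtual object."""
    attribute, value = CONTROL_CATALOG[control_id]
    virtual_object[attribute] = value


if __name__ == "__main__":
    vo: dict = {}
    on_control_selected(vo, "tone_control_1")
    on_control_selected(vo, "scene_control_3")
    print(vo)   # {'tone': 'sweet female voice', 'scene': 'park scene'}
```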
In some possible implementations, the virtual object configuration module further includes a description sub-module;
the configuring the object information of the virtual object by using the virtual object configuration module includes:
acquiring object description information by using the description submodule; the object description information includes text information and/or voice information for describing the virtual object;
and carrying out semantic analysis based on the object description information, and determining the object information of the virtual object.
The beneficial effect of this technical scheme lies in: the virtual object configuration module further includes a description sub-module that obtains object description information, including text information and voice information, and uses the obtained object description information to determine object information for the virtual object. As an example, a configurator inputs text information in the description sub-module, and extracts characteristic information in the text information by carrying out semantic analysis on the text information to determine object information of the virtual object. As another example, the configurator inputs the voice information in the description sub-module, and performs semantic analysis by converting the voice information into text information, and extracts the feature information therein for determining the object information of the virtual object.
Therefore, the text information and the voice information can be subjected to semantic analysis by the description submodule, and the characteristic information in the text information and the voice information is extracted so as to determine the object information of the virtual object. On the one hand, by providing a plurality of input modes and matching, configuration personnel can describe the virtual objects required by the configuration personnel more conveniently and obtain the results more in line with the requirements of the configuration personnel. On the other hand, through semantic analysis, the virtual object editor can more accurately understand the requirements and intentions of configuration personnel and provide more intelligent services.
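A minimal sketch of the description sub-module flow is given below, assuming speech is first converted to text and that simple keyword matching stands in for the semantic analysis model; the speech_to_text placeholder is hypothetical and would be backed by a real ASR engine in practice.

```python
# Sketch of the description sub-module: voice is converted to text, then
# "semantic analysis" (here a keyword stand-in) extracts object attributes.
def speech_to_text(voice_bytes: bytes) -> str:
    raise NotImplementedError("plug in a speech-recognition engine here")


def semantic_analysis(description: str) -> dict:
    attributes = {}
    if "long hair" in description.lower():
        attributes["hairstyle"] = "long hair"
    for language in ("Chinese", "English", "Russian"):
        if language.lower() in description.lower():
            attributes["language"] = language
    return attributes


def describe_virtual_object(text: str | None = None, voice: bytes | None = None) -> dict:
    if text is None and voice is not None:
        text = speech_to_text(voice)          # voice input is first transcribed
    return semantic_analysis(text or "")


if __name__ == "__main__":
    print(describe_virtual_object(text="an English-speaking teacher with long hair"))
```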
In some possible implementations, the object description information includes voice information;
the determining the object information of the virtual object based on the object description information includes:
detecting whether the voice information indicates to acquire the voice tone corresponding to the voice information;
if yes, inputting the voice information into a preset tone extraction model to obtain a voice tone corresponding to the voice information;
and determining the voice tone as the object tone of the virtual object.
The beneficial effect of this technical scheme lies in: the configurator inputs the voice information in the description sub-module; the voice information is then examined to detect whether it indicates that the voice tone corresponding to the voice information needs to be acquired, and if so, the voice information is input into a preset tone extraction model to obtain the corresponding voice tone. Finally, the extracted voice tone is determined as the object tone of the virtual object. In this way, the voice information not only expresses the configurator's requirements for the virtual object, but can also serve directly as the source for tone extraction when the configurator wishes to use its tone as the object tone of the virtual object, which avoids recording a separate voice sample just to acquire the tone and improves the configuration efficiency of the object tone.
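The voice-tone branch can be sketched as follows; the intent check and the extraction call are placeholders standing in for the preset tone extraction model, not a real library API.

```python
# Sketch of the voice-tone branch: detect whether the spoken description asks to
# reuse its own tone, and if so run a (placeholder) tone-extraction model on it.
def indicates_use_my_tone(transcript: str) -> bool:
    lowered = transcript.lower()
    return "use my voice" in lowered or "my tone" in lowered


def extract_tone(voice_bytes: bytes) -> dict:
    # a real system would run a pretrained voice-embedding / tone model here
    return {"embedding": [0.0] * 8, "source": "configurator recording"}


def configure_object_tone(virtual_object: dict, transcript: str, voice_bytes: bytes) -> None:
    if indicates_use_my_tone(transcript):
        virtual_object["tone"] = extract_tone(voice_bytes)


if __name__ == "__main__":
    vo: dict = {}
    configure_object_tone(vo, "Please use my voice as the anchor's tone", b"\x00\x01")
    print(vo["tone"]["source"])
```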
In some possible implementations, the preview module includes an object display area and an object update button;
the previewing the object image of the virtual object determined based on the object information by using the previewing module comprises:
and after receiving the operation of the object updating button, previewing the object image in the object display area.
The beneficial effect of this technical scheme lies in: after the configurator performs the operation, for example, the object updating button is clicked, the preview module can display the object image of the new virtual object in the object display area, so that the configurator can be helped to preview the object image of the virtual object more intuitively, and the operation experience of the configurator is improved. Meanwhile, as the virtual object image can be updated in real time, the configuration personnel can more conveniently adjust the object information of the virtual object, and the manufacturing efficiency of the virtual object is improved.
In some possible implementations, the virtual object editor further includes a name configuration module and a resolution configuration module, and the virtual object editing method further includes the following steps:
configuring an object name of the virtual object by utilizing the name configuration module so as to associate the virtual object with the object name;
And configuring the image resolution of the object image by utilizing the resolution configuration module so as to adapt the object image to the interfaces to be displayed with different resolutions.
The beneficial effect of this technical scheme lies in: in one aspect, the virtual object editor includes a name configuration module, through which an object name is set for the virtual object so as to associate the virtual object with the object name, thereby facilitating identification and lookup of the virtual object. In another aspect, the virtual object editor includes a resolution configuration module that configures the image resolution of the object image, so that the object image of the virtual object is adapted to interfaces to be displayed with different resolutions. For example, if the image resolution of the virtual object is set too low on a device with a higher display resolution, problems such as incomplete display of the virtual object, blurred images and unclear details will occur, which affects the configurator's viewing experience. In order to adapt to various interfaces to be displayed, the resolution of the object image is configured through the resolution configuration module, so that the virtual object image can show the best effect on each interface to be displayed, and the experience of configuration personnel is improved.
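A small sketch of name and resolution configuration follows; the aspect-preserving scaling rule is an assumed example of adapting one object image to interfaces with different resolutions, not the method claimed here.

```python
# Sketch: an object name plus image resolution, and a simple aspect-preserving
# scaling rule used only to illustrate adapting the image to a target interface.
from dataclasses import dataclass


@dataclass
class VirtualObjectProfile:
    name: str                           # set via the name configuration module
    image_resolution: tuple[int, int]   # set via the resolution configuration module


def adapt_to_interface(profile: VirtualObjectProfile,
                       interface_resolution: tuple[int, int]) -> tuple[int, int]:
    """Scale the object image so it fits the target interface without distortion."""
    iw, ih = profile.image_resolution
    tw, th = interface_resolution
    scale = min(tw / iw, th / ih)
    return round(iw * scale), round(ih * scale)


if __name__ == "__main__":
    profile = VirtualObjectProfile(name="virtual host A", image_resolution=(1080, 1920))
    print(adapt_to_interface(profile, (720, 1280)))    # smaller interface
    print(adapt_to_interface(profile, (2160, 3840)))   # high-resolution display
```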
In a third aspect, the present application provides an electronic device for configuring object information of a virtual object using a virtual object editor as defined in any one of the preceding claims, the virtual object editor comprising a virtual object configuration module and a preview module, the electronic device comprising a memory and at least one processor, the memory storing a computer program, the at least one processor being configured to implement the following steps when executing the computer program:
configuring object information of the virtual object by utilizing the virtual object configuration module;
and previewing the object image of the virtual object determined based on the object information by utilizing the previewing module.
In a fourth aspect, the present application provides a computer-readable storage medium storing a computer program which, when executed by at least one processor, performs the steps of any one of the virtual object editing methods described above or performs the functions of the electronic device described above.
Drawings
The application will be further described with reference to the drawings and embodiments.
Fig. 1 is a schematic structural diagram of a virtual object editor according to an embodiment of the present application.
Fig. 2 is a schematic structural diagram of a virtual object configuration module according to an embodiment of the present application.
Fig. 3 is a flowchart of a virtual object editing method according to an embodiment of the present application.
Fig. 4 is a schematic flow chart of object description information processing according to an embodiment of the present application.
Fig. 5 is a schematic flow chart of another object description information processing according to an embodiment of the present application.
Fig. 6 is a schematic flow chart of a name configuration module and a resolution configuration module according to an embodiment of the present application.
Fig. 7 is a block diagram of an electronic device according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of a program product according to an embodiment of the present application.
Detailed Description
The technical scheme of the present application will be described below with reference to the drawings and the specific embodiments of the present application, and it should be noted that, on the premise of no conflict, new embodiments may be formed by any combination of the embodiments or technical features described below.
In embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of" the following items or the like means any combination of these items, including any combination of single items or plural items. For example, at least one of a, b or c may represent: a, b, c, a and b, a and c, b and c, or a and b and c, where a, b and c may each be single or multiple. It is noted that "at least one" may also be interpreted as "one or more".
It is also noted that, in embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any implementation or design described as "exemplary" or "e.g." in the examples of this application should not be construed as preferred or advantageous over other implementations or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
The technical field and related terms of the embodiments of the present application are briefly described below.
The virtual objects include virtual humans, virtual animals, virtual cartoon figures, and the like. A virtual human is an anthropomorphic figure constructed with CG technology and operated in code form, and supports multiple interaction modes such as language communication, expression, and action display. Virtual human technology has developed rapidly in the field of artificial intelligence and has been applied in many technical fields such as video, media, games, finance, travel, education and medical care; not only can virtual hosts, virtual anchors, virtual idols, virtual customer service, virtual lawyers, virtual financial advisors, virtual teachers, virtual doctors, virtual lecturers, virtual assistants and the like be customized, but videos can also be generated from text or audio with one click. Among virtual humans, service-oriented virtual humans mainly replace real people in providing services and daily companionship; they are the virtualization of real-world service roles, and their industrial value lies mainly in reducing costs and improving efficiency for existing service industries.
A virtual object editor is a software tool for creating or editing virtual objects that allows configuration personnel to create, edit, or customize virtual objects, such as models, characters, props, scenes, and the like. These virtual objects may be used in the fields of game development, animation, virtual reality, live broadcast, etc.
Artificial intelligence (Artificial Intelligence, AI) is the theory, method, technology and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making. Artificial intelligence technology is a comprehensive subject involving a wide range of fields, covering both hardware-level and software-level technologies. Artificial intelligence infrastructure technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, machine learning/deep learning, automatic driving, intelligent transportation and other directions.
Machine Learning (ML) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory and other disciplines. A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E. Machine learning specializes in studying how computers simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve their own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied throughout all areas of artificial intelligence.
Deep learning is a special kind of machine learning that represents the world using a nested hierarchy of concepts, where each concept is defined in relation to simpler concepts and more abstract representations are computed from less abstract ones, thereby achieving great power and flexibility. Machine learning and deep learning typically include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from instruction.
(virtual object editor)
Referring to fig. 1, fig. 1 is a schematic structural diagram of a virtual object editor according to an embodiment of the present application.
The embodiment of the application provides a virtual object editor for configuring object information of a virtual object, which comprises the following components:
the virtual object configuration module is used for configuring the object information;
and the preview module is used for previewing the object image of the virtual object determined based on the object information.
In this embodiment, the virtual object includes one or more of a virtual person, a virtual animal, and an avatar.
In this embodiment, the object information is used to indicate various properties of the virtual object, the object information includes avatar information of the virtual object and sound information, the avatar information is used to indicate an object avatar of the virtual object, and the sound information is used to indicate an object sound of the virtual object.
In the present embodiment, the object image includes the appearance, animation, sound, and the like of the virtual object, and the object image is not limited here.
In this embodiment, the virtual object configuration module and the preview module may be provided with a virtual object configuration interface and a preview interface that are independent of each other; alternatively, the virtual object configuration module and the preview module may adopt an integrated interface, so that configuration personnel can realize the functions of virtual object configuration and preview through the same interface.
Here, a configurator generally refers to a staff member whose work includes configuring virtual objects.
Therefore, a virtual object editor dedicated to virtual objects is provided, so as to meet the configuration personnel's need to customize virtual objects, improve the use experience and usability of the virtual object, and improve the editing efficiency of the virtual object. The virtual object editor comprises a virtual object configuration module and a preview module. The virtual object configuration module is a module for configuring object information of the virtual object, mainly comprising configuration of the image information and sound information of the virtual object. The avatar information may include the appearance, posture, clothing, etc. of the virtual object. The sound information may include the timbre, language, etc. of the virtual object. The configurator can perform detailed configuration of the virtual object through this module, including adding, modifying and deleting information of the virtual object. The preview module is used for previewing the virtual object image; it can generate an object image corresponding to the virtual object according to the object information configured by the virtual object configuration module and display the object image in the preview module. The configurator can observe through the preview module whether the avatar information of the virtual object is consistent with what is expected. On the one hand, through the virtual object configuration module, configuration personnel can easily configure the information of the virtual object, including image information and sound information, so that the degree of customization and personalization of the virtual object is improved. On the other hand, through the preview module, configuration personnel can view the image information of the virtual object in real time, discover and adjust problems with the virtual object in time, and improve the configuration efficiency and accuracy of the virtual object. In yet another aspect, the virtual object editor can be widely applied to fields such as virtual reality, game development and human-computer interaction, provides a convenient tool for configuration personnel, and can greatly improve working efficiency and quality.
In some embodiments of the application, the virtual objects include one or more of a virtual host, a virtual anchor, a virtual idol, a virtual customer service, a virtual lawyer, a virtual financial advisor, a virtual teacher, a virtual doctor, a virtual lecturer, a virtual assistant.
Thus, the virtual object editor may configure a variety of virtual objects, including virtual hosts, virtual anchors, virtual idols, virtual customer service, virtual lawyers, virtual financial advisors, virtual teachers, virtual doctors, virtual lecturers, virtual assistants, and the like. For example, for application scenarios such as virtual customer service and virtual assistants, the virtual object can provide services for users 24 hours a day without interruption and without the limitations of human resources, so service efficiency can be effectively improved. For application scenarios such as virtual hosts, virtual anchors and virtual idols, more users can be attracted to follow and participate, and the flexibility of program production can be increased. Therefore, through the virtual object editor, configuration personnel can conveniently customize virtual objects that meet their own requirements, so as to satisfy the needs of various application scenarios.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a virtual object configuration module according to an embodiment of the present application.
In some embodiments of the application, the virtual object configuration module includes at least one of the following sub-modules: the device comprises an image sub-module, a language sub-module, a sound sub-module, a gesture sub-module, a hairstyle sub-module, a dressing sub-module, a clothing sub-module, a scene sub-module, a location sub-module, a picture allocation sub-module, a prop sub-module and a lens sub-module.
The image sub-module is used for configuring an object image of the virtual object; the language submodule is used for configuring the object language of the virtual object; the tone color submodule is used for configuring object tone colors of the virtual objects; the gesture submodule is used for configuring the object gesture of the virtual object; the hairstyle submodule is used for configuring an object hairstyle of the virtual object; the makeup submodule is used for configuring an object makeup of the virtual object; the clothing submodule is used for configuring object clothing of the virtual object; the scene submodule is used for configuring an object scene of the virtual object; the location submodule is used for configuring an object location of the virtual object; the map matching sub-module is used for configuring an object map of the virtual object; the prop submodule is used for configuring object props of the virtual objects; the lens submodule is used for configuring object lenses of the virtual objects.
As one example, the object avatar includes the shapes of features such as the body, face, eyes and lips of the virtual object; the object languages include Chinese, English, Japanese, Russian, etc.; the object tones include a sweet female voice, a neutral female voice, a lovely male voice, a husky male voice, etc.; the object gestures include an anchor standing posture, a customer-service standing posture, a news-broadcasting standing posture, an interview standing posture, etc.; the object hairstyles include short hair, long hair, medium-long hair, a bun, a high ponytail, a short ponytail, etc.; the object makeup includes bare makeup, heavy makeup, fresh makeup, host makeup, etc.; the object clothing includes formal wear, sportswear, ethnic costumes, etc.; the object scenes include a park scene, a gala scene, a mall scene, a forest scene, etc.; the object locations include home, school, park, etc.; the object configuration maps include pictures of people, animals, plants, and cartoons or animations; the object props include cars, notebooks, magic wands, cups, computers, etc.; the object lenses include a centered position, a left-of-center position, a right-of-center position, an overhead position and a low-angle position.
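The example options above can be organised, for illustration only, as a catalog keyed by configuration dimension; the dictionary structure below is an assumption, while the option names are taken from the text.

```python
# Illustrative catalog of per-dimension options; structure is assumed, the
# option names mirror the examples listed above.
OPTION_CATALOG = {
    "language": ["Chinese", "English", "Japanese", "Russian"],
    "tone": ["sweet female voice", "neutral female voice", "husky male voice"],
    "gesture": ["anchor standing posture", "customer-service standing posture"],
    "hairstyle": ["short hair", "long hair", "bun", "high ponytail"],
    "clothing": ["formal wear", "sportswear", "ethnic costume"],
    "scene": ["park scene", "mall scene", "forest scene"],
    "prop": ["car", "notebook", "magic wand", "cup", "computer"],
    "lens": ["centered", "left of center", "right of center", "overhead", "low angle"],
}


def options_for(dimension: str) -> list[str]:
    """Return the selectable options for one configuration dimension."""
    return OPTION_CATALOG.get(dimension, [])


if __name__ == "__main__":
    print(options_for("tone"))
```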
Therefore, a plurality of sub-modules are provided in the virtual object configuration module, and the sub-modules are used for configuring the virtual object in different dimensions, including sub-modules of images, languages, timbres, postures, hairstyles, makeup, clothes, scenes, places, configuration drawings, props, lenses and the like. Each sub-module can be edited and adjusted in detail for specific configuration details to meet the requirements of different configuration personnel.
For example, the image sub-module may allow a configurator to set the appearance of the virtual object, including information about the shape, size, color, etc. of features such as the body, face, eyes and lips. The language sub-module allows configuration personnel to add language information for the virtual object, that is, to configure the virtual object to use Chinese, English, Russian, etc. The makeup sub-module allows configuration personnel to add detailed information such as makeup and tattoos for the virtual object. The clothing sub-module allows configuration personnel to adjust information such as the clothing, shoes and hats worn by the virtual object.
The virtual object editor provides a plurality of sub-modules, so that configuration personnel can configure and adjust the virtual object more finely according to specific requirements; the degree of customization and flexibility of the virtual object is improved, and the sense of reality of the virtual object and the experience of configuration personnel are also improved.
In some embodiments of the application, the preview module includes an object display area and an object update button;
the preview module is configured to implement the steps of:
and after receiving the operation of the object updating button, previewing the object image in the object display area.
In this embodiment, the object update button may be disposed inside or outside the object display area, and the position of the object update button is not limited here.
The preview module includes an object display area and an object update button. The configuration personnel can configure the object information of the virtual object in the virtual object configuration module, and then preview the object image of the virtual object through the object display area and the object update button in the preview module, so as to ensure that the configuration effect meets the requirements.
Specifically, after the configurator completes the configuration of the virtual object, clicking an object update button, receiving a corresponding operation signal by the preview module, and then displaying the object image of the virtual object in the object display area. If the configuration personnel is not satisfied with the configuration of the virtual object, editing and adjusting can be continuously performed in the virtual object configuration module, and the object update button is clicked again to preview until the satisfied effect is achieved. The preview module is convenient for configuration personnel to preview and modify the configuration effect of the virtual object in time, reduces the error probability of the configuration of the virtual object and improves the working efficiency. Meanwhile, through the design of the object display area and the object update button, configuration personnel can more intuitively know the information such as the image, the gesture, the clothes and the like of the virtual object, and the sense of reality of the virtual object and the experience of the configuration personnel are improved.
In other embodiments of the present application, the preview module includes an object display area;
the preview module is configured to implement the steps of:
monitoring the change condition of the object information;
when the object information changes, the object image is previewed in real time on the basis of the object information in the object display area.
The change condition refers to the change condition that a configurator configures the object information through the virtual object configuration module.
Therefore, whether the preview image needs to be updated or not is judged by monitoring the change condition of the object information. When the object information changes, the preview module previews the object image in real time in the object display area based on the new object information, so that configuration personnel can directly see the configuration effect. The configuration personnel can preview the configuration result at any time when configuring the virtual object, and observe the characteristics of each attribute of the configured virtual object in real time. Therefore, configuration personnel can be helped to quickly verify and adjust the attribute of the configured virtual object, and the configuration efficiency and accuracy are improved. Meanwhile, the real-time preview can also increase the interaction and operation experience of the configurator, so that the configurator can perform the configuration operation of the virtual object more pleasurably.
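The real-time variant (the preview refreshed whenever the object information changes) can be sketched with a simple observer pattern; the subscription mechanism below is an assumption used only to illustrate change monitoring.

```python
# Sketch of real-time preview: the preview module observes changes to the object
# information and refreshes the display area immediately after every change.
from typing import Callable


class ObservableObjectInfo:
    def __init__(self) -> None:
        self._data: dict = {}
        self._listeners: list[Callable[[dict], None]] = []

    def subscribe(self, listener: Callable[[dict], None]) -> None:
        self._listeners.append(listener)

    def set(self, key: str, value: str) -> None:
        self._data[key] = value
        for listener in self._listeners:          # notify on every change
            listener(dict(self._data))


def preview_in_display_area(info: dict) -> None:
    print(f"[display area] {info}")


if __name__ == "__main__":
    info = ObservableObjectInfo()
    info.subscribe(preview_in_display_area)
    info.set("hairstyle", "long hair")   # preview refreshes immediately
    info.set("scene", "park scene")
```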
In some embodiments of the application, the virtual object editor further comprises:
the name configuration module is used for configuring the object name of the virtual object;
and the resolution configuration module is used for configuring the image resolution of the object image.
Thus, the virtual object editor also includes a name configuration module and a resolution configuration module. The name configuration module is used for configuring the object name of the virtual object, namely, giving a name to the virtual object, so that configuration personnel can manage and identify the virtual object conveniently. The resolution configuration module is used for configuring the image resolution of the virtual object image, namely setting the display effect, such as definition and detail, of the virtual object on a display or other devices. By the configuration of the module, the object images of the virtual objects can show the best effect on different interfaces to be displayed. Therefore, on one hand, the realization of the name configuration module and the resolution configuration module provides a more comprehensive virtual object editing function for configuration personnel, so that the requirements of the configuration personnel are better met. On the other hand, through the name configuration module, configuration personnel can conveniently name the virtual object, and management and recognition are better carried out. On the other hand, through the resolution configuration module, a configurator can adjust the image resolution of the object image of the virtual object according to the characteristics and requirements of different devices, so that the optimal effect is shown on different interfaces to be displayed, and the sense of reality of the virtual object and the experience of configurators are improved.
(virtual object editing method)
Referring to fig. 3, fig. 3 is a flowchart of a virtual object editing method according to an embodiment of the present application.
The embodiment of the application provides a virtual object editing method, which is used for configuring object information of a virtual object by utilizing the virtual object editor; the virtual object editor comprises: a virtual object configuration module and a preview module;
the virtual object editing method comprises the following steps:
step S101: configuring object information of the virtual object by utilizing the virtual object configuration module;
step S102: and previewing the object image of the virtual object determined based on the object information by utilizing the previewing module.
The virtual object editing method may run on an electronic device. The electronic device and the terminal device used by the configurator may be separate devices, or they may be combined into one. When they are separate, the electronic device may be a computer, a server, or another device with computing capability. The embodiment of the application does not limit the terminal device used by the configurator; it may be, for example, an intelligent terminal device with a display screen such as a mobile phone, a tablet computer, a notebook computer, a desktop computer or a smart wearable device, or it may be a workstation or a console with a display screen. The display screen may be a touch screen or a non-touch screen, and is configured to display one or more configuration interfaces corresponding to the virtual object editor.
The virtual object editor comprises a virtual object configuration module and a preview module. The virtual object configuration module is used for configuring object information of the virtual object, and the preview module is used for previewing an object image of the virtual object determined based on the object information. In practical applications, the configurator may input object information of the virtual object, such as an avatar, a sound, etc., using the virtual object configuration module. Through the object information, the virtual object editor may generate an object image of the virtual object. Then, the configurator can use the preview module to view the object image of the virtual object, which helps the configurator see the configured virtual object more intuitively and further edit and adjust it.
Therefore, on one hand, the configurator can easily create, edit and adjust the virtual object through the virtual object configuration module, so that the difficulty of creating and editing the virtual object is reduced. On the other hand, through the preview module, the configurator can more intuitively see the object image of the configured virtual object, thereby improving the configuration efficiency and quality of the virtual object.
In some embodiments of the application, the virtual object configuration module includes at least one of the following sub-modules: the system comprises an image sub-module, a language sub-module, a sound sub-module, a gesture sub-module, a hairstyle sub-module, a dressing sub-module, a clothing sub-module, a scene sub-module, a location sub-module, a picture allocation sub-module, a prop sub-module and a lens sub-module;
The image sub-module comprises one or more image controls, and each image control corresponds to an object image;
the language sub-module comprises one or more language controls, and each language control corresponds to an object language;
the sound sub-module comprises one or more tone controls, and each tone control corresponds to an object tone;
the gesture submodule comprises one or more gesture controls, and each gesture control corresponds to an object gesture;
the hairstyle submodule comprises one or more hairstyle controls, and each hairstyle control corresponds to an object hairstyle;
the makeup submodule comprises one or more makeup controls, and each makeup control corresponds to one object makeup;
the clothing submodule comprises one or more clothing controls, and each clothing control corresponds to one object clothing;
the scene submodule comprises one or more scene controls, and each scene control corresponds to an object scene;
the place sub-module comprises one or more place controls, and each place control corresponds to an object place;
the map matching sub-module comprises one or more map matching controls, and each map matching control corresponds to one object map matching;
The prop submodule comprises one or more prop controls, and each prop control corresponds to an object prop;
the lens submodule comprises one or more lens controls, and each lens control corresponds to one object lens;
the configuring the object information of the virtual object by using the virtual object configuration module includes:
responding to the selection operation for one of the avatar controls, and taking the object avatar corresponding to the selected avatar control as the object avatar of the virtual object; and/or,
responding to the selection operation for one of the language controls, and taking the object language corresponding to the selected language control as the object language of the virtual object; and/or,
responding to the selection operation for one of the tone controls, and taking the object tone corresponding to the selected tone control as the object tone of the virtual object; and/or,
responding to the selection operation for one of the gesture controls, and taking the object gesture corresponding to the selected gesture control as the object gesture of the virtual object; and/or,
responding to the selection operation for one of the hairstyle controls, and taking the object hairstyle corresponding to the selected hairstyle control as the object hairstyle of the virtual object; and/or,
responding to the selection operation for one of the makeup controls, and taking the object makeup corresponding to the selected makeup control as the object makeup of the virtual object; and/or,
responding to the selection operation for one of the clothing controls, and taking the object clothing corresponding to the selected clothing control as the object clothing of the virtual object; and/or,
responding to the selection operation for one of the scene controls, and taking the object scene corresponding to the selected scene control as the object scene of the virtual object; and/or,
responding to the selection operation for one of the place controls, and taking the object place corresponding to the selected place control as the object place of the virtual object; and/or,
responding to the selection operation for one of the map matching controls, and taking the object map corresponding to the selected map matching control as the object map of the virtual object; and/or,
responding to the selection operation for one of the prop controls, and taking the object prop corresponding to the selected prop control as the object prop of the virtual object; and/or,
and responding to the selection operation for one of the lens controls, and taking the object lens corresponding to the selected lens control as the object lens of the virtual object.
The specific form of the image control, language control, tone control, gesture control, hairstyle control, makeup control, clothing control, scene control, place control, map matching control, prop control and lens control is not limited in this embodiment; each of them may be, for example, a radio button (single selection box), a toggle switch, or the like.
Thus, the virtual object configuration module includes at least one of the following sub-modules: an image sub-module, a language sub-module, a sound sub-module, a gesture sub-module, a hairstyle sub-module, a dressing sub-module, a clothing sub-module, a scene sub-module, a location sub-module, a picture allocation sub-module, a prop sub-module and a lens sub-module. Each sub-module is used to configure one attribute of the object information of the virtual object, such as its language, sound, pose, scene or props, so that the virtual object has rich forms of expression. Each sub-module includes one or more controls, through which the configurator can adjust and modify the corresponding attribute of the object information to create a unique, personalized virtual object. For example, when the configurator selects an image control, the image of the virtual object is set to the object image corresponding to that control. Similarly, when the configurator selects a language control, tone control, gesture control, hairstyle control, makeup control, clothing control, scene control, place control, map matching control, prop control or lens control, the corresponding object language, object tone, object gesture, object hairstyle, object makeup, object clothing, object scene, object place, object map, object prop or object lens is configured for the virtual object. In this way, the configurator can configure different object information by selecting different controls, so as to create a wide variety of virtual objects. At the same time, these controls can be flexibly combined to simulate diverse scenes and contexts, such as conference lectures, news broadcasts, movie characters, game characters and online customer service, so that the virtual object can appear freely in various scenes and the configurator's diverse customization requirements are met.
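As an illustrative, non-limiting sketch of the control-selection logic described above, the following Python snippet maps each sub-module to a table of options and applies the option bound to the selected control to the object information. The option tables, identifiers and the apply_selection helper are hypothetical; a real editor would present each table as a row of controls in the corresponding sub-module.

```python
from typing import Any, Dict

# Each sub-module offers a set of controls; each control corresponds to one option.
SUB_MODULE_OPTIONS: Dict[str, Dict[str, Any]] = {
    "image":     {"female_cute": "female, cute style", "male_formal": "male, formal style"},
    "tone":      {"warm": "warm tone", "bright": "bright tone"},
    "hairstyle": {"high_ponytail": "high ponytail", "short": "short hair"},
    "clothing":  {"blue_coat": "blue coat + black jeans", "suit": "business suit"},
    "scene":     {"newsroom": "news broadcast studio", "classroom": "classroom"},
}

def apply_selection(object_info: Dict[str, Any], sub_module: str, control_id: str) -> None:
    """Respond to a selection operation: the option bound to the selected control
    becomes the corresponding attribute of the virtual object."""
    options = SUB_MODULE_OPTIONS[sub_module]
    if control_id not in options:
        raise KeyError(f"unknown control {control_id!r} in sub-module {sub_module!r}")
    object_info[sub_module] = options[control_id]

# Selecting controls from several sub-modules builds up the object information.
object_info: Dict[str, Any] = {}
apply_selection(object_info, "image", "female_cute")
apply_selection(object_info, "hairstyle", "high_ponytail")
apply_selection(object_info, "scene", "newsroom")
print(object_info)
```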
Referring to fig. 4, fig. 4 is a schematic flow chart of processing object description information according to an embodiment of the present application.
In some embodiments of the application, the virtual object configuration module further comprises a description sub-module;
the configuring the object information of the virtual object by using the virtual object configuration module includes:
step S201: acquiring object description information by using the description submodule;
step S202: and carrying out semantic analysis based on the object description information, and determining the object information of the virtual object.
Wherein the object description information includes text information and/or voice information for describing the virtual object;
as an example, the text message may be "i want a virtual anchor for a woman, of the lovely type, wearing blue coats and black jeans, with a hairstyle of high horsetail. After receiving the text information, the description submodule carries out semantic analysis on the text information, so that object information is ' object image=female+lovely ', object clothes=blue coat+black jeans, and object hairstyle=high horsetail '.
The description sub-module acquires object description information, including text information and voice information, which is used to determine the object information of the virtual object. The configurator may input text information in the description sub-module; the text is then semantically parsed to extract feature information, which is used to determine the object information of the virtual object. Alternatively, the configurator may input voice information in the description sub-module; the voice is converted into text, the text is semantically parsed, and feature information is extracted from it to determine the object information of the virtual object.
Therefore, the description sub-module can semantically parse both text information and voice information and extract the feature information they contain in order to determine the object information of the virtual object. On the one hand, by providing multiple input modes, the configurator can describe the desired virtual object more conveniently and obtain a result that better matches the requirements. On the other hand, through semantic parsing, the virtual object editor can understand the configurator's requirements and intentions more accurately and provide more intelligent services.
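As an illustrative, non-limiting sketch of this semantic parsing step, the following Python snippet stands in for the semantic analysis model with a trivial keyword lookup; the keyword table, attribute keys and function name are assumptions made only to show how object description information could be mapped to object information.

```python
from typing import Dict

# Very small keyword table standing in for real semantic analysis.
KEYWORD_RULES: Dict[str, Dict[str, str]] = {
    "female":        {"object_image": "female"},
    "cute":          {"object_image": "cute"},
    "blue coat":     {"object_clothing": "blue coat"},
    "black jeans":   {"object_clothing": "black jeans"},
    "high ponytail": {"object_hairstyle": "high ponytail"},
}

def parse_object_description(text: str) -> Dict[str, str]:
    """Extract object information from a textual object description."""
    text = text.lower()
    object_info: Dict[str, str] = {}
    for keyword, attributes in KEYWORD_RULES.items():
        if keyword in text:
            for key, value in attributes.items():
                # Accumulate values for the same attribute, e.g. "female + cute".
                object_info[key] = (object_info[key] + " + " + value) if key in object_info else value
    return object_info

description = ("I want a female virtual anchor of the cute type, wearing a blue coat "
               "and black jeans, with a high ponytail.")
print(parse_object_description(description))
# {'object_image': 'female + cute', 'object_clothing': 'blue coat + black jeans',
#  'object_hairstyle': 'high ponytail'}
```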
In some embodiments of the present application, the determining the object information of the virtual object based on the object description information by semantic parsing includes:
and inputting the object description information into a preset semantic analysis model to obtain the object information.
In this embodiment, the training process of the semantic analysis model includes:
acquiring a training set, wherein the training set comprises a plurality of training data, and each training data comprises sample object description information and labeling data of object information corresponding to the sample object description information;
for each training data in the training set, performing the following processing:
Inputting sample object description information in the training data into a preset deep learning model to obtain prediction data of object information corresponding to the sample object description information;
updating model parameters of the deep learning model based on the prediction data and the annotation data of the object information;
detecting whether a preset training ending condition is met; if yes, taking the trained deep learning model as the semantic analysis model; if not, continuing to train the deep learning model by using the next training data.
In this way, by designing an appropriate number of neuron computing nodes and a multi-layer computation hierarchy, and by selecting suitable input and output layers, a preset deep learning model can be obtained. Through learning and tuning of this deep learning model, a functional relation from input to output is established. Although the functional relation between input and output cannot be recovered exactly, it can approximate the real relation as closely as possible. The semantic analysis model trained in this way can obtain the corresponding object information from the object description information; it has a wide range of application, and its results are accurate and reliable.
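As an illustrative, non-limiting sketch of the training procedure above, the following Python snippet runs a standard supervised training loop with PyTorch on a toy bag-of-words encoding of sample object descriptions. The vocabulary, labels, network size and training-end condition are assumptions for illustration; the actual semantic analysis model of the embodiments is not limited to this architecture.

```python
import torch
from torch import nn

VOCAB = ["female", "male", "cute", "formal"]          # toy vocabulary (assumption)
LABELS = ["image=female+cute", "image=male+formal"]   # toy object-information labels

def encode(description: str) -> torch.Tensor:
    """Bag-of-words encoding of a sample object description."""
    words = description.lower().split()
    return torch.tensor([float(w in words) for w in VOCAB])

# Preset deep learning model: input layer -> hidden layer -> output layer.
model = nn.Sequential(nn.Linear(len(VOCAB), 8), nn.ReLU(), nn.Linear(8, len(LABELS)))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Training set: (sample object description, annotated object information).
training_set = [
    ("a cute female anchor", 0),
    ("a formal male host", 1),
]

for epoch in range(100):                               # preset training-end condition
    for description, label in training_set:
        prediction = model(encode(description).unsqueeze(0))   # prediction data
        loss = loss_fn(prediction, torch.tensor([label]))      # compare with annotation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()                                # update the model parameters

print(LABELS[model(encode("a cute female anchor").unsqueeze(0)).argmax().item()])
```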
In some embodiments of the application, the semantic parsing model may be obtained by training in the manner described above.
In other embodiments of the present application, the present application may employ pre-trained semantic parsing models.
In this embodiment, the preset deep learning model may be a convolutional neural network model or a recurrent neural network model; the implementation manner of the preset deep learning model is not limited here.
The training process of the semantic analysis model is not limited, and for example, the training mode of supervised learning, the training mode of semi-supervised learning or the training mode of unsupervised learning can be adopted.
The preset training ending condition is not limited, and for example, the training times can reach the preset times (the preset times are, for example, 1 time, 3 times, 10 times, 100 times, 1000 times, 10000 times, etc.), or the training data in the training set can be all trained once or a plurality of times, or the total loss value obtained in the training is not more than the preset loss value.
Therefore, the object description information is predicted through the semantic analysis model, the obtained object information is accurate, the processing speed is high, and the working efficiency and experience of configuration personnel are greatly improved.
In some embodiments of the present application, the configuring the object information of the virtual object using the virtual object configuration module includes:
acquiring commodity description information by using the description submodule;
and acquiring object information of the virtual object based on the commodity description information.
The obtaining the object information of the virtual object based on the commodity description information includes:
determining object information of the virtual object in a preset virtual object library based on the commodity description information; or,
inputting the commodity description information into an object configuration model to obtain object information of the virtual object.
Wherein the commodity description information comprises text information and/or voice information for describing commodities; and the virtual object library stores object information of historically created virtual objects and the historical transaction data corresponding to each virtual object.
In the embodiment of the application, the object configuration model may be a convolutional neural network model or a recurrent neural network model, and the implementation manner of the object configuration model is not limited here.
In the embodiment of the present application, the training process of the object configuration model is similar to the training process of the semantic analysis model, and will not be described herein.
As an example, the text information may be: "The commodity I want to sell is an Apple computer, the target group is students; please generate a suitable virtual object for me." After receiving the text information, the description sub-module performs semantic analysis on it and determines that the commodity is an Apple computer and that the target group is students. The historical transaction data for Apple computers is then searched in the virtual object library for virtual objects whose consumer group is mainly students. For example, the sales achieved by the virtual object "Crystal" for Apple computers are 50,000 yuan with a consumer group that is mainly white-collar workers, while the sales achieved by the virtual object "Dudu" are 200,000 yuan with a consumer group that is mainly students. The object information of the virtual object "Dudu" is therefore determined as the object information of the present virtual object. Alternatively, the commodity description information may be input into the object configuration model to obtain the object information of the virtual object automatically.
Therefore, by inputting the commodity description information, the configurator can have the object information of the virtual object generated automatically. The editor can thus understand the configurator's requirements for the commodity more comprehensively and accurately, and can configure the object information of the required virtual object better based on the historical transaction data of the virtual objects associated with the commodity in the virtual object library, improving the efficiency and accuracy of virtual object configuration.
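As an illustrative, non-limiting sketch of the library lookup described in the example, the following Python snippet selects, among the historically created virtual objects associated with the commodity, the one whose historical transaction data best matches the target group. The library records, field names and selection rule are hypothetical.

```python
from typing import Dict, List, Optional

# Hypothetical virtual object library: historically created objects with transaction data.
VIRTUAL_OBJECT_LIBRARY: List[Dict] = [
    {"name": "Crystal", "commodity": "apple computer", "sales_yuan": 50_000,
     "main_consumers": "white-collar", "object_info": {"object_image": "elegant female"}},
    {"name": "Dudu", "commodity": "apple computer", "sales_yuan": 200_000,
     "main_consumers": "students", "object_info": {"object_image": "lively student style"}},
]

def select_object_info(commodity: str, target_group: str) -> Optional[Dict]:
    """Pick the historically best-selling virtual object for this commodity
    whose main consumer group matches the target group."""
    candidates = [record for record in VIRTUAL_OBJECT_LIBRARY
                  if record["commodity"] == commodity
                  and record["main_consumers"] == target_group]
    if not candidates:
        return None                       # fall back to the object configuration model
    best = max(candidates, key=lambda record: record["sales_yuan"])
    return best["object_info"]

print(select_object_info("apple computer", "students"))   # -> Dudu's object information
```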
Referring to fig. 5, fig. 5 is a schematic flow chart of another object description information processing according to an embodiment of the present application.
In some embodiments of the application, the object description information comprises speech information;
the determining the object information of the virtual object based on the object description information includes:
step S301: detecting whether the voice information indicates to acquire the voice tone corresponding to the voice information;
step S302: if yes, inputting the voice information into a preset tone extraction model to obtain a voice tone corresponding to the voice information;
step S303: and determining the voice tone as the object tone of the virtual object.
The tone extraction model may be obtained based on convolutional neural network model training or based on recurrent neural network model training, and the implementation manner of the tone extraction model is not limited here. The training mode of the tone extraction model is similar to that of the semantic analysis model, and will not be described here again.
In this way, the configurator inputs voice information in the description sub-module, and the editor detects whether the voice information indicates that the voice tone corresponding to the voice information needs to be acquired. If so, the voice information is input into the preset tone extraction model to obtain the corresponding voice tone, and the extracted voice tone is determined as the object tone of the virtual object. The voice information thus serves both to express the configurator's requirements for the virtual object and, when the configurator wishes to use its tone as the object tone of the virtual object, as the source from which the tone is extracted directly. This avoids the extra step of recording voice again just to obtain the tone and improves the configuration efficiency of the object tone.
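As an illustrative, non-limiting sketch of this branch, the following Python snippet detects whether the voice information asks for its own tone to be reused and, if so, calls a placeholder standing in for the preset tone extraction model. The trigger phrases, function names and the returned tone embedding are assumptions for illustration only.

```python
from typing import Dict, Optional

TONE_REQUEST_PHRASES = ("use my voice", "use this tone", "clone my voice")  # assumption

def detect_tone_request(transcript: str) -> bool:
    """Detect whether the voice information indicates that its tone should be acquired."""
    transcript = transcript.lower()
    return any(phrase in transcript for phrase in TONE_REQUEST_PHRASES)

def extract_tone(voice_waveform: bytes) -> Dict[str, float]:
    # Placeholder for the preset tone extraction model; a real system would run
    # a trained network here and return a tone (timbre) embedding.
    return {"embedding_dim_0": 0.12, "embedding_dim_1": -0.53}

def configure_object_tone(object_info: Dict, transcript: str,
                          voice_waveform: bytes) -> Optional[Dict[str, float]]:
    """If requested, extract the voice tone and set it as the object tone."""
    if not detect_tone_request(transcript):
        return None
    tone = extract_tone(voice_waveform)
    object_info["object_tone"] = tone
    return tone

object_info: Dict = {}
configure_object_tone(object_info, "Please use my voice for the anchor.", b"\x00\x01")
print(object_info)
```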
In some embodiments of the present application, the determining object information of the virtual object based on the object description information includes:
detecting whether the object description information indicates acquisition of a user image;
if yes, acquiring a user image, and inputting the user image into a preset user image modeling model to obtain a user model corresponding to the user image;
and taking the object image of the user model as the object image of the virtual object.
The user image modeling model can be obtained based on convolutional neural network model training or based on recurrent neural network model training, and the implementation manner of the user image modeling model is not limited here. The training manner of the user image modeling model is similar to that of the semantic analysis model, and is not repeated here.
Thus, by detecting whether the object description information contains an instruction to acquire the user image, a function of acquiring the user image and converting it into a user model is realized.
As one example, it is first detected whether the object description information contains an instruction to acquire a user image. If so, the user image is acquired, for example from a camera or from a designated image file. The acquired user image is input into the preset user image modeling model, which analyses, processes and recognizes the image to obtain the corresponding user model. The user model may contain information such as the user's facial features and body posture. The object image of the user model is then used as the object image of the virtual object, enabling interaction between the virtual object and the user. This improves the interactivity and fidelity of the virtual object and thus the user experience; it also facilitates the user's self-expression and communication, increasing the user's sense of participation and interactivity.
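As an illustrative, non-limiting sketch of this flow, the following Python snippet detects the instruction in the object description information, acquires a user image and passes it to a placeholder standing in for the preset user image modeling model. The trigger phrase, the dummy image bytes and the returned user model fields are assumptions for illustration only.

```python
from typing import Dict

def wants_user_image(object_description: str) -> bool:
    """Detect whether the object description information asks to acquire a user image."""
    return "use my photo" in object_description.lower()      # assumed trigger phrase

def acquire_user_image() -> bytes:
    # Placeholder acquisition: in practice the image could come from a camera
    # capture or from an image file designated by the configurator.
    return b"\x89PNG..."                                      # dummy image data

def build_user_model(image_bytes: bytes) -> Dict[str, str]:
    # Placeholder for the preset user image modeling model: a real system would
    # analyse the image and recognise facial features and body posture.
    return {"face": "<facial features>", "posture": "<body posture>"}

def configure_object_image(object_info: Dict, object_description: str) -> None:
    """Use the user model built from the user image as the object image, if requested."""
    if wants_user_image(object_description):
        user_model = build_user_model(acquire_user_image())
        object_info["object_image"] = user_model

object_info: Dict = {}
configure_object_image(object_info, "Please use my photo as the anchor's look.")
print(object_info)
```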
In some embodiments of the application, the preview module includes an object display area and an object update button;
the previewing the object image of the virtual object determined based on the object information by using the previewing module comprises:
and after receiving the operation of the object updating button, previewing the object image in the object display area.
Therefore, after the configurator performs the operation, for example clicking the object update button, the preview module displays the object image of the new virtual object in the object display area. This helps the configurator preview the object image of the virtual object more intuitively and improves the operating experience. Because the virtual object image can be updated at any time in this way, the configurator can also adjust the object information of the virtual object more conveniently, which improves the production efficiency of the virtual object.
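As an illustrative, non-limiting sketch contrasting this button-driven preview with the change-driven preview sketched earlier, the following Python snippet refreshes the object display area only when the object update button is operated. The class and method names are hypothetical.

```python
from typing import Any, Dict

class ButtonDrivenPreview:
    """Refreshes the object display area only when the object update button is pressed."""

    def __init__(self) -> None:
        self.object_info: Dict[str, Any] = {}
        self.displayed_image: str = ""

    def edit(self, key: str, value: Any) -> None:
        # Editing alone does not refresh the preview.
        self.object_info[key] = value

    def on_update_button(self) -> None:
        # Receiving the operation on the object update button triggers a re-render.
        self.displayed_image = f"<object image for {self.object_info}>"
        print("object display area:", self.displayed_image)

preview = ButtonDrivenPreview()
preview.edit("clothing", "blue coat")
preview.edit("hairstyle", "high ponytail")
preview.on_update_button()   # only now is the object image previewed
```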
Referring to fig. 6, fig. 6 is a schematic flow chart of a name configuration module and a resolution configuration module according to an embodiment of the present application.
In some embodiments of the application, the virtual object editor further comprises a name configuration module and a resolution configuration module; the virtual object editing method further comprises the following steps:
Step S401: configuring an object name of the virtual object by utilizing the name configuration module so as to associate the virtual object with the object name;
step S402: and configuring the image resolution of the object image by utilizing the resolution configuration module so as to adapt the object image to the interfaces to be displayed with different resolutions.
As one example, the name configuration module includes a text input box for configuring the object name of the virtual object, in which a configurator can input a desired object name to configure the object name for the virtual object. The resolution configuration module includes a horizontal pixel text box and a vertical pixel text box in which a configurator can input desired values to determine an image resolution of the subject image. The resolution configuration module further comprises at least one preset resolution control, each resolution control corresponds to one image resolution, and a configuration personnel can select one of the preset resolution controls, so that the image resolution corresponding to the selected resolution control is used as the image resolution of the object image.
Thus, on the one hand, the virtual object editor includes a name configuration module, through which an object name is set for the virtual object so as to associate the virtual object with the object name, which facilitates identifying and looking up the virtual object. On the other hand, the virtual object editor includes a resolution configuration module, through which the image resolution of the object image is configured so that the object image of the virtual object is adapted to interfaces to be displayed with different resolutions. For example, if the image resolution of the virtual object is set too low on a device with a high display resolution, problems such as an incompletely displayed virtual object, blurred images and unclear details occur, which affects the configurator's viewing experience. To adapt to various interfaces to be displayed, the image resolution of the object image is configured through the resolution configuration module, so that the virtual object image is shown to best effect on each interface to be displayed, improving the configurator's experience.
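As an illustrative, non-limiting sketch of the resolution configuration, the following Python snippet offers preset resolution controls, accepts values from the horizontal and vertical pixel text boxes, and chooses the largest preset that fits a given interface to be displayed. The preset table and helper names are assumptions for illustration only.

```python
from typing import Dict, Optional, Tuple

# Hypothetical preset resolution controls, each bound to one image resolution.
RESOLUTION_PRESETS: Dict[str, Tuple[int, int]] = {
    "720p": (1280, 720),
    "1080p": (1920, 1080),
    "4k": (3840, 2160),
}

def configure_resolution(object_info: Dict, preset: Optional[str] = None,
                         width: Optional[int] = None,
                         height: Optional[int] = None) -> None:
    """Set the image resolution from a preset control or from the horizontal
    and vertical pixel text boxes."""
    if preset is not None:
        object_info["image_resolution"] = RESOLUTION_PRESETS[preset]
    elif width and height:
        object_info["image_resolution"] = (width, height)

def pick_preset_for_display(display_width: int, display_height: int) -> str:
    """Choose the largest preset that still fits the interface to be displayed."""
    fitting = {name: size for name, size in RESOLUTION_PRESETS.items()
               if size[0] <= display_width and size[1] <= display_height}
    return max(fitting, key=lambda name: fitting[name][0]) if fitting else "720p"

object_info: Dict = {"object_name": "Dudu"}
configure_resolution(object_info, preset=pick_preset_for_display(2560, 1440))
print(object_info)   # image resolution (1920, 1080) for a 2560x1440 interface
```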
(electronic device)
An embodiment of the application also provides an electronic device. Its specific implementation is consistent with the implementations described in the method embodiments, and it achieves the same technical effects; some of the content is therefore not repeated here.
The electronic device is configured to configure object information of a virtual object using a virtual object editor as described in any of the above embodiments, the virtual object editor comprising a virtual object configuration module and a preview module. The electronic device comprises a memory and at least one processor, the memory storing a computer program, and the at least one processor is configured to implement the following steps when executing the computer program:
configuring object information of the virtual object by utilizing the virtual object configuration module;
and previewing the object image of the virtual object determined based on the object information by utilizing the previewing module.
In some embodiments, the virtual object configuration module includes at least one of the following sub-modules: the system comprises an image sub-module, a language sub-module, a sound sub-module, a gesture sub-module, a hairstyle sub-module, a dressing sub-module, a clothing sub-module, a scene sub-module, a location sub-module, a picture allocation sub-module, a prop sub-module and a lens sub-module;
The image sub-module comprises one or more image controls, and each image control corresponds to an object image;
the language sub-module comprises one or more language controls, and each language control corresponds to an object language;
the sound sub-module comprises one or more tone controls, and each tone control corresponds to an object tone;
the gesture submodule comprises one or more gesture controls, and each gesture control corresponds to an object gesture;
the hairstyle submodule comprises one or more hairstyle controls, and each hairstyle control corresponds to an object hairstyle;
the makeup submodule comprises one or more makeup controls, and each makeup control corresponds to one object makeup;
the clothing submodule comprises one or more clothing controls, and each clothing control corresponds to one object clothing;
the scene submodule comprises one or more scene controls, and each scene control corresponds to an object scene;
the place sub-module comprises one or more place controls, and each place control corresponds to an object place;
the map matching sub-module comprises one or more map matching controls, and each map matching control corresponds to one object map matching;
The prop submodule comprises one or more prop controls, and each prop control corresponds to an object prop;
the lens submodule comprises one or more lens controls, and each lens control corresponds to one object lens;
the at least one processor is configured to configure object information of the virtual object with the virtual object configuration module when executing the computer program in the following manner:
responding to the selection operation for one of the avatar controls, and taking the object avatar corresponding to the selected avatar control as the object avatar of the virtual object; and/or,
responding to the selection operation for one of the language controls, and taking the object language corresponding to the selected language control as the object language of the virtual object; and/or,
responding to the selection operation for one of the tone controls, and taking the object tone corresponding to the selected tone control as the object tone of the virtual object; and/or,
responding to the selection operation for one of the gesture controls, and taking the object gesture corresponding to the selected gesture control as the object gesture of the virtual object; and/or,
responding to the selection operation for one of the hairstyle controls, and taking the object hairstyle corresponding to the selected hairstyle control as the object hairstyle of the virtual object; and/or,
responding to the selection operation for one of the makeup controls, and taking the object makeup corresponding to the selected makeup control as the object makeup of the virtual object; and/or,
responding to the selection operation for one of the clothing controls, and taking the object clothing corresponding to the selected clothing control as the object clothing of the virtual object; and/or,
responding to the selection operation for one of the scene controls, and taking the object scene corresponding to the selected scene control as the object scene of the virtual object; and/or,
responding to the selection operation for one of the place controls, and taking the object place corresponding to the selected place control as the object place of the virtual object; and/or,
responding to the selection operation for one of the map matching controls, and taking the object map corresponding to the selected map matching control as the object map of the virtual object; and/or,
responding to the selection operation for one of the prop controls, and taking the object prop corresponding to the selected prop control as the object prop of the virtual object; and/or,
and responding to the selection operation for one of the lens controls, and taking the object lens corresponding to the selected lens control as the object lens of the virtual object.
In some embodiments, the virtual object configuration module further comprises a description sub-module;
the at least one processor is configured to configure object information of the virtual object with the virtual object configuration module when executing the computer program in the following manner:
acquiring object description information by using the description submodule; the object description information includes text information and/or voice information for describing the virtual object;
and carrying out semantic analysis based on the object description information, and determining the object information of the virtual object.
In some embodiments, the object description information comprises voice information;
the at least one processor is configured to execute the computer program to further implement the steps of:
detecting whether the voice information indicates to acquire the voice tone corresponding to the voice information;
if yes, inputting the voice information into a preset tone extraction model to obtain a voice tone corresponding to the voice information;
and determining the voice tone as the object tone of the virtual object.
In some embodiments, the preview module includes an object display area and an object update button;
the at least one processor is configured to preview an object image of the virtual object determined based on the object information using the preview module when executing the computer program in the following manner:
and after receiving the operation of the object updating button, previewing the object image in the object display area.
In some embodiments, the virtual object editor further comprises: a name configuration module and a resolution configuration module;
the at least one processor is configured to execute the computer program to further implement the steps of:
configuring an object name of the virtual object by utilizing the name configuration module so as to associate the virtual object with the object name;
and configuring the image resolution of the object image by utilizing the resolution configuration module so as to adapt the object image to the interfaces to be displayed with different resolutions.
Referring to fig. 7, fig. 7 is a block diagram of an electronic device 10 according to an embodiment of the present application.
The electronic device 10 may for example comprise at least one memory 11, at least one processor 12 and a bus 13 connecting the different platform systems.
Memory 11 may include readable media in the form of volatile memory, such as Random Access Memory (RAM) 111 and/or cache memory 112, and may further include Read Only Memory (ROM) 113.
The memory 11 also stores a computer program executable by the processor 12 to cause the processor 12 to implement the steps of any of the methods described above.
Memory 11 may also include a utility 114 having at least one program module 115. Such program modules 115 include, but are not limited to: an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination of them, may include an implementation of a network environment.
Accordingly, the processor 12 may execute the computer programs described above, as well as may execute the utility 114.
The processor 12 may employ one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), programmable logic devices (PLDs), complex programmable logic devices (CPLDs), field-programmable gate arrays (FPGAs), or other electronic components.
Bus 13 may be one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, or a processor or local bus using any of a variety of bus architectures.
The electronic device 10 may also communicate with one or more external devices such as a keyboard, pointing device, bluetooth device, etc., as well as one or more devices capable of interacting with the electronic device 10 and/or with any device (e.g., router, modem, etc.) that enables the electronic device 10 to communicate with one or more other computing devices. Such communication may be via the input-output interface 14. Also, the electronic device 10 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet, through a network adapter 15. The network adapter 15 may communicate with other modules of the electronic device 10 via the bus 13. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with the electronic device 10 in actual applications, including, but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, data backup storage platforms, and the like.
(computer-readable storage Medium)
An embodiment of the application also provides a computer readable storage medium. Its specific implementation is consistent with the implementations described in the method embodiments, and it achieves the same technical effects; some of the content is therefore not repeated here.
The computer readable storage medium stores a computer program which, when executed by at least one processor, performs the steps of any of the methods or performs the functions of any of the electronic devices described above.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a program product according to an embodiment of the present application.
The program product is for implementing the steps of any of the methods described above or for implementing the functions of any of the electronic devices described above. The program product may take the form of a portable compact disc read-only memory (CD-ROM) and comprises program code and may be run on a terminal device, such as a personal computer. However, the program product of the present application is not limited thereto, and in the embodiments of the present application, the readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable storage medium may include a data signal propagated in baseband or as part of a carrier wave, with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable storage medium may also be any readable medium that can transmit, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the C programming language or similar programming languages. The program code may execute entirely on the configurator computing device, partly on the configurator device, as a stand-alone software package, partly on the configurator computing device and partly on a remote computing device or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the configurator computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected through the internet using an internet service provider).
The purpose, performance, advancement and novelty of the present application have been described above in accordance with the emphasis placed on them by the patent statutes. The above description and drawings are, however, only preferred embodiments of the present application and do not limit it; any equivalent change or modification made to the structures, devices and features of the present application falls within the scope of protection of the present application.
Claims (13)
1. A virtual object editor for configuring object information of a virtual object, the object information including character information and sound information of the virtual object, the virtual object editor comprising:
the virtual object configuration module is used for configuring the object information;
and the preview module is used for previewing the object image of the virtual object determined based on the object information.
2. The virtual object editor of claim 1, wherein the virtual object configuration module comprises at least one of the following sub-modules: the device comprises an image sub-module, a language sub-module, a sound sub-module, a gesture sub-module, a hairstyle sub-module, a dressing sub-module, a clothing sub-module, a scene sub-module, a location sub-module, a picture allocation sub-module, a prop sub-module and a lens sub-module.
3. The virtual object editor of claim 2, wherein the preview module comprises an object display area and an object update button;
the preview module is configured to implement the steps of:
and after receiving the operation of the object updating button, previewing the object image in the object display area.
4. The virtual object editor of claim 1, further comprising:
the name configuration module is used for configuring the object name of the virtual object;
and the resolution configuration module is used for configuring the image resolution of the object image.
5. The virtual object editor of claim 1, wherein the virtual object comprises one or more of a virtual host, a virtual anchor, a virtual idol, a virtual customer service, a virtual lawyer, a virtual financial advisor, a virtual teacher, a virtual doctor, a virtual lecturer, a virtual assistant.
6. A virtual object editing method for configuring object information of a virtual object using the virtual object editor of any one of claims 1 to 3; the virtual object editor comprises: a virtual object configuration module and a preview module;
The virtual object editing method comprises the following steps:
configuring object information of the virtual object by utilizing the virtual object configuration module;
and previewing the object image of the virtual object determined based on the object information by utilizing the previewing module.
7. The virtual object editing method of claim 6, wherein the virtual object configuration module comprises at least one of the following sub-modules: the system comprises an image sub-module, a language sub-module, a sound sub-module, a gesture sub-module, a hairstyle sub-module, a dressing sub-module, a clothing sub-module, a scene sub-module, a location sub-module, a picture allocation sub-module, a prop sub-module and a lens sub-module;
the image sub-module comprises one or more image controls, and each image control corresponds to an object image;
the language sub-module comprises one or more language controls, and each language control corresponds to an object language;
the sound sub-module comprises one or more tone controls, and each tone control corresponds to an object tone;
the gesture submodule comprises one or more gesture controls, and each gesture control corresponds to an object gesture;
the hairstyle submodule comprises one or more hairstyle controls, and each hairstyle control corresponds to an object hairstyle;
The makeup submodule comprises one or more makeup controls, and each makeup control corresponds to one object makeup;
the clothing submodule comprises one or more clothing controls, and each clothing control corresponds to one object clothing;
the scene submodule comprises one or more scene controls, and each scene control corresponds to an object scene;
the place sub-module comprises one or more place controls, and each place control corresponds to an object place;
the map matching sub-module comprises one or more map matching controls, and each map matching control corresponds to one object map matching;
the prop submodule comprises one or more prop controls, and each prop control corresponds to an object prop;
the lens submodule comprises one or more lens controls, and each lens control corresponds to one object lens;
the configuring the object information of the virtual object by using the virtual object configuration module includes:
responding to the selection operation for one of the avatar controls, and taking the object avatar corresponding to the selected avatar control as the object avatar of the virtual object; and/or,
responding to the selection operation for one of the language controls, and taking the object language corresponding to the selected language control as the object language of the virtual object; and/or,
responding to the selection operation for one of the tone controls, and taking the object tone corresponding to the selected tone control as the object tone of the virtual object; and/or,
responding to the selection operation for one of the gesture controls, and taking the object gesture corresponding to the selected gesture control as the object gesture of the virtual object; and/or,
responding to the selection operation for one of the hairstyle controls, and taking the object hairstyle corresponding to the selected hairstyle control as the object hairstyle of the virtual object; and/or,
responding to the selection operation for one of the makeup controls, and taking the object makeup corresponding to the selected makeup control as the object makeup of the virtual object; and/or,
responding to the selection operation for one of the clothing controls, and taking the object clothing corresponding to the selected clothing control as the object clothing of the virtual object; and/or,
responding to the selection operation for one of the scene controls, and taking the object scene corresponding to the selected scene control as the object scene of the virtual object; and/or,
responding to the selection operation for one of the place controls, and taking the object place corresponding to the selected place control as the object place of the virtual object; and/or,
responding to the selection operation for one of the map matching controls, and taking the object map corresponding to the selected map matching control as the object map of the virtual object; and/or,
responding to the selection operation for one of the prop controls, and taking the object prop corresponding to the selected prop control as the object prop of the virtual object; and/or,
and responding to the selection operation for one of the lens controls, and taking the object lens corresponding to the selected lens control as the object lens of the virtual object.
8. The virtual object editing method of claim 7, wherein the virtual object configuration module further comprises a description sub-module;
the configuring the object information of the virtual object by using the virtual object configuration module includes:
acquiring object description information by using the description submodule; the object description information includes text information and/or voice information for describing the virtual object;
and carrying out semantic analysis based on the object description information, and determining the object information of the virtual object.
9. The virtual object editing method according to claim 8, wherein the object description information includes voice information;
The determining the object information of the virtual object based on the object description information includes:
detecting whether the voice information indicates to acquire the voice tone corresponding to the voice information;
if yes, inputting the voice information into a preset tone extraction model to obtain a voice tone corresponding to the voice information;
and determining the voice tone as the object tone of the virtual object.
10. The virtual object editing method of claim 7, wherein the preview module comprises an object display area and an object update button;
the previewing the object image of the virtual object determined based on the object information by using the previewing module comprises:
and after receiving the operation of the object updating button, previewing the object image in the object display area.
11. The virtual object editing method of claim 6, wherein the virtual object editor further comprises: a name configuration module and a resolution configuration module; the virtual object editing method further comprises the following steps:
configuring an object name of the virtual object by utilizing the name configuration module so as to associate the virtual object with the object name;
And configuring the image resolution of the object image by utilizing the resolution configuration module so as to adapt the object image to the interfaces to be displayed with different resolutions.
12. An electronic device for configuring object information of a virtual object with a virtual object editor according to any of the claims 1-5, the virtual object editor comprising a virtual object configuration module and a preview module, the electronic device comprising a memory and at least one processor, the memory storing a computer program, the at least one processor being configured to implement the following steps when executing the computer program:
configuring object information of the virtual object by utilizing the virtual object configuration module;
and previewing the object image of the virtual object determined based on the object information by utilizing the previewing module.
13. A computer readable storage medium, characterized in that it stores a computer program which, when executed by at least one processor, implements the steps of the virtual object editing method of any of claims 6-11 or the functions of the electronic device of claim 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310612106.6A CN116778034A (en) | 2023-05-27 | 2023-05-27 | Virtual object editor, virtual object editing method and related device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310612106.6A CN116778034A (en) | 2023-05-27 | 2023-05-27 | Virtual object editor, virtual object editing method and related device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116778034A true CN116778034A (en) | 2023-09-19 |
Family
ID=88007290
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310612106.6A Pending CN116778034A (en) | 2023-05-27 | 2023-05-27 | Virtual object editor, virtual object editing method and related device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116778034A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||