CN114612643A - Image adjusting method and device for virtual object, electronic equipment and storage medium

Image adjusting method and device for virtual object, electronic equipment and storage medium

Info

Publication number
CN114612643A
CN114612643A
Authority
CN
China
Prior art keywords
data
virtual object
adjusting
information
decorative article
Prior art date
Legal status
Granted
Application number
CN202210224365.7A
Other languages
Chinese (zh)
Other versions
CN114612643B (en)
Inventor
谭钦
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202210224365.7A
Publication of CN114612643A
Application granted
Publication of CN114612643B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/10 - Geometric effects
    • G06T15/20 - Perspective computation
    • G06T15/205 - Image-based rendering
    • G06T19/006 - Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Architecture (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides an image adjustment method, apparatus, electronic device and storage medium for a virtual object. The method comprises: displaying image information of the virtual object in an electronic device, wherein the image information comprises a main body structure of the virtual object and a decorative article attached to the main body structure, the main body structure is generated by rendering skeletal data of the virtual object in a 3D rendering environment, and the decorative article is generated by rendering decorative article data in the 3D rendering environment; in a case that a trigger event for adjusting the image information is detected, adjusting the skeletal data and/or the decorative article data to obtain adjusted target skeletal data and/or target decorative article data; and rendering the target skeletal data and/or the target decorative article data to generate the adjusted image information of the virtual object. According to the embodiments of the present disclosure, the skeletal data and/or the decorative article data of the virtual object are adjusted to meet the viewing requirements of different users, thereby improving the users' viewing experience.

Description

Image adjusting method and device for virtual object, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image adjustment method for a virtual object, an image adjustment apparatus for a virtual object, an electronic device, and a computer readable storage medium.
Background
In recent years, virtual objects have accounted for an increasing share of live video services: a virtual object replaces the streamer's real appearance during a live broadcast. However, during such a broadcast the display of the virtual object is controlled by the broadcasting side, so every user (viewer) sees the same display effect and a personalized presentation of the virtual object cannot be achieved.
Disclosure of Invention
The embodiments of the present disclosure provide at least an image adjustment method for a virtual object, an image adjustment apparatus for a virtual object, an electronic device, and a computer-readable storage medium, which can adjust the image information of a virtual object to meet the viewing requirements of different users, thereby improving the users' viewing experience.
The embodiment of the disclosure provides an image adjustment method for a virtual object, which includes:
displaying image information of a virtual object in an electronic device, wherein the image information comprises a main body structure of the virtual object and a decorative article attached to the main body structure, the main body structure is generated by rendering skeletal data of the virtual object in a 3D rendering environment, and the decorative article is generated by rendering decorative article data in the 3D rendering environment;
under the condition that a trigger event for adjusting the image information is detected, adjusting the skeleton data and/or the decorative article data to obtain adjusted target skeleton data and/or target decorative article data;
rendering the target skeleton data and/or the target decoration data to generate the adjusted image information of the virtual object.
In the embodiments of the present disclosure, when a trigger event for adjusting the image information is detected, the bone data and/or the decorative article data are adjusted, so that the adjusted image information is richer, the requirements of different users on the image of the virtual object can be met, and the users' viewing experience is improved. In addition, because the skeleton of the virtual object is kept separate from the decorative article, the coupling between the two is reduced: the bone data and the decorative article data can be adjusted independently, which makes the image information of the virtual object more flexible; and the two kinds of data do not affect each other during rendering, which helps save resources.
In an optional embodiment, the displaying, in the electronic device, the image information of the virtual object includes:
acquiring a real scene image acquired by the electronic equipment;
and displaying the image information of the virtual object based on the real scene image.
In the embodiment of the disclosure, the image information of the virtual object is displayed based on the image of the real scene, that is, the image information of the virtual object is combined with the real scene, so that the effect of combining virtual and real can be presented, and the watching experience of the user is further improved.
In an alternative embodiment, in a case that the image information of the virtual object does not match the real scene image, it is determined that the trigger event for adjusting the image information is detected.
In the embodiment of the disclosure, under the condition that the image information of the virtual object is not matched with the real scene, the image information can be automatically adjusted, so that the image information is matched with the real scene, and the display effect is enriched.
In an optional embodiment, the method further comprises:
determining whether image information of the virtual object is matched with the image of the real scene based on environment information of the real scene, wherein the environment information comprises at least one of weather information, temperature information and position information of the real scene.
In the embodiment of the disclosure, whether the image information of the virtual object is matched with the image of the real scene is determined based on the environmental information of the real scene, so that the matching accuracy can be improved.
In an alternative embodiment, the body structure comprises a plurality of sites; said adjusting said skeletal data and/or said decorative item data comprises:
determining at least one target site to be adjusted from the plurality of sites;
adjusting the bone data of the at least one target site.
In the embodiment of the present disclosure, since the main body structure includes the plurality of portions, the bone data of the target portion can be adjusted for at least one target portion of the plurality of portions, that is, the local adjustment of the bone data can be performed without adjusting the bone data of the entire main body structure, which is not only beneficial to improving the adjustment efficiency, but also can save resources.
In an optional embodiment, after the adjusting the bone data of the at least one target site, the method further comprises:
for each target part, judging whether the adjusted bone data of the target part meets the preset requirement;
determining other parts related to the target part under the condition that the adjusted bone data of the target part do not meet the preset requirement;
and correcting the adjusted bone data of the target part according to the association relation between the target part and the other parts, so that the corrected bone data of the target part meets the preset requirement.
In the embodiment of the disclosure, if the adjusted bone data of the target part does not meet the preset requirement, the adjusted bone data of the target part is corrected according to the association relation between the target part and other parts; that is, the target part and the other parts can be adapted to each other, so that the proportions between the parts are more harmonious and attractive, further improving the user's viewing experience.
In an alternative embodiment, said adjusting said bone data and/or said ornamental item data comprises:
and adjusting the decorative article data according to the environmental information of the real scene.
In the embodiment of the present disclosure, the decorative article data can be adjusted automatically according to the environment information of the real scene, that is, the decorative article data is adapted to the environment information, so that the decorative article better fits the real scene, further improving the user's viewing experience.
In an alternative embodiment, the decorative article comprises a garment; the adjusting the decoration data according to the environment information of the real scene includes:
determining the type of the clothes and the material of the clothes according to the environment information;
and adjusting the clothing data of the decorative article data based on the type of the clothing and the material of the clothing.
In the embodiment of the disclosure, the type of the garment and the material of the garment are determined according to the environment information, and the garment data is adjusted based on the type of the garment and the material of the garment, that is, the garment data is adapted and adjusted according to the environment information, so that the image information of the virtual object is more flexible and more fit to the real scene, and the visual experience of the user is further improved.
In an alternative embodiment, the trigger event for adjusting the image information is triggered by a user.
In the embodiment of the disclosure, the trigger event for adjusting the image information is generated by a user's trigger operation; that is, different users can adjust the image information according to their own requirements, so that the virtual object can be personalized for each user.
The embodiment of the present disclosure further provides an image adjusting apparatus for a virtual object, the apparatus including:
the display module is used for displaying image information of a virtual object in the electronic equipment, wherein the image information comprises a main body structure of the virtual object and a decorative article attached to the main body structure, the main body structure is generated by rendering skeleton data of the virtual object in a 3D rendering environment, and the decorative article is generated by rendering decorative article data in the 3D rendering environment;
the adjusting module is used for adjusting the bone data and/or the decorative article data under the condition that a triggering event for adjusting the image information is detected, so that adjusted target bone data and/or target decorative article data are obtained;
and the rendering module is used for rendering the target skeleton data and/or the target decoration data to generate the image information of the adjusted virtual object.
In an optional implementation manner, the display module is specifically configured to:
acquiring a real scene image acquired by the electronic equipment;
and displaying the image information of the virtual object based on the real scene image.
In an alternative embodiment, in a case that the image information of the virtual object does not match the real scene image, it is determined that the trigger event for adjusting the image information is detected.
In an optional implementation, the display module is further specifically configured to:
determining whether image information of the virtual object is matched with the image of the real scene based on environment information of the real scene, wherein the environment information comprises at least one of weather information, temperature information and position information of the real scene.
In an alternative embodiment, the body structure comprises a plurality of sites; the adjustment module is specifically configured to:
determining at least one target site to be adjusted from the plurality of sites;
adjusting the bone data of the at least one target site.
In an optional implementation manner, the adjusting module is further specifically configured to:
for each target part, judging whether the adjusted bone data of the target part meets the preset requirement;
determining other parts related to the target part under the condition that the adjusted bone data of the target part do not meet the preset requirement;
and correcting the adjusted bone data of the target part according to the association relation between the target part and the other parts, so that the corrected bone data of the target part meets the preset requirement.
In an optional implementation manner, the adjusting module is specifically configured to:
and adjusting the data of the decorative articles according to the environment information of the real scene.
In an optional implementation manner, the adjusting module is specifically configured to:
determining the type of the clothing and the material of the clothing according to the environment information;
and adjusting the clothing data of the decorative article data based on the type of the clothing and the material of the clothing.
In an alternative embodiment, the trigger event for adjusting the image information is triggered by a user.
An embodiment of the present disclosure further provides an electronic device, including: the device comprises a processor, a memory and a bus, wherein the memory stores machine readable instructions executable by the processor, when the electronic device runs, the processor and the memory are communicated through the bus, and the machine readable instructions are executed by the processor to execute the image adjusting method of the virtual object.
The embodiment of the present disclosure also provides a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the computer program performs the image adjustment method for the virtual object.
For the description of the effect of the image adjusting device, the electronic device, and the computer-readable storage medium of the virtual object, reference is made to the description of the image adjusting method of the virtual object, which is not repeated herein.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the accompanying drawings, which are incorporated in and constitute a part of this specification, are briefly introduced below. These drawings illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art may derive other related drawings from them without inventive effort.
Fig. 1 is a schematic diagram illustrating an execution body of a method for adjusting an image of a virtual object according to an embodiment of the present disclosure;
fig. 2 is a flowchart of an image adjustment method for a virtual object according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of image information of a virtual object provided in an embodiment of the present disclosure;
fig. 4 is a flowchart illustrating a method for displaying image information of a virtual object according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram illustrating a trigger event according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of pre-adjustment image information and post-adjustment image information provided in an embodiment of the disclosure;
FIG. 7 is a flow chart of a method for adjusting skeletal data and/or upholstery data provided by an embodiment of the present disclosure;
FIG. 8 is a flow chart of another method for adjusting bone data and/or ornamental object data provided by embodiments of the present disclosure;
FIG. 9 is a flowchart of a method for adjusting decoration data according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of an image adjusting apparatus for a virtual object according to an embodiment of the present disclosure;
fig. 11 is a schematic structural diagram of another image adjustment apparatus for a virtual object according to an embodiment of the present disclosure;
fig. 12 is a schematic view of an electronic device provided in an embodiment of the disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described below in detail and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an associative relationship, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
In recent years, virtual objects have accounted for an increasing share of live video services: a virtual object replaces the streamer's real appearance during a live broadcast. However, during such a broadcast the display of the virtual object is controlled by the broadcasting side, so every user (viewer) sees the same display effect and a personalized presentation of the virtual object cannot be achieved.
Based on the above research, the embodiment of the present disclosure provides an image adjustment method for a virtual object, including: displaying image information of a virtual object in an electronic device, wherein the image information comprises a main body structure of the virtual object and a decorative article attached to the main body structure, the main body structure is generated after skeletal data of the virtual object in a 3D rendering environment is rendered, and the decorative article is generated after the decorative article data in the 3D rendering environment is rendered; under the condition that a trigger event for adjusting the image information is detected, adjusting the skeleton data and/or the decorative article data to obtain adjusted target skeleton data and/or target decorative article data; rendering the target skeleton data and/or the target decoration data to generate the adjusted image information of the virtual object.
In the embodiment of the disclosure, under the condition that the trigger event for adjusting the image information is detected, the bone data and/or the decorative article data are/is adjusted, so that the adjusted image information is richer, the image requirements of different users on virtual objects can be met, and the viewing experience of the users is further improved.
In addition, when the bone data and/or the decorative article data of the virtual object are/is adjusted, the bone of the virtual object can be separated from the decorative article, so that the coupling between the bone and the decorative article is reduced, namely, the bone data and the decorative article data can be respectively adjusted, so that the image information of the virtual object is more flexible, and the bone data and the decorative article data are not influenced by each other in the rendering process, and the resource saving is facilitated.
Referring to fig. 1, a schematic diagram of an execution main body of an image adjusting method for a virtual object according to an embodiment of the present disclosure is shown, the execution main body of the method is an electronic device 100, where the electronic device 100 may include a terminal. The terminal may be the smart phone 110, the notebook computer 120, the tablet computer 130, and the like shown in fig. 1, or may be a smart speaker, a smart watch, a desktop computer, and the like, which are not shown in fig. 1, without limitation. In other embodiments, the electronic device 100 may include a server, where the server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud service, a cloud database, cloud computing, cloud storage, big data, and an artificial intelligence platform.
In other embodiments, the electronic device 100 may also include an AR (Augmented Reality) device, a VR (Virtual Reality) device, an MR (Mixed Reality) device, and the like. For example, the AR device may be a mobile phone or a tablet computer with an AR function, or may be AR glasses, which is not limited herein.
In some embodiments, the avatar adjustment method for the virtual object is applied to an electronic device (such as the smartphone 110 in fig. 1), that is, the virtual object may be displayed in a local electronic device, or may be displayed in a live video. In other embodiments, the avatar adjustment method for the virtual object may be implemented by a processor calling computer-readable instructions stored in a memory.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for adjusting a shape of a virtual object according to an embodiment of the disclosure. As shown in fig. 2, an image adjusting method for a virtual object according to an embodiment of the present disclosure includes the following steps S101 to S103:
s101, displaying image information of a virtual object in the electronic equipment, wherein the image information comprises a main body structure of the virtual object and a decorative article attached to the main body structure, the main body structure is generated after rendering by skeleton data of the virtual object in a 3D rendering environment, and the decorative article is generated after rendering by decorative article data in the 3D rendering environment.
The main structure of the virtual object is generated by rendering the bone data of the virtual object in the 3D rendering environment, that is, the main structure refers to a bone model of the virtual object.
Referring to fig. 3, which is a schematic diagram of image information of a virtual object provided by an embodiment of the present disclosure, as shown in fig. 3, the image information 10 of the virtual object includes a skeleton 11 of the virtual object (e.g., a head, a body, and limbs of the virtual object) and a decorative article 12 attached to the skeleton 11, wherein the decorative article 12 may include clothing (e.g., clothes, shoes, trousers, or skirts) and accessories (e.g., bags, earrings, umbrellas, hats, or necklaces, etc.).
The 3D rendering environment may be a 3D engine running in the electronic device and capable of generating image information based on one or more viewing angles based on the data to be rendered. The image information of the virtual object comprises a main body structure of the virtual object and a decorative article attached to the main body structure, wherein the main body structure is generated after skeletal data of the virtual object is rendered in the 3D engine, and the decorative article is generated after the decorative article data is rendered in the 3D engine. The virtual object may include an avatar, and the like, but is not limited thereto.
The image information of the virtual object is generated by rendering the image data of the virtual object, where the image data comprises the skeletal data and the decorative article data of the virtual object. The image data, which includes gridded model information and texture information, may reside in a computer's CPU (Central Processing Unit), GPU (Graphics Processing Unit), and memory. By way of example, the image data of the virtual object includes, but is not limited to, gridded model data, voxel data, and texture map data, or a combination thereof. The mesh includes, but is not limited to, a triangular mesh, a quadrilateral mesh, other polygonal meshes, or a combination thereof.
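To make the separation between the skeletal data and the decorative article data concrete, the following is a minimal Python sketch of such image data; all class and field names (AvatarData, BoneData, DecorationData, and so on) are illustrative assumptions rather than structures defined by the disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class BoneData:
    """Skeletal data for one part of the body structure (e.g. 'left_arm')."""
    name: str
    length_cm: float
    joint_position: Tuple[float, float, float]  # position in model space


@dataclass
class DecorationData:
    """Data for one decorative article attached to the body structure."""
    kind: str          # e.g. 'clothes', 'umbrella', 'hat'
    material: str      # e.g. 'cotton', 'down'
    color: Tuple[int, int, int]
    attach_to: str     # name of the part the article is attached to


@dataclass
class AvatarData:
    """Image data of the virtual object: bones and decorations are stored
    separately so either can be adjusted without touching the other."""
    bones: Dict[str, BoneData] = field(default_factory=dict)
    decorations: List[DecorationData] = field(default_factory=list)


# Example: a small avatar with two parts and one garment.
avatar = AvatarData(
    bones={
        "head": BoneData("head", 20.0, (0.0, 170.0, 0.0)),
        "body": BoneData("body", 60.0, (0.0, 110.0, 0.0)),
    },
    decorations=[DecorationData("clothes", "cotton", (200, 30, 30), "body")],
)
```

Because the two kinds of data live in separate containers, either one can be edited and re-rendered without reprocessing the other, which is the decoupling the disclosure relies on.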
It is understood that the virtual object is driven by control information captured by the motion capture device, i.e., the motion of the virtual object is driven by an actor, and illustratively, control information of various portions of the virtual object may be acquired by the motion capture device. The motion capture device includes a garment worn on the body of the actor, a glove worn on the hand of the actor, and the like. The clothes are used for capturing limb movements of the actor, and the gloves are used for capturing hand movements of the actor. In particular, the motion capture device includes a plurality of feature points to be identified, which may correspond to key points of an actor's skeleton. For example, feature points may be provided at positions of the motion capture device corresponding to respective joints (e.g., knee joint, elbow joint, and finger joint) of the skeleton of the actor, the feature points may be made of a specific material (e.g., a nanomaterial), and the position information of the plurality of feature points may be acquired by the camera to obtain the control information.
Accordingly, to drive the virtual object, the virtual object includes controlled feature points matched with the plurality of feature points to be recognized. For example, the feature point to be recognized at the actor's elbow joint is matched with the controlled point at the elbow joint of the virtual object; that is, there is a one-to-one correspondence between the skeleton key points of the actor and those of the virtual object. After the control information of the feature point at the actor's elbow joint is obtained, the corresponding change of the virtual object's elbow joint can be driven, and the changes of the plurality of controlled points together form the motion of the virtual object.
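The one-to-one correspondence between the actor's feature points and the virtual object's controlled points can be pictured with the short sketch below; the mapping keys and the drive_avatar helper are hypothetical names used only for illustration.

```python
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]

# Hypothetical one-to-one mapping: feature point on the motion capture suit
# -> controlled point on the virtual object's skeleton.
CAPTURE_TO_CONTROLLED = {
    "actor_elbow_left": "avatar_elbow_left",
    "actor_knee_right": "avatar_knee_right",
    "actor_index_finger_tip": "avatar_index_finger_tip",
}


def drive_avatar(captured_positions: Dict[str, Vec3],
                 avatar_pose: Dict[str, Vec3]) -> Dict[str, Vec3]:
    """Apply captured control information to the matching controlled points.

    Each recognized feature point updates exactly one controlled point, so a
    change at the actor's elbow drives the corresponding change of the
    virtual object's elbow.
    """
    new_pose = dict(avatar_pose)
    for feature_point, position in captured_positions.items():
        controlled_point = CAPTURE_TO_CONTROLLED.get(feature_point)
        if controlled_point is not None:
            new_pose[controlled_point] = position
    return new_pose


# One captured frame drives the avatar's left elbow.
pose = drive_avatar({"actor_elbow_left": (0.3, 1.2, 0.1)},
                    {"avatar_elbow_left": (0.0, 1.0, 0.0)})
```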
In order to improve the viewing experience of the user, the embodiment of the present disclosure combines the image information of the virtual object with the real scene for displaying, that is, for step S101, when displaying the image information of the virtual object in the electronic device, referring to fig. 4, the following steps S1011 to S1012 may be included:
and S1011, acquiring the real scene image acquired by the electronic equipment.
S1012, displaying image information of the virtual object based on the real scene image.
It can be understood that the image of the real scene can be collected by the camera device arranged on the electronic equipment. Then, the image information of the virtual object can be displayed on the basis of the real scene image based on an Augmented Reality (AR) technology according to the content in the real scene image.
For example, if a table exists in the real scene image, the image information of the virtual object may be displayed on the table, that is, the virtual object may stand on the table for performing a performance. Therefore, the image information of the virtual object can be combined with the real scene to present the effect of virtual-real combination, and further improve the watching experience of the user.
Augmented Reality (AR) is a technology that fuses virtual information with a real scene. It draws on techniques such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing, and applies computer-generated virtual information such as text, images, three-dimensional models, music, and video to the real scene after simulation, so that the two kinds of information complement each other and the real scene is enhanced.
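As a rough illustration of how the rendered virtual object might be composited over the real scene image captured by the camera, here is a minimal alpha-blending sketch using NumPy; it stands in for a full AR engine, and the function name and placement logic are assumptions.

```python
import numpy as np


def overlay_avatar(scene: np.ndarray, avatar_rgba: np.ndarray,
                   top_left: tuple) -> np.ndarray:
    """Composite a rendered avatar (with alpha channel) onto a real scene image.

    scene:       H x W x 3 image captured by the device camera.
    avatar_rgba: h x w x 4 rendering of the virtual object.
    top_left:    (row, col) where the avatar is placed, e.g. on a detected table.
    """
    out = scene.astype(np.float32).copy()
    h, w = avatar_rgba.shape[:2]
    r, c = top_left
    alpha = avatar_rgba[:, :, 3:4].astype(np.float32) / 255.0
    region = out[r:r + h, c:c + w]
    out[r:r + h, c:c + w] = alpha * avatar_rgba[:, :, :3] + (1.0 - alpha) * region
    return out.astype(np.uint8)


# Place a 100 x 80 avatar rendering onto a 480 x 640 camera frame.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
avatar_img = np.zeros((100, 80, 4), dtype=np.uint8)
composited = overlay_avatar(frame, avatar_img, (300, 200))
```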
S102, under the condition that a trigger event for adjusting the image information is detected, adjusting the skeleton data and/or the decorative article data to obtain adjusted target skeleton data and/or target decorative article data.
In some embodiments, the trigger event for adjusting the image information is generated by a user trigger. For example, referring to fig. 5, a schematic diagram of generating a trigger event provided by an embodiment of the present disclosure: as shown in fig. 5, in a display screen A1 that shows the image information of a virtual object in an electronic device A, a trigger icon A2 for adjusting the image information may also be shown, and in response to a trigger operation of a user on the trigger icon A2, a trigger event for adjusting the image information is generated.
It should be noted that the process of generating the trigger event shown in fig. 5 is only an exemplary process, and in other embodiments, the trigger event may also be generated in other manners, for example, the trigger event may also be generated in response to the user clicking any area of the avatar information 10 of the virtual object, which is not limited herein.
According to the above, each user can adjust the image information of the virtual object that he or she sees without affecting the viewing of other users, so the image information of the virtual object seen by different users may differ. For example, the virtual object seen by user Zhang San may have a tall figure and wear a skirt, while the virtual object seen by user Li Si may have a slim figure and wear a sports jacket.
In other embodiments, since the image information of the virtual object is displayed on the basis of the real scene image, the image information may also be adjusted by an automatic trigger in addition to the manual trigger of the above embodiments. For example, when the image information of the virtual object does not match the real scene image, the trigger event for adjusting the image information is deemed detected and the image information is adjusted automatically. For instance, if the color tone of the virtual object's clothes does not match the color tone of the real scene image, it is determined that the trigger event is detected, and the clothes data of the virtual object is adjusted until the color tones match, yielding the adjusted target decorative article data.
In some embodiments, whether the image information of the virtual object matches the real scene image may be determined based on environment information of the real scene, wherein the environment information includes at least one of weather information, temperature information, and location information of the real scene.
Illustratively, if the weather information of the real scene is rainy, and the displayed image information of the virtual object is not umbrella-opening, the image information of the virtual object is considered not to be matched with the real scene image. For another example, if the temperature information of the real scene is 20 degrees below zero, and the displayed image information of the virtual object is a dress, the image information of the virtual object is considered to be not matched with the image of the real scene. Then, if it is detected that the image information of the virtual object does not match the image of the real scene, that is, if a trigger event for adjusting the image information is detected, the skeleton data and/or the decoration data may be adjusted, for example, data of an umbrella is added to the decoration data.
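The automatic trigger described above can be sketched as a small set of rules over the environment information; the thresholds, field names, and adjustment labels below are illustrative assumptions rather than values taken from the disclosure.

```python
from dataclasses import dataclass


@dataclass
class Environment:
    weather: str        # e.g. 'sunny', 'rainy', 'snowy'
    temperature_c: float
    location: str


@dataclass
class AvatarState:
    has_umbrella: bool
    clothing_type: str  # e.g. 'dress', 'jacket'


def detect_mismatch(env: Environment, avatar: AvatarState) -> list:
    """Return the adjustments implied by the environment (empty list = matched)."""
    adjustments = []
    if env.weather == "rainy" and not avatar.has_umbrella:
        adjustments.append("add_umbrella")
    if env.temperature_c <= -10 and avatar.clothing_type == "dress":
        adjustments.append("switch_to_warm_clothing")
    return adjustments


# Rainy weather at -20 degrees with a dress and no umbrella -> trigger event.
needed = detect_mismatch(Environment("rainy", -20.0, "Beijing"),
                         AvatarState(has_umbrella=False, clothing_type="dress"))
trigger_event_detected = bool(needed)
```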
S103, rendering the target skeleton data and/or the target decoration data to generate the adjusted image information of the virtual object.
It can be understood that after the adjusted target bone data and the adjusted target decoration data are obtained, rendering needs to be performed again, so that the adjusted image information of the virtual object can be generated and displayed.
In the present embodiment, a detailed description is given by taking as an example the case where the image information of the virtual object does not match the weather information of the real scene. Please refer to fig. 6 and fig. 7, where fig. 6 is a schematic diagram of the image information before adjustment provided by an embodiment of the present disclosure, and fig. 7 is a schematic diagram of the image information after adjustment. As shown in fig. 6, the weather information of the real scene indicates that the current weather is rainy, while the currently displayed image information of the virtual object does not include an umbrella; that is, the image information of the virtual object does not match the weather information of the real scene. At this time, the decorative article data of the virtual object needs to be adjusted, and the adjusted decorative article data is then rendered to obtain the adjusted image information shown in fig. 7, which includes an umbrella.
In the embodiment of the disclosure, when the trigger event for adjusting the image information is detected, the bone data and/or the decorative article data are adjusted, so that the adjusted image information is richer, the requirements of different users on the image of the virtual object can be met, and the users' viewing experience is improved. In addition, because the skeleton of the virtual object is kept separate from the decorative article, the coupling between the two is reduced: the bone data and the decorative article data can be adjusted independently without affecting each other, which makes the image information of the virtual object more flexible and helps save resources.
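Putting steps S101 to S103 together, the overall adjust-and-rerender flow might look like the following sketch; render_body, render_decorations, and the event format are hypothetical stand-ins for the 3D rendering environment and the trigger mechanism.

```python
def render_body(bone_data):
    """Stand-in for rendering the skeletal data into the main body structure."""
    return {"mesh": "body_mesh", "bones": bone_data}


def render_decorations(decoration_data):
    """Stand-in for rendering the decorative article data."""
    return {"mesh": "decoration_mesh", "items": decoration_data}


def show_avatar(bone_data, decoration_data):
    # S101: render and display the image information of the virtual object.
    return {"body": render_body(bone_data),
            "decorations": render_decorations(decoration_data)}


def on_trigger_event(event, bone_data, decoration_data):
    # S102: adjust only the data the trigger event concerns.
    if event.get("bones"):
        bone_data = {**bone_data, **event["bones"]}
    if event.get("decorations"):
        decoration_data = decoration_data + event["decorations"]
    # S103: re-render the adjusted data to generate the adjusted image.
    return show_avatar(bone_data, decoration_data)


avatar_view = show_avatar({"height_cm": 165}, ["skirt"])
adjusted_view = on_trigger_event({"decorations": ["umbrella"]},
                                 {"height_cm": 165}, ["skirt"])
```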
In some embodiments, the body structure comprises a plurality of sites, for example, the body structure comprises: limbs, head, body and face, and limbs may also include arms, legs; the face also includes ears, nose, mouth, eyebrows, eyes, and the like. Therefore, when the bone data and/or the decoration data are/is adjusted, a plurality of parts can be adjusted, for example, in response to a selection operation of a user, eyes and eyebrows are determined from the plurality of parts, and the bone data corresponding to the eyes and the eyebrows are adjusted. As another example, the facial skeleton data of the virtual object may also be adjusted in response to a drag operation by the user on the face of the virtual object (or in response to a trigger operation by the user on the "pinch face" icon).
That is, the body structure includes a plurality of locations; when the skeletal data and/or the decoration data are adjusted, please refer to fig. 8, which includes the following S701 to S702:
s701, determining at least one target part to be adjusted from the plurality of parts;
s702, adjusting the bone data of the at least one target part.
It should be noted that, in the above adjustment process, although the skeleton data of the virtual object changes, the expression and performance of the virtual object do not change. Since the expression and performance of the virtual object are driven by the actor, adjusting the skeleton data does not affect the virtual object's normal performance. For example, if the virtual object is currently crying, different users may see different image information (for example, different heights, builds, or decorations), but the virtual object each of them sees is still in a crying state.
In the embodiment, the local adjustment of the bone data can be realized, the bone data of the whole main body structure does not need to be adjusted, the adjustment efficiency is improved, and the resources can be saved.
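A minimal sketch of the local adjustment in S701 and S702 is given below: only the bone data of the selected target parts is modified, and the remaining parts of the main body structure are left untouched. The part names and the scale-based adjustment are illustrative assumptions.

```python
from typing import Dict


def adjust_target_parts(bone_data: Dict[str, dict],
                        adjustments: Dict[str, float]) -> Dict[str, dict]:
    """Scale the bone length of the selected target parts only.

    bone_data:   per-part skeletal data, e.g. {'legs': {'length_cm': 80.0}, ...}
    adjustments: target part -> scale factor chosen by the user, e.g. {'legs': 1.1}
    """
    adjusted = {name: dict(data) for name, data in bone_data.items()}
    for part, scale in adjustments.items():       # S701: target parts to adjust
        if part in adjusted:
            adjusted[part]["length_cm"] *= scale  # S702: adjust only those parts
    return adjusted


bones = {"legs": {"length_cm": 80.0}, "arms": {"length_cm": 60.0},
         "head": {"length_cm": 22.0}}
# Make the legs 10% longer; arms and head are untouched.
target_bones = adjust_target_parts(bones, {"legs": 1.1})
```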
It can be understood that, in the actual adjustment process, when the bone data of the target portion is adjusted, it is also necessary to ensure the aesthetic degree of the image information of the virtual object, and therefore, referring to fig. 9, after step S702, the method may further include the following steps S703 to S705:
and S703, judging whether the adjusted bone data of the target part meets the preset requirement or not according to each target part.
For example, the preset requirement range corresponding to the length of the eye is 2cm to 3cm, the preset requirement range corresponding to the width of the eye is 1cm to 2cm, the preset requirement range corresponding to the length of the eyebrow may be 2cm to 4cm, and the preset requirement range corresponding to the width of the eyebrow is 1cm to 1.5cm, which is not limited herein. Also, the distance between the upper edge of the eye and the eyebrow has a preset range, for example, 0.5cm to 1.5 cm.
It should be noted that the range of the preset requirements illustrated above is only exemplary, and in the actual application process, the preset requirements may be set according to actual factors and actual requirements, and different virtual objects have different preset requirements corresponding to different respective portions of the virtual objects.
S704, when the adjusted bone data of the target part does not meet the preset requirement, other parts related to the target part are determined.
For example, if the length and width of the eyes indicated by the adjusted skeletal data of the eyes do not meet the preset requirements, other parts associated with the eyes, including the eyebrows and the nose, can be determined.
S705, correcting the adjusted bone data of the target part according to the incidence relation between the target part and the other parts, so that the corrected bone data of the target part meets the preset requirement.
The association relationship between the target portion and the other portions may be a size association relationship, a position association relationship, or the like, and is not limited herein.
For example, the adjusted bone data of the eye may be corrected according to the positional relation between the eye and the eyebrow: if the distance between the eye and the eyebrow is less than the minimum value of 0.5 cm of the preset range, the bone data of the eye needs to be corrected until the corrected bone data indicates that the distance between the eye and the eyebrow falls within the preset range. In this way, the target part and the other parts can be adapted to each other, so that the proportions between the parts are more harmonious and attractive, further improving the user's viewing experience.
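Steps S703 to S705 can be illustrated with a small sketch that checks the adjusted eye data against the preset 0.5 cm to 1.5 cm range mentioned above and, if the requirement is not met, corrects the eye position using its association with the eyebrow; the data layout and helper names are assumptions.

```python
from dataclasses import dataclass

EYE_TO_EYEBROW_RANGE_CM = (0.5, 1.5)  # preset requirement from the example above


@dataclass
class FacePart:
    name: str
    top_y_cm: float  # vertical position of the part's upper edge


def correct_eye_position(eye: FacePart, eyebrow: FacePart) -> FacePart:
    """S703-S705: if the adjusted eye violates the preset distance to the
    eyebrow, move it back just inside the allowed range."""
    low, high = EYE_TO_EYEBROW_RANGE_CM
    distance = eyebrow.top_y_cm - eye.top_y_cm
    if low <= distance <= high:          # S703: preset requirement already met
        return eye
    # S704/S705: correct the eye using its association with the eyebrow.
    target = low if distance < low else high
    return FacePart(eye.name, eyebrow.top_y_cm - target)


eye = FacePart("eye", top_y_cm=10.2)        # adjusted too close to the eyebrow
eyebrow = FacePart("eyebrow", top_y_cm=10.5)
corrected = correct_eye_position(eye, eyebrow)  # pushed back to a 0.5 cm distance
```

The same pattern generalizes to other associations, such as size relations between parts: the correction simply pulls the violating value back to the nearest boundary of its preset range.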
In some embodiments, in order to enrich visual information of a virtual object, when the skeleton data and/or the decoration data are adjusted, the decoration data can be adjusted according to environment information of the real scene. For example, if the temperature information of the real scene is 32 degrees celsius, the decoration data of the sun hat may be added to adjust the decoration data.
In the present embodiment, since the decorative article includes a garment, when the decorative article data is adjusted according to the environment information of the real scene, please refer to fig. 10, which includes the following steps S901 to S902:
s901, determining the type of the clothes and the material of the clothes according to the environment information.
S902, adjusting the clothing data of the decoration article data based on the type of the clothing and the material of the clothing.
It is understood that the type of the clothing and the material of the clothing may be determined according to at least one of weather information, temperature information, and location information in the environment information. For example, if the temperature information indicates that the current temperature is minus 20 degrees, the type of the garment may be determined to be a jacket, and the material of the garment may be down.
Then, after the type and the material of the garment are determined, the garment data can be adjusted, so that the image information of the virtual object can be more fit with the environment information, and the watching experience of a user is further improved.
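A sketch of S901 and S902, assuming the garment type and material are chosen by a simple lookup over the temperature and weather fields of the environment information; the thresholds, garment types, and materials below are illustrative only.

```python
def choose_garment(temperature_c: float, weather: str) -> dict:
    """S901: pick a garment type and material from the environment information."""
    if temperature_c <= -10:
        garment = {"type": "jacket", "material": "down"}
    elif temperature_c >= 28:
        garment = {"type": "t-shirt", "material": "cotton"}
    else:
        garment = {"type": "sweater", "material": "wool"}
    if weather == "rainy":
        garment["outer_layer"] = "raincoat"
    return garment


def adjust_clothing_data(decoration_data: dict, environment: dict) -> dict:
    """S902: replace the clothing entry of the decorative article data."""
    adjusted = dict(decoration_data)
    adjusted["clothing"] = choose_garment(environment["temperature_c"],
                                          environment["weather"])
    return adjusted


# At -20 degrees in snow, the avatar's dress is replaced by a down jacket.
new_decorations = adjust_clothing_data({"clothing": {"type": "dress"}},
                                       {"temperature_c": -20, "weather": "snowy"})
```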
It will be understood by those of skill in the art that in the above method of the present embodiment, the order of writing the steps does not imply a strict order of execution and does not impose any limitations on the implementation, as the order of execution of the steps should be determined by their function and possibly inherent logic.
Based on the same inventive concept, the embodiment of the present disclosure further provides a virtual object image adjusting device corresponding to the virtual object image adjusting method, and since the principle of the device in the embodiment of the present disclosure for solving the problem is similar to the virtual object image adjusting method in the embodiment of the present disclosure, the implementation of the device can refer to the implementation of the method, and repeated parts are not detailed.
Referring to fig. 11, a schematic structural diagram of an image adjusting apparatus for a virtual object according to an embodiment of the present disclosure is shown, where the image adjusting apparatus 1000 for a virtual object includes: a presentation module 1010, an adjustment module 1020, and a rendering module 1030; wherein,
a display module 1010, configured to display image information of a virtual object in an electronic device, where the image information includes a main body structure of the virtual object and a decorative article attached to the main body structure, the main body structure is generated by rendering skeleton data of the virtual object in a 3D rendering environment, and the decorative article is generated by rendering decorative article data in the 3D rendering environment;
an adjusting module 1020, configured to adjust the bone data and/or the decorative article data to obtain adjusted target bone data and/or target decorative article data when a trigger event for adjusting the image information is detected;
a rendering module 1030, configured to render the target bone data and/or the target decoration data, and generate image information of the adjusted virtual object.
In an optional implementation manner, the display module 1010 is specifically configured to:
acquiring a real scene image acquired by the electronic equipment;
and displaying the image information of the virtual object based on the real scene image.
In an alternative embodiment, in a case that the image information of the virtual object does not match the real scene image, it is determined that the trigger event for adjusting the image information is detected.
In an optional implementation manner, the display module 1010 is further specifically configured to:
determining whether image information of the virtual object is matched with the image of the real scene based on environment information of the real scene, wherein the environment information comprises at least one of weather information, temperature information and position information of the real scene.
In an alternative embodiment, the body structure comprises a plurality of sites; the adjusting module 1020 is specifically configured to:
determining at least one target site to be adjusted from the plurality of sites;
adjusting the bone data of the at least one target site.
In an optional implementation manner, the adjusting module 1020 is further specifically configured to:
for each target part, judging whether the adjusted bone data of the target part meets the preset requirement;
determining other parts related to the target part under the condition that the adjusted bone data of the target part do not meet the preset requirement;
and correcting the adjusted bone data of the target part according to the association relation between the target part and the other parts, so that the corrected bone data of the target part meets the preset requirement.
In an optional implementation manner, the adjusting module 1020 is specifically configured to:
and adjusting the data of the decorative articles according to the environment information of the real scene.
In an optional implementation manner, the adjusting module 1020 is specifically configured to:
determining the type of the clothing and the material of the clothing according to the environment information;
and adjusting the clothing data of the decorative article data based on the type of the clothing and the material of the clothing.
In an alternative embodiment, the trigger event for adjusting the image information is triggered by a user.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Based on the same technical concept, the embodiment of the disclosure also provides an electronic device. Referring to fig. 12, a schematic structural diagram of an electronic device 800 provided in the embodiment of the present disclosure includes a processor 801, a memory 802, and a bus 803. The memory 802 is used for storing execution instructions and includes a memory 8021 and an external memory 8022; the memory 8021 is also referred to as an internal memory, and temporarily stores operation data in the processor 801 and data exchanged with the external memory 8022 such as a hard disk, and the processor 801 exchanges data with the external memory 8022 via the memory 8021.
In the embodiment of the present application, the memory 802 is specifically used for storing application program codes for executing the scheme of the present application, and the processor 801 controls the execution. That is, when the electronic device 800 is operating, the processor 801 communicates with the memory 802 via the bus 803, so that the processor 801 executes the application program code stored in the memory 802, thereby performing the method of any of the foregoing embodiments.
The processor 801 may be an integrated circuit chip having signal processing capabilities. The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present disclosure may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The Memory 802 may be, but is not limited to, a Random Access Memory (RAM), a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Read-Only Memory (EPROM), an electrically Erasable Read-Only Memory (EEPROM), and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 800. In other embodiments of the present application, electronic device 800 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the image adjustment method for a virtual object described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute the steps of the method for adjusting an image of a virtual object in the foregoing method embodiments.
The computer program product may be implemented by hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the terminal described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, terminal and method can be implemented in other ways. The above-described terminal embodiments are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implementing, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be through some communication interfaces, or indirect coupling or communication connection of units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are merely specific embodiments of the present disclosure, which are used to illustrate the technical solutions of the present disclosure, but not to limit the same, and the scope of the present disclosure is not limited thereto, and although the present disclosure is described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that: any person skilled in the art can modify or easily conceive of the technical solutions described in the foregoing embodiments or equivalent technical features thereof within the technical scope of the present disclosure; such modifications, changes and substitutions do not depart from the spirit and scope of the embodiments disclosed herein, and they should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. An image adjustment method for a virtual object, comprising:
displaying image information of a virtual object in an electronic device, wherein the image information comprises a main body structure of the virtual object and a decorative article attached to the main body structure, the main body structure is generated by rendering skeletal data of the virtual object in a 3D rendering environment, and the decorative article is generated by rendering decorative article data in the 3D rendering environment;
in a case where a trigger event for adjusting the image information is detected, adjusting the skeletal data and/or the decorative article data to obtain adjusted target skeletal data and/or target decorative article data;
rendering the target skeletal data and/or the target decorative article data to generate adjusted image information of the virtual object.
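Note (not part of the claims): the following is a minimal Python sketch of the flow recited in claim 1, assuming a hypothetical renderer wrapper; every class, method and field name here (AvatarController, render_skeleton, render_decoration, compose, apply_to_skeleton, apply_to_decoration) is an illustrative assumption and is not drawn from the patent.

from dataclasses import dataclass

@dataclass
class VirtualObject:
    skeletal_data: dict      # e.g. bone name -> transform parameters
    decorative_data: dict    # e.g. article name -> mesh/material parameters

class AvatarController:
    """Illustrative flow of claim 1: display, detect trigger, adjust, re-render."""

    def __init__(self, renderer):
        self.renderer = renderer  # assumed wrapper around the 3D rendering environment

    def display(self, obj: VirtualObject):
        body = self.renderer.render_skeleton(obj.skeletal_data)          # main body structure
        articles = self.renderer.render_decoration(obj.decorative_data)  # decorative article
        return self.renderer.compose(body, articles)                     # image information

    def on_trigger(self, obj: VirtualObject, adjustment):
        # Adjust the skeletal data and/or the decorative article data ...
        obj.skeletal_data = adjustment.apply_to_skeleton(obj.skeletal_data)
        obj.decorative_data = adjustment.apply_to_decoration(obj.decorative_data)
        # ... then render the adjusted data to produce the new image information.
        return self.display(obj)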
2. The method of claim 1, wherein the displaying of the image information of the virtual object in the electronic device comprises:
acquiring a real scene image captured by the electronic device;
and displaying the image information of the virtual object based on the real scene image.
3. The method according to claim 2, wherein the trigger event for adjusting the image information is determined to be detected in a case where the image information of the virtual object does not match the real scene image.
4. The method of claim 3, further comprising:
determining whether the image information of the virtual object matches the real scene image based on environment information of the real scene, wherein the environment information comprises at least one of weather information, temperature information and position information of the real scene.
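Note (not part of the claims): one way to read claims 3-4 is a simple match check between the avatar's current outfit and the real-scene environment information; the sketch below reports a mismatch as the trigger event. The garment table, the 15 °C threshold and all names are assumptions made only for illustration.

from dataclasses import dataclass

@dataclass
class EnvironmentInfo:
    weather: str          # e.g. "rain", "snow", "clear"
    temperature_c: float  # ambient temperature in degrees Celsius
    location: str         # e.g. "beach", "city", "indoor"

# Hypothetical mapping from a coarse temperature band to suitable garment types.
SUITABLE_GARMENTS = {
    "cold": {"coat", "sweater", "down_jacket"},
    "warm": {"t_shirt", "dress"},
}

def avatar_matches_scene(current_garment: str, env: EnvironmentInfo) -> bool:
    # 15 degrees Celsius is an arbitrary illustrative threshold, not a claimed value.
    band = "cold" if env.temperature_c < 15 else "warm"
    if env.weather in ("rain", "snow"):
        band = "cold"
    return current_garment in SUITABLE_GARMENTS[band]

# A mismatch between the avatar and the real scene counts as the trigger event of claim 3.
trigger_event = not avatar_matches_scene("t_shirt", EnvironmentInfo("snow", -2.0, "city"))
print(trigger_event)  # True: the avatar's outfit does not match a snowy scene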
5. The method of claim 1, wherein the main body structure comprises a plurality of parts; the adjusting the skeletal data and/or the decorative article data comprises:
determining at least one target part to be adjusted from the plurality of parts;
adjusting the skeletal data of the at least one target part.
6. The method of claim 5, wherein after the adjusting of the skeletal data of the at least one target part, the method further comprises:
for each target part, determining whether the adjusted skeletal data of the target part meets a preset requirement;
in a case where the adjusted skeletal data of the target part does not meet the preset requirement, determining other parts associated with the target part;
correcting the adjusted skeletal data of the target part according to the association relationship between the target part and the other parts, so that the corrected skeletal data of the target part meets the preset requirement.
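Note (not part of the claims): claims 5-6 can be read as a constrain-and-correct pass over body parts. The sketch below uses a hypothetical per-part scale limit relative to associated parts purely to illustrate the "preset requirement" and the "association relationship"; neither is specified numerically in the claims, and the part names and values are invented.

# Illustrative skeletal data as a part -> scale mapping; real bone data would carry
# full transforms, so these names and numbers are stand-ins for illustration only.
ASSOCIATED_PARTS = {"forearm": ["upper_arm"], "shin": ["thigh"]}
MAX_RELATIVE_SCALE = 1.5   # assumed stand-in for the "preset requirement"

def meets_requirement(data: dict, part: str) -> bool:
    return all(data[part] <= MAX_RELATIVE_SCALE * data[other]
               for other in ASSOCIATED_PARTS.get(part, []))

def correct_part(data: dict, part: str) -> None:
    # Clamp the target part so it stays consistent with its associated parts.
    for other in ASSOCIATED_PARTS.get(part, []):
        data[part] = min(data[part], MAX_RELATIVE_SCALE * data[other])

def adjust_parts(skeletal_data: dict, adjustments: dict) -> dict:
    adjusted = dict(skeletal_data)
    adjusted.update(adjustments)             # adjust the selected target parts
    for part in adjustments:                 # then check each adjusted target part
        if not meets_requirement(adjusted, part):
            correct_part(adjusted, part)     # correct it via its associated parts
    return adjusted

print(adjust_parts({"upper_arm": 1.0, "forearm": 1.0, "thigh": 1.0, "shin": 1.0},
                   {"forearm": 2.4}))        # the over-adjusted forearm is pulled back to 1.5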
7. The method of claim 3, wherein the adjusting the skeletal data and/or the decorative article data comprises:
adjusting the decorative article data according to the environment information of the real scene.
8. The method of claim 7, wherein the decorative article comprises a garment; the adjusting the decorative article data according to the environment information of the real scene comprises:
determining a type of the garment and a material of the garment according to the environment information;
adjusting garment data in the decorative article data based on the type of the garment and the material of the garment.
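Note (not part of the claims): the garment selection of claims 7-8 can be sketched as a lookup from environment information to a garment type and material. The rules and field names below are assumptions made for this sketch, not the patented selection logic.

def choose_garment(weather: str, temperature_c: float) -> tuple[str, str]:
    # Returns an illustrative (garment type, material) pair.
    if weather == "rain":
        return ("raincoat", "waterproof_nylon")
    if weather == "snow" or temperature_c < 5:
        return ("down_jacket", "down_fill")
    if temperature_c < 15:
        return ("coat", "wool")
    if temperature_c > 25:
        return ("t_shirt", "cotton")
    return ("jacket", "denim")

def adjust_decoration(decorative_data: dict, weather: str, temperature_c: float) -> dict:
    garment_type, material = choose_garment(weather, temperature_c)
    # Only the garment-related fields of the decorative article data are changed.
    return {**decorative_data, "garment_type": garment_type, "garment_material": material}

print(adjust_decoration({"garment_type": "t_shirt", "garment_material": "cotton", "hat": "cap"},
                        weather="snow", temperature_c=-3.0))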
9. The method of claim 1, wherein the trigger event for adjusting the image information is triggered by a user.
10. An image adjusting apparatus for a virtual object, comprising:
the display module is used for displaying image information of a virtual object in an electronic device, wherein the image information comprises a main body structure of the virtual object and a decorative article attached to the main body structure, the main body structure is generated by rendering skeletal data of the virtual object in a 3D rendering environment, and the decorative article is generated by rendering decorative article data in the 3D rendering environment;
the adjusting module is used for adjusting the skeletal data and/or the decorative article data in a case where a trigger event for adjusting the image information is detected, to obtain adjusted target skeletal data and/or target decorative article data;
the rendering module is used for rendering the target skeletal data and/or the target decorative article data to generate adjusted image information of the virtual object.
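Note (not part of the claims): claim 10 maps the method of claim 1 onto three cooperating modules. A compact sketch of that split follows, reusing the hypothetical renderer assumption from the note under claim 1; all module and method names are assumed for illustration.

class DisplayModule:
    def __init__(self, renderer):
        self.renderer = renderer                      # assumed 3D rendering environment wrapper

    def show(self, skeletal_data, decorative_data):
        body = self.renderer.render_skeleton(skeletal_data)
        articles = self.renderer.render_decoration(decorative_data)
        return self.renderer.compose(body, articles)  # image information

class AdjustModule:
    def adjust(self, skeletal_data, decorative_data, adjustment):
        # Produces target skeletal data and/or target decorative article data.
        return adjustment.apply(skeletal_data, decorative_data)

class RenderModule:
    def __init__(self, display: DisplayModule):
        self.display = display

    def rerender(self, target_skeletal_data, target_decorative_data):
        return self.display.show(target_skeletal_data, target_decorative_data)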
11. An electronic device, comprising: a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor, the processor and the memory communicate via the bus when the electronic device is operating, and the machine-readable instructions, when executed by the processor, perform the image adjustment method for a virtual object according to any one of claims 1 to 9.
12. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the image adjustment method for a virtual object according to any one of claims 1 to 9.
CN202210224365.7A 2022-03-07 2022-03-07 Image adjustment method and device for virtual object, electronic equipment and storage medium Active CN114612643B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210224365.7A CN114612643B (en) 2022-03-07 2022-03-07 Image adjustment method and device for virtual object, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114612643A true CN114612643A (en) 2022-06-10
CN114612643B CN114612643B (en) 2024-04-12

Family

ID=81861376

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210224365.7A Active CN114612643B (en) 2022-03-07 2022-03-07 Image adjustment method and device for virtual object, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114612643B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9129404B1 (en) * 2012-09-13 2015-09-08 Amazon Technologies, Inc. Measuring physical objects and presenting virtual articles
CN110766777A (en) * 2019-10-31 2020-02-07 北京字节跳动网络技术有限公司 Virtual image generation method and device, electronic equipment and storage medium
CN111147873A (en) * 2019-12-19 2020-05-12 武汉西山艺创文化有限公司 Virtual image live broadcasting method and system based on 5G communication
CN111583415A (en) * 2020-05-08 2020-08-25 维沃移动通信有限公司 Information processing method and device and electronic equipment
CN111815781A (en) * 2020-06-30 2020-10-23 北京市商汤科技开发有限公司 Augmented reality data presentation method, apparatus, device and computer storage medium
CN112184862A (en) * 2020-10-12 2021-01-05 网易(杭州)网络有限公司 Control method and device of virtual object and electronic equipment
CN113633977A (en) * 2021-08-09 2021-11-12 北京字跳网络技术有限公司 Virtual article processing method, device, equipment and storage medium
CN113838217A (en) * 2021-09-23 2021-12-24 北京百度网讯科技有限公司 Information display method and device, electronic equipment and readable storage medium
CN113905251A (en) * 2021-10-26 2022-01-07 北京字跳网络技术有限公司 Virtual object control method and device, electronic equipment and readable storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114758042A (en) * 2022-06-14 2022-07-15 深圳智华科技发展有限公司 Novel virtual simulation engine, virtual simulation method and device
CN116069159A (en) * 2022-09-14 2023-05-05 领悦数字信息技术有限公司 Method, apparatus and medium for displaying avatar
CN115619923A (en) * 2022-09-30 2023-01-17 北京百度网讯科技有限公司 Rendering method and device for virtual object, electronic equipment and storage medium
CN115619923B (en) * 2022-09-30 2023-12-12 北京百度网讯科技有限公司 Rendering method and device for virtual object, electronic equipment and storage medium
CN116051694A (en) * 2022-12-20 2023-05-02 百度时代网络技术(北京)有限公司 Avatar generation method, apparatus, electronic device, and storage medium
CN116051694B (en) * 2022-12-20 2023-10-03 百度时代网络技术(北京)有限公司 Avatar generation method, apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN114612643B (en) 2024-04-12

Similar Documents

Publication Publication Date Title
CN114612643B (en) Image adjustment method and device for virtual object, electronic equipment and storage medium
JP7098120B2 (en) Image processing method, device and storage medium
CN110021061B (en) Collocation model construction method, clothing recommendation method, device, medium and terminal
GB2564745B (en) Methods for generating a 3D garment image, and related devices, systems and computer program products
Kim et al. Augmented reality fashion apparel simulation using a magic mirror
US7212202B2 (en) Method and system for a computer-rendered three-dimensional mannequin
JP2019510297A (en) Virtual try-on to the user's true human body model
CN104376160A (en) Real person simulation individuality ornament matching system
CN113924601A (en) Entertaining mobile application for animating and applying effects to a single image of a human body
CN111627117B (en) Image display special effect adjusting method and device, electronic equipment and storage medium
JP2014509758A (en) Real-time virtual reflection
CN106157363A (en) A kind of photographic method based on augmented reality, device and mobile terminal
CN113610612B (en) 3D virtual fitting method, system and storage medium
CN111767817B (en) Dress collocation method and device, electronic equipment and storage medium
CN113129450A (en) Virtual fitting method, device, electronic equipment and medium
WO2020104990A1 (en) Virtually trying cloths & accessories on body model
CN105913496A (en) Method and system for fast conversion of real clothes to three-dimensional virtual clothes
CN113318428A (en) Game display control method, non-volatile storage medium, and electronic device
WO2013120453A1 (en) System and method for natural person digitized image design
CN114049468A (en) Display method, device, equipment and storage medium
CN111402427A (en) Virtual fitting system and method thereof
CN113065924A (en) Data input system for clothes customization
WO2023160074A1 (en) Image generation method and apparatus, electronic device, and storage medium
Feng et al. A review of an interactive augmented reality customization clothing system using finger tracking techniques as input device
al-Qerem Virtual dressing room implementation using body image–clothe mapping

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant