CN118244939A - Virtual-real interaction method, device, equipment, medium and program product based on digital person

Publication number: CN118244939A
Application number: CN202410397983.0A
Authority: CN (China)
Legal status: Pending (assumed status; not a legal conclusion)
Prior art keywords: virtual, target virtual, real, interaction, digital person
Original language: Chinese (zh)
Inventors: 刘博, 唐郡, 何林, 肖永强, 陈永彬
Current assignees: China Mobile Communications Group Co Ltd; MIGU Culture Technology Co Ltd
Application filed by China Mobile Communications Group Co Ltd and MIGU Culture Technology Co Ltd
Priority: CN202410397983.0A


Classifications

    • G06F 3/04815: Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06T 19/006: Mixed reality
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Architecture (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a virtual-real interaction method, device, equipment, medium and program product based on a digital person. The method comprises the following steps: performing scene modeling on a real picture to obtain a virtual scene, wherein the virtual scene at least comprises a plurality of virtual articles corresponding to a plurality of articles to be interacted in the real picture; determining, based on an interaction action of the digital person, at least one target virtual article that the digital person interacts with from the plurality of virtual articles; and adjusting, based on the article attributes of the at least one target virtual article and the interaction action, the article to be interacted corresponding to the at least one target virtual article in the real picture, so that the actions of the digital person can accurately affect the real picture in real time, presenting a more realistic virtual-real interaction effect.

Description

Virtual-real interaction method, device, equipment, medium and program product based on digital person
Technical Field
The present invention relates to the field of interaction technologies, and in particular, to a virtual-real interaction method, device, equipment, medium and program product based on digital people.
Background
At present, virtual-real scene interaction involving a digital person is generally achieved by applying light-and-shadow processing to the digital person and the real scene to combine the two. Such processing mainly simulates light distribution, reflection rules, shadow effects and the like to blend the digital person into the real scene and present the corresponding light-and-shadow visual effect. However, the interaction between the digital person and the real scene presented by such visual effects is often not realistic enough.
Disclosure of Invention
In order to solve the technical problems, the embodiment of the invention provides a virtual-real interaction method, device, equipment, medium and program product based on digital people.
The embodiment of the invention provides a virtual-real interaction method based on digital people, which comprises the following steps:
Performing scene modeling on a real picture to obtain a virtual scene, wherein the virtual scene at least comprises a plurality of virtual articles corresponding to a plurality of articles to be interacted in the real picture;
determining at least one target virtual article interacting with the digital person from the plurality of virtual articles based on the interaction of the digital person;
and adjusting the object to be interacted corresponding to the at least one target virtual object in the real picture based on the object attribute of the at least one target virtual object and the interaction action.
Further, when the interaction is displacement, the adjusting the object to be interacted corresponding to the at least one target virtual object in the real picture based on the object attribute of the at least one target virtual object and the interaction includes:
for each target virtual article, the following steps are performed:
if the object attribute of the target virtual object comprises a displacement attribute and the interaction action comprises a displacement action of the digital person on the target virtual object, determining a virtual interaction response corresponding to the target virtual object as a displacement response;
And when the virtual interaction response corresponding to the target virtual object is a displacement response, converting the motion trail of the target virtual object into real motion coordinates according to a preset conversion rule, and adjusting the position of the object to be interacted corresponding to the target virtual object in the real picture based on the real motion coordinates.
Further, when the interaction is deformation, the adjusting the object to be interacted corresponding to the at least one target virtual object in the real picture based on the object attribute of the at least one target virtual object and the interaction includes:
for each target virtual article, the following steps are performed:
If the object attribute of the target virtual object comprises an elastic attribute and the interaction action comprises a deformation action of the digital person on the target virtual object, determining that a virtual interaction response corresponding to the target virtual object is a deformation response;
And when the virtual interaction response corresponding to the target virtual object is a deformation response, determining a virtual deformation area of the target virtual object based on the object information of the target virtual object and the physical information corresponding to the interaction action, converting the virtual deformation area into a real deformation area through a preset conversion rule, and carrying out morphological adjustment on the object to be interacted corresponding to the target virtual object in the real picture according to the real deformation area.
Further, the determining of the conversion rule includes the following steps:
calculating a model transformation relation based on displacement parameters and shape parameters of the target virtual object in the virtual scene;
determining a view transformation relationship between the virtual scene and the real picture;
Constructing a projection transformation relation based on the near clipping surface distance, the far clipping surface distance and the position information corresponding to the target virtual object;
and determining a conversion rule based on the model transformation relationship, the view transformation relationship and the projection transformation relationship.
Further, the performing morphological adjustment on the object to be interacted corresponding to the target virtual object in the real picture according to the real deformation area includes:
Determining two endpoints of a maximum distortion distance in the real deformation region;
Calculating deformation parameters of points to be adjusted based on the two endpoints, boundary points in the real deformation area and the points to be adjusted on the object to be interacted corresponding to the target virtual object;
Determining color parameters of the point to be adjusted in the real picture;
and carrying out morphological adjustment on the object to be interacted corresponding to the target virtual object in the real picture based on the color parameter and the deformation parameter.
Further, before the adjusting the object to be interacted corresponding to the at least one target virtual object in the real picture based on the object attribute of the at least one target virtual object and the interaction, the method further includes:
Cutting out the object to be interacted corresponding to the at least one target virtual object in the real picture to obtain at least one local image;
modeling the object to be interacted corresponding to the at least one target virtual object based on the at least one local image to obtain at least one local model;
Updating the at least one target virtual article based on the at least one local model.
The embodiment of the invention also provides a virtual-real interaction device based on the digital person, which comprises:
the scene modeling module is used for performing scene modeling on a real picture to obtain a virtual scene, wherein the virtual scene at least comprises a plurality of virtual articles corresponding to a plurality of articles to be interacted in the real picture;
A target virtual article determining module, configured to determine at least one target virtual article that interacts with a digital person from the plurality of virtual articles based on an interaction of the digital person;
And the adjustment module is used for adjusting the object to be interacted corresponding to the at least one target virtual object in the real picture based on the object attribute of the at least one target virtual object and the interaction action.
The embodiment of the invention also provides computer equipment, which comprises a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, wherein the processor realizes the steps of the digital person-based virtual-real interaction method when executing the computer program.
The embodiment of the invention also provides a computer readable storage medium, on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the digital person-based virtual-real interaction method.
The embodiment of the invention also provides a computer program product, which comprises computer instructions, wherein the computer instructions realize the steps of the digital person-based virtual-real interaction method when being executed by a processor.
In summary, the embodiment of the invention has at least the following beneficial effects:
By adopting the embodiment of the invention, a virtual scene is obtained by performing scene modeling on a real picture, wherein the virtual scene at least comprises a plurality of virtual articles corresponding to a plurality of articles to be interacted in the real picture; at least one target virtual article that interacts with the digital person is determined from the plurality of virtual articles based on the interaction action of the digital person; and the article to be interacted corresponding to the at least one target virtual article in the real picture is adjusted based on the article attributes of the at least one target virtual article and the interaction action, so that the actions of the digital person can accurately affect the real picture in real time, presenting a more realistic virtual-real interaction effect.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of a digital person-based virtual-to-real interaction method provided by the present invention;
FIG. 2 is a schematic diagram of an embodiment of a virtual-real interaction device based on digital people according to the present invention;
FIG. 3 is a schematic diagram illustrating the construction of one embodiment of a computer device provided by the present invention;
FIG. 4 is a schematic flow chart of a further embodiment of the digital person-based virtual-to-real interaction method provided by the present invention;
FIG. 5 is a schematic flow chart diagram of a further embodiment of a digital person-based virtual-to-real interaction method provided by the present invention;
FIG. 6 is a schematic diagram of one embodiment of a real picture provided by the present invention;
FIG. 7 is a schematic diagram of one embodiment of digital person-based virtual-to-real interaction provided by the present invention;
FIG. 8 is a schematic diagram of one embodiment of digital person-based virtual-to-real interaction provided by the present invention;
FIG. 9 is a schematic diagram of one embodiment of digital person-based virtual-to-real interaction provided by the present invention;
FIG. 10 is a schematic diagram of one embodiment of adjusting an item to be interacted with based on a deformation response provided by the invention.
Detailed Description
The following clearly and completely describes the embodiments of the present invention with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
In the description of the present application, the terms "first," "second," "third," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first", "second", "third", etc. may explicitly or implicitly include one or more such features. In the description of the present application, unless otherwise indicated, "a plurality" means two or more. The term "include" and its variations are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "according to" means "according, at least in part, to". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments".
In the description of the present application, it should be noted that, unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art. The terminology used in the description is for the purpose of describing particular embodiments only and is not intended to limit the application; the particular meaning of the above terms in the present application will be understood by those of ordinary skill in the art according to the specific circumstances.
The virtual-real interaction method based on the digital person can be executed by terminal equipment (such as a mobile phone, a tablet computer, a television, a smart screen and the like) with a display function.
For convenience of description, the following description will be given by taking the terminal device as an execution subject, but not limited thereto.
Referring to fig. 1, a flow chart of an embodiment of a digital person-based virtual-real interaction method provided by the present invention includes steps S1-S3, specifically as follows:
s1, performing scene modeling on a real picture to obtain a virtual scene, wherein the virtual scene at least comprises a plurality of virtual articles corresponding to a plurality of articles to be interacted in the real picture;
For example, if the virtual scene is a 3D scene, performing scene modeling on the real picture to obtain the virtual scene comprises: performing 3D modeling on the real picture to obtain a 3D scene containing the plurality of articles to be interacted, wherein the plurality of articles to be interacted are articles in the real picture, identified by an artificial intelligence model, that can interact with the digital person.
It can be understood that the created virtual scene simulates the real picture; an article to be interacted refers to an article in the real picture that a user may interact with. For example, when the real picture shows an indoor scene, the articles to be interacted may be an indoor sofa, a throw pillow, a cup, tables and chairs, wall-hanging ornaments and the like; when the real picture shows an outdoor scene, the article to be interacted may be a car offered for a test drive, or a device in the car provided for the user to control or adjust, such as the steering wheel, the in-vehicle central control screen, a door, the in-vehicle rearview mirror, or a seat.
For example, referring to fig. 6, the real picture may be an image captured by an image capture device, and the image may be two-dimensional. The image capture device may be one mounted on the terminal device itself (for example, the camera of a mobile phone or tablet) or one independent of the terminal device (for example, a standalone camera); in the latter case, the terminal device may acquire the image as the real picture through wired/wireless data transmission. In addition, the real picture may be an image obtained by the terminal device through other channels, for example, a picture downloaded from the Internet, which is not particularly limited herein.
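As an illustration of this modeling step, the following minimal Python sketch (an assumption about structure, not the patented implementation) builds an invisible virtual proxy for each identified article to be interacted; `detect_items` stands in for the artificial intelligence identification model mentioned above, and the attribute assignment anticipates the article attributes used later in step S3.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualItem:
    label: str
    bbox_3d: tuple                                 # (x, y, z, w, h, d), estimated from the 2D picture
    attributes: set = field(default_factory=set)   # e.g. {"displacement", "elastic"}
    visible: bool = False                          # proxies are contacted, never rendered

def build_virtual_scene(real_picture, detect_items):
    """Model the real picture as a set of invisible virtual proxies.

    `detect_items` stands in for the AI identification model mentioned in the
    text: a callable returning, per interactable article, a label and an
    estimated 3D placement inferred from the 2D picture (both assumptions).
    """
    scene = []
    for label, bbox_3d in detect_items(real_picture):
        item = VirtualItem(label, bbox_3d)
        if label in ("cup", "table", "chair", "throw pillow"):
            item.attributes.add("displacement")    # can be carried and moved
        if label in ("throw pillow", "sofa", "clothing"):
            item.attributes.add("elastic")         # can deform under force
        scene.append(item)                         # an engine would attach a collider here
    return scene
```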
S2, determining at least one target virtual article which interacts with the digital person from the plurality of virtual articles based on the interaction action of the digital person;
It should be noted that, in general, only the digital person and the real scene are displayed in the display area of the terminal device; the created virtual scene usually does not need to be displayed. Since virtual-real interaction aims to present the interaction process between the virtual digital person and the real scene, the digital person can be rendered into the real scene for joint display. The virtual scene simulates the articles to be interacted in the real scene: the virtual articles can be placed, without being displayed, at the positions of the corresponding articles to be interacted in a one-to-one manner, and each virtual article is set to a state in which it can be contacted by the digital person (for example, by adding a collider that the digital person can touch). Visually, the interaction action of the digital person is performed on the article to be interacted in the real scene, but in fact it is performed on the corresponding virtual article, which is how the target virtual article is selected.
For example, for a digital person, the interaction of the digital person may be determined by a user input instruction for the digital person received by the terminal device, wherein the user input instruction may include: the voice command collected through the sound sensor in the terminal device, the touch command collected through the touch sensor in the terminal device, and/or the gesture command collected through the camera in the terminal device, etc., but are not limited thereto.
In the embodiment of the application, touch instructions are used for illustrating how to determine the interaction action of the digital person and how to select the target virtual object based on the interaction action. The terminal device as the execution subject has a touch display area having both display and touch functions, and examples thereof are as follows:
firstly, displaying a real picture and a digital person through a touch display area;
then, receiving, through the touch display area, a touch instruction input by the user for the digital person, and determining the interaction action of the digital person based on the touch instruction, wherein the touch instruction reflects the user's adjustment of the digital person in the touch display area, for example, controlling the digital person to use one or more body parts to perform a corresponding action on an article to be interacted (such as controlling the digital person to pick up the cup by hand so that the cup is displaced, or controlling the digital person to sit on the sofa so that the sofa deforms);
and finally, based on the interaction action of the digital person, determining the article to be interacted that the digital person is to interact with, and selecting, from the virtual articles, at least one target virtual article corresponding one-to-one to that article to be interacted. It will be appreciated that in the visual presentation of this step the interaction action is directed at the article to be interacted, but the underlying logic is that the interaction action comes into contact with the corresponding virtual article, which is thereby selected as the target virtual article. Therefore, in technical principle the target virtual article can be determined directly from the interaction action; the description in this step of first determining the article to be interacted and then selecting the target virtual article is given only to aid understanding of the visual interaction and is not limiting.
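Under the collider assumption of the preceding paragraphs, the selection of target virtual articles can be sketched as a contact test between the digital person's touching body part and each invisible proxy. The axis-aligned box overlap below is an illustrative simplification of a real physics-engine collider query:

```python
def aabb_overlap(a, b):
    """Overlap test for axis-aligned boxes given as (x, y, z, w, h, d)."""
    return all(a[i] < b[i] + b[i + 3] and b[i] < a[i] + a[i + 3] for i in range(3))

def select_target_items(touching_part_box, scene):
    """Items whose invisible proxy the digital person's body part contacts."""
    return [item for item in scene if aabb_overlap(touching_part_box, item.bbox_3d)]
```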
And S3, adjusting the object to be interacted corresponding to the at least one target virtual object in the real picture based on the object attribute of the at least one target virtual object and the interaction action.
It should be noted that the article attribute of a virtual article may be determined by the physical properties of its corresponding article to be interacted, and indicates the physical characteristics of that article (for example, whether it can deform in response to an external force and/or be carried and displaced). The article attribute is then combined with the interaction action of the digital person on the corresponding virtual article to determine the adjustment required for the target virtual article in the real picture. For example, when the article to be interacted is one that does not deform noticeably but can be carried and moved, such as a cup or a table and chair, the article attribute of the corresponding target virtual article indicates that it can be displaced, but not deformed, in response to the digital person's interaction action: if the interaction action indicates a displacement action on the target virtual article, the article is moved to the corresponding position, but if the interaction action indicates a deformation action, no deformation occurs. The case where the article to be interacted can deform noticeably is similar and is not repeated here.
In the embodiment of the application, the digital person can interact with interactable articles in the real picture as a real person would: by controlling the digital person to make corresponding interaction actions, the interactable articles in the real picture are adjusted accordingly, giving the user a visually more realistic virtual-real interaction experience. In particular, the embodiment of the application can be applied to online shopping: a picture containing the commodity for sale (such as clothing) is taken in advance as the real picture, so that the user can control the digital person to try the clothing on and thereby obtain a more convenient, intuitive and realistic try-on shopping experience, helping the user better understand and select commodities and improving the accuracy of and satisfaction with shopping decisions. In another embodiment, the method can be applied to virtual car test-drive scenarios, so that the user can understand different car models more intuitively and realistically, select the most interesting ones for further comparison and purchase, and obtain a more convenient, intuitive and realistic test-drive experience.
Optionally, in step S3, referring to fig. 4, the adjusting, based on the item attribute of the at least one target virtual item and the interaction, the item to be interacted in the real picture corresponding to the at least one target virtual item includes:
S31, determining a virtual interaction response corresponding to the at least one target virtual object based on the object attribute of the at least one target virtual object and the interaction action, wherein the virtual interaction response comprises at least one of a displacement response and a deformation response;
Illustratively, the determining of the virtual interaction response includes: the virtual interaction response corresponding to each target virtual article is determined by the article attribute of the target virtual article and the interaction action of the digital person on the target virtual article.
It will be appreciated that the virtual interaction response indicates the response the corresponding target virtual article gives, according to its own article attributes, when subjected to the digital person's interaction action: the displacement response indicates that the target virtual article needs to be displaced, and the deformation response indicates that it needs to be deformed.
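A hedged sketch of step S31 as a small dispatch over article attributes and the action's implied physics; the `action_kinds` labels are assumptions introduced for illustration:

```python
def virtual_interaction_responses(item, action_kinds):
    """Step S31: pair article attributes with the interaction action.

    `action_kinds` is an assumed set of labels, e.g. {"displace", "deform"}
    for "picking up by hand"; a throw pillow then yields both responses,
    while a cup (no elastic attribute) yields only the displacement response.
    """
    responses = []
    if "displacement" in item.attributes and "displace" in action_kinds:
        responses.append("displacement response")
    if "elastic" in item.attributes and "deform" in action_kinds:
        responses.append("deformation response")
    return responses
```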
And S32, adjusting the object to be interacted corresponding to the at least one target virtual object in the real picture at least based on the virtual interaction response.
Illustratively, the adjusting, based at least on the virtual interaction response, the object to be interacted with in the real picture corresponding to the at least one target virtual object includes: and aiming at each target virtual article, adjusting the article to be interacted corresponding to the target virtual article in the real picture at least based on the virtual interaction response corresponding to the target virtual article.
It should be noted that, in the embodiment of the present application, the virtual interaction response is the response of the target virtual article to the digital person's interaction action it receives. Accordingly, since each target virtual article corresponds to an article to be interacted, the corresponding article to be interacted can be adjusted in the real picture according to each target virtual article's virtual interaction response.
According to the embodiment of the application, the virtual interaction response of the target virtual article is first determined from its article attribute and the corresponding interaction action (at this stage the response belongs to underlying computation and does not need to be displayed), and the determined virtual interaction response is then used to adjust the corresponding article to be interacted in the real picture (this is the visual stage), so that realistic interaction between the digital person and the article to be interacted is presented. Separating the computational judgment from the actual display, that is, first computing the virtual interaction response and only then displaying it, avoids unreasonable adjustments of the article to be interacted (for example, when the digital person grips a cup, displaying without this check might show the cup deforming before the computation has judged that a cup does not deform noticeably), thereby preventing unreasonable scenes, such as the digital person squeezing a cup out of shape, from appearing in the real picture and further enhancing the realism of the virtual-real interaction.
In an optional implementation manner, when the interaction is a displacement, the adjusting, based on the item attribute of the at least one target virtual item and the interaction, the item to be interacted corresponding to the at least one target virtual item in the real picture includes:
for each target virtual article, the following steps are performed:
if the object attribute of the target virtual object comprises a displacement attribute and the interaction action comprises a displacement action of the digital person on the target virtual object, determining a virtual interaction response corresponding to the target virtual object as a displacement response;
And when the virtual interaction response corresponding to the target virtual object is a displacement response, converting the motion trail of the target virtual object into real motion coordinates according to a preset conversion rule, and adjusting the position of the object to be interacted corresponding to the target virtual object in the real picture based on the real motion coordinates.
It will be understood that an interaction action is performed with respect to a target virtual article in the sense that the action comes into contact with the corresponding virtual article; a contacted virtual article is the target virtual article of that interaction action.
In the embodiment of the application, the corresponding virtual interaction response, namely the displacement response, can be accurately determined from the displacement action applied to each target virtual article having the displacement attribute. It can be understood that, when there are multiple target virtual articles, the displacement response corresponding to each can be determined by parallel computation, further improving the accuracy and timeliness of the virtual-real interaction.
For example, the items to be interacted corresponding to the target virtual items containing the displacement attribute may include: cup, throw pillow, desk and chair, wall hanging ornaments, clothes, etc.
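A minimal sketch of the displacement response described above, assuming the MVP conversion rule constructed later in steps S321-S324 and a column-vector convention (both assumptions): each point of the virtual motion trail is projected into real-picture pixel coordinates, which then drive the position adjustment of the article to be interacted.

```python
import numpy as np

def trail_to_real_coords(trail_points, mvp, width, height):
    """Project a virtual motion trail into real-picture pixel coordinates."""
    coords = []
    for p in trail_points:                         # p = (x, y, z) in the virtual scene
        clip = mvp @ np.array([*p, 1.0])           # apply the conversion rule
        ndc = clip[:3] / clip[3]                   # perspective divide
        u = (ndc[0] * 0.5 + 0.5) * width           # NDC x -> pixel column
        v = (1.0 - (ndc[1] * 0.5 + 0.5)) * height  # NDC y -> pixel row (flipped)
        coords.append((u, v))
    return coords
```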
In an optional implementation manner, when the interaction is deformation, the adjusting, based on the object attribute of the at least one target virtual object and the interaction, the object to be interacted corresponding to the at least one target virtual object in the real picture includes:
for each target virtual article, the following steps are performed:
If the object attribute of the target virtual object comprises an elastic attribute and the interaction action comprises a deformation action of the digital person on the target virtual object, determining that a virtual interaction response corresponding to the target virtual object is a deformation response;
And when the virtual interaction response corresponding to the target virtual object is a deformation response, determining a virtual deformation area of the target virtual object based on the object information of the target virtual object and the physical information corresponding to the interaction action, converting the virtual deformation area into a real deformation area through a preset conversion rule, and carrying out morphological adjustment on the object to be interacted corresponding to the target virtual object in the real picture according to the real deformation area.
For example, the object to be interacted corresponding to the target virtual object containing the elastic attribute may include: pillow, sofa, clothing, etc.
The physical information corresponding to an interaction action refers to the physical characteristics the action implies. For example, when the digital person picks up the cup by hand, "picking up by hand" causes the cup to be displaced; compare fig. 6 with fig. 7. Likewise, when the digital person sits on the sofa, the interaction action of sitting causes the sofa to deform; see fig. 10, where the left side shows the sofa before deformation and the right side after.
In the embodiment of the application, the corresponding virtual interaction response, namely the deformation response, can be accurately determined from the deformation action applied to each target virtual article having the elastic attribute. It can be understood that, when there are multiple target virtual articles, the deformation response corresponding to each can be determined by parallel computation, further improving the accuracy and timeliness of the virtual-real interaction.
It should be noted that, for the same target virtual article, the article attribute may include both a displacement attribute and an elastic attribute, which means that the corresponding article to be interacted can both be carried and displaced and deform in response to an external force. Similarly, the same interaction action of the digital person may involve at least one of the following:
1. A displacement action and/or a deformation action directed at each of a plurality of target virtual articles. For example, when a throw pillow is placed on a sofa and the digital person sits on the throw pillow, the sitting interaction action is simultaneously directed at the target virtual articles corresponding to the throw pillow and the sofa.
2. A displacement action and/or a deformation action directed at the same target virtual article. For example, if the digital person picks up the throw pillow by hand, then because the article attribute of the throw pillow includes both a displacement attribute and an elastic attribute, and the "picking up by hand" interaction action indicates both a displacement action and a deformation action, the virtual interaction response corresponding to the throw pillow's target virtual article comprises both a displacement response and a deformation response. If, however, the digital person picks up the cup by hand, the article attribute of the cup does not include the elastic attribute, so the virtual interaction response corresponding to the cup's target virtual article is only a displacement response.
Illustratively, the conversion rule may include:
1. If the virtual scene is a three-dimensional scene and the real picture is a two-dimensional picture, the mapping conversion rule between a three-dimensional image and a two-dimensional image involves projecting from three-dimensional space onto a two-dimensional plane, a process affected by several factors, including the camera's intrinsic and extrinsic parameters, the projection mode, and the illumination conditions.
The camera's extrinsic parameters describe the rotation and translation of the camera with respect to the world coordinate system; they comprise a rotation matrix and a translation vector, used in mapping points in three-dimensional space to two-dimensional image coordinates.
The projection mode determines how three-dimensional points are mapped onto the two-dimensional plane. Common projection modes include orthographic projection and perspective projection: orthographic projection keeps an object's proportions unchanged in all directions, while perspective projection better conveys distance relationships and a sense of depth.
Illumination conditions also affect the mapping between the three-dimensional image and the two-dimensional image: different illumination causes color and brightness changes on the image surface, affecting the accuracy of the mapping result.
In summary, in the embodiment of the application, when the virtual scene is a three-dimensional scene and the real picture is a two-dimensional picture, the corresponding conversion rule can be configured by setting the above parameters, so that changes in the virtual scene are reflected more truly in the real picture, improving the realism of the virtual-real interaction.
2. The created virtual scene can also be regarded as a virtual image distinct from the real picture. By establishing a mapping relationship between the original image (the virtual image) and the target image (the image representing the real picture), the coordinate position of any pixel of the original image after mapping to the target image, or of any pixel of the transformed target image after inverse mapping, can be calculated. This approach does not require the two images to have the same dimensions.
In an alternative embodiment, referring to fig. 5, the determination of the conversion rule includes the steps of:
s321, calculating a model transformation relation based on displacement parameters and shape parameters of the target virtual object in the virtual scene;
By way of example, the model transformation relationship may be a model transformation matrix. The displacement parameters include the displacement and rotation of the virtual article in the virtual scene, and the shape parameters include its scaling. The model transformation matrix M_model->world is determined by the following formula: M_model->world = T · R · S, where T represents the displacement of the virtual article in the virtual scene, R its rotation, and S its scaling. It should be understood that the model transformation relationship is used to transform the target virtual article under a specific coordinate system; it is therefore not limited to the model transformation matrix described in this embodiment, as long as such a transformation relationship can be expressed.
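A numpy sketch of M_model->world = T · R · S under a column-vector convention, with rotation restricted to the z-axis for brevity (both assumptions made for illustration):

```python
import numpy as np

def model_matrix(tx, ty, tz, angle_z, sx, sy, sz):
    T = np.eye(4); T[:3, 3] = [tx, ty, tz]         # displacement
    c, s = np.cos(angle_z), np.sin(angle_z)
    R = np.eye(4); R[:2, :2] = [[c, -s], [s, c]]   # rotation (z-axis only)
    S = np.diag([sx, sy, sz, 1.0])                 # scaling
    return T @ R @ S                               # M_model->world = T . R . S
```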
S322, determining a view transformation relation between the virtual scene and the real picture;
The view transformation relationship may be, for example, a view transformation matrix View: a transformation composed of a rotation and a translation, representing the rotation and translation of the camera relative to the world coordinate system, that takes points in three-dimensional space from the world coordinate system to the camera coordinate system. Through the view transformation relationship, a model in the virtual scene can be converted from the world coordinate system to the camera coordinate system and thus presented accurately in the camera view. In general, a coordinate system with the camera as its origin is constructed from the camera position, orientation and the like, and the model is converted into the view space in which the real picture lies. In this embodiment, the virtual scene is inferred by the artificial intelligence model with the camera as the origin, so both use the camera as the origin and the view transformation matrix View can be obtained directly.
S323, constructing a projection transformation relation based on the near clipping surface distance, the far clipping surface distance and the position information corresponding to the target virtual object;
It should be noted that, once the target virtual article is placed into the real picture, it can be regarded as lying within a view volume of the real picture, and the near clipping surface distance and the far clipping surface distance are important parameters that determine the extent of that view volume in computer graphics. The view volume is the three-dimensional region observed from the camera position, and its extent is bounded by the near clipping surface and the far clipping surface: the near clipping surface distance is the distance from the camera to the nearest plane of the view volume and determines its minimum extent, while the far clipping surface distance is the distance from the camera to the farthest plane of the view volume and determines its maximum extent. Only objects within this range are rendered.
The projection transformation relationship may be, for example, a projection transformation matrix, and the projection transformation matrix Projection is determined by the following formula:

Projection = [
    2n/(r-l),     0,            0,             0,
    0,            2n/(t-b),     0,             0,
    (r+l)/(r-l),  (t+b)/(t-b),  -(f+n)/(f-n),  -1,
    0,            0,            -2fn/(f-n),    0
]

where n is the near clipping surface distance of the view volume, f is its far clipping surface distance, l is the position coordinate of its left side, r the position coordinate of its right side, t the position coordinate of its top, and b the position coordinate of its bottom.
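Transcribed row by row, the formula above is the standard frustum projection in its column-major (OpenGL-style) layout; for the column-vector convention used in the surrounding sketches it would be transposed, which is noted here as an assumption:

```python
import numpy as np

def projection_matrix(n, f, l, r, t, b):
    """Frustum projection, a direct transcription of the formula above."""
    return np.array([
        [2*n/(r-l),    0.0,          0.0,           0.0],
        [0.0,          2*n/(t-b),    0.0,           0.0],
        [(r+l)/(r-l),  (t+b)/(t-b),  -(f+n)/(f-n), -1.0],
        [0.0,          0.0,          -2*f*n/(f-n),  0.0],
    ])
```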
S324, determining a conversion rule based on the model transformation relation, the view transformation relation and the projection transformation relation.
Illustratively, the conversion rule is the product of the model transformation relationship, the view transformation relationship and the projection transformation relationship. That is, the conversion rule MVP is determined by the following formula:
MVP = Model · View · Projection
where Model is the model transformation relationship, View is the view transformation relationship, and Projection is the projection transformation relationship.
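Putting steps S321-S324 together, a hedged end-to-end sketch using the `model_matrix` and `projection_matrix` helpers from the sketches above; the transpose and the multiplication order reconcile the text's MVP = Model · View · Projection with numpy's column-vector convention and are assumptions about intent:

```python
import numpy as np

M = model_matrix(0.5, 0.0, -2.0, 0.1, 1.0, 1.0, 1.0)                   # S321
V = np.eye(4)      # S322: camera at the origin, so View is the identity here
P = projection_matrix(n=0.1, f=100.0, l=-0.1, r=0.1, t=0.1, b=-0.1).T  # S323

MVP = P @ V @ M                                # S324, column-vector form
clip = MVP @ np.array([0.0, 0.0, 0.0, 1.0])    # project the article's origin
ndc = clip[:3] / clip[3]                       # normalized device coordinates
```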
In an optional implementation manner, the performing morphological adjustment on the object to be interacted corresponding to the target virtual object in the real picture according to the real deformation area includes:
Determining two endpoints of a maximum distortion distance in the real deformation region;
Calculating deformation parameters of points to be adjusted based on the two endpoints, boundary points in the real deformation area and the points to be adjusted on the object to be interacted corresponding to the target virtual object;
Determining color parameters of the point to be adjusted in the real picture;
and carrying out morphological adjustment on the object to be interacted corresponding to the target virtual object in the real picture based on the color parameter and the deformation parameter.
The calculating, based on the two endpoints, the boundary points of the real deformation region, and the point to be adjusted on the article to be interacted corresponding to the target virtual article, of the deformation parameters of the point to be adjusted comprises:
calculating the migration distance corresponding to the point to be adjusted based on the two endpoints, the boundary points of the real deformation region, and the point to be adjusted on the article to be interacted corresponding to the target virtual article, wherein the migration distance is the distance by which the point to be adjusted extends toward the target adjustment point along the direction of the maximum distortion distance, and the target adjustment point lies on the article to be interacted corresponding to the target virtual article;
and calculating the deformation parameters of the point to be adjusted using a Bézier curve, based on the migration distance.
For example, referring to fig. 8 and 9, (l, t) is the upper-left corner coordinate of the real deformation region, (r, b) is the lower-right corner coordinate, p1 and p2 are the two endpoints of the maximum distortion distance, and c1 is the point to be adjusted. The deformation parameters of the point to be adjusted can be calculated by the following steps:
1. determining a first distance between p1 and p2, a second distance from p1 to the left boundary of the real deformation region, and a third distance from c1 to the left boundary of the real deformation region;
2. dividing the third distance by the second distance and multiplying the result by the first distance to obtain the maximum distance C corresponding to the point to be adjusted c1;
3. substituting the maximum distance C into a Bézier curve to calculate the migration distance of the point to be adjusted c1;
4. extending the point to be adjusted c1 by the migration distance along the direction from endpoint p1 to endpoint p2 of the maximum distortion distance to reach the target adjustment point c2, so as to determine the deformation parameters.
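The four steps can be sketched as follows; since the text does not fix the Bézier curve's control points, a quadratic curve with zero end values is assumed, with `u` an assumed curve parameter:

```python
import numpy as np

def migration_distance(p1, p2, left_x, c1, u=0.5):
    """Steps 1-3: scale the maximum distortion distance down to point c1."""
    first = float(np.linalg.norm(np.subtract(p2, p1)))  # step 1: distance p1-p2
    second = p1[0] - left_x                             # p1 to the left boundary
    third = c1[0] - left_x                              # c1 to the left boundary
    C = third / second * first                          # step 2: max distance for c1
    return 2.0 * (1.0 - u) * u * C                      # step 3: Bezier(0, C, 0) at u

def target_point(p1, p2, c1, d):
    """Step 4: extend c1 by distance d along the p1 -> p2 direction, giving c2."""
    direction = np.subtract(p2, p1) / np.linalg.norm(np.subtract(p2, p1))
    return np.add(c1, d * direction)
```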
In an optional embodiment, before the adjusting, based on the item attribute of the at least one target virtual item and the interaction, the item to be interacted corresponding to the at least one target virtual item in the real picture, the method further includes:
Cutting out the object to be interacted corresponding to the at least one target virtual object in the real picture to obtain at least one local image;
modeling the object to be interacted corresponding to the at least one target virtual object based on the at least one local image to obtain at least one local model;
Updating the at least one target virtual article based on the at least one local model.
Specifically, in the embodiment of the application, the article to be interacted corresponding to each target virtual article can be cut out of the real picture to obtain a local image; each target virtual article is then modeled based on its local image to obtain a local model; and finally each local model is used to update the corresponding target virtual article. The update may, for example, replace the corresponding target virtual article with the local model to obtain a new target virtual article, or rebuild or repair the corresponding target virtual article based on the local model to improve its precision; the update mode is not particularly limited.
It can be appreciated that because the initial scene modeling produces a virtual scene modeled from the entire real picture, the modeling of all virtual articles can be completed quickly and efficiently; on the other hand, virtual articles constructed this way are not sufficiently fine and accurate. In the embodiment of the application, after the target virtual articles are determined, each of them can additionally be modeled separately, on top of the virtual scene obtained by the initial scene modeling, yielding target virtual articles of higher precision, improving the accuracy of the interaction between the interaction actions and the target virtual articles, and ultimately improving the realism of the virtual-real interaction. In addition, because only the target virtual articles need separate modeling while the whole real picture initially requires only low-accuracy modeling, this embodiment reduces the required computing power without affecting the technical effect.
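A coarse sketch of this two-stage refinement; `reconstruct_3d` stands in for any single-image 3D reconstruction model, and the `bbox_2d` and `mesh` fields on the proxy are assumptions introduced for illustration:

```python
def refine_target_items(real_picture, targets, reconstruct_3d):
    """Crop each target's region and rebuild a higher-precision local model."""
    for item in targets:
        x, y, w, h = item.bbox_2d                # assumed 2D region in the picture
        patch = real_picture[y:y + h, x:x + w]   # cut out the local image
        item.mesh = reconstruct_3d(patch)        # update the coarse proxy model
    return targets
```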
Correspondingly, the embodiment of the invention also provides a virtual-real interaction device based on the digital person, which can realize all the flows of the virtual-real interaction method based on the digital person provided by the embodiment.
Referring to fig. 2, a schematic structural diagram of an embodiment of a digital person-based virtual-real interaction device according to the present invention includes:
the scene modeling module 101 is configured to perform scene modeling on a real picture to obtain a virtual scene, where the virtual scene at least includes a plurality of virtual objects corresponding to a plurality of objects to be interacted in the real picture;
A target virtual article determining module 102, configured to determine at least one target virtual article that interacts with the digital person from the plurality of virtual articles based on the interaction of the digital person;
and the adjustment module 103 is configured to adjust an item to be interacted with, corresponding to the at least one target virtual item, in the real picture based on the item attribute of the at least one target virtual item and the interaction action.
In an alternative embodiment, when the interaction is a displacement, the adjustment module 103 is specifically configured to:
for each target virtual article, the following steps are performed:
if the object attribute of the target virtual object comprises a displacement attribute and the interaction action comprises a displacement action of the digital person on the target virtual object, determining a virtual interaction response corresponding to the target virtual object as a displacement response;
And when the virtual interaction response corresponding to the target virtual object is a displacement response, converting the motion trail of the target virtual object into real motion coordinates according to a preset conversion rule, and adjusting the position of the object to be interacted corresponding to the target virtual object in the real picture based on the real motion coordinates.
In an alternative embodiment, when the interaction is deformation, the adjustment module 103 is specifically configured to:
for each target virtual article, the following steps are performed:
If the object attribute of the target virtual object comprises an elastic attribute and the interaction action comprises a deformation action of the digital person on the target virtual object, determining that a virtual interaction response corresponding to the target virtual object is a deformation response;
And when the virtual interaction response corresponding to the target virtual object is a deformation response, determining a virtual deformation area of the target virtual object based on the object information of the target virtual object and the physical information corresponding to the interaction action, converting the virtual deformation area into a real deformation area through a preset conversion rule, and carrying out morphological adjustment on the object to be interacted corresponding to the target virtual object in the real picture according to the real deformation area.
In an alternative embodiment, the adjustment module 103 includes a conversion rule determining unit, where the conversion rule determining unit is configured to:
calculating a model transformation relation based on displacement parameters and shape parameters of the target virtual object in the virtual scene;
wherein the model transformation relationship is used to indicate the transformation of virtual articles between the virtual scene and the real picture;
determining a view transformation relationship between the virtual scene and the real picture;
Constructing a projection transformation relation based on the near clipping surface distance, the far clipping surface distance and the position information corresponding to the target virtual object;
and determining a conversion rule based on the model transformation relationship, the view transformation relationship and the projection transformation relationship.
In an alternative embodiment, the adjustment module 103 includes a morphology adjustment unit for:
Determining two endpoints of a maximum distortion distance in the real deformation region;
Calculating deformation parameters of points to be adjusted based on the two endpoints, boundary points in the real deformation area and the points to be adjusted on the object to be interacted corresponding to the target virtual object;
Determining color parameters of the point to be adjusted in the real picture;
and carrying out morphological adjustment on the object to be interacted corresponding to the target virtual object in the real picture based on the color parameter and the deformation parameter.
In an alternative embodiment, the digital person-based virtual-real interaction device further includes:
The cutting module is used for cutting out the object to be interacted corresponding to the at least one target virtual object in the real picture, respectively, to obtain at least one local image;
The local modeling module is used for respectively modeling the object to be interacted corresponding to the at least one target virtual object based on the at least one local image to obtain at least one local model;
and the target virtual article updating module is used for updating the at least one target virtual article based on the at least one local model.
The embodiment of the invention also provides computer equipment, which comprises a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, wherein the processor realizes the steps of the digital person-based virtual-real interaction method when executing the computer program.
Referring to fig. 3, the computer device of this embodiment includes: a processor 301, a memory 302 and a computer program stored in said memory 302 and executable on said processor 301, for example a virtual-real interactive program based on a digital person. The processor 301, when executing the computer program, implements the steps of the above-described embodiments of the digital person-based virtual-real interaction method, such as steps S1-S3 shown in fig. 1.
Illustratively, the computer program may be partitioned into one or more modules/units that are stored in the memory 302 and executed by the processor 301 to accomplish the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing the specified functions, which instruction segments describe the execution of the computer program in the computer device.
The computer equipment can be a desktop computer, a notebook computer, a palm computer, a cloud server and other computing equipment. The computer device may include, but is not limited to, a processor 301, a memory 302. It will be appreciated by those skilled in the art that the schematic diagram is merely an example of a computer device and is not limiting of the computer device, and may include more or fewer components than shown, or may combine some of the components, or different components, e.g., the computer device may also include input and output devices, network access devices, buses, etc.
The processor 301 may be a central processing unit (Central Processing Unit, CPU), or may be another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor 301 may be any conventional processor. The processor 301 is the control center of the computer device, connecting the various parts of the overall computer device through various interfaces and lines.
The memory 302 may be used to store the computer programs and/or modules, and the processor 301 implements the various functions of the computer device by running or executing the computer programs and/or modules stored in the memory 302 and invoking the data stored in the memory 302. The memory 302 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to the use of the device (such as audio data, a phonebook, etc.), and the like. In addition, the memory 302 may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, an internal memory, a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash card (Flash Card), at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
If the integrated modules/units of the computer device are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer readable storage medium. Based on such understanding, the present invention may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program, which may be stored in a computer readable storage medium; when executed by the processor 301, the computer program implements the steps of each method embodiment described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
The embodiment of the invention also provides a computer readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the digital person-based virtual-real interaction method.
The embodiment of the invention also provides a computer program product, which comprises computer instructions, wherein the computer instructions, when executed by a processor, implement the steps of the digital person-based virtual-real interaction method.
In summary, the embodiment of the invention has at least the following beneficial effects:
By adopting the embodiment of the invention, a virtual scene is obtained by performing scene modeling on a real picture, wherein the virtual scene at least comprises a plurality of virtual articles corresponding to a plurality of articles to be interacted with in the real picture; at least one target virtual article interacting with the digital person is determined from the plurality of virtual articles based on an interaction action of the digital person; and the article to be interacted with corresponding to the at least one target virtual article in the real picture is adjusted based on the article attribute of the at least one target virtual article and the interaction action. In this way, the actions of the digital person can accurately influence the real picture in real time, presenting a more realistic virtual-real interaction effect.
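Read procedurally, the three steps could be tied together along the following lines; this is a schematic sketch only, and model_scene, detect_interaction, adjust and the digital person interface are hypothetical stand-ins rather than components named by the application.

def virtual_real_interaction(frame, digital_person, model_scene, detect_interaction, adjust):
    # Step S1: scene modeling of the real picture yields a virtual scene
    # whose virtual articles mirror the articles to be interacted with.
    virtual_scene = model_scene(frame)
    action = digital_person.current_action()
    # Step S2: select the target virtual articles the digital person touches.
    targets = [a for a in virtual_scene.articles
               if detect_interaction(digital_person, a)]
    # Step S3: adjust the corresponding articles in the real picture based
    # on each article's attributes and the interaction action.
    for article in targets:
        frame = adjust(frame, article, article.attributes, action)
    return frame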
From the above description of the embodiments, it will be clear to those skilled in the art that the present invention may be implemented by means of software plus a necessary hardware platform, or, of course, entirely in hardware. Based on such understanding, all or part of the technical solution of the present invention that contributes to the prior art may be embodied in the form of a software product, which may be stored in a storage medium such as a ROM/RAM, a magnetic disk or an optical disk, and which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform the methods described in the embodiments, or in some parts of the embodiments, of the present invention.
While the foregoing is directed to the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that changes and modifications may be made without departing from the principles of the invention; such changes and modifications are also intended to fall within the scope of the invention.

Claims (10)

1. A digital person-based virtual-real interaction method, characterized by comprising the following steps:
performing scene modeling on a real picture to obtain a virtual scene, wherein the virtual scene at least comprises a plurality of virtual articles corresponding to a plurality of articles to be interacted with in the real picture;
determining at least one target virtual article interacting with the digital person from the plurality of virtual articles based on an interaction action of the digital person;
and adjusting the article to be interacted with corresponding to the at least one target virtual article in the real picture based on the article attribute of the at least one target virtual article and the interaction action.
2. The digital person-based virtual-real interaction method according to claim 1, wherein, when the interaction action is a displacement, the adjusting of the article to be interacted with corresponding to the at least one target virtual article in the real picture based on the article attribute of the at least one target virtual article and the interaction action comprises:
for each target virtual article, the following steps are performed:
if the article attribute of the target virtual article comprises a displacement attribute and the interaction action comprises a displacement action of the digital person on the target virtual article, determining that a virtual interaction response corresponding to the target virtual article is a displacement response;
and when the virtual interaction response corresponding to the target virtual article is a displacement response, converting the motion trail of the target virtual article into real motion coordinates according to a preset conversion rule, and adjusting the position of the article to be interacted with corresponding to the target virtual article in the real picture based on the real motion coordinates.
3. The digital person-based virtual-real interaction method according to claim 1, wherein, when the interaction action is a deformation, the adjusting of the article to be interacted with corresponding to the at least one target virtual article in the real picture based on the article attribute of the at least one target virtual article and the interaction action comprises:
for each target virtual article, the following steps are performed:
if the article attribute of the target virtual article comprises an elastic attribute and the interaction action comprises a deformation action of the digital person on the target virtual article, determining that a virtual interaction response corresponding to the target virtual article is a deformation response;
and when the virtual interaction response corresponding to the target virtual article is a deformation response, determining a virtual deformation region of the target virtual article based on the article information of the target virtual article and the physical information corresponding to the interaction action, converting the virtual deformation region into a real deformation region through a preset conversion rule, and performing morphological adjustment on the article to be interacted with corresponding to the target virtual article in the real picture according to the real deformation region.
4. The digital person-based virtual-real interaction method according to claim 2 or 3, wherein the determination of the conversion rule comprises the following steps:
calculating a model transformation relation based on displacement parameters and shape parameters of the target virtual article in the virtual scene;
determining a view transformation relation between the virtual scene and the real picture;
constructing a projection transformation relation based on the near clipping plane distance, the far clipping plane distance and the position information corresponding to the target virtual article;
and determining the conversion rule based on the model transformation relation, the view transformation relation and the projection transformation relation.
5. The digital person-based virtual-real interaction method according to claim 3, wherein the performing morphological adjustment on the article to be interacted with corresponding to the target virtual article in the real picture according to the real deformation region comprises:
determining two endpoints of the maximum distortion distance in the real deformation region;
calculating a deformation parameter for each point to be adjusted based on the two endpoints, the boundary points of the real deformation region and the point to be adjusted on the article to be interacted with corresponding to the target virtual article;
determining a color parameter of the point to be adjusted in the real picture;
and performing morphological adjustment on the article to be interacted with corresponding to the target virtual article in the real picture based on the color parameter and the deformation parameter.
6. The digital person-based virtual-real interaction method according to any one of claims 1-3, wherein, before the adjusting of the article to be interacted with corresponding to the at least one target virtual article in the real picture based on the article attribute of the at least one target virtual article and the interaction action, the method further comprises:
cropping out the article to be interacted with corresponding to the at least one target virtual article in the real picture to obtain at least one local image;
modeling the article to be interacted with corresponding to the at least one target virtual article based on the at least one local image to obtain at least one local model;
and updating the at least one target virtual article based on the at least one local model.
7. A digital person-based virtual-real interaction device, comprising:
the scene modeling module is used for performing scene modeling on a real picture to obtain a virtual scene, wherein the virtual scene at least comprises a plurality of virtual articles corresponding to a plurality of articles to be interacted with in the real picture;
the target virtual article determining module is used for determining at least one target virtual article interacting with a digital person from the plurality of virtual articles based on an interaction action of the digital person;
and the adjustment module is used for adjusting the article to be interacted with corresponding to the at least one target virtual article in the real picture based on the article attribute of the at least one target virtual article and the interaction action.
8. A computer device, comprising a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, wherein the processor implements the digital person-based virtual-real interaction method of any one of claims 1-6 when executing the computer program.
9. A computer storage medium, having stored thereon a computer program which, when executed by a processor, implements the digital person-based virtual-real interaction method of any one of claims 1-6.
10. A computer program product, comprising computer instructions which, when executed by a processor, implement the digital person-based virtual-real interaction method of any one of claims 1-6.
CN202410397983.0A 2024-04-03 2024-04-03 Virtual-real interaction method, device, equipment, medium and program product based on digital person Pending CN118244939A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410397983.0A CN118244939A (en) 2024-04-03 2024-04-03 Virtual-real interaction method, device, equipment, medium and program product based on digital person

Publications (1)

Publication Number Publication Date
CN118244939A (en) 2024-06-25

Family

ID=91555963

Country Status (1)

Country Link
CN (1) CN118244939A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination