CN114821675B - Object processing method and system and processor - Google Patents
- Publication number
- CN114821675B (application CN202210745674.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- model
- biological
- limb
- trunk
- Prior art date
- Legal status: Active (the status listed is an assumption and is not a legal conclusion; no legal analysis has been performed)
Classifications
- G — Physics; G06 — Computing; calculating or counting
- G06V — Image or video recognition or understanding: G06V40/00, G06V40/10 (recognition of biometric, human-related or animal-related patterns; human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands); G06V40/70 (multimodal biometrics, e.g. combining information from different biometric modalities); G06V10/20, G06V10/26 (image preprocessing; segmentation of patterns in the image field, detection of occlusion); G06V10/70, G06V10/77, G06V10/774 (pattern recognition or machine learning; feature spaces and data reduction, e.g. PCA, ICA or SOM; generating sets of training patterns, e.g. bagging or boosting); G06V10/82 (recognition using neural networks)
- G06N — Computing arrangements based on specific computational models: G06N3/00, G06N3/02 (computing arrangements based on biological models; neural networks); G06N3/04 (architecture, e.g. interconnection topology); G06N3/08 (learning methods)
- G06T — Image data processing or generation, in general: G06T19/00 (manipulating 3D models or images for computer graphics); G06T19/006 (mixed reality)
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Biomedical Technology (AREA)
- Data Mining & Analysis (AREA)
- Human Computer Interaction (AREA)
- Life Sciences & Earth Sciences (AREA)
- Databases & Information Systems (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Mathematical Physics (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses an object processing method, system, and processor. The method includes: acquiring an original image of a biological object; reconstructing the original image to obtain a biological model of the biological object, wherein the biological model is used to simulate the avatar of the biological object in a virtual world; respectively driving a plurality of part models in the biological model to execute the dynamic effect information matched with each part model, wherein the dynamic effect information represents the visual dynamic effect produced by the driven part model; and fusing the driven biological model into scene material of the virtual world to obtain a target moving image, wherein the target moving image represents the dynamic effect result presented by the biological model in the virtual world. By means of three-dimensional modeling, model driving, and other means, the method solves the technical problem that simulating an object yields a poor effect.
Description
Technical Field
The invention relates to the field of computers, in particular to a method, a system and a processor for processing an object.
Background
At present, object processing schemes usually adopt fixed scene animations: the character image is overly cartoonish and stiff, carries no information about the user, and the object cannot be fused with the scene, so the effect of simulating the object is poor. No effective solution to these problems has yet been proposed.
Disclosure of Invention
The embodiments of the invention provide an object processing method, system, and processor that use at least three-dimensional modeling and model driving to solve the technical problem that simulating an object yields a poor effect.
According to an aspect of an embodiment of the present invention, there is provided a method for processing an object, including: acquiring an original image of a biological object; reconstructing the original image to obtain a biological model of the biological object, wherein the biological model is used for simulating to obtain the virtual image of the biological object in the virtual world; respectively driving a plurality of part models in the biological model to execute dynamic effect information matched with the part models, wherein the dynamic effect information is used for representing visual dynamic effects generated by the driven part models; and fusing the driven biological model into a scene material of the virtual world to obtain a target dynamic image, wherein the target dynamic image is used for representing that the biological model presents a dynamic effect result in the virtual world.
According to another aspect of the embodiments of the present invention, there is also provided a method for processing an object, including: displaying an original image of a biological object on a presentation screen of a Virtual Reality (VR) device or an Augmented Reality (AR) device; reconstructing the original image to obtain a biological model of the biological object, wherein the biological model is used for simulating in VR equipment or AR equipment to obtain an avatar of the biological object in the virtual world; respectively driving a plurality of part models in the biological model to execute dynamic effect information matched with the part models, wherein the dynamic effect information is used for representing visual dynamic effects generated by the driven part models; fusing the driven biological model into a scene material of the virtual world to obtain a target moving image, wherein the target moving image is used for representing that the biological model presents a moving effect result in the virtual world; and driving the VR equipment or the AR equipment to display the target moving image.
According to another aspect of the embodiments of the present invention, there is also provided a method for processing an object, including: responding to an image input instruction acting on an operation interface of virtual reality VR equipment or augmented reality AR equipment, and displaying an original image of a biological object on the operation interface; and responding to an image generation instruction acting on the operation interface, driving the VR device or the AR device to display a target moving image of the biological object on the operation interface, wherein the target moving image is obtained by fusing the driven biological model into scene materials of the virtual world, and is used for representing that the biological model presents a moving effect result in the virtual world, respectively driving a plurality of part models in the biological model to execute moving effect information matched with the part models, the moving effect information is used for representing visual moving effects generated by the driven part models, and the biological model is obtained by reconstructing an original image and is used for simulating to obtain a virtual image of the biological object in the virtual world.
According to another aspect of the embodiments of the present invention, there is also provided a method for processing an object, including: acquiring an original image of a biological object by calling a first interface, wherein the first interface comprises a first parameter, and a parameter value of the first parameter is the original image; reconstructing the original image to obtain a biological model of the biological object, wherein the biological model is used for simulating to obtain a virtual image of the biological object in a virtual world; respectively driving a plurality of part models in the biological model to execute dynamic effect information matched with the part models, wherein the dynamic effect information is used for representing visual dynamic effects generated by the driven part models; fusing the driven biological model into a scene material of the virtual world to obtain a target moving image, wherein the target moving image is used for representing that the biological model presents a dynamic effect result in the virtual world; and outputting the target moving image by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the target moving image.
The embodiment of the invention also provides a system for processing the object. The object processing system includes: the system comprises a first processing end and a second processing end, wherein the first processing end is a cloud end or a mobile terminal, the second processing end is a cloud algorithm background or a mobile terminal algorithm background, and the first processing end is used for acquiring an original image of a biological object; the second processing end is used for reconstructing the original image to obtain a biological model of the biological object, wherein the biological model is used for simulating to obtain the virtual image of the biological object in the virtual world; respectively driving a plurality of part models in the biological model to execute dynamic effect information matched with the part models, wherein the dynamic effect information is used for representing visual dynamic effects generated by the driven part models; fusing the driven biological model into a scene material of the virtual world through a cloud algorithm module, and rendering to obtain a target moving image, wherein the target moving image is used for representing that the biological model presents a moving effect result in the virtual world; and outputting the target moving image.
An embodiment of the present invention further provides another object processing system, including: the system comprises a server and Virtual Reality (VR) equipment or Augmented Reality (AR) equipment, wherein the server is used for acquiring an original image of a biological object; reconstructing the original image to obtain a biological model of the biological object, wherein the biological model is used for simulating in VR equipment or AR equipment to obtain an avatar of the biological object in the virtual world; respectively driving a plurality of part models in the biological model to execute dynamic effect information matched with the part models, wherein the dynamic effect information is used for representing visual dynamic effects generated by the driven part models; and the VR equipment or the AR equipment is used for receiving the driven biological model sent by the server and fusing the driven biological model into the scene material of the virtual world to obtain a target dynamic image, wherein the target dynamic image is used for representing that the biological model presents a dynamic effect result in the virtual world.
An embodiment of the present invention further provides an apparatus for processing an object, including: an acquisition unit configured to acquire an original image of a biological object; the first reconstruction unit is used for reconstructing the original image to obtain a biological model of the biological object, wherein the biological model is used for simulating and obtaining the virtual image of the biological object in the virtual world; the first driving unit is used for respectively driving a plurality of part models in the biological model to execute dynamic effect information matched with the part models, wherein the dynamic effect information is used for representing visual dynamic effects generated by the driven part models; and the first fusion unit is used for fusing the driven biological model into a scene material of the virtual world to obtain a target dynamic image, wherein the target dynamic image is used for representing that the biological model presents a dynamic effect result in the virtual world.
An embodiment of the present invention further provides an apparatus for processing an object, including: a display unit for displaying an original image of a biological object on a presentation screen of a Virtual Reality (VR) device or an Augmented Reality (AR) device; the second reconstruction unit is used for reconstructing the original image to obtain a biological model of the biological object, wherein the biological model is used for simulating in VR equipment or AR equipment to obtain an avatar of the biological object in the virtual world; the second driving unit is used for respectively driving a plurality of part models in the biological model to execute dynamic effect information matched with the part models, wherein the dynamic effect information is used for representing visual dynamic effects generated by the driven part models; the second fusion unit is used for fusing the driven biological model into a scene material of the virtual world to obtain a target dynamic image, wherein the target dynamic image is used for representing that the biological model presents a dynamic effect result in the virtual world; and the third driving unit is used for driving the VR equipment or the AR equipment to display the target moving image.
An embodiment of the present invention further provides an apparatus for processing an object, including: a first display unit configured to display an original image of a biological object on an operation interface in response to an image input instruction acting on the operation interface of the virtual reality VR device or the augmented reality AR device; and the second display unit is used for responding to an image generation instruction acting on the operation interface, driving the VR device or the AR device to display a target moving image of the biological object on the operation interface, wherein the target moving image is obtained by fusing the driven biological model into scene materials of the virtual world, and is used for representing that the biological model presents a moving effect result in the virtual world, respectively driving a plurality of part models in the biological model to execute moving effect information matched with the part models, the moving effect information is used for representing visual moving effects generated by the driven part models, and the biological model is obtained by reconstructing an original image and is used for simulating to obtain a virtual image of the biological object in the virtual world.
An embodiment of the present invention further provides an apparatus for processing an object, including: the biological object detection device comprises a first calling unit, a second calling unit and a third calling unit, wherein the first calling unit is used for obtaining an original image of a biological object by calling a first interface, the first interface comprises a first parameter, and a parameter value of the first parameter is the original image; the third reconstruction unit is used for reconstructing the original image to obtain a biological model of the biological object, wherein the biological model is used for simulating to obtain the virtual image of the biological object in the virtual world; the fourth driving unit is used for respectively driving a plurality of part models in the biological model to execute dynamic effect information matched with the part models, wherein the dynamic effect information is used for representing visual dynamic effects generated by the driven part models; the third fusion unit is used for fusing the driven biological model into a scene material of the virtual world to obtain a target moving image, wherein the target moving image is used for representing that the biological model presents a dynamic effect result in the virtual world; and the output unit is used for outputting the target moving image by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the target moving image.
The embodiment of the invention also provides a processor. The processor is configured to execute a program, wherein the program executes the method for processing the object according to the embodiment of the present invention.
In an embodiment of the present invention, an original image of a biological object is acquired; the original image is reconstructed to obtain a biological model of the biological object, wherein the biological model is used to simulate the avatar of the biological object in a virtual world; a plurality of part models in the biological model are respectively driven to execute the dynamic effect information matched with each part model, wherein the dynamic effect information represents the visual dynamic effect produced by the driven part model; and the driven biological model is fused into scene material of the virtual world to obtain a target moving image, wherein the target moving image represents the dynamic effect result presented by the biological model in the virtual world. That is to say, the embodiment of the present invention reconstructs the original image of the biological object to generate a corresponding biological model and, by driving the biological model to execute the matching dynamic effect information, fuses it into the virtual world. This avoids a monotonous, rigid simulation of the object, achieves the technical effect of improving the simulation of the object, and thereby solves the technical problem that simulating an object yields a poor effect.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention and do not constitute a limitation of the invention. In the drawings:
fig. 1 is a block diagram of a hardware configuration of a computer terminal (or mobile device) for implementing a processing method of an object according to an embodiment of the present invention;
fig. 2 is a block diagram of a hardware configuration of a virtual reality device for implementing a processing method of an object according to an embodiment of the present invention;
FIG. 3 is a flow chart of a method of processing an object according to an embodiment of the invention;
FIG. 4 is a flow chart of another method of processing an object according to an embodiment of the invention;
FIG. 5 is a flow chart of another method of processing objects according to an embodiment of the present invention;
FIG. 6 is a flow chart of another method of processing objects according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an object processing system according to an embodiment of the invention;
FIG. 8 is a schematic diagram of a virtual human in a live anchor scenario, according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of an avatar according to an embodiment of the present invention;
FIG. 10 is a flow diagram of a processing framework for an object according to an embodiment of the invention;
FIG. 11 is a schematic diagram of a face reconstruction method according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of a 3D point cloud based strategy for optimizing geometric topology, according to an embodiment of the invention;
FIG. 13 is a schematic diagram of a method of reconstructing an image of a human body according to an embodiment of the present invention;
FIG. 14 is a schematic diagram of a method of building a universal apparel database in accordance with an embodiment of the present invention;
FIG. 15 is a flow chart of a method of implementing head motion effects according to an embodiment of the present invention;
FIG. 16 is a schematic illustration of a method of effecting body movement in accordance with an embodiment of the present invention;
FIG. 17 is a schematic diagram of a method of scene fusion according to an embodiment of the invention;
FIG. 18 is a schematic diagram of a simulation effect after a scene is fused with a virtual human according to an embodiment of the present invention;
FIG. 19 is a schematic view of another object processing apparatus according to an embodiment of the present invention;
FIG. 20 is a schematic view of another object processing apparatus according to an embodiment of the present invention;
FIG. 21 is a schematic view of another object processing apparatus according to an embodiment of the present invention;
FIG. 22 is a schematic view of another object processing apparatus according to an embodiment of the present invention;
fig. 23 is a block diagram of a computer terminal according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some terms appearing in the description of the embodiments of the present application are explained as follows:
the visual intelligence open platform (AIME) performs digital reconstruction and content creation on an input personal picture (face + body) and can be used to execute the object processing method provided by the embodiments of the invention;
an avatar (Avatar), which may refer to the network virtual image of an object;
a Three-Dimensional Mesh (3D Mesh for short), which may be a Three-Dimensional Mesh topology result output by geometrically modeling the head and body of a human being, and mainly represents the outline information of the body;
a parameterized human body model (Skinned Multi-Person Linear model, SMPL for short), which can be used for arbitrary human body modeling and animation driving (a minimal skinning sketch is given after this glossary);
a texture map (UV map), which may refer to a surface texture map obtained by unwrapping a three-dimensional surface onto two dimensions, mainly used to represent the texture information of a 3D reconstructed object;
the expression driver can be used for realizing the conversion drive of the facial expression of the virtual image by inputting voice or video and realizing the dynamic effect of the expression when the virtual image speaks;
the human body drive can be used for driving an avatar to generate various body postures by presetting a section of motion sequence so as to form a smooth human body dynamic video, wherein the motion sequence can be designed by a designer, can be extracted from a section of motion video through an algorithm, and can be captured through motion capture equipment;
scene fusion, which can be used for fusing an avatar with expression and action into a real physical scene, wherein the real physical scene can be a 3D virtual design scene or a real shooting scene;
a Generative Adversarial Network (GAN for short), which can be used to automatically learn the data distribution of an original set of real samples and establish a corresponding model;
a Neural Radiance Field (NeRF), which may be used to implicitly model a complex static scene through a neural network, or to synthesize an image from a new viewpoint given a series of images of the scene captured from known viewpoints;
intermediate file storage, a set of common protocols constructed on the basis of the rendering engine.
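For illustration only, the following minimal sketch shows the general idea behind a parameterized, skinnable body model of the SMPL kind: a template mesh is deformed by linear shape blend-shapes and posed by linear blend skinning. All array shapes, names, and the toy data are assumptions made for this sketch; it is not the SMPL implementation itself.

```python
import numpy as np

def skinned_body(template, shape_dirs, betas, joint_transforms, skin_weights):
    """Minimal SMPL-style forward pass (illustrative only).

    template:          (V, 3) rest-pose vertices
    shape_dirs:        (V, 3, B) linear shape blend-shape basis
    betas:             (B,) shape coefficients
    joint_transforms:  (J, 4, 4) per-joint transforms (relative to the rest pose)
    skin_weights:      (V, J) linear-blend-skinning weights (rows sum to 1)
    """
    # 1. Shape: add linear blend-shape offsets to the template.
    v_shaped = template + shape_dirs @ betas                                 # (V, 3)

    # 2. Pose: blend the joint transforms per vertex (linear blend skinning).
    v_hom = np.concatenate([v_shaped, np.ones((len(v_shaped), 1))], axis=1)  # (V, 4)
    per_vertex_T = np.einsum("vj,jab->vab", skin_weights, joint_transforms)  # (V, 4, 4)
    return np.einsum("vab,vb->va", per_vertex_T, v_hom)[:, :3]               # (V, 3)

# Toy usage with random data, only to show the shapes involved.
V, J, B = 100, 24, 10
rng = np.random.default_rng(0)
weights = np.abs(rng.normal(size=(V, J)))
weights /= weights.sum(axis=1, keepdims=True)
verts = skinned_body(
    template=rng.normal(size=(V, 3)),
    shape_dirs=rng.normal(size=(V, 3, B)) * 0.01,
    betas=rng.normal(size=B),
    joint_transforms=np.tile(np.eye(4), (J, 1, 1)),
    skin_weights=weights,
)
```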
Example 1
There is also provided, in accordance with an embodiment of the present invention, an embodiment of a method for processing an object. It should be noted that the steps illustrated in the flowcharts of the accompanying drawings may be performed in a computer system executing a set of computer-executable instructions, and that, although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in a different order.
The method provided by the first embodiment of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Fig. 1 shows a hardware configuration block diagram of a computer terminal (or mobile device) for implementing the object processing method. As shown in fig. 1, the computer terminal 10 (or mobile device 10) may include one or more processors 102 (shown as 102a, 102b, ..., 102n; these may include, but are not limited to, processing devices such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission module 106 for communication functions. In addition, it may also include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the bus), a network interface, a power source, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and is not intended to limit the structure of the electronic device. For example, the computer terminal 10 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
It should be noted that the one or more processors and/or other processing circuitry described above may be referred to generally herein as the "processing circuitry of the object". The processing circuitry of the object may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the processing circuitry of the object may be a single, stand-alone processing module, or incorporated in whole or in part into any of the other elements in the computer terminal 10 (or mobile device). As referred to in the embodiments of the application, the processing circuitry of the object may be controlled as a processor (e.g., for the selection of a variable resistance termination path connected to the interface).
The memory 104 may be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the object processing method in the embodiment of the present invention, and the processor executes various functional applications and data processing by executing the software programs and modules stored in the memory 104, that is, implementing the object processing method described above. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet via wireless.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 10 (or mobile device).
Fig. 2 is a block diagram of a hardware structure of a virtual reality apparatus for implementing a processing method of an object according to an embodiment of the present invention. As shown in fig. 2, the virtual reality device 204 is connected to the terminal 206, and the terminal 206 is connected to the server 202 via a network, and the virtual reality device 204 is not limited to: a virtual reality helmet, virtual reality glasses, a virtual reality all-in-one machine, etc., the terminal 206 is not limited to a PC, a mobile phone, a tablet computer, etc., the server 202 may be a server corresponding to a media file operator, and the network includes but is not limited to: a wide area network, a metropolitan area network, or a local area network.
Optionally, the virtual reality device 204 of this embodiment includes: a memory, a processor, and a transmission means. The memory is used for storing an application program, and the application program can be used for executing: acquiring an original image of a biological object; reconstructing the original image to obtain a biological model of the biological object, wherein the biological model is used to simulate the avatar of the biological object in the virtual world; respectively driving a plurality of part models in the biological model to execute the dynamic effect information matched with each part model, wherein the dynamic effect information represents the visual dynamic effect produced by the driven part model; and fusing the driven biological model into scene material of the virtual world to obtain a target moving image, wherein the target moving image represents the dynamic effect result presented by the biological model in the virtual world. By means such as three-dimensional modeling and model driving, this achieves the technical effect of improving the simulation of the object and solves the technical problem that simulating an object yields a poor effect.
The terminal of this embodiment may be configured to perform displaying an original image of a biological object on a presentation screen of a Virtual Reality VR (VR) device or an Augmented Reality AR (AR) device; reconstructing the original image to obtain a biological model of the biological object, wherein the biological model is used for simulating in VR equipment or AR equipment to obtain an avatar of the biological object in the virtual world; respectively driving a plurality of part models in the biological model to execute dynamic effect information matched with the part models, wherein the dynamic effect information is used for representing visual dynamic effects generated by the driven part models; fusing the driven biological model into a scene material of the virtual world to obtain a target moving image, wherein the target moving image is used for representing that the biological model presents a dynamic effect result in the virtual world; and driving the VR equipment or the AR equipment to display the target moving image.
Optionally, the eye-tracking Head-Mounted Display (HMD) and the eye-tracking module of the virtual reality device 204 of this embodiment are the same as those of the above embodiments; that is, the screen in the HMD is used to display real-time images, and the eye-tracking module in the HMD is used to obtain the real-time movement path of the user's eyes. The terminal of this embodiment acquires the position and motion information of the user in real three-dimensional space through a tracking system, and calculates the three-dimensional coordinates of the user's head in the virtual three-dimensional space and the user's field-of-view orientation in that space.
The hardware structure block diagram shown in fig. 2 can be taken as an exemplary block diagram of not only the AR/VR device (or mobile device) but also the server.
Under the above operating environment, the present application provides a method for processing an object as shown in fig. 3. It should be noted that the processing method of the object of the embodiment can be executed by the mobile terminal of the embodiment shown in fig. 1.
Fig. 3 is a flowchart of a method of processing an object according to an embodiment of the present invention. As shown in fig. 3, the method may include the steps of:
in step S302, an original image of a biological subject is acquired.
In the technical solution provided by step S302 of the present invention, the acquiring of the original image of the biological object may be acquiring the original image of the biological object through an image acquisition device, where the biological object may be an object to be subjected to image processing, for example, a user, the original image may be an image of the biological object, a group of images of the biological object, or a video of the biological object, and the video may include an action behavior of the biological object, which is merely illustrated herein and is not limited specifically.
And S304, reconstructing the original image to obtain a biological model of the biological object.
In the technical solution provided by step S304 of the present invention, after the original image of the biological object is obtained, the original image is reconstructed to obtain a biological model of the biological object, wherein the biological model is used for simulating and obtaining an avatar of the biological object in the virtual world.
In this embodiment, the obtained original image may be processed to reconstruct it and generate a biological model of the biological object. The image processing may include image alignment, semantic segmentation, pose correction, image cropping, compression enhancement, and the like. The reconstruction processes the original image data to obtain a three-dimensional representation; it may reconstruct a 3D human mesh or another geometric representation of the human body, for example a volume-rendering-based file format generated by neural rendering. The biological model may be a complete 3D human asset and may be used to simulate the avatar of the biological object in the virtual world, where the virtual world may be the real physical world or a 3D virtual environment, and the avatar may be the biological object's virtual image in that world.
Optionally, when the original image is reconstructed, each part of the biological object in the original image may be reconstructed, or clothing of the biological object in the original image may be reconstructed, and the original image may be reconstructed by acquiring three-dimensional mesh information of the original image, or may be reconstructed by identifying key points of the image, which is only described as an example and is not limited specifically herein.
And step S306, respectively driving a plurality of part models in the biological model to execute dynamic effect information matched with the part models.
In the technical solution provided in step S306 of the present invention, after reconstructing the original image to obtain a biological model of the biological object, respectively driving a plurality of portion models in the biological model to execute dynamic effect information matched with the portion models, where the dynamic effect information is used to represent visual dynamic effects generated by the driven portion models.
In the embodiment, a plurality of part models in the obtained biological model are driven, so that the plurality of part models in the biological model execute dynamic effect information matched with the corresponding part models, wherein the dynamic effect information is used for representing visual dynamic effects generated by the driven part models. After the plurality of part models in the biological model are driven, a driven biological model is obtained, which is the biological model after the dynamic effect information is executed, and may be a virtual human executing the dynamic effect information, the visual dynamic effect may be a dynamic effect displayed after the biological model is driven, and may include, but is not limited to, a head dynamic effect, a body action, a surrounding dynamic effect and the like, and the surrounding dynamic effect may include dynamic effects of clothing, hair and the like.
Alternatively, the biological model may be driven in different ways to drive different part models of the biological model, for example, a head model of the biological model may be driven by processing audio and/or video, or all part models included in the biological model may be driven simultaneously.
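As a hedged illustration of driving a head model from audio, the sketch below maps per-frame audio features to expression blend-shape coefficients and applies them to a neutral face mesh. The `audio2expr` callable and all array shapes are assumptions made for this sketch; in practice a trained network would fill that role.

```python
import numpy as np

def drive_head_model(neutral_face, expr_basis, audio_features, audio2expr):
    """Drive a face mesh from audio, frame by frame (illustrative sketch).

    neutral_face:   (V, 3) neutral face vertices
    expr_basis:     (K, V, 3) expression blend-shape basis
    audio_features: (T, F) per-frame audio features (e.g. MFCCs)
    audio2expr:     callable (F,) -> (K,) mapping one audio frame to
                    expression coefficients (a trained model in practice)
    """
    frames = []
    for feat in audio_features:
        coeffs = audio2expr(feat)                            # (K,) expression weights
        offsets = np.tensordot(coeffs, expr_basis, axes=1)   # (V, 3) blended offsets
        frames.append(neutral_face + offsets)                # deformed mesh for this frame
    return np.stack(frames)                                  # (T, V, 3)

# Toy stand-in for a trained audio-to-expression network.
K, F = 52, 40
W = np.random.default_rng(1).normal(size=(K, F)) * 0.01
frames = drive_head_model(
    neutral_face=np.zeros((468, 3)),
    expr_basis=np.random.default_rng(2).normal(size=(K, 468, 3)) * 0.01,
    audio_features=np.random.default_rng(3).normal(size=(25, F)),
    audio2expr=lambda feat: W @ feat,
)
```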
And step S308, fusing the driven biological model into a scene material of the virtual world to obtain a target moving image.
In the technical solution provided in step S308 of the present invention, after the plurality of part models in the biological model are respectively driven to execute the dynamic effect information matched with the part models, the driven biological models are fused to the scene materials of the virtual world, so as to obtain the target dynamic image.
In this embodiment, the driven biological model may be fused with scene material of the virtual world to generate the target moving image. The virtual world may be a real physical scene, including a 3D virtually designed scene, a real-shot scene, and the like; the scene material of the virtual world may be a scene picture, a segment of scene footage, or a 3D scene model; and the target moving image represents the dynamic effect result presented by the biological model in the virtual world.
It should be noted that the original image input in the reconstruction in this embodiment may be a picture of a biological object or a video of the biological object, and is not limited herein.
It should be noted that the driven picture or video of the biological model may be fused with the scene material to obtain the target moving image, the final output effect of the target moving image may be output as an integrated video, and the VR device or the AR device may be driven to display the integrated video including the target moving image.
Through the above steps S302 to S308, an original image of a biological object is acquired; the original image is reconstructed to obtain a biological model of the biological object, wherein the biological model is used to simulate the avatar of the biological object in the virtual world; a plurality of part models in the biological model are respectively driven to execute the dynamic effect information matched with each part model, wherein the dynamic effect information represents the visual dynamic effect produced by the driven part model; and the driven biological model is fused into scene material of the virtual world to obtain a target moving image, wherein the target moving image represents the dynamic effect result presented by the biological model in the virtual world. That is to say, the embodiment of the present invention reconstructs the original image of the biological object to generate a corresponding biological model and, by driving the biological model to execute the matching dynamic effect information, fuses it into the virtual world. This avoids a monotonous, rigid simulation of the object, achieves the purpose of presenting the object's dynamic effect result in the virtual world, and thereby achieves the technical effect of improving the simulation of the object and solves the technical problem that simulating an object yields a poor effect.
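Purely to make the flow of steps S302 to S308 concrete, the following sketch wires the four steps together with injected back-ends; `reconstruct`, `drive` and `fuse` are hypothetical callables standing in for the reconstruction, driving, and fusion components described below, and are not part of the patent's disclosure.

```python
def process_object(original_image, motion_sequence, scene_material,
                   reconstruct, drive, fuse):
    """Orchestration of steps S302-S308 with injected back-ends.

    reconstruct(image)            -> biological model        (step S304)
    drive(model, motion_sequence) -> driven model frames     (step S306)
    fuse(frames, scene_material)  -> target moving image     (step S308)
    """
    biological_model = reconstruct(original_image)            # rebuild a 3D avatar from the image
    driven_frames = drive(biological_model, motion_sequence)  # apply matching dynamic effects
    return fuse(driven_frames, scene_material)                # composite into the virtual world
```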
The above method of this embodiment is further described below.
As an alternative embodiment, the original image includes images of a plurality of parts of the biological object, and step S304 of reconstructing the original image to obtain the biological model of the biological object includes: extracting part images of the plurality of parts of the biological object from the original image, wherein a part image comprises at least one of the following: a head image, a torso image, and a limb image; reconstructing the part images of the plurality of parts to obtain part models of the plurality of parts of the biological object, wherein a part model comprises at least one of the following: a head model for simulating the head of the biological object, a torso model for simulating the torso of the biological object, and a limb model for simulating a limb of the biological object; and combining the part models of the plurality of parts to generate the biological model.
In this embodiment, the method includes extracting part images of a plurality of parts of the biological object from the original image, reconstructing the extracted part images to obtain part models of the plurality of parts of the biological object, and combining the part models to generate the biological model corresponding to the biological object in the original image. The original image includes images of the plurality of parts of the biological object; a part image may include at least one of: a head image, a torso image, and a limb image; and a part model may include at least one of: a head model for simulating the head of the biological object, a torso model for simulating the torso of the biological object, and a limb model for simulating a limb of the biological object. These are given only as examples and are not specifically limited here.
Optionally, a part image of the biological object may correspond to a part model of the same part; for example, the head image of the biological object may correspond to the head model of the biological object's head.
As an alternative example, combining the part models of the multiple parts may be combining the whole part models of the multiple parts, or combining the part models of the multiple parts, for example, combining the whole head model of the head of the biological object, the upper half part of the torso model of the torso of the biological object, and the upper half part of the limb model of the limb of the biological object to obtain the upper body part of the biological model of the biological object.
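A minimal sketch of such combination, assuming each part model is a triangle mesh given as a vertex array and a face-index array; merging then amounts to concatenating the vertices and offsetting the face indices. Seam stitching and welding, which a real pipeline would need, are ignored here.

```python
import numpy as np

def combine_part_models(parts):
    """Merge several part meshes (head, torso, limbs) into one biological model.

    parts: list of (vertices (Vi, 3), faces (Fi, 3)) tuples.
    Face indices of each part are offset so they keep pointing at their own
    vertices after concatenation.
    """
    all_verts, all_faces, offset = [], [], 0
    for verts, faces in parts:
        all_verts.append(verts)
        all_faces.append(faces + offset)   # shift indices into the merged vertex array
        offset += len(verts)
    return np.concatenate(all_verts), np.concatenate(all_faces)

# Toy example: merge a head mesh with a torso mesh.
head = (np.random.rand(50, 3), np.random.randint(0, 50, (80, 3)))
torso = (np.random.rand(60, 3), np.random.randint(0, 60, (100, 3)))
verts, faces = combine_part_models([head, torso])   # 110 vertices, 180 faces
```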
As an alternative embodiment, in step S306, respectively driving the plurality of part models in the biological model to execute the dynamic effect information matched with the part models includes at least one of the following: driving the head model to execute head dynamic effect information matched with the head model, wherein the dynamic effect information comprises the head dynamic effect information, and the head dynamic effect information is used for representing the visual effect generated by the driven head model; driving the trunk model to execute trunk dynamic effect information matched with the trunk model, wherein the dynamic effect information comprises the trunk dynamic effect information, and the trunk dynamic effect information is used for representing the visual effect generated by the driven trunk model; and driving the limb model to execute limb dynamic effect information matched with the limb model, wherein the dynamic effect information comprises the limb dynamic effect information, and the limb dynamic effect information is used for representing the visual effect generated by the driven limb model.
In this embodiment, based on the acquired biological model of the biological object and the dynamic effect information matched with it, the head model is driven to execute the head dynamic effect information matched with the head model, the trunk model is driven to execute the trunk dynamic effect information matched with the trunk model, and the limb model is driven to execute the limb dynamic effect information matched with the limb model, so that the different part models in the biological model each execute the part dynamic effect information matched with them. The dynamic effect information may include the head dynamic effect information, the trunk dynamic effect information, and the limb dynamic effect information: the head dynamic effect information may be used to represent the visual effect generated by the driven head model; the trunk dynamic effect information may be used to represent the visual effect generated by the driven trunk model; the limb dynamic effect information may be used to represent the visual effect generated by the driven limb model and may include limb muscle dynamic effects, clothing dynamic effects, and the like; and the trunk dynamic effect information and the limb dynamic effect information may together constitute the body dynamic effect information of the biological model.
As an optional implementation manner, the method further includes: acquiring head movement effect information based on the media information of the object, wherein the media information is associated with the visual movement effect of the head model, and the visual movement effect of the head model comprises at least one of the following objects: visual motor effects of facial expressions, visual motor effects of head gestures, and visual motor effects of head accessories.
In this embodiment, the head movement information of the biological object is obtained based on the input media information of the biological object, where the media information is associated with the visual movement of the head model and may include input audio and target face video, which are only used for illustration and are not specifically limited herein; the visual dynamic of the head model may include at least one of the following of the object: visual motor effects of facial expressions, visual motor effects of head gestures, and visual motor effects of head accessories.
For example, the head movement effect information of the biological model can be obtained by rendering, masking and fusing the extracted media information through the neural network model.
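One hedged way to read "rendering, masking and fusing" is the per-frame chain sketched below, where `expr_net`, `renderer` and `mask_net` are hypothetical stand-ins for the neural network models mentioned above; their signatures are assumptions made for this illustration.

```python
import numpy as np

def head_motion_from_media(frames, audio_features, expr_net, renderer, mask_net):
    """Sketch of a render-mask-fuse chain for head motion (all nets assumed).

    frames:         (T, H, W, 3) original face video frames, float in [0, 1]
    audio_features: (T, F) per-frame audio features
    expr_net:       (F,) -> expression/pose coefficients
    renderer:       (frame, coeffs) -> (H, W, 3) re-rendered head
    mask_net:       (frame) -> (H, W) soft head mask in [0, 1]
    """
    out = []
    for frame, feat in zip(frames, audio_features):
        coeffs = expr_net(feat)                          # facial expression + head pose
        render = renderer(frame, coeffs)                 # neural re-render of the head
        mask = mask_net(frame)[..., None]                # where to trust the render
        out.append(mask * render + (1 - mask) * frame)   # fuse the render into the frame
    return np.stack(out)
```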
As an optional embodiment, the method further includes: obtaining trunk dynamic effect information based on trunk posture information of the object and position information in the virtual world, wherein the trunk posture information is associated with visual dynamic effect of the trunk model, and the visual dynamic effect of the trunk model comprises at least one of the following information of the trunk model: visual dynamic effect of the trunk and visual dynamic effect of the trunk accessory, wherein the position information is used for representing the position of the biological model in the virtual world; and/or acquiring limb movement effect information based on the limb posture information and the position information of the object, wherein the limb posture information is associated with the visual movement effect of the limb model, and the visual movement effect of the limb model comprises at least one of the following information of the limb model: the visual dynamic effect of limbs and the visual dynamic effect of limb accessories.
In this embodiment, based on the obtained trunk posture information and limb posture information of the biological object and the determined position information in the virtual world, trunk dynamic effect information and limb dynamic effect information of the biological model may be obtained, respectively, wherein the trunk posture information is associated with the visual dynamic effect of the trunk model, and the visual dynamic effect of the trunk model may include at least one of the following information of the trunk model: visual effect of the trunk and visual effect of the trunk accessory; the limb posture information is associated with the visual movement effect of the limb model, and the visual movement effect of the limb model can comprise at least one of the following information of the limb model: visual movement effect of limbs and visual movement effect of limb accessories; the location information may be used to represent the location of the biological model in the virtual world, and may include a world coordinate system or a relative coordinate system.
Alternatively, the body posture information and the limb posture information of the biological object may be acquired by generating Four-Dimensional (4D) information of the body posture and the limb posture of the biological object.
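The sketch below illustrates, under assumed array shapes, how a preset 4D motion sequence (per-frame joint rotations plus a root position in the virtual world) could be replayed on a body model; `pose_body` is a hypothetical skinning routine such as the SMPL-style sketch given earlier in the glossary section.

```python
import numpy as np

def drive_body(model, pose_sequence, root_positions, pose_body):
    """Play a preset motion sequence on a body model (illustrative sketch).

    model:          the reconstructed biological model (opaque here)
    pose_sequence:  (T, J, 3) joint rotations per frame (trunk + limb posture)
    root_positions: (T, 3) position of the model in world coordinates per frame
    pose_body:      callable (model, pose (J, 3)) -> (V, 3) posed vertices
    """
    posed_frames = []
    for pose, root in zip(pose_sequence, root_positions):
        verts = pose_body(model, pose)          # apply trunk/limb posture
        posed_frames.append(verts + root)       # place the model in the virtual world
    return np.stack(posed_frames)               # (T, V, 3) "4D" result
```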
As an alternative embodiment, step S308 of fusing the driven biological model into scene material of the virtual world to obtain the target moving image includes: fusing the driven biological model into a scene image or video of the virtual world to obtain the target moving image; for example, the driven biological model may be added at the image position corresponding to the position information in the scene image or video.
In this embodiment, an image position corresponding to the position information in the scene image or video of the virtual world is determined, and the driven biological model is added to the image position, so that the driven biological model is fused with the scene image or video of the virtual world, thereby obtaining a target moving image of the biological object, wherein the position information can be used for representing a specific position in the scene image or video.
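A minimal compositing sketch of this fusion step, assuming the driven model has already been rendered to an RGB image with an opacity mask and that the position information has been converted to a top-left pixel coordinate. These assumptions belong to the sketch, not to the patent.

```python
import numpy as np

def paste_into_scene(scene, render, alpha, top_left):
    """Composite a rendered avatar into a scene image at a given position.

    scene:    (H, W, 3) scene image of the virtual world, float in [0, 1]
    render:   (h, w, 3) rendered driven biological model
    alpha:    (h, w) opacity of the render (0 = background, 1 = avatar)
    top_left: (row, col) image position derived from the position information
    """
    out = scene.copy()
    r, c = top_left
    h, w = alpha.shape
    region = out[r:r + h, c:c + w]
    out[r:r + h, c:c + w] = alpha[..., None] * render + (1 - alpha[..., None]) * region
    return out

# One frame of the target moving image (toy data only).
frame = paste_into_scene(np.zeros((480, 640, 3)), np.ones((120, 60, 3)),
                         np.ones((120, 60)), top_left=(300, 290))
```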
As an alternative embodiment, reconstructing a part image of a plurality of parts to obtain a part model of the plurality of parts of the biological object includes: identifying the head image to obtain the geometric information of the head; a head model is generated based on the geometric information of the head and the texture map of the head.
In this embodiment, geometric information of the head of the biological object is obtained by recognizing a head image of the biological object, and a texture map of the head is generated based on the obtained geometric information of the head, so as to generate a corresponding head model according to the reconstructed head image, where the geometric information may be geometric topological structure information or three-dimensional mesh information, and the texture map may be a two-dimensional graph of the surface of the biological object and may include a skin texture and a hair map of the biological object, which is merely illustrated and not specifically limited herein.
Optionally, the head model may be further fine-tuned after the head model is generated, wherein the fine-tuning may include geometric adjustment and texture adjustment, the geometric adjustment may be used to adjust the shape of the face, such as the height of the nose bridge, the fat and thin face, the width of the head, and the like, and the texture adjustment may be used to adjust the display effect of the head, such as skin texture, hair patch, and the like.
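The geometric adjustment can be pictured as named shape sliders added on top of the reconstructed mesh. The sketch below assumes the per-slider offset directions (e.g. for nose height or face width) are authored by a designer or learned offline; that assumption is part of this illustration rather than a detail of the patent.

```python
import numpy as np

def adjust_head_geometry(head_verts, adjustment_dirs, sliders):
    """Fine-tune a reconstructed head mesh with named shape sliders.

    head_verts:      (V, 3) reconstructed head vertices
    adjustment_dirs: dict name -> (V, 3) per-vertex offset direction
                     (e.g. "nose_height", "face_width")
    sliders:         dict name -> float weight, roughly in [-1, 1]
    """
    verts = head_verts.copy()
    for name, weight in sliders.items():
        verts += weight * adjustment_dirs[name]   # add the weighted offset field
    return verts

# Toy usage: raise the nose bridge slightly and make the face a bit narrower.
V = 500
dirs = {"nose_height": np.zeros((V, 3)), "face_width": np.zeros((V, 3))}
tuned = adjust_head_geometry(np.zeros((V, 3)), dirs,
                             {"nose_height": 0.3, "face_width": -0.2})
```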
As an alternative embodiment, reconstructing a part image of a plurality of parts to obtain a part model of the plurality of parts of the biological object includes: identifying the trunk image to obtain key points of the trunk of the biological object; determining geometric information of the trunk based on the key points of the trunk; generating a torso model based on the geometric information of the torso and the skin texture of the torso; and/or identifying the limb image to obtain key points of the limb of the biological object; determining geometric information of the limb based on the key points of the limb; a limb model is generated based on the geometric information of the limb and the skin texture of the limb.
In the embodiment, a trunk image is identified, key points in the trunk image are extracted, geometric information of a trunk is determined based on the key points of the trunk, and skin textures of the trunk are acquired at the same time, so that a trunk model of the biological object is generated; identifying a limb image, extracting key points in the limb image, determining geometric information of a limb based on the key points of the limb, and acquiring skin texture of the limb, thereby generating a limb model of the biological object, wherein the key points may include 2D (Two-Dimensional) key points and 3D key points of a part image, which are merely illustrative and not specifically limited herein; determining geometric information based on the keypoints may employ a deep learning approach.
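As a hedged example of turning 2D keypoints into geometric information, the sketch below derives normalised limb segment lengths from a dictionary of detected keypoints, which could then inform the shape parameters of a body model. The keypoint names and skeleton connectivity are assumptions, and the detector itself (typically a deep pose-estimation network) is not shown.

```python
import numpy as np

# Assumed keypoint naming and skeleton connectivity for this sketch.
LIMB_SEGMENTS = [("shoulder_l", "elbow_l"), ("elbow_l", "wrist_l"),
                 ("hip_l", "knee_l"), ("knee_l", "ankle_l")]

def limb_proportions(keypoints_2d):
    """Derive rough geometric information (segment lengths) from 2D keypoints.

    keypoints_2d: dict name -> (x, y) pixel coordinates of one detected person.
    Returns segment lengths normalised by the shoulder-to-hip distance.
    """
    torso = np.linalg.norm(np.subtract(keypoints_2d["shoulder_l"],
                                       keypoints_2d["hip_l"]))
    return {f"{a}-{b}": np.linalg.norm(np.subtract(keypoints_2d[a],
                                                   keypoints_2d[b])) / torso
            for a, b in LIMB_SEGMENTS}
```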
As an alternative embodiment, the original image includes a trunk accessory image associated with the trunk image and/or a limb accessory image associated with the limb image, and reconstructing the part images of the plurality of parts to obtain the part models of the plurality of parts of the biological object includes: reconstructing the trunk image and the accessory texture of the trunk accessory image to obtain the trunk model; and/or reconstructing the limb image and the accessory texture of the limb accessory image to obtain the limb model.
In this embodiment, in addition to reconstructing the head image, the trunk image and the limb image, the trunk accessory image and the limb accessory image may also be reconstructed, wherein the trunk accessory image and the limb accessory image may be clothing images.
In this embodiment, the torso model may be generated by reconstructing a torso image and an accessory texture of the torso accessory image; generating a limb model by reconstructing the limb image and the accessory texture of the limb accessory image, wherein the original image comprises a torso accessory image associated with the torso image and/or a limb accessory image associated with the limb image.
Optionally, the accessory texture of the trunk accessory image and/or the accessory texture of the limb accessory image may be reconstructed separately to generate a trunk accessory model and/or a limb accessory model, or the trunk accessory image and the limb accessory image may be reconstructed in a combined manner to generate an overall accessory model, where the accessory model may include the trunk accessory model and the limb accessory model, or may include a general accessory database.
As an optional implementation manner, the method further includes: acquiring an original scene image or video of a Virtual Reality (VR) scene or an Augmented Reality (AR) scene; and reconstructing the virtual reality scene or the augmented reality scene based on the original scene image or the video to obtain a scene image or a video of the virtual world.
In this embodiment, an original scene image or video corresponding to a VR scene or an AR scene is acquired, the VR scene or the AR scene is reconstructed according to the original scene image or video, and the scene material of the virtual world is generated, wherein the scene reconstruction may be implemented through a neural network model.
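As a hedged illustration, the sketch below shows one possible shape of the scene material produced by such a reconstruction (per-frame depth and semantic maps); the network itself is omitted and the outputs are placeholders, since the embodiment does not fix a particular architecture.
```python
# Sketch: turning original scene frames into scene material for later fusion.
import numpy as np

def reconstruct_scene(frames: list) -> dict:
    """frames: list of (H, W, 3) images from the VR/AR scene video.
    Returns scene material the target moving image can be composited into."""
    material = {"depth": [], "semantics": []}
    for frame in frames:
        h, w = frame.shape[:2]
        material["depth"].append(np.ones((h, w), np.float32))     # placeholder depth map
        material["semantics"].append(np.zeros((h, w), np.int64))  # placeholder labels
    return material
```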
In the embodiment of the invention, the part images of different parts in the original image of the biological object are identified, the information of the part images of the different parts is extracted, and the part images are reconstructed to generate corresponding part models; meanwhile, scene reconstruction is performed on the original scene image or video of the virtual world to obtain scene materials of the virtual world, and the biological model of the biological object and the scene materials of the virtual world are fused to generate the target moving image. That is to say, in the embodiment of the present invention, the part images of different parts of the biological object are reconstructed to obtain the corresponding part models, and the part models are fused with the scene materials of the virtual world obtained based on the scene reconstruction to generate the target moving image, so as to achieve the purpose of presenting the dynamic effect result of the biological model in the virtual world, thereby achieving the technical effect of improving the effect of simulating the object, and further solving the technical problem of a poor effect of simulating the object.
The embodiment of the invention also provides another object processing method from the application scene side.
Fig. 4 is a flowchart of another object processing method according to an embodiment of the present invention. As shown in fig. 4, the method may include the steps of:
step S402, an original image of a biological object is displayed on a presentation screen of a virtual reality VR device or an augmented reality AR device.
In the technical solution provided by step S402 of the present invention, an original image of a biological object is acquired, and the original image is displayed on a display screen of a virtual reality VR device or an augmented reality AR device.
Step S404, the original image is reconstructed to obtain a biological model of the biological object.
In the technical solution provided by step S404 of the present invention, the original image is reconstructed, and a biological model of the biological object is determined based on the VR device or the AR device, wherein the biological model is used for obtaining an avatar of the biological object in the virtual world through simulation in the VR device or the AR device.
In step S406, a plurality of part models in the biological model are driven to execute dynamic effect information matched with the part models.
In the technical solution provided in step S406 of the present invention, the obtained biological model is driven to execute the dynamic effect information matched with the biological model, wherein the dynamic effect information is used to represent the visual dynamic effect generated by the driven part model.
And step S408, fusing the driven biological model into a scene material of the virtual world to obtain a target moving image.
In the technical solution provided by step S408 of the present invention, the driven biological model is fused with the scene material of the virtual world to generate the target moving image, where the target moving image is used to represent that the biological model presents a dynamic effect result in the virtual world.
And step S410, driving the VR equipment or the AR equipment to display the target moving image.
In the technical solution provided by step S410 of the present invention, the VR device or the AR device is driven, and the determined target moving image is displayed through the VR device or the AR device.
Alternatively, driving the VR device or the AR device may be sending a driving signal to the VR device or the AR device.
For example, after the target moving image is determined, the client may actively transmit a driving signal, or the server/terminal may transmit a driving signal, and in response to the driving signal, the display interface of the VR device or the AR device displays the determined target moving image.
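A toy sketch of this drive-signal exchange is given below; the message format and the display interface are invented for illustration and are not defined by this embodiment.
```python
# Sketch: sending a driving signal and reacting to it on the VR/AR device side.
import json

def make_drive_signal(target_video_uri: str) -> bytes:
    """Client/server side: package the driving signal as a small message."""
    return json.dumps({"type": "display_target_moving_image",
                       "uri": target_video_uri}).encode("utf-8")

def on_drive_signal(payload: bytes, display) -> None:
    """Device side: in response to the driving signal, show the target moving image.
    `display` is a hypothetical handle to the VR/AR presentation screen."""
    msg = json.loads(payload.decode("utf-8"))
    if msg.get("type") == "display_target_moving_image":
        display.show(msg["uri"])
```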
In the embodiment of the invention, the original image displayed on the display picture of the virtual reality VR device or the augmented reality AR device is reconstructed, the biological model of the biological object is firstly determined, then a plurality of part models in the biological model are respectively driven to execute the dynamic effect information matched with the part models, the driven biological model is fused into the scene materials of the virtual world to obtain the target dynamic image, and finally the VR device or the AR device is driven to display the target dynamic image, so that the technical effect of improving the simulation effect of the object is realized, and the technical problem of poor simulation effect of the object is solved.
The embodiment of the invention also provides another object processing method from the man-machine interaction side.
Fig. 5 is a flowchart of another object processing method according to an embodiment of the present invention. As shown in fig. 5, the method may include the steps of:
step S502, in response to an image input instruction acting on an operation interface of the virtual reality VR device or the augmented reality AR device, displays an original image of the biological object on the operation interface.
In the technical solution provided in step S502 of the present invention, the image input instruction acts on an operation interface of the virtual reality VR device or the augmented reality AR device, and the operation interface displays the acquired original image of the biological object on the operation interface in response to the instruction.
In this embodiment, the image input instruction may be used to input an original image of a biological subject, for example, by issuing an instruction to input an original image of a biological subject on the operation interface, and in response to the instruction, implementing the input of the original image of the biological subject.
And step S504, responding to the image generation instruction acted on the operation interface, and driving the VR device or the AR device to display the target moving image of the biological object on the operation interface.
In the technical solution provided in step S504 of the present invention, the image generation instruction acts on the operation interface of the virtual reality VR device or the augmented reality AR device, and in response to the instruction the operation interface displays the target moving image of the biological object. The target moving image is obtained by fusing the driven biological model into the scene material of the virtual world and is used to represent that the biological model presents a dynamic effect result in the virtual world; the plurality of part models in the biological model are respectively driven to execute dynamic effect information matched with the part models, the dynamic effect information is used to represent the visual dynamic effect generated by the driven part models, and the biological model is obtained by reconstructing the original image and is used to obtain the avatar of the biological object in the virtual world through simulation.
In this embodiment, the image generation instruction may be used to generate a target moving image of the biological subject, for example, by issuing an instruction to generate a target moving image of the biological subject on the operation interface, and in response to the instruction, generating the target moving image of the biological subject is achieved.
In the embodiment of the invention, in response to an image input instruction acting on an operation interface of a virtual reality VR device or an augmented reality AR device, an original image of a biological object is displayed on the operation interface; and in response to an image generation instruction acting on the operation interface, the VR device or the AR device is driven to display a target moving image of the biological object on the operation interface. The target moving image is obtained by fusing the driven biological model into the scene materials of the virtual world and is used for representing that the biological model presents a dynamic effect result in the virtual world; the plurality of part models in the biological model are respectively driven to execute dynamic effect information matched with the part models, the dynamic effect information is used for representing the visual dynamic effect generated by the driven part models, and the biological model is obtained by reconstructing the original image and is used for simulating to obtain the avatar of the biological object in the virtual world. That is to say, the embodiment of the present invention displays the original image of the biological object and the target moving image of the biological object on the operation interface based on the image input instruction and the image generation instruction acting on the operation interface, so as to achieve the purpose of presenting the dynamic effect result of the biological model in the virtual world, thereby achieving the technical effect of improving the effect of simulating the object, and further solving the technical problem of a poor effect of simulating the object.
The embodiment of the invention also provides another object processing method from the interactive side.
Fig. 6 is a flowchart of another object processing method according to an embodiment of the present invention. As shown in fig. 6, the method may include the steps of:
step S602, acquiring an original image of a biological object by calling a first interface, where the first interface includes a first parameter, and a parameter value of the first parameter is the original image.
In the technical solution provided by step S602 of the present invention, the first interface may be an interface for performing data interaction between the server and the user side, and the original image of the biological object may be transmitted to the first interface as a first parameter of the first interface, so as to achieve the purpose of acquiring the original image.
Step S604, reconstructing the original image to obtain a biological model of the biological object.
In the technical solution provided by step S604 of the present invention, the biological model is used to obtain an avatar of the biological object in the virtual world.
Step S606, respectively driving a plurality of part models in the biological model to execute dynamic effect information matched with the part models, wherein the dynamic effect information is used for representing visual dynamic effects generated by the driven part models.
And step S608, fusing the driven biological model into a scene material of the virtual world to obtain a target dynamic image, wherein the target dynamic image is used for representing that the biological model presents a dynamic effect result in the virtual world.
And step S610, outputting the target moving image by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the target moving image.
In the technical solution provided in step S610 of the present invention, the second interface may be an interface for performing data interaction between the server and the user side, and the server may, by calling the second interface, enable the terminal device to output the target moving image as a parameter of the second interface, so as to achieve the purpose of presenting the dynamic effect result of the biological model in the virtual world.
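The following sketch only illustrates the two data-interaction interfaces of steps S602 and S610 under stated assumptions; the function names, return value and transport are hypothetical, since the embodiment only specifies the parameters (the original image going in and the target moving image coming out).
```python
# Sketch: the first and second interfaces on the server side.
import numpy as np

def first_interface(original_image: np.ndarray) -> str:
    """First interface: the first parameter carries the original image.
    Returns a hypothetical task id for the reconstruction/driving/fusion job."""
    return "task-0001"

def second_interface(target_moving_image: np.ndarray) -> None:
    """Second interface: the second parameter carries the target moving image,
    which the terminal device on the user side then displays."""
    ...  # e.g. push the rendered frames/video to the user side

# hypothetical client flow
task_id = first_interface(np.zeros((512, 512, 3), np.uint8))
```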
In the embodiment of the invention, an original image of a biological object is acquired by calling a first interface, wherein the first interface comprises a first parameter, and a parameter value of the first parameter is the original image; reconstructing the original image to obtain a biological model of the biological object, wherein the biological model is used for simulating to obtain the virtual image of the biological object in the virtual world; respectively driving a plurality of part models in the biological model to execute dynamic effect information matched with the part models, wherein the dynamic effect information is used for representing visual dynamic effects generated by the driven part models; fusing the driven biological model into a scene material of the virtual world to obtain a target moving image, wherein the target moving image is used for representing that the biological model presents a moving effect result in the virtual world; and outputting the target moving image by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the target moving image. That is to say, in the embodiment of the present invention, the original image of the biological object is obtained by calling the first interface, the original image is reconstructed, the biological model of the biological object is determined, then, the plurality of part models in the biological model are respectively driven to execute the dynamic effect information matched with the part models, and the driven biological model is fused to the scene material of the virtual world, so as to obtain the target dynamic image, thereby achieving the technical effect of improving the effect of simulating the object, and further solving the technical problem of poor effect of simulating the object.
The embodiment of the invention also provides a system for processing the object. It should be noted that the processing system of the object can be used to execute the processing method of the object.
Fig. 7 is a schematic diagram of an object processing system according to an embodiment of the present invention, and as shown in fig. 7, the object processing system 70 may include: a server 702 and a virtual reality VR device or an augmented reality AR device 704.
A server 702 for acquiring an original image of a biological subject; reconstructing the original image to obtain a biological model of the biological object, wherein the biological model is used for simulating in VR equipment or AR equipment to obtain an avatar of the biological object in the virtual world; and respectively driving a plurality of part models in the biological model to execute dynamic effect information matched with the part models, wherein the dynamic effect information is used for representing visual dynamic effects generated by the driven part models.
In the above-mentioned server 702 according to the embodiment of the present invention, the server may be configured to obtain an original image of a biological object, reconstruct the original image to obtain a biological model of the biological object, and respectively drive a plurality of part models in the biological model to execute dynamic effect information matched with the part models, where the biological model is used to obtain an avatar of the biological object in a virtual world in a simulation mode in a VR device or an AR device, and the dynamic effect information is used to represent a visual dynamic effect generated by the driven part models.
And the virtual reality VR equipment or augmented reality AR equipment 704 is used for receiving the driven biological model issued by the server and fusing the driven biological model into the scene material of the virtual world to obtain a target dynamic image, wherein the target dynamic image is used for representing that the biological model presents a dynamic effect result in the virtual world.
In the virtual reality VR device or the augmented reality AR device 704 according to the embodiment of the present invention, the driven biological model output by the server 702 may be received, and the driven biological model is fused with the scene material of the virtual world to obtain the target moving image, where the target moving image may be used to represent that the biological model presents a dynamic effect result in the virtual world.
In the embodiment of the invention, the obtained original image is reconstructed by the server to obtain the biological model, the biological model is driven to execute the matched dynamic effect information, and the driven biological model and the scene materials of the virtual world are fused by the virtual reality VR device or the augmented reality AR device to obtain the target dynamic image, so that the purpose of presenting the dynamic effect result of the biological model in the virtual world is achieved, the technical effect of improving the effect of simulating the object is realized, and the technical problem of poor effect of simulating the object is solved.
The embodiment of the invention also provides a system for processing the object. It should be noted that the processing system of the object can be used to execute the processing method of the object.
The processing system of the object of this embodiment may include: the system comprises a first processing end and a second processing end, wherein the first processing end is a cloud end or a mobile terminal, and the second processing end is a cloud algorithm background or a mobile terminal algorithm background.
The first processing end is used for acquiring an original image of the biological object.
In this embodiment, the original image of the biological object is input to the first processing end as an input parameter, for example, to the cloud or the mobile terminal, and then is transmitted to the second processing end, for example, to the cloud algorithm background or the mobile terminal algorithm background. The original image may be a picture or a video, and is not limited herein.
The second processing end is used for reconstructing the original image to obtain a biological model of the biological object, wherein the biological model is used for simulating to obtain the virtual image of the biological object in the virtual world; respectively driving a plurality of part models in the biological model to execute dynamic effect information matched with the part models, wherein the dynamic effect information is used for representing visual dynamic effects generated by the driven part models; fusing the driven biological model into a scene material of the virtual world through a cloud algorithm module, and rendering to obtain a target moving image, wherein the target moving image is used for representing that the biological model presents a moving effect result in the virtual world; and outputting the target moving image.
In this embodiment, after receiving the original image of the biological object, the cloud algorithm background or the mobile terminal algorithm background may reconstruct the original image through the algorithm module to obtain the biological model of the biological object. Optionally, in this embodiment, in addition to reconstructing the head image, the trunk image and the limb image in the original image, the accessory image may also be reconstructed, where the accessory image may be a clothing image. The biological model is used to simulate and obtain an avatar of the biological object in the virtual world; a plurality of part models in the biological model are further respectively driven to execute dynamic effect information matched with the part models, where the dynamic effect information is used to represent the visual dynamic effects generated by the driven part models. The output results of the algorithm modules are then integrated through the cloud algorithm module, and the driven biological model is fused into the scene materials of the virtual world; for example, a picture of the biological model and the scene materials of the virtual world may be fused and rendered to obtain the target moving image, where the target moving image may be output in the form of an integral video. Optionally, the target moving image can be displayed in an AR scene or a VR scene, so that a technical effect of improving the effect of simulating an object is achieved, and the technical problem of a poor effect of simulating the object is solved.
It should be noted that for simplicity of description, the above-mentioned method embodiments are shown as a series of combinations of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention or portions thereof contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 2
The following further introduces a preferred implementation of the above method of this embodiment, and specifically describes a method for generating a virtual human.
In the related art, for virtual human schemes, on the one hand, mainstream industry designs output individual modules independently, the effect is difficult to reach a hyper-realistic level, and certain compromises on the effect are made in order to guarantee performance; on the other hand, the related art focuses on specific application functions. The embodiment of the invention mainly relates to the research and development of a set of general techniques, which can cover the application range that is currently mainstream in the market, has higher requirements on the technical effect, and fuses a wider range of techniques, including 2D and 3D, images and videos, GAN and NeRF, traditional visual algorithms, artificial intelligence algorithms and the like, and integrates a plurality of visual-field technologies such as detection, recognition, segmentation, generation, enhancement, key points and completion.
In the related art, in a virtual human scheme in a live broadcast scene, the fusion of a virtual human and a scene is usually realized through a cartoon image. The driving effect of the upper half of the body is good, but the cartoon image differs greatly from a real person, does not carry personal identity information and therefore cannot be applied to personalized scenarios, and the background is a single image and cannot achieve a multi-angle effect, so that the problems of low reality and low integrity of object simulation exist.
In another related art, fig. 8 is a schematic diagram of a virtual human in a live-person anchor scene according to an embodiment of the present invention, and as shown in fig. 8, data collection and model training are usually performed on a specified character image based on a 2D video to generate a corresponding virtual human; however, the virtual human can only be generated from a frontal view, cannot be edited a second time, and cannot produce a full-body action effect, so that the simulation of the object is limited.
In another related technology, fig. 9 is a schematic diagram of a virtual human according to an embodiment of the present invention. As shown in fig. 9, the scheme sets the scene animation as a fixed post-production animation, reconstructs the human body as a whole, and achieves a certain dynamic effect based on some preset templates, for example, a driving effect of facial expressions; however, the scheme cannot fuse the virtual human into a real scene, and the face reconstruction effect is difficult to reach a commercial standard, so that there is a problem of a poor effect of simulating the object.
In another related art, a virtual human scheme in a conference scene has a plurality of different virtual backgrounds or virtual conference scenes, but a human image is not virtualized, and thus, there is a problem that an object cannot be simulated.
In another related technology, for facial expression driving, a target video or voice is usually used to extract 2D/3D key points in the video, or the video or voice is converted into expression basis coefficients of a Three-Dimensional Morphable Face Model (3DMM for short) to drive facial assets. However, 2D information loses too much three-dimensional information, while 3D information loses considerable detail of expression effects such as those of the face, so that the driving effect cannot reach a good, natural fluency, and thus there is a problem of a poor effect of simulating the object.
In another related art, for face model driving, a multi-modal driving mode based on 3D face reconstruction and expression coefficient regression is generally used, but information loss is easily generated in the process of learning intermediate feature characterization, which may cause a problem of mismatching of driving signals and face deformation, and at the same time, the driving method can only be used for driving fixed head pose and face region, and thus, there is a problem of large limitation on object simulation.
In order to solve the above problems, this embodiment provides an object processing method, which drives the reconstruction of an original image through a designed object processing framework, and ensures a more natural fusion effect through algorithms such as collision detection in the process of fusing into a real scene based on a fusion strategy after fine scene understanding, so that a user can complete the creation of a personal digital avatar only by uploading one image or one video.
For example, the scheme can be applied to a conference scene: by creating a digital avatar bound to a personal Identity (ID), a better personalized image can be edited, and the digital avatar can further be fused into a suitable scene to attend a conference as a proxy.
For another example, the scheme can be applied to a speech scene: a corresponding digital avatar is generated by simulating the image of a real person, and the avatar is driven to deliver the speech as a proxy.
For another example, the scheme can be applied to the personal care field: by creating a personal digital avatar as a remote agent, relatives, friends and the like can be cared for, reminded to take medicine on time, engaged in daily communication, and so on.
For another example, the scheme can be applied to constructing digital images for the disabled: by creating a personal digital avatar agent, the deaf-mute can be enabled to speak, and a complete body can be constructed for a person with a limb disability.
FIG. 10 is a flow diagram of a processing framework for an object according to an embodiment of the invention, and as shown in FIG. 10, the method may include the steps of:
step S1002 reconstructs the head image.
One or a group of high-definition face images are input, the faces in the images are analyzed and identified, the geometric information of the head is predicted through a deep learning model, and a three-dimensional Mesh of the face is output.
On the basis of the face, the head Mesh information is further completed, and a texture map is then generated on the basis of the acquired geometric information, wherein the head Mesh information may be a digital asset designed in advance, or may be a head Mesh structure predicted from a photo of the user through various algorithmic means.
Fig. 11 is a schematic diagram of a face reconstruction method according to an embodiment of the present invention. As shown in fig. 11, the vertex positions and the index buffer of the original image are obtained, the obtained vertex positions and index buffer are rasterized, texture map coordinates are interpolated into the rasterized data, an image is obtained by performing texture lookup based on the texture map, an image-space loss is determined from the obtained image and a target image, and accurate 3D face reconstruction and texture optimization are performed on the original image by using this method.
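A rough analysis-by-synthesis sketch of this loop is shown below: rasterize the current mesh, sample the texture map, compare against the target photo, and back-propagate the image-space loss into geometry and texture. The `rasterize` function is a placeholder for a differentiable rasterizer, not a specific library call; the dummy body only keeps the texture differentiable so the example runs end to end.
```python
# Sketch: image-space optimization of face geometry and texture (Fig. 11 style).
import torch

def rasterize(vertices, faces, uv, texture, out_shape):
    """Placeholder differentiable rasterizer. A real one interpolates UV
    coordinates per pixel and samples the texture map, and is differentiable
    with respect to both vertices and texture."""
    return torch.zeros(out_shape) + texture.mean()

def fit_face(target, vertices, faces, uv, texture, steps=200, lr=1e-2):
    vertices = vertices.clone().requires_grad_(True)
    texture = texture.clone().requires_grad_(True)
    opt = torch.optim.Adam([vertices, texture], lr=lr)
    for _ in range(steps):
        rendered = rasterize(vertices, faces, uv, texture, target.shape)
        loss = torch.nn.functional.l1_loss(rendered, target)  # image-space loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return vertices.detach(), texture.detach()
```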
As can be seen from the above, for information loss caused by complex shooting environment, necessary information completion may be performed, for example, completing multiple regions such as an occlusion region and a side region, and texture details, so as to generate a more suitable UV MAP to be attached to the generated geometric topology, and then further supplementing other dimensions such as hair, teeth, eyeball, skin, neck, and the like, where the geometric topology may be formed by geometric information.
Since the strategy based on 3DMM fitting depends heavily on 3DMM assets and greatly limits the reconstruction effect of the Mesh, in order to optimize the accuracy of the geometric Mesh, a strategy of adding a 3D point cloud is used for further auxiliary optimization.
Fig. 12 is a schematic diagram of optimizing a geometric topological structure based on a 3D point cloud strategy according to an embodiment of the present invention. As shown in fig. 12, video sequence sampling is performed on an acquired target video, sparse point clouds with background noise are obtained based on Structure from Motion (SfM) and Multi-View Stereo (MVS), the sparse point clouds with background noise are preprocessed to obtain point clouds with inconsistent topology or device-acquired mesh information, and mesh information with consistent topology is then obtained by marking key points.
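As one possible (assumed, not the exact patented) realisation of obtaining topology-consistent mesh information from marked key points, the sketch below registers a topology-consistent template mesh to the key points of the noisy SfM/MVS reconstruction with a similarity (Umeyama/Procrustes) alignment.
```python
# Sketch: key-point-based registration of a template mesh to a scanned point cloud.
import numpy as np

def similarity_align(template_kps: np.ndarray, scan_kps: np.ndarray):
    """Return scale s, rotation R, translation t mapping template -> scan."""
    mu_t, mu_s = template_kps.mean(0), scan_kps.mean(0)
    A, B = template_kps - mu_t, scan_kps - mu_s
    U, S, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    s = np.trace(D @ np.diag(S)) / (A ** 2).sum()
    t = mu_s - s * R @ mu_t
    return s, R, t

def register_template(template_vertices, template_kps, scan_kps):
    s, R, t = similarity_align(template_kps, scan_kps)
    # topology-consistent mesh expressed in the coordinate frame of the scan
    return s * template_vertices @ R.T + t
```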
In order to further improve the similarity, face semantic segmentation is added to improve the adaptability of texture mapping and geometric comparison, and a pore-level high-definition effect is realized by adding a face-enhancement deep learning model; in order to further optimize the final presentation effect, a better scene rendering effect is realized through illumination decoupling, and a hyperfine skin-beautifying algorithm is introduced to improve the final visual effect. The face semantic segmentation can be predicted through a pre-trained face semantic segmentation neural network, and the geometric comparison may comprise the geometric topological structure and the geometric information.
Meanwhile, the generated virtual image can be finely adjusted and can be further exported to a platform for secondary editing.
Step S1004 reconstructs the human body image.
In this embodiment, fig. 13 is a schematic diagram of a method for reconstructing a human body image according to an embodiment of the present invention, as shown in fig. 13, a relatively clear and recognizable human body photo or a motion video is input, and necessary image processing is performed on the input image to reconstruct a human body 3D asset that conforms to the shape and appearance of an object, where the necessary image processing may include image alignment, semantic segmentation, pose correction, image cropping, compression enhancement, and the like.
Optionally, for an input image, identifying 2D key points and 3D key points of the image, and performing preliminary estimation on the overall body posture based on the obtained key points, where accurate prediction of the 2D key points may provide more multidimensional information for 3D human geometry Mesh, and meanwhile, it is also equivalent to guiding untrained 3D posture prediction by using trained 2D key points.
Through a deep learning method, a corresponding 3D human body geometric Mesh, or another intermediate file representing human body geometric information, is fitted or directly generated and stored. The deep learning method may be based on the SMPL model, which provides a way of representing the body surface morphology of a human posture: it can simulate the protrusion and depression of human muscles during limb movement, avoid surface distortion of the human body during movement, and accurately describe the morphology of muscle stretching and contraction; the 3D human geometric Mesh also belongs to an intermediate file form.
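The sketch below hedges one common way such a fit can be posed: optimizing SMPL-style pose and shape parameters against the detected key points with a re-projection loss. `smpl_forward` and `camera_project` are placeholders, not the real SMPL implementation; `keypoints_2d` is assumed to be a (24, 2) tensor of detected joints.
```python
# Sketch: fitting a parametric body model to detected key points (re-projection loss).
import torch

def smpl_forward(pose, shape):
    """Placeholder for the SMPL function: returns 3D joints from pose/shape."""
    return pose.view(-1, 3)[:24] + shape.sum() * 0.0   # (24, 3) dummy joints

def camera_project(joints_3d, focal=1000.0):
    """Weak-perspective projection of 3D joints to the image plane."""
    return focal * joints_3d[:, :2] / (joints_3d[:, 2:3] + 10.0)

def fit_body(keypoints_2d, steps=300, lr=1e-2):
    pose = torch.zeros(72, requires_grad=True)    # 24 joints x 3 axis-angle params
    shape = torch.zeros(10, requires_grad=True)   # body shape coefficients
    opt = torch.optim.Adam([pose, shape], lr=lr)
    for _ in range(steps):
        joints_3d = smpl_forward(pose, shape)
        reproj = camera_project(joints_3d)
        loss = torch.nn.functional.mse_loss(reproj, keypoints_2d)
        loss = loss + 1e-3 * (pose ** 2).sum() + 1e-3 * (shape ** 2).sum()  # simple priors
        opt.zero_grad()
        loss.backward()
        opt.step()
    return pose.detach(), shape.detach()
```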
After the geometric reconstruction of the human body is completed, the skin texture of the human body is finely reconstructed, and the human body clothing is reconstructed at the same time. The clothing reconstruction may reconstruct the clothing texture and the skin as a whole on the basis of the fine skin texture reconstruction, which is suitable for tight-fitting clothing scenes; however, for non-tight-fitting clothing this prevents further editing and driving, for example, a skirt cannot be further edited and driven after integral reconstruction is used. Alternatively, the clothing and the skin texture may be reconstructed separately.
Fig. 14 is a schematic diagram of a method for establishing a general clothing database according to an embodiment of the present invention. As shown in fig. 14, since the dynamic effect of clothing is strongly related to the material, thickness, and the like of the clothing, and is more adjustable than a relatively uniform texture such as skin, a general clothing database containing common clothing models/white models (untextured base meshes) may be designed, which can be automatically adjusted according to different body types, or changed in material and map texture, so as to provide a larger transformation and adaptation space in the clothing generation stage.
Firstly, cut-part retrieval, identification and stitching inference are performed on the human clothing to obtain 2D cut-parts, and the 2D cut-parts are resampled and meshed to generate a three-dimensional mesh of the clothing; meanwhile, a mannequin model is generated based on the determined human semantic labels, the three-dimensional mesh of the clothing is automatically placed on the mannequin model to generate 3D cut-parts of the clothing, and the 3D cut-parts are then automatically stitched to obtain a 3D white model of the clothing.
Meanwhile, dominant hue extraction and texture expansion are performed on the texture image of the human clothing to obtain the colour card and the expanded texture of the clothing, and expanded material data are obtained according to the material selection, so that structured 3D clothing is generated and combined with the generated 3D clothing white model to generate diversified 3D clothing.
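The dominant-hue extraction step can be illustrated with a plain k-means over the garment pixels, which yields a colour card; this is an assumption about one possible realisation, not the exact algorithm used, and texture expansion/tiling is left out.
```python
# Sketch: extracting a colour card (dominant hues) from a clothing texture image.
import numpy as np

def dominant_hues(texture_image: np.ndarray, k: int = 5, iters: int = 20) -> np.ndarray:
    pixels = texture_image.reshape(-1, 3).astype(np.float32)
    rng = np.random.default_rng(0)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # assign every pixel to its nearest colour centre, then update centres
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers.astype(np.uint8)   # the k dominant colours (colour card)
```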
Optionally, for the input action video, iterative fitting constraints are applied to the sequence frames based on a NeRF reconstruction driving algorithm, which improves the accuracy of the SMPL model and further provides a static object reconstruction effect in a general scene, so that strategies such as segmentation can be used to perform the logic of foreground and background separation.
Step S1006, the human 3D asset is driven.
A relatively complete human body 3D asset can be obtained through reconstruction, and the human body 3D asset is driven to obtain the dynamic effect of a corresponding part.
For example, for facial expression driving, an end-to-end driving generation strategy can be adopted based on strategies such as GAN and NeRF, the expression effect of the whole face and even the whole head can be automatically learned, and unnatural pseudo-expressions can be automatically identified through big data, so that iterative optimization is performed.
Fig. 15 is a flowchart of a method for implementing head movement according to an embodiment of the present invention, and as shown in fig. 15, the method may include the following steps:
in step S1501, a target audio is input.
In step S1502, the input audio is learned to an expression base through a neural network.
In step S1503, a target face video is input.
Step S1504, synchronously extracting the expression information frame by frame of the target video.
In this embodiment, the input video is passed through a reconstruction model, and expression information of each frame in the video is synchronously extracted, where the expression information may include expression base, geometric information, texture information, head pose, and illumination information, which is only illustrated here and is not specifically limited.
In step S1505, the expression base in the video is replaced by the expression base in the audio.
In this embodiment, the expression bases extracted from the video are replaced with expression bases extracted from the audio.
And step S1506, re-rendering the replaced expression information.
In the embodiment, the replaced expression information is re-rendered, and a re-rendered preliminary result is generated.
In step S1507, a mouth mask is added.
In this embodiment, since re-rendering the whole face can degrade the overall harmony and sharpness, a mouth mask is added so that the changed area in the target video is constrained to the lower half of the face.
In step S1508, a mouth dynamic effect is obtained.
In the embodiment, the preliminary results of the mouth mask and the re-rendering are input into the neural network rendering model by means of pixel inner product, and a mouth dynamic effect is generated.
In step S1509, a composite result is generated by fusion.
In this embodiment, the target video, the mouth mask and the mouth dynamic effect are input to the fusion module to generate the final composite result.
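A high-level sketch of the whole Fig. 15 pipeline (steps S1501-S1509) is given below. Every component (the audio-to-expression network, reconstruction model, renderer, mouth-refinement network and fusion module) is passed in as a hypothetical callable standing in for the models the embodiment describes, and the lower-face mask is a simplified stand-in for the mouth mask.
```python
# Sketch: audio-driven head movement via expression-base replacement.
import numpy as np

def lower_face_mask(height: int, width: int) -> np.ndarray:
    """Simplified stand-in for the mouth mask: 1 in the lower half of the frame."""
    mask = np.zeros((height, width, 1), np.float32)
    mask[height // 2:] = 1.0
    return mask

def drive_head(target_audio, face_frames, audio2exp, reconstructor,
               renderer, mouth_refiner, fuser):
    audio_exp = audio2exp(target_audio)                  # S1502: expression bases from audio
    results = []
    for frame, exp in zip(face_frames, audio_exp):
        info = reconstructor(frame)                      # S1504: per-frame expression base,
                                                         # geometry, texture, pose, illumination
        info["expression"] = exp                         # S1505: replace the video expression base
        coarse = renderer(info)                          # S1506: re-render the replaced information
        mask = lower_face_mask(*frame.shape[:2])         # S1507: add the mouth mask
        mouth = mouth_refiner(coarse * mask)             # S1508: mouth dynamic effect (pixel product)
        results.append(fuser(frame, mask, mouth))        # S1509: fuse into the final composite
    return results
```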
Fig. 16 is a schematic diagram of a method for implementing body movement according to an embodiment of the present invention, and as shown in fig. 16, if the body-driven effect is further fused into a scene image, 4D information of the posture of the body needs to be generated, and the posture effect needs to be fitted to finally implement the body-driven effect, where the 4D information of the posture may include three-dimensional information and a world coordinate system or a relative coordinate system of walking.
Based on the representation of the neural radiance field, the speaker image scene can be implicitly modeled, face details can be rendered, end-to-end driving can be provided, and editing and rendering of more new poses can be realized, wherein the face details may include teeth, hair, and the like.
Based on state-of-the-art (SOTA) technology, the method takes voice information as an additional conditioning input, models independent neural radiance fields for the head and the trunk respectively, and renders and composites them in the inference stage.
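As a minimal, hedged sketch of "voice information as additional conditioning input", the module below is a radiance-field MLP that takes an encoded sample position plus a per-frame audio feature; the layer sizes, the positional-encoding dimension and the audio feature dimension are assumptions. Two separate instances would model the independent head and trunk fields composited at inference time.
```python
# Sketch: an audio-conditioned radiance-field MLP.
import torch
import torch.nn as nn

class AudioConditionedNeRF(nn.Module):
    def __init__(self, pos_dim=63, audio_dim=29, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(pos_dim + audio_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                 # (r, g, b, sigma) per sample point
        )

    def forward(self, encoded_xyz, audio_feat):
        # encoded_xyz: (N, pos_dim) encoded sample positions along the rays
        # audio_feat:  (1, audio_dim) per-frame voice feature, broadcast to all samples
        audio = audio_feat.expand(encoded_xyz.shape[0], -1)
        out = self.mlp(torch.cat([encoded_xyz, audio], dim=-1))
        rgb = torch.sigmoid(out[:, :3])
        sigma = torch.relu(out[:, 3:])
        return rgb, sigma

# usage: independent fields for head and trunk
head_field, torso_field = AudioConditionedNeRF(), AudioConditionedNeRF()
```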
And step S1008, understanding and fusing the scenes.
The real scene is understood and reconstructed by performing tasks such as plane detection, plane parameter estimation, segmentation, depth information estimation and the like, so that the aim of fusing the virtual human to the real scene is fulfilled.
Fig. 17 is a schematic diagram of a scene fusion method according to an embodiment of the present invention, as shown in fig. 17, an image or a video is input into a neural network, and scene information is generated by understanding a scene in the image or the video, so as to realize scene reconstruction, where the scene information may be generated by semantic segmentation, depth information prediction, and the like.
After understanding and reconstructing a scene, a virtual human can be placed at a certain position of the image/video scene to achieve a simulation effect, fig. 18 is a schematic diagram of the simulation effect after the scene is fused with the virtual human according to the embodiment of the invention, as shown in fig. 18, a scene material a is understood and reconstructed, and a virtual human X is placed in the scene material a to achieve the simulation effect of the fusion of the scene material a and the virtual human X.
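The placement step can be illustrated with a simplified per-pixel depth test that composites the rendered virtual human into the reconstructed scene while hiding pixels the scene occludes; the collision detection and fusion strategy in the embodiment are more involved, so this is only an assumed minimal form.
```python
# Sketch: depth-aware compositing of the virtual human into the scene material.
import numpy as np

def composite(scene_rgb, scene_depth, human_rgb, human_depth, human_alpha):
    """All inputs share one resolution; human_depth is the avatar's depth after
    it has been positioned in the scene's coordinate frame."""
    visible = (human_alpha > 0) & (human_depth < scene_depth)  # avatar in front of the scene
    out = scene_rgb.copy()
    a = human_alpha[visible][:, None]
    out[visible] = a * human_rgb[visible] + (1 - a) * scene_rgb[visible]
    return out
```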
In the embodiment of the invention, different part images in an original image are reconstructed to generate a human body 3D asset, the human body 3D asset is driven to execute corresponding dynamic effect information, and a virtual human is generated; after the scene is understood finely, the generated virtual person and the scene are fused to achieve the purpose that the virtual person is displayed accurately and smoothly in the scene of the virtual world, so that the technical effect of improving the effect of simulating the object is achieved, and the technical problem that the effect of simulating the object is poor is solved.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the object processing method according to the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 3
According to an embodiment of the present invention, there is also provided an object processing apparatus for implementing the object processing method shown in fig. 3.
Fig. 19 is a schematic diagram of an object processing apparatus according to an embodiment of the present invention, and as shown in fig. 19, the object processing apparatus 190 may include: an acquisition unit 1902, a first reconstruction unit 1904, a first drive unit 1906 and a first fusion unit 1908.
An acquisition unit 1902 for acquiring an original image of a biological object.
A first reconstructing unit 1904, configured to reconstruct the original image to obtain a biological model of the biological object, where the biological model is used to simulate and obtain an avatar of the biological object in the virtual world.
A first driving unit 1906, configured to respectively drive a plurality of part models in the biological model to execute dynamic effect information matched with the part models, where the dynamic effect information is used to characterize visual dynamic effects generated by the driven part models.
A first fusion unit 1908, configured to fuse the driven biological model into a scene material of the virtual world, to obtain a target avatar, where the target avatar is used to represent that the biological model presents an animation result in the virtual world.
It should be noted here that the acquiring unit 1902, the first reconstructing unit 1904, the first driving unit 1906 and the first fusing unit 1908 correspond to steps S302 to S308 in embodiment 1, and the four units are the same as the corresponding steps in the implementation example and the application scenario, but are not limited to the disclosure in embodiment 1. It should be noted that the above units may be operated in the computer terminal 10 provided in embodiment 1 as a part of the apparatus.
According to an embodiment of the present invention, there is also provided an object processing apparatus for implementing the object processing method shown in fig. 4.
Fig. 20 is a schematic diagram of an object processing apparatus according to an embodiment of the present invention, and as shown in fig. 20, the object processing apparatus 200 may include: a presentation unit 2002, a second reconstruction unit 2004, a second drive unit 2006, a second fusion unit 2008, and a third drive unit 2010.
A presentation unit 2002 for presenting an original image of a biological object on a presentation screen of the virtual reality VR device or the augmented reality AR device.
A second reconstructing unit 2004, configured to reconstruct the original image to obtain a biological model of the biological object, where the biological model is used to simulate an avatar of the biological object in the virtual world in the VR device or the AR device.
And a second driving unit 2006, configured to respectively drive the multiple part models in the biological model to execute dynamic effect information matched with the part models, where the dynamic effect information is used to represent visual dynamic effects generated by the driven part models.
The second fusion unit 2008 is configured to fuse the driven biological model into a scene material of the virtual world to obtain a target moving image, where the target moving image is used to represent that the biological model presents a moving effect result in the virtual world.
And a third driving unit 2010, configured to drive the VR device or the AR device to display the target moving image.
It should be noted here that the above-mentioned exhibition unit 2002, the second reconstruction unit 2004, the second driving unit 2006, the second fusion unit 2008 and the third driving unit 2010 correspond to steps S402 to S410 in embodiment 1, and five units are the same as the corresponding steps in the implementation example and application scenario, but are not limited to the content disclosed in embodiment 1. It should be noted that the above units as a part of the apparatus may operate in the computer terminal 10 provided in embodiment 1.
According to an embodiment of the present invention, there is also provided an object processing apparatus for implementing the object processing method shown in fig. 5.
Fig. 21 is a schematic diagram of an object processing apparatus according to an embodiment of the present invention, and as shown in fig. 21, the object processing apparatus 210 may include: a first display unit 2102 and a second display unit 2104.
A first display unit 2102 for displaying an original image of a biological object on an operation interface of a virtual reality VR device or an augmented reality AR device in response to an image input instruction acting on the operation interface.
The second display unit 2104 is configured to, in response to an image generation instruction acting on the operation interface, drive the VR device or the AR device to display a target moving image of the biological object on the operation interface, where the target moving image is obtained by fusing the driven biological model into a scene material of the virtual world, and is used to represent that the biological model presents a moving effect result in the virtual world, and respectively drive a plurality of part models in the biological model to execute moving effect information matched with the part models, the moving effect information is used to represent a visual moving effect generated by the driven part models, and the biological model is obtained by reconstructing the original image and is used to simulate to obtain a virtual image of the biological object in the virtual world.
It should be noted here that the first display unit 2102 and the second display unit 2104 correspond to steps S502 to S504 in embodiment 1, and the two units are the same as the example and the application scenario realized by the corresponding steps, but are not limited to the disclosure of embodiment 1. It should be noted that the above units may be operated in the computer terminal 10 provided in embodiment 1 as a part of the apparatus.
According to an embodiment of the present invention, there is also provided an object processing apparatus for implementing the object processing method shown in fig. 6.
Fig. 22 is a schematic diagram of an object processing apparatus according to an embodiment of the present invention, and as shown in fig. 22, the object processing apparatus 220 may include: a calling unit 2202, a third reconstruction unit 2204, a fourth driving unit 2206, a third fusion unit 2208 and an output unit 2210.
An invoking unit 2202 configured to acquire an original image of a biological object by invoking a first interface, where the first interface includes a first parameter, and a parameter value of the first parameter is the original image.
A third reconstructing unit 2204, configured to reconstruct the original image to obtain a biological model of the biological object, where the biological model is used to simulate and obtain an avatar of the biological object in the virtual world.
And a fourth driving unit 2206, configured to drive the plurality of part models in the biological model to execute dynamic effect information matched with the part models, respectively, where the dynamic effect information is used to represent visual dynamic effects generated by the driven part models.
A third fusion unit 2208, configured to fuse the driven biological model into the scene material of the virtual world, so as to obtain a target moving image, where the target moving image is used to represent that the biological model presents a moving effect result in the virtual world.
The output unit 2210 is configured to output the target moving image by invoking a second interface, where the second interface includes a second parameter, and a parameter value of the second parameter is the target moving image.
It should be noted here that the above-mentioned calling unit 2202, third reconstructing unit 2204, fourth driving unit 2206, third fusing unit 2208 and output unit 2210 correspond to steps S602 to S610 in embodiment 1, and five units are the same as the examples and application scenarios realized by the corresponding steps, but are not limited to the contents disclosed in embodiment 1. It should be noted that the above units may be operated in the computer terminal 10 provided in embodiment 1 as a part of the apparatus.
In the embodiment, the corresponding biological model is obtained by reconstructing the original image of the biological object, the driven biological model is fused to the scene material of the virtual world, and the target moving image is generated, so that the purpose of presenting the moving effect result of the biological model in the virtual world is achieved, the technical effect of improving the effect of simulating the object is achieved, and the technical problem of poor effect of simulating the object is solved.
Example 4
The embodiment of the invention can provide a computer terminal which can be any computer terminal device in a computer terminal group. Optionally, in this embodiment, the computer terminal may also be replaced with a terminal device such as a mobile terminal.
Optionally, in this embodiment, the computer terminal may be located in at least one network device of a plurality of network devices of a computer network.
In this embodiment, the computer terminal may execute program codes of the following steps in the object processing method: acquiring an original image of a biological object; reconstructing the original image to obtain a biological model of the biological object, wherein the biological model is used for simulating to obtain the virtual image of the biological object in the virtual world; respectively driving a plurality of part models in the biological model to execute dynamic effect information matched with the part models, wherein the dynamic effect information is used for representing visual dynamic effects generated by the driven part models; and fusing the driven biological model into a scene material of the virtual world to obtain a target dynamic image, wherein the target dynamic image is used for representing that the biological model presents a dynamic effect result in the virtual world.
Alternatively, fig. 23 is a block diagram of a computer terminal according to an embodiment of the present invention. As shown in fig. 23, the computer terminal a may include: one or more processors 2302 (only one of which is shown), a memory 2304 and a transmission device 2306.
The memory 2304 can be used for storing software programs and modules, such as program instructions/modules corresponding to the object processing method and apparatus in the embodiments of the present invention, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory, that is, implementing the object processing method. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory located remotely from the processor, which may be connected to the computer terminal a via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor 2302 may invoke the memory-stored information and applications via the transmission means to perform the following steps: acquiring an original image of a biological object; reconstructing the original image to obtain a biological model of the biological object, wherein the biological model is used for simulating to obtain a virtual image of the biological object in a virtual world; respectively driving a plurality of part models in the biological model to execute dynamic effect information matched with the part models, wherein the dynamic effect information is used for representing visual dynamic effects generated by the driven part models; and fusing the driven biological model into a scene material of the virtual world to obtain a target dynamic image, wherein the target dynamic image is used for representing that the biological model presents a dynamic effect result in the virtual world.
Optionally, the processor may further execute the program code of the following steps: extracting a part image of a plurality of parts of the biological object in the original image, wherein the part image comprises at least one of the following: a head image, a torso image, and a limb image; reconstructing the position images of the plurality of parts to obtain position models of the plurality of parts of the biological object, wherein the position models comprise at least one of the following: a head model for simulating a head of the biological subject, a torso model for simulating a torso of the biological subject, a limb model for simulating a limb of the biological subject; the part models of the plurality of parts are combined to generate a biological model.
Optionally, the processor may further execute the program code of the following steps: driving the head model to execute head dynamic effect information matched with the head model, wherein the dynamic effect information comprises the head dynamic effect information, and the head dynamic effect information is used for representing the visual effect generated by the driven head model; driving the trunk model to execute trunk dynamic effect information matched with the trunk model, wherein the dynamic effect information comprises the trunk dynamic effect information, and the trunk dynamic effect information is used for representing the visual effect generated by the driven trunk model; and driving the limb model to execute limb dynamic effect information matched with the limb model, wherein the dynamic effect information comprises the limb dynamic effect information, and the limb dynamic effect information is used for representing the visual effect generated by the driven limb model.
Optionally, the processor may further execute the program code of the following steps: acquiring the head dynamic effect information based on media information of the object, wherein the media information is associated with the visual dynamic effect of the head model, and the visual dynamic effect of the head model comprises at least one of the following of the object: the visual dynamic effect of facial expressions, the visual dynamic effect of head gestures, and the visual dynamic effect of head accessories.
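As one heavily simplified illustration of deriving head dynamic effect information from media information, the sketch below maps a short audio amplitude envelope to a mouth-openness curve that could drive the facial-expression part of the head model; the function name, parameters, and scaling are assumptions, not the embodiment's method.

```python
# Assumption for illustration: "media information" is an audio amplitude envelope.
def head_effect_from_audio(amplitudes, max_open_deg=25.0):
    """Scale per-frame audio amplitudes to jaw-opening angles in degrees."""
    peak = max(amplitudes) if amplitudes else 0.0
    if peak == 0.0:
        return [0.0 for _ in amplitudes]
    return [max_open_deg * a / peak for a in amplitudes]

print(head_effect_from_audio([0.1, 0.4, 0.9, 0.3]))  # roughly [2.8, 11.1, 25.0, 8.3]
```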
Optionally, the processor may further execute the program code of the following steps: obtaining the trunk dynamic effect information based on trunk posture information of the object and position information in the virtual world, wherein the trunk posture information is associated with the visual dynamic effect of the trunk model, and the visual dynamic effect of the trunk model comprises at least one of the following of the trunk model: the visual dynamic effect of the trunk and the visual dynamic effect of the trunk accessory, and the position information is used for representing the position of the biological model in the virtual world; and/or acquiring the limb dynamic effect information based on limb posture information of the object and the position information, wherein the limb posture information is associated with the visual dynamic effect of the limb model, and the visual dynamic effect of the limb model comprises at least one of the following of the limb model: the visual dynamic effect of the limb and the visual dynamic effect of the limb accessory.
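The following sketch illustrates, under assumed field names, how torso dynamic effect information might be packed from torso posture angles together with the model's position in the virtual world; it is not the embodiment's data format.

```python
# Field names and the accessory-sway heuristic are illustrative assumptions.
def torso_effect_info(torso_pose: dict, world_position: tuple) -> dict:
    """Combine posture-driven torso motion with where the model sits in the virtual world."""
    return {
        "lean_forward_deg": torso_pose.get("pitch", 0.0),
        "twist_deg": torso_pose.get("yaw", 0.0),
        "accessory_sway_deg": 0.5 * torso_pose.get("yaw", 0.0),  # accessory follows the torso
        "world_position": world_position,  # position of the biological model in the virtual world
    }

print(torso_effect_info({"pitch": 12.0, "yaw": -8.0}, world_position=(1.5, 0.0, -3.0)))
```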
Optionally, the processor may further execute the program code of the following steps: adding the driven biological model to the image position corresponding to the position in the scene material to obtain the target moving image.
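One plausible reading of "adding the driven biological model to the image position corresponding to the position in the scene material" is compositing a rendered RGBA sprite of the model onto the scene frame at that position. The NumPy sketch below is an assumption-laden stand-in for whatever renderer the embodiment actually uses.

```python
import numpy as np

def paste_model_at_position(scene_rgb: np.ndarray, model_rgba: np.ndarray,
                            top: int, left: int) -> np.ndarray:
    """Alpha-blend a rendered model sprite onto the scene frame at (top, left)."""
    out = scene_rgb.astype(np.float32).copy()
    h, w = model_rgba.shape[:2]
    alpha = model_rgba[:, :, 3:4].astype(np.float32) / 255.0
    region = out[top:top + h, left:left + w, :]
    out[top:top + h, left:left + w, :] = (
        alpha * model_rgba[:, :, :3].astype(np.float32) + (1.0 - alpha) * region
    )
    return out.astype(np.uint8)

# Example: a 64x64 sprite composited into a 480x640 scene frame
scene = np.zeros((480, 640, 3), dtype=np.uint8)
sprite = np.full((64, 64, 4), 255, dtype=np.uint8)  # fully opaque white sprite
target_frame = paste_model_at_position(scene, sprite, top=200, left=300)
```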
Optionally, the processor may further execute the program code of the following steps: identifying the head image to obtain the geometric information of the head; a head model is generated based on the geometric information of the head and the texture map of the head.
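A toy sketch of combining head geometric information with a texture map into a head model is shown below: the geometry is a vertex/face list and the texture is attached through per-vertex UV coordinates. The data layout is an assumption made for illustration only.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class HeadModel:
    vertices: List[Tuple[float, float, float]]   # 3D geometric information of the head
    faces: List[Tuple[int, int, int]]            # triangle indices into the vertex list
    uvs: List[Tuple[float, float]]               # per-vertex texture coordinates
    texture_map: bytes                           # texture map of the head

def build_head_model(vertices, faces, uvs, texture_map) -> HeadModel:
    """Combine recovered head geometry with its texture map into a single model."""
    assert len(vertices) == len(uvs), "one UV coordinate per vertex"
    return HeadModel(vertices=vertices, faces=faces, uvs=uvs, texture_map=texture_map)

# Single-triangle placeholder geometry, just to show the data flow
head = build_head_model(
    vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
    faces=[(0, 1, 2)],
    uvs=[(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
    texture_map=b"...texture bytes...",
)
```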
Optionally, the processor may further execute the program code of the following steps: identifying the trunk image to obtain key points of the trunk of the biological object; determining geometric information of the trunk based on the key points of the trunk; generating a torso model based on the geometric information of the torso and the skin texture of the torso; and/or identifying the limb image to obtain key points of the limb of the biological object; determining geometric information of the limb based on the key points of the limb; a limb model is generated based on the geometric information of the limb and the skin texture of the limb.
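To make the keypoint step concrete, the sketch below derives simple geometric information (segment lengths) from assumed 2D torso keypoints; actual reconstruction would use a trained keypoint detector and a full body model, so treat the joint names and values as placeholders.

```python
import math

# Hypothetical 2D keypoints (pixel coordinates) of a torso and one arm.
keypoints = {
    "left_shoulder": (210.0, 180.0),
    "right_shoulder": (330.0, 182.0),
    "left_hip": (225.0, 360.0),
    "right_hip": (315.0, 362.0),
    "left_elbow": (170.0, 270.0),
}

def segment_length(p, q):
    """Euclidean distance between two keypoints."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

# Geometric information derived from the keypoints; such measurements could
# then parameterize a torso or limb mesh before the skin texture is applied.
geometry = {
    "shoulder_width": segment_length(keypoints["left_shoulder"], keypoints["right_shoulder"]),
    "hip_width": segment_length(keypoints["left_hip"], keypoints["right_hip"]),
    "torso_height": segment_length(keypoints["left_shoulder"], keypoints["left_hip"]),
    "upper_arm_length": segment_length(keypoints["left_shoulder"], keypoints["left_elbow"]),
}
print(geometry)
```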
Optionally, the processor may further execute the program code of the following steps: reconstructing the trunk image and the accessory texture of the trunk accessory image to obtain a trunk model; and/or reconstructing the limb image and the limb accessory texture of the limb accessory image to obtain a limb model.
Optionally, the processor may further execute the program code of the following steps: acquiring an original scene image or video of a Virtual Reality (VR) scene or an Augmented Reality (AR) scene; and reconstructing the virtual reality scene or the augmented reality scene based on the original scene image or the video to obtain a scene material of the virtual world.
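Scene reconstruction itself is outside the scope of a short example, but as a drastically simplified stand-in, the sketch below derives a static background plate from scene video frames by a per-pixel temporal median; this is only an assumption used to show how "scene material" might be produced from an original scene image or video.

```python
import numpy as np

def scene_material_from_frames(frames: np.ndarray) -> np.ndarray:
    """Derive a static background plate from scene video frames (per-pixel median)."""
    return np.median(frames, axis=0).astype(np.uint8)

# Example: 10 synthetic frames of a 120x160 RGB scene video
frames = np.random.randint(0, 256, size=(10, 120, 160, 3), dtype=np.uint8)
scene_material = scene_material_from_frames(frames)
print(scene_material.shape)  # (120, 160, 3)
```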
As an alternative example, the processor may invoke the information stored in the memory and the application program via the transmission means to perform the following steps: displaying an original image of a biological object on a presentation screen of a Virtual Reality (VR) device or an Augmented Reality (AR) device; reconstructing the original image to obtain a biological model of the biological object, wherein the biological model is used for simulating in VR equipment or AR equipment to obtain a virtual image of the biological object in a virtual world; respectively driving a plurality of part models in the biological model to execute dynamic effect information matched with the part models, wherein the dynamic effect information is used for representing visual dynamic effects generated by the driven part models; fusing the driven biological model into a scene material of the virtual world to obtain a target moving image, wherein the target moving image is used for representing that the biological model presents a moving effect result in the virtual world; and driving the VR equipment or the AR equipment to display the target moving image.
As an alternative example, the processor may invoke the information stored in the memory and the application program via the transmission means to perform the following steps: responding to an image input instruction acting on an operation interface of virtual reality VR equipment or augmented reality AR equipment, and displaying an original image of a biological object on the operation interface; and responding to an image generation instruction acting on the operation interface, driving the VR device or the AR device to display a target moving image of the biological object on the operation interface, wherein the target moving image is obtained by fusing the driven biological model into scene materials of the virtual world, and is used for representing that the biological model presents a moving effect result in the virtual world, respectively driving a plurality of part models in the biological model to execute moving effect information matched with the part models, the moving effect information is used for representing visual moving effects generated by the driven part models, and the biological model is obtained by reconstructing an original image and is used for simulating to obtain a virtual image of the biological object in the virtual world.
As an alternative example, the processor may invoke the information stored in the memory and the application program via the transmission means to perform the following steps: acquiring an original image of a biological object by calling a first interface, wherein the first interface comprises a first parameter, and a parameter value of the first parameter is the original image; reconstructing the original image to obtain a biological model of the biological object, wherein the biological model is used for simulating to obtain a virtual image of the biological object in a virtual world; respectively driving a plurality of part models in the biological model to execute dynamic effect information matched with the part models, wherein the dynamic effect information is used for representing visual dynamic effects generated by the driven part models; fusing the driven biological model into a scene material of the virtual world to obtain a target moving image, wherein the target moving image is used for representing that the biological model presents a dynamic effect result in the virtual world; and outputting the target moving image by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the target moving image.
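The first-interface/second-interface wording describes a simple call contract: the original image goes in through one parameter, and the target moving image comes out through another. The sketch below mocks that contract with two plain Python functions; the names and the dictionary packing are assumptions, not the embodiment's API.

```python
from typing import Any, Dict

def first_interface(params: Dict[str, Any]) -> bytes:
    """First interface: its first parameter carries the original image."""
    return params["original_image"]

def second_interface(params: Dict[str, Any]) -> None:
    """Second interface: its second parameter carries the target moving image."""
    target_moving_image = params["target_moving_image"]
    print(f"outputting target moving image of {len(target_moving_image)} bytes")

original_image = first_interface({"original_image": b"\x89PNG..."})
# ... reconstruct, drive and fuse as in the pipeline sketch above ...
second_interface({"target_moving_image": b"\x00\x01\x02"})
```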
The embodiment of the invention provides an object processing method. A biological model is obtained by reconstructing an original image of a biological object, and the driven biological model is fused into the scene material of the virtual world to generate a target moving image, so that the biological model presents its dynamic effect result in the virtual world. This improves the effect of simulating the object and thereby solves the technical problem of a poor object-simulation effect.
It can be understood by those skilled in the art that the structure shown in Fig. 23 is only illustrative, and the computer terminal may also be a terminal device such as a smart phone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), or a PAD; Fig. 23 merely illustrates one possible structure of the electronic device. For example, the computer terminal may also include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in Fig. 23, or have a configuration different from that shown in Fig. 23.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only Memories (ROMs), Random Access Memories (RAMs), magnetic disks, optical disks, and the like.
Example 5
Embodiments of the present invention also provide a computer-readable storage medium. Optionally, in this embodiment, the computer-readable storage medium may be configured to store program code for executing the object processing method provided in the first embodiment above.
Optionally, in this embodiment, the computer-readable storage medium may be located in any one of a group of computer terminals in a computer network, or in any one of a group of mobile terminals.
Optionally, in this embodiment, the computer readable storage medium is configured to store program code for performing the following steps: acquiring an original image of a biological object; reconstructing the original image to obtain a biological model of the biological object, wherein the biological model is used for simulating to obtain the virtual image of the biological object in the virtual world; respectively driving a plurality of part models in the biological model to execute dynamic effect information matched with the part models, wherein the dynamic effect information is used for representing visual dynamic effects generated by the driven part models; and fusing the driven biological model into a scene material of the virtual world to obtain a target dynamic image, wherein the target dynamic image is used for representing that the biological model presents a dynamic effect result in the virtual world.
Optionally, the computer-readable storage medium may further store program code for performing the following steps: extracting part images of a plurality of parts of the biological object from the original image, wherein the part images comprise at least one of the following: a head image, a torso image, and a limb image; reconstructing the part images of the plurality of parts to obtain part models of the plurality of parts of the biological object, wherein the part models comprise at least one of the following: a head model for simulating a head of the biological object, a torso model for simulating a torso of the biological object, and a limb model for simulating a limb of the biological object; and combining the part models of the plurality of parts to generate the biological model.
Optionally, the computer-readable storage medium may further store program code for performing the following steps: driving the head model to execute head dynamic effect information matched with the head model, wherein the dynamic effect information comprises the head dynamic effect information, and the head dynamic effect information is used for representing the visual effect generated by the driven head model; driving the trunk model to execute trunk dynamic effect information matched with the trunk model, wherein the dynamic effect information comprises the trunk dynamic effect information, and the trunk dynamic effect information is used for representing the visual effect generated by the driven trunk model; and driving the limb model to execute limb dynamic effect information matched with the limb model, wherein the dynamic effect information comprises the limb dynamic effect information, and the limb dynamic effect information is used for representing the visual effect generated by the driven limb model.
Optionally, the computer-readable storage medium may further store program code for performing the following steps: acquiring the head dynamic effect information based on media information of the object, wherein the media information is associated with the visual dynamic effect of the head model, and the visual dynamic effect of the head model comprises at least one of the following of the object: the visual dynamic effect of facial expressions, the visual dynamic effect of head gestures, and the visual dynamic effect of head accessories.
Optionally, the computer-readable storage medium may further store program code for performing the following steps: obtaining the trunk dynamic effect information based on trunk posture information of the object and position information in the virtual world, wherein the trunk posture information is associated with the visual dynamic effect of the trunk model, and the visual dynamic effect of the trunk model comprises at least one of the following of the trunk model: the visual dynamic effect of the trunk and the visual dynamic effect of the trunk accessory, and the position information is used for representing the position of the biological model in the virtual world; and/or acquiring the limb dynamic effect information based on limb posture information of the object and the position information, wherein the limb posture information is associated with the visual dynamic effect of the limb model, and the visual dynamic effect of the limb model comprises at least one of the following of the limb model: the visual dynamic effect of the limb and the visual dynamic effect of the limb accessory.
Optionally, the computer-readable storage medium may further store program code for performing the following steps: adding the driven biological model to the image position corresponding to the position in the scene material to obtain the target moving image.
Optionally, the computer readable storage medium may further include program code for performing the following steps: identifying the head image to obtain the geometric information of the head; a head model is generated based on the geometric information of the head and the texture map of the head.
Optionally, the computer-readable storage medium may further store program code for performing the following steps: identifying the trunk image to obtain key points of the trunk of the biological object; determining geometric information of the trunk based on the key points of the trunk; generating a torso model based on the geometric information of the torso and the skin texture of the torso; and/or identifying the limb image to obtain key points of the limb of the biological object; determining geometric information of the limb based on the key points of the limb; and generating a limb model based on the geometric information of the limb and the skin texture of the limb.
Optionally, the computer readable storage medium may further include program code for performing the following steps: reconstructing the trunk image and the accessory texture of the trunk accessory image to obtain a trunk model; and/or reconstructing the limb image and the limb accessory texture of the limb accessory image to obtain a limb model.
Optionally, the computer-readable storage medium may further store program code for performing the following steps: acquiring an original scene image or video of a Virtual Reality (VR) scene or an Augmented Reality (AR) scene; and reconstructing the virtual reality scene or the augmented reality scene based on the original scene image or the video to obtain a scene material of the virtual world.
As an alternative example, the computer readable storage medium is arranged to store program code for performing the steps of: displaying an original image of a biological object on a presentation screen of a Virtual Reality (VR) device or an Augmented Reality (AR) device; reconstructing the original image to obtain a biological model of the biological object, wherein the biological model is used for simulating in VR equipment or AR equipment to obtain an avatar of the biological object in the virtual world; respectively driving a plurality of part models in the biological model to execute dynamic effect information matched with the part models, wherein the dynamic effect information is used for representing visual dynamic effects generated by the driven part models; fusing the driven biological model into a scene material of the virtual world to obtain a target moving image, wherein the target moving image is used for representing that the biological model presents a dynamic effect result in the virtual world; and driving the VR equipment or the AR equipment to display the target moving image.
As an alternative example, the computer readable storage medium is arranged to store program code for performing the steps of: responding to an image input instruction acting on an operation interface of Virtual Reality (VR) equipment or Augmented Reality (AR) equipment, and displaying an original image of a biological object on the operation interface; and responding to an image generation instruction acting on the operation interface, driving VR equipment or AR equipment to display a target moving image of the biological object on the operation interface, wherein the target moving image is obtained by fusing the driven biological model into scene materials of the virtual world and is used for representing a moving effect result of the biological model in the virtual world, respectively driving a plurality of part models in the biological model to execute moving effect information matched with the part models, the moving effect information is used for representing visual moving effects generated by the driven part models, and the biological model is obtained by reconstructing an original image and is used for simulating to obtain a virtual image of the biological object in the virtual world.
As an alternative example, the computer readable storage medium is arranged to store program code for performing the steps of: acquiring an original image of a biological object by calling a first interface, wherein the first interface comprises a first parameter, and a parameter value of the first parameter is the original image; reconstructing the original image to obtain a biological model of the biological object, wherein the biological model is used for simulating to obtain the virtual image of the biological object in the virtual world; respectively driving a plurality of part models in the biological model to execute dynamic effect information matched with the part models, wherein the dynamic effect information is used for representing visual dynamic effects generated by the driven part models; fusing the driven biological model into a scene material of the virtual world to obtain a target moving image, wherein the target moving image is used for representing that the biological model presents a moving effect result in the virtual world; and outputting the target moving image by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the target moving image.
The above serial numbers of the embodiments of the present invention are merely for description and do not imply that any embodiment is better or worse than another.
In the above embodiments of the present invention, each embodiment is described with its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described apparatus embodiments are merely illustrative; for example, the division of the units is only a division by logical function, and there may be other divisions in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling, direct coupling, or communication connection may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and these modifications and improvements should also fall within the protection scope of the present invention.
Claims (14)
1. A method of processing an object, comprising:
acquiring an original image of a biological object;
reconstructing the original image to obtain a biological model of the biological object, wherein the biological model is used for simulating and obtaining an avatar of the biological object in a virtual world;
respectively driving a plurality of part models in the biological model to execute dynamic effect information matched with the part models, wherein the dynamic effect information is used for representing visual dynamic effects generated by the driven part models;
fusing the driven biological model into a scene material of the virtual world to obtain a target moving image, wherein the target moving image is used for representing that the biological model presents a moving effect result in the scene material;
reconstructing the original image to obtain a biological model of the biological object, including: reconstructing part images of a plurality of parts to obtain the part models of the plurality of parts of the biological object; merging the part models of the plurality of parts to generate the biological model;
reconstructing the part images of the plurality of parts to obtain the part models of the plurality of parts of the biological object, wherein the original image comprises a trunk accessory image associated with the trunk image and/or a limb accessory image associated with the limb image, and the reconstructing comprises: reconstructing the trunk image in the part images and the accessory texture of the trunk accessory image to obtain a trunk model; and/or reconstructing the limb image in the part images and the limb accessory texture of the limb accessory image to obtain a limb model.
2. The method of claim 1, wherein the original image comprises images of a plurality of portions of the biological object, and wherein reconstructing the images of the portions to obtain the model of the portions of the biological object comprises:
extracting a part image of a plurality of parts of the biological object in the original image, wherein the part image comprises: a head image, the torso image, and the limb image;
reconstructing the part images of the plurality of parts to obtain the part models of the plurality of parts of the biological object, wherein the part models include: a head model for simulating a head of the biological subject, the torso model for simulating a torso of the biological subject, the limb model for simulating a limb of the biological subject.
3. The method of claim 2, wherein separately driving a plurality of the site models in the biological model to perform dynamic effect information matching the site models comprises at least one of:
driving the head model to execute head dynamic effect information matched with the head model, wherein the dynamic effect information comprises the head dynamic effect information, and the head dynamic effect information is used for representing the visual effect generated by the driven head model;
driving the trunk model to execute trunk dynamic effect information matched with the trunk model, wherein the dynamic effect information comprises the trunk dynamic effect information, and the trunk dynamic effect information is used for representing the visual effect generated by the driven trunk model;
and driving the limb model to execute limb movement effect information matched with the limb model, wherein the movement effect information comprises the limb movement effect information, and the limb movement effect information is used for representing the visual effect generated by the driven limb model.
4. The method of claim 3, further comprising:
obtaining the head movement effect information based on the media information of the object, wherein the media information is associated with the visual movement effect of the head model, and the visual movement effect of the head model comprises at least one of the following of the object: visual motor effects of facial expressions, visual motor effects of head gestures, and visual motor effects of head accessories.
5. The method of claim 3, further comprising:
obtaining the trunk dynamic effect information based on the trunk posture information of the object and the position information in the virtual world, wherein the trunk posture information is associated with the visual dynamic effect of the trunk model, and the visual dynamic effect of the trunk model comprises at least one of the following information of the trunk model: visual dynamic effect of the trunk, visual dynamic effect of the trunk accessory, the position information being used for representing the position of the biological model in the virtual world; and/or
Acquiring the limb movement effect information based on the limb posture information and the position information of the object, wherein the limb posture information is associated with the visual movement effect of the limb model, and the visual movement effect of the limb model comprises at least one of the following information of the limb model: visual dynamic effect of limbs and visual dynamic effect of limbs accessories.
6. The method of claim 5, wherein fusing the driven biological model into the scene material of the virtual world to obtain the target moving image comprises:
and adding the driven biological model to the image position corresponding to the position in the scene material to obtain the target moving image.
7. The method of claim 2, wherein reconstructing the part images of the plurality of parts to obtain the part models of the plurality of parts of the biological object comprises:
identifying the head image to obtain the geometric information of the head;
generating the head model based on the geometric information of the head and the texture map of the head.
8. The method of claim 2, wherein reconstructing the part images of the plurality of parts to obtain the part models of the plurality of parts of the biological object comprises:
identifying the trunk image to obtain key points of the trunk of the biological object; determining geometric information of the torso based on the keypoints of the torso; generating the torso model based on geometric information of the torso and skin texture of the torso; and/or
Identifying the limb image to obtain key points of the limb of the biological object; determining geometric information of the limb based on the key points of the limb; generating the limb model based on the geometric information of the limb and the skin texture of the limb.
9. The method of claim 1, further comprising:
acquiring an original scene image or video of a Virtual Reality (VR) scene or an Augmented Reality (AR) scene;
and reconstructing the VR scene or the AR scene based on the original scene image or the video to obtain a scene material of the virtual world.
10. A method of processing an object, comprising:
displaying an original image of a biological object on a presentation screen of a Virtual Reality (VR) device or an Augmented Reality (AR) device;
reconstructing the original image to obtain a biological model of the biological object, wherein the biological model is used for simulating in the VR device or the AR device to obtain an avatar of the biological object in a virtual world;
respectively driving a plurality of part models in the biological model to execute dynamic effect information matched with the part models, wherein the dynamic effect information is used for representing visual dynamic effects generated by the driven part models;
fusing the driven biological model into a scene material of the virtual world to obtain a target moving image, wherein the target moving image is used for representing that the biological model presents a dynamic effect result in the virtual world;
driving the VR device or the AR device to display the target moving image;
reconstructing the original image to obtain a biological model of the biological object, including: reconstructing the part images of the plurality of parts to obtain the part models of the plurality of parts of the biological object; combining the part models of the plurality of parts to generate the biological model;
reconstructing the part images of the plurality of parts to obtain the part models of the plurality of parts of the biological object, wherein the original image comprises a trunk accessory image associated with a trunk image and/or a limb accessory image associated with a limb image, and the step of reconstructing the part images of the plurality of parts comprises the following steps: reconstructing the trunk image in the part images and the accessory texture of the trunk accessory image to obtain a trunk model; and/or reconstructing the limb image in the part images and the limb accessory texture of the limb accessory image to obtain a limb model.
11. A method of processing an object, comprising:
responding to an image input instruction acting on an operation interface of Virtual Reality (VR) equipment or Augmented Reality (AR) equipment, and displaying an original image of a biological object on the operation interface;
responding to an image generation instruction acting on the operation interface, driving the VR device or the AR device to display a target moving image of the biological object on the operation interface, wherein the target moving image is obtained by fusing a driven biological model into scene materials of a virtual world, and is used for representing that the biological model presents a moving effect result in the virtual world, respectively driving a plurality of part models in the biological model to execute moving effect information matched with the biological model, the moving effect information is used for representing visual moving effects generated by the driven part models, and the biological model is obtained by reconstructing the original image and is used for simulating to obtain a virtual image of the biological object in the virtual world;
reconstructing the original image to obtain a biological model of the biological object, including: reconstructing the part images of the plurality of parts to obtain the part models of the plurality of parts of the biological object; combining the part models of the plurality of parts to generate the biological model;
reconstructing the part images of the plurality of parts to obtain the part models of the plurality of parts of the biological object, wherein the original image comprises a trunk accessory image associated with a trunk image and/or a limb accessory image associated with a limb image, and the step of reconstructing the part images of the plurality of parts comprises the following steps: reconstructing the trunk image in the part images and the accessory texture of the trunk accessory image to obtain a trunk model; and/or reconstructing the limb image in the part images and the limb accessory texture of the limb accessory image to obtain a limb model.
12. A system for processing an object, comprising: a first processing end and a second processing end, wherein the first processing end is a cloud end or a mobile terminal, the second processing end is a cloud algorithm background or a mobile terminal algorithm background, and the first processing end and the second processing end are connected in series,
the first processing terminal is used for acquiring an original image of a biological object;
the second processing end is used for reconstructing the original image to obtain a biological model of the biological object, wherein the biological model is used for simulating and obtaining an avatar of the biological object in a virtual world; respectively driving a plurality of part models in the biological model to execute dynamic effect information matched with the part models, wherein the dynamic effect information is used for representing visual dynamic effects generated by the driven part models; fusing the driven biological model into a scene material of the virtual world through a cloud algorithm module, and rendering to obtain a target moving image, wherein the target moving image is used for representing that the biological model presents a dynamic effect result in the virtual world; outputting the target moving image;
reconstructing the original image to obtain a biological model of the biological object, including: reconstructing the part images of the plurality of parts to obtain the part models of the plurality of parts of the biological object; combining the part models of the plurality of parts to generate the biological model;
the original image includes a trunk accessory image associated with the trunk image and/or a limb accessory image associated with the limb image, and the second processing end further reconstructs the part images of the plurality of parts to obtain the part models of the plurality of parts of the biological object through the following steps: reconstructing the trunk image in the part images and the accessory texture of the trunk accessory image to obtain a trunk model; and/or reconstructing the limb image in the part images and the limb accessory texture of the limb accessory image to obtain a limb model.
13. A system for processing an object, comprising: a server and a Virtual Reality (VR) device or an Augmented Reality (AR) device, wherein,
the server is used for acquiring an original image of a biological object; reconstructing the original image to obtain a biological model of the biological object, wherein the biological model is used for simulating an avatar of the biological object in a virtual world in the VR device or the AR device; respectively driving a plurality of part models in the biological model to execute dynamic effect information matched with the biological model, wherein the dynamic effect information is used for representing visual dynamic effects generated by the driven part models;
the VR device or the AR device is used for receiving the driven biological model sent by the server and fusing the driven biological model into a scene material of the virtual world to obtain a target dynamic image, wherein the target dynamic image is used for representing that the biological model presents a dynamic effect result in the virtual world;
reconstructing the original image to obtain a biological model of the biological object, including: reconstructing the part images of the plurality of parts to obtain the part models of the plurality of parts of the biological object; combining the part models of the plurality of parts to generate the biological model;
the original image comprises a trunk accessory image associated with a trunk image and/or a limb accessory image associated with a limb image, and the server reconstructs the part images of the plurality of parts to obtain the part models of the plurality of parts of the biological object through the following steps: reconstructing the trunk image in the part images and the accessory texture of the trunk accessory image to obtain a trunk model; and/or reconstructing the limb image in the part images and the limb accessory texture of the limb accessory image to obtain a limb model.
14. A processor, characterized in that the processor is configured to run a program, wherein the program when running performs the method of any of claims 1 to 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210745674.9A CN114821675B (en) | 2022-06-29 | 2022-06-29 | Object processing method and system and processor |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210745674.9A CN114821675B (en) | 2022-06-29 | 2022-06-29 | Object processing method and system and processor |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114821675A CN114821675A (en) | 2022-07-29 |
CN114821675B (en) | 2022-11-15
Family
ID=82523381
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210745674.9A Active CN114821675B (en) | 2022-06-29 | 2022-06-29 | Object processing method and system and processor |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114821675B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116246009B (en) * | 2022-09-06 | 2024-04-16 | 支付宝(杭州)信息技术有限公司 | Virtual image processing method and device |
CN115809696B (en) * | 2022-12-01 | 2024-04-02 | 支付宝(杭州)信息技术有限公司 | Virtual image model training method and device |
CN115738257B (en) * | 2022-12-23 | 2023-12-08 | 北京畅游时代数码技术有限公司 | Game role display method, device, storage medium and equipment |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111744200A (en) * | 2019-03-27 | 2020-10-09 | 电子技术公司 | Generating virtual characters from image or video data |
CN111833457A (en) * | 2020-06-30 | 2020-10-27 | 北京市商汤科技开发有限公司 | Image processing method, apparatus and storage medium |
CN113223125A (en) * | 2021-05-17 | 2021-08-06 | 百度在线网络技术(北京)有限公司 | Face driving method, device, equipment and medium for virtual image |
CN113298858A (en) * | 2021-05-21 | 2021-08-24 | 广州虎牙科技有限公司 | Method, device, terminal and storage medium for generating action of virtual image |
CN113658303A (en) * | 2021-06-29 | 2021-11-16 | 清华大学 | Monocular vision-based virtual human generation method and device |
CN114049468A (en) * | 2021-10-29 | 2022-02-15 | 北京市商汤科技开发有限公司 | Display method, device, equipment and storage medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2583687B (en) * | 2018-09-12 | 2022-07-20 | Sony Interactive Entertainment Inc | Method and system for generating a 3D reconstruction of a human |
CN113496507B (en) * | 2020-03-20 | 2024-09-27 | 华为技术有限公司 | Human body three-dimensional model reconstruction method |
CN114119910A (en) * | 2020-08-27 | 2022-03-01 | 北京陌陌信息技术有限公司 | Method, equipment and storage medium for matching clothing model with human body model |
2022-06-29 — CN CN202210745674.9A, patent CN114821675B (en), status: Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111744200A (en) * | 2019-03-27 | 2020-10-09 | 电子技术公司 | Generating virtual characters from image or video data |
CN111833457A (en) * | 2020-06-30 | 2020-10-27 | 北京市商汤科技开发有限公司 | Image processing method, apparatus and storage medium |
CN113223125A (en) * | 2021-05-17 | 2021-08-06 | 百度在线网络技术(北京)有限公司 | Face driving method, device, equipment and medium for virtual image |
CN113298858A (en) * | 2021-05-21 | 2021-08-24 | 广州虎牙科技有限公司 | Method, device, terminal and storage medium for generating action of virtual image |
CN113658303A (en) * | 2021-06-29 | 2021-11-16 | 清华大学 | Monocular vision-based virtual human generation method and device |
CN114049468A (en) * | 2021-10-29 | 2022-02-15 | 北京市商汤科技开发有限公司 | Display method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN114821675A (en) | 2022-07-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114821675B (en) | Object processing method and system and processor | |
CN101055647B (en) | Method and device for processing image | |
Ersotelos et al. | Building highly realistic facial modeling and animation: a survey | |
KR20210119438A (en) | Systems and methods for face reproduction | |
US11587288B2 (en) | Methods and systems for constructing facial position map | |
CN113628327B (en) | Head three-dimensional reconstruction method and device | |
CN108876886B (en) | Image processing method and device and computer equipment | |
US11562536B2 (en) | Methods and systems for personalized 3D head model deformation | |
WO2020056532A1 (en) | Marker-less augmented reality system for mammoplasty pre-visualization | |
CN110796593A (en) | Image processing method, device, medium and electronic equipment based on artificial intelligence | |
CN114219878A (en) | Animation generation method and device for virtual character, storage medium and terminal | |
US11461970B1 (en) | Methods and systems for extracting color from facial image | |
US11417053B1 (en) | Methods and systems for forming personalized 3D head and facial models | |
CN109035415B (en) | Virtual model processing method, device, equipment and computer readable storage medium | |
CN115049016B (en) | Model driving method and device based on emotion recognition | |
CN113870404B (en) | Skin rendering method of 3D model and display equipment | |
WO2024174422A1 (en) | Model generation method and apparatus, electronic device, and storage medium | |
CN113298956A (en) | Image processing method, nail beautifying method and device, and terminal equipment | |
CN114373043A (en) | Head three-dimensional reconstruction method and equipment | |
CN106504063B (en) | A kind of virtual hair tries video frequency showing system on | |
CN109685911B (en) | AR glasses capable of realizing virtual fitting and realization method thereof | |
CN117132711A (en) | Digital portrait customizing method, device, equipment and storage medium | |
CN116863044A (en) | Face model generation method and device, electronic equipment and readable storage medium | |
US20230050535A1 (en) | Volumetric video from an image source | |
CN115936796A (en) | Virtual makeup changing method, system, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |