CN114821004A - Virtual space construction method, virtual space construction device, equipment and storage medium - Google Patents

Virtual space construction method, virtual space construction device, equipment and storage medium

Info

Publication number
CN114821004A
CN114821004A
Authority
CN
China
Prior art keywords
display
theme
virtual
basic information
information data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210439227.0A
Other languages
Chinese (zh)
Inventor
满园园
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Life Insurance Company of China Ltd
Original Assignee
Ping An Life Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Life Insurance Company of China Ltd filed Critical Ping An Life Insurance Company of China Ltd
Priority to CN202210439227.0A priority Critical patent/CN114821004A/en
Publication of CN114821004A publication Critical patent/CN114821004A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/35 Clustering; Classification
    • G06F 16/353 Clustering; Classification into predefined classes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/243 Classification techniques relating to the number of classes
    • G06F 18/2431 Multiple classes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Abstract

The embodiment provides a virtual space construction method, a virtual space construction device, equipment and a storage medium, and belongs to the technical field of artificial intelligence. The method comprises the following steps: acquiring the facial features and body features of a target user; performing three-dimensional modeling according to the facial features and the body features to obtain a virtual human body model; acquiring basic information data of the target user; classifying the basic information data according to a preset theme classification model to obtain a theme type; extracting a corresponding display background from a preset database according to the theme type; embedding the basic information data into the corresponding display background according to the theme type to generate a virtual human theme space; and embedding the virtual human body model into the virtual human theme space to generate a target virtual space of the target user. The embodiment of the disclosure can enrich the display of personal information and make the basic information data more attractive for others to view.

Description

Virtual space construction method, virtual space construction device, equipment and storage medium
Technical Field
The present invention relates to the field of artificial intelligence technologies, and in particular, to a virtual space construction method, a virtual space construction apparatus, a device, and a storage medium.
Background
With the development of networks, people have become more and more accustomed to socializing online, which has given rise to the display of personal virtual spaces. In the related art, the display of a personal virtual space on a social network platform mainly constructs the space from the personal information a user fills in, and the space presents that information as plain text. Reading large amounts of text feels dry to others and reduces the richness of the personal information display.
Disclosure of Invention
The main purpose of the embodiments of the present disclosure is to provide a virtual space construction method, a virtual space construction apparatus, a device, and a storage medium, so as to construct a three-dimensional virtual space for displaying personal information, thereby enhancing the richness of the personal information display and increasing others' interest in learning about that information.
In order to achieve the above object, a first aspect of the embodiments of the present disclosure provides a virtual space construction method, including:
acquiring the facial features and body features of a target user;
performing three-dimensional modeling according to the facial features and the body features to obtain a virtual human body model;
acquiring basic information data of the target user;
classifying the basic information data according to a preset theme classification model to obtain a theme type;
extracting a corresponding display background from a preset database according to the theme type;
embedding the basic information data into the corresponding display background according to the theme type to generate a virtual human theme space;
and embedding the virtual human body model into the virtual human theme space to generate a target virtual space of the target user.
In some embodiments, the classifying the basic information data according to a preset topic classification model to obtain a topic type includes:
performing content identification on the basic information data to obtain image content information; the basic information data includes at least one of: video data and image data;
performing semantic analysis on the image content information to obtain text description information;
and classifying the contents of the text description information according to the preset theme classification model so as to determine the corresponding theme type.
In some embodiments, the embedding the basic information data into the corresponding display background according to the theme type to generate a virtual human theme space includes:
extracting corresponding initial display content from the basic information data according to the theme type;
obtaining display parameters of a display area in the display background;
adjusting the initial display content according to the display parameters to obtain target display content;
and embedding the target display content into a display area in the display background to obtain the virtual human theme space.
In some embodiments, the basic information data comprises: video data; the extracting of the corresponding initial display content from the basic information data according to the theme type includes:
extracting corresponding cover images from the video data according to the theme type;
performing content identification on the cover image to obtain image description information;
and generating the initial display content according to the cover image and the image description information.
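To illustrate the cover-image extraction above, suppose each candidate frame already carries content tags from an upstream recognizer; the frame whose tags overlap the theme's tags most becomes the cover. The tag-overlap scoring and all names below are assumptions, since the patent does not specify the matching rule:

```python
def pick_cover(frames: list, theme_tags: set) -> dict:
    """Choose the frame whose content tags best match the theme."""
    return max(frames, key=lambda f: len(theme_tags & set(f["tags"])))

frames = [
    {"id": 0, "tags": ["desk", "laptop"]},
    {"id": 1, "tags": ["medal", "podium"]},
]
cover = pick_cover(frames, {"medal", "trophy"})  # hypothetical honor-awards theme
```

The chosen frame would then pass through content identification to produce the image description information that accompanies it in the initial display content.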
In some embodiments, the presentation parameters include: display size, display shape and display color; the adjusting the initial display content according to the display parameters to obtain the target display content includes:
adjusting the size of the initial display content according to the display size to obtain first display content;
adjusting the shape of the first display content according to the display shape to obtain second display content;
and adjusting the color scheme of the second display content according to the display color, so that the display color serves as the color scheme of the second display content, to obtain the target display content.
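The three-stage adjustment (size first, then shape, then color) can be sketched as successive transformations of a content record. This is an illustration only: the `DisplayContent` fields and the `fit_to_display` name are assumptions, as the patent does not prescribe a data layout.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class DisplayContent:
    width: int
    height: int
    shape: str
    color: str

def fit_to_display(content: DisplayContent, size: tuple,
                   shape: str, color: str) -> DisplayContent:
    """Apply the display area's parameters in the stated order."""
    first = replace(content, width=size[0], height=size[1])  # first display content
    second = replace(first, shape=shape)                     # second display content
    return replace(second, color=color)                      # target display content

target = fit_to_display(DisplayContent(1920, 1080, "rectangle", "white"),
                        (640, 480), "rounded", "navy")
```

Keeping each stage a separate transformation mirrors the first/second/target naming in the text.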
In some embodiments, after the virtual human body model is embedded into the virtual human theme space to generate the target virtual space corresponding to the target user, the method further comprises:
and setting the access sequence of the virtual human theme space.
In some embodiments, the facial features include: head features, facial features, and expression features; the body features include: upper body features, action features, and lower body features; and the performing of three-dimensional modeling according to the facial features and the body features to obtain a virtual human body model comprises:
acquiring corresponding face key point information according to the head features, the facial features and the expression features;
acquiring corresponding body key point information according to the upper body feature, the action feature and the lower body feature;
performing three-dimensional modeling according to the face key point information to obtain an initial face model;
performing three-dimensional modeling according to the body key point information to obtain an initial body model;
and splicing the initial face model and the initial body model to obtain the virtual human body model.
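The key-point-to-model flow of this embodiment can be sketched with placeholder models in which a model is just its named key points; real pipelines would fit the key points to textured 3D meshes, and `build_model` and `splice` are hypothetical names used only for this sketch.

```python
def build_model(keypoints: dict) -> dict:
    """Stand-in for three-dimensional modeling from key point information."""
    return {"vertices": dict(keypoints)}

def splice(face_model: dict, body_model: dict) -> dict:
    """Splice the initial face model and the initial body model together."""
    merged = {**face_model["vertices"], **body_model["vertices"]}
    return {"vertices": merged}

face = build_model({"nose": (0.0, 1.70, 0.10), "chin": (0.0, 1.62, 0.09)})
body = build_model({"shoulder_l": (-0.20, 1.50, 0.0), "hip_r": (0.10, 1.00, 0.0)})
virtual_human = splice(face, body)
```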
To achieve the above object, a second aspect of the present disclosure provides a virtual space constructing apparatus, including:
the first acquisition module is used for acquiring the facial features and body features of a target user;
the modeling module is used for performing three-dimensional modeling according to the facial features and the body features to obtain a virtual human body model;
the second acquisition module is used for acquiring basic information data corresponding to the target user;
the classification module is used for classifying the basic information data according to a preset theme classification model to obtain a theme type;
the extraction module is used for extracting the corresponding display background from a preset database according to the theme type;
the embedding module is used for embedding the basic information data into the corresponding display background according to the theme type to generate a virtual human theme space;
and the space generation module is used for embedding the virtual human body model into the virtual human theme space to generate a target virtual space of the target user.
To achieve the above object, a third aspect of the present disclosure provides an electronic device, including:
at least one memory;
at least one processor;
at least one program;
the programs are stored in the memory, and the processor executes the at least one program to implement:
the method of the first aspect.
To achieve the above object, a fourth aspect of the present disclosure proposes a computer-readable storage medium storing computer-executable instructions for causing a computer to perform:
the method of the first aspect.
According to the virtual space construction method provided by the embodiments of the present disclosure, constructing a three-dimensional target virtual space and placing in it a virtual human body model corresponding to the target user to show the user's personal situation not only improves the richness of the personal information display but also makes that display more attractive to others, allowing them to know the target user comprehensively. This improves trust between people and, in turn, the success rate of transactions built on that trust.
Drawings
Fig. 1 is a flowchart of a virtual space construction method provided by an embodiment of the present disclosure;
FIG. 2 is a flowchart of step S200 in FIG. 1;
FIG. 3 is a flowchart of step S400 in FIG. 1;
FIG. 4 is a flowchart of step S600 in FIG. 1;
fig. 5 is a flowchart of step S610 in fig. 4;
fig. 6 is a flowchart of step S630 in fig. 4;
fig. 7 is a flowchart of a virtual space construction method according to another embodiment of the disclosure;
fig. 8 is a block diagram of a virtual space constructing apparatus according to another embodiment of the present disclosure;
fig. 9 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It should be noted that although functional blocks are partitioned in a schematic diagram of an apparatus and a logical order is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the partitioning of blocks in the apparatus or the order in the flowchart. The terms first, second and the like in the description and in the claims, and the drawings described above, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing embodiments of the invention only and is not intended to be limiting of the invention.
First, several terms referred to in the present application are explained:
artificial Intelligence (AI): is a new technical science for researching and developing theories, methods, technologies and application systems for simulating, extending and expanding human intelligence; artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and produces a new intelligent machine that can react in a manner similar to human intelligence, and research in this field includes robotics, language recognition, image recognition, natural language processing, and expert systems, among others. The artificial intelligence can simulate the information process of human consciousness and thinking. Artificial intelligence is also a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results.
Virtual Reality technology (VR): a computer simulation system that can create and let users experience a virtual world; it uses a computer to generate a simulated environment in which the user is immersed. Virtual reality technology combines electronic signals generated by computer technology with real-life data and converts them into phenomena people can perceive. These phenomena may be real objects in reality or substances invisible to the naked eye, expressed through three-dimensional models. Because they cannot be seen directly but are a real world simulated by computer technology, these phenomena are called virtual reality.
Three-dimensional model: three-dimensional models are polygonal representations of objects, typically displayed by a computer or other video device. The displayed object may be a real-world entity or a fictional object. Anything that exists in physical nature can be represented by a three-dimensional model. Three-dimensional models are often generated using specialized software, such as three-dimensional modeling tools, but may be generated in other ways. The three-dimensional model may be generated manually or according to a certain algorithm as data of points and other information sets. Although usually present in a virtual manner in a computer or computer file, similar models described on paper can also be considered as three-dimensional models. Three-dimensional models are used broadly wherever three-dimensional graphics are used.
MTCNN (Multi-task Cascaded Convolutional Networks): a widely applied algorithm in face detection. MTCNN is a deep-learning-based method for face detection and face alignment. The algorithm mainly uses three cascaded networks and the idea of running a classifier over candidate boxes to perform fast and efficient face detection: P-Net quickly generates candidate windows, R-Net filters them to select high-precision candidates, and O-Net generates the final bounding boxes and facial key points. Like many convolutional neural network models for image problems, it also uses image pyramids, bounding-box regression, non-maximum suppression, and so on.
OCR (Optical Character Recognition): optical character recognition refers to the process of scanning text material and then analyzing and processing the image file to obtain character and layout information. It uses optical and computer technology to read characters printed or written on paper and convert them into a format that a computer can accept and a human can understand.
With the development of networks, more and more people are used to socializing online, but to let the other party know them better during online social contact, a user needs a personal virtual space for self-display. The personal virtual space in the related art, however, is a two-dimensional display in which only personal data can be viewed, so others cannot get a comprehensive picture of the user. For example, in the sales industry, salespeople usually make a PPT to display their personal information so that customers know them better, but winning customers' trust through a PPT self-introduction is highly challenging, and it is difficult for customers to know the salesperson comprehensively.
The embodiment of the application can acquire and process related data based on an artificial intelligence technology. Among them, Artificial Intelligence (AI) is a theory, method, technique and application system that simulates, extends and expands human Intelligence using a digital computer or a machine controlled by a digital computer, senses the environment, acquires knowledge and uses the knowledge to obtain the best result.
The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
Based on this, the embodiment of the present disclosure provides a virtual space construction method, which can construct a three-dimensional virtual space to display personal information of a target user in a classified manner, so as to improve richness of personal information display and improve interest of others in understanding the personal information of the target user.
The embodiments of the present disclosure provide a virtual space construction method, a virtual space construction apparatus, a device, and a storage medium, which are specifically described in the following embodiments, and first a virtual space construction method in the embodiments of the present disclosure is described.
The embodiment of the application provides a virtual space construction method, a virtual space construction device, equipment and a storage medium, and relates to the technical field of artificial intelligence, in particular to the technical field of three-dimensional modeling. The virtual space construction method provided by the embodiment of the application can be applied to a terminal or a server side, and can also be software running in the terminal or the server side. In some embodiments, the terminal may be a smartphone, tablet, laptop, desktop computer, smart watch, or the like; the server side can be an independent server, and can also be a cloud server for providing basic cloud computing services such as cloud service, a cloud database, cloud computing, a cloud function, cloud storage, Network service, cloud communication, middleware service, domain name service, security service, Content Delivery Network (CDN), big data and an artificial intelligence platform; the software may be an application or the like that implements a virtual space construction method, but is not limited to the above form.
The application is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
Referring to fig. 1, fig. 1 is an optional flowchart of a virtual space construction method provided in some embodiments of the present application, where the virtual space construction method includes steps S100, S200, S300, S400, S500, S600, and S700, it should be understood that the virtual space construction method in the embodiments of the present application includes, but is not limited to, steps S100 to S700, and the following describes steps S100 to S700 in detail with reference to fig. 1.
Step S100: and acquiring the face features and the body features of the target user.
Specifically, the facial features and body features of the target user are acquired by starting a camera and capturing them in real time. The camera may capture a person video containing the target user or a personal image of the target user, or scan the target user's face and body in real time to obtain the facial features and body features. If a person video is captured, it is split into frames to obtain person images, and the facial features and body features are then extracted from those images. The time interval between person images in the captured video data is generally far shorter than the time the target user needs to switch actions, so the changes of the three-dimensional model created from the captured facial and body features track the target user's actual state switches more closely, the displayed image effect is smoother and more realistic, and the target user's experience is improved. The framing and feature extraction of the person video are implemented with a recognition model; in this embodiment, the recognition model is the MTCNN model. The MTCNN model mainly comprises three cascaded networks: P-Net, which quickly generates candidate windows; R-Net, which filters them to select high-precision candidates; and O-Net, which generates the final bounding boxes and facial key points.
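The frame-sampling logic above, keeping the gap between sampled frames well below the time a person needs to switch actions, can be sketched in a few lines. This is a minimal illustration rather than the patent's implementation: the function name and parameters are assumptions, and the actual embodiment delegates framing and detection to the MTCNN model.

```python
def frame_indices(duration_s: float, fps: float, sample_interval_s: float) -> list:
    """Indices of the frames to sample from a clip so that consecutive
    sampled frames are at most sample_interval_s apart, which should be
    far shorter than the user's action-switching time."""
    step = max(1, round(sample_interval_s * fps))  # frames between samples
    total = int(duration_s * fps)                  # total frames in the clip
    return list(range(0, total, step))

# A 2-second clip at 30 fps sampled every 0.1 s yields one frame in every 3.
idx = frame_indices(2.0, 30.0, 0.1)
```

Each sampled frame would then be passed through the cascaded P-Net/R-Net/O-Net stages to obtain the facial key points.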
For example, if a personal image of the user is obtained and the image contains the user's whole figure, face recognition and body recognition are performed on it to obtain the facial features and body features.
Step S200: and performing three-dimensional modeling according to the human face characteristics and the body characteristics to obtain a virtual human body model.
It should be noted that the virtual human body model is obtained by performing three-dimensional modeling according to the facial features and the body features, so that the constructed model resembles the target user; when the user presents the virtual human theme space with this virtual human body model, the presentation is more realistic and the client's sense of trust in the user is improved. The virtual human body model is a three-dimensional human body model: a human body mesh is created according to the facial features and the body features, and texture mapping is performed on the mesh to obtain the virtual human body model.
Step S300: and acquiring basic information data of the target user.
Specifically, the acquired basic information data represents the basic information of the target user and mainly comprises data on honor awards, service customers, personal talents, personal introduction, personal reflections, and the like, so that the virtual human theme space constructed from the acquired basic information data better fits the user's specific situation.
The more complete the uploaded basic information data, the better the virtual human theme space constructed from it matches the user's actual situation. The basic information data can be uploaded by the user as personal material, or voice data can be collected through voice question-and-answer and subjected to speech content recognition to obtain the basic information data.
Step S400: and classifying the basic information data according to a preset theme classification model to obtain a theme type.
It should be noted that a plurality of pieces of basic information data exist, the theme type of each piece differs, and a corresponding virtual human theme space needs to be constructed for each theme type. Therefore, the basic information data is input into a preset theme classification model for theme classification to obtain the corresponding theme type.
For example, the basic information data includes honor award data, service customer data, personal talent data, personal introduction data, personal reflection data, and the like, so the theme types obtained by performing theme analysis on the basic information data include: honor awards, service customers, personal talents, personal introduction, personal reflections, and the like. Dividing the basic information data by theme type with the preset theme classification model makes it possible to construct a corresponding virtual human theme space and display the target user's personal information in a more targeted way.
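As an illustration of step S400, a keyword-rule classifier can stand in for the preset theme classification model. The real embodiment would use a trained model, so the keyword table and the `classify_theme` function below are assumptions for demonstration only.

```python
# Toy stand-in for the preset theme classification model: score each theme
# by how many of its keywords occur in the text description of a piece of
# basic information data, and return the best-scoring theme.
THEME_KEYWORDS = {
    "honor awards": ["award", "prize", "medal", "honor"],
    "service customers": ["customer", "client", "service"],
    "personal talents": ["talent", "skill", "hobby"],
    "personal introduction": ["introduction", "name", "age"],
    "personal reflections": ["reflection", "insight", "experience"],
}

def classify_theme(description: str) -> str:
    text = description.lower()
    scores = {theme: sum(word in text for word in words)
              for theme, words in THEME_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    # Fall back to a default theme when nothing matches.
    return best if scores[best] > 0 else "personal introduction"
```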
Step S500: extracting a corresponding display background from a preset database according to the theme type.
It should be noted that, to make the constructed virtual human theme space fit its theme type better, raise the client's attention, and increase the attractiveness of the personal display, the corresponding display background is extracted from the preset database according to the theme type, and the personal display is performed against that background, thereby making the personal introduction more attractive. The preset database stores matching information between theme types and display backgrounds, and different theme types correspond to different display backgrounds. Therefore, the corresponding matching information is extracted from the preset database according to the theme type, and the corresponding display background is then determined from the theme type and the matching information.
Step S600: and embedding the basic information data into the corresponding display background according to the theme type to generate a virtual human theme space.
It should be noted that after the display background is determined, the basic information data of the same theme type is embedded into the corresponding display background to generate the virtual human theme space. The virtual human theme space is a three-dimensional space that displays the target user's basic information data in a three-dimensional format, improving the richness of the personal information display and thereby the attractiveness of the user's personal presentation.
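Steps S500 and S600 amount to a lookup followed by an embedding. The sketch below models the preset database as a plain mapping; the background identifiers, the record layout, and the `build_theme_space` name are illustrative assumptions, not details from the patent.

```python
# Preset "database": matching information between theme types and backgrounds.
PRESET_BACKGROUNDS = {
    "honor awards": "podium_hall.bg",
    "service customers": "office_lounge.bg",
}

def build_theme_space(theme: str, info_items: list) -> dict:
    """Extract the display background matched to the theme, then embed the
    basic information data of that theme into it."""
    background = PRESET_BACKGROUNDS.get(theme)
    if background is None:
        raise KeyError(f"no display background matched to theme {theme!r}")
    return {"theme": theme, "background": background, "contents": list(info_items)}

space = build_theme_space("honor awards", ["2021 gold medal", "top seller"])
```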
Step S700: and embedding the virtual human body model into the virtual human theme space to generate a target virtual space of the target user.
It should be noted that the virtual human body model is embedded into the virtual human theme space to obtain the target virtual space, which is then displayed. Displaying the target virtual space through virtual reality equipment makes the display more realistic.
Specifically, the virtual reality equipment is VR equipment, and the VR equipment includes any one of the following: VR glasses, VR helmet, VR all-in-one. In this embodiment, the VR device is VR glasses, and the target virtual space is displayed through the VR glasses.
Through steps S100 to S700, a virtual human body model is obtained by performing three-dimensional modeling according to the facial features and body features; basic information data of the target user is then acquired and classified according to a preset theme classification model to obtain the corresponding theme type; a corresponding display background is extracted from a preset database according to the theme type; the basic information data is embedded into the display background corresponding to the theme type to generate a virtual human theme space; and finally the virtual human body model is embedded into the virtual human theme space to obtain the target virtual space, which is displayed through virtual reality equipment. A three-dimensional target virtual space is thus constructed, with a virtual human body model corresponding to the target user set up to show the user's personal situation. This not only improves the richness of the personal information display but also increases others' interest in the target user's introduction and lets them know the target user comprehensively, improving trust between people and, in turn, the success rate of transactions built on that trust.
Referring to fig. 2, in some embodiments of the present application, the facial features include: head features, facial features, and expression features; the physical characteristics include: upper body features, motion features, and lower body features. Step S200 may include, but is not limited to including, step S210, step S220, step S230, step S240, and step S250;
step S210: acquiring corresponding face key point information according to the head features, the facial features, and the expression features.
Since the face features include the head features, the facial features, and the expression features, the corresponding face key point information is extracted from these three kinds of features, so that the key point information that best represents the face is obtained.
Step S220: acquiring corresponding body key point information according to the upper body features, the motion features, and the lower body features.
Since the body features include the upper body features, the motion features, and the lower body features, the corresponding body key point information is acquired from them; that is, key point information of the target user's whole body is acquired, so that a virtual human body model containing the whole image of the target user can be constructed.
Step S230: performing three-dimensional modeling according to the face key point information to obtain an initial face model.
It should be noted that the acquired face key point information is the key point position information of the head features, facial features, and expression features. Three-dimensional modeling is therefore performed by mapping the face key point information into a preset 3D mesh to obtain corresponding polygonal shapes, and the polygonal shapes are connected together to construct the initial face model.
Step S240: performing three-dimensional modeling according to the body key point information to obtain an initial body model.
Specifically, the body key points are mapped into a preset 3D mesh to obtain corresponding polygonal shapes, and the polygonal shapes are then connected together to obtain the initial body model corresponding to the target user.
Step S250: splicing the initial face model and the initial body model to obtain the virtual human body model.
It should be noted that after the initial face model and the initial body model are constructed, they are spliced to obtain the virtual human body model corresponding to the virtual human, and the virtual human corresponds to the target user. The characteristics of the obtained virtual human body model therefore correspond to the characteristics of the target user, so that the virtual human body model constructed in the target virtual space introduces the individual more realistically and improves the audience's sense of trust.
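The modeling and splicing of steps S210 to S250 can be illustrated with a minimal Python sketch. The `Mesh` structure, the zero-depth mapping, and the triangulation scheme are all simplifying assumptions for illustration; the patent does not specify a concrete mesh representation.

```python
from dataclasses import dataclass, field

@dataclass
class Mesh:
    vertices: list                              # 3D points mapped from key points
    faces: list = field(default_factory=list)   # polygon index tuples

def build_mesh(keypoints):
    # Map each 2D key point (x, y) onto a preset 3D grid (here: depth 0.0),
    # then connect consecutive triples of vertices into triangular polygons.
    vertices = [(x, y, 0.0) for (x, y) in keypoints]
    faces = [(i, i + 1, i + 2) for i in range(len(vertices) - 2)]
    return Mesh(vertices, faces)

def splice(face_mesh, body_mesh):
    # Step S250: concatenate the two meshes; body-face indices are offset
    # by the number of face-mesh vertices so they remain valid after splicing.
    offset = len(face_mesh.vertices)
    vertices = face_mesh.vertices + body_mesh.vertices
    faces = face_mesh.faces + [(a + offset, b + offset, c + offset)
                               for (a, b, c) in body_mesh.faces]
    return Mesh(vertices, faces)

face = build_mesh([(0, 0), (1, 0), (0, 1), (1, 1)])   # face key points (S230)
body = build_mesh([(0, 2), (1, 2), (0, 3)])           # body key points (S240)
model = splice(face, body)                            # virtual human body model
```

A production system would use dense landmark sets and a real 3D reconstruction pipeline; the point here is only the index bookkeeping when two partial models are joined.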
Referring to fig. 3, in some embodiments of the present application, step S400 may include, but is not limited to, step S410, step S420, and step S430;
step S410: carrying out content identification on the basic information data to obtain image content information; the basic information data includes at least one of: video data and image data.
Specifically, if the basic information data is one or both of video data and image data, content recognition is performed on it to obtain image content information. If the basic information data is video data, the video data is divided into frames to obtain video frame images, and content recognition is performed on the video frame images; if the basic information data is image data, content recognition is performed on the image data directly to obtain the image content information. The content recognition of the video frame images or the image data is mainly OCR content recognition, so that the characters in the image are recognized through OCR to obtain the image content information.
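A minimal sketch of this branch, in Python with pure stdlib. The frame-sampling interval and the `recognize_text` stand-in (which here just reads text already attached to a frame, in place of a real OCR engine such as Tesseract) are illustrative assumptions, not part of the patent.

```python
def sample_frames(total_frames, fps, every_n_seconds=1.0):
    # Divide the video into frames but run OCR only on one frame per
    # interval, rather than on every single frame.
    step = max(1, int(fps * every_n_seconds))
    return list(range(0, total_frames, step))

def recognize_text(frame):
    # Stand-in for an OCR call; a frame is modeled as a dict that may
    # carry the text embedded in the image.
    return frame.get("text", "")

def extract_image_content(frames, fps):
    # Video branch of step S410: sample frames, OCR each sampled frame,
    # and keep the non-empty recognition results as image content information.
    indices = sample_frames(len(frames), fps)
    texts = [recognize_text(frames[i]) for i in indices]
    return [t for t in texts if t]
```

For image data the framing step is skipped and `recognize_text` is applied to the image directly, mirroring the two branches described above.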
Step S420: performing semantic analysis on the image content information to obtain text description information.
Specifically, since the image content information is the text content in the image, semantic understanding needs to be performed on the image content information to obtain the text description information. The text description information includes any one of the following: honor and award description information, personal style description information, personal introduction description information, and personal experience description information. Therefore, by performing semantic understanding on the image content information to determine which type of information the text description information describes, the theme to which the corresponding basic information data belongs can be determined from the text description information.
If the basic information data is text data, semantic understanding is performed on the text data to obtain the text description information, and the theme type to which the text data belongs is determined directly.
Step S430: classifying the content of the text description information according to a preset theme classification model to determine the corresponding theme type.
Specifically, the theme type corresponding to the text description information is determined according to its description content; that is, the text description information may be classified by content according to the preset theme classification model to determine the theme type of the basic information data. For example, if the text description information is honor and award description information, the theme type of the corresponding basic information data is determined to be honor awards; if the text description information is personal introduction description information, the theme type of the corresponding basic information data is determined to be personal introduction.
For example, if the description content in the text description information is "xxx award", the theme type of the text description information is determined to be honor awards according to that content. The corresponding theme type can thus be determined directly from the text description information, so that the theme type of the basic information data is determined quickly.
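As a toy stand-in for the preset theme classification model, a keyword-scoring classifier captures the idea of step S430. The theme names and keyword lists below are illustrative assumptions; the patent leaves the model itself unspecified, and a real system would likely use a trained text classifier.

```python
# Hypothetical keyword lists per theme type (assumption for illustration).
TOPIC_KEYWORDS = {
    "honor_award":    ["award", "prize", "medal", "trophy", "title"],
    "personal_intro": ["introduce", "name", "born", "graduated"],
    "personal_style": ["style", "hobby", "photo"],
}

def classify(description):
    # Score each theme by how many of its keywords appear in the
    # text description information, then pick the highest-scoring theme.
    text = description.lower()
    scores = {theme: sum(kw in text for kw in kws)
              for theme, kws in TOPIC_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "uncategorized"
```

So a description such as "A won the xxx award" would land in the honor-awards theme, which is then used to pick the display background.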
Through the above steps S410 to S430, if the basic information data is at least one of video data and image data, OCR content recognition is performed on it to extract characters and obtain image content information; semantic understanding is then performed on the image content information to obtain the corresponding text description information; finally, the description content of the text description information is classified according to the preset theme classification model to obtain the theme type corresponding to each piece of text description information, so that the basic information data is attributed to the corresponding theme type and the classification of the basic information data is achieved.
Referring to fig. 4, in some embodiments of the present application, step S600 may include, but is not limited to, step S610, step S620, step S630, and step S640.
Step S610: extracting corresponding initial display content from the basic information data according to the theme type.
It should be noted that after the display background corresponding to the theme type is obtained, the basic information data of the same theme type needs to be embedded into the display background. However, if all the basic information data were placed in the display background directly, the logic of the presentation would suffer and others' interest in the personal display would be reduced. Therefore, the corresponding initial display content is extracted from the basic information data and embedded into the display background, which reduces the amount of data the display background must hold during presentation and improves the operating efficiency of the virtual personal space.
For example, if the basic information data is video data, the video data is decomposed into video frame images, and the images and image content corresponding to the theme type are acquired from the video frame images as the initial display content. If the basic information data is image data, the content corresponding to the theme type is acquired directly from the image data as the initial display content. If the basic information data is text data, the characters related to the theme type are extracted directly from the text data to obtain the initial display content. For instance, if the theme type is honor awards and the basic information data is video data, images of trophies or medals are captured from the video data, the video segments introducing the trophies and medals are acquired, and the initial display content is obtained from the trophy images and the medal introductions.
Step S620: acquiring display parameters of the display areas in the display background.
It should be noted that a plurality of display areas are arranged in sequence in the display background. The display parameters of the display areas differ between display backgrounds, so the display parameters need to be acquired to determine the shape, size, and color of each display background's display areas.
Step S630: adjusting the initial display content according to the display parameters to obtain target display content.
It should be noted that, because the display parameters of the display areas in different display backgrounds are different, in order to better embed the initial display content into the display background, the initial display content is adjusted according to the display parameters to obtain the target display content, so that the target display content better conforms to the display background.
Step S640: and embedding the target display content into a display area in a display background to obtain a virtual human theme space.
It should be noted that because a plurality of display areas exist in the display background, the target display contents are embedded into the display areas one by one according to the ordering of the display areas, so that a complete virtual human theme space is obtained. The number of display areas in the display background is determined by the number of pieces of initial display content: if N pieces of initial display content are extracted from the basic information data of the same theme type, N display areas are set in the display background and ordered in sequence. The generated target display contents are then embedded into the display areas one by one to obtain a virtual human theme space in which every display area is filled with target display content.
For example, if the theme type is honor awards, the display background extracted from the preset database according to the theme type is an honor wall. The trophy images and medal images related to the theme type are extracted from the basic information data, together with the introductions of the trophies and medals; if there are N trophy and medal images, N display areas are set in the display background. The display parameters of the display areas are then obtained, the trophy and medal images are adjusted according to the display parameters, and the adjusted images together with the corresponding introductions are embedded into the display areas in the display background to obtain a virtual human theme space, that is, an honor wall covered with trophies and medals.
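The "N contents, N ordered areas" arrangement of step S640 can be sketched as follows; the dictionary layout and field names are assumptions made for illustration only.

```python
def build_theme_space(theme, background, contents):
    # One display area per piece of initial display content (N contents
    # -> N areas), ordered in sequence within the display background.
    areas = [{"order": i, "content": c} for i, c in enumerate(contents)]
    return {"theme": theme, "background": background, "areas": areas}

# Honor-awards example: an honor wall filled with trophy and medal images.
wall = build_theme_space(
    "honor_award", "honor_wall",
    ["trophy_1.png", "medal_2.png", "trophy_3.png"])
```

Every area ends up filled, and the presentation order follows the area ordering, matching the description above.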
Referring to fig. 5, in some embodiments of the present application, if the basic information data is video data, the step S610 may include, but is not limited to, the steps S611, S612, and S613.
Step S611: extracting a corresponding cover image from the video data according to the theme type.
If the basic information data is video data, the video data is divided into a plurality of video frame images, and a target video frame image corresponding to the theme type is acquired from them as the cover image. If a plurality of target video frame images exist, the target video frame image with the highest definition is acquired directly as the cover image. By extracting a cover image from the video data as display content in the display background, images that express the user's personal situation can be displayed without limiting the types of data the user may upload, thereby improving the appeal of the virtual human theme space.
For example, if the theme type is honor awards, the video data is divided into a plurality of video frame images, and cover images whose content is trophies and medals are extracted from these video frames. If the theme type is personal style, a cover image whose content is a picture of the target user is acquired from the video frame images.
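Picking the highest-definition frame among the candidates can be sketched with a simple clarity score. Using the raw pixel-intensity variance as that score is an assumption for illustration; a real system might use the variance of a Laplacian-filtered image instead, and the frame dictionaries are hypothetical.

```python
def sharpness(pixels):
    # Variance of pixel intensities as a crude definition/clarity score.
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def pick_cover(candidates):
    # Among frames already matched to the theme type, keep the clearest
    # one as the cover image (step S611).
    return max(candidates, key=lambda f: sharpness(f["pixels"]))

blurry = {"id": 7,  "pixels": [100, 101, 99, 100]}   # low contrast
crisp  = {"id": 12, "pixels": [0, 255, 0, 255]}      # high contrast
cover = pick_cover([blurry, crisp])
```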
Step S612: performing content recognition on the cover image to obtain image description information.
It should be noted that performing content recognition on the cover image mainly means performing OCR content recognition on it to obtain the image description information, so that the text content of the cover image is made clear through the image description information. For example, if the cover image is an award, the text in the award is extracted to obtain the image description information "A obtained the best counselor title in the first half of 2022"; the description information corresponding to the cover image can thus be extracted more clearly.
Step S613: generating initial display content according to the cover image and the image description information.
Specifically, the cover image is matched with the corresponding image description information to generate the initial display content, through which the personal situation of the target user can be understood more concretely and clearly. For example, if the cover image is an award image, the award image and the corresponding image description information "A obtained the best counselor title in the first half of 2022" are matched to generate the initial display content; if the text in the cover image is small, the image description information corresponding to the cover image makes the award obtained by the target user clearer.
It should be noted that if the basic information data is image data, the corresponding theme-related image is extracted from it according to the theme type to obtain the cover image, and content recognition is performed on the cover image to obtain the image description information, so that the initial display content is generated from the cover image and the image description information. For example, if the theme type is honor awards, an image related to the honor award is captured from the image data to obtain the cover image; if the cover image is an award, OCR content recognition is performed on it to obtain the corresponding image description information. If the image description information is "first prize of the sales contest", the award image and that image description information are matched to generate the initial display content. If the basic information data is text data, the corresponding text description is extracted from it according to the theme type to obtain target description information, the target description information is adjusted into the text format corresponding to the theme type, and the adjusted target description information is rendered into picture format to obtain the initial display content. Corresponding initial display content is thus extracted from the basic information data according to the theme type, which enriches the personal display.
Referring to fig. 6, in some embodiments of the present application, the display parameters include: display size, display shape and display color; step S630 may include, but is not limited to, step S631, step S632, and step S633.
Step S631: adjusting the size of the initial display content according to the display size to obtain first display content.
It should be noted that because different theme types correspond to different display backgrounds, and the display areas of different display backgrounds differ, the initial display content is adjusted according to the display parameters of the display areas so that it can be embedded directly into the display areas to generate the virtual human theme space. The display parameters include the display size, display shape, and display color, so the size of the initial display content is first adjusted according to the display size: the initial display content is adjusted according to the image width and image height of the display size to obtain first display content consistent with the display size of the display area.
Step S632: adjusting the shape of the first display content according to the display shape to obtain second display content.
It should be noted that, in order to facilitate the initial display content to be quickly embedded into the display background, the shape of the first display content is adjusted according to the display shape to obtain the second display content, and the shape and size of the second display content are matched with those of the display area. If the display shape is a rectangle, the shape of the first display content is also set to be a rectangle to obtain the second display content. And if the display shape is a triangle, setting the shape of the first display content into the triangle to obtain a second display content.
Step S633: adjusting the color scheme of the second display content according to the display color, taking the display color as the color scheme of the second display content to obtain the target display content.
It should be noted that in order to make the virtual human theme space generated after the display content is embedded into the display areas look color-coordinated and simpler, the display color needs to be used as the color scheme of the second display content; that is, the color schemes of the second display contents within the same theme type are kept consistent, and the resulting target display content matches the shape, size, and color scheme of the display area. The display background stored in the preset database is predefined for the target user, as are the display areas within it. Matching the shape, size, and color scheme of the target display content to those of the display area therefore makes the virtual human theme space generated by embedding the target display content into the display background more uniform, better meets the aesthetic expectations of the target user, and improves the target user's experience of personal display.
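The three adjustment steps S631 to S633 amount to overwriting the content's size, shape, and color with the display area's parameters. A minimal sketch, with hypothetical field names and parameter values chosen for illustration:

```python
def adjust_display_content(initial, display):
    # Step S631: resize to the display area's width and height.
    first = dict(initial, width=display["width"], height=display["height"])
    # Step S632: reshape to the display area's shape.
    second = dict(first, shape=display["shape"])
    # Step S633: adopt the display area's color as the color scheme.
    target = dict(second, color=display["color"])
    return target

area = {"width": 320, "height": 240, "shape": "rectangle", "color": "gold"}
content = adjust_display_content({"image": "trophy_1.png"}, area)
```

After these three passes the target display content can be embedded into the display area directly, since every display parameter already matches.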
In some embodiments of the present application, after performing step S700, the virtual space construction method further includes:
Step S800: setting the access sequence of the virtual human theme spaces.
Each theme type corresponds to one virtual human theme space, and different virtual human theme spaces present personal information about different aspects of the target user. The access sequence of the virtual human theme spaces therefore needs to be set according to the target user's preset display order, so that the corresponding virtual human theme spaces are presented in that order.
Specifically, after the access sequence of the virtual human theme spaces is set, the spaces are displayed in that sequence, forming an overall VR touring route; a step-by-step visiting route guide is formed according to the access sequence, yielding a complete virtual space.
Referring to fig. 7, in some embodiments of the present application, step S800 may include, but is not limited to, step S810 and step S820.
Step S810: acquiring a preset access sequence of the theme type;
it should be noted that the basic information data is classified according to the preset theme classification model to determine the corresponding theme types, and a corresponding virtual human theme space is then constructed for each theme type; the virtual human theme spaces corresponding to different theme types differ. Therefore, the preset access sequence of the theme types set in advance by the target user is obtained; that is, the access order number of each theme type is determined. For example, the theme types may include honor awards, service clients, personal talents, personal introduction, and personal reflections, and the acquired preset access sequence of the theme types may be "personal introduction - personal talents - service clients - honor awards - personal reflections". The access sequence of the corresponding virtual human theme spaces is determined according to this preset access sequence of the theme types.
Step S820: setting the access sequence of the virtual human theme spaces corresponding to the theme types according to the preset access sequence.
It should be noted that determining the preset access order of each theme type determines the access order of the corresponding virtual human theme space. The virtual human theme spaces are displayed one by one according to the preset access order of the theme types, so that they are presented in the order preset by the target user, which improves the target user's experience and enhances the appeal of the target user's virtual human theme spaces to the audience.
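Steps S810 and S820 reduce to sorting the theme spaces by the user's preset order. The theme identifiers and the fallback for unlisted themes below are assumptions for illustration:

```python
# Hypothetical preset access sequence set by the target user (step S810).
PRESET_ORDER = ["personal_intro", "personal_talent",
                "service_client", "honor_award", "reflections"]

def order_theme_spaces(spaces):
    # Step S820: sort the virtual human theme spaces by the preset
    # visiting order; themes not in the preset order are visited last.
    rank = {theme: i for i, theme in enumerate(PRESET_ORDER)}
    return sorted(spaces, key=lambda s: rank.get(s["theme"], len(PRESET_ORDER)))

spaces = [{"theme": "honor_award"}, {"theme": "personal_intro"},
          {"theme": "service_client"}]
tour = order_theme_spaces(spaces)   # the VR touring route
```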
After the access sequence of each virtual human theme space is determined, the virtual human body model is embedded into each virtual human theme space to generate the target virtual space, and the target virtual space is displayed through virtual reality equipment. The target user's personal information is thus presented more richly and diversely, others can understand the target user more comprehensively, interpersonal trust is improved, and the transaction success rate that such trust brings is raised.
Summarizing steps S100 to S800: the head features, facial features, and expression features as well as the upper body features, motion features, and lower body features of the target user are acquired; the corresponding face key point information is extracted from the head, facial, and expression features, and the corresponding body key point information is acquired from the upper body, motion, and lower body features. Three-dimensional modeling is performed by mapping the face key point information into a preset 3D mesh to obtain corresponding polygonal shapes, which are connected together to construct an initial face model. The body key points are likewise mapped into a preset 3D mesh to obtain corresponding polygonal shapes, which are connected together to obtain the initial body model corresponding to the target user, and finally the initial face model and the initial body model are spliced to obtain the virtual human body model corresponding to the target user. Basic information data of the target user is then obtained, including at least one of video data, image data, and text data. If the basic information data is video data or image data, OCR content recognition is performed on it to obtain image content information, semantic understanding is performed on the image content information to obtain text description information, and the content of the text description information is classified according to the preset theme classification model to assign the basic information data to the corresponding theme type.
Meanwhile, the corresponding display background is obtained from the preset database according to the theme type, and the corresponding initial display content is obtained from the basic information data according to the theme type; for example, if the theme type is honor awards, the images and text related to the honor awards are obtained from the basic information data as the initial display content. The display size, display shape, and display color of the display areas in the display background are acquired; the width and height of the initial display content are adjusted according to the display size to obtain first display content; the shape of the first display content is adjusted according to the display shape to obtain second display content; and finally the color scheme of the second display content is adjusted to the display color to obtain the target display content, which can be embedded directly into the display areas in the display background to obtain a virtual human theme space with its display areas filled. The access sequence of the corresponding virtual human theme spaces is determined according to the preset access sequence of the theme types, the virtual human body model is embedded into each virtual human theme space to obtain the target virtual space, and the target virtual space is displayed through virtual reality equipment. This makes personal introduction richer and more diverse, allows others to understand the target user more comprehensively, increases interpersonal trust, and can improve the transaction success rate that such trust brings.
In addition, referring to fig. 8, another embodiment of the present application discloses a virtual space constructing apparatus, including: a first obtaining module 100, configured to obtain a face feature and a body feature of a target user; the modeling module 200 is used for performing three-dimensional modeling according to the human face characteristics and the body characteristics to obtain a virtual human body model; a second obtaining module 300, configured to obtain basic information data of a corresponding target user; the analysis module 400 is configured to classify the basic information data according to a preset topic classification model to obtain a topic type; the extracting module 500 is configured to extract a corresponding display background from a preset database according to the theme type; an embedding module 600, configured to embed, according to a theme type, basic information data into a corresponding display background, so as to generate a virtual human theme space; and the space generation module 700 is configured to embed the virtual human body model into the virtual human subject space, so as to generate a target virtual space.
The virtual space construction apparatus in the embodiments of the present disclosure is configured to execute the virtual space construction method of the above embodiments, and its specific processing procedure is the same as that of the method, so it is not repeated here.
The virtual space construction apparatus provided by the embodiments of the present application constructs a three-dimensional target virtual space and uses the virtual human body model corresponding to the target user to present the user's personal situation. This not only enriches the display of personal information but also increases others' interest when the target user introduces himself or herself, so that others can understand the target user comprehensively, thereby improving interpersonal trust and, in turn, the transaction success rate that such trust brings.
Additionally, another embodiment of the present application discloses a computer device, including:
at least one memory;
at least one processor;
at least one program;
the programs are stored in the memory, and the processor executes the at least one program to implement the virtual space construction method described above. The computer device may be any intelligent terminal such as a mobile phone, a tablet computer, a desktop computer, a personal digital assistant (PDA), or a vehicle-mounted computer.
Referring to fig. 9, fig. 9 illustrates a hardware structure of a computer device according to another embodiment, where the computer device includes:
the processor 901 may be implemented by a general-purpose CPU (central processing unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute a relevant program to implement the technical solution provided by the embodiment of the present disclosure;
the memory 902 may be implemented in a form of a ROM (read only memory), a static storage device, a dynamic storage device, or a RAM (random access memory). The memory 902 may store an operating system and other application programs, and when the technical solution provided by the embodiments of the present disclosure is implemented by software or firmware, the relevant program codes are stored in the memory 902 and called by the processor 901 to execute the virtual space construction method of the embodiments of the present disclosure;
an input/output interface 903 for implementing information input and output;
a communication interface 904, configured to implement communication interaction between the device and another device, where communication may be implemented in a wired manner (e.g., USB, network cable, etc.), or in a wireless manner (e.g., mobile network, WIFI, bluetooth, etc.);
a bus 905 that transfers information between various components of the device (e.g., the processor 901, the memory 902, the input/output interface 903, and the communication interface 904);
wherein the processor 901, the memory 902, the input/output interface 903 and the communication interface 904 enable a communication connection within the device with each other through a bus 905.
The embodiment of the present disclosure also provides a storage medium, which is a computer-readable storage medium, where computer-executable instructions are stored in the computer-readable storage medium, and the computer-executable instructions are used to enable a computer to execute the virtual space construction method.
The memory, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and these remote memories may be connected to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiments described herein are intended to illustrate the technical solutions of the embodiments of the present disclosure more clearly and do not limit those solutions; those skilled in the art will appreciate that, as technology evolves and new application scenarios emerge, the technical solutions provided in the embodiments of the present disclosure are equally applicable to similar technical problems.
Those skilled in the art will appreciate that the solutions shown in Figs. 1 to 7 do not limit the embodiments of the present disclosure, which may include more or fewer steps than those shown, combine certain steps, or use different steps.
The apparatus embodiments described above are merely illustrative; units described as separate components may or may not be physically separate, i.e., they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
One of ordinary skill in the art will appreciate that all or some of the steps of the methods, and the systems and functional modules/units in the devices, disclosed above may be implemented as software, firmware, hardware, or suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the application and in the above-described figures, if any, are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application described herein can be practiced in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements but may include other steps or elements not expressly listed or inherent to such a process, method, article, or apparatus.
It should be understood that in the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: only A, only B, or both A and B, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of the following" and similar expressions refer to any combination of the listed items, including any combination of single or plural items. For example, at least one of a, b, or c may represent: a, b, c, "a and b," "a and c," "b and c," or "a and b and c," where a, b, and c may be single or plural.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is merely a logical division, and an actual implementation may use other divisions; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in another form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part that contributes over the prior art, may be embodied wholly or partly as a software product, which is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The preferred embodiments of the present disclosure have been described above with reference to the accompanying drawings, without thereby limiting the scope of the claims of the embodiments of the present disclosure. Any modifications, equivalent substitutions, and improvements made by those skilled in the art within the scope and spirit of the embodiments of the present disclosure shall fall within the scope of the claims of the embodiments of the present disclosure.

Claims (10)

1. A virtual space construction method is characterized by comprising the following steps:
acquiring the face features and body features of a target user;
performing three-dimensional modeling according to the human face features and the body features to obtain a virtual human body model;
acquiring basic information data of the target user;
classifying the basic information data according to a preset theme classification model to obtain a theme type;
extracting a corresponding display background from a preset database according to the theme type;
embedding the basic information data into the corresponding display background according to the theme type to generate a virtual human theme space;
and embedding the virtual human body model into the virtual human theme space to generate a target virtual space of the target user.
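As a reading aid only, the classification and embedding steps of claim 1 can be sketched in Python; every function name, dictionary key, theme label, and data format below is a hypothetical stand-in, not the patent's actual model or implementation.

```python
# Hypothetical sketch of the claim-1 pipeline (classification through
# avatar embedding). All names and data structures are illustrative.

def classify_theme(basic_info: dict, theme_keywords: dict) -> str:
    """Stand-in for the 'preset theme classification model': choose the
    theme whose keyword set overlaps most with the user's description."""
    words = set(basic_info.get("description", "").lower().split())
    return max(theme_keywords, key=lambda t: len(words & theme_keywords[t]))

def build_target_virtual_space(basic_info: dict, backgrounds: dict,
                               theme_keywords: dict) -> dict:
    theme = classify_theme(basic_info, theme_keywords)   # classify
    background = backgrounds[theme]                      # preset-database lookup
    theme_space = {"background": background,             # embed the data
                   "content": basic_info["description"]}
    return {"theme_space": theme_space,                  # embed the avatar
            "avatar": basic_info["avatar"]}

themes = {"travel": {"beach", "trip", "flight"}, "sports": {"run", "match"}}
backgrounds = {"travel": "beach_scene", "sports": "stadium_scene"}
space = build_target_virtual_space(
    {"description": "my beach trip photos", "avatar": "virtual_human_001"},
    backgrounds, themes)
assert space["theme_space"]["background"] == "beach_scene"
```

The keyword-overlap classifier is only a placeholder for whatever trained model the claim's "preset theme classification model" denotes.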
2. The virtual space construction method according to claim 1, wherein the classifying the basic information data according to a preset topic classification model to obtain a topic type includes:
performing content identification on the basic information data to obtain image content information; the basic information data includes at least one of: video data and image data;
performing semantic analysis on the image content information to obtain character description information;
and classifying the contents of the text description information according to the preset theme classification model so as to determine the corresponding theme type.
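The three-stage chain of claim 2 (content identification, then semantic analysis, then classification) can be mimicked with trivial stand-ins; the lookup table and keyword rule below are invented for illustration and do not represent real recognition or NLP models.

```python
# Trivial stand-ins for claim 2's three stages; not real models.

def recognize_content(media_id: str) -> str:
    # Content identification on video/image data (hypothetical catalog).
    catalog = {"img_001": "a person running on a track"}
    return catalog.get(media_id, "unknown scene")

def semantic_analysis(content: str) -> str:
    # Semantic analysis producing a text description of the image content.
    return f"photo showing {content}"

def classify_description(description: str) -> str:
    # Keyword rule standing in for the preset theme classification model.
    return "sports" if "running" in description else "general"

theme = classify_description(semantic_analysis(recognize_content("img_001")))
assert theme == "sports"
```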
3. The virtual space construction method according to claim 1, wherein the embedding the basic information data into the corresponding display background according to the theme type to generate a virtual human theme space includes:
extracting corresponding initial display content from the basic information data according to the theme type;
obtaining display parameters of a display area in the display background;
adjusting the initial display content according to the display parameters to obtain target display content;
and embedding the target display content into a display area in the display background to obtain the virtual human theme space.
4. The virtual space construction method according to claim 3, wherein the basic information data includes: video data; the extracting of the corresponding initial display content from the basic information data according to the theme type includes:
extracting corresponding cover images from the video data according to the theme type;
performing content identification on the cover image to obtain image description information;
and generating the initial display content according to the cover image and the image description information.
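Claim 4's cover-selection step might look like the following sketch: pick a theme-matching frame from the video data, describe it, and bundle frame and description as the initial display content. The tag-based selection rule and frame format are assumptions, not the patent's method.

```python
# Hypothetical sketch of claim 4; the tag-matching rule is invented.

def extract_cover(frames: list, theme: str) -> dict:
    # Return the first frame tagged with the theme type, if any.
    for frame in frames:
        if theme in frame["tags"]:
            return frame
    return frames[0]  # fall back to the first frame

def describe_cover(frame: dict) -> str:
    # Stand-in for content identification producing image description text.
    return f"frame {frame['id']} tagged {sorted(frame['tags'])}"

frames = [{"id": 0, "tags": {"intro"}}, {"id": 1, "tags": {"travel"}}]
cover = extract_cover(frames, "travel")
initial_content = {"cover": cover["id"], "caption": describe_cover(cover)}
assert initial_content["cover"] == 1
```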
5. The virtual space construction method according to claim 3, wherein the display parameters include: a display size, a display shape, and a display color; the adjusting the initial display content according to the display parameters to obtain target display content includes:
adjusting the size of the initial display content according to the display size to obtain first display content;
adjusting the shape of the first display content according to the display shape to obtain second display content;
and adjusting the color scheme of the second display content according to the display color, so that the display color is used as the color scheme of the second display content, to obtain the target display content.
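The three sequential passes of claim 5 (size, then shape, then color) can be summarized as a chain of transformations; the dict-based content representation below is an assumption made only for illustration.

```python
# Illustrative-only sketch of claim 5's three adjustment passes.

def adjust_display(content: dict, params: dict) -> dict:
    first = {**content, "size": params["display_size"]}    # pass 1: resize
    second = {**first, "shape": params["display_shape"]}   # pass 2: reshape
    target = {**second, "color": params["display_color"]}  # pass 3: recolor
    return target

params = {"display_size": (320, 240),
          "display_shape": "rounded_rect",
          "display_color": "#2e6fdb"}
target = adjust_display({"media": "cover.jpg"}, params)
assert target == {"media": "cover.jpg", "size": (320, 240),
                  "shape": "rounded_rect", "color": "#2e6fdb"}
```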
6. The virtual space construction method according to any one of claims 1 to 5, wherein after the embedding the virtual human body model into the virtual human theme space to generate the target virtual space of the target user, the method further comprises:
and setting the access sequence of the virtual human theme space.
7. The virtual space construction method according to any one of claims 1 to 5, wherein the human face features comprise: head features, facial features, and expression features; the physical characteristics include: upper body features, motion features, and lower body features; the three-dimensional modeling is carried out according to the human face features and the body features to obtain a virtual human body model, and the method comprises the following steps:
acquiring corresponding face key point information according to the head features, the facial features and the expression features;
acquiring corresponding body key point information according to the upper body feature, the action feature and the lower body feature;
performing three-dimensional modeling according to the face key point information to obtain an initial face model;
performing three-dimensional modeling according to the body key point information to obtain an initial body model;
and splicing the initial human face model and the initial body model to obtain the virtual human body model.
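Claim 7's modeling flow (model the face and body from their keypoints separately, then splice the two models) can be illustrated as below; the keypoint format and the "modeling" step are invented placeholders, not real 3D reconstruction.

```python
# Hypothetical illustration of claim 7; not real 3D reconstruction.

def model_from_keypoints(keypoints: list) -> dict:
    # Placeholder for three-dimensional modeling from keypoint information.
    return {"vertices": list(keypoints)}

def splice(face_model: dict, body_model: dict) -> dict:
    # Concatenate both meshes into one virtual human body model.
    return {"vertices": face_model["vertices"] + body_model["vertices"]}

face_kp = [(0.1, 0.2, 0.3), (0.4, 0.5, 0.6)]  # head/facial/expression points
body_kp = [(1.0, 1.1, 1.2)]                   # upper-/lower-body and action points
human_model = splice(model_from_keypoints(face_kp),
                     model_from_keypoints(body_kp))
assert len(human_model["vertices"]) == 3
```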
8. A virtual space construction apparatus, characterized in that the apparatus comprises:
the first acquisition module is used for acquiring the face characteristics and the body characteristics of a target user;
the modeling module is used for carrying out three-dimensional modeling according to the human face characteristics and the body characteristics to obtain a virtual human body model;
the second acquisition module is used for acquiring basic information data corresponding to the target user;
the classification module is used for classifying the basic information data according to a preset theme classification model to obtain a theme type;
the extraction module is used for extracting a corresponding display background from a preset database according to the theme type;
the embedding module is used for embedding the basic information data into the corresponding display background according to the theme type to generate a virtual human theme space;
and the space generation module is used for embedding the virtual human body model into the virtual human theme space to generate a target virtual space of the target user.
9. An electronic device, comprising:
at least one memory;
at least one processor;
at least one program;
wherein the program is stored in the memory, and the processor executes the at least one program to implement: the method of any one of claims 1 to 7.
10. A storage medium that is a computer-readable storage medium having stored thereon computer-executable instructions for causing a computer to perform:
the method of any one of claims 1 to 7.
CN202210439227.0A 2022-04-25 2022-04-25 Virtual space construction method, virtual space construction device, equipment and storage medium Pending CN114821004A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210439227.0A CN114821004A (en) 2022-04-25 2022-04-25 Virtual space construction method, virtual space construction device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210439227.0A CN114821004A (en) 2022-04-25 2022-04-25 Virtual space construction method, virtual space construction device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114821004A true CN114821004A (en) 2022-07-29

Family

ID=82508053

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210439227.0A Pending CN114821004A (en) 2022-04-25 2022-04-25 Virtual space construction method, virtual space construction device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114821004A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115359220A (en) * 2022-08-16 2022-11-18 支付宝(杭州)信息技术有限公司 Virtual image updating method and device of virtual world
CN115359220B (en) * 2022-08-16 2024-05-07 支付宝(杭州)信息技术有限公司 Method and device for updating virtual image of virtual world
CN116246009A (en) * 2022-09-06 2023-06-09 支付宝(杭州)信息技术有限公司 Virtual image processing method and device
CN116246009B (en) * 2022-09-06 2024-04-16 支付宝(杭州)信息技术有限公司 Virtual image processing method and device

Similar Documents

Publication Publication Date Title
CN114821004A (en) Virtual space construction method, virtual space construction device, equipment and storage medium
CN113255052B (en) Home decoration scheme recommendation method and system based on virtual reality and storage medium
CN103916621A (en) Method and device for video communication
EP3408836A1 (en) Crowdshaping realistic 3d avatars with words
CN111275784A (en) Method and device for generating image
CN110458924B (en) Three-dimensional face model establishing method and device and electronic equipment
CN114241558A (en) Model training method, video generation method, device, equipment and medium
CN108491881A (en) Method and apparatus for generating detection model
CN112581567A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN113095206A (en) Virtual anchor generation method and device and terminal equipment
Van Nguyen et al. Reconstruction of 3D digital heritage objects for VR and AR applications
CN111639613A (en) Augmented reality AR special effect generation method and device and electronic equipment
CN114529785A (en) Model training method, video generation method and device, equipment and medium
CN115713585B (en) Texture image reconstruction method, apparatus, computer device and storage medium
CN111638792A (en) AR effect presentation method and device, computer equipment and storage medium
CN116703797A (en) Image fusion method, image fusion system, computer device and storage medium
CN106503174B (en) Scene visualization method and system based on network three-dimensional modeling
CN111738087B (en) Method and device for generating face model of game character
CN112927329A (en) Data processing method and device, electronic equipment and computer readable storage medium
CN114638920A (en) Reception method, reception device, computer device, and storage medium
CN114666307B (en) Conference interaction method, conference interaction device, equipment and storage medium
Liang Art Design Teaching Based on the Multidata Fusion Algorithm and Virtual Simulation Technology
CN115617174B (en) Method for constructing interactive virtual exhibition hall
CN116204167B (en) Method and system for realizing full-flow visual editing Virtual Reality (VR)
Pauls et al. The Depth Estimation of 2D Content: A New Life for Paintings

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination