CN114332374A - Virtual display method, equipment and storage medium - Google Patents

Virtual display method, equipment and storage medium

Info

Publication number
CN114332374A
Authority
CN
China
Prior art keywords
avatar
user data
virtual
updating
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111651926.3A
Other languages
Chinese (zh)
Inventor
陈凯彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen TetrasAI Technology Co Ltd
Original Assignee
Shenzhen TetrasAI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen TetrasAI Technology Co Ltd filed Critical Shenzhen TetrasAI Technology Co Ltd
Priority to CN202111651926.3A priority Critical patent/CN114332374A/en
Publication of CN114332374A publication Critical patent/CN114332374A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application discloses a virtual display method, a virtual display device, and a storage medium. The virtual display method comprises the following steps: acquiring user data of a target object based on a first sensor; acquiring a scene image of the target object based on a second sensor, and generating an avatar based on the scene image; and updating the avatar in the virtual space based on the user data. In this way, the avatar generated from the scene image acquired by the second sensor can be updated with the user data acquired by the first sensor, so that the spatial data collected by the two sensors are combined, the interaction between the scene image and the target object is enriched, and the experience is made more engaging.

Description

Virtual display method, equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a virtual display method, device, and storage medium.
Background
Augmented Reality (AR) is a technology that seamlessly fuses virtual information with the real world. It draws on a wide range of techniques such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing. Computer-generated virtual information such as text, images, three-dimensional models, music, and video is simulated and then applied to the real world, so that the two kinds of information complement each other and the real world is enhanced. Augmented reality technology has been developing for many years, and it is expected that one day it will be used in everyday life and work, bringing convenience to daily life, improving work efficiency, and so on.
AR applications in the prior art all create an AR space with a single sensor or perform human-computer interaction with a single sensor. For example, on a terminal device, the rear camera is mainly used to construct an AR space from the real-time pictures it captures together with related algorithms, while the front camera is mainly used to provide a face space, on which the face is analyzed algorithmically and an AR effect is then superimposed.
Disclosure of Invention
In order to solve the above problems in the prior art, the present application provides a virtual display method, a device, and a storage medium.
In order to solve the technical problems in the prior art, the present application provides a virtual display method, which includes: acquiring user data of a target object based on a first sensor; acquiring a scene image of the target object based on a second sensor, and generating an avatar based on the scene image; and updating the avatar based on the user data.
The virtual display method further comprises the following steps: identifying biological data from the scene image; and generating the avatar based on the biological data.
Therefore, by recognizing biological data directly from the scene image and using it to generate the avatar, avatar generation becomes more convenient and the avatar display effect is improved.
Wherein said updating the avatar based on the user data comprises: extracting first characteristic information of a first preset position from the user data; identifying second characteristic information of a second preset position on the virtual image; fusing the first characteristic information and the second characteristic information to obtain third characteristic information; updating the avatar based on the third feature information.
Therefore, the first characteristic information extracted at the first preset position of the user data can be fused with the second characteristic information identified on the avatar to obtain the third characteristic information, and updating the avatar with the third characteristic information improves the display effect of the avatar.
Wherein the user data comprises: a face key point of a user; said updating said avatar based on said user data, comprising: and updating the virtual image based on the face key points of the user.
Therefore, the virtual image is updated by using the key points of the human face, so that the display effect of the virtual image can be improved.
Wherein updating the avatar based on the face key points of the user comprises: identifying the facial expression type of the user based on the face key points of the user; querying a corresponding avatar ID based on the facial expression type, and displaying the avatar of the avatar ID; and the facial expression type is pre-bound with the avatar ID.
Therefore, updating the avatar with the avatar queried from the face key points enriches the interaction between the scene image and the target object and makes the experience more engaging.
Wherein said updating the avatar based on the user data comprises: identifying face feature information of the target object from the user data; generating expression information of the target object based on the facial feature information; mapping to a face of the avatar based on the expression information.
Therefore, expression information can be extracted from the facial features and mapped to the face of the virtual image, so that the expression synchronization of the target object and the virtual image can be realized, and the user experience is improved.
Wherein the acquiring user data of the target object based on the first sensor comprises: acquiring continuous multiframe images of a target object based on the first sensor; identifying user data of the target object from the continuous multiframe images; said updating said avatar based on said user data, comprising: generating a virtual animation based on the user data of the consecutive multi-frame images to update the avatar.
Therefore, the user data can be acquired from the consecutive multi-frame images so that the avatar is updated with a virtual animation, which enriches the interaction between the scene image and the target object and makes the experience more engaging.
Wherein said updating the avatar based on the user data comprises: responding to a first operation instruction of a user, and selecting a first target virtual image in the scene image; updating the first target avatar based on the user data.
Therefore, by selecting an avatar in the scene image for rendering in response to the first operation instruction of the user, the interaction between the scene image and the target object is enriched and the experience is made more engaging.
Wherein said updating the avatar based on the user data comprises: selecting a second target avatar within the scene image in response to a second operation instruction of the user, wherein the second target avatar includes at least one avatar other than the first target avatar; updating the second target avatar based on the user data.
Therefore, by selectively updating avatars in the scene image, possibly multiple times, in response to the second operation instruction of the user, the interaction between the scene image and the target object is enriched and the experience is made more engaging.
The first sensor is a front camera, the second sensor is a rear camera, and the front camera and the rear camera are arranged on the same terminal.
In order to solve the technical problems in the prior art, the present application provides a virtual reality display device, which includes a processor and a memory; the memory stores a computer program, and the processor is configured to execute the computer program to implement the method described above.
In order to solve the technical problems in the prior art, the present application provides a computer-readable storage medium in which program instructions are stored, and the program instructions, when executed by a processor, implement the method described above.
Compared with the prior art, the virtual display method of the present application comprises the following steps: acquiring user data of a target object based on a first sensor; acquiring a scene image of the target object based on a second sensor, and generating an avatar based on the scene image; and updating the avatar in the virtual space based on the user data. In this way, the avatar generated from the scene image acquired by the second sensor can be updated with the user data acquired by the first sensor, so that the spatial data collected by the two sensors are combined, the interaction between the scene image and the target object is enriched, and the experience is made more engaging.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
FIG. 1 is a schematic flow chart diagram illustrating a virtual display method according to an embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating an embodiment of step S103 in FIG. 1;
FIG. 3 is a schematic flow chart of another embodiment of step S103 in FIG. 1;
FIG. 4 is a schematic flow chart diagram illustrating another embodiment of a virtual display method provided in the present application;
FIG. 5 is a schematic structural diagram of an embodiment of a virtual reality display apparatus provided in the present application;
FIG. 6 is a schematic structural diagram of an embodiment of a virtual reality display device provided in the present application;
FIG. 7 is a schematic structural diagram of an embodiment of a computer storage medium provided in the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be noted that the following examples are only illustrative of the present application, and do not limit the scope of the present application. Likewise, the following examples are only some examples and not all examples of the present application, and all other examples obtained by a person of ordinary skill in the art without any inventive step are within the scope of the present application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Throughout the description of the present application, unless expressly stated or limited otherwise, the terms "mounted," "disposed," "connected," and "coupled" are to be construed broadly and encompass, for example, fixed connections, removable connections, or integral connections; the connections may be mechanical or electrical; and they may be direct connections or connections via an intermediate medium. The specific meanings of the above terms in the present application can be understood by those of ordinary skill in the art according to the specific situation.
The disclosure relates to the field of augmented reality. By acquiring image information of a target object in a real environment, relevant features, states, and attributes of the target object are detected or identified by means of various vision-related algorithms, so as to obtain an AR effect that combines the virtual and the real and matches the specific application. For example, the target object may relate to a face, a limb, a gesture, or an action associated with a human body, or to a marker associated with an object, or to a sand table, display area, or display item associated with a venue or place. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key point extraction and tracking of objects, pose or depth detection of objects, and the like. The specific application may involve not only interactive scenes related to real scenes or articles, such as navigation, explanation, reconstruction, and virtual effect superposition display, but also special-effect processing related to people, such as makeup beautification, body beautification, special effect display, and virtual model display.
The detection or identification processing of the relevant characteristics, states and attributes of the target object can be realized through the convolutional neural network. The convolutional neural network is a network model obtained by performing model training based on a deep learning framework.
Based on the basic technologies, the present application provides a virtual display method, and specifically please refer to fig. 1, where fig. 1 is a schematic flow diagram of an embodiment of the virtual display method provided in the present application. Specifically, the following steps S101 to S103 may be included:
step S101: user data of the target object is acquired based on the first sensor.
The first sensor may be a front camera of a terminal, and the terminal may be an intelligent terminal or a mobile terminal, such as a smartphone, which includes a front camera, a rear camera, and a display screen. The target object may be a person, an animal, or any other object from which user data can be obtained. The user data may be facial data of a person or an animal, such as the contour of the user's face, the user's facial features, the user's facial expressions, and the like. Of course, in other embodiments, the user data is not limited thereto, and any data related to the target object that can be obtained by the first sensor may serve as user data.
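For illustration, the following minimal sketch shows one possible realization of step S101; it assumes the front camera is exposed as video device 0 and uses an OpenCV Haar-cascade face detector as a stand-in for the key-point extraction algorithm, neither of which is prescribed by the present application.

```python
# Minimal sketch of step S101: grab one frame from the (assumed) front camera
# and detect coarse face data. The Haar cascade stands in for the unspecified
# user-data extraction algorithm.
import cv2

def acquire_user_data(camera_index: int = 0) -> dict:
    """Capture one frame from the front camera and return coarse face data."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("front camera frame could not be read")

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Each detection is a bounding box; a real system would go on to extract
    # key points, contours, and expressions inside this region.
    return {"frame": frame, "face_boxes": [tuple(box) for box in faces]}
```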
Step S102: and acquiring a scene image of the target object based on the second sensor, and generating an avatar based on the scene image.
The second sensor may be a rear camera, the front camera and the rear camera are arranged on the same terminal, and the terminal supports opening the front camera and the rear camera simultaneously.
The target object and the terminal provided with the second sensor are located in the same space; that is, the scene image acquired by the second sensor captures the scene environment of the area where the target object is located. The scene image may be an image captured by the second sensor based on the scene environment in which the target object is located. The number of images may be multiple, for example 10, 20, or 50; the more images are used, the more accurately the scene is reconstructed, but the longer the generation takes. The multiple images may be used to generate three-dimensional map data.
In other embodiments, the second sensor mounted on the terminal may also be a sensor such as a lidar; the three-dimensional point cloud of the area where the target object is located can be acquired by such a sensor, and the three-dimensional map data can then be constructed from the point cloud.
The avatar may be specific virtual content in the scene image, such as a person, animal or other object that may be used as the avatar. After the avatar is generated, the avatar may be directly displayed in the scene image.
In an embodiment, after step S102, the virtual display method may further include: and identifying biological data from the scene image, and generating an avatar in the scene image based on the biological data.
The scene image is obtained by shooting the scene environment where the target object is located. In this embodiment, the captured image includes biological data and environment data: a scene space is formed from the environment data, and an avatar is generated in that scene space using the biological data, as sketched below. For example, the scene image may include environment data such as a building, a road, and trees on both sides of the road, and biological data such as pedestrians and animals on the road. The acquired environment data is used to form the scene space, and the acquired biological data is used to generate an avatar in the scene space.
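A minimal sketch of this separation is given below; the detector interface, the label names, and the Avatar data structure are illustrative assumptions rather than details fixed by the application.

```python
# Illustrative sketch: split a scene image into environment data and biological
# data (e.g. pedestrians, animals) and spawn an avatar for each biological
# detection. detector(scene_image) is assumed to yield labelled boxes such as
# ("pedestrian", (x, y, w, h)) or ("building", (x, y, w, h)).
from dataclasses import dataclass, field

@dataclass
class Avatar:
    avatar_id: int
    position: tuple                      # (x, y) placement inside the scene space
    features: dict = field(default_factory=dict)

def generate_avatars(scene_image, detector) -> list[Avatar]:
    avatars = []
    for label, (x, y, w, h) in detector(scene_image):
        if label in ("pedestrian", "animal"):           # biological data
            avatars.append(Avatar(avatar_id=len(avatars) + 1,
                                  position=(x + w // 2, y + h)))
        # everything else (buildings, roads, trees) forms the scene space
    return avatars
```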
Step S103: the avatar is updated based on the user data.
The avatar is updated with the collected user data so as to change its display state. For example, when the user data is the extracted facial expression of the target object, the display state of the avatar is changed by applying the extracted facial expression at a preset position of the avatar, so that the preset position of the avatar is overlaid with the user's expression.
Through this embodiment, the avatar generated from the scene image acquired by the second sensor can be updated with the user data acquired by the first sensor, so that the spatial data collected by the two sensors are combined, the interaction between the scene image and the target object is enriched, and the experience is made more engaging.
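A compact end-to-end sketch of steps S101 to S103 follows; the helper callables passed in (acquire_user_data, acquire_scene_image, generate_avatars, update_avatar) are assumed names standing in for the concrete implementations discussed elsewhere in this description.

```python
# Minimal sketch of one iteration of the method, with the per-step helpers
# injected as callables so the flow itself stays self-contained.
def virtual_display_step(acquire_user_data, acquire_scene_image,
                         generate_avatars, update_avatar, detector):
    user_data = acquire_user_data()                      # step S101: first sensor (front camera)
    scene_image = acquire_scene_image()                  # step S102: second sensor (rear camera)
    avatars = generate_avatars(scene_image, detector)    # step S102: avatar generation
    for avatar in avatars:                               # step S103: update with the user data
        update_avatar(avatar, user_data)
    return scene_image, avatars
```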
In one embodiment, step S103 includes: in response to a first operation instruction of a user, a first target avatar within the scene image is selected, and the first target avatar is updated based on user data.
The first operation instruction may be voice information of the user, touch information of the user, or a control signal input by the user. In this embodiment, one or more avatars exist in the scene image. The user may issue a first operation instruction before the avatar is updated with the user data; after the device receives the first operation instruction, one or more avatars in the scene image are selected as the first target avatar, and the first target avatar is updated with the user data.
In another embodiment, step S103 further comprises: and in response to a second operation instruction of the user, selecting a second target avatar within the scene image, and updating the second target avatar in the scene image based on the user data, wherein the second target avatar includes at least one avatar other than the first target avatar.
The second operation instruction may be voice information of the user, touch information of the user, or a control signal input by the user. In this embodiment, a plurality of avatars exist in the scene image, and in addition to the selected first target avatar, other avatars that have not been updated also exist in the scene image. In this case, before the avatar is updated with the user data, the user may issue a second operation instruction; after receiving it, the apparatus selects one or more avatars in the scene image as the second target avatar and updates the second target avatar with the user data. The second target avatar may also include the first target avatar that has already been rendered.
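A minimal sketch of this selective updating is given below; reducing an operation instruction to a set of selected avatar IDs is an assumption, since the application leaves the form of the instruction (voice, touch, or other control signal) open.

```python
# Illustrative sketch: update only the avatars that an operation instruction
# selected. How the instruction is parsed into selected_ids is assumed here.
def select_and_update(avatars, selected_ids, user_data, update_avatar):
    targets = [a for a in avatars if a.avatar_id in selected_ids]
    for avatar in targets:
        update_avatar(avatar, user_data)      # e.g. overlay the user's expression
    return targets

# A first instruction might select avatar 1, and a later second instruction
# might select avatars 2 and 3 (optionally including avatar 1 again):
# select_and_update(avatars, {1}, user_data, update_avatar)
# select_and_update(avatars, {2, 3}, user_data, update_avatar)
```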
Through this embodiment, the plurality of avatars in the scene image acquired by the second sensor can be updated separately with the user data acquired by the first sensor, so that the spatial data collected by the two sensors are combined, the interaction between the scene image and the target object is enriched, and the experience is made more engaging.
In an embodiment, the user data includes face key points of the user, and the step S103 further includes: and updating the virtual image based on the face key points of the user.
The face key points may be a plurality of feature points on the face, through which feature data of the face can be identified. For example, the mouth, nose, eyes, ears, or eyebrows on the user's face can be identified by using the face key points; or the face key points can be used to identify the face contour, which reflects the user's face shape, such as a round, oval, or narrow pointed (melon-seed) face shape, as in the sketch below. In this way, when the recognized face feature data are facial features or a contour, they can be displayed on the avatar; in addition, when the recognized face feature data are facial features, the facial expression can be recognized from the extracted facial features so as to display the user's expression information on the avatar.
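As an illustration of recognizing the face shape from contour key points, the sketch below uses a simple width-to-height ratio heuristic; the thresholds and category names are assumptions made for the example, not values given in the application.

```python
# Illustrative sketch only: classify the user's face shape from contour key
# points with a width-to-height ratio heuristic (thresholds are assumed).
def classify_face_shape(contour_points) -> str:
    xs = [x for x, _ in contour_points]
    ys = [y for _, y in contour_points]
    width, height = max(xs) - min(xs), max(ys) - min(ys)
    ratio = width / height if height else 0.0
    if ratio > 0.95:
        return "round"
    if ratio > 0.80:
        return "oval"
    return "melon-seed"   # narrow face with a pointed chin
```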
The avatar may be updated using the face feature data obtained from the face key points; of course, in other embodiments, the face key points may also be used to query a corresponding avatar, which is then used to update the avatar generated based on the scene image.
Further, the above steps: updating the avatar based on the face key points of the user, comprising:
identifying the facial expression type of the user based on the face key points of the user; querying a corresponding avatar ID based on the facial expression type, and displaying the avatar of the avatar ID; wherein the facial expression type is bound to the avatar ID in advance.
In this embodiment, the face key points may be a plurality of feature points on the face, for example feature points on the user's mouth, nose, eyes, ears, or eyebrows, and the facial expression type can be identified from the extracted feature points on these facial features; for example, the expression types may include laughing, crying, anger, fear, excitement, and the like. Each facial expression type is bound to an avatar ID in advance. For example, when the expression type identified from the face key points is a smile, the pre-bound avatar ID is avatar 1, and the content of avatar 1 can be set in advance, illustratively as a smiling animal face together with an environmental background corresponding to that smiling face; when another expression type is identified from the face key points, the pre-bound avatar ID is avatar 2, whose content can likewise be set in advance, illustratively as that expression on a different animal together with a corresponding environmental background. Therefore, when the facial expression type of the user is identified and the corresponding avatar ID is queried based on it, the queried avatar can be displayed, which enriches the interaction between the scene image and the target object and makes the experience more engaging.
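A minimal sketch of this pre-bound lookup is shown below; the particular table entries and the display_avatar callback are illustrative assumptions, since the application only requires that expression types are bound to avatar IDs in advance.

```python
# Hedged sketch of the expression-type to avatar-ID binding described above.
EXPRESSION_TO_AVATAR_ID = {
    "smile": 1,   # e.g. avatar 1: a smiling animal face with its background
    "cry":   2,   # e.g. avatar 2: another animal's expression with its background
    "anger": 3,
}

def update_avatar_by_expression(expression_type: str, display_avatar):
    avatar_id = EXPRESSION_TO_AVATAR_ID.get(expression_type)
    if avatar_id is None:
        return None               # unbound expression types leave the avatar unchanged
    display_avatar(avatar_id)     # show the pre-bound avatar for this expression
    return avatar_id
```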
In the above embodiments, the avatar is updated directly with the obtained user data. In other embodiments, feature information of the obtained user data and feature information of the avatar may be fused, and the avatar is then updated with the fused features.
Referring to fig. 2, fig. 2 is a schematic flowchart of an embodiment of step S103 in fig. 1, and specifically, may include the following steps S201 to S204:
step S201: first feature information of a first preset position is extracted from user data.
In this embodiment, the first preset position may be a face of the target object, and the first feature information may be feature information on one or more feature points of the mouth, the nose, the eyes, the ears, and the eyebrows.
Step S202: second characteristic information of a second preset position on the virtual image is identified.
In this embodiment, the avatar may be a virtual human or other virtual creatures, and the second preset position of the avatar may be a human face of the virtual human, and the second feature information of the avatar is also feature information of one or more feature points in the mouth, nose, eyes, ears and eyebrows.
Step S203: and fusing the first characteristic information and the second characteristic information to obtain third characteristic information.
In this embodiment, the first feature information and the second feature information may be fused, and the fused feature information is the third feature information. In one embodiment, when the extracted first feature information covers the mouth, nose, and eyes, and the second feature information covers the mouth, nose, eyes, and eyebrows, the mouth, nose, and eyes in the first feature information are fused with the corresponding mouth, nose, and eyes in the second feature information, and the fused mouth, nose, and eyes serve as the third feature information, while the feature information of the eyebrows is kept unchanged.
Step S204: and updating the virtual image based on the third characteristic information.
After the third characteristic information is determined, the preset position of the avatar is updated with the third characteristic information; for example, the preset position is the second preset position of the avatar, and the avatar updated with the third characteristic information is used as the display state of the avatar.
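The following sketch illustrates steps S201 to S204 under the assumption that each facial part is represented as a feature vector and that fusion is a fixed-weight blend; the application does not prescribe a particular fusion operator.

```python
# Illustrative sketch of per-part feature fusion: parts present in both inputs
# are blended, parts only the avatar has (e.g. eyebrows) stay unchanged.
def fuse_features(first: dict, second: dict, alpha: float = 0.5) -> dict:
    """first: feature vectors from the user data (first feature information);
    second: feature vectors from the avatar (second feature information);
    returns the third feature information."""
    third = dict(second)                       # start from the avatar's own features
    for part, user_vec in first.items():
        if part in second:
            avatar_vec = second[part]
            third[part] = [alpha * u + (1 - alpha) * a
                           for u, a in zip(user_vec, avatar_vec)]
    return third

# Example matching the description: the user contributes mouth/nose/eyes,
# the avatar contributes mouth/nose/eyes/eyebrows; eyebrows stay unchanged.
```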
Referring to fig. 3, fig. 3 is a schematic flowchart of another embodiment of step S103 in fig. 1, and specifically, may include the following steps S301 to S303:
step S301: face feature information of the target object is identified from the user data.
In this embodiment, the user data may include face data of the target object, so as to identify face feature information of the target object from the user data, where the face feature information may include a face contour and facial feature data of the face.
Step S302: and generating expression information of the target object based on the facial feature information.
In this embodiment, information about the recognized target object's eye muscles, facial muscles, mouth muscles, and so on can be extracted from the facial feature information and compared with the corresponding information in a neutral, expressionless state, and various emotional states can be expressed based on the changes between them; for example, the finally generated expression information of the target object may include laughing, crying, anger, fear, excitement, and the like.
Step S303: the face of the avatar is mapped based on the expression information.
In this embodiment, the position of the avatar may be determined before the expression information is mapped to the face of the avatar, and the mapping based on the expression information is performed only when the position of the avatar satisfies a preset position. For example, the scene image may contain rich environmental information, including roads, buildings beside the roads, and plants between the roads and the buildings, and the position of the avatar in the virtual space is determined by the relative positions of the avatar and these environmental objects. Illustratively, the avatar may be located on a road, on a building roof, beside a plant, and so on. After the position of the avatar is determined, it is judged whether the avatar is at the preset position: when the avatar is on the road in the scene image, it is judged to be at the preset position; when the avatar is on a building roof in the scene image, it is judged not to be at the preset position.
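A minimal sketch of this position-gated mapping is given below; the position labels ("road", "roof") and the position_of helper are assumptions used to make the gate concrete.

```python
# Hedged sketch of steps S301-S303 with the position gate described above:
# expression information is mapped onto an avatar's face only if the avatar
# currently sits at one of the preset positions.
PRESET_POSITIONS = {"road"}        # e.g. only avatars on the road are updated

def map_expression_to_avatars(avatars, expression_info, position_of):
    """position_of(avatar) is assumed to return a label such as 'road' or
    'roof' derived from the avatar's location relative to the scene."""
    updated = []
    for avatar in avatars:
        if position_of(avatar) in PRESET_POSITIONS:
            avatar.features["expression"] = expression_info   # map to the face
            updated.append(avatar)
    return updated
```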
Through this embodiment, when a plurality of avatars are present in the scene image, the expression information can be selectively applied to the faces of only some of them.
In the above embodiment, the projection of the static avatar in the scene image may be implemented by a single frame image, and further, the projection of the dynamic virtual animation in the scene image may be implemented by a plurality of consecutive frames of images, so as to improve the interaction effect between the user and the scene image.
Referring to fig. 4, fig. 4 is a schematic flowchart of another embodiment of the virtual display method provided in the present application, and specifically, the method may include the following steps S401 to S404:
step S401: continuous multiframe images of the target object are acquired based on the first sensor.
In this embodiment, the target object may be in a non-static state in the space, and consecutive multi-frame images of the target object in the space are acquired through the first sensor; for example, 10, 20, or 50 consecutive frames of the target object may be acquired, and the more frames are acquired, the longer the finally generated animation.
Step S402: user data of a target object is identified from a plurality of consecutive frame images.
In this embodiment, each of the consecutive multi-frame images contains the target object, and user data of the target object is identified from these images, so that continuously changing user data is obtained; together, the changing user data form an animation of the target object. For example, when the user data is the expression information of the target object, the user data recognized from the consecutive frames form an animation of how the target object's expression changed over that period of time.
Step S403: and acquiring a scene image of the target object based on the second sensor, and generating an avatar based on the scene image.
In this embodiment, step S403 is the same as the embodiment of step S102, and is not described herein again.
Step S404: a virtual animation is generated based on user data of successive frames of images to update the avatar.
In this embodiment, the acquired user data are applied to the avatar in the scene image in the order in which the corresponding images were acquired, so as to render a virtual animation on the avatar. The virtual animation can reproduce how the user data changed over that period of time, which further enriches the scene image and makes the experience more engaging.
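The sketch below illustrates steps S401 to S404 by replaying per-frame user data on the avatar in acquisition order; the frame rate and the render_frame callback are assumptions, as the application does not fix them.

```python
# Hedged sketch of the animation update: user data identified from each of the
# consecutive frames (e.g. expression information) is replayed on the avatar
# in the order the frames were captured.
import time

def play_virtual_animation(avatar, per_frame_user_data, render_frame, fps: float = 25.0):
    for user_data in per_frame_user_data:
        avatar.features["expression"] = user_data   # update the avatar for this frame
        render_frame(avatar)                        # draw the avatar into the scene image
        time.sleep(1.0 / fps)                       # more frames -> longer animation
```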
In this way, the animated user data acquired by the first sensor can be used to update the avatar generated from the scene image acquired by the second sensor, so that an avatar updated with a virtual animation is obtained, the spatial data collected by the two sensors are combined, the interaction between the scene image and the target object is enriched, and the experience is made more engaging.
The virtual display method in this embodiment may be applied to a virtual reality display apparatus. The virtual reality display apparatus of this embodiment may be a server, a mobile device, or a system in which a server and a mobile device cooperate with each other. Accordingly, the parts included in it, such as units, sub-units, modules, and sub-modules, may all be disposed in the server, may all be disposed in the mobile device, or may be disposed in the server and the mobile device respectively.
Further, the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules, for example, software or software modules for providing distributed servers, or as a single software or software module, and is not limited herein.
In order to implement the virtual display method of the above embodiment, the present application provides a virtual reality display apparatus. Referring to fig. 5, fig. 5 is a schematic structural diagram of an embodiment of a virtual reality display device 50 provided in the present application.
Specifically, the virtual reality display device 50 may include: an acquisition module 51, a generation module 52 and an update module 53.
The acquisition module 51 is configured to acquire user data of a target object based on a first sensor, and to acquire an image of the scene where the target object is located based on a second sensor.
The generation module 52 is for generating an avatar based on the scene image.
The updating module 53 is for updating the avatar based on the user data.
According to the above scheme, the avatar generated from the scene image acquired by the second sensor can be updated with the user data acquired by the first sensor, so that the spatial data collected by the two sensors are combined, the interaction between the scene image and the target object is enriched, and the experience is made more engaging.
In an embodiment of the present application, each module in the virtual reality display apparatus 50 shown in fig. 5 may be respectively or entirely combined into one or several units to form the virtual reality display apparatus, or some unit(s) may be further split into multiple sub-units with smaller functions, so that the same operation may be implemented without affecting implementation of technical effects of the embodiment of the present application. The modules are divided based on logic functions, and in practical application, the functions of one module can be realized by a plurality of units, or the functions of a plurality of modules can be realized by one unit. In other embodiments of the present application, the virtual reality display device 50 may also include other units, and in practical applications, these functions may also be implemented by the assistance of other units, and may be implemented by cooperation of multiple units.
The above method is applied to a virtual reality display device. Referring to fig. 6, fig. 6 is a schematic structural diagram of an embodiment of a virtual reality display device 60 provided in the present application; the virtual reality display device 60 of this embodiment includes a processor 61 and a memory 62. The memory 62 stores a computer program, and the processor 61 is configured to execute the computer program to implement the virtual display method described above.
The processor 61 may be an integrated circuit chip having signal processing capability. The processor 61 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an embodiment of a computer storage medium provided in the present application, where the computer storage medium 70 of the present embodiment includes a computer program 71 that can be executed to implement the virtual display method.
The computer storage medium 70 of this embodiment may be any medium that can store program instructions, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, or it may be a server that stores the program instructions; the server may send the stored program instructions to other devices for execution, or may execute the stored program instructions itself.
In addition, if the above functions are implemented in the form of software functions and sold or used as a standalone product, the functions may be stored in a storage medium readable by a mobile terminal, that is, the present application also provides a storage device storing program data, which can be executed to implement the method of the above embodiments, the storage device may be, for example, a usb disk, an optical disk, a server, etc. That is, the present application may be embodied as a software product, which includes several instructions for causing an intelligent terminal to perform all or part of the steps of the methods described in the embodiments.
In the description of the present application, reference to the description of the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions that can be viewed as implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device (e.g., a personal computer, server, network device, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions). For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (12)

1. A virtual display method, comprising:
acquiring user data of a target object based on a first sensor;
acquiring a scene image of a target object based on a second sensor, and generating an avatar based on the scene image;
updating the avatar based on the user data.
2. The virtual display method according to claim 1, wherein the generating an avatar based on the scene image further comprises:
identifying biological data from the scene image;
generating the avatar based on the biometric data.
3. The virtual display method of claim 1, wherein said updating the avatar based on the user data comprises:
extracting first characteristic information of a first preset position from the user data;
identifying second characteristic information of a second preset position on the virtual image;
fusing the first characteristic information and the second characteristic information to obtain third characteristic information;
updating the avatar based on the third feature information.
4. The virtual display method of claim 1, wherein the user data comprises: a face key point of a user; said updating said avatar based on said user data, comprising:
and updating the virtual image based on the face key points of the user.
5. The virtual display method according to claim 4, wherein said updating the avatar based on the face key points of the user comprises:
identifying the facial expression type of the user based on the facial key points of the user;
inquiring a corresponding virtual image ID based on the facial expression type, and displaying the virtual image of the virtual image ID; and the facial expression type is pre-bound with the virtual image ID.
6. The virtual display method of claim 1, wherein said updating the avatar based on the user data comprises:
identifying face feature information of the target object from the user data;
generating expression information of the target object based on the facial feature information;
mapping to a face of the avatar based on the expression information.
7. The virtual display method of claim 1, wherein the obtaining user data of the target object based on the first sensor comprises:
acquiring continuous multiframe images of a target object based on the first sensor;
identifying user data of the target object from the continuous multiframe images;
said updating said avatar based on said user data, comprising:
generating a virtual animation based on the user data of the consecutive multi-frame images to update the avatar.
8. The virtual display method of claim 1, wherein said updating the avatar based on the user data comprises:
responding to a first operation instruction of a user, and selecting a first target virtual image in the scene image;
updating the first target avatar based on the user data.
9. The virtual display method of claim 8, wherein said updating the avatar based on the user data comprises:
selecting a second target avatar within the scene image in response to a second operation instruction of the user, wherein the second target avatar includes at least one avatar other than the first target avatar;
updating the second target avatar based on the user data.
10. The virtual display method according to any one of claims 1 to 9,
the first sensor is a front camera, the second sensor is a rear camera, and the front camera and the rear camera are arranged on the same terminal.
11. A virtual reality display device, comprising: a processor and a memory, the memory having stored therein a computer program for execution by the processor to implement the method of any of claims 1 to 9.
12. A computer readable storage medium having stored thereon program instructions, characterized in that the program instructions, when executed by a processor, implement the method of any of claims 1 to 9.
CN202111651926.3A 2021-12-30 2021-12-30 Virtual display method, equipment and storage medium Pending CN114332374A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111651926.3A CN114332374A (en) 2021-12-30 2021-12-30 Virtual display method, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111651926.3A CN114332374A (en) 2021-12-30 2021-12-30 Virtual display method, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114332374A true CN114332374A (en) 2022-04-12

Family

ID=81019909

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111651926.3A Pending CN114332374A (en) 2021-12-30 2021-12-30 Virtual display method, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114332374A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114758042A (en) * 2022-06-14 2022-07-15 深圳智华科技发展有限公司 Novel virtual simulation engine, virtual simulation method and device
CN115222899A (en) * 2022-09-21 2022-10-21 湖南草根文化传媒有限公司 Virtual digital human generation method, system, computer device and storage medium
CN115374141A (en) * 2022-09-20 2022-11-22 支付宝(杭州)信息技术有限公司 Virtual image updating method and device
CN115909413A (en) * 2022-12-22 2023-04-04 北京百度网讯科技有限公司 Method, apparatus, device and medium for controlling avatar
WO2023241010A1 (en) * 2022-06-14 2023-12-21 Oppo广东移动通信有限公司 Virtual image generation method and apparatus, electronic device and storage medium

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114758042A (en) * 2022-06-14 2022-07-15 深圳智华科技发展有限公司 Novel virtual simulation engine, virtual simulation method and device
CN114758042B (en) * 2022-06-14 2022-09-02 深圳智华科技发展有限公司 Novel virtual simulation engine, virtual simulation method and device
WO2023241010A1 (en) * 2022-06-14 2023-12-21 Oppo广东移动通信有限公司 Virtual image generation method and apparatus, electronic device and storage medium
CN115374141A (en) * 2022-09-20 2022-11-22 支付宝(杭州)信息技术有限公司 Virtual image updating method and device
CN115374141B (en) * 2022-09-20 2024-05-10 支付宝(杭州)信息技术有限公司 Update processing method and device for virtual image
CN115222899A (en) * 2022-09-21 2022-10-21 湖南草根文化传媒有限公司 Virtual digital human generation method, system, computer device and storage medium
CN115909413A (en) * 2022-12-22 2023-04-04 北京百度网讯科技有限公司 Method, apparatus, device and medium for controlling avatar
CN115909413B (en) * 2022-12-22 2023-10-27 北京百度网讯科技有限公司 Method, apparatus, device, and medium for controlling avatar

Similar Documents

Publication Publication Date Title
CN110850983B (en) Virtual object control method and device in video live broadcast and storage medium
US11790589B1 (en) System and method for creating avatars or animated sequences using human body features extracted from a still image
CN108961369B (en) Method and device for generating 3D animation
CN111028330B (en) Three-dimensional expression base generation method, device, equipment and storage medium
CN114332374A (en) Virtual display method, equipment and storage medium
US9922461B2 (en) Reality augmenting method, client device and server
CN109242961A (en) A kind of face modeling method, apparatus, electronic equipment and computer-readable medium
CN111080759B (en) Method and device for realizing split mirror effect and related product
CN111833458B (en) Image display method and device, equipment and computer readable storage medium
CN109035415B (en) Virtual model processing method, device, equipment and computer readable storage medium
CN112308977B (en) Video processing method, video processing device, and storage medium
CN111639613B (en) Augmented reality AR special effect generation method and device and electronic equipment
CN112673400A (en) Avatar animation
CN113822965A (en) Image rendering processing method, device and equipment and computer storage medium
CN113867531A (en) Interaction method, device, equipment and computer readable storage medium
CN114092670A (en) Virtual reality display method, equipment and storage medium
CN117333645A (en) Annular holographic interaction system and equipment thereof
CN115731326A (en) Virtual role generation method and device, computer readable medium and electronic device
CN111650953B (en) Aircraft obstacle avoidance processing method and device, electronic equipment and storage medium
CN112070901A (en) AR scene construction method and device for garden, storage medium and terminal
CN114758041A (en) Virtual object display method and device, electronic equipment and storage medium
EP3385869B1 (en) Method and apparatus for presenting multimedia information
KR20200052812A (en) Activity character creating method in virtual environment
CN117132687B (en) Animation generation method and device and electronic equipment
CN114840089A (en) Augmented reality musical instrument display method, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination