CN113325952A - Method, apparatus, device, medium and product for presenting virtual objects - Google Patents

Method, apparatus, device, medium and product for presenting virtual objects Download PDF

Info

Publication number
CN113325952A
Authority
CN
China
Prior art keywords
control information
virtual object
hand
dimensional virtual
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110585630.XA
Other languages
Chinese (zh)
Inventor
吴准
邬诗雨
杨瑞
张晓东
李士岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110585630.XA priority Critical patent/CN113325952A/en
Publication of CN113325952A publication Critical patent/CN113325952A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0641Shopping interfaces
    • G06Q30/0643Graphical representation of items or shoppers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • General Business, Economics & Management (AREA)
  • Signal Processing (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Strategic Management (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides a method, an apparatus, a device, a medium, and a product for displaying a virtual object, relating to the field of computers and, more particularly, to human-computer interaction technology. A specific implementation is as follows: acquire real object information; perform three-dimensional modeling on the real object information to obtain a three-dimensional virtual object; determine hand control information and/or voice control information for the three-dimensional virtual object; and display the three-dimensional virtual object based on the hand control information and/or the voice control information. This implementation enhances the display effect of the virtual object.

Description

Method, apparatus, device, medium and product for presenting virtual objects
Technical Field
The present disclosure relates to the field of computers, and more particularly, to a method, an apparatus, a device, a medium, and a product for displaying a virtual object.
Background
In live broadcasts hosted by a virtual idol, virtual objects often need to be displayed; for example, in live-streamed selling, virtual commodities must be shown.
At present, a virtual commodity is typically displayed using a two-dimensional map (texture) corresponding to the commodity. This approach only supports static display and does not allow interaction with the virtual goods, so the display effect is poor.
Disclosure of Invention
The present disclosure provides a method, apparatus, device, medium, and product for presenting virtual objects.
According to a first aspect, there is provided a method for presenting a virtual object, comprising: acquiring real object information; carrying out three-dimensional modeling on the real object information to obtain a three-dimensional virtual object; determining hand control information and/or voice control information for the three-dimensional virtual object; and displaying the three-dimensional virtual object based on the hand control information and/or the voice control information.
According to a second aspect, there is provided an apparatus for presenting a virtual object, comprising: an information acquisition unit configured to acquire real object information; the modeling unit is configured to carry out three-dimensional modeling on the real object information to obtain a three-dimensional virtual object; an information determination unit configured to determine hand control information and/or voice control information for a three-dimensional virtual object; a presentation unit configured to present a three-dimensional virtual object based on the hand control information and/or the voice control information.
According to a third aspect, there is provided an electronic device for performing the method for presenting a virtual object, comprising: one or more processors; and a memory storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method for presenting virtual objects as described in any of the above.
According to a fourth aspect, there is provided a non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method for presenting a virtual object as described in any of the above.
According to a fifth aspect, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method for presenting virtual objects as described in any of the above.
According to the technology of the present disclosure, a method for displaying a virtual object is provided that obtains a three-dimensional virtual object by three-dimensionally modeling real object information and displays the three-dimensional virtual object based on hand control information and/or voice control information. This process realizes three-dimensional display of a virtual commodity and further adjusts the three-dimensional display effect through hand control and/or voice control, thereby enabling interaction with the virtual commodity and producing a better display effect.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present disclosure may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for presenting virtual objects, according to the present disclosure;
FIG. 3 is a schematic illustration of an application scenario of a method for presenting virtual objects according to the present disclosure;
FIG. 4 is a flow diagram of another embodiment of a method for presenting virtual objects in accordance with the present disclosure;
FIG. 5 is a schematic block diagram illustrating one embodiment of an apparatus for presenting virtual objects in accordance with the present disclosure;
FIG. 6 is a block diagram of an electronic device for implementing a method for presenting virtual objects of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is an exemplary system architecture diagram according to a first embodiment of the present disclosure, illustrating an exemplary system architecture 100 to which embodiments of the method for presenting virtual objects of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104, for example to receive or send messages. The terminal devices 101, 102, and 103 may be electronic devices such as mobile phones, computers, and tablets, on which various application software may be installed, such as software for virtual live broadcasting. The virtual live broadcasting software can display a virtual idol performing activities in a virtual space, for example, selling goods in a live broadcast.
The terminal devices 101, 102, and 103 may be hardware or software. When they are hardware, they may be various electronic devices, including but not limited to televisions, smartphones, tablet computers, e-book readers, in-vehicle computers, laptop computers, desktop computers, and the like. When they are software, they can be installed in the electronic devices listed above and may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. No specific limitation is imposed here.
The server 105 may be a server providing various services. For example, it may obtain virtual live broadcast information from the virtual live broadcasting software in the terminal devices 101, 102, and 103; this information may include the virtual idol performing the live broadcast, the virtual commodities being sold in the broadcast, the virtual items in the virtual space where the broadcast takes place, and so on. The server 105 may determine from the virtual live broadcast information a virtual object to be displayed, such as a virtual commodity. The server 105 can then obtain the real commodity information corresponding to the virtual commodity, that is, the real object information, and perform modeling based on it to obtain a three-dimensional virtual commodity. The display of the three-dimensional virtual commodity on the terminal devices 101, 102, 103 is controlled based on the gesture control information and/or the voice control information for the three-dimensional virtual commodity.
The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed cluster composed of multiple servers or as a single server. When the server 105 is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. No specific limitation is imposed here.
It should be noted that the method for presenting a virtual object provided by the embodiment of the present disclosure may be executed by the terminal devices 101, 102, and 103, or may be executed by the server 105. Accordingly, the means for presenting the virtual object may be provided in the terminal devices 101, 102, 103, or in the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for presenting virtual objects in accordance with the present disclosure is shown. The method for displaying the virtual object of the embodiment comprises the following steps:
Step 201, acquiring real object information.
In this embodiment, the real object information describes the real-world parameters corresponding to the virtual object to be displayed. It may take various forms, such as pictures (e.g., photographs of the real object), video (e.g., a video of the real object), text (e.g., a textual description of the real object), or numerical data (e.g., the length, width, and height parameters of the real object). An execution subject (such as the server 105 or the terminal devices 101, 102, and 103 in fig. 1) may establish a connection with an electronic device that manages the virtual live broadcast, or may itself be such a device. When connected to the managing device, the execution subject can acquire from it the virtual scene corresponding to the virtual live broadcast; the virtual scene may contain various virtual objects such as virtual idols, virtual commodities, virtual items, and virtual props. Alternatively, when the execution subject is itself the managing device, the virtual scene can be read directly from local storage. The execution subject may then determine the virtual object to be presented in the virtual scene. Specifically, it may identify the category of the virtual scene, such as a virtual live broadcast category or a virtual object configuration category. If the scene belongs to the virtual live broadcast category, the actual parameters of the commodity to be displayed are determined as the real object information; if it belongs to the virtual object configuration category, the real parameters corresponding to the virtual item awaiting configuration are determined as the real object information.
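A minimal sketch of this category-based selection follows, with a hypothetical scene structure (none of these field names come from the disclosure; they only illustrate the data flow of step 201):

```python
# Illustrative sketch of step 201 (all names are hypothetical): select the
# real object information according to the category of the virtual scene.

def acquire_real_object_info(virtual_scene: dict) -> dict:
    """Return the real-world parameters of the object to be displayed."""
    if virtual_scene["category"] == "virtual_live":
        # Live-selling scene: the promoted commodity is the display target.
        return virtual_scene["commodity"]["real_parameters"]
    if virtual_scene["category"] == "object_configuration":
        # Configuration scene: the item awaiting configuration is the target.
        return virtual_scene["pending_item"]["real_parameters"]
    raise ValueError("unknown virtual scene category")

scene = {
    "category": "virtual_live",
    "commodity": {"real_parameters": {"length_cm": 30.0, "width_cm": 20.0,
                                      "height_cm": 10.0, "photos": ["front.jpg"]}},
}
print(acquire_real_object_info(scene))
```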
Step 202, performing three-dimensional modeling on the real object information to obtain a three-dimensional virtual object.
In this embodiment, after the execution subject acquires the real object information, it may perform three-dimensional modeling on that information. Specifically, the real object information may be input into preset three-dimensional modeling application software, which constructs a corresponding three-dimensional virtual object in a virtual three-dimensional space based on the real object information. The real object information and the three-dimensional virtual object are thus different representations of the same object. The preset three-dimensional modeling application software may be off-the-shelf software such as 3ds Max (PC-based three-dimensional animation rendering and production software).
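As a data-flow sketch of this step, the call into the preset modeling software can be pictured as a function from real object information to a 3D asset. The function body and the mesh path below are stubs standing in for the modeling software; none of this is an actual 3ds Max API:

```python
# Hypothetical wrapper around step 202. A real implementation would hand the
# real object information to modeling software; here the call is stubbed so
# only the data flow of the method is visible.

from dataclasses import dataclass

@dataclass
class ThreeDVirtualObject:
    mesh_path: str    # where the generated mesh would be stored
    dimensions: dict  # real-world dimensions carried over from the source

def build_three_d_object(real_object_info: dict) -> ThreeDVirtualObject:
    """Construct a three-dimensional virtual object from real object information."""
    mesh_path = f"/models/{real_object_info['name']}.fbx"  # placeholder path
    return ThreeDVirtualObject(mesh_path=mesh_path,
                               dimensions=real_object_info["dimensions"])

obj = build_three_d_object({"name": "sneaker",
                            "dimensions": {"length_cm": 30, "width_cm": 12,
                                           "height_cm": 11}})
print(obj)
```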
Step 203, determining hand control information and/or voice control information for the three-dimensional virtual object.
In this embodiment, the voice control information is speech that controls how the three-dimensional virtual object is displayed, and the hand control information is a hand motion that controls how the three-dimensional virtual object is displayed; the latter may include, but is not limited to, hand movements, hand gestures, and the like. Specifically, the voice control information may be speech uttered by a virtual idol in the virtual scene, or speech uttered by the person controlling the virtual idol. For example, during virtual live broadcasting, a performer (the person controlling the virtual idol) usually produces designated motions and expressions; motion-capture and face-capture devices capture these motions and expressions, and the virtual idol is driven to present the corresponding motions and expressions based on the captured parameters. The performer's speech can also be used: recording equipment captures the performer's voice, and sound-source synthesis technology converts it into the voice of the virtual idol, so that the virtual idol speaks as the performer speaks. The performer's speech can then serve as the voice control information. Similarly, the motion-capture device may capture the performer's hand motion and feed the corresponding parameters back to the virtual idol, so that the virtual idol's hand moves accordingly; in this case the hand control information may be the performer's hand motion or the virtual idol's hand motion, which this embodiment does not limit. Optionally, the execution subject may manage multiple three-dimensional virtual objects and, for each one, determine the corresponding voice control information and/or hand control information, thereby displaying several three-dimensional virtual objects in parallel. Further alternatively, the hand control information may be determined from hand-motion images captured by an image capture device.
Step 204, displaying the three-dimensional virtual object based on the hand control information and/or the voice control information.
In this embodiment, the execution subject may store in advance the correspondence between hand control information and/or voice control information and display effects. After the hand control information and/or voice control information is acquired, the corresponding display effect can be determined based on this correspondence, and the three-dimensional virtual object is controlled to be displayed accordingly. The display effect may include, but is not limited to, combinations of effects such as a display duration, a display animation, a display motion trajectory, a display angle change trajectory, and display special effects.
Optionally, displaying the three-dimensional virtual object based on the hand control information and/or the voice control information may include: determining a display motion trajectory, a display angle change trajectory, and/or a first display animation of the three-dimensional virtual object based on the hand control information; determining a display duration and/or a second display animation of the three-dimensional virtual object based on the voice control information; and controlling the three-dimensional virtual object to be displayed according to the display motion trajectory, the display angle change trajectory, the first display animation, the display duration, and/or the second display animation. If the hand control information includes the motion trajectory of the hand, a correspondence is established between the initial coordinates of the hand and the initial coordinates of the three-dimensional virtual object, and the hand trajectory starting from the hand's initial coordinates is mapped into a display motion trajectory starting from the object's initial coordinates. If the hand control information includes the angle change trajectory of the hand, the angle change amplitude of the three-dimensional virtual object is determined from the angle change amplitude of the hand, and an angle change trajectory is generated from that amplitude and the object's initial angle. The execution subject performs speech recognition on the voice control information to obtain the display duration and/or display animation indicated by the speech. The first display animation may be stored in advance in correspondence with the relevant hand control information, and the second display animation in correspondence with the relevant voice control information. When the three-dimensional virtual object is displayed, the first and second display animations can be superimposed. For example, if the first display animation is a rotating display and the second display animation is a camera push from far to near, the two can be superimposed into a rotating display with the camera pushing in from far to near, and the three-dimensional virtual object is displayed with this combined effect.
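A sketch of the coordinate-correspondence mapping just described, assuming a trajectory is represented as a list of (x, y, z) points (this representation is an assumption made for illustration, not prescribed by the disclosure):

```python
# Hypothetical sketch: map a hand motion trajectory onto a display motion
# trajectory by pairing the hand's initial coordinates with the object's
# initial coordinates, then replaying the hand's displacements.

def map_hand_trajectory(hand_traj, object_origin):
    """Replay the hand's displacements starting from the object's origin."""
    hx0, hy0, hz0 = hand_traj[0]   # initial hand coordinates
    ox, oy, oz = object_origin     # initial object coordinates
    return [(ox + x - hx0, oy + y - hy0, oz + z - hz0)
            for x, y, z in hand_traj]

hand_path = [(0.10, 0.20, 0.00), (0.20, 0.20, 0.00), (0.30, 0.30, 0.10)]
print(map_hand_trajectory(hand_path, object_origin=(5.0, 1.0, 0.0)))
# The first display point equals the object's origin; later points follow the hand.
```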
In some optional implementations of this embodiment, presenting the three-dimensional virtual object based on the hand control information and/or the voice control information includes: acquiring a preset display duration; and controlling the three-dimensional virtual object to be displayed for the preset display duration based on the hand control information and/or the voice control information.
In this embodiment, the execution subject may also be configured with a preset display duration, such as 3 seconds. When the display of the three-dimensional virtual object is controlled, it can follow this preset duration; for example, the three-dimensional virtual object is displayed with the corresponding display effect for 3 seconds.
For example, when a commodity is promoted in a virtual live broadcast, a worker can wear devices such as hand-capture, face-capture, and recording equipment to drive the virtual anchor to produce the corresponding speech and motions. When the virtual anchor displays the commodity, the commodity can be three-dimensionally modeled to obtain a three-dimensional virtual commodity, and the worker can simultaneously issue the corresponding hand control information and voice control information to display it. For example, when the worker says something like "let everyone see the product from all sides," the three-dimensional virtual commodity can be controlled to rotate as it is displayed. If the commodity is clothing, then to better show how it looks when worn, a three-dimensional virtual model can be obtained by modeling a real model, and the three-dimensional virtual commodity can be displayed fitted onto the three-dimensional virtual model. Here too the worker can issue the corresponding hand control information and voice control information; for example, when the worker says something like "let everyone see how it looks on," the three-dimensional virtual model can be shown in close-up. The object to be displayed and the display mode can be determined by combining the speech recognition result of the voice control information with the hand recognition result of the hand control information.
With continued reference to fig. 3, a schematic diagram of one application scenario of the method for presenting virtual objects according to the present disclosure is shown. In the application scenario of fig. 3, the execution subject may first acquire a virtual scene 301, in which a virtual idol 302 is conducting a virtual live broadcast to promote virtual goods. The execution subject may first obtain real commodity information and derive the three-dimensional virtual commodity 304 from it using preset three-dimensional modeling software. Thereafter, the execution subject may detect the speech uttered by the virtual idol 302 in the virtual scene 301 and the hand motion of the virtual idol 302. In fig. 3, the speech uttered by the virtual idol 302 is the voice control information 303, namely "let everyone see the product from 360 degrees," while the motion trajectory and gestures of the virtual idol 302's hand are the hand control information. In the application scenario of fig. 3, only the voice control information 303 is detected and no hand control information is detected; the execution subject therefore displays the three-dimensional virtual commodity 304 based on the voice control information 303, i.e., controls the three-dimensional virtual commodity 304 to rotate 360 degrees, so that users see it from every angle, making the display more comprehensive and effective.
The method for displaying a virtual object provided by the above embodiment of the present disclosure obtains a three-dimensional virtual object by three-dimensionally modeling real object information, and displays the three-dimensional virtual object based on hand control information and/or voice control information. This process realizes three-dimensional display of a virtual commodity and further adjusts the three-dimensional display effect through hand control and/or voice control, thereby enabling interaction with the virtual commodity and producing a better display effect.
With continued reference to FIG. 4, a flow 400 of another embodiment of a method for presenting virtual objects in accordance with the present disclosure is shown. As shown in fig. 4, the method for presenting a virtual object of the present embodiment may include the following steps:
step 401, obtaining real object information.
Step 402, performing three-dimensional modeling on the real object information to obtain a three-dimensional virtual object.
In this embodiment, for detailed descriptions of steps 401 to 402, refer to the detailed descriptions of steps 201 to 202, which are not repeated here.
Step 403, acquiring a hand-capture device.
In this embodiment, the hand-capture device may take the form of a glove that monitors the movement of individual fingers. The execution subject can connect to the hand-capture device; the connection mode is not limited in this embodiment. For example, a connection may be established with the hand-capture device via Bluetooth: the execution subject determines the device identifier of the hand-capture device from the list produced by a Bluetooth scan and connects to the device based on that identifier, thereby acquiring the hand-capture device.
Step 404, establishing a binding relationship between the hand-capture device and the three-dimensional virtual object.
In this embodiment, after acquiring the hand-capture device, the execution subject may associate and bind the hand-capture device with the three-dimensional virtual object, that is, establish a binding relationship between the two. Specifically, the execution subject may call preset application software to establish the binding relationship. Since the hand-capture device also controls the hand motions of the virtual idol, the binding relationship between the hand-capture device and the three-dimensional virtual object yields a correspondence between the virtual idol's hand motions and the display effects of the three-dimensional virtual object, improving the interaction between the virtual idol and the virtual object. Here, the preset application software binds the virtual object to the palm skeleton of the virtual idol, so that, visually, the virtual object moves or rotates with the virtual idol's hand.
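The binding relationship itself can be as simple as a registry keyed by device identifier; a minimal sketch (the registry and all identifiers below are illustrative assumptions, not structures named in the disclosure):

```python
# Minimal sketch of the binding relationship of step 404.

bindings = {}  # hand-capture device id -> id of the bound 3D virtual object

def bind(device_id: str, object_id: str) -> None:
    """Establish the binding relationship between device and virtual object."""
    bindings[device_id] = object_id

def resolve(device_id: str) -> str:
    """Route hand control information from a device to its bound object."""
    return bindings[device_id]

bind("glove-01", "virtual-commodity-42")
print(resolve("glove-01"))  # -> virtual-commodity-42
```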
Step 405, determining, based on the binding relationship, the hand control information for the three-dimensional virtual object transmitted by the hand-capture device.
In this embodiment, after establishing the binding relationship between the hand-capture device and the three-dimensional virtual object, the execution subject may determine, based on that relationship, which hand-capture device corresponds to the three-dimensional virtual object, and receive the hand control information transmitted by that device as the hand control information for the object. The hand-capture device may detect the motion amplitudes of the individual fingers and generate hand control information from them: for example, a gesture formed by the fingers is detected and used as hand control information, or the translation or rotation of the hand is detected and the translation or rotation information is used as hand control information.
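As an illustration of how raw capture-device data could become hand control information (the frame format and the thresholds below are assumptions; an actual device would expose its own SDK):

```python
# Illustrative conversion of raw hand-capture frames into hand control
# information: classify the span of frames as a gesture, rotation, or translation.

def frames_to_control_info(frames: list) -> dict:
    start, end = frames[0], frames[-1]
    dx = end["wrist_pos"][0] - start["wrist_pos"][0]      # lateral wrist motion
    droll = end["wrist_roll"] - start["wrist_roll"]       # wrist roll in degrees
    if all(flexion < 0.2 for flexion in end["fingers"]):  # all fingers extended
        return {"type": "gesture", "gesture": "open_palm"}
    if abs(droll) > 15.0:
        return {"type": "rotation", "angle_deg": droll}
    return {"type": "translation", "distance": dx}

frames = [{"wrist_pos": (0.0, 0.0, 0.0), "wrist_roll": 0.0, "fingers": [0.8] * 5},
          {"wrist_pos": (0.3, 0.0, 0.0), "wrist_roll": 2.0, "fingers": [0.8] * 5}]
print(frames_to_control_info(frames))  # -> translation of 0.3 along x
```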
Step 406, determining hand control information and/or voice control information for the three-dimensional virtual object; the hand control information includes at least hand translation information, hand rotation information, and/or gesture control information.
In this embodiment, in addition to the hand control information for the three-dimensional virtual object transmitted by the hand-capture device, the execution subject may capture hand images with a camera device to obtain hand control information. In this step, the determined hand control information may come from the hand-capture device alone, or from combining the device's output with the hand images captured by the camera; this embodiment does not limit the source. Preferably, the gesture control information is determined using the hand-capture device, while the hand translation information and hand rotation information are determined using the camera device.
Step 407, determining a display translation trajectory of the three-dimensional virtual object based on the hand translation information, and/or determining a display rotation trajectory of the three-dimensional virtual object based on the hand rotation information.
In this embodiment, the hand translation information consists of parameters describing the translation of the hand of the virtual idol or of a designated person, which may include, but are not limited to, a translation distance and a translation direction. The hand rotation information likewise consists of parameters describing the rotation of such a hand, which may include, but are not limited to, a rotation angle and a rotation direction. The execution subject can determine a display translation trajectory of the three-dimensional virtual object from the translation distance and translation direction, and a display rotation trajectory from the rotation angle and rotation direction. The display translation trajectory is a trajectory that moves from the current position of the three-dimensional virtual object, along the translation direction, by a distance matching the translation distance. The display rotation trajectory is a trajectory that rotates from the current angle of the three-dimensional virtual object, along the rotation direction, by an angle matching the rotation angle.
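A sketch of how the two trajectories might be generated (the linear interpolation, the step count, and the sign convention for "clockwise" are assumptions made for illustration):

```python
# Sketch of step 407: build a display translation trajectory from a translation
# distance and direction, and a display rotation trajectory from a rotation
# angle and direction.

import numpy as np

def translation_track(current_pos, direction, distance, steps=30):
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)                 # unit direction of the hand motion
    start = np.asarray(current_pos, dtype=float)
    return [start + d * distance * t / steps for t in range(steps + 1)]

def rotation_track(current_angle_deg, angle_deg, clockwise=True, steps=30):
    sign = -1.0 if clockwise else 1.0
    return [current_angle_deg + sign * angle_deg * t / steps
            for t in range(steps + 1)]

track = translation_track((0, 0, 0), direction=(1, 0, 0), distance=0.5)
print(track[0], track[-1])                 # starts at origin, ends 0.5 along x
print(rotation_track(0.0, 90.0)[-1])       # -> -90.0 (clockwise by convention)
```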
Step 408, controlling the three-dimensional virtual object to be displayed based on the display translation trajectory and/or the display rotation trajectory.
In this embodiment, the execution subject may control the three-dimensional virtual object to translate along the display translation trajectory, or to rotate along the display rotation trajectory. Alternatively, the execution subject may control the three-dimensional virtual object to translate and rotate simultaneously along the display translation trajectory and the display rotation trajectory.
Step 409, determining a display animation effect corresponding to the gesture control information, and controlling the three-dimensional virtual object to be displayed according to the display animation effect.
In this embodiment, the execution subject may also store in advance different display animation effects corresponding to different gestures. After the gesture control information is obtained, the display animation effect corresponding to its gesture can be determined, and the three-dimensional virtual object is controlled to be displayed with that effect. The pre-stored gestures may include, but are not limited to, a thumbs-up gesture, an OK gesture, a heart gesture, an open-palm gesture, a fist gesture, a victory gesture, and the like, and the animation effects may include, but are not limited to, an object rotation animation, an object appearance animation, an object disappearance animation, and the like; this embodiment does not limit them. For example, the open-palm gesture may correspond simultaneously to an object rotation animation, an object disappearance animation, and an object appearance animation, the combined effect being that the three-dimensional virtual object disappears from the hand position and reappears, rotating, in mid-air.
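The gesture-to-animation correspondence described above amounts to a lookup table; a hedged sketch (the table contents mirror the examples above but are otherwise illustrative assumptions):

```python
# Sketch of step 409: a pre-stored mapping from recognized gestures to display
# animation effects.

GESTURE_ANIMATIONS = {
    "thumbs_up": ["object_rotation"],
    "ok":        ["object_appearance"],
    "open_palm": ["object_disappearance", "object_appearance", "object_rotation"],
    "fist":      ["object_disappearance"],
}

def animations_for(gesture_control_info: dict) -> list:
    """Look up the display animation effects for a recognized gesture."""
    return GESTURE_ANIMATIONS.get(gesture_control_info["gesture"], [])

print(animations_for({"gesture": "open_palm"}))
```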
Step 410, determining a display keyword in the voice control information; and displaying the three-dimensional virtual object based on the display parameters corresponding to the display keywords.
In this embodiment, the execution subject may perform speech recognition on the voice control information to obtain the display keyword, and then determine the display parameters corresponding to that keyword; the display parameters may include, but are not limited to, a display duration, camera parameters for the display shot, display action parameters, and the like. For example, if the voice control information is "let everyone see the product at 360 degrees," the display keyword can be determined to be "360 degrees." The execution subject then retrieves the pre-stored display parameters corresponding to this keyword, such as a display duration of 3 seconds, a camera distance below a threshold, and a preset 360-degree rotation animation; the camera distance is then reduced, the three-dimensional virtual object is displayed with the preset 360-degree rotation animation, and the display stops after 3 seconds.
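A minimal sketch of this keyword lookup, under the assumption that speech recognition has already produced a transcript (the keyword table and the parameter values are illustrative, not taken from the disclosure):

```python
# Sketch of step 410: find a display keyword in the recognized speech and look
# up its display parameters (duration, camera setting, animation).

DISPLAY_PARAMS = {
    "360 degrees": {"duration_s": 3, "camera": "close_up", "animation": "rotate_360"},
    "close-up":    {"duration_s": 2, "camera": "close_up", "animation": "zoom_in"},
}

def params_from_speech(transcript: str):
    """Return the first matching display keyword and its parameters."""
    for keyword, params in DISPLAY_PARAMS.items():
        if keyword in transcript:
            return keyword, params
    return None, None

print(params_from_speech("let everyone see the product at 360 degrees"))
```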
The method for presenting a virtual object provided by the above embodiments of the present disclosure can additionally control the translation or rotation of the three-dimensional virtual object based on the translation or rotation of the hand, set the display effect of the three-dimensional virtual object based on the display animation corresponding to a gesture, and display the three-dimensional virtual object based on the display parameters corresponding to the display keywords of the voice control information. This further enriches both the display effects and the modes of human-computer interaction, yielding a better display. In addition, a binding relationship can be established between the hand-capture device and the three-dimensional virtual object to determine the hand control information, improving the accuracy with which it is acquired.
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for presenting a virtual object, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be specifically applied to various servers.
As shown in fig. 5, the apparatus 500 for presenting a virtual object of the present embodiment includes: the information display device comprises an information acquisition unit 501, a modeling unit 502, an information determination unit 503 and a presentation unit 504.
An information obtaining unit 501 configured to obtain real object information.
The modeling unit 502 is configured to perform three-dimensional modeling on the real object information to obtain a three-dimensional virtual object.
An information determination unit 503 configured to determine hand control information and/or voice control information for the three-dimensional virtual object.
A presentation unit 504 configured to present a three-dimensional virtual object based on the hand control information and/or the voice control information.
In some optional implementations of this embodiment, the hand control information includes at least hand translation information and/or hand rotation information; and the presentation unit 504 is further configured to: determine a display translation trajectory of the three-dimensional virtual object based on the hand translation information, and/or determine a display rotation trajectory of the three-dimensional virtual object based on the hand rotation information; and control the three-dimensional virtual object to be displayed based on the display translation trajectory and/or the display rotation trajectory.
In some optional implementations of this embodiment, the hand control information includes at least gesture control information; and the presentation unit 504 is further configured to: determine a display animation effect corresponding to the gesture control information; and control the three-dimensional virtual object to be displayed according to the display animation effect.
In some optional implementations of this embodiment, the presentation unit 504 is further configured to: determine display keywords in the voice control information; and display the three-dimensional virtual object based on the display parameters corresponding to the display keywords.
In some optional implementations of this embodiment, the presentation unit 504 is further configured to: acquire a preset display duration; and control the three-dimensional virtual object to be displayed for the preset display duration based on the hand control information and/or the voice control information.
In some optional implementations of this embodiment, the apparatus further includes: a device acquisition unit configured to acquire a hand-capture device; a binding unit configured to establish a binding relationship between the hand-capture device and the three-dimensional virtual object; and a control determination unit configured to determine, based on the binding relationship, the hand control information for the three-dimensional virtual object transmitted by the hand-capture device.
It should be understood that the units 501 to 504 of the apparatus 500 for presenting a virtual object correspond respectively to the steps of the method described with reference to fig. 2. Thus, the operations and features described above for the method for presenting virtual objects are equally applicable to the apparatus 500 and the units included therein, and are not described in detail here.
According to embodiments of the present disclosure, there are also provided an electronic device, a readable storage medium, and a computer program product.
FIG. 6 illustrates a block diagram of an electronic device 600 for implementing a method for presenting virtual objects of embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 602 or a computer program loaded from a storage unit 608 into a random access memory (RAM) 603. The RAM 603 can also store various programs and data required for the operation of the device 600. The computing unit 601, the ROM 602, and the RAM 603 are connected to one another via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 601 may be any of a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 601 performs the various methods and processes described above, such as the method for presenting virtual objects. For example, in some embodiments, the method for presenting virtual objects may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the method for presenting virtual objects described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured by any other suitable means (e.g., by means of firmware) to perform the method for presenting virtual objects.
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (15)

1. A method for presenting a virtual object, comprising:
acquiring real object information;
carrying out three-dimensional modeling on the real object information to obtain a three-dimensional virtual object;
determining hand control information and/or voice control information for the three-dimensional virtual object;
and displaying the three-dimensional virtual object based on the hand control information and/or the voice control information.
2. The method of claim 1, wherein the hand control information includes at least hand translation information and/or hand rotation information; and
the presenting the three-dimensional virtual object based on the hand control information and/or the voice control information comprises:
determining a display translation trajectory of the three-dimensional virtual object based on the hand translation information and/or determining a display rotation trajectory of the three-dimensional virtual object based on the hand rotation information;
and controlling the three-dimensional virtual object to be displayed based on the display translation trajectory and/or the display rotation trajectory.
3. The method of claim 1, wherein the hand control information includes at least gesture control information; and
the presenting the three-dimensional virtual object based on the hand control information and/or the voice control information comprises:
determining a display animation effect corresponding to the gesture control information;
and controlling the three-dimensional virtual object to be displayed according to the display animation effect.
4. The method of claim 1, wherein said presenting the three-dimensional virtual object based on the hand control information and/or the voice control information comprises:
determining display keywords in the voice control information;
and displaying the three-dimensional virtual object based on the display parameters corresponding to the display keywords.
5. The method of claim 1, wherein said presenting the three-dimensional virtual object based on the hand control information and/or the voice control information comprises:
acquiring preset display duration;
and controlling the three-dimensional virtual object to be displayed according to the preset display duration based on the hand control information and/or the voice control information.
6. The method of claim 1, wherein the method further comprises:
acquiring a hand-capture device;
establishing a binding relationship between the hand-capture device and the three-dimensional virtual object;
and determining, based on the binding relationship, the hand control information for the three-dimensional virtual object transmitted by the hand-capture device.
7. An apparatus for presenting a virtual object, comprising:
an information acquisition unit configured to acquire real object information;
the modeling unit is configured to carry out three-dimensional modeling on the real object information to obtain a three-dimensional virtual object;
an information determination unit configured to determine hand control information and/or voice control information for the three-dimensional virtual object;
a presentation unit configured to present the three-dimensional virtual object based on the hand control information and/or the voice control information.
8. The apparatus of claim 7, wherein the hand control information comprises at least hand translation information and/or hand rotation information; and
the presentation unit is further configured to:
determining a display translation trajectory of the three-dimensional virtual object based on the hand translation information and/or determining a display rotation trajectory of the three-dimensional virtual object based on the hand rotation information;
and controlling the three-dimensional virtual object to be displayed based on the display translation trajectory and/or the display rotation trajectory.
9. The apparatus of claim 7, wherein the hand control information comprises at least gesture control information; and
the presentation unit is further configured to:
determining a display animation effect corresponding to the gesture control information;
and controlling the three-dimensional virtual object to be displayed according to the display animation effect.
10. The apparatus of claim 7, wherein the presentation unit is further configured to:
determining display keywords in the voice control information;
and displaying the three-dimensional virtual object based on the display parameters corresponding to the display keywords.
11. The apparatus of claim 7, wherein the presentation unit is further configured to:
acquiring preset display duration;
and controlling the three-dimensional virtual object to be displayed according to the preset display duration based on the hand control information and/or the voice control information.
12. The apparatus of claim 7, wherein the apparatus further comprises:
a device acquisition unit configured to acquire a hand-capture device;
a binding unit configured to establish a binding relationship between the hand-capture device and the three-dimensional virtual object;
a control determination unit configured to determine the hand control information for the three-dimensional virtual object transmitted by the hand-capture device based on the binding relationship.
13. An electronic device that performs a method for presenting a virtual object, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-6.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-6.
CN202110585630.XA 2021-05-27 2021-05-27 Method, apparatus, device, medium and product for presenting virtual objects Pending CN113325952A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110585630.XA CN113325952A (en) 2021-05-27 2021-05-27 Method, apparatus, device, medium and product for presenting virtual objects


Publications (1)

Publication Number Publication Date
CN113325952A (en) 2021-08-31

Family

ID=77421811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110585630.XA Pending CN113325952A (en) 2021-05-27 2021-05-27 Method, apparatus, device, medium and product for presenting virtual objects

Country Status (1)

Country Link
CN (1) CN113325952A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150169076A1 (en) * 2013-12-16 2015-06-18 Leap Motion, Inc. User-defined virtual interaction space and manipulation of virtual cameras with vectors
CN108355347A (en) * 2018-03-05 2018-08-03 网易(杭州)网络有限公司 Interaction control method, device, electronic equipment and storage medium
CN109426783A (en) * 2017-08-29 2019-03-05 深圳市掌网科技股份有限公司 Gesture identification method and system based on augmented reality
CN112135160A (en) * 2020-09-24 2020-12-25 广州博冠信息科技有限公司 Virtual object control method and device in live broadcast, storage medium and electronic equipment


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114079800A (en) * 2021-09-18 2022-02-22 深圳市有伴科技有限公司 Virtual character performance method, device, system and computer readable storage medium
CN113961069A (en) * 2021-09-30 2022-01-21 西安交通大学 Augmented reality interaction method and device suitable for real object and storage medium
CN113961069B (en) * 2021-09-30 2024-05-07 西安交通大学 Augmented reality interaction method and device suitable for real objects and storage medium
CN114155605A (en) * 2021-12-03 2022-03-08 北京字跳网络技术有限公司 Control method, control device and computer storage medium
CN114155605B (en) * 2021-12-03 2023-09-15 北京字跳网络技术有限公司 Control method, device and computer storage medium
CN114500429A (en) * 2022-01-24 2022-05-13 北京百度网讯科技有限公司 Control method and device for virtual image in voice room and electronic equipment
CN115937430A (en) * 2022-12-21 2023-04-07 北京百度网讯科技有限公司 Method, device, equipment and medium for displaying virtual object
CN116030191A (en) * 2022-12-21 2023-04-28 北京百度网讯科技有限公司 Method, device, equipment and medium for displaying virtual object
CN115937430B (en) * 2022-12-21 2023-10-10 北京百度网讯科技有限公司 Method, device, equipment and medium for displaying virtual object
CN116030191B (en) * 2022-12-21 2023-11-10 北京百度网讯科技有限公司 Method, device, equipment and medium for displaying virtual object

Similar Documents

Publication Publication Date Title
CN113325952A (en) Method, apparatus, device, medium and product for presenting virtual objects
US11158102B2 (en) Method and apparatus for processing information
US9928662B2 (en) System and method for temporal manipulation in virtual environments
US9785741B2 (en) Immersive virtual telepresence in a smart environment
US11132842B2 (en) Method and system for synchronizing a plurality of augmented reality devices to a virtual reality device
CN112667068A (en) Virtual character driving method, device, equipment and storage medium
CN111563855B (en) Image processing method and device
CN104871214A (en) User interface for augmented reality enabled devices
CN113325954B (en) Method, apparatus, device and medium for processing virtual object
CN111738072A (en) Training method and device of target detection model and electronic equipment
CN109992111B (en) Augmented reality extension method and electronic device
CN113806054A (en) Task processing method and device, electronic equipment and storage medium
CN115047976A (en) Multi-level AR display method and device based on user interaction and electronic equipment
KR20210139203A (en) Commodity guiding method, apparatus, device and storage medium and computer program
CN110520826B (en) Smart command pooling in augmented and/or virtual reality environments
CN111443853B (en) Digital human control method and device
CN113327311B (en) Virtual character-based display method, device, equipment and storage medium
CN114120448B (en) Image processing method and device
CN113327309B (en) Video playing method and device
CN110263743B (en) Method and device for recognizing images
CN113780045A (en) Method and apparatus for training distance prediction model
CN113362472B (en) Article display method, apparatus, device, storage medium and program product
CN113313839B (en) Information display method, device, equipment, storage medium and program product
US20230245643A1 (en) Data processing method
CN114721562B (en) Processing method, apparatus, device, medium and product for digital object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination