CN118012270A - Interaction method, device, storage medium and equipment based on holographic display equipment - Google Patents

Interaction method, device, storage medium and equipment based on holographic display equipment

Info

Publication number
CN118012270A
CN118012270A (application CN202410269980.9A)
Authority
CN
China
Prior art keywords
user
digital object
interaction
action
holographic display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410269980.9A
Other languages
Chinese (zh)
Inventor
唐峥巍
许敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Prism Holographic Technology Co ltd
Original Assignee
Zhejiang Prism Holographic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Prism Holographic Technology Co ltd filed Critical Zhejiang Prism Holographic Technology Co ltd
Priority to CN202410269980.9A priority Critical patent/CN118012270A/en
Publication of CN118012270A publication Critical patent/CN118012270A/en
Pending legal-status Critical Current

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application provides an interaction method, device, storage medium and equipment based on a holographic display device, belonging to the technical field of artificial intelligence. The method comprises: in response to an interaction event triggered by a first user terminal, acquiring a second user identifier of a second user from the interaction event; when an interaction condition is met between a first user identifier corresponding to the first user terminal and the second user identifier, acquiring second digital object data of the second user based on the second user identifier; and presenting a second digital object of the second user on a first holographic display device corresponding to the first user terminal based on the second digital object data. The application can improve interactivity between users.

Description

Interaction method, device, storage medium and equipment based on holographic display equipment
Technical Field
The present application relates to the field of artificial intelligence, and in particular, to an interaction method, apparatus, storage medium, and device based on a holographic display device.
Background
With the development of mobile communication technology, people's expectations for the communication experience are also rising. Traditional communication is generally limited to audio and/or video calls; users have limited interactivity during a call and cannot obtain a richer, more realistic conversation experience.
To address this problem, holographic display devices have emerged in recent years to provide users with a more realistic and immersive interactive experience. However, existing holographic display devices still have limitations and disadvantages in user interaction, mainly in the following aspects:
Single interaction mode: current holographic display devices generally support only simple operations such as clicking and sliding, and cannot implement more natural and realistic interaction modes such as gesture recognition and voice recognition. As a result, users cannot obtain a richer, more realistic interactive experience.
Slow response speed: to provide richer interactive visuals, a holographic display device often needs to process large amounts of audio and video data. Limited by the device's processing speed and the network transmission speed, delay and stuttering can occur during complex operations and real-time interaction, degrading the user's interactive experience.
Poor user experience: due to the limitations in interaction mode and response speed, a user may feel awkward and uncomfortable when making a call with an existing holographic display device, and cannot obtain the same ease as a conventional call.
Based on this, a more natural, real and rapid interaction manner needs to be developed to improve the interaction experience between users.
Disclosure of Invention
The present application aims to provide an interaction method, device, storage medium and apparatus based on a holographic display apparatus, so as to solve at least one of the above problems.
In a first aspect of the present application, there is provided an interaction method based on a holographic display device, applied to a first holographic display device, the method comprising:
in response to an interaction event triggered by a first user terminal, acquiring a second user identifier of a second user from the interaction event;
when an interaction condition is met between a first user identifier corresponding to the first user terminal and the second user identifier, acquiring second digital object data of the second user based on the second user identifier;
and presenting a second digital object of the second user on a first holographic display device corresponding to the first user terminal based on the second digital object data.
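For illustration only, the three steps of this aspect can be sketched as follows. The names (`InteractionEvent`, `handle_interaction_event`) and the callback-based structure are assumptions made for the sketch, not part of the patent disclosure:

```python
from dataclasses import dataclass

@dataclass
class InteractionEvent:
    event_type: str        # e.g. "call_request" (illustrative)
    first_user_id: str
    second_user_id: str

def handle_interaction_event(event, meets_condition, fetch_digital_object, present):
    """Sketch of the claimed method: extract the second user's identifier
    from the event, check the interaction condition, then fetch and present
    the second user's digital object on the first holographic display device."""
    second_id = event.second_user_id                      # acquire second user ID
    if meets_condition(event.first_user_id, second_id):   # interaction condition
        data = fetch_digital_object(second_id)            # second digital object data
        present(data)                                     # render on the display device
        return True
    return False
```

In practice the three callbacks would be backed by the affinity check, the cloud server, and the projection hardware respectively.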
In one embodiment, the interaction event includes a call request event with the second user, where the call request event includes a call request event sent by the first user to the second user, and/or the first user receives a call request event sent by the second user;
When the interaction event is the call request event, the second digital object data comprises basic data for presenting the second digital object;
when the social affinity between the first user and the second user is higher than an affinity threshold, judging that the interaction condition is met;
The method further comprises the steps of: when the interaction condition is not met, acquiring first digital object data of the first user based on a first user identification in the interaction event;
and presenting the first digital object of the first user in a first holographic display device corresponding to the first user terminal based on the first digital object data.
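The fallback described in this embodiment, in which the first user's own digital object is shown when the affinity condition fails, could be expressed as below; the numeric affinity scale and function names are assumptions, not the patent's:

```python
def choose_digital_object(first_id, second_id, affinity, threshold, fetch):
    """If social affinity clears the threshold, fetch the second user's
    digital object data; otherwise fall back to the first user's own object."""
    if affinity > threshold:     # interaction condition satisfied
        return fetch(second_id)
    return fetch(first_id)       # condition not met: present the first user's object
```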
In one embodiment, the method further comprises: and acquiring a presentation instruction sent by a first user terminal, and determining a digital object displayed on the first holographic display device according to the presentation instruction, wherein the digital object comprises the second digital object and/or a first digital object of a first user.
In one embodiment, the method further comprises: and acquiring real-time interaction information of the first user and the second user, and adjusting the presented digital object based on the real-time interaction information to enable the presented digital object to generate an action effect matched with the real-time interaction information.
In one embodiment, the digital object displayed on the first holographic display device comprises the first digital object and the second digital object; the method further comprises the steps of:
Collecting a first action of the first user through a camera of the first holographic display device and/or receiving first interaction information sent by the first user terminal;
and presenting the interaction effect of the first digital object and the second digital object on the first holographic display device, wherein the interaction effect is matched with the first action and/or the first interaction information.
In one embodiment, the presenting the interactive effect of the first digital object and the second digital object on the first holographic display device includes:
detecting the interaction type of the first action and/or the first interaction information; when the interaction type belongs to a unidirectional interaction type, causing the first digital object to present an action matched with the first action and/or the first interaction information;
when the interaction type belongs to a bidirectional interaction type, causing the first digital object to present a first action matched with the first action and/or the first interaction information, and causing the second digital object to present a second action that interacts with the first action.
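The unidirectional/bidirectional dispatch described above could be sketched as follows. The specific gestures and their reciprocal responses are invented for illustration; the patent only distinguishes the two interaction types:

```python
# Hypothetical classification: the patent only distinguishes unidirectional
# gestures (one avatar reacts) from bidirectional ones (both avatars react).
UNIDIRECTIONAL = {"wave", "raise_hand"}
BIDIRECTIONAL = {"hug", "handshake"}
# Assumed reciprocal action for each bidirectional gesture.
RESPONSE = {"hug": "hug_back", "handshake": "shake_back"}

def render_interaction(action):
    """Return (first_object_action, second_object_action_or_None)."""
    if action in UNIDIRECTIONAL:
        return action, None                 # only the first avatar moves
    if action in BIDIRECTIONAL:
        return action, RESPONSE[action]     # both avatars interact
    raise ValueError(f"unknown interaction type: {action}")
```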
In one embodiment, the obtaining the second digital object data of the second user includes: and acquiring the second digital object data from a cloud server.
In a second aspect of the present application, there is provided an interaction device based on a holographic display device, the device comprising:
the interactive event acquisition module is used for responding to an interactive event triggered from the first user terminal and acquiring a second user identification of a second user from the interactive event;
the digital object data processing module is used for acquiring second digital object data of the second user based on the second user identifier when the interaction condition is met between the first user identifier corresponding to the first user terminal and the second user identifier;
And the digital object presenting module is used for presenting the second digital object of the second user in the first holographic display equipment corresponding to the first user terminal based on the second digital object data.
In a third aspect of the application, there is provided a computer readable storage medium having stored thereon executable instructions which when executed by a processor cause the processor to perform the method according to any of the embodiments of the application.
In a fourth aspect of the present application, there is provided an electronic apparatus comprising: one or more processors;
a memory for storing one or more programs that, when executed by the one or more processors, cause the one or more processors to perform the method described in any of the embodiments of the application.
According to the interaction method, device, storage medium and equipment based on a holographic display device provided by the application, the holographic display device participates in the interaction between the first user and the second user, and a corresponding interaction condition is set. Provided the interaction condition is met, the second user's digital object can be presented on the first holographic display device during the interaction between the two users, thereby improving interactivity between users.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope of the present application.
FIG. 1A is a schematic illustration of an application scenario of an interaction method based on a holographic display device in one embodiment;
FIG. 1B is a schematic illustration of an application scenario of an interaction method based on a holographic display device in another embodiment;
FIG. 2 is a flow diagram of an interaction method based on a holographic display device in one embodiment;
FIG. 3 is a flow chart of an interaction method based on a holographic display device in another embodiment;
FIG. 4 is a schematic structural diagram of an interaction device based on a holographic display device in one embodiment;
Fig. 5 is a schematic structural diagram of an electronic device in another embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
The terms "first," "second," and the like, as used herein, may be used to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another element. For example, a first digital object may be referred to as a second digital object, and similarly, a second digital object may be referred to as a first digital object, without departing from the scope of the application. Both the first digital object and the second digital object are digital objects, but they are not the same digital object.
Also as used herein, the terms "comprises," "comprising," and/or the like, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
The interaction method based on the holographic display device can be applied to a scene shown in FIG. 1A. The holographic display device is a device which projects a three-dimensional image by reflection and/or transmission of an optical lens by using an optical projection technology. The digital object is the three-dimensional figure projected by the holographic display device. The first user terminal 110, the second user terminal 120, the first holographic display device 115, and the second holographic display device 125 may all be in communication with the server 130 over a network. The first user terminal 110 and the first holographic display device 115 are devices used by the first user, and the second user terminal 120 and the second holographic display device 125 are devices used by the second user. Specifically, the first user terminal 110 and the second user terminal 120 may be mobile terminal devices used by users or other devices capable of performing interactive communication, for example, any device capable of implementing call interaction between users, such as a mobile phone, a notebook computer, a tablet computer, an electronic bracelet, a desktop computer, and the like. The first holographic display device 115 and the second holographic display device 125 may be, for example, holographic projectors or holographic digital handheld devices that receive and process corresponding image data to present digital objects by projection in air through optical effects.
Connection and communication between the first holographic display device 115 and the first user terminal 110, and between the second holographic display device 125 and the second user terminal 120, can also be established over a network to form a binding. The connection between a holographic display device and a user terminal may use Wi-Fi Direct, NFC (near-field communication), or wireless standards such as Wi-Fi 6E and 5G, so that the holographic display device is bound to the user terminal and receives the interaction information and related instructions transmitted by it. The server 130 may be a distributed server or a cloud server. It connects to each user terminal and each holographic display device over the network, receives the interaction information transmitted by the user terminals and the video or image information transmitted by the holographic display devices, processes the interaction information to form the digital objects and/or related imaging data to be presented, and sends them to the corresponding holographic display device, which processes the data received from the server 130 and presents the digital object.
Wherein the digital object may be an avatar customized by the user, which may be generated from the user's real appearance. Specifically, the holographic display device may collect images and/or video of the user through a camera and input them into a digital object generation model to generate a digital object matched with the user.
Specifically, when a user's digital object is first generated, the user may enter corresponding preference setting information on the holographic display device while its camera collects personal images and/or audio-video data; alternatively, the holographic display device may receive the preference setting information and the personal images and/or audio-video data transmitted by the bound user terminal. The preference setting information includes basic personal information such as the user's character, age and gender, and feature data such as facial features, physical features, voice features and behavior features can be extracted from the personal images and/or video data. An avatar is configured based on these feature data and the preference settings, and the user's own digital object is generated by the corresponding digital object generation model.
Wherein the facial features include one or more of the shape, size and position of the face, eyes, nose, mouth, eyebrows, etc. By analyzing these features, a digital object resembling the user's facial features can be generated, making it more realistic and natural.
Physical features: include height, weight, body type, skin color, etc. This information helps generate a digital object that resembles the user's physical characteristics, making it closer to the user's actual image.
Behavior features: actions the user makes, such as gait, posture, expression and voice. By extracting pattern data from these behaviors, the generated digital object becomes more vivid and individual, enhancing the user's interactive experience.
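The three feature categories above, combined with the preference settings, form the input to the generation model. A minimal sketch of that data shape follows; the field names and the `build_generation_input` helper are illustrative assumptions (the generation model itself is not shown):

```python
from dataclasses import dataclass, field

@dataclass
class UserFeatures:
    """Feature data extracted from the user's images, video and audio."""
    facial: dict = field(default_factory=dict)     # face shape, eyes, nose, mouth, eyebrows
    physical: dict = field(default_factory=dict)   # height, weight, body type, skin color
    behavioral: dict = field(default_factory=dict) # gait, posture, expression, voice

def build_generation_input(features, preferences):
    """Merge the extracted feature data with the user's preference settings
    into a single input for a digital-object generation model."""
    return {"features": features, "preferences": preferences}
```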
Taking the communication among the first user terminal 110, the first holographic display device 115, and the server 130 as an example, as shown in fig. 1B, after the first holographic display device and the first user terminal establish communication, a user may select a model of a digital object (such as a "digital person") on the first user terminal, and after selecting the corresponding model, the model may be directly used as the digital object of the user. Furthermore, the digital object can be optimized by combining the portrait of the user on the basis of the selected digital object, so that the optimized image is matched with the user, and the optimized image is used as the digital object of the user.
Specifically, after the corresponding digital object model is selected, the personality and preference of the user can be more accurately understood by analyzing the multi-modal data such as the expression, the voice and the like of the user by utilizing the related user portrait analysis technology. Based on these analyses, a visual model suggestion is made that more closely matches the user's personal characteristics and preferences, and server 130 is utilized to ultimately form a corresponding digital object based on the user's associated actions. After the digital object is generated, the system may generate a unique identifier that is bound to the digital object. In this way, when the user's terminal device is connected to the holographic display device, the system can identify and display the corresponding digital object by this identifier. Wherein the identifier may be a user identification.
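The identifier binding described above, in which a unique identifier is generated for each digital object and later used by the display device to look it up, might look like this in a minimal form (a UUID is one possible choice of unique identifier; the patent does not specify the format):

```python
import uuid

class DigitalObjectRegistry:
    """Sketch of the identifier binding: each generated digital object is
    assigned a unique ID, which the holographic display device later uses
    to identify and display the corresponding object."""
    def __init__(self):
        self._objects = {}

    def register(self, digital_object):
        object_id = str(uuid.uuid4())   # unique identifier bound to the object
        self._objects[object_id] = digital_object
        return object_id

    def lookup(self, object_id):
        return self._objects.get(object_id)   # None if no object is bound
```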
Further, the system also provides a corresponding dynamic adjustment mechanism for the digital object, based on which the digital object can be adjusted in real time according to the user's interaction behavior, feedback and/or changes in preference. For example, the digital object can change along with changes in the user's appearance and habits, growing with the user as the user ages.
Still further, the holographic display device may also integrate a camera and/or a microphone, for example a camera on the top of the first holographic display device 115 and a microphone and speaker on its base, to collect the user's audio or image information in real time, where the image information represents actions made by the user. The user's voice and/or motion is analyzed and converted into action data that drives the digital object to perform a corresponding action. For example, when a hand-raising action by the user is detected, the action is analyzed, action data corresponding to it is generated, and the digital object is controlled to raise its hand as well; that is, the digital object displayed on the holographic display device makes an action matching the user's.
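The detected-action-to-action-data step can be sketched as a simple lookup; the joint/angle keyframe encoding is an assumption chosen for illustration, since the patent only says actions are converted into data that drives the avatar:

```python
# Hypothetical mapping from a recognized user action to keyframe-style
# action data that drives the projected digital object.
ACTION_LIBRARY = {
    "raise_hand": [{"joint": "right_arm", "angle": 150}],
    "wave":       [{"joint": "right_wrist", "angle": 30},
                   {"joint": "right_wrist", "angle": -30}],
}

def drive_digital_object(detected_action):
    """Convert a detected user action into action data for the avatar;
    unrecognized actions yield an empty keyframe list (no movement)."""
    return ACTION_LIBRARY.get(detected_action, [])
```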
In one embodiment, an interaction method based on a holographic display device is provided, applied to a first holographic display device, which may specifically be a first holographic projector. As shown in FIG. 2, the method comprises:
Step 202, in response to an interaction event triggered from the first user terminal, obtaining a second user identification of the second user from the interaction event.
In this embodiment, the interaction event is an event triggered by an interaction action or operation generated by the first user and the second user, and the interaction event may be an instant messaging event such as a call, a voice or a video chat. The interaction event can be an event actively triggered by the first user, or can be an event passively triggered by the first user when the second user sends an interaction request to the first user. Wherein the second user is the object of the first user interaction. The second user may include one or more. When a plurality of second users are provided, the second user identification of each second user can be obtained.
For example, the first user triggers a request for making a call to the second user through clicking, sliding, voice and other operations on the first user terminal, and at this time, the first user terminal triggers a corresponding interaction event, where the interaction event includes the second user identifier of the second user and may further include the first user identifier of the first user. The user identification may be a user ID, a cell phone number, a mailbox address, or other unique identifier of the corresponding user. For example, the second user identification may be a mobile phone number of the second user. Through this user identification, contact may be established with the corresponding user to obtain the digital object and/or data associated with the digital object for the corresponding user.
After triggering the interaction event on the first user terminal, the first user terminal may send the interaction event to a first holographic display device bound to the first user terminal. Specifically, the first user terminal may be provided with a related application program which communicates with the first holographic display device, and after the application program monitors the generation of the interaction event, the application program sends the interaction event to the first holographic display device. The interaction event comprises a corresponding event type, a first user identifier and a second user identifier. The event type may be specifically a voice interaction, a telephone interaction, a video interaction, a short message interaction, etc. The first holographic display device obtains a second user identification therein in response to the interaction event.
In one embodiment, the interaction event includes a call request event with the second user, and may also include a call in progress event. The call request event may be actively or passively triggered; that is, it includes a call request event sent by the first user to the second user and/or a call request event sent by the second user and received by the first user. A call request event is an event in which an interaction request is being initiated but the call has not yet been established; the user may accept or reject the request. Once a call request event is accepted, it becomes a call in progress event.
And 204, acquiring second digital object data of a second user based on the second user identifier when the interaction condition is met between the first user identifier and the second user identifier corresponding to the first user terminal.
In this embodiment, the interaction condition may be an interaction condition preset by the first user, or a default interaction condition of the system, and only if the interaction condition is satisfied, the first holographic display device may acquire the second digital object data.
The interaction condition may be physical factors such as distance, direction, angle, network environment and the like between the user and the holographic display device, or may be non-physical factors such as social relationship, interaction history, event occurrence time, presentation authority, authorization mode and the like between the users. I.e. the interaction conditions may be physical factor conditions and/or non-physical factor conditions.
Specifically, the first holographic display device may detect the visual position between the first user and itself and judge whether that position is within a preset visual range; if so, the physical factor condition is satisfied. For example, the first holographic display device may scan, through a built-in or external camera, whether a first user is present in its visible area and, if so, whether the device is also within the first user's line of sight (for example, the first user's eyes are detected); if both hold, the position is within the preset visual range and the physical factor condition is judged to be met. The non-physical factors may include the social affinity between the first user and the second user, and/or whether the second user has granted the first user permission to present the second digital object. The social affinity may be determined from one or more of the interaction duration between the users, the interaction frequency, and the interaction relationship set between them. The interaction relationship may include several types, such as an ordinary friend relationship or a close friend relationship. Understandably, the longer the interaction duration, the higher the interaction frequency, and the more intimate the interaction relationship (e.g., a close friend relationship), the higher the affinity. A corresponding affinity threshold may be preset; above this threshold, the non-physical factor condition is judged to be satisfied.
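One plausible way to combine duration, frequency and relationship into a single affinity score is a weighted sum. The weights, saturation points and relationship scores below are illustrative assumptions; the patent only names the three factors:

```python
# Assumed relationship scores; the patent only lists relationship types.
RELATIONSHIP_SCORE = {"stranger": 0.0, "friend": 0.5, "close_friend": 1.0}

def social_affinity(duration_hours, calls_per_week, relationship,
                    w_duration=0.3, w_frequency=0.3, w_relationship=0.4):
    """Weighted affinity in [0, 1]; duration and frequency are clamped
    against illustrative saturation points (100 h, 10 calls/week)."""
    d = min(duration_hours / 100.0, 1.0)
    f = min(calls_per_week / 10.0, 1.0)
    r = RELATIONSHIP_SCORE.get(relationship, 0.0)
    return w_duration * d + w_frequency * f + w_relationship * r
```

The resulting score would then be compared against the preset affinity threshold to decide whether the non-physical interaction condition holds.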
The digital object data is data used to cause the holographic display device to render the digital object. The data may be stored in the memory of the holographic display device itself or may be sent directly to the holographic display device by a server or user terminal. The digital object data may include one or more of base data, control parameters, action data, and interactive instructions, among others. The second digital object data represents digital object data of a second user and the first digital object data represents digital object data of the first user.
Basic data: the basic data includes one or more of the shape, size, structure, surface detail, etc. of the digital object. These data are typically created from a digital object model preset by the system, combined with the user's settings, and stored in a dedicated format (e.g., .obj, .fbx, .dae). The basic data may include one or more of shape data, appearance data, expression data, etc.
The shape data describes the basic shape and structure of the digital object, including one or more of the size, scale and shape of the head, body, limbs, etc. The appearance data describes the surface details, colors, textures, etc. of the digital object, including one or more of the colors, textures and patterns of skin, hair, eyes, clothing, etc. The expression data describes the facial expression and emotional state of the digital object, including the shape and motion of the mouth, eyes, eyebrows, etc., and one or more of sound, intonation, volume, vocal expression, etc. With this basic data, the default form of the user's digital object can be presented.
Control parameters: the control parameters are used to adjust the brightness and contrast of the projection, as well as to control the size, direction, position, etc. of the projection to ensure accurate presentation of the digital object.
Action data: the motion data describes a sequence of motions of the digital object that enable the digital object to exhibit a dynamic effect. The action data includes key point data of changes in walking, running, jumping, and the like. By importing motion data into the holographic display device, a realistic dynamic projection effect can be achieved.
Interaction instruction: interaction instructions may be generated based on user related operations for controlling the behavior and response of the digital object, such as in response to user input or interaction with other digital objects. Through the instructions, the user can interact with the digital object, so that the immersion and the interestingness of the user are improved.
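The four components of the digital object data described above can be summarized as a data structure; the concrete field types below are assumptions for the sketch, since the patent defines the components only in prose:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BaseData:
    shape: dict = field(default_factory=dict)       # head/body/limb size, scale, shape
    appearance: dict = field(default_factory=dict)  # skin, hair, eyes, clothing
    expression: dict = field(default_factory=dict)  # facial expression, voice traits

@dataclass
class DigitalObjectData:
    base: BaseData                                  # always needed to render the default form
    control_params: Optional[dict] = None           # brightness, contrast, projection size/position
    action_data: Optional[list] = None              # keyframe sequences (walk, run, jump, ...)
    interaction_instructions: Optional[list] = None # behavior/response instructions
```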
In one embodiment, whether the interaction condition is met between the first user identifier and the second user identifier of each second user can be detected, and the second digital object data corresponding to each second user identifier that meets the interaction condition can be acquired.
In one embodiment, the first holographic display device extracts the respective different second digital object data in accordance with the different interaction events. The acquired second digital object data includes base data in the second user's digital object data, such as when the interaction event is a call request event. The digital object of the second user in the default form of the system may be presented based on the base data, as it is not yet in the on state. Further, control parameters may also be included such that a default digital object of the second user is presented on the first holographic display device in accordance with the set projection effect.
When the interactive event is a call in progress event, the digital object data may further include one or more of control parameters, action data, and interactive instructions. In particular, the interaction instruction may be an instruction triggered according to an actual behavior of the second user or a related operation on the second user terminal/the first user terminal. For example, the interactive instruction may be an instruction of running, walking, hugging, etc., and the corresponding action data may be an action matched with the interactive instruction.
For example, if the second user terminal sends a "hug" message to the first user terminal, or the camera detects that the second user has made a physical hug action, then based on that message or action a hug instruction is issued to the second digital object and hug action data is generated, so that the second digital object presented on the first holographic display device makes the hug action. Alternatively, the first user terminal sends a hug message to the second user terminal, or the camera detects that the first user makes a physical hug action; the system detects that the message is interactive information, or that the action is an interactive action, and thereby triggers the second digital object to feed back a hug instruction and generates hug action data, so that the second digital object presented on the first holographic display device makes the hug action.
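The trigger flow in this example can be sketched as follows; the trigger table, function names, and placeholder action data are illustrative assumptions rather than part of the method:

```python
# Hypothetical sketch: map an interactive message or a detected physical
# action (e.g. "hug") to an instruction plus matching action data for the
# presented second digital object.

INTERACTIVE_TRIGGERS = {
    "hug": "HUG",
    "handshake": "HANDSHAKE",
    "wave": "WAVE",
}

def dispatch(trigger):
    """Issue an instruction and generate matching action data so that the
    second digital object performs the action; return None when the
    message/action is not interactive."""
    instruction = INTERACTIVE_TRIGGERS.get(trigger)
    if instruction is None:
        return None  # not interactive information; nothing to feed back
    # Placeholder keyframe sequence standing in for real action data.
    action_data = [f"{instruction.lower()}_frame_{i}" for i in range(3)]
    return {"instruction": instruction, "action_data": action_data}
```

A non-interactive message (e.g. plain text) simply produces no instruction, so the presented object stays in its current state.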
In one embodiment, the second digital object data may be second digital object data obtained from a cloud server.
In this embodiment, the digital object information defined and set by the user is stored on the cloud server. When the user terminal is connected with the holographic display device and the digital object is required to be displayed, the holographic display device downloads and synchronizes the corresponding digital object data from the cloud server so as to display and interact on the device. The real-time performance of image presentation can be improved by acquiring corresponding digital object data on the cloud server.
Step 206, presenting the second digital object of the second user in the first holographic display device corresponding to the first user terminal based on the second digital object data.
In this embodiment, after the first holographic display device acquires the digital object data, rendering processing may be performed according to the digital object data, so that the digital object corresponding to the data is rendered on the first holographic display device. Wherein the presentation may be a projection, such as a projection of the second digital object.
The second digital object may be a digital object that is generated or set in advance by the second user, and a process of generating or setting the digital object is described above and will not be described herein. The second digital object data is generated in the digital object generating process, and the first holographic display device can present the corresponding digital object according to the second digital object data after obtaining the second digital object data.
For example, when a first user initiates a call request to the second user terminal of a second user through its first user terminal, a corresponding interaction event is triggered. The first holographic display device extracts the second digital object data preset by the second user based on the interaction event, and then projects the target call object, namely the second digital object of the second user, on the first holographic display device, which can improve the interaction experience between users.
Further, while the call is in progress, the second user terminal or the second holographic display device can detect the physical action of the second user in real time through the camera, analyze the physical action, generate action data matched with it, and transmit the action data to the first holographic display device; the second digital object presented on the first holographic display device then makes an action matched with the physical action according to the action data.
Alternatively, the interactive information sent by the second user terminal to the first user terminal is analyzed, action data matched with the interactive information is generated, and the second digital object presented on the first holographic display device makes an action matched with the interactive information according to the action data.
According to the interaction method based on the holographic display device, the holographic display device is used for participating in the interaction between the first user and the second user, and corresponding interaction conditions are set, so that the digital object of the second user can be presented on the first holographic display device in the interaction process of the first user and the second user on the basis of meeting the interaction conditions, and the interactivity between the users is improved.
In one embodiment, the interaction condition is determined to be satisfied when the social affinity between the first user and the second user is above an affinity threshold.
In this embodiment, the affinity threshold may be a value set by the user or a suitable value set automatically by the system. The system can calculate the affinity between users according to historical interaction information such as the interaction frequency and interaction duration between the first user and the second user, and when the affinity is higher than the preset affinity threshold, it can be directly determined that the interaction condition is met.
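A minimal sketch of this affinity check, assuming a simple weighted combination of interaction frequency and duration (the weights, scales, and threshold are illustrative and not specified by the method):

```python
# Hypothetical affinity scoring from historical interaction information.
# Weights and normalization ranges are illustrative assumptions.

def social_affinity(interactions_per_week, avg_duration_min,
                    w_freq=0.6, w_dur=0.4):
    """Combine interaction frequency and duration into a 0-100 score."""
    freq_score = min(interactions_per_week / 20.0, 1.0) * 100
    dur_score = min(avg_duration_min / 60.0, 1.0) * 100
    return w_freq * freq_score + w_dur * dur_score

def interaction_condition_met(affinity, threshold=50.0):
    """The interaction condition is met when affinity exceeds the threshold."""
    return affinity > threshold
```

The threshold would be the user-defined or system-set value described above.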
In one embodiment, after step 202, further comprising: when the interaction condition is not met, acquiring first digital object data of a first user based on a first user identification in the interaction event; a first digital object of a first user is presented in a first holographic display device corresponding to the first user terminal based on the first digital object data.
In this embodiment, when the interaction condition is not satisfied between the first user and the second user, this indicates that the image of the second user is not allowed to be presented on the first holographic display device of the first user; the digital object data of the first user can therefore be obtained, and the digital object of the first user can be presented on the device according to that data. The content of the first digital object data is similar to the content of the second digital object data, and the manner of presenting the first digital object is similar to the manner of presenting the second digital object, which is not repeated here.
In one embodiment, the method further comprises: and acquiring a presentation instruction sent by the first user terminal, and determining a digital object displayed on the first holographic display device according to the presentation instruction, wherein the digital object comprises the second digital object and/or the first digital object of the first user.
Specifically, an APP or applet for communicating with the first holographic display device is installed on the first user terminal, and a presentation instruction may be triggered by operating on the APP or applet. The presentation instruction indicates the digital object to be presented on the first holographic display device, which may be the user's own digital object, the digital object of the user's call object, or both; when the call object is the second user, the digital object of the call object is the second digital object of the second user.
The presentation instructions include a user identifier of a user to whom the digital object belongs, where the user identifier may be one or more. From the user identification, a digital object to be presented may be determined.
In this embodiment, the presentation instruction may be executed after step 206 described above, or before step 202 or after step 204. That is, the first user may set the digital object presented on the first holographic display device at any suitable time before or after the interaction event is generated, for example setting the virtual object to be presented before the interaction event is generated, or modifying the presented object after the interaction event is generated, such as after the second digital object has been presented.
In one embodiment, there may be a plurality of second users. When the second users corresponding to the interaction event are plural, a list of the second users may be presented on the first user terminal, a selection operation of the first user on one or more users may be received, and a corresponding presentation instruction may be triggered based on the selection operation, where the presentation instruction indicates the user identifiers of the first user and/or the one or more second users to be presented.
Further, which second users' digital objects need to be presented may also be determined automatically or manually based on the number of second users. For example, when the number of second users exceeds a preset number threshold, the N second users with the highest affinity to the first user, among those satisfying the interaction condition, may be selected for presentation. Alternatively, at least one second user satisfying the interaction condition and the first user may be presented in turn in an alternating presentation manner.
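The top-N selection rule above can be sketched as follows; the data shapes and the default threshold are assumptions:

```python
# Hypothetical sketch: when too many second users satisfy the interaction
# condition, keep the N with the highest affinity to the first user.

def select_users_to_present(second_users, max_count=3):
    """second_users: [{"id": ..., "affinity": ...}, ...], already filtered
    to those satisfying the interaction condition."""
    if len(second_users) <= max_count:
        return second_users  # under the threshold: present all of them
    # Sort by affinity, highest first, and keep the top N.
    return sorted(second_users,
                  key=lambda u: u["affinity"], reverse=True)[:max_count]
```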
In one embodiment, the method further comprises: and acquiring real-time interaction information of the first user and the second user, and adjusting the presented digital object based on the real-time interaction information to enable the presented digital object to generate an action effect matched with the real-time interaction information.
In this embodiment, this step may be executed after step 206 described above. The real-time interaction information may be second interaction information sent by the second user terminal to the first user terminal, or first interaction information sent by the first user terminal to the second user terminal, and may further include first action information of the first user acquired by the first user terminal or the first holographic display device, and second action information of the second user acquired by the second user terminal or the second holographic display device. The information may be acquired through a camera on the user terminal or on the holographic display device. The first interaction information and the second interaction information can be any form of message, such as a text, voice, or image/animation message. For example, the real-time interaction information can be a hug text message or picture sent by a user, or a hug action made by a user and captured by the camera.
After the real-time interaction information is detected, the presented digital object can be adjusted according to the information, so that the digital object can also perform actions such as matching the interaction information. Such as also causing the digital object to make a hug action.
Specifically, after resolving the action matched with the interaction information, the data matched with the action can be obtained from the cloud server or the memory of the holographic display device, and then the data is loaded, so that the matched action is made. The cloud server or the holographic display device can store certain action expression data in advance, and after meaning expressed by the interaction information is analyzed, the expression data is directly called, so that the digital object makes corresponding expression or action.
In this embodiment, the interactivity between users may be further improved by updating the actions of the digital object in real time according to the interaction information of the users.
In one embodiment, the digital object displayed on the first holographic display device comprises a first digital object and a second digital object; the method further comprises the following steps: collecting a first action of a first user through a camera of first holographic display equipment and/or receiving first interaction information sent by a first user terminal; an interactive effect of the first digital object and the second digital object is presented on the first holographic display device, the interactive effect being matched to the first action and/or the first interactive information.
In this embodiment, when the interaction condition is satisfied, the first digital object may be presented on the first holographic display device in addition to the second digital object. Specifically, the affinity threshold may include a first affinity threshold and a second affinity threshold greater than the first affinity threshold. When the social affinity between the first user and the second user is greater than or equal to the second affinity threshold, the first digital object and the second digital object are presented simultaneously; when the social affinity is between the first affinity threshold and the second affinity threshold, only the second digital object is displayed; and when the social affinity is below the first affinity threshold, only the first digital object of the first user is displayed.
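The two-threshold display rule can be sketched as follows; the threshold values and the handling of boundary cases are illustrative assumptions:

```python
# Hypothetical sketch of the two-threshold rule: which digital objects
# are displayed on the first holographic display device for a given
# social affinity. Threshold values are illustrative.

FIRST_THRESHOLD = 40.0
SECOND_THRESHOLD = 70.0  # greater than the first threshold

def objects_to_display(affinity):
    if affinity >= SECOND_THRESHOLD:
        # High enough affinity: present both objects simultaneously.
        return ["first_digital_object", "second_digital_object"]
    if affinity >= FIRST_THRESHOLD:
        # Between the thresholds: only the second user's object.
        return ["second_digital_object"]
    # Below the first threshold: only the first user's own object.
    return ["first_digital_object"]
```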
After determining the digital object of the user to be displayed, the digital object data of the corresponding user may be obtained, and the process is similar to the above step 204, and will not be repeated here.
In this embodiment, when the intimacy between users is high enough, the digital objects of the first user and the second user may be displayed at the same time, so as to improve the interactive experience of the users.
After the first digital object and the second digital object are presented at the same time, the first holographic display device can acquire the first action of the first user in real time through its camera, or receive the first interaction information sent by the first user terminal, where the first interaction information is a message such as text, voice, or an image sent by the first user to the second user. The first action and/or the first interaction information is analyzed to identify the expressed interaction intention, action data matched with the interaction intention is then acquired, and the first digital object is made to perform an action matched with the interaction intention based on the action data, thereby presenting the interaction effect between the first digital object and the second digital object.
In this embodiment, the first digital object and the second digital object are displayed on the first holographic display device, and the first digital object is enabled to generate corresponding actions according to the interaction information of the first user, so that a matched interaction effect is generated between the plurality of digital objects, and the interactivity of the first digital object and the second digital object can be further improved.
Further, after the first action is obtained, the distance between the first user and the first holographic display device can also be detected, and the size of the first digital object can be adjusted according to the distance while the interactive effect is displayed. When the distance between the first user and the first holographic display device changes from far to near, the first digital object can display the interactive effect while its display size is gradually increased; conversely, when the distance changes from near to far, the first digital object can display the interactive effect while its display size is gradually reduced. For example, if the first action is a walking action moving away from the first holographic display device, the first digital object can be made to perform the walking action while its size is gradually reduced, thereby displaying the walking-away effect.
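The distance-based size adjustment can be sketched as a clamped linear mapping from distance to display scale; the working range and scale bounds are illustrative assumptions:

```python
# Hypothetical sketch: map the user-to-device distance to a presentation
# scale factor, so the object grows as the user approaches and shrinks
# as the user moves away. All numeric ranges are assumptions.

def display_scale(distance_m,
                  near_m=0.5, far_m=3.0,
                  min_scale=0.5, max_scale=1.5):
    """Return the scale factor for the presented digital object."""
    d = max(near_m, min(far_m, distance_m))  # clamp to the working range
    t = (far_m - d) / (far_m - near_m)       # 1.0 when near, 0.0 when far
    return min_scale + t * (max_scale - min_scale)
```

Re-evaluating this per frame as the detected distance changes produces the gradual grow/shrink effect described above.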
In one embodiment, the first holographic display device may also receive second interaction information and/or a second action of the second user. The second interaction information and/or second action may be information sent to the first holographic display device by the cloud server; based on it, the expressed interaction intention is identified, matching action data is acquired, and the second digital object is made to perform an action matched with the interaction intention based on the action data, thereby presenting the interaction effect between the first digital object and the second digital object.
In one embodiment, presenting an interactive effect of the first digital object and the second digital object on the first holographic display device includes: detecting the interaction type to which the first action and/or the first interaction information belongs; when it belongs to the unidirectional interaction type, causing the first digital object to present an action matched with the first action and/or the first interaction information; and when it belongs to the bidirectional interaction type, causing the first digital object to present a first action matched with the first action and/or the first interaction information, and causing the second digital object to present a second action interacting with the first action.
In this embodiment, the first holographic display device further detects the interaction type of the real-time interaction message, and different interaction types determine which digital objects perform corresponding actions. The interaction types include a unidirectional interaction type and a bidirectional interaction type. The unidirectional interaction type is a type in which only one user's digital object needs to make a corresponding action, while the bidirectional interaction type represents a type in which both the first user's and the second user's digital objects make corresponding actions.
For example, interactive information or actions of the unidirectional interaction type may include relatively weakly interactive information that does not require a response, such as walking, standing, praying, or speaking; interactive information or actions of the bidirectional interaction type may include relatively strongly interactive information that requires a response, such as hugging, kissing, or handshaking.
The first action and/or the first interaction information can be analyzed to determine the meaning it expresses and to identify the interaction type to which that meaning belongs; once the interaction type is known, the corresponding digital object can be made to perform the corresponding action.
Specifically, corresponding action data can be obtained according to the expressed meaning and the interaction type, and the corresponding digital object is driven based on the action data. If the interaction type is the unidirectional interaction type, one piece of action data corresponding to the expressed meaning is acquired, and the first digital object performs the action with the interactive effect based on that data; if the interaction type is the bidirectional interaction type, two pieces of action data corresponding to the meaning are acquired, one for making the first digital object perform the first action and the other for making the second digital object perform a second action in response to the first action. The first action and the second action may be the same action or different actions; for example, both may be a handshake or hug action, or the first action may be a specific action and the second action a response to receiving that specific action.
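The unidirectional/bidirectional dispatch can be sketched as follows; the type tables and the choice of an identical response action are assumptions drawn from the examples above:

```python
# Hypothetical sketch: resolve which digital object(s) should act for a
# given expressed meaning, based on its interaction type.

ONE_WAY = {"walk", "stand", "speak"}       # no response needed
TWO_WAY = {"hug", "kiss", "handshake"}     # both objects respond

def resolve_actions(meaning):
    """Return the action(s) to perform on the presented digital objects."""
    if meaning in ONE_WAY:
        return {"first_object": meaning}   # only the first object acts
    if meaning in TWO_WAY:
        # Both objects act; here the response is the same action, though
        # it could be a distinct receiving action instead.
        return {"first_object": meaning, "second_object": meaning}
    return {}                              # unrecognized meaning: no action
```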
In one embodiment, another interaction method based on a holographic display device is provided, taking the holographic display device as a projection device, and the interaction event is specifically an event of a first user talking with a second user, that is, the first holographic display device is a first projection device, and the second holographic display device is a second projection device. As shown in fig. 3, the method includes:
Step 302, in response to a call request event triggered from the first user terminal, obtaining a second user identifier of the second user from the call request event.
In this embodiment, the interaction event is a call request event. The first holographic display device may be connected to the first user terminal via Bluetooth, Wi-Fi Direct, NFC, or the like, and when it detects that the first user terminal initiates a call request to the second user terminal, a corresponding call request event is triggered.
Step 304, when the first interaction condition is met between the first user identifier corresponding to the first user terminal and the second user identifier, acquiring second basic data of the second digital object of the second user based on the second user identifier.
In this embodiment, the second user identifier may be a mobile phone number of the second user, and the first holographic display device or the cloud server determines the corresponding second digital object based on a one-to-one correspondence relationship of "mobile phone number-binding user identity-binding digital object identifier". Wherein each user's mobile phone number is bound to its identity information and associated with a particular digital object identifier, so that the digital object to be presented can be quickly determined.
In particular, efficient database querying and data synchronization techniques may be employed. These techniques ensure that when a call request event is triggered, the first user can acquire the second user's digital object correspondence in real time and accurately display the corresponding digital object on the holographic display device. This not only improves the interaction experience between users but also ensures the security and privacy of information transmission.
The second digital object data acquired by the first user terminal may specifically include basic data of the second digital object, i.e. the second basic data. By means of this basic data, the digital object set by the second user can be presented on the first holographic display device. The second digital object data may be obtained from a cloud server. Specifically, the first holographic display device may detect whether second digital object data is stored in the first holographic display device, if yes, the second digital object data is directly obtained from the storage of the first holographic display device, and if not, a request for obtaining the second digital object may be sent to the cloud server, where the request includes a second user identifier. And after receiving the request, the cloud server acquires the second digital object data based on the second user identifier and sends the second digital object data to the first holographic display device.
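The local-storage-first lookup can be sketched as follows; the class and server interfaces are hypothetical stand-ins for the device storage and cloud server:

```python
# Hypothetical sketch of the lookup order: check the device's own storage
# first, then fall back to requesting the data from the cloud server by
# user identifier, caching the result locally.

class HolographicDisplay:
    def __init__(self, cloud):
        self._local = {}      # user_id -> digital object data
        self._cloud = cloud   # any object exposing fetch(user_id)

    def get_digital_object_data(self, user_id):
        if user_id in self._local:
            return self._local[user_id]        # hit: use the stored copy
        data = self._cloud.fetch(user_id)      # miss: request from cloud
        self._local[user_id] = data            # store for later requests
        return data

class FakeCloud:
    """Stand-in for the cloud server, counting fetches for illustration."""
    def __init__(self):
        self.calls = 0
    def fetch(self, user_id):
        self.calls += 1
        return {"id": user_id, "base_data": "default_form"}
```

A repeated request for the same identifier is then served from local storage without contacting the server again.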
Before step 304, step 303 is further included: detecting whether the first interaction condition is met between the first user identifier and the second user identifier; if yes, executing step 304, otherwise executing step 308.
The first interaction condition may be any one or a combination of the above physical factor conditions and/or non-physical factor conditions. For example, the condition may specifically include one or a combination of user authentication, preset rules, temporary authorization, a bidirectional consent mechanism, and the like.
For user authentication, the user can confirm whether to display the digital object of the other party by means of fingerprint, face recognition, voiceprint authentication, or the like. This increases security and ensures that only authenticated users can present their digital objects.
For the preset rule, a user can preset a specific rule in the device, for example, only when the two parties meet a certain interaction frequency or have a specific relationship label, the digital object of the other party can be displayed. This way the user is enabled to more flexibly control when the digital object is presented. The preset rule may be a corresponding affinity threshold, and the specific relationship tag may be a close friend tag, for example, the two parties are in a friend group of the close friend of the other party.
For the temporary authorization mode, the user may select a temporary authorization that allows a particular contact to display its digital object for a particular period of time. This approach may be applicable to ad hoc conferences or interactions in specific contexts.
For the bi-directional consent mechanism, explicit consent of both users is required before the digital object is displayed. This way, privacy and control of interactions of the user are ensured.
For example, after the user authentication is satisfied, the first interaction condition may further include a preset time period and an affinity of the first user and the second user, and the first interaction condition is determined to be satisfied only when the social affinity of the first user and the second user is higher than the first affinity threshold and the occurrence of the interaction event is within an allowable time period set by the second user.
Further, in order to improve data privacy and security, when the first holographic display device sends a request to the cloud server, the second user identifier in the request may be encrypted, that is, the encrypted second user identifier is sent. After receiving the request, the cloud server can verify the identity of the first holographic display device to ensure the validity of the request's sender. After the identity verification passes, the user identifier in the request is parsed and verified: for example, the second user identifier in the request can be decrypted, the decrypted second user identifier matched against information in the database, and the existence of the corresponding second user identifier queried, thereby verifying the validity and accuracy of the second user identifier.
After the second user identification passes the verification, corresponding second digital object data is extracted according to the second user identification, and the data can be sent to the first holographic display device in an encrypted form.
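The identifier verification flow can be sketched as follows; for brevity this uses an HMAC integrity tag in place of full encryption, and the shared key handling is purely illustrative:

```python
# Hedged sketch of the request verification above. A production system
# would encrypt the identifier; here an HMAC tag stands in to show the
# verify-then-lookup order. Key provisioning is an assumption.
import hashlib
import hmac

SHARED_KEY = b"demo-key"  # assumed to be provisioned to device and server

def protect_identifier(user_id):
    """Device side: attach an integrity tag to the second user identifier."""
    tag = hmac.new(SHARED_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return {"user_id": user_id, "tag": tag}

def verify_and_lookup(request, database):
    """Server side: verify the tag, then check the identifier exists."""
    expected = hmac.new(SHARED_KEY, request["user_id"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, request["tag"]):
        return False  # tampered or invalid request: reject before lookup
    return request["user_id"] in database
```

Only after both checks pass would the server extract and return the (encrypted) second digital object data.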
In one embodiment, to further improve the real-time performance of presenting the second digital object on the first holographic display device, the second digital object data may be data that has already been rendered on the cloud server, so that the first holographic display device directly loads the rendered result without performing rendering itself.
Step 306, presenting the second digital object of the second user in the first holographic display device corresponding to the first user terminal based on the second basic data.
Step 308, when the first interaction condition is not satisfied, acquiring first basic data of the first user based on the first user identifier in the interaction event.
In this embodiment, when the first interaction condition is not met, only the first digital object is presented on the first holographic display device. The first digital object data is acquired in the same manner as the second digital object data, and may be extracted from own data of the first holographic display device or may be acquired from a cloud server. The first digital object data is first base data, i.e., base data of the first digital object.
Step 309, presenting a first digital object of the first user in a first holographic display device corresponding to the first user terminal based on the first base data.
Step 310, detecting that the call request event transitions to a call-in-progress event.
In this embodiment, it may be detected whether the call request that occurs on the ue is successful, and if so, the call request is converted into a call in progress. In response to transitioning from the call request event to the call in progress event, the first action data and/or the second action data may continue to be acquired in real-time.
Step 312, obtaining in real time second action data that matches the second action of the second user.
The second action is an action made by the second user; the action is analyzed to form action data capable of driving the digital object to move, namely the second action data.
Specifically, the second motion may be a second motion collected by the second holographic display device through a camera and/or a microphone, which may include any motion presented by the second user, such as walking, jumping, sound, mouth shape, expression, and the like. The second holographic display device forms second action data based on the acquired second action, and transmits the second action data to the cloud server, and the cloud server can directly send the second action data to the first holographic display device after receiving the second action data, or can also render the second action data and send the rendered second action data to the first holographic display device.
Specifically, a voice recognition algorithm and a motion recognition algorithm are provided on the holographic display device, and the voice recognition algorithm can analyze the captured sound and convert the captured sound into key point data capable of driving the digital object. The action recognition algorithm will analyze the captured user's facial expression or body motion, etc., and convert it into key point data that can drive the digital object. The motion recognition algorithm may specifically comprise a face recognition algorithm.
The voice recognition algorithm is a voice recognition algorithm based on deep learning, and the algorithm can extract key features from the voice of the user, so that the original voice record does not need to be directly accessed or stored, and the voice processing efficiency is improved. Also, the motion recognition algorithm recognizes and simulates the expression and behavior of the user by analyzing key points such as the face or body of the user in the video data photographed by the camera. The voice recognition algorithm and the action recognition algorithm only extract necessary characteristic information, and original data is not reserved, so that user privacy is protected. By the method, training and learning of the model can be effectively realized under the condition of not invading the privacy of the user, and a safe and real-time interactive experience is provided for the user.
Step 314, causing the second digital object to exhibit, on the first holographic display device, a dynamic effect matched with the second action based on the second action data.
Further, the second action data may also include the distance between the second user and the second holographic display device, and the presentation size of the second digital object is adjusted according to the distance, so that the second digital object is driven to perform the action matched with the second action while its size is adjusted according to the distance.
Further, in step 312, in addition to the second action data, a second control parameter of the second user may be acquired, the control parameter being determined according to the captured distance and direction between the second user and the second holographic display device. Step 314 then includes: causing the second digital object to exhibit, on the first holographic display device, a dynamic effect matched with the second action based on the second action data and the second control parameter.
By further combining the control parameters, the second digital object is further enabled to adjust the direction and the size of the presentation of the second digital object in addition to the corresponding action, so that the action of the second user is matched.
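A minimal sketch of how the captured distance and direction could be mapped to control parameters for scale and orientation. The reference distance, clamp bounds, and function names are illustrative assumptions; the patent does not specify the mapping.

```python
# Hypothetical mapping from (distance, direction) to control parameters
# for the digital object's presentation size and facing direction.

REFERENCE_DISTANCE_M = 2.0   # assumed distance at which scale is 1.0
MIN_SCALE, MAX_SCALE = 0.5, 2.0

def control_parameters(distance_m, direction_deg):
    """Return (scale, yaw) for the digital object.

    A nearer user sees a larger object; the object is oriented toward
    the direction the user was captured from."""
    scale = REFERENCE_DISTANCE_M / max(distance_m, 0.1)  # avoid divide-by-zero
    scale = max(MIN_SCALE, min(MAX_SCALE, scale))        # clamp to sane bounds
    yaw = direction_deg % 360.0                          # normalize direction
    return scale, yaw
```

For example, a user standing 4 m away yields a half-size object, while a user at 1 m yields the maximum scale.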
Step 316, acquiring the first action of the first user in real time, and acquiring the matched first action data based on the first action.
In this embodiment, the first action is an action of the first user collected by the first holographic display device through the camera and/or the microphone. Similar to the second action, the first action may include any action presented by the first user, such as walking, jumping, sound, mouth shape, or expression. The first holographic display device may parse the first action to generate first action data.
The first holographic display device may also transmit the first action data to the cloud server, which forwards it to the second holographic display device provided the first interaction condition is met, so that the second holographic display device presents the first digital object and drives it to make an action matched with the first action based on the first action data.
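The cloud-side relay just described can be sketched as below. This is a hypothetical model: the first interaction condition is represented here as a simple affinity-threshold check (consistent with the affinity-based condition discussed elsewhere in this document), and all class and field names are assumptions.

```python
# Hypothetical cloud-relay sketch: the cloud server forwards a user's
# action data to the peer's holographic display only while the first
# interaction condition (modelled as an affinity threshold) holds.

class CloudServer:
    def __init__(self, affinity, threshold=0.6):
        self.affinity = affinity   # {(uid_a, uid_b): affinity score}
        self.threshold = threshold
        self.outbox = []           # delivered (receiver_id, action_data) pairs

    def condition_met(self, a, b):
        # Affinity is symmetric; look up the pair in either order.
        score = self.affinity.get((a, b), self.affinity.get((b, a), 0.0))
        return score >= self.threshold

    def relay(self, sender, receiver, action_data):
        """Forward action data only when the interaction condition holds."""
        if self.condition_met(sender, receiver):
            self.outbox.append((receiver, action_data))
            return True
        return False

cloud = CloudServer({("user1", "user2"): 0.8})
ok = cloud.relay("user1", "user2", {"keypoints": [(10, 20)]})
```

When the condition fails, `relay` returns `False` and nothing reaches the peer device, matching the fallback in which only the local device presents the first digital object.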
If the first interaction condition is not met, only steps 316 to 318 are performed: the first digital object is presented and driven on the first holographic display device to make an action matched with the first action.
In one embodiment, after step 310 the method further comprises: detecting whether a second interaction condition is met between the first user identifier and the second user identifier; if so, executing steps 312 to 318, otherwise executing steps 316 to 318. The execution order of step 312 and step 316 is not limited; for example, they may be executed in parallel, or step 312 may be executed first and step 316 afterwards.
The second interaction condition is used to determine the presentation mode of the digital objects during the call in-progress event. Similar to the first interaction condition, the second interaction condition may also comprise one or more of the physical and non-physical factors described above.
When the second interaction condition is also met, the first digital object and the second digital object are presented simultaneously; otherwise, they are not presented simultaneously. The second interaction condition may be whether the affinity between the first user identification and the second user identification is greater than or equal to a second affinity threshold; if so, the second interaction condition is determined to be satisfied.
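The two-tier threshold logic above can be sketched as a small decision function. The threshold values are assumptions for illustration; the only grounded structure is that a first threshold gates whether the peer's object is shown at all, and a second, stricter threshold gates simultaneous presentation.

```python
# Hypothetical presentation-mode decision based on two affinity thresholds.

FIRST_THRESHOLD = 0.5    # assumed: first interaction condition (show peer object)
SECOND_THRESHOLD = 0.8   # assumed: second interaction condition (show both at once)

def presentation_mode(affinity):
    """Map an affinity score to the presentation mode of the digital objects."""
    if affinity >= SECOND_THRESHOLD:
        return "simultaneous"   # both digital objects shown together
    if affinity >= FIRST_THRESHOLD:
        return "alternate"      # objects shown one at a time
    return "self_only"          # only the local user's digital object
```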
At step 318, the first digital object is caused to present a dynamic effect on the first holographic display device that matches the first action based on the first action data.
Similarly, in step 316, in addition to the first action data, a first control parameter of the first user may also be acquired; the control parameter is determined from the captured distance and direction between the first user and the first holographic display device. Step 318 then includes: based on the first action data and the first control parameter, causing the first digital object to present, on the first holographic display device, a dynamic effect matching the first action.
By further combining the control parameter, the first digital object not only presents the corresponding action but also adjusts its presentation direction and size, so that it better matches the action of the first user.
If the first interaction condition is satisfied, steps 314 to 318 continue to be performed; if it is not satisfied, only steps 316 to 318 are performed.
In particular, the first digital object and the second digital object may be presented simultaneously or alternatively on the first holographic display device, provided that the first interaction condition is fulfilled.
When presenting alternately, the system may detect which user is acting at the current moment during the call in-progress event and switch to presenting the digital object of that user. For example, if at the current moment the second user is detected making a second action, the second digital object is presented on the first holographic display device and driven to make an action matched with the second action; if at the next moment the second user is detected to have stopped and the first user makes a first action, the display switches to the first digital object, which is driven to make an action matched with the first action.
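The switching behavior in the example above can be modelled as a tiny state machine. The class and attribute names are assumptions; the behavior when neither user acts (or both act) is not specified in the text, so the sketch keeps the last shown object and, as an assumed tie-break, prefers the second user.

```python
# Hypothetical alternate-presentation switch: whichever user is currently
# acting has their digital object shown; when they stop and the peer acts,
# the display switches over.

class AlternatingPresenter:
    def __init__(self):
        self.shown = None  # which digital object is currently on screen

    def update(self, first_user_acting, second_user_acting):
        if second_user_acting:          # assumed tie-break: peer wins
            self.shown = "second_digital_object"
        elif first_user_acting:
            self.shown = "first_digital_object"
        # If neither user acts, keep showing the last active object.
        return self.shown

presenter = AlternatingPresenter()
t0 = presenter.update(first_user_acting=False, second_user_acting=True)
t1 = presenter.update(first_user_acting=True, second_user_acting=False)
t2 = presenter.update(first_user_acting=False, second_user_acting=False)
```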
When the first digital object and the second digital object are presented simultaneously, the interaction type to which the first action belongs is detected. When the first action belongs to the unidirectional interaction type, the first digital object is caused to present an action matched with the first action; when it belongs to the bidirectional interaction type, the first digital object is caused to present an action matched with the first action, and the second digital object is caused to present an action that interacts with the first action.
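The unidirectional/bidirectional dispatch can be sketched as a lookup. The specific actions in the mapping (handshake, wave, and so on) are illustrative assumptions, since the text does not enumerate which actions belong to which interaction type.

```python
# Hypothetical dispatch for simultaneous presentation: a bidirectional
# action makes the peer's digital object respond; any other action is
# treated as unidirectional and only the first digital object reacts.

BIDIRECTIONAL = {                 # assumed action -> peer response mapping
    "handshake": "handshake",
    "high_five": "high_five",
    "hug": "hug",
}

def dispatch(first_action):
    """Return the action each digital object should present."""
    if first_action in BIDIRECTIONAL:
        return {"first": first_action, "second": BIDIRECTIONAL[first_action]}
    # e.g. wave, jump, speak: unidirectional, peer object does not react
    return {"first": first_action, "second": None}
```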
In one embodiment, as shown in fig. 4, there is provided an interaction apparatus based on a holographic display device, the apparatus comprising:
An interaction event acquisition module 402, configured to, in response to an interaction event triggered by the first user terminal, acquire a second user identifier of the second user from the interaction event;
a digital object data processing module 404, configured to obtain second digital object data of the second user based on the second user identifier when the interaction condition is satisfied between the first user identifier and the second user identifier corresponding to the first user terminal;
A digital object rendering module 406 for rendering a second digital object of the second user in a first holographic display device corresponding to the first user terminal based on the second digital object data.
In one embodiment, the interaction event comprises a call request event with the second user; the call request event comprises a call request event sent by the first user to the second user, and/or a call request event received by the first user from the second user. When the interaction event is a call request event, the second digital object data includes base data for rendering the second digital object.
In one embodiment, the interaction device further includes an interaction condition detection module 403, configured to detect whether the interaction condition is satisfied between the first user identifier and the second user identifier, and determine that the interaction condition is satisfied when the social affinity between the first user and the second user is higher than the affinity threshold.
The digital object data processing module 404 is further configured to obtain first digital object data of a first user based on the first user identification in the interaction event when the interaction condition is not satisfied;
digital object rendering module 406 is also operative to render a first digital object of the first user in a first holographic display device corresponding to the first user terminal based on the first digital object data.
In one embodiment, digital object data processing module 404 is further configured to obtain a presentation instruction sent by the first user terminal; digital object rendering module 406 is also configured to determine a digital object to display on the first holographic display device according to the rendering instructions, the digital object comprising the second digital object and/or the first digital object of the first user.
In one embodiment, digital object data processing module 404 is also configured to obtain real-time interaction information for the first user and the second user; digital object rendering module 406 is also configured to adjust the rendered digital object based on the real-time interaction information such that the rendered digital object produces an action effect that matches the real-time interaction information.
In one embodiment, the digital object displayed on the first holographic display device comprises a first digital object and a second digital object;
The digital object data processing module 404 is further configured to collect, by using a camera of the first holographic display device, a first action of the first user, and/or receive first interaction information sent by the first user terminal;
the digital object presentation module 406 is further configured to present an interactive effect of the first digital object and the second digital object on the first holographic display device, the interactive effect matching the first action and/or the first interaction information.
The digital object presentation module 406 is further configured to detect the interaction type to which the first action and/or the first interaction information belongs. When it belongs to the unidirectional interaction type, the first digital object is caused to present an action matched with the first action and/or the first interaction information; when it belongs to the bidirectional interaction type, the first digital object is caused to present a first action matched with the first action and/or the first interaction information, and the second digital object is caused to present a second action that interacts with the first action.
In one embodiment, digital object data processing module 404 is also configured to obtain second digital object data from a cloud server.
In one embodiment, a computer-readable storage medium is provided having stored thereon executable instructions that, when executed by a processor, cause the processor to perform the steps of the method embodiments described above.
In one embodiment, there is also provided an electronic device comprising one or more processors; and a memory, wherein the memory stores one or more programs, and the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the steps in the method embodiments described above.
The electronic device may be a holographic display device, for example the holographic projector described above. As shown in fig. 5, the electronic device 500 includes a Central Processing Unit (CPU) 501, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the electronic device 500. The CPU 501, ROM 502, and RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
The following components are connected to the I/O interface 505: an input section 506 including a keyboard, a mouse, and the like; an output portion 507 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker, and the like; a storage portion 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card, a modem, or the like. The communication section 509 performs communication processing via a network such as the internet. The drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 510 as needed so that a computer program read therefrom is mounted into the storage section 508 as needed.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the application and form different embodiments. For example, in the claims below, any of the claimed embodiments may be used in any combination. The information disclosed in this background section is only for enhancement of understanding of the general background of the application and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.

Claims (10)

1. An interaction method based on a holographic display device, applied to a first holographic display device, characterized in that the method comprises:
Responding to an interaction event triggered by a first user terminal, and acquiring a second user identification of a second user from the interaction event;
when the interaction condition is met between the first user identifier corresponding to the first user terminal and the second user identifier, acquiring second digital object data of the second user based on the second user identifier;
and presenting a second digital object of the second user in a first holographic display device corresponding to the first user terminal based on the second digital object data.
2. The method of claim 1, wherein the interaction event comprises a call request event with the second user, the call request event comprising a call request event sent by the first user to the second user, and/or a call request event received by the first user from the second user;
When the interaction event is the call request event, the second digital object data comprises basic data for presenting the second digital object;
when the social affinity between the first user and the second user is higher than an affinity threshold, judging that the interaction condition is met;
The method further comprises the steps of: when the interaction condition is not met, acquiring first digital object data of the first user based on a first user identification in the interaction event;
and presenting the first digital object of the first user in a first holographic display device corresponding to the first user terminal based on the first digital object data.
3. The method according to claim 1, wherein the method further comprises:
and acquiring a presentation instruction sent by a first user terminal, and determining a digital object displayed on the first holographic display device according to the presentation instruction, wherein the digital object comprises the second digital object and/or a first digital object of a first user.
4. The method according to claim 1, wherein the method further comprises:
And acquiring real-time interaction information of the first user and the second user, and adjusting the presented digital object based on the real-time interaction information to enable the presented digital object to generate an action effect matched with the real-time interaction information.
5. The method of claim 3, wherein the digital object displayed on the first holographic display device comprises the first digital object and the second digital object;
the method further comprises the steps of:
Collecting a first action of the first user through a camera of the first holographic display device and/or receiving first interaction information sent by the first user terminal;
and presenting the interaction effect of the first digital object and the second digital object on the first holographic display device, wherein the interaction effect is matched with the first action and/or the first interaction information.
6. The method of claim 5, wherein the presenting the interactive effect of the first digital object and the second digital object on the first holographic display device comprises:
Detecting the first action and/or the interaction type of the first interaction information, and enabling the first digital object to present an action matched with the first action and/or the first interaction information when the first action and/or the interaction type of the first interaction information belong to a unidirectional interaction type;
when it belongs to the bidirectional interaction type, causing the first digital object to present a first action matched with the first action and/or the first interaction information, and causing the second digital object to present a second action that interacts with the first action.
7. The method according to any one of claims 1 to 6, wherein the obtaining the second digital object data of the second user comprises:
and acquiring the second digital object data from a cloud server.
8. An interactive apparatus based on a holographic display device, the apparatus comprising:
the interactive event acquisition module is used for responding to an interactive event triggered from the first user terminal and acquiring a second user identification of a second user from the interactive event;
the digital object data processing module is used for acquiring second digital object data of the second user based on the second user identifier when the interaction condition is met between the first user identifier corresponding to the first user terminal and the second user identifier;
And the digital object presenting module is used for presenting the second digital object of the second user in the first holographic display equipment corresponding to the first user terminal based on the second digital object data.
9. A computer readable storage medium having stored thereon executable instructions which when executed by a processor cause the processor to perform the method of any of claims 1 to 7.
10. An electronic device, comprising:
One or more processors;
a memory for storing one or more programs that, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-7.
CN202410269980.9A 2024-03-11 2024-03-11 Interaction method, device, storage medium and equipment based on holographic display equipment Pending CN118012270A (en)

Publications (1)

Publication Number: CN118012270A, Publication Date: 2024-05-10



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination