CN116027895A - Virtual content interaction method, device, equipment and storage medium

Virtual content interaction method, device, equipment and storage medium

Info

Publication number
CN116027895A
CN116027895A (application number CN202211621462.6A)
Authority
CN
China
Prior art keywords
virtual
digital person
instruction
exhibition hall
person identity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211621462.6A
Other languages
Chinese (zh)
Inventor
孔国良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen CCTV Technology Co., Ltd.
Original Assignee
Shenzhen CCTV Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen CCTV Technology Co., Ltd.
Priority to CN202211621462.6A
Publication of CN116027895A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application discloses a virtual content interaction method, device, equipment, and storage medium. A digital person identity model obtained by three-dimensional scanning is acquired and loaded into a virtual exhibition hall platform, and a corresponding digital person identity is established; a corresponding lecturer digital person of the virtual exhibition hall platform is matched according to the digital person identity; finally, corresponding visual content and voice content interaction processing is performed according to an instruction triggered by the digital person identity. This solves the technical problem that existing online digital exhibition halls and virtual exhibition halls only support exhibiting and visiting, so that users cannot achieve truly immersive interaction, resulting in a poor experience.

Description

Virtual content interaction method, device, equipment and storage medium
Technical Field
The present application relates to the field of information technology, and in particular to a virtual content interaction method, device, equipment, and storage medium.
Background
The metaverse (Metaverse) is a digital living space constructed by human beings using digital technology. It is a virtual world mapped from, or transcending, the real world, can interact with the real world, and carries a new social system. It is a new form of Internet application and social life that blends the virtual and the real by integrating multiple new technologies: it provides immersive experience based on augmented reality technology, closely fuses the virtual world and the real world across systems such as the social system and the identity system, and allows every user to produce and edit content.
Online exhibitions are currently developing rapidly. Various online exhibitions, virtual exhibitions, and online panoramic exhibitions have become indispensable channels for exhibitors, and virtual exhibition halls help exhibitors reduce costs and improve efficiency. Recently, metaverse exhibition halls have become popular and bring more opportunities for innovation and development.
However, existing online digital exhibition halls and virtual exhibition halls only support exhibiting and visiting; users cannot achieve truly immersive interaction, which results in a poor experience.
Disclosure of Invention
The application provides a virtual content interaction method, device, equipment, and storage medium, which solve the technical problem that existing online digital exhibition halls and virtual exhibition halls only support exhibiting and visiting, so that users cannot achieve truly immersive interaction, resulting in a poor experience.
In view of this, a first aspect of the present application provides a virtual content interaction method, including:
s1, acquiring a digital person identity model after three-dimensional scanning, loading the digital person identity model into a virtual exhibition hall platform, and establishing a corresponding digital person identity;
s2, explaining the digital person according to the corresponding virtual exhibition platform matched with the digital person;
s3, performing corresponding visual content and voice content interaction processing according to the acquired digital person identity triggering instruction.
Preferably, step S1 is preceded by:
s4, acquiring a creation instruction for creating a virtual exhibition hall platform;
s5, performing three-dimensional model construction rendering operation according to the creation instruction.
Preferably, step S5 specifically includes:
s51, performing three-dimensional model construction rendering operation for the preset display module according to the creation instruction;
s52, performing three-dimensional model construction rendering operation for the custom display module according to the creation instruction.
Preferably, after step S2 and before step S3, the method further comprises:
S6, acquiring a corresponding virtual exhibition hall display instruction triggered by the lecturer digital person;
S7, performing exhibition hall effect display rendering according to the virtual exhibition hall display instruction.
Preferably, step S3 specifically includes:
s31, if the acquired command triggered by the digital personal identity is a voice command, identifying the voice command, and carrying out corresponding visual content and voice content interaction processing;
s32, if the acquired instruction triggered by the digital human identity is a gesture action instruction, capturing the gesture action instruction, matching the gesture action instruction with a preset action library, and carrying out corresponding visual content and voice content interaction processing according to the matched association behavior.
Preferably, step S3 specifically includes:
S33, if the acquired instruction triggered by the digital person identity is a virtual communication instruction, establishing a communication connection with the corresponding digital person identity to be communicated with according to the virtual communication instruction, and triggering a virtual communication mode.
Preferably, the gesture action instruction comprises a body action instruction or a hand gesture instruction.
A second aspect of the present application provides a virtual content interaction device, the device comprising:
a model loading unit, used for acquiring a digital person identity model obtained by three-dimensional scanning, loading it into a virtual exhibition hall platform, and establishing a corresponding digital person identity;
a matching unit, used for matching a corresponding lecturer digital person of the virtual exhibition hall platform according to the digital person identity;
and an interaction unit, used for performing corresponding visual content and voice content interaction processing according to an acquired instruction triggered by the digital person identity.
A third aspect of the present application provides virtual content interaction equipment, the equipment comprising a processor and a memory:
the memory is used for storing program code and transmitting the program code to the processor;
the processor is configured to perform the steps of the virtual content interaction method of the first aspect according to instructions in the program code.
A fourth aspect of the present application provides a computer readable storage medium storing program code for performing the steps of the virtual content interaction method of the first aspect described above.
From the above technical solutions, the embodiments of the present application have the following advantages:
according to the virtual content interaction method, the digital person identity model after three-dimensional scanning is obtained and loaded into the virtual exhibition hall platform, the corresponding digital person identity is established, then the digital person is explained according to the virtual exhibition hall platform corresponding to the digital person identity matching, and finally corresponding visual content and voice content interaction processing is carried out according to the obtained instruction triggered by the digital person identity, so that the technical problem that an existing online digital exhibition hall and virtual exhibition hall only show and visit, and immersive interaction cannot be really realized by a user is solved, and low experience is caused.
Further, the instruction triggered by the digital person identity is determined to be a voice instruction, a gesture action instruction, or a virtual communication instruction, and the corresponding interactive operation is performed. This provides richer immersive interaction functionality, greatly improves the user experience, and gives users multiple options during interaction for solving their problems.
Drawings
Fig. 1 is a flowchart of an embodiment of a virtual content interaction method in an embodiment of the present application;
fig. 2 is a schematic structural diagram of a virtual content interaction device according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of virtual content interaction equipment according to an embodiment of the present application.
Detailed Description
To make the solution of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
The application provides a virtual content interaction method, device, equipment, and storage medium, which solve the technical problem that existing online digital exhibition halls and virtual exhibition halls only support exhibiting and visiting, so that users cannot achieve truly immersive interaction, resulting in a poor experience.
For ease of understanding, referring to fig. 1, fig. 1 is a flowchart of a virtual content interaction method in an embodiment of the present application. As shown in fig. 1, the method specifically comprises:
s1, acquiring a digital person identity model after three-dimensional scanning, loading the digital person identity model into a virtual exhibition hall platform, and establishing a corresponding digital person identity;
the method includes the steps that when a digital person identity model after three-dimensional scanning is obtained and loaded into a virtual exhibition hall platform, a creation instruction for creating the virtual exhibition hall platform is required to be obtained before a corresponding digital person identity is established; and performing three-dimensional model construction rendering operation according to the creation instruction. The digital person identity can be obtained by the visitor through the three-dimensional camera system device in advance after the visitor is scanned, and the model is endowed with the three-dimensional map of the visitor, so that the digital person identity is formed.
Further, a three-dimensional model construction and rendering operation is performed for a preset display module according to the creation instruction, and a three-dimensional model construction and rendering operation is performed for a custom display module according to the creation instruction.
For example, a user can customize the style and size of different virtual exhibition halls on the digital platform. The user can build the different display modules of the virtual exhibition hall from fixed three-dimensional models provided on the digital platform, or from three-dimensional models the user has built independently. The dynamic display effects of the exhibition hall's display content (pictures, text, and video) can also be customized.
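By way of illustration only, the sketch below shows how a creation instruction could select between preset and custom display modules (steps S51/S52). The instruction format, field names, and the build_hall function are assumptions, not the patent's concrete interface.

```python
# Hypothetical sketch of S5: building hall modules from a creation instruction.
from dataclasses import dataclass

@dataclass
class DisplayModule:
    name: str
    model_source: str    # preset library entry or path to a user-built 3D model
    content: list[str]   # pictures, text, and videos shown by this module

def build_hall(creation_instruction: dict) -> list[DisplayModule]:
    modules = []
    for spec in creation_instruction["modules"]:
        if spec["type"] == "preset":
            # S51: construct and render from a fixed 3D model on the digital platform.
            source = f"preset-library/{spec['model_id']}"
        else:
            # S52: construct and render from a three-dimensional model built by the user.
            source = spec["model_path"]
        modules.append(DisplayModule(spec["name"], source, spec.get("content", [])))
    return modules

hall = build_hall({"modules": [
    {"type": "preset", "model_id": "wall-A", "name": "entrance", "content": ["intro.mp4"]},
    {"type": "custom", "model_path": "assets/booth.glb", "name": "booth", "content": ["poster.png"]},
]})
```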
S2, matching a corresponding lecturer digital person of the virtual exhibition hall platform according to the digital person identity;
It should be noted that, after the corresponding lecturer digital person of the virtual exhibition hall platform is matched according to the digital person identity, a corresponding virtual exhibition hall display instruction triggered by the lecturer digital person is acquired, and exhibition hall effect display rendering is performed according to the virtual exhibition hall display instruction.
For example, a lecturer can remotely operate the display equipment switches, lighting switches, and content display of a real exhibition hall through the digital platform. Using the digital platform, the lecturer can, in a digital person identity, explain the content of the real exhibition hall to visitors through images and sound.
Further, the lecturer's explanation is delivered in real time and allows interaction with visitors, and the lecturer appears on all screens of the exhibition hall in the digital person identity.
S3, performing corresponding visual content and voice content interaction processing according to an acquired instruction triggered by the digital person identity.
It should be noted that, if the acquired instruction triggered by the digital person identity is a voice instruction, the voice instruction is recognized and corresponding visual content and voice content interaction processing is performed.
If the acquired instruction triggered by the digital person identity is a gesture action instruction, the gesture action instruction is captured and matched against a preset action library, and corresponding visual content and voice content interaction processing is performed according to the matched associated behavior.
If the acquired instruction triggered by the digital person identity is a virtual communication instruction, a communication connection is established with the corresponding digital person identity to be communicated with according to the virtual communication instruction, and a virtual communication mode is triggered.
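The three branches of step S3 can be pictured as a simple dispatcher. The sketch below is an assumption-laden illustration: the instruction dictionary layout, the DemoPlatform stub, and the contents of the preset action library are invented for the example and do not come from the patent.

```python
# Hedged sketch of the S3 dispatch: voice, gesture action, or virtual communication.
ACTION_LIBRARY = {            # preset action library: captured gesture -> associated behavior
    "wave": "greet_lecturer",
    "pinch": "zoom_exhibit",
    "swipe_left": "next_panel",
}

class DemoPlatform:           # stub standing in for the virtual exhibition hall platform
    def recognize_speech(self, audio: bytes) -> str: return "show exhibit"
    def capture_gesture(self, frames: list) -> str: return "swipe_left"
    def run_interaction(self, visual: str, voice: str) -> None: print("interact:", visual, voice)
    def connect(self, peer: str) -> None: print("connected to", peer)
    def enter_communication_mode(self, peer: str) -> None: print("voice room with", peer)

def handle_instruction(instruction: dict, platform: DemoPlatform) -> None:
    kind = instruction["kind"]
    if kind == "voice":
        # S31: recognize the voice instruction, then interact.
        text = platform.recognize_speech(instruction["audio"])
        platform.run_interaction(visual=text, voice=text)
    elif kind == "gesture":
        # S32: capture the gesture and match it against the preset action library.
        gesture = platform.capture_gesture(instruction["frames"])
        behavior = ACTION_LIBRARY.get(gesture)
        if behavior is not None:
            platform.run_interaction(visual=behavior, voice=behavior)
    elif kind == "virtual_communication":
        # S33: establish a communication connection and trigger the virtual communication mode.
        peer = instruction["peer_identity_id"]
        platform.connect(peer)
        platform.enter_communication_mode(peer)

handle_instruction({"kind": "gesture", "frames": []}, DemoPlatform())
```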
For example, visitors can, in their digital person identities, shuttle through and browse the virtual exhibition hall from all directions and multiple viewing angles through the digital platform. Users can hold face-to-face conversations with multiple people in the virtual exhibition hall in their digital person identities through the digital platform, and can freely shuttle between different virtual display panels in their digital person identities to control and browse content.
The visitor can operate the content of the corresponding panel in the real exhibition hall: pictures, text, and video can be individually enlarged, reduced, rotated, played, and paused.
The visitor can click a virtual button on an image-and-text display board of the real exhibition hall to open extended content beyond the cloud extranet for viewing, and experience the interest and boundlessness brought by immersive interaction.
When the visitor views a real object in the real exhibition hall, the display content corresponding to the real object is suspended in mid-air; it can be viewed in all directions through 360 degrees, and related information can be obtained by controlling the content.
The visitor wears VR/AR equipment to enter the virtual exhibition hall, can switch between different exhibition areas of the hall through the handle or gesture actions, and can trigger all corresponding exhibition content of the hall, so as to control and browse the content of the display panels. For example, a display panel in the virtual exhibition hall is initially static; when the digital person walks up to it, touching it by hand makes the panel content float before the eyes, and the display special effects and more detailed browsing of the extended content can be switched accordingly.
1. For the three-dimensional modeling, scene images are collected synchronously through an automatic optical inspection technology and then synthesized through 3D modeling technology and 2D images. Modification of the exhibition hall is performed through real-time background rendering on an Internet cloud server. For the three-dimensional modeling, a number of initial models are built in advance and stored on the cloud server, and a number of 2D image sheets are preset for further rendering of the exhibition hall. Exhibition halls can be classified directly at an early stage, so that during real-time background rendering the pre-classified models on the cloud server can be called, and the virtual image of the corresponding exhibition hall is then generated through 3D rendering technology.
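The cloud-side flow just described might look like the following sketch: a pre-classified initial model is called from the cloud server and the preset 2D image sheets are composited onto it. The model registry, the URIs, and the render_hall function are hypothetical stand-ins, not the patent's actual pipeline.

```python
# Hypothetical sketch of calling a pre-classified initial model for background rendering.
PRECLASSIFIED_MODELS = {      # initial models built in advance and stored on the cloud server
    "technology": "cloud://models/tech_hall_base.glb",
    "art": "cloud://models/art_hall_base.glb",
}

def render_hall(category: str, image_sheets: list[str]) -> dict:
    """Fetch the pre-classified base model for the hall category and composite the
    preset 2D image sheets onto it, standing in for the real 3D rendering step."""
    base_model = PRECLASSIFIED_MODELS[category]
    return {"base_model": base_model, "sheets": image_sheets, "status": "rendered"}

virtual_hall = render_hall("technology", ["sheet_01.png", "sheet_02.png"])
```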
2. The preset display modules cover wall shapes, equipment display modes, interactive display modes, exhibition hall color schemes, and the like. Because different exhibition halls correspond to different user needs, the functional display modules are further improved for the user's experience: visitors can, in their digital person identities, shuttle through and browse the virtual exhibition hall from all directions and multiple viewing angles through the digital platform; users can hold face-to-face conversations with multiple people in the virtual exhibition hall in their digital person identities; and users can freely shuttle between different virtual display panels in their digital person identities to control and browse content. User operation habits are recorded as big data. Through a recommendation algorithm preset on the cloud server, when the user enters the next exhibition hall, recommendations can be computed in real time according to the user's habits, and exhibition hall functions and color schemes suited to the user's operation are recommended and pre-rendered in real time, giving the user one more choice that matches those habits. For example, the rendering effects and function modules the user selected in exhibition hall A, gesture operation habits, dwell times, display content browsing habits, and the like are uploaded to the cloud server in real time, where the user's habits are computed. When the recommendation algorithm determines that the user is about to enter exhibition hall B, initial model rendering is performed on the cloud server; when the user enters exhibition hall B, a real-time judgment is made on the virtual equipment and several recommended virtual exhibition hall modes are presented. The user can select a recommended exhibition hall mode by voice or other means, and after a confirmed selection instruction is obtained, the corresponding exhibition hall mode is rendered in real time, improving the user experience.
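A toy version of this habit-driven recommendation could be scored as below. The counting scheme and field names are assumptions made for the example; the patent does not specify a concrete algorithm.

```python
# Assumed sketch: rank candidate exhibition hall modes by recorded operation habits.
from collections import Counter

def record_habits(operation_log: list[dict]) -> Counter:
    """Aggregate operation records (chosen effects, gestures, dwell-time buckets)."""
    habits = Counter()
    for event in operation_log:
        habits[(event["feature"], event["choice"])] += event.get("weight", 1)
    return habits

def recommend_hall_modes(habits: Counter, candidates: list[dict], top_k: int = 3) -> list[dict]:
    """Return the hall modes whose features best match the user's recorded habits."""
    def score(mode: dict) -> int:
        return sum(habits.get((f, c), 0) for f, c in mode["features"].items())
    return sorted(candidates, key=score, reverse=True)[:top_k]

hall_a_log = [{"feature": "color", "choice": "dark", "weight": 3},
              {"feature": "gesture", "choice": "pinch-zoom"}]
modes = [{"name": "mode-1", "features": {"color": "dark", "gesture": "pinch-zoom"}},
         {"name": "mode-2", "features": {"color": "light", "gesture": "controller"}}]
print(recommend_hall_modes(record_habits(hall_a_log), modes))  # mode-1 ranks first
```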
3. A group is set up in each exhibition hall, and each group is linked to its virtual exhibition hall; members are added to the group automatically. A handle or voice command can be used to open the group and find contacts, and digital persons far apart can communicate face to face through Internet communication technology and VR equipment. The communication mode can be a voice communication room established directly through a voice call with the corresponding contact, or public posting of speech converted to text within the group, and so on. There is no need to add friends separately: other digital persons matching one's needs can be contacted directly, and the virtual equipment can even present the body movements of the other end of the communication, further improving the exhibition experience.
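For the in-hall group communication, a minimal sketch might look as follows; the HallGroup class and its method names are invented for illustration and are not defined by the patent.

```python
# Hypothetical sketch: per-hall group with automatic joining and direct voice rooms.
class HallGroup:
    def __init__(self, hall_id: str) -> None:
        self.hall_id = hall_id
        self.members: set[str] = set()

    def auto_join(self, identity_id: str) -> None:
        """Every digital person entering the hall is added to the hall's group automatically."""
        self.members.add(identity_id)

    def open_voice_room(self, caller: str, callee: str) -> dict:
        """Establish a direct voice room between two members, with no need to add friends."""
        assert {caller, callee} <= self.members, "both parties must be in the hall's group"
        return {"room": f"{self.hall_id}:{caller}->{callee}", "mode": "voice"}

group = HallGroup("hall-B")
group.auto_join("visitor-1")
group.auto_join("visitor-2")
room = group.open_voice_room("visitor-1", "visitor-2")
```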
According to the virtual content interaction method, a digital person identity model obtained by three-dimensional scanning is acquired and loaded into a virtual exhibition hall platform, and a corresponding digital person identity is established; a corresponding lecturer digital person of the virtual exhibition hall platform is matched according to the digital person identity; finally, corresponding visual content and voice content interaction processing is performed according to an acquired instruction triggered by the digital person identity. This solves the technical problem that existing online digital exhibition halls and virtual exhibition halls only support exhibiting and visiting, so that users cannot achieve truly immersive interaction.
Further, the instruction triggered by the digital person identity is determined to be a voice instruction, a gesture action instruction, or a virtual communication instruction, and the corresponding interactive operation is performed. This provides richer immersive interaction functionality, greatly improves the user experience, and gives users multiple options during interaction for solving their problems.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a virtual content interaction device according to an embodiment of the present application. As shown in fig. 2, the device specifically includes:
a model loading unit 201, used for acquiring a digital person identity model obtained by three-dimensional scanning, loading it into a virtual exhibition hall platform, and establishing a corresponding digital person identity;
a matching unit 202, used for matching a corresponding lecturer digital person of the virtual exhibition hall platform according to the digital person identity;
and an interaction unit 203, used for performing corresponding visual content and voice content interaction processing according to an acquired instruction triggered by the digital person identity.
In the virtual content interaction device of the present application, the model loading unit 201 acquires the digital person identity model obtained by three-dimensional scanning, loads it into the virtual exhibition hall platform, and establishes the corresponding digital person identity; the matching unit 202 then matches the corresponding lecturer digital person of the virtual exhibition hall platform according to the digital person identity; finally, the interaction unit 203 performs corresponding visual content and voice content interaction processing according to the acquired instruction triggered by the digital person identity. This solves the technical problem that existing online digital exhibition halls and virtual exhibition halls only support exhibiting and visiting, so that users cannot achieve truly immersive interaction, resulting in a poor experience.
Further, the instruction triggered by the digital person identity is determined to be a voice instruction, a gesture action instruction, or a virtual communication instruction, and the corresponding interactive operation is performed. This provides richer immersive interaction functionality, greatly improves the user experience, and gives users multiple options during interaction for solving their problems.
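The three units of fig. 2 map naturally onto a thin wrapper class. The sketch below assumes a platform object exposing the hypothetical methods used in the earlier sketches; none of these names are defined by the patent.

```python
# Illustrative composition of the units in fig. 2 (all names hypothetical).
class VirtualContentInteractionDevice:
    def __init__(self, platform) -> None:
        self.platform = platform

    def model_loading_unit(self, mesh_path: str, texture_path: str):
        # Unit 201: load the 3D-scanned model and establish the digital person identity (S1).
        return self.platform.load_scanned_model(mesh_path, texture_path)

    def matching_unit(self, identity):
        # Unit 202: match the corresponding lecturer digital person by identity (S2).
        return self.platform.match_lecturer(identity)

    def interaction_unit(self, instruction: dict):
        # Unit 203: dispatch the triggered instruction to visual/voice interaction (S3).
        return self.platform.handle_instruction(instruction)
```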
An embodiment of the present application further provides another piece of virtual content interaction equipment, as shown in fig. 3. For convenience of explanation, only the portion related to the embodiment of the present application is shown; for specific technical details not disclosed, please refer to the method portion of the embodiments of the present application. The terminal can be any terminal equipment, including a mobile phone, a tablet computer, a personal digital assistant (PDA), a point-of-sale (POS) terminal, a vehicle-mounted computer, and the like. The mobile phone is taken as an example of the terminal:
Fig. 3 is a block diagram of part of the structure of a mobile phone related to the terminal provided in an embodiment of the present application. Referring to fig. 3, the mobile phone includes: a radio frequency (RF) circuit 1010, a memory 1020, an input unit 1030, a display unit 1040, a sensor 1050, an audio circuit 1060, a wireless fidelity (WiFi) module 1070, a processor 1080, and a power supply 1090. Those skilled in the art will appreciate that the mobile phone structure shown in fig. 3 does not limit the mobile phone, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The following describes the components of the mobile phone in detail with reference to fig. 3:
the RF circuit 1010 may be used for receiving and transmitting signals during a message or a call, and particularly, after receiving downlink information of a base station, the signal is processed by the processor 1080; in addition, the data of the design uplink is sent to the base station. Generally, RF circuitry 1010 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (English full name: low Noise Amplifier, english abbreviation: LNA), a duplexer, and the like. In addition, the RF circuitry 1010 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to global system for mobile communications (english: global System of Mobile communication, english: GSM), general packet radio service (english: general Packet Radio Service, GPRS), code division multiple access (english: code Division Multiple Access, english: CDMA), wideband code division multiple access (english: wideband Code Division Multiple Access, english: WCDMA), long term evolution (english: long Term Evolution, english: LTE), email, short message service (english: short Messaging Service, SMS), and the like.
The memory 1020 may be used to store software programs and modules, and the processor 1080 performs the various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1020. The memory 1020 may mainly include a program storage area and a data storage area: the program storage area may store the operating system, application programs required by at least one function (such as a sound playing function and an image playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book), and the like. In addition, the memory 1020 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state memory device.
The input unit 1030 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 1030 may include a touch panel 1031 and other input devices 1032. The touch panel 1031, also called a touch screen, can collect touch operations by the user on or near it (such as operations by the user on or near the touch panel 1031 using a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connected device according to a preset program. Optionally, the touch panel 1031 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 1080, and can also receive and execute commands from the processor 1080. The touch panel 1031 may be implemented as a resistive, capacitive, infrared, or surface acoustic wave panel, among other types. Besides the touch panel 1031, the input unit 1030 may include other input devices 1032, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power key), a trackball, a mouse, a joystick, and the like.
The display unit 1040 may be used to display information input by the user, information provided to the user, and the various menus of the mobile phone. The display unit 1040 may include a display panel 1041, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch panel 1031 may cover the display panel 1041; when the touch panel 1031 detects a touch operation on or near it, the operation is transmitted to the processor 1080 to determine the type of touch event, after which the processor 1080 provides a corresponding visual output on the display panel 1041 according to the type of touch event. Although in fig. 3 the touch panel 1031 and the display panel 1041 are two independent components implementing the input and output functions of the mobile phone, in some embodiments they may be integrated to implement the input and output functions.
The mobile phone may also include at least one sensor 1050, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel 1041 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 1041 and/or the backlight when the mobile phone is moved to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the attitude of the mobile phone (such as portrait/landscape switching, related games, and magnetometer attitude calibration), vibration-recognition functions (such as a pedometer and tapping), and the like. Other sensors that may also be configured on the mobile phone, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described in detail here.
The audio circuit 1060, a speaker 1061, and a microphone 1062 may provide an audio interface between the user and the mobile phone. The audio circuit 1060 may transmit the electrical signal converted from received audio data to the speaker 1061, which converts it into a sound signal for output. Conversely, the microphone 1062 converts collected sound signals into electrical signals, which the audio circuit 1060 receives and converts into audio data; the audio data are output to the processor 1080 for processing and then transmitted, for example, to another mobile phone via the RF circuit 1010, or output to the memory 1020 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 1070, the mobile phone can help the user send and receive email, browse web pages, access streaming media, and so on, providing the user with wireless broadband Internet access. Although fig. 3 shows the WiFi module 1070, it is understood that it is not an essential component of the mobile phone and can be omitted as required without changing the essence of the invention.
The processor 1080 is the control center of the mobile phone. It connects the various parts of the entire phone using various interfaces and lines, and performs the phone's various functions and processes data by running or executing the software programs and/or modules stored in the memory 1020 and calling the data stored in the memory 1020, thereby monitoring the phone as a whole. Optionally, the processor 1080 may include one or more processing units; preferably, the processor 1080 may integrate an application processor, which mainly handles the operating system, user interfaces, and application programs, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 1080.
The mobile phone further includes a power supply 1090 (such as a battery) for powering the various components. Preferably, the power supply may be logically connected to the processor 1080 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
Although not shown, the mobile phone may further include a camera, a Bluetooth module, and the like, which are not described here.
In the embodiment of the present application, the processor 1080 included in the terminal further has the following functions:
s1, acquiring a digital person identity model after three-dimensional scanning, loading the digital person identity model into a virtual exhibition hall platform, and establishing a corresponding digital person identity;
s2, explaining the digital person according to the corresponding virtual exhibition platform matched with the digital person;
s3, performing corresponding visual content and voice content interaction processing according to the acquired digital person identity triggering instruction.
The present application also provides a computer-readable storage medium storing program code for executing the virtual content interaction method of any one of the foregoing embodiments.
In the embodiments of the application, a virtual content interaction method, device, equipment, and storage medium are provided. A digital person identity model obtained by three-dimensional scanning is acquired and loaded into a virtual exhibition hall platform, and a corresponding digital person identity is established; a corresponding lecturer digital person of the virtual exhibition hall platform is then matched according to the digital person identity; finally, corresponding visual content and voice content interaction processing is performed according to an acquired instruction triggered by the digital person identity. This solves the technical problem that existing online digital exhibition halls and virtual exhibition halls only support exhibiting and visiting, so that users cannot achieve truly immersive interaction, resulting in a poor experience.
Further, the instruction triggered by the digital person identity is determined to be a voice instruction, a gesture action instruction, or a virtual communication instruction, and the corresponding interactive operation is performed. This provides richer immersive interaction functionality, greatly improves the user experience, and gives users multiple options during interaction for solving their problems.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The terms "first," "second," "third," "fourth," and the like in the description of the present application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be capable of operation in sequences other than those illustrated or described herein, for example. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in this application, "at least one" means one or more, and "a plurality" means two or more. "and/or" for describing the association relationship of the association object, the representation may have three relationships, for example, "a and/or B" may represent: only a, only B and both a and B are present, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: u disk, mobile hard disk, read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic disk or optical disk, etc.
The above embodiments are merely for illustrating the technical solution of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (10)

1. A virtual content interaction method, comprising:
s1, acquiring a digital person identity model after three-dimensional scanning, loading the digital person identity model into a virtual exhibition hall platform, and establishing a corresponding digital person identity;
s2, explaining the digital person according to the virtual exhibition hall platform corresponding to the digital person identity;
s3, performing corresponding visual content and voice content interaction processing according to the acquired instruction triggered by the digital person identity.
2. The virtual content interaction method according to claim 1, wherein the step S1 is preceded by:
s4, acquiring a creation instruction for creating the virtual exhibition hall platform;
s5, performing three-dimensional model construction rendering operation according to the creation instruction.
3. The virtual content interaction method according to claim 2, wherein the step S5 specifically includes:
s51, performing three-dimensional model construction rendering operation for a preset display module according to the creation instruction;
s52, performing three-dimensional model construction rendering operation for the custom display module according to the creation instruction.
4. The virtual content interaction method according to claim 1, wherein after the step S2 and before the step S3, the method further comprises:
S6, acquiring a corresponding virtual exhibition hall display instruction triggered by the lecturer digital person;
S7, performing exhibition hall effect display rendering according to the virtual exhibition hall display instruction.
5. The virtual content interaction method according to claim 1, wherein the step S3 specifically includes:
s31, if the acquired instruction triggered by the digital person identity is a voice instruction, identifying the voice instruction, and carrying out corresponding visual content and voice content interaction processing;
s32, if the acquired instruction triggered by the identity of the digital person is a gesture action instruction, capturing the gesture action instruction, matching the gesture action instruction with a preset action library, and carrying out corresponding visual content and voice content interaction processing according to the matched association behavior.
6. The virtual content interaction method according to claim 1 or 5, wherein the step S3 specifically includes:
S33, if the acquired instruction triggered by the digital person identity is a virtual communication instruction, establishing a communication connection with the corresponding digital person identity to be communicated with according to the virtual communication instruction, and triggering a virtual communication mode.
7. The virtual content interaction method of claim 5, wherein the gesture action instruction comprises a body action instruction or a hand gesture instruction.
8. A virtual content interaction device, comprising:
a model loading unit, used for acquiring a digital person identity model obtained by three-dimensional scanning, loading it into a virtual exhibition hall platform, and establishing a corresponding digital person identity;
a matching unit, used for matching a corresponding lecturer digital person of the virtual exhibition hall platform according to the digital person identity;
and an interaction unit, used for performing corresponding visual content and voice content interaction processing according to an acquired instruction triggered by the digital person identity.
9. Virtual content interaction equipment, the equipment comprising a processor and a memory:
the memory is used for storing program code and transmitting the program code to the processor;
the processor is configured to execute the virtual content interaction method of any one of claims 1-7 according to instructions in the program code.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores program code for performing the virtual content interaction method of any one of claims 1-7.
CN202211621462.6A 2022-12-05 2022-12-05 Virtual content interaction method, device, equipment and storage medium Pending CN116027895A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211621462.6A CN116027895A (en) 2022-12-05 2022-12-05 Virtual content interaction method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211621462.6A CN116027895A (en) 2022-12-05 2022-12-05 Virtual content interaction method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116027895A 2023-04-28

Family

ID=86090445

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211621462.6A Pending CN116027895A (en) 2022-12-05 2022-12-05 Virtual content interaction method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116027895A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117590986A (en) * 2024-01-19 2024-02-23 四川蜀天信息技术有限公司 Navigation interaction method, device and equipment applied to online virtual exhibition hall



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination