CN117459751A - Virtual-real interaction method, device and equipment

Info

Publication number
CN117459751A
Authority
CN
China
Prior art keywords
virtual
user
actual
scene
interaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311521322.6A
Other languages
Chinese (zh)
Inventor
刘婉蓉
郑彬戈
于芹
李建忠
袁庭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and MIGU Culture Technology Co Ltd
Priority to CN202311521322.6A
Publication of CN117459751A
Legal status: Pending

Classifications

    • H04L65/40 Network arrangements, protocols or services for supporting real-time applications in data packet communication: support for services or applications
    • H04L67/125 Protocols specially adapted for proprietary or special-purpose networking environments, involving control of end-device applications over a network
    • H04L67/131 Protocols for games, networked simulations or virtual reality
    • H04N21/2187 Selective content distribution, e.g. interactive television or video on demand [VOD]: live feed
    • H04N21/21805 Source of audio or video content enabling multiple viewpoints, e.g. using a plurality of cameras
    • H04N21/2387 Stream processing in response to a playback request from an end-user, e.g. for trick-play
    • H04N21/2393 Interfacing the upstream path of the transmission network, involving handling client requests
    • H04N21/4122 Peripherals receiving signals from specially adapted client devices: additional display device, e.g. video projector
    • H04N21/41415 Specialised client platforms involving a public display, viewable by several users in a public space outside their home, e.g. movie theatre, information kiosk
    • H04N21/4312 Generation of visual interfaces for content selection or interaction, involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/6587 Control parameters, e.g. trick play commands, viewpoint selection
    • H04N21/816 Monomedia components involving special video data, e.g. 3D video

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a virtual-real interaction method, device and equipment. The method comprises: in response to an interaction instruction sent by a first virtual user in a virtual scene, acquiring interaction data of the first virtual user; determining, according to the virtual position of the first virtual user in the virtual scene, the actual position of the first virtual user in the actual scene corresponding to the virtual scene; controlling interaction equipment in the actual scene based on the actual position, and pushing the interaction data to the actual users in the actual scene; and receiving interaction feedback data that the target actual user in the actual scene sends in response to the interaction data, and pushing the interaction feedback data to the first virtual user. With the method and device, an interaction mechanism between the virtual scene and the actual scene can be realized, improving the interaction experience between the metaverse world and the real world.

Description

Virtual-real interaction method, device and equipment
Technical Field
The invention relates to the technical field of the metaverse, and in particular to a virtual-real interaction method, device and equipment.
Background
Entertainment activities such as concerts have flourished as people's living standards improve. Because a physical concert is a one-off event subject to regional and ticketing restrictions, online concerts have been rising in popularity.
An existing online concert typically collects the real image of a singer on the physical stage through cameras arranged in the actual scene and live-streams the video for audiences to watch online; interaction between fans and the singer is realized through virtual gift tipping, fan spaces for fan-to-fan communication, and the like.
However, the inventors found that the prior art has at least the following problems: existing online concerts add forms such as virtual gift tipping and separately opened online fan communication spaces, but they do not essentially improve the interaction experience between fans and the singer; the interaction remains one-way, from the audience toward the singer.
Disclosure of Invention
The embodiment of the invention aims to provide a virtual-real interaction method, device and equipment, which can realize an interaction mechanism between a virtual scene and an actual scene and improve the interaction experience between the metaverse world and the real world.
In order to achieve the above object, an embodiment of the present invention provides a virtual-real interaction method, which is characterized by comprising:
responding to an interaction instruction sent by a first virtual user in a virtual scene, and acquiring interaction data of the first virtual user;
determining the actual position of the first virtual user in an actual scene corresponding to the virtual scene according to the virtual position of the first virtual user in the virtual scene;
controlling interaction equipment in the actual scene based on the actual position, and pushing the interaction data to the actual users in the actual scene;
and receiving interactive feedback data sent by the target actual user in the actual scene according to the interactive data, and pushing the interactive feedback data to the first virtual user.
As an improvement of the scheme, the interaction data is a user image picture; the interactive device comprises a projection display device and a projection device, wherein the projection display device is arranged behind the audience area in the actual scene and faces the specific area where the target actual user is located;
the step of controlling the corresponding interactive device to push the interactive data to the actual user in the actual scene based on the actual position is specifically:
determining a display area of the user image picture in the projection display equipment according to the actual position and the central position of the specific area, and taking the display area as a target display area;
and controlling the projection equipment to project the user image picture onto the target display area.
As an improvement of the above scheme, the interaction data is user sound information, and the interactive device comprises at least one sound playing device, each sound playing device being disposed on the periphery of the specific area where the target actual user is located, near the audience area in the actual scene;
The step of controlling the corresponding interactive device to push the interactive data to the actual user in the actual scene based on the actual position is specifically:
according to the actual position and the central position of the specific area, determining a sound playing device for playing the user sound information as a target sound playing device;
and controlling the target sound playing equipment to play the user sound information.
As an improvement of the above solution, before the acquiring the interaction data of the first virtual user in response to the interaction instruction sent by the first virtual user in the virtual scene, the method further includes:
when an interaction instruction sent by the first virtual user is received, judging whether a preset interaction condition is met or not; the interaction condition is that the number of the currently responded interaction instructions does not reach the upper limit, and the interaction equipment corresponding to the first virtual user is not occupied;
when the interaction condition is met, responding to an interaction instruction sent by a first virtual user in a virtual scene, and acquiring interaction data of the first virtual user;
and when the interaction condition is not met, pushing prompt information indicating that the interaction condition is not met to the first virtual user.
As an improvement of the above solution, the method further includes:
responding to a viewing angle selection instruction sent by the first virtual user in the virtual scene, and determining the viewing angle currently selected by the first virtual user; wherein the viewing angle comprises a panoramic viewing angle and a seat viewing angle;
invoking an image frame shot by preset shooting equipment in the actual scene according to the viewing angle currently selected by the first virtual user, generating a display picture under the viewing angle, and pushing the display picture to the first virtual user; the preset image pickup apparatus includes: a foreground image pickup apparatus located in front of an audience area in the actual scene, and a panoramic image pickup apparatus located behind the audience area.
As an improvement of the above-mentioned scheme, when the viewing angle currently selected by the first virtual user is a panoramic viewing angle,
the step of calling the image frames shot by the preset shooting equipment in the actual scene to generate the display picture under the viewing angle comprises the following steps:
determining the position of the panoramic camera on the connecting line as a first position according to the connecting line of the actual position and the central position of the specific area;
Determining a second position of the panoramic camera according to the first position and a preset offset and an offset direction; wherein the offset direction is determined from a relative positional relationship of the actual position and a center position of the audience area;
and combining the display picture under the panoramic viewing angle according to the image frames shot by the panoramic shooting equipment between the first position and the second position.
As an improvement of the above solution, when the viewing angle currently selected by the first virtual user is a seat viewing angle,
the step of calling the image frames shot by the preset shooting equipment in the actual scene to generate the display picture under the viewing angle comprises the following steps:
determining, according to the connecting line between the actual position and the center position of the specific area, the position of the panoramic image capturing apparatus on the connecting line as a third position, and the position of the foreground image capturing apparatus on the connecting line as a fourth position;
determining a corresponding image interception proportion according to the distance between the actual position and the central position of the specific area;
intercepting the image frames shot by the panoramic shooting equipment at the third position according to the image intercepting proportion to obtain intercepted image frames;
and synthesizing a display picture at the seat viewing angle according to the captured image frame and the image frame captured by the foreground image capturing equipment at the fourth position.
As an improvement of the above solution, the method further includes:
after constructing the virtual scene, when detecting that a third virtual user joins the virtual scene, constructing a virtual position in the virtual scene for the third virtual user;
determining virtual identity information of the third virtual user, and binding the virtual identity information with the position information, wherein the same virtual position can be bound with a plurality of pieces of virtual identity information; each third virtual user serves either as the first virtual user that initiates an interaction instruction in the virtual scene, or as a second virtual user that has joined the virtual scene and has not initiated an interaction instruction.
The embodiment of the invention also provides a virtual-real interaction device, which comprises:
the interactive data acquisition module is used for responding to an interactive instruction sent by a first virtual user in a virtual scene to acquire interactive data of the first virtual user;
the actual position determining module is used for determining the actual position of the first virtual user in an actual scene corresponding to the virtual scene according to the virtual position of the first virtual user in the virtual scene;
The interactive data pushing module is used for controlling the interactive equipment in the actual scene based on the actual position and pushing the interactive data to an actual user in the actual scene;
and the interactive feedback data pushing module is used for receiving the interactive feedback data sent by the target actual user in the actual scene according to the interactive data and pushing the interactive feedback data to the first virtual user.
The embodiment of the invention also provides virtual-real interaction equipment, which comprises a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, wherein the virtual-real interaction method is realized when the processor executes the computer program.
The embodiment of the invention also provides a computer readable storage medium, which comprises a stored computer program, wherein the computer program is used for controlling equipment where the computer readable storage medium is located to execute the virtual-real interaction method according to any one of the above.
Compared with the prior art, the virtual-real interaction method, device and equipment disclosed by the invention construct a virtual scene according to an actual scene, and a first virtual user selects a virtual position to be seated in the virtual scene. When the user interacts with a target actual user in the actual scene, the corresponding interaction equipment in the actual scene is controlled, according to the actual position corresponding to the virtual position, to push the interaction data to the actual users in the actual scene, so that the target actual user in the actual scene can truly receive the interaction data of the first virtual user, make interaction feedback data for the interaction data, and return the interaction feedback data to the first virtual user. The first virtual user thus receives interaction feedback made by the target actual user in the actual scene exclusively for the first virtual user, realizing a two-way interaction mechanism between the virtual scene and the actual scene and improving the immersive interaction experience of the user.
Drawings
Fig. 1 is a schematic flow chart of a virtual-real interaction method according to an embodiment of the present invention;
FIG. 2 is a schematic view of a projection display device and a mounting location of the projection device in an embodiment of the present invention;
fig. 3 is a schematic diagram of an installation position of a sound playing device in an embodiment of the present invention;
FIG. 4 is a schematic flow chart of a virtual-real interaction method according to an embodiment of the invention;
fig. 5 is a schematic diagram of an installation position of an image pickup apparatus in an embodiment of the present invention;
FIG. 6 is a schematic diagram of generating a panoramic viewing perspective in an embodiment of the invention;
FIG. 7 is a schematic diagram of generating a seat viewing perspective in an embodiment of the invention;
fig. 8 is a schematic structural diagram of a virtual-real interaction device according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of virtual-real interaction equipment according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the description of the specification and claims, it should be understood that the terms "first", "second", etc. are used solely to distinguish between similar features, not to describe a sequential or chronological order, and not to indicate or imply relative importance or implicitly indicate the number of features indicated; the terms are interchangeable where appropriate. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature.
Referring to fig. 1, a flow chart of a virtual-real interaction method provided by an embodiment of the present invention: the embodiment provides a virtual-real interaction method applied to a metaverse server, the method comprising steps S11 to S14:
s11, responding to an interaction instruction sent by a first virtual user in a virtual scene, and acquiring interaction data of the first virtual user;
s12, determining the actual position of the first virtual user in an actual scene corresponding to the virtual scene according to the virtual position of the first virtual user in the virtual scene;
s13, controlling interaction equipment in the actual scene based on the actual position, and pushing the interaction data to an actual user in the actual scene;
S14, receiving interaction feedback data sent by the target actual user in the actual scene according to the interaction data, and pushing the interaction feedback data to the first virtual user.
It should be noted that the embodiment of the invention is applied to a virtual scene; the metaverse server acquires related parameter information of the actual scene to construct the virtual scene, and the constructed virtual scene can be displayed by a user terminal that connects to and interacts with the metaverse server. The user terminal includes, but is not limited to, mobile phones, tablet computers, VR devices and other electronic devices.
The actual scene includes the audience area, the target actual user and the specific area where the target actual user is located; of course, if there are spectators in the audience area of the actual scene, those spectators may also be included. The embodiment of the invention can be applied to various scenes. Optionally, when applied to a concert scene, the actual scene may be an actual concert venue; the audience area is then the audience seating in the venue, the target actual user is the singer, and the specific area is the stage. Alternatively, when applied to a teaching scene, the actual scene may be an actual classroom; the audience area is then the student seating area, the target actual user is the teacher, and the specific area is the podium. Of course, it is also applicable to other scenes, which are not specifically limited herein.
It may be understood that the constructed picture of the virtual scene may be a real picture, identical to the actual scene, generated by capturing video frames or image frames of the actual scene, or a virtual picture generated by 3D modeling and rendering according to the actual scene; the pictures constructed for user views at different positions may also differ, which is not specifically limited herein.
Preferably, before step S11, the method further comprises:
after constructing the virtual scene, when detecting that a third virtual user joins the virtual scene, constructing a virtual position in the virtual scene for the third virtual user; wherein, after joining the virtual scene, each third virtual user is treated either as the first virtual user that initiates an interaction instruction in the virtual scene, or as a second virtual user that has joined the virtual scene and has not initiated an interaction instruction.
Specifically, the metaverse server first models the target actual user, the specific area and the audience area in the actual scene to construct the virtual scene. When a user joins the virtual scene through the user terminal, a virtual position of the user in the virtual audience area of the virtual scene can be constructed according to the user's bound position information, the user's selection, or the like; the virtual audience area corresponds to the audience area in the actual scene.
Preferably, after a virtual position is constructed, virtual identity information of the third virtual user is determined and bound with the position information; the same virtual position can be bound with multiple pieces of virtual identity information. That is, in a virtual scene, several virtual users may choose to sit at the same virtual position, distinguished by their virtual identity information, e.g., a virtual ID. Determining an audience member's seat by binding the virtual ID to the virtual position coordinates breaks through the one-person-per-seat limitation of a real concert, so that audiences can select a better viewing angle.
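To make the seat binding concrete, the following is a minimal Python sketch (not from the patent; the class and method names are ours) of a two-way registry in which one virtual seat may be bound to many virtual IDs:

```python
# Minimal sketch of the virtual-ID/seat binding described above, assuming an
# in-memory registry; all names are illustrative, not the patent's interface.
from collections import defaultdict

class SeatRegistry:
    """Binds virtual identity information (virtual IDs) to virtual seat
    coordinates; unlike a physical venue, one seat may host many IDs."""

    def __init__(self):
        self._seat_to_ids = defaultdict(set)   # (x, y) -> {virtual_id, ...}
        self._id_to_seat = {}                  # virtual_id -> (x, y)

    def bind(self, virtual_id: str, seat: tuple) -> None:
        # Re-binding moves the user to the new seat.
        old = self._id_to_seat.get(virtual_id)
        if old is not None:
            self._seat_to_ids[old].discard(virtual_id)
        self._seat_to_ids[seat].add(virtual_id)
        self._id_to_seat[virtual_id] = seat

    def occupants(self, seat: tuple) -> set:
        return set(self._seat_to_ids[seat])

registry = SeatRegistry()
registry.bind("user-001", (3.0, 5.0))
registry.bind("user-002", (3.0, 5.0))   # same seat, distinguished by virtual ID
assert registry.occupants((3.0, 5.0)) == {"user-001", "user-002"}
```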
After entering the virtual scene, when a user (denoted as the first virtual user) wants to interact with the target actual user in the actual scene, the user issues an interaction instruction by making the corresponding first virtual user in the metaverse venue perform a preset operation, for example pressing a preset trigger button or putting on a preset virtual earphone. After receiving the interaction instruction, the metaverse server responds to it so that the first virtual user can interact online with the target actual user in the actual scene; the interaction initiated by the first virtual user includes, but is not limited to, action interaction, expression interaction, sound interaction and the like. The metaverse server acquires the interaction data sent by the first virtual user, acquires the virtual position of the first virtual user in the virtual scene, and determines the actual position in the audience area of the actual scene corresponding to that virtual position.
It should be noted that, each actual position in the actual scene corresponds to an interaction device preset for pushing the interaction data sent by the first virtual user, and one interaction device may correspondingly match a plurality of actual positions.
When the metaverse server receives the interaction data of the first virtual user and determines the actual position in the actual scene corresponding to the first virtual user's virtual position, the interaction device corresponding to that actual position is controlled to push out the interaction data, so that all actual users in the actual scene, including the target actual user, the actual audience and the like, can receive it. The target actual user may then make interactive feedback based on the interaction data, including but not limited to action interaction, expression interaction, voice interaction and the like. The metaverse server acquires audio and video data in the actual scene through preset information acquisition devices and pushes them to the first virtual user; the audio and video data include the interaction feedback data made by the target actual user in the actual scene according to the interaction data.
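As a rough illustration of steps S12 and S13, the sketch below assumes the virtual auditorium mirrors the physical one so that seats map through a lookup table, and that several seats may share one interaction device; the tables, names and the commented-out transport call are hypothetical:

```python
# Sketch of steps S12-S13 under the assumptions stated above.

VIRTUAL_TO_ACTUAL = {          # virtual seat id -> actual seat coords (x, y)
    "A-12": (4.0, 7.5),
    "A-13": (5.0, 7.5),
}

DEVICE_FOR_SEAT = {            # actual seat coords -> interaction device id
    (4.0, 7.5): "projector-2",
    (5.0, 7.5): "projector-2", # one device may serve several seats
}

def push_interaction(virtual_seat: str, interaction_data: bytes) -> str:
    actual_pos = VIRTUAL_TO_ACTUAL[virtual_seat]   # step S12: map the position
    device = DEVICE_FOR_SEAT[actual_pos]           # step S13: pick the device
    # send_to_device(device, interaction_data)     # hypothetical transport call
    return device

assert push_interaction("A-12", b"...") == "projector-2"
```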
Preferably, after step S13, the method further comprises:
pushing the interactive data to each second virtual user joining the virtual scene;
after step S14, the method further comprises:
pushing the interactive feedback data to each second virtual user joining the virtual scene.
Specifically, other users in the virtual scene that did not initiate the interaction instruction (denoted as second virtual users) can also receive the interaction between the target actual user and the first virtual user in real time. The metaverse server acquires audio and video data in the actual scene through preset information acquisition devices and pushes them to each second virtual user in the virtual scene; the audio and video data include the interaction data pushed by the interaction device and the interaction feedback data sent by the target actual user.
By adopting the technical means of the embodiment of the invention, a virtual scene is constructed according to the actual scene, and the first virtual user selects a virtual position to be seated in the virtual scene. When the user interacts with the target actual user in the actual scene, the corresponding interaction device in the actual scene is controlled, according to the actual position corresponding to the virtual position, to push the interaction data to the actual users in the actual scene, so that the target actual user can truly receive the interaction data of the first virtual user, make interaction feedback data for it, and return the feedback to the first virtual user. The first virtual user thus receives interaction feedback made exclusively for him or her by the target actual user in the actual scene, realizing a two-way interaction mechanism between the virtual scene and the actual scene and improving the user's immersive interaction experience.
As a preferred embodiment, the interaction data is a user image picture. The interactive device comprises a projection display device and a projection device, the projection display device being arranged behind the audience area and facing the specific area.
Step S13, namely controlling the interaction device in the actual scene based on the actual position and pushing the interaction data to the actual users in the actual scene, is specifically:
determining a display area of the user image picture in the projection display equipment according to the actual position and the central position of the specific area, and taking the display area as a target display area; and controlling the projection equipment to project the user image picture onto the target display area.
In the embodiment of the invention, the interaction data is a user image picture, which carries information reflected in the user's appearance, such as the user's expressions and actions, and is displayed through the projection display device and the projection device.
Referring to fig. 2, a schematic diagram of the installation positions of the projection display device and the projection device in an embodiment of the present invention: the projection display device is disposed behind the audience area in the actual scene and faces the specific area so that the target actual user can watch it, and the projection device may be disposed in the middle of the audience area. It should be noted that the number of projection devices may be determined according to the upper limit on the number of interaction connection instructions that can be responded to simultaneously; for example, if n interaction connections can be live at the same time, then n projection devices are provided.
Further, according to the actual position (x, y) of the first virtual user in the actual scene and the center position (x0, y0) of the specific area, the metaverse server determines, by the two-point line method, the position (xs, ys) where the line through the two points meets the projection display device; this point is taken as the center of the target display area, which in turn determines the target display area. The projection device is then controlled to project the user image picture sent by the first virtual user onto the target display area, so that when the target actual user looks at the target display area and makes interactive feedback, it is as if the target actual user were interacting with the user at that actual position. From the user's viewing angle, the target actual user interacting with the audience member's projection on the projection display device looks like the target actual user interacting with the audience member, which improves the user's immersive experience.
Correspondingly, the interactive feedback data include the image picture and/or the sound information of the target actual user. That is, after seeing the user image picture on the projection display device, the target actual user can make exclusive expressions and actions toward the target display area, generating an image picture, and can also send out sound information.
Taking a concert scene as an example, the actual scene is a physical concert venue, the correspondingly constructed virtual scene is a virtual concert venue, the projection display device is an annular curtain, the projection device is a projector, the target actual user is a singer, and the specific area is the stage. The annular curtain is arranged on the back wall of the physical concert venue; assuming the projection height is h, a plurality of projectors are arranged at the center of the audience area.
In the connection interaction process, the center point (xs, ys) of the target display area on the curtain is determined from the coordinates (x, y) of the actual position in the physical concert venue and the stage center coordinates (x0, y0), and the width of the target display area is then determined as w = 9h/16. After the user image picture input by the first virtual user is received, one projector is driven to rotate to the angle corresponding to the target display area and project the user image picture onto that area of the curtain. When the user waves a hand, makes a finger-heart gesture or the like, the corresponding user image picture changes and is projected onto the curtain in real time. When the singer sees the user waving or making the gesture, the singer can wave back, return the gesture and so on; an image picture of the singer is generated and pushed to the user as interaction feedback data, the virtual singer in the virtual scene is synchronously driven to perform the corresponding action, and all audiences watching from viewing angles of the virtual concert scene can see the virtual singer turn toward the interacting audience member's virtual position and perform the interaction. Of course, the singer can also reply with words such as "thank you for your support", generating sound information that is pushed to the user as interaction feedback data.
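The target-display-area geometry can be sketched as follows; for simplicity a straight back wall at a fixed y-coordinate stands in for the annular curtain, and the point where the line from the stage center through the seat meets the wall becomes the center of a patch of width 9h/16, as in the text above. The function and coordinate conventions are illustrative assumptions:

```python
# Geometry sketch: extend the line from the stage centre (x0, y0) through the
# seat (x, y) until it meets a straight back wall at y = y_wall (a simplifying
# stand-in for the annular curtain). All names are illustrative.

def target_display_area(seat, stage_center, y_wall, h):
    x, y = seat
    x0, y0 = stage_center
    if y == y0:
        raise ValueError("line through seat and stage centre is parallel to the wall")
    t = (y_wall - y0) / (y - y0)       # line parameter at the wall
    xs = x0 + t * (x - x0)             # centre of the target display area
    w = 9 * h / 16                     # patch width for projection height h
    return (xs, y_wall), w

center, width = target_display_area(seat=(2.0, 6.0),
                                    stage_center=(0.0, 0.0),
                                    y_wall=12.0, h=2.0)
print(center, width)   # (4.0, 12.0) 1.125
```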
By adopting the technical means of the embodiment of the invention, the layout of the stage and the seats in the virtual concert scene is designed according to the real concert scene, and a spectator entering the online concert can select a virtual position to sit in. When the spectator interacts with the real singer in the actual scene, the spectator's real picture is projected, according to the actual position corresponding to the spectator's virtual position, onto the corresponding region of the annular curtain surrounding the seats in the actual scene, and the real singer watches the spectator's picture on the annular curtain to interact with the spectator. Because the spectator's projection position on the curtain is determined based on the spectator's position in the venue, from the spectator's viewing angle the singer interacting with the projection on the curtain looks like the singer interacting with the spectator, which improves the user's immersive experience.
As another preferred embodiment, the interaction data is user sound information, and the interactive device includes at least one sound playing device, each sound playing device being disposed on the periphery of the specific area near the audience area;
step S13, namely, the step of controlling the interactive device in the actual scene based on the actual position, is to push the interactive data to the actual user in the actual scene, specifically:
According to the actual position and the central position of the specific area, determining a sound playing device for playing the user sound information as a target sound playing device; and controlling the target sound playing equipment to play the user sound information.
In the embodiment of the invention, the interaction data is user sound information, which includes the user's voice, cheers and the like, and is played through the sound playing devices.
Referring to fig. 3, a schematic diagram of the installation positions of the sound playing devices in an embodiment of the present invention: a plurality of sound playing devices are disposed around the specific area near the audience area. According to the actual position (x, y) of the first virtual user in the actual scene and the center position (x0, y0) of the specific area, the metaverse server determines, by the two-point line method, the position (xa, ya) where the connecting line meets the ring of sound playing devices, takes the sound playing device located at (xa, ya) as the target sound playing device, and controls it to play the sound information sent by the first virtual user, so that the target actual user hears the sound coming from the direction of the first virtual user's actual position and makes exclusive interactive feedback to that user for the sound information. Correspondingly, the interactive feedback data include the image picture and/or the sound information of the target actual user. That is, after hearing the sound information played by the target sound playing device, the target actual user can make exclusive expressions and actions, generating an image picture, and can also send out sound information.
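A minimal sketch of the target-speaker choice follows, assuming the speakers ring the stage so that the device nearest to where the stage-to-seat line crosses the ring can be found by angular distance; the speaker layout and all names are illustrative:

```python
# Sketch of target-speaker selection under the ring-of-speakers assumption.
import math

def pick_speaker(seat, stage_center, speakers):
    x, y = seat
    x0, y0 = stage_center
    want = math.atan2(y - y0, x - x0)   # direction from stage toward the seat
    def angular_gap(pos):
        a = math.atan2(pos[1] - y0, pos[0] - x0)
        # wrap-around-safe angular difference
        return abs(math.atan2(math.sin(a - want), math.cos(a - want)))
    return min(speakers, key=lambda sid: angular_gap(speakers[sid]))

speakers = {"spk-N": (0.0, 3.0), "spk-E": (3.0, 0.0),
            "spk-S": (0.0, -3.0), "spk-W": (-3.0, 0.0)}
print(pick_speaker(seat=(5.0, 4.0), stage_center=(0.0, 0.0),
                   speakers=speakers))   # spk-E
```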
Taking a concert scene as an example, the actual scene is a physical concert venue, the correspondingly constructed virtual scene is a virtual concert venue, the sound playing devices are a ring of speakers, the target actual user is a singer, and the specific area is the stage; the speakers are arranged around the periphery of the stage.
In the connection interaction process, the position point (xa, ya) of the speaker around the stage is determined from the coordinates (x, y) of the actual position in the physical concert venue and the stage center coordinates (x0, y0) by the two-point line method. After the sound information input by the first virtual user is received, the speaker at (xa, ya) plays the sound information, for example a cheer such as "go for it", transmitting it into the actual scene. After hearing the sound information, the singer can feed back interactions such as waving or a finger-heart gesture, generating an image picture of the singer that is pushed to the user in the virtual concert scene as interaction feedback data, and can also reply with words such as "thank you for your support", generating sound information that is pushed to the user as interaction feedback data.
Preferably, after the first virtual user successfully connects with the singer, if the singer invites the user to sing together, the first virtual user in the virtual scene is identified through the virtual identity information and driven to walk onto the virtual stage to sing with the virtual singer. The first virtual user's voice is transmitted into the venue through the speaker ring of the physical concert, and the real singer's voice and the first virtual user's voice are transmitted into the virtual concert over the network, realizing synchronous chorus between the real concert scene and the virtual concert scene.
Preferably, the interaction data may include both a user image picture and user sound information. Referring to fig. 4, another flow chart of the virtual-real interaction method in an embodiment of the present invention, and still taking a concert scene as an example: after the connection, the projector is controlled to rotate according to the coordinates of the physical concert seat corresponding to the virtual concert seat, and the first virtual user's real picture is displayed on the curtain; the real singer calls out to the first virtual user toward that region of the curtain while the first virtual user's voice is heard through the designated speaker arranged in the physical venue. The first virtual user sees the real singer calling out to him or her from the first virtual user's own viewing angle in the virtual concert, and the other second virtual users can see the first virtual user and the virtual singer greeting each other in the virtual concert.
As a preferred embodiment, before step S11, that is, before the step of responding to the interaction instruction sent by the first virtual user in the virtual scene to obtain the interaction data of the first virtual user, the method further includes:
when an interaction instruction sent by the first virtual user is received, judging whether a preset interaction condition is met or not; the interaction condition is that the number of the currently responded interaction instructions does not reach the upper limit, and the interaction equipment corresponding to the first virtual user is not occupied;
When the preset interaction condition is met, executing the steps of: responding to an interaction instruction sent by a first virtual user in a virtual scene, and acquiring interaction data of the first virtual user;
and when the preset interaction conditions are not met, pushing prompt information indicating that the interaction conditions are not met to the first virtual user.
In the embodiment of the invention, considering on-site noise and the need to guarantee the quality of interaction with the target actual user in the actual scene, the number of simultaneous interaction connections is limited to m, i.e., at most m interaction instructions are responded to at the same time; and considering that the interaction devices in the actual scene are limited, users participating in simultaneous interaction connections must not occupy the same interaction device.
When an interaction instruction sent by a first virtual user is received, it is first judged whether the number of interaction instructions currently being responded to has not reached the upper limit and whether the interaction device required by the first virtual user is unoccupied. If both conditions hold, the interaction instruction of the first virtual user is responded to; if not, a response prompt is sent to inform the first virtual user that the current online interaction failed.
Taking a concert scene as an example, suppose the upper limit on the number of simultaneously connected interactors is m = 5. Because the target display area on the curtain and/or the corresponding speaker are determined from the coordinates (x, y) of the first virtual user's actual position in the actual concert scene and the stage center coordinates (x0, y0), if different users were matched to the same display area on the curtain, the projected image pictures would overlap, and if different users were matched to the same speaker, the played sound information would overlap. Therefore, when a new interaction instruction is received, it is necessary to judge, according to the corresponding user's actual position in the actual scene, whether the curtain area or the speaker is already occupied. Only when the number of connected interactors being responded to has not reached the upper limit and the required interaction device is unoccupied can the user's interaction instruction be responded to; otherwise, a prompt such as "interaction is currently in progress, please try again later" is returned.
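The admission gate can be sketched as below, assuming simple in-memory bookkeeping of live sessions and occupied devices; m = 5 follows the example above, everything else is illustrative:

```python
# Sketch of the admission check: respond only while fewer than m sessions are
# live and the device the seat needs is free. Bookkeeping is an assumption.

MAX_SESSIONS = 5                       # m = 5 in the concert example above

active_sessions = set()                # virtual IDs currently connected
occupied_devices = set()               # devices in use by live sessions

def try_admit(virtual_id: str, device: str) -> bool:
    if len(active_sessions) >= MAX_SESSIONS or device in occupied_devices:
        return False   # caller pushes the "try again later" prompt
    active_sessions.add(virtual_id)
    occupied_devices.add(device)
    return True

assert try_admit("user-001", "projector-2") is True
assert try_admit("user-002", "projector-2") is False   # device already occupied
```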
By adopting the technical means of the embodiment of the invention, the immersive interaction experience of each user participating in the online interaction can be truly guaranteed.
As a preferred implementation manner, the embodiment of the present invention is further implemented on the basis of any one of the foregoing embodiments, and the method further includes steps S21 to S22:
S21, responding to a viewing angle selection instruction sent by the first virtual user in the virtual scene, and determining the viewing angle currently selected by the first virtual user; wherein the viewing angle comprises a panoramic viewing angle and a seat viewing angle;
s22, according to the viewing angle currently selected by the first virtual user, invoking an image frame shot by preset shooting equipment in the actual scene, generating a display picture under the viewing angle, and pushing the display picture to the first virtual user; wherein the preset image capturing apparatus includes: a foreground image capturing apparatus located in front of the audience area, and a panoramic image capturing apparatus located behind the audience area.
The embodiment of the invention optimizes the adjustment of the user's viewing angle. Specifically, the viewing angle includes a panoramic viewing angle and a seat viewing angle; the panoramic viewing angle refers to a bird's-eye view from which the panorama of the actual scene can be watched, and the seat viewing angle refers to the view seen when sitting in the audience area of the actual scene.
It should be noted that the panoramic viewing angle is further divided into a physical panoramic viewing angle and a virtual panoramic viewing angle, and the seat viewing angle into a physical seat viewing angle and a virtual seat viewing angle. The virtual panoramic and virtual seat viewing angles refer to virtual pictures at the corresponding angles generated by 3D modeling and rendering of the actual scene's content, while the physical panoramic and physical seat viewing angles refer to real pictures at the corresponding angles generated by recording the actual scene's content with image pickup devices.
As an example, when the first virtual user enters the virtual scene, the viewing angle at that moment is the virtual panoramic viewing angle. When the first virtual user selects a seat and sits down, the viewing angle can be switched on demand among the following four: the physical panoramic viewing angle, the virtual panoramic viewing angle, the physical seat viewing angle, and the virtual seat viewing angle. When the first virtual user selects a virtual seat and sits down, the virtual seat viewing angle is presented. A virtual earphone is hung on each virtual seat in advance. When the first virtual user stands up wearing the virtual earphone, the view switches to the physical panoramic viewing angle and truly displays the full scene of the physical concert; the first virtual user's standing action can be driven by the user's own standing action or realized by clicking a stand-up button in the virtual scene. When the first virtual user sits down wearing the virtual earphone, the view switches to the physical seat viewing angle and displays the real view of the stage from the current physical seat; the first virtual user's sitting action can be driven by the user's own sitting action or realized by clicking a sit-down button in the virtual scene. Of course, switching among the four views may also be performed via switch buttons in the virtual scene.
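The four-way view switching can be summarized in a small state sketch; the mode names and the stand/sit triggers paraphrase the text above, and the function is an assumption of ours, not the patent's interface:

```python
# Sketch of the four viewing modes and the earphone-gated stand/sit triggers.
from enum import Enum, auto

class ViewMode(Enum):
    VIRTUAL_PANORAMA = auto()   # default on entering the virtual scene
    PHYSICAL_PANORAMA = auto()  # real full-venue view
    VIRTUAL_SEAT = auto()       # rendered seat view
    PHYSICAL_SEAT = auto()      # real camera seat view

def next_mode(mode: ViewMode, event: str, wearing_earphone: bool) -> ViewMode:
    if event == "stand" and wearing_earphone:
        return ViewMode.PHYSICAL_PANORAMA
    if event == "sit" and wearing_earphone:
        return ViewMode.PHYSICAL_SEAT
    if event == "sit":
        return ViewMode.VIRTUAL_SEAT
    return mode   # explicit switch buttons could map to any mode directly

print(next_mode(ViewMode.VIRTUAL_PANORAMA, "sit", wearing_earphone=False))
```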
For the manner of generating the virtual pictures at the virtual panoramic and virtual seat viewing angles, reference may be made to metaverse modeling methods in the prior art, which are not described here. The embodiment of the invention describes the generation of the real pictures at the physical panoramic and physical seat viewing angles.
Referring to fig. 5, a schematic diagram of the installation positions of the image pickup apparatuses in an embodiment of the present invention: a foreground image pickup apparatus is preset in front of the audience area and a panoramic image pickup apparatus behind the audience area; a tracking image pickup apparatus for the target actual user may also be set in front of the specific area.
Preferably, there is at least one foreground image pickup apparatus and at least one panoramic image pickup apparatus; the foreground image pickup apparatuses are located on a preset first moving track and the panoramic image pickup apparatuses on a preset second moving track, and they move synchronously along their respective tracks at a preset moving speed. The first moving track is arranged in front of the audience area and the second moving track behind it. The tracking image pickup apparatus for the target actual user is located on a preset third moving track, which lies in front of the specific area.
Specifically, since data about the actual scene and the target actual user need to be acquired in real time from multiple angles, different cameras need to be set up and moved along different tracks at different speeds. By arranging these multi-angle moving cameras, scene data in the actual scene are collected to generate the virtual scene and a virtual digital human avatar of the target actual user in real time.
Tracking image pickup apparatus for the target actual user: the camera is moved under control of a face-tracking technique to track the target actual user in the specific area in real time and collect real-time pictures of the target actual user's expressions and body movements; its moving speed is consistent with the target actual user's moving speed.
Foreground image pickup apparatus: mainly collects foreground pictures of the actual scene, moving back and forth at a constant speed V (moving a distance Sv per second).
Panoramic image pickup apparatus: mainly collects full-scene pictures of the actual scene, moving back and forth at the same constant speed V (a distance Sv per second), with its movement kept synchronous with that of the foreground image pickup apparatus.
The moving speed of the panoramic and foreground image pickup apparatuses is calculated as follows:
Assume the image pickup apparatus captures n frames per second; moving a distance Sv in that second, it captures n frames over Sv metres, i.e., n/Sv frames per metre. Assume a viewer at a fixed point needs a viewing frame rate of m frames per second. Considering the continuity of the moving distance, the adjacent-frame stitching technique can minimize the number of shots of the same part, achieving a high-frame-rate viewing effect from low-frame-rate capture; with a fixed-point range S0, the image pickup apparatus moving a distance S0 in one second while shooting m frames does not affect the viewing experience. Hence n/Sv = m/S0, and the speed of the image pickup apparatus is Sv = n*S0/m, i.e., the distance moved per second, which is the speed V.
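The speed rule reduces to one line of arithmetic; the sketch below solves n/Sv = m/S0 for Sv, with example numbers that are ours, not the patent's:

```python
# The speed rule above, n / Sv = m / S0, solved for Sv.

def camera_speed(n_fps: float, m_fps: float, s0: float) -> float:
    """Track speed V = Sv = n * S0 / m (metres moved per second)."""
    return n_fps * s0 / m_fps

# e.g. capture at 60 fps, viewers need 30 fps within a 0.5 m fixed-point range
print(camera_speed(n_fps=60, m_fps=30, s0=0.5))   # 1.0 m/s
```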
As a preferred embodiment, when the viewing angle currently selected by the first virtual user is a panoramic viewing angle, in step S22, the invoking the image frame captured by the image capturing device preset in the actual scene generates a display screen under the viewing angle, including:
determining, according to the line connecting the actual position and the center position of the specific area, the position of the panoramic image capturing apparatus on the connecting line as a first position;
determining a second position of the panoramic image capturing apparatus according to the first position, a preset offset and an offset direction; wherein the offset direction is determined from the relative positional relationship between the actual position and the center position of the audience area;
and synthesizing the display picture under the panoramic viewing angle from the image frames shot by the panoramic image capturing apparatus between the first position and the second position.
Referring to fig. 6, a schematic diagram of generating the panoramic viewing angle in an embodiment of the present invention: the picture under the panoramic viewing angle is constructed from image frames captured by the panoramic image capturing apparatus moving back and forth on its track. Assume the coordinates of the actual position corresponding to the first virtual user's virtual position in the virtual scene are (x, y), the center coordinates of the specific area are (x_0, y_0), the left vertex-angle coordinates are (x_l, y_l), and the right vertex-angle coordinates are (x_r, y_r). Based on the coordinates (x, y) of the actual position and the center coordinates (x_0, y_0) of the specific area, the first position of the panoramic image capturing apparatus on the moving track is confirmed as (x_c, y_c); when the apparatus is at (x_c, y_c) on the track, the wide angle of its visible range is α_1.
In order to let the user view a more comprehensive panoramic picture and obtain a better experience, an offset L_0 of the image capturing apparatus needs to be added so that, within the wide angle of the visible range after the movement, the user can fully view the specific area. The preset offset L_0 can be calculated from the center position coordinates of the specific area and the coordinates of its left/right vertex angle, i.e. L_0 = |x_0 − x_r|. The offset direction is determined by the relative positional relationship between the actual position and the center position of the audience area: if the actual position is to the left of the center of the audience area, the offset direction is toward the right side of the audience area; if the actual position is to the right of the center of the audience area, the offset direction is toward the left side of the audience area. From the offset and the offset direction, the second position of the panoramic image capturing apparatus on the moving track is obtained as (x_c′, y_c′); when the apparatus is at (x_c′, y_c′) on the track, the wide angle of its visible range is α_2. The panoramic viewing angle seen from the actual position (x, y) is obtained by combining the image frame data acquired as the panoramic image capturing apparatus moves from point (x_c, y_c) to point (x_c′, y_c′); as shown in fig. 6, the viewing angle of the visible range of the panoramic viewing angle is then α_c.
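For illustration, a small Python sketch of this geometry, assuming (as we read fig. 6) that the second moving track is the horizontal line y = track_y behind the audience area and that x increases toward the right side of the audience area; all function names are ours:

```python
def first_position(actual: tuple, center: tuple, track_y: float) -> tuple:
    """Intersection of the line through the actual position (x, y) and the
    specific-area center (x_0, y_0) with the camera track y = track_y."""
    t = (track_y - center[1]) / (actual[1] - center[1])
    return (center[0] + t * (actual[0] - center[0]), track_y)

def second_position(first: tuple, center: tuple, right_corner: tuple,
                    actual: tuple, audience_center_x: float) -> tuple:
    """Apply the offset L_0 = |x_0 - x_r|; shift toward the right side of
    the audience area when the seat is left of its center, else left."""
    l0 = abs(center[0] - right_corner[0])
    direction = 1.0 if actual[0] < audience_center_x else -1.0
    return (first[0] + direction * l0, first[1])
```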
As another preferred embodiment, when the viewing angle currently selected by the first virtual user is the seat viewing angle, in step S22, the invoking of the image frames captured by the image capturing apparatus preset in the actual scene to generate a display picture under the viewing angle includes:
determining, according to the line connecting the actual position and the center position of the specific area, the position of the panoramic image capturing apparatus on the connecting line as a third position and the position of the foreground image capturing apparatus on the connecting line as a fourth position;
determining a corresponding image interception proportion according to the distance between the actual position of the first virtual user and the central position of the specific area;
intercepting the image frames shot by the panoramic shooting equipment at the third position according to the image intercepting proportion to obtain intercepted image frames;
and synthesizing a display picture under the seat watching visual angle according to the captured image frame and the image frame captured by the foreground image capturing equipment at the fourth position.
Referring to fig. 7, a schematic diagram of generating the seat viewing angle in an embodiment of the present invention: the picture under the seat viewing angle is constructed from image frames captured by the panoramic image capturing apparatus and the foreground image capturing apparatus moving back and forth on their tracks. Assume the coordinates of the actual position corresponding to the first virtual user's virtual position in the virtual scene are (x, y) and the center coordinates of the specific area are (x_0, y_0). Based on the line connecting the coordinates of the actual position and the center coordinates of the specific area, the position (fourth position) of the foreground image capturing apparatus on its moving track is confirmed as (x_c, y_c), and the position (third position) of the panoramic image capturing apparatus on its moving track as (x_c′, y_c′). The image frames within a distance S_n to the left and right of (x_c, y_c) are acquired from the foreground image capturing apparatus, and the image frames within a distance S_n to the left and right of (x_c′, y_c′) from the panoramic image capturing apparatus. Because the foreground and panoramic image capturing apparatuses lie on the same straight line, the acquired image frames share a consistent angle and therefore also follow a rule of gradual magnification. On the basis of this straight line, the image interception proportion is calculated from the foreground apparatus coordinates (x_c, y_c), the panoramic apparatus coordinates (x_c′, y_c′), the actual position coordinates (x, y) of the first virtual user, and the center position coordinates (x_0, y_0) of the specific area; part of the image frames shot by the panoramic image capturing apparatus is acquired according to this proportion, and the picture under the seat viewing angle is synthesized.
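The text fixes which quantities enter the image interception proportion but not the exact formula, so the following Python sketch is only one plausible reading: under a pinhole approximation, cropping the central d(seat, stage center) / d(panoramic camera, stage center) fraction of the panoramic frame emulates the closer seat viewpoint.

```python
import math

def dist(p: tuple, q: tuple) -> float:
    return math.hypot(p[0] - q[0], p[1] - q[1])

def interception_ratio(actual: tuple, center: tuple,
                       panoramic_cam: tuple) -> float:
    """Hypothetical crop ratio: a seat closer to the specific area than the
    panoramic camera keeps a smaller central region of the frame (more zoom)."""
    return dist(actual, center) / dist(panoramic_cam, center)

def crop_rect(frame_w: int, frame_h: int, ratio: float) -> tuple:
    """Centered crop rectangle (left, top, width, height) for the ratio."""
    w, h = int(frame_w * ratio), int(frame_h * ratio)
    return ((frame_w - w) // 2, (frame_h - h) // 2, w, h)
```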
It should be noted that virtual pictures under a virtual panoramic viewing angle and under the virtual position viewing angle can also be constructed from the image frames acquired by the foreground image capturing apparatus and the panoramic image capturing apparatus, which does not affect the beneficial effects obtained by the present invention.
By adopting the technical means of the embodiment of the present invention, the user's pictures under the panoramic viewing angle and the seat viewing angle are synthesized from the image frames acquired by the foreground image capturing apparatus and the panoramic image capturing apparatus arranged in the actual scene, so that the user's immersive experience is improved.
Referring to fig. 8, a schematic structural diagram of a virtual-real interaction device according to an embodiment of the present invention is provided, and a virtual-real interaction device 30 is provided according to an embodiment of the present invention, including:
the interactive data acquisition module 31 is configured to respond to an interactive instruction sent by a first virtual user in a virtual scene, and acquire interactive data of the first virtual user;
an actual position determining module 32, configured to determine an actual position of the first virtual user in an actual scene corresponding to the virtual scene according to a virtual position of the first virtual user in the virtual scene;
an interactive data pushing module 33, configured to control an interactive device in the actual scene based on the actual position, and push the interactive data to an actual user in the actual scene;
the interactive feedback data pushing module 34 is configured to receive interactive feedback data sent by the target actual user in the actual scene according to the interactive data, and push the interactive feedback data to the first virtual user.
By adopting the technical means of the embodiment of the present invention, the first virtual user selects a virtual position to be seated in a virtual scene constructed according to the actual scene. When the user interacts with the target actual user in the actual scene, the corresponding interactive device in the actual scene is controlled, according to the actual position corresponding to the virtual position, to push the interactive data to the actual user in the actual scene, so that the target actual user in the actual scene can truly receive the interactive data of the first virtual user and produce interactive feedback data in response. The interactive feedback data is then returned to the first virtual user, enabling the first virtual user to receive the specific interactive feedback made by the target actual user in the actual scene. A two-way interaction mechanism between the virtual scene and the actual scene is thereby realized, improving the user's immersive interactive experience.
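As a high-level sketch, this two-way flow can be pictured as the following pipeline; the method names merely mirror the modules of fig. 8, and all signatures are assumptions rather than an actual API:

```python
class VirtualRealInteraction:
    """Illustrative skeleton of the four-module flow (all bodies are stubs)."""

    def handle(self, user, instruction):
        data = self.acquire_interaction_data(user, instruction)       # module 31
        actual_pos = self.to_actual_position(user.virtual_position)   # module 32
        self.push_to_actual_scene(actual_pos, data)                   # module 33: projector / speaker
        feedback = self.receive_feedback_from_target_user()           # module 34
        self.push_to_virtual_user(user, feedback)                     # module 34

    def acquire_interaction_data(self, user, instruction): ...
    def to_actual_position(self, virtual_position): ...
    def push_to_actual_scene(self, actual_pos, data): ...
    def receive_feedback_from_target_user(self): ...
    def push_to_virtual_user(self, user, feedback): ...
```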
As a preferred embodiment, the interactive data is a user image; the interactive device comprises a projection display device and a projection device, wherein the projection display device is arranged behind the audience area and faces the specific area;
The interactive data pushing module 33 is specifically configured to:
determining a display area of the user image picture in the projection display equipment according to the actual position and the central position of the specific area, and taking the display area as a target display area; and controlling the projection equipment to project the user image picture onto the target display area.
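As a sketch of one way such a target display area could be chosen, assume the projection display device is a row of equal-width panels along the wall y = wall_y behind the audience area, and the user's picture is placed where the line from the specific-area center through the seat meets that wall; this geometry is our assumption, not a rule stated in the text:

```python
def target_panel(actual: tuple, center: tuple,
                 wall_y: float, panel_width: float) -> int:
    """Extend the center->seat line to the projection wall and return the
    index of the panel it hits (layout and rule are assumptions)."""
    t = (wall_y - center[1]) / (actual[1] - center[1])
    hit_x = center[0] + t * (actual[0] - center[0])
    return int(hit_x // panel_width)
```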
As a preferred embodiment, the interactive data is user sound information, and the interactive device includes at least one sound playing device, where each sound playing device is disposed on a periphery of the specific area near the audience area;
the interactive data pushing module 33 is specifically configured to:
according to the actual position and the central position of the specific area, determining a sound playing device for playing the user sound information as a target sound playing device; and controlling the target sound playing equipment to play the user sound information.
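One plausible selection rule, sketched below, is to pick the sound playing device lying closest to the line from the seat's actual position to the specific-area center, so that the voice appears to come from the user's direction; the scoring rule is our assumption:

```python
import math

def pick_speaker(actual: tuple, center: tuple, speakers: list) -> int:
    """Index of the speaker with the smallest perpendicular distance to the
    line from the actual position to the specific-area center."""
    ax, ay = center[0] - actual[0], center[1] - actual[1]
    seg = math.hypot(ax, ay)

    def off_line(s: tuple) -> float:
        bx, by = s[0] - actual[0], s[1] - actual[1]
        return abs(ax * by - ay * bx) / seg  # cross-product point-line distance

    return min(range(len(speakers)), key=lambda i: off_line(speakers[i]))
```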
Preferably, the interactive feedback data comprises an image picture and/or sound information of the target actual user.
As a preferred embodiment, the device further comprises:
the interaction condition judging module is used for judging, when an interaction instruction sent by the first virtual user is received, whether a preset interaction condition is met; the interaction condition is that the number of interaction instructions currently being responded to has not reached the upper limit and the interaction device corresponding to the first virtual user is not occupied;
when the preset interaction condition is met, the interactive data acquisition module 31 is controlled to perform the step of: responding to an interaction instruction sent by a first virtual user in a virtual scene, and acquiring interaction data of the first virtual user;
and when the preset interaction conditions are not met, pushing prompt information indicating that the interaction conditions are not met to the first virtual user.
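The gating performed by this module reduces to a simple predicate; a sketch, with a hypothetical value for the upper limit:

```python
MAX_CONCURRENT_INTERACTIONS = 8  # hypothetical upper limit

def interaction_allowed(active_instructions: int, device_occupied: bool) -> bool:
    """Respond only while the number of interaction instructions currently
    being responded to is below the upper limit and the user's interaction
    device is free."""
    return active_instructions < MAX_CONCURRENT_INTERACTIONS and not device_occupied
```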
As a preferred embodiment, the interactive data pushing module 33 is further configured to push the interactive data to each second virtual user joining the virtual scene.
The interactive feedback data pushing module 34 is further configured to push the interactive feedback data to each second virtual user joining the virtual scene.
As a preferred embodiment, the device further comprises:
the virtual position construction module is used for constructing, after the virtual scene is constructed, a virtual position in the virtual scene for a third virtual user when the third virtual user is detected to join the virtual scene; and for determining virtual identity information of the third virtual user and binding the virtual identity information with the position information of the virtual position; wherein the same virtual position can be bound to a plurality of pieces of virtual identity information; each third virtual user serves either as the first virtual user initiating an interaction instruction in the virtual scene or as a second virtual user who has joined the virtual scene and has not initiated an interaction instruction.
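Since the same virtual position may be bound to several pieces of virtual identity information, the binding is naturally a one-to-many map; a minimal sketch with illustrative identifiers:

```python
from collections import defaultdict

position_bindings: dict = defaultdict(list)

def bind_identity(position_id: str, identity_id: str) -> None:
    """Bind a virtual identity to a virtual position (one seat, many identities)."""
    position_bindings[position_id].append(identity_id)

bind_identity("seat-12A", "identity-001")
bind_identity("seat-12A", "identity-002")  # the same virtual position, a second identity
```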
As a preferred embodiment, the device further comprises:
a viewing angle determining module, configured to respond to a viewing angle selection instruction sent by the first virtual user in the virtual scene and determine the viewing angle currently selected by the first virtual user; wherein the viewing angle comprises a panoramic viewing angle and a seat viewing angle;
the display screen generating module is used for calling image frames shot by preset shooting equipment in the actual scene according to the viewing angle currently selected by the first virtual user, generating a display screen under the viewing angle and pushing the display screen to the first virtual user; wherein the preset image capturing apparatus includes: a foreground image capturing apparatus located in front of the audience area, and a panoramic image capturing apparatus located behind the audience area.
As a preferred embodiment, when the viewing angle currently selected by the first virtual user is a panoramic viewing angle, the display screen generating module is specifically configured to:
determining, according to the panoramic viewing angle currently selected by the first virtual user and the line connecting the actual position and the center position of the specific area, the position of the panoramic image capturing apparatus on the connecting line as a first position;
determining a second position of the panoramic image capturing apparatus according to the first position, a preset offset and an offset direction; wherein the offset direction is determined from the relative positional relationship between the actual position and the center position of the audience area;
and synthesizing a display picture under the panoramic viewing angle from the image frames shot by the panoramic image capturing apparatus between the first position and the second position, and pushing the display picture to the first virtual user.
As a preferred embodiment, when the viewing angle currently selected by the first virtual user is a seat viewing angle, the display screen generating module is specifically configured to:
determining, according to the line connecting the actual position and the center position of the specific area, the position of the panoramic image capturing apparatus on the connecting line as a third position and the position of the foreground image capturing apparatus on the connecting line as a fourth position;
determining, according to the seat viewing angle currently selected by the first virtual user, a corresponding image interception proportion based on the distance between the actual position of the first virtual user and the center position of the specific area;
Intercepting the image frames shot by the panoramic shooting equipment at the third position according to the image intercepting proportion to obtain intercepted image frames;
and synthesizing a display picture under the seat watching visual angle according to the captured image frame and the image frame captured by the foreground image capturing device at the fourth position, and pushing the display picture to the first virtual user.
As a preferred embodiment, the number of the foreground image capturing devices and the panoramic image capturing devices is at least one; the foreground image capturing devices are located on a preset first moving track, the panoramic image capturing devices are located on a preset second moving track, and the foreground and panoramic image capturing devices move synchronously along their respective moving tracks at a preset moving speed; the first moving track is arranged in front of the audience area, and the second moving track is arranged behind the audience area.
It should be noted that the virtual-real interaction device provided by the embodiment of the present invention is used for executing all the flow steps of the virtual-real interaction method in the above embodiments; the working principles and beneficial effects of the two correspond one to one, so the description is not repeated here.
Referring to fig. 9, a schematic structural diagram of virtual-real interaction equipment according to an embodiment of the present invention. The embodiment of the present invention further provides virtual-real interaction equipment 40, including a processor 41, a memory 42, and a computer program stored in the memory and configured to be executed by the processor, where the processor, when executing the computer program, implements the virtual-real interaction method according to any one of the above embodiments.
It should be noted that the virtual-real interaction equipment provided by the embodiment of the present invention is used for executing all the flow steps of the virtual-real interaction method in the above embodiments; the working principles and beneficial effects of the two correspond one to one, so the description is not repeated here.
The embodiment of the invention also provides a computer readable storage medium, which comprises a stored computer program, wherein when the computer program runs, equipment where the computer readable storage medium is located is controlled to execute the virtual-real interaction method according to any one of the embodiments.
Those skilled in the art will appreciate that all or part of the flows of the methods in the above embodiments may be implemented by a computer program instructing the relevant hardware; the computer program may be stored on a computer-readable storage medium and, when executed, may include the flows of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
While the foregoing is directed to the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that changes and modifications may be made without departing from the principles of the invention, such changes and modifications are also intended to be within the scope of the invention.

Claims (10)

1. A virtual-real interaction method, characterized by comprising the following steps:
responding to an interaction instruction sent by a first virtual user in a virtual scene, and acquiring interaction data of the first virtual user;
determining the actual position of the first virtual user in an actual scene corresponding to the virtual scene according to the virtual position of the first virtual user in the virtual scene;
controlling interaction equipment in the actual scene based on the actual position, and pushing the interaction data to an actual user in the actual scene;
and receiving interactive feedback data sent by the target actual user in the actual scene according to the interactive data, and pushing the interactive feedback data to the first virtual user.
2. The virtual-real interaction method according to claim 1, wherein the interaction data is a user image; the interactive device comprises a projection display device and a projection device, wherein the projection display device is arranged behind an audience area in the actual scene and faces a specific area where the target actual user is located;
The step of controlling the corresponding interactive device to push the interactive data to the actual user in the actual scene based on the actual position is specifically:
determining a display area of the user image picture in the projection display equipment according to the actual position and the central position of the specific area, and taking the display area as a target display area;
and controlling the projection equipment to project the user image picture onto the target display area.
3. The virtual-real interaction method according to claim 1, wherein the interaction data is user sound information, and the interactive device comprises at least one sound playing device, each sound playing device being disposed on the periphery, near the audience area in the actual scene, of the specific area where the target actual user is located;
the step of controlling the corresponding interactive device to push the interactive data to the actual user in the actual scene based on the actual position is specifically:
according to the actual position and the central position of the specific area, determining a sound playing device for playing the user sound information as a target sound playing device;
and controlling the target sound playing equipment to play the user sound information.
4. The virtual-real interaction method according to any one of claims 1 to 3, wherein before the acquiring of the interaction data of the first virtual user in response to the interaction instruction sent by the first virtual user in the virtual scene, the method further comprises:
when an interaction instruction sent by the first virtual user is received, judging whether a preset interaction condition is met or not; the interaction condition is that the number of the currently responded interaction instructions does not reach the upper limit, and the interaction equipment corresponding to the first virtual user is not occupied;
when the interaction condition is met, responding to an interaction instruction sent by a first virtual user in a virtual scene, and acquiring interaction data of the first virtual user;
and when the interaction condition is not met, pushing prompt information indicating that the interaction condition is not met to the first virtual user.
5. The virtual-real interaction method according to claim 1, wherein the method further comprises:
responding to a viewing angle selection instruction sent by the first virtual user in the virtual scene, and determining the viewing angle currently selected by the first virtual user; wherein the viewing angle comprises a panoramic viewing angle and a seat viewing angle;
Invoking an image frame shot by preset shooting equipment in the actual scene according to the viewing angle currently selected by the first virtual user, generating a display picture under the viewing angle, and pushing the display picture to the first virtual user; the preset image pickup apparatus includes: a foreground image pickup apparatus located in front of an audience area in the actual scene, and a panoramic image pickup apparatus located behind the audience area.
6. The virtual-real interaction method according to claim 5, wherein, when the viewing angle currently selected by the first virtual user is the panoramic viewing angle,
the step of calling the image frames shot by the preset shooting equipment in the actual scene to generate the display picture under the viewing angle comprises the following steps:
determining, according to the line connecting the actual position and the center position of the specific area, the position of the panoramic image pickup apparatus on the connecting line as a first position;
determining a second position of the panoramic image pickup apparatus according to the first position, a preset offset and an offset direction; wherein the offset direction is determined from the relative positional relationship between the actual position and the center position of the audience area;
and synthesizing the display picture under the panoramic viewing angle from the image frames shot by the panoramic image pickup apparatus between the first position and the second position.
7. The virtual-real interaction method according to claim 5, wherein, when the viewing angle currently selected by the first virtual user is the seat viewing angle,
the step of calling the image frames shot by the preset shooting equipment in the actual scene to generate the display picture under the viewing angle comprises the following steps:
determining, according to the line connecting the actual position and the center position of the specific area, the position of the panoramic image pickup apparatus on the connecting line as a third position and the position of the foreground image pickup apparatus on the connecting line as a fourth position;
determining a corresponding image interception proportion according to the distance between the actual position and the central position of the specific area;
intercepting the image frames shot by the panoramic shooting equipment at the third position according to the image intercepting proportion to obtain intercepted image frames;
and synthesizing a display picture under the seat watching visual angle according to the captured image frame and the image frame captured by the foreground image capturing equipment at the fourth position.
8. The virtual-real interaction method according to claim 1, wherein the method further comprises:
after constructing the virtual scene, when detecting that a third virtual user joins the virtual scene, constructing a virtual position in the virtual scene for the third virtual user;
determining virtual identity information of the third virtual user, and binding the virtual identity information with the position information; wherein, the same virtual position can bind a plurality of virtual identity information; each third virtual user is used as the first virtual user initiating interactive instructions in the virtual scene or each second virtual user which has joined in the virtual scene and does not initiate interactive instructions.
9. A virtual-real interaction device, characterized by comprising:
the interactive data acquisition module is used for responding to an interactive instruction sent by a first virtual user in a virtual scene to acquire interactive data of the first virtual user;
the actual position determining module is used for determining the actual position of the first virtual user in an actual scene corresponding to the virtual scene according to the virtual position of the first virtual user in the virtual scene;
The interactive data pushing module is used for controlling the interactive equipment in the actual scene based on the actual position and pushing the interactive data to an actual user in the actual scene;
and the interactive feedback data pushing module is used for receiving the interactive feedback data sent by the target actual user in the actual scene according to the interactive data and pushing the interactive feedback data to the first virtual user.
10. A virtual-real interaction device comprising a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the virtual-real interaction method according to any of claims 1 to 8 when the computer program is executed.
CN202311521322.6A 2023-11-15 2023-11-15 Virtual-real interaction method, device and equipment Pending CN117459751A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311521322.6A CN117459751A (en) 2023-11-15 2023-11-15 Virtual-real interaction method, device and equipment

Publications (1)

Publication Number Publication Date
CN117459751A 2024-01-26

Family

ID=89583470



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination