CN109996060B - Virtual reality cinema system and information processing method - Google Patents

Virtual reality cinema system and information processing method

Info

Publication number
CN109996060B
CN109996060B (application CN201711490235.3A)
Authority
CN
China
Prior art keywords
information
cinema
client
film
model
Prior art date
Legal status
Active
Application number
CN201711490235.3A
Other languages
Chinese (zh)
Other versions
CN109996060A (en)
Inventor
Li Gang (李刚)
Long Shoulun (龙寿伦)
Zhang Dawei (张大为)
Current Assignee
Shenzhen Dlodlo New Technology Co Ltd
Original Assignee
Shenzhen Dlodlo New Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Dlodlo New Technology Co Ltd
Priority to CN201711490235.3A
Publication of CN109996060A
Application granted
Publication of CN109996060B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/239Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
    • H04N21/2393Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests involving handling client requests
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/816Monomedia components thereof involving special video data, e.g 3D video

Abstract

The application provides a virtual reality cinema system and an information processing method. In practical application, the client first obtains the user's viewing instruction, generates viewing information and sends the viewing information to the server; after receiving the viewing information, the server extracts the film resources and cinema model corresponding to the viewing information and sends them to the client; finally, after receiving the film resources and the cinema model, the client displays the cinema model on the screen of the head-mounted device according to the selected seat information and shows the film resources at the screen position of the cinema model. The system provided by the application can simulate the real viewing effect inside a movie theater, greatly increases the amount of information that the virtual reality cinema presents to the user, and solves the problems that a conventional virtual cinema presents too little information and gives only a weak sense of being in a movie theater.

Description

Virtual reality cinema system and information processing method
Technical Field
The present application relates to the field of virtual reality technologies, and in particular, to a virtual reality theater system and an information processing method.
Background
Virtual Reality (VR) technology provides an immersive sensation in a computer-generated three-dimensional environment. Virtual reality technology realizes human-computer interaction through virtual reality equipment such as VR glasses. A conventional virtual reality device presents VR images to the left and right eyes of the wearer through a built-in screen or other display device to form a virtual reality picture. Virtual reality technology can be used to play movie and television resources, replacing a cinema screen or television in presenting movie content to the user.
To present a more realistic viewing experience to the user, a virtual reality cinema system has been shown in the prior art. As shown in fig. 1, the system consists of a server side, a user side and a merchant side. In actual use, the user selects a movie to watch at the user side; payment information is sent to the merchant side through the server side; the cinema operator confirms the payment information at the merchant side and then sends a playing instruction for the corresponding movie to the server side; the server side sends the corresponding movie content to the user side, where it is played on the user's virtual reality equipment. In this way the user is no longer restricted to a particular place when watching a movie, and the viewing experience is improved.
However, the virtual reality cinema system shown in the prior art only presents the movie resources stored on the server to the user side. Because the amount of viewing information presented to the user is small, the user, although no longer limited by place, loses the sense of immersion of watching a movie inside a theater. How to display more viewing information on the virtual reality device at the user side, so as to simulate more truly the viewing effect inside a real movie theater, has therefore become a technical problem to be solved urgently in the field.
Disclosure of Invention
The application provides a virtual reality cinema system and an information processing method, which aim to solve the problem that a conventional virtual reality cinema presents too little information to the user, resulting in a weak sense of immersion.
In one aspect, the application provides a virtual reality theater system, including a server storing movie resources and theater models, and a plurality of clients establishing communication connection with the server, wherein:
the server is used for receiving the film watching information sent by the client and sending corresponding film resources and cinema models to the client according to the film watching information; the film watching information comprises selected cinema information, selected seat information and selected film information;
the client comprises a head-mounted device with a display screen and an optical module, and is used for acquiring a film watching instruction of a user, generating film watching information according to the film watching instruction and sending the film watching information to the server;
the client is further configured to receive the movie resources and the theater model sent by the server, display the theater model on the display screen of the head-mounted device according to the selected seat information, display the movie resources at the screen position of the theater model, and magnify the picture on the display screen through the optical module for presentation to the user.
Optionally, the client is configured to:
determining the position coordinates of the selected seat in the cinema model according to the selected seat information and the cinema model;
acquiring a visual field range of the client corresponding to the head-mounted equipment, wherein the visual field range comprises a display viewing cone and a field angle of the head-mounted equipment;
and determining an initial display picture of the head-mounted equipment corresponding to the client according to the position coordinates and the field angle.
Optionally, the client is further configured to:
determining the plane of the screen in the cinema model as a view plane;
according to the visual field range, determining the intersection surface of the display viewing cone and the visual plane and the projection of the virtual article in the cinema model on the intersection surface under the visual field range;
determining a center point of the intersecting surface, a width Wp of the intersecting surface and a height Hp of the intersecting surface;
acquiring the screen width Ws and the screen height Hs of the head-mounted equipment, and displaying the projection in an initial display picture according to the following formula:
xs=(Ws-1)(xp/Wp+0.5);
ys=(Hs-1)(yp/Hp+0.5);
in the formula, xp and yp are the coordinates of a point on the intersection surface; xs and ys are the corresponding coordinates on the head-mounted device screen.
Optionally, the client further includes a sound device, two sound playing sources are built in the sound device, and the client is further configured to:
determining the position coordinates of the selected seats in the cinema model according to the selected seat information;
determining the sound source position of sound equipment in the cinema model, and calculating the sound field distance between the selected seat and the sound source position according to the position coordinate;
and adjusting the loudness of the two sound playing sources according to the sound field distance.
Optionally, the server is further configured to add a character model to the cinema model according to the viewing information; the server is further configured to:
receiving the film watching information sent by a plurality of clients;
determining selected seat information corresponding to the client according to the film watching information;
adding a character model to a seat position corresponding to the selected seat information in the cinema model according to the selected seat information;
and sending the cinema model added with the character model to a plurality of clients.
Optionally, the viewing information further includes audience information, and the server is further configured to:
selecting a character model corresponding to the audience information according to the audience information;
and adding the character model to the cinema model and the seat position corresponding to the viewing information.
Optionally, the system further includes a cinema end establishing a communication connection with the server, and the cinema end is configured to:
updating the cinema model, sending the updated cinema model to the server, and replacing the cinema model stored in the server; and the number of the first and second groups,
and uploading the new film resource to the server.
Optionally, the film watching information further includes associated user information, where the associated user information is a common film watching request sent by a plurality of the clients, and the associated user information at least includes the number of the common film watching clients; the server is further configured to:
matching optional cinema information which accords with the number of the common film watching clients according to the associated user information;
and pushing the selectable cinema information for a plurality of corresponding clients in the associated user information, and sending corresponding movie resources and cinema models to the plurality of clients after receiving confirmation information of the plurality of clients.
Optionally, the headset has a built-in orientation sensor for detecting movement and rotation of the headset, and the client is further configured to:
acquiring an initial attitude angle of the head-mounted device through the orientation sensor;
determining the position coordinates of the selected seat in the cinema model according to the selected seat information and the cinema model;
acquiring a visual field range of the head-mounted equipment corresponding to the client under the initial attitude angle, wherein the visual field range comprises a display viewing cone and a viewing angle of the head-mounted equipment;
and determining an initial display picture of the head-mounted equipment corresponding to the client according to the position coordinates and the field angle.
Optionally, the client is further configured to:
acquiring detection data of the orientation sensor during the film watching process of the head-mounted equipment;
determining the attitude data of the head-mounted equipment in the cinema model according to the detection data of the orientation sensor;
and adjusting the display picture of the head-mounted equipment according to the posture data.
On the other hand, the application also provides a virtual reality cinema information processing method, which comprises the following steps:
the client side obtains a film watching instruction and generates film watching information according to the film watching instruction, wherein the film watching information comprises selected cinema information, selected seat information and selected film information;
the client sends the film watching information to a server, and the server stores film resources and cinema models;
after receiving the film watching information, the server extracts film resources and cinema models corresponding to the film watching information;
the server sends the extracted film resources and the cinema model to the client;
and after receiving the film resources and the cinema model, the client displays the cinema model according to the selected seat information and shows the film resources at the screen position of the cinema model.
According to the technical scheme, the virtual reality cinema system comprises a server storing movie resources and cinema models and a plurality of client sides establishing communication connection with the server, wherein in practical application, a movie watching instruction of a user is obtained through the client sides, movie watching information comprising selected cinema information, selected seat information and selected movie information is generated according to the movie watching instruction, and then the movie watching information is sent to the server; after receiving the film watching information, the server extracts film resources and cinema models corresponding to the film watching information and sends the film resources and cinema models to the client; and finally, after receiving the film resources and the cinema model, the client displays the cinema model on the screen of the head-mounted equipment according to the selected seat information, and shows the film resources at the screen position of the cinema model.
The virtual reality cinema system provided by the application can use a virtual reality head-mounted device to simulate the real viewing effect inside a cinema. By selecting a seat position in the cinema model and the corresponding viewing angle, the virtual image shown on the screen is brought closer to the effect in an actual cinema, which greatly increases the amount of information the virtual reality cinema presents to the user and solves the problems that a conventional virtual cinema presents too little information and gives a weak sense of immersion.
Drawings
In order to explain the technical solution of the present application more clearly, the drawings needed in the embodiments are briefly described below; those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a conventional virtual reality cinema system;
fig. 2 is a schematic diagram of a virtual reality theater system according to the present disclosure;
fig. 3 is another schematic diagram of a virtual reality theater system provided herein;
FIG. 4 is a schematic view illustrating a process of determining an initial display frame according to the present application;
FIG. 5 is a schematic diagram of the present application in determining an initial display;
FIG. 6 is a schematic flow chart of a screen-displayed cinema model of a head-mounted device according to the present application;
FIG. 7 is a schematic diagram of the present application for determining the sound effect to play;
FIG. 8 is a schematic view illustrating a process of determining a sound effect to be played according to the present application;
FIG. 9 is a schematic diagram of a virtual reality theater system according to one embodiment of the present disclosure;
FIG. 10 is a schematic flow chart of the present application for adding a character model;
fig. 11 is a schematic diagram of another embodiment of a virtual reality theater system provided herein;
FIG. 12 is a schematic flow chart illustrating the determination of an initial display based on orientation sensor data according to the present application;
fig. 13 is a schematic flowchart of a virtual reality cinema information processing method provided in the present application.
Detailed Description
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described below do not represent all embodiments consistent with the present application; they are merely examples of systems and methods consistent with certain aspects of the application, as recited in the claims.
In the technical scheme provided by the application, the server is a network device that can provide computing services, respond to service requests and generate service instructions. The server has a storage space for storing various film resources and model files. In addition, the server provided by the application can also perform file transmission, downloading or receiving various files sent by other devices on the network, so as to update the film resources and models stored on the server.
The client is a device set mainly composed of an interactive device and a head-mounted device. The interactive device interacts with the user and has data processing, file transmission and instruction sending functions; examples include a computer, a smartphone or a tablet computer. For convenience of description, a single client (e.g., client A1) is taken as an example in the embodiments of the present application; unless otherwise stated, the other clients have the same functions as the exemplified client, and the communication manner and data flow between the multiple clients and the server are also the same. In the technical scheme provided by the application, establishing a communication connection between the server and the client means that the client establishes communication with the server through a network connection such as Ethernet, so as to realize data transmission.
In this application, a cinema model is a virtual 3D model established according to the structure of an actual cinema hall. The cinema model at least comprises a screen, seats and sound equipment located at the same positions as in the actual cinema; to reflect the effect of the actual cinema more truly, models of objects such as light sources and aisle decorations may also be added. According to the technical scheme, different cinema models are established for different actual cinemas for the user to select. To facilitate identification and retrieval, each cinema model has a cinema model identification code corresponding to the actual cinema; the identification code may be determined directly by the exhibition licence number of the actual cinema, or according to the address of the actual cinema. In addition, because cinema models are established per auditorium in this application, different auditoriums of the same cinema can also be distinguished by the identification code.
Referring to fig. 2, a schematic view of the virtual cinema structure provided by the present application is shown. As can be seen from fig. 2, the virtual reality theater system provided by the present application includes a server S0 storing movie resources and theater models, and a plurality of clients A1, A2, A3, … establishing communication connections with the server S0, wherein:
As shown in fig. 3, the server is configured to receive the viewing information sent by the client and send the corresponding movie resources and theater model to the client according to the viewing information; the viewing information includes selected theater information, selected seat information and selected movie information. The client comprises a head-mounted device with a display screen and an optical module, and is used for acquiring a viewing instruction of the user, generating viewing information according to the viewing instruction and sending the viewing information to the server. The client is also used for receiving the movie resources and the theater model sent by the server, displaying the theater model on the display screen of the head-mounted device according to the selected seat information, displaying the movie resources at the screen position of the theater model, and magnifying the picture on the display screen through the optical module for presentation to the user.
The viewing information refers to a data transmission file generated from a series of operation instructions, i.e. viewing instructions, performed by the user on a user interface by operating the interactive device in the client. For the viewing instruction, the user selects, according to his own intention, the actual cinema to be simulated, the position from which to watch the film in that cinema, and the film to be watched; completing these three selections generates the corresponding viewing instruction, and the interactive device converts the viewing instruction into viewing information that can be transmitted over the network, namely the corresponding selected theater information, selected seat information and selected movie information.
In this embodiment, the server has a data storage function; source files of movie resources and virtual theater models established in advance according to the structural characteristics of actual theaters are stored in the memory of the server. The movie resources stored on the server may come from actual cinema operators, i.e. uploaded to the server by the operators under the copyright agreement for the movie, or the server may directly acquire video resources with an open public licence from the network. To reduce the occupation of the server's storage space, in practical application a video resource with a public right of exhibition need not be stored permanently on the server; instead, when a user wants to watch such a movie, its video source file is obtained from a specified movie website.
With the server and the client described above, in practical application the virtual reality cinema system provided by the application first acquires the user's viewing instruction through the client, generates viewing information comprising selected theater information, selected seat information and selected movie information according to the viewing instruction, and then sends the viewing information to the server; after receiving the viewing information, the server extracts the movie resources and theater model corresponding to the viewing information and sends them to the client; finally, after receiving the movie resources and the theater model, the client displays the theater model on the screen of the head-mounted device according to the selected seat information and shows the movie resources at the screen position of the theater model.
In this embodiment, displaying the theater model on the screen of the head-mounted device means that, after the client receives the theater model and the movie resources, the picture that could be viewed from the selected seat is simulated inside the theater model by evaluating the user's selected-seat information, and this viewed picture is then displayed on the screen of the virtual reality device as video. Further, as shown in fig. 4, the process by which the client determines the display content of the built-in screen of the head-mounted device specifically includes the following steps:
s101: determining the position coordinates of the selected seat in the cinema model according to the selected seat information and the cinema model;
s102: acquiring a visual field range of the client corresponding to the head-mounted equipment, wherein the visual field range comprises a display viewing cone and a viewing angle of the head-mounted equipment;
s103: and determining an initial display picture of the head-mounted equipment corresponding to the client according to the position coordinates and the field angle.
Through the above steps, the picture the user could see from the seat corresponding to the selected theater information is determined from the selected seat information and the theater model, i.e. every virtual object in the theater model that falls within the field-of-view area. In this embodiment, the field angle is the field angle provided by the head-mounted device corresponding to the client; the maximum picture size presented by the head-mounted device can be determined from this field angle, so that the display mode can be adjusted and large deformation of the display picture determined from the theater model is avoided when it is put on the screen.
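As an illustration of steps S101-S103, the following Python sketch shows one possible way of deriving the seat position coordinates and the display viewing cone. The seat-grid parameters (row spacing, eye height, screen distance) and the helper names are assumptions made only for the example; they are not specified in the original disclosure.

```python
import math
from dataclasses import dataclass

@dataclass
class CinemaLayout:
    # Illustrative layout parameters; a real cinema model carries a full 3D mesh.
    row_spacing: float = 1.2       # metres between seat rows
    seat_spacing: float = 0.6      # metres between seats within a row
    row_rise: float = 0.25         # height gained per row (stadium seating)
    seats_per_row: int = 20

def seat_position(layout: CinemaLayout, row: int, col: int) -> tuple[float, float, float]:
    """S101: map the selected-seat information (row, col) to position coordinates
    in the cinema model.  The screen is assumed to lie in the plane z = 0."""
    x = (col - (layout.seats_per_row - 1) / 2) * layout.seat_spacing   # centred on the screen
    y = 1.2 + row * layout.row_rise                                    # approximate eye height
    z = 5.0 + row * layout.row_spacing                                 # distance from the screen
    return (x, y, z)

def viewing_cone(field_angle_deg: float, aspect: float, near: float = 0.1) -> dict:
    """S102: describe the display viewing cone of the head-mounted device by the
    half-extents of its near plane, derived from the field angle."""
    half_h = near * math.tan(math.radians(field_angle_deg) / 2)
    return {"near": near, "half_w": half_h * aspect, "half_h": half_h}

# S103: the initial display picture is rendered from this position with this cone.
pos = seat_position(CinemaLayout(), row=4, col=9)
cone = viewing_cone(field_angle_deg=100.0, aspect=16 / 9)
print(pos, cone)
```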
Further, as shown in fig. 5 and 6, after the position coordinates and the visual field range are determined, the screen content displayed on the screen is determined according to the following steps:
s201: determining the plane of the screen in the cinema model as a view plane;
s202: according to the visual field range, determining the intersection surface of the display viewing cone and the view plane, and the projection onto that intersection surface of the virtual articles of the theater model that fall within the visual field range;
s203: determining the center point of the intersecting surface, the width Wp of the intersecting surface and the height Hp of the intersecting surface;
s204: acquiring the screen width Ws and the screen height Hs of the head-mounted equipment, and displaying projection in the initial display picture according to the following formula:
xs=(Ws-1)(xp/Wp+0.5);
ys=(Hs-1)(yp/Hp+0.5);
in the formula, xp and yp are the coordinates of a point on the intersection surface; xs and ys are the corresponding coordinates on the head-mounted device screen.
Through the above steps, the 3D theater model and the selected movie asset can be displayed on the screen of the head-mounted device by planar perspective projection. In the technical solution provided in the present application, since the head-mounted device has two screens, in order to present the virtual cinema effect more truly, the screen display content must be determined separately for the visual field range corresponding to each of the two screens. Because the display contents corresponding to the two viewing angles differ slightly, the two display pictures are slightly different, which gives the user a stronger 3D sensation.
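The mapping from the intersection surface to the screen given above can be written directly as code. The following sketch only restates the formula of step S204; the sample numbers (a 1920x1080 screen and a 4.0 x 2.25 intersection surface) are chosen purely for illustration.

```python
def plane_to_screen(xp: float, yp: float,
                    Wp: float, Hp: float,
                    Ws: int, Hs: int) -> tuple[float, float]:
    """Map a point (xp, yp) on the intersection surface (centred at its midpoint,
    width Wp, height Hp) to screen coordinates on a Ws x Hs head-mounted display:
        xs = (Ws - 1) * (xp / Wp + 0.5)
        ys = (Hs - 1) * (yp / Hp + 0.5)
    """
    xs = (Ws - 1) * (xp / Wp + 0.5)
    ys = (Hs - 1) * (yp / Hp + 0.5)
    return xs, ys

# The centre of the intersection surface maps to the centre of the screen,
# and the top-right corner maps to the last pixel column and row.
assert plane_to_screen(0.0, 0.0, Wp=4.0, Hp=2.25, Ws=1920, Hs=1080) == (959.5, 539.5)
assert plane_to_screen(2.0, 1.125, Wp=4.0, Hp=2.25, Ws=1920, Hs=1080) == (1919.0, 1079.0)
```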
Through the above technical scheme, the picture displayed on the screen of the head-mounted device can be determined, so that the scene of watching a movie in an actual cinema is simulated. In the technical scheme of the application, the movie resource is shown at the screen position of the theater model by adding a video map at that screen position, so that the movie resource is always displayed at the screen position, and the shape and proportion of the movie resource displayed on the view plane are adjusted through the determined visual field range. Further, when the movie resource selected by the user is a 3D movie, the left-eye picture and the right-eye picture of the 3D movie are separated before the display picture is determined; when the display picture of the left screen of the head-mounted device is determined, the left-eye picture is displayed at the screen position of the theater model, and similarly, when the display picture of the right screen is determined, the right-eye picture is projected at the screen position of the theater model, achieving the 3D visual effect that would be experienced when viewing from the selected position.
Through the technical scheme, the virtual reality cinema system can simulate the actual viewing effect in a cinema using the head-mounted device; by simulating the position of the selected seat in the theater model and the corresponding viewing angle, the virtual image shown on the screen is brought closer to the effect in an actual cinema, which greatly increases the amount of information the virtual reality cinema presents to the user and improves the user's sense of immersion in the virtual reality cinema system.
In order to simulate the viewing effect in an actual theater more truly and to increase the sense of immersion in the virtual reality theater system, the client may further include a sound device with two built-in sound playing sources corresponding to the wearer's left and right ears respectively, so that the client adjusts the loudness of the two sound playing sources according to the selected seat information and thereby further simulates the sound effect of an actual theater. As shown in fig. 7, this specifically includes the following steps:
s301: determining the position coordinates of the selected seats in the cinema model according to the selected seat information;
s302: determining the sound source position of sound equipment in the cinema model, and calculating the sound field distance between the selected seat and the sound source position according to the position coordinate;
s303: and adjusting the loudness of the two sound playing sources according to the sound field distance.
In this embodiment, as shown in fig. 8, in order to obtain a better simulation effect and suit the hardware conditions of most users, the sound device may be an earphone. Because different cinemas distribute their sound equipment differently, the established theater model carries the distribution-position information of the sound equipment of the corresponding cinema; after receiving the theater model, the client reads the distribution-position information of the sound equipment in the model, determines the distance between the selected seat and each piece of sound equipment according to the position coordinates, and determines the loudness of the two sound playing sources according to the distribution characteristics of the sound field.
Since most movie sound tracks are in surround-sound form, in this embodiment the two sound playing sources respectively play the left- and right-channel contents of the audio signal, and their loudness is then adjusted according to the sound field distance of the selected seat. For example, when the seat selected by the user is closer to the left sound equipment then, as in the real scene, the sound coming from the left should be louder than the sound coming from the right, i.e. the sound content of the left channel has the higher loudness; in this case the client appropriately increases the loudness of the sound playing source corresponding to the left ear according to the sound field distance of the selected position.
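The loudness adjustment of steps S301-S303 can be sketched as follows. The inverse-distance attenuation law is an assumption made only for this example; the description states only that the loudness of the two sound playing sources is adjusted according to the sound field distance.

```python
import math

def stereo_loudness(seat: tuple[float, float, float],
                    left_source: tuple[float, float, float],
                    right_source: tuple[float, float, float],
                    base_gain: float = 1.0) -> tuple[float, float]:
    """S301-S303: compute the sound field distance from the selected seat to each
    sound-source position in the cinema model and attenuate the two sound playing
    sources accordingly (inverse-distance law assumed)."""
    d_left = max(math.dist(seat, left_source), 1.0)     # clamp to avoid divide-by-zero
    d_right = max(math.dist(seat, right_source), 1.0)
    return base_gain / d_left, base_gain / d_right

# A seat nearer the left-hand sound equipment yields a louder left channel.
left_gain, right_gain = stereo_loudness(seat=(-3.0, 1.5, 8.0),
                                        left_source=(-5.0, 3.0, 0.0),
                                        right_source=(5.0, 3.0, 0.0))
print(left_gain > right_gain)   # True
```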
In addition, in order to obtain a sound experience closer to an actual cinema, in the technical scheme provided by the application, besides adjusting the loudness, the way sound would be heard in an actual cinema can be further simulated by adding reverberation to the sound, such as echo and cinema ambient sound. The intensity of the added reverberation also depends on the size of the space in the theater model: for a theater model with a larger space, the echo is stronger and the interval between the echo and the direct sound is longer; conversely, for a theater model with a smaller space, the echo is weaker and the interval is shorter.
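A minimal way of scaling the added reverberation with the size of the modelled hall is sketched below; the particular constants and the hall-volume parameter are illustrative assumptions, since the description only states that larger halls get stronger, later echoes and smaller halls weaker, earlier ones.

```python
def reverb_parameters(hall_volume_m3: float) -> dict:
    """Derive echo pre-delay and relative echo intensity from the hall volume:
    a larger space gives a longer gap between the direct sound and the echo and
    a stronger echo, a smaller space the opposite."""
    pre_delay_ms = min(15.0 + hall_volume_m3 / 200.0, 120.0)
    echo_gain = min(0.1 + hall_volume_m3 / 20000.0, 0.6)
    return {"pre_delay_ms": pre_delay_ms, "echo_gain": echo_gain}

print(reverb_parameters(3000.0))   # large hall: longer pre-delay, stronger echo
print(reverb_parameters(400.0))    # small hall: shorter pre-delay, weaker echo
```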
In some embodiments of the present application, as shown in fig. 9 and 10, in order to provide more information to the user during viewing, the server is further configured to add character models to the theater model according to the viewing information. The specific steps for adding a character model to the theater model are as follows:
s401: receiving film watching information sent by a plurality of clients;
s402: determining selected seat information corresponding to the client according to the film watching information;
s403: adding a character model to the seat position corresponding to the selected seat information in the cinema model according to the selected seat information;
s404: and sending the cinema model added with the character model to a plurality of clients.
In this embodiment, the server S0 is connected to a plurality of clients, such as client A1, client A2, and so on. When client A1 completes seat selection, the server S0 records the seat selected by that client; when client A2 selects a seat in the same theater, the seat position already selected by client A1 is marked as not selectable. After client A2 finishes uploading its viewing information, the server S0 adds a character model to the theater model sent to client A2, placed at the seat position selected by client A1. Client A3 and subsequent clients are handled in the same way.
Further, in practical application, a seat-selection time period may be set for each movie in the virtual reality theater system. After client A1 uploads its viewing information, the server S0 first sends only the movie resources to client A1 and indicates the showing time of the movie; at the same time, it records the selected-seat information of all the clients that choose seats for that movie during the seat-selection period. After the seat-selection period ends, the server adds character models to the theater model according to all the selected-seat information and sends the theater model with the added character models to every client that completed seat selection within the period, for display by the head-mounted device of each client. Naturally, in the theater model sent to each client, the server removes the character model from the seat selected by the receiving client itself, so that the character model does not affect that client's viewing picture. For example, when the server S0 sends the theater model with added character models to client A1, the character model at the seat selected by client A1 is removed while the other character models keep their positions.
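The seat bookkeeping and per-client character-model placement described above can be illustrated with the following sketch; the class and method names are hypothetical and the cinema model is reduced to a dictionary for brevity.

```python
from dataclasses import dataclass, field

@dataclass
class Showing:
    """Server-side record of one showing during its seat-selection period."""
    selected_seats: dict[str, str] = field(default_factory=dict)   # client_id -> seat_id

    def select_seat(self, client_id: str, seat_id: str) -> bool:
        # S402/S403: a seat already chosen by another client is not selectable.
        if seat_id in self.selected_seats.values():
            return False
        self.selected_seats[client_id] = seat_id
        return True

    def model_for(self, client_id: str) -> dict:
        """S404: build the cinema model sent to one client; a character model is
        placed on every selected seat except the receiving client's own seat."""
        return {"characters": {seat: "character_model"
                               for cid, seat in self.selected_seats.items()
                               if cid != client_id}}

showing = Showing()
showing.select_seat("A1", "row4_seat9")
showing.select_seat("A2", "row4_seat10")
print(showing.select_seat("A3", "row4_seat9"))   # False: seat already taken
print(showing.model_for("A1"))                   # only A2's character model appears
```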
In one embodiment, the viewing information further includes audience information, which is used to determine the physical characteristics of the viewing user so that a corresponding character model can be selected on the server and added to the theater model. That is, for step S403, adding a character model to the seat position corresponding to the selected seat information further includes:
s4031: selecting a character model corresponding to the audience information according to the audience information;
s4032: the character model is added to the cinema model at the seat position corresponding to the viewing information.
In this embodiment, the audience information may be specified by the user when registering an account, or determined from the user's registered identity information or from the user's nickname and avatar. After receiving the viewing information sent by the client, the server extracts the audience information, matches in a character-model library a character model similar to the physical characteristics described by the audience information, and adds the matched character model at the corresponding selected seat position. Through this embodiment, a variety of different character models can be added to the theater model, so that the pictures displayed by the head-mounted device do not feel unreal because of a single repeated model, further increasing the amount of information the virtual reality theater system presents to the user.
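Matching a character model to the audience information can be sketched as a simple attribute comparison; the attribute names used below ("gender", "height") are illustrative assumptions, since the description only requires choosing a model similar to the viewer's physical characteristics.

```python
def match_character_model(audience_info: dict, model_library: list[dict]) -> str:
    """S4031: pick from the character-model library the model whose attributes
    agree with the most entries of the audience information."""
    def score(model: dict) -> int:
        return sum(1 for key, value in audience_info.items() if model.get(key) == value)
    return max(model_library, key=score)["model_id"]

library = [{"model_id": "m_female_tall", "gender": "female", "height": "tall"},
           {"model_id": "m_male_short",  "gender": "male",   "height": "short"}]
print(match_character_model({"gender": "male", "height": "short"}, library))   # m_male_short
```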
In some embodiments of the present application, as shown in fig. 11, the virtual reality theater system further includes a theater end establishing a communication connection with the server, and the theater end is configured to:
updating the cinema model, sending the updated cinema model to the server, and replacing the cinema model stored in the server; and uploading the new movie resource to the server.
In this embodiment, like the client, there may be multiple theater ends, i.e. each physical cinema corresponds to one theater end. In practical application, the theater end includes an interaction device that exchanges data with the server. The merchant using the theater end can update the theater model through the interaction device and replace the theater model stored on the server. Updating and adjusting the theater model through the theater end keeps the user's experience consistent with the actual cinema, allows errors in the theater model to be found in time, and maintains the normal operation of the virtual reality theater system. The theater end can also upload new movie resources to the server, so that the server obtains in time high-quality movie resources that meet the playing requirements.
In one technical scheme, the viewing information further includes associated user information; the associated user information is a common-viewing request sent by a plurality of clients and at least includes the number of common-viewing clients. The server realizes the common-viewing request of multiple users according to the associated user information through the following steps:
s501: matching optional cinema information which accords with the number of the common film watching clients according to the associated user information;
s502: and pushing optional cinema information for a plurality of corresponding clients in the associated user information, and sending corresponding movie resources and cinema models to the plurality of clients after receiving confirmation information of the plurality of clients.
This embodiment can meet user requirements when several different clients make a common-viewing request. For example, if a group of users agree to share a viewing experience, viewing information containing associated user information may be sent to the server; after receiving it, the server determines from the associated user information the number of users who will watch together, matches a theater model that can seat that number of users watching together, and recommends it to each client. After each client confirms, the corresponding movie resources and theater model are sent to them in the manner described above. It should be noted that, in the technical scheme provided by the application, when the number of clients initiating common viewing is large and it is difficult to match a cinema that meets the common-viewing requirement, clients other than the common-viewing clients may be left out of consideration when the common-viewing request is made, so that more seats are available for the group.
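Matching selectable cinemas to the number of common-viewing clients (step S501) can be as simple as filtering on free seats; the record fields below are assumptions made for the example.

```python
def match_theaters(theaters: list[dict], group_size: int) -> list[dict]:
    """S501: return the selectable-cinema information whose free seats can
    accommodate the number of common-viewing clients."""
    return [t for t in theaters if t["free_seats"] >= group_size]

available = match_theaters(
    [{"theater_id": "T1", "free_seats": 2},
     {"theater_id": "T2", "free_seats": 12}],
    group_size=5,
)
print(available)   # only T2 is pushed to the associated clients (S502)
```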
In some embodiments of the present application, the head-mounted device has a built-in orientation sensor for detecting its movement and rotation, so that the client determines the initial display picture of the head-mounted device according to the user posture data detected by the orientation sensor. As shown in fig. 12, this specifically includes the following steps:
s601: acquiring an initial attitude angle of the head-mounted device through an orientation sensor;
s602: determining the position coordinates of the selected seat in the cinema model according to the selected seat information and the cinema model;
s603: acquiring a visual field range of the head-mounted equipment corresponding to the client under the initial attitude angle, wherein the visual field range comprises a display viewing cone and a viewing angle of the head-mounted equipment;
s604: and determining an initial display picture of the head-mounted equipment corresponding to the client according to the position coordinates and the field angle.
In this embodiment, the user's posture data is detected through the orientation sensor, so that the user's viewing posture can be taken into account when the initial display picture is determined just after the user puts on the head-mounted device; different postures then yield different viewing pictures, making the picture observed through the head-mounted device more realistic. Further, in the technical scheme provided by the application, the picture displayed on the screen of the head-mounted device can also be adjusted in real time according to the posture data detected by the orientation sensor; that is, during viewing, the client performs the adjustment according to the following steps:
s701: acquiring detection data of an orientation sensor of the head-mounted equipment in a film watching process;
s702: determining attitude data of the head-mounted equipment in the cinema model according to detection data of the orientation sensor;
s703: and adjusting the display picture of the head-mounted equipment according to the posture data.
According to the above steps, in the technical scheme provided by the application, if the posture of the user's head changes while the user is wearing the head-mounted device for the viewing experience, the orientation sensor built into the head-mounted device detects the movement and rotation of the head and determines the posture data, and the client adjusts the picture content displayed by the head-mounted device according to the posture data. For example, if during viewing the user's head turns left by a certain angle, the orientation sensor detects the corresponding posture data, and the client adjusts the virtual objects within the visual field range accordingly, so that the virtual objects on the left side of the theater model are displayed on the screen of the head-mounted device. Because clients may use different forms of head-mounted device, the way the display picture is adjusted according to the posture data also differs: some head-mounted devices only collect posture data, and the adjustment of the display picture is completed by the computer used together with them, while other devices complete the picture adjustment directly through a built-in processor. In the technical scheme provided by the application, because displaying the theater model together with the movie resources involves a large amount of data processing, in order to reduce the load on the processor of the head-mounted device and improve data processing efficiency, it is preferred that the computer completes the adjustment of the display picture, improving both processing efficiency and the fluency of the display.
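How posture data steers the displayed picture can be illustrated by converting yaw and pitch angles from the orientation sensor into a view direction for the virtual camera; this is a simplified sketch (roll ignored, Euler angles assumed) rather than the full pose handling of a real head-mounted device.

```python
import math

def view_direction(yaw_deg: float, pitch_deg: float) -> tuple[float, float, float]:
    """S702/S703: turn attitude data (yaw/pitch in degrees) into a unit view
    direction in the cinema model; -z is taken as 'towards the screen'."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            -math.cos(pitch) * math.cos(yaw))

# Head turned 30 degrees to the left: the view direction gains a negative x
# component, so virtual objects on the left of the cinema model enter the picture.
print(view_direction(yaw_deg=-30.0, pitch_deg=0.0))
```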
Further, in the virtual reality cinema system provided by the application, the viewing information also includes viewing-time information. The viewing-time information can be set in two ways. In the first, it is set by the theater end: the cinema determines the showing times according to the popularity of the film and the load on the server, and when generating a viewing instruction at the client the user directly selects the corresponding viewing time on the service interface. This mode simulates the scheduling of an actual movie theater; movie resources can be buffered before showing, so that the data processing load on the server is not increased by many users downloading one movie resource at the same time, server resources are used fully and serious network delay is avoided. In addition, having the theater end set the viewing-time information makes it convenient for many users to watch the film at the same time, so that users in different places can discuss the film content with each other, improving the viewing experience.
In the second way, the viewing-time information is set by the client: the user selects the showing time on the service interface according to his own schedule. This mode lets the user arrange viewing according to his own plans, so that use is not limited by time and viewing is freer. To avoid higher delay when the viewing time is determined in this way, in actual use the user-set viewing time can be applied to films with a lower download volume, reducing the load on the server. It can be understood that, in the technical scheme provided by the application, the corresponding viewing-time setting modes may be displayed on the service interface for the user to choose; to guide the user towards a suitable mode and avoid peaks of simultaneous users, discounts may be used, for example by appropriately increasing the fee to be paid for downloading a popular film with a large download volume.
In addition, in some embodiments of the present application, the virtual reality theater system further includes a payment platform, which has data connections with the server, the client and the theater end. In operation, after the user selects on the service interface the name of the movie to watch, the cinema in which to watch it and the seat in that cinema, the client generates a viewing instruction and sends it to the server. The server determines the order information to be paid according to the viewing instruction. The order information can be determined according to a charging standard set in advance by the theater end and stored on the server, or the viewing information can be sent to the theater end and the theater end can determine the order information according to the cinema's film schedule after receiving it.
After the order information is determined, the server sends it to the client through the payment platform and waits for the client to confirm the order and complete payment. When the client confirms the order information and completes payment, the server generates the viewing information according to the viewing instruction and sends the movie resources and the theater model to the client. At the same time, the payment platform transfers the amount paid by the user to the account corresponding to the theater end, completing the transaction.
Based on the virtual reality cinema system, the present application further provides a virtual reality cinema information processing method, as shown in fig. 13, the method includes:
s801: the client side obtains a film watching instruction and generates film watching information according to the film watching instruction, wherein the film watching information comprises selected cinema information, selected seat information and selected film information;
s802: the client sends the film watching information to a server, and the server stores film resources and cinema models;
s803: after receiving the film watching information, the server extracts film resources and cinema models corresponding to the film watching information;
s804: the server sends the extracted film resources and the cinema model to the client;
s805: and after receiving the film resources and the cinema model, the client displays the cinema model according to the selected seat information and shows the film resources at the screen position of the cinema model.
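The end-to-end flow of steps S801-S805 is summarised in the following self-contained sketch; the in-memory Server and Client classes and their data are placeholders invented purely for illustration.

```python
class Server:
    """Holds film resources and cinema models (S803/S804)."""
    def __init__(self):
        self.films = {"film_001": b"<video bytes>"}
        self.models = {"cinema_A_hall_1": {"seats": 200, "screen": "plane z=0"}}

    def extract(self, viewing_info: dict):
        # S803: extract the film resource and cinema model matching the viewing information.
        return (self.films[viewing_info["selected_film"]],
                self.models[viewing_info["selected_theater"]])

class Client:
    """Generates viewing information and renders the received model (S801, S805)."""
    def generate_viewing_info(self) -> dict:
        return {"selected_theater": "cinema_A_hall_1",
                "selected_seat": "row4_seat9",
                "selected_film": "film_001"}

    def render(self, model: dict, film: bytes, seat: str) -> str:
        return f"showing a {len(film)}-byte film in a {model['seats']}-seat hall from {seat}"

client, server = Client(), Server()
info = client.generate_viewing_info()      # S801: viewing instruction -> viewing information
film, model = server.extract(info)         # S802-S804: send, extract, return
print(client.render(model, film, info["selected_seat"]))   # S805: display at the screen position
```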
According to the technical scheme, the virtual reality cinema system comprises a server storing movie resources and cinema models and a plurality of client sides establishing communication connection with the server, wherein in practical application, a movie watching instruction of a user is obtained through the client sides, movie watching information comprising selected cinema information, selected seat information and selected movie information is generated according to the movie watching instruction, and then the movie watching information is sent to the server; after receiving the film watching information, the server extracts film resources and cinema models corresponding to the film watching information and sends the film resources and cinema models to the client; and finally, after receiving the film resources and the cinema model, the client displays the cinema model on the screen of the head-mounted equipment according to the selected seat information, and shows the film resources at the screen position of the cinema model.
The virtual reality cinema system provided by the application can use a virtual reality head-mounted device to simulate the real viewing effect inside a cinema. By simulating the selected seat position in the cinema model and the corresponding viewing angle, the virtual image shown on the screen is brought closer to the effect in an actual cinema, which greatly increases the amount of information the virtual reality cinema presents to the user and solves the problems that a conventional virtual cinema presents too little information and gives a weak sense of immersion.
In the above embodiments, the devices included in the client are not limited to virtual reality devices; the scheme can also be applied to any head-mounted device, including but not limited to virtual reality devices, augmented reality devices, game devices, mobile computing devices and other wearable computers.
The embodiments provided in the present application are only a few examples of the general concept of the present application, and do not limit the scope of the present application. Any other embodiments extended according to the scheme of the present application without inventive efforts will be within the scope of protection of the present application for a person skilled in the art.

Claims (9)

1. A virtual reality theatre system including a server having movie assets and theatre models stored thereon, and a plurality of clients in communication with the server, wherein:
the server is used for receiving the film watching information sent by the client and sending corresponding film resources and cinema models to the client according to the film watching information; the film watching information comprises selected cinema information, selected seat information and selected film information;
the client comprises a head-mounted device with a display screen and an optical module, and is used for acquiring a film watching instruction of a user, generating film watching information according to the film watching instruction and sending the film watching information to the server;
the client is further configured to receive the movie resources and the theater models sent by the server, display the theater models on a display screen of the head-mounted device according to the selected seat information, display the movie resources at a screen position of the theater models, and present the enlarged images on the display screen to the user through the optical module;
the client is configured to: determining the position coordinates of the selected seat in the cinema model according to the selected seat information and the cinema model;
acquiring a visual field range of the client corresponding to the head-mounted equipment, wherein the visual field range comprises a display viewing cone and a field angle of the head-mounted equipment;
determining an initial display picture of the head-mounted equipment corresponding to the client according to the position coordinates and the visual field range;
the client is further configured to:
determining the plane of the screen in the cinema model as a view plane;
according to the visual field range, determining the intersection surface of the display viewing cone and the view plane, and the projection onto that intersection surface of the virtual articles of the cinema model that fall within the visual field range;
determining a center point of the intersecting surface, a width Wp of the intersecting surface and a height Hp of the intersecting surface;
acquiring the screen width Ws and the screen height Hs of the head-mounted equipment, and displaying the projection in an initial display picture according to the following formula:
xs=(Ws-1)(xp/Wp+0.5);
ys=(Hs-1)(yp/Hp+0.5);
in the formula, xp and yp are coordinate points in the intersecting surface; xs, ys is a coordinate point in the head-mounted device screen.
2. The system of claim 1, wherein the client further comprises a sound device having two sound playback sources built therein, and wherein the client is further configured to:
determining the position coordinates of the selected seats in the cinema model according to the selected seat information;
determining the sound source position of the sound device in the cinema model, and calculating the sound field distance between the selected seat and the sound source position according to the position coordinates;
and adjusting the loudness of the two sound playback sources according to the sound field distance.
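As a purely illustrative sketch of the loudness adjustment in claim 2, the following Python snippet assumes a simple inverse-distance attenuation law; the claim itself only states that loudness is adjusted according to the sound field distance, so the attenuation model and all identifiers below are assumptions.

import math

def adjust_loudness(seat_pos, left_source_pos, right_source_pos,
                    base_loudness=1.0, reference_distance=1.0):
    """Return a loudness value for each of the two playback sources."""
    def attenuate(distance):
        # Clamp so that a seat very close to a source is not louder than the base level.
        return base_loudness * reference_distance / max(distance, reference_distance)

    return (attenuate(math.dist(seat_pos, left_source_pos)),
            attenuate(math.dist(seat_pos, right_source_pos)))

# Example: a seat near the middle of the hall, with sources at the two front corners.
left, right = adjust_loudness((5.0, 0.0, 8.0), (0.0, 2.0, 0.0), (10.0, 2.0, 0.0))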
3. The system of claim 1, wherein the server is further configured to add a character model to the cinema model based on the film watching information; the server is further configured to:
receiving the film watching information sent by a plurality of clients;
determining selected seat information corresponding to the client according to the film watching information;
adding a character model to a seat position corresponding to the selected seat information in the cinema model according to the selected seat information;
and sending the cinema model to which the character models have been added to the plurality of clients.
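The server-side step in claim 3 can be pictured as follows. This Python sketch assumes a dictionary-based cinema model and viewing-information layout purely for illustration; the patent does not prescribe these structures.

def add_character_models(cinema_model, viewing_infos):
    """Return the cinema model with a character model attached to each selected seat."""
    characters = dict(cinema_model.get("characters", {}))
    for info in viewing_infos:
        seat_id = info["selected_seat"]
        if seat_id in cinema_model["seats"]:
            # Claim 4 refines this choice by matching the model to the audience information.
            characters[seat_id] = info.get("character_model", "default_avatar")
    return {**cinema_model, "characters": characters}

# Example: two clients picked seats A3 and B5 in the same hall.
model = {"seats": {"A3": (1.0, 0.0, 2.0), "B5": (3.0, 0.0, 4.0)}, "characters": {}}
model = add_character_models(model, [{"selected_seat": "A3"}, {"selected_seat": "B5"}])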
4. The system of claim 3, wherein the film watching information further comprises audience information, and wherein the server is further configured to:
selecting a character model corresponding to the audience information according to the audience information;
and adding the character model to the cinema model at the seat position corresponding to the film watching information.
5. The system of claim 1, further comprising a cinema end establishing a communication connection with the server, the cinema end configured to:
updating the cinema model, sending the updated cinema model to the server, and replacing the cinema model stored in the server; and
uploading new film resources to the server.
6. The system according to claim 1, wherein the film watching information further comprises associated user information, the associated user information being a common viewing request sent by a plurality of the clients and comprising at least the number of common-viewing clients; the server is further configured to:
matching, according to the associated user information, selectable cinema information that accommodates the number of common-viewing clients;
and pushing the selectable cinema information to the corresponding plurality of clients in the associated user information, and sending the corresponding film resources and cinema model to the plurality of clients after receiving confirmation information from the plurality of clients.
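A minimal sketch of the matching step in claim 6, assuming that each selectable cinema advertises a free-seat count; the data layout is an illustrative assumption.

def match_cinemas(cinemas, group_size):
    """Return selectable cinema information whose halls can seat the whole co-viewing group."""
    return [cinema for cinema in cinemas if cinema["free_seats"] >= group_size]

# Example: only halls with room for the whole group are pushed to the clients.
options = match_cinemas(
    [{"name": "Hall A", "free_seats": 2}, {"name": "Hall B", "free_seats": 6}],
    group_size=4,
)  # -> [{'name': 'Hall B', 'free_seats': 6}]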
7. The system of claim 1, wherein the head-mounted device has a built-in orientation sensor for detecting movement and rotation of the head-mounted device, and wherein the client is further configured to:
acquiring an initial attitude angle of the head-mounted device through the orientation sensor;
determining the position coordinates of the selected seat in the cinema model according to the selected seat information and the cinema model;
acquiring a visual field range of the head-mounted device corresponding to the client at the initial attitude angle, wherein the visual field range comprises a display viewing cone and a field angle of the head-mounted device;
and determining an initial display picture of the head-mounted device corresponding to the client according to the position coordinates and the field angle.
8. The system of claim 7, wherein the client is further configured to:
acquiring detection data of the orientation sensor while the head-mounted device is in the film watching process;
determining the attitude data of the head-mounted device in the cinema model according to the detection data of the orientation sensor;
and adjusting the display picture of the head-mounted device according to the attitude data.
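Claims 7 and 8 together describe placing the virtual camera at the selected seat and then slaving it to the orientation sensor. The sketch below assumes yaw/pitch/roll attitude angles and a simple camera object; the Camera and Attitude classes are illustrative assumptions, not the patent's API.

from dataclasses import dataclass, field

@dataclass
class Attitude:
    yaw: float = 0.0
    pitch: float = 0.0
    roll: float = 0.0

@dataclass
class Camera:
    position: tuple = (0.0, 0.0, 0.0)
    attitude: Attitude = field(default_factory=Attitude)

def set_initial_display(camera, seat_position, initial_attitude):
    """Claim 7: place the camera at the selected seat, aligned with the initial attitude angle."""
    camera.position = seat_position
    camera.attitude = initial_attitude

def on_sensor_update(camera, detected_attitude):
    """Claim 8: during viewing, the display picture follows the detected head pose."""
    camera.attitude = detected_attitude

# Example: the viewer sits at the selected seat, then turns their head 30 degrees to the left.
cam = Camera()
set_initial_display(cam, seat_position=(5.0, 1.2, 6.0), initial_attitude=Attitude(yaw=0.0))
on_sensor_update(cam, Attitude(yaw=-30.0))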
9. A virtual reality cinema information processing method is characterized by comprising the following steps:
the client acquires a film watching instruction and generates film watching information according to the film watching instruction, wherein the film watching information comprises selected cinema information, selected seat information and selected film information;
the client sends the film watching information to a server, and the server stores film resources and cinema models;
after receiving the film watching information, the server extracts film resources and cinema models corresponding to the film watching information;
the server sends the extracted film resources and the cinema model to the client;
after receiving the film resources and the cinema model, the client displays the cinema model according to the selected seat information and shows the film resources at the screen position of the cinema model;
determining the position coordinates of the selected seat in the cinema model according to the selected seat information and the cinema model;
acquiring a visual field range of the head-mounted device corresponding to the client, wherein the visual field range comprises a display viewing cone and a field angle of the head-mounted device;
determining an initial display picture of the head-mounted device corresponding to the client according to the position coordinates and the visual field range; the client is further configured to:
determining the plane of the screen in the cinema model as a view plane;
according to the visual field range, determining the intersecting surface of the display viewing cone with the view plane, and determining the projection, onto the intersecting surface, of the virtual articles of the cinema model that fall within the visual field range;
determining a center point of the intersecting surface, a width Wp of the intersecting surface and a height Hp of the intersecting surface;
acquiring the screen width Ws and the screen height Hs of the head-mounted device, and displaying the projection in the initial display picture according to the following formulas:
xs=(Ws-1)(xp/Wp+0.5);
ys=(Hs-1)(yp/Hp+0.5);
where (xp, yp) is a coordinate point on the intersecting surface and (xs, ys) is the corresponding coordinate point on the screen of the head-mounted device.
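Read end to end, the method of claim 9 is a plain request/response exchange between client and server followed by local rendering. The Python sketch below illustrates only that exchange; the dictionary fields and the toy stores are assumptions made for illustration, as the claim does not prescribe a message format.

def build_viewing_info(cinema_id, seat_id, film_id):
    # Steps 1-2: the client turns the film watching instruction into film
    # watching information and sends it to the server.
    return {"cinema": cinema_id, "seat": seat_id, "film": film_id}

def serve_viewing_request(viewing_info, film_store, model_store):
    # Steps 3-4: the server extracts the film resource and cinema model that
    # match the film watching information and returns them to the client.
    return {
        "film": film_store[viewing_info["film"]],
        "cinema_model": model_store[viewing_info["cinema"]],
        "seat": viewing_info["seat"],
    }

# Example round trip with toy stores standing in for the server's storage.
films = {"film-42": "film-42.bin"}
models = {"cinema-1": {"seats": {"A3": (1.0, 0.0, 2.0)}, "screen": "front-wall"}}
response = serve_viewing_request(build_viewing_info("cinema-1", "A3", "film-42"), films, models)
# The client then displays the cinema model from seat A3 and shows the film at
# the screen position, as in the remaining steps of the claim.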
CN201711490235.3A 2017-12-30 2017-12-30 Virtual reality cinema system and information processing method Active CN109996060B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711490235.3A CN109996060B (en) 2017-12-30 2017-12-30 Virtual reality cinema system and information processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711490235.3A CN109996060B (en) 2017-12-30 2017-12-30 Virtual reality cinema system and information processing method

Publications (2)

Publication Number Publication Date
CN109996060A CN109996060A (en) 2019-07-09
CN109996060B true CN109996060B (en) 2021-09-03

Family

ID=67111517

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711490235.3A Active CN109996060B (en) 2017-12-30 2017-12-30 Virtual reality cinema system and information processing method

Country Status (1)

Country Link
CN (1) CN109996060B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113703599A (en) * 2020-06-19 2021-11-26 天翼智慧家庭科技有限公司 Screen curve adjustment system and method for VR
CN114189751B (en) * 2020-09-15 2023-03-24 杭州晨熹多媒体科技有限公司 Cinema system control method, server, proxy terminal, display terminal and cinema system control system
CN112162638B (en) * 2020-10-09 2023-09-19 咪咕视讯科技有限公司 Information processing method and server in Virtual Reality (VR) viewing

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IL127534A (en) * 1996-06-13 2001-08-26 Leuven K U Res & Dev Method and system for acquiring a three-dimensional shape description
US20010056574A1 (en) * 2000-06-26 2001-12-27 Richards Angus Duncan VTV system
IL166305A0 (en) * 2005-01-14 2006-01-15 Rafael Armament Dev Authority Automatic conversion from monoscopic video to stereoscopic video
US8953022B2 (en) * 2011-01-10 2015-02-10 Aria Glassworks, Inc. System and method for sharing virtual and augmented reality scenes between users and viewers
US9709806B2 (en) * 2013-02-22 2017-07-18 Sony Corporation Head-mounted display and image display apparatus
WO2015050194A1 (en) * 2013-10-02 2015-04-09 株式会社ニコン Optical assembly for head-mounted display and head-mounted display
CN104581119B (en) * 2014-12-31 2017-06-13 青岛歌尔声学科技有限公司 A kind of display methods of 3D rendering and a kind of helmet
US20170188021A1 (en) * 2015-12-24 2017-06-29 Meta Company Optical engine for creating wide-field of view fovea-based display
CN106534830B (en) * 2016-10-10 2019-04-26 上海蒙彤文化传播有限公司 A kind of movie theatre play system based on virtual reality
CN106598219A (en) * 2016-11-15 2017-04-26 歌尔科技有限公司 Method and system for selecting seat on the basis of virtual reality technology, and virtual reality head-mounted device
CN106681502A (en) * 2016-12-14 2017-05-17 深圳市豆娱科技有限公司 Interactive virtual-reality cinema system and interaction method
CN106843471A (en) * 2016-12-28 2017-06-13 歌尔科技有限公司 A kind of method of cinema system and viewing film based on virtual implementing helmet
CN107066102A (en) * 2017-05-09 2017-08-18 北京奇艺世纪科技有限公司 Support the method and device of multiple VR users viewing simultaneously

Also Published As

Publication number Publication date
CN109996060A (en) 2019-07-09

Similar Documents

Publication Publication Date Title
CN112104594B (en) Immersive interactive remote participation in-situ entertainment
CN112205005B (en) Adapting acoustic rendering to image-based objects
CN108701371B (en) Method and apparatus for providing virtual reality output and augmented reality output
US9684994B2 (en) Modifying perspective of stereoscopic images based on changes in user viewpoint
US10493360B2 (en) Image display device and image display system
CN109996060B (en) Virtual reality cinema system and information processing method
US11647354B2 (en) Method and apparatus for providing audio content in immersive reality
CN114402276A (en) Teaching system, viewing terminal, information processing method, and program
US20240077941A1 (en) Information processing system, information processing method, and program
US11206452B2 (en) Video display system, information processing apparatus, and video display method
US11189080B2 (en) Method for presenting a three-dimensional object and an associated computer program product, digital storage medium and a computer system
US20220036075A1 (en) A system for controlling audio-capable connected devices in mixed reality environments
US20230037102A1 (en) Information processing system, information processing method, and program
JP6831027B1 (en) Distribution system, video generator, and video generation method
JP2022173870A (en) Appreciation system, appreciation device, and program
TWM592332U (en) An augmented reality multi-screen array integration system
KR20140121973A (en) Method and apparatus for controlling contents consumption using certification for stereoscope
KR20090016234A (en) Virtual reality performance system and method for thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant