CN112423035A - Method for automatically extracting visual attention points of user when watching panoramic video in VR head display - Google Patents

Method for automatically extracting visual attention points of user when watching panoramic video in VR head display

Info

Publication number
CN112423035A
CN112423035A
Authority
CN
China
Prior art keywords
user
video
data
ray
head display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011219895.XA
Other languages
Chinese (zh)
Inventor
杨小敏
常谦
于博
张久屹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Bee Sparrow Network Technology Co ltd
Original Assignee
Shanghai Bee Sparrow Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Bee Sparrow Network Technology Co ltd filed Critical Shanghai Bee Sparrow Network Technology Co ltd
Priority to CN202011219895.XA
Publication of CN112423035A
Legal status: Withdrawn

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/251Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4668Learning process for intelligent management, e.g. learning user preferences for recommending movies for recommending content, e.g. movies

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Human Computer Interaction (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention provides a method for automatically extracting a user's visual attention points when watching a panoramic video in a VR head display, which comprises the following steps: step (1), cast a ray from the point the viewer is looking at; by the parametric equation of a ray it can be expressed as O + Dt; step (2), represent a triangle by its parametric equation (1 - u - v)V0 + uV1 + vV2; step (3), solve for whether the ray and the triangle intersect, which yields the equation O + Dt = (1 - u - v)V0 + uV1 + vV2. On one hand, the invention lets a customer conveniently experience an introduction to virtual reality product content with no learning cost; on the other hand, a sales consultant participates in and manages the user's experience process throughout, and sales strategy recommendations are provided by automatically analyzing the regions the user pays attention to. A big data acquisition module collects the user's behavior data over the whole session and provides strategy support for offline user analysis and refined operation.

Description

Method for automatically extracting visual attention points of user when watching panoramic video in VR head display
Technical Field
The invention relates to the technical field of data acquisition and analysis, and in particular to a method for automatically extracting the visual attention points of a user watching a panoramic video in a VR head display.
Background
VR (virtual reality) is a computer simulation technology for creating and experiencing virtual worlds: a computer generates a simulated environment, a systemic simulation of multi-source, information-fused, interactive three-dimensional dynamic views and physical behaviors, that immerses the user in that environment. At present, in domestic exhibition halls equipped with VR devices, users who do not normally come into contact with VR equipment often do not know how to use and experience it. This is especially true of high-end PC VR equipment: the handle that comes with the device adds to the user's operating difficulty, and a staff member is often needed to explain and teach before a basic interactive experience flow can be completed. A user who wants a good experience therefore usually has to spend time learning how to operate the device, the operator has to repeat the same explanation after every session, and when the user runs into an on-screen operation that cannot be carried out, the heavy equipment sometimes has to be taken off so a staff member can debug it before the experience can continue.
Inconvenient operation: a standalone VR experience system requires the experiencing user to select content on their own, which carries a high learning cost for someone touching VR-type equipment for the first time. Loose coupling with the marketing link: the user puts on the head display and explores the VR space independently, but the sales consultant does not know what the user is paying attention to and therefore cannot make targeted marketing recommendations. Missing user data acquisition link: during the experience the user generates a large amount of behavior data, and this data is a very important dimension for user behavior analysis and refined operation. In contrast with the maturity of online user behavior analysis, in-store VR experience facilities currently provide only a simple user experience, and the large amount of offline user behavior data produced during in-store VR digital experiences is not effectively collected and analyzed.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a method for automatically extracting a user's visual attention point when watching a panoramic video in a VR head display, so as to solve the problems proposed in the background art.
The technical problem solved by the invention is realized by adopting the following technical scheme: a method for automatically extracting visual attention points of a user when watching a panoramic video in a VR head display comprises the following steps:
step (1), casting a ray from the point the viewer is looking at, wherein by the parametric equation of a ray it can be expressed as O + Dt, where O is the coordinate position of the camera and D is the direction the camera is facing;
step (2), representing a triangle by its parametric equation (1 - u - v)V0 + uV1 + vV2, wherein V0, V1, V2 are the coordinates of the region of interest recorded in the description file, u and v are the weights of V1 and V2, and 1 - u - v is the weight of V0; a triangle and all the points inside it satisfy u >= 0, v >= 0, u + v <= 1;
step (3), solving for whether the ray and the triangle intersect, which yields the following equation: O + Dt = (1 - u - v)V0 + uV1 + vV2, where t, u, v are unknowns and everything else is known;
moving terms and rearranging, with t, u and v extracted as the unknowns, gives the following system of linear equations:
$$\begin{pmatrix} -D & V_1 - V_0 & V_2 - V_0 \end{pmatrix} \begin{pmatrix} t \\ u \\ v \end{pmatrix} = O - V_0$$
according to Cramer's rule, and writing E1 = V1 - V0, E2 = V2 - V0 and T = O - V0, the solutions that can be obtained are:
$$t = \frac{\begin{vmatrix} T & E_1 & E_2 \end{vmatrix}}{\begin{vmatrix} -D & E_1 & E_2 \end{vmatrix}}$$

$$u = \frac{\begin{vmatrix} -D & T & E_2 \end{vmatrix}}{\begin{vmatrix} -D & E_1 & E_2 \end{vmatrix}}$$

$$v = \frac{\begin{vmatrix} -D & E_1 & T \end{vmatrix}}{\begin{vmatrix} -D & E_1 & E_2 \end{vmatrix}}$$
combining the three solutions into one expression gives:
$$\begin{pmatrix} t \\ u \\ v \end{pmatrix} = \frac{1}{\begin{vmatrix} -D & E_1 & E_2 \end{vmatrix}} \begin{pmatrix} \begin{vmatrix} T & E_1 & E_2 \end{vmatrix} \\ \begin{vmatrix} -D & T & E_2 \end{vmatrix} \\ \begin{vmatrix} -D & E_1 & T \end{vmatrix} \end{pmatrix}$$
and according to the mixed (scalar triple) product formula:
|a b c| = (a × b) · c,
the above formula is rewritten as:
$$\begin{pmatrix} t \\ u \\ v \end{pmatrix} = \frac{1}{(D \times E_2) \cdot E_1} \begin{pmatrix} (T \times E_1) \cdot E_2 \\ (D \times E_2) \cdot T \\ (T \times E_1) \cdot D \end{pmatrix}$$
By the above operations, u and v of the triangle's parametric equation are obtained.
Step (4), if u and v satisfy u >= 0, v >= 0 and u + v <= 1, the ray falls inside the triangle; in the function corresponding to this text, that means the visual focus of the person watching the video lies inside one of our pre-marked (buried-point) regions, and the system automatically identifies that video region.
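As an illustrative numeric check of the formulas above (the concrete numbers are ours, not from the original disclosure): take the triangle V0 = (0,0,0), V1 = (1,0,0), V2 = (0,1,0) and a gaze ray with O = (0.25, 0.25, 1), D = (0, 0, -1). Then E1 = (1,0,0), E2 = (0,1,0), T = (0.25, 0.25, 1), and D × E2 = (1,0,0), so the denominator (D × E2) · E1 = 1. This gives u = (D × E2) · T = 0.25, v = (T × E1) · D = 0.25 and t = (T × E1) · E2 = 1. Since u >= 0, v >= 0 and u + v = 0.5 <= 1, the gaze ray hits the region, at O + Dt = (0.25, 0.25, 0).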
A system for automatically extracting a user's visual attention points when watching a panoramic video in a VR head display comprises terminal equipment in communication connection with a cloud server. The terminal equipment comprises a video processing end and a VR head display device; the cloud server comprises a streaming media server, a service background and a video editing tool. The streaming media server is used to distribute VR videos of different resolutions and bit rates to the client; the service background is used to summarize and analyze the user behavior data of all terminals and to generate reports. The video processing end comprises a head display device management module, a streaming media decoding and playing module, a position and posture synchronization module, a user visual analysis and sales-script recommendation module, and a user data acquisition module; the VR head display device comprises an 8K video playing component and a video-processing-end communication component.
The head display device management module detects all VR head display devices that can be found and connected on the same network; a salesperson selects one or more VR devices through the control-end app to perform routine operations such as playing and pausing content. The streaming media decoding and playing module soft-decodes the data sent from the cloud streaming media server and performs operations such as playing, pausing and fast-forwarding the video stream. The position and posture synchronization module acquires the user's head posture data and synchronizes the result to the decoding and playing module, so that the sales consultant can see the user's experience in real time. The user visual analysis and sales-script recommendation module analyzes the user's current region of interest through the 3D-space square patch labels preset in the video, combined with the calculation of the user's visual center vector, and recommends related sales scripts and sales skills on the app interface. The user data acquisition module collects the experience data generated by the user during use and transmits it to the background database for generating a data profile; the collected data include the experience duration and the attention dwell time for each region of interest.
The 8K video playing component uses built-in 8K hardware decoding, so an 8K video can be played directly without external equipment; combined with a 4K OLED screen and a 110-degree FOV, the video image shows no screen-door effect. The video-processing-end communication component handles data communication with the video processing end: the headset transmits posture data to the control-end app at a fixed frame rate, and the control end receives the data, verifies its temporal validity and performs position and rotation interpolation so that the two stay consistent; through this component the headset end also receives video control commands from the video processing end.
Compared with the prior art, the invention has the following beneficial effects: on one hand, a customer can conveniently experience an introduction to virtual reality product content with no learning cost; on the other hand, a sales consultant participates in and manages the user's experience process throughout, and sales strategy recommendations are provided by automatically analyzing the regions the user pays attention to. The big data acquisition module collects the user's behavior data over the whole session and provides strategy support for offline user analysis and refined operation.
Drawings
FIG. 1 is a system architecture diagram of the present invention.
FIG. 2 is a flow chart of the method of the present invention.
Detailed Description
In the description of the present invention, it should be noted that, unless otherwise expressly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly: the connection may, for example, be fixed, detachable or integral, mechanical or electrical, direct or indirect through an intermediate medium, or an internal communication between two elements.
Example 1
As shown in fig. 1 and fig. 2, a method for automatically extracting a visual attention point of a user when watching a panoramic video in a VR head display includes the following steps:
Step (1), generating a point-of-interest description file matched to the video: load the panoramic video in a video pre-editing tool and generate a point-of-interest configuration description file modeled on the format of a video subtitle file. The description file is a text file described line by line; each line has the structure time-period range, region coordinates, label, for example: 2:20-2:50 (20,30,40), (0,0,0), (100,100,100) A9 vehicle head-up display technology. The time represents how long the video has played from the beginning to the current moment; the label is a user-preset short prompt describing what is seen in the current region.
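A minimal sketch of a loader for such a description file (our own illustration: the exact field layout, and the assumption that the three parenthesized triples are the region's triangle vertices V0, V1, V2, are inferred from the example line above, not specified by the patent):

```python
import re
from dataclasses import dataclass

@dataclass
class PointOfInterest:
    start_s: float   # start of the time range, in seconds
    end_s: float     # end of the time range, in seconds
    v0: tuple        # triangle vertices of the region of interest
    v1: tuple
    v2: tuple
    label: str       # preset prompt describing the region

LINE_RE = re.compile(
    r"(\d+):(\d+)\s*-\s*(\d+):(\d+)"                        # time range, e.g. 2:20-2:50
    r"\s*\(([^)]*)\)\s*,\s*\(([^)]*)\)\s*,\s*\(([^)]*)\)"   # three vertex triples
    r"\s*(.*)"                                              # free-text label
)

def parse_poi_file(path: str) -> list[PointOfInterest]:
    pois = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            m = LINE_RE.match(line.strip())
            if not m:
                continue  # skip blank or malformed rows
            to_vec = lambda s: tuple(float(x) for x in s.split(","))
            start = int(m.group(1)) * 60 + int(m.group(2))
            end = int(m.group(3)) * 60 + int(m.group(4))
            pois.append(PointOfInterest(start, end,
                                        to_vec(m.group(5)),
                                        to_vec(m.group(6)),
                                        to_vec(m.group(7)),
                                        m.group(8).strip()))
    return pois
```

Applied to the example row above, this yields a record covering 140 s to 170 s with the label "A9 vehicle head-up display technology".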
Step (2), device search and connection: the control end and the VR head displays are set to run on the same local area network. The client app on each VR head display sends its device information to the control-end app at a fixed time interval; after receiving it, the control-end app shows the device name in a connection management list. Clicking a device in the list connects it; clicking it again cancels the connection (see the sketch below);
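A sketch of the announcement half of this step (UDP broadcast, the port number and the payload fields are our assumptions; the patent only says the client app sends its device information at a fixed interval on the same local area network):

```python
import json
import socket
import time

ANNOUNCE_PORT = 45678  # hypothetical discovery port

def announce_forever(device_name: str, interval_s: float = 1.0):
    """Broadcast this headset's device info so the control end can list it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    payload = json.dumps({"name": device_name, "type": "vr-headset"}).encode()
    while True:
        sock.sendto(payload, ("255.255.255.255", ANNOUNCE_PORT))
        time.sleep(interval_s)

# announce_forever("vr-headset-01")  # runs until the app is closed
```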
Step (3), selecting a video: when the control-end app enters the playback control page, it checks against the streaming media server whether the video resource list needs to be updated, and reloads and updates the video page when it does. After the video page has loaded, clicking a video plays it and simultaneously plays it in the client app on every connected VR head display; clicking the play/pause button controls playing and pausing, and clicking the left and right buttons switches videos;
Step (4), synchronized playback, which mainly covers two aspects. Synchronization of playing time: playing-time synchronization uses a frame synchronization technique; the control-end app sets a fixed time interval per frame and sends data to the client app at regular times, and after receiving a network packet the client synchronizes and corrects the playing time at both ends while responding to and processing the instruction, as sketched below.
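A minimal sketch of the client-side playing-time correction (the packet contents and the 50 ms snap threshold are our assumptions; the patent only says the client synchronizes and corrects the playing time after receiving each fixed-interval packet):

```python
import time

class PlaybackClock:
    """Local playback position that is periodically corrected by the control end."""

    def __init__(self):
        self.position = 0.0                 # seconds into the video
        self.last_update = time.monotonic()

    def tick(self):
        """Advance the local position by the wall-clock time elapsed."""
        now = time.monotonic()
        self.position += now - self.last_update
        self.last_update = now

    def on_sync_packet(self, control_position: float, threshold: float = 0.05):
        """Correct local playback time from a control-end frame packet."""
        self.tick()
        drift = control_position - self.position
        if abs(drift) > threshold:          # snap only when drift is noticeable
            self.position = control_position
```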
Synchronization of the viewing angle: the client app on the VR head display periodically synchronizes the attitude quaternion of its main camera to the control-end app, and the control-end app applies interpolation to reflect the client's viewing angle in its preview window.
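The patent only says the control end interpolates the received attitudes; spherical linear interpolation (slerp) is one common way to do that. A minimal sketch under that assumption (function and variable names are ours):

```python
import numpy as np

def slerp(q0: np.ndarray, q1: np.ndarray, alpha: float) -> np.ndarray:
    """Spherical linear interpolation between unit quaternions (x, y, z, w)."""
    dot = float(np.dot(q0, q1))
    if dot < 0.0:                    # flip to take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                 # nearly identical: a linear blend is stable
        q = q0 + alpha * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)           # angle between the two attitudes
    return (np.sin((1.0 - alpha) * theta) * q0
            + np.sin(alpha * theta) * q1) / np.sin(theta)

# Each time a headset packet arrives, ease the preview camera toward the
# newly received attitude instead of snapping to it.
shown = np.array([0.0, 0.0, 0.0, 1.0])                 # attitude currently previewed
received = np.array([0.0, 0.3826834, 0.0, 0.9238795])  # latest packet: 45° of yaw
shown = slerp(shown, received, 0.2)                    # smooth follow per tick
```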
Step (5), user visual attention: judge the intersection of the camera's gaze vector with the point-of-interest regions, and automatically extract the user's attention-point information from whether the gaze vector intersects an interest region. The specific calculation is as follows: cast a ray from the point being looked at; by the parametric equation of a ray, it can be expressed as O + Dt, where O is the coordinate position of the camera and D is the direction the camera is facing.
The parametric equation representing a triangle is (1 - u - v)V0 + uV1 + vV2, where V0, V1, V2 are the coordinates of the region of interest recorded in the description file, u and v are the weights of V1 and V2, and 1 - u - v is the weight of V0. A triangle and all the points inside it satisfy u >= 0, v >= 0, u + v <= 1.
Solving for whether the ray and the triangle intersect then reduces to the following equation, in which t, u and v are unknowns and everything else is known:
O + Dt = (1 - u - v)V0 + uV1 + vV2
Moving terms and rearranging, with t, u and v extracted as the unknowns, gives the following system of linear equations:
$$\begin{pmatrix} -D & V_1 - V_0 & V_2 - V_0 \end{pmatrix} \begin{pmatrix} t \\ u \\ v \end{pmatrix} = O - V_0$$
According to Cramer's rule, and writing E1 = V1 - V0, E2 = V2 - V0 and T = O - V0, the solutions are respectively:
$$t = \frac{\begin{vmatrix} T & E_1 & E_2 \end{vmatrix}}{\begin{vmatrix} -D & E_1 & E_2 \end{vmatrix}}$$

$$u = \frac{\begin{vmatrix} -D & T & E_2 \end{vmatrix}}{\begin{vmatrix} -D & E_1 & E_2 \end{vmatrix}}$$

$$v = \frac{\begin{vmatrix} -D & E_1 & T \end{vmatrix}}{\begin{vmatrix} -D & E_1 & E_2 \end{vmatrix}}$$
Combining the three solutions into one expression gives:
$$\begin{pmatrix} t \\ u \\ v \end{pmatrix} = \frac{1}{\begin{vmatrix} -D & E_1 & E_2 \end{vmatrix}} \begin{pmatrix} \begin{vmatrix} T & E_1 & E_2 \end{vmatrix} \\ \begin{vmatrix} -D & T & E_2 \end{vmatrix} \\ \begin{vmatrix} -D & E_1 & T \end{vmatrix} \end{pmatrix}$$
Then, according to the mixed (scalar triple) product formula:
|a b c| = (a × b) · c
rewriting the above formula gives:
$$\begin{pmatrix} t \\ u \\ v \end{pmatrix} = \frac{1}{(D \times E_2) \cdot E_1} \begin{pmatrix} (T \times E_1) \cdot E_2 \\ (D \times E_2) \cdot T \\ (T \times E_1) \cdot D \end{pmatrix}$$
By the above operations, u and v of the triangle's parametric equation are obtained. If u and v satisfy u >= 0, v >= 0 and u + v <= 1, the ray falls inside the triangle; in the function corresponding to this text, that means the visual focus of the person watching the video lies inside one of our buried-point regions, and the system automatically identifies that video region.
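A compact sketch that transcribes the intersection test above into code (function and variable names are ours; the patent gives no source code):

```python
import numpy as np

def gaze_hits_region(O, D, V0, V1, V2, eps=1e-8):
    """Ray/triangle test per the final formula above.

    O: camera position, D: gaze direction, V0..V2: vertices of the region
    of interest from the description file. Returns (hit, t, u, v).
    """
    O, D = np.asarray(O, float), np.asarray(D, float)
    V0, V1, V2 = (np.asarray(v, float) for v in (V0, V1, V2))
    E1, E2, T = V1 - V0, V2 - V0, O - V0
    P = np.cross(D, E2)
    det = np.dot(P, E1)            # (D x E2) . E1
    if abs(det) < eps:             # ray parallel to the triangle's plane
        return False, 0.0, 0.0, 0.0
    Q = np.cross(T, E1)
    u = np.dot(P, T) / det         # (D x E2) . T
    v = np.dot(Q, D) / det         # (T x E1) . D
    t = np.dot(Q, E2) / det        # (T x E1) . E2
    hit = (u >= 0.0) and (v >= 0.0) and (u + v <= 1.0) and (t >= 0.0)
    return hit, t, u, v

# The numeric check from the disclosure section above:
hit, t, u, v = gaze_hits_region((0.25, 0.25, 1), (0, 0, -1),
                                (0, 0, 0), (1, 0, 0), (0, 1, 0))
print(hit, t, u, v)  # True 1.0 0.25 0.25
```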
Step (6), recommending sales explanation content: if an attention point is extracted, the background is searched using the preset label information, and sales content recommendations are obtained in real time.
Step (7), user behavior data statistics: the statistics include the user's visual center, attention dwell time, the user's questions, geographic location information, topics of interest to the user, and so on.
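One possible shape for such a record (field names are our own illustration of the statistics listed in step (7), not a schema from the patent):

```python
from dataclasses import dataclass, field

@dataclass
class BehaviorRecord:
    """One behavior record per experience session, sent to the background database."""
    device_id: str
    session_s: float                               # total experience duration, seconds
    geo: tuple                                     # geographic location (lat, lon)
    dwell_s: dict = field(default_factory=dict)    # region label -> attention dwell time
    questions: list = field(default_factory=list)  # questions the user asked
    topics: list = field(default_factory=list)     # topics of interest

record = BehaviorRecord("vr-headset-01", 312.0, (31.23, 121.47))
record.dwell_s["A9 vehicle head-up display technology"] = 18.5
```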
Step (8), user data analysis: the background analyzes and classifies the statistical data to generate a dashboard. Mining this data provides data support for improving subsequent marketing strategy.
The foregoing shows and describes the general principles, main features and advantages of the present invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above; the embodiments and description merely illustrate the principle of the invention, and various changes and modifications may be made without departing from the spirit and scope of the invention, all of which fall within the scope of the claimed invention. The scope of the invention is defined by the appended claims and their equivalents.

Claims (4)

1. A method for automatically extracting a visual attention point of a user when watching a panoramic video in a VR head display is characterized in that: the method comprises the following steps:
step (1), casting a ray from the point the viewer is looking at, wherein by the parametric equation of a ray it can be expressed as O + Dt, where O is the coordinate position of the camera and D is the direction the camera is facing;
step (2), representing a triangle by its parametric equation (1 - u - v)V0 + uV1 + vV2, wherein V0, V1, V2 are the coordinates of the region of interest recorded in the description file, u and v are the weights of V1 and V2, and 1 - u - v is the weight of V0; a triangle and all the points inside it satisfy u >= 0, v >= 0, u + v <= 1;
step (3), solving for whether the ray and the triangle intersect, which yields the following equation: O + Dt = (1 - u - v)V0 + uV1 + vV2, where t, u, v are unknowns and everything else is known;
moving terms and rearranging, with t, u and v extracted as the unknowns, gives the following system of linear equations:
$$\begin{pmatrix} -D & V_1 - V_0 & V_2 - V_0 \end{pmatrix} \begin{pmatrix} t \\ u \\ v \end{pmatrix} = O - V_0$$
according to Cramer's rule, and writing E1 = V1 - V0, E2 = V2 - V0 and T = O - V0, the solutions that can be obtained are:
$$t = \frac{\begin{vmatrix} T & E_1 & E_2 \end{vmatrix}}{\begin{vmatrix} -D & E_1 & E_2 \end{vmatrix}}$$

$$u = \frac{\begin{vmatrix} -D & T & E_2 \end{vmatrix}}{\begin{vmatrix} -D & E_1 & E_2 \end{vmatrix}}$$

$$v = \frac{\begin{vmatrix} -D & E_1 & T \end{vmatrix}}{\begin{vmatrix} -D & E_1 & E_2 \end{vmatrix}}$$
combining the three solutions into one expression gives:
$$\begin{pmatrix} t \\ u \\ v \end{pmatrix} = \frac{1}{\begin{vmatrix} -D & E_1 & E_2 \end{vmatrix}} \begin{pmatrix} \begin{vmatrix} T & E_1 & E_2 \end{vmatrix} \\ \begin{vmatrix} -D & T & E_2 \end{vmatrix} \\ \begin{vmatrix} -D & E_1 & T \end{vmatrix} \end{pmatrix}$$
and according to the mixed (scalar triple) product formula:
|a b c| = (a × b) · c,
the above formula is rewritten as:
$$\begin{pmatrix} t \\ u \\ v \end{pmatrix} = \frac{1}{(D \times E_2) \cdot E_1} \begin{pmatrix} (T \times E_1) \cdot E_2 \\ (D \times E_2) \cdot T \\ (T \times E_1) \cdot D \end{pmatrix}$$
by the above operations, u and v of the triangle's parametric equation are obtained;
step (4), if u and v satisfy u >= 0, v >= 0 and u + v <= 1, the ray falls inside the triangle; in the function corresponding to this text, that means the visual focus of the person watching the video lies inside one of the pre-marked buried-point regions, and the system automatically identifies that video region.
2. A system for automatically extracting a user's visual attention points when watching a panoramic video in a VR head display, characterized in that: the system comprises terminal equipment in communication connection with a cloud server; the terminal equipment comprises a video processing end and a VR head display device, and the cloud server comprises a streaming media server, a service background and a video editing tool; the streaming media server is used to distribute VR videos of different resolutions and bit rates to the client; the service background is used to summarize and analyze the user behavior data of all terminals and to generate reports; the video processing end comprises a head display device management module, a streaming media decoding and playing module, a position and posture synchronization module, a user visual analysis and sales-script recommendation module and a user data acquisition module; and the VR head display device comprises an 8K video playing component and a video-processing-end communication component.
3. The system for automatically extracting a user's visual attention points when watching a panoramic video in a VR head display according to claim 2, characterized in that: the head display device management module detects all VR head display devices that can be found and connected on the same network, and a salesperson selects one or more VR devices through the control-end app to perform routine operations such as playing and pausing content; the streaming media decoding and playing module soft-decodes the data sent from the cloud streaming media server and performs operations such as playing, pausing and fast-forwarding the video stream; the position and posture synchronization module acquires the user's head posture data and synchronizes the result to the decoding and playing module, so that the sales consultant can see the user's experience in real time; the user visual analysis and sales-script recommendation module analyzes the user's current region of interest through the 3D-space square patch labels preset in the video, combined with the calculation of the user's visual center vector, and recommends related sales scripts and sales skills on the app interface; and the user data acquisition module collects the experience data generated by the user during use and transmits it to the background database for generating a data profile, the collected data including the experience duration and the attention dwell time for each region of interest.
4. The system for automatically extracting a user's visual attention points when watching a panoramic video in a VR head display according to claim 2, characterized in that: the 8K video playing component uses built-in 8K hardware decoding, so an 8K video can be played directly without external equipment, and, combined with a 4K OLED screen and a 110-degree FOV, the video image shows no screen-door effect; the video-processing-end communication component handles data communication with the video processing end, the headset transmitting posture data to the control-end app at a fixed frame rate while the control end receives the data, verifies its temporal validity and performs position and rotation interpolation to keep the two consistent; and through this component the headset end also receives video control commands from the video processing end.
CN202011219895.XA 2020-11-05 2020-11-05 Method for automatically extracting visual attention points of user when watching panoramic video in VR head display Withdrawn CN112423035A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011219895.XA CN112423035A (en) 2020-11-05 2020-11-05 Method for automatically extracting visual attention points of user when watching panoramic video in VR head display

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011219895.XA CN112423035A (en) 2020-11-05 2020-11-05 Method for automatically extracting visual attention points of user when watching panoramic video in VR head display

Publications (1)

Publication Number Publication Date
CN112423035A 2021-02-26

Family

ID=74828128

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011219895.XA Withdrawn CN112423035A (en) 2020-11-05 2020-11-05 Method for automatically extracting visual attention points of user when watching panoramic video in VR head display

Country Status (1)

Country Link
CN (1) CN112423035A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104102357A (en) * 2014-07-04 2014-10-15 Tcl集团股份有限公司 Method and device for checking 3D (three-dimensional) models in virtual scenes
CN109547753A (en) * 2014-08-27 2019-03-29 苹果公司 The method and system of at least one image captured by the scene camera of vehicle is provided
CN106683197A (en) * 2017-01-11 2017-05-17 福建佳视数码文化发展有限公司 VR (virtual reality) and AR (augmented reality) technology fused building exhibition system and VR and AR technology fused building exhibition method
CN106991590A (en) * 2017-03-17 2017-07-28 北京杰出东方文化传媒有限公司 A kind of VR sales of automobile system
CN107181930A (en) * 2017-04-27 2017-09-19 新疆微视创益信息科技有限公司 For the monitoring system and its monitoring method of virtual reality
CN111587086A (en) * 2017-11-14 2020-08-25 维韦德视觉公司 Systems and methods for visual field analysis
US20190385364A1 (en) * 2017-12-12 2019-12-19 John Joseph Method and system for associating relevant information with a point of interest on a virtual representation of a physical object created using digital input data
CN111340598A (en) * 2020-03-20 2020-06-26 北京爱笔科技有限公司 Method and device for adding interactive label

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114786037A (en) * 2022-03-17 2022-07-22 青岛虚拟现实研究院有限公司 Self-adaptive coding compression method facing VR projection
CN114786037B (en) * 2022-03-17 2024-04-12 青岛虚拟现实研究院有限公司 VR projection-oriented adaptive coding compression method

Similar Documents

Publication Publication Date Title
CN111935491B (en) Live broadcast special effect processing method and device and server
CN110570698B (en) Online teaching control method and device, storage medium and terminal
CN109478097B (en) Method and system for providing information and computer program product
CN107633441A (en) Commodity in track identification video image and the method and apparatus for showing merchandise news
CN108322832B (en) Comment method and device and electronic equipment
CN102572539A (en) Automatic passive and anonymous feedback system
CN114097248B (en) Video stream processing method, device, equipment and medium
Müller et al. PanoVC: Pervasive telepresence using mobile phones
Veas et al. Techniques for view transition in multi-camera outdoor environments
Rossi et al. Behavioural analysis in a 6-DoF VR system: Influence of content, quality and user disposition
CN112423035A (en) Method for automatically extracting visual attention points of user when watching panoramic video in VR head display
CN110334620A (en) Appraisal procedure, device, storage medium and the electronic equipment of quality of instruction
CN113269781A (en) Data generation method and device and electronic equipment
JP2020150519A (en) Attention degree calculating device, attention degree calculating method and attention degree calculating program
CN113220130A (en) VR experience system for party building and equipment thereof
Gao et al. Real-time visual representations for mixed reality remote collaboration
CN106484118B (en) Augmented reality interaction method and system based on fixed trigger source
CN112288876A (en) Long-distance AR identification server and system
JP6609078B1 (en) Content distribution system, content distribution method, and content distribution program
CN111311995A (en) Remote teaching system and teaching method based on augmented reality technology
CN113076436B (en) VR equipment theme background recommendation method and system
CN210804322U (en) Virtual and real scene operation platform based on 5G and MR technology
Morais et al. A content-based viewport prediction model
Alallah et al. Exploring the need and design for situated video analytics
CN108989327B (en) Virtual reality server system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210226