CN108762482B - Data interaction method and system between large screen and augmented reality glasses - Google Patents


Info

Publication number
CN108762482B
Authority
CN
China
Prior art keywords
large screen
glasses
data
interaction
augmented reality
Prior art date
Legal status
Active
Application number
CN201810338583.7A
Other languages
Chinese (zh)
Other versions
CN108762482A
Inventor
袁晓如
施悦凝
Current Assignee
Peking University
Original Assignee
Peking University
Priority date
Application filed by Peking University
Priority to CN201810338583.7A
Publication of CN108762482A
Application granted
Publication of CN108762482B
Legal status: Active
Anticipated expiration

Classifications

    • G (Physics); G06 (Computing; Calculating or Counting); G06F (Electric Digital Data Processing)
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser

Abstract

The invention discloses a method and a system for data interaction between a large screen and augmented reality glasses, in which network communication is established between the augmented reality glasses and the large screen, and data communication uses a communication model of a web page end, a server end and a glasses end. First, the server is initialized, its IP address is obtained and a port number is set, completing the forwarding function or other data processing functions; then the large screen and the augmented reality glasses are initialized, the URL to be accessed by network communication is set, and basic communication processing functions are completed. When one client triggers an interaction event it sends a communication message, the message is forwarded through the server, and the other end processes it according to its content. The method and the system organically combine large-screen technology with augmented-reality-glasses technology, so that the interaction better meets users' needs: positioning and selection on the large screen are completed conveniently and efficiently, views on the large screen can be controlled, richer three-dimensional information is provided to the user, and the data are expressed more vividly.

Description

Data interaction method and system between large screen and augmented reality glasses
Technical Field
The invention relates to the fields of data and visual analysis and human-computer interaction, in particular to a method and a system for data interaction between a large screen and augmented reality glasses.
Background
With the development of multimedia technology and the arrival of the big-data era, visual interaction technology plays an increasingly important role in information display across industries. In the big-data era, traditional display devices are limited by their resolution and can hardly meet people's needs for visually displaying large-scale data sets.
Large-screen display technology (a "large screen" refers to the screen of a direct-view color TV or a rear-projection TV, generally with a diagonal size above 40 inches) is characterized by large area, high brightness and high resolution. It can offer users an ultra-high-definition experience, supports the creation, manipulation, exploration and annotation of more views, and is well suited to the visual display of large-scale data sets. However, enabling data interaction between a remote user and the large screen raises many challenges, and the conventional mouse and keyboard can hardly meet people's needs for interacting with a large screen.
In academia, common solutions for the scenario of data interaction between a remote user and a large screen generally include:
1. detecting a person's interactive behavior with sensors and tracking/positioning equipment to acquire data interaction information;
2. carrying out network communication through a smart mobile device to acquire data interaction information.
With the development of mixed reality (MR; a further development of virtual reality that enhances the realism of the user experience by presenting virtual scene information within a real scene and building an interactive feedback loop among the real world, the virtual world and the user) and of increasingly capable head-mounted devices such as Microsoft HoloLens (Microsoft's first untethered holographic computer, which lets a user interact with digital content and with holograms in the surrounding real environment), integrated functions such as gaze and gestures can facilitate visual interaction, and augmented reality glasses offer excellent holographic display capability that can present off-screen information to the user more vividly. How to combine augmented reality glasses with the large screen, from the two angles of augmentation and interaction, to provide users with richer and more vivid visual interaction is a novel and challenging constructive problem.
Disclosure of Invention
In view of the deficiencies of the prior art, the invention aims to provide a method and a system for data interaction between a large screen and augmented reality glasses. Addressing the challenges of interacting with a large screen, it uses augmented reality glasses to carry out data interaction with the large screen and solves the problem that large screens are difficult to interact with: by wearing the glasses a person can directly operate on the data shown on the large screen, generate richer three-dimensional virtual views, and present detailed information to the user more vividly.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
a data interaction method between a large screen and augmented reality glasses comprises the following steps:
establishing network communication between the augmented reality glasses and the large screen;
the augmented reality glasses need to support network communication and have built-in CPU and GPU units with a certain amount of computing capability;
the network communication uses a communication model of a webpage end, a server end and a glasses end to carry out data communication, and the webpage end and the glasses end carry out two-way communication;
firstly, initializing a server, acquiring an IP address of the server, setting a port number, and completing a forwarding function or other data processing functions;
respectively initializing a large screen and augmented reality glasses as two client sides;
setting the URL to be accessed by network communication to that address and port number, and completing basic communication processing functions;
and respectively defining a message sending function and a call-back function at the webpage end and the glasses end according to each function, so that one client can send a communication message when triggering an interaction event and forward the communication message through the server, and the other end receives a message instruction and performs corresponding processing.
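The per-function message sending and callback scheme described above can be sketched as follows (a minimal illustration; the class and all names are hypothetical, since the invention does not prescribe a particular implementation):

```python
import json

class MessageBus:
    """Minimal sketch of the send-function / callback scheme at one client."""

    def __init__(self, send_raw):
        self._send_raw = send_raw   # e.g. a WebSocket send function
        self._callbacks = {}        # interface name -> handler

    def on(self, name, handler):
        # register a callback function for one message interface
        self._callbacks[name] = handler

    def send(self, src, dst, name, data):
        # called when an interaction event is triggered on this client
        self._send_raw(json.dumps(
            {"from": src, "to": dst, "name": name, "data": data}))

    def dispatch(self, raw):
        # called when the server forwards a message to this client
        msg = json.loads(raw)
        handler = self._callbacks.get(msg["name"])
        if handler is not None:
            handler(msg["data"])
```

Both the web page end and the glasses end would hold one such object; the server end only relays the raw message strings between them.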
Further, according to the data interaction method between the large screen and the augmented reality glasses, when data communication is carried out and the communication data scale is not large, the server only needs to forward information between the two clients; when the communication data size is large, the server side can process the data from the webpage side and then forward the data to the glasses side in consideration of the storage and calculation capacity of the glasses device.
Further, according to the data interaction method between the large screen and the augmented reality glasses, the visual webpage works realized by using the traditional method are used as the webpage end displayed on the large screen;
a back-end program which is different from a webpage end and a glasses end is created, and the back-end program has the function of forwarding data among a plurality of client ends.
Further, according to the data interaction method between the large screen and the augmented reality glasses, when network communication is established, visual works and augmented reality requirements of the visual works are integrally evaluated, interaction functions of the glasses are designed, and message interfaces are designed according to interaction types and data.
Further, in the data interaction method between the large screen and the augmented reality glasses described above, the functions for interacting through the glasses include:
enabling a user to see a virtual object placed in the space through the augmented reality glasses, and further interacting with the virtual object;
the control assembly in the visual webpage work is converted into a virtual control assembly, and the virtual control assembly is presented through the augmented reality glasses and further interacts with the virtual control assembly;
and realizing interaction with a virtual control component in the augmented reality glasses through gaze and/or gestures, and transmitting data generated by the interaction to a webpage end through network communication so as to control content displayed in a page on a large screen.
Further, in the data interaction method between the large screen and the augmented reality glasses described above, the functions for interacting through the glasses include:
a virtual object is set in the augmented reality glasses as a virtual substitute object representing a certain visual figure or object in the visual web-page work at the web page end;
the user performs interactive operations on the virtual substitute object in the glasses, and the result data generated by the operations are transmitted to the web page end, so that the result of the user's interaction through the glasses is reflected in the page content displayed on the large screen;
the interactive operations include clicking, dragging, rotating and zooming.
Further, in the data interaction method between the large screen and the augmented reality glasses described above, first, the virtual control component is selected and activated by gaze,
then, defining the interactive operation corresponding to various gestures of the person,
the gestures comprise basic gestures together with information about how the basic gestures change in duration, position and angle during the user's actual use,
the operation data information generated by interaction is converted into a customized message format and is sent to the large screen end,
and the webpage end of the large screen generates a corresponding interaction effect according to the message content.
Further, according to the data interaction method between the large screen and the augmented reality glasses, when the interaction operation is carried out, according to the type of the interaction object, if the change of the attribute or the state value is detected, the numerical values generated by the operation are recorded; and if the gesture is the moving distance, calculating an actual operation result numerical value according to the interactive meaning and the space displacement of the gesture.
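The two cases above can be sketched as one helper (the function name, interaction kinds and message fields are hypothetical):

```python
def operation_value(kind, payload):
    """Derive the result value of an interactive operation.

    "state": an attribute or state value changed, so record the new value;
    "drag":  a gesture moved through space, so convert the spatial
             displacement into an actual result value.
    """
    if kind == "state":
        return payload["value"]
    if kind == "drag":
        dx, dy, dz = payload["displacement"]
        distance = (dx * dx + dy * dy + dz * dz) ** 0.5
        # scale maps spatial distance onto the interaction's meaning
        # (e.g. slider units per metre); assumed to accompany the payload
        return distance * payload.get("scale", 1.0)
    raise ValueError(f"unknown interaction kind: {kind}")
```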
Further, according to the data interaction method between the large screen and the augmented reality glasses, when the glasses end sends a message, a message object is established, wherein the message object comprises identifiers of two communication parties, an interface name and specific message data, and the attribute of the specific message data is used for transmitting operation data information generated by interaction;
and after the webpage end processes the received message, analyzing the content, taking out the interactive data, and generating a corresponding interactive effect according to the function.
Further, in the data interaction method between the large screen and the augmented reality glasses described above, the functions for interacting through the glasses include:
a large-screen direct-positioning interaction mode is adopted, in which the user interacts with a specific position on the large screen using gaze and gesture operations, wherein:
gaze replaces the cursor and gestures replace the left and right mouse buttons, simulating mouse operation;
using the position, size and shape of the large screen, a virtual surface is generated that is attached to the screen surface and has the same size and shape as the large screen; within the glasses this surface serves as an interactable object simulating the large-screen desktop that actually exists in space, and is also used to display a simulated cursor;
the user's gaze and gesture operations on the virtual surface are equivalent to operations at the same position on the large screen;
the virtual surface is used to acquire the spatial position of the user's gaze point, and the gaze point's coordinates relative to the surface are calculated from the surface's length and width, yielding the gaze point's relative coordinates at the web page end.
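The coordinate calculation above can be sketched as follows, using the names from Figs. 4a to 4b (HitPosition, ScreenPosition, screen width and height); the 2D treatment and the axis conventions are assumptions, since the patent does not fix them:

```python
def gaze_to_page_coords(hit_position, screen_position,
                        screen_width, screen_height,
                        page_width, page_height):
    """Map a gaze point on the virtual surface to web-page pixel coordinates."""
    # RelatedPosition: the gaze hit relative to the virtual surface's centre
    rel_x = hit_position[0] - screen_position[0]
    rel_y = hit_position[1] - screen_position[1]
    # normalise to [0, 1] using the surface's width and height
    u = rel_x / screen_width + 0.5
    v = 0.5 - rel_y / screen_height   # world y points up, page y points down
    return (u * page_width, v * page_height)
```

For example, a gaze hit at the surface's centre maps to the centre of the web page, and a hit at the top-right corner maps to the page's top-right pixel.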
A system for data interaction between a large screen and augmented reality glasses, comprising:
augmented reality glasses serving as the glasses end, having a network communication module and built-in CPU and GPU units, with an initialization module that initializes the glasses end as one client;
a large screen for displaying a visual web-page work realized by conventional methods, the work being displayed on the large screen as the web page end, with an initialization module that initializes the web page end as another client;
a message sending module and a callback processing module, provided at both the web page end and the glasses end for bidirectional communication between them, wherein:
the message sending module establishes a message object containing the identifiers of both communication parties, an interface name and the specific message data, the attribute of the specific message data being used to transmit the operation data generated by interaction,
and the callback processing module parses the content of a received message, extracts the interaction data and generates the corresponding interaction effect according to the function;
a server end with a communication processing module for forwarding communication messages between the web page end and the glasses end in the web-page-end, server-end and glasses-end communication model;
a server initialization module for obtaining the server's IP address and setting a port number, completing the forwarding function or other data processing functions;
and a back-end module, distinct from the web page end and the glasses end, for forwarding data among multiple clients.
Further, according to the data interaction system between the large screen and the augmented reality glasses, when the communication data scale is not large, the server only needs to forward information between the two clients; when the communication data size is large, the server side can process the data from the webpage side and then forward the data to the glasses side in consideration of the storage and calculation capacity of the glasses device.
Further, according to the data interaction system between the large screen and the augmented reality glasses, after the visual works and the augmented reality requirements thereof are integrally evaluated, the function of interaction by the glasses is designed, and each message interface is designed according to the type and data of the interaction.
Further, in the data interaction system between the large screen and the augmented reality glasses described above, the functions for interacting through the glasses include:
enabling a user to see a virtual object placed in the space through the augmented reality glasses, and further interacting with the virtual object;
the control assembly in the visual webpage work is converted into a virtual control assembly, and the virtual control assembly is presented through the augmented reality glasses and further interacts with the virtual control assembly;
and realizing interaction with a virtual control component in the augmented reality glasses through gaze and/or gestures, and transmitting data generated by the interaction to a webpage end through network communication so as to control content displayed in a page on a large screen.
Further, in the data interaction system between the large screen and the augmented reality glasses described above, the functions for interacting through the glasses include:
a virtual object is set in the augmented reality glasses as a virtual substitute object representing a certain visual figure or object in the visual web-page work at the web page end;
the user performs interactive operations on the virtual substitute object in the glasses, and the result data generated by the operations are transmitted to the web page end, so that the result of the user's interaction through the glasses is reflected in the page content displayed on the large screen;
the interactive operations include clicking, dragging, rotating and zooming.
Further, in the data interaction system between the large screen and the augmented reality glasses described above, first, the virtual control component is selected and activated by gaze,
then, defining the interactive operation corresponding to various gestures of the person,
the gestures comprise basic gestures together with information about how the basic gestures change in duration, position and angle during the user's actual use,
the operation data information generated by interaction is converted into a customized message format and is sent to the large screen end,
and the webpage end of the large screen generates a corresponding interaction effect according to the message content.
Further, according to the data interaction system between the large screen and the augmented reality glasses, when interactive operation is carried out, according to the type of an interactive object, if the change of the attribute or the state value is detected, the numerical values generated by the operation are recorded; and if the gesture is the moving distance, calculating an actual operation result numerical value according to the interactive meaning and the space displacement of the gesture.
Further, as to the data interaction system between the large screen and the augmented reality glasses, the functions to be interacted with the glasses include:
a large-screen direct-positioning interaction mode is adopted, in which the user interacts with a specific position on the large screen using gaze and gesture operations, wherein:
gaze replaces the cursor and gestures replace the left and right mouse buttons, simulating mouse operation;
using the position, size and shape of the large screen, a virtual surface is generated that is attached to the screen surface and has the same size and shape as the large screen; within the glasses this surface serves as an interactable object simulating the large-screen desktop that actually exists in space, and is also used to display a simulated cursor;
the user's gaze and gesture operations on the virtual surface are equivalent to operations at the same position on the large screen;
the virtual surface is used to acquire the spatial position of the user's gaze point, and the gaze point's coordinates relative to the surface are calculated from the surface's length and width, yielding the gaze point's relative coordinates at the web page end.
The beneficial effects of the invention are as follows: large-screen technology and augmented-reality-glasses technology are combined so that the interaction technology better meets users' needs; data interaction between the augmented reality glasses and the large screen presents detailed information to the user more vividly, solves the problem that large screens are difficult to interact with, and realizes interaction between the large screen and the glasses, wherein:
on one hand, the gaze function, the gesture function and the like of the augmented reality glasses are utilized to carry out operation, so that the positioning selection of the large screen is conveniently and efficiently completed, the view in the large screen is controlled, and the problem that the large screen is difficult to interact is solved;
on the other hand, an immersive interactive environment is generated by combining the large screen and the augmented reality glasses, and three-dimensional information is more vividly provided for the user by utilizing the holographic display capability of the augmented reality glasses.
The invention makes full use of the respective advantages of augmented reality glasses and large screens, combining them over a network to create an immersive interactive environment. By creating virtual control components and by direct positioning interaction with the large screen, it neatly solves the problem that large screens are difficult to interact with. In use, the user can simultaneously observe high-resolution content on the large screen and virtual holographic objects at the glasses end; as a natural interaction mode, it lets the user control the behavior of views on the large screen while interacting with objects placed in space, providing a good immersive interaction experience and helping users form a deeper impression of the data.
The invention can organically combine the large screen technology and the augmented reality glasses technology, so that the interaction technology better meets the requirements of users, the positioning selection of the large screen is conveniently and efficiently completed, the control of the view in the large screen is realized, more three-dimensional information is provided for the users, and the data expression is more vivid.
Drawings
Fig. 1 is a flowchart of a method for data interaction between a large screen and augmented reality glasses according to an embodiment of the present invention.
FIG. 2 is a diagram of gestures recognizable by the present invention using augmented reality glasses: an Air-tap gesture.
Fig. 3a to 3c are three flows of interaction of augmented reality glasses design with a large screen.
Fig. 3a illustrates step 1, Fig. 3b illustrates step 2, and Fig. 3c illustrates step 3.
Fig. 4a to 4b are coordinate mapping operations of the augmented reality glasses when directly positioning and interacting a large screen.
In Fig. 4a, "virtual board" denotes the virtual surface, TDW the large screen, screenHeight the large-screen height and screenWidth the large-screen width;
in Fig. 4b, ScreenPosition denotes the virtual surface's center coordinates, HitPosition the gaze position coordinates, and RelatedPosition the position vector.
Fig. 5 is a flow chart of the augmented reality glasses when directly positioning and interacting with a large screen.
Fig. 6 is a schematic diagram of a specific scene combining the augmented reality glasses technology and the large-screen interaction technology, taking graph visualization as an example.
Fig. 7 is a block diagram of a data interaction system between a large screen and augmented reality glasses according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and the detailed description.
Fig. 1 shows a flowchart of a method for data interaction between a large screen and augmented reality glasses according to an embodiment of the present invention, where the method mainly includes:
establishing a communication model of a webpage end, a server end and a glasses end:
First, a visual web-page work (comprising several pages) realized by conventional methods serves as the web client displayed on the large screen (hereinafter the web page end), and a back-end program, distinct from the web page end and from the augmented reality glasses client (i.e. the augmented reality glasses, hereinafter the glasses end), is created with the function of forwarding data among multiple clients. A network link is established between the web page end and the server end.
Then, the augmented reality glasses need to support network communication and have built-in CPU and GPU units with a certain amount of computing capability; the device used in this embodiment of the invention is a Microsoft HoloLens, but the augmented reality glasses include but are not limited to it.
The glasses end can then connect to the web page end through network communication (data are transmitted over wireless signals such as WiFi), relying on the server end to forward communication messages between the two clients (the web page end and the glasses end), thereby achieving data interchange between the augmented reality glasses and the large screen; a message protocol is determined (a common network communication protocol such as HTTP or WebSocket).
Finally, the web-page-end, server-end and glasses-end communication model takes into account the scale of the communication data and the background computing capability. When the communication data are not large, the server end only needs to forward messages between the two clients; when the communication data are large, the server end can, in view of the glasses device's storage and computing capacity, process the data from the web page end before forwarding them to the glasses end. What counts as "large" communication data can be decided according to actual conditions or preset from empirical values.
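The server end's forwarding decision can be sketched as follows (the threshold value and the preprocessing step are hypothetical, since the patent leaves the "large scale" criterion to empirical values):

```python
import json

SIZE_THRESHOLD = 64 * 1024  # example empirical threshold, in bytes

def relay(message, preprocess, threshold=SIZE_THRESHOLD):
    """Forward small messages verbatim; pre-process large payloads for the glasses."""
    if len(message.encode("utf-8")) <= threshold:
        return message              # plain forwarding between the two clients
    envelope = json.loads(message)
    # reduce the load before it reaches the glasses' limited storage/compute
    envelope["data"] = preprocess(envelope["data"])
    return json.dumps(envelope)
```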
(II) establishing network communication between the augmented reality glasses and the large screen:
the network communication design and implementation comprises the following steps:
(1) Overall evaluation and design.
First, the visual work and its augmented reality requirements are evaluated as a whole, the functions for interacting through the glasses (i.e. data interaction) are designed, and each message interface is designed according to the type of interaction and its data.
The message interface may use the following JSON format: {"from": ..., "to": ..., "name": ..., "data": {"payload1": ..., "payload2": [{}, {}, ...]}}, where name is the name of the data interface, data is the message load whose payload fields carry data values, control parameters and the like, and from and to are device identifiers, since source and target must be specified explicitly when multiple devices communicate. For example, when the glasses end changes the value of a parameter controlling a certain representation's position on the large screen to 10, it can send the following message to the web page end: {"from": "ARGlass", "to": "web", "name": "changePosition", "data": {"value": 10}}, where the glasses end is identified as ARGlass, the web page end as web, the interface name is changePosition, and the data passed contain a variable named value whose value is 10.
(2) Initialization.
First, the server end is initialized: its IP address is obtained, a port number is set, and the forwarding function or other data processing functions are completed. Then the large screen and the augmented reality glasses are initialized as the two clients (i.e. the web page end and the glasses end), the URL accessed by network communication is set to that address (the server's IP address) and port number, and basic communication processing functions such as establishing, receiving, sending and interrupting connections are completed.
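For instance, both clients end up pointing at the same server URL (the address and port below are example values, and the ws:// scheme assumes WebSocket was the protocol chosen):

```python
server_ip = "192.168.1.10"   # example: IP address obtained at server initialization
port = 8080                  # example: port number set on the server

# both clients, the web page end and the glasses end, access this same URL
url = f"ws://{server_ip}:{port}"
```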
(3) Interaction-event response and interaction-event handling.
A message sending function and a callback function are defined for each function at both the web page end and the glasses end, so that when one client triggers an interaction event it sends a communication message (interaction-event response), the server end forwards it, and the other end receives the message instruction and processes it accordingly (interaction-event handling).
(III) a data interaction mode between the large screen and the augmented reality glasses (determining the specific interaction needing to be customized in the augmented reality glasses according to the visualization requirement of the webpage end on the large screen):
(1) interacting via the virtual control component:
Interaction through virtual control components in this system means: a subset of the control components on a page (the page presented at the web page end) is taken out and implemented in the glasses end as virtual objects (virtual control components). The user can then interact with the virtual controls seen in the glasses through gaze, gestures, and the like, instead of interacting with the controls on the large screen through a mouse and keyboard; the data generated by this interaction is transmitted to the web page end through network communication, thereby controlling the content displayed on the large screen.
Augmented reality glasses allow a user to see a virtual object (i.e., a virtual control component) placed in space, and the user can interact with it by gaze, gestures, and the like. Meanwhile, the original visualization page (the visual web page work) contains many control components, such as buttons, sliders, calendar pickers, and control panels, which let the user control the elements on the page.
(2) Interacting through the virtual substitute object: in addition to the traditional interaction modes above, the invention is the first to propose the concept of "virtual substitute object control".
The concept of a "virtual substitute object" (virtual proxy object) in this system means: a virtual object is placed in the augmented reality glasses to represent a certain visual graphic or object in the original visualization page (this is the virtual substitute object), and the user performs interactive operations on it in the glasses, such as clicking, dragging, rotating, and zooming. The result data generated by these operations (such as position movement or size change) is transmitted to the web page end, so that the result of the user's interaction through the glasses is reflected in the page content displayed on the large screen.
For the contents of the large screen and the interaction modes of items (1) and (2) above, the system supports customizing virtual control components or virtual substitute objects for the web page end within the glasses end's program project, so that after putting on the glasses the user can realize the desired interactions through these components or objects.
Customizing virtual control components for the web page end in the glasses end's program project falls into three types:
1) When the required interaction is time control, a calendar object can be drawn in the glasses application, with the date, month, and year interactively selectable; after the user selects a time, the date data is transmitted to the web page end, and the content displayed there switches to the corresponding date.
2) When the required interaction is attribute control, a corresponding control component can be chosen according to the desired effect. If switching among a set of states is needed, a virtual button can be placed in the glasses; if an element's attribute value needs to change continuously, a virtual slider can be used. The user generates data by clicking the virtual button or dragging the virtual slider in the glasses, the data is transmitted to the web page end, and the displayed elements change accordingly. For example, if the large-screen page is a graph visualization system, a virtual slider at the glasses end can change the repulsion and attraction among graph nodes, the size of nodes, and so on, thereby changing the graph layout.
3) When the required interaction is virtual substitute object control, the shape of the substitute object can be determined from the shape of the original content, and the user's operations on it are transmitted to the web page end to control the presentation of the object in the web page. For example, if the large screen's visualization page contains a three-dimensional earth displaying spatial data, a virtual sphere can be placed in the glasses; when the user rotates the sphere, the earth on the page rotates too, so data can be seen from different angles. If the page is a graph visualization system, small spheres at the glasses end can represent corresponding nodes of the graph structure on the large screen; when the user operates a sphere, the web page end changes accordingly, e.g., highlighting or displacing the node.
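A minimal sketch of the attribute-control case above: a virtual slider's value at the glasses end is packaged into a message that drives a layout parameter on the large screen (the interface name changeRepulsion and the handler are hypothetical, not from the original text):

```python
def slider_changed(value, send):
    # Hypothetical glasses-end handler: the slider controls the
    # repulsion strength among graph nodes on the large screen.
    send({"from": "ARGlass", "to": "web",
          "name": "changeRepulsion", "data": {"value": value}})

sent = []
slider_changed(0.7, sent.append)
```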
(3) After putting on the augmented reality glasses, the user can control the content on the large screen through virtual control components or virtual substitute objects. The specific steps are as follows:
1) First, a virtual control component is activated through gaze selection (the same applies to virtual substitute objects, which are not described separately). Wearing the glasses, the user rotates the head so that the sight line at the center of the visual field aims at the virtual control component; the program obtains the component aimed at through ray detection, for the subsequent operations.
2) Then, the interactive operations corresponding to the various gestures of a person (such as dragging, long-pressing, and clicking) are defined, so that different gestures are customized into different interaction meanings. Specifically, different meanings are assigned according to the basic gestures recognizable by the augmented reality glasses and the changes in duration, position, and angle of those gestures during the user's actual operation. For example, in HoloLens the basic gesture for user interaction is Air-Tap, shown in fig. 2: the index finger and thumb are extended and then touched together. The recognition and tracking of hand movement is encapsulated in the glasses' internal functions, and a developer can directly obtain information such as the spatial displacement of the gesture through the program interface. In the Air-Tap gesture, a single finger touch is regarded as a click; if the posture of the right image in fig. 2 is held after the touch, it is regarded as a long-press; if this posture is held and the hand is moved, it is regarded as a drag. Assigning these interaction meanings depends on the gestures themselves, the object the user is interacting with, and the program. Taking HoloLens as an example, if the user wants to rotate an object, a drag gesture can be used, and the displacement of the drag is mapped in the program to the object's rotation angle; if the user wants to move an object, a drag gesture can also be used, but to avoid a conflict the drag no longer means rotation, and its displacement is mapped to the displacement of the corresponding object in the program.
Richer basic gestures may be supported in other augmented reality glasses environments, which helps avoid such conflicts.
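The assignment of meanings to the basic gesture can be sketched as a small classifier over the gesture's duration and hand movement (the threshold values are illustrative, not values from HoloLens):

```python
def classify_gesture(held_seconds, moved_distance,
                     hold_threshold=0.5, move_threshold=0.01):
    """Classify a basic tap-and-hold gesture into an interaction
    meaning from how long it was held and how far the hand moved."""
    if moved_distance > move_threshold:
        return "drag"        # held and moved -> drag
    if held_seconds > hold_threshold:
        return "long-press"  # held in place -> long press
    return "click"           # brief touch -> click
```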
3) The operation data generated by the interaction is converted into the customized message format and sent to the large screen end, where the web page end produces the corresponding interaction effect according to the message content. The operation data generated by the interaction is acquired as follows:
First, the user performs an interactive operation. Depending on the type of interactive object, if the operation changes an attribute or state value (e.g., the user clicks a button or drags a slider), the values generated by the operation are recorded. If the operation is a hand movement, the three-dimensional offset (x, y, z) of the hand must be acquired in the program, where x, y, and z are the displacements along the left-right, up-down, and front-back axes respectively; the actual operation result value is then computed according to the interaction meaning.
Taking HoloLens as an example, during programming the displacement vector (x, y, z) must be converted into a specific value according to the interaction. For instance, in HoloLens drag gestures can be used for rotating and moving objects; the generated vector (x, y, z) is normalized to obtain (x1, y1, z1). The specific calculation steps of the rotation and movement operations are:
1) The rotation operation requires calculating the rotation angle of the virtual object and clarifying the mapping between the virtual object and the rotation of the object on the large screen. Take the rotation of a virtual sphere about two axes (x and y) as an example: rotation about the y-axis corresponds to the x1 value produced by left-right hand movement, and rotation about the x-axis corresponds to the y1 value produced by up-down hand movement. An angle constant AngleConstant is defined, and (y1*AngleConstant, x1*AngleConstant) is computed as the result of the user's rotation operation at the glasses end and as the information to be sent to the web page end; the web page end continuously changes the displayed orientation of the sphere according to this information, producing the rotation effect.
2) The movement operation requires clarifying the mapping between the moving speed of the virtual object and the displacement of the object on the large screen. First, a speed constant SpeedConstant is defined, and (x1*SpeedConstant, y1*SpeedConstant, z1*SpeedConstant) is computed as the result of the user's panning operation at the glasses end and as the information to be sent to the web page end; the web page end continuously changes the position of the object according to this information, producing the displacement effect.
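The two calculations above can be sketched as follows (the constants are illustrative placeholders for AngleConstant and SpeedConstant):

```python
import math

ANGLE_CONSTANT = 90.0  # degrees per unit of normalized displacement (illustrative)
SPEED_CONSTANT = 2.0   # large-screen units per unit of displacement (illustrative)

def normalize(x, y, z):
    length = math.sqrt(x * x + y * y + z * z)
    if length == 0.0:
        return (0.0, 0.0, 0.0)
    return (x / length, y / length, z / length)

def rotation_result(x, y, z):
    # Left-right movement (x1) drives rotation about the y-axis,
    # up-down movement (y1) drives rotation about the x-axis.
    x1, y1, z1 = normalize(x, y, z)
    return (y1 * ANGLE_CONSTANT, x1 * ANGLE_CONSTANT)

def movement_result(x, y, z):
    x1, y1, z1 = normalize(x, y, z)
    return (x1 * SPEED_CONSTANT, y1 * SPEED_CONSTANT, z1 * SPEED_CONSTANT)
```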
(4) The data communication between the glasses end and the webpage end comprises the following specific steps:
1) The glasses end sends a message: a message object is created containing the identifiers of both communication parties, the interface name, and the specific message data, where the information generated by the interaction in the previous step becomes the attributes of the message data (as described in the network communication section). For a message object with from, to, name, and data attributes, where data in turn contains payload1 and payload2 attributes, JSON serialization yields {"from": xxx, "to": xxx, "name": xxx, "data": {"payload1": xxx, "payload2": [{xxx}, ...]}}, which is transmitted to the web page end as the customized message format.
2) The web page end processes the received message: it parses the content, extracts the interaction data, and produces the corresponding interaction effect according to the function. For example, if the user moves the virtual substitute object of object A on the large screen, the web page end receives the message {"from": "ARGlass", "to": "web", "name": "movePosition", "data": {"move": [xDistance, yDistance, zDistance]}}. The web page end finds the corresponding processing function from the parsed interface name movePosition and passes the data in; the processing function then computes the position of object A from the move parameters [xDistance, yDistance, zDistance]. By operating the virtual object (the virtual substitute object) the user thus ultimately displaces object A on the large screen, completing the move operation.
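A sketch of the web-page-end dispatch: incoming messages are routed by interface name to a processing function (the handler and the stored position are hypothetical; a real web page end would implement this in JavaScript):

```python
import json

position_of_A = [0.0, 0.0, 0.0]  # hypothetical current position of object A

def handle_move_position(data):
    # Apply the received displacement to object A's position.
    for i, d in enumerate(data["move"]):
        position_of_A[i] += d

HANDLERS = {"movePosition": handle_move_position}

def on_message(raw):
    msg = json.loads(raw)
    handler = HANDLERS.get(msg["name"])
    if handler is not None:
        handler(msg["data"])

on_message('{"from": "ARGlass", "to": "web", '
           '"name": "movePosition", "data": {"move": [1.0, 2.0, 0.0]}}')
```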
3) In addition, because communication between the web page end and the glasses end is bidirectional, the web page end can also send information to the glasses end, so that further three-dimensional views (visual views drawn at the glasses end from data received from the web page end) can be set up at the glasses end to display some data from the large screen. Views such as three-dimensional bar charts or three-dimensional trajectories may lack space or display poorly in ordinary desktop visualization, but with augmented reality glasses they can be rendered in real space in 3D, letting the user observe and explore more immersively and deepening the understanding of the data. Changes to certain elements on the large screen (elements corresponding to the three-dimensional views or related information already in the glasses) made from other interactive devices (the source is unrestricted: operations previously sent from the glasses end, or interactions from other devices such as a keyboard and mouse) also update the views in the glasses synchronously. For example, if the large screen shows a three-dimensional graph visualization and the glasses display a three-dimensional topology consistent with it, then when a mouse interaction on the large screen filters out a sub-graph, only that sub-graph is correspondingly shown or highlighted in the augmented reality glasses.
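The reverse direction can be sketched the same way: after a mouse selection on the large screen, the web page end sends the selected sub-graph so that the glasses show or highlight only those nodes (the interface name highlightSubgraph is hypothetical):

```python
def notify_glasses(selected_node_ids, send):
    # Web page end -> glasses end: synchronize a sub-graph selection
    # made with the mouse on the large screen.
    send({"from": "web", "to": "ARGlass",
          "name": "highlightSubgraph",
          "data": {"nodes": list(selected_node_ids)}})

outbox = []
notify_glasses(["n3", "n7"], outbox.append)
```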
(5) Interacting by directly positioning on the large screen (the large screen positioning technology):
The motivation for direct positioning interaction on the large screen is as follows: interaction through virtual control components (likewise for virtual substitute objects, not described separately) requires that the user's required interaction functions and target objects be clearly determined in advance, with the user controlling the page through externally added control units. This means a developer must know the displayed content and interaction requirements, and different applications must customize virtual control components for different visual works. However, this raises two problems:
1) when the interaction requirements for the visual work are complex, it is not possible to fabricate all the interactive components at the glasses end.
2) When the interactable objects are numerous and complex, the user wants to interact in a "what you see is what you get" way, triggering interaction events by pointing like a laser pen or by looking directly at an element on the screen. Direct positioning interaction on the large screen lets the user interact with a specific position on the screen, as with a mouse, using gaze and gesture operations: gaze replaces the cursor and gestures replace the left and right buttons, so the large screen becomes, in effect, a larger desktop. Developers are thereby spared from customizing a large number of virtual controls anew for each visualization requirement.
The purpose of large screen positioning (direct positioning) is: from the actual spatial position of the large screen and the position selected by the user's click, to compute the relative position of the interactive object within the web page at the web page end, so that the program can obtain the object at that position and trigger an event. For example, if the user wants to click an element A on the large screen, in this system the user only needs to wear the glasses, aim the cursor at the center of the sight line at element A, and confirm with a click gesture; the glasses end program performs a coordinate transformation on the position data and sends it to the web page end. The web page end program obtains A's coordinates (xA, yA) at the web page end, finds object A at position (xA, yA) in the html page, and triggers the interaction event bound to A. This is the process of user interaction through positioning on the large screen.
The technical basis of large screen positioning proposed in the invention depends on three characteristics of augmented reality glasses:
1) the position of the virtual object can be dragged and adjusted in the space.
2) Once a certain virtual object is gazed at, the coordinates of the gaze ray intersecting the surface of the virtual object (hereinafter referred to as the gaze point) can be acquired.
3) The glasses can construct a model of the surrounding space and store the positions of virtual objects within it. Taking HoloLens as an example, the built-in simultaneous localization and mapping (SLAM) system performs spatial three-dimensional reconstruction from the depth data in camera images; in use, a detection function can be started by a simple function call to scan the surrounding space. The system scans and stores a spatial triangular mesh model of the floor, walls, and other features of the room, establishes a coordinate system of the user's space from this model, and stores the relative positions of virtual objects as spatial anchors (World Anchor), so that when the application restarts and the stored room model is recognized, all virtual objects reappear in their previous positions. This capability is important for determining and maintaining the spatial position of a wall-mounted large screen.
The large screen positioning technology is divided into two steps, namely positioning the position of the large screen, and positioning page elements on the large screen:
1) positioning the position of the large screen:
Positioning the large screen means determining its position (center coordinates), size (length and width), and shape (curved or planar). The program uses this information to create a "virtual surface" that matches the surface, size, and shape of the large screen.
The concept of a "virtual surface" provided by the invention is a mask in the shape of the large screen (a transparent solid rectangular sheet in a colored frame, where the frame indicates the size and position, the interior is transparent so as not to occlude the large screen, and the solid mask supports gaze-ray and object collision detection), which simulates the physically existing large screen as an interactable object within the glasses. The user's gaze and gesture operations on the virtual surface are equivalent to operations on the same position of the large screen; the virtual surface thus yields the spatial position of the user's gaze point, from which the gaze point's coordinates relative to the surface are computed using the surface's length and width, and further its relative coordinates at the web page end.
The invention will enumerate two positioning methods for the plane type large screen:
a) Using the ImageTarget method of the augmented reality tool Vuforia, 4 picture markers are placed at the four corners of the large screen's edge (or the edge of the opened web page). At startup, the program detects the marker images, generating a virtual vertex at each detected image to record its position; when four vertices have been detected, a planar object with these four points as vertices is created in the program as the virtual surface. The position of the virtual surface object is then saved in the spatial anchor form described above. This approach does not require storing the size of the virtual surface object separately.
b) As shown in figs. 4a and 4b, a planar object is first placed in space as the virtual surface, and its size, position, and orientation are then adjusted manually. This method relies on the spatial model built by SLAM and also requires storing the size of the virtual surface object separately. After starting the scene scanning function and scanning a spatial triangular mesh model of the area where the large screen sits, the model is saved as a spatial surface model; the virtual surface is then attached to the scanned screen surface through orientation correction, and its size and position are adjusted manually until the virtual edges match the edges of the large screen. Coarse position adjustment can use the Tap and Place function provided by the HoloLens development tools, which is designed for placing objects: clicking the object to be placed starts a scanning mode, and the object is placed directly where the gaze ray intersects the scanned spatial model, greatly reducing adjustment time. Fine position adjustment can use a drag function.
The virtual surface parameters obtained by the above operations are: the center coordinates, recorded as (positionX, positionY, positionZ); the length and width, recorded as height and width; and the surface's own coordinate-system vectors (x, y, z).
2) Positioning page elements on a large screen:
The user's gaze at a position on the large screen corresponds to a gaze on the virtual surface. When the user aims at a position and confirms with a gesture click, the available three-dimensional gaze point is mapped to a two-dimensional coordinate percentage on the page and sent to the web page end for processing; upon receiving the message, the web page end triggers the response event of the element at the corresponding position.
Taking the most common vertically mounted planar large screen as an example, the intersection of the gaze ray with the virtual surface is the gaze position (gazeX, gazeY, gazeZ). From the virtual surface's center (positionX, positionY, positionZ) and its coordinate-system vectors (x, y, z), the vector from the center point to the gaze point (gazeX, gazeY, gazeZ) is recorded as v. Since the thickness of the virtual surface is negligible, the three-dimensional to two-dimensional mapping proceeds as follows:
a) The horizontal offset dx and vertical offset dy of the gaze position relative to the panel center are given by the dot products v·x and v·y, respectively.
b) The offsets are converted, using the surface's width and height, into (xRatio, yRatio): the percentage coordinates of the gazed page element on the page in the browser (relative to the upper left corner, with the browser in full-screen state).
c) The gaze position percentage (xRatio, yRatio) is encapsulated as data into the message format and sent to the web page end through network communication; the interface can be named ClickPosition. An example message is: {"from": "ARGlass", "to": "web", "name": "ClickPosition", "data": {"xRatio": 0.5, "yRatio": 0.5}}, meaning the gaze position percentage is (0.5, 0.5).
d) After parsing, the web page end obtains xRatio and yRatio and computes the pixel position (xRatio*W, yRatio*H) of the coordinate at the web page end using the page's width W and height H (unlike the physical dimensions of the large screen, the web page is measured in pixels, px). The code then obtains the html element at that position: if there is no element, the click is treated as hitting a blank area; if there is an element, triggering depends on whether an event responding to the operation is bound to it, e.g., a bound element is highlighted after being clicked.
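The mapping in steps a) through d) can be sketched end to end (a vertically mounted planar screen is assumed; the axis vectors and dimensions stand in for the virtual surface parameters above, and the 0.5 offsets are an assumption about how the center-relative offsets become top-left-relative percentages):

```python
def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def gaze_to_ratio(gaze, center, x_axis, y_axis, width, height):
    """Map a 3-D gaze point on the virtual surface to the page
    percentage (xRatio, yRatio) relative to the upper left corner."""
    v = tuple(g - c for g, c in zip(gaze, center))
    dx = dot(v, x_axis)          # horizontal offset from the center
    dy = dot(v, y_axis)          # vertical offset from the center
    return (0.5 + dx / width,    # 0.5 at the center of the screen
            0.5 - dy / height)   # page y grows downward from the top

def ratio_to_pixels(x_ratio, y_ratio, page_w, page_h):
    # Web page end: percentage -> pixel position in the full-screen page.
    return (x_ratio * page_w, y_ratio * page_h)

# Gazing at the exact center of a 4 x 2 surface centered at the origin:
xr, yr = gaze_to_ratio((0, 0, 0), (0, 0, 0), (1, 0, 0), (0, 1, 0), 4.0, 2.0)
```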
An example of direct positioning interaction with the large screen: when a person gazes at an object through the glasses and performs a click operation, the glasses end sends {"from": "ARGlass", "to": "web", "name": "ClickPosition", "data": {"xRatio": 0.105, "yRatio": 0.098}} to the web page end for processing, and within a reasonable error tolerance and the object's click response area, the object is selected and highlighted.
The following are specific examples:
FIG. 1 shows a flow diagram of the interaction technique combining augmented reality glasses and a large screen. The augmented reality glasses need to support network communication and have a certain computing capacity, so that they can connect with the web page end through server-side communication, define the message protocol format, and achieve intercommunication. Control components or three-dimensional display components are then customized in the glasses end's program project according to the required interactions. Once data communication is established, the user can interact directly with components in the glasses, or directly with the large screen through gaze and gestures, thereby controlling the display on the large screen; meanwhile, changes made to elements on the large screen from other interactive devices are likewise updated in the glasses synchronously.
FIG. 2 illustrates the gesture used for interaction in HoloLens with the augmented reality glasses of the present invention: the Air-Tap gesture, in which the extended index finger and thumb open first and then close. This gesture is used to interact with holographic objects in a way that simulates mouse clicking, and various semantics such as long-pressing, moving, dragging, and zooming can be built on it; the specific methods of these gesture semantics are described in the text.
Figs. 3a to 3c show the three steps of the augmented reality glasses' design for interacting with a large screen. (FIG. 3a) Step 1: interactive virtual control components are set up inside the augmented reality glasses to control the display on the large screen. (FIG. 3b) Step 2: gaze and gesture detection are combined with spatial information to directly position elements on the large screen. (FIG. 3c) Step 3: using data from the web page end, various visual views can be set up at the glasses end to enhance the data display.
Figs. 4a and 4b show one way the augmented reality glasses of the present invention create and use a virtual surface for direct positioning interaction with the large screen. (FIG. 4a) First the large screen is located with the augmented reality glasses, determining its position, size, and orientation. A virtual surface is then created, aligned and attached to the large screen, and adjusted to the same size, so that it can stand in for the large screen as the object of interactive operations. (FIG. 4b) Facing the large screen, the user selects the seen elements through gaze and gestures. The position aimed at is mapped from three dimensions to the two-dimensional plane and sent to the web page end.
FIG. 5 shows a flow chart of direct positioning interaction on the large screen using the augmented reality glasses of the present invention. Taking the Microsoft HoloLens environment as an example, the edge position of the large screen is located through HoloLens's built-in simultaneous localization and mapping (SLAM) capability. After the position of the large screen is determined, a virtual screen panel of a certain thickness and regular shape is generated and placed at the corresponding position to simulate the real screen at the glasses end, providing an object carrier for interaction and integrating positioning-based operations on it. Using the gaze ray detection mechanism, the x and y coordinates of the gaze position on the virtual panel can be acquired, which represent the coordinates of the page element in the large screen's browser (in full-screen state). Elements on the large screen can then be selected for the corresponding interaction.
Fig. 6 shows a schematic diagram of the interaction technique between the augmented reality glasses and a large screen, exemplified by a graph visualization of the invention. It shows the display effect of the web-page-end force-directed graph and a virtual three-dimensional force-directed subgraph in the augmented reality glasses. The large screen can be controlled through the buttons on the sand table below and the control panel on the right side facing the large screen, or through direct positioning interaction with the large screen. The interactions include selecting nodes and edges of the graph, highlighting, changing edge lengths, dragging, and the like.
Corresponding to the method shown in fig. 1, an embodiment of the present invention further provides a system for data interaction between a large screen and augmented reality glasses, as shown in fig. 7, the system includes:
augmented reality glasses, as a glasses end, having a network communication module, a built-in CPU and GPU unit, having an initialization module to initialize the glasses end to a client,
a large screen for displaying a visual web page work realized using a conventional method, the work being displayed as a web page end on the large screen, having an initialization module to initialize the web page end to a further client,
wherein, after overall evaluation of the visual work and the augmented reality requirements, the functions to be operated through the glasses are designed, and each message interface is designed according to the type and data of the interaction,
the message sending module and the call-back processing module are respectively arranged at the webpage end and the glasses end and used for bidirectional communication between the webpage end and the glasses end, wherein:
a message sending module for establishing a message object message containing identifiers of both communication parties, interface names and specific message data, wherein the attribute of the specific message data is used for transmitting operation data information generated by interaction,
a callback processing module for parsing the content of a received message, extracting the interaction data, and producing the corresponding interaction effect according to the function,
the server side is provided with a communication processing module and is used for forwarding communication messages between the webpage side and the glasses side in a communication model of the webpage side, the server side and the glasses side,
wherein when the communication data scale is small, the server only needs to forward information between the two clients; when the scale is large, considering the storage and computing capacity of the glasses device, the server end can process the data from the web page end before forwarding it to the glasses end,
a server initialization module for obtaining the server IP address and setting the port number to complete the forwarding function or other data processing functions,
and a back-end module, distinct from the web page end and the glasses end, for forwarding data among multiple clients.
Still further, the functions requiring interaction with the glasses include:
enabling a user to see a virtual object placed in space through the augmented reality glasses and to interact with it further;
converting a control component of the visualization web-page work into a virtual control component that is presented through the augmented reality glasses and can be interacted with;
and realizing interaction with the virtual control component in the augmented reality glasses through gaze and/or gestures, and transmitting the data generated by the interaction to the web-page end through network communication so as to control the content displayed on the large screen.
For example: first, the virtual control component is activated by gaze selection;
then, the interactive operations corresponding to various gestures are defined,
the gestures comprising basic gestures together with changes in their duration, position, and angle during actual use;
the operation data generated by the interaction is converted into the customized message format and sent to the large-screen end;
and the web-page end on the large screen generates the corresponding interaction effect according to the message content.
During an interactive operation, according to the type of the interaction object: if an attribute or state value changes, the values produced by the operation are recorded; if the gesture represents a movement distance, the actual operation result value is calculated from the meaning of the interaction and the spatial displacement of the gesture.
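The two cases of recording an operation value can be sketched as follows; this is an illustration only, and the 100 units-per-metre calibration factor is an assumed constant, not a value given in the text.

```typescript
// Derive the operation result value for an interaction, depending on the
// type of the interaction object:
//  - "state": an attribute or state value changed; record the new value directly.
//  - "drag":  the gesture encodes a movement distance; convert the hand's
//             spatial displacement (metres) into the control's units using an
//             assumed calibration factor of 100 units per metre.
function operationValue(kind: "state" | "drag",
                        newState: number | null,
                        displacementMetres: number): number {
  if (kind === "state") {
    if (newState === null) throw new Error("state change requires a value");
    return newState;                 // record the value the operation produced
  }
  return displacementMetres * 100;   // map drag distance to control units
}
```

The returned value would then be placed in the message-data attribute and sent to the large-screen end.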
Alternatively, the functions requiring interaction with the glasses include:
a virtual object arranged in the augmented reality glasses as a virtual substitute representing a visual figure or object in the visualization web-page work at the web-page end;
the user performs interactive operations on the virtual substitute in the glasses, and the result data generated by the operations are transmitted to the web-page end, so that the result of the user's interaction through the glasses is reflected in the page content displayed on the large screen;
the interactive operations include clicking, dragging, rotating, and zooming.
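The four interactive operations on a virtual substitute can be sketched as state transitions whose result is mirrored onto the corresponding visual element at the web-page end. The state fields and parameter names here are assumptions for illustration.

```typescript
// Hypothetical state of a virtual substitute object in the glasses, mirrored
// by a visual figure or object at the web-page end.
interface ProxyState {
  x: number; y: number;   // position of the substitute
  angle: number;          // rotation in degrees
  scale: number;          // zoom factor
  selected: boolean;      // toggled by clicking
}

// Apply one of the four interactive operations and return the new state,
// which would be packed into a message and sent to the web-page end.
function applyOperation(s: ProxyState, op: "click" | "drag" | "rotate" | "zoom",
                        dx = 0, dy = 0, dAngle = 0, factor = 1): ProxyState {
  switch (op) {
    case "click":  return { ...s, selected: !s.selected };
    case "drag":   return { ...s, x: s.x + dx, y: s.y + dy };
    case "rotate": return { ...s, angle: (s.angle + dAngle) % 360 };
    case "zoom":   return { ...s, scale: s.scale * factor };
  }
}
```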
Yet another alternative is that the functions requiring interaction with the glasses include:
a large-screen direct-positioning interaction mode, in which the user interacts with a specific position on the large screen using gaze and gesture operations, wherein:
gaze replaces the cursor and gestures replace the left and right mouse buttons, simulating mouse operation;
the position, size, and shape of the large screen are used to generate a virtual surface attached to the screen surface with the same size and shape; the virtual surface serves as an interactable object in the glasses that simulates the large-screen desktop actually present in the space, and also serves as the surface on which the cursor is displayed;
the user's gaze and gesture operations on the virtual surface are equivalent to operations at the same position on the large screen;
and the virtual surface is used to acquire the spatial position of the user's gaze point, whose coordinates relative to the surface are calculated from the length and width of the virtual surface, thereby obtaining the relative coordinates of the gaze point at the web-page end.
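The coordinate mapping described above can be sketched as follows, assuming the virtual surface is modelled as a rectangle with its origin at the top-left corner, x to the right and y downward (matching typical web-page coordinates); this convention is an assumption, not fixed by the text.

```typescript
// Convert a gaze point on the virtual surface into coordinates relative to
// the web page shown on the large screen. The gaze point is given in the
// surface's own plane (metres from the top-left corner); the surface has the
// same physical size and shape as the large screen.
function gazeToPageCoords(
  gazeX: number, gazeY: number,                 // gaze point on the surface, metres
  surfaceWidth: number, surfaceHeight: number,  // physical size of the surface, metres
  pageWidth: number, pageHeight: number         // page size at the web-page end, pixels
): { x: number; y: number } {
  // Normalise by the surface's length and width, then scale to page pixels.
  return {
    x: (gazeX / surfaceWidth) * pageWidth,
    y: (gazeY / surfaceHeight) * pageHeight,
  };
}
```

For example, a gaze point at the centre of a 2 m by 1 m surface maps to the centre of the page, regardless of the page's pixel resolution.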
In summary, the present invention makes full use of the respective advantages of augmented reality glasses and the large screen, combining them over a network to create an immersive interactive environment. By creating virtual control components and supporting direct-positioning interaction with the large screen, it neatly solves the difficulty of interacting with a large screen. During use, the user can observe both the real scene at the large-screen end and the virtual holographic objects at the glasses end.
Augmented reality glasses are stand-alone, lightweight mobile devices with integrated sensing capability. Beyond displaying holograms, the glasses can also augment information in reality while completing the interaction with the large screen: a part of interest can be extracted from the visualization on the screen, rendered as a holographic object, and placed in reality, filling the real space with richer visual elements that enhance the original visualization. The glasses also provide a natural interaction mode: while controlling the behavior of the view on the large screen, the user can interact with objects placed in the space, which offers a good immersive interaction experience and helps users deepen their impression of the data. The present invention is therefore novel and constructive.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is intended to include such modifications and variations.

Claims (13)

1. A data interaction method between a large screen and augmented reality glasses, comprising the following steps:
establishing network communication between the augmented reality glasses and the large screen;
the augmented reality glasses are required to support network communication and to contain CPU and GPU units with sufficient computing capacity;
the network communication uses a web-page-end/server-end/glasses-end communication model for data communication, with bidirectional communication between the web-page end and the glasses end;
first, initializing the server: obtaining the server IP address, setting the port number, and completing the forwarding function or other data processing functions;
initializing the large screen and the augmented reality glasses as two clients respectively;
setting the url to be accessed by the network communication to the above address and port number, and completing the basic communication processing function;
defining a message sending function and a callback function at the web-page end and the glasses end respectively for each function, so that one client sends a communication message when an interaction event is triggered, the message is forwarded by the server end, and the other end receives the message instruction and performs the corresponding processing;
the visualization work realized by a conventional method serves as the web-page end displayed on the large screen;
creating a back-end program, distinct from the web-page end and the glasses end, whose function is to forward data among multiple clients;
when the network communication is established, the visualization work and its augmented reality requirements are evaluated as a whole, the functions requiring interaction through the glasses are designed, and each message interface is designed according to the type and data of the interaction;
the functions requiring interaction with the glasses include:
enabling a user to see a virtual object placed in space through the augmented reality glasses and to interact with it further;
converting a control component of the visualization work into a virtual control component that is presented through the augmented reality glasses and can be interacted with;
and realizing interaction with the virtual control component in the augmented reality glasses through gaze and/or gestures, and transmitting the data generated by the interaction to the web-page end through network communication so as to control the content displayed on the large screen.
2. The method for data interaction between a large screen and augmented reality glasses according to claim 1, wherein: during data communication, when the communication data volume is small, the server only needs to forward information between the two clients; when the communication data volume is large, the server end may, in view of the limited storage and computing capacity of the glasses device, process the data from the web-page end before forwarding it to the glasses end.
3. The method for data interaction between a large screen and augmented reality glasses according to claim 1, wherein the functions requiring interaction with the glasses include:
a virtual object arranged in the augmented reality glasses as a virtual substitute representing a visual figure or object in the visualization work at the web-page end;
the user performs interactive operations on the virtual substitute in the glasses, and the result data generated by the operations are transmitted to the web-page end, so that the result of the user's interaction through the glasses is reflected in the page content displayed on the large screen;
the interactive operations include clicking, dragging, rotating, and zooming.
4. The method for data interaction between a large screen and augmented reality glasses according to claim 1, wherein: first, the virtual control component is activated by gaze selection;
then, the interactive operations corresponding to various gestures are defined,
the gestures comprising basic gestures together with changes in their duration, position, and angle during actual use;
the operation data generated by the interaction is converted into the customized message format and sent to the large-screen end;
and the web-page end on the large screen generates the corresponding interaction effect according to the message content.
5. The method for data interaction between a large screen and augmented reality glasses according to claim 4, wherein: during an interactive operation, according to the type of the interaction object: if an attribute or state value changes, the values produced by the operation are recorded; if the gesture represents a movement distance, the actual operation result value is calculated from the meaning of the interaction and the spatial displacement of the gesture.
6. The method for data interaction between a large screen and augmented reality glasses according to claim 1, wherein: when the glasses end sends a message, a message object is built containing the identifiers of both communication parties, the interface name, and the specific message data, the message-data attribute being used to transmit the operation data generated by the interaction;
and the web-page end, after processing the received message, parses its content, extracts the interaction data, and generates the corresponding interaction effect according to the function.
7. The method for data interaction between a large screen and augmented reality glasses according to claim 1, wherein the functions requiring interaction with the glasses include:
a large-screen direct-positioning interaction mode, in which the user interacts with a specific position on the large screen using gaze and gesture operations, wherein:
gaze replaces the cursor and gestures replace the left and right mouse buttons, simulating mouse operation;
the position, size, and shape of the large screen are used to generate a virtual surface attached to the screen surface with the same size and shape; the virtual surface serves as an interactable object in the glasses that simulates the large-screen desktop actually present in the space, and also serves as the surface on which the cursor is displayed;
the user's gaze and gesture operations on the virtual surface are equivalent to operations at the same position on the large screen;
and the virtual surface is used to acquire the spatial position of the user's gaze point, whose coordinates relative to the surface are calculated from the length and width of the virtual surface, thereby obtaining the relative coordinates of the gaze point at the web-page end.
8. A system for data interaction between a large screen and augmented reality glasses, comprising:
augmented reality glasses serving as the glasses end, having a network communication module and built-in CPU and GPU units, with an initialization module that initializes the glasses end as one client;
a large screen for displaying a visualization work realized by a conventional method, the work being presented on the large screen as the web-page end, with an initialization module that initializes the web-page end as a further client;
a message sending module and a callback processing module arranged at the web-page end and the glasses end respectively for bidirectional communication between them, wherein:
the message sending module builds a message object containing the identifiers of both communication parties, the interface name, and the specific message data, the message-data attribute being used to transmit the operation data generated by the interaction;
the callback processing module parses the content of a received message, extracts the interaction data, and generates the corresponding interaction effect according to the function;
a server end provided with a communication processing module for forwarding communication messages between the web-page end and the glasses end in a web-page-end/server-end/glasses-end communication model;
a server initialization module for obtaining the server IP address and setting the port number, completing the forwarding function or other data processing functions;
and a back-end module, distinct from the web-page end and the glasses end, for forwarding data among multiple clients;
after the visualization work and its augmented reality requirements are evaluated as a whole, the functions requiring interaction through the glasses are designed, and each message interface is designed according to the type and data of the interaction;
the functions requiring interaction with the glasses include:
enabling a user to see a virtual object placed in space through the augmented reality glasses and to interact with it further;
converting a control component of the visualization work into a virtual control component that is presented through the augmented reality glasses and can be interacted with;
and realizing interaction with the virtual control component in the augmented reality glasses through gaze and/or gestures, and transmitting the data generated by the interaction to the web-page end through network communication so as to control the content displayed on the large screen.
9. The system of claim 8, wherein: when the communication data volume is small, the server only needs to forward information between the two clients; when the communication data volume is large, the server end may, in view of the limited storage and computing capacity of the glasses device, process the data from the web-page end before forwarding it to the glasses end.
10. The system of claim 8, wherein the functions requiring interaction with the glasses include:
a virtual object arranged in the augmented reality glasses as a virtual substitute representing a visual figure or object in the visualization work at the web-page end;
the user performs interactive operations on the virtual substitute in the glasses, and the result data generated by the operations are transmitted to the web-page end, so that the result of the user's interaction through the glasses is reflected in the page content displayed on the large screen;
the interactive operations include clicking, dragging, rotating, and zooming.
11. The system of claim 8, wherein: first, the virtual control component is activated by gaze selection;
then, the interactive operations corresponding to various gestures are defined,
the gestures comprising basic gestures together with changes in their duration, position, and angle during actual use;
the operation data generated by the interaction is converted into the customized message format and sent to the large-screen end;
and the web-page end on the large screen generates the corresponding interaction effect according to the message content.
12. The system of claim 11, wherein: during an interactive operation, according to the type of the interaction object: if an attribute or state value changes, the values produced by the operation are recorded; if the gesture represents a movement distance, the actual operation result value is calculated from the meaning of the interaction and the spatial displacement of the gesture.
13. The system of claim 8, wherein the functions requiring interaction with the glasses include:
a large-screen direct-positioning interaction mode, in which the user interacts with a specific position on the large screen using gaze and gesture operations, wherein:
gaze replaces the cursor and gestures replace the left and right mouse buttons, simulating mouse operation;
the position, size, and shape of the large screen are used to generate a virtual surface attached to the screen surface with the same size and shape; the virtual surface serves as an interactable object in the glasses that simulates the large-screen desktop actually present in the space, and also serves as the surface on which the cursor is displayed;
the user's gaze and gesture operations on the virtual surface are equivalent to operations at the same position on the large screen;
and the virtual surface is used to acquire the spatial position of the user's gaze point, whose coordinates relative to the surface are calculated from the length and width of the virtual surface, thereby obtaining the relative coordinates of the gaze point at the web-page end.
CN201810338583.7A 2018-04-16 2018-04-16 Data interaction method and system between large screen and augmented reality glasses Active CN108762482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810338583.7A CN108762482B (en) 2018-04-16 2018-04-16 Data interaction method and system between large screen and augmented reality glasses

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810338583.7A CN108762482B (en) 2018-04-16 2018-04-16 Data interaction method and system between large screen and augmented reality glasses

Publications (2)

Publication Number Publication Date
CN108762482A CN108762482A (en) 2018-11-06
CN108762482B true CN108762482B (en) 2021-05-28

Family

ID=64010612

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810338583.7A Active CN108762482B (en) 2018-04-16 2018-04-16 Data interaction method and system between large screen and augmented reality glasses

Country Status (1)

Country Link
CN (1) CN108762482B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109901938B (en) * 2019-02-26 2021-11-19 北京华夏电通科技股份有限公司 Interactive large-screen system based on WebSocket communication and visual display method
CN109920065B (en) * 2019-03-18 2023-05-30 腾讯科技(深圳)有限公司 Information display method, device, equipment and storage medium
CN110264818B (en) * 2019-06-18 2021-08-24 国家电网有限公司 Unit water inlet valve disassembly and assembly training method based on augmented reality
CN110597442B (en) * 2019-09-20 2021-03-16 北京华捷艾米科技有限公司 Mobile phone AR drawing method and device
CN111309203B (en) * 2020-01-22 2021-10-08 深圳市格上视点科技有限公司 Method and device for acquiring positioning information of mouse cursor
CN111818016B (en) * 2020-06-11 2022-03-22 广州恒沙数字科技有限公司 Method and system for realizing accurate positioning of three-dimensional space based on interface technology
CN112527112B (en) * 2020-12-08 2023-05-02 中国空气动力研究与发展中心计算空气动力研究所 Visual man-machine interaction method for multichannel immersion type flow field
CN112506348A (en) * 2020-12-15 2021-03-16 中国空气动力研究与发展中心计算空气动力研究所 Configuration method and device of visual parameters of immersive flow field
CN112698721A (en) * 2020-12-24 2021-04-23 上海科技大学 Virtual reality object interaction system based on gestures
CN113961107B (en) * 2021-09-30 2024-04-16 西安交通大学 Screen-oriented augmented reality interaction method, device and storage medium
CN115268757A (en) * 2022-07-19 2022-11-01 武汉乐庭软件技术有限公司 Gesture interaction recognition system on picture system based on touch screen
WO2024040430A1 (en) * 2022-08-23 2024-02-29 Qualcomm Incorporated Method and apparatus to extend field of view of an augmented reality device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107027015A (en) * 2017-04-28 2017-08-08 广景视睿科技(深圳)有限公司 3D trends optical projection system based on augmented reality and the projecting method for the system
CN107205034A (en) * 2017-06-05 2017-09-26 上海联影医疗科技有限公司 A kind of data sharing device and method
CN107465910A (en) * 2017-08-17 2017-12-12 康佳集团股份有限公司 A kind of combination AR glasses carry out the method and system of AR information real time propelling movements

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009054619A2 (en) * 2007-10-22 2009-04-30 Moon Key Lee Augmented reality computer device
CN106997235B (en) * 2016-01-25 2018-07-13 亮风台(上海)信息科技有限公司 For realizing method, the equipment of augmented reality interaction and displaying
CN106210909A (en) * 2016-08-15 2016-12-07 深圳Tcl数字技术有限公司 TV the display processing method of content, Apparatus and system
CN106354316A (en) * 2016-08-31 2017-01-25 广东格兰仕集团有限公司 Operation panel based on AR technology and image recognition technology
CN207148561U (en) * 2017-07-23 2018-03-27 供求世界科技有限公司 A kind of AR systems applied to entity products information
CN107680165B (en) * 2017-09-25 2021-01-26 中国电子科技集团公司第二十八研究所 HoloLens-based computer console holographic display and natural interaction application method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107027015A (en) * 2017-04-28 2017-08-08 广景视睿科技(深圳)有限公司 3D trends optical projection system based on augmented reality and the projecting method for the system
CN107205034A (en) * 2017-06-05 2017-09-26 上海联影医疗科技有限公司 A kind of data sharing device and method
CN107465910A (en) * 2017-08-17 2017-12-12 康佳集团股份有限公司 A kind of combination AR glasses carry out the method and system of AR information real time propelling movements

Also Published As

Publication number Publication date
CN108762482A (en) 2018-11-06

Similar Documents

Publication Publication Date Title
CN108762482B (en) Data interaction method and system between large screen and augmented reality glasses
Evans et al. Evaluating the Microsoft HoloLens through an augmented reality assembly application
Grossman et al. Multi-finger gestural interaction with 3d volumetric displays
US11275481B2 (en) Collaborative augmented reality system
US20190279424A1 (en) Collaborative augmented reality system
US11551403B2 (en) Artificial reality system architecture for concurrent application execution and collaborative 3D scene rendering
US10839572B2 (en) Contextual virtual reality interaction
JP5807686B2 (en) Image processing apparatus, image processing method, and program
Li et al. Cognitive issues in mobile augmented reality: an embodied perspective
JPH10134069A (en) Information retrieval device
Shim et al. Gesture-based interactive augmented reality content authoring system using HMD
CN113961107B (en) Screen-oriented augmented reality interaction method, device and storage medium
Regenbrecht et al. A tangible AR desktop environment
CN111467803A (en) In-game display control method and device, storage medium, and electronic device
Krug et al. Clear sight: Exploring the potential of interacting with transparent tablets in augmented reality
Stenicke et al. Interscopic user interface concepts for fish tank virtual reality systems
WO2019127325A1 (en) Information processing method and apparatus, cloud processing device, and computer program product
Zhang et al. A hybrid 2D–3D tangible interface combining a smartphone and controller for virtual reality
de Haan et al. Hybrid Interfaces in VEs: Intent and Interaction.
Phillips Jack user's guide
WO2023207226A1 (en) Operation interface generation method and device and control method and device
Seth et al. A low cost virtual reality human computer interface for CAD model manipulation
US20220172441A1 (en) Virtual reality data-processing device, system and method
Zambon Mixed Reality-based Interaction for the Web of Things
CN117590928A (en) Multi-window processing method, equipment and system in three-dimensional space

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant