CN115373615A - Multi-screen display method, device and equipment of intelligent glasses and storage medium - Google Patents
Info
- Publication number
- CN115373615A (application number CN202211027078.3A)
- Authority
- CN
- China
- Prior art keywords
- virtual
- texture data
- screen
- texture
- screens
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device; Cooperation and interconnection of the display device with other functional units
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
Abstract
The application provides a multi-screen display method, apparatus, device, and storage medium for smart glasses. The smart glasses are used to create a virtual large screen comprising a plurality of virtual screens, and the page shown on the display module of the smart glasses is determined based on gyroscope angle data sent by a gyroscope. This greatly enlarges the display size of the screen and avoids stacking of application interfaces, solving the technical problems that the physical screen on the PC (personal computer) side is too small, so that either the display size of each application must be reduced, limiting the content it can show, or the application interfaces are stacked so that only the uppermost one is displayed and different application content can only be viewed by constantly switching applications, which is inconvenient and inefficient.
Description
Technical Field
The present application relates to the field of AR technologies, and in particular, to a multi-screen display method, apparatus, device, and storage medium for smart glasses.
Background
Since the dawn of the information era, computing platforms have constantly evolved: the PC (personal computer) and the smartphone created two golden ages of internet history. After years of development, both industries are largely mature, the space for industry growth and performance improvement is very limited, and user expectations keep falling. A smart-glasses display terminal with advanced near-eye optical display can become the next-generation personal computing platform and display screen, bringing users a disruptive, innovative experience.
In the PC and smartphone era, the screen size of a terminal is constrained by hardware technology and portability, so the display range of a physical screen cannot keep growing. Yet users want to see more displayed information with less interaction, for example displaying and using documents, emails, and web pages at the same time. At present a PC can only stack application windows or switch between them; there is no way to tile application functions over a larger area for display and interaction.
The desktop on the PC side is limited to the physical screen size, typically 14 to 30 inches. All interaction and application display must happen on the physical screen, which makes the window layout of the status bar, navigation bar, and desktop very cramped. When multiple applications are open, the physical screen is too small: either the display size of each application must be reduced, limiting the content it can show, or the applications are stacked so that only the uppermost application interface is visible and different content can only be viewed by constantly switching applications, which is inconvenient and inefficient.
Disclosure of Invention
The present application provides a multi-screen display method, apparatus, device, and storage medium for smart glasses, solving the technical problems that the physical screen on the PC (personal computer) side is too small, so that either the display size of each application must be reduced, limiting the displayed content, or the application interfaces are stacked so that only the uppermost one is displayed and different application content can only be viewed by constantly switching applications, which is inconvenient and inefficient.
In view of the above, a first aspect of the present application provides a multi-screen display method for smart glasses, the method including:
s1, initializing identification information and coordinate information of at least one virtual screen;
s2, collecting texture data of all virtual screens;
s3, splicing and synthesizing the texture data of all the virtual screens into texture data of a virtual large screen, wherein the width of the texture data of the virtual large screen is equal to the sum of the maximum widths of all the virtual screens in a single row, and the height of the texture data of the virtual large screen is equal to the sum of the maximum heights of all the virtual screens in a single column;
s4, acquiring gyroscope angle data sent by a gyroscope;
s5, cutting texture data of the virtual large screen according to the gyroscope angle data to obtain texture data of a display area;
and S6, sending the texture data of the display area to a display module, so that the display module displays the texture according to the texture data of the display area, and obtaining a display page.
Optionally, the step S1 specifically includes:
initializing identification information and coordinate information of at least one virtual screen, and taking the top-left vertex of the first virtual screen at the top-left corner as the (0, 0) coordinate, wherein the virtual screens comprise at most 3 rows, and each row comprises at most 9 virtual screens.
Optionally, after the step S2, before the step S3, the method further includes:
and storing the texture data of all the virtual screens into a multi-virtual screen texture container according to the identification information of each virtual screen.
Optionally, after the step S6, the method further includes:
s7, collecting texture data of all virtual screens at intervals of first preset time;
s8, comparing the collected texture data of all the virtual screens with all the virtual screens collected in the last period stored in the multi-virtual screen texture container, if the texture data changes, updating the texture data of the corresponding virtual screens, and setting the texture change identification of the corresponding virtual screens as a preset identification;
and S9, updating the texture data of the current virtual large screen according to the texture data of the virtual screens whose texture change identifier is set to the preset identifier, to obtain a new virtual large screen.
Optionally, after step S9, the method further includes:
and S10, returning to the step S4 for re-execution.
Optionally, after the step S6, the method further includes:
and if the user uses the peripheral equipment to operate the application window on the currently displayed page, acquiring texture data of the application window operated by the user, and returning to execute the step S7.
Optionally, the texture data comprises: texture identification data, texture RGB format data, texture width data, texture height data, texture coordinate data and texture change identification.
Optionally, the step S3 specifically includes:
and splicing RGB image data according to the coordinate information and texture data of all the virtual screens, synthesizing texture data of a virtual large screen with aligned boundaries, wherein the width of the texture data of the virtual large screen is equal to the sum of the maximum widths of all the virtual screens in a single row, the height of the texture data of the virtual large screen is equal to the sum of the maximum heights of all the virtual screens in a single column, and the vacant coordinate positions of the virtual large screen are filled with pure black texture data.
Optionally, the step S4 specifically includes:
if the user wearing the intelligent glasses rotates the head, acquiring gyroscope angle data sent by the gyroscope in real time.
A second aspect of the present application provides a multi-screen display device of smart glasses, the device including:
the initialization unit is used for initializing the identification information and the coordinate information of at least one virtual screen;
the acquisition unit is used for acquiring texture data of all the virtual screens;
the synthesis unit is used for splicing and synthesizing the texture data of all the virtual screens into the texture data of a virtual large screen, wherein the width of the texture data of the virtual large screen is equal to the sum of the maximum widths of all the virtual screens in a single row, and the height of the texture data of the virtual large screen is equal to the sum of the maximum heights of all the virtual screens in a single column;
the acquisition unit is used for acquiring gyroscope angle data sent by a gyroscope;
the cutting unit is used for cutting the texture data of the virtual large screen according to the gyroscope angle data to obtain the texture data of the display area;
and the sending unit is used for sending the texture data of the display area to a display module, so that the display module displays the texture according to the texture data of the display area to obtain a display page.
A third aspect of the present application provides a multi-screen display device of smart glasses, the device including a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the steps of the method for multi-screen display of smart glasses according to the first aspect as described above, according to instructions in the program code.
A fourth aspect of the present application provides a computer-readable storage medium for storing a program code for executing the multi-screen display method for smart glasses according to the first aspect.
According to the technical scheme, the embodiment of the application has the following advantages:
In the present application, a multi-screen display method for smart glasses is provided. The smart glasses are used to create a virtual large screen containing a plurality of virtual screens, and the page shown on the display module of the smart glasses is determined based on gyroscope angle data sent by a gyroscope. This greatly enlarges the display size of the screen and avoids stacking of application interfaces, solving the technical problems that the physical screen on the PC side is too small, so that either the display size of each application must be reduced, limiting the content it can show, or the application interfaces are stacked so that only the uppermost one is displayed and different application content can only be viewed by constantly switching applications, which is inconvenient and inefficient.
Drawings
Fig. 1 is a flowchart illustrating a method for displaying multiple screens of smart glasses according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a multi-screen display device of smart glasses according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The application provides a multi-screen display method and apparatus for smart glasses, solving the technical problems that the physical screen on the PC side is too small, so that either the display size of each application must be reduced, limiting the displayed content, or the application interfaces are stacked so that only the uppermost one is displayed and different application content can only be viewed by constantly switching applications, which is inconvenient and inefficient.
For ease of understanding, please refer to fig. 1, which is a flowchart of a multi-screen display method of smart glasses according to an embodiment of the present disclosure. As shown in fig. 1, the method specifically includes:
s1, initializing identification information and coordinate information of at least one virtual screen;
It should be noted that after the smart glasses are turned on, the identification information and coordinate information of the virtual screens are initialized. It can be understood that there is at least one virtual screen and that the size of a virtual screen is preset; if there are multiple virtual screens, they may be numbered from left to right and from top to bottom.
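The initialization step S1 can be sketched as follows. This is only an illustration of the numbering and coordinate scheme described above; the function name, dictionary fields, and default screen size are assumptions, while the (0, 0) top-left origin and the 3-row-by-9-column limit follow the description.

```python
# Sketch of step S1: number each virtual screen left to right, top to bottom,
# and assign coordinate information with the first screen's top-left vertex
# at (0, 0). The 3-row x 9-column limit follows the description; the default
# screen size of 1920x1080 is an assumed preset value.
MAX_ROWS, MAX_COLS = 3, 9
DEFAULT_W, DEFAULT_H = 1920, 1080  # assumed preset virtual-screen size

def init_virtual_screens(count):
    if not 1 <= count <= MAX_ROWS * MAX_COLS:
        raise ValueError("screen count must be between 1 and 27")
    screens = []
    for i in range(count):
        row, col = divmod(i, MAX_COLS)  # fill rows left to right first
        screens.append({
            "id": i,                    # identification information
            "row": row, "col": col,
            "x": col * DEFAULT_W,       # coordinate information
            "y": row * DEFAULT_H,
            "w": DEFAULT_W,
            "h": DEFAULT_H,
        })
    return screens
```

For example, with 10 screens the first 9 fill row 0 and the 10th starts row 1 at coordinate (0, 1080).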
S2, collecting texture data of all virtual screens;
It should be noted that after the system is started and the identification information and coordinate information of each virtual screen are initialized, texture data of each virtual screen is collected. The texture data may include, but is not limited to, texture identification data, texture RGB format data, texture width data, texture height data, texture coordinate data, and a texture change identifier. The texture RGB format data is the screenshot picture of the virtual screen; together with the texture width data, texture height data, and the other information it forms the texture data of that screen.
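The per-screen texture record listed above could be modeled as follows. This is a minimal sketch mirroring the enumerated fields; the class and field names are assumptions, not the patent's identifiers.

```python
# Sketch of the texture data described above: one record per virtual screen.
# Field names are assumed; the RGB buffer is the screen's screenshot picture.
from dataclasses import dataclass

@dataclass
class ScreenTexture:
    tex_id: int           # texture identification data
    width: int            # texture width data (pixels)
    height: int           # texture height data (pixels)
    x: int                # texture coordinate data (top-left corner)
    y: int
    rgb: bytes = b""      # texture RGB format data (3 bytes per pixel)
    changed: bool = False # texture change identifier

# A 4x2 all-black screenshot occupies 4 * 2 * 3 bytes.
tex = ScreenTexture(tex_id=0, width=4, height=2, x=0, y=0, rgb=bytes(4 * 2 * 3))
```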
S3, splicing and synthesizing the texture data of all the virtual screens into texture data of a virtual large screen, wherein the width of the texture data of the virtual large screen is equal to the sum of the maximum widths of all the virtual screens in a single row, and the height of the texture data of the virtual large screen is equal to the sum of the maximum heights of all the virtual screens in the single row;
It should be noted that after the texture data of each virtual screen is acquired, all the virtual screens are further spliced and synthesized into one virtual large screen, whose texture data is obtained from the texture data of each virtual screen. It can be understood that if the texture width data or texture height data differs between virtual screens, the sizes of the differently numbered virtual screens will not be uniform; when the virtual large screen is spliced, its texture width data and texture height data are determined according to the sum of the maximum widths of all the virtual screens in a single row and the sum of the maximum heights of all the virtual screens in a single column.
Further, RGB image data is spliced according to the coordinate information and texture data of all the virtual screens to synthesize texture data of a virtual large screen with aligned boundaries, wherein the width of the texture data of the virtual large screen is equal to the sum of the maximum widths of all the virtual screens in a single row, the height is equal to the sum of the maximum heights of all the virtual screens in a single column, and the vacant coordinate positions of the virtual large screen are filled with pure black texture data.
It can be understood that, since one virtual large screen may be formed by splicing a plurality of rows of virtual screens and a plurality of columns of virtual screens, if the widths or heights of the rows of virtual screens or the columns of virtual screens are not consistent, the synthesized virtual large screen may be in an irregular shape. In order to realize the display uniformity of the virtual large screen, the boundary of the virtual large screen is determined according to the sum of the maximum widths of all the virtual screens in a single row and the sum of the maximum heights of all the virtual screens in a single column, so that the virtual large screen with aligned boundaries is formed, and pure black texture data is adopted to fill the coordinate vacant positions in the virtual large screen.
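The splicing of step S3 can be sketched as follows, under one plausible reading of the size rule: the large-screen width is the sum of per-column maximum widths and the height is the sum of per-row maximum heights, which yields the aligned boundaries the description requires. The function name and the dictionary layout of the screens are assumptions; the pure-black fill of vacant positions follows the description (zero bytes are black in RGB).

```python
# Sketch of step S3: composite per-screen RGB screenshots into one
# large-screen buffer. Each screen is a dict with "row", "col", "w", "h",
# and an "rgb" buffer (row-major, 3 bytes per pixel). Positions not covered
# by any screen remain pure black (the zero-initialized fill).
def composite(screens, rows, cols):
    col_w = [max((s["w"] for s in screens if s["col"] == c), default=0)
             for c in range(cols)]
    row_h = [max((s["h"] for s in screens if s["row"] == r), default=0)
             for r in range(rows)]
    W, H = sum(col_w), sum(row_h)
    big = bytearray(W * H * 3)  # pure black texture data fills vacancies
    for s in screens:
        x0 = sum(col_w[:s["col"]])  # aligned boundary for this column
        y0 = sum(row_h[:s["row"]])  # aligned boundary for this row
        for y in range(s["h"]):
            dst = ((y0 + y) * W + x0) * 3
            src = y * s["w"] * 3
            big[dst:dst + s["w"] * 3] = s["rgb"][src:src + s["w"] * 3]
    return big, W, H
```

With two screens of different sizes in one row, the composed buffer takes the row's maximum height, and the area below the shorter screen stays black.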
S4, acquiring gyroscope angle data sent by a gyroscope;
It should be noted that, in addition to the texture data of each virtual screen, the display range of the smart glasses depends on the gyroscope angle data sent by the gyroscope. It can be understood that the gyroscope angle data reflects whether the user wearing the smart glasses rotates the head, since the user cannot see the display area of the whole virtual large screen at once. After initialization, the head position of the user wearing the smart glasses is taken as the reference; once the user rotates the head, the gyroscope angle data detected by the gyroscope changes, which in turn changes the display range the user can see through the display module of the smart glasses.
S5, cutting texture data of the virtual large screen according to the gyroscope angle data to obtain texture data of a display area;
it should be noted that the texture data of the virtual large screen is cut according to the gyroscope angle data, so that the texture data of the display area, which should be displayed in the display module when the user wears the smart glasses, can be obtained.
It can be understood that the texture data of the virtual large screen surrounds the display module of the smart glasses from left to right and from top to bottom, that is, in front of the user wearing the smart glasses. When the head rotates 360 degrees horizontally, the entire horizontal extent of the virtual large screen is shown to the user. For every 1 degree of horizontal head rotation, the texture coordinate of the virtual large screen moves by the texture width divided by 360; for every 1 degree of vertical head rotation, the texture coordinate moves by the texture height divided by 81. Once the vertical head angle exceeds the 81-degree range, the texture coordinate of the virtual large screen no longer moves; the 81-degree vertical range is the band from 40.5 degrees above to 40.5 degrees below the horizontal head position.
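The angle-to-coordinate mapping of steps S4 and S5 can be sketched as follows. The 360-degree horizontal span, the 81-degree vertical band, and the clamping at plus or minus 40.5 degrees follow the description; the function name and the treatment of the view window at the texture edge are assumptions.

```python
# Sketch of steps S4-S5: map gyroscope yaw/pitch to the crop origin on the
# virtual large screen's texture. Horizontally, the full texture width spans
# 360 degrees of head rotation; vertically, the texture height spans an
# 81-degree band clamped to +/-40.5 degrees around the horizontal position.
def crop_origin(yaw_deg, pitch_deg, tex_w, tex_h, view_w, view_h):
    # 1 degree of horizontal rotation moves the crop by tex_w / 360 pixels
    x = (yaw_deg % 360.0) * tex_w / 360.0
    # clamp pitch to the 81-degree band, then map it onto the texture height
    pitch = max(-40.5, min(40.5, pitch_deg))
    y = (pitch + 40.5) * tex_h / 81.0
    # keep the view window inside the texture vertically (assumed behavior)
    y = min(y, tex_h - view_h)
    return int(x) % tex_w, int(y)
```

The display-area texture is then the `view_w` by `view_h` sub-rectangle cut from the large-screen texture at this origin and sent to the display module (step S6).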
And S6, sending the texture data of the display area to the display module, so that the display module displays the texture according to the texture data of the display area, and obtaining a display page.
Further, step S1 specifically includes:
initializing identification information and coordinate information of at least one virtual screen, taking the top-left vertex of the first virtual screen at the top-left corner as the (0, 0) coordinate, wherein the virtual screens comprise at most 3 rows, and each row comprises at most 9 virtual screens.
Further, after step S2 and before step S3, the method further includes:
and storing the texture data of all the virtual screens into a multi-virtual screen texture container according to the identification information of each virtual screen.
Further, step S6 is followed by:
s7, collecting texture data of all virtual screens at intervals of first preset time;
s8, comparing the collected texture data of all the virtual screens with all the virtual screens collected in the previous period stored in a multi-virtual screen texture container, if the texture data changes, updating the texture data of the corresponding virtual screens, and setting texture change identification of the corresponding virtual screens as preset identification;
and S9, updating the texture data of the current virtual large screen according to the texture data of the virtual screens whose texture change identifier is set to the preset identifier, to obtain a new virtual large screen.
It should be noted that, in the present application, texture data of each virtual screen is stored in a multiple virtual screen texture container, and in the multiple virtual screen texture container, the texture data of each virtual screen is stored in a number corresponding to each virtual screen, so that when a virtual large screen is spliced and synthesized, the texture data in the multiple virtual screen texture container is directly obtained and synthesized.
At the same time, texture data of all virtual screens is collected at intervals of a first preset time. The newly collected texture data is compared, by number, with the texture data of each virtual screen collected and stored in the previous period in the multi-virtual-screen texture container. If the texture data has changed, the texture data of the corresponding virtual screen is updated by number, and the texture change identifier of that virtual screen is set to the preset identifier to mark that its texture data differs from that of the previous period.
The texture data of the currently composited virtual large screen is then updated according to the texture data of the virtual screens whose texture change identifier is the preset identifier in this period, yielding a new virtual large screen. The splicing and synthesis operation does not need to be repeated: only the texture data of the changed virtual screens is replaced in the virtual large screen, which updates the large-screen texture quickly, improves processing efficiency, and avoids recompositing the whole virtual large screen whenever any virtual screen changes.
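The periodic change detection of steps S7 through S9 can be sketched as follows. Representing the multi-virtual-screen texture container as a plain dict keyed by screen id is an assumption for illustration; the comparison, in-place update, and change flagging follow the description.

```python
# Sketch of steps S7-S9: re-capture each screen's texture, compare it with
# the copy stored in the multi-virtual-screen texture container, and report
# which screens changed so only those regions of the virtual large screen
# need repatching (no full recomposition).
def refresh(container, captures):
    """Update the container in place; return ids whose texture changed."""
    changed = []
    for sid, rgb in captures.items():
        if container.get(sid) != rgb:
            container[sid] = rgb  # store the new texture for the next period
            changed.append(sid)   # texture change identifier -> preset value
    return changed
```

The caller would then copy only the changed screens' buffers into the large-screen texture, as in the compositing step, rather than rebuilding it.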
Further, step S9 is followed by:
and S10, returning to the step S4 for re-execution.
It should be noted that, for the updated virtual large screen, it is also necessary to obtain gyroscope angle data sent by the gyroscope to determine texture data of the display page of the display module of the smart glasses.
Further, step S6 is followed by:
and if the user uses the peripheral equipment to operate the application window on the current display page, acquiring texture data of the application window operated by the user, and returning to execute the step S7 to acquire the texture data of all the virtual screens.
It should be noted that, if the user uses the peripheral device to perform the operation of the application window in the current display page of the display module of the smart glasses, where the operation may include operations such as creation, deletion, and movement of the application window, texture data of the application window operated by the user needs to be collected, and the step S7 is returned to update the texture data of the virtual large screen.
In fact, the texture data of the application window is included in the texture data of the virtual screen, that is, no matter what operation is performed in the virtual screen, the change of the texture data of the virtual screen can be obtained by collecting the texture data of the virtual screen, so as to update the texture data of the virtual large screen. However, after it is detected that the user performs the operation of the application window, the texture data of the virtual screen may be collected in real time, so as to avoid that the texture data of the virtual large screen is not updated in time if the time interval of the operation is smaller than the first preset time.
Further, step S4 specifically includes:
if the user wearing the intelligent glasses rotates the head, acquiring gyroscope angle data sent by the gyroscope in real time.
As shown in fig. 2, fig. 2 is a schematic structural diagram of a multi-screen display device of smart glasses according to an embodiment of the present application. The device includes:
an initializing unit 201 configured to initialize identification information and coordinate information of at least one virtual screen;
the acquisition unit 202 is used for acquiring texture data of all virtual screens;
the synthesis unit 203 is used for splicing and synthesizing the texture data of all the virtual screens into the texture data of a virtual large screen, wherein the width of the texture data of the virtual large screen is equal to the sum of the maximum widths of all the virtual screens in a single row, and the height of the texture data of the virtual large screen is equal to the sum of the maximum heights of all the virtual screens in a single column;
an obtaining unit 204, configured to obtain gyroscope angle data sent by a gyroscope;
a cutting unit 205, configured to cut texture data of the virtual large screen according to the gyroscope angle data to obtain texture data of the display area;
the sending unit 206 is configured to send the texture data of the display area to the display module, so that the display module displays the texture according to the texture data of the display area, and obtains a display page.
The embodiment of the present application further provides a multi-screen display device for smart glasses, where the device includes a processor and a memory:
the memory is used for storing the program codes and transmitting the program codes to the processor;
the processor is configured to execute the multi-screen display method of the smart glasses according to instructions in the program code.
An embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium is configured to store a program code, and the program code is configured to execute the multi-screen display method for smart glasses in any one of the foregoing embodiments.
In the embodiments of the application, a multi-screen display method, apparatus, device, and storage medium for smart glasses are provided. The smart glasses are used to create a virtual large screen comprising a plurality of virtual screens, and the page shown on the display module of the smart glasses is determined based on gyroscope angle data sent by a gyroscope. This greatly enlarges the display size of the screen and avoids stacking of application interfaces, solving the technical problems that the physical screen on the PC (personal computer) side is too small, so that either the display size of each application must be reduced, limiting the displayed content, or the application interfaces are stacked so that only the uppermost one is displayed and different application content can only be viewed by constantly switching applications, which is inconvenient and inefficient.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The terms "first," "second," "third," "fourth," and the like in the description of the application and the above-described figures, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged under appropriate circumstances such that the embodiments of the application described herein may be implemented, for example, in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that, in this application, "at least one" means one or more, "a plurality" means two or more. "and/or" for describing an association relationship of associated objects, indicating that there may be three relationships, e.g., "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of single item(s) or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.
Claims (10)
1. A multi-screen display method for smart glasses, characterized by comprising the following steps:
s1, initializing identification information and coordinate information of at least one virtual screen;
s2, collecting texture data of all virtual screens;
s3, splicing and synthesizing the texture data of all the virtual screens into texture data of a virtual large screen, wherein the width of the texture data of the virtual large screen is equal to the sum of the maximum widths of all the virtual screens in a single row, and the height of the texture data of the virtual large screen is equal to the sum of the maximum heights of all the virtual screens in the single row;
s4, acquiring gyroscope angle data sent by a gyroscope;
s5, cutting texture data of the virtual large screen according to the gyroscope angle data to obtain texture data of a display area;
and S6, sending the texture data of the display area to a display module, so that the display module displays the texture according to the texture data of the display area, and obtaining a display page.
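Steps S4 to S6 can be sketched in Python with NumPy. The claims do not specify how gyroscope angles map to pixels, so the linear, centre-relative mapping (the `px_per_deg` parameter) below is a hypothetical choice for illustration only:

```python
import numpy as np

def crop_display_area(big_texture, yaw_deg, pitch_deg,
                      view_w, view_h, px_per_deg=4.0):
    """Step S5 sketch: map gyroscope yaw/pitch angles to a pixel offset
    into the virtual large-screen texture and slice out the display-area
    texture. The angle-to-pixel scale is an assumed parameter."""
    h, w = big_texture.shape[:2]
    # Pan from the texture centre; positive yaw moves right, positive pitch up.
    cx = (w - view_w) / 2 + yaw_deg * px_per_deg
    cy = (h - view_h) / 2 - pitch_deg * px_per_deg
    # Clamp so the display window always stays inside the large texture.
    x = int(np.clip(cx, 0, w - view_w))
    y = int(np.clip(cy, 0, h - view_h))
    return big_texture[y:y + view_h, x:x + view_w]
```

Clamping keeps the cropped window valid even for extreme head rotations, so the display module always receives a full-size display-area texture.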
2. A multi-screen display method for smart glasses according to claim 1, wherein the step S1 specifically comprises:
initializing identification information and coordinate information of at least one virtual screen, and taking the top-left vertex of the first virtual screen in the top-left corner as the (0, 0) coordinate, wherein the virtual screens comprise at most 3 rows, each row comprising 9 virtual screens.
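A minimal sketch of the initialization in claim 2, assuming a uniform grid and a fixed per-screen resolution (the `screen_w`/`screen_h` values are illustrative assumptions, not taken from the claims):

```python
def init_virtual_screens(rows=3, cols=9, screen_w=1920, screen_h=1080):
    """Claim 2 sketch: assign each virtual screen an identifier and a
    top-left coordinate on a fixed grid, with the first screen's
    top-left vertex at (0, 0). At most 3 rows of 9 screens each."""
    screens = {}
    for r in range(rows):
        for c in range(cols):
            sid = r * cols + c                    # identification information
            screens[sid] = (c * screen_w, r * screen_h)  # coordinate information
    return screens
```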
3. A multi-screen display method for smart glasses according to claim 1, wherein after step S2 and before step S3, the method further comprises:
and storing the texture data of all the virtual screens into a multi-virtual screen texture container according to the identification information of each virtual screen.
4. A multi-screen display method for smart glasses according to claim 3, wherein the step S6 is followed by further comprising:
s7, collecting texture data of all virtual screens at intervals of first preset time;
s8, comparing the collected texture data of all the virtual screens with all the virtual screens collected in the previous period stored in the multi-virtual screen texture container, if the texture data changes, updating the texture data of the corresponding virtual screens, and setting the texture change identification of the corresponding virtual screens as a preset identification;
s9, updating the texture data of the current virtual large screen according to the texture data of the virtual screen with the texture table motion identifier set as the preset identifier to obtain a new virtual large screen;
and S10, returning to the step S4 for re-execution.
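The periodic update in steps S7 to S9 amounts to a dirty-region check against the multi-virtual-screen texture container. A sketch, assuming textures are NumPy arrays and the container is a plain dict keyed by screen identifier (both assumed representations):

```python
import numpy as np

def update_changed_screens(container, new_textures, big_texture, coords):
    """Steps S7-S9 sketch: compare each screen's newly collected texture
    with the cached copy; where it differs, update the cache, record the
    change identifier, and re-blit only that region of the large texture."""
    changed = set()
    for sid, tex in new_textures.items():
        if not np.array_equal(container[sid], tex):
            container[sid] = tex
            changed.add(sid)                      # the texture change identifier
            x, y = coords[sid]
            h, w = tex.shape[:2]
            big_texture[y:y + h, x:x + w] = tex   # patch only the dirty area
    return changed
```

Re-blitting only changed regions avoids re-synthesizing the whole virtual large screen every cycle.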
5. A multi-screen display method for smart glasses according to claim 4, wherein the step S6 is followed by further comprising:
and if the user uses the peripheral equipment to operate the application window on the currently displayed page, acquiring texture data of the application window operated by the user, and returning to execute the step S7 to acquire texture data of all the virtual screens.
6. A multi-screen display method for smart glasses according to claim 1, wherein the step S3 specifically includes:
and splicing the RGB image data according to the coordinate information and texture data of all the virtual screens to synthesize boundary-aligned texture data of a virtual large screen, wherein the width of the texture data of the virtual large screen is equal to the maximum total width of the virtual screens in any single row, the height of the texture data of the virtual large screen is equal to the sum of the maximum virtual-screen heights of the rows, and vacant coordinate positions of the virtual large screen are filled with pure black texture data.
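The splicing in claim 6 can be sketched as a row-major blit into a black canvas; representing the layout as a list of rows of textures is an assumption made for illustration:

```python
import numpy as np

def stitch_big_texture(grid):
    """Claim 6 sketch: `grid` is a list of rows, each row a list of HxWx3
    uint8 textures. Total height is the sum of each row's tallest texture;
    total width is the widest row; vacant positions stay pure black, so
    screen boundaries remain aligned."""
    row_heights = [max(t.shape[0] for t in row) for row in grid]
    row_widths = [sum(t.shape[1] for t in row) for row in grid]
    H, W = sum(row_heights), max(row_widths)
    big = np.zeros((H, W, 3), dtype=np.uint8)     # pure black fill for gaps
    y = 0
    for row, rh in zip(grid, row_heights):
        x = 0
        for tex in row:
            h, w = tex.shape[:2]
            big[y:y + h, x:x + w] = tex
            x += w
        y += rh
    return big
```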
7. A multi-screen display method for smart glasses according to claim 1, wherein the step S4 specifically includes:
and if the head of the user wearing the intelligent glasses rotates, acquiring gyroscope angle data sent by the gyroscope in real time.
8. A multi-screen display apparatus of smart glasses, comprising:
the initialization unit is used for initializing the identification information and the coordinate information of at least one virtual screen;
the acquisition unit is used for acquiring texture data of all the virtual screens;
the synthesis unit is used for splicing and synthesizing the texture data of all the virtual screens into texture data of a virtual large screen, wherein the width of the texture data of the virtual large screen is equal to the maximum total width of the virtual screens in any single row, and the height of the texture data of the virtual large screen is equal to the sum of the maximum virtual-screen heights of the rows;
the acquisition unit is used for acquiring gyroscope angle data sent by a gyroscope;
the cutting unit is used for cutting the texture data of the virtual large screen according to the gyroscope angle data to obtain the texture data of a display area;
and the sending unit is used for sending the texture data of the display area to a display module, so that the display module displays the texture according to the texture data of the display area to obtain a display page.
9. A multi-screen display device for smart glasses, the device comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute the multi-screen display method of the smart glasses according to any one of claims 1 to 7 according to instructions in the program code.
10. A computer-readable storage medium for storing a program code for executing the multi-screen display method of smart glasses according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211027078.3A CN115373615A (en) | 2022-08-25 | 2022-08-25 | Multi-screen display method, device and equipment of intelligent glasses and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211027078.3A CN115373615A (en) | 2022-08-25 | 2022-08-25 | Multi-screen display method, device and equipment of intelligent glasses and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115373615A true CN115373615A (en) | 2022-11-22 |
Family
ID=84068119
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211027078.3A Pending CN115373615A (en) | 2022-08-25 | 2022-08-25 | Multi-screen display method, device and equipment of intelligent glasses and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115373615A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1726711A (en) * | 2002-12-18 | 2006-01-25 | 基因系统会议有限公司 | A method and system for visually sharing an application |
CN104731546A (en) * | 2015-04-01 | 2015-06-24 | 宁波Gqy视讯股份有限公司 | Method and system for capturing high-resolution picture to be displayed on large screen |
CN113238724A (en) * | 2021-04-26 | 2021-08-10 | 深圳乐播科技有限公司 | Multi-zone combined screen projection method, device, equipment and storage medium |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1726711A (en) * | 2002-12-18 | 2006-01-25 | 基因系统会议有限公司 | A method and system for visually sharing an application |
CN104731546A (en) * | 2015-04-01 | 2015-06-24 | 宁波Gqy视讯股份有限公司 | Method and system for capturing high-resolution picture to be displayed on large screen |
CN113238724A (en) * | 2021-04-26 | 2021-08-10 | 深圳乐播科技有限公司 | Multi-zone combined screen projection method, device, equipment and storage medium |
Non-Patent Citations (3)
Title |
---|
成生辉: "Metaverse: Concepts, Technologies and Ecosystem", 30 April 2022, China Machine Press, pages 67-71 *
聂烜: "Computer Graphics", 31 December 2021, Northwestern Polytechnical University Press, pages 173-179 *
黄海: "Virtual Reality Technology", 31 January 2014, Beijing University of Posts and Telecommunications Press, pages 98-99 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9965144B2 (en) | Information processing apparatus and method, and non-transitory computer readable medium | |
EP2871561A1 (en) | Desktop system of mobile terminal and interface interaction method and device | |
US20150143284A1 (en) | Navigable Layering Of Viewable Areas For Hierarchical Content | |
US20010050687A1 (en) | Three-dimensional object display system, three-dimensional object display method and recording medium recording a three-dimensional object display program | |
CN111062778A (en) | Product browsing method, device, equipment and storage medium | |
CN107204044B (en) | Picture display method based on virtual reality and related equipment | |
US11651556B2 (en) | Virtual exhibition space providing method for efficient data management | |
CN111190672A (en) | UI (user interface) adaptation method of electronic equipment, electronic equipment and storage medium | |
KR102317013B1 (en) | Object management and visualization using computing devices | |
WO2022262404A1 (en) | Picture presentation method, picture presentation apparatus and electronic device | |
EP2996052A1 (en) | Method and apparatus for displaying media file on terminal by using page | |
US20190073091A1 (en) | Dynamic display layout systems and methods | |
US20130132907A1 (en) | Shape pixel rendering | |
EP3388989A1 (en) | Realogram to planogram user interface | |
US20140052580A1 (en) | Product explorer page for use with interactive digital catalogs and touch-screen devices | |
CN112416238A (en) | Information processing method, information processing device, electronic equipment and storage medium | |
CN107122104B (en) | Data display method and device | |
CN104267961A (en) | Scroll bar generation method and device | |
CN115373615A (en) | Multi-screen display method, device and equipment of intelligent glasses and storage medium | |
US10430458B2 (en) | Automated data extraction from a chart from user screen selections | |
CN108108417A (en) | Exchange method, system, equipment and the storage medium of cross-platform self adaptive control | |
CN108388395B (en) | Image clipping method and device and terminal | |
CN115222840A (en) | Table rendering method, computer-readable storage medium, and electronic device | |
CN110825989A (en) | Picture display method and device, electronic equipment and readable medium | |
JP2012003579A (en) | Two-dimensional diagram display program and two-dimensional diagram display apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||