CN113391734A - Image processing method, image display device, storage medium, and electronic device

Info

Publication number
CN113391734A
Authority
CN
China
Prior art keywords
application
virtual
window
windows
image
Legal status
Pending
Application number
CN202010169749.4A
Other languages
Chinese (zh)
Inventor
孙增才
蒋臣迪
孙兴阳
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Priority to CN202010169749.4A
Priority to PCT/CN2021/080281 (published as WO2021180183A1)
Publication of CN113391734A

Classifications

    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 Head tracking input arrangements
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G06F3/16 Sound input; Sound output


Abstract

The application provides an image processing method, an image display device, a storage medium, and an electronic device. The method includes: receiving a plurality of encoded application windows from an image source device and decoding them to obtain a plurality of decoded application windows, where the application windows correspond one-to-one to a plurality of applications; adding each application window into the virtual window corresponding to its application type among at least one created virtual window, according to the application category of each application window; and compositing the at least one virtual window within one image display frame and displaying it. Through Virtual Reality (VR) virtual display technology, multiple application windows can be projected simultaneously into the user's field of view. By tiling and expanding the virtual windows in a 360-degree ring, combining thumbnails with virtual windows, and arranging a global menu, an efficient human-computer interaction interface is realized, so that multitask flattening operation is achieved and efficiency is improved.

Description

Image processing method, image display device, storage medium, and electronic device
Technical Field
One or more embodiments of the present application relate generally to the field of image processing in virtual reality technology, and in particular, to an image processing method, an image display device, a storage medium, and an electronic device.
Background
Virtual Reality (VR) technology is a new technology developed in the 20th century that integrates computer, electronic information, optics, 3D modeling, and simulation technologies. Its basic implementation is to simulate a virtual environment with a computer so as to give the user a sense of immersion. With the continuous development of social productivity and science and technology, demand for VR technology is growing in various industries. VR technology has made great progress and is gradually becoming a new field of science and technology. Implementing a PC virtual desktop system through VR technology is also a hot spot of current research.
A traditional computer desktop system projects the relevant picture windows onto a display through a host to facilitate human-computer interaction. Because desktop display is constrained by the display itself, multiple task windows must be packed, integrated, and optimized in the host in advance and then transmitted to the display as a whole. The problem is that such a desktop display system is inconvenient for the flattened operation of applications, and application operation efficiency is low.
Disclosure of Invention
Some embodiments of the present application provide an image processing method, an image display device, a storage medium, and an electronic device. The present application is described below in terms of several aspects; the embodiments and advantages of these aspects may be referenced against one another.
To address the above-mentioned problem, in a first aspect, an embodiment of the present application provides an image processing method for an image source device, including: the image source device draws application windows for a plurality of applications opened by a user, where the application windows correspond one-to-one to the applications; the image source device encodes the plurality of application windows and transmits them to the image display device, where the transmitted encoded application windows are not composited into one image display frame at the image source device.
As can be seen from the foregoing embodiments of the first aspect, the image source device according to the embodiments of the present application may directly send the encoded application windows to the image display device without performing image integration, thereby avoiding the problem that an existing image source device must integrate all program windows into one frame of graphics output for display, which is inefficient and inconvenient for the flattened operation of applications.
With reference to the first aspect, in some implementations, the image source device includes at least one of a user device and a server, and the image display device includes a virtual reality display device.
In a second aspect, an embodiment of the present application provides an image processing method for an image display device, including: the image display device receives a plurality of encoded application windows from the image source device and decodes them to obtain a plurality of decoded application windows, where the application windows correspond one-to-one to a plurality of applications opened by a user at the image source device; the image display device creates at least one virtual window; the image display device adds each application window into the virtual window corresponding to its application type among the created at least one virtual window, according to the application category of each application window, for example, applications such as Weibo (microblog), Douyin, Word, and the like; the image display device then composites the at least one virtual window within one image display frame and displays the frame on the display screen.
As can be seen from the foregoing implementation of the second aspect, in implementations of the present application, multiple application windows can be displayed on different view areas in the virtual environment of the virtual reality three-dimensional space, and the application windows can be aggregated by application category. Not only can more windows be displayed without any window being occluded, but the picture displayed in each view area is also larger and clearer, enhancing the user's visual experience.
In combination with the second aspect, in some embodiments, further comprising: in response to a user clicking a first application window of a first virtual window of the at least one virtual window through a display screen of the image display device, controlling the display screen to display contents of the clicked first application window in the first virtual window, and simultaneously displaying thumbnails of other application windows included in the first virtual window.
In combination with the second aspect, in some embodiments, further comprising: in response to a user dragging a first application window of the first virtual window to a second virtual window of the at least one virtual window through a display screen of the image display device, controlling the display screen to display contents of the first application window in the second virtual window.
In combination with the second aspect, in some embodiments, further comprising: in response to a user gesture of sliding left or right on a display screen of the image display device, displaying at least one virtual window correspondingly sliding left or right.
With reference to the second aspect, in some embodiments, the image display device is a virtual reality display device.
In a third aspect, an embodiment of the present application provides an image processing method for an image display device, including: generating a menu containing a plurality of thumbnails of a plurality of virtual windows in at least one of the following cases: the number of virtual windows to be displayed exceeds a threshold, or the user looks down while wearing the image display device; where the virtual windows correspond one-to-one to the thumbnails, each virtual window includes at least one application window of the same application type, and the at least one application window corresponds one-to-one to at least one application; and controlling the display screen to display the menu.
As can be seen from the foregoing implementation of the third aspect, when the number of virtual windows is large and/or in response to the user's head motion, implementations of the present application provide the user with a convenient way of controlling virtual windows. By combining thumbnails with virtual windows and arranging a global menu, an efficient human-computer interaction interface is implemented, so that multitask flattening operation is achieved and efficiency is improved.
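As a rough illustration of the trigger condition above, the following Python sketch (all names, such as VirtualWindow and FOV_WINDOW_CAPACITY, are invented for illustration and do not come from the patent) checks both conditions and builds one thumbnail per virtual window:

```python
from dataclasses import dataclass

@dataclass
class VirtualWindow:
    window_id: int
    app_type: str  # e.g. "Word", "Weibo"

# Hypothetical threshold: how many virtual windows fit in the field of view.
FOV_WINDOW_CAPACITY = 4

def should_show_global_menu(windows, head_pitch_deg, look_down_deg=-20.0):
    # Menu is generated if the window count exceeds the threshold
    # OR the user lowers his/her head past an (assumed) pitch angle.
    return len(windows) > FOV_WINDOW_CAPACITY or head_pitch_deg < look_down_deg

def build_menu(windows):
    # One thumbnail per virtual window (one-to-one), each carrying
    # identification information of the window it corresponds to.
    return [{"window_id": w.window_id, "label": w.app_type} for w in windows]

apps = ["Word", "PowerPoint", "Weibo", "Douyin", "Mail"]
windows = [VirtualWindow(i, t) for i, t in enumerate(apps)]
if should_show_global_menu(windows, head_pitch_deg=-30.0):
    print(build_menu(windows))
```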
With reference to the third aspect, in some embodiments, the menu includes a carousel of thumbnails.
In combination with the third aspect, in some embodiments, the threshold is related to a field of view of the user.
With reference to the third aspect, in some embodiments, each of the plurality of thumbnails includes identification information of a virtual window of the plurality of virtual windows corresponding to each thumbnail.
With reference to the third aspect, in some embodiments, each of the plurality of thumbnails includes image content obtained by performing reduced image processing on a corresponding virtual window of the plurality of virtual windows, wherein the reduced image processing includes at least one of interlacing the corresponding virtual window and removing a portion of content of the corresponding virtual window.
As can be seen from the foregoing embodiment in combination with the third aspect, the thumbnails include information obtained by simplifying the content of the application window, so that a user can conveniently identify the display content in each thumbnail, and convenience in user operation is improved.
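The "reduced image processing" above might, for instance, look like the following sketch, assuming the virtual window content is available as a plain 2-D pixel array (the function names are illustrative): interlaced extraction keeps every other row and column, and a strip of rows is removed as an example of discarding part of the content.

```python
def interlace(pixels):
    # Keep every other row and every other column of the window image.
    return [row[::2] for row in pixels[::2]]

def remove_top_strip(pixels, rows=2):
    # Drop part of the content, here a strip of rows at the top.
    return pixels[rows:]

window = [[r * 10 + c for c in range(8)] for r in range(8)]  # toy 8x8 image
thumbnail = interlace(remove_top_strip(window))
print(len(thumbnail), "x", len(thumbnail[0]))  # 3 x 4
```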
With reference to the third aspect, in some embodiments, further comprising: and according to an instruction of a user, selecting one thumbnail from a plurality of thumbnails in the menu and displaying a virtual window corresponding to the selected thumbnail in the plurality of virtual windows, wherein the instruction comprises at least one of sliding of the menu by the user and clicking of the selected thumbnail.
With reference to the third aspect, in some embodiments, in a case where the virtual window corresponding to the selected thumbnail includes a plurality of application windows of the same type, displaying the corresponding virtual window further includes: and displaying one application window in the plurality of application windows of the same type and thumbnails of other application windows in the plurality of application windows of the same type in the corresponding virtual window.
In a fourth aspect, embodiments of the present application provide an image display apparatus comprising: a display controller for receiving the encoded plurality of application windows from the image source device and decoding to obtain a decoded plurality of application windows, wherein the plurality of application windows correspond to the plurality of applications one-to-one; creating at least one virtual window; adding each application window into a virtual window corresponding to the application type in at least one virtual window according to the application category corresponding to each application window in the plurality of application windows; and compositing at least one virtual window in one image display frame; and a display screen for displaying the image display frame.
As can be seen from the implementation manner of the fourth aspect, in the implementation manner of the present application, multiple application windows can be respectively displayed on different view areas in a virtual environment of a virtual reality three-dimensional space, and the application windows can be aggregated according to application categories, so that not only more windows can be displayed, and each window is not blocked, but also a picture displayed on the view area is larger and clearer, and the visual experience of a user is enhanced.
With reference to the fourth aspect, in some embodiments, further comprising: in response to a user clicking a first application window of a first virtual window of the at least one virtual window through a display screen of the image display device, controlling the display screen to display contents of the clicked first application window in the first virtual window, and simultaneously displaying thumbnails of other application windows included in the first virtual window.
With reference to the fourth aspect, in some embodiments, further comprising: in response to a user dragging a first application window of the first virtual window to a second virtual window of the at least one virtual window through a display screen of the image display device, controlling the display screen to display contents of the first application window in the second virtual window.
With reference to the fourth aspect, in some embodiments, further comprising: in response to a user gesture of sliding left or right on a display screen of the image display device, displaying at least one virtual window correspondingly sliding left or right.
With reference to the fourth aspect, in some embodiments, the image display device is a virtual reality display device.
In a fifth aspect, an embodiment of the present application provides an image display device, including: a display controller configured to generate a menu containing a plurality of thumbnails of a plurality of virtual windows in at least one of the following cases: the number of virtual windows to be displayed exceeds a threshold, or the user looks down at the display screen; where the virtual windows correspond one-to-one to the thumbnails, each virtual window includes at least one application window of the same application type, and the at least one application window corresponds one-to-one to at least one application; and to control the display screen to display the menu.
As can be seen from the foregoing implementation of the fifth aspect, when the number of virtual windows is large and/or in response to the user's head motion, implementations of the present application provide the user with a convenient way of controlling virtual windows. By combining thumbnails with virtual windows and arranging a global menu, an efficient human-computer interaction interface is implemented, so that multitask flattening operation is achieved and efficiency is improved.
With reference to the fifth aspect, in some embodiments, the menu includes a carousel of thumbnails.
In combination with the fifth aspect, in some embodiments, the threshold is related to a field of view of the user.
With reference to the fifth aspect, in some embodiments, each of the plurality of thumbnails includes identification information of a virtual window of the plurality of virtual windows corresponding to each thumbnail.
With reference to the fifth aspect, in some embodiments, each of the plurality of thumbnails includes image content obtained by performing reduced image processing on a corresponding virtual window of the plurality of virtual windows, wherein the reduced image processing includes at least one of interlacing the corresponding virtual window and removing a portion of content of the corresponding virtual window.
As can be seen from the foregoing embodiment in combination with the fifth aspect, the thumbnails include information obtained by simplifying the content of the application window, so that a user can conveniently identify the display content in each thumbnail, and convenience in user operation is improved.
With reference to the fifth aspect, in some embodiments, the display controller is further configured to select one thumbnail from the plurality of thumbnails in the menu and display the virtual window corresponding to the selected thumbnail, according to an instruction of the user, where the instruction includes at least one of the user sliding the menu and clicking the selected thumbnail.
With reference to the fifth aspect, in some embodiments, in a case where the virtual window corresponding to the selected thumbnail includes a plurality of application windows of the same type, displaying the corresponding virtual window further includes: and displaying one application window in the plurality of application windows of the same type and thumbnails of other application windows in the plurality of application windows of the same type in the corresponding virtual window.
With reference to the fifth aspect, in some embodiments, the image display device comprises a virtual reality display device.
In a sixth aspect, the present application provides a computer-readable storage medium, which may be non-volatile. The storage medium contains instructions that, when executed, implement a method as described in any one of the preceding aspects or embodiments.
In a seventh aspect, the present application provides an electronic device, including: a memory for storing instructions for execution by one or more processors of an electronic device, and a processor for executing the instructions in the memory to perform a method as described in any one of the preceding aspects or embodiments.
Drawings
Fig. 1 is a schematic diagram illustrating a basic display flow of a conventional desktop display system.
FIG. 2 shows a schematic diagram of an example image processing system according to an embodiment of the present application.
Fig. 3 shows a schematic diagram of an example image display device according to an embodiment of the present application.
Fig. 4 illustrates an interactive process in which an image source device and an image display device according to an embodiment of the present application perform an image processing method of the present application.
Fig. 5a-5d show schematic diagrams of graphical user interfaces presented by a display screen of an image display device according to an exemplary embodiment.
FIG. 6 shows a schematic diagram of an image processing method according to an embodiment of the present application.
Fig. 7 shows a schematic diagram of an image processing method according to another embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application proceeds by way of specific examples, and other advantages and effects of the present application will be readily apparent to those skilled in the art from the disclosure herein. While the present application is described in conjunction with preferred embodiments, this is not intended to limit its features to those embodiments; rather, the application is described in connection with embodiments so as to cover the alternatives and modifications that may be extended based on the claims of the present application. In the following description, numerous specific details are included to provide a thorough understanding of the present application; the present application may, however, be practiced without these details. Moreover, some specific details are omitted from the description in order to avoid obscuring the focus of the present application. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
Further, various operations will be described as multiple discrete operations, in a manner that is most helpful in understanding the illustrative embodiments; however, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations need not be performed in the order of presentation.
The terms "comprising," "having," and "including" are synonymous, unless the context dictates otherwise. The phrase "A/B" means "A or B". The phrase "A and/or B" means "(A and B) or (A or B)".
It should be noted that in this specification, like reference numerals and letters refer to like items in the following drawings, and thus, once an item is defined in one drawing, it need not be further defined and explained in subsequent drawings.
As used herein, the term module or unit may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
Fig. 1 is a schematic diagram illustrating a basic display flow of a conventional desktop display system.
In the prior art, a conventional computer desktop system projects the relevant picture windows onto a display through a host to facilitate human-computer interaction. Because its desktop display is constrained by the display, multiple task windows must be packed, integrated, and optimized in the host and then transmitted to the display as a whole for display.
As shown in FIG. 1, two applications 101, such as applications A and B, may execute simultaneously or separately in sequence. A Graphics Device Interface (GDI) 102 is mainly responsible for information exchange between the system and drawing programs and processes the graphics and image output of all windowed programs; it enables a programmer to produce output and composition on a hardware device without having to handle the hardware device or its driver directly. A multimedia programming interface 103, for example DirectX, enables Windows-based games or multimedia programs to achieve higher execution efficiency, enhances 3D graphics and sound effects, and provides designers with a common hardware driver standard so that developers do not have to write a different driver for each brand of hardware, thereby isolating applications from hardware. Applications 101 (e.g., A and B) communicate with system graphics components via GDI and DirectX, respectively, and drive hardware resources such as the GPU via the Windows Display Driver Model (WDDM) 104, thereby forming the graphics characteristics of the system. The WDDM 104 forms the desktop windows of the applications, such as graphics surfaces 107 (e.g., A and B), by invoking hardware resources such as the GPU, where a graphics surface is analogous to each of the separate menu windows shown on the desktop. Graphics surface A and graphics surface B are then integrated by the Desktop Window Manager (DWM) 108 to form a unified desktop display graphic, temporarily stored in GPU memory 109, and finally transmitted to the display 110 through a video transmission channel, such as a VGA (Video Graphics Array), HDMI (High Definition Multimedia Interface), DP (DisplayPort), or DVI (Digital Visual Interface) line, for display.
It can be seen that every time an application operates, image integration by the DWM 108 is required before output. This is the limitation of having only one display desktop: all program windows must be integrated into one frame of graphics output for display, which is inefficient and inconvenient for the flattened operation of applications.
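For contrast with the embodiments below, here is a toy Python sketch of that conventional flow (Surface and dwm_compose are illustrative stand-ins, not actual Windows APIs): every application surface must be merged into a single desktop frame before anything reaches the display.

```python
from dataclasses import dataclass

@dataclass
class Surface:
    app: str        # stand-in for a per-application graphics surface
    pixels: bytes

def dwm_compose(surfaces):
    # Conventional desktop flow: ALL surfaces are integrated into one frame
    # of graphics output; the display only ever receives this single frame.
    return b"".join(s.pixels for s in surfaces)

def send_to_display(frame):
    pass  # placeholder for the VGA/HDMI/DP/DVI transmission channel

send_to_display(dwm_compose([Surface("A", b"\x01"), Surface("B", b"\x02")]))
```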
The technical solution of the present application implements a virtual desktop system through VR technology, using the wide visual field of a person to remove the constraint of projecting onto a physical display. Through the VR virtual desktop technique, multiple task menu windows can all be placed into the user's field of vision, tiled and spread out. Multitask flattening operation is thus achieved, and efficiency is improved. Some embodiments of the present application are described below with reference to the drawings.
FIG. 2 shows a schematic diagram of an example image processing system according to an embodiment of the present application.
The image processing system 20 may include an image source device 210, a network 220, and an image display device 230. The image display device 230 further includes a display controller 231 and a display screen 232. Image source device 210 may include, among other things, a user device 212 and a server 211. User devices 212 may include, but are not limited to, desktop computers, laptop computer devices, tablet computers, smart phones, in-vehicle infotainment devices, streaming client devices, and various other electronic devices for image processing. The server 211 may include a service device of a cloud infrastructure provider or a data center service provider, which may provide all services of computing, rendering, storing, encoding and decoding, and the like. The network 220 may include various transmission media for data communication between the image source device 210 and the image display device 230, such as a wired or wireless data transmission link, wherein the wired data transmission link may include a VGA line, an HDMI line, a DP line or a DVI line, etc., and the wireless data transmission link may include the internet, a local area network, a mobile communication network, and combinations thereof.
The image source device 210 may generate the image data sent to the image display device 230 partly according to the existing desktop display flow shown in fig. 1, but in various embodiments of the present application, no image integration flow is performed in the image source device 210: the DWM 108 is eliminated for Windows systems, the SurfaceFlinger image integration module is eliminated for Android systems, and the Wayland compositor image integration module is eliminated for Linux systems. In the following embodiments, the differences of the image generation process performed by the image source device 210 from the related art are described with emphasis, and parts that are the same as or similar to the related art are described only briefly.
The image display device 230 may include a wearable device (e.g., a Head-Mounted Display (HMD)), a Virtual Reality (VR) and/or Augmented Reality (AR) device, and the like. Among virtual reality display devices, the head-mounted virtual reality device is a type of wearable device, also known as a virtual reality helmet, VR glasses, or eyeglass-style display. In the image display device 230, the display controller 231 is configured to execute the image processing method provided in the embodiments of the present application, and the display screen 232 is configured to display the image processed by the display controller 231 to the user. The display controller 231 and the display screen 232 are further illustrated below with reference to fig. 3.
In some possible scenarios of the image processing system 20, a user may use a user device 212, such as a smartphone, and connect the smartphone and an image display device 230, such as a head mounted virtual reality device, using a data line.
In other possible scenarios of the image processing system 20, the user device 212 or the server 211 may be communicatively connected to an image display device 230, such as a head-mounted virtual reality device, based on a fifth-generation mobile communication technology (5G) network. As an example, the deployment of 5G networks provides ultra-large access bandwidth for end users; the data transmission rate is much higher than that of previous cellular networks, up to 10 Gbit/s, faster than the current wired internet and 100 times faster than current 4G LTE cellular networks. Another advantage of 5G networks is lower network latency (faster response time), typically less than 1 millisecond, which lays the bandwidth foundation for cloud VR. In addition, when the server 211 is a service device of a cloud infrastructure or data center, users can conveniently access, anytime, anywhere, and on demand, a configurable shared pool of computing resources (e.g., networks, servers, storage, applications, and services); resources in the pool can be allocated and released quickly, with little management workload or interaction with the service provider. Massive numbers of terminal users access the network through 5G, and the server 211 provides all services such as computation, rendering, storage, and encoding/decoding. Third-party content providers supply all content services, so the user terminal can be light and convenient and no longer needs strong hardware computing support; each end user needs only a login account and a receiving display terminal such as the image display device 230. With the deployment of 5G networks, their ultra-large bandwidth and ultra-low latency can guarantee the availability of cloud-rendered VR pictures and further promote the application of the VR virtual desktop.
In other possible scenarios of the image processing system 20, the user device 212 or the server 211 may be interconnected with the image display device 230 based on other communication networks, which may include, for example, a Wi-Fi hotspot network, a Wi-Fi P2P network, a Bluetooth network, a ZigBee network, a Near Field Communication (NFC) network, and/or a future-evolution Public Land Mobile Network (PLMN) or the internet, and the like.
An example of the image display device 230 is specifically described below with reference to fig. 3. Fig. 3 shows an exemplary schematic diagram of a head-mounted virtual reality device 300. The head-mounted virtual reality device 300 may include a processing unit 310, a storage module 320, a transceiver module 330, a video rendering module 340, a left display screen 342, a right display screen 344, an audio codec module 350, a microphone 352, a speaker 354, a head motion capture module 360, a gesture motion recognition module 370, optics 380, and so forth.
It is to be understood that the illustrated structure of the embodiment of the present invention does not constitute a specific limitation to the head-mounted virtual reality apparatus 300. In other embodiments of the present application, the head mounted virtual reality device 300 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processing unit 310 may include one or more processing units, such as: the processing unit 310 may include an Application Processor (AP), a modem processor, a controller, a Digital Signal Processor (DSP), and/or a baseband processor, among others. The different processing units may be separate devices or may be integrated into one or more processors.
The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processing unit 310 for storing instructions and data. In some embodiments, the memory in processing unit 310 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processing unit 310. If processing unit 310 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processing unit 310, thereby increasing the efficiency of the system.
The storage module 320 may be used to store computer-executable program code, which includes instructions. The storage module 320 may include a program storage area and a data storage area. Wherein the storage program area may store an operating system, an application program required for at least one function, and the like. The storage data area of the storage module 320 may store data (e.g., thumbnails of applications) created during use of the virtual reality device 300, and the like.
In one or more embodiments of the present application, the storage module 320 may store at least a portion of one or more image processing methods provided by embodiments of the present application. It is to be understood that when the code of the image processing method stored in the storage module 320 is executed by the processing unit 310, it may be implemented to display application interfaces of a plurality of applications on virtual display screens in a virtual environment of a three-dimensional space, respectively.
In addition, the storage module 320 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a Universal Flash Storage (UFS), and the like. The processing unit 310 performs various functions of the virtual reality device 300 and data processing by executing instructions stored in the storage module 320 and/or instructions stored in a memory provided in the processing unit 310.
The transceiving module 330 may receive data to be transmitted from the processing unit 310 and then transmit the data to the image source device 210; meanwhile, the transceiver module 330 may also receive data, e.g., frames of image data, etc., from the image source device 210.
The video rendering module 340 may include a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), and a video codec, where the GPU performs the mathematical and geometric calculations for graphics rendering. In some other embodiments, the video rendering module 340, or some of its functional modules, may be disposed in the processing unit 310; for example, the processing unit 310 may integrate a CPU and a GPU. In various embodiments, the CPU and the GPU may cooperate to execute the image processing method provided in the embodiments of the present application, for example, with part of the algorithm executed by the CPU and another part executed by the GPU, to obtain faster processing.
The left display screen 342 and the right display screen 344 display images to the user's left and right eyes, respectively, where the images may be still images or video frames. Both the left display screen 342 and the right display screen 344 include display panels. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. The optics 380 may include optical elements such as a zoom lens; during image display, the optics 380 refractively adjust the light emitted by the display screens to compensate for the refractive errors of users with myopia, hyperopia, or astigmatism, so that the user can clearly see the image on the display screen.
The audio codec module 350 is used for converting digital audio information into an analog audio signal and outputting the analog audio signal, and also for converting an analog audio input into a digital audio signal. The audio codec module 350 may also be used to encode and decode audio signals. In some embodiments, the audio codec module 350 may be disposed in the processing unit 310, or some functional modules of the audio codec module 350 may be disposed in the processing unit 310. The microphone 352 is used to input analog audio of a user to the audio codec module 350, and the speaker 354 is used to play an analog audio signal output by the audio codec module 350 to the user.
The head motion capture module 360 may capture and recognize the motion of the user's head, such as whether the user is lowering or raising the head, the angle of rotation of the user's head, and so on.
The gesture motion recognition module 370 may recognize a user's operational gesture motion, such as a slide left and right, slide up and down, click confirmation, and the like.
In some possible scenarios, when the user uses the virtual reality device 300, the transceiver module 330 receives video or image information transmitted from the image source device 210, such as a PC. The processing unit 310 processes the related video environment or graphics window information and projects it onto the left and right display screens (342, 344) through the video rendering module 340, forming a virtual display environment or virtual desktop display system. The audio codec module 350 decodes the audio information and sends it to the speaker 354 for sound output, providing a more immersive VR video experience; meanwhile, the microphone 352 can capture the user's voice, providing audio interaction between the VR system and the PC side. While the virtual reality device 300 is in use, when the user starts to lower his or her head, the head motion capture module 360 captures the motion and triggers a corresponding operation of the virtual desktop display system, such as opening and displaying the global thumbnail menu; when the user raises his or her head again, closing of the global thumbnail menu is triggered. The head motion capture module 360 can also capture motions such as head rotation, so that the window picture in front of the user's visual field follows the head synchronously. The gesture motion recognition module 370 recognizes the user's gesture motions, such as sliding left and right, sliding up and down, and click confirmation, to facilitate interaction with the virtual desktop system.
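The interaction logic in this scenario could be organized roughly as follows (a Python sketch; the event names and the VrDesktop class are hypothetical, invented for illustration):

```python
class VrDesktop:
    def __init__(self):
        self.global_thumbnail_open = False
        self.yaw_deg = 0.0  # which part of the 360-degree ring faces the user

    def on_head_event(self, event, angle=0.0):
        if event == "head_down":       # captured by head motion capture module
            self.global_thumbnail_open = True
        elif event == "head_up":
            self.global_thumbnail_open = False
        elif event == "head_rotate":   # window picture follows the view
            self.yaw_deg = (self.yaw_deg + angle) % 360

    def on_gesture(self, gesture):
        actions = {"slide_left": "scroll windows left",
                   "slide_right": "scroll windows right",
                   "click": "confirm selection"}
        return actions.get(gesture, "ignored")

desktop = VrDesktop()
desktop.on_head_event("head_down")
print(desktop.global_thumbnail_open)     # True -> global thumbnail menu shown
print(desktop.on_gesture("slide_left"))  # scroll windows left
```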
In some embodiments, the display controller 231 of the image display device 230 shown in fig. 2 may be implemented in software; more generally, the display controller 231 may be implemented in hardware, software, or a combination of the two. For example, referring to fig. 2 and 3, when the image display device 230 is a virtual reality device 300, the display controller 231 of the image display device 230 may include, for example, the processing unit 310, the storage module 320, the transceiver module 330, and the video rendering module 340, and the display screen 232 of the image display device 230 may include the left display screen 342, the right display screen 344, and the optics 380. In other embodiments, the image display device 230 may not include one or more of the parts shown in fig. 3, or may include other parts not shown in fig. 3.
The image processing method provided by the embodiments of the present application is exemplarily described below with reference to the drawings and practical application scenarios. Fig. 4 illustrates an interactive process in which the image source device 210 and the image display device 230 perform the image processing method of the present application. Parts of the prior art mentioned in fig. 1 above are not shown in fig. 4: for example, one or more applications of the image source device 210 start and apply to create application windows, a processor of the image source device 210 executes the application flow and completes resource allocation, and an application may call a 2D or 3D graphics program interface such as OpenGL or DirectX according to its type. As another example, when the image source device 210 is a server such as one in the cloud, after the image display device 230 connects to the image source device 210 through the network, for example by logging in to the cloud server to request a virtual reality service, the image source device 210 may allocate device resources for the virtual reality service, such as a CPU and a GPU, to the image display device 230 in the cloud.
Subsequently, at block 401: a plurality of application windows are drawn for the plurality of applications, where the application windows correspond one-to-one to the applications. For example, the image source device 210 may process the drawing instructions of each application using the GPU, complete the drawing of each application's graphics window, and cache each drawn application window in memory, where the memory may be a GPU cache, a CPU cache, or another memory of the image source device 210, without limitation here.
At block 402: identification information is added to each of the plurality of application windows. Illustratively, the image source device 210 adds identification information to the pixel data of these application windows. For example, the image source device 210 may add a Header file (Header) to a cached frame of an application window in memory, which may include metadata such as an application type, a file name, a creation time, an author, etc. of an application to which the application window corresponds.
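For illustration, the identification header could be modeled as below (a Python sketch; the field set follows the metadata listed above, but the concrete structure is an assumption, not the patent's actual wire format):

```python
from dataclasses import dataclass
import time

@dataclass
class WindowHeader:
    app_type: str       # e.g. "Word", "Weibo"
    file_name: str
    creation_time: float
    author: str

@dataclass
class TaggedWindowFrame:
    header: WindowHeader  # identification info attached to the pixel data
    pixels: bytes

frame = TaggedWindowFrame(
    WindowHeader("Word", "draft.docx", time.time(), "user"),
    pixels=b"...raw window pixels...",
)
```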
At block 403: a plurality of application windows are encoded. A program, such as a display driver, of the image source device 210 may encode data of the application window in the memory.
After encoding, at block 404, the image source device 210 transmits, by wired or wireless means, the encoded application windows to the image display device 230. Here, the transmitted encoded application windows are not composited at the image source device into one image display frame for display to the user.
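Putting blocks 401-404 together, the source side might look like the sketch below, reusing the TaggedWindowFrame sketch above (zlib stands in for whatever real image/video codec the device uses, and send for the transport): each window is encoded and transmitted separately, with no composition step.

```python
import zlib

def encode(window):
    # Placeholder codec: any image/video encoder could stand here.
    return zlib.compress(window.pixels)

def send(payload):
    pass  # wired (VGA/HDMI/DP/DVI) or wireless (5G, Wi-Fi, ...) transport

def source_side_pipeline(tagged_windows):
    for w in tagged_windows:  # one stream per application window;
        send(encode(w))       # NOT composited into a single frame here
```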
Next, the following flow is described taking the virtual reality device 300 as an example of the image display device 230. It can be understood that fig. 4 does not specifically show the known technique of rendering the virtual display environment after the virtual reality device 300 is started: for example, after startup, an application of the User Interface (UI) system of the virtual reality device 300 is also started and applies for a corresponding application window; after the processing unit 310 allocates the corresponding resources, the application calls a 2D/3D graphics program interface and processes the UI drawing instructions through the video rendering module 340, completing the rendering of the virtual environment and forming the VR-side virtual desktop framework.
Returning to FIG. 4, at block 405: an encoded plurality of application windows is received. The transceiving module 330 of the virtual reality device 300 receives data from the encoded plurality of application windows of the image source device 210 from the image source device 210.
After the virtual reality device 300 receives the data, at block 406: decoding is performed to obtain a plurality of decoded application windows. The virtual reality device 300 decodes the data of the application windows carrying the identification information, and then classifies and stores the application windows in the storage module 320 of the virtual reality device 300 according to the respective application types, for example, Word, Weibo (microblog), Douyin, and the like.
At block 407: at least one virtual window is created. The virtual reality device 300 creates a corresponding number of virtual windows, which may be regions in a virtual desktop frame for displaying application windows for a user, according to the number of application windows cached in the storage module 320.
At block 408: and adding each application window into a virtual window corresponding to the application type in the plurality of virtual windows according to the application category corresponding to each application window in the plurality of application windows.
As an example, when the number of application windows is less than a predetermined threshold, the virtual reality device 300 may add each application window saved in the storage module 320 to its own virtual window. As another example, when the number of application windows is greater than the predetermined threshold, the virtual reality device 300 checks the identification information of each application window stored in the storage module 320 and adds each application window to the virtual window corresponding to its application type. For example, as more and more windows are opened, the virtual reality device may classify and aggregate them by application category, such as aggregating the application windows of all open Word documents within one virtual window and aggregating the application windows of all open PowerPoint documents in another virtual window; a sketch of this rule follows below. Further examples of this are explained below with reference to the scenes and figures.
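A rough sketch of the rule in block 408 (Python; the dictionary-based grouping and the threshold value are assumptions about one possible implementation):

```python
from collections import defaultdict

AGGREGATION_THRESHOLD = 4  # hypothetical

def assign_to_virtual_windows(app_windows):
    # Each element of app_windows carries its identification info,
    # e.g. {"app_type": "Word", "pixels": ...}.
    if len(app_windows) <= AGGREGATION_THRESHOLD:
        # Few windows: one virtual window per application window.
        return {f"virtual-{i}": [w] for i, w in enumerate(app_windows)}
    grouped = defaultdict(list)
    for w in app_windows:  # many windows: aggregate by application type
        grouped[w["app_type"]].append(w)
    return dict(grouped)

windows = [{"app_type": t} for t in
           ["Word", "Word", "PowerPoint", "Weibo", "Douyin"]]
print(list(assign_to_virtual_windows(windows)))
# ['Word', 'PowerPoint', 'Weibo', 'Douyin']
```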
At block 409: at least one virtual window is composited into one image display frame and displayed. The video rendering module 340 composites all virtual windows into one image display frame and generates two views of the frame, which are then displayed on the left display screen 342 and the right display screen 344, respectively.
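Block 409 could be sketched as below (Python; the flat byte concatenation and the fixed eye offset are toy stand-ins for the real GPU composition and per-eye projection):

```python
from dataclasses import dataclass

@dataclass
class DisplayFrame:
    left_eye: bytes
    right_eye: bytes

def render_view(scene, eye_offset):
    return scene  # placeholder: a real renderer re-projects per eye

def compose_frame(virtual_windows):
    # All virtual windows are composited into ONE image display frame...
    scene = b"".join(virtual_windows)
    # ...from which two views are generated, one per display screen.
    return DisplayFrame(render_view(scene, -0.032),  # assumed half-IPD (m)
                        render_view(scene, +0.032))

frame = compose_frame([b"window-a", b"window-b"])
```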
The image processing method 400 shown in fig. 4 is further described below in conjunction with the scene and the figure.
Fig. 5a shows a schematic example of a virtual window presented by a display screen of a virtual reality device 300 according to an exemplary embodiment. The display scenario shown in fig. 5a is a further example of a possible implementation of the image processing method 400, and details of what has been described in fig. 4 are not repeated below.
In general, when a user uses the virtual reality device 300, the user can utilize a virtual display region spanning the 360° three-dimensional space centered on the user. As shown in fig. 5a, an effective arrangement of virtual windows in the field-of-view region 520 of the virtual reality device 300 is a tiled circular arrangement. For example, a plurality of virtual windows 530a-530n may be arranged as a 360° circular tiling centered on the user 510, where each virtual window 530a-530n can clearly present the content of its application to the user 510.
In an exemplary scenario, when the displayed virtual windows 530 do not exceed a predetermined threshold number, the virtual windows 530 are preferentially arranged in the field-of-view region 520 in front of the user 510. In some examples, the visual angle of view of user 510 may be assumed to be approximately 120 degrees, so that the field of view of user 510 corresponds to a 120-degree range; under this condition, the threshold number of displayed virtual windows 530 may be the maximum number of virtual windows 530 that the field-of-view region 520 can accommodate. For example, as illustrated in fig. 5a, the threshold number may be 4, i.e., at most 4 virtual windows 530 are displayed in one field-of-view region 520; a placement sketch follows below.
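Under the assumed 120-degree field of view, placement could be computed as in this sketch (Python; the 30-degree spacing is just the figure implied by "4 windows per 120 degrees", not a value stated in the patent):

```python
FOV_DEG = 120.0
WINDOW_SPACING_DEG = 30.0                 # 120 / 4 -> threshold of 4 windows
THRESHOLD = int(FOV_DEG // WINDOW_SPACING_DEG)

def window_azimuths(n):
    # Fill the frontal region first, then grow the ring outward,
    # alternating left and right of center, up to the full 360 degrees.
    angles = []
    for i in range(n):
        k = (i + 1) // 2                  # 0, 1, 1, 2, 2, 3, 3, ...
        sign = -1 if i % 2 else 1         # alternate sides of center
        angles.append((sign * k * WINDOW_SPACING_DEG) % 360)
    return angles

print(window_azimuths(6))  # [0.0, 330.0, 30.0, 300.0, 60.0, 270.0]
```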
Thereafter, as the number of virtual windows 530 increases, each newly added virtual window 530 may be arranged, one by one, along the ring centered on the user, extending from both sides of the front field-of-view region 520 of the user 510. Thus, as the field of view of the user 510 moves, the virtual windows 530 in different field-of-view regions 520 can be seen; for example, when the user 510 turns around, the field-of-view region 520 originally arranged behind the user 510 becomes visible. User 510 may perform control operations on a virtual window 530 through gestures: for example, a leftward or rightward gesture may slide the virtual windows 530, and an upward or downward gesture may close a virtual window 530. Further, the user 510 may click on the content in a virtual window 530 and perform operations such as copying and cutting.
As an example, suppose the user 510 has separately opened four applications on a PC, for example Word, PowerPoint, Weibo (microblog), and Douyin, and displays them to himself via the virtual reality device 300. After the virtual reality device 300 receives the encoded application-window data from the PC, the decoded application window of each application is added into its own virtual window, and the virtual windows are composited into one image display frame by the video rendering module 340 and then displayed to the user 510. For example, in fig. 5a, virtual window 530a may display a Word document, virtual window 530b may display a PowerPoint document, virtual window 530c may display the content of a microblog application, and virtual window 530n may display the content of a Douyin application.
According to the embodiments of the present application, the application windows of multiple applications can be displayed on different view areas in the three-dimensional virtual reality environment. Not only is no window occluded, but the picture displayed in each view area is also larger and clearer, enhancing the user's visual experience.
Another possible illustrative example of the display screen of the virtual reality device 300 presenting virtual windows is described below in conjunction with fig. 5b. The display scene shown in fig. 5b is also an example of a possible implementation of the image processing method 400; details already described in fig. 4 and fig. 5a are not repeated below.
The display scenario shown in fig. 5b addresses the case where the number of displayed virtual windows 530 exceeds the predetermined threshold. In short, as more and more virtual windows 530 are opened, they may be aggregated by application category: for example, the application windows of all opened Word documents are aggregated in one virtual window, and the application windows of all opened PowerPoint documents are aggregated in another virtual window.
As shown in fig. 5b, the application windows of each application share one virtual window 530: for example, virtual window 530a may display the first-opened Word document, virtual window 530b the first-opened PowerPoint document, virtual window 530c the first-opened page of the microblog application, and virtual window 530n the first-opened page of the Douyin application. Within each virtual window 530, the other application windows of that application that are opened later are displayed in sequence as thumbnails; for example, in virtual windows 530a-530n, thumbnails 531a, 532b, 533c, and 534n are arranged at the bottom of the respective window. The thumbnails may instead be placed at the top or on either side of the virtual window 530, which is not particularly limited herein. Each thumbnail may itself be a virtual window to which an application window has been added, and the thumbnails are independent of and do not interfere with one another. A grouping sketch follows.
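The Python sketch below illustrates this aggregation under the assumption that each application window carries an application-category tag; the data shapes and names are illustrative, not taken from the embodiment.

```python
from collections import defaultdict

def aggregate_by_app(opened: list[tuple[str, str]]) -> dict[str, dict]:
    """opened: (app_category, window_id) pairs in the order they were opened."""
    grouped = defaultdict(list)
    for app, win_id in opened:
        grouped[app].append(win_id)
    # The first-opened window of each application is shown full size in that
    # application's virtual window; later ones become bottom-row thumbnails.
    return {app: {"front": wins[0], "thumbnails": wins[1:]}
            for app, wins in grouped.items()}

layout = aggregate_by_app([("Word", "w1"), ("Word", "w2"),
                           ("PowerPoint", "p1"), ("Douyin", "d1")])
# layout["Word"] == {"front": "w1", "thumbnails": ["w2"]}
```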
The user 510 may control each virtual window 530 and the thumbnails within it through gestures; for example, when the hand of the user 510 slides left or right, the virtual windows 530 in the field-of-view region 520 in front of the user slide left or right accordingly. Further, clicking a thumbnail in a virtual window 530 switches what that window displays. For example, if virtual window 530b currently displays the application window of its leftmost thumbnail 532b, clicking the rightmost thumbnail 532b switches the display of virtual window 530b to the application window of that rightmost thumbnail.
Alternatively or additionally, consider the case where the application windows opened by the user 510 have been aggregated but are still spread out in a 360° tiled ring around the user 510, and the user 510 now needs to tile 2 or more windows of the same application side by side, for example to compare their content. The user 510 may drag any thumbnail in the application's virtual window 530 into any other virtual window through a gesture operation: for example, dragging a thumbnail 532b from virtual window 530b into virtual window 530c causes the thumbnail 532b to be displayed in virtual window 530c, while the application window previously displayed in virtual window 530c is automatically closed. Further, the user 510 may click on content in a virtual window 530 and perform operations such as copying and cutting.
Fig. 5c shows a schematic diagram of one possible example of a thumbnail in a virtual window. In fig. 5c, the thumbnail 532b may display header-file information 541 of the application window, which may include at least part of the metadata in the window's header file; for example, for Word and PowerPoint files, the file name, the file's modification time, and similar information may be shown in the header-file information 541. The thumbnail 532b may also display simplified content information derived from the application window; depending on the specific content of the window, this simplified information generally includes text information 542 and reduced picture information 543.
As an example, assume the content of the application window shown enlarged in the thumbnail 532b of fig. 5c is primarily text, e.g., the application window is a Word file opened by the user 510. The thumbnail 532b may then display text information 542 obtained by simplifying the current page of the Word file; the text information 542 may include part of the text of that page. In one possible case, where the current page is the first page of the file, the text information 542 may include, for example, the title on the first page, all of the text of the upper half of the first page, and/or text sampled at intervals from the first page.
As another example, assume the content shown enlarged in the thumbnail 532b of fig. 5c is image content, e.g., the application window is a Douyin video page opened by the user 510. The thumbnail 532b may then display picture information 543 obtained by simplifying the video page; the picture information 543 may include, for example, the cover images of one or more videos on the page that have video covers, or an image of the page's main area. The main area can be understood as the area containing a video's playback window and title, so its image may include the cover image of the video to be played in the playback window, the video's title, and the like.
As another example, assume the content shown enlarged in the thumbnail 532b of fig. 5c combines text and images, e.g., the application window is a microblog post with text and attached images opened by the user 510. The thumbnail 532b may then display text information 542 and picture information 543 simplified from the post's text and attached images. The text information 542 may include, for example, the first sentence or first paragraph of the post, all of the text of the first half of the post, and/or text sampled at intervals from the post; the picture information 543 may include one or more of the post's attached images. A simplification sketch follows.
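The following Python sketch illustrates one way to derive such thumbnail content. The inputs (header metadata, page text, image list) and the heuristics (keep roughly the first half of the text, keep only leading cover images) are assumptions that mirror the Word, video-page, and microblog examples above, not the embodiment's exact rules.

```python
def simplify_for_thumbnail(header: dict, text: str, images: list[str]) -> dict:
    thumb: dict = {
        # Header-file information 541: e.g. file name and modification time.
        "header": {k: v for k, v in header.items()
                   if k in ("file_name", "modified")}
    }
    if text:
        # Text information 542: keep roughly the first half of the page text.
        thumb["text"] = text[: max(1, len(text) // 2)]
    if images:
        # Picture information 543: keep only the leading (cover) images.
        thumb["images"] = images[:2]
    return thumb

thumb = simplify_for_thumbnail(
    {"file_name": "report.docx", "modified": "2020-03-12"},
    "Quarterly report. Revenue grew steadily over the period ...",
    [])
```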
The thumbnails shown in fig. 5b and 5c thus carry information obtained by simplifying the content of the application window, so that the user can readily identify what each thumbnail displays, improving the convenience of user operation.
Another possible illustrative example of the display screen of the virtual reality device 300 presenting virtual windows is described below in conjunction with fig. 5d. The display scenario shown in fig. 5d is likewise an example of a possible implementation of the image processing method 400 shown in fig. 4 and of the image processing methods shown in subsequent fig. 6 to 7; details already described in fig. 4 and fig. 5a to 5c are not repeated below.
The display scenario shown in fig. 5d is another scenario for the case where the number of displayed virtual windows 530 exceeds the predetermined threshold. As shown in fig. 5d, when more and more virtual windows 530 are opened, a global menu 550, for example a thumbnail wheel, may be provided in front of the user's field of view to spare the user 510 the trouble of turning around. Although the example shown in fig. 5d builds on the example shown in fig. 5b, those skilled in the art will understand that the menu 550 of fig. 5d may also be applied to the example shown in fig. 5a, without limitation.
As shown in fig. 5d, the menu 550 may include a thumbnail for each virtual window 530 tiled on the 360° ring around the user 510, with each thumbnail placed in the menu 550 according to the position of its virtual window. For example, in the menu 550 of fig. 5d, the thumbnail corresponding to virtual window 530a is thumbnail 551, the thumbnail corresponding to virtual window 530b is thumbnail 552, the thumbnail corresponding to virtual window 530c is thumbnail 553, and the thumbnail corresponding to virtual window 530d is thumbnail 554; the remaining thumbnails correspond to other virtual windows 530 not shown in fig. 5d.
In some embodiments, the image display device 230 may decide whether to generate the menu 550 based on the number of virtual windows 530 that need to be displayed to the user 510. For example, if the number of virtual windows 530 to be displayed is greater than a predetermined threshold, the image display device 230 may generate the menu 550; as illustrated in fig. 5d, the threshold may be, for example, 4, i.e., at most 4 virtual windows 530 are displayed in one field-of-view region 520.
In some other implementations, the image display device 230 may generate the menu 550 in response to a gesture operation or head movement of the user 510. As one example, the user 510 may cause the image display device 230 to generate the menu 550 by sliding several fingers downward simultaneously, or by lowering the head. As another example, a gesture operation or head movement of the user 510 may serve not only to instruct the image display device 230 to generate the menu 550 but also to control the image display device 230 to display it. For example, where the image display device 230 generates the menu 550 because the number of virtual windows 530 to be displayed exceeds the predetermined threshold, the user 510 may control the image display device 230 to display the menu 550 by looking down. Likewise, where the image display device 230 generates the menu 550 in response to head movement, the user 510 may trigger generation of the menu 550 by looking down, and the menu 550 may be displayed directly once generated.
Illustratively, as shown in fig. 5d, the menu 550 is not displayed while the user 510 looks straight ahead or upward; when the user 510 looks down, the menu 550 is displayed in front of the user 510. The user 510 can then switch the current field-of-view region 520 through gesture operations: with a specific selection action such as a click or a flick, the user 510 can rotate the menu 550, and the field-of-view region 520 in front of the user 510 synchronously shows the rotated view of the menu 550 and the virtual windows 530 within it. By operating the thumbnails in the menu 550, for example thumbnails 551-554, the user 510 can synchronously adjust which windows are displayed in the field-of-view region 520 in front of the user 510. A synchronization sketch follows.
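The Python sketch below illustrates the wheel/view synchronization just described: the wheel holds one thumbnail per virtual window at that window's ring angle, and rotating the wheel changes which windows fall inside the forward field of view. The class and method names are assumptions for illustration only.

```python
class ThumbnailWheel:
    def __init__(self, window_angles: dict[str, float]):
        self.window_angles = dict(window_angles)  # window id -> ring yaw (deg)
        self.rotation = 0.0                       # current wheel rotation (deg)

    def rotate(self, degrees: float) -> None:
        """A flick/click gesture rotates the wheel (and hence the view)."""
        self.rotation = (self.rotation + degrees) % 360.0

    def visible_windows(self, fov_deg: float = 120.0) -> list[str]:
        """Window ids whose rotated yaw falls inside the forward field of view."""
        half = fov_deg / 2.0

        def rel(angle: float) -> float:
            # Signed angle in (-180, 180] after applying the wheel rotation.
            return (angle + self.rotation + 180.0) % 360.0 - 180.0

        return [w for w, a in self.window_angles.items() if abs(rel(a)) <= half]

wheel = ThumbnailWheel({"530a": -30.0, "530b": 0.0, "530c": 30.0, "530d": 180.0})
wheel.rotate(180.0)             # bring the rear window around to the front
print(wheel.visible_windows())  # ['530d'] - the window that was behind the user
```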
According to this embodiment of the present application, the image display device can bring multiple application windows into the user's field of view through VR virtual display technology; by means of virtual windows together with techniques such as 360° ring-tiled expansion, the combination of thumbnails with virtual windows, and a global menu, it provides an efficient human-computer interaction interface, flattening multitask operation and improving efficiency.
FIG. 6 illustrates a flow diagram of an image processing method 600 according to some embodiments of the present application. In some embodiments, the image processing method 600 is implemented, for example, on an image display device, such as the image display device 230 shown in fig. 2, and the image processing method 600 may also be implemented on the virtual reality device 300 of fig. 3 as one illustrative example of the image display device 230. In some implementations, some or all of method 600 may be implemented on display controller 231 and/or display screen 232 as shown in fig. 2. In other embodiments, different components of the virtual reality device 300 as shown in FIG. 3 may implement different portions or all of the method 600.
For content not described in detail in the method embodiments below, reference may be made to the embodiments of the methods and example scenarios above; likewise, for content not described in those earlier embodiments, reference may be made to the method embodiments below. For example, the image processing method 600 shown in fig. 6 further elaborates the image processing method 400 shown in fig. 4 and the display scenario shown in fig. 5d, so what has already been described in the foregoing embodiments is only briefly touched on or omitted below.
For ease of understanding, the method 600 shown in FIG. 6 is described below in conjunction with the display scenario shown in FIG. 5 d.
As shown in fig. 6, the display controller 231 of the image display device 230, at block 601, determines whether the number of the plurality of virtual windows to be displayed exceeds a threshold associated with the field of view of the user. For this threshold, it may be assumed that the visual angle of the user 510 is approximately 120° and that the field of view of the user 510 corresponds to that 120° range; under this condition, the threshold may be the maximum number of virtual windows 530 that can be accommodated in the field-of-view region 520.
If the number of the plurality of virtual windows to be displayed exceeds the threshold associated with the user's field of view, then at block 602, simplified image processing is performed on the corresponding virtual windows among the plurality of virtual windows to obtain the image content of each of a plurality of thumbnails.
In some embodiments, for a virtual window to be displayed as a thumbnail, the display controller 231 extracts the header-file information of the application window added to that virtual window and places at least part of that information in the virtual window. The display controller 231 then simplifies the cached pixel information of the application window, for example by removing the lower half of the window, collecting header information, sampling every other row (interlacing), or extracting pixel information for picture recognition. The display controller 231 places the simplified information into the virtual window, where it serves to help the user identify the window. A pixel-level sketch follows.
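The following Python sketch illustrates the pixel-level simplification, modeling the cached application window as a list of row buffers. "Remove the lower half, then keep every other row" is one combination of the options named above; the data layout is an assumption, not the controller's actual cache format.

```python
def simplify_pixels(rows: list[bytes]) -> list[bytes]:
    upper_half = rows[: len(rows) // 2]  # drop the lower half of the window
    return upper_half[::2]               # interlace: keep every other row

rows = [bytes([i]) * 4 for i in range(8)]    # 8 rows of dummy pixels
assert simplify_pixels(rows) == [rows[0], rows[2]]
```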
At block 603, a menu is generated that includes a plurality of thumbnails of the plurality of virtual windows, the virtual windows corresponding one-to-one to the thumbnails. As described in the foregoing embodiments, since the number of virtual windows to be displayed exceeds the threshold, the display controller 231 may generate a corresponding thumbnail for each virtual window to be displayed and build a menu containing those thumbnails; the menu may be the menu 550 shown in fig. 5d, the details of which are not repeated here.
At block 604, the display screen is controlled to display the menu. The display controller 231 controls the display screen 232 to display the menu containing the plurality of thumbnails in response to a gesture operation or head movement of the user.
At block 605, one thumbnail is selected from the plurality of thumbnails in the menu according to an instruction of the user. For example, referring to fig. 5d, the user may select a thumbnail from the menu 550, such as thumbnail 552, with a gesture tap.
Subsequently, at block 606, the display controller 231 determines whether the virtual window corresponding to the selected thumbnail includes a plurality of application windows of the same type. As an example, if the display controller 231 determines that the virtual window 530b corresponding to the thumbnail 552 includes a plurality of application windows of the same type, as shown in fig. 5d, then at block 607 one of those application windows is displayed along with thumbnails of the other application windows among them; for example, the first or currently open application window in virtual window 530b is displayed, and the other application windows are displayed as thumbnails.
If not, then at block 608 the virtual window corresponding to the selected thumbnail among the plurality of virtual windows is displayed directly; for example, if virtual window 530b is as shown in fig. 5a, the virtual window 530b corresponding to thumbnail 552 is displayed directly. A selection-flow sketch is given below.
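The Python sketch below illustrates the selection flow of blocks 605-608, reusing the aggregated layout shape from the grouping sketch above (a "front" window plus "thumbnails" of later same-type windows); all names are illustrative assumptions.

```python
def open_from_menu(selected_app: str, layout: dict[str, dict]) -> dict:
    group = layout[selected_app]
    if group["thumbnails"]:                 # block 606: several same-type windows?
        return {"front": group["front"],            # block 607: one full window...
                "thumbnails": group["thumbnails"]}  # ...plus sibling thumbnails
    return {"front": group["front"], "thumbnails": []}  # block 608: show directly

layout = {"PowerPoint": {"front": "p1", "thumbnails": ["p2", "p3"]},
          "Word": {"front": "w1", "thumbnails": []}}
print(open_from_menu("PowerPoint", layout))  # full p1 with p2, p3 as thumbnails
print(open_from_menu("Word", layout))        # w1 displayed directly
```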
According to this embodiment of the present application, the image display device can bring multiple application windows into the user's field of view through VR virtual display technology; by combining thumbnails with virtual windows and providing a global menu, it offers an efficient human-computer interaction interface, flattening multitask operation and improving efficiency.
Another image processing method of the image display apparatus 230 is described below with reference to fig. 7.
Fig. 7 shows a schematic flow diagram of an image processing method 700 according to another embodiment of the present application. In some embodiments, the image processing method 700 is implemented, for example, on an image display device, such as the image display device 230 shown in fig. 2, and the image processing method 700 may also be implemented on the virtual reality device 300 of fig. 3 as one illustrative example of the image display device 230. In some embodiments, some or all of method 700 may be implemented on display controller 231 and/or display screen 232 as shown in fig. 2. In other embodiments, different components of virtual reality device 300 as shown in FIG. 3 may implement different portions or all of method 700.
For content not described in detail in this method embodiment, reference may be made to the embodiments of the methods and example scenarios above, and vice versa; what has already been described in the foregoing embodiments is only briefly touched on or omitted below.
As shown in fig. 7, the contents of blocks 701-703 and 706-709 are similar to blocks 601-603 and 605-608 and are not repeated here. Method 700 differs in that the display controller 231 may, at block 704, determine whether the user is looking down at the display screen 232 of the image display device 230, in order to decide whether to display the menu. If the user lowers the head and looks down, then at block 705 the display screen 232 displays the menu. For example, as shown in fig. 5d, the menu 550 is not displayed while the user 510 looks straight ahead or upward, and is displayed in front of the user 510 when the user 510 looks down. A look-down check sketch follows.
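The following Python sketch illustrates the look-down check of block 704, assuming the headset reports a head pitch angle in degrees (negative meaning looking down). The -30° threshold is an assumption for illustration; the embodiment does not specify one.

```python
LOOK_DOWN_PITCH_DEG = -30.0  # assumed threshold, not from the embodiment

def should_show_menu(head_pitch_deg: float) -> bool:
    """True when the user's head pitch indicates a look-down gesture."""
    return head_pitch_deg <= LOOK_DOWN_PITCH_DEG

assert should_show_menu(-45.0)      # looking down: display the menu 550
assert not should_show_menu(5.0)    # looking straight ahead or up: keep hidden
```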
The method embodiments of the present application may be implemented in software, hardware, firmware, or the like.
Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices in a known manner. For purposes of this application, a processing system includes any system having a processor such as, for example, a Digital Signal Processor (DSP), a microcontroller, an Application Specific Integrated Circuit (ASIC), or a microprocessor.
The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code can also be implemented in assembly or machine language, if desired. Indeed, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.
One or more aspects of at least one embodiment may be implemented by representative instructions stored on a computer-readable storage medium, which represent various logic in a processor and which, when read by a machine, cause the machine to fabricate logic to perform the techniques described herein. These representations, known as "IP cores," may be stored on a tangible computer-readable storage medium and provided to a number of customers or manufacturing facilities to load into the manufacturing machines that actually make the logic or processor.
In some cases, an instruction converter may be used to convert instructions from a source instruction set to a target instruction set. For example, the instruction converter may transform (e.g., using a static binary transform, a dynamic binary transform including dynamic compilation), morph, emulate, or otherwise convert the instruction into one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on the processor, off-processor, or partially on and partially off-processor.
In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. For example, the instructions may be distributed via a network or other computer-readable medium. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or tangible machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Thus, a machine-readable medium includes any type of machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
In the drawings, some features of the structures or methods are shown in a particular arrangement and/or order. However, it is to be understood that such specific arrangement and/or ordering may not be required. In some embodiments, these features may be arranged in a manner and/or order different from that shown in the illustrative figures. Additionally, the inclusion of structural or methodical features in a particular figure is not meant to imply that such features are required in all embodiments, and in some embodiments, these features may not be included or may be combined with other features.
It is to be understood that, although the terms first, second, etc. may be used herein to describe various elements or data, these elements or data should not be limited by these terms. These terms are used merely to distinguish one feature from another. For example, a first feature may be termed a second feature, and, similarly, a second feature may be termed a first feature, without departing from the scope of example embodiments.
The above description is only a specific implementation of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any changes or substitutions within the technical scope disclosed in the embodiments of the present application should be covered by the scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (26)

1. An image processing method for an image source device, comprising:
drawing a plurality of application windows according to a plurality of applications, wherein the plurality of application windows correspond one-to-one to the plurality of applications;
encoding the plurality of application windows and transmitting the encoded plurality of application windows to an image display device, wherein the encoded plurality of application windows are not composited within one image display frame.
2. The method of claim 1, wherein the image source device comprises at least one of a user device and a server, and the image display device comprises a virtual reality display device.
3. An image processing method for an image display apparatus, comprising:
receiving and decoding an encoded plurality of application windows from an image source device to obtain the decoded plurality of application windows, wherein the plurality of application windows correspond to a plurality of applications one-to-one;
creating at least one virtual window;
adding each application window, according to the application category corresponding to each of the plurality of application windows, into the virtual window of the at least one virtual window that corresponds to that application type; and
compositing the at least one virtual window into an image display frame and displaying the image display frame.
4. The method of claim 3, further comprising:
in response to a user clicking a first application window of a first virtual window of the at least one virtual window through a display screen of the image display device, controlling the display screen to display contents of the clicked first application window in the first virtual window and simultaneously display thumbnails of other application windows included in the first virtual window.
5. The method of any one of claims 3-4, further comprising:
controlling the display screen to display the content of the first application window in a second virtual window of the at least one virtual window in response to a user dragging the first application window in the first virtual window to the second virtual window through a display screen of the image display device.
6. The method of any one of claims 3-5, further comprising:
in response to a user gesture of sliding left or right on a display screen of the image display device, displaying the at least one virtual window correspondingly sliding left or right.
7. The method of any of claims 3-6, wherein the image display device is a virtual reality display device.
8. An image processing method for an image display apparatus, comprising:
generating a menu containing a plurality of thumbnails of a plurality of virtual windows in response to at least one of: a number of virtual windows to be displayed exceeding a threshold, and a user looking down at a display screen of the image display device, wherein the plurality of virtual windows are in one-to-one correspondence with the plurality of thumbnails, and wherein each of the plurality of virtual windows includes at least one application window of a same application type, and the at least one application window is in one-to-one correspondence with at least one application; and
controlling the display screen to display the menu.
9. The image processing method according to claim 8, wherein the menu includes a carousel composed of the plurality of thumbnails.
10. The image processing method of any of claims 8-9, wherein the threshold is related to a field of view of the user.
11. The image processing method according to any one of claims 8 to 10, wherein each of the plurality of thumbnails includes identification information of the corresponding virtual window of the plurality of virtual windows.
12. The image processing method according to any one of claims 8 to 11, wherein each of the plurality of thumbnails includes image content obtained by subjecting a corresponding virtual window of the plurality of virtual windows to simplified image processing, wherein the simplified image processing includes at least one of interlacing the corresponding virtual window and removing a partial content of the corresponding virtual window.
13. The image processing method according to any one of claims 8 to 12, further comprising:
according to an instruction of the user, selecting one thumbnail from the plurality of thumbnails in the menu and displaying, among the plurality of virtual windows, the virtual window corresponding to the selected thumbnail,
wherein the instruction includes at least one of a swipe of the menu by the user and a click of the selected thumbnail.
14. The image processing method according to claim 13, wherein in a case where the virtual window corresponding to the selected thumbnail includes a plurality of application windows of the same type, said displaying the corresponding virtual window further comprises:
displaying, in the corresponding virtual window, one application window of the plurality of application windows of the same type and thumbnails of the other application windows of the plurality of application windows of the same type.
15. The image processing method of any of claims 8-14, wherein the image display device comprises a virtual reality display device.
16. An image display apparatus, comprising: a display controller and a display screen, wherein
the display controller is configured to: receive a plurality of encoded application windows from an image source device and decode them to obtain a plurality of decoded application windows, wherein the plurality of application windows correspond one-to-one to a plurality of applications; create at least one virtual window; add each application window, according to the application category corresponding to each of the plurality of application windows, into the virtual window of the at least one virtual window that corresponds to that application type; and composite the at least one virtual window into an image display frame; and
the display screen is used for displaying the image display frame.
17. An image display apparatus, comprising: a display controller and a display screen, wherein
the display controller is configured to: generate a menu including a plurality of thumbnails of a plurality of virtual windows when at least one of the following holds: a number of virtual windows to be displayed exceeds a threshold, or a user looks down at the display screen, wherein the plurality of virtual windows correspond one-to-one to the plurality of thumbnails, and wherein each of the plurality of virtual windows includes at least one application window of a same application type, and the at least one application window corresponds one-to-one to at least one application; and
control the display screen to display the menu.
18. The image display device of claim 17, wherein the menu comprises a carousel of the plurality of thumbnails.
19. The image display device of any of claims 17-18, wherein the threshold is related to a field of view of the user.
20. The image display device according to any one of claims 17 to 19, wherein each of the plurality of thumbnails includes identification information of the virtual window of the plurality of virtual windows corresponding to that thumbnail.
21. The image display device according to any one of claims 17 to 20, wherein each of the plurality of thumbnails includes image content obtained by performing reduced image processing on a corresponding virtual window of the plurality of virtual windows, wherein the reduced image processing includes at least one of interlacing the corresponding virtual window and removing a partial content of the corresponding virtual window.
22. The image display device according to any one of claims 17 to 21, wherein:
the display controller is further configured to select one thumbnail from a plurality of thumbnails in the menu and display a virtual window corresponding to the selected thumbnail among the plurality of virtual windows according to the user's instruction,
wherein the instruction includes at least one of a swipe of the menu by the user and a click of the selected thumbnail.
23. The image display device according to claim 22, wherein in a case where the virtual window corresponding to the selected thumbnail includes a plurality of application windows of the same type, the displaying the corresponding virtual window further comprises:
displaying, in the corresponding virtual window, one application window of the plurality of application windows of the same type and thumbnails of the other application windows of the plurality of application windows of the same type.
24. The image display device of any of claims 17-23, wherein the image display device comprises a virtual reality display device.
25. A computer-readable storage medium having instructions stored thereon, which when executed on a computer cause the computer to perform the method of any one of claims 1-15.
26. An electronic device, comprising:
a memory for storing instructions for execution by one or more processors of the electronic device; and
a processor for executing the instructions in the memory to perform the method of any one of claims 1-15.
CN202010169749.4A 2020-03-12 2020-03-12 Image processing method, image display device, storage medium, and electronic device Pending CN113391734A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010169749.4A CN113391734A (en) 2020-03-12 2020-03-12 Image processing method, image display device, storage medium, and electronic device
PCT/CN2021/080281 WO2021180183A1 (en) 2020-03-12 2021-03-11 Image processing method, image display device, storage medium, and electronic device

Publications (1)

Publication Number Publication Date
CN113391734A 2021-09-14

Family

ID=77615740

Country Status (2)

Country Link
CN (1) CN113391734A (en)
WO (1) WO2021180183A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115129444A (en) * 2022-06-10 2022-09-30 北京凌宇智控科技有限公司 Application program display method and device and computer readable storage medium
CN116107479B (en) * 2023-03-02 2024-02-13 优视科技有限公司 Picture display method, electronic device and computer storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105829994A (en) * 2013-12-21 2016-08-03 奥迪股份公司 Device and method for navigating within a menu for controlling a vehicle, and selecting a menu entry from the menu
CN105975146A (en) * 2016-04-26 2016-09-28 乐视控股(北京)有限公司 Content distribution display method and apparatus
US20160357358A1 (en) * 2015-06-07 2016-12-08 Apple Inc. Device, Method, and Graphical User Interface for Manipulating Application Windows
CN108924538A (en) * 2018-05-30 2018-11-30 太若科技(北京)有限公司 The screen expanding method of AR equipment
CN110347305A (en) * 2019-05-30 2019-10-18 华为技术有限公司 A kind of VR multi-display method and electronic equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105138231B (en) * 2015-10-20 2019-05-24 北京奇虎科技有限公司 Application program image target rendering method and device
KR102547321B1 (en) * 2018-06-01 2023-06-23 삼성전자주식회사 Image display device and operating method for the same
CN110381195A (en) * 2019-06-05 2019-10-25 华为技术有限公司 A kind of throwing screen display methods and electronic equipment

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116212361A (en) * 2021-12-06 2023-06-06 广州视享科技有限公司 Virtual object display method and device and head-mounted display device
CN116212361B (en) * 2021-12-06 2024-04-16 广州视享科技有限公司 Virtual object display method and device and head-mounted display device
CN114281221A (en) * 2021-12-27 2022-04-05 广州小鹏汽车科技有限公司 Control method and device for vehicle-mounted display screen, vehicle and storage medium
WO2024066754A1 (en) * 2022-09-29 2024-04-04 歌尔股份有限公司 Interaction control method and apparatus, and electronic device
CN116301482A (en) * 2023-05-23 2023-06-23 杭州灵伴科技有限公司 Window display method of 3D space and head-mounted display device
CN116301482B (en) * 2023-05-23 2023-09-19 杭州灵伴科技有限公司 Window display method of 3D space and head-mounted display device

Also Published As

Publication number Publication date
WO2021180183A1 (en) 2021-09-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination