WO2021180183A1 - Image processing method and image display device, storage medium and electronic device - Google Patents

Image processing method and image display device, storage medium and electronic device

Info

Publication number
WO2021180183A1
WO2021180183A1 (PCT/CN2021/080281; CN2021080281W)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
application
window
windows
image
Prior art date
Application number
PCT/CN2021/080281
Other languages
English (en)
French (fr)
Inventor
孙增才
蒋臣迪
孙兴阳
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2021180183A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 Head tracking input arrangements
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G06F3/16 Sound input; Sound output

Definitions

  • One or more embodiments of the present application relate generally to image processing in the field of virtual reality (VR) technology, and specifically to an image processing method, an image display device, a storage medium, and an electronic device.
  • The basic way to achieve virtual reality is to simulate a virtual environment through a computer so as to give people a sense of immersion in that environment. VR technology has made great progress and has gradually become a new field of science and technology, and the realization of a PC virtual desktop system through VR technology is a major focus of current research.
  • Some embodiments of the application provide an image processing method, an image display device, a storage medium, and an electronic device.
  • The following describes the application from multiple aspects; the implementations and beneficial effects of these aspects can be cross-referenced with each other.
  • The embodiments of the present application provide an image processing method for an image source device, including: the image source device draws, according to multiple applications opened by a user, the application windows of these applications, where the application windows correspond one-to-one to the applications; and the image source device encodes the multiple application windows and sends the encoded application windows to the image display device, where the encoded application windows that are sent are not combined into one image display frame in the image source device.
  • In this way, the image source device of the embodiments of the present application may send the encoded application windows directly to the image display device without executing an image integration process, thus solving the problem that an existing image source device must integrate all program windows into one frame of graphic output for display, which is inefficient and inconvenient for flat (side-by-side) operation of applications.
  • the image source device includes at least one of a user device and a server
  • the image display device includes a virtual reality display device.
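The per-window transmission described in this aspect can be sketched as follows. This is an illustrative Python sketch only, not part of the patent disclosure: the `AppWindow` structure and the `encode`/`send_windows` helpers are hypothetical stand-ins for a real codec and transport.

```python
from dataclasses import dataclass

@dataclass
class AppWindow:
    app_id: str    # the application this window belongs to (one-to-one)
    pixels: bytes  # rendered window content

def encode(window: AppWindow) -> bytes:
    """Hypothetical stand-in for a real image/video encoder."""
    return window.app_id.encode() + b"|" + window.pixels

def send_windows(windows, transmit):
    """Encode each application window separately and transmit it.

    The key point of the aspect above: the windows are NOT composited
    into a single display frame on the source device; each window is
    sent to the image display device as its own encoded unit.
    """
    for w in windows:
        transmit(encode(w))
    return len(windows)
```

Here `transmit` would be the network send primitive of whatever link (cable, 5G, Wi-Fi) connects the two devices.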
  • The embodiments of the present application provide an image processing method for an image display device, including: the image display device receives multiple encoded application windows from the image source device and decodes them to obtain the decoded application windows, where the application windows correspond one-to-one to multiple applications opened by the user on the image source device; the image display device creates at least one virtual window; according to the application category corresponding to each application window (for example, Weibo, Douyin, Word, and other applications), the image display device adds each application window to the virtual window corresponding to that application category among the created at least one virtual window; and the image display device combines the at least one virtual window into one image display frame and displays the frame on the display screen.
  • In this way, the implementation of the present application can display the application windows of multiple applications in different field-of-view areas of the three-dimensional virtual environment of virtual reality, and can aggregate the application windows by application category.
  • Because the application windows are aggregated by category, not only can more windows be displayed without any window being blocked, but the picture displayed in the field of view is also larger and clearer, which enhances the user's visual experience.
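The display-side grouping step can be sketched as below. This is an illustrative sketch under assumed data shapes (window ids mapped to content strings and category names), not the patent's actual implementation:

```python
from collections import defaultdict

def group_into_virtual_windows(app_windows, category_of):
    """Place each decoded application window into the virtual window
    for its application category (e.g. social, video, office).

    `app_windows` maps window id -> decoded content; `category_of`
    maps window id -> category name. Both mappings are illustrative.
    """
    virtual_windows = defaultdict(list)
    for win_id, content in app_windows.items():
        virtual_windows[category_of[win_id]].append((win_id, content))
    return dict(virtual_windows)

def compose_frame(virtual_windows):
    """Combine all virtual windows into one image display frame,
    modelled here as an ordered list of (category, windows) regions."""
    return sorted(virtual_windows.items())
```

Only this final `compose_frame` step assembles a single frame; the source device never does.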
  • The method further includes: in response to the user clicking, through the display screen of the image display device, a first application window in a first virtual window of the at least one virtual window, controlling the display screen to display the content of the clicked first application window in the first virtual window, and simultaneously displaying, in the first virtual window, thumbnails of the other application windows included in the first virtual window.
  • The method further includes: in response to the user dragging, through the display screen of the image display device, the first application window in the first virtual window to a second virtual window of the at least one virtual window, controlling the display screen to display the content of the first application window in the second virtual window.
  • The method further includes: in response to the user's gesture of sliding left and right on the display screen of the image display device, displaying the at least one virtual window sliding left and right accordingly.
  • the image display device is a virtual reality display device.
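The click and drag behaviors above can be sketched as simple state updates. The function names and data shapes here are hypothetical, chosen only to illustrate the described responses:

```python
def on_click(virtual_window, clicked_id):
    """Focus the clicked application window; the other windows in the
    same virtual window are shown as thumbnails alongside it."""
    return {
        "focused": clicked_id,
        "thumbnails": [w for w in virtual_window if w != clicked_id],
    }

def on_drag(virtual_windows, window_id, src, dst):
    """Move an application window from one virtual window to another,
    so its content is then displayed in the destination virtual window."""
    virtual_windows[src].remove(window_id)
    virtual_windows[dst].append(window_id)
    return virtual_windows
```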
  • The embodiments of the present application provide an image processing method for an image display device, including: when the number of virtual windows to be displayed exceeds a threshold and/or the user looks down at the display screen of the image display device, generating a menu containing multiple thumbnails of multiple virtual windows, where the virtual windows correspond one-to-one to the thumbnails, each virtual window includes at least one application window of the same application type, and the at least one application window corresponds one-to-one to at least one application; and controlling the display screen to display the menu.
  • In this way, when there are a large number of virtual windows and/or in response to the corresponding head movement of the user, the implementation of the present application provides the user with a convenient way to control the virtual windows: by combining thumbnails with virtual windows and arranging a global menu, it realizes an efficient human-computer interaction interface, thereby enabling multi-task flat operation for the user and improving efficiency.
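The menu trigger condition ("at least one of" the threshold and the look-down gesture) and the one-thumbnail-per-virtual-window menu can be sketched as follows; this is an assumed model, not the patent's implementation:

```python
def should_show_menu(num_virtual_windows, user_looking_down, threshold):
    """The menu is triggered when at least one condition holds:
    the virtual-window count exceeds the threshold, or the user
    looks down at the display screen."""
    return num_virtual_windows > threshold or user_looking_down

def build_menu(virtual_window_ids):
    """One thumbnail per virtual window; each entry carries the
    identification info of its window, arranged as an ordered dial."""
    return [{"id": vw_id} for vw_id in virtual_window_ids]
```

Since the threshold is said to be related to the user's field of view, `threshold` would in practice be derived from the headset's field-of-view parameters.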
  • the menu includes a dial composed of multiple thumbnails.
  • the threshold is related to the user's field of view.
  • Each thumbnail of the plurality of thumbnails includes identification information of the virtual window, among the plurality of virtual windows, that corresponds to that thumbnail.
  • Each thumbnail of the plurality of thumbnails includes image content obtained by performing simplified image processing on the corresponding virtual window among the plurality of virtual windows, where the simplified image processing includes at least one of interlacing the corresponding virtual window and removing part of the content of the corresponding virtual window.
  • In this way, because each thumbnail includes the simplified content of its application windows, the content displayed in each thumbnail can be easily identified, which improves the convenience of the user's operation.
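The two simplification operations named above, interlacing and partial content removal, can be sketched over a toy row-based image model (the row lists and region names are purely illustrative):

```python
def interlace(rows):
    """Keep every other scan line of the window image."""
    return rows[::2]

def remove_detail(rows, keep):
    """Drop content regions that are not in the kept set."""
    return [r for r in rows if r in keep]

def make_thumbnail(vw_id, rows):
    """A thumbnail combines the virtual window's identification
    information with its simplified (here, interlaced) image content."""
    return {"id": vw_id, "content": interlace(rows)}
```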
  • The method further includes: according to the user's instruction, selecting a thumbnail from the multiple thumbnails in the menu and displaying, among the multiple virtual windows, the virtual window corresponding to the selected thumbnail, where the instruction includes at least one of the user sliding the menu and the user clicking the selected thumbnail.
  • Displaying the corresponding virtual window further includes: displaying, in the corresponding virtual window, one application window among the multiple application windows of the same type together with thumbnails of the other application windows of the same type.
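The slide-or-click selection and the resulting display state can be sketched as below; the instruction dictionary and dial model are assumptions made for illustration, not the disclosed implementation:

```python
def select_thumbnail(menu, instruction):
    """`menu` is an ordered list of virtual-window ids (the dial).
    A 'slide' instruction rotates the dial by an offset; a 'click'
    instruction picks a thumbnail's window id directly."""
    if instruction["type"] == "slide":
        offset = instruction["offset"] % len(menu)
        return menu[offset]
    return instruction["target"]

def display_selected(virtual_windows, vw_id):
    """Show one application window of the selected virtual window
    full-size, with the remaining same-type windows as thumbnails."""
    windows = virtual_windows[vw_id]
    return {"full": windows[0], "thumbnails": windows[1:]}
```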
  • The embodiments of the present application provide an image display device, including a display controller and a display screen. The display controller is configured to: receive multiple encoded application windows from the image source device and decode them to obtain the decoded application windows, where the application windows correspond one-to-one to multiple applications; create at least one virtual window; add each application window, according to the application category corresponding to it, to the virtual window corresponding to that application type among the at least one virtual window; and combine the at least one virtual window into one image display frame. The display screen is used to display the image display frame.
  • In this way, the embodiments of the present application can display the application windows of multiple applications in different field-of-view areas of the three-dimensional virtual environment of virtual reality, and can aggregate the application windows by application category.
  • Because the application windows are aggregated by category, not only can more windows be displayed without any window being blocked, but the picture displayed in the field of view is also larger and clearer, which enhances the user's visual experience.
  • The display controller is further configured to: in response to the user clicking, through the display screen of the image display device, a first application window in a first virtual window of the at least one virtual window, control the display screen to display the content of the clicked first application window in the first virtual window, and simultaneously display, in the first virtual window, thumbnails of the other application windows included in the first virtual window.
  • The display controller is further configured to: in response to the user dragging, through the display screen of the image display device, the first application window in the first virtual window to a second virtual window of the at least one virtual window, control the display screen to display the content of the first application window in the second virtual window.
  • The display controller is further configured to: in response to the user's gesture of sliding left and right on the display screen of the image display device, display the at least one virtual window sliding left and right accordingly.
  • the image display device is a virtual reality display device.
  • The embodiments of the present application provide an image display device, including a display controller and a display screen. The display controller is configured to: when detecting that the number of multiple virtual windows to be displayed exceeds a threshold and/or that the user looks down at the display screen, generate a menu containing multiple thumbnails of the multiple virtual windows, where the virtual windows correspond one-to-one to the thumbnails, each virtual window includes at least one application window of the same application type, and the at least one application window corresponds one-to-one to at least one application; and control the display screen to display the menu.
  • In this way, when there are a large number of virtual windows and/or in response to the corresponding head movement of the user, the implementations of the present application provide the user with a convenient way to control the virtual windows: by combining thumbnails with virtual windows and arranging a global menu, they realize an efficient human-computer interaction interface, thereby enabling multi-task flat operation for the user and improving efficiency.
  • the menu includes a dial composed of multiple thumbnails.
  • the threshold is related to the user's field of view.
  • Each thumbnail of the plurality of thumbnails includes identification information of the virtual window, among the plurality of virtual windows, that corresponds to that thumbnail.
  • Each thumbnail of the plurality of thumbnails includes image content obtained by performing simplified image processing on the corresponding virtual window among the plurality of virtual windows, where the simplified image processing includes at least one of interlacing the corresponding virtual window and removing part of the content of the corresponding virtual window.
  • In this way, because each thumbnail includes the simplified content of its application windows, the content displayed in each thumbnail can be easily identified, which improves the convenience of the user's operation.
  • The display controller is further configured to: according to the user's instruction, select a thumbnail from the multiple thumbnails in the menu and display, among the multiple virtual windows, the virtual window corresponding to the selected thumbnail.
  • Displaying the corresponding virtual window further includes: displaying, in the corresponding virtual window, one application window among the multiple application windows of the same type together with thumbnails of the other application windows of the same type.
  • the image display device includes a virtual reality display device.
  • the present application provides a computer-readable storage medium, which may be non-volatile.
  • the storage medium contains instructions that, after being executed, implement the method described in any one of the foregoing aspects or implementation manners.
  • The present application provides an electronic device, including a memory configured to store instructions executed by one or more processors of the electronic device, and a processor configured to execute the instructions in the memory so as to perform the method described in any one of the foregoing aspects or implementations.
  • Figure 1 shows a schematic diagram of the basic display process of an existing desktop display system.
  • Fig. 2 shows a schematic diagram of an example image processing system according to an embodiment of the present application.
  • Fig. 3 shows a schematic diagram of an example image display device according to an embodiment of the present application.
  • Fig. 4 shows an interactive process of an image source device and an image display device according to an embodiment of the present application in executing the image processing method of the present application.
  • Figs. 5a-5d show schematic diagrams of graphical user interfaces presented by the display screen of an image display device according to an exemplary embodiment.
  • Fig. 6 shows a schematic diagram of an image processing method according to an embodiment of the present application.
  • Fig. 7 shows a schematic diagram of an image processing method according to another embodiment of the present application.
  • As used herein, a module or unit may refer to or include an application-specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functions, or may be part of such an ASIC, electronic circuit, processor and/or memory, combinational logic circuit, or other suitable component.
  • Figure 1 shows a schematic diagram of the basic display process of an existing desktop display system.
  • The traditional computer desktop system uses a host to project the relevant screen windows to a monitor for display, so as to facilitate human-computer interaction.
  • Because the desktop display is limited by the constraints of the monitor, multiple task windows must be packed, integrated, and optimized in the host before being transmitted to the monitor for unified display.
  • two applications 101 can be executed simultaneously or separately in order.
  • The main task of the Graphics Device Interface (GDI) 102 is to exchange information between drawing programs in the system domain and to handle the graphics and image output of all Windows programs.
  • With GDI, programmers need not care about hardware devices and device drivers: the output of the application program can be converted into output and composition on the hardware device. The multimedia programming interface 103 (for example, DirectX) enables games and multimedia programs on the Windows platform to obtain higher execution efficiency, strengthens 3D graphics and sound effects, and provides designers with a common hardware driver standard, so that developers do not have to write different drivers for each brand of hardware, isolating the application from the hardware.
  • Application 101 communicates with system graphics components through GDI and DirectX respectively, and drives hardware resources such as GPU through the Windows Display Driver Model (WDDM) 104, thus forming the graphics characteristics of the system .
  • WDDM 104 forms the desktop window of the application program by calling hardware resources such as GPU, such as the graphics surface 107 (for example, A and B), where the graphics surface is similar to each independent menu window displayed on the monitor desktop. Then, the graphic surface A and the graphic surface B are integrated by the Desktop Window Manager (DWM) 108 to form a unified desktop display graphic, and are temporarily stored in the GPU memory 109.
  • DWM: Desktop Window Manager
  • VGA: Video Graphics Array
  • HDMI: High-Definition Multimedia Interface
  • DP: DisplayPort
  • DVI: Digital Visual Interface
  • The technical solution of the present application aims to realize a virtual desktop system through VR technology, releasing the shackles of monitor-based projection display by exploiting people's broad field of vision.
  • Through VR virtual desktop technology, multiple task menu windows can be placed into a person's field of vision and spread out, thereby realizing multi-task flat operation and improving efficiency.
  • Fig. 2 shows a schematic diagram of an example image processing system according to an embodiment of the present application.
  • the image processing system 20 may include an image source device 210, a network 220, and an image display device 230.
  • the image display device 230 also includes a display controller 231 and a display screen 232.
  • the image source device 210 may include a user equipment 212 and a server 211.
  • the user equipment 212 may include, but is not limited to, a desktop computer, a laptop computer device, a tablet computer, a smart phone, an in-vehicle infotainment device, a streaming media client device, and various other electronic devices for image processing.
  • the server 211 may include service equipment of a cloud infrastructure provider or a data center service provider, which may provide all services such as computing, rendering, storage, encoding and decoding.
  • the network 220 may include various transmission media for data communication between the image source device 210 and the image display device 230, such as a wired or wireless data transmission link.
  • the wired data transmission link may include a VGA cable or an HDMI cable.
  • wireless data transmission links can include the Internet, local area network, mobile communication network and their combination.
  • The image source device 210 may partially generate the image data to be sent to the image display device 230 according to the existing desktop display process shown in FIG. 1, but in various embodiments of the present application, the image source device 210 does not execute the image integration process: for the Windows system, the DWM 108 is removed; for the Android system, the SurfaceFlinger image integration module is removed; and for the Linux system, the Wayland Compositor image integration module is removed.
  • The following description of the image generation process performed by the image source device 210 will focus on the differences from the prior art; parts that are the same as or similar to the prior art will be described only briefly.
  • The image display device 230 may include a wearable device (for example, a head-mounted display (Head-Mounted Display, HMD)), a virtual reality (Virtual Reality, VR) device, and/or an augmented reality (Augmented Reality, AR) device, etc.
  • the head-mounted virtual reality device in the virtual reality display device is a type of wearable device, and is also called a virtual reality helmet, VR glasses, or glasses-type display.
  • the display controller 231 is used to execute the image processing method provided in the embodiment of the present application, and the display screen 232 is used to display the image processed by the display controller 231 to the user.
  • the display controller 231 and the display screen 232 will be further described later with reference to FIG. 3.
  • the user may use a user device 212 such as a smart phone, and use a data cable to connect the smart phone and an image display device 230 such as a head-mounted virtual reality device.
  • The user equipment 212 or the server 211 may be communicatively connected with an image display device 230, such as a head-mounted virtual reality device, based on a fifth-generation mobile communication technology (5G) network.
  • With the deployment of 5G networks, end users are provided with very large access bandwidth; the data transmission rate, up to 10 Gbit/s, is much higher than that of previous cellular networks, faster than current wired Internet transmission speeds and up to 100 times faster than the current 4G LTE cellular network.
  • Another advantage of the 5G network is lower network latency (faster response time), generally less than 1 millisecond, which lays the foundation for bandwidth access in the VR cloud.
  • When the server 211 is a cloud infrastructure and data center service device, users can conveniently access, anytime and anywhere, an on-demand shared pool of configurable computing resources (for example, networks, servers, storage, applications, and services) via the network; the resources in the pool can be quickly allocated and released, requiring only a small amount of management workload and interaction with the service provider.
  • a large number of terminal users access the network through 5G, and the server 211 provides all services such as computing, rendering, storage, encoding and decoding.
  • Third-party content providers provide all content services, which allows user terminals to be lightweight and convenient, without the need for strong hardware computing power.
  • Each terminal user only needs a login account and a receiving display terminal such as the image display device 230.
  • The deployment of 5G networks, with their ultra-large bandwidth and ultra-low latency, can ensure the availability of cloud-rendered VR images and can further promote the application of VR virtual desktops.
  • the user equipment 212 or the server 211 may be interconnected with the image display device 230 based on other communication networks.
  • The communication network may include, for example, short-distance communication networks such as a Wi-Fi hotspot network, a Wi-Fi P2P network, a Bluetooth network, a ZigBee network, or a near-field communication (NFC) network, and/or a public land mobile network (PLMN, including its future evolutions) or the Internet, etc.
  • FIG. 3 shows an exemplary schematic diagram of a head-mounted virtual reality device 300.
  • The head-mounted virtual reality device 300 may include a processing unit 310, a storage module 320, a transceiver module 330, a video rendering module 340, a left display screen 342, a right display screen 344, an audio codec module 350, a microphone 352, a speaker 354, a head motion capture module 360, a gesture motion recognition module 370, an optical device 380, and so on.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the head-mounted virtual reality device 300.
  • the head-mounted virtual reality device 300 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processing unit 310 may include one or more processing units.
  • The processing unit 310 may include an application processor (AP), a modem processor, a controller, a digital signal processor (DSP), and/or a baseband processor, etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching instructions and executing instructions.
  • the processing unit 310 may also be provided with a memory for storing instructions and data.
  • the memory in the processing unit 310 is a cache memory.
  • The memory can store instructions or data that the processing unit 310 has just used or uses cyclically. If the processing unit 310 needs the instruction or data again, it can be fetched directly from this memory, which avoids repeated accesses, reduces the waiting time of the processing unit 310, and improves system efficiency.
  • the storage module 320 may be used to store computer executable program code, where the executable program code includes instructions.
  • the storage module 320 may include a storage program area and a storage data area. Among them, the storage program area can store an operating system, an application program required by at least one function, and the like.
  • the data storage area of the storage module 320 can store data created during the use of the virtual reality device 300 (for example, thumbnails of applications) and the like.
  • the storage module 320 may store at least a part of one or more image processing methods provided in the embodiments of the present application. It can be understood that when the code of the image processing method stored in the storage module 320 is executed by the processing unit 310, application interfaces of multiple applications can be displayed on the virtual display screen in the virtual environment of the three-dimensional space.
  • the storage module 320 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the processing unit 310 executes various functions and data processing of the virtual reality device 300 by running instructions stored in the storage module 320 and/or instructions stored in a memory provided in the processing unit 310.
  • the transceiving module 330 can receive the data to be sent from the processing unit 310 and then send it to the image source device 210; at the same time, the transceiving module 330 can also receive data from the image source device 210, for example, image data frames.
  • the video rendering module 340 may include a graphics processing unit (GPU), an image signal processor (ISP), a video codec, and the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the video rendering module 340 may be provided in the processing unit 310, or part of the functional modules of the video rendering module 340 may be provided in the processing unit 310.
  • the processing unit 310 may integrate a CPU and a GPU.
  • the CPU and GPU can cooperate to execute the image processing method provided in the embodiments of the present application. For example, in the image processing method, part of the algorithm is executed by the CPU and the other part is executed by the GPU to obtain faster processing efficiency.
  • the left display screen 342 and the right display screen 344 respectively display images to the left and right eyes of the user, where the images can be specifically represented as static images or video frames in videos.
  • Both the left display screen 342 and the right display screen 344 include display panels.
  • the display panel can adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the optical device 380 may include lenses such as a zoom lens. The optical device 380 adjusts the light emitted by the display screen during image display to compensate for the refractive error of users with myopia, hyperopia, or astigmatism, so that the user can clearly see the image on the display.
  • the audio codec module 350 is used for converting digital audio information into an analog audio signal for output, and is also used for converting an analog audio input into a digital audio signal.
  • the audio codec module 350 can also be used to encode and decode audio signals.
  • the audio codec module 350 may be disposed in the processing unit 310, or part of the functional modules of the audio codec module 350 may be disposed in the processing unit 310.
  • the microphone 352 is used to input the user's analog audio to the audio codec module 350, and the speaker 354 is used to play the analog audio signal output by the audio codec module 350 to the user.
  • the head motion capture module 360 can capture and recognize the user's head motion, for example, recognize whether the user bows or raises his head, and the rotation angle of the user's head.
  • the gesture recognition module 370 can recognize the user's operation gestures, such as sliding left and right, sliding up and down, and clicking to confirm.
  • when the user uses the virtual reality device 300, the transceiver module 330 receives the video or image information transmitted from the image source device 210, such as a PC, and then sends it to the processing unit 310 for processing.
  • after the processing unit 310 processes the relevant video environment or graphics window information, the result is rendered onto the left and right display screens (342, 344) through the video rendering module 340 to form a virtual display environment or a virtual desktop display system.
  • the audio codec module 350 decodes the audio information and transmits it to the speaker 354 for sound, so as to provide a more immersive video experience of VR.
  • the microphone 352 can also capture human voice information to provide audio interaction between the VR system and the PC.
  • when the user bows his or her head, the head motion capture module 360 captures this movement and triggers the corresponding operation of the virtual desktop display system, such as opening and displaying the global thumbnail; when the user starts raising his or her head or looking up, closing of the global thumbnail is triggered.
  • the head motion capture module 360 can also capture head rotation and other motions to trigger synchronously following and display the window screen in front of the person's field of view.
  • the gesture recognition module 370 recognizes the user's gestures, such as sliding left and right, sliding up and down, and clicking to confirm, so as to facilitate interactive operations between the user and the virtual desktop system.
  • the display controller 231 of the image display device 230 shown in FIG. 2 may be implemented in software, in hardware, or in a combination of software and hardware.
  • the display controller 231 of the image display device 230 may include, for example, a processing unit 310, a storage module 320, a transceiver module 330, and a video rendering module 340.
  • the display screen 232 of the image display device 230 may include a left display screen 342, a right display screen 344, and an optical device 380.
  • the image display device 230 may not include one or more parts shown in FIG. 3, or may include other parts not shown in FIG. 3.
  • FIG. 4 shows the interactive process of the image source device 210 and the image display device 230 executing the image processing method of the present application.
  • Fig. 4 does not show some of the known processes mentioned in connection with Fig. 1, including, for example: one or more applications of the image source device 210 start and apply to create application windows, and the processor of the image source device 210 executes the application program flow and completes resource allocation.
  • the application can call 2D or 3D image program interfaces such as OpenGL and DirectX.
  • when the image source device 210 is a server in the cloud, the image display device 230 logs in to the cloud server to request virtual reality services.
  • the image source device 210 can allocate device resources for the virtual reality service to the image display device 230 in the cloud, such as CPU, GPU, and so on.
  • the multiple application windows correspond one-to-one to the multiple applications.
  • the image source device 210 may use the GPU to process the drawing graphics instructions of each application to complete the drawing of the graphics application window of each application, and the drawn application windows are cached in memory, where the memory may be the cache of the GPU, the cache of the CPU, or other memory of the image source device 210; this is not specifically limited here.
  • the image source device 210 adds identification information to the pixel data of these application windows.
  • the image source device 210 may add a header file (Header) to the cache frame of the application window in the memory, and the header file may include metadata such as the application type, file name, creation time, and author of the application corresponding to the application window.
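As an illustrative sketch (not part of the patent) of how such a header might be attached to a cached window frame, the field names and the length-prefixed JSON encoding below are assumptions chosen for clarity:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class WindowHeader:
    """Illustrative header prepended to a cached application-window frame."""
    app_type: str       # e.g. "word", "weibo" -- used later for classification
    file_name: str
    creation_time: str
    author: str

def add_header(pixel_data: bytes, header: WindowHeader) -> bytes:
    """Serialize the header as JSON and place it, length-prefixed, ahead of the pixels."""
    blob = json.dumps(asdict(header)).encode("utf-8")
    return len(blob).to_bytes(4, "big") + blob + pixel_data

def read_header(frame: bytes) -> tuple:
    """Recover the header and the original pixel data from a tagged frame."""
    n = int.from_bytes(frame[:4], "big")
    header = WindowHeader(**json.loads(frame[4:4 + n].decode("utf-8")))
    return header, frame[4 + n:]
```

On the receiving side, `read_header` would let the image display device classify each decoded window by its `app_type` before adding it to a virtual window.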
  • a program such as a display driver of the image source device 210 may encode data of the application window in the memory.
  • after encoding, the image source device 210 sends the encoded multiple application windows to the image display device 230 at 404 in a wired or wireless manner.
  • the transmitted encoded application windows are not synthesized into one image display frame for display to the user at the image source device.
  • the virtual reality device 300 is taken as an example of the image display device 230 to illustrate the subsequent process.
  • the known process of rendering the virtual display environment after the virtual reality device 300 is started is not specifically shown; for example, after the virtual reality device 300 is started, the application of the user interface (UI) system also starts, and the application applies for the corresponding application window.
  • the processing unit 310 allocates the corresponding resources, and the application calls the 2D/3D graphics program interface and processes the UI drawing instructions through the video rendering module 340 to complete the rendering of the virtual environment, forming a virtual desktop framework on the VR side.
  • the transceiving module 330 of the virtual reality device 300 receives the encoded data of the multiple application windows from the image source device 210.
  • after the virtual reality device 300 receives the data, at block 406 it decodes the data to obtain the decoded multiple application windows.
  • the virtual reality device 300 decodes the data of the application windows carrying identification information, and then classifies and stores the data of these application windows in the storage module 320 of the virtual reality device 300 according to the respective application types, for example, Weibo, Douyin, and so on.
  • the virtual reality device 300 creates a corresponding number of virtual windows according to the number of application windows cached in the storage module 320, and the virtual window may be an area in the virtual desktop frame for displaying application windows for the user.
  • each application window is added to the virtual window corresponding to the application type among the multiple virtual windows.
  • the virtual reality device 300 may add each application window stored in the storage module 320 to a virtual window.
  • the virtual reality device 300 checks the identification information of each application window that has been stored in the storage module 320, classifies each application window according to the type of the application corresponding to it, and adds each application window to a virtual window corresponding to that application type. For example, as more and more windows are opened, they can be classified and aggregated according to application category: the application windows of all opened Word documents are aggregated in one virtual window; the application windows of all opened PowerPoint documents are aggregated in another virtual window; and so on. Other examples of this part will be further explained below with reference to the scenes and the drawings.
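The classification step above can be sketched as a simple grouping by application type. The tuple layout and function name here are illustrative assumptions, not the patent's implementation:

```python
from collections import defaultdict

def group_windows_by_type(app_windows):
    """Map each application type to the list of its open application windows.

    `app_windows` is a list of (app_type, window_id) pairs recovered from the
    identification information of the decoded windows; each resulting group
    would then be displayed inside one virtual window.
    """
    virtual_windows = defaultdict(list)
    for app_type, window_id in app_windows:
        virtual_windows[app_type].append(window_id)
    return dict(virtual_windows)
```

With this grouping, all Word windows land in one virtual window and all PowerPoint windows in another, matching the aggregation described above.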
  • at block 409, at least one virtual window is synthesized into an image display frame and displayed.
  • the video rendering module 340 synthesizes all the windows in one image display frame, and generates the left and right views of the image display frame, and then displays them on the left display screen 342 and the right display screen 344.
  • the image processing method 400 shown in FIG. 4 will be further described below in conjunction with the scene and the drawings.
  • Fig. 5a shows a schematic example of a virtual window presented by a display screen of a virtual reality device 300 according to an exemplary embodiment.
  • the display scene shown in FIG. 5a is a further example of a possible implementation of the image processing method 400, and the content that has been described in FIG. 4 will not be described in detail below.
  • the user can use a virtual display area in a 360° three-dimensional space centered on the user.
  • an effective way of arranging the virtual windows is to arrange them in a tiled ring.
  • a plurality of virtual windows 530a-530n are arranged in a 360° circle in the user's field of view, and each of the virtual windows 530a-530n can clearly present the application content in the virtual window to the user 510.
  • when the number of displayed virtual windows 530 does not exceed a predetermined threshold, the virtual windows 530 are preferentially arranged in the field of view area 520 in front of the user 510.
  • assuming the visual angle of view of the user 510 is approximately 120 degrees, the field of view of the user 510 corresponds to this 120-degree view angle range. Under such conditions, the threshold number of displayed virtual windows 530 may be the maximum number of virtual windows 530 that can be accommodated in the field of view area 520, for example, as shown in FIG. 5a.
  • each of the newly added virtual windows 530 may be arranged one after another in a ring extending from both sides of the front view area 520 of the user 510 with the user as the center.
  • when the user 510 turns his or her head, the virtual windows 530 in different field of view areas 520 can be seen.
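One hedged sketch of such a ring arrangement: assign each virtual window a yaw angle, filling the front view area first and then alternating left and right around the circle. The 30-degree per-window width is an assumed value, not one given by the patent:

```python
def ring_yaw_angles(n_windows, window_width_deg=30.0):
    """Assign a yaw angle (degrees, 0 = straight ahead) to each virtual window.

    Windows are placed starting at the center of the user's view and then
    alternately to the right and left, so the earliest windows land inside the
    ~120-degree front field of view before the ring wraps around behind the user.
    """
    angles = []
    for i in range(n_windows):
        step = (i + 1) // 2          # 0, 1, 1, 2, 2, ... rings outward
        side = -1 if i % 2 else 1    # alternate right/left of center
        angles.append((side * step * window_width_deg) % 360.0)
    return angles
```

For five windows this yields 0°, 330°, 30°, 300°, 60°: the first four fill the front area and later windows extend around the ring.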
  • the user 510 can perform virtual window 530 control operations through gestures. For example, a left or right gesture can slide the virtual window 530, and an upward or downward gesture can close the virtual window 530. Further, the user 510 can also click to open the content in the virtual window 530, and perform operations such as copying and cutting.
  • after the virtual reality device 300 receives the data of the encoded application windows from the PC, the decoded application window of each application is added to a respective virtual window, and these virtual windows are combined into an image display frame by the video rendering module 340 and then displayed to the user 510.
  • the virtual window 530a can display Word documents
  • the virtual window 530b can display PowerPoint.
  • the virtual window 530c can display the content of the Weibo application
  • the virtual window 530n can display the content of the Douyin application.
  • according to the embodiments of the present application, the application windows of multiple applications can be displayed on different viewing areas in the three-dimensional virtual environment of virtual reality. Not only is each window not occluded, but the screen displayed in each viewing area is also larger and clearer, enhancing the user's visual experience.
  • another possible schematic example in which the display screen of the virtual reality device 300 presents virtual windows will be described below with reference to FIG. 5b.
  • the display scene shown in FIG. 5b is also an example of a possible implementation of the image processing method 400, and the content that has been described in FIG. 4 and FIG. 5a will not be described in detail below.
  • the display scene shown in FIG. 5b is another solution for the case where the displayed virtual windows 530 exceed the predetermined threshold number.
  • the application windows of all opened Word documents are aggregated in one virtual window; the application windows of all opened PowerPoint documents are aggregated in another virtual window.
  • multiple application windows of each application occupy a virtual window 530.
  • the virtual window 530a can display the first opened Word document
  • the virtual window 530b can display the first opened PowerPoint document
  • the virtual window 530c can display the first opened content of the Weibo application
  • the virtual window 530n can display the first opened content of the Douyin application.
  • other application windows of each application that are subsequently opened are sequentially displayed in each virtual window 530 in the form of thumbnails.
  • each thumbnail may also be a virtual window added with an application window, and each thumbnail is independent of each other and does not interfere with each other.
  • the user 510 can control each virtual window 530 and the thumbnails therein through gestures. For example, when the hand of the user 510 slides to the left or right, the virtual window 530 in the field of view area 520 in front of the user will also slide left or right. In addition, clicking a thumbnail in each virtual window 530 can switch the display screen of that virtual window 530. For example, in the virtual window 530b, if the screen currently displayed is the application window in the leftmost thumbnail 532b, then when the rightmost thumbnail 532b is clicked, the display screen of the virtual window 530b is switched to the application window of the rightmost thumbnail 532b.
  • the application windows of each application opened by the user 510 have been aggregated but are still circularly tiled to cover the 360° field of view of the user 510. At this time, the user 510 may need to tile two or more windows of the same application, for example, to compare and display the content in these application windows.
  • the user 510 can drag any thumbnail in the virtual window 530 of an application to any other virtual window through gesture operations. For example, if a thumbnail 532b is dragged from the virtual window 530b to the virtual window 530c, the thumbnail 532b is displayed in the virtual window 530c, and the other application windows previously displayed in the virtual window 530c are automatically closed. Further, the user 510 can also click to open the content in the virtual window 530, and perform operations such as copying and cutting.
  • Fig. 5c shows a schematic diagram of a possible example of thumbnails in a virtual window.
  • the header file information 541 of the application window may be displayed in the thumbnail 532b, and the header file information 541 may include at least a part of the metadata in the header file of the application window.
  • the header file information 541 can display the file name, file modification time, and other information.
  • the thumbnail 532b can also display content information obtained by simplifying the content of the application window. Depending on the specific content contained in the application window, the simplified content information may generally include text information 542 and reduced picture information 543.
  • when the content of the application window in the thumbnail 532b enlarged and displayed in FIG. 5c is mainly text content, for example a Word file, the thumbnail 532b may display the text information 542 obtained by simplifying the content of the current page of that Word file.
  • the text information 542 may include part of the text content of the current page of the Word file.
  • the current page of the Word file is the first page of the file.
  • the text information 542 may include, for example, the file title of the first page of the Word file, all the text content of the first half of the first page of the Word file, and/or text content collected from certain lines on the first page of the Word file, and so on.
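A minimal sketch of this kind of text simplification, assuming the page text is already split into lines; the sampling parameters (keep the title, sample every second body line, cap at four lines) are illustrative assumptions:

```python
def simplify_text(page_lines, max_lines=4, every=2):
    """Build thumbnail text for a page: keep the title, then sample body lines.

    `page_lines` is the page's text split into lines. The first line is treated
    as the title; the body is sampled every `every` lines (interlaced
    collection), and the result is capped at `max_lines` lines in total.
    """
    if not page_lines:
        return []
    title, body = page_lines[0], page_lines[1:]
    sampled = body[::every]          # collect every `every`-th body line
    return ([title] + sampled)[:max_lines]
```

The result is a short, recognizable excerpt suitable for rendering inside a thumbnail 532b.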
  • when the content of the application window in the thumbnail 532b enlarged and displayed in FIG. 5c is mainly image content, for example a video page, the thumbnail 532b may display the picture information 543 obtained by simplifying the content of the video page. The picture information 543 may include, for example, the cover image of one or more videos with a video cover in the video page, or the image of the main area of the video page.
  • the main area of the video page can be understood as an area including the video playing window and the video title.
  • the image in the main area may include the cover image of the video that needs to be played in the video playing window and the title of the video.
  • the thumbnail 532b can display the text information 542 and the picture information 543 after the simplified processing based on the text content and layout of the microblog.
  • the text information 542 may include, for example, the first sentence or paragraph of the text content of the microblog, all the text content of the first half of the text content of the microblog, and/or text content collected from certain lines of the microblog, and so on.
  • the picture information 543 may include one or more pictures in the picture of the Weibo.
  • the thumbnails shown in Figs. 5b and 5c contain information obtained by simplifying the content of the application windows, so that the user can easily identify the displayed content in each thumbnail, improving the convenience of user operations.
  • another possible schematic example in which the display screen of the virtual reality device 300 presents virtual windows will be described below with reference to FIG. 5d.
  • the display scene shown in FIG. 5d is also an example of possible implementations of the image processing method 400 of FIG. 4 and of the subsequent image processing methods shown in FIGS. 6-7; content that has already been described will not be repeated below.
  • the display scene shown in FIG. 5d is another solution for the case where the displayed virtual windows 530 exceed the predetermined threshold number.
  • a global menu 550, such as a thumbnail dial, can be set in front of the field of view of the user 510.
  • although FIG. 5d is an improvement based on the example shown in FIG. 5b, those skilled in the art can understand that the menu 550 shown in FIG. 5d can also be implemented in the example shown in FIG. 5a; no restriction is imposed here.
  • the menu 550 may include a corresponding thumbnail for each virtual window 530 arranged in a circular tile over the 360° field of view of the user 510, and these thumbnails are arranged in the menu 550 according to the positions of their respective virtual windows.
  • the thumbnail corresponding to the virtual window 530a is the thumbnail 551
  • the thumbnail corresponding to the virtual window 530b is the thumbnail 552
  • the thumbnail corresponding to the virtual window 530c is the thumbnail 553
  • the thumbnail corresponding to the virtual window 530d is the thumbnail 554.
  • the other thumbnails correspond to other virtual windows 530 not shown in FIG. 5d.
  • the image display device 230 may determine whether to generate the menu 550 according to the number of virtual windows 530 that need to be displayed to the user 510. For example, if the number of virtual windows 530 to be displayed is greater than a predetermined threshold, the image display device 230 may generate the menu 550, where the threshold may be, for example, 4; as shown in FIG. 5d, four virtual windows 530 can be displayed in the field of view area 520.
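The menu-generation decision might be sketched as follows. Deriving the threshold from the field of view and a per-window width is an illustrative assumption; the patent simply gives 4 as an example value:

```python
def should_generate_menu(n_virtual_windows, fov_deg=120.0, window_width_deg=30.0):
    """Decide whether a global menu (like menu 550) is needed.

    The threshold is taken as the number of windows that fit in the front
    field of view, e.g. 120 degrees / 30 degrees per window = 4. Once more
    virtual windows than that must be shown, a menu is generated.
    """
    threshold = int(fov_deg // window_width_deg)
    return n_virtual_windows > threshold
```

With the defaults, four windows fit without a menu, while a fifth window triggers menu generation.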
  • the image display device 230 may generate the menu 550 according to a gesture operation of the user 510 or an indication of head movement.
  • the user 510 may cause the image display device 230 to generate the menu 550 by using a gesture operation of simultaneously sliding down with multiple fingers, or the user 510 may cause the image display device 230 to generate the menu 550 by lowering the head.
  • the gesture operation or head movement of the user 510 may not only be used to instruct the image display device 230 to generate the menu 550, but also may be used to control the image display device 230 to display the menu 550.
  • the user 510 may control the image display device 230 to display the menu 550 by looking down.
  • the user 510 can control the image display device 230 to generate the menu 550 by looking down, and the menu 550 is directly displayed after it is generated.
  • when the user 510 is looking straight ahead or looking up, the menu 550 is not displayed.
  • the menu 550 is displayed in front of the user 510, and the user 510 operates through gestures.
  • the user 510 can rotate the menu 550, and the field of view area 520 in front of the user 510 synchronously displays the view after the menu 550 is rotated, together with the virtual windows in that view area.
  • the user 510 can also manipulate the thumbnails in the menu 550, such as the thumbnails 551-554, to adjust the window displayed in the field of view area 520 in front of the user 510 accordingly.
  • the image display device can put multiple application windows into the user's field of vision through VR virtual display technology, and realize an efficient human-computer interaction interface through technical means such as expanding the virtual windows in a 360-degree circular tile, combining thumbnails with virtual windows, and arranging a global menu, thereby realizing flat multi-task operation and improving efficiency.
  • FIG. 6 shows a schematic flowchart of an image processing method 600 according to some embodiments of the present application.
  • the image processing method 600 is implemented on, for example, an image display device, such as the image display device 230 shown in FIG. 2; the image processing method 600 can also be implemented on the virtual reality device 300 of FIG. 3, which serves as an illustrative example of the image display device 230.
  • part or all of the method 600 may be implemented on the display controller 231 and/or the display screen 232 as shown in FIG. 2.
  • different components of the virtual reality device 300 as shown in FIG. 3 may implement different parts or all of the method 600.
  • the image processing method 600 shown in FIG. 6 is a further description of the image processing method 400 shown in FIG. 4 and of the display scenes shown in FIGS. 5a-5d; content that has already been described will not be detailed again.
  • at block 601, the display controller 231 of the image display device 230 judges whether the number of the multiple virtual windows to be displayed exceeds a threshold related to the user's field of view. For example, for a threshold related to the user's field of view, it can be assumed that the visual angle of view of the user 510 is approximately 120 degrees and that the field of view of the user 510 corresponds to this 120-degree view angle range. Under such conditions, the threshold number of displayed virtual windows 530 may be the maximum number of virtual windows 530 that can be accommodated in the field of view area 520.
  • the display controller 231 extracts the header file information of the application window to be added to the virtual window and places at least part of the header file information in the virtual window.
  • the display controller 231 then simplifies the buffered pixel information of the application window, for example, by removing the information in the lower half of the application window, collecting title information, collecting information from interlaced lines, or extracting pixel information for image recognition.
  • the display controller 231 puts the simplified information into the virtual window, and the information allows the user to identify the virtual window.
  • at block 603, a menu containing multiple thumbnails of the multiple virtual windows is generated, where the multiple virtual windows correspond one-to-one to the multiple thumbnails.
  • the display controller 231 may generate corresponding thumbnails for the virtual windows to be displayed and generate a menu containing these thumbnails; the menu may be the aforementioned menu 550 shown in FIG. 5d, which will not be repeated here.
  • the display controller 231 controls the display screen 232 to display the menu including multiple thumbnails according to the user's gesture operation or head movement.
  • a thumbnail is selected from a plurality of thumbnails in the menu.
  • the user can select a thumbnail from a plurality of thumbnails in the menu 550, for example, select the thumbnail 552 by clicking on a gesture operation.
  • the display controller 231 determines whether the virtual window corresponding to the selected thumbnail includes multiple application windows of the same type. As an example, the display controller 231 determines whether the virtual window 530b corresponding to the thumbnail 552 includes multiple application windows of the same type. For example, as shown in FIG. 5d, the virtual window 530b includes multiple application windows of the same type.
  • at block 607, one application window among the multiple application windows is displayed, together with thumbnails of the other application windows. For example, the first or currently open application window in the virtual window 530b is displayed, and the other application windows are displayed as thumbnails.
  • at block 608, the virtual window corresponding to the selected thumbnail among the multiple virtual windows is displayed. For example, as with the virtual window 530b shown in FIG. 5a, the virtual window 530b corresponding to the thumbnail 552 is directly displayed.
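Blocks 605-608 above can be sketched as a small dispatch function; the data shapes (a mapping from thumbnail id to the list of same-type application windows) are assumptions made for illustration:

```python
def handle_thumbnail_selection(virtual_windows, selected):
    """Sketch of blocks 605-608: decide how to display a selected virtual window.

    `virtual_windows` maps a thumbnail id to the list of application windows
    aggregated in its virtual window. If the virtual window holds several
    windows of the same type, the first is shown full-size and the rest as
    thumbnails (block 607); otherwise the single window is shown directly
    (block 608).
    """
    windows = virtual_windows[selected]
    if len(windows) > 1:
        return {"main": windows[0], "thumbnails": windows[1:]}
    return {"main": windows[0], "thumbnails": []}
```

For a virtual window aggregating three Word documents, the first document becomes the main view and the other two become thumbnails.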
  • through the above image processing method, the image display device can put multiple application windows into the user's field of vision through VR virtual display technology.
  • by combining thumbnails with virtual windows, arranging a global menu, and other technical means, an efficient human-computer interaction interface is achieved, thereby realizing flat multi-task operation and improving efficiency.
  • the following describes another image processing method of the image display device 230 with reference to FIG. 7.
  • FIG. 7 shows a schematic flowchart of an image processing method 700 according to another embodiment of the present application.
  • the image processing method 700 is implemented on, for example, an image display device, such as the image display device 230 shown in FIG. 2; the image processing method 700 can also be implemented on the virtual reality device 300 of FIG. 3, which serves as an illustrative example of the image display device 230.
  • part or all of the method 700 may be implemented on the display controller 231 and/or the display screen 232 as shown in FIG. 2.
  • different components of the virtual reality device 300 as shown in FIG. 3 may implement different parts or all of the method 700.
  • at block 704, the display controller 231 of the image display device 230 may determine whether the user is looking down, to decide whether to display the menu. If the user looks down, then at block 705 the display screen 232 displays the menu. For example, as shown in FIG. 5d, when the user 510 is looking straight ahead or looking up, the menu 550 is not displayed; when the user 510 looks down, the menu 550 is displayed in front of the user 510.
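A hedged sketch of the look-down check, assuming the head motion capture module reports a pitch angle in degrees with negative values meaning looking down; the -20 degree threshold is an assumed value, not one specified by the method:

```python
def menu_visible(pitch_deg, look_down_threshold=-20.0):
    """Show the menu only when the head pitch indicates the user is looking down.

    `pitch_deg` is the head pitch: 0 = level, positive = looking up,
    negative = looking down. The menu becomes visible once the pitch drops
    at or below the assumed look-down threshold.
    """
    return pitch_deg <= look_down_threshold
```

So a level or upward gaze keeps the menu hidden, and bowing the head past the threshold displays it, matching blocks 704-705.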
  • Program code can be applied to input instructions to perform the functions described in this article and generate output information.
  • the output information can be applied to one or more output devices in a known manner.
  • a processing system includes any system having a processor such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.
  • the program code can be implemented in a high-level programming language or an object-oriented programming language to communicate with the processing system.
  • assembly language or machine language can also be used to implement the program code.
  • the mechanisms described in this article are not limited to the scope of any particular programming language. In either case, the language can be a compiled language or an interpreted language.
  • IP cores can be stored on a tangible computer-readable storage medium and provided to multiple customers or production facilities to be loaded into the manufacturing machine that actually manufactures the logic or processor.
  • the instruction converter can be used to convert instructions from the source instruction set to the target instruction set.
  • the instruction converter may transform (for example, use static binary transformation, dynamic binary transformation including dynamic compilation), deform, emulate, or otherwise convert the instruction into one or more other instructions to be processed by the core.
  • the instruction converter can be implemented by software, hardware, firmware, or a combination thereof.
  • the instruction converter may be on the processor, off the processor, or part on the processor and part off the processor.
  • the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof.
  • the disclosed embodiments can also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which can be read and executed by one or more processors.
  • the instructions can be distributed via a network or via other computer-readable media. Therefore, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (for example, a computer), including but not limited to: floppy disks, optical disks, compact disc read-only memory (CD-ROM), magneto-optical disks, read-only memory (ROM), random access memory (RAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic or optical cards, or flash memory.
  • a machine-readable medium includes any type of machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (for example, a computer).
  • although the terms first, second, etc. may be used herein to describe various units or data, these units or data should not be limited by these terms. These terms are used only to distinguish one feature from another.
  • for example, without departing from the scope of the exemplary embodiments, a first feature may be referred to as a second feature, and similarly a second feature may be referred to as a first feature.


Abstract

一种图像处理方法和图像显示设备、存储介质和电子设备,方法包括:从图像源设备接收经编码的多个应用窗口并对它们解码获得经解码的多个应用窗口,其中多个应用窗口与多个应用一一对应;根据与多个应用窗口中的每个应用窗口对应的应用类别,将每个应用窗口加入到创建的至少一个虚拟窗口中的与应用类型相对应的虚拟窗口内;和将至少一个虚拟窗口合成在一个图像显示帧内并显示。通过虚拟现实(Virtual Reality,VR)的虚拟显示技术,可以把多个应用窗口,同时都投放到用户的视野中。通过采用虚拟窗口的360度环形平铺展开,缩略图和虚拟窗口相结合,以及布置全局菜单手段,实现高效的人机交互界面,从而实现多任务的扁平化操作,提高效率。

Description

图像处理方法和图像显示设备、存储介质和电子设备
本申请要求于2020年03月12日提交国家知识产权局、申请号为202010169749.4、申请名称为“图像处理方法和图像显示设备、存储介质和电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请的一个或多个实施例通常涉及虚拟现实技术的图像处理领域,具体涉及一种图像处理方法和图像显示设备、存储介质和电子设备。
背景技术
虚拟现实(Virtual Reality,VR)技术是20世纪发展起来的一种全新的技术,其涵盖计算机,电子信息,光学,3D建模仿真等技术于一体。其基本的实现方式是通过计算机模拟虚拟的环境从而给人以环境沉浸感。随着社会生产力和科学技术的不断发展,各行各业对VR技术的需求日益旺盛。VR技术也取得了巨大进步,并逐步成为一个新的科学技术领域。其中通过VR技术来实现PC的虚拟桌面系统,也是目前研究的一大热点。
传统的计算机桌面系统,都是通过主机把相关的画面窗口投射到显示器上进行显示,以便于人机交互,其桌面显示受限于显示器的约束,需要把多个任务窗口,在主机内先进行打包整合与优化,才统一传输到显示器进行显示,这样的问题在于,现有的桌面显示系统不便于应用程序的扁平化操作,并且应用程序的操作效率也较低。
发明内容
本申请的一些实施方式提供了一种图像处理方法和图像显示设备、存储介质和电子设备。以下从多个方面介绍本申请,以下多个方面的实施方式和有益效果可互相参考。
为了应对上述场景,第一方面,本申请的实施方式提供了一种用于图像源设备的图像处理方法,包括:图像源设备根据用户打开的多个应用,绘制这些多个应用的应用窗口,每个应用窗口与多个应用一一对应;图像源设备编码多个应用窗口并将经编码的多个应用窗口发送到图像显示设备,其中被发送的经编码的多个应用窗口并未在图像源设备被合成在一个图像显示帧内。
从上述第一方面的实施方式中可以看出,本申请的实施方式的图像源设备可以不执行图像的整合流程,直接将经编码的多个应用窗口发送到图像显示设备,这样解决现有图像源设备必须把所有程序窗口整合成一帧图形输出显示,效率低,且不便于应用程序的扁平化操作的问题。
结合第一方面,在一些实施方式中,图像源设备包括用户设备和服务器中的至少一个,并且图像显示设备包括虚拟现实显示设备。
第二方面,本申请的实施方式提供了一种用于图像显示设备的图像处理方法,包括:图像显示设备从图像源设备接收经编码的多个应用窗口并对它们进行解码,之后获得经解码的多个应用窗口,其中多个应用窗口与用户在图像源设备打开的多个应用一一对应;图像显示设备创建至少一个虚拟窗口;图像显示设备可以根据与多个应用窗口中的每个应用窗口对应的应用类别,例如,微博、抖音、Word等应用,将每个应用窗口加入到创建好的至少一个虚拟窗口中的与应用类型相对应的虚拟窗口内;图像显示设备之后将至少一个虚拟窗口合成在一个图像显示帧内并在显示屏幕上显示该帧。
从上述第二方面的实施方式中可以看出,本申请的实施方式可以实现在虚拟现实的三维空间的虚拟环境中的不同视野区域上分别显示多个应用的应用窗口,并可以根据应用类别对应用窗口进行按类别的聚合,不仅可以显示更多窗口,各个窗口不会发生遮挡,而且在视野区域上显示的画面更大更清晰,增强了用户的视觉体验。
结合第二方面,在一些实施方式中,还包括:响应于用户通过图像显示设备的显示屏幕点击至少一个虚拟窗口中的第一虚拟窗口中的第一应用窗口,控制显示屏幕在第一虚拟窗口中显示被点击的第一应用窗口的内容,并且在第一虚拟窗口中同时显示第一虚拟窗口所包括的其他应用窗口的缩略图。
结合第二方面,在一些实施方式中,还包括:响应于用户通过图像显示设备的显示屏幕将第一虚拟窗口中的第一应用窗口拖动到至少一个虚拟窗口中的第二虚拟窗口,控制显示屏幕在第二虚拟窗口中显示第一应用窗口的内容。
结合第二方面,在一些实施方式中,还包括:响应于用户在图像显示设备的显示屏幕上左右滑动的手势,显示至少一个虚拟窗口相应地左右滑动。
结合第二方面,在一些实施方式中,图像显示设备是虚拟现实显示设备。
第三方面,本申请的实施方式提供了一种用于图像显示设备的图像处理方法,包括:在待显示的多个虚拟窗口的数量超出阈值和用户俯视图像显示设备的显示屏幕中的至少一种情况下,生成包含多个虚拟窗口的多个缩略图的菜单,其中多个虚拟窗口与多个缩略图一一对应,并且,其中多个虚拟窗口中的每个虚拟窗口包括同一应用类型的至少一个应用窗口,以及至少一个应用窗口与至少一个应用一一对应;和控制显示屏幕显示菜单。
从上述第三方面的实施方式中可以看出,本申请的实施方式可以在虚拟窗口数量较多时和/或相应用户的头部动作,为用户提供便捷的虚拟窗口控制方式,通过采用缩略图和虚拟窗口相结合,以及布置全局菜单等技术手段,实现高效的人机交互界面,从而为用户实现多任务的扁平化操作,提高效率。
结合第三方面,在一些实施方式中,菜单包括多个缩略图组成的转盘。
结合第三方面,在一些实施方式中,阈值与用户的视野范围相关。
结合第三方面,在一些实施方式中,多个缩略图中的每个缩略图包括多个虚拟窗口中与每个缩略图相对应的虚拟窗口的标识信息。
结合第三方面,在一些实施方式中,多个缩略图中的每个缩略图包括通过对多个虚拟窗口中对应的虚拟窗口进行简化图像处理后获得的图像内容,其中简化图像处理包括对对应的虚拟窗口进行隔行扫描和去除对应的虚拟窗口的部分内容中的至少一种。
从上述结合第三方面的实施方式中可以看出,缩略图通过包含对应用窗口的内容进行简洁化处理后的信息,使用户可以方便地识别每个缩略图中的显示内容,提高用户操作的便捷性。
结合第三方面,在一些实施方式中,还包括:根据用户的指令,从菜单中的多个缩略图中选择一个缩略图并显示多个虚拟窗口中与所选的缩略图相对应的虚拟窗口,其中,指令包括用户对菜单的滑动和对所选缩略图的点击中的至少一种。
结合第三方面,在一些实施方式中,在与所选的缩略图相对应的虚拟窗口包括同一类型的多个应用窗口的情况下,显示相对应的虚拟窗口还包括:在相对应的虚拟窗口中显示同一类型的多个应用窗口中的一个应用窗口,以及同一类型的多个应用窗口中的其他应用窗口的缩略图。
第四方面,本申请的实施方式提供了一种图像显示设备,包括:显示控制器和显示屏幕,显示控制器,用于从图像源设备接收经编码的多个应用窗口并解码,以获得经解码的多个应用窗口,其中多个应用窗口与多个应用一一对应;创建至少一个虚拟窗口;根据与多个应用窗口中的每个应用窗口对应的应用类别,将每个应用窗口加入到至少一个虚拟窗口中的与应用类型相对应的虚拟窗口内;和将至少一个虚拟窗口合成在一个图像显示帧;和显示屏幕用于显示图像显示帧。
从上述第四方面的实施方式中可以看出,本申请的实施方式可以实现在虚拟现实的三维空间的虚拟环境中的不同视野区域上分别显示多个应用的应用窗口,并可以根据应用类别对应用窗口进行按类别的聚合,不仅可以显示更多窗口,各个窗口不会发生遮挡,而且在视野区域上显示的画面更大更清晰,增强了用户的视觉体验。
结合第四方面,在一些实施方式中,还包括:响应于用户通过图像显示设备的显示屏幕点击至少一个虚拟窗口中的第一虚拟窗口中的第一应用窗口,控制显示屏幕在第一虚拟窗口中显示被点击的第一应用窗口的内容,并且在第一虚拟窗口中同时显示第一虚拟窗口所包括的其他应用窗口的缩略图。
结合第四方面,在一些实施方式中,还包括:响应于用户通过图像显示设备的显示屏幕将第一虚拟窗口中的第一应用窗口拖动到至少一个虚拟窗口中的第二虚拟窗口,控制显示屏幕在第二虚拟窗口中显示第一应用窗口的内容。
结合第四方面,在一些实施方式中,还包括:响应于用户在图像显示设备的显示屏幕上左右滑动的手势,显示至少一个虚拟窗口相应地左右滑动。
结合第四方面,在一些实施方式中,图像显示设备是虚拟现实显示设备。
第五方面,本申请的实施方式提供了一种图像显示设备,包括:显示控制器和显示屏幕,显示控制器,用于在待显示的多个虚拟窗口的数量超出阈值和用户俯视显示屏幕中的至少一种情况下,生成包含多个虚拟窗口的多个缩略图的菜单,其中多个虚拟窗口与多个缩略图一一对应,并且,其中多个虚拟窗口中的每个虚拟窗口包括同一应用类型的至少一个应用窗口,以及至少一个应用窗口与至少一个应用一一对应;和控制显示屏幕显示菜单。
从上述第五方面的实施方式中可以看出,本申请的实施方式可以在虚拟窗口数量较多时和/或相应用户的头部动作,为用户提供便捷的虚拟窗口控制方式,通过采用缩略图和虚拟窗口相结合,以及布置全局菜单等技术手段,实现高效的人机交互界面,从而为用户实现多任务的扁平化操作,提高效率。
结合第五方面,在一些实施方式中,菜单包括多个缩略图组成的转盘。
结合第五方面,在一些实施方式中,阈值与用户的视野范围相关。
结合第五方面,在一些实施方式中,多个缩略图中的每个缩略图包括多个虚拟窗口中与每个缩略图相对应的虚拟窗口的标识信息。
结合第五方面,在一些实施方式中,多个缩略图中的每个缩略图包括通过对多个虚拟窗口中对应的虚拟窗口进行简化图像处理后获得的图像内容,其中简化图像处理包括对对应的虚拟窗口进行隔行扫描和去除对应的虚拟窗口的部分内容中的至少一种。
从上述结合第五方面的实施方式中可以看出,缩略图通过包含对应用窗口的内容进行简洁化处理后的信息,使用户可以方便地识别每个缩略图中的显示内容,提高用户操作的便捷性。
结合第五方面,在一些实施方式中,还包括:显示控制器还用于根据用户的指令,从菜单中的多个缩略图中选择一个缩略图并显示多个虚拟窗口中与所选的缩略图相对应的虚拟窗口,其中,指令包括用户对菜单的滑动和对所选缩略图的点击中的至少一种。
结合第五方面,在一些实施方式中,在与所选的缩略图相对应的虚拟窗口包括同一类型的多个应用窗口的情况下,显示相对应的虚拟窗口还包括:在相对应的虚拟窗口中显示同一类型的多个应用窗口中的一个应用窗口,以及同一类型的多个应用窗口中的其他应用窗口的缩略图。
结合第五方面,在一些实施方式中,图像显示设备包括虚拟现实显示设备。
第六方面,本申请提供了一种计算机可读存储介质,该存储介质可以是非易失性的。该存储介质中包含指令,该指令在执行后实施如前述任意一个方面或实施方式所描述的方法。
第七方面,本申请提供了一种电子设备,包括:存储器,用于存储由电子设备的一个或多个处理器执行的指令,以及处理器,用于执行存储器中的指令,以执行根据前述任意一个方面或实施方式所描述的方法。
附图说明
图1示出了现有桌面显示系统的基本显示流程的示意图。
图2示出根据本申请实施方式的示例图像处理系统的示意图。
图3示出了根据本申请实施方式的示例图像显示设备的示意图。
图4示出根据本申请实施方式的图像源设备和图像显示设备执行本申请的图像处理方法的交互过程。
图5a-5d示出了根据示例性实施方式的图像显示设备的显示屏呈现的图形用户界面的示意图。
图6示出了根据本申请一种实施方式的图像处理方法的示意图。
图7示出了根据本申请另一种实施方式的图像处理方法的示意图。
具体实施方式
以下由特定的具体实施例说明本申请的实施方式,本领域技术人员可由本说明书所揭示的内容轻易地了解本申请的其他优点及功效。虽然本申请的描述将结合较佳实施例一起介绍,但这并不代表此发明的特征仅限于该实施方式。恰恰相反,结合实施方式作发明介绍的目的是为了覆盖基于本申请的权利要求而有可能延伸出的其它选择或改造。为了提供对本申请的深度了解,以下描述中将包含许多具体的细节。本申请也可以不使用这些细节实施。此外,为了避免混乱或模糊本申请的重点,有些具体细节将在描述中被省略。需要说明的是,在不冲突的情况下,本申请中的实施例及实施例中的特征可以相互组合。
此外,各种操作将以最有助于理解说明性实施例的方式被描述为多个离散操作;然而,描述的顺序不应被解释为暗示这些操作必须依赖于顺序。特别是,这些操作不需要按呈现顺序执行。
除非上下文另有规定,否则术语“包含”,“具有”和“包括”是同义词。短语“A/B”表示“A或B”。短语“A和/或B”表示“(A和B)或者(A或B)”。
应注意的是,在本说明书中,相似的标号和字母在下面的附图中表示类似项,因此,一旦某一项在一个附图中被定义,则在随后的附图中不需要对其进行进一步定义和解释。
如本文所使用的，术语“模块或单元”可以指或者包括专用集成电路（ASIC）、电子电路、执行一个或多个软件或固件程序的处理器（共享的、专用的或组）和/或存储器（共享的、专用的或组）、组合逻辑电路、和/或提供所描述的功能的其他合适的组件，或者可以是专用集成电路（ASIC）、电子电路、执行一个或多个软件或固件程序的处理器（共享的、专用的或组）和/或存储器（共享的、专用的或组）、组合逻辑电路、和/或提供所描述的功能的其他合适的组件的一部分。
图1示出了现有桌面显示系统的基本显示流程的示意图。
在现有技术中,传统的计算机桌面系统,都是通过主机把相关的画面窗口投射到显示器上进行显示,以便于人机交互,其桌面显示受限于显示器的约束,需要把多个任务窗口,在主机内先进行打包整合与优化,才统一传输到显示器进行显示。
如图1所示，两个应用101，例如应用A和B，它们可以同时执行，也可以按次序分别执行。图形设备接口（Graphics Device Interface，GDI）102，主要任务是负责系统与绘图程序之间的信息交换，处理所有Windows程序的图形和图像输出，GDI的出现使得程序员无需关心硬件设备以及设备的正常驱动，就可以将应用程序的输出转化为硬件设备上的输出和构成；而多媒体编程接口103，例如，DirectX可让以Windows为平台的游戏或者多媒体程序获得更高的执行效率，加强3D图形和声音效果，并提供设计人员一个共同的硬件驱动标准，让开发者不必为每一个品牌硬件来写不同的驱动程序，隔离应用程序与硬件之间的关系。应用101（例如，A和B），分别通过GDI和DirectX，与系统图形组件通信，并通过窗口显示驱动模型（Windows Display Driver Model，WDDM）104驱动GPU等硬件资源，从而形成了系统的图形特性。WDDM 104通过调用GPU等硬件资源，形成应用程序的桌面窗口，如图形表面107（例如，A和B），这里的图形表面类似于显示器桌面显示的每个独立菜单窗口。然后图形表面A和图形表面B通过桌面窗口管理器（Desktop Window Manager，DWM）108的整合，形成统一的桌面显示图形，并在GPU显存109中进行暂存。之后通过视频传输通道，例如VGA（Video Graphics Array，视频图形阵列）线，HDMI（High Definition Multimedia Interface，高清多媒体接口）线，DP（DisplayPort，显示接口）线或者DVI（Digital Visual Interface，数字视频接口）线等传输到显示器110进行显示。
可以看到每一次应用程序的操作,都需要经过DWM 108的图像整合,才能输出。这是只有一个显示器桌面的限制,所有必须把所有程序窗口整合成一帧图形输出显示,效率低,且不便于应用程序的扁平化操作。
本申请的技术方案希望通过VR技术,来实现虚拟桌面系统,通过人宽广的视野,来解放显示器投屏显示的束缚。通过VR虚拟桌面技术,可以把多个任务菜单窗口,都投放到人的视野中,平铺展开。从而实现多任务的扁平化操作,提高效率。以下参考各附图描述本申请的一些实施方式。
图2示出根据本申请实施方式的示例图像处理系统的示意图。
图像处理系统20可以包括图像源设备210、网络220和图像显示设备230。图像显示设备230还包括显示控制器231和显示屏幕232。其中,图像源设备210可以包括用户设备212和服务器211。用户设备212可以包括但不局限于,台式计算机、膝上型计算机设备、平板电脑、智能手机、车载信息娱乐设备,流媒体客户端设备,以及各种其他用于图像处理的电子设备。服务器211可以包括云基础设施提供商或者数据中心服务商的服务设备,其可以提供计算、渲染、存储、编解码等一切服务。网络220可以包括用于使图像源设备210和图像显示设备230进行数据通信的各种传输介质,例如有线或无线的数据传输链路,其中,有线的数据传输链路可以包括VGA线,HDMI线,DP线或者DVI线等,无线的数据传输链路可以包括互联网、局域网、移动通信网及它们的组合。
图像源设备210可以部分地根据如图1所示的现有的桌面显示流程生成发送给图像显示设备230的图像数据，但是在本申请的各种实施方式中，在图像源设备210中不执行图像的整合流程，即，对于Windows系统来说，去掉了DWM 108，对于安卓系统来说去掉了SurfaceFlinger图像整合模块，而对于Linux系统来说，则去掉了Wayland Compositor图像整合模块。在以下的实施方式中，对于图像源设备210执行的图像生成过程，将重点描述其与现有技术的不同之处，而与现有技术的相同或相似部分将简略描述。
图像显示设备230可以包括可穿戴设备(例如,头戴式显示器(Head-Mounted Display,简称HMD)等),虚拟现实(Virtual Reality,简称VR)和/或增强现实(Augment Reality,简称AR)设备等。其中,虚拟现实显示设备中的头戴式虚拟现实设备是穿戴式设备中的一种,又称虚拟现实头盔、VR眼镜或者眼镜式显示器。在图像显示设备230中,显示控制器231用于执行本申请实施方式提供的图像处理方法,显示屏幕232用于向用户显示经过显示控制器231处理后的图像。后续参考图3进一步说明显示控制器231和显示屏幕232。
在一些图像处理系统20的可能场景中,用户可以使用诸如智能手机的用户设备212,并使用数据线连接该智能手机和诸如头戴式虚拟现实设备的图像显示设备230。
在另一些图像处理系统20的可能场景中，用户设备212或服务器211可以基于第五代移动通信技术（5th-generation mobile communication technology，5G）网络与诸如头戴式虚拟现实设备的图像显示设备230通信地连接。作为一个示例，随着5G网络的部署，其为终端用户提供超大的接入带宽，其数据传输速率远远高于以前的蜂窝网络，最高可达10Gbit/s，比当前的有线互联网的传输速度快，比现在的4G LTE蜂窝网络快100倍。5G网络的另一个优点是较低的网络延迟（更快的响应时间），一般低于1毫秒，这为VR的云端渲染奠定了带宽接入基础。另外，服务器211为云基础设施以及数据中心的服务设备时，用户可以随时随地、便捷地、随需应变地通过网络访问可配置的计算资源（例如，网络、服务器、存储、应用、及服务）共享池，池中资源可以快速分配及释放，并且只需要很少的管理工作量以及与服务提供商的交互。海量的终端用户通过5G接入网络，服务器211提供计算、渲染、存储、编解码等一切服务。第三方内容供应商提供一切内容服务，这使得用户终端可以轻量化，便捷化，不再需要很强的硬件算力支撑。每个终端用户只需要一个登录账号和一个诸如图像显示设备230的接收显示终端即可。5G网络的部署，其超大的带宽和超低的延时，可以保证云端VR渲染画面的可获得性，还可以进一步促进VR虚拟桌面的应用。
在另一些图像处理系统20的可能场景中,用户设备212或服务器211可以与图像显示设备230基于其他通信网络互联,通信网络例如可以包括,wifi热点网络、wifi P2P网络、蓝牙网络、zigbee网络或近场通信(near field communication,NFC)网络等近距离通信网络,和/或未来演进的公共陆地移动网络(public land mobile network,PLMN)或因特网等。
示例性地,以下参考图3,对图像显示设备230的一个示例进行具体说明。图3示出了头戴式的虚拟现实设备300的示例性的示意图。头戴式虚拟现实设备300可以包括处理单元310、存储模块320、收发模块330、视频渲染模块340、左显示屏342、右显示屏344、音频编解码模块350、麦克风352、扬声器354、头部动作捕捉模块360、手势动作识别模块370、以及光学器件380等。
可以理解的是,本发明实施例示意的结构并不构成对头戴式虚拟现实设备300的具体限定。在本申请另一些实施例中,头戴式虚拟现实设备300可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理单元310可以包括一个或多个处理单元,例如:处理单元310可以包括应用处理器(application processor,AP),调制解调处理器,控制器,数字信号处理器(digital signal processor,DSP),和/或基带处理器等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理单元310中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理单元310中的存储器为高速缓冲存储器。该存储器可以保存处理单元310刚用过或循环使用的指令或数据。如果处理单元310需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理单元310的等待时间,因而提高了系统的效率。
存储模块320可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。存储模块320可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序等。存储模块320的存储数据区可存储虚拟现实设备300使用过程中所创建的数据(比如,应用的缩略图)等。
在本申请的一个或多个实施方式中,存储模块320可存储本申请实施方式提供的一个或多个图像处理方法的至少一部分。可以理解,当存储模块320中存储的图像处理方法的代码被处理单元310运行时,可以实现在三维空间的虚拟环境中的虚拟显示屏幕上分别显示多个应用的应用界面。
此外,存储模块320可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。处理单元310通过运行存储在存储模块320的指令,和/或存储在设置于处理单元310中的存储器的指令,执行虚拟现实设备300的各种功能以及数据处理。
收发模块330可以从处理单元310接收待发送的数据,然后发送给图像源设备210;同时,收发模块330还可以从图像源设备210接收数据,例如,图像数据帧等。
视频渲染模块340可以包括图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),视频编解码器,GPU用于执行数学和几何计算,用于图形渲染。在一些其他实施方式中,视频渲染模块340可以设置于处理单元310中,或将视频渲染模块340的部分功能模块设置于处理单元310中,例如处理单元310可以集成CPU和GPU。在各种实施方式中,CPU和GPU可以配合执行本申请实施例提供的图像处理方法,比如图像处理方法中部分算法由CPU执行,另一部分由GPU执行,以得到较快的处理效率。
左显示屏342、右显示屏344分别向用户的左眼和右眼显示图像，其中图像具体可以表现为静态图像或视频中的视频帧。左显示屏342和右显示屏344都包括显示面板。显示面板可以采用液晶显示屏（liquid crystal display，LCD），有机发光二极管（organic light-emitting diode，OLED），有源矩阵有机发光二极体或主动矩阵有机发光二极体（active-matrix organic light emitting diode，AMOLED），柔性发光二极管（flex light-emitting diode，FLED），Miniled，MicroLed，Micro-oLed，量子点发光二极管（quantum dot light emitting diodes，QLED）等。光学器件380可以包括变焦透镜等镜片，光学器件380对显示屏在显示图像过程中发出的光线进行屈光调整，以补偿近视眼、远视眼以及散光用户的晶状体屈光不正的不足，使得用户可以清晰地看到显示屏的图像。
音频编解码模块350用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频编解码模块350还可以用于对音频信号编码和解码。在一些实施例中,音频编解码模块350可以设置于处理单元310中,或将音频编解码模块350的部分功能模块设置于处理单元310中。麦克风352用于将用户的模拟音频输入到音频编解码模块350,扬声器354用于向用户播放音频编解码模块350输出的模拟音频信号。
头部动作捕捉模块360可以对用户头部的动作进行捕获和识别,例如识别用户是否低头或抬头,以及用户头部的转动角度等。
手势动作识别模块370可以识别用户的操作手势动作，例如左右滑动，上下滑动，点击确认等动作。
在一些可能的场景中，用户使用虚拟现实设备300，收发模块330接收从诸如PC的图像源设备210传输过来的视频或图像信息后，交由处理单元310进行处理，处理单元310把相关的视频环境、或者图形窗口信息处理好后，通过视频渲染模块340，投放到左右显示屏（342、344）上，形成虚拟显示环境或者虚拟桌面显示系统。音频编解码模块350把音频信息解码后传到扬声器354发声，以提供VR更加沉浸式的视频体验。同时麦克风352也可以捕捉人的声音信息，以提供VR系统与PC端的音频交互。在使用虚拟现实设备300的过程中，当人开始低头时，头部动作捕捉模块360捕捉这一动作，并触发虚拟桌面显示系统的相应操作，例如开启并显示全局缩略图；当人开始平视或者仰视的时候，触发关闭全局缩略图。同时头部动作捕捉模块360也可以捕捉头部转动等动作，以触发同步跟随显示人视野前方的窗口画面。手势动作识别模块370识别用户的手势动作，例如左右滑动，上下滑动，点击确认等动作，以便于用户与虚拟桌面系统进行交互操作。
在一些实施方式中,图2所示的图像显示设备230的显示控制器231可以以软件方式实现,但是显示控制器231还可以以硬件,软件或软件和硬件的组合实现。例如,参考图2和图3,在图像显示设备230为虚拟现实设备300的情况下,图像显示设备230的显示控制器231例如可以包括处理单元310、存储模块320、收发模块330、视频渲染模块340。图像显示设备230的显示屏幕232可以包括左显示屏342、右显示屏344、以及光学器件380。在其他实施方式中,图像显示设备230可以不包括图3中所示出的一个或多个部分,或者可以包括图3中未示出的其他部分。
以下将结合附图和实际应用场景,对本申请实施方式提供的图像处理方法进行示例性地描述。图4示出图像源设备210和图像显示设备230执行本申请的图像处理方法的交互过程。如图4所示,图4中未示出上述图1中提及的部分现有技术,例如包括,图像源设备210的一个或多个应用启动并申请创建应用程序窗口,图像源设备210的处理器执行应用程序流程并完成资源分配,根据应用的类型,应用可以调用例如OpenGL、DirectX等的2D的或3D图像程序接口。作为另一个示例,在图像源设备210为诸如在云端的服务器的情况下,图像显示设备230通过网络连接图像源设备210后,例如,图像显示设备230登录云端服务器请求虚拟现实服务,图像源设备210可以在云端为图像显示设备230分配用于虚拟现实服务的设备资源,例如CPU、GPU等。
随后,在块401:根据多个应用,绘制多个应用窗口,其中多个应用窗口中的每个应用窗口与多个应用一一对应。示例性地,图像源设备210可以利用GPU处理每个应用的绘制图形指令,完成每个应用的图形应用窗口的绘制,绘制好的应用窗口缓存到存储器中,其中,存储器可以是GPU的缓存,也可以是CPU的缓存或者图像源设备210的其他内存,这里不做具体限定。
在块402:对多个应用窗口中的每个应用窗口添加标识信息。示例性地,图像源设备210对这些应用窗口的像素数据添加标识信息。例如,图像源设备210可以对应用窗口的在存储器中的缓存帧添加头文件(Header),该头文件中可以包括该应用窗口对应的应用的诸如应用类型、文件名称、创建时间、作者等元数据。
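The tagging in block 402 can be sketched as a small header codec. This is only an illustrative sketch: the length-prefixed JSON layout and the exact field names are assumptions, since the source only states that a header with metadata such as the application type, file name, creation time, and author is attached to each cached window frame.

```python
from dataclasses import dataclass, field, asdict
import json
import time

@dataclass
class WindowHeader:
    # Metadata attached to each drawn application window (block 402).
    # The patent lists application type, file name, creation time and
    # author as example metadata; the field names here are illustrative.
    app_type: str
    file_name: str
    created_at: float = field(default_factory=time.time)
    author: str = ""

def tag_window(pixels: bytes, header: WindowHeader) -> bytes:
    """Prepend a length-prefixed JSON header to the window's pixel buffer."""
    blob = json.dumps(asdict(header)).encode("utf-8")
    return len(blob).to_bytes(4, "big") + blob + pixels

def parse_window(frame: bytes) -> tuple[WindowHeader, bytes]:
    """Recover the header and pixel payload on the display-device side."""
    n = int.from_bytes(frame[:4], "big")
    meta = json.loads(frame[4:4 + n].decode("utf-8"))
    return WindowHeader(**meta), frame[4 + n:]
```

On the display-device side, `parse_window` yields the metadata needed in block 406/408 to sort the decoded windows by application type before assigning them to virtual windows.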
在块403:编码多个应用窗口。图像源设备210的诸如显示器驱动的程序可以将存储器中的应用窗口的数据进行编码。
在编码之后,图像源设备210通过有线或者无线的方式在404:发送经编码的多个应用窗口到图像显示设备230。这里,被发送的经编码的多个应用窗口并未在图像源设备被合成在用于显示给用户的一个图像显示帧内。
接下来，以虚拟现实设备300作为图像显示设备230的一个示例来说明后续流程。可以理解，在图4中，没有具体示出虚拟现实设备300启动后进行虚拟显示环境渲染的公知技术，例如包括，虚拟现实设备300启动后，虚拟现实设备300的用户接口（User Interface，UI）系统的应用也会随之启动，并且该应用会申请相应的应用程序窗口，处理单元310分配相应的资源后，应用调用2D/3D的图形程序接口，通过视频渲染模块340处理UI的绘图指令，完成虚拟环境的渲染，形成VR端虚拟桌面框架。
回到图4,在块405:接收经编码的多个应用窗口。虚拟现实设备300的收发模块330从图像源设备210接收来自图像源设备210的已编码的多个应用窗口的数据。
虚拟现实设备300在接收到数据后，在块406：解码，以获得经解码的多个应用窗口。虚拟现实设备300解码出具有标识信息的应用窗口的数据，然后将这些应用窗口的数据，按照应用的各自类型，例如，微博、抖音等，分类保存到虚拟现实设备300的存储模块320中。
在块407:创建至少一个虚拟窗口。虚拟现实设备300根据在存储模块320中缓存的应用窗口的数量,创建对应数量的虚拟窗口,虚拟窗口可以是在虚拟桌面框架中用于为用户显示应用窗口的区域。
在块408:根据与多个应用窗口中的每个应用窗口对应的应用类别,将每个应用窗口加入到多个虚拟窗口中的与应用类型相对应的虚拟窗口内。
作为一个示例,在应用窗口的数量小于预定阈值的情况下,虚拟现实设备300可以将已保存在存储模块320中的每个应用窗口分别加入一个虚拟窗口中。作为另一个示例,在应用窗口的数量在大于预定阈值的情况下,虚拟现实设备300查看已保存在存储模块320中的每个应用窗口标识信息,根据应用窗口对应的应用的各自类型,将每个应用窗口加入与应用类型相对应的虚拟窗口,例如,当打开的窗口越来越多的时候,可以根据应用的类别进行归类聚合,比如打开的所有Word文档的应用窗口,聚合在一个虚拟窗口内;打开的所有PowerPoint文档的应用窗口,聚合在另外一个虚拟窗口内等。以下将参考场景和附图进一步说明这部分的其他示例。
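The assignment rule of block 408 described above can be sketched as follows. The per-view threshold of 4 windows and the dict-based grouping are illustrative assumptions; the source only says that below a threshold each application window gets its own virtual window, and above it windows are aggregated by application category.

```python
from collections import defaultdict

FOV_WINDOW_LIMIT = 4  # assumed threshold: windows that fit the forward view

def assign_to_virtual_windows(app_windows):
    """Block 408: below the threshold, each application window gets its own
    virtual window; above it, windows are aggregated per application type,
    e.g. all Word documents share one virtual window."""
    if len(app_windows) <= FOV_WINDOW_LIMIT:
        return [[w] for w in app_windows]
    groups = defaultdict(list)
    for w in app_windows:
        groups[w["app_type"]].append(w)
    return list(groups.values())
```

Each inner list then corresponds to one virtual window that is composited into the single image display frame in block 409.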
在块409：将至少一个虚拟窗口合成在一个图像显示帧内并显示。视频渲染模块340将所有虚拟窗口合成在一个图像显示帧内，并由该图像显示帧生成左右两个视图，之后在左显示屏342、右显示屏344上显示。
以下结合场景和附图进一步说明图4所示的图像处理方法400。
图5a示出了根据示例性实施方式的虚拟现实设备300的显示屏呈现的虚拟窗口的示意性的示例。其中,图5a所示的显示场景是对图像处理方法400的可能的实施方式的进一步地举例,对在图4中已描述的内容,以下不再赘述。
通常,在用户使用虚拟现实设备300时的显示屏上,用户可以利用以用户为中心的360°的三维空间的虚拟显示区域。如图5a所示,在虚拟现实设备300的视野区域520上,一种有效的虚拟窗口排列方式就是,平铺环形排列。例如,以用户510为中心,在用户的视野范围内360°环形平铺地排列多个虚拟窗口530a-530n,其中每个虚拟窗口530a-530n都是可以向用户510清晰地呈现虚拟窗口中应用的内容。
在一示例性的场景中,在显示的虚拟窗口530没有超过预定的阈值数量的情况下,虚拟窗口530优先排列在用户510的前方的视野区域520。在一些示例中,可以假设用户510的可视的视角大概为120度,则用户510的视野区域与120度的视角范围相对应,在这样的条件下,显示的虚拟窗口530的阈值数量可以是在视野区域520中最多可容纳的虚拟窗口530的数量,例如,如图5a中所示意地,该阈值数量可以为4,即在一个视野区域520中最多可以显示4个虚拟窗口530。
之后,随着虚拟窗口530的增加,新增的每个虚拟窗口530可以一个接一个,以用户为中心从在用户510的前方视野区域520两侧延伸地环形排列。由此,随着用户510的视野移动,可以看到不同的视野区域520中的虚拟窗口530。例如,用户510转身的时候,就可以看到原本排列在用户510的后面的视野区域520。用户510可以通过手势进行虚拟窗口530控制操作,例如,向左或者向右的手势就可以滑动虚拟窗口530,向上或者向下的手势就可以关闭虚拟窗口530。进一步地,用户510也可以对虚拟窗口530内的内容进行点击打开,以及进行复制、剪切等操作。
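The 360-degree ring layout described above can be sketched numerically. Here a 120-degree field of view split into four window slots is assumed from the example, and new windows are placed alternately to the right and left of center so the ring grows outward from the forward view; the alternation policy is an assumption, the source only says new windows extend from both sides of the forward region.

```python
def ring_layout(n_windows, fov_deg=120.0, per_view=4):
    """Yaw angle (degrees) of each virtual window on the ring around the user.
    Window 0 sits straight ahead (0 deg); later windows alternate right/left."""
    slot = fov_deg / per_view            # angular width of one window slot
    angles = []
    for i in range(n_windows):
        k = (i + 1) // 2                 # 0, 1, 1, 2, 2, ...
        sign = 1 if i % 2 else -1        # alternate right / left of center
        angles.append((sign * k * slot) % 360.0)
    return angles

def visible(angles, gaze_deg=0.0, fov_deg=120.0):
    """Windows whose center lies inside the user's current field of view."""
    out = []
    for a in angles:
        d = (a - gaze_deg) % 360.0
        d = min(d, 360.0 - d)            # shortest angular distance to gaze
        if d <= fov_deg / 2:
            out.append(a)
    return out
```

As the user turns (the gaze angle changes), `visible` selects the subset of virtual windows currently in view, which is the behavior the head-motion tracking drives.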
作为一个示例，如果用户510在PC上分别打开Word、PowerPoint、微博和抖音4个应用，并通过虚拟现实设备300向用户510自己显示这些应用，那么，虚拟现实设备300从PC接收到编码的应用窗口的数据后，每个解码后的应用的应用窗口被分别加入到一个虚拟窗口中，这些虚拟窗口通过视频渲染模块340合成在一个图像显示帧后向用户510显示，例如，在图5a中，虚拟窗口530a可以显示Word文档，虚拟窗口530b可以显示PowerPoint文档，虚拟窗口530c可以显示微博应用的内容，虚拟窗口530n可以显示抖音应用的内容。
根据本申请的实施方式,可以实现在虚拟现实的三维空间的虚拟环境中的不同视野区域上分别显示多个应用的应用窗口,不仅各个窗口不会发生遮挡,而且在视野区域上显示的画面更大更清晰,增强了用户的视觉体验。
以下结合附图5b描述虚拟现实设备300的显示屏呈现虚拟窗口的另一种可能的示意性的示例。其中,图5b所示的显示场景也是对图像处理方法400的可能的实施方式的举例,对在图4和图5a中已描述的内容,以下不再赘述。
图5b所示的显示场景是针对显示的虚拟窗口530超过预定的阈值数量的情况下的另一种方案。简而言之,当打开的虚拟窗口530越来越多的时候,可以根据应用的类别进行聚合,比如打开的所有Word文档的应用窗口,聚合在一个虚拟窗口内;打开的所有PowerPoint文档的应用窗口,聚合在另外一个虚拟窗口内。
如图5b所示,每个应用的多个应用窗口占用一个虚拟窗口530,例如,虚拟窗口530a可以显示第一个打开的Word文档,虚拟窗口530b可以显示第一个打开的PowerPoint文档,虚拟窗口530c可以显示微博应用的第一个打开的内容,虚拟窗口530n可以显示抖音应用的第一个打开的内容。在每个虚拟窗口530中,每个应用的随后打开的其他应用窗口以缩略图的形式依次显示在每个虚拟窗口530内,例如,在虚拟窗口530a-530n中分别缩小排列在窗口的底部的多个缩略图531a、多个缩略图532b、多个缩略图533c和多个缩略图534n,其中缩略图的位置还可以在虚拟窗口530的顶部,或者两边,这里不作具体限定。其中,每个缩略图也可以是加入了应用窗口的虚拟窗口,而且每个缩略图是相互独立的,互不干扰。
用户510可以通过手势对每个虚拟窗口530和其中的缩略图进行控制操作,例如,当用户510的手向左或者向右滑动时,用户前方的视野区域520中的虚拟窗口530也随之左右滑动。此外,点击每个虚拟窗口530中的缩略图也能进行虚拟窗口530的显示画面的切换。例如,在虚拟窗口530b内,假如当前虚拟窗口530b显示的画面是最左侧的缩略图532b中的应用窗口,当点击最右侧的缩略图532b时,虚拟窗口530b的显示画面切换到最右侧的缩略图532b的应用窗口。
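The switching behavior described above, where one application window is shown full-size and its same-type siblings appear as a thumbnail dock (Fig. 5b), can be sketched with a small state object. The dock position (bottom, top, or side) is deliberately not modeled, matching the text's statement that it is not limited.

```python
class VirtualWindow:
    """One virtual window holding several application windows of one type.
    The active window is displayed full-size; the rest show as thumbnails."""

    def __init__(self, app_windows):
        self.windows = list(app_windows)
        self.active = 0                      # index of the full-size window

    @property
    def thumbnails(self):
        # every window except the active one is rendered as a thumbnail
        return [w for i, w in enumerate(self.windows) if i != self.active]

    def click_thumbnail(self, window):
        # clicking a thumbnail swaps it in as the displayed content
        self.active = self.windows.index(window)
        return self.windows[self.active]
```

For example, clicking the rightmost thumbnail of a PowerPoint virtual window makes that document the displayed content, and the previously displayed one drops back into the dock.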
可选地或附加地，在另一些情况下，例如，用户510打开的每个应用的应用窗口已经聚合，但是仍环形平铺地布满了用户510的360°的视野范围，如果此时用户510需要平铺同一个应用的2个或者多个窗口，例如，对比显示这些应用窗口中的内容，用户510可以通过手势操作，拖动该应用的虚拟窗口530中的任意缩略图到任意其他虚拟窗口中，例如，从虚拟窗口530b中拖动一个缩略图532b到虚拟窗口530c中，则这时该缩略图532b在该虚拟窗口530c显示，同时之前该虚拟窗口530c显示的其他应用窗口自动关闭。进一步地，用户510也可以对虚拟窗口530内的内容进行点击打开，以及进行复制、剪切等操作。
图5c示出了虚拟窗口中缩略图的一种可能示例的示意图。在图5c中,缩略图532b中可以显示应用窗口的头文件信息541,头文件信息541可以包括应用窗口的头文件中的至少一部分元数据,例如,对于Word、PowerPoint等文件,头文件信息541中可以显示文件名称、文件的修改时间等信息。缩略图532b中还可以显示对应用窗口的内容进行简洁化处理后的内容信息,其中根据应用窗口中包含的具体内容,简洁化处理后的内容信息大体可以包括文字信息542和缩小后的图片信息543。
作为一个示例,假设在图5c中放大显示的缩略图532b中的应用窗口的内容是以文字内容为主,例如,应用窗口是用户510打开的一个Word文件,那么缩略图532b可以显示基于Word文件的当前页面内容的简洁化处理后的文字信息542,文字信息542可以包括该Word文件的当前页面的部分文字内容,在一种可能的情况下,该Word文件的当前页面是文件的首页,那么文字信息542可以包括,例如,该Word文件首页的文件标题、该Word文件首页的前半页的全部文字内容,和/或该Word文件首页中相隔一定行采集的文字内容等等。
作为另一个示例,假设在图5c中放大显示的缩略图532b中的应用窗口的内容是以图像内容为主,例如,应用窗口是用户510打开的一个抖音视频页面,那么缩略图532b可以显示基于该视频页面的内容进行简洁化处理后的图片信息543,图片信息543可以包括,例如,视频页面中一个或多个具有视频封面的视频的封面图像,或者该视频页面的主区域的图像,视频页面的主区域可以理解为包括视频的播放窗口和视频标题的区域,该主区域的图像可以包含视频的播放窗口中需要播放的视频的封面图像和该视频的标题等。
作为另一个示例,假设在图5c中放大显示的缩略图532b中的应用窗口的内容是以文字和图像内容共同为主,例如,应用窗口是用户510打开的一条具有文字和配图的微博,那么缩略图532b可以显示基于微博的文字内容和配图进行简洁化处理后的文字信息542和图片信息543。文字信息542可以包括,例如,该微博的文字内容中的第一句或第一段文字、该微博的文字内容的前半部分的全部文字内容,和/或该微博的文字内容中相隔一定行采集的文字内容等等。图片信息543可以包括该微博的配图中一个或多个配图的图像等。
在图5b和5c所示的缩略图通过包含对应用窗口的内容进行简洁化处理后的信息,使用户可以方便地识别每个缩略图中的显示内容,提高用户操作的便捷性。
以下结合附图5d描述虚拟现实设备300的显示屏呈现虚拟窗口的另一种可能的示意性的示例。其中,图5d所示的显示场景也是对图4的图像处理方法400和后续图6-7中所示的图像处理方法的可能的实施方式的举例,对在图4和图5a-5c中已描述的内容,以下不再赘述。
图5d所示的显示场景是针对显示的虚拟窗口530超过预定的阈值数量的情况下的另一种方案。如图5d所示,当打开的虚拟窗口530越来越多的时候,为了减少用户510转身操作的麻烦,可以在用户510的视野前方设置一个全局的菜单550,例如,缩略图转盘。虽然图5d中示出的示例是在图5b所示的示例的基础上的改进,但是本领域技术人员可以理解,图5d所示的菜单550也可以在图5a所示的示例中实施,这里不作限制。
如图5d所示，菜单550可以包括环形平铺地布置在用户510的360°的视野范围上的每个虚拟窗口530的对应的缩略图，这些缩略图按照各自虚拟窗口的位置布置在菜单550中。例如，在图5d的菜单550中，与虚拟窗口530a对应的缩略图是缩略图551，与虚拟窗口530b对应的缩略图是缩略图552，与虚拟窗口530c对应的缩略图是缩略图553，以及与虚拟窗口530d对应的缩略图是缩略图554。其他缩略图对应于图5d中未示出的其他虚拟窗口530。
在一些其他实施方式中,图像显示设备230可以根据用户510的手势操作或者头部运动的指示,来生成菜单550。作为一个示例,用户510可以通过使用多个手指同时向下滑动的手势操作使得图像显示设备230生成菜单550,或者,用户510可以通过低头的动作使得图像显示设备230生成菜单550。作为另一个示例,用户510的手势操作或者头部运动不仅可以用于指示图像显示设备230生成菜单550,还可以用于控制图像显示设备230显示菜单550。例如,在待显示的虚拟窗口530大于预先确定的阈值数量,图像显示设备230生成菜单550的情况下,用户510可以通过低头俯视的动作控制图像显示设备230显示菜单550。还例如,在图像显示设备230根据用户510的头部运动生成菜单550的情况下,用户510可以通过低头俯视的动作控制图像显示设备230生成菜单550,并在菜单550生成后,直接显示菜单550。
示例性地，如图5d所示，用户510在正常平视或者仰视的时候，菜单550不显示，当用户510的视线俯视的时候，菜单550在用户510的面前显示出来。用户510可以通过手势操作，例如诸如点选或者拨动的特定选取动作，转动菜单550以进行当前视野区域520的切换，用户510前方的视野区域520同步显示菜单550转动后的视野区域，以及视野区域内的虚拟窗口530。用户510还可以通过操作菜单550中的缩略图，例如缩略图551-554，同步对用户510前方的视野区域520显示的窗口进行相应的调节。
根据本申请的实施方式,图像显示设备通过VR虚拟显示技术,可以把多个应用窗口,都投放到用户的视野中,通过采用虚拟窗口的360度环形平铺展开,缩略图和虚拟窗口相结合,以及布置全局菜单等技术手段,实现高效的人机交互界面,从而实现多任务的扁平化操作,提高效率。
图6示出了根据本申请的一些实施方式的图像处理方法600的流程示意图。在一些实施方式中,图像处理方法600例如在图像显示设备上实施,例如,如图2所示的图像显示设备230,以及图像处理方法600还可以在作为图像显示设备230的一个示意性示例的图3的虚拟现实设备300上实施。在一些实施方式中,方法600的部分或全部可以在如图2中所示的显示控制器231和/或显示屏幕232上实施。在另一些实施方式中,如图3所示的虚拟现实设备300的不同组件可以实施方法600的不同部分或全部。
对于上述方法和示例场景的实施方式中未描述的内容,可以参见下述方法实施方式;同样地,对于下述方法实施方式中未描述的内容,可参见上述方法和示例场景的实施方式。例如,图6所示的图像处理方法600是对图4所示的图像处理方法400以及图5d所示显示场景的进一步说明,在前述各实施方式中已描述的内容,以下将简略描述或不再赘述。
为了便于理解,以下结合图5d中所示的显示场景说明图6所示的方法600。
如图6所示,图像显示设备230的显示控制器231在块601:判断待显示的多个虚拟窗口的数量是否超出与用户的视野范围相关的阈值。例如,对于用户的视野范围相关的阈值,可以假设用户510的可视的视角大概为120度,则用户510的视野区域与120度的视角范围相对应,在这样的条件下,显示的虚拟窗口530的阈值数量可以是在视野区域520中最多可容纳的虚拟窗口530的数量。
如果待显示的多个虚拟窗口的数量超过与用户的视野范围相关的阈值,则在块602:通过对多个虚拟窗口中对应的虚拟窗口进行简化图像处理,得到多个缩略图中的每个缩略图的图像内容。
在一些实施方式中,对于作为缩略图显示的虚拟窗口,显示控制器231提取中要加入该虚拟窗口的应用窗口的头文件信息,将至少部分头文件信息放置在虚拟窗口中,之后,显示控制器231对缓存的应用窗口的像素信息进行简洁化处理,例如,去除应用窗口的下半部分的信息,采集标题信息,隔行采集信息,或者提取像素信息进行图片识别等。显示控制器231将简洁化处理后的信息放入虚拟窗口中,这些信息供用户识别该虚拟窗口。
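The simplification step in block 602 described above can be sketched as follows. The 50% crop of the lower half and the 2:1 interlaced sampling ratio are assumptions chosen to match the examples in the text (removing the lower part of the window, interlaced row sampling, keeping header information such as the title for identification).

```python
def simplify_for_thumbnail(rows, title=""):
    """Produce thumbnail content from a window's pixel rows:
    drop the lower half of the window, keep every other remaining
    row (interlaced sampling), and prefix the header title so the
    user can still identify the window from its thumbnail."""
    upper = rows[: max(1, len(rows) // 2)]   # remove the bottom half
    sampled = upper[::2]                     # interlaced: every other row
    return {"title": title, "rows": sampled}
```

In practice the kept rows would be scaled pixel data; here plain values stand in for rows to keep the sketch self-contained.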
在块603:生成包含多个虚拟窗口的多个缩略图的菜单,其中多个虚拟窗口与多个缩略图一一对应。如上述实施方式中所述,由于待显示的虚拟窗口的数量超过了阈值,显示控制器231可以针对待显示的虚拟窗口生成对应的缩略图,并且生成包含这些缩略图的菜单,菜单可以是如前述图5d中所示的菜单550,在此不再赘述。
在块604:控制显示屏幕显示菜单。显示控制器231根据用户的手势操作或头部的动作,控制显示屏幕232显示该包括多个缩略图的菜单。
在块605:根据用户的指令,从菜单中的多个缩略图中选择一个缩略图。例如,参考图5d,用户可以通过手势操作的点选,从菜单550中的多个缩略图中选择一个缩略图,例如选择缩略图552。
随后,在块606:显示控制器231判断在与所选的缩略图相对应的虚拟窗口是否包括同一类型的多个应用窗口。作为一个示例,显示控制器231判断缩略图552对应的虚拟窗口530b中是否包括同一类型的多个应用窗口,例如,如图5d所示,虚拟窗口530b包括多个相同类型的应用窗口,那么在块607:显示多个应用窗口中的一个应用窗口,以及多个应用中的其他应用窗口的缩略图。例如,显示虚拟窗口530b中的第一个或者当前正在打开的应用窗口,将其他应用窗口显示为缩略图。
如果为否则在块608:显示多个虚拟窗口中与所选的缩略图相对应的虚拟窗口。例如,虚拟窗口530b是图5a中所示,则直接显示与缩略图552对应的虚拟窗口530b。
根据本申请的实施方式,图像显示设备通过VR虚拟显示技术,可以把多个应用窗口,都投放到用户的视野中,通过采用缩略图和虚拟窗口相结合,以及布置全局菜单等技术手段,实现高效的人机交互界面,从而实现多任务的扁平化操作,提高效率。
以下参考图7描述图像显示设备230的另一种图像处理方法。
图7示出了根据本申请的另一实施方式的图像处理方法700的流程示意图。在一些实施方式中,图像处理方法700例如在图像显示设备上实施,例如,如图2所示的图像显示设备230,以及图像处理方法700还可以在作为图像显示设备230的一个示意性示例的图3的虚拟现实设备300上实施。在一些实施方式中,方法700的部分或全部可以在如图2中所示的显示控制器231和/或显示屏幕232上实施。在另一些实施方式中,如图3所示的虚拟现实设备300的不同组件可以实施方法700的不同部分或全部。
对于上述方法和示例场景的实施方式中未描述的内容,可以参见下述方法实施方式;同样地,对于方法实施方式中未描述的内容,可参见上述方法和示例场景的实施方式。在前述各实施方式中描述的内容,以下将简略描述或不再赘述。
如图7所示，块701-703以及块706-709中描述的内容与块601-603以及块605-608类似，在此不再赘述。方法700的不同之处在于，显示控制器231可以在块704：判断用户是否俯视图像显示设备230的显示屏幕232，来确定是否显示菜单。如果用户低头俯视，则在块705：显示屏幕232显示菜单。例如，如图5d所示，用户510在正常平视或者仰视的时候，菜单550不显示，当用户510的视线俯视的时候，菜单550在用户510的面前显示出来。
本申请的各方法实施方式均可以以软件、硬件、固件等方式实现。
可将程序代码应用于输入指令,以执行本文描述的各功能并生成输出信息。可以按已知方式将输出信息应用于一个或多个输出设备。为了本申请的目的,处理系统包括具有诸如例如数字信号处理器(DSP)、微控制器、专用集成电路(ASIC)或微处理器之类的处理器的任何系统。
程序代码可以用高级程序化语言或面向对象的编程语言来实现,以便与处理系统通信。在需要时,也可用汇编语言或机器语言来实现程序代码。事实上,本文中描述的机制不限于任何特定编程语言的范围。在任一情形下,该语言可以是编译语言或解释语言。
至少一个实施例的一个或多个方面可以由存储在计算机可读存储介质上的表示性指令来实现,指令表示处理器中的各种逻辑,指令在被机器读取时使得该机器制作用于执行本文所述的技术的逻辑。被称为“IP核”的这些表示可以被存储在有形的计算机可读存储介质上,并被提供给多个客户或生产设施以加载到实际制造该逻辑或处理器的制造机器中。
在一些情况下,指令转换器可用来将指令从源指令集转换至目标指令集。例如,指令转换器可以变换(例如使用静态二进制变换、包括动态编译的动态二进制变换)、变形、仿真或以其它方式将指令转换成将由核来处理的一个或多个其它指令。指令转换器可以用软件、硬件、固件、或其组合实现。指令转换器可以在处理器上、在处理器外、或者部分在处理器上且部分在处理器外。
在一些情况下，所公开的实施例可以以硬件、固件、软件或其任何组合来实现。所公开的实施例还可以被实现为由一个或多个暂时或非暂时性机器可读（例如，计算机可读）存储介质承载或存储在其上的指令，其可以由一个或多个处理器读取和执行。例如，指令可以通过网络或通过其他计算机可读介质的途径分发。因此，机器可读介质可以包括用于以机器（例如，计算机）可读的形式存储或传输信息的任何机制，包括但不限于：软盘、光盘、只读光盘存储器（CD-ROM）、磁光盘、只读存储器（ROM）、随机存取存储器（RAM）、可擦除可编程只读存储器（EPROM）、电可擦除可编程只读存储器（EEPROM）、磁卡或光卡、闪存、或用于通过电、光、声或其他形式的传播信号（例如，载波、红外信号、数字信号等）通过因特网传输信息的有形的机器可读存储器。因此，机器可读介质包括适合于以机器（例如，计算机）可读的形式存储或传输电子指令或信息的任何类型的机器可读介质。
在附图中,以特定布置和/或顺序示出一些结构或方法特征。然而,应该理解,可以不需要这样的特定布置和/或排序。在一些实施例中,这些特征可以以不同于说明性附图中所示的方式和/或顺序来布置。另外,在特定图中包含结构或方法特征并不意味着暗示在所有实施例中都需要这样的特征,并且在一些实施例中,可以不包括这些特征或者可以与其他特征组合。
应当理解的是,虽然在这里可能使用了术语“第一”、“第二”等等来描述各个单元或是数据,但是这些单元或数据不应当受这些术语限制。使用这些术语仅仅是为了将一个特征与另一个特征进行区分。举例来说,在不背离示例性实施例的范围的情况下,第一特征可以被称为第二特征,并且类似地第二特征可以被称为第一特征。
以上所述,仅为本申请实施例的具体实施方式,但本申请实施例的保护范围并不局限于此,任何在本申请实施例揭露的技术范围内的变化或替换,都应涵盖在本申请实施例的保护范围之内。因此,本申请实施例的保护范围应以所述权利要求的保护范围为准。

Claims (26)

  1. 一种用于图像源设备的图像处理方法,其特征在于,包括:
    根据多个应用,绘制多个应用窗口,其中所述多个应用窗口中的每个应用窗口与所述多个应用一一对应;
    编码所述多个应用窗口并将所述经编码的多个应用窗口发送到图像显示设备,其中所述经编码的多个应用窗口未被合成在一个图像显示帧内。
  2. 如权利要求1所述的方法,其特征在于,所述图像源设备包括用户设备和服务器中的至少一个,并且所述图像显示设备包括虚拟现实显示设备。
  3. 一种用于图像显示设备的图像处理方法,其特征在于,包括:
    从图像源设备接收经编码的多个应用窗口并解码,以获得经解码的所述多个应用窗口,其中所述多个应用窗口与多个应用一一对应;
    创建至少一个虚拟窗口;
    根据与所述多个应用窗口中的每个应用窗口对应的应用类别,将所述每个应用窗口加入到所述至少一个虚拟窗口中的与所述应用类型相对应的虚拟窗口内;和
    将所述至少一个虚拟窗口合成在一个图像显示帧内并显示。
  4. 如权利要求3所述的方法,其特征在于,还包括:
    响应于用户通过所述图像显示设备的显示屏幕点击所述至少一个虚拟窗口中的第一虚拟窗口中的第一应用窗口,控制所述显示屏幕在所述第一虚拟窗口中显示被点击的所述第一应用窗口的内容,并且在所述第一虚拟窗口中同时显示所述第一虚拟窗口所包括的其他应用窗口的缩略图。
  5. 如权利要求3-4中任一项所述的方法,其特征在于,还包括:
    响应于用户通过所述图像显示设备的显示屏幕将所述第一虚拟窗口中的所述第一应用窗口拖动到所述至少一个虚拟窗口中的第二虚拟窗口,控制所述显示屏幕在所述第二虚拟窗口中显示所述第一应用窗口的内容。
  6. 如权利要求3-5中任一项所述的方法,其特征在于,还包括:
    响应于用户在所述图像显示设备的显示屏幕上左右滑动的手势,显示所述至少一个虚拟窗口相应地左右滑动。
  7. 如权利要求3-6中任一项所述的方法，其特征在于，所述图像显示设备是虚拟现实显示设备。
  8. 一种用于图像显示设备的图像处理方法,其特征在于,包括:
    在待显示的多个虚拟窗口的数量超出阈值和用户俯视所述图像显示设备的显示屏幕中的至少一种情况下,生成包含所述多个虚拟窗口的多个缩略图的菜单,其中所述多个虚拟窗口与所述多个缩略图一一对应,并且,其中所述多个虚拟窗口中的每个虚拟窗口包括同一应用类型的至少一个应用窗口,以及所述至少一个应用窗口与至少一个应用一一对应;和
    控制所述显示屏幕显示所述菜单。
  9. 如权利要求8所述的图像处理方法,其特征在于,所述菜单包括所述多个缩略图组成的转盘。
  10. 如权利要求8-9中任一项所述的图像处理方法,其特征在于,所述阈值与所述用户的视野范围相关。
  11. 如权利要求8-10中任一项所述的图像处理方法,其特征在于,所述多个缩略图中的每个缩略图包括所述多个虚拟窗口中与所述每个缩略图相对应的虚拟窗口的标识信息。
  12. 如权利要求8-11中任一项所述的图像处理方法,其特征在于,所述多个缩略图中的每个缩略图包括通过对所述多个虚拟窗口中对应的虚拟窗口进行简化图像处理后获得的图像内容,其中所述简化图像处理包括对所述对应的虚拟窗口进行隔行扫描和去除所述对应的虚拟窗口的部分内容中的至少一种。
  13. 如权利要求8-12中任一项所述的图像处理方法,其特征在于,还包括:
    根据所述用户的指令,从所述菜单中的多个缩略图中选择一个缩略图并显示所述多个虚拟窗口中与所选的缩略图相对应的虚拟窗口,
    其中,所述指令包括所述用户对所述菜单的滑动和对所选缩略图的点击中的至少一种。
  14. 如权利要求13所述的图像处理方法,其特征在于,在与所选的缩略图相对应的虚拟窗口包括所述同一类型的多个应用窗口的情况下,所述显示所述相对应的虚拟窗口还包括:
    在所述相对应的虚拟窗口中显示所述同一类型的多个应用窗口中的一个应用窗口,以及所述同一类型的多个应用窗口中的其他应用窗口的缩略图。
  15. 如权利要求8-14中任一项所述的图像处理方法,其特征在于,所述图像显示设备包括虚拟现实显示设备。
  16. 一种图像显示设备,其特征在于,包括:显示控制器和显示屏幕,
    所述显示控制器,用于从图像源设备接收经编码的多个应用窗口并解码,以获得经解码的所述多个应用窗口,其中所述多个应用窗口与多个应用一一对应;创建至少一个虚拟窗口;根据与所述多个应用窗口中的每个应用窗口对应的应用类别,将所述每个应用窗口加入到所述至少一个虚拟窗口中的与所述应用类型相对应的虚拟窗口内;和将所述至少一个虚拟窗口合成在一个图像显示帧;和
    所述显示屏幕用于显示所述图像显示帧。
  17. 一种图像显示设备,其特征在于,包括:显示控制器和显示屏幕,
    所述显示控制器,用于在待显示的多个虚拟窗口的数量超出阈值和用户俯视所述显示屏幕中的至少一种情况下,生成包含所述多个虚拟窗口的多个缩略图的菜单,其中所述多个虚拟窗口与所述多个缩略图一一对应,并且,其中所述多个虚拟窗口中的每个虚拟窗口包括同一应用类型的至少一个应用窗口,以及所述至少一个应用窗口与至少一个应用一一对应;和
    控制所述显示屏幕显示所述菜单。
  18. 如权利要求17所述的图像显示设备,其特征在于,所述菜单包括所述多个缩略图组成的转盘。
  19. 如权利要求17-18中任一项所述的图像显示设备,其特征在于,所述阈值与所述用户的视野范围相关。
  20. 如权利要求17-19中任一项所述的图像显示设备,其特征在于,所述多个缩略图中的每个缩略图包括所述多个虚拟窗口中与所述每个缩略图相对应的虚拟窗口的标识信息。
  21. 如权利要求17-20中任一项所述的图像显示设备,其特征在于,所述多个缩略图中的每个缩略图包括通过对所述多个虚拟窗口中对应的虚拟窗口进行简化图像处理后获得的图像内容,其中所述简化图像处理包括对所述对应的虚拟窗口进行隔行扫描和去除所述对应的虚拟窗口的部分内容中的至少一种。
  22. 如权利要求17-21中任一项所述的图像显示设备,其特征在于,还包括:
    所述显示控制器还用于根据所述用户的指令,从所述菜单中的多个缩略图中选择一个缩略图并显示所述多个虚拟窗口中与所选的缩略图相对应的虚拟窗口,
    其中,所述指令包括所述用户对所述菜单的滑动和对所选缩略图的点击中的至少一种。
  23. 如权利要求22所述的图像显示设备,其特征在于,在与所选的缩略图相对应的虚拟窗口包括所述同一类型的多个应用窗口的情况下,所述显示所述相对应的虚拟窗口还包括:
    在所述相对应的虚拟窗口中显示所述同一类型的多个应用窗口中的一个应用窗口,以及所述同一类型的多个应用窗口中的其他应用窗口的缩略图。
  24. 如权利要求17-23中任一项所述的图像显示设备,其特征在于,所述图像显示设备包括虚拟现实显示设备。
  25. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质上存储有指令,该指令在计算机上执行时使所述计算机执行根据权利要求1-15中任一项所述的方法。
  26. 一种电子设备,其特征在于,包括:
    存储器,用于存储由所述电子设备的一个或多个处理器执行的指令,以及
    处理器,用于执行所述存储器中的所述指令,以执行根据权利要求1-15中任一项所述的方法。
PCT/CN2021/080281 2020-03-12 2021-03-11 图像处理方法和图像显示设备、存储介质和电子设备 WO2021180183A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010169749.4A CN113391734A (zh) 2020-03-12 2020-03-12 图像处理方法和图像显示设备、存储介质和电子设备
CN202010169749.4 2020-03-12

Publications (1)

Publication Number Publication Date
WO2021180183A1 true WO2021180183A1 (zh) 2021-09-16

Family

ID=77615740

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/080281 WO2021180183A1 (zh) 2020-03-12 2021-03-11 图像处理方法和图像显示设备、存储介质和电子设备

Country Status (2)

Country Link
CN (1) CN113391734A (zh)
WO (1) WO2021180183A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116107479A (zh) * 2023-03-02 2023-05-12 优视科技有限公司 图片显示方法、电子设备及计算机存储介质
CN116212361A (zh) * 2021-12-06 2023-06-06 广州视享科技有限公司 虚拟对象显示方法、装置和头戴式显示装置
WO2023236515A1 (zh) * 2022-06-10 2023-12-14 北京凌宇智控科技有限公司 一种应用程序显示方法、装置及计算机可读存储介质

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
CN114281221A (zh) * 2021-12-27 2022-04-05 广州小鹏汽车科技有限公司 车载显示屏的控制方法及其装置、车辆和存储介质
CN115617166A (zh) * 2022-09-29 2023-01-17 歌尔科技有限公司 交互控制方法、装置及电子设备
CN116301482B (zh) * 2023-05-23 2023-09-19 杭州灵伴科技有限公司 3d空间的窗口显示方法和头戴式显示设备

Citations (5)

Publication number Priority date Publication date Assignee Title
CN105138231A (zh) * 2015-10-20 2015-12-09 北京奇虎科技有限公司 应用程序图标的呈现方法及装置
CN105975146A (zh) * 2016-04-26 2016-09-28 乐视控股(北京)有限公司 一种内容分布显示方法及装置
CN108924538A (zh) * 2018-05-30 2018-11-30 太若科技(北京)有限公司 Ar设备的屏幕拓展方法
CN110381195A (zh) * 2019-06-05 2019-10-25 华为技术有限公司 一种投屏显示方法及电子设备
US20190369847A1 (en) * 2018-06-01 2019-12-05 Samsung Electronics Co., Ltd. Image display apparatus and operating method of the same

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
DE102013021834B4 (de) * 2013-12-21 2021-05-27 Audi Ag Vorrichtung und Verfahren zum Navigieren innerhalb eines Menüs zur Fahrzeugsteuerung sowie Auswählen eines Menüeintrags aus dem Menü
EP3889748A1 (en) * 2015-06-07 2021-10-06 Apple Inc. Device, method, and graphical user interface for manipulating application windows
CN110347305A (zh) * 2019-05-30 2019-10-18 华为技术有限公司 一种vr多屏显示方法及电子设备

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN105138231A (zh) * 2015-10-20 2015-12-09 北京奇虎科技有限公司 应用程序图标的呈现方法及装置
CN105975146A (zh) * 2016-04-26 2016-09-28 乐视控股(北京)有限公司 一种内容分布显示方法及装置
CN108924538A (zh) * 2018-05-30 2018-11-30 太若科技(北京)有限公司 Ar设备的屏幕拓展方法
US20190369847A1 (en) * 2018-06-01 2019-12-05 Samsung Electronics Co., Ltd. Image display apparatus and operating method of the same
CN110381195A (zh) * 2019-06-05 2019-10-25 华为技术有限公司 一种投屏显示方法及电子设备

Cited By (5)

Publication number Priority date Publication date Assignee Title
CN116212361A (zh) * 2021-12-06 2023-06-06 广州视享科技有限公司 虚拟对象显示方法、装置和头戴式显示装置
CN116212361B (zh) * 2021-12-06 2024-04-16 广州视享科技有限公司 虚拟对象显示方法、装置和头戴式显示装置
WO2023236515A1 (zh) * 2022-06-10 2023-12-14 北京凌宇智控科技有限公司 一种应用程序显示方法、装置及计算机可读存储介质
CN116107479A (zh) * 2023-03-02 2023-05-12 优视科技有限公司 图片显示方法、电子设备及计算机存储介质
CN116107479B (zh) * 2023-03-02 2024-02-13 优视科技有限公司 图片显示方法、电子设备及计算机存储介质

Also Published As

Publication number Publication date
CN113391734A (zh) 2021-09-14

Similar Documents

Publication Publication Date Title
WO2021180183A1 (zh) 图像处理方法和图像显示设备、存储介质和电子设备
US11321928B2 (en) Methods and apparatus for atlas management of augmented reality content
CN107251567B (zh) 用于生成视频流的注释的方法和装置
CN112558825A (zh) 一种信息处理方法及电子设备
US20140320592A1 (en) Virtual Video Camera
CN113661471A (zh) 混合渲染
JP2023503679A (ja) マルチウィンドウ表示方法、電子デバイス及びシステム
CN112527174B (zh) 一种信息处理方法及电子设备
US20140168239A1 (en) Methods and systems for overriding graphics commands
CN112527222A (zh) 一种信息处理方法及电子设备
JP2021531561A (ja) 3d移行
US20220229535A1 (en) Systems and Methods for Manipulating Views and Shared Objects in XR Space
CN116136784A (zh) 数据处理方法、装置、存储介质及程序产品
US20230336841A1 (en) System and method for streaming in metaverse space
CN114968152B (zh) 减少virtio-gpu额外性能损耗的方法
WO2022252924A1 (zh) 图像传输与显示方法、相关设备及系统
CN114570020A (zh) 数据处理方法以及系统
US20140168240A1 (en) Methods and systems for overriding graphics commands
US11961178B2 (en) Reduction of the effects of latency for extended reality experiences by split rendering of imagery types
US20140173028A1 (en) Methods and systems for overriding graphics commands
US11468611B1 (en) Method and device for supplementing a virtual environment
CN116982069A (zh) 用于灵活图形增强和执行的方法和系统
US10678553B2 (en) Pro-active GPU hardware bootup
CN103842982B (zh) 用于本地生成的手势和过渡图形与终端控制服务的交互的方法和系统
US20230136064A1 (en) Priority-based graphics rendering for multi-part systems

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21768199

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21768199

Country of ref document: EP

Kind code of ref document: A1