CN113438534B - Image display method, device and system

Image display method, device and system

Info

Publication number: CN113438534B
Application number: CN202110653992.8A
Authority: CN (China)
Prior art keywords: window, display window, display, virtual, source
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN113438534A (en)
Inventor: 曹季
Current Assignee: Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee: Hangzhou Hikvision Digital Technology Co Ltd
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202110653992.8A
Publication of CN113438534A
Application granted
Publication of CN113438534B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312: Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/4318: Generation of visual interfaces by altering the content in the rendering process, e.g. blanking, blurring or masking an image region

Abstract

The application provides an image display method, device and system, comprising: selecting a source display window to be operated from all display windows of the display equipment; generating a first virtual window based on the information of the source display window, wherein the size of the first virtual window is the same as that of the source display window, and the position of the first virtual window is the same as that of the source display window; acquiring first operation data aiming at a source display window, and operating the first virtual window based on the first operation data to obtain a second virtual window after the operation is finished; generating a target display window based on the information of the second virtual window, wherein the size of the target display window is the same as that of the second virtual window, and the position of the target display window is the same as that of the second virtual window; and deleting the second virtual window and the source display window, and transferring the video image sent by the host corresponding to the source display window to the target display window. According to the technical scheme, the source display window can display the latest video image in real time.

Description

Image display method, device and system
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image display method, apparatus, and system.
Background
The agent management system has the advantages of convenient multi-service management, seamless switching, cooperative work and the like, and is widely applied in various fields. The agent management system can solve problems such as information security, data security and isolated agent work. Through the processing and sharing of visualized data, when a user faces a large amount of data and needs to make a complex decision, the agent management system can help the user conveniently push or acquire a signal from any position in the system and perform remote operation, which greatly simplifies the service processing flow and improves the service processing efficiency.
The agent management system comprises a display device and an operation device. The display device is used for displaying a plurality of display windows, and each display window can display a video image sent by a host. When the operation device moves a certain display window of the display device, the display window keeps showing the last frame of video image sent by the host. Because the last frame is retained throughout the moving process, the latest video image sent by the host cannot be displayed in real time, so the user experience is poor.
Disclosure of Invention
The application provides an image display method, wherein an agent management system comprises a display device, an output device, an operation device and a host, the display device is used for displaying at least one display window, each display window is used for displaying a video image sent by the host, the method is applied to the output device, and the method comprises the following steps:
selecting a source display window to be operated from all display windows of the display equipment; generating a first virtual window based on the information of the source display window, wherein the size of the first virtual window is the same as that of the source display window, and the position of the first virtual window is the same as that of the source display window;
acquiring first operation data for the source display window, and operating the first virtual window based on the first operation data to obtain a second virtual window after the operation is finished;
generating a target display window based on the information of the second virtual window, wherein the size of the target display window is the same as that of the second virtual window, and the position of the target display window is the same as that of the second virtual window; and deleting the second virtual window and the source display window, and transferring the video image sent by the host corresponding to the source display window to the target display window.
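To make this flow concrete, a minimal Python sketch is given below. It is an illustration only, not the claimed implementation; the class names (Window, OutputDevice), the fields and the operation-data layout are assumptions introduced here.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Window:
    x: int                          # top-left corner in the display coordinate system
    y: int
    width: int
    height: int
    host_id: Optional[int] = None   # host whose video stream the window shows (None for virtual windows)

class OutputDevice:
    """Illustrative output-device sketch of the described flow."""

    def __init__(self, windows: List[Window]):
        self.windows = windows   # display windows currently shown on the display device
        self.virtual = None      # the virtual window being moved or zoomed
        self.source = None       # the source display window selected for operation

    def begin_operation(self, source: Window) -> None:
        # Generate a first virtual window with the same size and position as the
        # source display window; the source keeps rendering live video frames.
        self.source = source
        self.virtual = Window(source.x, source.y, source.width, source.height)

    def apply_operation(self, op: dict) -> None:
        # Apply each piece of first operation data (move or zoom) to the virtual
        # window only; the source display window is not touched.
        if "moved_to" in op:
            self.virtual.x, self.virtual.y = op["moved_to"]
        if "scaled_to" in op:
            self.virtual.width, self.virtual.height = op["scaled_to"]

    def finish_operation(self) -> Window:
        # Generate the target display window from the second virtual window,
        # delete the virtual window and the source window, and migrate the
        # host's video stream to the target window.
        target = Window(self.virtual.x, self.virtual.y,
                        self.virtual.width, self.virtual.height,
                        host_id=self.source.host_id)
        self.windows.remove(self.source)
        self.windows.append(target)
        self.source = None
        self.virtual = None
        return target
```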
The application provides an image display method, which comprises the following steps:
the method comprises the steps of obtaining a source display window to be operated, and generating a first virtual window based on information of the source display window; wherein the size of the first virtual window is the same as the size of the source display window, and the position of the first virtual window is the same as the position of the source display window;
acquiring first operation data for the source display window, and operating the first virtual window based on the first operation data to obtain a second virtual window after the operation is finished;
after the second virtual window is obtained, generating a target display window based on information of the second virtual window, wherein the size of the target display window is the same as that of the second virtual window, and the position of the target display window is the same as that of the second virtual window; and deleting the second virtual window and the source display window, and transferring the image displayed by the source display window to the target display window for displaying.
The application provides an image display device, which is applied to the output device of an agent management system. The agent management system comprises a display device, an output device, an operation device and a host; the display device is used for displaying at least one display window, and each display window is used for displaying a video image sent by the host. The image display device comprises:
the selection module is used for selecting a source display window to be operated from all display windows of the display equipment;
a generating module, configured to generate a first virtual window based on information of the source display window, where a size of the first virtual window is the same as a size of the source display window, and a position of the first virtual window is the same as a position of the source display window;
the operation module is used for acquiring first operation data for the source display window, and operating the first virtual window based on the first operation data to obtain a second virtual window after the operation is finished;
the generating module is further configured to generate a target display window based on information of the second virtual window, where a size of the target display window is the same as a size of the second virtual window, and a position of the target display window is the same as a position of the second virtual window;
and the processing module is used for deleting the second virtual window and the source display window and transferring the video image sent by the host corresponding to the source display window to the target display window.
The application provides an agent management system. The agent management system comprises a display device, an output device, an operation device, an input device and a host, the display device is used for displaying at least one display window, and each display window is used for displaying a video image sent by the host, wherein:
the host acquires a video image and sends the video image to input equipment so that the input equipment forwards the video image to output equipment; the output device receives the video image and sends the video image to the display device so as to display the video image through a display window of the display device;
the operating equipment operates the host, and the output equipment acquires control data when the operating equipment operates the host and sends the control data to the input equipment; the input device forwards the control data to the host computer to control the host computer based on the control data;
the operation equipment operates the display window of the display equipment, and the output equipment acquires operation data when the display window is operated and operates the display window based on the operation data; wherein, the output device is specifically configured to, when operating the display window based on the operation data:
selecting a source display window to be operated from all display windows of the display equipment; generating a first virtual window based on the information of the source display window, wherein the size of the first virtual window is the same as that of the source display window, and the position of the first virtual window is the same as that of the source display window;
acquiring first operation data for the source display window, and operating the first virtual window based on the first operation data to obtain a second virtual window after the operation is finished;
generating a target display window based on the information of the second virtual window, wherein the size of the target display window is the same as that of the second virtual window, and the position of the target display window is the same as that of the second virtual window; and deleting the second virtual window and the source display window, and transferring the video image sent by the host corresponding to the source display window to the target display window.
According to the technical scheme, in the embodiment of the application, when the operation device operates a certain display window (namely a source display window) of the display device, the first virtual window is generated based on the source display window and is operated instead of operating the source display window, so that the source display window can display the latest video image in real time instead of displaying the last frame of video image before operation all the time, the user experience is improved, and the real-time display of the video image cannot be influenced in the window operation process.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments described in the present application, and other drawings can be obtained by those skilled in the art according to the drawings of the embodiments of the present application.
FIG. 1 is a schematic diagram of an agent management system according to an embodiment of the present application;
FIGS. 2A and 2B are schematic diagrams illustrating movement of a display window according to an embodiment of the present application;
FIG. 3A is a schematic flow chart diagram illustrating an image display method according to an embodiment of the present application;
FIGS. 3B-3E are schematic diagrams of display window movement in one embodiment of the present application;
FIG. 4 is a schematic flow chart diagram illustrating an image display method according to another embodiment of the present application;
FIG. 5 is a schematic flow chart diagram illustrating an image display method according to another embodiment of the present application;
FIG. 6 is a schematic configuration diagram of an image display device according to an embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein is meant to encompass any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various information, the information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. In addition, depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
Referring to fig. 1, a schematic diagram of an agent management system is shown, where the agent management system may include, but is not limited to, a management subsystem (e.g., a machine room), a switching subsystem, and an agent subsystem. The management subsystem is used for acquiring the video image and sending the video image to the seat subsystem through the exchange subsystem. The switching subsystem is used for sending the video image of the management subsystem to the seat subsystem so as to enable the seat subsystem to display the video image, and sending the control data of the seat subsystem to the management subsystem so as to control the management subsystem through the control data. The agent subsystem is used for acquiring control data (such as operation data of a keyboard and/or operation data of a mouse) and sending the control data to the management subsystem through the exchange subsystem.
The management subsystem may include, but is not limited to, a host and an input device. The host may be a device capable of being operated with a keyboard and/or a mouse, such as a personal computer, a video camera, a laptop, a video recorder, etc. The input device, which may also be referred to as an input node or an input box, may be a KVM (Keyboard, Video, and Mouse) type input device.
The host may capture a video image and send the video image to the input device. The input device is a bridge between the host and the output device, and is configured to forward the video image to the output device, for example, encode the video image and forward the code stream to the output device. The input device can also receive control data (such as operation data of a keyboard and/or operation data of a mouse) from the output device and forward the control data to the host computer so as to control the host computer through the control data and realize remote control of the host computer.
The switching subsystem may include, but is not limited to, a data network switch and a control network switch. The data network switch is used for transmitting the video images of the management subsystem; for example, the input device may send the video images to the data network switch, and the data network switch forwards the video images to the output device. The control network switch is configured to transmit control data of the agent subsystem; for example, the output device may send the control data to the control network switch, and the control network switch sends the control data to the input device.
The agent subsystems may be deployed in a distributed manner, that is, multiple agent subsystems may be deployed at the same time. In fig. 1, 3 agent subsystems are taken as an example, and the 3 agent subsystems are respectively denoted as agent subsystem 1, agent subsystem 2, and agent subsystem 3. Of course, the number of agent subsystems may be larger, or the number of agent subsystems may also be 1; the number of agent subsystems is not limited.
For each agent subsystem, the agent subsystem may include, but is not limited to, a display device, an output device, and an operating device. The display device may be a display or a display screen or the like, i.e. a device for displaying video images. The output device, which may also be referred to as an output node or output box, may be a KVM-type output device. The operating device may include, but is not limited to, a keyboard and/or a mouse, among other devices.
The display device may be configured to display at least one display window capable of displaying video images transmitted by the host. The operation device can operate the display window, and the output device can acquire operation data when the operation device operates the display window, and operate the display window based on the operation data, so that the control of the display window is realized. The operation device can also operate the host of the management subsystem, the output device can acquire control data (such as operation data of a keyboard and/or operation data of a mouse) when the operation device operates the host, and send the control data to the input device of the management subsystem, the input device forwards the control data to the host, and the host is controlled based on the control data, so that remote control of the host is realized.
The output device can also receive the video image sent by the input device and send the video image to the display device to display the video image through a display window of the display device. For example, the output device receives a code stream sent by the input device, decodes the code stream to obtain a video image, and displays the video image through the display window. Illustratively, the display device is configured to display at least one display window, each display window being configured to display video images sent by a host, and different display windows may display video images sent by different hosts. For example, assuming that a video image transmitted by the host 1 and a video image transmitted by the host 2 need to be displayed by the display device, a display window 1 and a display window 2 are created in the display device, where the display window 1 is used for displaying the video image transmitted by the host 1, and the display window 2 is used for displaying the video image transmitted by the host 2. The output device displays the video image transmitted from the host 1 through the display window 1 after obtaining the video image. The output device displays the video image transmitted from the host 2 through the display window 2 after obtaining the video image.
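The routing described above (each display window shows the video image of its matched host) could be sketched roughly as follows; the table name window_for_host, the host identifiers and the decode/render callables are assumptions for illustration, not part of the system as claimed.

```python
# Hypothetical routing table: host id -> display window object, matching the
# example above (display window 1 shows host 1, display window 2 shows host 2).
window_for_host = {1: "display_window_1", 2: "display_window_2"}

def on_stream_received(host_id, code_stream, decode, render):
    """Decode the code stream received from the input device and render the
    resulting video image in the display window matched with that host."""
    frame = decode(code_stream)          # codec is not specified here; `decode` is assumed
    window = window_for_host.get(host_id)
    if window is not None:
        render(window, frame)            # draw the latest frame in the matched window
```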
The management subsystem may further include a master device, which may also be referred to as a master node and is a master device of all the output devices, and the master device is not shown in fig. 1. One output device may be selected from all the output devices as the master control device, and one device may also be deployed alone as the master control device, which is not limited to this. The main control device is used for managing all output devices, such as adding or deleting output devices, configuring IP addresses of the output devices, and the like. For example, the master control device may further determine a relationship between the host and the display window, for example, determine that the display window 1 has a matching relationship with the host 1, and the display window 2 has a matching relationship with the host 2, so that the display window 1 may be controlled to display the video image sent by the host 1, and the display window 2 may be controlled to display the video image sent by the host 2. Of course, the above are only a few examples of the functions of the master device, and the functions of the master device are not particularly limited.
Referring to the above embodiment, the operation device may operate the display window of the display device, and the output device acquires the operation data and operates the display window based on the operation data, thereby implementing control of the display window. For this process, in one possible implementation, taking the operation type as a move type as an example, see fig. 2A, the left side is an example of a move operation not yet performed on the display window, and the right side is an example of a move operation performed on the display window. Obviously, when the operating device moves the display window (for example, the mouse is usually pressed to lock the display window, the mouse moves to drag the display window, and the display window is moved to a new position to be released), the display window will keep displaying the video image of the last frame.
For example, before the operation device performs a moving operation on the display window, the last frame of video image sent by the host is image a, and the display window always displays image a during the moving process of the display window, but if the host sends video image B, video image C, and the like during the moving process of the display window, the display window cannot display video image B, video image C, and the user experience is poor.
Different from the above manner, in the embodiment of the present application, as shown in fig. 2B, the left side is an example of a display window that has not been moved, the middle is an example of the display window during movement, and the right side is an example of the display window after the movement is completed. When the operating device moves the source display window (for example, the mouse locks the source display window), a virtual window is generated based on the source display window, and the virtual window, rather than the source display window, is operated (for example, the mouse drags the virtual window and releases it at the new position, instead of dragging the source display window itself). During the movement of the virtual window, the source display window continues to display video images, so the source display window can display the latest video image in real time instead of always displaying the last frame before the operation. Real-time display of video images is therefore not affected during the window operation, and the user experience is improved.
After the virtual window is moved to the new position and released (for example, the left button of the mouse is released), the source display window is changed to the position of the virtual window, namely, the source display window is changed to the new position to continue displaying the video image.
For example, before the operation device performs the moving operation on the source display window, the last frame of video image sent by the host is image a, and the source display window displays image a. When the source display window needs to be moved, the virtual window is moved instead of the source display window, so that if the host sends the video image B, the video image C and the like in the moving process of the virtual window, since the position of the source display window is kept unchanged, the source display window can continuously display the video image B and the video image C, and the user experience is improved.
The technical solutions of the embodiments of the present application are described below with reference to specific embodiments.
The embodiment of the present application provides an image display method, which may be applied to an agent management system, where the agent management system may include, but is not limited to, a display device, an output device, an operation device, a host, an input device, and the like, as shown in fig. 1. The display device may be configured to display at least one display window, and each display window is configured to display a video image transmitted by the host. Referring to fig. 3A, a flowchart of the image display method, which may be applied to an output device, may include:
step 301, selecting a source display window to be operated from all display windows of a display device, and generating a first virtual window based on information of the source display window, where the size of the first virtual window may be the same as the size of the source display window, and the position of the first virtual window may be the same as the position of the source display window.
For example, the operation modes of the operating device can be divided into a window mode and a non-window mode (e.g., a host mode, etc.). The window mode indicates that the operation device operates the display window, such as performing a zoom operation or a move operation on the display window, and therefore, after the output device obtains the operation data of the operation device, it is known that the operation data is the operation data for the display window, and it is necessary to operate the display window based on the operation data. The non-window mode indicates that the operating device does not operate the display window, so that the output device knows that the operating data is not the operating data for the display window after obtaining the operating data of the operating device, and does not operate the display window based on the operating data. For example, the operation data may be operation data for the host, and thus, the output device may transmit the operation data to the input device, so that the input device forwards the operation data to the host to implement remote control of the host by the operation data.
In summary, the output device may determine an operation mode of the operation device, and if the operation mode is a window mode for operating the display window, the source display window to be operated may be selected from all display windows of the display device, that is, the technical solution of the embodiment of the present application is adopted to implement an operation process of the display window, and if the operation mode is not the window mode, the implementation manner is not limited in the embodiment of the present application.
In one possible embodiment, the operating mode of the operating device can be determined as follows: second operation data of the operating device is obtained, and the second operation data can comprise the first operation coordinate and the first operation type. And if the first operation coordinate is matched with the menu position corresponding to the window mode and the first operation type is clicking, determining that the operation mode of the operation equipment is the window mode. Or, if the first operation coordinate is not matched with the menu position corresponding to the window mode (for example, the first operation coordinate is matched with the menu position corresponding to the non-window mode), and the first operation type is clicking, determining that the operation mode of the operation device is the non-window mode.
Of course, the above manner is only an example of determining the operation mode, and the determination manner is not limited.
For example, when a user needs to operate a display window, an operation mode of an operating device may be first switched to a window mode, and an example of the process is as follows: the display device displays a menu in a floating layer mode (namely, the menu is displayed on the upper surface of the display window in a floating layer mode, and the menu can comprise options of a window mode, a non-window mode and the like), and a user moves a mouse to a menu position corresponding to the window mode and clicks a left button of the mouse.
For the operation of the user, the output device may obtain second operation data of the operation device (i.e., the mouse), where the second operation data may include the first operation coordinate and the first operation type. Obviously, since the user moves the mouse to the menu position corresponding to the window mode, the first operation coordinate is matched with the menu position corresponding to the window mode, and since the user clicks the left mouse button, the first operation type is clicking.
To sum up, since the first operation coordinate is matched with the menu position corresponding to the window mode and the first operation type is a click, the output device determines that the operation mode of the operation device is the window mode.
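A rough sketch of this mode determination is given below, under the assumption that each menu option occupies a rectangular region on screen; the MENU layout, the coordinate values and the string mode names are invented for illustration only.

```python
# Assumed on-screen layout of the floating-layer menu: each option occupies a
# rectangle given as (x, y, width, height); the numbers are invented.
MENU = {
    "window_mode":     (0,  0, 120, 40),
    "non_window_mode": (0, 40, 120, 40),
}

def inside(rect, x, y):
    rx, ry, rw, rh = rect
    return rx <= x < rx + rw and ry <= y < ry + rh

def determine_mode(first_op_coord, first_op_type, current_mode):
    """Map the second operation data (first operation coordinate and first
    operation type) to an operation mode, following the rule described above."""
    x, y = first_op_coord
    if first_op_type == "click" and inside(MENU["window_mode"], x, y):
        return "window_mode"
    if first_op_type == "click" and inside(MENU["non_window_mode"], x, y):
        return "non_window_mode"
    return current_mode          # no menu option was clicked: mode unchanged
```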
In a possible implementation manner, if the operation mode of the operation device is a window mode, the source display window to be operated may be selected from all display windows of the display device in the following manner: acquiring third operation data of the operation device, wherein the third operation data may include a second operation coordinate and a second operation type; and if the second operation coordinate is matched with the coordinate range of the display window (namely the second operation coordinate is positioned in the coordinate range) and the second operation type is clicking, selecting the display window as a source display window to be operated.
Of course, the above manner is only an example of selecting the source display window, and the selection manner is not limited thereto.
For example, when a user needs to operate a display window a (i.e., a source display window) in all display windows, the user moves the mouse to a position of the display window a (e.g., a center position or an edge position, which is not limited to this, as long as the mouse is located in a coordinate range of the display window a), and clicks a left button of the mouse.
For the operation of the user, the output device may obtain third operation data of the operation device (i.e., the mouse), where the third operation data may include the second operation coordinate and the second operation type. Obviously, since the user moves the mouse to the position of the display window a, the second operation coordinate is matched with the coordinate range of the display window a, and since the user clicks the left button of the mouse, the second operation type may be clicking.
To sum up, since the second operation coordinate is matched with the coordinate range of the display window a and the second operation type is clicking, the output device selects the display window a as the source display window to be operated.
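The hit test used to pick the source display window might look like the sketch below; the tuple layout of the window list is an assumption, not the actual window representation.

```python
def select_source_window(display_windows, second_op_coord, second_op_type):
    """Pick the source display window whose coordinate range contains the
    second operation coordinate, when the second operation type is a click.

    `display_windows` is assumed to be a list of (window_id, x, y, width, height)
    tuples; the representation is illustrative only.
    """
    if second_op_type != "click":
        return None
    cx, cy = second_op_coord
    for window_id, x, y, w, h in display_windows:
        if x <= cx < x + w and y <= cy < y + h:   # coordinate lies inside this window
            return window_id
    return None
```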
In a possible embodiment, after the source display window is selected from all the display windows, the first virtual window may be generated based on the information of the source display window, and the generation manner is not limited. For example, when the source display window is a rectangle, the information of the source display window may be the size (e.g., width, height, etc.) of the source display window and the position (e.g., the position of the center point, or the position of at least one of the four vertices, etc.) of the source display window; the size of the first virtual window is the same as the size of the source display window, and the position of the first virtual window is the same as the position of the source display window. The size of the first virtual window being the same as the size of the source display window means that: the width of the first virtual window is the same as the width of the source display window, and the height of the first virtual window is the same as the height of the source display window. The position of the first virtual window being the same as the position of the source display window means that: the position of the upper left corner of the first virtual window is the same as the position of the upper left corner of the source display window, the position of the upper right corner of the first virtual window is the same as the position of the upper right corner of the source display window, the position of the lower left corner of the first virtual window is the same as the position of the lower left corner of the source display window, the position of the lower right corner of the first virtual window is the same as the position of the lower right corner of the source display window, and the center position of the first virtual window is the same as the center position of the source display window.
The style of the first virtual window may be the same as or different from the style of the source display window, for example, the source display window may be a black solid frame, and the first virtual window may also be a black solid frame. Alternatively, the source display window may be a black solid frame, and the first virtual window may be a black solid frame, or the first virtual window may be a red solid frame (or a solid frame of another color), without limitation to the style of the first virtual window.
Step 302, obtaining first operation data for the source display window, and operating the first virtual window based on the first operation data to obtain a second virtual window after the operation is completed. And generating a target display window based on the information of the second virtual window, wherein the size of the target display window and the size of the second virtual window can be the same, and the position of the target display window and the position of the second virtual window can be the same.
For example, after the first operation data for the source display window is obtained, the source display window is not operated based on the first operation data, but the first virtual window is operated based on the first operation data, and after the operation of the first virtual window is completed, the first virtual window after the operation is completed is referred to as a second virtual window.
In a possible implementation manner, if the first operation data includes that the operation type is long press and the first operation data includes a scaled size and a scaled position, it is determined that the operation type of the source display window is a zoom type, and a zoom operation (such as a zoom-out operation or a zoom-in operation) is to be performed on the source display window. In this case the first virtual window may be scaled based on the scaled size and the scaled position to obtain a scaled second virtual window. For example, the scaled size may be the scaled width and height, and the scaled position may be the scaled center position (or the upper left corner position, or the upper right corner position, or the lower left corner position, or the lower right corner position); based on the above information, the first virtual window may be scaled to obtain the scaled second virtual window, that is, the size of the second virtual window is the scaled width and height, and the center position of the second virtual window is the scaled center position.
For example, when a user needs to perform a zoom operation (e.g., a zoom-out operation or a zoom-in operation) on a display window a (i.e., a source display window), taking a zoom-in operation as an example, the user moves a mouse to a position of a first virtual window corresponding to the display window a (e.g., an upper left corner position, an upper right corner position, a lower left corner position, or a lower right corner position of the first virtual window), presses a left mouse button for a long time, zooms in the first virtual window outward on the basis of pressing the left mouse button for the long time, and releases the left mouse button after the zoom-in operation of the first virtual window is completed.
For the above operation of the user, the output device may obtain a plurality of first operation data of the operation device (i.e., the mouse) and, for each first operation data, the first operation data may include an operation coordinate and an operation type. Since the user operation is to enlarge the first virtual window outward, the operation coordinates include an enlarged size (e.g., width, height, etc.) and an enlarged position (e.g., a center position, or an upper left corner position, or an upper right corner position, or a lower left corner position, or a lower right corner position, etc.). And the operation type is long pressing because the user operation is long pressing of the left mouse button.
In summary, for each first operation data, the output device may obtain the enlarged size and the enlarged position, and perform an enlarging operation on the first virtual window based on the enlarged size and the enlarged position to obtain an enlarged virtual window, and the plurality of first operation data may cause the output device to gradually enlarge the first virtual window until the user releases the left mouse button. And when the first virtual window is subjected to the amplification operation every time, the size of the amplified virtual window is the same as the amplified size in the first operation data, and the position of the amplified virtual window is the same as the amplified position in the first operation data, so that the amplified virtual window is obtained.
For the above operation of the user, when the user releases the left button of the mouse, the output device may further obtain operation data of the operation device (i.e. the mouse), and the operation data may include the operation type. Since the user operation is to release the left mouse button, the operation type is release. After the output device knows that the operation type is released, it determines that the operation is completed, takes the first virtual window when the operation is completed as the second virtual window after the operation is completed, and generates a target display window based on the information of the second virtual window, and the generation process refers to the following embodiments.
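One way such zoom operation data could be applied to the first virtual window is sketched below; the dictionary keys (scaled_width, scaled_position, etc.) are illustrative assumptions about the data layout, not the actual format.

```python
def apply_zoom(virtual_window, op):
    """Apply one piece of zoom operation data to the (first) virtual window.

    `virtual_window` is a dict with keys x, y, width, height; `op` is assumed to
    carry the operation type plus the scaled width/height and scaled position.
    """
    if op.get("type") != "long_press":
        return virtual_window
    virtual_window["width"] = op["scaled_width"]
    virtual_window["height"] = op["scaled_height"]
    # The scaled position is taken here as the top-left corner; a scaled center
    # position would be converted in the obvious way (x = cx - width // 2, ...).
    virtual_window["x"], virtual_window["y"] = op["scaled_position"]
    return virtual_window
```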
In a possible implementation manner, if the first operation data includes that the operation type is long press and the first operation data includes a moved position, it is determined that the operation type of the source display window is a move type, and the source display window is to be moved. For example, the first operation data includes the moved position, and the first virtual window may be moved based on the moved position to obtain a moved second virtual window. The moved position may be a moved center position (or an upper left corner position, or an upper right corner position, or a lower left corner position, or a lower right corner position); based on the above information, the first virtual window may be moved to obtain the second virtual window after the movement, that is, the center position of the second virtual window is the center position after the movement.
For example, when a user moves a display window a (source display window), the user moves a mouse to a position (e.g., a center position or an edge position) of a first virtual window corresponding to the display window a, presses a left button of the mouse for a long time, and moves the first virtual window based on the long-time pressing of the left button of the mouse. And after the moving operation of the first virtual window is completed, namely the first virtual window is moved to the target position, and the left mouse button is released.
For the operation of the user, the output device may obtain a plurality of first operation data of the operation device (that is, a plurality of first operation data may be generated in the process of long-pressing the left button of the mouse), and each of the first operation data may include an operation coordinate and an operation type. Since the user operation is to move the first virtual window, the operation coordinate includes a moved position (such as a center position, or an upper left corner position, or an upper right corner position, or a lower left corner position, or a lower right corner position, etc.). And the operation type is long pressing because the user operation is long pressing of the left mouse button.
In summary, for each first operation data, the output device may obtain a moved position, and perform a moving operation on the first virtual window based on the moved position to obtain a moved virtual window, and the plurality of first operation data may cause the output device to gradually move the first virtual window until the user releases the left mouse button. And when the first virtual window is moved each time, the position of the moved virtual window is the same as the moved position in the first operation data, so that the moved virtual window is obtained.
For the above operation of the user, when the user releases the left button of the mouse, the output device may further obtain operation data of the operation device (i.e. the mouse), and the operation data may include the operation type. Since the user operation is to release the left mouse button, the operation type is release. After the output device knows that the operation type is released, it determines that the operation is completed, takes the first virtual window when the operation is completed as the second virtual window after the operation is completed, and generates a target display window based on the information of the second virtual window, and the generation process refers to the following embodiments.
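A minimal sketch of the drag loop described above, assuming the operation data arrive as a stream of dictionaries; the event keys and values are invented for illustration.

```python
def drag_virtual_window(virtual_window, operation_stream):
    """Consume the stream of first operation data produced while the left mouse
    button is held down; stop when a release event arrives and return the
    resulting second virtual window. The event format is invented here."""
    for op in operation_stream:
        if op.get("type") == "release":
            break                                   # operation finished
        if op.get("type") == "long_press" and "moved_position" in op:
            # Only the virtual window moves; the source display window keeps
            # showing the latest decoded video frames at its original position.
            virtual_window["x"], virtual_window["y"] = op["moved_position"]
    return virtual_window                           # second virtual window
```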
Of course, the zoom operation and the move operation are only examples of the operation of the first virtual window, and the operation type is not limited, and in the following embodiments, the move operation is taken as an example for explanation.
In a possible implementation manner, after the second virtual window is obtained when the operation is finished, the target display window may further be generated based on the information of the second virtual window, and the generation manner is not limited. For example, when the second virtual window is a rectangle, the information of the second virtual window may be the size (e.g., width, height, etc.) of the second virtual window and the position (e.g., the center point position, or the position of at least one of the four vertices) of the second virtual window; the size of the target display window is the same as the size of the second virtual window, and the position of the target display window is the same as the position of the second virtual window. The size of the target display window being the same as the size of the second virtual window means that: the width of the target display window is the same as the width of the second virtual window, and the height of the target display window is the same as the height of the second virtual window. The position of the target display window being the same as the position of the second virtual window means that: the position of the upper left corner of the target display window is the same as the position of the upper left corner of the second virtual window, the position of the upper right corner of the target display window is the same as the position of the upper right corner of the second virtual window, the position of the lower left corner of the target display window is the same as the position of the lower left corner of the second virtual window, the position of the lower right corner of the target display window is the same as the position of the lower right corner of the second virtual window, and the center position of the target display window is the same as the center position of the second virtual window.
The style of the target display window may be the same as or different from the style of the second virtual window, the style of the target display window being the same as the style of the source display window, the style of the second virtual window being the same as the style of the first virtual window. For example, when the source display window is a black solid frame, the target display window is a black solid frame.
Step 303, deleting the second virtual window and the source display window, and migrating the video image sent by the host corresponding to the source display window to the target display window.
For example, after the target display window (the target display window is used to replace the source display window) is generated, the second virtual window and the source display window may be deleted from all windows displayed by the display device, that is, the second virtual window and the source display window are no longer displayed in the display device, and the target display window may be reserved.
In one possible implementation, after the target display window is generated, the second virtual window may be deleted before the source display window is deleted. Alternatively, after the target display window is generated, the source display window may be deleted before the second virtual window is deleted. Alternatively, the second virtual window may be deleted first, then the source display window deleted, and then the target display window generated. Alternatively, the source display window may be deleted first, then the second virtual window deleted, and then the target display window generated. Alternatively, the second virtual window may be deleted first, the target display window generated, and then the source display window deleted. Alternatively, the source display window may be deleted first, the target display window generated, and then the second virtual window deleted. Of course, the above are just a few examples; the order of deleting the source display window, deleting the second virtual window, and generating the target display window may be configured arbitrarily, as long as the three operations can be completed in a relatively short time, which is not limited.
In a possible implementation manner, before the video image sent by the host corresponding to the source display window is migrated to the target display window, the video image sent by the host may be displayed through the source display window each time the video image sent by the host corresponding to the source display window is acquired. For example, in the process of operating the first virtual window based on the first operation data, the video image sent by the host is also displayed through the source display window, so that the source display window can display the latest video image in real time, the real-time display of the video image cannot be influenced in the window operation process, and the viewing experience of the user is improved.
In one possible embodiment, migrating the video image sent by the host corresponding to the source display window to the target display window may include, but is not limited to: when the video image sent by the host corresponding to the source display window is acquired each time, the video image sent by the host can be displayed through the target display window instead of the video image sent by the host through the source display window, namely the video image is migrated to the target display window to be displayed, so that the source display window is replaced by the target display window.
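The deletion and migration step might be sketched as follows, assuming windows are plain dictionaries and window_for_host is the routing table from the earlier sketch; all names are illustrative, not the claimed implementation.

```python
def migrate_to_target(windows, source, second_virtual, window_for_host):
    """Replace the source display window with a target display window that has
    the size and position of the second virtual window, then route subsequent
    frames from the corresponding host to the target window."""
    target = {"x": second_virtual["x"], "y": second_virtual["y"],
              "width": second_virtual["width"], "height": second_virtual["height"],
              "host_id": source["host_id"]}
    windows.remove(source)                        # delete the source display window
    windows.append(target)                        # keep the target display window
    window_for_host[source["host_id"]] = target   # later frames render into the target
    return target
```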
In a possible implementation manner, the number of display devices is at least two, and when the operation type of the source display window is a move type, the source display window and the target display window may be located on the same display device; alternatively, the source display window and the target display window may be located on different display devices.
For example, when the source display window of the display device 1 is moved, if the first virtual window corresponding to the source display window is moved to a certain position of the display device 1, the source display window and the target display window are located on the display device 1. Or, when the source display window of the display device 1 is moved, if the first virtual window corresponding to the source display window is moved to a certain position of the display device 2, the source display window is located in the display device 1, and the target display window is located in the display device 2, that is, the source display window and the target display window are located in different display devices.
In one possible implementation manner, for at least two display devices corresponding to the same roaming matrix, all display windows in the at least two display devices are display windows in the same coordinate system; and if the first operation data comprises the operation type of long press and the first operation data comprises the position after movement, the operation type of the source display window is the movement type. And if the source display window is positioned on the first display device and the moved position belongs to a position matched with the display window of the first display device in the coordinate system, the second virtual window is positioned on the first display device, and the target display window is positioned on the first display device. Or, if the source display window is located on the first display device and the moved position belongs to a position matched with a display window of the second display device in the coordinate system, the second virtual window is located on the second display device, and the target display window is located on the second display device.
For example, the roaming matrix is a device group composed of a plurality of output devices, and may be configured in an N × M format (that is, N × M output devices form the roaming matrix, N output devices exist in the transverse direction of the roaming matrix, M output devices exist in the longitudinal direction of the roaming matrix, and at least one of N and M may be greater than 1). A mouse or a window may move within the roaming matrix, and which output device it has moved to is identified according to the coordinate information.
Assuming that the roaming matrix is composed of 4 output devices, and the 4 output devices correspond to 4 display devices, referring to fig. 3B, a coordinate system may be established, where the display device 1 corresponds to the area a in the coordinate system, that is, all display windows in the display device 1 are located in the area a, the display device 2 corresponds to the area B in the coordinate system, that is, all display windows in the display device 2 are located in the area B, the display device 3 corresponds to the area C in the coordinate system, that is, all display windows in the display device 3 are located in the area C, and the display device 4 corresponds to the area D in the coordinate system, that is, all display windows in the display device 4 are located in the area D.
Assuming that the source display window is the window 1 located in the area a, the user may view the positional relationship of all areas in the coordinate system, and move the window 1 to any area position, for example, the user may move the window 1 to the area a, the user may move the window 1 to the area B, the user may move the window 1 to the area C, and the user may move the window 1 to the area D, and the position after the movement is not limited.
For example, referring to fig. 3C, the user may move window 1 to another position of area a, i.e. the moved position also belongs to area a, and obviously, in this case, the source display window and the target display window (second virtual window) are located in the display area of the same display device, i.e. area a.
For another example, referring to fig. 3D, the user may move window 1 to another position of region B, i.e., the moved position belongs to region B, and it is apparent that, in this case, the source display window and the target display window (second virtual window) are located in display regions of different display devices, i.e., region a and region B.
In the process of moving the window 1 to another position of the area B, the window 1 may span two display areas (the area a and the area B), and since the area a and the area B are areas in the same coordinate system, in the moving process of the window 1, it may also be known that the window 1 spans two display areas based on the position after the movement, that is, it is known that a part of the window is located in the area a based on the position after the movement, and a part of the window is located in the area B, so that a virtual window spanning two display areas is obtained based on the position after the movement, as shown in fig. 3E.
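A rough sketch of how a moved position in the shared coordinate system could be mapped back to a display device, and of how a window spanning two regions could be detected; the region sizes and device names are assumptions for illustration only.

```python
# Assumed 2x2 roaming matrix: each display device owns one region of a single
# shared coordinate system (cf. areas A-D in fig. 3B); sizes are illustrative.
REGIONS = {
    "display_device_1": (0,    0,    1920, 1080),   # area A: x, y, width, height
    "display_device_2": (1920, 0,    1920, 1080),   # area B
    "display_device_3": (0,    1080, 1920, 1080),   # area C
    "display_device_4": (1920, 1080, 1920, 1080),   # area D
}

def device_for_position(x, y):
    """Identify which display device a moved position falls on, so that the
    second virtual window and the target display window are created there."""
    for device, (rx, ry, rw, rh) in REGIONS.items():
        if rx <= x < rx + rw and ry <= y < ry + rh:
            return device
    return None

def devices_spanned(x, y, width, height):
    """A window being dragged may span several display regions (cf. fig. 3E)."""
    corners = [(x, y), (x + width - 1, y), (x, y + height - 1),
               (x + width - 1, y + height - 1)]
    return {device_for_position(cx, cy) for cx, cy in corners} - {None}
```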
According to the technical scheme, in the embodiment of the application, when the operating device operates a certain display window (namely a source display window) of the display device, the first virtual window is generated based on the source display window, and the first virtual window is operated instead of the source display window, so that the source display window can display the latest video image in real time instead of displaying the last frame of video image before operation all the time, the user experience is improved, and the real-time display of the video image cannot be influenced in the window operation process.
The following describes an image display method according to an embodiment of the present application with reference to a specific application scenario.
Application scenario 1: image roaming within the same output device. Image roaming means that a display window is locked by the mouse and, by moving the mouse, the position of the display window follows the mouse position. Considering that one output device may control one display device, image roaming within the same output device means moving the display window from one location to another location of the display area of that display device.
The output device may include an application module (e.g. a functional module implemented by an application process), an interface module (e.g. a functional module implemented by a GUI (Graphical User Interface)), and a display module (e.g. a functional module implemented by an FPGA (Field Programmable Gate Array), a CPLD (Complex Programmable Logic Device), an ASIC (Application Specific Integrated Circuit), etc.).
When the user selects a source display window through the operating device to perform image roaming, the source display window can continue to display video images, and the output device can generate a virtual window according to the source display window and operate the virtual window, so that real-time display of the video images is not affected during the window operation.
Referring to fig. 4, a flowchart of an image display method is shown, where the method may include:
Step 401: the interface module obtains coordinate information of each display window from the application module, such as the coordinate of a certain position of the display window (e.g. the upper left corner, upper right corner, lower left corner, lower right corner, or center position) together with the width and the height of the display window. From this coordinate information the 4 corner positions of the display window (upper left, upper right, lower left, lower right) can be obtained, and thus the rectangular area range of each display window.
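A minimal sketch of how the rectangular range might be derived from one anchor coordinate plus width and height; the anchor conventions and the helper name are assumptions for illustration:

```python
# Sketch: derive a window's rectangular area range from one anchor coordinate,
# its width and its height. Anchor handling and names are illustrative assumptions.

def window_rect(anchor_xy, width, height, anchor="top_left"):
    """Return (left, top, right, bottom) for a display window."""
    x, y = anchor_xy
    if anchor == "top_left":
        left, top = x, y
    elif anchor == "center":
        left, top = x - width // 2, y - height // 2
    else:
        raise ValueError("unsupported anchor: " + anchor)
    return (left, top, left + width, top + height)

# The four corner positions follow directly from the rectangle.
left, top, right, bottom = window_rect((100, 50), 640, 360)
corners = [(left, top), (right, top), (left, bottom), (right, bottom)]
```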
For example, the application module may detect the display device to obtain coordinate information of each display window on the display device, and send the coordinate information of each display window to the interface module.
Step 402: the user moves the mouse (i.e. the operating device) to the menu position of the window mode in the user interface menu (i.e. the window mode is a button and the mouse is moved to that button position) and clicks the left mouse button. The application module obtains the operation data of the mouse, which includes a first operation coordinate (i.e. the coordinate of the current mouse position) and a first operation type (e.g. click, indicating that the user clicked the mouse), and sends the operation data to the interface module, which receives it.
In step 403, if the interface module determines that the first operation coordinate is matched with the menu position corresponding to the window mode and the first operation type is click, it determines that the operation mode is the window mode.
For example, the user interface may include a menu position corresponding to the window mode and a menu position corresponding to a non-window mode. The interface module may first determine whether the first operation coordinate matches the menu position corresponding to the window mode; if not, the operation mode is not the window mode and the process ends. If it matches, the interface module determines whether the first operation type is click; if not, the operation mode is not the window mode and the process ends, and if so, the operation mode is the window mode and the subsequent steps are executed.
In step 404, the user moves the mouse to the display window to be operated (i.e. the source display window) and clicks the left mouse button. The application module obtains the operation data of the mouse, which includes a second operation coordinate (i.e. the coordinate of the current mouse position) and a second operation type (i.e. the operation type of the current mouse operation, such as click), and sends the operation data to the interface module, which receives it.
In step 405, if the interface module determines that the second operation coordinate matches the coordinate range of a certain display window and the second operation type is click, the interface module selects that display window as the source display window to be operated.
For example, in step 401 the interface module has already obtained the coordinate information of each display window and therefore knows the rectangular area range (i.e. the coordinate range) of each display window. On this basis, if the second operation coordinate is located within the coordinate range of a certain display window and the second operation type is click, that display window is the source display window to be operated.
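A minimal hit-test sketch of this selection; the window records and helper names are illustrative assumptions:

```python
# Sketch: select the source display window by testing the second operation
# coordinate against each window's rectangular coordinate range.
# Window records and names are illustrative assumptions.

windows = {
    "window_1": (0,   0,   960, 540),   # (left, top, width, height)
    "window_2": (960, 0,   960, 540),
    "window_3": (0,   540, 960, 540),
}

def select_source_window(op_coord, op_type):
    """Return the window hit by a click, or None if no window matches."""
    if op_type != "click":
        return None
    x, y = op_coord
    for name, (left, top, width, height) in windows.items():
        if left <= x < left + width and top <= y < top + height:
            return name
    return None

assert select_source_window((1000, 100), "click") == "window_2"
```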
In step 406, the interface module writes the information of the source display window (such as its size and position) into a specified memory address. The display module reads the information of the source display window from the specified memory address and generates a first virtual window based on it; the size of the first virtual window is the same as the size of the source display window, and the position of the first virtual window is the same as the position of the source display window.
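One way to picture the handoff through a specified memory address is sketched below; the binary layout, field order, and the buffer standing in for the memory address are assumptions for illustration only:

```python
# Sketch of the interface module -> display module handoff through a specified
# memory address, modeled here as a shared bytearray. The field layout
# (x, y, width, height as four 32-bit integers) is an assumption.
import struct

SPECIFIED_ADDRESS = bytearray(16)   # stands in for the specified memory address
LAYOUT = "<iiii"                    # x, y, width, height

def interface_write_window(buf, x, y, width, height):
    """Interface module: write source display window information."""
    struct.pack_into(LAYOUT, buf, 0, x, y, width, height)

def display_generate_virtual_window(buf):
    """Display module: read the information and build the first virtual window."""
    x, y, width, height = struct.unpack_from(LAYOUT, buf, 0)
    return {"x": x, "y": y, "width": width, "height": height}

interface_write_window(SPECIFIED_ADDRESS, 100, 50, 640, 360)
first_virtual_window = display_generate_virtual_window(SPECIFIED_ADDRESS)
# The virtual window has the same size and position as the source display window.
```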
Step 407: with the mouse over the source display window, the user presses and holds the left mouse button and moves the mouse. The application module obtains the operation data of the mouse, which includes the moved position (i.e. the position of the mouse after the movement) and the operation type (i.e. the operation type of the current mouse operation, such as long press, indicating that the user keeps the left button held down and moves the mouse without releasing it), and sends the operation data to the interface module, which receives it.
Step 408: after the interface module learns that the operation type is long press, it determines that the user is moving the first virtual window and writes the moved position into the specified memory address; the display module reads the moved position from the specified memory address and moves the first virtual window to the position matching the moved position.
Because the user keeps the left mouse button held down while moving the mouse, the application module acquires operation data of the mouse many times, and the moved position in the operation data changes each time. After each acquisition, the application module sends the operation data to the interface module, the interface module writes the latest moved position into the specified memory address, and the display module reads it and moves the first virtual window to the matching position, thereby moving the first virtual window.
For example, when the application module acquires the mouse operation data for the first time and the moved position is position 1, the interface module writes position 1 into the specified memory address, and the display module reads position 1 and moves the first virtual window to position 1. When the application module acquires the operation data for the second time and the moved position is position 2, the interface module writes position 2 into the specified memory address, and the display module reads position 2 and moves the first virtual window to position 2, and so on.
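A compact sketch of this repeated update loop; the event sequence and names are illustrative assumptions:

```python
# Sketch of the drag loop: each long-press sample carries a new moved position,
# which is written to the specified memory address and applied to the virtual
# window. The event list and names below are illustrative assumptions.

specified_address = {}                       # stands in for the specified memory address
first_virtual_window = {"x": 100, "y": 50, "width": 640, "height": 360}

mouse_samples = [                            # operation data acquired many times
    {"type": "long_press", "moved_position": (150, 80)},
    {"type": "long_press", "moved_position": (220, 140)},
    {"type": "release"},                     # end of the move operation
]

for sample in mouse_samples:
    if sample["type"] == "long_press":
        specified_address["moved_position"] = sample["moved_position"]  # interface module
        x, y = specified_address["moved_position"]                      # display module
        first_virtual_window["x"], first_virtual_window["y"] = x, y
    elif sample["type"] == "release":
        second_virtual_window = dict(first_virtual_window)   # operation completed
        break
```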
In step 409, after the moving operation is completed, the user releases the left mouse button. The application module obtains the operation data of the mouse, which includes the operation type (i.e. the operation type of the current mouse operation, such as release), and sends the operation data to the interface module, which receives it.
In step 410, after learning that the operation type is release, the interface module determines that the operation on the first virtual window is completed (i.e. the first virtual window has been moved to the target position and the moving operation for it is finished), and the current first virtual window is taken as the second virtual window after the operation. The interface module writes the information of the second virtual window (such as its size and position) into the specified memory address, and the display module reads this information and generates a target display window based on it, where the size of the target display window may be the same as the size of the second virtual window, and the position of the target display window may be the same as the position of the second virtual window.
In step 411, the interface module sends a window roaming command to the application module, the window roaming command indicating that the movement of the display window is completed, i.e. that a target display window for replacing the source display window has been obtained. The application module executes the window roaming command: it deletes the second virtual window and the source display window and migrates the video image sent by the host corresponding to the source display window to the target display window. In other words, each time the video image sent by that host is acquired, it is displayed through the target display window instead of the source display window, so that the target display window replaces the source display window.
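A minimal sketch of executing such a window roaming command; the data structures and names are assumptions for illustration:

```python
# Sketch: execute the window roaming command by deleting the second virtual
# window and the source display window and re-routing the host's video images
# to the target display window. Data structures and names are assumptions.

display_windows = {"source_window": {"x": 100, "y": 50}}   # windows shown on screen
virtual_windows = {"second_virtual": {"x": 220, "y": 140}}
host_to_window = {"host_1": "source_window"}               # which window shows which host

def execute_window_roaming(host, source_id, second_virtual_id, target_window):
    virtual_windows.pop(second_virtual_id, None)   # delete the second virtual window
    display_windows.pop(source_id, None)           # delete the source display window
    display_windows["target_window"] = target_window
    host_to_window[host] = "target_window"         # future frames go to the target window

execute_window_roaming("host_1", "source_window", "second_virtual",
                       {"x": 220, "y": 140})
# From now on, each video image from host_1 is displayed in the target window.
```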
Application scenario 2: image roaming between different output devices, also called roaming across output devices. Image roaming again means that a display window is locked by the mouse and follows the mouse position as the mouse is moved. Considering that one output device may control one display device, image roaming between different output devices moves a display window from the display area of one display device to the display area of another display device. For example, if output device 1 controls display device 1 and output device 2 controls display device 2, the display window can be moved from the display area of display device 1 to the display area of display device 2.
Illustratively, the roaming matrix is a node group consisting of a plurality of output devices. The mouse or a window can be moved within the roaming matrix, and the output device of the roaming matrix to which the mouse or window has moved is identified according to coordinate information. On this basis, image roaming can be performed between different output devices of the roaming matrix so as to move a display window from the display area of one display device to the display area of another display device.
In this embodiment, it is assumed that output device 1 controls display device 1 and output device 2 controls display device 2, and the display window is moved from the display area of display device 1 to the display area of display device 2. It should be noted that output device 1 and output device 2 belong to the same roaming matrix, and the display area of display device 1 and the display area of display device 2 are display areas in the same coordinate system, similar to fig. 3B.
The output device 1 may include an application module 11, an interface module 12, and a display module 13, and the output device 2 may include an application module 21, an interface module 22, and a display module 23. On the basis, referring to fig. 5, another flow chart of the image display method is shown, and the method may include:
In step 501, the interface module 12 obtains coordinate information of each display window from the application module 11, such as the coordinates of a certain position of the display window and the width and height of the display window; the rectangular area range of each display window can be obtained from this coordinate information.
Step 502: the user moves the mouse to the menu position of the window mode in the user interface menu and clicks the left mouse button. The application module 11 obtains the operation data of the mouse, which includes the first operation coordinate and the first operation type (such as click), and sends the operation data to the interface module 12, which receives it.
In step 503, if it is determined that the first operation coordinate matches the menu position corresponding to the window mode and the first operation type is click, the interface module 12 determines that the operation mode is the window mode.
In step 504, the interface module 12 sends the information that the operation mode is the window mode to the application module 11, the application module 11 forwards the information to the application module 21, the application module 21 sends the information that the operation mode is the window mode to the interface module 22, and the interface module 22 determines that the operation mode is the window mode.
Step 505: the user moves the mouse to the display window to be operated and clicks the left mouse button. The application module 11 obtains the operation data of the mouse, which includes the second operation coordinate and the second operation type (such as click), and sends the operation data to the interface module 12, which receives it.
In step 506, if it is determined that the second operation coordinate matches the coordinate range of a certain display window and the second operation type is click, the interface module 12 selects the display window as the source display window to be operated.
In step 507, the interface module 12 writes the information of the source display window (such as its size and position) into a specified memory address. The display module 13 reads the information of the source display window from the specified memory address and generates a first virtual window based on it; the size of the first virtual window is the same as the size of the source display window, and the position of the first virtual window is the same as the position of the source display window.
In step 508, the interface module 12 sends the information of the source display window to the application module 11, the application module 11 sends the information of the source display window to the application module 21, the application module 21 sends the information of the source display window to the interface module 22, and the interface module 22 stores the information of the source display window.
In step 509, with the mouse over the source display window, the user presses and holds the left mouse button and moves the mouse; assume that the moved position is located in the display area of display device 1. The application module 11 obtains the operation data of the mouse, which may include the moved position and the operation type (such as long press), and sends the operation data to the interface module 12, which receives it.
In step 510, after the interface module 12 learns that the operation type is long-time pressing, it is determined that the user is moving the first virtual window, the moved position is written into the specified memory address, and the display module 13 reads the moved position from the specified memory address and moves the first virtual window to a position matched with the moved position.
Because the user keeps the left mouse button held down while moving the mouse, the application module 11 may acquire operation data of the mouse many times, with the moved position changing each time. After each acquisition, if the moved position is still located in the display area of display device 1, the application module 11 sends the operation data to the interface module 12, the interface module 12 writes the latest moved position into the specified memory address, and the display module 13 reads it and moves the first virtual window to the matching position, thereby moving the first virtual window.
After each acquisition of the operation data, if the moved position is located in the display area of the display device 2, not in the display area of the display device 1, the subsequent step 511 may be performed.
In step 511, the user keeps the left mouse button held down and moves the mouse, and assume that the moved position is now located in the display area of display device 2, that is, the moved position has crossed from the display area of display device 1 to the display area of display device 2. The application module 11 obtains the operation data of the mouse, which includes the moved position (located in the display area of display device 2) and the operation type (such as long press), and sends the operation data to the application module 21; the application module 21 sends the operation data to the interface module 22, which receives it. In addition, the application module 11 sends a command to stop drawing the virtual window to the interface module 12, and the interface module 12 deletes the first virtual window in the display area of display device 1 based on this command.
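A sketch of this cross-device handoff decision follows; the device areas, message routing helpers, and names are assumptions:

```python
# Sketch: when the moved position leaves display device 1's area and enters
# display device 2's area, output device 1 stops drawing its virtual window and
# forwards the operation data to output device 2. Names are assumptions.

AREA_DEVICE_1 = (0,    0, 1920, 1080)   # (x, y, width, height) in the shared system
AREA_DEVICE_2 = (1920, 0, 1920, 1080)

def in_area(pos, area):
    x, y = pos
    ax, ay, aw, ah = area
    return ax <= x < ax + aw and ay <= y < ay + ah

def route_operation(op_data, send_to_interface_12, send_to_app_21, stop_drawing_12):
    pos = op_data["moved_position"]
    if in_area(pos, AREA_DEVICE_1):
        send_to_interface_12(op_data)          # keep moving the window on device 1
    elif in_area(pos, AREA_DEVICE_2):
        send_to_app_21(op_data)                # hand the move over to output device 2
        stop_drawing_12()                      # delete the virtual window on device 1

route_operation({"moved_position": (2100, 300), "type": "long_press"},
                send_to_interface_12=lambda d: None,
                send_to_app_21=lambda d: None,
                stop_drawing_12=lambda: None)
```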
Illustratively, when the user moves the mouse from the display area of display device 1 to the display area of display device 2, the coordinates of the current mouse position leave the coordinate range of the display area of display device 1 and fall within the coordinate range of the display area of display device 2; it can therefore be determined that the moved position has crossed from the display area of display device 1 to the display area of display device 2.
For example, the display area of display device 1 and the display area of display device 2 are display areas in the same coordinate system, i.e. their coordinate ranges do not overlap, similar to fig. 3B. On this basis, during the movement of the first virtual window it can be determined from the moved position whether the first virtual window is located in the display area of display device 1, in the display area of display device 2, or in both display areas at once.
If, based on the moved position, the first virtual window is located in the display area of display device 1, the first virtual window is displayed in the display area of display device 1 by the relevant module of output device 1, matching the moved position. If the first virtual window is located in the display area of display device 2, it is displayed in the display area of display device 2 by the relevant module of output device 2, again matching the moved position. If the first virtual window is located in both display areas at once, assume that it is composed of sub virtual window 1 and sub virtual window 2, where sub virtual window 1 lies in the display area of display device 1 and sub virtual window 2 lies in the display area of display device 2; then sub virtual window 1 is displayed in the display area of display device 1 by the relevant module of output device 1, and sub virtual window 2 is displayed in the display area of display device 2 by the relevant module of output device 2.
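The split into sub virtual windows can be sketched as a rectangle intersection per display area; the area sizes and helper names are assumptions:

```python
# Sketch: split a virtual window that spans two display areas into sub virtual
# windows by intersecting its rectangle with each device's area. The area sizes
# and helper names are illustrative assumptions.

AREAS = {
    "output_device_1": (0,    0, 1920, 1080),   # display device 1
    "output_device_2": (1920, 0, 1920, 1080),   # display device 2
}

def intersect(rect, area):
    """Return the overlapping rectangle of rect and area, or None if disjoint."""
    x1, y1, w1, h1 = rect
    x2, y2, w2, h2 = area
    left, top = max(x1, x2), max(y1, y2)
    right, bottom = min(x1 + w1, x2 + w2), min(y1 + h1, y2 + h2)
    if left >= right or top >= bottom:
        return None
    return (left, top, right - left, bottom - top)

def split_virtual_window(rect):
    """Map each output device to the sub virtual window it should display."""
    parts = {}
    for device, area in AREAS.items():
        part = intersect(rect, area)
        if part is not None:
            parts[device] = part
    return parts

# A window straddling the boundary yields one sub window per output device.
print(split_virtual_window((1700, 300, 640, 360)))
```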
In step 512, after the interface module 22 receives the operation data, it learns that the operation type is long press and determines that the user is moving the first virtual window. Based on the information of the source display window stored by the interface module 22, the information of the source display window (such as its size and position) is written into the specified memory address; the display module 23 reads this information from the specified memory address and generates a first virtual window based on it, whose size and position are the same as those of the source display window. The interface module 22 then writes the moved position into the specified memory address, and the display module 23 reads the moved position and moves the first virtual window to the matching position.
Because the user keeps the left mouse button held down while moving the mouse, the application module 11 may acquire operation data of the mouse many times, with the moved position changing each time. After each acquisition, if the moved position is located in the display area of display device 2, the application module 11 sends the operation data to the application module 21, the application module 21 sends it to the interface module 22, the interface module 22 writes the latest moved position into the specified memory address, and the display module 23 reads it and moves the first virtual window to the matching position, thereby moving the first virtual window. Illustratively, the specified memory address used by the interface module 22 and the display module 23 is different from the specified memory address used by the interface module 12 and the display module 13.
Step 513: after the movement operation is completed, the user releases the left mouse button. The application module 11 obtains the operation data of the mouse, which includes the operation type (e.g. release), and sends the operation data to the application module 21; the application module 21 sends the operation data to the interface module 22, which receives it.
In step 514, after learning that the operation type is release, the interface module 22 determines that the operation on the first virtual window is completed, and the current first virtual window is taken as the second virtual window after the operation. The interface module 22 writes the information of the second virtual window (such as its size and position) into the specified memory address, and the display module 23 reads this information and generates a target display window based on it, where the size of the target display window is the same as the size of the second virtual window, and the position of the target display window is the same as the position of the second virtual window.
In step 515, interface module 22 sends a window roaming command to application module 21, the window roaming command indicating that the movement of the display window has been completed, i.e., that a target display window for replacing the source display window has been obtained. Application module 21 executes the window roaming command, and the second virtual window may be deleted.
In step 516, the application module 21 sends a window roaming command to the application module 11, and the application module 11 executes the window roaming command to delete the source display window.
Step 517, the application module 11 or the application module 21 sends a window roaming command to the main control device, and the main control device migrates the video image sent by the host corresponding to the source display window to the target display window.
Because the source display window and the target display window are located on different display devices and are managed by different output devices, migrating the video image sent by the host corresponding to the source display window to the target display window for display is a migration across output devices. The video image can therefore be migrated to the target display window by the main control device, i.e. when a migration across output devices is needed, the migration is carried out by the main control device; hence a window roaming command is sent to the main control device, and the main control device migrates the video image sent by the host corresponding to the source display window to the target display window. Of course, in practical applications the application module 11 may also migrate the video image sent by the host corresponding to the source display window to the target display window, or the application module 21 may do so; this is not limited.
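A small sketch of choosing where the migration is carried out, under the assumption that the owning output device of each window is known; all names are illustrative:

```python
# Sketch: decide whether the video-image migration is handled locally or routed
# through the main control device, based on whether the source and target
# display windows belong to the same output device. All names are assumptions.

def migrate_video(host, source_window, target_window, local_migrate, send_to_main_control):
    if source_window["output_device"] == target_window["output_device"]:
        # Same output device: the output device can retarget the stream itself.
        local_migrate(host, target_window)
    else:
        # Different output devices: send a window roaming command to the main
        # control device, which performs the cross-device migration.
        send_to_main_control({"command": "window_roaming",
                              "host": host,
                              "target": target_window})

migrate_video("host_1",
              {"id": "source", "output_device": "output_device_1"},
              {"id": "target", "output_device": "output_device_2"},
              local_migrate=lambda h, w: None,
              send_to_main_control=lambda cmd: None)
```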
Based on the same application concept as the method described above, an image display method is provided in this embodiment of the present application. The application scenario of the image display method is not limited, as long as a display window needs to be operated and the display window is used for displaying an image. The method may include:
step S11, obtaining a source display window to be operated, and generating a first virtual window based on the information of the source display window; the size of the first virtual window is the same as that of the source display window, and the position of the first virtual window is the same as that of the source display window.
For example, before the source display window to be operated is obtained, second operation data may also be obtained, where the second operation data includes a first operation coordinate and a first operation type; and if the first operation coordinate is matched with the menu position corresponding to the window mode and the first operation type is clicking, determining that the operation mode is the window mode for operating the display window, and executing the operation of obtaining the source display window to be operated.
Illustratively, the obtaining the source display window to be operated comprises: acquiring third operation data, wherein the third operation data comprises a second operation coordinate and a second operation type; and if the second operation coordinate is matched with the coordinate range of the display window and the second operation type is clicking, selecting the display window as a source display window to be operated.
Illustratively, the implementation process of step S11 is similar to step 301, and is not described herein again.
Step S12, acquiring first operation data for the source display window, and operating the first virtual window based on the first operation data to obtain a second virtual window after the operation is completed; and after the second virtual window is obtained, generating a target display window based on the information of the second virtual window, wherein the size of the target display window is the same as that of the second virtual window, and the position of the target display window is the same as that of the second virtual window.
For example, operating the first virtual window based on the first operation data to obtain the second virtual window after the operation is completed may include, but is not limited to: if the first operation data indicates that the operation type is long press and includes the zoomed size and the zoomed position, determining that the operation type for the source display window is the zoom type, and zooming the first virtual window based on the zoomed size and the zoomed position to obtain the zoomed second virtual window; or, if the first operation data indicates that the operation type is long press and includes the moved position, determining that the operation type for the source display window is the move type, and moving the first virtual window based on the moved position to obtain the moved second virtual window.
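A minimal dispatch sketch of this zoom-versus-move decision; the field names are assumptions:

```python
# Sketch: decide between a zoom and a move of the first virtual window from the
# fields present in the first operation data. Field names are assumptions.

def operate_virtual_window(first_virtual_window, op_data):
    window = dict(first_virtual_window)
    if op_data.get("type") != "long_press":
        return window                                    # nothing to do
    if "zoomed_size" in op_data and "zoomed_position" in op_data:
        window["width"], window["height"] = op_data["zoomed_size"]   # zoom type
        window["x"], window["y"] = op_data["zoomed_position"]
    elif "moved_position" in op_data:
        window["x"], window["y"] = op_data["moved_position"]         # move type
    return window                                        # second virtual window

second = operate_virtual_window({"x": 0, "y": 0, "width": 640, "height": 360},
                                {"type": "long_press", "moved_position": (220, 140)})
```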
Illustratively, the implementation process of step S12 is similar to step 302, and will not be described herein again.
And step S13, deleting the second virtual window and the source display window, and transferring the image displayed by the source display window to the target display window for displaying.
Illustratively, the implementation process of step S13 is similar to step 303, and is not described herein again.
Based on the same application concept as the method, an embodiment of the present application provides an image display apparatus. The agent management system includes a display device, an output device, an operating device and a host, where the display device is configured to display at least one display window and each display window is configured to display a video image sent by the host. The apparatus is applied to the output device; fig. 6 is a schematic structural diagram of the apparatus, and the apparatus includes:
a selecting module 61, configured to select a source display window to be operated from all display windows of the display device; a generating module 62, configured to generate a first virtual window based on the information of the source display window, where a size of the first virtual window is the same as a size of the source display window, and a position of the first virtual window is the same as a position of the source display window; an operation module 63, configured to obtain first operation data for the source display window, and operate the first virtual window based on the first operation data to obtain a second virtual window after the operation is completed; the generating module 62 is further configured to generate a target display window based on the information of the second virtual window, where the size of the target display window is the same as the size of the second virtual window, and the position of the target display window is the same as the position of the second virtual window; and a processing module 64, configured to delete the second virtual window and the source display window, and migrate the video image sent by the host corresponding to the source display window to the target display window.
Illustratively, the apparatus further comprises: a determination module for determining an operation mode of the operating device; and if the operation mode of the operation equipment is a window mode for operating a display window, triggering the selection module to select a source display window to be operated from all display windows of the display equipment.
For example, the determining module is specifically configured to, when determining the operation mode of the operating device: acquiring second operation data of the operation equipment, wherein the second operation data comprises a first operation coordinate and a first operation type; and if the first operation coordinate is matched with a menu position corresponding to a window mode and the first operation type is clicking, determining that the operation mode of the operation equipment is the window mode.
For example, when the selection module 61 selects the source display window to be operated from all the display windows of the display device, it is specifically configured to: acquiring third operation data of the operation equipment, wherein the third operation data comprises a second operation coordinate and a second operation type; and if the second operation coordinate is matched with the coordinate range of the display window and the second operation type is clicking, selecting the display window as a source display window to be operated.
For example, the operation module 63 operates the first virtual window based on the first operation data, and when the second virtual window after the operation is obtained, is specifically configured to: if the first operation data comprises that the operation type is long press and the first operation data comprises the zoomed size and the zoomed position, determining that the operation type of the source display window is the zoomed type, and zooming the first virtual window based on the zoomed size and the zoomed position to obtain a zoomed second virtual window; or if the first operation data comprises that the operation type is long press and the first operation data comprises the position after movement, determining that the operation type of the source display window is the movement type, and moving the first virtual window based on the position after movement to obtain a second virtual window after movement.
For example, before the processing module 64 migrates the video image sent by the host corresponding to the source display window to the target display window, the processing module is further configured to: displaying the video image sent by the host through the source display window when the video image sent by the host corresponding to the source display window is obtained each time; the processing module 64 is specifically configured to, when migrating the video image sent by the host corresponding to the source display window to the target display window: and displaying the video image sent by the host through the target display window when the video image sent by the host corresponding to the source display window is acquired each time.
Based on the same application concept as the method, an output device is provided in this embodiment of the present application, and is applied to an agent management system, where the agent management system further includes a display device, an operating device, and a host, the display device includes at least one display window, each display window is used to display a video image of the host, and the output device may include: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is configured to execute the machine-executable instructions to implement the steps of the image display method, which are not described herein again.
Based on the same application concept as the method, the embodiment of the application provides an agent management system, wherein the agent management system comprises a display device, an output device, an operation device, an input device and a host, the display device is used for displaying at least one display window, and each display window is used for displaying a video image sent by the host.
The host acquires a video image and sends the video image to the input device so that the input device forwards the video image to the output device; the output device receives the video image and sends it to the display device so as to display the video image through a display window of the display device;
the operating device operates the host; the output device acquires control data when the operating device operates the host and sends the control data to the input device; the input device forwards the control data to the host so as to control the host based on the control data (these two forwarding paths are sketched after this system description);
the operating device operates the display window of the display device; the output device acquires operation data when the display window is operated and operates the display window based on the operation data. When operating the display window based on the operation data, the output device is specifically configured to:
selecting a source display window to be operated from all display windows of the display equipment; generating a first virtual window based on the information of the source display window, wherein the size of the first virtual window is the same as that of the source display window, and the position of the first virtual window is the same as that of the source display window;
acquiring first operation data for the source display window, and operating the first virtual window based on the first operation data to obtain a second virtual window after the operation is finished;
generating a target display window based on the information of the second virtual window, wherein the size of the target display window is the same as that of the second virtual window, and the position of the target display window is the same as that of the second virtual window; and deleting the second virtual window and the source display window, and transferring the video image sent by the host corresponding to the source display window to the target display window.
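As referenced above, the two forwarding paths of the agent management system (video images from the host to a display window, and control data from the operating device back to the host) can be sketched as follows; the class and method names are illustrative assumptions:

```python
# Sketch of the two forwarding paths in the agent management system:
# video: host -> input device -> output device -> display window,
# control: operating device -> output device -> input device -> host.
# Class and method names are illustrative assumptions.

class Host:
    def __init__(self):
        self.received_control = []
    def capture_video(self):
        return "video_frame"
    def apply_control(self, control_data):
        self.received_control.append(control_data)

class OutputDevice:
    def __init__(self):
        self.display_window_frames = []   # stands in for a display window
        self.input_device = None
    def show(self, frame):
        self.display_window_frames.append(frame)
    def send_control(self, control_data):
        self.input_device.forward_control(control_data)

class InputDevice:
    def __init__(self, host, output_device):
        self.host, self.output_device = host, output_device
    def forward_video(self, frame):
        self.output_device.show(frame)              # host -> input -> output device
    def forward_control(self, control_data):
        self.host.apply_control(control_data)       # output device -> input -> host

host, output_device = Host(), OutputDevice()
input_device = InputDevice(host, output_device)
output_device.input_device = input_device
input_device.forward_video(host.capture_video())    # video path
output_device.send_control({"key": "enter"})        # control path
```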
Based on the same application concept as the method, the embodiment of the present application further provides a machine-readable storage medium, where several computer instructions are stored, and when the computer instructions are executed by a processor, the image display method disclosed in the above example of the present application can be implemented.
The systems, apparatuses, modules or units described in the above embodiments may be specifically implemented by a computer chip or an entity, or implemented by a product with certain functions. A typical implementation device is a computer, which may be in the form of a personal computer, laptop, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, respectively. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. An image display method is characterized in that an agent management system comprises a display device, an output device, an operation device and a host, wherein the display device is used for displaying at least one display window, each display window is used for displaying a video image sent by the host, the method is applied to the output device, and the method comprises the following steps:
selecting a source display window to be operated from all display windows of the display equipment; generating a first virtual window based on the information of the source display window, wherein the size of the first virtual window is the same as that of the source display window, and the position of the first virtual window is the same as that of the source display window;
acquiring first operation data for the source display window, and operating the first virtual window based on the first operation data to obtain a second virtual window after the operation is finished;
generating a target display window based on the information of the second virtual window, wherein the size of the target display window is the same as that of the second virtual window, and the position of the target display window is the same as that of the second virtual window; deleting the second virtual window and the source display window, and transferring the video image sent by the host corresponding to the source display window to the target display window;
before the video image sent by the host corresponding to the source display window is migrated to the target display window, the method further includes: displaying the video image sent by the host through the source display window when the video image sent by the host corresponding to the source display window is obtained each time;
the migrating the video image sent by the host corresponding to the source display window to the target display window includes: and displaying the video image sent by the host through the target display window when the video image sent by the host corresponding to the source display window is acquired each time.
2. The method according to claim 1, wherein before the selecting the source display window to be operated from all the display windows of the display device, the method further comprises:
determining an operating mode of the operating device;
if the operation mode of the operation equipment is a window mode for operating a display window, executing operation of selecting a source display window to be operated from all display windows of the display equipment;
wherein the determining an operation mode of the operating device comprises:
acquiring second operation data of the operation equipment, wherein the second operation data comprises a first operation coordinate and a first operation type; and if the first operation coordinate is matched with a menu position corresponding to a window mode and the first operation type is clicking, determining that the operation mode of the operation equipment is the window mode.
3. The method of claim 1,
the selecting a source display window to be operated from all display windows of the display device includes:
acquiring third operation data of the operation equipment, wherein the third operation data comprises a second operation coordinate and a second operation type; and if the second operation coordinate is matched with the coordinate range of the display window and the second operation type is clicking, selecting the display window as a source display window to be operated.
4. The method of claim 1, wherein the operating the first virtual window based on the first operation data to obtain an operated second virtual window comprises:
if the first operation data comprises that the operation type is long press and the first operation data comprises the zoomed size and the zoomed position, determining that the operation type of the source display window is the zoomed type, and zooming the first virtual window based on the zoomed size and the zoomed position to obtain a zoomed second virtual window; or,
and if the first operation data comprises that the operation type is long press and the first operation data comprises the position after movement, determining that the operation type of the source display window is the movement type, and moving the first virtual window based on the position after movement to obtain a second virtual window after movement.
5. The method according to any one of claims 1 to 4,
aiming at least two display devices corresponding to the same roaming matrix, all display windows in the at least two display devices are display windows in the same coordinate system;
if the first operation data comprises that the operation type is long press and the first operation data comprises the position after movement, the operation type of the source display window is movement type;
if the source display window is located on a first display device and the moved position belongs to a position matched with a display window of the first display device in the coordinate system, the second virtual window is located on the first display device and the target display window is located on the first display device;
and if the source display window is positioned on the first display device and the moved position belongs to a position matched with a display window of a second display device in the coordinate system, the second virtual window is positioned on the second display device, and the target display window is positioned on the second display device.
6. An image display method, characterized in that the method comprises:
the method comprises the steps of obtaining a source display window to be operated, and generating a first virtual window based on information of the source display window; wherein the size of the first virtual window is the same as the size of the source display window, and the position of the first virtual window is the same as the position of the source display window;
acquiring first operation data for the source display window, and operating the first virtual window based on the first operation data to obtain a second virtual window after the operation is finished;
after the second virtual window is obtained, generating a target display window based on information of the second virtual window, wherein the size of the target display window is the same as that of the second virtual window, and the position of the target display window is the same as that of the second virtual window; deleting the second virtual window and the source display window, and transferring the image displayed by the source display window to the target display window for displaying;
wherein, before migrating the image displayed by the source display window to the target display window for display, the method further comprises: displaying the image through the source display window when the image corresponding to the source display window is acquired each time; the moving the image displayed by the source display window to the target display window for displaying comprises: and displaying the image through the target display window when the image corresponding to the source display window is acquired every time.
7. The method of claim 6,
before the obtaining of the source display window to be operated, the method further includes: acquiring second operation data, wherein the second operation data comprises a first operation coordinate and a first operation type; if the first operation coordinate is matched with the menu position corresponding to the window mode and the first operation type is clicking, determining that the operation mode is the window mode for operating the display window, and executing the operation of obtaining the source display window to be operated;
the acquiring of the source display window to be operated includes: acquiring third operation data, wherein the third operation data comprises a second operation coordinate and a second operation type; if the second operation coordinate is matched with the coordinate range of the display window and the second operation type is clicking, selecting the display window as a source display window to be operated;
the operating the first virtual window based on the first operation data to obtain a second virtual window after the operation is completed includes: if the first operation data comprise that the operation type is long press and the first operation data comprise the zoomed size and the zoomed position, determining that the operation type of the source display window is the zoomed type, and zooming the first virtual window based on the zoomed size and the zoomed position to obtain a zoomed second virtual window; or if the first operation data comprise that the operation type is long press and the first operation data comprise the moved position, determining that the operation type of the source display window is the moved type, and moving the first virtual window based on the moved position to obtain a moved second virtual window.
8. An image display device, characterized in that, an agent management system includes a display device, an output device, an operation device and a host, the display device is used to display at least one display window, each display window is used to display a video image sent by the host, the device is applied to the output device, the device includes:
the selection module is used for selecting a source display window to be operated from all display windows of the display equipment;
a generating module, configured to generate a first virtual window based on information of the source display window, where a size of the first virtual window is the same as a size of the source display window, and a position of the first virtual window is the same as a position of the source display window;
the operation module is used for acquiring first operation data aiming at the source display window, and operating the first virtual window based on the first operation data to obtain a second virtual window after the operation is finished;
the generating module is further configured to generate a target display window based on information of the second virtual window, where the size of the target display window is the same as the size of the second virtual window, and the position of the target display window is the same as the position of the second virtual window;
the processing module is used for deleting the second virtual window and the source display window and transferring the video image sent by the host corresponding to the source display window to the target display window;
wherein before the processing module migrates the video image sent by the host corresponding to the source display window to the target display window, the processing module is further configured to: displaying the video image sent by the host through the source display window when the video image sent by the host corresponding to the source display window is obtained each time;
the processing module is specifically configured to, when migrating the video image sent by the host corresponding to the source display window to the target display window: and displaying the video image sent by the host through the target display window when the video image sent by the host corresponding to the source display window is acquired each time.
9. The apparatus of claim 8,
the device further comprises: a determination module for determining an operation mode of the operating device; if the operation mode of the operation equipment is a window mode for operating a display window, triggering the selection module to select a source display window to be operated from all display windows of the display equipment;
the determining module is specifically configured to, when determining the operation mode of the operating device: acquiring second operation data of the operation equipment, wherein the second operation data comprises a first operation coordinate and a first operation type; if the first operation coordinate is matched with a menu position corresponding to a window mode and the first operation type is clicking, determining that the operation mode of the operation equipment is the window mode;
the selection module is specifically configured to, when selecting a source display window to be operated from all display windows of the display device: acquiring third operation data of the operation equipment, wherein the third operation data comprises a second operation coordinate and a second operation type; if the second operation coordinate is matched with the coordinate range of the display window and the second operation type is clicking, selecting the display window as a source display window to be operated;
the operation module operates the first virtual window based on the first operation data, and is specifically configured to, when obtaining a second virtual window after the operation is completed: if the first operation data comprise that the operation type is long press and the first operation data comprise the zoomed size and the zoomed position, determining that the operation type of the source display window is the zoomed type, and zooming the first virtual window based on the zoomed size and the zoomed position to obtain a zoomed second virtual window; or if the first operation data comprise that the operation type is long press and the first operation data comprise the moved position, determining that the operation type of the source display window is the moved type, and moving the first virtual window based on the moved position to obtain a moved second virtual window.
10. An agent management system, comprising a display device, an output device, an operating device, an input device and a host, wherein the display device is configured to display at least one display window, and each display window is configured to display a video image sent by the host, wherein:
the host acquires a video image and sends the video image to the input device, so that the input device forwards the video image to the output device; the output device receives the video image and sends the video image to the display device, so that the video image is displayed through a display window of the display device;
the operating device operates the host, the output device acquires control data when the operating device operates the host and sends the control data to the input device, and the input device forwards the control data to the host, so that the host is controlled based on the control data;
the operating device operates a display window of the display device, and the output device acquires operation data when the display window is operated and operates the display window based on the operation data; wherein the output device is specifically configured to, when operating the display window based on the operation data:
select a source display window to be operated from all display windows of the display device; generate a first virtual window based on information of the source display window, wherein the size and position of the first virtual window are the same as the size and position of the source display window;
acquire first operation data for the source display window, and operate the first virtual window based on the first operation data to obtain a second virtual window after the operation is completed;
generate a target display window based on information of the second virtual window, wherein the size and position of the target display window are the same as the size and position of the second virtual window; and delete the second virtual window and the source display window, and migrate the video image sent by the host corresponding to the source display window to the target display window;
wherein, before the video image sent by the host corresponding to the source display window is migrated to the target display window, each time a video image sent by the host corresponding to the source display window is acquired, the video image is displayed through the source display window;
and the migrating of the video image sent by the host corresponding to the source display window to the target display window comprises: each time a video image sent by the host corresponding to the source display window is acquired, displaying the video image through the target display window.
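To tie the claimed steps together, here is a speculative end-to-end sketch of the output device's window operation, under the assumptions that display windows are plain rectangles, that the finished geometry of the second virtual window is supplied directly (standing in for the zoom/move handling above), and that "migrating" the video image means rebinding the host's stream to the target window. The helpers Rect, hit_test and operate_display_window are hypothetical names introduced for this example only.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple


@dataclass
class Rect:
    window_id: int
    x: int
    y: int
    width: int
    height: int


def hit_test(windows: List[Rect], x: int, y: int) -> Optional[Rect]:
    """Select the display window whose coordinate range contains the click."""
    for w in windows:
        if w.x <= x < w.x + w.width and w.y <= y < w.y + w.height:
            return w
    return None


def operate_display_window(windows: List[Rect],
                           bindings: Dict[str, int],      # host_id -> window_id
                           host_id: str,
                           click_xy: Tuple[int, int],
                           new_geometry: Tuple[int, int, int, int]) -> List[Rect]:
    """One pass through the claimed steps: select, virtualize, operate,
    materialize the target window, delete the source, migrate the stream."""
    # 1. Select the source display window to be operated (coordinate hit test).
    source = hit_test(windows, *click_xy)
    if source is None:
        return windows
    # 2. First virtual window: same size and position as the source window.
    virtual = Rect(-1, source.x, source.y, source.width, source.height)
    # 3. Operate the virtual window; here the finished geometry is applied
    #    directly to obtain the second virtual window.
    x, y, w, h = new_geometry
    virtual = Rect(-1, x, y, w, h)
    # 4. Target display window: same size and position as the second virtual window.
    target = Rect(max(win.window_id for win in windows) + 1,
                  virtual.x, virtual.y, virtual.width, virtual.height)
    # 5. Delete the source display window (the virtual window is simply dropped)
    #    and migrate the host's stream by rebinding it to the target window.
    remaining = [win for win in windows if win.window_id != source.window_id]
    bindings[host_id] = target.window_id
    return remaining + [target]


layout = [Rect(1, 0, 0, 960, 540), Rect(2, 960, 0, 960, 540)]
streams = {"host-A": 1}
layout = operate_display_window(layout, streams, "host-A",
                                click_xy=(50, 50),
                                new_geometry=(200, 100, 1280, 720))
print(layout, streams)
```

Each call removes the source window from the layout, appends the newly generated target window, and updates the host-to-window binding, so the next frame from the host is displayed through the target display window.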
CN202110653992.8A 2021-06-11 2021-06-11 Image display method, device and system Active CN113438534B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110653992.8A CN113438534B (en) 2021-06-11 2021-06-11 Image display method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110653992.8A CN113438534B (en) 2021-06-11 2021-06-11 Image display method, device and system

Publications (2)

Publication Number Publication Date
CN113438534A (en) 2021-09-24
CN113438534B (en) 2022-09-30

Family

ID=77755700

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110653992.8A Active CN113438534B (en) 2021-06-11 2021-06-11 Image display method, device and system

Country Status (1)

Country Link
CN (1) CN113438534B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101714781B1 (en) * 2009-11-17 2017-03-22 엘지전자 주식회사 Method for playing contents
CN106534733B (en) * 2015-09-09 2019-09-17 杭州海康威视数字技术股份有限公司 The display methods and device of video window
CN105549934A (en) * 2015-12-16 2016-05-04 广东威创视讯科技股份有限公司 Display interface control method and system
CN108173944A (en) * 2017-12-29 2018-06-15 北京奇艺世纪科技有限公司 A kind of virtual window sharing method and system

Also Published As

Publication number Publication date
CN113438534A (en) 2021-09-24

Similar Documents

Publication Publication Date Title
KR100892932B1 (en) Electronic conference system, electronic conference support method, electronic conference support device, and conference server
KR101532963B1 (en) Information processing apparatus and control method therefor
CN103106000B (en) The implementation method of multifocal window and communication terminal
US20170185272A1 (en) Information processing method and information processing system
CN108829314B (en) Screenshot selecting interface selection method, device, equipment and storage medium
JP2015097070A (en) Communication system, information processor, and program
JP2006031359A (en) Screen sharing method and conference support system
JPH10285580A (en) Communication equipment and communication display method
JPH07210357A (en) Remote emphasis display of object in conference system
JPH09212323A (en) Device and method for communication
JP2008293361A (en) Screen display system, control method therefor, program, and storage medium
CN107690612A (en) A kind of display control method and electronic equipment
CN103324419A (en) Method and electronic device for determining display area and switching display mode
JP2003050653A (en) Method for generating input event and information terminal equipment with the same method
CN110955739B (en) Plotting processing method, shared image plotting method, and plot reproducing method
JP4802037B2 (en) Computer program
US9875571B2 (en) Image combining apparatus, terminal device, and image combining system including the image combining apparatus and terminal device
CN113438534B (en) Image display method, device and system
CN111061381A (en) Screen global input control system and method
US10733925B2 (en) Display control apparatus, display control method, and non-transitory computer-readable storage medium
US11310430B2 (en) Method and apparatus for providing video in portable terminal
JP2012118751A (en) External input apparatus, display data creation method and program
CN104285199A (en) Cursor movement control method, computer program, cursor movement control device and image display system
JP2019139332A (en) Information processor, information processing method and information processing program
KR101683130B1 (en) Method for providing user interface capable of allowing a user at local area and a user at remote area to interact each other, terminal, and computer-readable recording medium using the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant