CN111741274B - Ultra-high-definition video monitoring method supporting local enlargement and roaming of a picture
- Publication number
- CN111741274B · CN202010860576.0A
- Authority
- CN
- China
- Prior art keywords
- image
- interest
- region
- processed
- original video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/44—Receiver circuitry for the reception of television signals according to analogue transmission standards
- H04N5/445—Receiver circuitry for the reception of television signals according to analogue transmission standards for displaying additional information
- H04N5/44504—Circuit details of the additional information generator, e.g. details of the character or graphics signal generator, overlay mixing circuits
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/765—Interface circuits between an apparatus for recording and another apparatus
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Graphics (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
The embodiment of the application provides an ultra-high-definition video monitoring method supporting local enlargement and roaming of a picture, comprising the following steps: respectively acquiring display instruction information and an original video image to be processed containing a region of interest, wherein the original video image to be processed is each frame image decoded frame by frame from a video to be processed; intercepting an image of the region of interest from the original video image to be processed; processing the image of the region of interest and/or the original video image to be processed according to the display instruction information to generate an image to be displayed; and displaying the image to be displayed on a display screen of the video monitor and/or sending the image to be displayed to a display device connected to the video monitor. By means of this technical scheme, the embodiment of the application can realize pixel-level monitoring of the image.
Description
Technical Field
The application relates to the technical field of video monitoring, and in particular to an ultra-high-definition video monitoring method supporting local enlargement and roaming of a picture.
Background
At present, when the resolution of an original video image does not match the resolution of a display device, the original video image is usually scaled down to the resolution of the display device so that it can be viewed.
In the process of implementing the invention, the inventor found the following problem in the prior art: the existing method only allows the image content to be viewed as a whole; it cannot provide a pixel-level view of the original video image. For example, during shooting it is necessary to check on a display device whether the scene and the people are accurately in focus, but once the original video image has been scaled down, focus accuracy cannot be judged reliably.
Disclosure of Invention
An object of an embodiment of the present application is to provide an ultra-high-definition video monitoring method supporting local enlargement and roaming of a picture, so as to solve the problem in the prior art that an image cannot be viewed at the pixel level.
The implementation process of the embodiment of the application is as follows:
the embodiment of the application discloses an ultra-high-definition video monitoring method supporting local enlargement and roaming of a picture, applied to a video monitor and comprising the following steps: respectively acquiring display instruction information and an original video image to be processed containing a region of interest, wherein the original video image to be processed is each frame image decoded frame by frame from a video to be processed; intercepting an image of the region of interest from the original video image to be processed; processing the image of the region of interest and/or the original video image to be processed according to the display instruction information to generate an image to be displayed; and displaying the image to be displayed on a display screen of the video monitor and/or sending the image to be displayed to a display device connected to the video monitor.
Therefore, by intercepting the image of the region of interest from the original video image to be processed, processing the image of the region of interest and/or the original video image to be processed according to the display instruction information to generate the image to be displayed, and then displaying the image to be displayed on the display screen of the video monitor and/or sending it to a display device connected to the video monitor, the embodiment of the application can realize pixel-level monitoring of the original video image.
In one possible embodiment, respectively acquiring the display instruction information and the original video image to be processed containing the region of interest includes: acquiring the original video image to be processed through a single 48G serial digital interface (SDI) signal input interface on the video monitor.
Therefore, the embodiment of the application can quickly acquire the video to be processed through a single 48G-SDI signal input interface.
In one possible embodiment, respectively acquiring the display instruction information and the original video image to be processed containing the region of interest includes: acquiring the original video image to be processed through four 12G serial digital interface (SDI) signal input interfaces on the video monitor.
Therefore, the embodiment of the application can quickly acquire the video to be processed through the four 12G-SDI signal input interfaces.
In one possible embodiment, respectively acquiring the display instruction information and the original video image to be processed containing the region of interest includes: acquiring the original video image to be processed through a single high-definition multimedia interface HDMI2.1 input interface on the video monitor.
Therefore, the embodiment of the application can quickly acquire the video to be processed through a single HDMI2.1 input interface.
In one possible embodiment, respectively acquiring the display instruction information and the original video image to be processed containing the region of interest includes: acquiring the original video image to be processed through four high-definition multimedia interface HDMI2.0 input interfaces on the video monitor.
Therefore, the embodiment of the application can quickly acquire the video to be processed through the four HDMI2.0 input interfaces.
In one possible embodiment, intercepting the image of the region of interest from the original video image to be processed comprises: acquiring attribute information corresponding to the region of interest, wherein the attribute information comprises a start coordinate of the region of interest on the original video image and the size of the region of interest; and intercepting the image of the region of interest from the original video image to be processed according to the attribute information.
Therefore, by means of the attribute information, the embodiment of the application can accurately intercept the image of the region of interest from the original video image to be processed.
In one possible embodiment, acquiring attribute information corresponding to the region of interest includes: the attribute information is set or adjusted by a key on the video monitor.
In one possible embodiment, acquiring attribute information corresponding to the region of interest includes: receiving key information sent by a remote controller of a video monitor; and setting or adjusting the attribute information according to the key information.
In a possible embodiment, an HTTP server is built into the video monitor and an interactive attribute information setting service runs on the HTTP server, and acquiring the attribute information corresponding to the region of interest includes: accessing the attribute information setting service on the HTTP server through a browser of a tablet device, and setting or adjusting the attribute information through the interactive page provided by the attribute information setting service.
In one possible embodiment, the acquiring the attribute information corresponding to the region of interest includes: and setting or adjusting the attribute information through the interactive information of the touch display screen.
In one possible embodiment, the display instruction information includes at least one of the following: instruction information for displaying a reduced image of the original video image to be processed, instruction information for displaying both the original video image to be processed and the region of interest, and instruction information for displaying the region of interest.
Therefore, by means of this technical scheme, the video monitor in the embodiment of the application can support multiple display modes.
In one possible embodiment, processing the image of the region of interest and/or the original video image to be processed according to the display instruction information to generate the image to be displayed includes: processing the image of the region of interest point-to-point into an image with a first target resolution, wherein the first target resolution is the resolution of the display device. Therefore, the embodiment of the application can realize point-to-point processing, so that pixel-level monitoring of the image can be achieved.
In one possible embodiment, processing the image of the region of interest and/or the original video image to be processed according to the display instruction information to generate the image to be displayed includes: reducing the original video image to be processed into an overall reduced image of the original video image with a second target resolution, wherein the second target resolution is the resolution of the display device or of the display screen.
Therefore, the embodiment of the application can reduce the original video image to be processed into an overall reduced image with the second target resolution, so that the content of the original video image can be viewed as a whole.
In a possible embodiment, there is at least one region of interest, and processing the image of the region of interest and/or the original video image to be processed according to the display instruction information to generate the image to be displayed further includes: superimposing an image of at least one region of interest on the overall reduced image of the original video image, wherein the size of the image of the region of interest is equal to or smaller than the resolution of the display device.
Therefore, the embodiment of the application can superimpose the image of at least one region of interest on the overall reduced image of the original video image, so that pixel-level monitoring of the image and viewing of the content of the original video image can be achieved at the same time.
In one possible embodiment, there are multiple regions of interest, and processing the image of the region of interest and/or the original video image to be processed according to the display instruction information to generate the image to be displayed includes: stitching the images of the multiple regions of interest to generate the image to be displayed.
Therefore, the embodiment of the application can stitch the images of multiple regions of interest, so that pixel-level monitoring of several regions of interest can be achieved simultaneously.
In one possible embodiment, there are one or more regions of interest, and processing the image of the region of interest and/or the original video image to be processed to generate the image to be displayed includes: acquiring measurement information of the original video image to be processed; and superimposing the measurement information on the image to be displayed that is generated after the original video image to be processed and the image of the region of interest have been processed, so as to generate an image to be displayed containing the measurement information.
Therefore, by means of this technical scheme, the embodiment of the application can realize not only pixel-level monitoring of the image but also viewing of the measurement information.
In one possible embodiment, the measurement information includes at least one of the following information: spectral information, luminance information, and color information.
In one possible embodiment, the resolution of the original video image is 7680 × 4320, the resolution of the image of the region of interest is 3840 × 2160 or 1920 × 1080, and the resolution of the display device is 3840 × 2160 or 1920 × 1080 or other resolutions.
In one possible embodiment, the resolution of the original video image is 3840 × 2160, the resolution of the image of the region of interest is 1920 × 1080, and the resolution of the display device is 1920 × 1080 or other resolutions.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope; those skilled in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a schematic diagram illustrating an application scenario provided in an embodiment of the present application;
fig. 2 is a flowchart illustrating an ultra high definition video monitoring method supporting local enlargement and roaming of a picture according to an embodiment of the present application;
fig. 3 is a block diagram illustrating a structure of an ultra high definition video monitoring apparatus supporting local enlargement and roaming of a picture according to an embodiment of the present application;
fig. 4 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
At present, when the resolution of an original video image does not match the resolution of a display device, the video monitor generally reduces the whole original video image proportionally to a resolution matching the display device, and the reduced original video image is then displayed on the display device connected to the video monitor.
For example, when the video has 8K ultra-high-definition (UHD) resolution and the display device supports only 4K ultra-high-definition or high-definition (HD) resolution, the existing method generally reduces the whole 8K original video image to a 4K or high-definition video image in the video monitor and then displays that reduced image on the display device.
However, the conventional method has at least the following problems:
in the shooting process, whether the people and the scenes at the key parts are focused accurately can be checked through the display equipment. However, after the original 8K video image is reduced, the display of the 4K ultra-high definition video or the display of the high definition video cannot determine whether focusing is accurate;
during the post-production of video, the 8K original video pictures cannot be used, resulting in an intermediate uncontrolled loss of quality. Therefore, neither a display of a 4K ultra-high definition video nor a display of a high definition video can realize exposure viewing of a sensitive area. And during the post-production of the video, the brightness and the like of the video image also need to be adjusted, which may cause that some details which need to be represented in a sensitive area are lost, and the loss of the details cannot be well represented on a display of the 4K ultra-high definition video or a display of the high definition video.
In summary, no matter the monitor monitors the accurate focus of the video image or monitors the picture quality of the sensitive area of the video image, the display of the 4K ultra high definition video or the display of the high definition video cannot accurately view the original 8K video image.
Based on this, the embodiment of the present application provides an ultra-high-definition video monitoring method supporting local enlargement and roaming of a picture, which can be applied to an existing lower-resolution video monitor. The method respectively acquires display instruction information and an original video image to be processed containing a region of interest, where the original video image to be processed is each frame image decoded from the video to be processed; intercepts an image of the region of interest from the original video image to be processed; processes the image of the region of interest and/or the original video image to be processed according to the display instruction information to generate an image to be displayed; and displays the image to be displayed on the display screen of the video monitor and/or sends it to a display device connected to the video monitor.
Therefore, by intercepting the image of the region of interest from the original video image to be processed, processing the image of the region of interest and/or the original video image to be processed according to the display instruction information to generate the image to be displayed, and then displaying or sending that image as described above, the embodiment of the application can realize pixel-level monitoring of the original video image.
To facilitate understanding of the embodiments of the present application, some terms in the embodiments of the present application are first explained herein as follows:
the resolution of 8K ultra high definition video is 7680 × 4320.
The resolution of 4K ultra high definition video is 3840 × 2160.
The resolution of high definition video is 1920 × 1080.
Referring to fig. 1, fig. 1 illustrates a schematic diagram of an application scenario 100 according to an embodiment of the present application. As shown in FIG. 1, the application scenario 100 includes a video monitor 110 and a display device 120.
It should be understood that the specific devices of the video monitor 110 may be set according to actual requirements, and the embodiments of the present application are not limited thereto.
It should also be understood that the specific devices of the display device 120 may also be set according to actual requirements, and the embodiments of the present application are not limited thereto.
For example, the display device 120 may be a display of high definition video, a display of 4K ultra high definition video, or the like.
It should also be understood that the display device 120 may be an external device separate from the video monitor 110 or may be integrated with the video monitor 110. In order to facilitate understanding of the embodiments of the present application, the following description may be given by way of specific examples.
Specifically, the video monitor 110 may obtain a video to be processed, decode from it the original video image to be processed containing the region of interest, intercept the image of the region of interest from the original video image to be processed, and process the image of the region of interest and/or the original video image to be processed according to the acquired display instruction information to generate an image to be displayed.
The video monitor 110 may then display the image to be displayed on its own display screen, and/or transmit the image to be displayed to a display device connected to the video monitor 110.
It should be noted that the ultra-high-definition video monitoring method supporting local screen enlargement and roaming provided by the embodiment of the present application may be further extended to other suitable application scenarios, and is not limited to the application scenario 100 shown in fig. 1.
For example, although FIG. 1 shows 1 display, those skilled in the art will appreciate that in an actual application scenario, the application scenario 100 may include many more displays.
Referring to fig. 2, fig. 2 is a flowchart illustrating an ultra high definition video monitoring method supporting local enlargement and roaming of a picture according to an embodiment of the present application. The video monitoring method as shown in fig. 2 includes:
in step S210, the video monitor obtains a video to be processed.
It should be understood that the method for the video monitor to obtain the video to be processed may be set according to actual requirements, and the embodiments of the present application are not limited thereto.
Optionally, the video monitor may be provided with a Serial Digital Interface (SDI) signal input interface, so that the video to be processed may be acquired through the SDI signal input interface.
It should be understood that the transmission rate of the SDI signal input interface may be set according to actual requirements, and the embodiments of the present application are not limited thereto.
For example, when the transmission rate of the SDI signal input interface is 12G, the video to be processed may be acquired through four 12G-SDI signal input interfaces.
For another example, when the transmission rate of the SDI signal input interface is 48G, the video to be processed may be acquired through a single 48G-SDI signal input interface.
Alternatively, the video monitor may be provided with a High-Definition Multimedia Interface (HDMI) input interface, so that the video to be processed may be acquired through the HDMI interface.
It should be understood that the transmission rate of the HDMI interface may also be set according to actual requirements, and the embodiments of the present application are not limited thereto.
For example, when the transmission rate of the HDMI interface is 18 Gbps, the interface is an HDMI2.0 input interface, and the video to be processed can be acquired through four HDMI2.0 input interfaces.
For another example, when the transmission rate of the HDMI interface is 48 Gbps, the interface is an HDMI2.1 input interface, and the video to be processed can be acquired through a single HDMI2.1 input interface.
Step S220, the video monitor decodes the video to be processed to obtain an original video image to be processed including the region of interest.
It should be understood that the original video image to be processed is each frame image decoded from the video to be processed frame by frame.
That is, the processing described below for the original video image to be processed is performed on each decoded frame.
It should also be understood that the number of the regions of interest in the original video image to be processed may also be set according to actual requirements.
For example, the original video image to be processed may include one region of interest, two regions of interest, four regions of interest, and so on.
It should also be understood that the resolution of the original video image to be processed may also be set according to actual requirements, and the embodiments of the present application are not limited thereto.
For example, the resolution of the original video image to be processed may be 7680 × 4320, or 3840 × 2160, or the like.
In order to facilitate understanding of the embodiments of the present application, the following description will be given by way of specific examples.
Specifically, the video monitor may decapsulate the video to be processed to restore the original video image to be processed, and may also buffer the original video image to be processed.
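For illustration only (and not as part of the claimed method), the following Python sketch shows one way the frame-by-frame decoding of step S220 could be performed with OpenCV; the file path and the use of OpenCV rather than a dedicated SDI/HDMI capture pipeline are assumptions of the sketch.

```python
import cv2

def decode_frames(video_path):
    """Yield each decoded frame of the video to be processed, frame by frame."""
    cap = cv2.VideoCapture(video_path)   # hypothetical source; a real monitor would read from its input interface
    try:
        while True:
            ok, frame = cap.read()       # frame is a BGR array: the "original video image to be processed"
            if not ok:
                break
            yield frame
    finally:
        cap.release()

# Usage (illustrative): iterate over the decoded frames and buffer/process each one.
# for raw_frame in decode_frames("input_8k.mov"):
#     process(raw_frame)
```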
In step S230, the video monitor intercepts an image of the region of interest from the original video image to be processed.
It should be understood that the resolution of the image of the region of interest may be set according to actual requirements, and the embodiments of the present application are not limited thereto.
For example, in the case that the resolution of the original video image to be processed is 7680 × 4320, the resolution of the image of the region of interest may be 3840 × 2160, or 1920 × 1080.
As another example, in the case where the resolution of the original video image to be processed is 3840 × 2160, the resolution of the image of the region of interest may be 1920 × 1080.
In order to facilitate understanding of the embodiments of the present application, the following description will be given by way of specific examples.
Specifically, the video monitor acquires the attribute information corresponding to the region of interest, determines from it the position of the region of interest in the original video image to be processed, and can therefore intercept the image of the region of interest from the original video image to be processed.
It should be understood that the specific information included in the attribute information may be set according to actual requirements, and the embodiment of the present application is not limited thereto.
For example, the attribute information may include a start coordinate of the region of interest on the original video image and a size of the region of interest.
It should be understood that the specific position of the start coordinate may be set according to actual requirements, and the embodiments of the present application are not limited thereto.
For example, in the case where the region of interest is rectangular, the start coordinate may be the coordinate of the vertex of the upper left corner of the rectangle.
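For illustration only, a minimal Python sketch of intercepting the region of interest from a decoded frame according to the attribute information (start coordinate and size) might look as follows; the clamping behaviour and the function name are assumptions, not part of the claimed method.

```python
import numpy as np

def crop_region_of_interest(frame: np.ndarray, start_x: int, start_y: int,
                            width: int, height: int) -> np.ndarray:
    """Intercept the region of interest from the original video image.

    (start_x, start_y) is the start coordinate (top-left vertex) of the ROI on the
    original image, and (width, height) is its size, as in the attribute information.
    Assumes the ROI is no larger than the frame.
    """
    frame_h, frame_w = frame.shape[:2]
    # Clamp the start coordinate so the ROI always lies inside the original image.
    x = max(0, min(start_x, frame_w - width))
    y = max(0, min(start_y, frame_h - height))
    return frame[y:y + height, x:x + width]

# Example (illustrative): a 3840x2160 ROI taken from a 7680x4320 (8K) frame.
# roi = crop_region_of_interest(frame_8k, start_x=1000, start_y=500, width=3840, height=2160)
```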
It should also be understood that the specific manner in which the video monitor obtains the attribute information corresponding to the region of interest may also be set according to actual requirements, and the embodiment of the present application is not limited thereto.
Alternatively, in the case where a key for setting or adjusting the attribute information is provided on the video monitor, the attribute information may be set or adjusted by the key on the video monitor.
For example, in the case where a key for adjusting the size of the region of interest is provided on the video monitor, the size of the region of interest can be adjusted by the key information.
Alternatively, in the case where the video monitor is configured with a remote controller, the attribute information is set or adjusted by remote control information transmitted from the remote controller.
For example, when a mapping relationship between remote control information and setting instructions for the attribute information is stored in the video monitor, the video monitor may receive target remote control information sent by the remote controller, query the target setting instruction corresponding to that remote control information according to the mapping relationship, and then set the attribute information with the target setting instruction.
For another example, when a mapping relationship between remote control information and adjustment instructions for the attribute information is stored in the video monitor, the video monitor may receive target remote control information sent by the remote controller, query the target adjustment instruction corresponding to that remote control information according to the mapping relationship, and then adjust the attribute information according to the target adjustment instruction.
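For illustration only, the mapping between remote-control information and adjustment instructions could be sketched in Python as below; the key codes and the step size are hypothetical and not part of the claimed method.

```python
# Hypothetical key codes and step size; the real mapping is whatever is stored in the video monitor.
ROI_STEP = 32  # pixels moved per key press (assumption)

ADJUSTMENT_INSTRUCTIONS = {
    "KEY_LEFT":  lambda roi: {**roi, "start_x": roi["start_x"] - ROI_STEP},
    "KEY_RIGHT": lambda roi: {**roi, "start_x": roi["start_x"] + ROI_STEP},
    "KEY_UP":    lambda roi: {**roi, "start_y": roi["start_y"] - ROI_STEP},
    "KEY_DOWN":  lambda roi: {**roi, "start_y": roi["start_y"] + ROI_STEP},
}

def apply_remote_key(roi_attributes: dict, key_code: str) -> dict:
    """Look up the adjustment instruction for the received key code and apply it."""
    instruction = ADJUSTMENT_INSTRUCTIONS.get(key_code)
    return instruction(roi_attributes) if instruction else roi_attributes
```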
Optionally, an HTTP server is built into the video monitor and an interactive attribute information setting service runs on it. The attribute information setting service can then be accessed through the browser of a tablet device, and the attribute information is set or adjusted through the interactive page provided by the service.
It should be understood that the HTTP server here refers to a server-side service program running on the video monitor rather than to separate server hardware.
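For illustration only, a minimal sketch of such an attribute information setting service, built with Python's standard http.server module and exchanging the attribute information as JSON, is given below; the port number, the JSON format, and the absence of the interactive HTML page are assumptions of the sketch.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Current ROI attribute information (start coordinate and size); the values are illustrative.
roi_attributes = {"start_x": 0, "start_y": 0, "width": 3840, "height": 2160}

class AttributeSettingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Return the current attribute information (a real service would also serve an interactive page).
        self._send_json(roi_attributes)

    def do_POST(self):
        # Accept new attribute information posted by the browser of the tablet device.
        length = int(self.headers.get("Content-Length", 0))
        roi_attributes.update(json.loads(self.rfile.read(length)))
        self._send_json(roi_attributes)

    def _send_json(self, payload):
        body = json.dumps(payload).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Port 8080 is an assumption; the monitor would expose whichever port it is configured with.
    HTTPServer(("0.0.0.0", 8080), AttributeSettingHandler).serve_forever()
```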
Alternatively, in the case where the display screen of the video monitor is a touch display screen supporting touch interaction, the attribute information is set or adjusted by interaction information of the touch display screen.
For example, when an area for adjusting the size of the region of interest is provided on the touch display screen, the video monitor sets or adjusts the attribute information in response to the user operating that area.
In step S240, the video monitor acquires display instruction information.
It should be understood that the display instruction information is used to indicate the display manner of the image.
It should also be understood that the information included in the display instruction information may be set according to actual requirements, and the embodiments of the present application are not limited thereto.
For example, the display instruction information may include at least one of the following: instruction information for displaying a reduced image of the original video image to be processed, instruction information for displaying both the original video image to be processed and the region of interest, and instruction information for displaying the region of interest. The instruction information for displaying the region of interest may indicate the display of one region of interest or of several regions of interest in the original video image to be processed.
It should also be understood that the display instruction information may be preset or may be input by the user in real time, and the embodiment of the present application is not limited thereto.
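For illustration only, the three forms of display instruction information listed above could be represented as a simple enumeration in Python; the type and member names are hypothetical.

```python
from enum import Enum, auto

class DisplayInstruction(Enum):
    """The display modes that the display instruction information may indicate."""
    REDUCED_ORIGINAL = auto()     # display a reduced image of the whole original video image
    ORIGINAL_WITH_ROI = auto()    # display the original image together with the region(s) of interest
    REGION_OF_INTEREST = auto()   # display one or more regions of interest point-to-point
```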
And step S250, the video monitor processes the image of the region of interest and/or the original video image to be processed according to the display instruction information to generate an image to be displayed.
It should be understood that, the specific manner in which the video monitor processes the image of the region of interest and/or the original video image to be processed according to the display instruction information to generate the image to be displayed may be set according to actual requirements, and the embodiment of the present application is not limited thereto.
Alternatively, the video monitor may process the image of the region of interest point-to-point into an image of the first target resolution, which serves as the image to be displayed. Here, the first target resolution is the resolution of the display device.
For example, in the case where the resolution of the image of the region of interest is 3840 × 2160 and the resolution of the display device is also 3840 × 2160, the video monitor may point-to-point process the image of the region of interest into an image of the resolution of 3840 × 2160.
It should be understood that the specific resolution of the display device may be set according to actual requirements, and the embodiments of the present application are not limited thereto.
For example, in the case where the resolution of the original video image is 7680 × 4320 and the resolution of the image of the region of interest is 3840 × 2160 or 1920 × 1080, the resolution of the display device may be 3840 × 2160 or 1920 × 1080 or other resolutions. Here, the other resolution may be equal to or less than 7680 × 4320 and equal to or more than the resolution of the image of the region of interest.
As another example, in the case where the resolution of the original video image is 3840 × 2160 and the resolution of the image of the region of interest is 1920 × 1080, the resolution of the display device may be 1920 × 1080 or other resolutions. The other resolution here may be equal to or less than 3840 × 2160 and equal to or more than the resolution of the image of the region of interest.
The first target resolution may also be smaller than the resolution of the display device, in which case the point-to-point image of the region of interest does not fill the entire screen. For example, the input video is an 8K video with an original resolution of 7680 × 4320 and the resolution of the display device/screen is 3840 × 2160. If the size of the region of interest is set to 3840 × 2160, the image of the region of interest fills the display. If the size of the region of interest is instead set to 1920 × 1080, the point-to-point picture of the region of interest occupies only 1/4 of the display screen; the remaining 3/4 of the screen not covered by the region of interest may display a thumbnail of the original video image, so that the user can keep track of the composition of the whole video frame.
Whether or not the point-to-point image of the region of interest fills the display, compared with displaying the overall reduced image of the original video image it presents the local picture of the region of interest in a locally enlarged manner. When the position of the region of interest on the original image is adjusted with the keys or wheel on the monitor, or with the monitor's remote controller, the visual effect is that of an enlarged display area roaming over the original image.
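For illustration only, the following Python sketch shows one way such a point-to-point view with a thumbnail background could be composed, and how the region of interest could be roamed across the original image; the default resolutions, the top-left placement of the region of interest, and the OpenCV/NumPy calls are assumptions of the sketch.

```python
import cv2
import numpy as np

def compose_point_to_point_view(frame: np.ndarray, roi: dict,
                                display_w: int = 3840, display_h: int = 2160) -> np.ndarray:
    """Show the ROI pixel-for-pixel; screen area it does not cover shows a thumbnail of the whole frame.

    Assumes the ROI size does not exceed the display resolution.
    """
    # Background: the whole original image reduced to the display resolution (the thumbnail).
    canvas = cv2.resize(frame, (display_w, display_h), interpolation=cv2.INTER_AREA)
    # Foreground: the ROI copied point-to-point (no scaling), anchored at the top-left of the screen.
    x, y, w, h = roi["start_x"], roi["start_y"], roi["width"], roi["height"]
    canvas[0:h, 0:w] = frame[y:y + h, x:x + w]
    return canvas

def roam(roi: dict, dx: int, dy: int, frame_w: int = 7680, frame_h: int = 4320) -> dict:
    """Move the ROI across the original image (roaming), keeping it inside the frame."""
    roi["start_x"] = int(np.clip(roi["start_x"] + dx, 0, frame_w - roi["width"]))
    roi["start_y"] = int(np.clip(roi["start_y"] + dy, 0, frame_h - roi["height"]))
    return roi
```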
Alternatively, the video monitor may reduce the original video image to be processed into an overall reduced image of the original video image at a second target resolution, where the second target resolution is the resolution of the display device or of the display screen.
For example, in the case where the original video image is an 8K video image, the video monitor may reduce the entire 8K video image to a 4K video image.
For another example, in the case where the original video image is an 8K video image, the video monitor may reduce the entire 8K video image into a thumbnail image, so that the thumbnail image may be displayed on the display screen of the video monitor.
It should be understood that the specific resolution of the display screen may be set according to actual requirements, and the embodiments of the present application are not limited thereto.
Further, when the second target resolution is the resolution of the display device, an image of at least one region of interest, whose size is equal to or smaller than the resolution of the display device, may be superimposed on the overall reduced image of the original video image.
That is, the embodiments of the present application may display an image to be displayed in a manner similar to picture-in-picture.
It should be understood that the overlapping position of the image of the region of interest on the image of the whole reduced original video image may also be set according to actual requirements, and the embodiment of the present application is not limited thereto.
For example, an image of the region of interest may be superimposed on the lower right corner of the entire reduced image of the original video image.
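For illustration only, a minimal Python sketch of this picture-in-picture style superposition is given below; the lower-right placement and the default display resolution follow the example above, and the OpenCV/NumPy calls are assumptions of the sketch.

```python
import cv2
import numpy as np

def overlay_roi_on_reduced_frame(frame: np.ndarray, roi_image: np.ndarray,
                                 display_w: int = 3840, display_h: int = 2160) -> np.ndarray:
    """Picture-in-picture style display: the whole frame reduced to the display
    resolution, with the ROI image superimposed on its lower-right corner."""
    reduced = cv2.resize(frame, (display_w, display_h), interpolation=cv2.INTER_AREA)
    h, w = roi_image.shape[:2]   # assumed to be no larger than the display resolution
    reduced[display_h - h:display_h, display_w - w:display_w] = roi_image
    return reduced
```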
Alternatively, in the case where the number of the regions of interest is plural, the video monitor may stitch images of the plural regions of interest to generate an image to be displayed.
It should be understood that the splicing manner of the images of the multiple regions of interest may also be set according to actual requirements, and the embodiment of the present application is not limited thereto.
For example, when there are four regions of interest and their images are rectangular, the four regions of interest may be stitched into one rectangular image to be displayed. The image to be displayed then contains two rows of images, each row formed by the images of two regions of interest placed side by side.
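For illustration only, stitching four equally sized regions of interest into two rows of two images could be sketched with NumPy as follows; equal ROI sizes are assumed.

```python
import numpy as np

def stitch_regions_of_interest(roi_images: list) -> np.ndarray:
    """Stitch four equally sized ROI images into one rectangular image:
    two rows, each made of two ROI images placed side by side."""
    top = np.hstack(roi_images[0:2])
    bottom = np.hstack(roi_images[2:4])
    return np.vstack([top, bottom])

# Example (illustrative): four 1920x1080 ROIs stitched into a single 3840x2160 image to be displayed.
```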
Optionally, in the case that the number of the regions of interest is multiple, the video monitor acquires measurement information of an original video image to be processed, and superimposes the measurement information on the image to be displayed to generate an image to be displayed containing the measurement information.
It should be understood that the specific form of the image to be displayed according to the embodiment of the present application may be set according to actual requirements, and the embodiment of the present application is not limited thereto.
For example, the image to be displayed in the embodiment of the present application may be an original video image, or may be an image obtained by reducing the original video image in its entirety, or may be an image generated by processing the original video image to be processed and the region-of-interest image.
It should also be understood that the specific information included in the measurement information may also be set according to actual requirements, and the embodiments of the present application are not limited thereto.
For example, the measurement information may include at least one of the following information: spectral information, luminance information, and color information.
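For illustration only, the following Python sketch superimposes a very simple piece of measurement information (average luminance computed with Rec. 709 weights) on the image to be displayed; a real monitor would typically render richer spectral/waveform and color measurements, so this is only a simplified stand-in.

```python
import cv2
import numpy as np

def overlay_measurement_info(image_to_display: np.ndarray) -> np.ndarray:
    """Superimpose simple measurement information (average luminance) on the image to be displayed."""
    b, g, r = cv2.split(image_to_display.astype(np.float32))
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b          # Rec. 709 luminance weights
    text = f"Y avg: {luma.mean():.1f}"
    annotated = image_to_display.copy()
    cv2.putText(annotated, text, (32, 64), cv2.FONT_HERSHEY_SIMPLEX,
                1.5, (0, 255, 0), 3, cv2.LINE_AA)
    return annotated
```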
And step S260, the video monitor displays the image to be displayed on a display screen of the video monitor and/or sends the image to be displayed to a display device connected with the video monitor.
In order to facilitate understanding of the embodiments of the present application, the following description will be given by way of specific examples.
Alternatively, the video monitor may display the image to be displayed on a display screen of the video monitor.
For example, in the case where the image to be displayed is an overall thumbnail of the original video image, the image to be displayed may be displayed on the display screen.
Alternatively, the video monitor may send the image to be displayed to the display device. Correspondingly, the display device may receive the image to be displayed transmitted by the video monitor.
For example, when the image to be displayed is the overall reduced image of the original video image at the second target resolution, the video monitor may send the image to be displayed to the display device.
Alternatively, the video monitor may display the image to be displayed on a display screen of the video monitor, and may also transmit the image to be displayed to the display device.
For example, in the case where the image to be displayed is an overall thumbnail of the original video image, the video monitor may display the image to be displayed on a display screen of the video monitor, and may also transmit the image to be displayed to the display apparatus.
Therefore, by intercepting the image of the region of interest from the original video image to be processed, processing the image of the region of interest and/or the original video image to be processed according to the display instruction information to generate the image to be displayed, and then displaying the image to be displayed on the display screen of the video monitor and/or sending it to a display device connected to the video monitor, the embodiment of the application can realize pixel-level monitoring of the original video image.
It should be understood that the ultra high definition video monitoring method supporting local screen enlargement and roaming is only an example, and those skilled in the art may make various modifications according to the above method, and the solution after the modification is also within the scope of the embodiments of the present application.
Moreover, while the operations of the method of the invention are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Rather, the steps depicted in the flowcharts may change the order of execution. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
Referring to fig. 3, fig. 3 is a block diagram illustrating the structure of an ultra-high-definition video monitoring apparatus 300 supporting local enlargement and roaming of a picture according to an embodiment of the present application. It should be understood that the ultra-high-definition video monitoring apparatus 300 can perform the steps of the above method embodiment; its specific functions may be found in the description above, and detailed descriptions are omitted here where appropriate to avoid repetition. The ultra-high-definition video monitoring apparatus 300 includes at least one software functional module that can be stored in a memory in the form of software or firmware, or solidified in the operating system (OS) of the apparatus. Specifically, the ultra-high-definition video monitoring apparatus 300 is applicable to a video monitor, and the ultra-high-definition video monitoring apparatus 300 includes:
an obtaining module 310, configured to obtain display instruction information and an original video image to be processed including an area of interest, respectively, where the original video image to be processed is each frame image decoded from a video to be processed frame by frame; an intercepting module 320, configured to intercept an image of a region of interest from an original video image to be processed; the processing module 330 is configured to process the image of the region of interest and/or the original video image to be processed according to the display instruction information to generate an image to be displayed; the display sending module 340 is configured to display the image to be displayed on a display screen of the video monitor and/or send the image to be displayed to a display device connected to the video monitor.
In one possible embodiment, the obtaining module 310 is configured to obtain the original video image to be processed through a single 48G serial digital interface (SDI) signal input interface on the video monitor.
In one possible embodiment, the obtaining module 310 is configured to obtain the original video image to be processed through four 12G serial digital interface (SDI) signal input interfaces on the video monitor.
In one possible embodiment, the obtaining module 310 is configured to obtain the original video image to be processed through a single high-definition multimedia interface HDMI2.1 input interface on the video monitor.
In one possible embodiment, the obtaining module 310 is configured to obtain the original video image to be processed through four high-definition multimedia interface HDMI2.0 input interfaces on the video monitor.
In a possible embodiment, the obtaining module 310 is further configured to obtain attribute information corresponding to the region of interest, where the attribute information includes a start coordinate of the region of interest on the original video image and a size of the region of interest; and the intercepting module 320 is configured to intercept an image of the region of interest from the original video image to be processed according to the attribute information.
In one possible embodiment, the obtaining module 310 is further configured to set or adjust the attribute information through a key on the video monitor.
In a possible embodiment, the obtaining module 310 is further configured to: receiving key information sent by a remote controller of a video monitor; and setting or adjusting the attribute information according to the key information.
In a possible embodiment, an HTTP server is built in the video monitor, an interactive attribute information setting service is run on the HTTP server, and the obtaining module 310 is further configured to: and accessing the attribute information setting service on the HTTP server through the browser of the tablet device, and setting or adjusting the attribute information through an interactive page provided by the attribute information setting service.
In a possible embodiment, the display screen is a touch display screen supporting touch interaction, and the obtaining module 310 is further configured to: and setting or adjusting the attribute information through the interactive information of the touch display screen.
In one possible embodiment, the display instruction information includes at least one of the following: instruction information for displaying a reduced image of the original video image to be processed, instruction information for displaying both the original video image to be processed and the region of interest, and instruction information for displaying the region of interest.
In one possible embodiment, the processing module 330 is configured to perform a point-to-point processing on the image of the region of interest into an image of a first target resolution, wherein the first target resolution is a resolution of the display device.
In one possible embodiment, the processing module 330 is configured to reduce the original video image to be processed into a whole reduced image of the original video image with a second target resolution, where the second target resolution is a resolution of a display device or a display screen.
In a possible embodiment, the number of the regions of interest is at least one, and the processing module 330 is further configured to superimpose an image of the at least one region of interest on the entire reduced image of the original video image, wherein the size of the image of the region of interest is equal to or smaller than the resolution of the display device.
In a possible embodiment, the number of the regions of interest is multiple, and the processing module 330 is configured to stitch the images of the multiple regions of interest to generate an image to be displayed.
In one possible embodiment, the number of regions of interest is one or more, and the processing module 330 is configured to: acquiring measurement information of an original video image to be processed; and superposing the measurement information on the image to be displayed generated after the original video image to be processed and the image of the region of interest are processed so as to generate the image to be displayed containing the measurement information.
In one possible embodiment, the measurement information includes at least one of the following information: spectral information, luminance information, and color information.
In one possible embodiment, the resolution of the original video image is 7680 × 4320, the resolution of the image of the region of interest is 3840 × 2160 or 1920 × 1080, and the resolution of the display device is 3840 × 2160 or 1920 × 1080 or other resolutions.
In one possible embodiment, the resolution of the original video image is 3840 × 2160, the resolution of the image of the region of interest is 1920 × 1080, and the resolution of the display device is 1920 × 1080 or other resolutions.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method, and will not be described in too much detail herein.
Fig. 4 shows a block diagram of an electronic device 400 according to an embodiment of the present application. The electronic device 400 may include a processor 410, a communication interface 420, a memory 430, and at least one communication bus 440. The communication bus 440 is used to enable direct connection and communication between these components. The communication interface 420 in the embodiment of the present application is used for communicating signaling or data with other devices. The processor 410 may be an integrated circuit chip having signal processing capabilities. The processor 410 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor 410 may be any conventional processor or the like.
The memory 430 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 430 stores computer-readable instructions that, when executed by the processor 410, enable the electronic device 400 to perform the steps of the method embodiment of fig. 2 described above.
The electronic device 400 may further include a memory controller, an input-output unit, an audio unit, and a display unit.
The memory 430, the memory controller, the processor 410, the peripheral interface, the input/output unit, the audio unit, and the display unit are electrically connected to each other directly or indirectly to realize data transmission or interaction. For example, these components may be electrically coupled to each other via one or more communication buses 440. The processor 410 is used to execute executable modules stored in the memory 430, such as software functional modules or computer programs included in the electronic device 400.
The input and output unit is used for providing input data for a user to realize the interaction of the user and the server (or the local terminal). The input/output unit may be, but is not limited to, a mouse, a keyboard, and the like.
The audio unit provides an audio interface to the user, which may include one or more microphones, one or more speakers, and audio circuitry.
The display unit provides an interactive interface (e.g., a user interface) between the electronic device and a user, or is used to display image data for the user's reference. In this embodiment, the display unit may be a liquid crystal display or a touch display. A touch display may be a capacitive or resistive touch screen supporting single-point and multi-point touch operations, meaning that the touch display can sense touch operations generated simultaneously at one or more positions on it and passes the sensed touch operations to the processor for calculation and processing.
It will be appreciated that the configuration shown in fig. 4 is merely illustrative and that the electronic device 400 may include more or fewer components than shown in fig. 4 or may have a different configuration than shown in fig. 4. The components shown in fig. 4 may be implemented in hardware, software, or a combination thereof.
The present application also provides a storage medium having a computer program stored thereon, which, when executed by a processor, performs the method of the method embodiments.
The present application also provides a computer program product which, when run on a computer, causes the computer to perform the method of the method embodiments.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the system described above may refer to the corresponding process in the foregoing method, and will not be described in too much detail herein.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the portion thereof that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principle of the present application shall fall within the protection scope of the present application. It should also be noted that like reference numbers and letters refer to like items in the figures; once an item is defined in one figure, it need not be further defined or explained in subsequent figures.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that can be readily conceived by a person skilled in the art within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (16)
1. An ultra high definition video monitoring method supporting local enlargement and roaming of a picture, wherein the ultra high definition video monitoring method is applied to a video monitor, and the ultra high definition video monitoring method comprises the following steps:
respectively acquiring display instruction information and an original video image to be processed containing a region of interest, wherein the original video image to be processed is each frame image decoded frame by frame from a video to be processed;
intercepting an image of the region of interest from the original video image to be processed;
processing the image of the region of interest and/or the original video image to be processed according to the display instruction information to generate an image to be displayed;
displaying the image to be displayed on a display screen of the video monitor and/or sending the image to be displayed to a display device connected with the video monitor;
the processing the image of the region of interest and/or the original video image to be processed according to the display instruction information to generate an image to be displayed includes:
superimposing an image of at least one region of interest on the reduced image of the original video image, wherein the size of the image of the region of interest is equal to or smaller than the resolution of the display device;
the processing the image of the region of interest and/or the original video image to be processed according to the display instruction information to generate an image to be displayed includes:
reducing the original video image to be processed into a whole reduced image of the original video image with a second target resolution, wherein the second target resolution is the resolution of the display equipment or the display screen;
the number of the regions of interest is one or more, and the processing the image of the region of interest and/or the original video image to be processed according to the display instruction information to generate an image to be displayed includes:
acquiring measurement information of the original video image to be processed;
and superimposing the measurement information on the image to be displayed generated after the original video image to be processed and the image of the region of interest are processed, so as to generate the image to be displayed containing the measurement information.
2. The ultra high definition video monitoring method according to claim 1, wherein said separately acquiring display instruction information and a to-be-processed original video image containing a region of interest comprises:
and acquiring the original video image to be processed through one 48G serial digital interface (SDI) signal input interface on the video monitor.
3. The ultra high definition video monitoring method according to claim 1, wherein said separately acquiring display instruction information and a to-be-processed original video image containing a region of interest comprises:
and acquiring the original video image to be processed through a four-path 12G serial digital interface (SDI) signal input interface on the video monitor.
4. The ultra high definition video monitoring method according to claim 1, wherein said separately acquiring display instruction information and a to-be-processed original video image containing a region of interest comprises:
and acquiring the original video image to be processed through a high-definition multimedia interface (HDMI) 2.1 input interface on the video monitor.
5. The ultra high definition video monitoring method according to claim 1, wherein said separately acquiring display instruction information and a to-be-processed original video image containing a region of interest comprises:
and acquiring the original video image to be processed through a four-path high-definition multimedia interface (HDMI 2.0) input interface on the video monitor.
6. The ultra high definition video monitoring method according to claim 1, wherein the step of intercepting the image of the region of interest from the original video image to be processed comprises:
acquiring attribute information corresponding to the region of interest, wherein the attribute information comprises a starting coordinate of the region of interest on the original video image and the size of the region of interest;
and intercepting the image of the region of interest from the original video image to be processed according to the attribute information.
7. The ultra high definition video monitoring method according to claim 6, wherein the obtaining of the attribute information corresponding to the region of interest includes:
setting or adjusting the attribute information through a key on the video monitor.
8. The ultra high definition video monitoring method according to claim 6, wherein the obtaining of the attribute information corresponding to the region of interest includes:
receiving key information sent by a remote controller of the video monitor;
and setting or adjusting the attribute information according to the key information.
9. The ultra high definition video monitoring method according to claim 6, wherein an HTTP server is built in the video monitor, an interactive attribute information setting service is run on the HTTP server, and said obtaining the attribute information corresponding to the region of interest comprises:
and accessing the attribute information setting service on the HTTP server through a browser of a tablet device, and setting or adjusting the attribute information through an interactive page provided by the attribute information setting service.
10. The ultra high definition video monitoring method according to claim 6, wherein the display screen is a touch display screen supporting touch interaction, and the acquiring the attribute information corresponding to the region of interest includes:
and setting or adjusting the attribute information through the interactive information of the touch display screen.
11. The ultra high definition video monitoring method according to claim 1, wherein the display instruction information includes at least one of the following: instruction information for displaying the reduced image of the original video image to be processed, instruction information for displaying both the original video image to be processed and the region of interest, and instruction information for displaying the region of interest.
12. The ultra high definition video monitoring method according to claim 1, wherein the processing the image of the region of interest and/or the original video image to be processed according to the display instruction information to generate an image to be displayed comprises:
and processing the image of the region of interest into an image with a first target resolution in a point-to-point mode, wherein the first target resolution is the resolution of a display device.
13. The ultra high definition video monitoring method according to claim 1, wherein the number of the regions of interest is plural, and the processing the image of the region of interest and/or the original video image to be processed according to the display instruction information to generate an image to be displayed comprises:
and splicing the images of the multiple interested areas to generate the image to be displayed.
14. The ultra high definition video monitoring method according to claim 1, wherein the measurement information comprises at least one of the following information: spectral information, luminance information, and color information.
15. The ultra high definition video monitoring method according to claim 1, wherein the resolution of the original video image is 7680 x 4320, the resolution of the image of the region of interest is 3840 x 2160 or 1920 x 1080, and the resolution of the display device is 3840 x 2160, 1920 x 1080, or another resolution.
16. The ultra high definition video monitoring method according to claim 1, wherein the resolution of the original video image is 3840 x 2160, the resolution of the image of the region of interest is 1920 x 1080, and the resolution of the display device is 1920 x 1080 or another resolution.
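To make the processing steps in the claims above easier to follow, here is a minimal Python sketch of the claimed pipeline: cutting out a region of interest by its start coordinate and size (claim 6), mapping it point-to-point onto a first target resolution (claim 12), reducing the whole frame to the display resolution, superimposing the region of interest on that reduced picture (claim 1), and stitching several regions of interest together (claim 13). Everything here is the editor's illustrative assumption; the function names, the use of NumPy/OpenCV, and the RGB uint8 frame layout are not specified by the patent.

```python
# Illustrative sketch only (editor's assumptions, not the patent's reference
# implementation). Frames are H x W x 3 uint8 NumPy arrays in RGB order.
import numpy as np
import cv2


def crop_roi(frame, start_xy, size_wh):
    """Cut the region of interest out of a decoded frame (cf. claim 6):
    start_xy is the ROI's top-left (x, y) on the original image and
    size_wh its (width, height)."""
    x, y = start_xy
    w, h = size_wh
    return frame[y:y + h, x:x + w].copy()


def roi_point_to_point(roi, target_wh):
    """Place the ROI on a canvas of the first target resolution without
    resampling, i.e. one source pixel per display pixel (cf. claim 12)."""
    tw, th = target_wh
    canvas = np.zeros((th, tw, 3), dtype=roi.dtype)
    h, w = roi.shape[:2]
    ch, cw = min(h, th), min(w, tw)
    y0, x0 = (th - ch) // 2, (tw - cw) // 2
    canvas[y0:y0 + ch, x0:x0 + cw] = roi[:ch, :cw]
    return canvas


def downscale_whole_frame(frame, target_wh):
    """Reduce the full original image to the second target resolution,
    i.e. the resolution of the display device or screen (cf. claim 1)."""
    return cv2.resize(frame, target_wh, interpolation=cv2.INTER_AREA)


def superimpose(base, overlay, top_left_xy):
    """Superimpose an ROI image on the reduced whole-frame image (cf. claim 1).
    The caller must ensure the overlay fits inside the base image."""
    x, y = top_left_xy
    h, w = overlay.shape[:2]
    out = base.copy()
    out[y:y + h, x:x + w] = overlay
    return out


def stitch_rois(rois, grid_cols):
    """Stitch several equally sized ROI images into one picture (cf. claim 13).
    Assumes len(rois) is a multiple of grid_cols."""
    rows = [np.hstack(rois[i:i + grid_cols]) for i in range(0, len(rois), grid_cols)]
    return np.vstack(rows)
```

Under these assumptions, the picture-in-picture view of claim 1 is simply downscale_whole_frame followed by superimpose, and roaming amounts to calling crop_roi with a start coordinate that moves across the 8K frame from one decoded frame to the next.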
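Claims 1 and 14 also superimpose measurement information (spectral, luminance, and color information) on the image to be displayed. The sketch below shows one plausible way to compute simple luminance and color statistics for a region of interest and burn them in as an on-screen label; the BT.709 luma weights, the RGB channel order, and the use of cv2.putText are assumptions rather than anything the patent prescribes.

```python
# Hedged sketch of the measurement overlay of claims 1 and 14. The luma
# formula, channel order, and drawing API are the editor's assumptions.
import numpy as np
import cv2


def measure(roi):
    """Compute simple luminance and color statistics for an ROI image."""
    rgb = roi.astype(np.float32)  # assumes RGB channel order
    luma = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    return {
        "luma_mean": float(luma.mean()),
        "luma_peak": float(luma.max()),
        "rgb_mean": [float(c) for c in rgb.reshape(-1, 3).mean(axis=0)],
    }


def overlay_measurements(image, stats, origin=(16, 32)):
    """Burn the measurement text into the image to be displayed."""
    text = "Y mean {:.1f}  Y peak {:.1f}".format(stats["luma_mean"], stats["luma_peak"])
    out = image.copy()
    cv2.putText(out, text, origin, cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 255, 255), 2)
    return out
```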
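Claim 9 has the video monitor run a built-in HTTP server with an interactive attribute information setting service, so the region of interest's start coordinate and size can be edited from a browser, for example on a tablet. The following is a minimal standard-library sketch; the port, the single-endpoint layout, and the JSON field names x, y, width, and height are the editor's assumptions.

```python
# Hedged sketch of the attribute-setting service of claim 9, using only the
# Python standard library; the endpoint behaviour and JSON schema are assumed.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# ROI attribute information: start coordinate plus size (cf. claim 6).
ROI_ATTRS = {"x": 0, "y": 0, "width": 1920, "height": 1080}


class RoiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Return the current ROI attributes so an interactive page in a
        # browser can display and edit them.
        body = json.dumps(ROI_ATTRS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # Accept updated attributes, e.g. {"x": 640, "y": 360, "width": 3840, "height": 2160}.
        length = int(self.headers.get("Content-Length", 0))
        ROI_ATTRS.update(json.loads(self.rfile.read(length) or b"{}"))
        self.send_response(204)
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), RoiHandler).serve_forever()
```

A real monitor would additionally validate the posted values against the source resolution and hand the updated attributes to the cropping step before the next decoded frame is processed.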
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010860576.0A CN111741274B (en) | 2020-08-25 | 2020-08-25 | Ultrahigh-definition video monitoring method supporting local amplification and roaming of picture |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010860576.0A CN111741274B (en) | 2020-08-25 | 2020-08-25 | Ultrahigh-definition video monitoring method supporting local amplification and roaming of picture |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111741274A CN111741274A (en) | 2020-10-02 |
CN111741274B (en) | 2020-12-29
Family
ID=72658842
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010860576.0A Active CN111741274B (en) | 2020-08-25 | 2020-08-25 | Ultrahigh-definition video monitoring method supporting local amplification and roaming of picture |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111741274B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112308780A (en) * | 2020-10-30 | 2021-02-02 | 北京字跳网络技术有限公司 | Image processing method, device, equipment and storage medium |
CN115134633B (en) * | 2021-03-26 | 2024-04-26 | 华为技术有限公司 | Remote video method and related device |
CN113099254B (en) * | 2021-03-31 | 2023-10-17 | 深圳市企鹅网络科技有限公司 | Online teaching method, system, equipment and storage medium for regional variable resolution |
CN113645370B (en) * | 2021-08-16 | 2024-06-18 | 上海欧太医疗器械有限公司 | High-definition electronic endoscope image processor based on micro CMOS |
CN113891145B (en) * | 2021-11-12 | 2024-01-30 | 北京中联合超高清协同技术中心有限公司 | Super-high definition video preprocessing main visual angle roaming playing system and mobile terminal |
CN116723282B (en) * | 2023-08-07 | 2023-10-20 | 成都卓元科技有限公司 | Ultrahigh-definition-to-high-definition multi-machine intelligent video generation method |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108933920A (en) * | 2017-05-25 | 2018-12-04 | 中兴通讯股份有限公司 | A kind of output of video pictures, inspection method and device |
CN110708586A (en) * | 2019-09-11 | 2020-01-17 | 南京图格医疗科技有限公司 | Medical image processing method |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101383969B (en) * | 2008-10-27 | 2011-11-09 | 杭州华三通信技术有限公司 | Method, decoder and main control module for enlarging local region of image |
KR101899877B1 (en) * | 2012-04-04 | 2018-09-19 | 삼성전자주식회사 | Apparatus and method for improving quality of enlarged image |
CN102801963B (en) * | 2012-08-27 | 2015-03-11 | 北京尚易德科技有限公司 | Electronic PTZ method and device based on high-definition digital camera monitoring |
CN103929627B (en) * | 2014-05-08 | 2018-01-30 | 深圳英飞拓科技股份有限公司 | Video monitoring interactive approach and device based on Dptz |
CN104980697A (en) * | 2015-04-28 | 2015-10-14 | 杭州普维光电技术有限公司 | Video transmission method for web camera |
JP6714819B2 (en) * | 2016-01-13 | 2020-07-01 | 株式会社リコー | Image display system, information processing device, image display method, and image display program |
CN106534972A (en) * | 2016-12-12 | 2017-03-22 | 广东威创视讯科技股份有限公司 | Method and device for nondestructive zoomed display of local video |
2020-08-25: Application CN202010860576.0A filed in China; granted as CN111741274B (status: Active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108933920A (en) * | 2017-05-25 | 2018-12-04 | 中兴通讯股份有限公司 | A kind of output of video pictures, inspection method and device |
CN110708586A (en) * | 2019-09-11 | 2020-01-17 | 南京图格医疗科技有限公司 | Medical image processing method |
Also Published As
Publication number | Publication date |
---|---|
CN111741274A (en) | 2020-10-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111741274B (en) | Ultrahigh-definition video monitoring method supporting local amplification and roaming of picture | |
WO2022100677A1 (en) | Picture preview method and apparatus, and storage medium and electronic device | |
KR102463304B1 (en) | Video processing method and device, electronic device, computer-readable storage medium and computer program | |
US10110821B2 (en) | Image processing apparatus, method for controlling the same, and storage medium | |
CN110569013B (en) | Image display method and device based on display screen | |
JP5460793B2 (en) | Display device, display method, television receiver, and display control device | |
US7821575B2 (en) | Image processing apparatus, receiver, and display device | |
US20170178290A1 (en) | Display device, display system, and recording medium | |
JP2015114798A (en) | Information processor, information processing method, and program | |
US20190045109A1 (en) | Information processing apparatus, information processing method, and storage medium | |
US20210035537A1 (en) | Full-screen displays | |
EP3582504A1 (en) | Image processing method, device, and terminal device | |
US20180091852A1 (en) | Systems and methods for performing distributed playback of 360-degree video in a plurality of viewing windows | |
KR100846798B1 (en) | Method and system for displaying pixilated input image | |
CN112188269B (en) | Video playing method and device and video generating method and device | |
JP2023550764A (en) | Methods, devices, smart terminals and media for creating panoramic images based on large displays | |
US11064103B2 (en) | Video image transmission apparatus, information processing apparatus, system, information processing method, and recording medium | |
US10440266B2 (en) | Display apparatus and method for generating capture image | |
JP2007201816A (en) | Video image display system and video image receiver | |
CN110572411A (en) | Method and device for testing video transmission quality | |
TWI493502B (en) | Processing method for image rotating and apparatus thereof | |
US11144273B2 (en) | Image display apparatus having multiple operation modes and control method thereof | |
JP7468391B2 (en) | Image capture device and image capture processing method | |
JP6917800B2 (en) | Image processing device and its control method and program | |
JP6889622B2 (en) | Image processing device and its control method and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |