CN111836028A - Image processing method, readable storage medium, and electronic device - Google Patents

Image processing method, readable storage medium, and electronic device

Info

Publication number
CN111836028A
CN111836028A (application CN202010648383.9A)
Authority
CN
China
Prior art keywords
image data
resolution
sub
display area
picture block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010648383.9A
Other languages
Chinese (zh)
Other versions
CN111836028B (en)
Inventor
陈恒
黄秀瑜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Fanxiu Technology Co.,Ltd.
Original Assignee
Beijing Yizhicheng Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yizhicheng Technology Co ltd filed Critical Beijing Yizhicheng Technology Co ltd
Priority to CN202010648383.9A priority Critical patent/CN111836028B/en
Publication of CN111836028A publication Critical patent/CN111836028A/en
Application granted granted Critical
Publication of CN111836028B publication Critical patent/CN111836028B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00: Details of colour television systems
    • H04N 9/12: Picture reproducers
    • H04N 9/31: Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N 9/3141: Constructional details thereof
    • H04N 9/3147: Multi-projection systems
    • H04N 9/3179: Video signal processing therefor
    • H04N 9/3188: Scale or resolution adjustment

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The embodiment of the invention discloses an image processing method, a readable storage medium and an electronic device. The method determines target image data, divides the display area corresponding to the physical resolution of an image output interface into a plurality of picture block areas according to the physical resolution of an image output device, obtains the sub-image data corresponding to each picture block area from the target image data in a predetermined direction, and splices the sub-image data according to the positions of the corresponding picture block areas in the display area to obtain intermediate image data. The intermediate image data is transmitted through the image output interface so that each sub-image data in the intermediate image data is transmitted, in a predetermined sequence, to at least one image output device for output. The embodiment of the invention preserves image detail and fully utilizes the physical resolution of the image output interface when projecting an ultra-wide or ultra-long image.

Description

Image processing method, readable storage medium, and electronic device
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an image processing method, a readable storage medium, and an electronic device.
Background
At present, more and more scenes require the display of an ultra-wide picture. When projecting such a picture, the prior art usually either compresses the picture or projects it through a plurality of multimedia interfaces. The former approach loses a large amount of picture detail, while the latter increases equipment cost and requires a complex multi-machine communication protocol to keep the operations synchronized.
Disclosure of Invention
In view of this, embodiments of the present invention provide an image processing method, a readable storage medium, and an electronic device, which aim to preserve the detail of the image picture, without adding extra devices, when projecting an ultra-wide or ultra-long image.
In a first aspect, an embodiment of the present invention provides an image processing method, where the method includes:
determining target image data;
dividing a display area corresponding to a second resolution according to a first resolution to determine at least one picture block area, wherein the first resolution is a physical resolution of an image output device, the second resolution is a physical resolution of an image output interface, and the first resolution is smaller than the second resolution;
determining sub-image data corresponding to each of the block areas from the target image data in a predetermined direction according to each of the block areas;
splicing corresponding sub-image data according to the position of each picture block area in the display area to determine intermediate image data;
transmitting the intermediate image data through an image output interface;
and respectively transmitting the sub-image data in the intermediate image data to at least one image output device according to a preset sequence, so as to output the corresponding sub-image data through each image output device, and fusing to obtain the output image data corresponding to the target image data.
Further, the determining the target image data includes:
creating a frame buffer area, wherein the resolution of the frame buffer area is a preset third resolution;
and rewriting the image data to the frame buffer area to obtain the target image data with the resolution of the third resolution.
Further, the dividing the display area corresponding to the second resolution according to the first resolution to determine at least one of the tile areas specifically includes:
and dividing the display area corresponding to the second resolution to determine the maximum number of picture block areas with the resolution being the first resolution and not overlapping with each other.
Further, the dividing the display area corresponding to the second resolution according to the first resolution to determine at least one of the tile areas specifically includes:
and averagely dividing a display area corresponding to the second resolution into a plurality of picture block areas which have the same resolution and are not mutually overlapped, wherein the resolution of each picture block area is not more than the first resolution, and the sum of the resolutions of the picture block areas is the second resolution.
Further, the determining the sub-image data corresponding to each of the block areas from the target image data in the predetermined direction according to each of the block areas includes:
equally dividing the target image data in the longitudinal direction into a plurality of sub-image data equal in number to the picture block areas;
numbering each of the picture block regions according to the position in the display region and a first numbering sequence;
numbering the sub-image data according to the position in the target image data and a second numbering sequence;
it is determined that the same-numbered tile areas correspond to the sub-image data.
Further, the stitching the corresponding sub-image data according to the position of each of the tile areas in the display area to determine intermediate image data includes:
performing data processing on each sub-image data;
and splicing the sub-image data after data processing according to the position of the corresponding picture block area in the display area to determine intermediate image data.
Further, the data processing of each sub-image data specifically includes:
and performing feathering processing on the fusion edge of each sub-image data, wherein the fusion edge is the edge where each sub-image data is contacted with each other.
Further, the data processing of each sub-image data further includes:
and scaling each sub-image data according to the resolution of the corresponding picture block area.
In a second aspect, an embodiment of the present invention provides a computer-readable storage medium for storing computer program instructions, which when executed by a processor implement the method according to any one of the first aspect.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a logic processing unit, where the memory is configured to store one or more computer program instructions, where the one or more computer program instructions are executed by the processor to implement the following steps:
determining target image data;
dividing a display area corresponding to a second resolution according to a first resolution to determine at least one picture block area, wherein the first resolution is a physical resolution of an image output device, the second resolution is a physical resolution of an image output interface, and the first resolution is smaller than the second resolution;
determining sub-image data corresponding to each of the block areas from the target image data in a predetermined direction according to each of the block areas;
splicing corresponding sub-image data according to the position of each picture block area in the display area to determine intermediate image data;
transmitting the intermediate image data through an image output interface;
the logic processing unit is used for respectively transmitting each sub-image data in the intermediate image data to at least one image output device according to a preset sequence, so that the corresponding sub-image data is output through each image output device, and the output image data corresponding to the target image data is obtained through fusion.
According to the embodiment of the invention, target image data is determined; the display area of the image output interface is divided, according to the first resolution of the image output device and the second resolution of the image output interface, to obtain a plurality of picture block areas; the sub-image data corresponding to each picture block area is obtained from the target image data in a predetermined direction; and the sub-image data are spliced, according to the positions of their corresponding picture block areas in the display area, to obtain intermediate image data. The intermediate image data is transmitted through the image output interface so that each sub-image data in the intermediate image data is transmitted, in a predetermined sequence, to at least one image output device for output. The embodiment of the invention preserves image detail and fully utilizes the physical resolution of the image output interface when projecting an ultra-wide or ultra-long image.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following description of the embodiments of the present invention with reference to the accompanying drawings, in which:
FIG. 1 is a flow chart of an image processing method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating an image data rewriting process according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a partitioned display area according to an alternative implementation of the embodiment of the present invention;
FIG. 4 is a schematic diagram of a partitioned display area according to another alternative implementation of the embodiment of the present invention;
FIG. 5 is a schematic diagram of a tile area according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating sub-image data according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating stitching sub-image data according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating a target image data processing process according to an embodiment of the present invention;
fig. 9 is a schematic diagram of an electronic device according to an embodiment of the invention.
Detailed Description
The present invention will be described below based on examples, but the present invention is not limited to only these examples. In the following detailed description of the present invention, certain specific details are set forth. It will be apparent to one skilled in the art that the present invention may be practiced without these specific details. Well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the present invention.
Further, those of ordinary skill in the art will appreciate that the drawings provided herein are for illustrative purposes and are not necessarily drawn to scale.
Unless the context clearly requires otherwise, throughout the description, the words "comprise", "comprising", and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is, what is meant is "including, but not limited to".
In the description of the present invention, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present invention, "a plurality" means two or more unless otherwise specified.
The embodiment of the invention is applied to application scenes such as projecting, displaying and processing an ultra-wide image or an ultra-long image, wherein an ultra-wide image is an image whose width-to-height ratio is larger than a threshold value, and an ultra-long image is an image whose height-to-width ratio is larger than a threshold value.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention. As shown in fig. 1, the image processing method includes the steps of:
and step S100, determining target image data.
Specifically, the target image data may be determined by the ARM processor and is an ultra-wide image whose width-to-height ratio is greater than a threshold value or an ultra-long image whose height-to-width ratio is greater than a threshold value. In an embodiment of the present invention, determining the target image data may include:
and step S110, creating a frame buffer.
Specifically, the resolution of the frame buffer is a preset third resolution, and the frame buffer is created by an application service that the operating system running on the processor calls according to this third resolution. Taking an ARM processor running the Android operating system as an example, the Android operating system can create an ultra-wide frame buffer (FrameBuffer) with 7680 pixel columns and 1080 pixel rows, that is, a third resolution of 7680 × 1080, by calling the SurfaceFlinger application service in the system multimedia framework.
And step S120, rewriting the image data to the frame buffer.
Specifically, the image data is the original image data corresponding to the target image data. In the embodiment of the present invention, the display interface image of the operating system may include at least one component element, such as an add, close, move, or zoom component element of the operating system. After the image data is rewritten to the frame buffer, target image data whose resolution is the third resolution is obtained. The rewriting is performed by the operating system running on the processor: the resolution of the image data is changed, yielding the ultra-wide target image data corresponding to it. The rewriting step is described below taking an ARM processor running the Android operating system as an example. On receiving a WM-SIZE command, the Android operating system on the ARM processor edits the window corresponding to each component element in the image data and rewrites each edited component element to the frame buffer, obtaining target image data whose resolution is the third resolution. The editing process includes at least one of moving, scaling, redefining the aspect ratio of, and changing the resolution of the window corresponding to a component element.
Fig. 2 is a schematic diagram of an image data rewriting process according to an embodiment of the present invention. As shown in FIG. 2, the processor rewrites the lower-resolution image data 20 into the frame buffer to obtain target image data 21 having the same resolution as the frame buffer.
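As an illustration only (the patent implements this step with an ARM processor, Android, and the SurfaceFlinger service; the NumPy and OpenCV libraries below are assumptions made for the sketch), a frame buffer can be modeled as an array at the preset third resolution into which lower-resolution image data is rewritten. A plain resize stands in here for the window editing (moving, scaling, aspect-ratio and resolution changes) described above.

import numpy as np
import cv2  # assumption: OpenCV is available for the resize step

THIRD_RESOLUTION = (7680, 1080)  # (width, height), matching the 7680 x 1080 example

def rewrite_to_frame_buffer(image: np.ndarray) -> np.ndarray:
    """Return target image data whose resolution equals that of the frame buffer."""
    w, h = THIRD_RESOLUTION
    frame_buffer = np.zeros((h, w, 3), dtype=np.uint8)  # the created ultra-wide frame buffer
    # Stand-in for the Android window editing: rewrite the lower-resolution
    # image data into the buffer at the third resolution.
    frame_buffer[:] = cv2.resize(image, (w, h), interpolation=cv2.INTER_LINEAR)
    return frame_buffer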
Step S200, dividing the display area corresponding to the second resolution according to the first resolution.
Specifically, the display area corresponding to the second resolution is divided according to the first resolution to determine at least one picture block area. The first resolution is a physical resolution of an image output device, and is used for representing a maximum pixel row number and a maximum pixel column number of an image output by the image output device. The second resolution is a physical resolution of the image output interface and is used for representing the maximum pixel row number and the maximum pixel column number of the image output by the image output interface. The display area is a pixel area occupied by the image output interface when the resolution of the image is a second resolution. The first resolution is less than the second resolution.
In an embodiment of the present invention, the image output interface is configured to output all of the information contained in the target image data, and the aspect ratios of the second resolution and the third resolution generally differ considerably. During image processing, in order to improve the utilization of the physical resolution of the image output interface and to avoid the distortion caused by compressing the third-resolution image, the display area corresponding to the second resolution needs to be divided into a plurality of picture block areas, each of which carries a part of the target image data. A plurality of image output devices each receive the content of their corresponding picture block area, and the outputs are spliced and fused to obtain the image corresponding to the target image data. The resolution of each picture block area is limited by the physical resolution of the corresponding image output device.
In an optional implementation manner of the embodiment of the present invention, when the display area is divided, the resolution of each block area is set to the physical resolution of the image output device, so as to fully utilize the physical resolution of the image output device and preserve the details of the output image. Specifically, the display area corresponding to the second resolution is divided to determine the maximum number of non-overlapping picture block areas whose resolution is the first resolution. That is, a plurality of block areas are determined in the display area according to a preset dividing order, the height and width of each block area being the pixel row count and pixel column count of the first resolution, respectively. The preset dividing rule is to determine a plurality of adjacent, non-overlapping picture block areas starting from the upper left corner of the display area, proceeding to the right and then from top to bottom.
Fig. 3 is a schematic diagram of dividing a display area according to an alternative implementation manner of the embodiment of the present invention. As shown in fig. 3, the second resolution is 4096 × 2160 and the first resolution is 1280 × 800. When the preset dividing rule is to determine a plurality of adjacent, non-overlapping block areas 31 from the upper left corner of the display area 30, left to right and then top to bottom, three block areas 31 with a height of 800 pixels and a width of 1280 pixels are first determined from left to right starting at the upper left corner of the display area 30. At this time, the number of pixels remaining undivided in the horizontal direction of the display area 30 is 4096 − 3 × 1280 = 256, which is smaller than 1280, so no further block area fits in this row, and the division continues in the next row. After two rows, the number of pixels remaining undivided in the vertical direction is 2160 − 2 × 800 = 560, which is smaller than 800, so the division ends.
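The division rule of this implementation manner can be sketched in a few lines of Python (an illustration under the stated rule, not the patented code): scan from the upper left corner, left to right and then top to bottom, and collect the maximum number of non-overlapping tiles of the first resolution.

def divide_max_tiles(second_res, first_res):
    """Return (x, y, w, h) rectangles of first-resolution tiles inside the display area."""
    W, H = second_res        # physical resolution of the image output interface
    w, h = first_res         # physical resolution of the image output device
    tiles = []
    y = 0
    while y + h <= H:        # stop when fewer than h pixel rows remain
        x = 0
        while x + w <= W:    # stop when fewer than w pixel columns remain
            tiles.append((x, y, w, h))
            x += w
        y += h
    return tiles

# 4096 x 2160 display area and 1280 x 800 devices: 3 tiles per row, 2 rows, 6 tiles in total
print(len(divide_max_tiles((4096, 2160), (1280, 800))))  # 6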
In another optional implementation manner of the embodiment of the present invention, in order to fully utilize the physical resolution of the image output interface, the display area may be divided equally into a plurality of block areas that have the same resolution, do not exceed the first resolution, and do not overlap one another, the sum of their resolutions being the second resolution. In order to avoid occupying an unnecessarily large number of image output devices after output, the height and width of each block area should be, among the sizes that divide the display area evenly, the values closest to the height and width of the first resolution without exceeding them. That is, the equal division producing the smallest number of picture block areas is used.
Fig. 4 is a schematic diagram of dividing a display area according to another alternative implementation manner of the embodiment of the present invention. As shown in fig. 4, the second resolution is 4096 × 2160 and the first resolution is 1280 × 800. The width of the display area 40 is 4096 pixels; dividing it by 2 gives 2048 pixels, but 2048 pixels is larger than the 1280-pixel width of the first resolution, so the picture block areas 41 cannot be divided that way. It is further determined that 4096 divided by 4 gives 1024 pixels, which is less than 1280 pixels, so the display area 40 can be divided into 4 parts along its width. Meanwhile, the height of the display area is 2160 pixels; dividing it by 2 gives 1080 pixels, but 1080 pixels is larger than the 800-pixel height of the first resolution, so the block areas 41 cannot be divided that way. It is further determined that 2160 divided by 3 gives 720 pixels, which is less than 800 pixels, so the display area 40 can be divided into 3 parts along its height. The display area 40 can therefore be divided into 4 × 3 = 12 block areas 41, each with a resolution of 1024 × 720.
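The equal-division variant can be sketched similarly (illustrative only): for each axis, take the smallest number of parts that divides the display area evenly without any part exceeding the corresponding dimension of the first resolution.

def smallest_even_split(total: int, limit: int) -> int:
    """Smallest n such that total % n == 0 and total // n <= limit."""
    for n in range(1, total + 1):
        if total % n == 0 and total // n <= limit:
            return n
    return total  # degenerate fallback: one-pixel parts

def divide_equal_tiles(second_res, first_res):
    W, H = second_res
    w_lim, h_lim = first_res
    cols = smallest_even_split(W, w_lim)   # 4096 -> 4 parts of 1024 (<= 1280)
    rows = smallest_even_split(H, h_lim)   # 2160 -> 3 parts of 720  (<= 800)
    tw, th = W // cols, H // rows
    return [(c * tw, r * th, tw, th) for r in range(rows) for c in range(cols)]

print(len(divide_equal_tiles((4096, 2160), (1280, 800))))  # 12 tiles of 1024 x 720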
Step S300, dividing the target image data in a predetermined direction according to each picture block area.
Specifically, after the plurality of block areas are determined, the target image data is divided in a predetermined direction according to each block area to obtain the sub-image data corresponding to each block area. Each picture block area then carries its corresponding sub-image data, which is output through the image output interface. In the embodiment of the present invention, the predetermined direction may be the transverse direction or the longitudinal direction: when the target image data is an ultra-wide image, the predetermined direction is the longitudinal direction; when the target image data is an ultra-long image, the predetermined direction is the transverse direction. The process of segmenting the target image data may comprise the following steps:
step S310, equally dividing the target image data into a plurality of sub-image data having the same number as the block areas in a predetermined direction.
Specifically, after the picture block areas in the display area are determined, their number is counted. The target image data is then divided equally in the predetermined direction according to that number, yielding a plurality of sub-image data equal in number to the picture block areas. For example, when the target image data is an ultra-wide image with a resolution of 7680 × 1080 and the display area contains 4 block areas, the predetermined direction is the longitudinal direction, i.e. the target image data is divided longitudinally into 4 sub-image data each with a resolution of 1920 × 1080.
And step S320, numbering each picture block area according to the position in the display area and the first numbering sequence.
Specifically, the first numbering sequence and the corresponding numbering content may be preset, for example, the numbering is 1-N in the display area from top to bottom and from left to right, and N is the number of the picture block areas.
FIG. 5 is a block area diagram according to an embodiment of the present invention. As shown in fig. 5, the display area 50 includes four tile areas 40, which are numbered as a tile area 1, a tile area 2, a tile area 3, and a tile area 4 sequentially from top to bottom and from left to right according to a first numbering sequence.
And step S330, numbering the sub-image data according to the position in the target image data and the second numbering sequence.
Specifically, the second numbering sequence and the corresponding numbering content may be preset, for example, when the target image data is longitudinally divided into a plurality of sub-image data, the target image data is sequentially numbered from left to right as 1-N, and N is the number of the sub-image data. When the target image data is transversely divided into a plurality of sub-image data, the target image data is sequentially numbered from top to bottom as 1-N, and N is the number of the sub-image data.
Fig. 6 is a schematic diagram of sub-image data according to an embodiment of the invention. As shown in fig. 6, in the case where 4 tile areas are included in the display area, the target image data 60 is divided into 4 sub-image data 61 in the vertical direction. And numbering sub-image data 1, sub-image data 2, sub-image data 3 and sub-image data 4 from left to right in sequence according to the position of each sub-image data in the target image data.
Step S340, determining that the picture block regions with the same number correspond to the sub-image data.
Specifically, after numbering each of the picture block regions and the sub-image data, it is determined that the picture block regions with the same number correspond to the sub-image data.
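Taken together, steps S310 to S340 can be sketched with NumPy for the ultra-wide case (an illustration; the patent does not name NumPy, the tile rectangles are assumed to be listed already in the first numbering order, and np.array_split returns the strips left to right, matching the second numbering order):

import numpy as np

def split_and_match(target: np.ndarray, tiles):
    """Return {number: (tile_rect, sub_image)} pairing same-numbered tiles and strips."""
    n = len(tiles)
    strips = np.array_split(target, n, axis=1)                      # longitudinal equal split
    numbered_tiles = {i + 1: rect for i, rect in enumerate(tiles)}  # first numbering sequence
    numbered_strips = {i + 1: s for i, s in enumerate(strips)}      # second numbering sequence
    return {k: (numbered_tiles[k], numbered_strips[k]) for k in numbered_tiles}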
And S400, splicing corresponding sub-image data according to the position of each picture block area in the display area.
Specifically, in order to ensure that the entire content of the target image data can be output through the image output interface, the sub-image data may be spliced according to the position of the corresponding tile area in the display area. In the embodiment of the present invention, the splicing process may include the following steps:
step S410, data processing is performed on each of the sub-image data.
Specifically, after being output through the image output interface and the image output devices, each sub-image data is fused again according to its position in the target image data. Because directly fusing images that have been cut apart can look unnatural, after the plurality of sub-image data are determined by cutting the target image data, feathering is performed on the fusion edge of each sub-image data, where the fusion edge is the edge along which the sub-image data touch each other.
Further, when the sub-image data are spliced, since the resolution corresponding to the picture block region is different from the resolution of the corresponding sub-image data, the sub-image data need to be scaled according to the resolution of the corresponding picture block region, so that the resolution of the sub-image data is not greater than the resolution of the corresponding picture block region.
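A minimal sketch of this data processing, assuming height x width x 3 NumPy arrays and OpenCV (neither library is named in the patent, and the 32-pixel ramp width is an arbitrary illustrative value): a linear alpha ramp feathers the fusion edge of a sub-image, and a resize keeps its resolution within that of its picture block area.

import numpy as np
import cv2

def feather_edge(sub: np.ndarray, side: str, width: int = 32) -> np.ndarray:
    """Apply a linear fade on the edge that touches a neighbouring sub-image."""
    out = sub.astype(np.float32)
    ramp = np.linspace(0.0, 1.0, width, dtype=np.float32)
    if side == "left":
        out[:, :width] *= ramp[None, :, None]
    elif side == "right":
        out[:, -width:] *= ramp[::-1][None, :, None]
    return out.astype(np.uint8)

def fit_to_tile(sub: np.ndarray, tile_w: int, tile_h: int) -> np.ndarray:
    """Scale the sub-image so its resolution is not greater than the tile's."""
    return cv2.resize(sub, (tile_w, tile_h), interpolation=cv2.INTER_AREA)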
Step S420, stitching the sub-image data after data processing according to the position of the corresponding block area in the display area to determine intermediate image data.
Specifically, after data processing is performed on each sub-image data, each sub-image data is written to the position of its corresponding picture block area in the display area to obtain the intermediate image data.
Fig. 7 is a schematic diagram of splicing sub-image data according to an embodiment of the present invention. As shown in fig. 7, the display area 71 includes 4 block areas, and the target image data 70 contains 4 sub-image data after data processing. Sub-image data 1 is placed at the upper left corner, sub-image data 2 at the upper right corner, sub-image data 3 at the lower left corner, and sub-image data 4 at the lower right corner, and they are spliced to obtain the intermediate image data.
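Step S420 then amounts to writing each processed sub-image into the rectangle of its picture block area on a blank canvas of the second resolution; a short NumPy sketch (illustrative only, reusing the pairing produced by split_and_match above):

import numpy as np

def stitch_intermediate(second_res, matched):
    """matched: {number: ((x, y, w, h), sub_image)}, each sub_image already w x h."""
    W, H = second_res
    canvas = np.zeros((H, W, 3), dtype=np.uint8)
    for (x, y, w, h), sub in matched.values():
        canvas[y:y + h, x:x + w] = sub
    return canvas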
And step S500, transmitting the intermediate image data through an image output interface.
Specifically, the image output interface transmits the intermediate image data to a logic processing unit, and the logic processing unit transmits the sub-image data in the intermediate image data to a plurality of image output devices respectively, in the order of their positions.
Step S600, respectively transmitting each sub-image data in the intermediate image data to at least one image output device according to a predetermined sequence.
Specifically, after the image output interface transmits the intermediate image data to a logic processing unit, the logic processing unit transmits each sub-image data in the intermediate image data to at least one image output device according to a predetermined sequence. The image output devices are arranged in the horizontal or vertical direction and each output the sub-image data they receive: when the target image data is ultra-wide, the image output devices are arranged in the horizontal direction; when the target image data is ultra-long, they are arranged in the vertical direction. In this way, each image output device outputs its received sub-image data according to the position of that sub-image data in the target image data, so that the sub-image data are fused into the output image data corresponding to the target image data.
FIG. 8 is a diagram illustrating a target image data processing procedure according to an embodiment of the present invention. As shown in fig. 8, in the embodiment of the present invention, after the image data is rewritten into the target image data 80 and the display area corresponding to the image output interface is divided into 4 block areas, the target image data 80 is divided into sub-image data 1, sub-image data 2, sub-image data 3, and sub-image data 4 according to the number of block areas. The sub-image data are spliced, according to the positions of their corresponding picture block areas in the display area, to obtain intermediate image data 82, which is transmitted through an image output interface (HDMI) to a logic processing unit 83; in the embodiment of the present invention, the logic processing unit 83 may be an FPGA processor. The logic processing unit 83 transmits the sub-image data 1, sub-image data 2, sub-image data 3, and sub-image data 4 in the intermediate image data to the image output devices projector 1, projector 2, projector 3, and projector 4, respectively. Because the target image data is an ultra-wide image, projector 1, projector 2, projector 3, and projector 4 are arranged horizontally and each output their corresponding sub-image data, and the sub-image data are fused to obtain the output image data corresponding to the target image data.
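For illustration, the distribution performed by the logic processing unit in step S600 can be sketched in software as slicing the received intermediate image back into its picture block areas and handing each slice to its output device in the predetermined order; the device objects and their display() method are hypothetical stand-ins for the projector interfaces, and the real implementation runs on an FPGA rather than in Python.

def distribute(intermediate, tiles_in_order, output_devices):
    """tiles_in_order and output_devices are assumed to be paired one to one by position."""
    for (x, y, w, h), device in zip(tiles_in_order, output_devices):
        device.display(intermediate[y:y + h, x:x + w])  # device.display() is hypothetical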
In the image processing method of the embodiment of the invention, when an ultra-wide or ultra-long image is projected, the target image data is output by dividing both the display area corresponding to the physical resolution of the image output interface and the target image data itself, and the output image data corresponding to the target image data is obtained by image fusion after output. The method fully utilizes the physical resolution of the image output interface while preserving the image details.
Fig. 9 is a schematic diagram of an electronic device according to an embodiment of the invention. As shown in fig. 9, the electronic device has a general computer hardware structure comprising at least a processor 90, a memory 91, and a logic processing unit 93. The processor 90, the memory 91 and the logic processing unit 93 are connected by a bus 92. The memory 91 is adapted to store instructions or programs executable by the processor 90. The processor 90 may be a stand-alone microprocessor or a collection of one or more microprocessors. The logic processing unit may be an FPGA processor or another device capable of logic processing known in the art. Thus, the processor 90 executes the instructions stored in the memory 91, thereby performing the following steps of the method of the embodiment of the present invention described above:
determining target image data;
dividing a display area corresponding to a second resolution according to a first resolution to determine at least one picture block area, wherein the first resolution is a physical resolution of an image output device, the second resolution is a physical resolution of an image output interface, and the first resolution is smaller than the second resolution;
longitudinally dividing the target image data according to each of the picture block regions to determine sub-image data corresponding to each of the picture block regions;
splicing corresponding sub-image data according to the position of each picture block area in the display area to determine intermediate image data;
and transmitting the intermediate image data through an image output interface.
The bus 92 connects the above-described components together, and also connects the above-described components to the image output device 94. In the embodiment of the present invention, the image output device 94 may be a projector, a display, or other devices capable of displaying images known in the art. Typically, an image output device 94 is connected to the system.
The logic processing unit 93 is configured to transmit each sub-image data in the intermediate image data according to the embodiment of the present invention to at least one image output device 94 according to a predetermined sequence, so as to output corresponding sub-image data through each image output device, and obtain output image data corresponding to the target image data by fusion.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, apparatus (device) or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may employ a computer program product embodied on one or more computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations of methods, apparatus (devices) and computer program products according to embodiments of the application. It will be understood that each flow in the flow diagrams can be implemented by computer program instructions.
These computer program instructions may be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows.
These computer program instructions may also be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows.
Another embodiment of the invention is directed to a non-transitory storage medium storing a computer-readable program for causing a computer to perform some or all of the above-described method embodiments.
That is, as can be understood by those skilled in the art, all or part of the steps of the methods in the embodiments described above may be implemented by a program instructing the related hardware. The program is stored in a storage medium and includes several instructions that cause a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An image processing method, characterized in that the method comprises:
determining target image data;
dividing a display area corresponding to a second resolution according to a first resolution to determine at least one picture block area, wherein the first resolution is a physical resolution of an image output device, the second resolution is a physical resolution of an image output interface, and the first resolution is smaller than the second resolution;
determining sub-image data corresponding to each of the block areas from the target image data in a predetermined direction according to each of the block areas;
splicing corresponding sub-image data according to the position of each picture block area in the display area to determine intermediate image data;
transmitting the intermediate image data through an image output interface;
and respectively transmitting the sub-image data in the intermediate image data to at least one image output device according to a preset sequence so as to output the output image data corresponding to the target image data through each image output device.
2. The method of claim 1, wherein the determining target image data comprises:
creating a frame buffer area, wherein the resolution of the frame buffer area is a preset third resolution;
and rewriting the image data to the frame buffer area to obtain the target image data with the resolution of the third resolution.
3. The method according to claim 1, wherein the dividing the display area corresponding to the second resolution according to the first resolution to determine at least one of the tile areas specifically comprises:
and dividing the display area corresponding to the second resolution to determine the maximum number of picture block areas with the resolution being the first resolution and not overlapping with each other.
4. The method according to claim 1, wherein the dividing the display area corresponding to the second resolution according to the first resolution to determine at least one of the tile areas specifically comprises:
and averagely dividing a display area corresponding to the second resolution into a plurality of picture block areas which have the same resolution and are not mutually overlapped, wherein the resolution of each picture block area is not more than the first resolution, and the sum of the resolutions of the picture block areas is the second resolution.
5. The method of claim 1, wherein determining the target image data in a predetermined direction from each of the block regions to determine sub-image data corresponding to each of the block regions comprises:
equally dividing the target image data in the longitudinal direction into a plurality of sub-image data equal in number to the picture block areas;
numbering each of the picture block regions according to the position in the display region and a first numbering sequence;
numbering the sub-image data according to the position in the target image data and a second numbering sequence;
it is determined that the same-numbered tile areas correspond to the sub-image data.
6. The method of claim 1, wherein the stitching the corresponding sub-image data according to the position of each of the tile regions in the display region to determine intermediate image data comprises:
performing data processing on each sub-image data;
and splicing the sub-image data after data processing according to the position of the corresponding picture block area in the display area to determine intermediate image data.
7. The method according to claim 6, wherein the data processing of each of the sub-image data is specifically:
and performing feathering processing on the fusion edge of each sub-image data, wherein the fusion edge is the edge where each sub-image data is contacted with each other.
8. The method of claim 7, wherein said data processing each of said sub-image data further comprises:
and scaling each sub-image data according to the resolution of the corresponding picture block area.
9. A computer readable storage medium storing computer program instructions which, when executed by a processor, implement the method of any one of claims 1-8.
10. An electronic device comprising a memory, a processor, and a logical processing unit, wherein the memory is configured to store one or more computer program instructions, wherein the one or more computer program instructions are executed by the processor to implement the steps of:
determining target image data;
dividing a display area corresponding to a second resolution according to a first resolution to determine at least one picture block area, wherein the first resolution is a physical resolution of an image output device, the second resolution is a physical resolution of an image output interface, and the first resolution is smaller than the second resolution;
determining sub-image data corresponding to each of the block areas from the target image data in a predetermined direction according to each of the block areas;
splicing corresponding sub-image data according to the position of each picture block area in the display area to determine intermediate image data;
transmitting the intermediate image data through an image output interface;
the logic processing unit is used for respectively transmitting each sub-image data in the intermediate image data to at least one image output device according to a preset sequence, so that the corresponding sub-image data is output through each image output device, and the output image data corresponding to the target image data is obtained through fusion.
CN202010648383.9A 2020-07-07 2020-07-07 Image processing method, readable storage medium, and electronic device Active CN111836028B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010648383.9A CN111836028B (en) 2020-07-07 2020-07-07 Image processing method, readable storage medium, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010648383.9A CN111836028B (en) 2020-07-07 2020-07-07 Image processing method, readable storage medium, and electronic device

Publications (2)

Publication Number Publication Date
CN111836028A true CN111836028A (en) 2020-10-27
CN111836028B CN111836028B (en) 2022-03-04

Family

ID=72900283

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010648383.9A Active CN111836028B (en) 2020-07-07 2020-07-07 Image processing method, readable storage medium, and electronic device

Country Status (1)

Country Link
CN (1) CN111836028B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112911169A (en) * 2021-01-29 2021-06-04 上海七牛信息技术有限公司 Video image segmentation method and device and video image control method and device
CN113286100A (en) * 2021-05-17 2021-08-20 西安诺瓦星云科技股份有限公司 Configuration method and device of video output interface and video output equipment
TWI812003B (en) * 2022-02-10 2023-08-11 宏正自動科技股份有限公司 Method and system for previewing the image

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019047452A (en) * 2017-09-07 2019-03-22 キヤノン株式会社 Image transmission device and control method therefor
CN109819180A (en) * 2019-01-18 2019-05-28 上海皓空数码科技有限公司 A kind of ultra-wide picture fusion display methods and system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019047452A (en) * 2017-09-07 2019-03-22 キヤノン株式会社 Image transmission device and control method therefor
CN109819180A (en) * 2019-01-18 2019-05-28 上海皓空数码科技有限公司 A kind of ultra-wide picture fusion display methods and system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112911169A (en) * 2021-01-29 2021-06-04 上海七牛信息技术有限公司 Video image segmentation method and device and video image control method and device
CN113286100A (en) * 2021-05-17 2021-08-20 西安诺瓦星云科技股份有限公司 Configuration method and device of video output interface and video output equipment
CN113286100B (en) * 2021-05-17 2022-12-13 西安诺瓦星云科技股份有限公司 Configuration method and device of video output interface and video output equipment
TWI812003B (en) * 2022-02-10 2023-08-11 宏正自動科技股份有限公司 Method and system for previewing the image

Also Published As

Publication number Publication date
CN111836028B (en) 2022-03-04

Similar Documents

Publication Publication Date Title
CN111836028B (en) Image processing method, readable storage medium, and electronic device
EP1628490A1 (en) Image display device and program
CN108021671B (en) Page transparent processing method and device
EP3721957A1 (en) Providing apparatus, providing method and computer program
CN103680470B (en) The method for displaying image that large-size screen monitors control and system
KR20180058762A (en) Creation of a triangular mesh for a three-dimensional image
US20130127989A1 (en) Conversion of 2-Dimensional Image Data into 3-Dimensional Image Data
US10217259B2 (en) Method of and apparatus for graphics processing
JP5476910B2 (en) Image generating apparatus, image generating method, and program
US9830880B1 (en) Method and system for adjusting the refresh rate of a display device based on a video content rate
CN108876700A (en) A kind of method and circuit promoting VR display effect
US9568333B2 (en) Method and system for selectively blending buildings to improve route visibility in a 3D navigation system
CN112445995A (en) Scene fusion display method and device under WebGL
CN111527517B (en) Image processing apparatus and control method thereof
JP2011039801A (en) Apparatus and method for processing image
CN109643462B (en) Real-time image processing method based on rendering engine and display device
WO2012089595A1 (en) Method and device for generating an image view for 3d display
EP3723365A1 (en) Image processing apparatus, system that generates virtual viewpoint video image, control method of image processing apparatus and storage medium
US8120620B2 (en) Graphics system and drawing method thereof
KR20140051035A (en) Method and apparatus for image encoding
CN112567430A (en) Image generation device, image generation method, and program
CN112686806B (en) Image splicing method and device, electronic equipment and storage medium
CN105761253B (en) A kind of three dimensions virtual data high definition screenshot method
CN113691835A (en) Video implantation method, device, equipment and computer readable storage medium
KR101684834B1 (en) Method, device and system for resizing and restoring original depth frame

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230714

Address after: Room 317, Floor 3, Building 1, Yard 6, Gaoxin Fourth Street, Huilongguan Town, Changping District, Beijing 102200

Patentee after: Beijing Fanxiu Technology Co.,Ltd.

Address before: 100096 room 1106, 11th floor, building 1, courtyard 3, Longyu North Street, Huilongguan town, Changping District, Beijing

Patentee before: Beijing yizhicheng Technology Co.,Ltd.

TR01 Transfer of patent right