CN108762706B - Image processing method and device - Google Patents


Info

Publication number
CN108762706B
Authority
CN
China
Prior art keywords
image
fused
output
area
fusion
Prior art date
Legal status
Active
Application number
CN201810532178.9A
Other languages
Chinese (zh)
Other versions
CN108762706A (en)
Inventor
黄飞
孙聪
薛菲
Current Assignee
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd filed Critical Hisense Visual Technology Co Ltd
Priority to CN201810532178.9A
Publication of CN108762706A
Application granted
Publication of CN108762706B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1423 Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F 3/1446 Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, display composed of modules, e.g. video walls
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265 Mixing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention provides an image processing method and device. The image processing method of the present invention includes: intercepting and amplifying the image to be processed according to a first pixel area, a second pixel area, a third pixel area and a fourth pixel area respectively to obtain a first image to be fused, a second image to be fused, a third image to be fused and a fourth image to be fused; and performing fusion processing on the first image to be fused, the second image to be fused, the third image to be fused and the fourth image to be fused to obtain a first image to be output, a second image to be output, a third image to be output and a fourth image to be output, and outputting the first image to be output, the second image to be output, the third image to be output and the fourth image to be output to corresponding display screens respectively. The embodiment of the invention can reduce the computing resources required by fusion splicing, thereby reducing the cost.

Description

Image processing method and device
Technical Field
The present invention relates to multimedia technologies, and in particular, to an image processing method and apparatus.
Background
Large-screen splicing is a technology for splicing plasma, liquid crystal, rear projection and other screens together to meet users' demand for viewing larger pictures. The technology differs according to the type of screen being spliced.
In the large-screen splicing process, the image signal source must be segmented according to the splicing requirements, and because the spliced screens may have physical offsets, a fusion technology is required to display a single seamless, bright, oversized, high-resolution picture. As commercial applications and large-screen display requirements keep growing, the many existing soft fusion schemes (computer processing) and hardware Field Programmable Gate Array (FPGA) splicing schemes face numerous constraints; in particular, the required computing resources and the associated cost increase sharply when the resolution rises to 4K. For example, for a 4K fusion scheme, the cost of performing soft fusion with a computer graphics card is equivalent to the cost of a server, so fusion splicing is expensive.
Disclosure of Invention
The embodiment of the invention provides an image processing method and device, which are used for reducing the operation resources required by fusion splicing and further reducing the cost.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
intercepting and amplifying the image to be processed according to a first pixel area, a second pixel area, a third pixel area and a fourth pixel area respectively to obtain a first image to be fused, a second image to be fused, a third image to be fused and a fourth image to be fused;
performing fusion processing on the first image to be fused, the second image to be fused, the third image to be fused and the fourth image to be fused to obtain a first image to be output, a second image to be output, a third image to be output and a fourth image to be output, and outputting the first image to be output, the second image to be output, the third image to be output and the fourth image to be output to corresponding display screens respectively;
wherein the maximum values of the ordinates of the first pixel region and the second pixel region differ by a first preset value, the maximum values of the abscissas of the first pixel region and the third pixel region differ by a second preset value, and the first pixel region and the fourth pixel region are centrosymmetric; the first image to be fused, the second image to be fused, the third image to be fused and the fourth image to be fused each comprise a fusion region, the fusion region of the first image to be fused is asymmetric to the fusion region of the second image to be fused and to the fusion region of the third image to be fused, and the fusion region of the fourth image to be fused is asymmetric to the fusion region of the second image to be fused and to the fusion region of the third image to be fused.
With reference to the first aspect, in a possible implementation manner of the first aspect, the size of the image to be processed is X × Y, the coordinate range of the first pixel region is (1 to X/2, 1 to (Y/2+v)), the coordinate range of the second pixel region is ((X/2+1-h) to X, 1 to Y/2), the coordinate range of the third pixel region is (1 to (X/2+h), (Y/2+1) to Y), and the coordinate range of the fourth pixel region is ((X/2+1) to X, (Y/2+1-v) to Y);
wherein v is the first preset value, and h is the second preset value.
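For illustration only (not part of the claims), the four coordinate ranges above can be expressed as a small helper function; the function name and the 1-based inclusive convention are assumptions made for this sketch:

```python
def pixel_regions(X, Y, v, h):
    """Return the four crop rectangles as (x1, x2, y1, y2), 1-based inclusive,
    for an X-by-Y image to be processed; v and h are the first and second
    preset values (vertical and horizontal overlap offsets)."""
    r1 = (1, X // 2, 1, Y // 2 + v)            # first pixel region (top-left)
    r2 = (X // 2 + 1 - h, X, 1, Y // 2)        # second pixel region (top-right)
    r3 = (1, X // 2 + h, Y // 2 + 1, Y)        # third pixel region (bottom-left)
    r4 = (X // 2 + 1, X, Y // 2 + 1 - v, Y)    # fourth pixel region (bottom-right)
    return r1, r2, r3, r4
```

Note that the maximum ordinates of the first and second regions differ by v, the maximum abscissas of the first and third regions differ by h, and the first and fourth regions are centrosymmetric about the image centre, exactly as stated above.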
With reference to a possible implementation manner of the first aspect, in another possible implementation manner of the first aspect, the obtaining a first image to be fused, a second image to be fused, a third image to be fused, and a fourth image to be fused by performing interception and amplification on the image to be processed according to a first pixel region, a second pixel region, a third pixel region, and a fourth pixel region, respectively includes:
intercepting an image in the image to be processed according to the coordinate range of the first pixel area, and carrying out amplification processing to obtain the first image to be fused;
intercepting an image in the image to be processed according to the coordinate range of the second pixel area, and carrying out amplification processing to obtain a second image to be fused;
intercepting an image in the image to be processed according to the coordinate range of the third pixel area, and carrying out amplification processing to obtain a third image to be fused;
and intercepting an image in the image to be processed according to the coordinate range of the fourth pixel area, and carrying out amplification processing to obtain the fourth image to be fused.
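The four interception-and-amplification steps above can be sketched as one generic routine; this is an illustrative sketch only, and the nearest-neighbour scaler used here is an assumption standing in for the device's actual scaler:

```python
import numpy as np

def crop_and_enlarge(image, region, out_h, out_w):
    """Intercept `region` = (x1, x2, y1, y2) (1-based inclusive) out of
    `image` (an H x W x C array) and enlarge it to out_h x out_w using
    nearest-neighbour sampling."""
    x1, x2, y1, y2 = region
    crop = image[y1 - 1:y2, x1 - 1:x2]          # interception
    ch, cw = crop.shape[:2]
    rows = np.arange(out_h) * ch // out_h       # nearest-neighbour row map
    cols = np.arange(out_w) * cw // out_w       # nearest-neighbour column map
    return crop[rows][:, cols]                  # amplification
```

Calling it once per pixel region with the display screen's resolution as (out_h, out_w) yields the four images to be fused.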
With reference to any one of the possible implementation manners of the first aspect, in another possible implementation manner of the first aspect, the first image to be fused, the second image to be fused, the third image to be fused and the fourth image to be fused are respectively stored in a random access memory of a system-on-chip (SOC), and the performing fusion processing on the first image to be fused, the second image to be fused, the third image to be fused and the fourth image to be fused to obtain a first image to be output, a second image to be output, a third image to be output and a fourth image to be output includes:
performing image analysis processing on the first image to be fused to obtain original image information of the first image to be fused, and adjusting image information of a fusion area of the second image to be fused according to the original image information of the first image to be fused to obtain the second image to be output;
performing image analysis processing on the second image to be fused to obtain original image information of the second image to be fused, and adjusting image information of a fusion area of a fourth image to be fused according to the original image information of the second image to be fused to obtain a fourth image to be output;
performing image analysis processing on the fourth image to be fused to obtain original image information of the fourth image to be fused, and adjusting the image information of a fusion area of the third image to be fused according to the original image information of the fourth image to be fused to obtain a third image to be output;
and performing image analysis processing on the third image to be fused to obtain original image information of the third image to be fused, and adjusting the image information of the fusion area of the first image to be fused according to the original image information of the third image to be fused to obtain the first image to be output.
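The chained adjustment described above (the first image adjusts the second, the second adjusts the fourth, the fourth adjusts the third, and the third adjusts the first) can be sketched as follows. Mean-brightness matching is used here purely as a stand-in for whatever image-quality adjustment the implementation applies, and all names are illustrative assumptions:

```python
import numpy as np

def adjust_fusion_region(reference, target, region):
    """Shift the brightness of `target`'s fusion region toward the matching
    part of `reference`. `region` is a (row_slice, col_slice) pair valid in
    both images; mean matching stands in for the real adjustment."""
    out = target.astype(np.float64).copy()
    ref_mean = reference[region].mean()
    out[region] += ref_mean - out[region].mean()   # match region brightness
    return np.clip(out, 0, 255).astype(np.uint8)
```

Applied in the order of the steps above: B_out = adjust_fusion_region(A, B, region_ab), then D_out from B, C_out from D, and finally A_out from C.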
With reference to any one of the possible implementation manners of the first aspect, in another possible implementation manner of the first aspect, the image information includes any one of or a combination of luminance distribution information, chrominance distribution information, and texture information.
With reference to any one of the possible implementation manners of the first aspect, in another possible implementation manner of the first aspect, the performing image analysis processing on the first image to be fused to obtain original image information of the first image to be fused includes:
and performing image analysis processing on a partial image corresponding to the fusion area of the second image to be fused in the first image to be fused to obtain original image information of the first image to be fused.
With reference to any one of the possible implementation manners of the first aspect, in another possible implementation manner of the first aspect, the first preset value and the second preset value are determined according to a fused deviation percentage, and the fused deviation percentage is determined by a sensor.
With reference to any one of the possible implementation manners of the first aspect, in another possible implementation manner of the first aspect, the respectively outputting to the corresponding display screens includes:
outputting the first image to be output to a first display screen, outputting the second image to be output to a second display screen, outputting the third image to be output to a third display screen, and outputting the fourth image to be output to a fourth display screen.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including:
a memory for storing a computer program;
a processor configured to execute the computer program to implement the method according to any one of the possible implementation manners of the first aspect.
In a third aspect, an embodiment of the present invention provides a computer storage medium, including: the computer storage medium is used for storing a computer program, which when executed is used for implementing the method according to any one of the possible implementation manners of the first aspect.
The image processing method and the image processing device provided by the embodiment of the invention are used for respectively intercepting and amplifying the image to be processed according to the first pixel area, the second pixel area, the third pixel area and the fourth pixel area to obtain the first image to be fused, the second image to be fused, the third image to be fused and the fourth image to be fused, carrying out fusion processing on the first image to be fused, the second image to be fused, the third image to be fused and the fourth image to be fused to obtain the first image to be output, the second image to be output, the third image to be output and the fourth image to be output, and respectively outputting the images to the corresponding display screens. The maximum values of the vertical coordinates of the first pixel area and the second pixel area for intercepting the images to be fused are different by a first preset value, and the maximum values of the horizontal coordinates of the first pixel area and the third pixel area are different by a second preset value, so that the fusion areas of the adjacent images to be fused are asymmetric, the fusion splicing of a plurality of images can be realized, and the calculation resources required by the fusion splicing can be reduced.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic view of an application scenario of an image processing method according to an embodiment of the present invention;
FIG. 2A is a flowchart illustrating a first embodiment of an image processing method according to the present invention;
FIG. 2B is a schematic diagram of each image to be fused according to the image processing method of the present invention;
FIG. 3A is a schematic diagram of an image to be fused during fusion splicing in the prior art;
FIG. 3B is a schematic diagram of the image to be fused shown in FIG. 3A being spliced;
FIG. 4A is a flowchart of a second embodiment of an image processing method according to the present invention;
FIG. 4B is a schematic diagram of each image to be fused in a coordinate system according to the image processing method of the present invention;
FIG. 4C is a schematic diagram of the image processing method according to the present invention, in which images to be fused are spliced;
fig. 5 is a schematic structural diagram of an image processing apparatus according to a first embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Reference herein to "and/or" describes an associative relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone.
Fig. 1 is a schematic view of an application scenario of an image processing method according to an embodiment of the present invention. As shown in fig. 1, the application scenario includes a plurality of terminal devices; four terminal devices are taken as an example for illustration. Each terminal device includes a display device and a display screen: the display device processes the content to be played and sends the processed image to the display screen connected to it, and the display screen displays the processed image. The display device and the display screen may be integrated, for example in a plasma display panel (PDP) or a liquid crystal display (LCD) television, or separate, for example in a projection television or a laser television. The display screens of the four terminal devices are spliced in the manner shown in fig. 1 to display a larger picture.
During the splicing and installation of the display screens of the four terminal devices, physical gaps may or may not exist; in either case, because of image deviations caused by temperature drift and the like, the image processing method provided by the embodiment of the invention is needed to display a single seamless picture.
Fig. 2A is a flowchart of a first embodiment of an image processing method according to the present invention, and as shown in fig. 2A, the method of this embodiment may include:
step 101, intercepting and amplifying the image to be processed according to a first pixel area, a second pixel area, a third pixel area and a fourth pixel area respectively to obtain a first image to be fused, a second image to be fused, a third image to be fused and a fourth image to be fused.
The image to be processed specifically refers to a complete image that needs to be displayed on four display screens, that is, a partial image in the image to be processed is displayed on each display screen, and the four display screens are combined to display the complete image of the image to be processed.
The maximum values of the vertical coordinates of the first pixel area and the second pixel area are different from each other by a first preset value, the maximum values of the horizontal coordinates of the first pixel area and the third pixel area are different from each other by a second preset value, and the first pixel area and the fourth pixel area are centrosymmetric.
Taking fig. 2B as an example, fig. 2B is a schematic diagram of each image to be fused in the image processing method of the present invention, where an enlargement ratio of 1:1 is taken as an example, that is, each pixel region is equal in size to its corresponding image to be fused. The first pixel region is the region corresponding to the first image to be fused A in fig. 2B, the second pixel region is the region corresponding to the second image to be fused B, the third pixel region is the region corresponding to the third image to be fused C, and the fourth pixel region is the region corresponding to the fourth image to be fused D. Fig. 2B is only used to describe the relationship among the first, second, third and fourth pixel regions and does not show the specific positions of the pixel regions. Taking the horizontal direction as the abscissa axis, increasing from left to right, and the vertical direction as the ordinate axis, increasing from top to bottom, as shown in fig. 2B, the maximum values of the ordinates of the first pixel region and the second pixel region differ by a first preset value, the maximum values of the abscissas of the first pixel region and the third pixel region differ by a second preset value, and the first pixel region and the fourth pixel region are centrosymmetric.
In this embodiment, an image of a corresponding area of the image to be processed is captured according to the first pixel area, the second pixel area, the third pixel area, and the fourth pixel area, and then an enlargement process is performed to obtain the first image to be fused, the second image to be fused, the third image to be fused, and the fourth image to be fused, where the enlargement process is to enlarge the captured image to the same size as the display screen.
The first image to be fused, the second image to be fused, the third image to be fused and the fourth image to be fused all comprise a fusion area, the fusion area of the first image to be fused is asymmetric with the fusion area of the second image to be fused and the fusion area of the fourth image to be fused, and the fusion area of the third image to be fused is asymmetric with the fusion area of the second image to be fused and the fusion area of the fourth image to be fused.
The "fusion region" referred to herein specifically refers to a region of one image that overlaps another image, for example, the fusion region of the first image to be fused specifically refers to a region where the first image to be fused overlaps a fourth image to be fused, the fusion region of the second image to be fused specifically refers to a region where the second image to be fused overlaps the first image to be fused, the fusion region of the third image to be fused specifically refers to a region where the third image to be fused overlaps the second image to be fused, and the fusion region of the fourth image to be fused specifically refers to a region where the fourth image to be fused overlaps the third image to be fused.
Step 102, performing fusion processing on the first image to be fused, the second image to be fused, the third image to be fused and the fourth image to be fused to obtain a first image to be output, a second image to be output, a third image to be output and a fourth image to be output, and outputting the first image to be output, the second image to be output, the third image to be output and the fourth image to be output to corresponding display screens respectively.
Because the first image to be fused, the second image to be fused, the third image to be fused and the fourth image to be fused all comprise a fusion area, image quality processing needs to be carried out on each fusion area, so that the images output to each display screen are spliced to form a whole image which is free of gaps and clear in image quality.
In this embodiment, an image to be processed is intercepted and amplified according to a first pixel area, a second pixel area, a third pixel area and a fourth pixel area, so as to obtain a first image to be fused, a second image to be fused, a third image to be fused and a fourth image to be fused, the first image to be fused, the second image to be fused, the third image to be fused and the fourth image to be fused are fused, so as to obtain a first image to be output, a second image to be output, a third image to be output and the fourth image to be output, and the first image to be output, the second image to be output, the third image to be output and the fourth image to be output are output to corresponding display screens respectively. The maximum values of the vertical coordinates of the first pixel area and the second pixel area for intercepting the images to be fused are different by a first preset value, and the maximum values of the horizontal coordinates of the first pixel area and the third pixel area are different by a second preset value, so that the fusion areas of the adjacent images to be fused are asymmetric, the fusion splicing of a plurality of images can be realized, and the calculation resources required by the fusion splicing can be reduced.
To further explain the reduction in the computing resources required for fusion splicing: fig. 3A is a schematic diagram of the images to be fused during fusion splicing in the prior art, and fig. 3B is a schematic diagram of the images to be fused shown in fig. 3A being spliced. As shown in fig. 3A, A, B, C and D are each an image to be fused obtained by interception and amplification in the prior art, both sides of each image to be fused are fusion regions, and the fusion regions of adjacent images to be fused are symmetrical. When A, B, C and D are spliced together, image quality processing must be performed on the middle overlapping region shown in fig. 3B. After splicing, a 1/4 region appears that is formed by the fusion regions of all of A, B, C and D overlapping together; to process the image quality of this 1/4 region, the image information of the fusion regions of A, B, C and D must be considered jointly. That is, in the prior-art fusion processing, not only must each fusion region shown in fig. 3B undergo image quality processing, but the 1/4 region must also be processed using the combined image information of the fusion regions of A, B, C and D, which increases the required computing resources.
Compared with the prior art, after the images to be fused of this embodiment are spliced together, no 1/4 region requires image quality processing, so the computing resources required for fusion splicing can be reduced. The 1/4 region does not exist because the first pixel region, the second pixel region, the third pixel region and the fourth pixel region are chosen so that the fusion regions of adjacent images to be fused are asymmetric.
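The absence of a four-way overlap can be checked directly from the coordinate ranges; this is an illustrative sanity check (names and parameters are assumptions), not part of the patented method:

```python
def four_way_overlap_exists(X, Y, v, h):
    """True if some pixel lies inside all four asymmetric pixel regions
    of an X-by-Y image with preset values v and h."""
    regions = [
        (1, X // 2, 1, Y // 2 + v),            # first (top-left)
        (X // 2 + 1 - h, X, 1, Y // 2),        # second (top-right)
        (1, X // 2 + h, Y // 2 + 1, Y),        # third (bottom-left)
        (X // 2 + 1, X, Y // 2 + 1 - v, Y),    # fourth (bottom-right)
    ]
    # A shared pixel must lie in the intersection of all x- and y-ranges.
    x_lo = max(r[0] for r in regions)
    x_hi = min(r[1] for r in regions)
    y_lo = max(r[2] for r in regions)
    y_hi = min(r[3] for r in regions)
    return x_lo <= x_hi and y_lo <= y_hi
```

For any positive v and h the first and fourth regions never share a column, so the function returns False: only pairwise fusion regions exist, never the prior art's 1/4 region.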
The following describes in detail the technical solution of the embodiment of the method shown in fig. 2, using several specific embodiments.
FIG. 4A is a flowchart of a second embodiment of the image processing method of the present invention, FIG. 4B is a schematic diagram of each image to be fused in a coordinate system in the image processing method of the present invention, and FIG. 4C is a schematic diagram of the images to be fused being spliced. In this embodiment, the size of the image to be processed is X × Y, the coordinate range of the first pixel region is (1 to X/2, 1 to (Y/2+v)), the coordinate range of the second pixel region is ((X/2+1-h) to X, 1 to Y/2), the coordinate range of the third pixel region is (1 to (X/2+h), (Y/2+1) to Y), and the coordinate range of the fourth pixel region is ((X/2+1) to X, (Y/2+1-v) to Y), where v is the first preset value and h is the second preset value. As shown in fig. 4A, the method of this embodiment may include:
and 2011, capturing the image according to the coordinate range of the first pixel region in the image to be processed, and performing amplification processing to obtain a first image to be fused.
Specifically, the size of the image to be processed is X × Y and the coordinate range of the first pixel region is (1 to X/2, 1 to (Y/2+v)); the image is intercepted according to this coordinate range and subjected to amplification processing to obtain the first image to be fused, which is shown as image A in fig. 4B.
Step 2012, intercepting the image in the image to be processed according to the coordinate range of the second pixel area, and performing amplification processing to obtain a second image to be fused.
Specifically, the size of the image to be processed is X × Y and the coordinate range of the second pixel region is ((X/2+1-h) to X, 1 to Y/2); the image is intercepted according to this coordinate range and subjected to amplification processing to obtain the second image to be fused, which is shown as image B in fig. 4B. Since the first image to be fused and the second image to be fused partially overlap, the position of image B in fig. 4B is drawn according to its coordinates for clarity of illustration.
Step 2013, intercepting the image in the image to be processed according to the coordinate range of the third pixel region, and performing amplification processing to obtain a third image to be fused.
Specifically, the size of the image to be processed is X × Y and the coordinate range of the third pixel region is (1 to (X/2+h), (Y/2+1) to Y); the image is intercepted according to this coordinate range and subjected to amplification processing to obtain the third image to be fused, which is shown as image C in fig. 4B.
Step 2014, intercepting the image in the image to be processed according to the coordinate range of the fourth pixel region, and performing amplification processing to obtain a fourth image to be fused.
Specifically, the size of the image to be processed is X × Y and the coordinate range of the fourth pixel region is ((X/2+1) to X, (Y/2+1-v) to Y); the image is intercepted according to this coordinate range and subjected to amplification processing to obtain the fourth image to be fused, which is shown as image D in fig. 4B.
The splicing of A, B, C and D shown in fig. 4B is illustrated in fig. 4C. As shown in fig. 4C, only the fusion regions of adjacent images to be fused are superimposed together; there is no region in which the fusion regions of all four images to be fused are superimposed, that is, the 1/4 region of the above-mentioned prior art does not exist.
Wherein the first image to be fused, the second image to be fused, the third image to be fused, and the fourth image to be fused may be stored in a random access memory of a System On Chip (SOC), respectively.
Step 2021, performing image analysis processing on the first image to be fused to obtain original image information of the first image to be fused, and adjusting image information of a fusion area of the second image to be fused according to the original image information of the first image to be fused to obtain the second image to be output.
As further illustrated by fig. 4B above, when the adjustment between A and B is performed, A already contains all of the multi-display content required by B, so the original image information of A can be acquired and used to adjust the image information of the fusion area of B.
The "image information" referred to herein includes any one or a combination of luminance distribution information, chrominance distribution information, and texture information.
The luminance distribution information may be obtained from a histogram of the region image, the chrominance distribution information from chrominance component analysis, and the texture information from the LBP (local binary pattern) algorithm.
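As a rough illustration of how two of these kinds of image information might be computed, the sketch below uses NumPy. The normalized histogram and the basic 8-neighbour LBP are standard textbook formulations, not taken from the patent itself, and the function names are mine:

```python
import numpy as np

def luminance_histogram(region, bins=256):
    """Luminance distribution of a grayscale region as a normalized histogram."""
    hist, _ = np.histogram(region, bins=bins, range=(0, 256))
    return hist / hist.sum()

def lbp_codes(gray):
    """Basic 8-neighbour local binary pattern codes for the interior pixels."""
    center = gray[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        # Compare each neighbour with the centre pixel; set one bit per direction.
        nb = gray[1 + dy : gray.shape[0] - 1 + dy,
                  1 + dx : gray.shape[1] - 1 + dx]
        codes |= (nb >= center).astype(np.uint8) << bit
    return codes
```

The per-region histogram gives the luminance distribution directly, while the LBP code image can be summarized (e.g., by its own histogram) as a compact texture descriptor.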
Step 2022, performing image analysis processing on the second image to be fused to obtain original image information of the second image to be fused, and adjusting image information of a fusion area of the fourth image to be fused according to the original image information of the second image to be fused to obtain the fourth image to be output.
As further illustrated by fig. 4B above, when the adjustment between B and D is performed, B already contains all of the multi-display content required by D, so the original image information of B can be acquired and used to adjust the image information of the fusion area of D.
Step 2023, performing image analysis processing on the fourth image to be fused to obtain original image information of the fourth image to be fused, and adjusting image information of a fusion area of the third image to be fused according to the original image information of the fourth image to be fused to obtain the third image to be output.
As further illustrated in the above fig. 4B, when performing adjustment between D and C, D already has all the multi-display content required by C, so that the original image information of D can be acquired, and the image information of the fusion area of C can be adjusted by using the original image information of D.
Step 2024, performing image analysis processing on the third image to be fused to obtain original image information of the third image to be fused, and adjusting the image information of the fusion area of the first image to be fused according to the original image information of the third image to be fused to obtain the first image to be output.
As further illustrated in the above fig. 4B, when the adjustment is performed between a and C, C already has all the multi-display content required by a, so that the original image information of C can be acquired, and the image information of the fusion area of a can be adjusted by using the original image information of C.
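Steps 2021 to 2024 form a cycle (A→B, B→D, D→C, C→A) in which each already-correct neighbour supplies the reference for adjusting the next fusion area. A minimal sketch of one such adjustment is shown below, assuming the simplest case where only the mean luminance of the fusion area is matched to the reference; the patent leaves the exact adjustment open, so the function name, slicing convention, and mean-matching rule are all illustrative:

```python
import numpy as np

def adjust_fusion_area(target, fusion_slice, reference_region):
    """Scale the fusion area of `target` so its mean luminance matches
    the corresponding region of the already-adjusted neighbour image."""
    area = target[fusion_slice].astype(np.float64)
    ref_mean = float(reference_region.mean())
    cur_mean = float(area.mean())
    if cur_mean > 0:
        area *= ref_mean / cur_mean  # brightness-match the fusion area
    out = target.copy()
    out[fusion_slice] = np.clip(area, 0, 255).astype(target.dtype)
    return out
```

Applied in the order of steps 2021 to 2024, each call uses the output of the previous one, so only one reference analysis per pair is needed rather than a symmetric bidirectional adjustment.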
Step 203, outputting the first image to be output to a first display screen, outputting the second image to be output to a second display screen, outputting the third image to be output to a third display screen, and outputting the fourth image to be output to a fourth display screen.
Specifically, each image to be output is output to a corresponding display screen.
In some embodiments, the first and second preset values are determined according to a fusion deviation percentage measured by a sensor. For example, when the deviation percentage is 2%, the first preset value may be 38 and the second preset value may be 21.
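The patent gives only the example numbers, not the formula. One calculation that reproduces them, assuming a hypothetical 1920 × 1080 source at a 2% deviation, is floor(width · p) = 38 and floor(height · p) = 21; the sketch below encodes that assumption and should not be read as the patented calculation:

```python
import math

def preset_values(width, height, deviation_pct):
    """Hypothetical derivation of the first and second preset values (v, h)
    from the sensor-reported fusion deviation percentage."""
    first = math.floor(width * deviation_pct)    # assumed: v from the width
    second = math.floor(height * deviation_pct)  # assumed: h from the height
    return first, second
```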
In this embodiment, the image to be processed is intercepted and amplified according to a first pixel area, a second pixel area, a third pixel area and a fourth pixel area to obtain a first image to be fused, a second image to be fused, a third image to be fused and a fourth image to be fused. These four images to be fused are then subjected to fusion processing to obtain a first image to be output, a second image to be output, a third image to be output and a fourth image to be output, which are output to the corresponding display screens respectively. Because the maximum values of the vertical coordinates of the first pixel area and the second pixel area differ by a first preset value, and the maximum values of the horizontal coordinates of the first pixel area and the third pixel area differ by a second preset value, the fusion areas of adjacent images to be fused are asymmetric. In this way, fusion splicing of a plurality of images can be realized while reducing the computing resources required for the fusion splicing.
Fig. 5 is a schematic structural diagram of a first embodiment of an image processing apparatus according to the present invention, and as shown in fig. 5, the apparatus of the present embodiment may include: a memory 11 and a processor 12, wherein the memory 11 is used for storing a computer program, and the processor 12 is used for executing the computer program to implement the method described in the above embodiments.
The apparatus of this embodiment may be configured to implement the technical solutions of the above method embodiments, and the implementation principles and technical effects are similar, which are not described herein again.
An embodiment of the present invention further provides a computer storage medium for storing a computer program which, when executed, implements the method according to any of the above embodiments. The implementation principle and technical effect are similar and are not described again here.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. An image processing method, comprising:
intercepting and amplifying the image to be processed according to a first pixel area, a second pixel area, a third pixel area and a fourth pixel area respectively to obtain a first image to be fused, a second image to be fused, a third image to be fused and a fourth image to be fused;
performing fusion processing on the first image to be fused, the second image to be fused, the third image to be fused and the fourth image to be fused to obtain a first image to be output, a second image to be output, a third image to be output and a fourth image to be output, and outputting the first image to be output, the second image to be output, the third image to be output and the fourth image to be output to corresponding display screens respectively;
wherein the maximum values of the vertical coordinates of the first pixel region and the second pixel region differ by a first preset value, the maximum values of the abscissas of the first pixel region and the third pixel region differ by a second preset value, the first pixel area and the fourth pixel area are centrosymmetric, the first image to be fused, the second image to be fused, the third image to be fused and the fourth image to be fused each comprise a fusion area, the fusion area of the first image to be fused is asymmetric to the fusion area of the second image to be fused, the fusion area of the first image to be fused is asymmetric to the fusion area of the third image to be fused, the fusion area of the fourth image to be fused is asymmetric to the fusion area of the second image to be fused, and the fusion area of the fourth image to be fused is asymmetric to the fusion area of the third image to be fused.
2. The method according to claim 1, wherein the size of the image to be processed is X × Y, the first pixel region has a coordinate range of (1 to X/2, 1 to (Y/2+v)), the second pixel region has a coordinate range of ((X/2+1-h) to X, 1 to Y/2), the third pixel region has a coordinate range of (1 to (X/2+h), (Y/2+1) to Y), and the fourth pixel region has a coordinate range of ((X/2+1) to X, (Y/2+1-v) to Y);
wherein v is the first preset value, and h is the second preset value.
3. The method according to claim 2, wherein the image to be processed is intercepted and amplified according to a first pixel region, a second pixel region, a third pixel region and a fourth pixel region respectively, and the acquiring of the first image to be fused, the second image to be fused, the third image to be fused and the fourth image to be fused comprises:
intercepting an image in the image to be processed according to the coordinate range of the first pixel area, and carrying out amplification processing to obtain the first image to be fused;
intercepting an image in the image to be processed according to the coordinate range of the second pixel area, and carrying out amplification processing to obtain a second image to be fused;
intercepting an image in the image to be processed according to the coordinate range of the third pixel area, and carrying out amplification processing to obtain a third image to be fused;
and intercepting an image in the image to be processed according to the coordinate range of the fourth pixel area, and carrying out amplification processing to obtain the fourth image to be fused.
4. The method according to any one of claims 1 to 3, wherein the first image to be fused, the second image to be fused, the third image to be fused, and the fourth image to be fused are respectively stored in a random access memory of a system on chip (SOC), and the performing fusion processing on the first image to be fused, the second image to be fused, the third image to be fused and the fourth image to be fused to obtain the first image to be output, the second image to be output, the third image to be output and the fourth image to be output comprises:
performing image analysis processing on the first image to be fused to obtain original image information of the first image to be fused, and adjusting image information of a fusion area of the second image to be fused according to the original image information of the first image to be fused to obtain the second image to be output;
performing image analysis processing on the second image to be fused to obtain original image information of the second image to be fused, and adjusting image information of a fusion area of a fourth image to be fused according to the original image information of the second image to be fused to obtain a fourth image to be output;
performing image analysis processing on the fourth image to be fused to obtain original image information of the fourth image to be fused, and adjusting the image information of a fusion area of the third image to be fused according to the original image information of the fourth image to be fused to obtain a third image to be output;
and performing image analysis processing on the third image to be fused to obtain original image information of the third image to be fused, and adjusting the image information of the fusion area of the first image to be fused according to the original image information of the third image to be fused to obtain the first image to be output.
5. The method according to claim 4, wherein the image information comprises any one or a combination of luminance distribution information, chrominance distribution information, and texture information.
6. The method according to claim 5, wherein the performing image analysis processing on the first image to be fused to obtain original image information of the first image to be fused includes:
and performing image analysis processing on a partial image corresponding to the fusion area of the second image to be fused in the first image to be fused to obtain original image information of the first image to be fused.
7. The method according to any one of claims 1 to 3, wherein the first and second preset values are determined from a fused percentage of deviation determined by a sensor.
8. The method of any of claims 1 to 3, wherein the outputting to the respective display screen comprises:
outputting the first image to be output to a first display screen, outputting the second image to be output to a second display screen, outputting the third image to be output to a third display screen, and outputting the fourth image to be output to a fourth display screen.
9. An image processing apparatus characterized by comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the method of any one of claims 1 to 8.
10. A computer storage medium, comprising: the computer storage medium is for storing a computer program which when executed is for implementing the method of any one of claims 1 to 8.
CN201810532178.9A 2018-05-29 2018-05-29 Image processing method and device Active CN108762706B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810532178.9A CN108762706B (en) 2018-05-29 2018-05-29 Image processing method and device

Publications (2)

Publication Number Publication Date
CN108762706A CN108762706A (en) 2018-11-06
CN108762706B true CN108762706B (en) 2021-07-27

Family

ID=64003650

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810532178.9A Active CN108762706B (en) 2018-05-29 2018-05-29 Image processing method and device

Country Status (1)

Country Link
CN (1) CN108762706B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110825895B (en) * 2019-11-07 2022-06-21 威创集团股份有限公司 Visual fusion method of isolation information and desktop image processor
CN111562869B (en) * 2020-03-31 2022-02-25 完美世界(北京)软件科技发展有限公司 Screenshot method and device, storage medium and electronic device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1688160A (en) * 2005-04-30 2005-10-26 广东威创日新电子有限公司 Split display image edge blending method for large screen
CN101442653A (en) * 2008-12-18 2009-05-27 北京航空航天大学 Multichannel video play and large screen display method
CN101692335A (en) * 2009-09-24 2010-04-07 广东威创视讯科技股份有限公司 Image processing method and device thereof for achieving seamless splicing large screen display
CN101964891A (en) * 2010-09-13 2011-02-02 广东威创视讯科技股份有限公司 Screen seamless mosaicing image processing method and device therefor
CN102508397A (en) * 2011-10-27 2012-06-20 王悦 Image edge fusion projection method
CN105279735A (en) * 2015-11-20 2016-01-27 沈阳东软医疗系统有限公司 Fusion method, fusion device and fusion equipment of image splicing

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011091604A1 (en) * 2010-01-29 2011-08-04 华为终端有限公司 Method, apparatus and system for video communication

Also Published As

Publication number Publication date
CN108762706A (en) 2018-11-06

Similar Documents

Publication Publication Date Title
US11194536B2 (en) Image processing method and apparatus for displaying an image between two display screens
CN106873162B (en) Pixel arrangement method of display device, display device and near-to-eye display equipment
CN110569013B (en) Image display method and device based on display screen
US11532075B2 (en) Display apparatus for restoring high-frequency component of input image and image processing method thereof
US9208537B1 (en) Super-resolution reconstructing method for enhancing smoothness and sharpness of video image
CN111741274B (en) Ultrahigh-definition video monitoring method supporting local amplification and roaming of picture
CN109495729B (en) Projection picture correction method and system
US20210327093A1 (en) Image processing method of virtual reality and apparatus thereof
WO2018025474A1 (en) Information processing device, information processing method, and program
US20230362328A1 (en) Video frame insertion method and apparatus, and electronic device
CN108762706B (en) Image processing method and device
WO2019109699A1 (en) Picture processing method and apparatus
US8279223B2 (en) Image processing apparatus, image processing method and program
CN114143524B (en) Laser projection system and projection method
US8830268B2 (en) Non-linear image mapping using a plurality of non-linear image mappers of lesser resolution
JP2014176053A (en) Image signal processor
US20100182331A1 (en) Method and apparatus for drawing image
JP2020162085A (en) Video processing device and image synthesis method
CN101364303B (en) Edge pixel extracting and processing method
WO2018177043A1 (en) Stereoscopic display driving method and apparatus, and display device
CN112650458A (en) Display screen adjusting method, storage medium and terminal equipment
WO2018189961A1 (en) Image processing device, terminal device, and image processing program
WO2022017271A1 (en) Image display method for display device, and device and computer-readable storage medium
US20240244156A1 (en) Display control method, display control device, display apparatus and computer readable medium
CN117577061B (en) Liquid crystal splicing method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 266555 Qingdao economic and Technological Development Zone, Shandong, Hong Kong Road, No. 218

Applicant after: Hisense Visual Technology Co., Ltd.

Address before: 266555 Qingdao economic and Technological Development Zone, Shandong, Hong Kong Road, No. 218

Applicant before: QINGDAO HISENSE ELECTRONICS Co.,Ltd.

GR01 Patent grant
GR01 Patent grant