CN117152031A - Image fusion method and device, electronic equipment and storage medium - Google Patents

Image fusion method and device, electronic equipment and storage medium

Info

Publication number
CN117152031A
Authority
CN
China
Prior art keywords
image
display device
value
pixels
environment
Prior art date
Legal status
Pending
Application number
CN202211160692.7A
Other languages
Chinese (zh)
Inventor
李宇 (Li Yu)
Current Assignee
Shenzhen TCL New Technology Co Ltd
Original Assignee
Shenzhen TCL New Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen TCL New Technology Co Ltd filed Critical Shenzhen TCL New Technology Co Ltd
Priority to CN202211160692.7A priority Critical patent/CN117152031A/en
Priority to PCT/CN2022/129361 priority patent/WO2024060360A1/en
Publication of CN117152031A publication Critical patent/CN117152031A/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an image fusion method and device, an electronic device, and a storage medium. The method includes: acquiring an environment image, the environment image representing a display device and the surrounding environment of the display device; performing image filling from at least two directions of the display device in the environment image to obtain an overlap region, where the brightness of the overlap region is lower than that of the non-overlap region; and adjusting the brightness of the overlap region to generate a target image fused with the surroundings of the display device. The invention integrates the display device with its surroundings in different directions and improves the user's home visual experience.

Description

Image fusion method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image fusion method, an image fusion device, an electronic device, and a storage medium.
Background
Display devices such as televisions, monitors, and billboards typically show a black screen when not in operation and cannot blend into the surrounding environment. Environment fusion means that the display device captures content from its surroundings, generates a picture close to the environment, and displays that picture, so that the device and its surroundings appear as one and the user's home visual experience is improved.
At present, some televisions offer their own environment fusion modes, but they can only acquire environment images from above the television, and the fusion effect is poor.
Disclosure of Invention
The invention provides an image fusion method and device, an electronic device, and a storage medium, to solve the prior-art problems that environment images can only be acquired from above the television and that the fusion effect is poor.
In a first aspect, the present invention provides an image fusion method, the method comprising:
acquiring an environment image, where the environment image represents a display device and the surrounding environment of the display device;
performing image filling from at least two directions of the display device in the environment image to obtain an overlap region, where the brightness of the overlap region is lower than that of the non-overlap region;
and adjusting the brightness of the overlap region to generate a target image fused with the surrounding environment of the display device.
In an embodiment of the present invention, the step of performing image filling from at least two directions of the display device in the environment image to obtain the overlap region includes:
calculating the total number of pixels corresponding to each direction, where the total number of pixels is the number of pixels whose RGB values are below a preset value;
and determining a target direction according to the total numbers of pixels, and performing image filling in the target direction to obtain the overlap region.
In an embodiment of the present invention, the at least two directions include four directions: up, down, left, and right, and the step of calculating the total number of pixels corresponding to each direction includes:
configuring the number of probing rectangles based on the ratio of the length to the width of the display device, to obtain a first number of probing rectangles in the up-down direction and a second number of probing rectangles in the left-right direction, where the probing rectangles are used to detect pixels whose RGB values are below the preset value;
and accumulating the numbers of pixels detected by the probing rectangles in the four directions to obtain the total number of pixels for each direction.
In an embodiment of the present invention, the step of configuring the number of probing rectangles based on a ratio of a length and a width of the display device includes:
if the ratio of the length to the width of the display device is 16:9, 16 probing rectangles are used in the up-down direction and 9 in the left-right direction;
each probing rectangle is 8 to 10 pixels wide; the length of the left-right probing rectangles is W/2 to 3W/4, and the height of the up-down probing rectangles is H/2 to 3H/4, where W is the number of pixels the display device spans horizontally in the environment image and H is the number of pixels it spans vertically.
In an embodiment of the present invention, the step of accumulating the numbers of pixels detected by the probing rectangles in the four directions to obtain the total number of pixels for each direction includes:
configuring a variable for each of the up, down, left, and right directions;
accumulating, into the corresponding variable, the number of pixels with RGB values below the preset value detected by the probing rectangles of each direction, to obtain the total number of pixels for that direction;
where the variables for up, down, left, and right are top_count, bottom_count, left_count, and right_count respectively, and pixels with RGB values below the preset value represent blackish pixels.
In an embodiment of the present invention, the step of determining a target direction according to the total numbers of pixels and performing image filling in the target direction to obtain the overlap region includes:
obtaining a first average value from the formula (top_count + bottom_count) / first number, weighted by a preset weight, and a second average value from the formula (left_count + right_count) / second number;
and comparing the first average value with the second average value, taking the direction of the smaller value as the target direction for content acquisition, and filling the acquired content over the display device in the environment image to obtain the overlap region.
In an embodiment of the present invention, the step of comparing the first average value and the second average value, taking the direction of the smaller value as the target direction for content acquisition, and filling the acquired content over the display device in the environment image to obtain the overlap region further includes:
comparing the total numbers of pixels of the two opposite directions within the target direction, taking the direction with the smaller value as a first target direction and the direction with the larger value as a second target direction;
assuming the first target direction is left and the second target direction is right, selecting from the environment image an image of length W/2+A1 and width H to the left of the display device, filling it in, and flipping it horizontally to obtain a first image, and selecting an image of length W/2+B1 and width H to the right of the display device, filling it in, and flipping it horizontally to obtain a second image;
calculating the RGB values of the overlap region to obtain a third image comprising the first image and the second image, where the overlap region is the intersecting, overlapped part of the first image and the second image;
where W is the number of pixels the display device spans horizontally in the environment image, A1 and B1 are pixel lengths, A1 > B1, and W/10 < A1+B1 < W/2.
In an embodiment of the present invention, the RGB values of the overlap region are calculated as:
Result Color = (Top Color) * (Bottom Color) / 255;
where Result Color is the RGB value of the overlap region, Top Color is the RGB value from the A1-side (first) image, and Bottom Color is the RGB value from the B1-side (second) image.
In an embodiment of the present invention, the step of adjusting the brightness of the overlap region to generate a target image fused with the surrounding environment of the display device includes:
acquiring a first average brightness value and a second average brightness value, where the first average brightness value is the average brightness of the (W/2-B1-1)-th column of the third image and the second average brightness value is the average brightness of the (W/2+A1+1)-th column of the third image;
grading the overlap region from the first average brightness value to the second average brightness value to adjust its brightness and obtain the target image;
where the overlap region has length A1+B1 and height H, H being the number of pixels the display device spans vertically in the environment image.
In an embodiment of the present invention, the step of grading the overlap region from the first average brightness value to the second average brightness value to adjust its brightness and obtain the target image includes:
assuming the first average brightness value is greater than the second average brightness value, calculating the decreasing step of the gradient by the formula:
ΔLight = (LA - LB) / (A1 + B1);
where ΔLight is the decreasing step, LA is the first average brightness value, and LB is the second average brightness value.
In an embodiment of the present invention, the RGB values of the overlap region are adjusted as:
Result Color = ((Top Color) * (Bottom Color) + ΔLight) / 255.
In an embodiment of the present invention, the step of acquiring an environment image includes:
invoking a shooting interface of the display device or of a mobile terminal, where the shooting interface is provided with a rectangular frame;
and placing the image of the display device within the rectangular frame to acquire the environment image.
In an embodiment of the present invention, before the step of performing image filling from at least two directions of the display device in the environment image to obtain the overlap region, the method further includes:
determining the position of the display device in the environment image and saving the position information;
performing high-pass filtering on the environment image to obtain a processed environment image;
where the position information includes the X and Y coordinates of the display device in the environment image, W, the number of pixels the display device spans horizontally, and H, the number of pixels it spans vertically.
In an embodiment of the present invention, after the step of adjusting the brightness of the overlap region to generate a target image fused with the surrounding environment of the display device, the method further includes:
if the environment image is acquired through the display device, displaying the target image directly on the display device; or,
if the environment image is acquired through the mobile terminal, sending the target image to the display device for display.
In a second aspect, the present invention also provides an image fusion device, the device comprising:
an acquisition module, configured to acquire an environment image, where the environment image represents the display device and the surrounding environment of the display device;
an overlap processing module, configured to perform image filling from at least two directions of the display device in the environment image to obtain an overlap region, where the brightness of the overlap region differs from that of the non-overlap region;
and a fusion module, configured to adjust the brightness of the overlap region to generate a target image fused with the surrounding environment of the display device.
In a third aspect, the present invention also provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the image fusion method according to any one of the first aspects when executing the program.
In a fourth aspect, the present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image fusion method according to any of the first aspects.
According to the image fusion method and device, electronic device, and storage medium of the invention, image filling is performed from at least two directions of the display device in the acquired environment image and brightness adjustment is applied to generate a target image fused with the surroundings of the display device, so that the display device and its surroundings are integrated in different directions and the user's home visual experience is improved.
Drawings
In order to more clearly illustrate the technical solutions of the invention or of the prior art, the drawings used in the description of the embodiments or of the prior art are briefly introduced below. The drawings described below show some embodiments of the invention; a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of an image fusion method provided by the invention;
FIG. 2 is a schematic diagram of a rectangular frame provided by an embodiment of the present invention;
FIG. 3 is an effect diagram of an environmental image provided by an embodiment of the present invention;
fig. 4 is an effect diagram of the environment image after the high-pass filtering process according to the embodiment of the present invention;
FIG. 5 is a first schematic diagram of a probing rectangle provided by an embodiment of the present invention;
FIG. 6 is a second schematic diagram of a probing rectangle provided by an embodiment of the present invention;
FIG. 7 is a first effect diagram of image filling provided by an embodiment of the present invention;
FIG. 8 is a second effect diagram of image filling provided by an embodiment of the present invention;
FIG. 9 is a third effect diagram of image filling provided by an embodiment of the present invention;
FIG. 10 is an effect diagram of an overlap region provided by an embodiment of the present invention;
FIG. 11 is an effect diagram of a target image provided by an embodiment of the present invention;
fig. 12 is a schematic structural view of an image fusion apparatus provided by the present invention;
fig. 13 is a schematic structural diagram of an image fusion apparatus according to an embodiment of the present invention;
fig. 14 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "first", "second", and the like in the description, claims, and drawings are used to distinguish similar elements, not necessarily to describe a particular order or sequence. It is to be understood that data so used are interchangeable where appropriate, so that the embodiments described herein can be implemented in orders other than those illustrated or described.
Technical terms related to the present invention are described as follows:
Environment fusion means that the display device captures content from its surroundings, generates a picture close to the environment, and displays that picture, so that the device and its surroundings appear as one and the user's home visual experience is improved.
To solve the prior-art problems that environment images can only be acquired from above the television and that the fusion effect is poor, the image fusion method and device, electronic device, and storage medium of the invention perform image filling from at least two directions of the display device in the acquired environment image and adjust the brightness to generate a target image fused with the surroundings of the display device, integrating the display device with its surroundings in different directions and improving the user's home visual experience.
The image fusion method, apparatus, electronic device, and storage medium of the present invention are described below with reference to fig. 1 to 14.
Referring to fig. 1, fig. 1 is a flow chart of an image fusion method provided by the present invention, and the method includes:
step 110, an environmental image is acquired, the environmental image being an image representing a display device and a surrounding environment of the display device.
The environment image may be captured by a camera built into the display device, or by a camera independent of the display device (for example, a mobile phone). The invention does not limit the manner in which the environment image is acquired.
Step 120, image filling is performed from at least two directions of the display device in the environment image to obtain an overlap region, the brightness of the overlap region being lower than that of the non-overlap region.
Illustratively, compared with the prior art, the invention adapts and extends the display device toward its surroundings in different directions, and the direction from which display content is obtained is selected according to the intensity of color change around the display device.
Step 130, the brightness of the overlap region is adjusted to generate a target image fused with the surrounding environment of the display device.
Illustratively, the invention performs deep fusion processing on the display content to generate a fused image, so that the television and its surroundings appear as one and the user's home visual experience is improved.
The above steps 110 to 130 are specifically described below.
Illustratively, in step 110, the step of acquiring an environment image includes:
Step 111, invoke the shooting interface of the display device or of the mobile terminal (for example, a mobile phone); the shooting interface is provided with a rectangular frame.
Step 112, place the image of the display device within the rectangular frame to obtain the environment image.
For example, an application on the mobile terminal is opened and its shooting interface displayed, in which a rectangular frame is provided (as shown in fig. 2). When shooting, the display device (for example, a television) must lie inside the rectangular frame, and the rectangular frame lies in the middle of the shooting interface. The rectangular frame is a horizontally placed rectangle; during shooting the display device only needs to lie entirely within the frame and need not fit it exactly.
Illustratively, before performing step 120 above, the image fusion method further comprises:
Step 113, determine the position of the display device in the environment image and save the position information.
For example, assume the captured environment image is as shown in fig. 3 and the position information of the display device in the environment image is (X, Y, W, H), where X and Y are the coordinates of the display device in the environment image, W is the number of pixels the display device spans horizontally, and H is the number of pixels it spans vertically. The position information is obtained by traversing the display device (i.e., the black area) inside the rectangular frame of the environment image.
Specifically, the environment image can be opened as a bitmap, the image at the designated position (i.e., inside the rectangular frame) read into a two-dimensional array, and the pixels traversed from left to right and top to bottom. When a pixel with RGB values below the preset value (35) is encountered, the pixels to its right and below are checked; if they also satisfy the condition, the traversal moves to the right and repeats the check until a non-matching point is found, then moves to the pixel below and continues. The region with the most contiguous matching points is finally used. Note that pixels with RGB values below 35 represent blackish areas, i.e., the black region presented by the display device.
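What follows is a minimal sketch of this dark-region search, assuming a NumPy/Pillow environment; find_display_rect, frame_box, and the exact rectangle-growing strategy are illustrative assumptions built on the traversal just described, not the patent's own routine.

```python
import numpy as np
from PIL import Image

BLACK_THRESHOLD = 35  # per the text, RGB values below 35 count as "blackish"

def find_display_rect(env_image_path, frame_box):
    """Return (X, Y, W, H) of the largest contiguous blackish block inside frame_box."""
    img = np.asarray(Image.open(env_image_path).convert("RGB"))
    left, top, right, bottom = frame_box
    dark = np.all(img[top:bottom, left:right] < BLACK_THRESHOLD, axis=2)

    best, best_area = (0, 0, 0, 0), 0
    rows, cols = dark.shape
    for y in range(rows):
        for x in range(cols):
            if not dark[y, x]:
                continue
            w = 0                                    # grow to the right
            while x + w < cols and dark[y, x + w]:
                w += 1
            h = 1                                    # then grow downward
            while y + h < rows and dark[y + h, x:x + w].all():
                h += 1
            if w * h > best_area:
                best_area, best = w * h, (left + x, top + y, w, h)
    return best
```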
Step 114, perform high-pass filtering on the environment image to obtain a processed environment image.
Illustratively, the environment image is processed with a high-pass filter to obtain the processed environment image (as shown in fig. 4). High-pass filtering is a filtering method in which high-frequency signals pass through normally while low-frequency signals below a set threshold are blocked and attenuated.
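A short sketch of this filtering step, assuming Pillow and NumPy; since the patent does not name a specific kernel, the common construction "original minus Gaussian blur" is used here as an assumption.

```python
import numpy as np
from PIL import Image, ImageFilter

def high_pass(img: Image.Image, radius: int = 5) -> Image.Image:
    """Keep high frequencies by subtracting a blurred (low-frequency) copy."""
    low = img.filter(ImageFilter.GaussianBlur(radius))
    diff = np.asarray(img, np.int16) - np.asarray(low, np.int16) + 128  # recenter
    return Image.fromarray(np.clip(diff, 0, 255).astype(np.uint8))
```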
Illustratively, in step 120, the step of performing image filling from at least two directions of the display device in the environment image to obtain the overlap region includes:
Step 121, calculate the total number of pixels corresponding to each direction, where the total number of pixels is the number of pixels whose RGB values are below a preset value.
Step 122, determine a target direction according to the total numbers of pixels, and perform image filling in the target direction to obtain the overlap region.
It should be noted that the invention may perform image filling from at least two directions of the display device in the environment image, for example left and right, or up and down; it may also be left and up, or left, right, and up, or all four directions. The invention does not limit the directions of image filling.
The following description will be given by taking the example in which the at least two directions include up, down, left, and right directions.
Illustratively, in step 121, the step of calculating the total number of pixels corresponding to each direction includes:
Step 1211, configure the number of probing rectangles based on the ratio of the length to the width of the display device, obtaining a first number of probing rectangles in the up-down direction and a second number in the left-right direction; the probing rectangles are used to detect pixels whose RGB values are below the preset value.
Illustratively, as shown in figs. 5 and 6, each probing rectangle extends outward from the edge of the display device in the environment image and is 8 to 10 pixels wide. The left-right probing rectangles have length W/2 to 3W/4 (for example, W/2), and the probing rectangles above have height H/2 to 3H/4 (for example, H/2). The probing rectangles below also have height H/2 to 3H/4 (for example, H/2), but they are not attached to the display device; they are shifted downward by a preset number of pixels (for example, 20), because the bottom of the display device carries a panel, an infrared receiver, speakers, and similar components, and its bottom border is thicker than the other three sides.
Specifically, if the ratio of the length to the width of the display device is 16:9, 16 probing rectangles are used in the up-down direction and 9 in the left-right direction (as shown in fig. 6).
Step 1212, accumulate the numbers of pixels detected by the probing rectangles in the four directions to obtain the total number of pixels for each direction.
Specifically, a variable is configured for each of the up, down, left, and right directions, and the number of pixels with RGB values below the preset value detected by the probing rectangles of each direction is accumulated into the corresponding variable, giving the total number of pixels for that direction.
The variables for up, down, left, and right are top_count, bottom_count, left_count, and right_count respectively; pixels with RGB values below the preset value represent blackish pixels, so each variable stores the total number of blackish pixels in its direction.
These variables record the probing results: whenever a blackish pixel (RGB below 35) is encountered while traversing a probing rectangle, the corresponding value is incremented by one.
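As a concrete illustration, the sketch below counts blackish pixels per direction under the geometry described above (16 probes above and below, 9 on each side, 8 pixels wide, the bottom probes shifted 20 pixels downward); count_dark and probe_counts are hypothetical helper names, not taken from the patent.

```python
import numpy as np

BLACK_THRESHOLD = 35

def count_dark(region: np.ndarray) -> int:
    """Count pixels whose R, G, and B values are all below the threshold."""
    return int(np.all(region < BLACK_THRESHOLD, axis=2).sum())

def probe_counts(img: np.ndarray, x: int, y: int, w: int, h: int, probe_w: int = 8):
    """img: H x W x 3 uint8 array; (x, y, w, h): display device rectangle."""
    top_count = bottom_count = left_count = right_count = 0
    for i in range(16):                              # probes along top and bottom edges
        px = x + i * w // 16
        top_count += count_dark(img[max(y - h // 2, 0):y, px:px + probe_w])
        by = y + h + 20                              # bottom probes offset 20 px downward
        bottom_count += count_dark(img[by:by + h // 2, px:px + probe_w])
    for j in range(9):                               # probes along left and right edges
        py = y + j * h // 9
        left_count += count_dark(img[py:py + probe_w, max(x - w // 2, 0):x])
        right_count += count_dark(img[py:py + probe_w, x + w:x + w + w // 2])
    return top_count, bottom_count, left_count, right_count
```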
Illustratively, in step 122, the step of determining the target direction according to the total numbers of pixels and performing image filling in the target direction to obtain the overlap region includes:
Step 1221, obtain a first average value from the formula (top_count + bottom_count) / first number, weighted by a preset weight, and a second average value from the formula (left_count + right_count) / second number.
For example, (top_count + bottom_count) / 16 × 110% gives the first average value, measuring variation in the up-down direction, and (left_count + right_count) / 9 gives the second average value, measuring variation in the left-right direction.
It should be noted that the preset weight (for example, 110%) is used for two reasons. First, display devices include wall-mounted and desktop devices; a desktop device has a stand, whose area is typically less than 10% of the area of the display device. Second, the bottom border of the display device is thicker than the other borders, and the black strip at the bottom gives the overall picture a more discontinuous feel. For these reasons, to achieve a better fusion effect, the preset weight is applied in the up-down direction.
Step 1222, compare the first average value with the second average value, take the direction of the smaller value as the target direction for content acquisition, and fill the acquired content over the display device in the environment image to obtain the overlap region.
The direction from which content is acquired is either the up-down or the left-right direction; the smaller the value, the smaller the color variation of the environment image in that direction, i.e., the more uniform the content. Filling with the less varied image yields a better fusion effect. If the first and second average values are equal, the left-right direction is selected as the target direction.
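A minimal sketch of this decision rule, using the 110% vertical weight from the example above and breaking ties toward the left-right direction as stated:

```python
def choose_direction(top_count: int, bottom_count: int,
                     left_count: int, right_count: int,
                     vertical_weight: float = 1.10) -> str:
    first_avg = (top_count + bottom_count) / 16 * vertical_weight   # up-down
    second_avg = (left_count + right_count) / 9                     # left-right
    # a smaller average means less color variation, i.e. a better filling source;
    # equality falls through to the left-right direction
    return "up-down" if first_avg < second_avg else "left-right"
```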
Illustratively, first, the total numbers of pixels of the two opposite directions within the target direction are compared; the direction with the smaller value is taken as the first target direction and the direction with the larger value as the second target direction.
If the determined target direction is the left-right direction, the values of the variables left_count and right_count are compared; assuming left_count is smaller, the first target direction is the left direction and the second target direction is the right direction.
Then, an image of length W/2+A1 (for example, A1 = W/6) and width H to the left of the display device in the environment image is selected, filled in, and horizontally flipped to obtain the first image A (as shown in figs. 7 and 9), and an image of length W/2+B1 (for example, B1 = W/8) and width H to the right of the display device is selected, filled in, and horizontally flipped to obtain the second image B (as shown in figs. 8 and 9).
Here W is the number of pixels the display device spans horizontally in the environment image; A1 and B1 are pixel lengths, with A1 > B1 and W/10 < A1+B1 < W/2.
Note that A1 = W/6 and B1 = W/8 are preferred values; in practice the chosen values should satisfy W/10 < A1+B1 < W/2, with A1 and B1 each at most half of the respective side. The closer A1+B1 is to W/2, the better the blending of the overlap, but a larger overlap alters more of the original image's fluency and aesthetics; the closer it is to W/10, the more abrupt the later brightness transition, and a good fusion effect cannot be obtained.
In addition, because the above example uses the left-right direction, the fill images are horizontally flipped; other processing, such as rotation by an angle or vertical flipping, is also possible, so the invention does not limit the processing applied to the fill images.
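For the left-right case, constructing the two fill images might be sketched as follows; the crop boxes follow the W/2+A1 and W/2+B1 strip sizes above with the preferred values A1 = W/6 and B1 = W/8, (x, y, w, h) is the display device rectangle from the position information, and build_fill_images is an illustrative name.

```python
from PIL import Image, ImageOps

def build_fill_images(env: Image.Image, x: int, y: int, w: int, h: int):
    a1, b1 = w // 6, w // 8
    # strip to the left of the device, mirrored horizontally -> first image
    first = ImageOps.mirror(env.crop((x - (w // 2 + a1), y, x, y + h)))
    # strip to the right of the device, mirrored horizontally -> second image
    second = ImageOps.mirror(env.crop((x + w, y, x + w + w // 2 + b1, y + h)))
    return first, second
```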
Finally, the RGB values of the overlap region are calculated to obtain a third image comprising the first image and the second image (as shown in fig. 10), where the overlap region is the intersecting, overlapped part of the first and second images.
Illustratively, the RGB values of the overlap region are calculated as:
Result Color = (Top Color) * (Bottom Color) / 255;
where Result Color is the RGB value of the overlap region, Top Color is the RGB value from the A1-side (first) image, and Bottom Color is the RGB value from the B1-side (second) image.
Specifically, the three color channels R, G, B (Red, Green, Blue) of each pixel are computed separately. For example, the red channel becomes:
red value of the resulting pixel = (red of the Top image × red of the Bottom image) / 255; the value is written into the corresponding pixel, and the result is saved as the third image.
The other two channels, Green and Blue, are computed with the same formula as red.
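The per-channel computation can be sketched as below, assuming the two overlapping strips are equal-sized uint8 arrays; it mirrors the Result Color formula above, applied to R, G, and B alike.

```python
import numpy as np

def multiply_blend(top: np.ndarray, bottom: np.ndarray) -> np.ndarray:
    """Result Color = Top Color * Bottom Color / 255, per channel."""
    result = top.astype(np.uint16) * bottom.astype(np.uint16) // 255
    return result.astype(np.uint8)
```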
Illustratively, in step 130, the step of adjusting the brightness of the overlap region to generate the target image fused with the surrounding environment of the display device includes:
Step 131, obtain a first average brightness value and a second average brightness value, where the first average brightness value is the average brightness of the (W/2-B1-1)-th column of the third image and the second average brightness value is the average brightness of the (W/2+A1+1)-th column of the third image.
Because the image of the overlap region becomes dark after filling (as shown in fig. 10), its brightness needs to be adjusted.
For example, the first average brightness value LA is taken from the (W/2-W/8-1)-th column of the third image and the second average brightness value LB from the (W/2+W/6+1)-th column; LA and LB are both average brightness values at the edges of the darkened part.
Step 132, grade the overlap region from the first average brightness value to the second average brightness value to adjust its brightness and obtain the target image.
Illustratively, the decreasing step of the brightness gradient is obtained by dividing the difference between the larger and the smaller average brightness value by the number of pixel columns in the overlap region.
Specifically, assuming the first average brightness value LA is greater than the second average brightness value LB, the decreasing step ΔLight of the gradient is calculated as:
ΔLight = (LA - LB) / (A1 + B1);
thus the brightness of the leftmost column of the overlap region (of length A1+B1 and height H, where H is the number of pixels the display device spans vertically in the environment image) is LA and decreases gradually down to LB. The brightness-adjusted target image is shown in fig. 11.
Illustratively, the RGB values of the overlap region are adjusted as:
Result Color = ((Top Color) * (Bottom Color) + ΔLight) / 255.
For example, for the red channel:
Red = (red value of the Top image pixel in the overlap region × red value of the Bottom image pixel in the overlap region + the brightness decrement of that column) / 255;
the brightness formula is Light = (Red × 0.2126f + Green × 0.7152f + Blue × 0.0722f) / 255, where the values may be kept to four decimal places.
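Putting the luminance formula and the gradient together, a sketch might look as follows; the Rec. 709 luma weights match the formula above, while the exact scaling of ΔLight inside the adjustment is an assumption, since the patent states the formula only symbolically.

```python
import numpy as np

LUMA = np.array([0.2126, 0.7152, 0.0722], dtype=np.float32)

def column_luma(img: np.ndarray, col: int) -> float:
    """Average luminance (0..1) of one column of an H x W x 3 uint8 array."""
    return float((img[:, col].astype(np.float32) @ LUMA).mean() / 255.0)

def grade_overlap(top: np.ndarray, bottom: np.ndarray,
                  la: float, lb: float) -> np.ndarray:
    """Result = (Top * Bottom + dLight) / 255, with dLight fading from LA to LB."""
    h, w, _ = top.shape
    step = (la - lb) / w                        # dLight = (LA - LB) / (A1 + B1)
    out = np.empty((h, w, 3), dtype=np.float32)
    for c in range(w):
        boost = (la - c * step) * 255.0         # per-column brightness term (assumed scale)
        out[:, c] = (top[:, c].astype(np.float32) *
                     bottom[:, c].astype(np.float32) + boost) / 255.0
    return np.clip(out, 0, 255).astype(np.uint8)
```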
In one embodiment of the present invention, after step 130 is performed, the method further comprises:
Step 140, if the environment image is acquired through the display device, displaying the target image directly on the display device; or,
Step 150, if the environment image is acquired through the mobile terminal, sending the target image to the display device to be displayed there.
For example, the target image generated at the mobile terminal is pushed to the display device for display; alternatively, if the image processing algorithm is deployed on the display device, the captured environment image is transmitted to the display device, processed there, and the generated target image is then displayed.
In this way, the invention adapts and extends toward the surroundings of the display device in different directions, selects the direction from which fill content is obtained according to the intensity of color change around the display device, and performs deep fusion processing on the fill content to generate a fused image, so that the display device and its surroundings appear as one, improving the user's home visual experience and smart-screen experience.
The image fusion apparatus provided by the present invention will be described below, and the image fusion apparatus described below and the image fusion method described above may be referred to correspondingly to each other.
Referring to fig. 12, fig. 12 is a schematic structural diagram of an image fusion device according to the present invention. An image fusion device 1200 includes an acquisition module 1210, an overlap processing module 1220, and a fusion module 1230.
Illustratively, the acquisition module 1210 is configured to acquire an environment image, where the environment image represents the display device and the surrounding environment of the display device.
Illustratively, the overlap processing module 1220 is configured to perform image filling from at least two directions of the display device in the environment image to obtain an overlap region, where the brightness of the overlap region differs from that of the non-overlap region.
Illustratively, the fusion module 1230 is configured to adjust the brightness of the overlap region to generate a target image fused with the surrounding environment of the display device.
Illustratively, the overlap processing module 1220 is further configured to:
calculate the total number of pixels corresponding to each direction, where the total number of pixels is the number of pixels whose RGB values are below a preset value;
and determine a target direction according to the total numbers of pixels, and perform image filling in the target direction to obtain the overlap region.
Illustratively, the overlap processing module 1220 is further configured to:
configure the number of probing rectangles based on the ratio of the length to the width of the display device, to obtain a first number of probing rectangles in the up-down direction and a second number in the left-right direction, where the probing rectangles are used to detect pixels whose RGB values are below the preset value;
and accumulate the numbers of pixels detected by the probing rectangles in the four directions to obtain the total number of pixels for each direction.
Illustratively, the overlap processing module 1220 is further configured to:
if the ratio of the length to the width of the display device is 16:9, use 16 probing rectangles in the up-down direction and 9 in the left-right direction;
where each probing rectangle is 8 to 10 pixels wide, the length of the left-right probing rectangles is W/2 to 3W/4, the height of the up-down probing rectangles is H/2 to 3H/4, W is the number of pixels the display device spans horizontally in the environment image, and H is the number of pixels it spans vertically.
Illustratively, the overlap processing module 1220 is further configured to:
configure a variable for each of the up, down, left, and right directions;
accumulate, into the corresponding variable, the number of pixels with RGB values below the preset value detected by the probing rectangles of each direction, to obtain the total number of pixels for that direction;
where the variables for up, down, left, and right are top_count, bottom_count, left_count, and right_count respectively, and pixels with RGB values below the preset value represent blackish pixels.
Illustratively, the overlap processing module 1220 is further configured to:
obtain a first average value from the formula (top_count + bottom_count) / first number, weighted by a preset weight, and a second average value from the formula (left_count + right_count) / second number;
and compare the first average value with the second average value, take the direction of the smaller value as the target direction for content acquisition, and fill the acquired content over the display device in the environment image to obtain the overlap region.
Illustratively, the overlap processing module 1220 is further configured to:
compare the total numbers of pixels of the two opposite directions within the target direction, taking the direction with the smaller value as a first target direction and the direction with the larger value as a second target direction;
assuming the first target direction is left and the second target direction is right, select from the environment image an image of length W/2+A1 and width H to the left of the display device, fill it in, and flip it horizontally to obtain a first image, and select an image of length W/2+B1 and width H to the right of the display device, fill it in, and flip it horizontally to obtain a second image;
calculate the RGB values of the overlap region to obtain a third image comprising the first image and the second image, where the overlap region is the intersecting, overlapped part of the first image and the second image;
where W is the number of pixels the display device spans horizontally in the environment image, A1 and B1 are pixel lengths, A1 > B1, and W/10 < A1+B1 < W/2.
Illustratively, the RGB values of the overlap region are calculated as:
Result Color = (Top Color) * (Bottom Color) / 255;
where Result Color is the RGB value of the overlap region, Top Color is the RGB value from the A1-side (first) image, and Bottom Color is the RGB value from the B1-side (second) image.
Illustratively, the fusion module 1230 is further configured to:
acquire a first average brightness value and a second average brightness value, where the first average brightness value is the average brightness of the (W/2-B1-1)-th column of the third image and the second average brightness value is the average brightness of the (W/2+A1+1)-th column of the third image;
grade the overlap region from the first average brightness value to the second average brightness value to adjust its brightness and obtain the target image;
where the overlap region has length A1+B1 and height H, H being the number of pixels the display device spans vertically in the environment image.
Illustratively, the fusion module 1230 is further configured to:
assuming the first average brightness value is greater than the second average brightness value, calculate the decreasing step of the gradient by the formula:
ΔLight = (LA - LB) / (A1 + B1);
where ΔLight is the decreasing step, LA is the first average brightness value, and LB is the second average brightness value.
Illustratively, the RGB values of the overlap region are adjusted as:
Result Color = ((Top Color) * (Bottom Color) + ΔLight) / 255.
Illustratively, the acquisition module 1210 is further configured to:
invoke a shooting interface of the display device or of the mobile terminal, where the shooting interface is provided with a rectangular frame;
and place the image of the display device within the rectangular frame to acquire the environment image.
Illustratively, the image fusion device 1200 further includes a preprocessing module configured to:
determine the position of the display device in the environment image and save the position information;
perform high-pass filtering on the environment image to obtain a processed environment image;
where the position information includes the X and Y coordinates of the display device in the environment image, W, the number of pixels the display device spans horizontally, and H, the number of pixels it spans vertically.
Illustratively, the image fusion device 1200 further includes a display module configured to:
if the environment image is acquired through the display device, display the target image directly on the display device; or,
if the environment image is acquired through the mobile terminal, send the target image to the display device for display.
Referring to fig. 13, fig. 13 is a schematic structural diagram of an image fusion device according to an embodiment of the invention. The image fusion device runs on the mobile terminal; the generated target image is sent to the display device, received by the receiving module of the display device, and shown by its display module.
It should be noted that the image fusion device provided in the embodiment of the present invention can implement all the method steps of the method embodiments and achieve the same technical effects; details identical to the method embodiments, and their beneficial effects, are not repeated here.
Fig. 14 illustrates the physical structure of an electronic device. As shown in fig. 14, the electronic device may include: a processor 1410, a communications interface 1420, a memory 1430, and a communication bus 1440, where the processor 1410, the communications interface 1420, and the memory 1430 communicate with one another via the communication bus 1440. The processor 1410 may invoke logic instructions in the memory 1430 to perform the image fusion method, which includes:
acquiring an environment image, where the environment image represents the display device and the surrounding environment of the display device;
performing image filling from at least two directions of the display device in the environment image to obtain an overlap region, where the brightness of the overlap region is lower than that of the non-overlap region;
and adjusting the brightness of the overlap region to generate a target image fused with the surrounding environment of the display device.
In addition, the logic instructions in the memory 1430 may be implemented as software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied as a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present invention. The storage medium includes media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In another aspect, the present invention also provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, perform the image fusion method provided by the methods described above.
In yet another aspect, the present invention further provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the image fusion method provided above.
For the electronic device, the computer program product, and the computer-readable storage medium provided by the embodiments of the present invention, the stored computer program enables the processor to implement all the method steps of the method embodiments and achieve the same technical effects; details identical to the method embodiments, and their beneficial effects, are not repeated here.
The apparatus embodiments described above are merely illustrative, wherein the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (17)

1. A method of image fusion, the method comprising:
acquiring an environment image, wherein the environment image represents the display device and the surrounding environment of the display device;
performing image filling from at least two directions of the display device in the environment image to obtain an overlap region, wherein the brightness of the overlap region is lower than that of the non-overlap region;
and adjusting the brightness of the overlap region to generate a target image fused with the surrounding environment of the display device.
2. The image fusion method of claim 1, wherein the step of performing image filling from at least two directions of the display device in the environment image to obtain the overlap region comprises:
calculating the total number of pixels corresponding to each direction, wherein the total number of pixels is the number of pixels whose RGB values are below a preset value;
and determining a target direction according to the total numbers of pixels, and performing image filling in the target direction to obtain the overlap region.
3. The image fusion method according to claim 2, wherein the at least two directions comprise four directions, up, down, left, and right, and the step of calculating the total number of pixels corresponding to each direction comprises:
configuring the number of probing rectangles based on the ratio of the length to the width of the display device, to obtain a first number of probing rectangles in the up-down direction and a second number of probing rectangles in the left-right direction, wherein the probing rectangles are used to detect pixels whose RGB values are below the preset value;
and accumulating the numbers of pixels detected by the probing rectangles in the four directions to obtain the total number of pixels for each direction.
4. The image fusion method of claim 3, wherein the configuring the number of probing rectangles based on a ratio of a length and a width of the display device includes:
if the ratio of the length to the width of the display device is 16:9, 16 probing rectangles are used in the up-down direction and 9 in the left-right direction;
wherein each probing rectangle is 8 to 10 pixels wide, the length of the left-right probing rectangles is W/2 to 3W/4, the height of the up-down probing rectangles is H/2 to 3H/4, W is the number of pixels the display device spans horizontally in the environment image, and H is the number of pixels it spans vertically.
5. The image fusion method according to claim 3, wherein the step of accumulating the number of pixels of the probing rectangle in the four directions of up, down, left, right to obtain the total number of pixels in the corresponding direction includes:
configuring a variable for each of the up, down, left, and right directions;
accumulating, into the corresponding variable, the number of pixels with RGB values below the preset value detected by the probing rectangles of each direction, to obtain the total number of pixels for that direction;
wherein the variables for up, down, left, and right are top_count, bottom_count, left_count, and right_count respectively, and pixels with RGB values below the preset value represent blackish pixels.
6. The image fusion method of claim 5, wherein the steps of determining a target direction from the total numbers of pixels and performing image filling from the target direction to obtain the bottom overlapping area comprise:
calculating, according to preset weights, a first average value by the formula (top_count + bottom_count)/first number, and a second average value by the formula (left_count + right_count)/second number;
and comparing the first average value with the second average value, taking the direction with the smaller value as the target direction for content acquisition, and filling the acquired content onto the display device in the environment image to obtain the bottom overlapping area.
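A short sketch of this comparison; for the 16:9 case of claim 4, the first and second numbers are 16 and 9. That the smaller average (fewer dark pixels) marks the side to borrow fill content from follows from the claim's choice of the smaller value.

```python
def choose_target_direction(top_count, bottom_count, left_count, right_count,
                            first_number=16, second_number=9):
    """Return 'vertical' (fill from up/down) or 'horizontal' (fill from
    left/right), picking the pair with the smaller dark-pixel average."""
    first_avg = (top_count + bottom_count) / first_number
    second_avg = (left_count + right_count) / second_number
    return "vertical" if first_avg < second_avg else "horizontal"
```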
7. The image fusion method of claim 6, wherein the steps of comparing the first average value with the second average value, taking the direction with the smaller value as the target direction for content acquisition, and filling the acquired content onto the display device in the environment image to obtain the bottom overlapping area further comprise:
comparing the total numbers of pixels in the two opposite directions within the target direction, taking the direction with the smaller value as a first target direction and the direction with the larger value as a second target direction;
assuming the first target direction is left and the second target direction is right, selecting from the environment image, to the left of the display device, an image of length W/2 + A1 and width H, filling it and flipping it horizontally to obtain a first image, and selecting, to the right of the display device, an image of length W/2 + B1 and width H, filling it and flipping it horizontally to obtain a second image;
calculating RGB values of the bottom overlapping area to obtain a third image comprising the first image and the second image, the bottom overlapping area being the cross-overlapping part of the first image and the second image;
wherein W represents the number of pixels the display device occupies horizontally in the environment image, A1 and B1 each represent a length in pixels, and A1 > B1 and W/10 < A1 + B1 < W/2.
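A sketch of this fill-and-flip step for the horizontal case, with the display at (x, y, w, h) in an RGB NumPy array. The exact placement of the mirrored strips is an assumption (w is taken as even, and the device is assumed to have enough margin on both sides); the doubly covered columns form the bottom overlapping area to which the blend of claim 8 is then applied.

```python
import numpy as np

def fill_from_sides(img, x, y, w, h, a1, b1):
    """Mirror strips from left and right of the device onto its screen.
    Returns the filled image plus the two mirrored strips."""
    left_src  = img[y:y + h, x - (w // 2 + a1):x]          # strip left of the device
    right_src = img[y:y + h, x + w:x + w + w // 2 + b1]    # strip right of the device
    first, second = left_src[:, ::-1], right_src[:, ::-1]  # horizontal flips
    out = img.copy()
    out[y:y + h, x:x + w // 2 + a1] = first                # fill from the left
    out[y:y + h, x + w - (w // 2 + b1):x + w] = second     # fill from the right
    # with w even, columns [w//2 - b1, w//2 + a1) relative to x are covered
    # twice: the bottom overlapping area (width A1 + B1), blended per claim 8
    return out, first, second
```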
8. The image fusion method of claim 7, wherein the RGB values of the bottom overlapping area are calculated as:
Result Color = (Top Color) * (Bottom Color) / 255;
wherein Result Color represents the RGB value of the bottom overlapping area, Top Color represents the RGB value contributed by the A1 portion (the first image), and Bottom Color represents the RGB value contributed by the B1 portion (the second image).
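This is the standard per-channel "multiply" blend, which darkens the overlap relative to either source. A sketch for uint8 RGB patches of equal shape:

```python
import numpy as np

def multiply_blend(top: np.ndarray, bottom: np.ndarray) -> np.ndarray:
    """Per-channel multiply blend: result = top * bottom / 255.
    Widened to uint16 so the product (max 255*255) cannot overflow."""
    result = top.astype(np.uint16) * bottom.astype(np.uint16) // 255
    return result.astype(np.uint8)
```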
9. The image fusion method of claim 8, wherein the step of performing brightness adjustment on the bottom overlapping area to generate a target image fused with the surrounding environment of the display device comprises:
acquiring a first average brightness value and a second average brightness value, wherein the first average brightness value is the average brightness of the (W/2 - B1 - 1)-th column of the third image and the second average brightness value is the average brightness of the (W/2 + A1)-th column of the third image;
fading the bottom overlapping area from the first average brightness value to the second average brightness value to adjust its brightness and obtain the target image;
wherein the bottom overlapping area is A1 + B1 pixels long and H pixels high, H being the number of pixels the display device occupies vertically in the environment image.
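A sketch of the two reference values. The claim does not define "brightness", so Rec. 601 luma weights are assumed here, and the column indices are taken relative to the left edge of the filled display area.

```python
import numpy as np

def column_luminance(img, col):
    """Mean Rec. 601 luma of one column of an RGB image."""
    r, g, b = img[:, col, 0], img[:, col, 1], img[:, col, 2]
    return float(np.mean(0.299 * r + 0.587 * g + 0.114 * b))

# la = column_luminance(third_image, w // 2 - b1 - 1)  # column just left of the overlap
# lb = column_luminance(third_image, w // 2 + a1)      # column just right of the overlap
```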
10. The image fusion method of claim 9, wherein the step of fading the bottom overlapping area from the first average brightness value to the second average brightness value to adjust its brightness and obtain the target image comprises:
assuming the first average brightness value is greater than the second average brightness value, calculating the decrement of the gradient by the following formula:
ΔLight = (LA - LB)/(A1 + B1);
wherein ΔLight represents the gradient decrement, LA represents the first average brightness value, and LB represents the second average brightness value.
11. The image fusion method of claim 10, wherein the adjusted RGB values of the bottom overlapping area are calculated as:
Result Color = ((Top Color) * (Bottom Color) + ΔLight)/255.
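A sketch combining claims 10 and 11. The claims fix only the endpoint brightness values and the step ΔLight; how the offset accumulates across the A1 + B1 columns is not spelled out, so this assumes a linear ramp that starts near LA - LB on the LA side and fades toward ΔLight on the LB side.

```python
import numpy as np

def blend_with_gradient(top, bottom, la, lb, a1, b1):
    """Multiply blend of the overlap with a linearly fading brightness
    offset, per claims 10-11 (ramp shape assumed)."""
    width = a1 + b1
    delta = (la - lb) / width                  # claim 10: ΔLight
    t, btm = top.astype(np.float32), bottom.astype(np.float32)
    out = np.empty_like(top)
    for i in range(width):                     # LA side -> LB side
        light = (width - i) * delta            # fades from (LA - LB) down to ΔLight
        col = (t[:, i] * btm[:, i] + light) / 255.0   # claim 11 per column
        out[:, i] = np.clip(col, 0, 255).astype(np.uint8)
    return out
```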
12. The image fusion method of claim 1, wherein the step of acquiring the environment image comprises:
invoking a shooting interface of the display device or of a mobile terminal, wherein the shooting interface is provided with a rectangular frame;
and framing the display device within the rectangular frame to acquire the environment image.
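A hedged sketch of such a shooting interface using OpenCV: a live camera preview with a centred rectangular guide frame into which the user fits the display device. The 16:9 proportions and 60% width of the guide are illustrative choices, not claimed values.

```python
import cv2

cap = cv2.VideoCapture(0)                      # default camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    gw = int(w * 0.6)                          # guide frame: 60% of preview width
    gh = gw * 9 // 16                          # 16:9, matching a typical display
    x0, y0 = (w - gw) // 2, (h - gh) // 2
    cv2.rectangle(frame, (x0, y0), (x0 + gw, y0 + gh), (0, 255, 0), 2)
    cv2.imshow("Fit the display inside the frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):      # press q to capture/quit
        break
cap.release()
cv2.destroyAllWindows()
```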
13. The image fusion method of claim 1, wherein, before the step of image filling from at least two directions of the display device in the environment image to obtain the bottom overlapping area, the method further comprises:
determining the position of the display device in the environment image and saving the position information;
performing high-pass filtering on the environment image to obtain a processed environment image;
wherein the position information comprises the X and Y coordinates of the display device in the environment image, the number of pixels W it occupies horizontally, and the number of pixels H it occupies vertically.
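The claim does not name a filter kernel. One common realisation, assumed here, subtracts a Gaussian-blurred copy to isolate high-frequency detail and adds part of that residue back (high-boost filtering), which keeps the image usable for the later fill step.

```python
import cv2
import numpy as np

def high_pass(img: np.ndarray, ksize: int = 21, boost: float = 0.5) -> np.ndarray:
    """High-boost filtering: img + boost * (img - lowpass(img))."""
    low = cv2.GaussianBlur(img, (ksize, ksize), 0)
    return cv2.addWeighted(img, 1.0 + boost, low, -boost, 0)
```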
14. The image fusion method of claim 1, further comprising, after the step of performing brightness adjustment on the bottom overlapping area to generate a target image fused with the surrounding environment of the display device:
if the environment image was acquired through the display device, displaying the target image directly on the display device; or
if the environment image was acquired through a mobile terminal, sending the target image to the display device for display on the display device.
15. An image fusion apparatus, the apparatus comprising:
an acquisition module, configured to acquire an environment image, wherein the environment image is an image representing a display device and the surrounding environment of the display device;
a bottom overlapping processing module, configured to perform image filling from at least two directions of the display device in the environment image to obtain a bottom overlapping area, wherein the brightness of the bottom overlapping area differs from that of the non-bottom-overlapping area;
and a fusion module, configured to perform brightness adjustment on the bottom overlapping area to generate a target image fused with the surrounding environment of the display device.
16. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the image fusion method according to any one of claims 1 to 14 when executing the program.
17. A non-transitory computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the image fusion method according to any one of claims 1 to 14.
CN202211160692.7A 2022-09-22 2022-09-22 Image fusion method and device, electronic equipment and storage medium Pending CN117152031A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211160692.7A CN117152031A (en) 2022-09-22 2022-09-22 Image fusion method and device, electronic equipment and storage medium
PCT/CN2022/129361 WO2024060360A1 (en) 2022-09-22 2022-11-02 Image fusion method and apparatus, and electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211160692.7A CN117152031A (en) 2022-09-22 2022-09-22 Image fusion method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117152031A (en) 2023-12-01

Family

ID=88906844

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211160692.7A Pending CN117152031A (en) 2022-09-22 2022-09-22 Image fusion method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN117152031A (en)
WO (1) WO2024060360A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6074254B2 (en) * 2012-12-18 2017-02-01 キヤノン株式会社 Image processing apparatus and control method thereof
CN107071169A (en) * 2017-03-31 2017-08-18 努比亚技术有限公司 The processing unit and method of screen wallpaper
CN112712485B (en) * 2019-10-24 2024-06-04 杭州海康威视数字技术股份有限公司 Image fusion method and device
CN113645494B (en) * 2021-08-10 2023-09-15 海信视像科技股份有限公司 Screen fusion method, display device, terminal device and server

Also Published As

Publication number Publication date
WO2024060360A1 (en) 2024-03-28

Similar Documents

Publication Publication Date Title
JP4253655B2 (en) Color interpolation method for digital camera
US8538147B2 (en) Methods and appartuses for restoring color and enhancing electronic images
US8208011B2 (en) Stereoscopic display apparatus
CN105850114A (en) Method for inverse tone mapping of an image
CN105205354B (en) Data generating device and data creation method
US8681880B2 (en) Adaptive dithering during image processing
US11194536B2 (en) Image processing method and apparatus for displaying an image between two display screens
KR101384166B1 (en) Apparatus, display device and method thereof for processing image data for display by a display panel
JP2005526300A (en) A general-purpose image enhancement algorithm that enhances visual recognition of details in digital images
CN112017222A (en) Video panorama stitching and three-dimensional fusion method and device
CN112351195B (en) Image processing method, device and electronic system
WO2012015020A1 (en) Method and device for image enhancement
CN113596573B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN106815827A (en) Image interfusion method and image fusion device based on Bayer format
CN105791793A (en) Image processing method and electronic device
CN105654424B (en) Adjustment ratio display methods, display system, display device and the terminal of image
CN110933313B (en) Dark light photographing method and related equipment
US20060159340A1 (en) Digital image photographing apparatus and method
US20090046942A1 (en) Image Display Apparatus and Method, and Program
CN108156397A (en) A kind of method and apparatus for handling monitored picture
CN117152031A (en) Image fusion method and device, electronic equipment and storage medium
CN111462158A (en) Image processing method and device, intelligent device and storage medium
CN113596425B (en) Image processing method and device for ink screen terminal, storage medium and intelligent device
CN111582268B (en) License plate image processing method and device and computer storage medium
CN110941413B (en) Display screen generation method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination