CN111383166B - Method and device for processing image to be displayed, electronic equipment and readable storage medium - Google Patents
- Publication number: CN111383166B (application CN201811641110.0A)
- Authority: CN (China)
- Legal status: Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
Abstract
The disclosure relates to a method and a device for processing an image to be displayed, an electronic device, and a readable storage medium. In the method, the image to be displayed is shown on a display screen comprising a main display area and a secondary display area, and the method comprises: determining, according to the main display area and the secondary display area, a region to be processed in the image to be displayed and a first image within that region; performing image blurring on the first image to obtain a second image; and filling the second image into the region to be processed to obtain the processed image to be displayed. Because the first image is blurred, the display difference between the main and secondary display areas while the processed image is shown is reduced, the sense of visual discontinuity between the two areas is alleviated or avoided, and the display effect is improved.
Description
Technical Field
The disclosure relates to the field of image processing, and in particular to a method and device for processing an image to be displayed, an electronic device, and a readable storage medium.
Background
At present, the display screen of some electronic devices can be divided into a main display area and a secondary display area, with a camera placed below the secondary display area so that the camera can capture images through that area. In this scenario, the secondary display area must both display content and have relatively high light transmittance.
However, meeting the transmittance and display requirements of the secondary display area calls for special pixel designs in that area. Because the pixels differ, the main and secondary display areas render differently during display, the screen appears visually discontinuous, and the viewing experience suffers.
Disclosure of Invention
The present disclosure provides a method and apparatus for processing an image to be displayed, an electronic device, and a readable storage medium, to solve the deficiencies of the related art.
According to a first aspect of embodiments of the present disclosure, there is provided a method of processing an image to be displayed, the image to be displayed being displayed by a display screen including a main display area and a sub display area, the method comprising:
determining a region to be processed in an image to be displayed and a first image in the region to be processed according to the main display region and the auxiliary display region;
performing image blurring processing on the first image to obtain a second image;
and filling the second image into the region to be processed to obtain a processed image to be displayed.
Optionally, the blurring degree of the second image varies continuously along a set direction.
Optionally, determining the area to be processed in the image to be displayed according to the main display area and the auxiliary display area includes:
acquiring a preset shape of the region to be processed;
and adjusting the preset shape according to the position and shape of the auxiliary display area until the auxiliary display area lies within the preset shape, thereby obtaining the region to be processed.
Optionally, determining the area to be processed in the image to be displayed according to the main display area and the auxiliary display area includes:
identifying the image to be displayed to obtain a plurality of target objects contained in the image to be displayed;
and determining at least one target object which is wholly or partially positioned in the auxiliary display area, and taking an area containing the auxiliary display area and the at least one target object as a to-be-processed area of the to-be-displayed image.
Optionally, determining the area to be processed in the image to be displayed according to the main display area and the auxiliary display area includes:
acquiring a first pixel density of the main display area and a second pixel density of the auxiliary display area;
determining, according to a preset change step, the number of changes needed to go from the second pixel density to the first pixel density;
and determining the region to be processed based on the change step, the number of changes, and the edge of the auxiliary display area.
Optionally, the shape of the region to be processed includes at least one of: semicircular, rectangular, triangular, trapezoidal, widow's-peak shaped, and circular.
Optionally, performing image blurring processing on the first image to obtain a second image includes:
sequentially acquiring data values of a first number of pixels in the first image;
and obtaining an average value of the data values of the first number of pixels, and taking the average value as a new data value of the first number of pixels to obtain a second image.
Optionally, the method further comprises:
if each pixel in the secondary display area corresponds to a plurality of pixels across a plurality of images to be displayed, continuing to acquire the average of the data values of those pixels and using that average as the data value of the pixel.
Optionally, performing image blurring processing on the first image to obtain a second image includes:
acquiring the colors of the first pixel and the last pixel of each row of pixels in the first image, and determining the weights of the sub-pixels in each pixel;
and adjusting the data value of each row of pixels based on the weights of the sub-pixels in each pixel to obtain a second image.
Optionally, performing image blurring processing on the first image to obtain a second image includes:
dividing the first image according to the shape of the first image to obtain a plurality of strip-shaped areas;
and performing blurring of different degrees on each of the plurality of strip-shaped areas based on a preset processing strategy to obtain a second image.
According to a second aspect of embodiments of the present disclosure, there is provided a method of processing an image to be displayed, the image to be displayed being displayed by a display screen including a main display area and a sub display area, the method comprising:
acquiring a second image of the previous frame and determining a region to be processed;
and filling the second image of the previous frame into a to-be-processed area of the to-be-displayed image of the current frame to obtain the processed to-be-displayed image.
According to a third aspect of embodiments of the present disclosure, there is provided an apparatus for processing an image to be displayed, the image to be displayed being displayed by a display screen including a main display area and a sub display area, the apparatus comprising:
the first image acquisition module is used for determining a to-be-processed area in the to-be-displayed image and a first image in the to-be-processed area according to the main display area and the auxiliary display area;
the second image acquisition module is used for carrying out image blurring processing on the first image to obtain a second image;
and the second image filling module is used for filling the region to be processed with the second image to obtain the processed image to be displayed.
Optionally, the blurring degree of the second image is continuously changed according to a set direction.
Optionally, the first image acquisition module includes:
a preset shape acquisition unit, configured to acquire a preset shape of the region to be processed;
and the preset shape adjusting unit is used for adjusting the preset shape according to the position and the shape of the auxiliary display area until the auxiliary display area is positioned in the preset shape, so as to obtain the area to be processed.
Optionally, the first image acquisition module includes:
the target object acquisition unit is used for identifying the image to be displayed and obtaining a plurality of target objects contained in the image to be displayed;
and the to-be-processed area acquisition unit is used for determining at least one target object which is wholly or partially positioned in the auxiliary display area, and taking an area containing the auxiliary display area and the at least one target object as the to-be-processed area of the to-be-displayed image.
Optionally, the first image acquisition module includes:
a pixel density acquisition unit configured to acquire a first pixel density of the main display area and a second pixel density of the sub display area;
a change frequency acquisition unit, configured to determine a change frequency from the second pixel density to the first pixel density according to a preset change step length;
and the to-be-processed area acquisition unit is used for determining the to-be-processed area based on the change step length, the change times and the edge of the auxiliary display area.
Optionally, the shape of the region to be processed includes at least one of: semicircular, rectangular, triangular, trapezoidal, widow's-peak shaped, and circular.
Optionally, the second image acquisition module includes:
a data value obtaining unit, configured to sequentially obtain data values of a first number of pixels in the first image;
an average value obtaining unit configured to obtain an average value of the data values of the first number of pixels;
and the second image acquisition unit is used for taking the average value as a new data value of the first number of pixels to obtain a second image.
Optionally, the average value obtaining unit is further configured to:
and continuously acquiring an average value of the data values of a plurality of pixels when each pixel in the auxiliary display area corresponds to the plurality of pixels in the plurality of images to be displayed, and taking the average value as the data value of each pixel.
Optionally, the second image acquisition module includes:
a sub-pixel weight obtaining unit, configured to obtain colors of a first pixel and a last pixel of each row of pixels in the first image, and determine weights of sub-pixels in each pixel;
and the second image acquisition unit is used for adjusting the data value of each row of pixels based on the weights of the sub-pixels in each pixel to obtain a second image.
Optionally, the second image acquisition module includes:
the first image dividing unit is used for dividing the first image according to the shape of the first image to obtain a plurality of strip-shaped areas;
and the second image acquisition unit is used for performing blurring of different degrees on each of the plurality of strip-shaped areas based on a preset processing strategy to obtain a second image.
According to a fourth aspect of embodiments of the present disclosure, there is provided an apparatus for processing an image to be displayed, the image to be displayed being displayed by a display screen including a main display area and a sub display area, the apparatus comprising:
the second image acquisition module is used for acquiring a second image of a previous frame and determining the region to be processed;
and the second image filling module is used for filling the to-be-processed area of the to-be-displayed image of the current frame by using the second image of the previous frame to obtain the processed to-be-displayed image.
According to a fifth aspect of embodiments of the present disclosure, there is provided an electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to execute executable instructions in the memory to implement the steps of the methods of the first and second aspects.
According to a sixth aspect of embodiments of the present disclosure, there is provided a machine-readable storage medium having stored thereon machine-executable instructions which, when executed by a processor, implement the steps of the methods of the first and second aspects.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects:
as can be seen from the above embodiments, a region to be processed in an image to be displayed, and a first image within that region, are determined according to the main display area and the secondary display area of the display screen; at least part of the region to be processed overlaps the secondary display area. Image blurring is then performed on the first image to obtain a second image, and the second image is filled into the region to be processed to obtain the processed image to be displayed. Because the first image is blurred, the display difference between the main and secondary display areas while the processed image is shown is reduced, the sense of visual discontinuity is alleviated or avoided, and the display effect is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is an application scenario diagram illustrating an exemplary embodiment;
FIG. 2 is a flowchart illustrating a method of processing an image to be displayed, according to an exemplary embodiment;
FIG. 3 is a flow chart illustrating the acquisition of a region to be processed according to an exemplary embodiment;
FIG. 4 is an effect diagram showing the shape of the secondary display area and the area to be processed according to an exemplary embodiment;
FIG. 5 is a flowchart illustrating acquiring a region to be processed according to another exemplary embodiment;
FIG. 6 is an effect diagram of acquiring a target object, according to an example embodiment;
FIG. 7 is an effect diagram showing the shapes of a secondary display area and a region to be processed according to still another exemplary embodiment;
FIG. 8 is a flowchart illustrating acquiring a region to be processed according to yet another exemplary embodiment;
FIG. 9 is an effect diagram showing the shapes of the secondary display area and the region to be processed according to an exemplary embodiment;
FIG. 10 is an effect diagram showing the shapes of a secondary display area and a region to be processed according to still another exemplary embodiment;
FIG. 11 is a flowchart illustrating acquiring a second image according to an exemplary embodiment;
FIG. 12 is a schematic diagram illustrating updating data values of pixels according to an example embodiment;
FIG. 13 is a schematic diagram showing updating data values of pixels according to another exemplary embodiment;
FIG. 14 is a flowchart illustrating acquiring a second image according to another exemplary embodiment;
FIG. 15 is a schematic diagram illustrating the acquisition of weights for sub-pixels in a pixel, according to an exemplary embodiment;
FIG. 16 is a flowchart illustrating the acquisition of a second image according to yet another exemplary embodiment;
FIG. 17 is an effect diagram illustrating the division of a second image into a plurality of bar areas according to an exemplary embodiment;
FIG. 18 is an effect diagram illustrating pre-, intermediate-, and post-processing of an image to be processed according to an exemplary embodiment;
FIG. 19 is a flowchart illustrating a method of processing an image to be displayed according to an exemplary embodiment;
FIGS. 20 to 27 are block diagrams of an apparatus for processing an image to be displayed according to an exemplary embodiment;
FIG. 28 is a block diagram of an electronic device, according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, the same numbers in different drawings refer to the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatus and methods consistent with some aspects of the disclosure as recited in the appended claims.
Embodiments of the present disclosure provide a display screen, and fig. 1 is a schematic diagram of a display screen according to an exemplary embodiment. Referring to fig. 1, a display screen 10 includes a main display area 11 and a sub display area 12.
It should be noted that although the display screen 10 includes two different types of display areas, the main display area 11 and the secondary display area 12, the two areas are physically integrated: the display screen 10 is a one-piece structure, not a set of independent components.
In the embodiment of the present disclosure, the main display area 11 and the secondary display area 12 each have a display function. There may be one or more secondary display areas 12; FIG. 1 schematically shows one.
In an example, the camera 20 may be disposed below the secondary display area 12 to implement a photographing function, and may be one or more of an ordinary camera, an infrared camera, a depth camera, a structured-light camera, and a TOF camera. Placing the camera 20, which would otherwise occupy screen space, below the secondary display area 12 frees up the display screen 10 as much as possible and increases the screen-to-body ratio. If the display screen has a bezel, only the bezel still reduces the screen-to-body ratio; if it has no bezel, the ratio can reach 100%, achieving a true full screen.
In the embodiment of the present disclosure, because the camera 20 requires incident light to operate, the light transmittance of the secondary display area 12 is higher than that of the main display area 11. Optionally, the transmittance of the secondary display area 12 is greater than 30%, which meets the normal operating requirements of cameras and similar devices. In practice, suitable materials, processes, or pixel distribution patterns may be chosen according to the transmittance requirements of the devices below the secondary display area so that the area meets those requirements.
In the embodiment of the present disclosure, in combination with the camera's need for light, the working state of the secondary display area 12 may be adjusted according to the working state of the camera 20. For example, when the camera 20 needs to capture an image, the secondary display area 12 can be switched off so that light passes through it into the camera 20; because the area is not displaying, interference with the incoming light is reduced, which helps ensure the quality of the captured image. When the camera 20 is not capturing images, the secondary display area 12 can be switched to the display state to preserve the display effect of the screen.
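The coordination between the camera state and the secondary display area described above can be sketched as follows; this is a minimal illustration, and the class and function names are assumptions, not taken from the patent.

```python
class SecondaryDisplayArea:
    """Minimal model of the secondary display area's display state."""

    def __init__(self) -> None:
        self.displaying = True  # displays by default

    def set_displaying(self, on: bool) -> None:
        self.displaying = on


def on_camera_state_change(capturing: bool, area: SecondaryDisplayArea) -> None:
    # While the under-display camera captures, the secondary area is switched
    # off so it does not interfere with incoming light; otherwise it displays.
    area.set_displaying(not capturing)
```

For instance, `on_camera_state_change(True, area)` turns the area off for the duration of a capture, and the matching call with `False` restores display.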
In order to ensure both the display effect and the light transmittance of the display screen, the pixels in the secondary display area need special designs in the embodiment of the disclosure, for example enlarging their area. As a result, the pixel densities of the main and secondary display areas differ, the two areas render differently during display, the screen appears visually discontinuous, and the viewing experience suffers.
To this end, an embodiment of the present disclosure provides a method of processing an image to be displayed. FIG. 2 is a flowchart illustrating such a method according to an exemplary embodiment. Referring to FIG. 2, the method includes steps 201 to 203:
201, determining a to-be-processed area in an image to be displayed and a first image in the to-be-processed area according to the main display area and the auxiliary display area; at least a part of the region to be processed overlaps with the sub-display area.
In the embodiment of the disclosure, the processor in the electronic device may acquire an image to be displayed, where the image to be displayed may be a locally stored photo or video, or may be a photo or video just shot by the camera. Meanwhile, the processor can acquire the positions and shapes of the main display area 11 and the sub display area 12 in the display screen 10. Based on the correspondence between the image to be displayed and the display screen, the processor can acquire the image displayed in the sub display area 12 and the image displayed in the main display area 11 in the image to be displayed.
In an embodiment of the disclosure, the processor may acquire a region to be processed in the image to be displayed, and a first image in the region to be processed. It should be noted that, in the embodiment of the present disclosure, at least a part of the area to be processed overlaps with the secondary display area.
It should be noted that the number of the areas to be processed may be one or more, and the technician may set the areas according to a specific scenario, which is not limited herein. In the following embodiments, the solution of each embodiment will be described by taking one area to be treated as an example.
In this embodiment, the processor obtains the area to be processed in the image to be displayed, which may include the following ways:
In a first manner, referring to FIG. 3, the processor may acquire a preset shape of the region to be processed (corresponding to step 301). The shape may include at least one of: semicircular, rectangular, triangular, trapezoidal, widow's-peak shaped, and circular. The shape of the region to be processed can be preset by a technician according to the specific scene and is not limited here.
Then, the processor may adjust the preset shape according to the position and shape of the secondary display area until the secondary display area (the mapping area of the secondary display area of the display screen on the image to be displayed) is located in the preset shape, to obtain the area to be processed in the image to be displayed (corresponding to step 302).
It should be noted that the secondary display area 12 being located within the preset shape includes the case where the adjusted shape coincides with the edge of the secondary display area as well as the case where the adjusted shape is larger than it. In some scenarios, only part of the secondary display area may fall within the adjusted shape; the solution of the present application can still be implemented in that case.
In an example, referring to fig. 4, the preset shape of the area to be treated may be rectangular. In conjunction with the shape and location of the secondary display region 12, the processor may determine the secondary display region 12 on the image to be displayed. Then, after the processor acquires that the shape of the area to be processed is rectangular, the size of the preset shape is adjusted to enable the auxiliary display area to be located in the preset shape, so that the area to be processed 13 in the image to be displayed can be obtained.
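As a rough sketch of steps 301 and 302, the preset rectangle can be grown until the mapped secondary display area lies inside it. Rectangles are represented here as (x, y, width, height) tuples; the function name and representation are illustrative assumptions, not part of the patent.

```python
def fit_rect_to_area(preset, secondary):
    """Expand the preset rectangle so the secondary area lies inside it.

    preset, secondary: rectangles as (x, y, width, height).
    Returns the adjusted rectangle, i.e. the region to be processed.
    """
    px, py, pw, ph = preset
    sx, sy, sw, sh = secondary
    # Grow each side of the preset shape just enough to cover the
    # secondary display area.
    x0 = min(px, sx)
    y0 = min(py, sy)
    x1 = max(px + pw, sx + sw)
    y1 = max(py + ph, sy + sh)
    return (x0, y0, x1 - x0, y1 - y0)
```

Consistent with the note above, the result may coincide with the secondary area's edge or be strictly larger than it, depending on the preset shape.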
In a second manner, referring to FIG. 5, the processor may call a preset image recognition algorithm, which may be one or more of a depth-first search, breadth-first search, A* search, shortest-path, or minimum-spanning-tree algorithm; a technician may also use a neural-network algorithm from the related art, and any algorithm that can recognize target objects in the image falls within the protection scope of the present application. The processor thus identifies the image to be displayed and obtains a plurality of target objects contained in it (corresponding to step 501). The processor may then determine at least one target object located wholly or partly in the secondary display area, and take the area containing the secondary display area and that at least one target object as the region to be processed (corresponding to step 502).
In one example, referring to FIG. 6, the processor may identify three target objects in the image to be displayed, a smiling face 31 and two stars 32, using an image recognition algorithm. The processor may then determine that part of the smiling face 31 falls within the secondary display area 12, and take the area containing the secondary display area 12 and the smiling face 31 as the region to be processed 13, with the effect shown in FIG. 7.
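Under the assumption that the detected objects and the secondary display area are available as axis-aligned boxes (x0, y0, x1, y1), steps 501 and 502 might look like the following sketch; object detection itself is out of scope here, and all names are illustrative.

```python
def overlaps(a, b):
    """True if boxes a and b intersect wholly or partly."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]


def region_to_process(secondary, objects):
    # Keep every target object wholly or partly inside the secondary area,
    # then return the smallest box containing the area and those objects.
    hit = [o for o in objects if overlaps(o, secondary)]
    boxes = [secondary] + hit
    return (min(b[0] for b in boxes), min(b[1] for b in boxes),
            max(b[2] for b in boxes), max(b[3] for b in boxes))
```

As in FIG. 7, an object box overlapping the secondary area (the smiling face) extends the region, while objects entirely outside it (the stars) do not.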
In a third manner, referring to FIG. 8, the processor may obtain a first pixel density of the main display area and a second pixel density of the secondary display area (corresponding to step 801). It is understood that the first pixel density is greater than the second pixel density. The processor may then determine, according to a preset change step, the number of changes needed to go from the second pixel density to the first pixel density (corresponding to step 802). The change step may be one pixel or one color block (containing a plurality of pixels), and a technician may set it according to the specific scene; it is not limited here.
The processor may then determine the region to be processed based on the change step, the number of changes, and the edge of the secondary display area. For example, the processor may compute an adjustment width from the change step and the number of changes, and determine the region to be processed from that width and the edge of the secondary display area (corresponding to step 803).
In an example, referring to FIG. 9, let b0 be the number of pixels per unit area in the secondary display area 12, i.e. the second pixel density, and b1 the number of pixels per unit area in the main display area 11, i.e. the first pixel density. If the change step is n rows of pixels per unit area, the number of changes can be determined from the difference between b1 and b0 as (b1 - b0)/n, for example 5. The processor then derives the width of the transition area from the change step and the number of changes, and combines that width with the edge of the secondary display area to obtain the region to be processed 13, with the effect shown in FIG. 10.
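The arithmetic of the example above can be written out directly. The sample densities in the test are hypothetical, chosen only to reproduce the count of five changes; the width rule is one possible reading of step 803, not the patent's definitive formula.

```python
def change_count(b0, b1, n):
    """Number of steps from the secondary density b0 up to the main
    density b1 in increments of n (assumes n divides b1 - b0)."""
    return (b1 - b0) // n


def transition_width(step_rows, changes):
    # Assumed reading of step 803: the adjustment width grows by the
    # change step (here, rows of pixels) once per change.
    return step_rows * changes
```

With hypothetical densities b0 = 20 and b1 = 45 and a step of n = 5, this gives 5 changes and a transition 25 pixel rows wide, extending outward from the edge of the secondary display area.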
Finally, the processor may determine the image within the region to be processed as the first image.
And 202, performing image blurring processing on the first image to obtain a second image.
In the embodiment of the disclosure, the processor may perform image blurring processing on the first image to obtain the second image. The degree of blurring of the second image is continuously changed according to the set direction.
In an example, the manner in which the processor performs the image blurring processing on the first image may include:
in one manner, referring to fig. 11, the processor may sequentially obtain the data values of a first number of pixels in the first image (corresponding to step 1101). The first number may be preset, for example 9 or 4; alternatively, a sliding window may be provided, and each time the window is moved, the first number of pixels is obtained. The skilled person can select a suitable manner of determining these pixels according to the specific scene, and the corresponding scheme falls within the protection scope of the application. The processor then obtains the average of the data values of the first number of pixels and takes the average as the new data value of the first number of pixels to obtain the second image (corresponding to step 1102).
It should be noted that the processor may instead take the average value as the new data value of only one of the first number of pixels; the scheme of the present application can also be implemented in this way, and the corresponding scheme falls within the protection scope of the application.
It can be appreciated that, since the pixels in the first image are processed sequentially in this embodiment, each pixel in the first image is related to its surrounding pixels, so the effect of a continuously changing blurring degree can be achieved in the subsequent display process.
In an example, referring to the left diagram in fig. 12, the processor obtains a first number of 9, and the data values of the first number of pixels are 200, 153, 231, 80, 12, 109, 95, 70 and 85, whose average is 115. The processor takes the average 115 as the new data value of each of the first number of pixels, with the effect shown in the right diagram of fig. 12.
In another example, referring to the left diagram in fig. 13, the processor obtains a first number of 9, and the data values of the first number of pixels are 200, 153, 231, 80, 12, 109, 95, 70 and 85, whose average is 115. The processor takes the average 115 as the new data value of the middle pixel of the first number of pixels, with the effect shown in the right diagram of fig. 13.
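The two averaging variants of figs. 12 and 13 can be sketched as below, assuming a grayscale image stored as a nested list; the function names are illustrative, not the patent's terminology.

```python
# Minimal sketch of the 3x3 averaging scheme: write the block mean back to
# every pixel of the block (fig. 12) or only to its middle pixel (fig. 13).
def block_mean(block):
    """Integer mean of a flat list of data values."""
    return sum(block) // len(block)

def blur_block_all(img, r, c):
    """Replace every pixel of the 3x3 block at (r, c) with the block mean."""
    block = [img[r + dr][c + dc] for dr in range(3) for dc in range(3)]
    mean = block_mean(block)
    for dr in range(3):
        for dc in range(3):
            img[r + dr][c + dc] = mean

def blur_block_middle(img, r, c):
    """Replace only the middle pixel of the 3x3 block with the block mean."""
    block = [img[r + dr][c + dc] for dr in range(3) for dc in range(3)]
    img[r + 1][c + 1] = block_mean(block)

# Data values from the example above: their mean is 115.
img = [[200, 153, 231], [80, 12, 109], [95, 70, 85]]
blur_block_middle(img, 0, 0)
print(img[1][1])  # 115
```

Sliding either function across the first image reproduces the sequential processing described in step 1101.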
In a second manner, referring to fig. 14, the processor may obtain the colors of the first pixel and the last pixel of each row of pixels in the first image, and determine the weights of the sub-pixels in each pixel (corresponding to step 1401). The processor may then adjust the data values for each row of pixels based on the weights of the sub-pixels in each pixel to obtain a second image (corresponding to step 1402).
It should be noted that, the processing object of the method shown in fig. 14 may be an entire row of pixels in the first image. When each row of pixels in the region to be processed is divided into a plurality of segments, the processing object of the method shown in fig. 14 may also be a segment of pixels in each row of pixels. The technician can select a proper mode according to a specific scene, and the corresponding scheme falls into the protection scope of the application under the condition that the weight of each pixel can be adjusted.
It can be understood that, in this embodiment, each pixel in each row of pixels in the first image is sequentially processed, and each pixel in each row of pixels is related to the previous and subsequent pixels, so that smooth color transition of each row of pixels can be realized in the subsequent display process, and the effect of continuously changing the blurring degree is achieved.
In one example, referring to the upper diagram of fig. 15, the processor may acquire one row of pixels and determine that the colors of the first pixel P1 and the last pixel P6 of the row are red and green, respectively. Following the principle of a gradual change from red to green, the processor can determine the weight of the sub-pixels in each pixel of the row; for example, the weight of the first pixel P1 in each pixel is 5/5, 4/5, 3/5, 2/5, 1/5 and 0, respectively. Referring to the lower diagram in fig. 15 and taking the second pixel P2 as an example, the data values of its red, green and blue sub-pixels (r, g, b) are rP2 = 4/5·rP1 + 1/5·rP6, gP2 = 4/5·gP1 + 1/5·gP6, and bP2 = 4/5·bP1 + 1/5·bP6. The other pixels may be calculated in the manner shown in the lower diagram of fig. 15 and described for the second pixel P2, which is not repeated here. More generally, Pi(r, g, b) = K·P1(r, g, b) + (1-K)·Pn(r, g, b), where i denotes any pixel between the first pixel P1 and the n-th pixel Pn, and K is the weight of the i-th pixel.
Considering that the secondary display area may display each row of pixels in segments, in an example the pixels P1 to P6 shown in fig. 15 may be one segment of a certain row of pixels, which is beneficial to more accurate adjustment. In another example, the processor may set one weight for the pixels within each segment (i.e., the K value of each segment is the same), so that each segment displays the same content; that is, each of the pixels P1 to P6 in the lower diagram of fig. 15 may represent the color that one segment of pixels needs to display.
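The row-gradient formula Pi = K·P1 + (1-K)·Pn can be sketched as follows; the linear weights below are an assumption matching the 5/5 … 0 example, and any monotone weighting would fit the description equally well.

```python
# Hedged sketch of the scheme of figs. 14-15: each pixel Pi in a row (or row
# segment) is blended from the first and last pixels with weight K.
def gradient_row(first_rgb, last_rgb, n):
    """Return n pixels fading from first_rgb to last_rgb."""
    row = []
    for i in range(n):
        k = (n - 1 - i) / (n - 1)  # weight K of the first pixel P1
        row.append(tuple(round(k * f + (1 - k) * l)
                         for f, l in zip(first_rgb, last_rgb)))
    return row

# Red to green over 6 pixels, as in fig. 15 (weights 5/5, 4/5, ..., 0).
row = gradient_row((255, 0, 0), (0, 255, 0), 6)
print(row[0], row[-1])  # (255, 0, 0) (0, 255, 0)
```

Applying `gradient_row` per row (or per row segment) of the first image yields the smooth color transition described above.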
In a third manner, referring to fig. 16, the processor may acquire the shape of the first image (i.e., the shape of the region to be processed), and divide the first image according to the shape of the first image, so as to obtain a plurality of strip regions (corresponding to step 1601). Then, the processor may perform blurring processing with different degrees on each of the plurality of strip areas based on a preset processing policy to obtain a second image (corresponding to step 1602).
It can be understood that, since the present embodiment performs the blurring process with different degrees on each strip-shaped area, the effect of continuously changing the blurring degree can be achieved under the condition that the blurring coefficient is set reasonably.
In an example, taking the area to be processed shown in fig. 10 as an example, referring to fig. 17, the area to be processed may be divided by a plurality of parallel lines 40 into a plurality of strip-shaped areas 41. The processor then invokes a preset processing strategy and uses it to blur each strip-shaped area to a different degree, obtaining the second image.
It should be noted that the preset processing policy may specify which image blurring algorithm and which blurring degree are adopted for each strip-shaped area. The image blurring algorithm may be one or more of Gaussian filtering, interpolation averaging, smoothing filtering and low-pass filtering. The blurring degree may increase strip by strip along the direction of arrow 42 and/or arrow 43, so that the images in the primary and secondary display areas are better coordinated.
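The strip-wise scheme of steps 1601-1602 can be sketched as below. The 1-D box blur and the per-strip kernel growth are assumptions standing in for the patent's "preset processing policy"; any of the named algorithms could be substituted.

```python
# Sketch: split the first image into horizontal strips and blur each strip
# more heavily than the last, so the blur degree changes continuously.
def box_blur_row(row, radius):
    """1-D box blur of a list of data values with the given radius."""
    if radius == 0:
        return row[:]
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
        out.append(sum(row[lo:hi]) // (hi - lo))
    return out

def blur_strips(img, rows_per_strip=2):
    """Blur each strip of `rows_per_strip` rows with an increasing radius."""
    out = []
    for r, row in enumerate(img):
        radius = r // rows_per_strip  # deeper strips get a stronger blur
        out.append(box_blur_row(row, radius))
    return out

img = [[0, 100, 0, 100]] * 4
blurred = blur_strips(img)
print(blurred[0])  # first strip: radius 0, unchanged -> [0, 100, 0, 100]
```

Increasing `radius` toward the secondary display area realizes the coordination effect along arrows 42/43.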
Considering that the pixels of the secondary display area may correspond to a plurality of pixels in the processed image to be displayed, in an example, the processor may obtain an average value of the data values of the plurality of pixels, and take the average value as the data value of the pixels of the secondary display area. With continued reference to fig. 13, it can be understood that 9 pixels in the image to be displayed correspond to one pixel (e.g., an intermediate pixel) in the secondary display area, in which case the data value of the pixel in the secondary display area is an average of the data values of 9 pixels in the image to be displayed.
In another example, when the pixel in the secondary display area corresponds to a plurality of pixels of the processed image to be displayed, and the plurality of pixels have the same data value, the processor may directly use the data value of the plurality of pixels as the data value of the pixel in the secondary display area.
Considering that the display capability of the secondary display area, in particular its color expression capability, may be inferior to that of the primary display area, the pixels in the second image corresponding to the secondary display area need data compensation in this scenario; these pixels are called pixels to be compensated. The processor may determine the first color of a pixel to be compensated before compensation, and then determine the abscissa corresponding to the first color in the first color coordinates. It then determines the second color corresponding to that abscissa in the second color coordinates, and takes the data value corresponding to the second color as the data value of the pixel to be compensated. These steps are repeated until all pixels to be compensated in the second image are compensated.
Of course, the corresponding relation of each color in the first color coordinate and the second color coordinate can be pre-established, and the processor can directly inquire the second color according to the acquired first color, so that the corresponding scheme falls into the protection scope of the application.
The first color coordinates are colors that can be displayed in the main display area, and the second color coordinates are colors that can be displayed in the sub-display area.
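The compensation lookup can be sketched as below. The tables are purely illustrative stand-ins for real per-panel chromaticity data, and the color names are invented for the example.

```python
# Hedged sketch: map a pixel's color from the gamut of the primary display
# area (first color coordinates) to the gamut of the secondary display area
# (second color coordinates) via the shared abscissa.

# first color -> abscissa in the first color coordinate system (assumed data)
FIRST_COORD = {"vivid_red": 0.64, "vivid_green": 0.30}
# abscissa -> nearest displayable color in the second coordinates (assumed)
SECOND_COORD = {0.64: "muted_red", 0.30: "muted_green"}

def compensate(first_color):
    """Return the secondary-area color sharing the first color's abscissa."""
    x = FIRST_COORD[first_color]
    return SECOND_COORD[x]

print(compensate("vivid_red"))  # muted_red
```

As the text notes, precomputing this correspondence once and querying it per pixel avoids repeating the coordinate lookup.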
And 203, filling the region to be processed with the second image to obtain a processed image to be displayed.
In the embodiment of the disclosure, the processor may fill the second image into the area to be processed to replace the first image, so that the processed image to be displayed is obtained. The processor may then send the processed image to be displayed to the display screen for display.
In an example, the image to be displayed is the lower diagram in fig. 18. Referring to the middle diagram in fig. 18, the secondary display area is indicated by an arrow and marked with an oval, and the area to be processed determined by the processor is a rectangular area. The first image in the rectangular area is processed to obtain a second image, and the second image replaces the first image, giving the processed image to be displayed shown in the upper diagram of fig. 18.
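Step 203 itself is a simple paste. A minimal sketch, assuming nested-list images and a rectangular region as in fig. 18 (the function name and coordinates are illustrative):

```python
# Sketch of step 203: overwrite the region to be processed with the blurred
# second image, replacing the first image in place.
def fill_region(image, second_image, top, left):
    """Overwrite the region starting at (top, left) with second_image."""
    for r, row in enumerate(second_image):
        for c, value in enumerate(row):
            image[top + r][left + c] = value
    return image

img = [[0] * 4 for _ in range(4)]
patch = [[9, 9], [9, 9]]
fill_region(img, patch, top=1, left=1)
print(img[1])  # [0, 9, 9, 0]
```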
So far, in the embodiment of the present disclosure, a to-be-processed area in an image to be displayed and a first image in the to-be-processed area are determined according to a main display area and a sub display area in a display screen; at least part of the area to be processed is overlapped with the auxiliary display area; then, performing image blurring processing on the first image to obtain a second image; and filling the region to be processed with the second image to obtain a processed image to be displayed. In the embodiment, the image blurring processing is performed on the first image, so that the display difference between the main display area and the auxiliary display area in the process of displaying the processed image to be displayed can be reduced, split feeling is relieved or avoided, and the display effect is improved.
The disclosed embodiments also provide a method of processing an image to be displayed. Fig. 19 is a flowchart illustrating a method of processing an image to be displayed according to an exemplary embodiment. Referring to fig. 19, the method comprises steps 1901 and 1902, wherein:
1901, acquiring a second image of the previous frame and the determined area to be processed.
In the embodiment of the present disclosure, the processor may acquire the current frame to-be-displayed image, and if the current frame to-be-displayed image is the first frame to-be-displayed image to be processed, the scheme of the embodiment shown in fig. 2 is adopted to process the current frame to-be-displayed image. If the current frame to-be-displayed image is not the first frame to-be-displayed image to be processed, the processor can acquire the second image of the previous frame to-be-displayed image and the determined to-be-processed area.
And 1902, filling the region to be processed of the image to be displayed of the current frame with the second image of the previous frame to obtain the processed image to be displayed.
In the embodiment of the disclosure, the processor takes the area to be processed of the previous frame's image to be displayed as the area to be processed of the current frame's image to be displayed, and then fills the second image of the previous frame into that area, so that the processed image to be displayed can be obtained.
The embodiment of the disclosure has the advantages of small data calculation amount and high processing speed besides the effect of the embodiment shown in fig. 2, and is suitable for application scenes such as instant messaging software chat interfaces, backgrounds, static screen savers and the like.
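The frame-reuse idea of steps 1901-1902 can be sketched as follows. The class and its placeholder blur are assumptions for illustration: the first frame runs the full pipeline of fig. 2, and later frames only re-fill the cached result.

```python
# Sketch of steps 1901-1902: cache the second image and the region from the
# first processed frame, then reuse them for subsequent frames.
class FrameProcessor:
    def __init__(self):
        self.cached_second_image = None
        self.cached_region = None  # (top, left, height, width)

    def blur_region(self, frame, region):
        """Stand-in for the full blurring pipeline run on the first frame."""
        top, left, h, w = region
        return [[-1] * w for _ in range(h)]  # placeholder blurred patch

    def process(self, frame, region=None):
        if self.cached_second_image is None:  # first frame: full pipeline
            self.cached_region = region
            self.cached_second_image = self.blur_region(frame, region)
        top, left, h, w = self.cached_region
        for r in range(h):                    # later frames: refill only
            for c in range(w):
                frame[top + r][left + c] = self.cached_second_image[r][c]
        return frame

p = FrameProcessor()
out = p.process([[0] * 3 for _ in range(3)], region=(0, 0, 2, 2))
print(out[0])  # [-1, -1, 0]
```

Because only the fill loop runs per frame, the per-frame cost stays small, which suits the near-static scenes (chat interfaces, backgrounds, screen savers) mentioned above.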
On the basis of the method for processing the image to be displayed, the embodiment of the disclosure also provides an apparatus for processing the image to be displayed, and fig. 20 is a block diagram of an apparatus for processing the image to be displayed according to an exemplary embodiment. Referring to fig. 20, an apparatus for processing an image to be displayed, comprising:
a first image acquisition module 2001 for determining a region to be processed in an image to be displayed and a first image in the region to be processed according to the main display area and the sub display area;
a second image obtaining module 2002, configured to perform image blurring processing on the first image to obtain a second image;
and a second image filling module 2003, configured to fill the region to be processed with the second image, so as to obtain a processed image to be displayed.
By performing image blurring processing on the first image, this embodiment can reduce the display difference between the main display area and the auxiliary display area in the process of displaying the processed image to be displayed, relieve or avoid split feeling, and help improve the display effect.
In the embodiment of the disclosure, the blurring degree of the second image is continuously changed according to a set direction.
On the basis of the embodiment shown in fig. 20, the embodiment of the present disclosure further provides an apparatus for processing an image to be displayed, referring to fig. 21, the first image obtaining module 2001 includes:
A preset shape acquisition unit 2101 for acquiring a preset shape of the region to be processed;
a preset shape adjusting unit 2102, configured to adjust the preset shape according to the position and shape of the secondary display area until the secondary display area is located in the preset shape, so as to obtain the area to be processed.
On the basis of the embodiment shown in fig. 20, the embodiment of the present disclosure further provides an apparatus for processing an image to be displayed, referring to fig. 22, the first image obtaining module 2001 includes:
a target object obtaining unit 2201, configured to identify the image to be displayed, and obtain a plurality of target objects included in the image to be displayed;
a to-be-processed area acquiring unit 2202, configured to determine at least one target object that is located in whole or in part in the secondary display area, and take an area including the secondary display area and the at least one target object as a to-be-processed area of the to-be-displayed image.
On the basis of the embodiment shown in fig. 20, the embodiment of the present disclosure further provides an apparatus for processing an image to be displayed, referring to fig. 23, the first image obtaining module 2001 includes:
a pixel density acquisition unit 2301 for acquiring a first pixel density of the main display region and a second pixel density of the sub display region;
a change number acquisition unit 2302, configured to determine the number of changes from the second pixel density to the first pixel density according to a preset change step;
a to-be-processed area acquiring unit 2303 configured to determine an area to be processed based on the change step length, the change number and the edge of the sub display area.
In an embodiment of the disclosure, the shape of the region to be treated includes at least one of: semicircular, rectangular, triangular, trapezoidal, cosmetic pointed and circular.
On the basis of the embodiment shown in fig. 20, the embodiment of the disclosure further provides an apparatus for processing an image to be displayed, referring to fig. 24, the second image obtaining module 2002 includes:
a data value acquiring unit 2401, configured to sequentially acquire data values of a first number of pixels in the first image;
an average value acquisition unit 2402 for acquiring an average value of the data values of the first number of pixels;
the second image obtaining unit 2403 is configured to obtain a second image by using the average value as a new data value of the first number of pixels.
In the embodiment of the present disclosure, the average value obtaining unit 2402 is further configured to:
and, when each pixel in the secondary display area corresponds to a plurality of pixels in the processed image to be displayed, continuing to acquire an average value of the data values of the plurality of pixels and taking the average value as the data value of that pixel.
On the basis of the embodiment shown in fig. 20, the embodiment of the disclosure further provides an apparatus for processing an image to be displayed, referring to fig. 25, the second image obtaining module 2002 includes:
a sub-pixel weight acquiring unit 2501 configured to acquire colors of a first pixel and a last pixel of each row of pixels in the first image, and determine weights of sub-pixels in each pixel;
a second image acquisition unit 2502 for adjusting the data values of the pixels in each row based on the weights of the sub-pixels in each pixel to obtain a second image.
On the basis of the embodiment shown in fig. 20, the embodiment of the disclosure further provides an apparatus for processing an image to be displayed, referring to fig. 26, the second image obtaining module 2002 includes:
a first image dividing unit 2601, configured to divide the first image according to a shape of the first image, so as to obtain a plurality of strip areas;
and a second image obtaining unit 2602, configured to perform blur processing on each of the plurality of strip areas to different extents based on a preset processing policy, to obtain a second image.
On the basis of the method for processing the image to be displayed, the embodiment of the disclosure also provides an apparatus for processing the image to be displayed, and fig. 27 is a block diagram of an apparatus for processing the image to be displayed according to an exemplary embodiment. Referring to fig. 27, an apparatus for processing an image to be displayed, comprising:
A second image acquisition module 2701, configured to acquire a second image of a previous frame and the determined area to be processed;
and the second image filling module 2702 is configured to fill the to-be-processed area of the to-be-displayed image of the current frame with the second image of the previous frame, so as to obtain the processed to-be-displayed image.
The embodiment of the disclosure has the advantages of small data calculation amount and high processing speed besides the effect of the embodiment shown in fig. 2, and is suitable for application scenes such as instant messaging software chat interfaces, backgrounds, static screen savers and the like.
It can be understood that the apparatus for processing an image to be displayed according to the embodiments of the present invention corresponds to the above method for processing an image to be displayed, and specific content may refer to content of each embodiment of the method, which is not described herein again.
Fig. 28 is a block diagram of an electronic device 2800, shown in accordance with an exemplary embodiment. For example, the electronic device 2800 may be an electronic device such as a cell phone, tablet computer, electronic book reader, multimedia playback device, wearable device, vehicle-mounted terminal, and the like.
Referring to fig. 28, electronic device 2800 can include one or more of the following components: a processing component 2802, a memory 2804, a power component 2806, a multimedia component 2808, an audio component 2810, an input/output (I/O) interface 2812, a sensor component 2814, and a communication component 2816.
The processing component 2802 generally controls overall operation of the electronic device 2800, such as operations associated with display, phone calls, data communications, camera operations, and recording operations. Processing component 2802 may include one or more processors 2820 to execute instructions to perform all or part of the steps of the methods described above. Further, processing component 2802 may include one or more modules to facilitate interactions between processing component 2802 and other components. For example, processing component 2802 may include a multimedia module to facilitate interaction between multimedia component 2808 and processing component 2802. As another example, processing component 2802 may read executable instructions from a memory to implement the steps of one method of processing an image to be displayed provided by the embodiments described above.
The memory 2804 is configured to store various types of data to support operations at the electronic device 2800. Examples of such data include instructions for any application or method operating on the electronic device 2800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 2804 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The power supply assembly 2806 provides power to the various components of the electronic device 2800. Power supply components 2806 can include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for electronic device 2800.
The multimedia component 2808 includes a display, such as the display shown in fig. 1, between the electronic device 2800 and a user that provides an output interface. In some embodiments, multimedia assembly 2808 includes a front camera and/or a rear camera. When the electronic device 2800 is in an operational mode, such as a capture mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 2810 is configured to output and/or input audio signals. For example, audio component 2810 includes a Microphone (MIC) configured to receive external audio signals when electronic device 2800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in memory 2804 or transmitted via communication component 2816. In some embodiments, audio component 2810 further comprises a speaker for outputting audio signals.
I/O interface 2812 provides an interface between processing component 2802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 2814 includes one or more sensors for providing status assessments of various aspects of the electronic device 2800. For example, the sensor assembly 2814 may detect the on/off state of the electronic device 2800 and the relative positioning of components such as its display and keypad; it may also detect a change in position of the electronic device 2800 or one of its components, the presence or absence of user contact with the electronic device 2800, the orientation or acceleration/deceleration of the electronic device 2800, and a change in its temperature. The sensor assembly 2814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 2814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 2814 can also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 2816 is configured to facilitate wired or wireless communication between the electronic device 2800 and other devices. The electronic device 2800 may access a wireless network based on a communication standard, such as Wi-Fi, 2G, 3G, 4G, or 5G, or a combination thereof. In one exemplary embodiment, the communication component 2816 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 2816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 2800 can be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory machine-readable storage medium is also provided, such as a memory 2804, comprising instructions executable by the processor 2820 of the electronic device 2800 to perform the image processing methods described above. For example, the non-transitory computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (22)
1. A method of processing an image to be displayed, the image to be displayed being displayed by a display screen comprising a primary display area and a secondary display area, the method comprising:
determining a region to be processed in an image to be displayed and a first image in the region to be processed according to the main display region and the auxiliary display region;
at least part of the area to be processed is overlapped with the auxiliary display area;
Performing image blurring processing on the first image to obtain a second image;
filling the region to be processed with the second image to obtain a processed image to be displayed;
the determining a to-be-processed area in the to-be-displayed image according to the main display area and the auxiliary display area comprises the following steps:
acquiring a preset shape of the region to be treated;
and adjusting the preset shape according to the position and the shape of the auxiliary display area until the auxiliary display area is positioned in the preset shape, so as to obtain a to-be-processed area in the to-be-displayed image.
2. The method of claim 1, wherein the degree of blurring of the second image is continuously varied in accordance with a set direction.
3. The method of claim 1, wherein determining a region to be processed in an image to be displayed from the primary display area and the secondary display area comprises:
identifying the image to be displayed to obtain a plurality of target objects contained in the image to be displayed;
and determining at least one target object which is wholly or partially positioned in the auxiliary display area, and taking an area containing the auxiliary display area and the at least one target object as a to-be-processed area of the to-be-displayed image.
4. The method of claim 2, wherein determining a region to be processed in an image to be displayed from the primary display area and the secondary display area comprises:
acquiring a first pixel density of the main display area and a second pixel density of the auxiliary display area;
determining the number of times of change from the second pixel density to the first pixel density according to a preset change step length;
and determining a region to be processed based on the change step length, the change times and the edge of the auxiliary display region.
5. The method according to any one of claims 1 to 4, wherein the shape of the area to be treated comprises at least one of: semicircular, rectangular, triangular, trapezoidal, cosmetic pointed and circular.
6. The method of claim 2, wherein performing image blurring processing on the first image to obtain a second image comprises:
sequentially acquiring data values of a first number of pixels in the first image;
and obtaining an average value of the data values of the first number of pixels, and taking the average value as a new data value of the first number of pixels to obtain a second image.
7. The method of claim 6, wherein the method further comprises:
If each pixel in the secondary display area corresponds to a plurality of pixels in a plurality of images to be displayed, continuously acquiring an average value of the data values of the plurality of pixels, and taking the average value as the data value of each pixel.
8. The method of claim 2, wherein performing image blurring processing on the first image to obtain a second image comprises:
acquiring the colors of a first pixel and a last pixel of each row of pixels in the first image, and determining the weights of sub-pixels in each pixel;
and adjusting the data value of each row of pixels based on the weights of the sub-pixels in each pixel to obtain a second image.
9. The method of claim 2, wherein performing image blurring processing on the first image to obtain a second image comprises:
dividing the first image according to the shape of the first image to obtain a plurality of strip-shaped areas;
and carrying out fuzzy processing of different degrees on each strip-shaped area in the plurality of strip-shaped areas based on a preset processing strategy to obtain a second image.
10. A method of processing an image to be displayed, the image to be displayed being displayed by a display screen comprising a primary display area and a secondary display area, the method comprising:
acquiring a second image in a previous frame of the image to be displayed relative to a current frame of the image to be displayed, and an area to be processed in the previous frame of the image to be displayed; wherein the second image in the previous frame of the image to be displayed is located in the area to be processed in the previous frame of the image to be displayed; the second image in the previous frame of the image to be displayed is obtained by performing image blurring processing on a first image in the area to be processed in the previous frame of the image to be displayed; and the area to be processed in the previous frame of the image to be displayed is obtained by adjusting a preset shape of the area to be processed according to the position and the shape of the secondary display area until the secondary display area is located within the preset shape;
and filling a second image in the previous frame of the image to be displayed into the area to be processed in the current frame of the image to be displayed to obtain a processed current frame of the image to be displayed.
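The reuse step of claim 10 amounts to caching: the blurred patch computed for the previous frame is copied into the same region of the current frame instead of being recomputed. This sketch assumes frames are mutable row-major lists and the region is an (x0, y0, x1, y1) box; both representations are illustrative.

```python
def fill_cached_blur(current_frame, region, cached_patch):
    """Copy the previous frame's blurred patch into the same region of
    the current frame, skipping a fresh blur pass."""
    x0, y0, x1, y1 = region
    for y in range(y0, y1):
        for x in range(x0, x1):
            current_frame[y][x] = cached_patch[y - y0][x - x0]
    return current_frame

frame = [[0] * 3 for _ in range(3)]
print(fill_cached_blur(frame, (0, 0, 2, 2), [[9, 9], [9, 9]]))
# -> [[9, 9, 0], [9, 9, 0], [0, 0, 0]]
```

Skipping the per-frame blur is what makes this variant cheaper than claim 1 when consecutive frames share the same region around the secondary display area.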
11. An apparatus for processing an image to be displayed, wherein the image to be displayed is displayed by a display screen comprising a primary display area and a secondary display area, the apparatus comprising:
a first image acquisition module, configured to determine an area to be processed in a current frame of the image to be displayed and a first image in the area to be processed according to the primary display area and the secondary display area;
a second image acquisition module, configured to perform image blurring processing on the first image to obtain a second image; and
a second image filling module, configured to fill the area to be processed with the second image to obtain a processed image to be displayed;
wherein the first image acquisition module comprises:
a preset shape acquisition unit, configured to acquire a preset shape of the area to be processed; and
a preset shape adjustment unit, configured to adjust the preset shape according to the position and the shape of the secondary display area until the secondary display area is located within the preset shape, so as to obtain the area to be processed.
12. The apparatus of claim 11, wherein the degree of blurring of the second image varies continuously along a set direction.
13. The apparatus of claim 11, wherein the first image acquisition module comprises:
a target object acquisition unit, configured to recognize the image to be displayed to obtain a plurality of target objects contained in the image to be displayed; and
an area-to-be-processed acquisition unit, configured to determine at least one target object wholly or partially located in the secondary display area, and to take an area containing the secondary display area and the at least one target object as the area to be processed of the image to be displayed.
14. The apparatus of claim 12, wherein the second image filling module comprises:
a pixel density acquisition unit, configured to acquire a first pixel density of the primary display area and a second pixel density of the secondary display area;
a change number acquisition unit, configured to determine the number of changes from the second pixel density to the first pixel density according to a preset change step size; and
an area-to-be-processed acquisition unit, configured to determine the area to be processed based on the change step size, the number of changes, and the edge of the secondary display area.
15. The apparatus according to any one of claims 11 to 14, wherein the shape of the area to be processed comprises at least one of: semicircular, rectangular, triangular, trapezoidal, widow's-peak-shaped, and circular.
16. The apparatus of claim 12, wherein the second image acquisition module comprises:
a data value acquisition unit, configured to sequentially acquire data values of a first number of pixels in the first image;
an average value acquisition unit, configured to obtain an average value of the data values of the first number of pixels; and
a second image acquisition unit, configured to take the average value as a new data value of the first number of pixels to obtain a second image.
17. The apparatus of claim 16, wherein the average value acquisition unit is further configured to:
continuously acquire an average value of the data values of a plurality of pixels when each pixel in the secondary display area corresponds to the plurality of pixels in a plurality of images to be displayed, and take the average value as the data value of each pixel.
18. The apparatus of claim 12, wherein the second image acquisition module comprises:
a sub-pixel weight acquisition unit, configured to acquire the colors of a first pixel and a last pixel of each row of pixels in the first image, and to determine weights of sub-pixels in each pixel; and
a second image acquisition unit, configured to adjust the data value of each row of pixels based on the weights of the sub-pixels in each pixel to obtain a second image.
19. The apparatus of claim 12, wherein the second image acquisition module comprises:
a first image division unit, configured to divide the first image according to the shape of the first image to obtain a plurality of strip-shaped areas; and
a second image acquisition unit, configured to perform blurring processing of different degrees on each of the plurality of strip-shaped areas based on a preset processing strategy to obtain a second image.
20. An apparatus for processing an image to be displayed, wherein the image to be displayed is displayed by a display screen comprising a primary display area and a secondary display area, the apparatus comprising:
a second image acquisition module, configured to acquire a second image in a previous frame of the image to be displayed relative to a current frame of the image to be displayed, and an area to be processed in the previous frame of the image to be displayed; wherein the second image in the previous frame of the image to be displayed is located in the area to be processed in the previous frame of the image to be displayed; the second image in the previous frame of the image to be displayed is obtained by performing image blurring processing on a first image in the area to be processed in the previous frame of the image to be displayed; and the area to be processed in the previous frame of the image to be displayed is obtained by adjusting a preset shape of the area to be processed according to the position and the shape of the secondary display area until the secondary display area is located within the preset shape; and
a second image filling module, configured to use the area to be processed in the previous frame of the image to be displayed as the area to be processed in the current frame of the image to be displayed, and to fill the second image in the previous frame of the image to be displayed into the area to be processed in the current frame of the image to be displayed, so as to obtain a processed current frame of the image to be displayed.
21. An electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to execute executable instructions in the memory to implement the steps of the method of any one of claims 1 to 10.
22. A machine-readable storage medium having stored thereon machine-executable instructions which, when executed by a processor, implement the steps of the method of any of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811641110.0A CN111383166B (en) | 2018-12-29 | 2018-12-29 | Method and device for processing image to be displayed, electronic equipment and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811641110.0A CN111383166B (en) | 2018-12-29 | 2018-12-29 | Method and device for processing image to be displayed, electronic equipment and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111383166A CN111383166A (en) | 2020-07-07 |
CN111383166B true CN111383166B (en) | 2023-09-26 |
Family
ID=71216159
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811641110.0A Active CN111383166B (en) | 2018-12-29 | 2018-12-29 | Method and device for processing image to be displayed, electronic equipment and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111383166B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114078101A (en) * | 2020-08-11 | 2022-02-22 | 中兴通讯股份有限公司 | Image display method, electronic device, and computer-readable storage medium |
CN113242339B (en) * | 2021-06-15 | 2023-08-29 | Oppo广东移动通信有限公司 | Display method, display device, electronic equipment and storage medium |
CN113781295B (en) * | 2021-09-14 | 2024-02-27 | 网易(杭州)网络有限公司 | Image processing method, device, equipment and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104423919A (en) * | 2013-09-10 | 2015-03-18 | 联想(北京)有限公司 | Image processing method and electronic equipment |
CN107092684A (en) * | 2017-04-21 | 2017-08-25 | 腾讯科技(深圳)有限公司 | Image processing method and device, storage medium |
CN107277419A (en) * | 2017-07-28 | 2017-10-20 | 京东方科技集团股份有限公司 | A kind of display device and its display methods |
CN107819020A (en) * | 2017-11-03 | 2018-03-20 | 武汉天马微电子有限公司 | Organic light-emitting display panel and display device |
CN108766347A (en) * | 2018-06-13 | 2018-11-06 | 京东方科技集团股份有限公司 | A kind of display panel, its display methods and display device |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107357404A (en) * | 2016-05-10 | 2017-11-17 | 联发科技(新加坡)私人有限公司 | Method for displaying image and its electronic installation |
KR102566717B1 (en) * | 2016-12-12 | 2023-08-14 | 삼성전자 주식회사 | Electronic device having a biometric sensor |
- 2018-12-29: Application CN201811641110.0A filed in China (CN), published/granted as CN111383166B; status: Active
Also Published As
Publication number | Publication date |
---|---|
CN111383166A (en) | 2020-07-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110958401B (en) | Super night scene image color correction method and device and electronic equipment | |
CN109345485B (en) | Image enhancement method and device, electronic equipment and storage medium | |
CN111383166B (en) | Method and device for processing image to be displayed, electronic equipment and readable storage medium | |
CN108734754B (en) | Image processing method and device | |
CN113554658B (en) | Image processing method, device, electronic equipment and storage medium | |
CN106484257A (en) | Camera control method, device and electronic equipment | |
CN106131441B (en) | Photographing method and device and electronic equipment | |
CN107948733B (en) | Video image processing method and device and electronic equipment | |
CN107967459B (en) | Convolution processing method, convolution processing device and storage medium | |
CN104050645B (en) | Image processing method and device | |
US20210133499A1 (en) | Method and apparatus for training image processing model, and storage medium | |
CN114096994A (en) | Image alignment method and device, electronic equipment and storage medium | |
CN105574834B (en) | Image processing method and device | |
CN117616774A (en) | Image processing method, device and storage medium | |
CN113160038A (en) | Image style migration method and device, electronic equipment and storage medium | |
CN112585939A (en) | Image processing method, control method, equipment and storage medium | |
CN106469446B (en) | Depth image segmentation method and segmentation device | |
CN107613210B (en) | Image display method and device, terminal and storage medium | |
CN105976344A (en) | Whiteboard image processing method and whiteboard image processing device | |
CN113660425A (en) | Image processing method and device, electronic equipment and readable storage medium | |
CN115239570A (en) | Image processing method, image processing apparatus, and storage medium | |
CN113822806B (en) | Image processing method, device, electronic equipment and storage medium | |
CN111835977B (en) | Image sensor, image generation method and device, electronic device, and storage medium | |
CN114390189A (en) | Image processing method, device, storage medium and mobile terminal | |
CN112651899A (en) | Image processing method and device, electronic device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||