WO2017101626A1 - Method and apparatus for implementing image processing - Google Patents

Method and apparatus for implementing image processing Download PDF

Info

Publication number
WO2017101626A1
WO2017101626A1 · PCT/CN2016/105755 · CN2016105755W
Authority
WO
WIPO (PCT)
Prior art keywords
saliency
region
image
original image
different
Prior art date
Application number
PCT/CN2016/105755
Other languages
English (en)
French (fr)
Inventor
戴向东
Original Assignee
努比亚技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 努比亚技术有限公司 filed Critical 努比亚技术有限公司
Publication of WO2017101626A1 publication Critical patent/WO2017101626A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture

Definitions

  • This document relates to, but is not limited to, image processing technology, and more particularly to a method and apparatus for implementing image processing.
  • Image saliency is an important visual feature of an image, reflecting the degree to which the human eye attaches importance to parts of the image.
  • For a given image, the user is typically interested in only some areas; those areas of interest represent the user's query intent, while the other areas are irrelevant to it.
  • Figure 1(a) is a captured original image; as shown in Figure 1(a), the subject area of the image stands out prominently within the visual field.
  • Figure 1(b) is the saliency image of the captured image; as shown in Figure 1(b), the higher the brightness of a pixel in the saliency image, the higher the degree of saliency and the more the corresponding region of the original image attracts the user's visual interest. The highly salient portions of the image are the regions of interest to the user.
  • When capturing an image, the user generally focuses on the subject area of interest; the subject area usually becomes the salient region of the captured image, and the weight of photo-quality evaluation is mainly placed on the subject area of the image.
  • When the captured image as a whole suffers from defocus blur, incorrect exposure, light occlusion, poor saturation or weak contrast, and the image processing algorithm adjusts and processes the image globally, the same processing is applied to both the subject area and the background area, which weakens the saliency of the subject area; the display effect of the subject area cannot be improved.
  • At present, the subject area and the background area are separated according to differences in color and brightness, and image processing is then performed on the segmented subject area and background area separately.
  • When the subject and background areas obtained by such color/brightness segmentation are processed, each segmented area can only be handled with a single uniform image processing method. For example, when the captured image is a person against a background and is divided by color and brightness into a person region and a background region, the person image is processed with one uniform method as the subject area and the background image with another uniform method as the background area.
  • Local features within the person image (for example, the glasses or cheeks the shot is focused on) are not distinguished, so the uniform processing does not optimize them in terms of saliency, and the display effect is not improved.
  • Embodiments of the present invention provide a method and apparatus for implementing image processing, which can improve the display effect of the subject area.
  • An embodiment of the present invention provides an apparatus for implementing image processing, including: an analyzing unit, a determining unit, and a processing unit;
  • an analyzing unit, configured to perform saliency analysis on an original image;
  • a determining unit, configured to divide, according to the saliency analysis result, the original image into saliency regions with at least two different saliency effects;
  • a processing unit, configured to process each saliency region with a corresponding image processing method.
  • the apparatus further includes a segmentation unit, configured to segment each saliency region with a different saliency effect before each saliency region is processed with its corresponding image processing method.
  • the analyzing unit is configured to compare the color, brightness and direction of each pixel of the original image with the color, brightness and direction of the surrounding pixels, obtain a corresponding saliency value for each pixel, and perform the saliency analysis accordingly.
  • the analyzing unit is configured to perform image contrast analysis on the original image with a region contrast (RC) algorithm, and to perform the saliency analysis of the original image through the image contrast analysis.
  • the segmentation unit is further configured to, when segmenting each saliency region with a different saliency effect, use mathematical morphology to extract the contour of each such saliency region and/or to fill the holes inside each such saliency region.
  • the determining unit is configured to divide the original image into saliency regions with at least two different saliency effects in the following way:
  • the original image is divided into saliency regions with at least two different saliency effects according to the regional saliency values in the saliency analysis result, combined with a preset distinguishing threshold.
  • when the original image is divided into a subject region and a background region with different saliency effects, the preset distinguishing threshold includes: the saliency value of the subject region ranges from greater than 64 to less than 255, or equals 255; the saliency value of the background region ranges from greater than 0 to less than 64, or equals 0.
  • the image processing methods include: exposure value adjustment, and/or blurring, and/or white balance effects, and/or background replacement, and/or saliency value adjustment, and/or tone scale adjustment, and/or brightness adjustment, and/or hue/saturation adjustment, and/or color replacement, and/or gradient mapping, and/or photo filters.
  • the segmentation unit is configured to use mathematical morphology to extract the contour of each saliency region with a different saliency effect by: obtaining a binary image of each such saliency region through dilation, erosion, opening and closing operations, and performing contour extraction on the binary image thus obtained;
  • the segmentation unit is configured to fill the holes inside each saliency region with a different saliency effect by: computing a corresponding binary image for each such saliency region, extracting the inner contours of the binary image of each saliency region, determining inner contours smaller than a preset area to be internal holes, and filling the internal holes with pixels.
  • the analyzing unit is configured to perform saliency analysis of the original image by:
  • the original image is divided into N regions according to the set pixel size, and the saliency value S(r_k) of region r_k is calculated as:

    S(r_k) = \sum_{r_i \neq r_k} \omega(r_i) \, D_r(r_k, r_i)

  • where r_i denotes a region other than r_k, \omega(r_i) is the weighting value of region r_i, and D_r(r_k, r_i) is the color distance between the two regions, calculated for two regions r_1 and r_2 as:

    D_r(r_1, r_2) = \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} f_1(i) \, f_2(j) \, d(c_{1,i}, c_{2,j})

  • f_1(i) is the probability of the i-th color among all n_1 statistical colors of region r_1; f_2(j) is the probability of the j-th color among all n_2 statistical colors of region r_2; d(c_{1,i}, c_{2,j}) is the distance between the i-th color c_{1,i} of r_1 and the j-th color c_{2,j} of r_2;
  • adding the regional spatial distance term to S(r_k) gives the regional saliency value:

    S(r_k) = \sum_{r_i \neq r_k} \exp\!\left(-\frac{D_s(r_k, r_i)}{\sigma_s^2}\right) \omega(r_i) \, D_r(r_k, r_i)

  • where D_s(r_k, r_i) is the Euclidean distance between the centers of gravity of the two regions, and \sigma_s is the spatial-distance adjustment factor.
  • the present application further provides a method for implementing image processing, including:
  • performing saliency analysis on an original image; dividing, according to the saliency analysis result, the original image into saliency regions with at least two different saliency effects; and processing each saliency region with a corresponding image processing method.
  • dividing the original image into saliency regions with at least two different saliency effects includes:
  • dividing the original image into saliency regions with at least two different saliency effects according to the regional saliency values in the saliency analysis result, combined with a preset distinguishing threshold.
  • when the original image is divided into a subject region and a background region with different saliency effects, the preset distinguishing threshold includes: the saliency value of the subject region ranges from greater than 64 to less than 255, or equals 255; the saliency value of the background region ranges from greater than 0 to less than 64, or equals 0.
  • before each saliency region is processed with its corresponding image processing method, the method further includes: segmenting each saliency region with a different saliency effect.
  • performing saliency analysis on the original image includes: comparing the color, brightness and direction of each pixel of the original image with the color, brightness and direction of the surrounding pixels, obtaining a corresponding saliency value for each pixel, and performing the saliency analysis accordingly.
  • performing the saliency analysis of the original image includes:
  • dividing the original image into N regions according to the set pixel size, and calculating the saliency value S(r_k) of region r_k as:

    S(r_k) = \sum_{r_i \neq r_k} \omega(r_i) \, D_r(r_k, r_i)

  • where r_i denotes a region other than r_k, \omega(r_i) is the weighting value of region r_i, and D_r(r_k, r_i) is the color distance between the two regions, calculated for two regions r_1 and r_2 as:

    D_r(r_1, r_2) = \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} f_1(i) \, f_2(j) \, d(c_{1,i}, c_{2,j})

  • f_1(i) is the probability of the i-th color among all n_1 statistical colors of region r_1; f_2(j) is the probability of the j-th color among all n_2 statistical colors of region r_2; d(c_{1,i}, c_{2,j}) is the distance between the i-th color c_{1,i} of r_1 and the j-th color c_{2,j} of r_2;
  • adding the regional spatial distance term to S(r_k) gives the regional saliency value:

    S(r_k) = \sum_{r_i \neq r_k} \exp\!\left(-\frac{D_s(r_k, r_i)}{\sigma_s^2}\right) \omega(r_i) \, D_r(r_k, r_i)

  • where D_s(r_k, r_i) is the Euclidean distance between the centers of gravity of the two regions, and \sigma_s is the spatial-distance adjustment factor.
  • performing saliency analysis on the original image includes:
  • performing image contrast analysis on the original image with a region contrast (RC) algorithm, and performing the saliency analysis of the original image through the image contrast analysis.
  • the image processing methods include: exposure value adjustment, and/or blurring, and/or white balance effects, and/or background replacement, and/or saliency value adjustment, and/or tone scale adjustment, and/or brightness adjustment, and/or hue/saturation adjustment, and/or color replacement, and/or gradient mapping, and/or photo filters.
  • when segmenting each saliency region with a different saliency effect, the method further includes:
  • using mathematical morphology to extract the contour of each saliency region with a different saliency effect, and/or filling the holes inside each saliency region with a different saliency effect.
  • using mathematical morphology to extract the contour of each saliency region with a different saliency effect includes: obtaining a binary image of each such saliency region through dilation, erosion, opening and closing operations, and performing contour extraction on the binary image thus obtained;
  • filling the holes inside each saliency region with a different saliency effect includes: computing a corresponding binary image for each such saliency region, extracting the inner contours of the binary image of each saliency region, determining inner contours smaller than a preset area to be internal holes, and filling the internal holes with pixels.
  • the technical solution of the present application includes: performing saliency analysis on an original image; dividing, according to the saliency analysis result, the original image into saliency regions with at least two different saliency effects; and processing each saliency region with a corresponding image processing method.
  • each saliency region is determined by saliency analysis, and each saliency region is processed by a corresponding image processing method, thereby improving the display effect of the image body region and improving the image display quality.
  • Figure 1 (a) is the original image taken
  • Figure 1 (b) is a saliency image of the captured image
  • FIG. 2 is a schematic diagram showing the hardware structure of a mobile terminal implementing each embodiment of the present invention.
  • FIG. 3 is a flowchart of a method for implementing image processing according to an embodiment of the present invention.
  • FIG. 4 is a structural block diagram of an apparatus for implementing image processing according to an embodiment of the present invention.
  • FIG. 5 is a flowchart of a method according to an embodiment of the present invention.
  • Figure 6 (a) is a picture content of the first original image
  • 6(b) is a schematic diagram of dividing a first original image according to an embodiment of the present invention.
  • 6(c) is a schematic diagram showing the saliency analysis of the first original image according to the embodiment of the present invention.
  • 6(d) is a schematic diagram showing the result of the saliency analysis of the first original image of the embodiment
  • FIG. 6(e) is a schematic diagram showing an effect of contrast enhancement on the first original image
  • 6(f) is a schematic diagram showing an effect of performing image processing on a first original image according to an embodiment of the present invention
  • Figure 7 (a) is the picture content of the second original image
  • FIG. 7(b) is a schematic diagram showing an effect of performing global white balance processing on the second original image
  • FIG. 7(c) is a schematic diagram showing the result of performing saliency analysis on the second original image according to an embodiment of the present invention.
  • FIG. 7(d) is a schematic diagram showing an effect of performing image processing on a second original image according to an embodiment of the present invention.
  • Figure 8 (a) is the picture content of the third original image
  • FIG. 8(b) is a schematic diagram showing results of performing saliency analysis on a third original image according to an embodiment of the present invention.
  • FIG. 8(c) is a schematic diagram showing the effect of performing image processing on the third original image according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram showing the hardware structure of a mobile terminal that implements various embodiments of the present invention, as shown in FIG. 2,
  • the mobile terminal 100 may include a user input unit 130, an output unit 150, a memory 160, a controller 180, a power supply unit 190, and the like.
  • Figure 2 illustrates a mobile terminal having various components, but it should be understood that not all illustrated components are required to be implemented. More or fewer components can be implemented instead. The elements of the mobile terminal will be described in detail below.
  • the A/V input unit 120 is arranged to receive a video signal.
  • the A/V input unit 120 may include a camera 121, which processes image data of still pictures or video obtained by an image capturing device in a video capturing mode or an image capturing mode.
  • the processed image frame can be displayed on the display unit 151.
  • the image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium), and two or more cameras 1210 may be provided according to the configuration of the mobile terminal.
  • the user input unit 130 may generate key input data according to a command input by the user to control various operations of the mobile terminal.
  • the user input unit 130 allows the user to input various types of information, and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. caused by contact), a scroll wheel, a rocker, and the like. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen can be formed.
  • the output unit 150 may include a display unit 151.
  • the display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 can display a user interface (UI) or a graphical user interface (GUI) related to a call or other communication (eg, text messaging, multimedia file download, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or image and related functions, and the like.
  • the display unit 151 can function as an input device and an output device.
  • the display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like.
  • Some of these displays may be configured to be transparent to allow a user to view from the outside, which may be referred to as a transparent display, and a typical transparent display may be, for example, a TOLED (Transparent Organic Light Emitting Diode) display or the like.
  • the mobile terminal 100 may include two or more display units (or other display devices), for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown) .
  • the touch screen can be set to detect touch input pressure as well as touch input position and touch input area.
  • the memory 160 may store a software program or the like for processing and control operations performed by the controller 180, or may temporarily store data (for example, a phone book, a message, a still image, a video, etc.) that has been output or is to be output. Moreover, the memory 160 can store data regarding vibrations and audio signals of various manners that are output when a touch is applied to the touch screen.
  • the memory 160 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card type memory (eg, SD or DX memory, etc.), a random access memory (RAM), a static random access memory ( SRAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), programmable read only memory (PROM), magnetic memory, magnetic disk, optical disk, and the like.
  • the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
  • the controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing associated with voice calls, data communications, video calls, and the like.
  • the controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
  • the power supply unit 190 receives external power or internal power under the control of the controller 180 and provides appropriate power required to operate the various components and components.
  • the various embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof.
  • the embodiments described herein may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180.
  • for a software implementation, implementations such as procedures or functions may be implemented with separate software modules that each perform at least one function or operation.
  • the software code can be implemented by a software application (or program) written in any suitable programming language, and may be stored in the memory 160 and executed by the controller 180.
  • the mobile terminal has been described above in terms of its functions.
  • hereinafter, for brevity, a slide-type mobile terminal will be described as an example among various types of mobile terminals such as folder-type, bar-type, swing-type and slide-type terminals; the embodiments of the present invention can nevertheless be applied to any type of mobile terminal and are not limited to the slide type.
  • FIG. 3 is a flowchart of a method for implementing image processing according to an embodiment of the present invention. As shown in FIG. 3, the method includes:
  • Step 300 performing saliency analysis on the original image
  • the saliency analysis of the original image includes: comparing the color, brightness and direction of each pixel of the original image with the color, brightness and direction of the surrounding pixels, obtaining a corresponding saliency value for each pixel, and performing the saliency analysis accordingly.
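  • As an illustration of this pixel-level contrast idea, the following is a minimal sketch assuming OpenCV and NumPy; the Lab color space, the Gaussian-weighted surround and the parameter sigma are choices made here for illustration (not values from the embodiment), and the direction component is omitted:

    import cv2
    import numpy as np

    def pixel_contrast_saliency(bgr, sigma=15):
        """Saliency as the distance of each pixel's Lab value from the
        Gaussian-weighted mean of its surrounding pixels, scaled to [0, 255]."""
        lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
        surround = cv2.GaussianBlur(lab, (0, 0), sigmaX=sigma)   # local "surround" estimate
        contrast = np.linalg.norm(lab - surround, axis=2)        # per-pixel contrast
        return cv2.normalize(contrast, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)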
  • the saliency analysis of the original image may also include: performing image contrast analysis on the original image with the region contrast (RC) algorithm, and performing the saliency analysis of the original image through the image contrast analysis.
  • it should be noted that, by computing the saliency of the image on the basis of the human attention mechanism, the image content that was deliberately shot out of interest during capture can be obtained; since the user applies shooting techniques to the content of interest, that part of the content has a good saliency effect.
  • the embodiment of the present invention performs saliency analysis with the region contrast (RC, Region Contrast) algorithm: following the idea of the RC algorithm, the saliency of the image is obtained from regional contrast, and a global saliency image is produced from the correlation between the global contrast and the spatial positions in the image;
  • the global saliency image has the same scale as the original image.
  • the contrast method based on global regions fully considers the contrast difference between a single region and the global regions, so the saliency of an entire region can be effectively highlighted, not only the saliency of region edges (which would consider only the saliency of the image within a local range).
  • the RC algorithm also considers the spatial relationship between regions: when calculating the saliency effect, a weighting parameter is set so that the farther apart two regions are, the smaller the weighting value, and the closer two regions are, the larger the weighting value, so that regional space is handled reasonably.
  • when the RC algorithm is used for saliency analysis, the original image is first segmented into superpixels. Assume the original image is divided into n regions; the number n of image regions can be determined from the resolution of the image, and the size of each region is set by the width p of a pixel block. For example, with p = 20 each region has 20*20 = 400 pixels; p typically lies in the range [20, 40]. With an image of length M and width N, the larger M*N (i.e. the higher the resolution), the larger p and the larger the unit area of an image region, and the number of image regions is n = M*N/(p*p).
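  • A minimal sketch of this pre-segmentation step is shown below; it uses a regular grid of p x p blocks as a stand-in for a superpixel segmentation (the grid itself, the default p and the example resolution are illustrative assumptions):

    import numpy as np

    def grid_regions(height, width, p=20):
        """Label map dividing an image into about (height*width)/(p*p) regions,
        each a p x p block; a simple substitute for superpixel pre-segmentation."""
        n_cols = int(np.ceil(width / p))
        rows = np.arange(height) // p
        cols = np.arange(width) // p
        return (rows[:, None] * n_cols + cols[None, :]).astype(np.int32)

    # Example: a 4000 x 3000 image with p = 20 gives n = 4000*3000/400 = 30000 regions;
    # a coarser p (e.g. 80) keeps the later O(n^2) region-contrast step manageable.
    labels = grid_regions(3000, 4000, p=20)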
  • for a region r_k, the RC algorithm defines the saliency value S(r_k) of region r_k as:

    S(r_k) = \sum_{r_i \neq r_k} \omega(r_i) \, D_r(r_k, r_i)    (1)

  • in formula (1), r_i denotes a region other than r_k, and \omega(r_i) is the weighting value of region r_i; the larger the pixel area (number of pixels) inside a region, the larger \omega(r_i), and the specific weighting rule can be set from the empirical values of those skilled in the art.
  • D_r(r_k, r_i) is the color distance between the two regions, specifically defined for two regions r_1 and r_2 as:

    D_r(r_1, r_2) = \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} f_1(i) \, f_2(j) \, d(c_{1,i}, c_{2,j})    (2)

  • in formula (2), f_1(i) is the probability of the i-th color among all n_1 statistical colors of region r_1;
  • f_2(j) is the probability of the j-th color among all n_2 statistical colors of region r_2;
  • d(c_{1,i}, c_{2,j}) is the distance between the i-th color c_{1,i} of r_1 and the j-th color c_{2,j} of r_2; d(c_{1,i}, c_{2,j}) is mainly a distance metric in CIELAB space (CIELAB is a color system of the CIE; the distance metric in CIELAB space determines the numerical distance between two colors on the basis of the CIELAB color system).
  • to make full use of the spatial relationship between regions, the RC algorithm adds a regional spatial distance term to formula (1), giving the regional saliency value:

    S(r_k) = \sum_{r_i \neq r_k} \exp\!\left(-\frac{D_s(r_k, r_i)}{\sigma_s^2}\right) \omega(r_i) \, D_r(r_k, r_i)    (3)

  • where D_s(r_k, r_i) is the Euclidean distance between the centers of gravity of the two regions, and \sigma_s is the spatial-distance adjustment factor: the larger the distance and the larger the adjustment factor, the smaller the influence of spatial distance on the saliency computation.
  • the value of the adjustment factor can be set according to the empirical values of those skilled in the art.
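  • A compact sketch of formula (3) follows, assuming OpenCV/NumPy, a label map from the pre-segmentation above, uniform Lab color quantization, ω(r_i) taken as the pixel count of r_i, and an illustrative σ_s; it evaluates all region pairs, so it is meant for a coarse segmentation (at most a few thousand regions):

    import cv2
    import numpy as np

    def region_contrast_saliency(bgr, labels, sigma_s=0.4, bins=12):
        """Per-pixel saliency from S(r_k) = sum_i exp(-D_s/sigma_s^2) * w(r_i) * D_r(r_k, r_i)."""
        lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
        h, w = labels.shape
        n = int(labels.max()) + 1
        # Quantize colors: each region becomes a histogram over bins**3 colors.
        q = np.minimum((lab / 256.0 * bins).astype(np.int32), bins - 1)
        color_id = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
        hist = np.zeros((n, bins ** 3), np.float64)
        np.add.at(hist, (labels.ravel(), color_id.ravel()), 1.0)
        size = np.maximum(hist.sum(axis=1), 1.0)             # w(r_i): pixels per region
        f = hist / size[:, None]                             # color probabilities f(.)
        # Region centroids in normalized coordinates for D_s.
        ys, xs = np.mgrid[0:h, 0:w]
        cy = np.bincount(labels.ravel(), ys.ravel(), minlength=n) / size / h
        cx = np.bincount(labels.ravel(), xs.ravel(), minlength=n) / size / w
        # d(c_a, c_b): Euclidean distance between quantized Lab bin centers.
        centers = np.stack(np.meshgrid(*[np.arange(bins)] * 3, indexing="ij"),
                           axis=-1).reshape(-1, 3) * (256.0 / bins) + 128.0 / bins
        d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
        dr = f @ d @ f.T                                     # D_r(r_k, r_i) for all pairs
        ds = np.hypot(cx[:, None] - cx[None, :], cy[:, None] - cy[None, :])
        weight = np.exp(-ds / sigma_s ** 2) * size[None, :]  # spatial term times w(r_i)
        np.fill_diagonal(weight, 0.0)                        # exclude r_i == r_k
        s = (weight * dr).sum(axis=1)
        s = (s - s.min()) / (s.max() - s.min() + 1e-9)
        return (s[labels] * 255).astype(np.uint8)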
  • Step 301 Divide the original image, according to the saliency analysis result, into saliency regions with at least two different saliency effects;
  • in this step, dividing the original image into saliency regions with at least two different saliency effects includes:
  • dividing the original image into saliency regions with at least two different saliency effects according to the regional saliency values in the saliency analysis result, combined with a preset distinguishing threshold.
  • optionally, when the original image is divided into a subject region and a background region with different saliency effects, the preset distinguishing threshold includes: the saliency value of the subject region ranges from greater than 64 to less than 255, or equals 255; the saliency value of the background region ranges from greater than 0 to less than 64, or equals 0.
  • it should be noted that the saliency value ranges can be adjusted according to the empirical values of those skilled in the art; when the original image is divided into more saliency regions, the value range corresponding to each saliency region can be determined on the basis of such empirical values.
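  • A minimal sketch of this thresholding step is given below (the 64/255 subject/background split described above, plus an optional multi-threshold variant; the second threshold in the variant is an illustrative value):

    import numpy as np

    def split_by_saliency(saliency, threshold=64):
        """Split a [0, 255] saliency map into subject and background masks using
        the preset threshold (subject: value > 64, background: value <= 64)."""
        subject = saliency > threshold
        return subject, ~subject

    def split_by_thresholds(saliency, thresholds=(64, 192)):
        """Optional variant: several thresholds yield more than two saliency regions."""
        return np.digitize(saliency.ravel(), np.asarray(thresholds)).reshape(saliency.shape)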
  • Step 302 Perform processing on each of the saliency regions by using a corresponding image processing method.
  • before each saliency region is processed with its corresponding image processing method, the method of the embodiment of the present invention further includes:
  • segmenting each saliency region with a different saliency effect.
  • it should be noted that segmenting each saliency region with a different saliency effect can be implemented with a related image segmentation algorithm.
  • in step 302, the image processing methods include: exposure value adjustment, and/or blurring, and/or white balance effects, and/or background replacement, and/or saliency value adjustment, and/or tone scale adjustment, and/or brightness adjustment, and/or hue/saturation adjustment, and/or color replacement, and/or gradient mapping, and/or photo filters.
  • it should be noted that processing saliency regions with different saliency effects by different image processing methods means selecting a corresponding image processing method for each saliency region with a different saliency effect; there is no association between the image processing methods of the individual saliency regions.
  • processing each saliency region with a different saliency effect by a different image processing method in the embodiment of the present invention may cover the following application scenarios:
  • Application scenario 1: a portrait landscape in which the image is under-exposed. After the subject region is segmented according to the saliency analysis result, the exposure value of the subject region is raised while the exposure value of the background region is left unchanged.
  • the exposure value can be raised step by step according to a preset unit, or adjusted directly by means of parameter input.
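  • A sketch of this scenario, assuming an 8-bit image, a boolean subject mask from the saliency split, and a simple gain of 2^EV as the exposure adjustment (the step size and gain model are illustrative, not prescribed by the embodiment):

    import numpy as np

    def raise_exposure_on_subject(bgr, subject_mask, ev=1.0):
        """Raise exposure only inside the subject region; the background keeps its values."""
        out = bgr.astype(np.float32)
        out[subject_mask] = np.clip(out[subject_mask] * (2.0 ** ev), 0, 255)
        return out.astype(np.uint8)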
  • Application scenario 2: a portrait landscape in which the contrast between the subject region and the background region is small; the subject region is kept unchanged and the background region is blurred. The blur can be adjusted step by step according to a preset adjustment unit, or the blur parameters can be entered directly by means of parameter input.
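  • A sketch of this scenario follows; the kernel size and the feathered blend at the region boundary are illustrative choices rather than parameters from the embodiment:

    import cv2
    import numpy as np

    def blur_background(bgr, subject_mask, ksize=21):
        """Blur only the background region; the subject region is kept unchanged."""
        blurred = cv2.GaussianBlur(bgr, (ksize, ksize), 0)
        alpha = cv2.GaussianBlur(subject_mask.astype(np.float32), (ksize, ksize), 0)[..., None]
        return (bgr * alpha + blurred * (1.0 - alpha)).astype(np.uint8)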
  • Application scenario 3: the white balance of the image is poor; white balance processing with different parameters is applied to the subject region and to the background region.
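  • A sketch of this scenario; the embodiment only states that the two regions receive different white balance parameters, so the grey-world rule used here on a single region is an assumption:

    import numpy as np

    def gray_world_on_region(bgr, mask):
        """Apply a grey-world white balance computed from, and applied to, one saliency region."""
        out = bgr.astype(np.float32)
        region = out[mask]                                    # (K, 3) pixels of the region
        gains = region.mean() / (region.mean(axis=0) + 1e-6)  # per-channel gains
        out[mask] = np.clip(region * gains, 0, 255)
        return out.astype(np.uint8)

    # e.g. balance only the background and leave the subject as shot:
    # corrected = gray_world_on_region(img, background_mask)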
  • Application scenario 4: the background of the image is cluttered; the subject region is kept unchanged and the background region is replaced. The background region can be replaced with a spare image taken separately by the user, or suitable background material can be selected from an image library.
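  • A sketch of this scenario, assuming a boolean subject mask and a replacement background image (a spare shot or library material); the resize step is an illustrative way of matching sizes:

    import cv2

    def replace_background(bgr, subject_mask, new_background):
        """Paste the subject region over a replacement background of the same size."""
        out = cv2.resize(new_background, (bgr.shape[1], bgr.shape[0]))
        out[subject_mask] = bgr[subject_mask]
        return out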
  • Application scenario 5: the saliency effect is poor; image fusion processing is performed to increase the saliency weight of the subject region and improve the display effect of the subject region.
  • in addition, the image processing methods further include tone scale adjustment, brightness adjustment, hue/saturation adjustment, color replacement, gradient mapping, photo filters and the like; the specific applicable scenarios and implementations are common knowledge of those skilled in the art, and superimposing two or more image processing methods is likewise a customary technical means of those skilled in the art, so neither is described in detail here.
  • when segmenting each saliency region with a different saliency effect, the method of the embodiment of the present invention further includes:
  • using mathematical morphology to extract the contour of each saliency region with a different saliency effect, and/or filling the holes inside each saliency region with a different saliency effect.
  • extracting the contours with mathematical morphology includes: obtaining a binary image of each such saliency region through dilation, erosion, opening and closing operations, and performing contour extraction on the binary image thus obtained; filling the holes includes: computing a corresponding binary image for each such saliency region, extracting the inner contours of the binary image of each saliency region, determining inner contours smaller than a preset area to be internal holes, and filling the internal holes with pixels.
  • it should be noted that dilation, erosion, opening and closing are basic operations in image processing algorithms; dilation corresponds to expansion and erosion to corrosion in image processing algorithms.
  • the contour extraction includes: obtaining a binary image of the original image through dilation, erosion, opening and closing operations, and binarizing the image into a subject region and a background region (generally, the pixels of the subject region can be set to 255 (displayed as white) and those of the background region to 0 (displayed as black)); traversing the binary image and extracting the pixel break points in the binary image, i.e. pixel transition points from 255 to 0 or from 0 to 255; taking the pixel transition points as boundary points of the image; and connecting the boundary points to form the contour of the subject region.
  • in the embodiment of the present invention, contour extraction ensures that each segmented saliency region has a smooth transition and the image is clean, while filling the holes inside a region ensures the integrity of each segmented saliency region.
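  • A sketch of the contour smoothing and hole filling described above, assuming OpenCV 4.x and a boolean region mask; the 5x5 elliptical kernel and the minimum hole area are illustrative values:

    import cv2
    import numpy as np

    def clean_region_mask(mask, min_hole_area=200):
        """Smooth a saliency-region mask with opening/closing and fill inner contours
        (holes) whose area is below a preset threshold."""
        binary = mask.astype(np.uint8) * 255
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)    # remove specks
        binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)   # close small gaps
        # RETR_CCOMP gives a two-level hierarchy: outer boundaries and the holes inside them.
        contours, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
        for i, contour in enumerate(contours):
            is_hole = hierarchy[0][i][3] != -1                       # has a parent contour
            if is_hole and cv2.contourArea(contour) < min_hole_area:
                cv2.drawContours(binary, [contour], -1, 255, thickness=cv2.FILLED)
        return binary > 0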
  • each saliency region is determined by saliency analysis, and each saliency region is processed by a corresponding image processing method, thereby improving the display effect of the image body region and improving the display quality of the image.
  • FIG. 4 is a structural block diagram of an apparatus for implementing image processing according to an embodiment of the present invention. As shown in FIG. 4, the apparatus includes: an analyzing unit 401, a determining unit 402, and a processing unit 403;
  • the analyzing unit 401 is configured to perform saliency analysis on the original image;
  • the analyzing unit 401 is configured to compare the color, brightness and direction of each pixel of the original image with the color, brightness and direction of the surrounding pixels, obtain a corresponding saliency value for each pixel, and perform the saliency analysis accordingly.
  • the analyzing unit 401 is configured to perform image contrast analysis on the original image with the region contrast (RC) algorithm, and to perform the saliency analysis of the original image through the image contrast analysis.
  • the determining unit 402 is configured to divide, according to the saliency analysis result, the original image into saliency regions with at least two different saliency effects;
  • the processing unit 403 is configured to process each saliency region with a corresponding image processing method.
  • the apparatus of the embodiment of the present invention further includes a segmentation unit 404, configured to segment each saliency region with a different saliency effect before each saliency region is processed with its corresponding image processing method.
  • the segmentation unit 404 is further configured to, when segmenting each saliency region with a different saliency effect, use mathematical morphology to extract the contour of each saliency region with a different saliency effect and/or to fill the holes inside each saliency region with a different saliency effect.
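  • A structural sketch of these units follows; the class and method names are illustrative, and the internals simply reuse the helper sketches given earlier in this description (pixel_contrast_saliency, clean_region_mask, blur_background, raise_exposure_on_subject):

    class ImageProcessingApparatus:
        """Sketch of the units of FIG. 4: analyze (401), determine (402),
        segment (404) and process (403)."""

        def analyze(self, bgr):                        # analyzing unit 401
            return pixel_contrast_saliency(bgr)        # [0, 255] saliency map

        def determine(self, saliency, threshold=64):   # determining unit 402
            subject = saliency > threshold
            return subject, ~subject                   # subject / background regions

        def segment(self, mask):                       # segmentation unit 404
            return clean_region_mask(mask)             # smooth contour, fill holes

        def process(self, bgr, subject):               # processing unit 403
            out = blur_background(bgr, subject)        # background: blur
            return raise_exposure_on_subject(out, subject, ev=0.5)   # subject: exposure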
  • FIG. 5 is a flowchart of a method according to an embodiment of the present invention. As shown in FIG. 5, the method includes:
  • Step 500 dividing the original image according to the set pixel size
  • Figure 6(a) is the picture content of the first original image; as shown in Figure 6(a), the picture contains two main parts, the animal subject and the background;
  • Figure 6(b) is a schematic diagram of dividing the first original image according to an embodiment of the present invention; as shown in Figure 6(b), the original image is divided into n regions.
  • Step 501 Perform saliency analysis on the original image divided according to the set pixel size;
  • FIG. 6(c) is a schematic diagram of the saliency analysis of the first original image according to the embodiment of the present invention; as shown in FIG. 6(c), the calculation follows formula (3) of the RC algorithm for the divided image region 1 and image region 2:
  • when image region 1 is taken as the object of the saliency analysis, the other numbered image regions in the image (all image regions except image region 1) are treated as regions different from image region 1 and the saliency value is calculated; when image region 2 is taken as the object of the saliency analysis, the other numbered image regions in the image are treated as regions different from image region 2 and the saliency value is calculated; the saliency result of image region 1 is large, while the saliency result of image region 2 is small;
  • FIG. 6(d) is a schematic diagram of the saliency analysis result of the first original image of this embodiment; as shown in FIG. 6(d), the larger the saliency value of image region 1, the higher the brightness of its pixels in the saliency image and the higher the degree of saliency; the highly salient region is the image region of interest to the user, while image region 2 has a small saliency value.
  • Step 502 Divide the original image into saliency regions with two or more different saliency effects according to the saliency analysis result.
  • here, the saliency analysis result can be used to divide the original image into two or more saliency regions with different saliency effects; the specific number of regions and the saliency values that distinguish each saliency region can be set on the basis of the analysis of those skilled in the art. For example, a first threshold number of image regions ranked highest by saliency value can be set as a saliency region with a first display effect; a second threshold number of image regions ranked in the middle can be set as a saliency region with a second display effect; and a third threshold number of image regions ranked lowest can be set as a saliency region with a third display effect.
  • the saliency regions can also be divided by setting a proportion or a saliency value, and the specific setting can be made and adjusted by those skilled in the art according to image analysis. Through saliency analysis, the degree of saliency of the subject is determined, and the two saliency regions of the subject and the background are divided according to the difference in saliency effect.
  • in this embodiment the saliency regions can be divided into a subject region and a background region; taking an image saliency value range (the brightness of the saliency image) of [0, 255] as an example, the saliency value range of the subject region can be assumed to be set to [64, 255] and that of the background region to [0, 64].
  • the main consideration is that when the saliency value is low, the human eye does not perceive that part of the image clearly, so this value can serve as the saliency boundary between background and subject; the value range can be adjusted according to the empirical values of those skilled in the art, and the division into saliency regions can also be adjusted according to the required fineness of the image processing. In general, the higher the image-processing quality requirement, the more saliency regions are divided.
  • Step 503 Segment each saliency region with a different saliency effect.
  • optionally, when segmenting each saliency region with a different saliency effect, this embodiment further includes: using mathematical morphology to extract the contour of each saliency region with a different saliency effect, and/or filling the holes inside each saliency region with a different saliency effect.
  • Step 504 Perform processing on each of the saliency regions by using a corresponding image processing method.
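  • An end-to-end sketch of steps 500 to 504 follows, reusing the helper sketches given earlier (grid_regions, region_contrast_saliency, split_by_saliency, clean_region_mask, blur_background); the file names and parameters are illustrative:

    import cv2

    img = cv2.imread("original.jpg")                          # hypothetical input file
    labels = grid_regions(*img.shape[:2], p=80)               # step 500: divide into regions
    saliency = region_contrast_saliency(img, labels)          # step 501: saliency analysis
    subject, background = split_by_saliency(saliency, 64)     # step 502: distinguish regions
    subject = clean_region_mask(subject)                      # step 503: segment subject region
    result = blur_background(img, subject, ksize=31)          # step 504: per-region processing
    cv2.imwrite("result.png", result)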
  • in the first original image of FIG. 6(a), the contrast between the subject and the background is low, so the display effect of the subject is not prominent;
  • FIG. 6(e) is a schematic diagram of the effect of contrast enhancement on the first original image; as shown in FIG. 6(e), because the subject and the background undergo contrast enhancement at the same time, the goal of enhancing the subject is not reached;
  • FIG. 6(f) is a schematic diagram of the effect of performing image processing on the first original image according to an embodiment of the present invention; as shown in FIG. 6(f), this embodiment applies contrast-enhancing image processing to the subject and does no processing on the background, so the display effect of the subject is enhanced.
  • Figure 7(a) is the picture content of the second original image; as shown in Figure 7(a), the second original image suffers from a poor white balance effect across the animal subject and the background;
  • Figure 7(b) is a schematic diagram of the effect of performing global white balance processing on the second original image; as shown in Fig. 7(b), because the animal subject and the background are white-balance processed at the same time, the display of the animal subject is distorted by the white balance processing and the picture display effect deteriorates;
  • FIG. 7(c) is a schematic diagram of the result of performing saliency analysis on the second original image according to an embodiment of the present invention; as shown in FIG. 7(c), the degree of saliency of the animal subject is determined through saliency analysis, and the animal subject and the background are divided into two saliency regions according to the difference in saliency effect;
  • FIG. 7(d) is a schematic diagram of the effect of performing image processing on the second original image according to the embodiment of the present invention; as shown in FIG. 7(d), this embodiment performs white balance processing on the background and does no processing on the animal subject, so the display effect of the picture is enhanced by the local white balance processing. After the partial white balance, the flowers and grass become green and the color of the puppy would turn grey; by using the saliency image for local white balance, the white balance of the puppy is kept unadjusted and the white balance effect of the background is improved.
  • Figure 8(a) is the picture content of the third original image; as shown in Figure 8(a), the animal subject and the background are equally sharp, which reduces the display effect of the animal subject;
  • Figure 8(b) is a schematic diagram of the result of performing saliency analysis on the third original image according to an embodiment of the present invention; as shown in Fig. 8(b), the degree of saliency of the animal subject is determined through saliency analysis, and the animal subject and the background are divided into two saliency regions according to the difference in saliency effect;
  • Figure 8(c) is a schematic diagram of the effect of performing image processing on the third original image according to the embodiment of the present invention; as shown in FIG. 8(c), this embodiment blurs the background and does no processing on the animal subject, and this processing improves the display effect of the animal subject in the picture.
  • each module/unit in the above embodiment may be implemented in the form of hardware, for example, by implementing an integrated circuit to implement its corresponding function, or may be implemented in the form of a software function module, for example, executing a program stored in the memory by a processor. / instruction to achieve its corresponding function.
  • the invention is not limited to any specific form of combination of hardware and software.
  • the above technical solution improves the display effect of the image main body area and improves the display quality of the image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A method and apparatus for implementing image processing, including: performing saliency analysis on an original image; dividing, according to the saliency analysis result, the original image into saliency regions with at least two different saliency effects; and processing each saliency region with a corresponding image processing method. By distinguishing and determining each saliency region through saliency analysis and processing each saliency region with a corresponding image processing method, the above technical solution improves the display effect of the subject region of the image and improves the display quality of the image.

Description

一种实现图像处理的方法及装置 技术领域
本文涉及但不限于图像处理技术,尤指一种实现图像处理的方法及装置。
背景技术
图像显著性是图像的重要视觉特征,体现了人眼对图像的部分区域的重视程度。对一幅图像,用户只对图像中的部分区域感兴趣,感兴趣的部分区域代表了用户的查询意图,其他区域则与用户的查询意图无关。图1(a)为拍摄的原始图像,如图1(a)所示,图像中的主体区域可以显著的显示在视觉范围内;图1(b)为拍摄图像的显著性图像,如图1(b)所示,显著性图像中像素点的亮度越高,表示显著性程度越高,相应区域的原始图像越能引起用户视觉兴趣,该部分显著性高区域的图像为用户感兴趣的区域。
拍摄图像时,用户一般会对焦到感兴趣的主体区域,主体区域通常会成为拍摄图像的显著性区域,而对照片质量的评价权重也主要体现在图像的主体区域;当拍摄的图像整体出现对焦模糊、曝光不正确、光线遮挡、饱和度欠佳、对比度不明显等问题,如果图像处理算法通过对图像进行全局调整和处理,即对主体区域和背景区域同时进行相同处理,导致主体区域的显著性减弱。主体区域的显示效果无法得到提高。
目前,对主体区域和背景区域按照颜色和亮度的差异进行分割后,对分割的主体区域和背景区域分别进行图像处理。采用颜色和亮度的差异进行分割获得的主体区域和背景区域进行图像处理时,只能对分割的每个区域分别采用同一图像处理方法进行处理,例如、拍摄的图像为有背景的人物图像时,当根据颜色和亮度分割为人物和背景两个区域时,以人物作为主体区域按照统一图像处理方法对人物图像进行处理,以背景作为背景区域按照统一图像处理方法对背景图像进行处理;人物图像中的局部特征(例如、拍摄图像对焦的眼镜、脸颊等)并未进行区分,统一处理方式对局部特征并未进行显著 性方面的优化,导致显示效果并未提高。
发明内容
以下是对本文详细描述的主题的概述。本概述并非是为了限制权利要求的保护范围。
本发明实施例提供一种实现图像处理的方法及装置,能够提高主体区域的显示效果。
本发明实施例提供了一种实现图像处理的装置,包括:分析单元、确定单元及处理单元;其中,
分析单元,设置为对原始图像进行显著性分析;
确定单元,设置为根据显著性分析结果,区分所述原始图像为包含至少两个不同显著性效果的显著性区域;
处理单元,设置为对每个显著性区域分别采用对应的图像处理方法进行处理。
可选地,该装置还包括分割单元,设置为对每个显著性区域分别采用对应的图像处理方法进行处理之前,分割每个显著性效果不同的显著性区域。
可选地,所述分析单元是设置为,对原始图像的每个像素点的颜色、亮度及方向与周边像素点的颜色、亮度及方向进行图像对比度对比,获得每个像素点相应的显著性数值,进行显著性分析。
可选地,所述分析单元是设置为,采用区域对比RC算法对所述原始图像进行图像对比度分析,通过图像对比度分析进行所述原始图像的显著性分析。
可选地,分割单元还设置为,分割每个显著性效果不同的显著性区域时,利用数学形态学提取每个显著性效果不同的显著性区域的轮廓、和/或填充每个显著性效果不同的显著性区域的区域内部空洞。
可选地,确定单元,是设置为通过如下方式实现区分所述原始图像为包含至少两个不同显著性效果的显著性区域:
根据显著性分析结果中区域显著性数值大小,结合预先设定的区分阈值,将所述原始图像区分为包含至少两个不同显著性效果的显著性区域。
可选地,当将所述原始图像划分为显著性效果不同的主体区域和背景区域时,所述预先设定的区分阈值包括:主体区域的显著性取值范围为大于64且小于255,或等于255;背景区域的显著性取值范围为大于0且小于64,或等于0。
可选地,所述图像处理方法包括:调整曝光值、和/或虚化处理、和/或白平衡特效、和/或背景替换,和/或显著性取值调整、和/或色阶调整、和/或亮度调整、色相/饱和度调整、和/或颜色替换、和/或渐变映射、和/或照片滤镜。
可选地,所述分割单元是设置为通过如下方式实现利用数学形态学提取每个所述显著性效果不同的显著性区域的轮廓:
通过膨胀、腐蚀、开启及闭合运算获得每个所述显著性效果不同的显著性区域的二值图像,通过计算获得的二值图像进行轮廓提取;
所述分割单元是设置为通过如下方式实现填充每个所述显著性效果不同的显著性区域的区域内部空洞:
对每个显著性效果不同的显著性区域分别计算相应的二值图像,对每个显著性区域的二值图像的内部进行轮廓提取获得内部轮廓,确定小于预设面积的内部轮廓为内部空洞,对所述内部空洞进行像素填充。
可选地,所述分析单元是设置为通过如下方式实现进行原始图像的显著性分析:
对所述原始图像按照设定的像素大小进行分割为N个区域,计算区域rk的显著性数值S(rk):
S(r_k) = \sum_{r_i \neq r_k} \omega(r_i) \, D_r(r_k, r_i)
其中,ri表示不同于rk的区域,ω(ri)为区域ri的加权值,Dr(rk,ri)为两个区域的颜色距离差值,Dr(rk,ri)计算公式为:
D_r(r_1, r_2) = \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} f_1(i) \, f_2(j) \, d(c_{1,i}, c_{2,j})
其中,f1(i)表示区域r1中第i种颜色在该区域所有统计的颜色种类n1中出现的概率;f2(i)为区域r2中第j种颜色在该区域所有统计的颜色种类n2中出现的概率;d(c1,i,c2,j)为r1中第i种颜色c1,i与r2中第j种颜色c2,j的距离差值;
对S(rk)加入区域空间距离差值,获得区域显著性数值为:
S(r_k) = \sum_{r_i \neq r_k} \exp\!\left(-\frac{D_s(r_k, r_i)}{\sigma_s^2}\right) \omega(r_i) \, D_r(r_k, r_i)
其中,Ds(rk,ri)为两个区域的区域重心的欧式距离,σs为空间距离影响调节因子。
另一方面,本申请还提供一种实现图像处理的方法,包括:
对原始图像进行显著性分析;
根据显著性分析结果,区分所述原始图像为包含至少两个不同显著性效果的显著性区域;
对每个显著性区域分别采用对应的图像处理方法进行处理。
可选地,区分原始图像为包含至少两个不同显著性效果的显著性区域包括:
根据显著性分析结果中区域显著性数值大小,结合预先设定的区分阈值,将所述原始图像区分为包含至少两个不同显著性效果的显著性区域。
可选地,当将所述原始图像划分为显著性效果不同的主体区域和背景区域时,所述预先设定的区分阈值包括:主体区域的显著性取值范围为大于64且小于255,或等于255;背景区域的显著性取值范围为大于0且小于64,或等于0。
可选地,对每个显著性区域分别采用对应的图像处理方法进行处理之前,该方法还包括:
分割每个显著性效果不同的显著性区域。
可选地,对原始图像进行显著性分析包括:对原始图像的每个像素点的颜色、亮度及方向与周边像素点的颜色、亮度及方向进行图像对比度对比, 获得每个像素点相应的显著性数值,进行显著性分析。
可选地,进行原始图像的显著性分析包括:
对所述原始图像按照设定的像素大小进行分割为N个区域,计算区域rk的显著性数值S(rk)为:
S(r_k) = \sum_{r_i \neq r_k} \omega(r_i) \, D_r(r_k, r_i)
其中,ri表示不同于rk的区域,ω(ri)为区域ri的加权值,Dr(rk,ri)为两个区域的颜色距离差值,Dr(rk,ri)计算公式为:
D_r(r_1, r_2) = \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} f_1(i) \, f_2(j) \, d(c_{1,i}, c_{2,j})
其中,f1(i)表示区域r1中第i种颜色在该区域所有统计的颜色种类n1中出现的概率;f2(i)为区域r2中第j种颜色在该区域所有统计的颜色种类n2中出现的概率;d(c1,i,c2,j)为r1中第i种颜色c1,i与r2中第j种颜色c2,j的距离差值;
对S(rk)加入区域空间距离差值,获得区域显著性数值为:
S(r_k) = \sum_{r_i \neq r_k} \exp\!\left(-\frac{D_s(r_k, r_i)}{\sigma_s^2}\right) \omega(r_i) \, D_r(r_k, r_i)
其中,Ds(rk,ri)为两个区域的区域重心的欧式距离,σs为空间距离影响调节因子。
可选地,对原始图像进行显著性分析包括:
采用区域对比RC算法对所述原始图像进行图像对比度分析,通过图像对比度分析进行所述原始图像的显著性分析。
可选地,图像处理方法包括:调整曝光值、和/或虚化处理、和/或白平衡特效、和/或背景替换,和/或显著性取值调整、和/或色阶调整、和/或亮度调整、色相/饱和度调整、和/或颜色替换、和/或渐变映射、和/或照片滤镜。
可选地,分割每个显著性效果不同的显著性区域时,该方法还包括:
利用数学形态学提取每个显著性效果不同的显著性区域的轮廓、和/或填充每个显著性效果不同的显著性区域的区域内部空洞。
可选地,利用数学形态学提取每个所述显著性效果不同的显著性区域的 轮廓包括:
通过膨胀、腐蚀、开启及闭合运算,获得每个所述显著性效果不同的显著性区域的二值图像,通过计算获得的二值图像进行轮廓提取;
所述填充每个所述显著性效果不同的显著性区域的区域内部空洞包括:
对每个显著性效果不同的显著性区域分别计算相应的二值图像,对每个显著性区域的二值图像的内部进行轮廓提取获得内部轮廓,确定小于预设面积的内部轮廓为内部空洞,对所述内部空洞进行像素填充。
本申请技术方案包括:对原始图像进行显著性分析;根据显著性分析结果,区分所述原始图像为包含至少两个不同显著性效果的显著性区域;对每个显著性区域分别采用对应的图像处理方法进行处理。本发明实施例通过显著性分析区分确定每个显著性区域,对每个显著性区域分别采用对应的图像处理方法进行处理,提高了图像主体区域的显示效果,提升了图像的显示质量。
在阅读并理解了附图和详细描述后,可以明白其他方面。
附图概述
图1(a)为拍摄的原始图像;
图1(b)为拍摄图像的显著性图像;
图2为实现本发明每个实施例的移动终端的硬件结构示意;
图3为本发明实施例实现图像处理的方法的流程图;
图4为本发明实施例实现图像处理的装置的结构框图;
图5为本发明实施例的方法流程图;
图6(a)为对第一原始图像的图片内容;
图6(b)为本发明实施例分割第一原始图像的示意图;
图6(c)为本发明实施例第一原始图像的显著性分析示意图;
图6(d)为本实施例第一原始图像的显著性分析结果示意图;
图6(e)为对第一原始图像进行对比度增强的效果示意图;
图6(f)为本发明实施例对第一原始图像进行图像处理的效果示意图;
图7(a)为第二原始图像的图片内容;
图7(b)为对第二原始图像进行全局白平衡处理的效果示意图;
图7(c)为本发明实施例对第二原始图像进行显著性分析的结果示意图;
图7(d)为本发明实施例对第二原始图像进行图像处理的效果示意图;
图8(a)为第三原始图像的图片内容;
图8(b)为本发明实施例对第三原始图像进行显著性分析的结果示意图;
图8(c)为本发明实施例对第二原始图像进行图像处理的效果示意图。
本发明的实施方式
下文中将结合附图对本发明的实施例进行详细说明。需要说明的是,在不冲突的情况下,本申请中的实施例及实施例中的特征可以相互任意组合。
图2为实现本发明各个实施例的移动终端的硬件结构示意,如图2所示,
移动终端100可以包括用户输入单元130、输出单元150、存储器160、控制器180和电源单元190等等。图2示出了具有各种组件的移动终端,但是应理解的是,并不要求实施所有示出的组件。可以替代地实施更多或更少的组件。将在下面详细描述移动终端的元件。
A/V输入单元120设置为接收视频信号。A/V输入单元120可以包括相机121相机121对在视频捕获模式或图像捕获模式中由图像捕获装置获得的静态图片或视频的图像数据进行处理。处理后的图像帧可以显示在显示单元151上。经相机121处理后的图像帧可以存储在存储器160(或其它存储介质)中,可以根据移动终端的构造提供两个或更多相机1210。
用户输入单元130可以根据用户输入的命令生成键输入数据以控制移动终端的各种操作。用户输入单元130允许用户输入各种类型的信息,并且可以包括键盘、锅仔片、触摸板(例如,检测由于被接触而导致的电阻、压力、电容等等的变化的触敏组件)、滚轮、摇杆等等。特别地,当触摸板以层的形 式叠加在显示单元151上时,可以形成触摸屏。
输出单元150可以包括显示单元151。显示单元151可以显示在移动终端100中处理的信息。例如,当移动终端100处于电话通话模式时,显示单元151可以显示与通话或其它通信(例如,文本消息收发、多媒体文件下载等等)相关的用户界面(UI)或图形用户界面(GUI)。当移动终端100处于视频通话模式或者图像捕获模式时,显示单元151可以显示捕获的图像和/或接收的图像、示出视频或图像以及相关功能的UI或GUI等等。同时,当显示单元151和触摸板以层的形式彼此叠加以形成触摸屏时,显示单元151可以用作输入装置和输出装置。显示单元151可以包括液晶显示器(LCD)、薄膜晶体管LCD(TFT-LCD)、有机发光二极管(OLED)显示器、柔性显示器、三维(3D)显示器等等中的至少一种。这些显示器中的一些可以被构造为透明状以允许用户从外部观看,这可以称为透明显示器,典型的透明显示器可以例如为TOLED(透明有机发光二极管)显示器等等。根据特定想要的实施方式,移动终端100可以包括两个或更多显示单元(或其它显示装置),例如,移动终端可以包括外部显示单元(未示出)和内部显示单元(未示出)。触摸屏可设置为检测触摸输入压力以及触摸输入位置和触摸输入面积。
存储器160可以存储由控制器180执行的处理和控制操作的软件程序等等,或者可以暂时地存储己经输出或将要输出的数据(例如,电话簿、消息、静态图像、视频等等)。而且,存储器160可以存储关于当触摸施加到触摸屏时输出的各种方式的振动和音频信号的数据。
存储器160可以包括至少一种类型的存储介质,所述存储介质包括闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等等)、随机访问存储器(RAM)、静态随机访问存储器(SRAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、可编程只读存储器(PROM)、磁性存储器、磁盘、光盘等等。而且,移动终端100可以与通过网络连接执行存储器160的存储功能的网络存储装置协作。
控制器180通常控制移动终端的总体操作。例如,控制器180执行与语音通话、数据通信、视频通话等等相关的控制和处理。控制器180可以执行模式识别处理,以将在触摸屏上执行的手写输入或者图片绘制输入识别为字 符或图像。
电源单元190在控制器180的控制下接收外部电力或内部电力并且提供操作各元件和组件所需的适当的电力。
这里描述的各种实施方式可以以使用例如计算机软件、硬件或其任何组合的计算机可读介质来实施。对于硬件实施,这里描述的实施方式可以通过使用特定用途集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理装置(DSPD)、可编程逻辑装置(PLD)、现场可编程门阵列(FPGA)、处理器、控制器、微控制器、微处理器、被设计为执行这里描述的功能的电子单元中的至少一种来实施,在一些情况下,这样的实施方式可以在控制器180中实施。对于软件实施,诸如过程或功能的实施方式可以与允许执行至少一种功能或操作的单独的软件模块来实施。软件代码可以由以任何适当的编程语言编写的软件应用程序(或程序)来实施,软件代码可以存储在存储器160中并且由控制器180执行。
至此,己经按照其功能描述了移动终端。下面,为了简要起见,将描述诸如折叠型、直板型、摆动型、滑动型移动终端等等的各种类型的移动终端中的滑动型移动终端作为示例。因此,本发明实施例能够应用于任何类型的移动终端,并且不限于滑动型移动终端。
基于上述移动终端硬件结构以及通信系统,提出本发明方法各个实施例。
图3为本发明实施例实现图像处理的方法的流程图,如图3所示,包括:
步骤300、对原始图像进行显著性分析;
对原始图像进行显著性分析包括:对原始图像的每个像素点的颜色、亮度及方向与周边像素点的颜色、亮度及方向进行图像对比度对比,获得每个像素点相应的显著性数值,进行显著性分析。
对原始图像进行显著性分析包括:
采用区域对比RC算法对原始图像进行图像对比度分析,通过图像对比度分析进行原始图像的显著性分析。
需要说明的是,以人的注意力机制为基础计算图像的显著度,可以得到 在图像拍摄过程中根据兴趣而专注拍摄的图像内容,由于用户通过拍摄技巧对感兴趣的内容进行拍摄,该部分内容显著性效果较好。本发明实施例以区域对比(RC,Region Contrast)算法进行显著性分析,依据RC算法的思想,利用区域对比度的计算获得图像的显著性;通过获得图像的全局对比度和空间位置的相关性,产生全局性的显著性图像;即图像的尺度大小和原图相同,基于全局区域的对比度方法充分考虑了单一区域同全局区域的对比度差异,从而能够有效的突出整个区域的显著性,而不仅是区域的边缘显著性(仅考虑局部范围内图像的显著性)。此外,RC算法同时还考虑了区域空间关系,在计算显著性效果时,通过加权参数设定区域之间位置相距越大,加权值相应越小,区域之间的位置越近,加权值应越大,实现对区域空间的合理处理。以下是RC算法的主要内容为:
采用RC算法进行显著性分析时,首先对原始图像按照超像素进行分割,假设将原始图像分割为n个区域,图像区域的数目n可以根据图像的分辨率来确定,每个区域的大小设置为像素块的宽度p,如p=20,则有20*20=400个像素,通常p的范围为[20,40],设定图像的长和宽分为么M和N,M*N越大,P越大,也即图像分辨率越大,图像区域的单位面积越大,那么图像区域的数目n=M*N/(p*p)。
对于区域rk,RC算法定义区域rk的显著性值S(rk)为:
S(r_k) = \sum_{r_i \neq r_k} \omega(r_i) \, D_r(r_k, r_i)    (1)
式(1)中,ri表示不同于rk的区域,ω(ri)为区域ri的加权值,这里规定区域内部的像素面积越大(像素个数),ω(ri)越大,具体权值变化规律可以根据本领域技术人员的经验值设定;Dr(rk,ri)为两个区域的颜色距离差值,具体定义为:
D_r(r_1, r_2) = \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} f_1(i) \, f_2(j) \, d(c_{1,i}, c_{2,j})    (2)
式(2)中,f1(i)表示:区域r1中第i种颜色在该区域所有统计的颜色种类n1中出现的概率;f2(i)为区域r2中第j种颜色在该区域所有统计的颜色种类n2中出现的概率。d(c1,i,c2,j)为r1中第i种颜色c1,i与r2中第j种颜色c2,j的距离 差值,d(c1,i,c2,j)主要是在CIELAB空间的距离度量(CIELAB是CIE的一个颜色系统,在CIELAB空间的距离度量是基于CIELAB空间颜色系统,确定一个颜色的距离度量的数值信息)。为了充分利用区域空间关系,基于公式(1),RC算法加入区域空间距离差值,获得区域显著性数值为:
S(r_k) = \sum_{r_i \neq r_k} \exp\!\left(-\frac{D_s(r_k, r_i)}{\sigma_s^2}\right) \omega(r_i) \, D_r(r_k, r_i)    (3)
其中,Ds(rk,ri)为两个区域的区域重心的欧式距离,σs为空间距离影响调节因子,距离越大,调节因子数值越大,调节因子越大,空间距离对于显著性的计算影响越小。调节因子取值可以根据本领域技术人员的经验值进行设定。
步骤301、根据显著性分析结果,区分所述原始图像为包含至少两个不同显著性效果的显著性区域;
本步骤中,区分所述原始图像为包含至少两个不同显著性效果的显著性区域包括:
根据显著性分析结果中区域显著性数值大小,结合预先设定的区分阈值,将所述原始图像区分为包含至少两个不同显著性效果的显著性区域。
可选的,当将所述原始图像划分为显著性效果不同的主体区域和背景区域时,所述预先设定的区分阈值包括:主体区域的显著性取值范围为大于64且小于255,或等于255,背景区域的显著性取值范围为大于0且小于64,或等于0。
需要说明的是,显著性取值范围的大小可以根据本领域技术人员的经验值进行调整,当将原始图像划分更多的显著性区域时,可以基于本领域技术人员的经验值,确定每个显著性区域相应的取值范围。
步骤302、对每个显著性区域分别采用对应的图像处理方法进行处理。
对每个显著性区域分别采用对应的图像处理方法进行处理之前,本发明实施例方法还包括:
分割每个显著性效果不同的显著性区域。
需要说明的是,本发明实施例方法分割每个显著性效果不同的显著性区 域可以采用相关的图像分割算法实现。
步骤302中,图像处理方法包括:调整曝光值、和/或虚化处理、和/或白平衡特效、和/或背景替换,和/或显著性取值调整、和/或色阶调整、和/或亮度调整、色相/饱和度调整、和/或颜色替换、和/或渐变映射、和/或照片滤镜。
需要说明的是,对每个显著性效果不同的显著性区域采用不同的图像处理方法进行处理是指对每个显著性效果不同的显著性区域分别选择相应的图像处理方法进行图像处理,每个显著性区域的图像处理方法之间不存在关联关系,本发明实施例对每个显著性效果不同的显著性区域采用不同的图像处理方法进行处理可以包含以下应用场景:
应用场景一、人像风景,图像存在曝光不足的问题,根据显著性分析结果分割出主体区域后,对主体区域进行曝光值上调处理,背景区域曝光值不变。曝光值上调大小可以按照预先设定的单位逐步调整,也可以采用参数输入的方式直接进行调整。
应用场景之二、人像风景,主体区域和背景区域对比度小,保持主体区域不变,对背景区域进行虚化处理;虚化处理的调整可以按照预先设定的调整单位逐步进行,也可以采用参数输入的方式直接输入虚化参数。
应用场景之三、图像白平衡效果不佳,对主体区域和背景区域分别采用不同的参数进行白平衡处理。
应用场景之四、图像背景杂乱,保持主体区域不变,进行背景区域的替换;背景区域替换可以选用用户单独拍摄的备用图像,也可以从图像库中选择合适的背景素材。
应用场景之五、显著性效果不佳,进行图像融合处理,对主体区域加大显著性权重,提高主体区域的显示效果。
此外,图像处理方法还包括:色阶调整、亮度调整、色相/饱和度调整、颜色替换、渐变映射、照片滤镜等,具体适用场景及实施方式属于本领域技术人员的公知常识,在此不做赘述;另外,两种或两种以上图片处理方法的叠加实现也属于本领域技术人员的惯用技术手段,在此不再赘述。
分割每个显著性效果不同的显著性区域时,本发明实施例方法还包括:
利用数学形态学提取每个显著性效果不同的显著性区域的轮廓,和/或,填充每个显著性效果不同的显著性区域的区域内部空洞。
利用数学形态学提取每个所述显著性效果不同的显著性区域的轮廓包括:
通过膨胀、腐蚀、开启及闭合运算获得每个所述显著性效果不同的显著性区域的二值图像,通过计算获得的二值图像进行轮廓提取;
所述填充每个所述显著性效果不同的显著性区域的区域内部空洞包括:
对每个显著性效果不同的显著性区域分别计算相应的二值图像,对每个显著性区域的二值图像的内部进行轮廓提取获得内部轮廓,确定小于预设面积的内部轮廓为内部空洞,对所述内部空洞进行像素填充。
需要说明的是,膨胀、腐蚀、开启及闭合为图像处理算法中的基本算法,膨胀与图像处理算法中的扩张相同、腐蚀与图像处理算法中的侵蚀相同。轮廓提取包括:通过膨胀、腐蚀、开启及闭合运算获得原始图像的二值图像,通过二值图像进行二值化分割为主体区域和背景区域(一般的,可以将主体区域的像素设置为255(显示为白色),背景区域设置为0(显示为黑色)),遍历二值图像,提取出二值图像中的像素突变点,如255到0或者0到255的像素突变点,将像素突变点作为图像的边界点,将边界点连接构成主体区域的轮廓。
本发明实施例通过轮廓提取可以保证分割的每个显著性区域过渡平滑、图像整洁;区域内部空洞的填充可以保证分割的每个显著性区域的完整。
本发明实施例方法通过显著性分析区分确定每个显著性区域,对每个显著性区域分别采用对应的图像处理方法进行处理,提高了图像主体区域的显示效果,提升了图像的显示质量。
图4为本发明实施例实现图像处理的装置的结构框图,如图4所示,包括:分析单元401、确定单元402及处理单元403;其中,
分析单元401,设置为对原始图像进行显著性分析;
分析单元401是设置为,对原始图像的每个像素点的颜色、亮度及方向 与周边像素点的颜色、亮度及方向进行图像对比度对比,获得每个像素点相应的显著性数值,进行显著性分析。
分析单元401是设置为,采用区域对比RC算法对原始图像进行图像对比度分析,通过图像对比度分析进行原始图像的显著性分析。
确定单元402,设置为根据显著性分析结果,区分所述原始图像为包含至少两个不同显著性效果的显著性区域;
处理单元403,设置为对每个显著性区域分别采用对应的图像处理方法进行处理。
本发明实施例装置还包括分割单元404,设置为对每个显著性区域分别采用对应的图像处理方法进行处理之前,分割每个显著性效果不同的显著性区域。
分割单元404还设置为,分割每个显著性效果不同的显著性区域时,利用数学形态学提取每个显著性效果不同的显著性区域的轮廓、和/或填充每个显著性效果不同的显著性区域的区域内部空洞。
以下通过具体实施例对本发明方法进行清楚详细的说明,实施例仅用于陈述本发明,并不用于限定本发明方法的保护范围。
实施例
图5为本发明实施例的方法流程图,如图5所示,包括:
步骤500、对原始图像按照设定的像素大小进行分割;
图6(a)为第一原始图像的图片内容,如图6(a)所示,图片中包含动物主体和背景两个主要部分;图6(b)为本发明实施例分割第一原始图像的示意图,如图6(b)所示,将原始图像分割为n个区域。
步骤501、对按照设定的像素分割的原始图像进行显著性分析;
图6(c)为本发明实施例第一原始图像的显著性分析示意图,如图6(c)所示,根据RC算法的公式(3)计算,对分割的图像区域1和图像区域2,以图像区域1作为显著性分析对象时,图像中其他编号的图像区域(除图像区域1以外的所有图像区域)作为不同于图像区域1的图像,进行显著性数值计算;以图像区域2作为显著性分析对象时,图像中其他编号的图像区域 作为不同于图像区域2的图像,进行显著性数值计算;图像区域1的显著性计算结果较大,图像区域2的显著性计算结果较小;图6(d)为本实施例第一原始图像的显著性分析结果示意图,如图6(d)所示,图像区域1的显著性数值越大,显著性图像中像素点的亮度越高,表示显著性程度越高,显著性高的区域是用户感兴趣的图像区域,而图像区域2的显著性数值较小。
步骤502、根据显著性分析结果将原始图像区分为包含两个或两个以上显著性效果不同的显著性区域。
这里,显著性分析结果可以将原始图像划分为两个或两个以上显著性效果不同的显著性区域;具体划分区域个数以及区分每个显著性区域的显著性数值大小设定可以根据本领域技术人员的分析进行设定;例如,设定显著性数值排序在前的第一阈值个图像区域作为一种显示效果的显著性区域;设定显著性数值排序在中的第二阈值个图像区域作为第二种显示效果的显著性区域;设定显著性数值排序在后的第三阈值个图像区域作为第三种显示效果的显著性区域。显著性区域的划分还可以设定比例或显著性数值大小进行划分,具体设定可以由本领域技术人员根据图像分析进行设定和调整。通过显著性分析确定人物主体的显著性程度,根据显著性效果的不同划分人物主体和背景两个显著性区域。
本实施例可以将显著性区域划分为主体区域和背景区域,以图像显著性取值(显著性图像的亮度)范围为[0 255]为例,可以假设设定主体区域的显著性取值范围为[64 255],背景区域的显著性取值范围为[0 64].这里主要考虑到显著性取值低时,人眼对该部分图像的感知不明显,因此可以作为划分为背景和主体的显著性界限,取值范围可以根据本领域技术人员的经验值进行调整,划分显著性区域也可以根据对图像处理的精细程度进行调整,一般的,对图像处理质量要求越高,划分的显著性区域越多。
步骤503、分割每个显著性效果不同的显著性区域。
可选的,本实施例分割每个显著性效果不同的显著性区域时,还包括:利用数学形态学提取每个显著性效果不同的显著性区域的轮廓,和/或,填充每个显著性效果不同的显著性区域的区域内部空洞。
步骤504、对每个显著性区域分别采用对应的图像处理方法进行处理。
图6(a)中第一原始图像中人物主体和背景对比度较低,存在人物主体显示效果不明显问题;图6(e)为对第一原始图像进行对比度增强的效果示意图,如图6(e)所示,由于人物主体和背景同时进行了对比度增强处理,并未到达增强人物主体的目的;图6(f)为本发明实施例对第一原始图像进行图像处理的效果示意图,如图6(f)所示,本实施例对人物主体进行对比度增强的图像处理,对背景不做任何处理,这样人物主体的显示效果得到了加强。
图7(a)为第二原始图像的图片内容,如图7(a)所示,第二原始图像中动物主体和背景存在白平衡效果差的问题;图7(b)为对第二原始图像进行全局白平衡处理的效果示意图,如图7(b)所示,由于动物主体和背景同时进行了白平衡处理处理,动物主体的显示效果因为白平衡处理显得失真,图片显示效果变差;图7(c)为本发明实施例对第二原始图像进行显著性分析的结果示意图,如图7(c)所示,通过显著性分析确定动物主体的显著性程度,根据显著性效果的不同划分动物主体和背景两个显著性区域;图7(d)为本发明实施例对第二原始图像进行图像处理的效果示意图,如图7(d)所示,本实施例对背景进行白平衡处理,对动物主体不做任何处理,这样图片的显示效果因为白平衡的局部处理得到了加强。局部白平衡后,花草变绿了,小狗的颜色也变成灰色的了,利用显著性图像进行局部白平衡,可以保持小狗的白平衡不被调整,背景的白平衡效果得到改善。
图8(a)为第三原始图像的图片内容,如图8(a)所示,动物主体和背景同样清晰,导致动物主体显示效果降低;图8(b)为本发明实施例对第三原始图像进行显著性分析的结果示意图,图8(b)所示,通过显著性分析确定动物主体的显著性程度,根据显著性效果的不同划分动物主体和背景两个显著性区域;图8(c)为本发明实施例对第二原始图像进行图像处理的效果示意图,如图8(c)所示,本实施例对背景进行虚化处理,对动物主体不做任何处理,通过处理提高了图片中动物主体的显示效果。
本领域普通技术人员可以理解上述方法中的全部或部分步骤可通过程 序来指令相关硬件(例如处理器)完成,所述程序可以存储于计算机可读存储介质中,如只读存储器、磁盘或光盘等。可选地,上述实施例的全部或部分步骤也可以使用一个或多个集成电路来实现。相应地,上述实施例中的各模块/单元可以采用硬件的形式实现,例如通过集成电路来实现其相应功能,也可以采用软件功能模块的形式实现,例如通过处理器执行存储于存储器中的程序/指令来实现其相应功能。本发明不限制于任何特定形式的硬件和软件的结合。
虽然本发明所揭露的实施方式如上,但所述的内容仅为便于理解本发明而采用的实施方式,并非用以限定本发明。任何本发明所属领域内的技术人员,在不脱离本发明所揭露的精神和范围的前提下,可以在实施的形式及细节上进行任何的修改与变化,但本发明的专利保护范围,仍须以所附的权利要求书所界定的范围为准。
工业实用性
上述技术方案提高了图像主体区域的显示效果,提升了图像的显示质量。

Claims (20)

  1. 一种实现图像处理的装置,包括:分析单元、确定单元及处理单元;其中,
    分析单元,设置为对原始图像进行显著性分析;
    确定单元,设置为根据显著性分析结果,区分所述原始图像为包含至少两个不同显著性效果的显著性区域;
    处理单元,设置为对每个显著性区域分别采用对应的图像处理方法进行处理。
  2. 根据权利要求1所述的装置,该装置还包括:
    分割单元,设置为对每个显著性区域分别采用对应的图像处理方法进行处理之前,分割每个显著性效果不同的显著性区域。
  3. 根据权利要求1所述的装置,其中,所述分析单元是设置为,对原始图像的每个像素点的颜色、亮度及方向与周边像素点的颜色、亮度及方向进行图像对比度对比,获得每个像素点相应的显著性数值,进行显著性分析。
  4. 根据权利要求1、2或3所述的装置,其中,所述分析单元是设置为,
    采用区域对比RC算法对所述原始图像进行图像对比度分析,通过图像对比度分析进行所述原始图像的显著性分析。
  5. 根据权利要求2所述的装置,
    所述分割单元还设置为,分割每个显著性效果不同的显著性区域时,利用数学形态学提取每个显著性效果不同的显著性区域的轮廓、和/或填充每个显著性效果不同的显著性区域的区域内部空洞。
  6. 根据权利要求1所述的装置,其中,确定单元,是设置为通过如下方式实现区分所述原始图像为包含至少两个不同显著性效果的显著性区域:
    根据显著性分析结果中区域显著性数值大小,结合预先设定的区分阈值,将所述原始图像区分为包含至少两个不同显著性效果的显著性区域。
  7. 根据权利要求6所述的装置,其中,
    当将所述原始图像划分为显著性效果不同的主体区域和背景区域时,所 述预先设定的区分阈值包括:主体区域的显著性取值范围为大于64且小于255,或等于255;背景区域的显著性取值范围为大于0且小于64,或等于0。
  8. 根据权利要求1、2或3所述的装置,其中,
    所述图像处理方法包括:调整曝光值、和/或虚化处理、和/或白平衡特效、和/或背景替换,和/或显著性取值调整、和/或色阶调整、和/或亮度调整、色相/饱和度调整、和/或颜色替换、和/或渐变映射、和/或照片滤镜。
  9. 根据权利要求5所述的装置,其中,
    所述分割单元是设置为通过如下方式实现利用数学形态学提取每个所述显著性效果不同的显著性区域的轮廓:
    通过膨胀、腐蚀、开启及闭合运算获得每个所述显著性效果不同的显著性区域的二值图像,通过计算获得的二值图像进行轮廓提取;
    所述分割单元是设置为通过如下方式实现填充每个所述显著性效果不同的显著性区域的区域内部空洞:
    对每个显著性效果不同的显著性区域分别计算相应的二值图像,对每个显著性区域的二值图像的内部进行轮廓提取获得内部轮廓,确定小于预设面积的内部轮廓为内部空洞,对所述内部空洞进行像素填充。
  10. 根据权利要求4所述的装置,其中,
    所述分析单元是设置为通过如下方式实现进行原始图像的显著性分析:
    对所述原始图像按照设定的像素大小进行分割为N个区域,计算区域rk的显著性数值S(rk):
    S(r_k) = \sum_{r_i \neq r_k} \omega(r_i) \, D_r(r_k, r_i)
    其中,ri表示不同于rk的区域,ω(ri)为区域ri的加权值,Dr(rk,ri)为两个区域的颜色距离差值,Dr(rk,ri)计算公式为:
    D_r(r_1, r_2) = \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} f_1(i) \, f_2(j) \, d(c_{1,i}, c_{2,j})
    其中,f1(i)表示区域r1中第i种颜色在该区域所有统计的颜色种类n1中出 现的概率;f2(i)为区域r2中第j种颜色在该区域所有统计的颜色种类n2中出现的概率;d(c1,i,c2,j)为r1中第i种颜色c1,i与r2中第j种颜色c2,j的距离差值;
    对S(rk)加入区域空间距离差值,获得区域显著性数值为:
    S(r_k) = \sum_{r_i \neq r_k} \exp\!\left(-\frac{D_s(r_k, r_i)}{\sigma_s^2}\right) \omega(r_i) \, D_r(r_k, r_i)
    其中,Ds(rk,ri)为两个区域的区域重心的欧式距离,σs为空间距离影响调节因子。
  11. 一种实现图像处理的方法,包括:
    对原始图像进行显著性分析;
    根据显著性分析结果,区分所述原始图像为包含至少两个不同显著性效果的显著性区域;
    对每个显著性区域分别采用对应的图像处理方法进行处理。
  12. 根据权利要求11所述的方法,其中,区分所述原始图像为包含至少两个不同显著性效果的显著性区域包括:
    根据显著性分析结果中区域显著性数值大小,结合预先设定的区分阈值,将所述原始图像区分为包含至少两个不同显著性效果的显著性区域。
  13. 根据权利要求12所述的方法,其中,当将所述原始图像划分为显著性效果不同的主体区域和背景区域时,所述预先设定的区分阈值包括:主体区域的显著性取值范围为大于64且小于255,或等于255;背景区域的显著性取值范围为大于0且小于64,或等于0。
  14. 根据权利要求11或12所述的方法,该方法还包括:
    所述对每个显著性区域分别采用对应的图像处理方法进行处理之前,分割每个所述显著性效果不同的显著性区域。
  15. 根据权利要求11或12所述的方法,其中,所述对原始图像进行显著性分析包括:
    对原始图像的每个像素点的颜色、亮度及方向与周边像素点的颜色、亮度及方向进行图像对比度对比,获得每个像素点相应的显著性数值,进行显 著性分析。
  16. 根据权利要求11、12或15所述的方法,其中,所述对原始图像进行显著性分析包括:
    采用区域对比RC算法对所述原始图像进行图像对比度分析,通过图像对比度分析进行所述原始图像的显著性分析。
  17. The method according to claim 16, wherein performing the saliency analysis of the original image comprises:
    segmenting the original image into N regions according to a set pixel size, and computing the saliency value S(r_k) of a region r_k as:
    S(r_k) = \sum_{r_i \neq r_k} \omega(r_i) \, D_r(r_k, r_i)
    where r_i denotes a region different from r_k, \omega(r_i) is the weighting value of the region r_i, and D_r(r_k, r_i) is the color distance difference between the two regions, computed as:
    D_r(r_1, r_2) = \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} f_1(i) \, f_2(j) \, d(c_{1,i}, c_{2,j})
    where f_1(i) is the probability of the i-th color among all n_1 color types counted in region r_1, f_2(j) is the probability of the j-th color among all n_2 color types counted in region r_2, and d(c_{1,i}, c_{2,j}) is the distance difference between the i-th color c_{1,i} of r_1 and the j-th color c_{2,j} of r_2;
    adding the region spatial distance difference to S(r_k), the region saliency value is obtained as:
    S(r_k) = \sum_{r_i \neq r_k} \exp\left(-\frac{D_s(r_k, r_i)}{\sigma_s^2}\right) \omega(r_i) \, D_r(r_k, r_i)
    where D_s(r_k, r_i) is the Euclidean distance between the centroids of the two regions, and \sigma_s is an adjustment factor for the influence of the spatial distance.
  18. The method according to claim 11, 12, or 13, wherein the image processing method comprises: exposure value adjustment, and/or blurring, and/or white balance effects, and/or background replacement, and/or saliency value adjustment, and/or levels adjustment, and/or brightness adjustment, hue/saturation adjustment, and/or color replacement, and/or gradient mapping, and/or photo filters.
  19. The method according to claim 12, further comprising:
    when segmenting each saliency region having a different saliency effect, using mathematical morphology to extract the contour of each saliency region having a different saliency effect and/or to fill internal holes within each saliency region having a different saliency effect.
  20. The method according to claim 19, wherein using mathematical morphology to extract the contour of each saliency region having a different saliency effect comprises:
    obtaining a binary image of each saliency region having a different saliency effect through dilation, erosion, opening, and closing operations, and performing contour extraction on the obtained binary image;
    and filling internal holes within each saliency region having a different saliency effect comprises:
    computing a corresponding binary image for each saliency region having a different saliency effect, performing contour extraction on the interior of the binary image of each saliency region to obtain internal contours, determining the internal contours smaller than a preset area to be internal holes, and performing pixel filling on the internal holes.
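Purely as an editorial illustration of the region-contrast computation recited in claims 10 and 17 (not part of the claims or the disclosed embodiments), the following Python sketch computes S(r_k) under simplifying assumptions: a precomputed region label map stands in for the segmentation, region mean colors replace the full color-histogram distance D_r, and all function and variable names are hypothetical:

import numpy as np

def region_contrast_saliency(image_lab, labels, sigma_s=0.4):
    """Per-region saliency S(r_k) = sum_i exp(-D_s(r_k, r_i)/sigma_s^2) * w(r_i) * D_r(r_k, r_i)."""
    h, w = labels.shape
    region_ids = np.unique(labels)          # assumes non-negative integer labels
    n = len(region_ids)

    # Per-region statistics: pixel count (the weight w), mean Lab color, normalized centroid.
    ys, xs = np.mgrid[0:h, 0:w]
    weights = np.empty(n)
    colors = np.empty((n, image_lab.shape[2]))
    centers = np.empty((n, 2))
    for idx, rid in enumerate(region_ids):
        mask = labels == rid
        weights[idx] = mask.sum()
        colors[idx] = image_lab[mask].mean(axis=0)
        centers[idx] = (ys[mask].mean() / h, xs[mask].mean() / w)

    saliency = np.zeros(n)
    for k in range(n):
        d_color = np.linalg.norm(colors - colors[k], axis=1)    # D_r, collapsed to mean colors
        d_space = np.linalg.norm(centers - centers[k], axis=1)  # D_s, centroid distance
        contrib = np.exp(-d_space / sigma_s ** 2) * weights * d_color
        contrib[k] = 0.0                                        # exclude r_k itself
        saliency[k] = contrib.sum()

    # Normalize to [0, 1] and broadcast back to a per-pixel saliency map.
    saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-12)
    lookup = np.zeros(int(region_ids.max()) + 1)
    lookup[region_ids] = saliency
    return lookup[labels]

In the embodiments described above, such a per-pixel map would then be thresholded (for example at 64, as in claims 7 and 13) to separate the subject region from the background region.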
PCT/CN2016/105755 2015-12-15 2016-11-14 Method and apparatus for implementing image processing WO2017101626A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510936808.5 2015-12-15
CN201510936808.5A CN105574866A (zh) 2015-12-15 2015-12-15 一种实现图像处理的方法及装置

Publications (1)

Publication Number Publication Date
WO2017101626A1 true WO2017101626A1 (zh) 2017-06-22

Family

ID=55884957

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/105755 WO2017101626A1 (zh) 2015-12-15 2016-11-14 一种实现图像处理的方法及装置

Country Status (2)

Country Link
CN (1) CN105574866A (zh)
WO (1) WO2017101626A1 (zh)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110210277A (zh) * 2018-05-22 2019-09-06 安徽大学 Hole filling algorithm for moving objects
CN111127476A (zh) * 2019-12-06 2020-05-08 Oppo广东移动通信有限公司 Image processing method, apparatus, device, and storage medium
CN111279389A (zh) * 2018-12-28 2020-06-12 深圳市大疆创新科技有限公司 Image processing method and apparatus
CN112419265A (zh) * 2020-11-23 2021-02-26 哈尔滨工程大学 Camouflage evaluation method based on the human visual mechanism
CN115861451A (zh) * 2022-12-27 2023-03-28 东莞市楷德精密机械有限公司 Multifunctional image processing method and system based on machine vision
CN116342629A (zh) * 2023-06-01 2023-06-27 深圳思谋信息科技有限公司 Interactive image segmentation method, apparatus, device, and storage medium
CN116757963A (zh) * 2023-08-14 2023-09-15 荣耀终端有限公司 Image processing method, electronic device, chip system, and readable storage medium

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017224970A (ja) * 2016-06-15 2017-12-21 ソニー株式会社 Image processing apparatus, image processing method, and imaging apparatus
CN106254643B (zh) * 2016-07-29 2020-04-24 瑞安市智造科技有限公司 Mobile terminal and picture processing method
CN106780513B (zh) * 2016-12-14 2019-08-30 北京小米移动软件有限公司 Method and apparatus for picture saliency detection
CN107147823A (zh) * 2017-05-31 2017-09-08 广东欧珀移动通信有限公司 Exposure method and apparatus, computer-readable storage medium, and mobile terminal
CN107197146B (zh) * 2017-05-31 2020-06-30 Oppo广东移动通信有限公司 Image processing method and apparatus, mobile terminal, and computer-readable storage medium
EP3611914A4 (en) * 2017-06-09 2020-03-25 Huawei Technologies Co., Ltd. METHOD AND APPARATUS FOR PHOTOGRAPHY OF IMAGES
CN107277354B (zh) * 2017-07-03 2020-04-28 瑞安市智造科技有限公司 Bokeh photographing method, bokeh photographing terminal, and computer-readable storage medium
CN107392972B (zh) * 2017-08-21 2018-11-30 维沃移动通信有限公司 Image background blurring method, mobile terminal, and computer-readable storage medium
CN108024057B (zh) * 2017-11-30 2020-01-10 Oppo广东移动通信有限公司 Background blurring processing method, apparatus, and device
CN108376404A (zh) * 2018-02-11 2018-08-07 广东欧珀移动通信有限公司 Image processing method and apparatus, electronic device, and storage medium
CN109907461A (zh) * 2018-05-31 2019-06-21 周超强 Child-proof smart hair dryer
CN109344724B (zh) * 2018-09-05 2020-09-25 深圳伯奇科技有限公司 Automatic background replacement method, system, and server for ID photos
CN109325507B (zh) * 2018-10-11 2020-10-16 湖北工业大学 Image classification method and system combining superpixel saliency features and HOG features
CN109827652A (zh) * 2018-11-26 2019-05-31 河海大学常州校区 Method and system for recognizing optical fiber sensing vibration signals
CN109993816B (zh) * 2019-03-21 2023-08-04 广东智媒云图科技股份有限公司 Collaborative painting method, apparatus, terminal device, and computer-readable storage medium
CN109978881B (zh) * 2019-04-09 2021-11-26 苏州浪潮智能科技有限公司 Method and apparatus for image saliency processing
CN110602384B (zh) * 2019-08-27 2022-03-29 维沃移动通信有限公司 Exposure control method and electronic device
CN113505799B (zh) * 2021-06-30 2022-12-23 深圳市慧鲤科技有限公司 Saliency detection method, and training method and apparatus for its model, device, and medium
CN115460389B (zh) * 2022-09-20 2023-05-26 北京拙河科技有限公司 Method and apparatus for preferred selection of image white balance regions

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509308A (zh) * 2011-08-18 2012-06-20 上海交通大学 Motion segmentation method based on spatio-temporal saliency detection of mixed dynamic textures
CN104240244A (zh) * 2014-09-10 2014-12-24 上海交通大学 Salient object detection method based on propagation patterns and manifold ranking
CN104408708A (zh) * 2014-10-29 2015-03-11 兰州理工大学 Image salient object detection method based on global and local low rank
US20150084978A1 (en) * 2011-04-08 2015-03-26 Anders Ballestad Local definition of global image transformations
CN105023264A (zh) * 2014-04-25 2015-11-04 南京理工大学 Infrared image salient feature detection method combining objectness and backgroundness
CN105574886A (zh) * 2016-01-28 2016-05-11 多拉维(深圳)技术有限公司 High-precision calibration method for handheld multi-camera rigs

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103514582A (zh) * 2012-06-27 2014-01-15 郑州大学 Image deblurring method based on visual saliency
CN103473739B (zh) * 2013-08-15 2016-06-22 华中科技大学 Method and system for accurate white blood cell image segmentation based on support vector machines
US9704059B2 (en) * 2014-02-12 2017-07-11 International Business Machines Corporation Anomaly detection in medical imagery
CN103914834B (zh) * 2014-03-17 2016-12-07 上海交通大学 Salient object detection method based on foreground and background priors
CN104809729B (zh) * 2015-04-29 2018-08-28 山东大学 Robust automatic segmentation method for image saliency regions
CN104766287A (zh) * 2015-05-08 2015-07-08 哈尔滨工业大学 Blind restoration method for blurred images based on saliency detection

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150084978A1 (en) * 2011-04-08 2015-03-26 Anders Ballestad Local definition of global image transformations
CN102509308A (zh) * 2011-08-18 2012-06-20 上海交通大学 Motion segmentation method based on spatio-temporal saliency detection of mixed dynamic textures
CN105023264A (zh) * 2014-04-25 2015-11-04 南京理工大学 Infrared image salient feature detection method combining objectness and backgroundness
CN104240244A (zh) * 2014-09-10 2014-12-24 上海交通大学 Salient object detection method based on propagation patterns and manifold ranking
CN104408708A (zh) * 2014-10-29 2015-03-11 兰州理工大学 Image salient object detection method based on global and local low rank
CN105574886A (zh) * 2016-01-28 2016-05-11 多拉维(深圳)技术有限公司 High-precision calibration method for handheld multi-camera rigs

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110210277A (zh) * 2018-05-22 2019-09-06 安徽大学 Hole filling algorithm for moving objects
CN110210277B (zh) * 2018-05-22 2022-12-09 安徽大学 Hole filling algorithm for moving objects
CN111279389A (zh) * 2018-12-28 2020-06-12 深圳市大疆创新科技有限公司 Image processing method and apparatus
CN111127476A (zh) * 2019-12-06 2020-05-08 Oppo广东移动通信有限公司 Image processing method, apparatus, device, and storage medium
CN111127476B (zh) * 2019-12-06 2024-01-26 Oppo广东移动通信有限公司 Image processing method, apparatus, device, and storage medium
CN112419265A (zh) * 2020-11-23 2021-02-26 哈尔滨工程大学 Camouflage evaluation method based on the human visual mechanism
CN112419265B (zh) * 2020-11-23 2023-08-01 哈尔滨工程大学 Camouflage evaluation method based on the human visual mechanism
CN115861451A (zh) * 2022-12-27 2023-03-28 东莞市楷德精密机械有限公司 Multifunctional image processing method and system based on machine vision
CN116342629A (zh) * 2023-06-01 2023-06-27 深圳思谋信息科技有限公司 Interactive image segmentation method, apparatus, device, and storage medium
CN116757963A (zh) * 2023-08-14 2023-09-15 荣耀终端有限公司 Image processing method, electronic device, chip system, and readable storage medium
CN116757963B (zh) * 2023-08-14 2023-11-07 荣耀终端有限公司 Image processing method, electronic device, chip system, and readable storage medium

Also Published As

Publication number Publication date
CN105574866A (zh) 2016-05-11

Similar Documents

Publication Publication Date Title
WO2017101626A1 (zh) Method and apparatus for implementing image processing
WO2021017811A1 (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
US11250571B2 (en) Robust use of semantic segmentation in shallow depth of field rendering
WO2017107700A1 (zh) Method and terminal for implementing image registration
WO2021022983A1 (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
US10410327B2 (en) Shallow depth of field rendering
US9491366B2 (en) Electronic device and image composition method thereof
US20160301868A1 (en) Automated generation of panning shots
WO2020038087A1 (zh) Shooting control method and apparatus in super night scene mode, and electronic device
WO2019015477A1 (zh) Image correction method, computer-readable storage medium, and computer device
US11538175B2 (en) Method and apparatus for detecting subject, electronic device, and computer readable storage medium
KR20200023651A (ko) Preview photo blurring method and apparatus, and storage medium
KR20230084486A (ko) Segmentation for image effects
WO2021180131A1 (zh) Image processing method and electronic device
WO2019223068A1 (zh) Iris image local enhancement method, apparatus, device, and storage medium
US11836903B2 (en) Subject recognition method, electronic device, and computer readable storage medium
US8995784B2 (en) Structure descriptors for image processing
RU2320011C1 (ru) Method for automatic correction of the red-eye effect
CN110442313B (zh) Display attribute adjustment method and related device
CN110288560A (zh) Image blur detection method and apparatus
AU2018271418B2 (en) Creating selective virtual long-exposure images
CN114926351A (zh) Image processing method, electronic device, and computer storage medium
CN108259767B (zh) Image processing method and apparatus, storage medium, and electronic device
US20230033956A1 (en) Estimating depth based on iris size
WO2023151210A1 (zh) Image processing method, electronic device, and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16874683

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16874683

Country of ref document: EP

Kind code of ref document: A1