CN105574866A - Image processing method and apparatus - Google Patents


Info

Publication number
CN105574866A
Authority
CN
China
Prior art keywords
image, region, significance, analysis, saliency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510936808.5A
Other languages
Chinese (zh)
Inventor
戴向东 (Dai Xiangdong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201510936808.5A priority Critical patent/CN105574866A/en
Publication of CN105574866A publication Critical patent/CN105574866A/en
Priority to PCT/CN2016/105755 priority patent/WO2017101626A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image processing method and apparatus. The image processing method comprises the steps of: performing saliency analysis on an original image; dividing the original image, according to the saliency analysis result, into saliency regions exhibiting at least two different saliency effects; and processing each saliency region with a corresponding image processing method. Because the saliency regions are distinguished and determined through saliency analysis, and each saliency region is processed by its own corresponding image processing method, the display effect of the image subject region is improved and the display quality of the image is improved.

Description

Method and device for realizing image processing
Technical Field
The present invention relates to image processing technologies, and in particular, to a method and an apparatus for implementing image processing.
Background
Image saliency is an important visual feature of an image that represents how much attention human eyes pay to each part of the image. For a given image, the user is interested in only part of its regions; that part of interest represents the user's query intention, while the other regions are irrelevant to it. Fig. 1(a) is a photographed original image; as shown in Fig. 1(a), the subject region of the image stands out within the visual range. Fig. 1(b) is the saliency image of the captured image; as shown in Fig. 1(b), the brighter a pixel in the saliency image, the higher its saliency and the more the corresponding region of the original image arouses the user's visual interest; the high-saliency regions are the regions the user is interested in.
When shooting an image, the user generally focuses on a subject region of interest, which then becomes the salient region of the captured image, and the evaluation of photo quality is weighted mainly toward that subject region. When the captured image as a whole suffers from problems such as focus blur, incorrect exposure, light occlusion, poor saturation, or weak contrast, existing image processing algorithms adjust and process the image globally, that is, they apply the same processing to the subject region and the background region simultaneously, which weakens the saliency of the subject region. The display effect of the subject region therefore cannot be improved.
At present, after a subject region and a background region are segmented according to differences in color and brightness, the two segmented regions are processed separately. However, each region obtained by such segmentation can only be processed with a single, uniform image processing method. For example, when the captured image is a portrait with a background and is segmented into a person region and a background region according to color and brightness, the person is processed as the subject region by one uniform image processing method and the background is processed as the background region by another uniform method. Local features within the portrait (for example, the glasses or cheeks on which the shot was focused) are not distinguished, so this uniform processing does not optimize the saliency of local features and the display effect is not improved.
Disclosure of Invention
In order to solve the above technical problem, the present invention provides a method and an apparatus for implementing image processing, which can improve the display effect of a main body region.
In order to achieve the object of the present invention, the present invention provides an apparatus for implementing image processing, comprising: the device comprises an analysis unit, a determination unit and a processing unit; wherein,
the analysis unit is used for carrying out significance analysis on the original image;
the determining unit is used for distinguishing the original image into a significant area containing at least two different significant effects according to a significant analysis result;
and the processing unit is used for processing each salient region by adopting a corresponding image processing method.
Furthermore, the device further comprises a segmentation unit, which is used for segmenting the salient regions with different significance effects before the salient regions are respectively processed by adopting the corresponding image processing methods.
Further, the analysis unit is specifically configured to perform image contrast comparison on each pixel point of the original image and the color, brightness, and direction of the peripheral pixel points, to obtain a corresponding saliency value of each pixel point, and perform saliency analysis.
Further, the analysis unit is specifically adapted to,
and performing image contrast analysis on the original image by adopting a regional contrast RC algorithm, and performing significance analysis on the original image through the image contrast analysis.
Further, when segmenting the saliency regions with different saliency effects, the segmentation unit uses mathematical morphology to extract the contours of the saliency regions and/or to fill the internal cavities of the saliency regions.
In another aspect, the present application further provides a method for implementing image processing, including:
carrying out significance analysis on the original image;
distinguishing the original image into a significant area containing at least two different significant effects according to a significant analysis result;
and processing each salient region by adopting a corresponding image processing method.
Further, distinguishing the original image as a salient region containing at least two different salient effects specifically includes:
and according to the area significance numerical value in the significance analysis result, combining a preset distinguishing threshold value, and distinguishing the original image into significance areas containing at least two different significance effects.
Further, when the original image is divided into a main area and a background area with different significance effects, the preset distinguishing threshold includes: the significance value range of the main area is greater than 64 and less than or equal to 255, and the significance value range of the background area is greater than or equal to 0 and less than 64.
Further, before each salient region is processed by the corresponding image processing method, the method further includes:
and dividing the significant regions with different significant effects.
Further, the performing of the saliency analysis on the original image specifically includes: and comparing the contrast of the image of each pixel point of the original image with the color, brightness and direction of the surrounding pixel points to obtain a corresponding significance value of each pixel point, and performing significance analysis.
Further, the performing of the saliency analysis of the original image specifically includes:
dividing the original image into N regions according to a set pixel size, and calculating the saliency value S(r_k) of region r_k by formula (1):

S(r_k) = \sum_{i=1, i \neq k}^{n} \omega(r_i) \, D_r(r_k, r_i)    (1)

wherein r_i represents a region different from r_k, \omega(r_i) is the weight of region r_i, and D_r(r_k, r_i) is the color distance difference between the two regions, calculated as:

D_r(r_1, r_2) = \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} f_1(i) \, f_2(j) \, d(c_{1,i}, c_{2,j})    (2)

wherein f_1(i) is the probability of occurrence of the i-th color among all n_1 statistical color types in region r_1; f_2(j) is the probability of occurrence of the j-th color among all n_2 statistical color types in region r_2; and d(c_{1,i}, c_{2,j}) is the distance difference between the i-th color c_{1,i} of r_1 and the j-th color c_{2,j} of r_2;

adding a region spatial distance difference term to S(r_k) yields the region saliency value:

S'(r_k) = \sum_{i=1, i \neq k}^{n} \exp\!\left(-D_s(r_k, r_i)/\sigma_s^2\right) \omega(r_i) \, D_r(r_k, r_i)    (3)

wherein D_s(r_k, r_i) is the Euclidean distance between the centroids of the two regions, and \sigma_s is the spatial distance adjustment factor.
Further, the performing of the saliency analysis on the original image specifically includes:
and performing image contrast analysis on the original image by adopting a regional contrast RC algorithm, and performing significance analysis on the original image through the image contrast analysis.
Further, the image processing method includes: adjusting exposure value, blurring processing, white balance special effect, background replacement, saliency value adjustment, color gradation adjustment, brightness adjustment, hue/saturation adjustment, color replacement, gradient mapping, and/or photo filter.
Further, when the salient regions with different significance effects are divided, the method further comprises the following steps:
the outline of the significant region with different significant effects and/or the internal cavity of the region filling the significant region with different significant effects are extracted by mathematical morphology.
Further, the extracting, by using mathematical morphology, the contour of each significant region having a different significant effect specifically includes:
calculating a binary image for each saliency region with a different saliency effect, and performing contour extraction on the calculated binary images through dilation, erosion, opening, and closing operations;
the filling of the internal cavity of the region of the significant region with different significant effects specifically includes:
respectively calculating corresponding binary images for the salient regions with different salient effects, extracting the outline of the interior of the binary image of each salient region to obtain an internal outline, determining the internal outline smaller than a preset area as an internal cavity, and filling pixels in the internal cavity.
Compared with the prior art, the technical scheme of the application comprises the following steps: carrying out significance analysis on the original image; distinguishing the original image into a significant area containing at least two different significant effects according to a significant analysis result; and processing each salient region by adopting a corresponding image processing method. According to the method, each saliency area is distinguished and determined through saliency analysis, and each saliency area is processed by adopting a corresponding image processing method, so that the display effect of the image main body area is improved, and the display quality of the image is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1(a) is a photographed original image;
fig. 1(b) is a saliency image of a captured image;
FIG. 2 is a schematic hardware configuration of a mobile terminal implementing various embodiments of the present invention;
FIG. 3 is a flow chart of a method of implementing image processing according to the present invention;
FIG. 4 is a block diagram of an apparatus for implementing image processing according to the present invention;
FIG. 5 is a flow chart of a method of an embodiment of the present invention;
FIG. 6(a) is a picture content for a first original image;
FIG. 6(b) is a diagram illustrating a first original image being segmented according to an embodiment of the present invention;
FIG. 6(c) is a schematic diagram of a saliency analysis of a first original image according to an embodiment of the present invention;
fig. 6(d) is a schematic diagram of a saliency analysis result of the first original image in this embodiment;
fig. 6(e) is a schematic diagram illustrating the effect of contrast enhancement on the first original image;
FIG. 6(f) is a schematic diagram illustrating an effect of image processing on a first original image according to an embodiment of the present invention;
FIG. 7(a) is the picture content of the second original image;
fig. 7(b) is a schematic diagram illustrating the effect of performing global white balance processing on the second original image;
FIG. 7(c) is a diagram illustrating the result of performing a saliency analysis on a second original image according to an embodiment of the present invention;
FIG. 7(d) is a schematic diagram illustrating the effect of image processing on the second original image according to the embodiment of the present invention;
fig. 8(a) is the picture content of the third original image;
fig. 8(b) is a diagram illustrating the result of performing a saliency analysis on a third original image according to an embodiment of the present invention;
fig. 8(c) is a schematic diagram illustrating an effect of performing image processing on the third original image according to the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It should be noted that the embodiments and features of the embodiments in the present application may be arbitrarily combined with each other without conflict.
Fig. 2 is a schematic hardware configuration of a mobile terminal implementing various embodiments of the present invention, as shown in fig. 2,
the mobile terminal 100 may include a user input unit 130, an output unit 150, a memory 160, a controller 180, a power supply unit 190, and the like. Fig. 2 illustrates a mobile terminal having various components, but it is to be understood that not all illustrated components are required to be implemented. More or fewer components may alternatively be implemented. Elements of the mobile terminal will be described in detail below.
The A/V input unit 120 is for receiving a video signal. The A/V input unit 120 may include a camera 121, which processes image data of still pictures or video obtained by an image capturing apparatus in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 151. The image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium), and two or more cameras 121 may be provided according to the construction of the mobile terminal.
The user input unit 130 may generate key input data according to a command input by a user to control various operations of the mobile terminal. The user input unit 130 allows a user to input various types of information, and may include a keyboard, dome switch, touch pad (e.g., a touch-sensitive member that detects changes in resistance, pressure, capacitance, and the like due to being touched), scroll wheel, joystick, and the like. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen may be formed.
The output unit 150 may include a display unit 151. The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 may display a User Interface (UI) or a Graphical User Interface (GUI) related to a call or other communication (e.g., text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or an image and related functions, and the like. Meanwhile, when the display unit 151 and the touch pad are overlapped with each other in the form of a layer to form a touch screen, the display unit 151 may serve as an input device and an output device. The display unit 151 may include at least one of a Liquid Crystal Display (LCD), a thin film transistor LCD (TFT-LCD), an Organic Light Emitting Diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be configured to be transparent to allow a user to view from the outside, which may be referred to as transparent displays, and a typical transparent display may be, for example, a TOLED (transparent organic light emitting diode) display or the like. Depending on the particular desired implementation, the mobile terminal 100 may include two or more display units (or other display devices), for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen may be used to detect a touch input pressure as well as a touch input position and a touch input area.
The memory 160 may store software programs and the like for processing and controlling operations performed by the controller 180, or may temporarily store data (e.g., a phonebook, messages, still images, videos, and the like) that has been or will be output. Also, the memory 160 may store data regarding various ways of vibration and audio signals output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. Also, the mobile terminal 100 may cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
The controller 180 generally controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communications, video calls, and the like. The controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
The power supply unit 190 receives external power or internal power and provides appropriate power required to operate various elements and components under the control of the controller 180.
The various embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, an electronic unit designed to perform the functions described herein, and in some cases, such embodiments may be implemented in the controller 180. For a software implementation, the implementation such as a process or a function may be implemented with a separate software module that allows performing at least one function or operation. The software codes may be implemented by software applications (or programs) written in any suitable programming language, which may be stored in the memory 160 and executed by the controller 180.
Up to this point, mobile terminals have been described in terms of their functionality. Hereinafter, for the sake of brevity, a slide-type mobile terminal will be described as an example among the various types of mobile terminals, such as folder-type, bar-type, swing-type, and slide-type mobile terminals. However, the present invention can be applied to any type of mobile terminal and is not limited to the slide-type mobile terminal.
Based on the above mobile terminal hardware structure and communication system, the present invention provides various embodiments of the method.
Fig. 3 is a flowchart of a method for implementing image processing according to the present invention, as shown in fig. 3, including:
step 300, carrying out significance analysis on the original image;
the saliency analysis of the original image specifically includes: and comparing the contrast of the image of each pixel point of the original image with the color, brightness and direction of the surrounding pixel points to obtain a corresponding significance value of each pixel point, and performing significance analysis.
The saliency analysis of the original image specifically includes:
and performing image contrast analysis on the original image by adopting a regional contrast RC algorithm, and performing significance analysis on the original image through the image contrast analysis.
It should be noted that image saliency is calculated based on the human attention mechanism, so the image content that the photographer deliberately focused on out of interest can be recovered; because the user frames the content of interest using shooting skill, the saliency effect of that content is better. The method performs saliency analysis using a Region Contrast (RC) algorithm: following the idea of the RC algorithm, the saliency of the image is obtained through region contrast calculations, and a global saliency image is generated from the global contrast of the image and its correlation with spatial position; that is, the saliency image has the same scale as the original image. Because the global region contrast method fully considers the contrast difference between a single region and the whole image, it can effectively highlight the saliency of an entire region, not only the edges of the region (as methods that consider only local contrast do). In addition, the RC algorithm considers the spatial relationship of regions: through weighting parameters, the larger the distance between two regions, the smaller the corresponding weight, and the closer the regions, the larger the weight, achieving reasonable treatment of region space. The main content of the RC algorithm is as follows:
when the RC algorithm is used to perform significance analysis, the original image needs to be divided into super pixels, assuming that the original image is divided into N regions, the number N of image blocks can be determined according to the resolution of the image, the size of each block is set to the width P of a pixel block, if P is 20, 20 is 20 or 400 pixels, and the range of P is usually [2040], and the number N of image blocks is set to M and N, the larger M is, the larger N is, i.e., the larger the resolution of the image is, the larger the unit area of the image block is, and the greater the number N of image blocks is M/(P).
For a region r_k, the RC algorithm defines the saliency value S(r_k) of region r_k as:

S(r_k) = \sum_{i=1, i \neq k}^{n} \omega(r_i) \, D_r(r_k, r_i)    (1)

In formula (1), r_i represents a region different from r_k; \omega(r_i) is the weight of region r_i, defined so that the larger the pixel area (number of pixels) of the region, the larger \omega(r_i); the specific weighting rule can be set according to the empirical values of those skilled in the art. D_r(r_k, r_i) is the color distance difference between the two regions, specifically defined as:

D_r(r_1, r_2) = \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} f_1(i) \, f_2(j) \, d(c_{1,i}, c_{2,j})    (2)

In formula (2), f_1(i) represents the probability of occurrence of the i-th color among all n_1 statistical color types in region r_1; f_2(j) is the probability of occurrence of the j-th color among all n_2 statistical color types in region r_2. d(c_{1,i}, c_{2,j}) is the distance difference between the i-th color c_{1,i} of r_1 and the j-th color c_{2,j} of r_2; d(c_{1,i}, c_{2,j}) is mainly a distance measure in CIELAB space (CIELAB is a color system of the CIE; a distance measure in CIELAB space is numerical information that quantifies the distance between colors based on the CIELAB color system). In order to make full use of the spatial relationship of regions, the RC algorithm adds a region spatial distance difference to formula (1), obtaining the region saliency value:

S'(r_k) = \sum_{i=1, i \neq k}^{n} \exp\!\left(-D_s(r_k, r_i)/\sigma_s^2\right) \omega(r_i) \, D_r(r_k, r_i)    (3)

wherein D_s(r_k, r_i) is the Euclidean distance between the centroids of the two regions, and \sigma_s is the spatial distance adjustment factor: the larger \sigma_s is, the smaller the influence of spatial distance on the saliency calculation. The value of the adjustment factor can be set according to the experience of those skilled in the art.
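A minimal numerical sketch of formula (3) follows, under simplifying assumptions not in the patent: each region is summarized by a single mean color rather than a full color histogram, so D_r reduces to the distance between mean colors, and the region weight ω(r_i) is taken to be the pixel area:

```python
import numpy as np

def region_saliency(colors, centroids, areas, sigma_s=0.4):
    """Simplified sketch of RC region saliency, formula (3).

    colors:    (n, 3) mean color per region (stand-in for the histogram)
    centroids: (n, 2) normalized region centroids
    areas:     (n,)   pixel counts, used as the weights w(r_i)
    sigma_s:   spatial distance adjustment factor (illustrative value)
    """
    n = len(colors)
    s = np.zeros(n)
    for k in range(n):
        for i in range(n):
            if i == k:
                continue
            d_r = np.linalg.norm(colors[k] - colors[i])        # color distance D_r
            d_s = np.linalg.norm(centroids[k] - centroids[i])  # spatial distance D_s
            s[k] += np.exp(-d_s / sigma_s**2) * areas[i] * d_r
    return s
```

With three regions of which one has a clearly distinct color, that region receives the highest saliency value, matching the behavior described in the text.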
Step 301, according to a saliency analysis result, distinguishing the original image into a saliency region containing at least two different saliency effects;
in this step, the distinguishing the original image as a salient region including at least two different salient effects specifically includes:
and according to the area significance numerical value in the significance analysis result, combining a preset distinguishing threshold value, and distinguishing the original image into significance areas containing at least two different significance effects.
Preferably, when the original image is divided into a main area and a background area with different significance effects, the preset distinguishing threshold includes: the significance value range of the main area is greater than 64 and less than or equal to 255, and the significance value range of the background area is greater than or equal to 0 and less than 64.
It should be noted that the size of the significance value range may be adjusted according to an empirical value of a person skilled in the art, and when the original image needs to be divided into more significance regions, the value range corresponding to each significance region may be determined based on the empirical value of the person skilled in the art.
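The subject/background threshold split above can be sketched as follows; assigning the boundary value 64 itself to the background is an assumption, since the ranges in the text leave 64 unassigned:

```python
import numpy as np

def split_by_saliency(saliency, threshold=64):
    """Split an 8-bit saliency map (values 0..255) into masks.

    Subject region:    saliency value > threshold (up to 255).
    Background region: saliency value <= threshold (the value 64 itself
    is grouped with the background here; this choice is an assumption).
    """
    subject = saliency > threshold
    return subject, ~subject
```

As the text notes, the threshold can be adjusted empirically, and more than two ranges can be used when more saliency regions are needed.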
Before each salient region is processed by adopting the corresponding image processing method, the method further comprises the following steps:
and dividing the significant regions with different significant effects.
It should be noted that, the method of the present invention can be implemented by using the existing image segmentation algorithm to segment the salient regions with different salient effects.
And 302, processing each salient region by adopting a corresponding image processing method.
In this step, the image processing method includes: adjusting exposure value, blurring processing, white balance special effect, background replacement, saliency value adjustment, color gradation adjustment, brightness adjustment, hue/saturation adjustment, color replacement, gradient mapping, and/or photo filter.
It should be noted that processing saliency regions with different saliency effects by different image processing methods means that a corresponding image processing method is selected separately for each saliency region; there is no correlation between the image processing methods applied to the different regions. Processing the saliency regions with different image processing methods in the present invention may include the following application scenarios:
and after the main body area is segmented according to the significance analysis result, the exposure value of the main body area is up-regulated, and the exposure value of the background area is unchanged. The exposure value up-regulation can be gradually regulated according to a preset unit, and can also be directly regulated by adopting a parameter input mode.
Application scenario two, a portrait scene: the contrast of the subject region and the background region is kept unchanged, and the background region is blurred. The blurring can be adjusted step by step in preset units, or the blurring parameter can be input directly.
Application scenario three: when the white balance effect of the image is poor, white balance processing is performed on the subject region and the background region with different parameters.
Application scenario four, a cluttered image background: the subject region is kept unchanged and the background region is replaced. The replacement background can be a standby image shot separately by the user, or suitable background material selected from an image library.
And fifthly, the saliency effect is poor in the application scene, image fusion processing is carried out, the saliency weight is increased for the main body region, and the display effect of the main body region is improved.
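As a minimal sketch of application scenario 1, the saliency map can serve as a mask so that only the main region is processed (numpy-only sketch; the threshold 64 and gain 1.5 are illustrative assumptions, not values fixed by the text):

```python
import numpy as np

def process_by_region(image, saliency, threshold=64, exposure_gain=1.5):
    """Raise the exposure of the main (high-saliency) region only, leaving
    the background untouched, as in application scenario 1.  `image` is an
    H x W x 3 uint8 array and `saliency` an H x W map in [0, 255]."""
    main_mask = saliency >= threshold            # main region per the threshold
    out = image.astype(np.float64)
    out[main_mask] *= exposure_gain              # boost the main region only
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.full((4, 4, 3), 100, dtype=np.uint8)
sal = np.zeros((4, 4), dtype=np.uint8)
sal[:2] = 200                                    # top half is the main region
out = process_by_region(img, sal)
print(int(out[0, 0, 0]), int(out[3, 3, 0]))      # 150 100
```

The other scenarios follow the same pattern, substituting blurring, white balance, or replacement for the exposure gain.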
The specific application scenarios and implementations above belong to the common general knowledge of those skilled in the art and are not described here in detail; likewise, superimposing two or more of the image processing methods is a conventional technique for those skilled in the art and is not described again.
When segmenting the salient regions with different saliency effects, the method further comprises:
extracting the contours of the salient regions with different saliency effects by mathematical morphology, and/or filling the internal cavities of the salient regions with different saliency effects.
Extracting the contour of each salient region with a different saliency effect by mathematical morphology specifically comprises:
computing a binary image for each salient region with a different saliency effect, and performing contour extraction on the computed binary images through dilation, erosion, opening, and closing operations;
filling the internal cavities of the salient regions with different saliency effects specifically comprises:
computing a corresponding binary image for each salient region, extracting the contours inside each region's binary image to obtain internal contours, determining internal contours smaller than a preset area to be internal cavities, and filling the pixels inside those cavities.
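The cavity-filling step can be sketched with a flood fill from the image border: background pixels not reachable from the border are internal cavities (numpy-only sketch; the per-contour area check is simplified to a single total-area check, and `max_hole_area` is an assumed parameter):

```python
import numpy as np
from collections import deque

def fill_holes(binary, max_hole_area=16):
    """Fill internal cavities of a binary region mask (255 = region, 0 =
    background).  A cavity is background not connected to the image border;
    only cavities smaller than a preset area are filled."""
    h, w = binary.shape
    reachable = np.zeros((h, w), dtype=bool)
    q = deque()
    for y in range(h):
        for x in range(w):
            if binary[y, x] == 0 and (y in (0, h - 1) or x in (0, w - 1)):
                reachable[y, x] = True
                q.append((y, x))
    while q:                                   # flood fill from the border
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and binary[ny, nx] == 0
                    and not reachable[ny, nx]):
                reachable[ny, nx] = True
                q.append((ny, nx))
    holes = (binary == 0) & ~reachable         # enclosed background pixels
    out = binary.copy()
    if holes.sum() < max_hole_area:            # preset-area check (simplified)
        out[holes] = 255
    return out

mask = np.full((5, 5), 255, dtype=np.uint8)
mask[0, :] = 0                                 # background touching the border
mask[2, 2] = 0                                 # one-pixel cavity inside the region
filled = fill_holes(mask)
print(int(filled[2, 2]), int(filled[0, 0]))    # 255 0
```

A production implementation would label each enclosed component separately and apply the area threshold per cavity.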
It should be noted that dilation, erosion, opening, and closing are basic operations of image processing and are used here with their standard meanings. Contour extraction specifically comprises: obtaining a binary image of the original image through dilation, erosion, opening, and closing operations; segmenting the binary image into a main region and a background region (typically the main-region pixels are set to 255, displayed as white, and the background pixels to 0, displayed as black); traversing the binary image and extracting the pixel transition points, i.e. where the value jumps from 255 to 0 or from 0 to 255; taking these transition points as boundary points of the image; and connecting the boundary points to form the contour of the main region.
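The transition-point idea can be sketched as follows: a pixel is a boundary point when it is valued 255 and has at least one 4-neighbour valued 0 (treating the image edge as background); connecting such points yields the main-region contour (the 4-neighbourhood is an illustrative choice):

```python
import numpy as np

def boundary_points(binary):
    """Collect the boundary points of the main region: 255-valued pixels with
    at least one 4-neighbour valued 0, i.e. the pixels where the 255 -> 0
    transition described above occurs."""
    h, w = binary.shape
    points = []
    for y in range(h):
        for x in range(w):
            if binary[y, x] != 255:
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or binary[ny, nx] == 0:
                    points.append((y, x))      # transition found: boundary point
                    break
    return points

mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:4, 1:4] = 255                           # a 3 x 3 main region
print(len(boundary_points(mask)))              # 8: all but the centre pixel
```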
In this method, contour extraction ensures that each segmented salient region has a smooth transition and a clean edge, and filling the cavities inside the regions ensures the integrity of each segmented salient region.
By distinguishing and determining the salient regions through saliency analysis and processing each of them with a corresponding image processing method, the method enhances the display effect of the main region of the image and improves its display quality.
Fig. 4 is a block diagram of an apparatus for implementing image processing according to the present invention. As shown in Fig. 4, the apparatus comprises an analysis unit, a determination unit, and a processing unit, wherein
the analysis unit is used for carrying out significance analysis on the original image;
The analysis unit is specifically configured to compare each pixel of the original image with the color, brightness, and direction of the surrounding pixels to obtain a saliency value for each pixel, thereby performing the saliency analysis.
The analysis unit is specifically configured to perform image contrast analysis on the original image with the region contrast (RC) algorithm, and to perform the saliency analysis of the original image through that contrast analysis.
The determination unit is configured to divide the original image, according to the saliency analysis result, into at least two salient regions with different saliency effects;
and the processing unit is used for processing each salient region by adopting a corresponding image processing method.
The apparatus further comprises a segmentation unit configured to segment the salient regions with different saliency effects before each region is processed with its corresponding image processing method.
The segmentation unit is further configured, when segmenting the salient regions with different saliency effects, to extract the contour of each region and/or fill the internal cavities of each region using mathematical morphology.
The process of the present invention is illustrated in detail below through specific examples, which are provided only to illustrate the invention and are not intended to limit its scope.
Examples
Fig. 5 is a flowchart of a method according to an embodiment of the present invention, as shown in fig. 5, including:
Step 500: segmenting an original image according to a set pixel size;
fig. 6(a) shows the picture content of the first original image, as shown in fig. 6(a), the picture includes two main portions, namely, a human body and a background; fig. 6(b) is a schematic diagram of segmenting the first original image according to the embodiment of the present invention, and as shown in fig. 6(b), the original image is segmented into n regions.
Step 501: performing saliency analysis on the original image segmented according to the set pixel size;
Fig. 6(c) is a schematic diagram of the saliency analysis of the first original image according to an embodiment of the present invention. As shown in Fig. 6(c), for the divided image regions 1 and 2: when image region 1 is the analysis target, all other numbered regions (every region except region 1) are treated as regions different from region 1, and its saliency value is calculated by formula (3) of the RC algorithm; when image region 2 is the analysis target, the other numbered regions are likewise treated as regions different from region 2 and its saliency value is calculated. The saliency result of image region 1 is larger, and that of image region 2 is smaller. Fig. 6(d) is a schematic diagram of the saliency analysis result of the first original image in this embodiment: the larger the saliency value of image region 1, the brighter its pixels in the saliency image and the higher its degree of saliency; regions of high saliency are the regions the user is interested in, while the saliency value of image region 2 is smaller.
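The spatially weighted region-contrast computation used above (formula (3) of the RC algorithm) can be sketched in pure Python; the region representation, the value of σ_s, and the toy two-region data below are illustrative assumptions:

```python
import math

def region_saliency(regions, sigma_s=0.4):
    """Region-contrast saliency S'(r_k): for each region, sum the colour
    distances to all other regions, weighted by each other region's weight
    and an exponential falloff in centroid distance.  Each region is a
    (weight, centroid, colour_histogram) triple, where the histogram maps a
    colour tuple to its probability within the region."""
    def colour_distance(ra, rb):               # expected pairwise colour distance
        return sum(fa * fb * math.dist(ca, cb)
                   for ca, fa in ra[2].items()
                   for cb, fb in rb[2].items())
    saliency = []
    for k, rk in enumerate(regions):
        s = 0.0
        for i, ri in enumerate(regions):
            if i == k:
                continue
            d_s = math.dist(rk[1], ri[1])      # centroid distance between regions
            s += math.exp(-d_s / sigma_s ** 2) * ri[0] * colour_distance(rk, ri)
        saliency.append(s)
    return saliency

regions = [
    (0.5, (0.25, 0.5), {(255, 0, 0): 1.0}),    # a uniformly red region
    (0.5, (0.75, 0.5), {(0, 0, 0): 1.0}),      # a uniformly black region
]
s = region_saliency(regions)
print(s[0] > 0, abs(s[0] - s[1]) < 1e-9)       # True True (symmetric weights)
```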
Step 502, according to the result of the significance analysis, the original image is divided into two or more significant areas with different significance effects.
Here, the saliency analysis result may divide the original image into two or more salient regions with different saliency effects; the number of regions and the saliency values used to divide them can be set by those skilled in the art based on analysis. For example, the image regions whose sorted saliency values fall within a first threshold are set as the salient region of a first display effect; those within a second threshold as the salient region of a second display effect; and those within a third threshold as the salient region of a third display effect. The division can also be performed by setting a proportion or a saliency value, which those skilled in the art can set and adjust based on image analysis. In this embodiment, the degree of saliency of the human subject is determined through saliency analysis, and two salient regions, the human subject and the background, are divided according to the difference in saliency effect.
In this embodiment, the salient regions may be divided into a main region and a background region. Taking an image saliency value range (the brightness of the saliency image) of [0, 255] as an example, the saliency range of the main region may be set to [64, 255] and that of the background region to [0, 64).
Step 503: segmenting the salient regions with different saliency effects.
Preferably, when segmenting the salient regions with different saliency effects, this embodiment further comprises: extracting the contours of the salient regions by mathematical morphology, and/or filling the internal cavities of the salient regions.
Step 504: processing each salient region with its corresponding image processing method.
In Fig. 6(a), the contrast between the human subject and the background of the first original image is low, so the display of the human subject is not prominent. Fig. 6(e) shows the effect of enhancing the contrast of the first original image globally: because the human subject and the background are contrast-enhanced at the same time, the subject is not actually emphasized. Fig. 6(f) shows the effect of the image processing of this embodiment: the contrast of the human subject is enhanced while the background is left unprocessed, so the display effect of the human subject is improved.
Fig. 7(a) is the picture content of the second original image; as shown in Fig. 7(a), the animal subject and the background of the second original image suffer from a poor white-balance effect. Fig. 7(b) shows the effect of global white-balance processing on the second original image: because the animal subject and the background are white-balanced at the same time, the subject appears distorted and the overall display effect worsens; after the global white balance, the flowers and grass turn green but the color of the puppy also turns grey. Fig. 7(c) is a schematic diagram of the saliency analysis result of the second original image in this embodiment: the saliency of the animal subject is determined, and two salient regions, the animal subject and the background, are divided according to the difference in saliency effect. Fig. 7(d) shows the effect of the image processing of this embodiment: using the saliency image, white balance is applied to the background only, while the animal subject is left unprocessed; the white balance of the puppy is thus preserved, the white balance of the background is improved, and the display effect of the picture is enhanced by the local processing.
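The local white-balance idea can be sketched by applying a simple gray-world correction to the background mask only (gray-world is an illustrative substitute; the text does not fix a particular white-balance algorithm):

```python
import numpy as np

def local_gray_world_wb(image, background_mask):
    """Apply gray-world white balance to the background region only, leaving
    the subject pixels untouched.  `image` is H x W x 3 uint8 and
    `background_mask` an H x W boolean mask of the background region."""
    out = image.astype(np.float64)
    bg = out[background_mask]                    # (N, 3) background pixels
    gains = bg.mean() / bg.mean(axis=0)          # per-channel gray-world gains
    out[background_mask] = bg * gains            # correct the background only
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[...] = (200, 100, 100)                       # reddish cast everywhere
mask = np.zeros((2, 2), dtype=bool)
mask[0] = True                                   # top row is the "background"
out = local_gray_world_wb(img, mask)
print(out[0, 0].tolist(), out[1, 1].tolist())    # [133, 133, 133] [200, 100, 100]
```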
Fig. 8(a) is the picture content of the third original image; as shown in Fig. 8(a), the animal subject and the background are equally sharp, which weakens the display of the animal subject. Fig. 8(b) is a schematic diagram of the saliency analysis result of the third original image in this embodiment: the saliency of the animal subject is determined, and two salient regions, the animal subject and the background, are divided according to the difference in saliency effect. Fig. 8(c) shows the effect of the image processing of this embodiment on the third original image: the background is blurred while the animal subject is left unprocessed, so the display effect of the animal subject in the picture is improved.
Although the embodiments of the present invention have been described above, the above description is only for the convenience of understanding the present invention, and is not intended to limit the present invention. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (15)

1. An apparatus for implementing image processing, comprising: the device comprises an analysis unit, a determination unit and a processing unit; wherein,
the analysis unit is used for carrying out significance analysis on the original image;
the determining unit is used for dividing the original image, according to the saliency analysis result, into at least two salient regions with different saliency effects;
and the processing unit is used for processing each salient region by adopting a corresponding image processing method.
2. The apparatus according to claim 1, further comprising a segmentation unit configured to segment the salient regions with different significance effects before the salient regions are processed by the corresponding image processing methods.
3. The apparatus according to claim 1, wherein the analysis unit is specifically configured to perform saliency analysis by performing image contrast comparison on each pixel of the original image with colors, luminances and directions of surrounding pixels to obtain a corresponding saliency value of each pixel.
4. The device according to claim 1, 2 or 3, characterized in that the analysis unit is in particular adapted to,
and performing image contrast analysis on the original image by adopting a regional contrast RC algorithm, and performing significance analysis on the original image through the image contrast analysis.
5. The apparatus according to claim 2, wherein the segmentation unit is further configured to extract an outline of each significant region having a different significance effect and/or fill a cavity inside each significant region having a different significance effect by using mathematical morphology when segmenting each significant region having a different significance effect.
6. A method of implementing image processing, comprising:
carrying out significance analysis on the original image;
dividing the original image, according to the saliency analysis result, into at least two salient regions with different saliency effects;
and processing each salient region by adopting a corresponding image processing method.
7. The method according to claim 6, wherein dividing the original image into at least two salient regions with different saliency effects specifically comprises:
dividing the original image into at least two salient regions with different saliency effects according to the region saliency values in the saliency analysis result, in combination with a preset discrimination threshold.
8. The method according to claim 6 or 7, wherein when the original image is divided into a main area and a background area with different significance effects, the preset discrimination threshold comprises: the significance value range of the main area is greater than 64 and less than or equal to 255, and the significance value range of the background area is greater than or equal to 0 and less than 64.
9. The method according to claim 6 or 7, wherein before the processing of each salient region by the corresponding image processing method, the method further comprises:
and dividing the significant regions having different significant effects.
10. The method according to claim 6 or 7, wherein the saliency analysis of the raw image comprises in particular: and comparing the contrast of the image of each pixel point of the original image with the color, brightness and direction of the surrounding pixel points to obtain a corresponding significance value of each pixel point, and performing significance analysis.
11. The method according to claim 6, 7 or 10, wherein the saliency analysis of the original image comprises in particular:
and performing image contrast analysis on the original image by adopting a regional contrast RC algorithm, and performing significance analysis on the original image through the image contrast analysis.
12. The method according to claim 11, wherein said performing the saliency analysis of the original image specifically comprises:
dividing the original image into N regions according to the set pixel size, and calculating the saliency value S(r_k) of region r_k by formula (1):

S(r_k) = \sum_{i=1, i \neq k}^{n} \omega(r_i) D_r(r_k, r_i)    (1)

wherein r_i represents a region different from r_k, \omega(r_i) is the weight of region r_i, and D_r(r_k, r_i) is the color distance difference between the two regions, calculated as:

D_r(r_1, r_2) = \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} f_1(i) f_2(j) d(c_{1,i}, c_{2,j})    (2)

wherein f_1(i) is the probability of the i-th of all n_1 statistical color types in region r_1; f_2(j) is the probability of the j-th of all n_2 statistical color types in region r_2; and d(c_{1,i}, c_{2,j}) is the distance between the i-th color c_{1,i} of r_1 and the j-th color c_{2,j} of r_2;

adding the difference of the spatial distance between regions to S(r_k) gives the region saliency value:

S'(r_k) = \sum_{i=1, i \neq k}^{n} \exp(-D_s(r_k, r_i) / \sigma_s^2) \, \omega(r_i) D_r(r_k, r_i)    (3)

wherein D_s(r_k, r_i) is the Euclidean distance between the centroids of the two regions, and \sigma_s is the spatial-distance influence factor.
13. The method according to claim 6, 7 or 8, characterized in that the image processing method comprises: adjusting exposure value, blurring processing, white balance special effect, background replacement, saliency value adjustment, color gradation adjustment, brightness adjustment, hue/saturation adjustment, color replacement, gradient mapping, and/or photo filter.
14. The method according to claim 7, wherein when segmenting saliency areas that differ in saliency effect, the method further comprises:
extracting the contours of the salient regions with different saliency effects, and/or filling the internal cavities of the salient regions with different saliency effects, by mathematical morphology.
15. The method according to claim 14, wherein extracting the contours of the salient regions with different saliency effects by mathematical morphology specifically comprises:
computing a binary image for each salient region with a different saliency effect, and performing contour extraction on the computed binary images through dilation, erosion, opening, and closing operations;
and filling the internal cavities of the salient regions with different saliency effects specifically comprises:
computing a corresponding binary image for each salient region, extracting the contours inside each region's binary image to obtain internal contours, determining internal contours smaller than a preset area to be internal cavities, and filling the pixels inside those cavities.
CN201510936808.5A 2015-12-15 2015-12-15 Image processing method and apparatus Pending CN105574866A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201510936808.5A CN105574866A (en) 2015-12-15 2015-12-15 Image processing method and apparatus
PCT/CN2016/105755 WO2017101626A1 (en) 2015-12-15 2016-11-14 Method and apparatus for implementing image processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510936808.5A CN105574866A (en) 2015-12-15 2015-12-15 Image processing method and apparatus

Publications (1)

Publication Number Publication Date
CN105574866A true CN105574866A (en) 2016-05-11

Family

ID=55884957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510936808.5A Pending CN105574866A (en) 2015-12-15 2015-12-15 Image processing method and apparatus

Country Status (2)

Country Link
CN (1) CN105574866A (en)
WO (1) WO2017101626A1 (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106254643A (en) * 2016-07-29 2016-12-21 努比亚技术有限公司 A kind of mobile terminal and image processing method
CN106780513A (en) * 2016-12-14 2017-05-31 北京小米移动软件有限公司 The method and apparatus of picture conspicuousness detection
CN107147823A (en) * 2017-05-31 2017-09-08 广东欧珀移动通信有限公司 Exposure method, device, computer-readable recording medium and mobile terminal
CN107197146A (en) * 2017-05-31 2017-09-22 广东欧珀移动通信有限公司 Image processing method and related product
CN107277354A (en) * 2017-07-03 2017-10-20 努比亚技术有限公司 One kind virtualization photographic method, virtualization photo terminal and computer-readable recording medium
CN107392972A (en) * 2017-08-21 2017-11-24 维沃移动通信有限公司 A kind of image background weakening method, mobile terminal and computer-readable recording medium
CN107950017A (en) * 2016-06-15 2018-04-20 索尼公司 Image processing equipment, image processing method and picture pick-up device
CN108376404A (en) * 2018-02-11 2018-08-07 广东欧珀移动通信有限公司 Image processing method and device, electronic equipment, storage medium
CN108702452A (en) * 2017-06-09 2018-10-23 华为技术有限公司 A kind of image capturing method and device
CN109325507A (en) * 2018-10-11 2019-02-12 湖北工业大学 A kind of image classification algorithms and system of combination super-pixel significant characteristics and HOG feature
CN109344724A (en) * 2018-09-05 2019-02-15 深圳伯奇科技有限公司 A kind of certificate photo automatic background replacement method, system and server
CN109827652A (en) * 2018-11-26 2019-05-31 河海大学常州校区 One kind being directed to Fibre Optical Sensor vibration signal recognition and system
WO2019105254A1 (en) * 2017-11-30 2019-06-06 Oppo广东移动通信有限公司 Background blur processing method, apparatus and device
CN109978881A (en) * 2019-04-09 2019-07-05 苏州浪潮智能科技有限公司 A kind of method and apparatus of saliency processing
CN109993816A (en) * 2019-03-21 2019-07-09 广东智媒云图科技股份有限公司 Joint drawing method, device, terminal setting and computer readable storage medium
WO2019228084A1 (en) * 2018-05-31 2019-12-05 Zhou Chaoqiang Child-proof smart blow dryer
CN110602384A (en) * 2019-08-27 2019-12-20 维沃移动通信有限公司 Exposure control method and electronic device
CN114612336A (en) * 2022-03-21 2022-06-10 北京达佳互联信息技术有限公司 Image processing method, device, equipment and storage medium
CN115460389A (en) * 2022-09-20 2022-12-09 北京拙河科技有限公司 Image white balance area optimization method and device
WO2023273069A1 (en) * 2021-06-30 2023-01-05 深圳市慧鲤科技有限公司 Saliency detection method and model training method and apparatus thereof, device, medium, and program

Families Citing this family (7)

Publication number Priority date Publication date Assignee Title
CN110210277B (en) * 2018-05-22 2022-12-09 安徽大学 Moving target hole filling algorithm
WO2020133170A1 (en) * 2018-12-28 2020-07-02 深圳市大疆创新科技有限公司 Image processing method and apparatus
CN111127476B (en) * 2019-12-06 2024-01-26 Oppo广东移动通信有限公司 Image processing method, device, equipment and storage medium
CN112419265B (en) * 2020-11-23 2023-08-01 哈尔滨工程大学 Camouflage evaluation method based on human eye vision mechanism
CN115861451B (en) * 2022-12-27 2023-06-30 东莞市楷德精密机械有限公司 Multifunctional image processing method and system based on machine vision
CN116342629A (en) * 2023-06-01 2023-06-27 深圳思谋信息科技有限公司 Image interaction segmentation method, device, equipment and storage medium
CN116757963B (en) * 2023-08-14 2023-11-07 荣耀终端有限公司 Image processing method, electronic device, chip system and readable storage medium

Citations (6)

Publication number Priority date Publication date Assignee Title
CN103473739A (en) * 2013-08-15 2013-12-25 华中科技大学 White blood cell image accurate segmentation method and system based on support vector machine
CN103514582A (en) * 2012-06-27 2014-01-15 郑州大学 Visual saliency-based image deblurring method
CN103914834A (en) * 2014-03-17 2014-07-09 上海交通大学 Significant object detection method based on foreground priori and background priori
CN104766287A (en) * 2015-05-08 2015-07-08 哈尔滨工业大学 Blurred image blind restoration method based on significance detection
CN104809729A (en) * 2015-04-29 2015-07-29 山东大学 Robust automatic image salient region segmenting method
US20150227809A1 (en) * 2014-02-12 2015-08-13 International Business Machines Corporation Anomaly detection in medical imagery

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
EP3457354B1 (en) * 2011-04-08 2020-02-19 Dolby Laboratories Licensing Corporation Definition of global image transformations
CN102509308A (en) * 2011-08-18 2012-06-20 上海交通大学 Motion segmentation method based on mixtures-of-dynamic-textures-based spatiotemporal saliency detection
CN105023264A (en) * 2014-04-25 2015-11-04 南京理工大学 Infrared image remarkable characteristic detection method combining objectivity and background property
CN104240244B (en) * 2014-09-10 2017-06-13 上海交通大学 A kind of conspicuousness object detecting method based on communication mode and manifold ranking
CN104408708B (en) * 2014-10-29 2017-06-20 兰州理工大学 A kind of image well-marked target detection method based on global and local low-rank
CN105574886A (en) * 2016-01-28 2016-05-11 多拉维(深圳)技术有限公司 High-precision calibration method of handheld multi-lens camera

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
CN103514582A (en) * 2012-06-27 2014-01-15 郑州大学 Visual saliency-based image deblurring method
CN103473739A (en) * 2013-08-15 2013-12-25 华中科技大学 White blood cell image accurate segmentation method and system based on support vector machine
US20150227809A1 (en) * 2014-02-12 2015-08-13 International Business Machines Corporation Anomaly detection in medical imagery
CN103914834A (en) * 2014-03-17 2014-07-09 上海交通大学 Significant object detection method based on foreground priori and background priori
CN104809729A (en) * 2015-04-29 2015-07-29 山东大学 Robust automatic image salient region segmenting method
CN104766287A (en) * 2015-05-08 2015-07-08 哈尔滨工业大学 Blurred image blind restoration method based on significance detection

Non-Patent Citations (1)

Title
戴向东 (Dai Xiangdong): "Research on Infrared and Visible Light Image Fusion Technology", Wanfang Dissertation Database *

Cited By (33)

Publication number Priority date Publication date Assignee Title
CN107950017A (en) * 2016-06-15 2018-04-20 索尼公司 Image processing equipment, image processing method and picture pick-up device
CN106254643B (en) * 2016-07-29 2020-04-24 瑞安市智造科技有限公司 Mobile terminal and picture processing method
CN106254643A (en) * 2016-07-29 2016-12-21 努比亚技术有限公司 A kind of mobile terminal and image processing method
CN106780513B (en) * 2016-12-14 2019-08-30 北京小米移动软件有限公司 The method and apparatus of picture conspicuousness detection
CN106780513A (en) * 2016-12-14 2017-05-31 北京小米移动软件有限公司 The method and apparatus of picture conspicuousness detection
CN107147823A (en) * 2017-05-31 2017-09-08 广东欧珀移动通信有限公司 Exposure method, device, computer-readable recording medium and mobile terminal
CN107197146A (en) * 2017-05-31 2017-09-22 广东欧珀移动通信有限公司 Image processing method and related product
CN107197146B (en) * 2017-05-31 2020-06-30 Oppo广东移动通信有限公司 Image processing method and device, mobile terminal and computer readable storage medium
US10674091B2 (en) 2017-05-31 2020-06-02 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method based on determination of light spot area and related products
CN108702452A (en) * 2017-06-09 2018-10-23 华为技术有限公司 A kind of image capturing method and device
CN108702452B (en) * 2017-06-09 2020-02-14 华为技术有限公司 Image shooting method and device
WO2018223394A1 (en) * 2017-06-09 2018-12-13 华为技术有限公司 Method and apparatus for photographing image
US11425309B2 (en) 2017-06-09 2022-08-23 Huawei Technologies Co., Ltd. Image capture method and apparatus
CN107277354A (en) * 2017-07-03 2017-10-20 努比亚技术有限公司 One kind virtualization photographic method, virtualization photo terminal and computer-readable recording medium
CN107277354B (en) * 2017-07-03 2020-04-28 瑞安市智造科技有限公司 Virtual photographing method, virtual photographing terminal and computer readable storage medium
CN107392972A (en) * 2017-08-21 2017-11-24 维沃移动通信有限公司 A kind of image background weakening method, mobile terminal and computer-readable recording medium
WO2019105254A1 (en) * 2017-11-30 2019-06-06 Oppo广东移动通信有限公司 Background blur processing method, apparatus and device
CN108376404A (en) * 2018-02-11 2018-08-07 广东欧珀移动通信有限公司 Image processing method and device, electronic equipment, storage medium
WO2019228084A1 (en) * 2018-05-31 2019-12-05 Zhou Chaoqiang Child-proof smart blow dryer
CN109344724A (en) * 2018-09-05 2019-02-15 深圳伯奇科技有限公司 A kind of certificate photo automatic background replacement method, system and server
CN109325507A (en) * 2018-10-11 2019-02-12 湖北工业大学 A kind of image classification algorithms and system of combination super-pixel significant characteristics and HOG feature
CN109325507B (en) * 2018-10-11 2020-10-16 湖北工业大学 Image classification method and system combining super-pixel saliency features and HOG features
CN109827652A (en) * 2018-11-26 2019-05-31 河海大学常州校区 One kind being directed to Fibre Optical Sensor vibration signal recognition and system
CN109993816A (en) * 2019-03-21 2019-07-09 广东智媒云图科技股份有限公司 Joint drawing method, device, terminal setting and computer readable storage medium
CN109993816B (en) * 2019-03-21 2023-08-04 广东智媒云图科技股份有限公司 Combined painting method, device, terminal setting and computer readable storage medium
CN109978881B (en) * 2019-04-09 2021-11-26 苏州浪潮智能科技有限公司 Image saliency processing method and device
CN109978881A (en) * 2019-04-09 2019-07-05 苏州浪潮智能科技有限公司 A kind of method and apparatus of saliency processing
CN110602384A (en) * 2019-08-27 2019-12-20 维沃移动通信有限公司 Exposure control method and electronic device
CN110602384B (en) * 2019-08-27 2022-03-29 维沃移动通信有限公司 Exposure control method and electronic device
WO2023273069A1 (en) * 2021-06-30 2023-01-05 深圳市慧鲤科技有限公司 Saliency detection method and model training method and apparatus thereof, device, medium, and program
CN114612336A (en) * 2022-03-21 2022-06-10 北京达佳互联信息技术有限公司 Image processing method, device, equipment and storage medium
CN115460389A (en) * 2022-09-20 2022-12-09 北京拙河科技有限公司 Image white balance area optimization method and device
CN115460389B (en) * 2022-09-20 2023-05-26 北京拙河科技有限公司 Image white balance area optimization method and device

Also Published As

Publication number Publication date
WO2017101626A1 (en) 2017-06-22

Similar Documents

Publication Publication Date Title
CN105574866A (en) Image processing method and apparatus
US20210258479A1 (en) Image processing method and apparatus
CN110675310B (en) Video processing method and device, electronic equipment and storage medium
CN106534675A (en) Method and terminal for microphotography background blurring
US10382712B1 (en) Automatic removal of lens flares from images
CN105427263A (en) Method and terminal for realizing image registering
CN108171677B (en) Image processing method and related equipment
CN106791416A (en) Background blurring image capture method and terminal
CN112602088B (en) Method, system and computer readable medium for improving quality of low light images
CN113658197B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN112272832A (en) Method and system for DNN-based imaging
CN112258404A (en) Image processing method, image processing device, electronic equipment and storage medium
CN113395440A (en) Image processing method and electronic equipment
CN114096994A (en) Image alignment method and device, electronic equipment and storage medium
CN110266926B (en) Image processing method, image processing device, mobile terminal and storage medium
CN109816620B (en) Image processing method and device, electronic equipment and storage medium
CN113610884B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN106469446B (en) Depth image segmentation method and segmentation device
CN113920912A (en) Display attribute adjusting method and related equipment
CN105976344A (en) Whiteboard image processing method and whiteboard image processing device
CN111553865B (en) Image restoration method and device, electronic equipment and storage medium
US20180150978A1 (en) Method and device for processing a page
CN113225451A (en) Image processing method and device and electronic equipment
CN111968605A (en) Exposure adjusting method and device
CN108810407B (en) Image processing method, mobile terminal and computer readable storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160511