CN109978881B - Image saliency processing method and device - Google Patents


Info

Publication number
CN109978881B
CN109978881B (application CN201910278429.XA)
Authority
CN
China
Prior art keywords
image, value, images, saliency, significance
Prior art date
Legal status
Active
Application number
CN201910278429.XA
Other languages
Chinese (zh)
Other versions
CN109978881A (en)
Inventor
张永欢 (Zhang Yonghuan)
Current Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN201910278429.XA
Publication of CN109978881A
Application granted
Publication of CN109978881B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013: Eye tracking input arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Color Image Communication Systems (AREA)
  • Facsimile Image Signal Circuits (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image saliency processing method and device. The method comprises: determining, in an image, an object to be subjected to image saliency processing; and performing image processing on the determined object to obtain an image with optimal image saliency. The method and device can perform saliency processing on an object in the image, reduce the complexity of image saliency processing, and meet a user's saliency requirements for the image.

Description

Image saliency processing method and device
Technical Field
The present invention relates to the field of image processing, and in particular, to a method and an apparatus for processing saliency of an image.
Background
The visual attention mechanism (VA) refers to the way a human facing a scene automatically processes regions of interest, called salient regions, while selectively ignoring regions of no interest. With the spread of large data volumes brought by the Internet and the rapid development of artificial intelligence, saliency analysis has received growing attention, and more and more saliency analysis algorithms have appeared. These algorithms are mainly used to calculate the saliency of an image; they have high computational complexity and differing adaptability to different images. There is currently a strong demand for making an object attractive by changing its saliency in an image, which requires saliency processing of the image. However, because saliency analysis algorithms are immature and imperfect, saliency processing of an image is complicated to implement and often fails to achieve the expected saliency effect.
Disclosure of Invention
The main object of the present invention is to provide an image saliency processing method and device that can perform saliency processing on an object in an image, reduce the complexity of image saliency processing, and meet a user's saliency requirements for the image.
In order to solve the above technical problem, the present invention provides an image saliency processing method, including:
determining an object to be subjected to image saliency processing in an image;
performing image processing on the determined object according to a brightness value, a contrast value, a hue value and a saturation value to obtain image saliency data; judging, according to a preset rule, whether the obtained image saliency data satisfies the condition that the image saliency is optimal; and if not, continuing to adjust the brightness value, the contrast value, the hue value and the saturation value until an image with optimal image saliency is obtained.
In an exemplary embodiment, the method further comprises the following features:
Performing image processing on the determined object according to the brightness value, contrast value, hue value and saturation value to obtain image saliency data, judging according to a preset rule whether the obtained image saliency data satisfies the condition that the image saliency is optimal, and if not, continuing to adjust the brightness value, contrast value, hue value and saturation value until an image with optimal image saliency is obtained, comprises steps S0 to S3.
Step S0: initialize the brightness value bVal, contrast value cVal, hue value hVal and saturation value sVal to bVal_0, cVal_0, hVal_0 and sVal_0 respectively, and take bVal_0, cVal_0, hVal_0 and sVal_0 as the current bVal, cVal, hVal and sVal values.
Step S1: perform image processing on the determined object according to the current bVal, cVal, hVal and sVal values; save the image before processing as the first group of images and the image processed in this step as the second group of images; then perform moving-image processing on the second group of images and save the resulting image as the third group of images.
Step S2: and performing image detection analysis on the first group of images, the second group of images and the third group of images respectively.
Step S3: compare and analyze the image detection and analysis results of the three groups of images to obtain image saliency data, and judge according to a preset rule whether the obtained image saliency data satisfies the condition that the image saliency is optimal. If not, re-adjust the brightness value bVal, contrast value cVal, hue value hVal and saturation value sVal according to the obtained image saliency data, take the adjusted values as the current bVal, cVal, hVal and sVal values, and continue from step S1. If the condition is satisfied, the image with optimal image saliency is obtained and the process ends.
In an exemplary embodiment, the method further comprises the following features:
Step S2, performing image detection analysis on the first, second and third groups of images respectively, comprises:
detecting the most salient object in each of the first, second and third groups of images, and detecting the dwell time and distribution law of the human eye on that object.
In an exemplary embodiment, the method further comprises the following features:
if it is judged according to the preset rule that the obtained image saliency data does not satisfy the condition that the image saliency is optimal, then before the brightness value bVal, contrast value cVal, hue value hVal and saturation value sVal are re-adjusted according to the obtained image saliency data, the method further comprises:
judging the number of times N that step S1 has been executed; if N is greater than or equal to a preset threshold, finding the best-performing image saliency data among the N sets of image saliency data obtained so far, and taking the image corresponding to that data as the image with optimal image saliency.
In an exemplary embodiment, the method further comprises the following features:
detecting the distribution law of the human eye staying on the object comprises:
detecting the region of the object where the human eye stays, the dwell time of the human eye in that region, and the brightness value, contrast value, hue value and saturation value of that region, to obtain the distribution law of the human eye staying on the object.
In order to solve the above problem, the present invention also provides an image saliency processing apparatus comprising: a memory and a processor; wherein:
the memory is used for storing a program for image saliency processing;
the processor is used for reading and executing the program for image saliency processing, and executing the following operations:
determining an object to be subjected to image saliency processing in an image;
performing image processing on the determined object according to a brightness value, a contrast value, a hue value and a saturation value to obtain image saliency data; judging, according to a preset rule, whether the obtained image saliency data satisfies the condition that the image saliency is optimal; and if not, continuing to adjust the brightness value, the contrast value, the hue value and the saturation value until an image with optimal image saliency is obtained.
In an exemplary embodiment, the processor is configured to read the program for image saliency processing and perform the following operations: performing image processing on the determined object according to the brightness value, contrast value, hue value and saturation value to obtain image saliency data; judging according to a preset rule whether the obtained image saliency data satisfies the condition that the image saliency is optimal; and if not, continuing to adjust the brightness value, contrast value, hue value and saturation value until an image with optimal image saliency is obtained. This comprises steps S0 to S3.
Step S0: initialize the brightness value bVal, contrast value cVal, hue value hVal and saturation value sVal to bVal_0, cVal_0, hVal_0 and sVal_0 respectively, and take bVal_0, cVal_0, hVal_0 and sVal_0 as the current bVal, cVal, hVal and sVal values.
Step S1: perform image processing on the determined object according to the current bVal, cVal, hVal and sVal values; save the image before processing as the first group of images and the image processed in this step as the second group of images; then perform moving-image processing on the second group of images and save the resulting image as the third group of images.
Step S2: and performing image detection analysis on the first group of images, the second group of images and the third group of images respectively.
Step S3: compare and analyze the image detection and analysis results of the three groups of images to obtain image saliency data, and judge according to a preset rule whether the obtained image saliency data satisfies the condition that the image saliency is optimal. If not, re-adjust the brightness value bVal, contrast value cVal, hue value hVal and saturation value sVal according to the obtained image saliency data, take the adjusted values as the current bVal, cVal, hVal and sVal values, and continue from step S1. If the condition is satisfied, the image with optimal image saliency is obtained and the process ends.
In an exemplary embodiment, the processor is configured to read the program for image saliency processing, and step S2, performing image detection analysis on the first, second and third groups of images respectively, comprises:
detecting the most salient object in each of the first, second and third groups of images, and detecting the dwell time and distribution law of the human eye on that object.
In an exemplary embodiment, the processor is configured to read a program for performing the image saliency processing, and further configured to:
if it is judged according to the preset rule that the obtained image saliency data does not satisfy the condition that the image saliency is optimal, then before the brightness value bVal, contrast value cVal, hue value hVal and saturation value sVal are re-adjusted according to the obtained image saliency data:
judging the number of times N that step S1 has been executed; if N is greater than or equal to a preset threshold, finding the best-performing image saliency data among the N sets of image saliency data obtained so far, and taking the image corresponding to that data as the image with optimal image saliency.
In an exemplary embodiment, detecting the distribution law of the human eye staying on the object comprises:
detecting the region of the object where the human eye stays, the dwell time of the human eye in that region, and the brightness value, contrast value, hue value and saturation value of that region, to obtain the distribution law of the human eye staying on the object.
In summary, the image saliency processing method of the present application first determines, in an image, an object to be subjected to image saliency processing, and then performs image processing on the determined object to obtain an image with optimal image saliency. Compared with the prior art, the image saliency processing method and device reduce the complexity of image saliency processing and meet a user's saliency requirements for the image.
Drawings
Fig. 1 is a flowchart of a method for processing image saliency according to an embodiment of the present invention.
Fig. 2 is a specific flowchart of obtaining an image with optimal image saliency according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of an image saliency processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It should be noted that the embodiments and features of the embodiments in the present application may be arbitrarily combined with each other without conflict.
Fig. 1 is a flowchart of an image saliency processing method according to an embodiment of the present invention. As shown in fig. 1, the image saliency processing method of this embodiment includes:
Step A: determining an object to be subjected to image saliency processing in an image;
Step B: performing image processing on the determined object to obtain an image with optimal image saliency.
In an exemplary embodiment, step B, performing image processing on the determined object to obtain an image with optimal image saliency, includes steps S0 to S3.
Step S0: initialize the brightness value bVal, contrast value cVal, hue value hVal and saturation value sVal to bVal_0, cVal_0, hVal_0 and sVal_0 respectively, and take bVal_0, cVal_0, hVal_0 and sVal_0 as the current bVal, cVal, hVal and sVal values.
Step S1: perform image processing on the determined object according to the current bVal, cVal, hVal and sVal values; save the image before processing as the first group of images and the image processed in this step as the second group of images; then perform moving-image processing on the second group of images and save the resulting image as the third group of images.
Step S2: and performing image detection analysis on the first group of images, the second group of images and the third group of images respectively.
Step S3: compare and analyze the image detection and analysis results of the three groups of images to obtain image saliency data, and judge according to a preset rule whether the obtained image saliency data satisfies the condition that the image saliency is optimal. If not, re-adjust the brightness value bVal, contrast value cVal, hue value hVal and saturation value sVal according to the obtained image saliency data, take the adjusted values as the current bVal, cVal, hVal and sVal values, and continue from step S1. If the condition is satisfied, the image with optimal image saliency is obtained and the process ends.
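The S0-S3 loop above can be sketched as follows. This is an illustrative reconstruction, not code from the patent: the callable parameters (process, animate, analyze, is_optimal, adjust) are hypothetical placeholders for the image processing, moving-image processing, detection-analysis, preset-rule check and re-adjustment steps.

```python
def optimize_saliency(original, process, animate, analyze, is_optimal, adjust,
                      init_params, max_iter=10):
    """Sketch of steps S0-S3: iterate until the preset saliency rule is met.

    process(img, params)   -> image processed with (bVal, cVal, hVal, sVal)
    animate(img)           -> moving-image version of the processed image
    analyze(g1, g2, g3)    -> image saliency data from the three groups
    is_optimal(data)       -> True if the preset rule is satisfied
    adjust(params, data)   -> re-adjusted (bVal, cVal, hVal, sVal)
    """
    params = init_params                          # Step S0: initial values
    history = []
    for _ in range(max_iter):
        group2 = process(original, params)        # Step S1: process the object
        group3 = animate(group2)                  # Step S1: moving-image processing
        data = analyze(original, group2, group3)  # Steps S2-S3: detect and compare
        history.append((data, group2, group3))
        if is_optimal(data):                      # preset rule met: done
            return group2, group3
        params = adjust(params, data)             # re-adjust and repeat S1
    # Iteration budget exhausted: keep the best-performing result seen so far
    best = max(history, key=lambda h: h[0])
    return best[1], best[2]
```

The fallback on the last two lines corresponds to the N-iterations threshold described later in the text.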
As shown in fig. 2, step S3 includes sub-steps S30, S31, S32, and S33.
Sub-step S30: compare and analyze the image detection and analysis results of the three groups of images to obtain image saliency data.
Sub-step S31: judge according to the preset rule whether the obtained image saliency data satisfies the condition that the image saliency is optimal; if so, perform sub-step S32; if not, perform sub-step S33.
Sub-step S32: take the currently processed second and third groups of images as the images with optimal image saliency, and end the image saliency processing flow.
Sub-step S33: re-adjust the brightness value bVal, contrast value cVal, hue value hVal and saturation value sVal according to the obtained image saliency data, take the adjusted values as the current bVal, cVal, hVal and sVal values, and continue from step S1.
In an exemplary embodiment, performing image processing on the determined object according to the current bVal, cVal, hVal and sVal values may include:
performing image processing on the determined object according to the current bVal, cVal, hVal and sVal values using the brightness formula, contrast formula, hue formula and saturation formula of the image, respectively.
The brightness formula (1) of the image is given only as an image in the original publication and is not reproduced here.
The contrast formulas (2) and (3) of the image are as follows:
cVal = Average + (RGB - Average) * (1 + percentage)    (2)
Formula (3) is given only as an image in the original publication and is not reproduced here.
The hue formula (4) and the saturation formula (5) of the image are likewise given only as images in the original publication and are not reproduced here.
where MaxV = max(R, G, B), MinV = min(R, G, B); nRGB denotes the adjusted tri-channel color value, RGB the original tri-channel color value, Average the luminance average, and percentage the adjustment percentage; bVal, cVal, hVal and sVal denote the brightness, contrast, hue and saturation values, respectively.
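Only contrast formula (2) survives in the text above; a minimal sketch of applying it per channel might look like the following. The clamping to [0, 255] and the rounding are our assumptions, not stated in the patent.

```python
def adjust_contrast_channel(c, average, percentage):
    """Formula (2): push a channel value away from the luminance average
    by the given percentage (a positive percentage increases contrast)."""
    return average + (c - average) * (1 + percentage)

def adjust_contrast_pixel(pixel, average, percentage):
    """Apply formula (2) to each of R, G, B, clamped to the 0-255 range
    (the clamping step is an assumption; the patent does not specify it)."""
    return tuple(
        max(0, min(255, round(adjust_contrast_channel(c, average, percentage))))
        for c in pixel
    )
```

With percentage = 0.5, a pixel (120, 100, 80) around an average of 100 is stretched to (130, 100, 70): channels above the average move up, channels below it move down.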
In another exemplary embodiment, one or more of the brightness value, contrast value, hue value and saturation value of the image may need to satisfy a constraint condition, for example that the hue value must be a fixed value, or that the saturation value must lie within a certain range while the hue value is fixed. In that case, performing image processing on the determined object according to the current bVal, cVal, hVal and sVal values in step S1 may include:
on the premise that the constraint condition is satisfied, image processing is performed on the determined object according to one or more of the current bVal, cVal, hVal and sVal values.
In an exemplary embodiment, step S2, performing image detection analysis on the first, second and third groups of images respectively, comprises:
detecting the most salient object in each of the first, second and third groups of images, and detecting the dwell time and distribution law of the human eye on that object.
In another exemplary embodiment, an eye tracker may be used to perform image detection analysis on the first, second, and third sets of images.
In an exemplary embodiment, the preset rule used in step S3 to judge whether the obtained image saliency data satisfies the condition that the image saliency is optimal may be set by the user according to requirements. For example: the time the human eye stays on the object expected to be salient in the second group of images must be greater than T seconds; the time the human eye stays on the object expected to be salient in the third group of images must be not less than T seconds; or the dwell time on the expected salient object in the second group of images must be 2 times (or another multiple of) that in the first group of images; and so on. The preset rule may also be set according to empirical values. The present application does not limit its specific content.
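The example rules above can be combined into a simple predicate, sketched below. This composition (and the default values of T and the ratio) is illustrative only; the patent leaves the rule to the user.

```python
def meets_preset_rule(dwell_g1, dwell_g2, dwell_g3, T=2.0, ratio=2.0):
    """Return True when the eye-tracking dwell times satisfy the example rule.

    dwell_g1/g2/g3: seconds the eye stayed on the target object in the
    first (unprocessed), second (processed) and third (animated) groups.
    """
    return (dwell_g2 > T) and (dwell_g3 >= T) and (dwell_g2 >= ratio * dwell_g1)
```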
In an exemplary embodiment, if it is judged according to the preset rule that the obtained image saliency data does not satisfy the condition that the image saliency is optimal, then before the brightness value bVal, contrast value cVal, hue value hVal and saturation value sVal are re-adjusted according to the obtained image saliency data, the method further includes:
judging the number of times N that step S1 has been executed; if N is greater than or equal to a preset threshold, finding the best-performing image saliency data among the N sets of image saliency data obtained so far, and taking the image corresponding to that data as the image with optimal image saliency.
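The fallback just described (stop after N executions of step S1 and keep the best result so far) can be sketched as below. Representing the "image saliency data" as a single scalar score is our simplifying assumption:

```python
def best_of_n(records, n_executed, threshold):
    """records: list of (saliency_score, image) pairs gathered over the runs.
    If step S1 has already run at least `threshold` times, return the image
    whose saliency data performed best; otherwise return None (keep looping)."""
    if n_executed >= threshold:
        return max(records, key=lambda r: r[0])[1]
    return None
```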
In another exemplary embodiment, detecting the distribution law of the human eye staying on the object includes:
detecting the region of the object where the human eye stays, the dwell time of the human eye in that region, and the brightness value, contrast value, hue value and saturation value of that region, to obtain the distribution law of the human eye staying on the object.
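A possible record structure for the dwell-distribution detection described above, pairing each fixation region with its dwell time and the region's brightness, contrast, hue and saturation values. The field layout and the per-region aggregation are our assumptions:

```python
from dataclasses import dataclass

@dataclass
class DwellRecord:
    region: tuple          # (x, y, w, h) of the fixation region on the object
    dwell_seconds: float   # how long the eye stayed in the region
    bVal: float            # brightness value of the region
    cVal: float            # contrast value of the region
    hVal: float            # hue value of the region
    sVal: float            # saturation value of the region

def dwell_distribution(records):
    """Total dwell time per region: a simple summary of the distribution law."""
    totals = {}
    for r in records:
        totals[r.region] = totals.get(r.region, 0.0) + r.dwell_seconds
    return totals
```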
In an exemplary embodiment, in step S3, comparing and analyzing the image detection and analysis results of the three groups of images includes:
analyzing, from the most salient object found in each group of images, whether the most salient object changed after the image was processed; if it changed, judging whether the changed most salient object is the expected most salient object, and comparing the change against the parameter differences among the three groups of images;
if it did not change, continuing to analyze the dwell time and distribution law of the human eye on the most salient object in each group of images, and comparing changes in the dwell time and distribution law against the parameter differences among the three groups of images;
and recording the results of the comparative analysis to obtain the image saliency data.
In another exemplary embodiment, the image saliency data may be saved each time using a uniform template to facilitate its analysis. Alternatively, when the most salient object has not changed, the relationship between the brightness value bVal, contrast value cVal, hue value hVal, saturation value sVal and the eye dwell time may be saved as a curve so that the relationship can be observed more conveniently and intuitively; the present application does not specifically limit this.
In summary, the image saliency processing method of the present application first determines, in an image, an object to be subjected to image saliency processing, and then performs image processing on the determined object to obtain an image with optimal image saliency. Compared with the prior art, the image saliency processing method and device reduce the complexity of image saliency processing and meet a user's saliency requirements for the image.
Fig. 3 is a schematic diagram of an image saliency processing apparatus according to an embodiment of the present invention. According to the schematic diagram shown in fig. 3, the image saliency processing apparatus of the present embodiment includes a memory and a processor. Wherein:
the memory 100 is used for storing a program for image saliency processing;
the processor 200 is configured to read and execute the program for processing image saliency, and perform the following operations:
determining an object to be subjected to image saliency processing in an image;
and carrying out image processing on the determined object to obtain an image with optimal image significance.
In an exemplary embodiment, the processor 200 is configured to read the program for image saliency processing, and performing image processing on the determined object to obtain the image with optimal image saliency includes:
step S0: respectively initializing a brightness value bVal, a contrast value cVal, a hue value hVal and a saturation value sVal into bVal0、cVal0、hVal0And sVal0To obtain bVal0、cVal0、hVal0And sVal0As the current bVal, cVal, hVal, sVal values.
Step S1: perform image processing on the determined object according to the current bVal, cVal, hVal and sVal values; save the image before processing as the first group of images and the image processed in this step as the second group of images; then perform moving-image processing on the second group of images and save the resulting image as the third group of images;
step S2: performing image detection analysis on the first group of images, the second group of images and the third group of images respectively;
step S3: comparing and analyzing the results of the image detection and analysis of the three groups of images to obtain image significance data, judging whether the obtained image significance data meets the optimal image significance according to a preset rule, if not, re-adjusting the brightness value bVal, the contrast value cVal, the hue value hVal and the saturation value sVal according to the obtained image significance data, taking the adjusted brightness value bVal, the contrast value cVal, the hue value hVal and the saturation value sVal as the current bVal, cVal, hVal and sVal values, and continuously executing the step S1; and if the requirement that the image saliency is optimal is met, obtaining an image with the optimal image saliency, and ending the process.
In an exemplary embodiment, performing image processing on the determined object according to the current bVal, cVal, hVal and sVal values may include:
performing image processing on the determined object according to the current bVal, cVal, hVal and sVal values using the brightness formula, contrast formula, hue formula and saturation formula of the image, respectively.
The brightness formula, the contrast formula, the hue formula and the saturation formula are shown in the foregoing formulas (1) to (5).
In another exemplary embodiment, one or more of the brightness value, contrast value, hue value and saturation value of the image may need to satisfy a constraint condition, for example that the hue value must be a fixed value, or that the saturation value must lie within a certain range while the hue value is fixed. In that case, performing image processing on the determined object according to the current bVal, cVal, hVal and sVal values in step S1 may include:
on the premise that the constraint condition is satisfied, performing image processing on the determined object according to one or more of the current bVal, cVal, hVal and sVal values.
In an exemplary embodiment, the processor 200 is configured to read the program for image saliency processing, and step S2, performing image detection analysis on the first, second and third groups of images respectively, comprises:
detecting the most salient object in each of the first, second and third groups of images, and detecting the dwell time and distribution law of the human eye on that object.
In another exemplary embodiment, an eye tracker may be used to perform image detection analysis on the first, second, and third sets of images.
In an exemplary embodiment, the preset rule used in step S3 to judge whether the obtained image saliency data satisfies the condition that the image saliency is optimal may be set by the user according to requirements. For example: the time the human eye stays on the object expected to be salient in the second group of images must be greater than T seconds; the time the human eye stays on the object expected to be salient in the third group of images must be not less than T seconds; or the dwell time on the expected salient object in the second group of images must be 2 times (or another multiple of) that in the first group of images; and so on. The preset rule may also be set according to empirical values. The present application does not limit its specific content.
In an exemplary embodiment, the processor 200 is configured to read a program for performing the image saliency processing, and further performs the following operations:
if it is judged according to the preset rule that the obtained image saliency data does not satisfy the condition that the image saliency is optimal, then before the brightness value bVal, contrast value cVal, hue value hVal and saturation value sVal are re-adjusted according to the obtained image saliency data:
judging the number of times N that step S1 has been executed; if N is greater than or equal to a preset threshold, finding the best-performing image saliency data among the N sets of image saliency data obtained so far, and taking the image corresponding to that data as the image with optimal image saliency.
In another exemplary embodiment, detecting a distribution law of human eye staying on the object includes:
detecting the area of the object on which the human eye dwells, the dwell time of the human eye in that area, and the brightness value, contrast value, hue value and saturation value of that area, so as to obtain the distribution law of the human eye's dwell on the object.
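The quantities listed for each dwell observation can be collected in a record like the following; the field names, the `(x, y, w, h)` region encoding, and the share-of-total summary are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class DwellRecord:
    """One observation of the eye resting on the object."""
    region: tuple         # (x, y, w, h) of the dwell area
    dwell_seconds: float  # dwell time of the eye in that area
    brightness: float     # bVal of the area
    contrast: float       # cVal of the area
    hue: float            # hVal of the area
    saturation: float     # sVal of the area

def distribution_law(records):
    """Summarize the dwell distribution as each region's share of the
    total dwell time."""
    total = sum(r.dwell_seconds for r in records)
    return {r.region: r.dwell_seconds / total for r in records} if total else {}
```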
In an exemplary embodiment, in step S3, performing comparative analysis on the results of the image detection analysis of the three sets of images includes:
analyzing, from the object with the greatest image saliency found by the image detection and analysis of each group of images, whether that object changed after the image was processed. If the object with the greatest image saliency changed, judging whether the changed object is the expected one, and comparing the change of that object against the parameter differences among the three groups of images;
if the object with the greatest image saliency did not change, continuing to analyze the dwell time and distribution law of the human eye on that object in each group of images, and comparing the changes of the dwell time and distribution law against the parameter differences among the three groups of images;
recording the result of the comparative analysis to obtain the image saliency data.
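The branching comparison described above can be sketched as follows. The dictionary structure, key names, and scalar dwell values are assumptions standing in for the patent's recorded data:

```python
def compare_groups(results):
    """Sketch of the step-S3 comparison. `results` maps group names to
    dicts with keys 'top_object' (most salient object) and 'dwell'
    (seconds the eye stayed on it)."""
    first, second, third = (results[k] for k in ("first", "second", "third"))
    changed = second["top_object"] != first["top_object"]
    data = {"top_object_changed": changed}
    if changed:
        # Object changed: record whether it changed to the expected object.
        data["is_expected"] = second.get("expected") == second["top_object"]
    else:
        # Object unchanged: compare dwell-time changes across the groups.
        data["delta_processed"] = second["dwell"] - first["dwell"]
        data["delta_dynamic"] = third["dwell"] - second["dwell"]
    return data
```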
In another exemplary embodiment, the image saliency data may be saved each time using a uniform template to facilitate later analysis. Alternatively, when the object with the greatest image saliency does not change, the relationship between the brightness value bVal, the contrast value cVal, the hue value hVal, the saturation value sVal and the eye dwell time may be saved in the form of a curve, so that the relationship can be observed more conveniently and intuitively; the present application does not specifically limit this.
The method for processing the image significance is further explained by a specific application example.
Step one: initialization. Initialization covers both parameters and rules. Parameter initialization sets the brightness value bVal, contrast value cVal, hue value hVal and saturation value sVal to bVal0, cVal0, hVal0 and sVal0, respectively. Rule initialization covers the preset rule, the preset threshold, and the like.
Step two: and determining an object to be subjected to image saliency processing in the image.
Step three: the determined object is image processed according to the current bVal, cVal, hVal, sVal values according to equations (1) - (5).
Step four: saving the images before processing as a first group of images, saving the images after processing in the step three as a second group of images, and performing dynamic image processing on the second group of images, and saving the images after dynamic image processing as a third group of images; the experimenters were evenly divided into three groups, corresponding to three groups of images.
Step five: the experimenter wears the eye tracker and clicks an object which the experimenter thinks to attract, and in the process, the eye tracker detects the stay time and the distribution rule of the experimenter on the object to obtain the result of image detection and analysis corresponding to the three groups of images.
Step six: comparing and analyzing the results of the image detection and analysis of the three groups of images to obtain image significance data, judging whether the obtained image significance data meets the optimal image significance according to a preset rule, if not, re-adjusting the brightness value bVal, the contrast value cVal, the hue value hVal and the saturation value sVal according to the obtained image significance data, taking the adjusted brightness value bVal, contrast value cVal, hue value hVal and saturation value sVal as the current bVal, cVal, hVal and sVal values, and continuously executing the third step; and if the requirement that the image saliency is optimal is met, obtaining an image with the optimal image saliency, and ending the process.
In the above specific application example, there is no strict execution sequence between each step, and those skilled in the art can reasonably arrange the execution sequence of each step as needed.
It will be understood by those skilled in the art that all or part of the steps of the above methods may be implemented by instructing the relevant hardware through a program, and the program may be stored in a computer readable storage medium, such as a read-only memory, a magnetic or optical disk, and the like. Alternatively, all or part of the steps of the above embodiments may be implemented using one or more integrated circuits. Accordingly, each module/unit in the above embodiments may be implemented in the form of hardware, and may also be implemented in the form of a software functional module. The present invention is not limited to any specific form of combination of hardware and software.
The foregoing is only a preferred embodiment of the present invention, and naturally there are many other embodiments of the present invention, and those skilled in the art can make various corresponding changes and modifications according to the present invention without departing from the spirit and the essence of the present invention, and these corresponding changes and modifications should fall within the scope of the appended claims.

Claims (8)

1. A method of image saliency processing comprising:
determining an object in the image that is expected to have image saliency;
carrying out image processing on the determined object according to a brightness value, a contrast value, a hue value and a saturation value to obtain image significance data, judging whether the obtained image significance data meets the condition that the image significance of the determined object is optimal according to a preset rule, and if the obtained image significance data does not meet the condition that the image significance is optimal, continuously adjusting the brightness value, the contrast value, the hue value and the saturation value until an image with the optimal image significance of the determined object is obtained;
the image processing of the determined object according to the brightness value, the contrast value, the hue value and the saturation value to obtain image saliency data, the judgment of whether the obtained image saliency data meets the condition that the image saliency of the determined object is optimal according to a preset rule, and if the obtained image saliency data does not meet the condition that the image saliency is optimal, the continuous adjustment of the brightness value, the contrast value, the hue value and the saturation value until the image with the optimal image saliency of the determined object is obtained includes:
step S0: respectively initializing a brightness value bVal, a contrast value cVal, a hue value hVal and a saturation value sVal to bVal0, cVal0, hVal0 and sVal0, and taking bVal0, cVal0, hVal0 and sVal0 as current bVal, cVal, hVal, sVal values;
step S1: carrying out image processing on the determined object according to the current bVal, cVal, hVal and sVal values; storing the images before processing as a first group of images, storing the images processed in this step as a second group of images, performing dynamic image processing on the second group of images, and storing the images after dynamic image processing as a third group of images;
step S2: performing image detection analysis on the first group of images, the second group of images and the third group of images respectively;
step S3: comparing and analyzing the results of the image detection and analysis of the three groups of images to obtain image significance data, judging whether the obtained image significance data meets the optimal image significance according to a preset rule, if not, re-adjusting the brightness value bVal, the contrast value cVal, the hue value hVal and the saturation value sVal according to the obtained image significance data, taking the adjusted brightness value bVal, the contrast value cVal, the hue value hVal and the saturation value sVal as the current bVal, cVal, hVal and sVal values, and continuously executing the step S1; if the image saliency is optimal, obtaining an image with the optimal image saliency, and ending the process;
wherein the preset rule comprises: the time that the human eye stays on the object expected to have image saliency in the second set of images is more than T seconds, the time that the human eye stays on the object expected to have image saliency in the third set of images is not less than T seconds, or the time that the human eye stays on the object expected to have image saliency in the second set of images is 2 times the time that the human eye stays on the object expected to have image saliency in the first set of images.
2. The method of claim 1, wherein the step S2: performing image detection analysis on the first, second, and third sets of images, respectively, comprises:
detecting, in each of the first group of images, the second group of images and the third group of images, the object with the greatest image saliency, and detecting the time and the distribution law of the human eye staying on the object.
3. The method according to claim 1 or 2, wherein if it is determined according to the preset rule that the obtained image saliency data does not satisfy optimal image saliency, before the brightness value bVal, the contrast value cVal, the hue value hVal and the saturation value sVal are re-adjusted according to the obtained image saliency data, the method further comprises:
determining the number of times N that the step S1 has been executed; if N is greater than or equal to a preset threshold, selecting the best-performing image saliency data from the N sets of obtained image saliency data, and taking the image corresponding to that data as the image with optimal image saliency.
4. The method of claim 2, wherein detecting the distribution law of the human eye staying on the object comprises:
detecting the area of the object on which the human eye dwells, the dwell time of the human eye in that area, and the brightness value, contrast value, hue value and saturation value of that area, so as to obtain the distribution law of the human eye's dwell on the object.
5. An image saliency processing apparatus comprising: a memory and a processor; wherein:
the memory is used for storing a program for image saliency processing;
the processor is used for reading and executing the program for image saliency processing, and executing the following operations:
determining an object in the image that is expected to have image saliency;
carrying out image processing on the determined object according to a brightness value, a contrast value, a hue value and a saturation value to obtain image significance data, judging whether the obtained image significance data meets the condition that the image significance of the determined object is optimal according to a preset rule, and if the obtained image significance data does not meet the condition that the image significance is optimal, continuously adjusting the brightness value, the contrast value, the hue value and the saturation value until an image with the optimal image significance of the determined object is obtained;
the processor is used for reading a program for executing the image saliency processing and executing the following operations: performing image processing on the determined object according to a brightness value, a contrast value, a hue value and a saturation value to obtain image significance data, judging whether the obtained image significance data meets the image significance optimization of the determined object according to a preset rule, and if not, continuously adjusting the brightness value, the contrast value, the hue value and the saturation value until obtaining an image with the image significance optimization of the determined object, wherein the image processing method comprises the following steps:
step S0: respectively initializing a brightness value bVal, a contrast value cVal, a hue value hVal and a saturation value sVal to bVal0, cVal0, hVal0 and sVal0, and taking bVal0, cVal0, hVal0 and sVal0 as current bVal, cVal, hVal, sVal values;
step S1: carrying out image processing on the determined object according to the current bVal, cVal, hVal and sVal values; storing the images before processing as a first group of images, storing the images processed in this step as a second group of images, performing dynamic image processing on the second group of images, and storing the images after dynamic image processing as a third group of images;
step S2: performing image detection analysis on the first group of images, the second group of images and the third group of images respectively;
step S3: comparing and analyzing the results of the image detection and analysis of the three groups of images to obtain image significance data, judging whether the obtained image significance data meets the optimal image significance according to a preset rule, if not, re-adjusting the brightness value bVal, the contrast value cVal, the hue value hVal and the saturation value sVal according to the obtained image significance data, taking the adjusted brightness value bVal, the contrast value cVal, the hue value hVal and the saturation value sVal as the current bVal, cVal, hVal and sVal values, and continuously executing the step S1; if the image saliency is optimal, obtaining an image with the optimal image saliency, and ending the process;
wherein the preset rule comprises: the time that the human eye stays on the object expected to have image saliency in the second set of images is more than T seconds, the time that the human eye stays on the object expected to have image saliency in the third set of images is not less than T seconds, or the time that the human eye stays on the object expected to have image saliency in the second set of images is 2 times the time that the human eye stays on the object expected to have image saliency in the first set of images.
6. The apparatus according to claim 5, wherein said processor is configured to read a program for performing said image saliency processing, said step S2 being performed: performing image detection analysis on the first, second, and third sets of images, respectively, comprises:
detecting, in each of the first group of images, the second group of images and the third group of images, the object with the greatest image saliency, and detecting the time and the distribution law of the human eye staying on the object.
7. The apparatus of claim 5 or 6, wherein the processor is configured to read a program that performs the image saliency processing, and further configured to:
if it is determined according to the preset rule that the obtained image saliency data does not satisfy optimal image saliency, then before the brightness value bVal, the contrast value cVal, the hue value hVal and the saturation value sVal are re-adjusted according to the obtained image saliency data,
determining the number of times N that the step S1 has been executed; if N is greater than or equal to a preset threshold, selecting the best-performing image saliency data from the N sets of obtained image saliency data, and taking the image corresponding to that data as the image with optimal image saliency.
8. The apparatus of claim 6, wherein said detecting the distribution law of the human eye staying on the object comprises:
detecting the area of the object on which the human eye dwells, the dwell time of the human eye in that area, and the brightness value, contrast value, hue value and saturation value of that area, so as to obtain the distribution law of the human eye's dwell on the object.
CN201910278429.XA 2019-04-09 2019-04-09 Image saliency processing method and device Active CN109978881B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910278429.XA CN109978881B (en) 2019-04-09 2019-04-09 Image saliency processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910278429.XA CN109978881B (en) 2019-04-09 2019-04-09 Image saliency processing method and device

Publications (2)

Publication Number Publication Date
CN109978881A CN109978881A (en) 2019-07-05
CN109978881B true CN109978881B (en) 2021-11-26

Family

ID=67083504

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910278429.XA Active CN109978881B (en) 2019-04-09 2019-04-09 Image saliency processing method and device

Country Status (1)

Country Link
CN (1) CN109978881B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113449548A (en) * 2020-03-24 2021-09-28 华为技术有限公司 Method and apparatus for updating object recognition model

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101098489A (en) * 2006-06-27 2008-01-02 三星电子株式会社 Image processing apparatus and method of enhancing visibility of displayed image
CN105023253A (en) * 2015-07-16 2015-11-04 上海理工大学 Visual underlying feature-based image enhancement method
CN105574866A (en) * 2015-12-15 2016-05-11 努比亚技术有限公司 Image processing method and apparatus
CN107909553A (en) * 2017-11-02 2018-04-13 青岛海信电器股份有限公司 A kind of image processing method and equipment
CN108647695A (en) * 2018-05-02 2018-10-12 武汉科技大学 Soft image conspicuousness detection method based on covariance convolutional neural networks

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101980248B (en) * 2010-11-09 2012-12-05 西安电子科技大学 Improved visual attention model-based method of natural scene object detection
CN102999908A (en) * 2012-11-19 2013-03-27 西安电子科技大学 Synthetic aperture radar (SAR) airport segmentation method based on improved visual attention model
US20170206426A1 (en) * 2016-01-15 2017-07-20 Ford Global Technologies, Llc Pedestrian Detection With Saliency Maps
CN106775527B (en) * 2016-12-07 2019-06-28 中国联合网络通信集团有限公司 Adjust the method, apparatus and display equipment of the display parameters of display panel
CN108696732B (en) * 2017-02-17 2023-04-18 北京三星通信技术研究有限公司 Resolution adjustment method and device for head-mounted display device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on an Improved Convolutional Neural Network Model and Its Applications; He Pengchao; China Master's Theses Full-text Database, Information Science & Technology; 2016-03-15; p. 39 *

Also Published As

Publication number Publication date
CN109978881A (en) 2019-07-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant