WO2021211105A1 - Watermarked image signal with varied watermark strengths - Google Patents
Watermarked image signal with varied watermark strengths
- Publication number
- WO2021211105A1 (PCT/US2020/028197)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- watermark
- image signal
- regions
- signal
- image
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23418—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/0021—Image watermarking
- G06T1/0028—Adaptive watermarking, e.g. Human Visual System [HVS]-based watermarking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H04N21/234345—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements the reformatting operation being performed only on part of the stream, e.g. a region of the image or a time segment
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/251—Learning process for intelligent management, e.g. learning user preferences for recommending movies
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/835—Generation of protective data, e.g. certificates
- H04N21/8358—Generation of protective data, e.g. certificates involving watermark
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2201/00—General purpose image data processing
- G06T2201/005—Image watermarking
- G06T2201/0061—Embedding of the watermark in each block of the image, e.g. segmented watermarking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2201/00—General purpose image data processing
- G06T2201/005—Image watermarking
- G06T2201/0202—Image watermarking whereby the quality of watermarked images is measured; Measuring quality or performance of watermarking methods; Balancing between quality and robustness
Definitions
- Watermarking includes a process of embedding data into an image signal.
- the embedded data may be non-visible and is embedded by applying a watermark signal, which contains the data, to the image signal.
- the watermark signal may be invisible to the human eye while the data is recoverable through other circuitry, such as a camera.
- FIG. 1 is a flow diagram illustrating an example method of generating a watermarked image signal with varied watermark strengths, in accordance with examples of the present disclosure.
- FIG. 2 is a block diagram illustrating an example computing device including non-transitory machine-readable storage medium, in accordance with examples of the present disclosure.
- FIG. 3 is a block diagram illustrating an example apparatus for generating a watermarked image signal, in accordance with examples of the present disclosure.
- FIG. 4 is a flow diagram illustrating an example method of generating a watermarked image signal having varied watermark strengths, in accordance with examples of the present disclosure.
- Digital watermarking is a process involving modification of an image signal to embed machine-readable data into the image signal.
- the image signal with the applied watermark signal may be processed to generate an image.
- the machine-readable data may be applied as a watermark signal that is non-visible or otherwise is imperceptible to a human, while being detectable through the use of other circuitry, such as by a camera on a mobile phone.
- the watermark signal may be applied to the image signal using a fixed set of parameters including a fixed watermark strength.
- a watermark strength refers to and/or includes the signal strength of the watermark signal as applied to an image signal.
- Applying the watermark signal at the fixed watermark strength may reduce the visual quality of the image generated from the watermarked image signal in particular regions of the image signal.
- a watermark signal applied to human skin in an image may be more visually perceptible to a human as compared to the watermark signal applied to the background of the image.
- Manual adjustment of the watermark strength in different regions of the image signal may be used to decrease the visual impact of the watermark on the image signal.
- the generated image may be segmented and a plurality of users may manually set different watermark strengths, which is a time consuming process.
- Examples of the present disclosure are directed to a watermarking process which applies a watermark signal to an image signal with varied watermark strengths based on a computer-generated assessment of image quality values of a watermarked version of the image signal, herein generally referred to as “a watermarked image signal”.
- the image quality values are assigned to different regions of the watermarked image signal based on image artifacts caused by the application of the watermark signal.
- Image artifacts refer to or include defects introduced to the image signal by the application of the watermark signal and which may include changes in image quality attributes, as further described herein.
- a deep learning model is used to automatically assess the watermarked image signal for trained features indicative of the image quality values.
- the image quality values are mapped to a plurality of varied watermark strengths and associated with the regions of the watermarked image signal.
- the watermark signal is adaptively applied at the varied watermark strengths, which may provide for better watermark concealment due to a reduced visual impact on the visual quality at respective regions of the watermarked image while maintaining recoverability of data in the watermark.
- FIG.1 is a flow diagram illustrating an example method of generating a watermarked image signal with varied watermark strengths, in accordance with examples of the present disclosure.
- a watermark signal refers to and/or includes a digital signal including machine-readable data to be embedded in an image signal.
- An image signal refers to and/or includes a digital signal which may be used to generate an image.
- the image signal may be printed or displayed to generate an image or an object with the image.
- the image signal may be used to generate a variety of different types of images.
- Example images include photographs, holograms, three-dimensional images, virtual reality or video images, and printed images such as three-dimensional objects, among other types of images.
- the method includes receiving a watermarked image signal including a watermark signal and an image signal.
- the watermarked image signal may include the image signal with the watermark signal applied at a first watermark strength.
- the watermarked image signal and the image signal, without the watermark applied, are received.
- the watermarked image signal and image signal are received, and the method further includes applying the watermark signal to the image signal at the first watermark strength among a range of watermark strengths via a watermark embedding process.
- An example watermark embedding process includes converting the machine-readable data into a watermark signal or otherwise receiving the watermark signal.
- the watermark signal is combined with the image signal, and optionally other signals, such as an orientation pattern or synchronization signal, to create the watermarked image signal.
- the process of combining the watermark signal with the image signal may be a linear or non-linear function.
- the watermark signal may be applied by modulating or altering signal samples in a spatial, frequency, or some other transform domain.
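As a rough illustration of the embedding step described above, the sketch below applies a watermark signal to an image signal by a simple additive combination in the spatial domain, once per strength in an input range. The array shapes, the additive rule, and the function and variable names are illustrative assumptions, not the embedding used by any particular watermark system.

```python
import numpy as np

def embed_watermark(image: np.ndarray, watermark: np.ndarray, strength: float) -> np.ndarray:
    """Combine a watermark signal with an image signal in the spatial domain.
    Here the combination is a simple linear (additive) rule; a real system may
    instead modulate samples in a frequency or other transform domain."""
    watermarked = image + strength * watermark
    return np.clip(watermarked, 0, 255)  # keep the result in an 8-bit signal range

# Stand-in signals, purely for illustration.
rng = np.random.default_rng(0)
image = rng.uniform(0, 255, size=(512, 512))           # image signal
watermark = rng.choice([-1.0, 1.0], size=(512, 512))   # watermark signal carrying the payload

# One watermarked image signal per strength in a received range of strengths.
strength_range = range(6, 11)
watermarked_signals = {s: embed_watermark(image, watermark, float(s)) for s in strength_range}
```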
- the method may further include receiving input that identifies the range of watermark strengths.
- the first watermark strength may include the highest watermark strength of the range, although examples are not so limited.
- the lowest watermark strength of the range may be set, for example, based on recoverability of the machine-readable data of the watermark signal.
- the method includes assigning an image quality value to each of a plurality of regions of the watermarked image signal based on a plurality of image artifacts in the watermarked image signal.
- the assessment may include or otherwise be based on a comparison of the watermarked image signal and the image signal.
- the plurality of image artifacts may include luminance and contrast changes between the watermarked image signal and a reference image signal, such as the image signal or another similar image.
- the image quality value of each of the plurality of regions may be indicative of a visual impact of the watermark signal on the image signal.
- a visual impact refers to and/or includes a visually noticeable or perceptible change in the image caused by application of the watermark signal to the image signal and which may degrade the visual quality of the image generated from the watermarked image signal.
- a visual quality refers to and/or includes a level of accuracy of the image signal that is used to generate an image, and which may include a weighted combination of image quality attributes of the image.
- Example image quality attributes include noise, sharpness, dynamic range, color accuracy, uniformity, chromatic aberration, optical distortions, luminance, contrast, and flare.
- the application of the watermark signal may cause the image artifacts, such as various changes and/or impacts to image quality attributes including luminance, contrast, and noise. Changes which are noticeable or perceivable to the human eye may have a greater visual impact and may reduce the visual quality of the image.
- the image quality value of each of the regions may be stored in a map that correlates each of the plurality of regions of the watermarked image signal to a corresponding image quality value.
- a higher image quality value may mean or indicate higher degradation of the visual quality or quality distance of the region of the watermarked image signal as compared to the image signal.
- a lower image quality value may mean or indicate lower degradation of the visual quality or quality distance of the region.
- examples are not so limited, however; other examples may include a higher image quality value being indicative of lower degradation of the visual quality or quality distance of the region, and a lower image quality value being indicative of higher degradation of the visual quality or quality distance of the region.
- the watermarked image signal may be assessed for the plurality of image artifacts using a deep learning model.
- Deep learning is a specialized area of machine learning and/or artificial intelligence that may be used in different areas, such as computer vision, speech recognition, and text translation.
- a deep learning model refers to or includes a trained machine learning model that has undergone a training process and may make inferences from received data.
- the deep learning model may be trained for an image quality assessment (IQA), sometimes herein referred to as an “IQA deep learning model”, and is used to define the image quality values.
- Example deep learning models include neural networks, such as convolutional neural networks and deep neural networks.
- the deep learning model may be trained according to known inputs and known outputs.
- the known inputs may include a plurality of images and the known outputs may include a score assigned to different regions of the plurality of images according to a human visual system (HVS) assessment.
- the deep learning model may be trained with datasets, such as publicly available datasets, for IQA to evaluate a set of features that may reduce an overall visual quality, such as artifacts and chromatic noise.
- Example databases include Laboratory for Image and Video Engineering (LIVE), Categorical Subjective Image Quality (CSIQ), and/or TID2013 for IQA.
- the deep learning model may be trained using different distorted images and original or undistorted images to define features to extract in a watermarked image signal when assessing for the image artifacts, such as luminance and contrast.
- the distorted images and original images may be evaluated by the deep learning model using the HVS assessment and a score is assigned to each image artifact based on the visual quality. The score is indicative of a human perception of the visual quality.
- the deep learning model has defined features to extract which are related to the image quality attributes and may be used to assess the watermarked image signal for noise and other image artifacts in the trained data.
- the watermarked image signal and image signal may be input to the deep learning model to evaluate the watermarked image signal for the features indicative of the image quality attributes.
- the deep learning model may provide the image quality value for each of the plurality of regions, which may be weighted, in some examples. More specifically, the deep learning model may extract features in the watermarked image signal and classify the features as types of image artifacts by assigning a score to each extracted feature. As a region may be associated with a plurality of extracted features, the deep learning model may regress the mean and standard deviation of the scores of the extracted features to provide the image quality value for each region.
- At 106, the method includes assigning a plurality of varied watermark strengths to the plurality of regions based on the image quality value assigned to each region. The plurality of varied watermark strengths may be within or include the range of watermark strengths.
- the varied watermark strengths may allow for recovering the watermark signal embedded in the image while minimizing the impact on the visual quality at particular regions of the image signal.
- the varied watermark strengths may preserve the visual quality of regions of the image signal where application of the watermark may cause a perceivable visual impact on the visual quality, and may preserve recoverability of the watermark signal in regions of the image signal where application of the watermark may cause a visual impact below a threshold, such as low and/or unperceivable impacts on the visual quality. Regions of the watermarked image signal exhibiting a visual impact from the watermark signal below the threshold may be assigned higher watermark strengths of the plurality of varied watermark strengths.
- Regions of the watermarked image signal exhibiting a visual impact above the threshold, such as a high visual impact from the watermark signal, may be assigned lower watermark strengths of the plurality of varied watermark strengths.
- the regions of the watermarked image signal may correspond to the same regions in the image signal, albeit the watermarked image signal has the embedded watermark signal. Accordingly, the regions are sometimes herein interchangeably referred to as “regions of the watermarked image signal” and “regions of the image signal”.
- assigning the plurality of varied watermark strengths may include assigning a first watermark strength of the plurality in first regions of the plurality of regions having lower defects and assigning a second watermark strength of the plurality in second regions of the plurality of regions having higher defects with respect to the first regions.
- the first watermark strength is higher than the second watermark strength.
- Assigning strengths may include associating the image quality value of each of the plurality of regions to intervals of the plurality of varied watermark strengths.
- the plurality of varied watermark strengths may be stored in a map that correlates each of the plurality of regions of the watermarked image signal to one of the plurality of varied watermark strengths.
- when assigning strengths, the image quality value assigned to each of the plurality of regions of the watermarked image signal may be normalized from a minimum value to a maximum value, and the normalized values are mapped to the plurality of varied watermark strengths using k intervals of the same size, which may be referred to as “an image-based linear distribution”.
- a Gaussian distribution is used to identify the k intervals of the watermark strengths, which may be different sizes.
- image quality value thresholds are selected and associated with different watermark strengths. Examples are not so limited, however, and a variety of methodologies may be used for selecting high watermark strengths for image quality values indicative of low degradation of the visual quality and low watermark strengths for image quality values indicative of high degradation of the image quality.
- An example image-based linear distribution for watermark strength selection is described below.
- the image quality values are normalized and k intervals of the image quality values of the same size are mapped to the watermark strengths.
- the k intervals are between the maximum image quality value, which is mapped to the minimum watermark strength, and the minimum image quality value, which is mapped to the maximum watermark strength.
- An image quality value of each region of the watermarked image signal is analyzed and used to compute a threshold for the watermark strength selection. To compute the threshold, a percentile is used to remove outliers of the image quality values.
- the 10th percentile may be the threshold for the maximum watermark strength, such that image quality values that are equal to or less than the 10th percentile of all image quality values receive the maximum watermark strength.
- the 90th percentile may be the threshold for the minimum watermark strength. Image quality values that are equal to or greater than the 90th percentile receive the minimum watermark strength.
- the distribution interval of the image quality values is subdivided equally by the number of intermediate strengths between the maximum and the minimum.
- a range of watermark strengths is set to six to ten. The image quality value assigned to each region is used to calculate the 10th and 90th percentiles.
- the 10th and 90th percentiles are 42 and 384, respectively. Regions of the watermarked image signal with an image quality value less than 42 are assigned a watermark strength of ten. Regions of the watermarked image signal with an image quality value greater than 384 are assigned a watermark strength of six. The image quality values between 42 and 384 are divided equally among the watermark strengths of seven, eight, and nine, resulting in an interval of 114 image quality values for each of the watermark strengths of seven, eight, and nine.
- examples are not limited to equally spaced intervals for the intermediate percentiles, however. Various examples are directed to a non-linear distribution, such as a Gaussian distribution.
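Before turning to the non-linear case, a minimal sketch of the image-based linear distribution just described is shown below, assuming the convention that a higher image quality value indicates more degradation. The 10th/90th percentile thresholds and the six-to-ten worked example come from the text; the function name and test data are hypothetical.

```python
import numpy as np

def linear_strength_map(quality: np.ndarray, s_min: int, s_max: int) -> np.ndarray:
    """Map per-region image quality values (the first map) to watermark strengths
    (the second map) using an image-based linear distribution: values at or below
    the 10th percentile get the maximum strength, values at or above the 90th
    percentile get the minimum strength, and the interval in between is divided
    equally among the intermediate strengths."""
    lo, hi = np.percentile(quality, [10, 90])
    strengths = np.arange(s_max, s_min - 1, -1)            # e.g. [10, 9, 8, 7, 6]
    edges = np.linspace(lo, hi, num=len(strengths) - 1)    # e.g. [42, 156, 270, 384]
    assigned = strengths[np.digitize(quality, edges)]
    assigned = np.where(quality <= lo, s_max, assigned)    # <= 10th percentile -> max strength
    assigned = np.where(quality >= hi, s_min, assigned)    # >= 90th percentile -> min strength
    return assigned

# Worked example from the text: strengths six to ten.
rng = np.random.default_rng(1)
quality_map = rng.uniform(0, 450, size=(24, 32))           # stand-in per-region quality values
strength_map = linear_strength_map(quality_map, s_min=6, s_max=10)
```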
- a Gaussian distribution may consider image quality values from a plurality of different image signals to obtain knowledge of image quality value variation and/or percentile-based threshold spacing for non-linear distribution ranges.
- a uniform watermark is applied to the plurality of different image signals at a first watermark strength of the range of varied watermark strengths and the plurality of watermarked image signals are assessed for image quality values.
- a map of the image quality values for the plurality of different watermarked image signals is generated and a Kernel Density Estimation (KDE) is used to fit the density distribution of the image quality values of the plurality of different images.
- KDE Kernel Density Estimation
- the image quality values may behave as a Gaussian distribution, and a Gaussian kernel is used when applying the KDE. With a fitted function, the intervals of watermark strengths may be divided according to a selected strength. Additionally, as with the linear distribution, a percentile may be used to remove the outliers, such as removing five percent of each of the maximum and minimum image quality values.
- the division of the ranges of the image quality values may be based on percentiles.
- a linear division of the percentile ranges may be applied to the function of KDE, which results in smaller image quality value ranges for watermark strengths where the density of the image quality values is relatively high and larger image quality value ranges for watermark strengths where the density of the image quality values is relatively low.
- the Gaussian distribution may allow for the watermark strengths to spread more evenly across the image as compared to the linear distribution.
- a range of watermark strengths is set to six to seven. If the KDE function for a watermark strength of seven is not estimated, a KDE is applied to a plurality of different watermarked image signals, as described above. If the KDE is estimated, it may be used.
- the thresholds of the percentile values are calculated linearly.
- the percentile range from the 5th to the 95th percentile is divided between the watermark strengths of the range.
- the 50th percentile may be used to separate the watermark strengths of six and seven.
- a range of watermark strengths is set to five to seven.
- the range between the 5th percentile and the 95th percentile is divided for the three watermark strengths, resulting in use of the 35th and 65th percentiles for the watermark strength selection.
- the image quality values that are less than the 35th percentile are assigned the watermark strength of five, while the image quality values between the 35th percentile and the 65th percentile are assigned the watermark strength of six and the image quality values greater than the 65th percentile are assigned the watermark strength of seven.
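The sketch below follows that percentile-based scheme under a few stated assumptions: quality values pooled from several watermarked image signals are available, scipy's Gaussian kernel density estimate stands in for the fitted distribution, and the quantiles are approximated by resampling from the fitted density. The direction of the final mapping (here, lower quality values receive lower strengths, matching the five-to-seven example above) depends on which image-quality-value convention is in use.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_strength_thresholds(pooled_quality: np.ndarray, s_min: int, s_max: int,
                            low_pct: float = 5.0, high_pct: float = 95.0) -> np.ndarray:
    """Derive image-quality-value thresholds for a range of watermark strengths.
    A Gaussian kernel density estimate is fitted to quality values pooled from a
    plurality of watermarked image signals, the outermost five percent at each end
    are treated as outliers, and the remaining percentile range is divided linearly
    among the strengths (e.g. strengths five to seven -> 35th and 65th percentiles)."""
    n_strengths = s_max - s_min + 1
    kde = gaussian_kde(pooled_quality)                   # Gaussian kernel by default
    sample = kde.resample(100_000).ravel()               # approximate the fitted distribution
    interior_pcts = np.linspace(low_pct, high_pct, n_strengths + 1)[1:-1]
    return np.percentile(sample, interior_pcts)

def kde_strength_map(quality: np.ndarray, thresholds: np.ndarray,
                     s_min: int, s_max: int) -> np.ndarray:
    """Assign a strength to each region; lower quality values fall below the lower
    thresholds and therefore receive the lower strengths, as in the worked example."""
    strengths = np.arange(s_min, s_max + 1)              # e.g. [5, 6, 7]
    return strengths[np.digitize(quality, thresholds)]

# Illustration with made-up pooled values and a five-to-seven strength range.
rng = np.random.default_rng(2)
pooled = rng.normal(loc=200.0, scale=60.0, size=5000)    # quality values from many images
thresholds = kde_strength_thresholds(pooled, s_min=5, s_max=7)
strength_map = kde_strength_map(rng.normal(200.0, 60.0, size=(24, 32)), thresholds, 5, 7)
```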
- the varied watermark strengths may be refined and/or revised.
- the method may include identifying adjacent regions of the plurality of regions that are associated with different watermark strengths of the plurality of varied watermark strengths, and adjusting the different watermark strengths of portions of the adjacent regions.
- color regions may be identified and used to adjust the watermark strengths, as further described herein.
- the method further includes generating a revised watermarked image signal by applying the watermark signal to the image signal at the plurality of varied watermark strengths.
- the generation may include a single application of watermark signal at the plurality of varied watermark strengths or multiple applications.
- each of the multiple applications includes the watermark signal being applied at a different strength of the plurality of varied watermark strengths.
- the method may further include combining different portions of the multiple watermarked image signals to form the revised watermarked image signal, as further illustrated and described herein.
- the revised watermarked image signal has the watermark signal applied with a respectively higher watermark strength in regions where the watermark signal may not greatly affect the visual quality and applied with a respectively lower watermark strength in other regions where the watermark signal is more visually perceptible.
- FIG. 2 is a block diagram illustrating an example computing device that includes non-transitory machine-readable storage medium, in accordance with examples of the present disclosure.
- the computing device includes a processor 212 and a machine-readable storage medium 214 storing a set of machine-executable instructions 216, 218, 220 that are executable by the processor 212.
- the processor 212 is communicatively coupled to the machine-readable storage medium 214 through a communication path 211.
- the machine-readable storage medium 214 may, for example, include read-only memory (ROM), random-access memory (RAM), electrically erasable programmable read-only memory (EEPROM), Flash memory, a solid state drive, and/or discrete data register sets.
- the processor 212 may include a central processing unit (CPU) or another suitable processor.
- the processor 212 may execute instructions 216 to assess a plurality of image artifacts in a plurality of regions of a watermarked image signal.
- the processor 212 may assess the plurality of image artifacts by comparing the image signal to the watermarked image signal and identifying similarities and/or differences.
- the processor 212 executes instructions 218 to, based on the plurality of image artifacts, generate a first map that correlates an image quality value to each of the plurality of regions.
- the instructions 216, 218 may include use or application of a deep learning model to assess the plurality of image artifacts. As described above, the deep learning model is trained to extract features associated with the image artifacts in the watermarked image signal and to assign a score to each extracted feature. The scores of each region are regressed to generate the image quality value for each of the plurality of regions.
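As a concrete, simplified stand-in for instructions 216 and 218, the sketch below scores each 32x32 region by how much the watermark changed local luminance and contrast relative to the original image signal and stores the result as a first map. In the disclosure these values come from a trained IQA deep learning model, so the hand-written metric and the function name here are illustrative placeholders only.

```python
import numpy as np

def build_first_map(reference: np.ndarray, watermarked: np.ndarray, block: int = 32) -> np.ndarray:
    """Assign an image quality value to each block-sized region of a (grayscale)
    watermarked image signal; higher values indicate more degradation.  A trained
    IQA deep learning model would normally produce these values by extracting and
    scoring features per region; a luminance/contrast difference is used instead."""
    rows, cols = reference.shape[0] // block, reference.shape[1] // block
    quality = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            ref = reference[r * block:(r + 1) * block, c * block:(c + 1) * block]
            wmk = watermarked[r * block:(r + 1) * block, c * block:(c + 1) * block]
            luminance_change = abs(ref.mean() - wmk.mean())
            contrast_change = abs(ref.std() - wmk.std())
            quality[r, c] = luminance_change + contrast_change
    return quality
```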
- the processor 212 further executes instructions 220 to generate a second map that correlates a plurality of varied watermark strengths to the plurality of regions based on the first map.
- the instructions 220 to generate the second map may be executed to associate the image quality value of each of the plurality of regions to intervals of the plurality of varied watermark strengths.
- the processor 212 may further adjust the second map.
- the second map may have a smaller resolution than the image signal as the first map is generated by the deep learning model assessing the plurality of regions.
- the deep learning model may evaluate regions of the watermarked image signal that are greater than a 1x1 pixel region, such as 32x32 pixel regions.
- applying the watermark signal to the image signal at the varied watermark strengths in the second map may cause emergence of junction lines in the revised watermarked image signal that are between the regions of the watermarked image signal with the different watermark strengths.
- the junction lines may emerge, for example, when the region of the image signal is flat and/or where changes on image pixels are more perceptible around edges that split distinct colors.
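A small sketch of the resolution mismatch described above, assuming the second map was produced over 32x32 regions: each block-level strength is simply repeated over its block to reach pixel resolution, and the resulting block boundaries are where junction lines can become visible.

```python
import numpy as np

def expand_strength_map(block_strengths: np.ndarray, block: int = 32) -> np.ndarray:
    """Expand a block-resolution second map to pixel resolution by repeating each
    strength over its block; abrupt strength changes between neighboring blocks
    are where junction lines may emerge."""
    return np.repeat(np.repeat(block_strengths, block, axis=0), block, axis=1)
```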
- the processor 212 may execute instructions to refine the second map.
- the image signal is split into regions of color, and the watermark strengths are distributed among the respective regions of similar color, in order to mitigate or avoid application of different watermark strengths in the same color region and to allow boundaries to be maintained.
- refining the second map may include applying color quantization to the image signal, for example quantizing each color channel to five levels to reduce the number of colors, and labeling each connected color region with the most frequently occurring watermark strength in the color region.
- the refined second map may thereby include adjustments of watermark strengths in the same color regions to the most frequently occurring watermark strength.
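The refinement step might look roughly like the sketch below, which quantizes each color channel to five levels, labels connected regions of the same quantized color, and overwrites each region with its most frequent watermark strength. The helper names and the use of scipy for connected-component labeling are assumptions, not part of the disclosure.

```python
import numpy as np
from scipy import ndimage

def refine_strength_map(image_rgb: np.ndarray, pixel_strengths: np.ndarray,
                        levels: int = 5) -> np.ndarray:
    """Give every connected region of similar color a single watermark strength:
    the most frequently occurring strength inside that region."""
    # Quantize each color channel to `levels` values (reducing the number of colors).
    quantized = (image_rgb.astype(np.float64) / 256.0 * levels).astype(int)
    # Collapse the quantized channels into one color label per pixel.
    color_ids = (quantized[..., 0] * levels + quantized[..., 1]) * levels + quantized[..., 2]
    refined = pixel_strengths.copy()
    for color in np.unique(color_ids):
        components, count = ndimage.label(color_ids == color)   # connected color regions
        for comp in range(1, count + 1):
            mask = components == comp
            values, counts = np.unique(pixel_strengths[mask], return_counts=True)
            refined[mask] = values[np.argmax(counts)]            # most frequent strength wins
    return refined
```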
- the processor 212 may execute instructions to identify a plurality of objects in the watermarked image signal and based on the objects, adjust the varied watermark strengths. For example, the processor 212 may identify a first object and a second object within the watermarked image signal, identify adjacent edges between the first object and the second object and identify regions of the plurality of regions associated with the adjacent edges. The processor 212 may adjust the watermark strengths of portions of the regions associated with the adjacent edges in the second map.
- the objects may be identified using, for example, another deep learning model, which is trained to identify objects in image signals based on publicly available datasets. Although the above describes refining the second map, examples are not so limited and may include an adjustment of the image quality value assigned to each region of the watermarked image signal in the first map.
- the processor 212 may output the second map to external circuitry for application of the watermark signal or may generate the revised watermarked image signal using the second map.
- the processor 212 may apply the watermark signal to the image signal at the plurality of varied watermark strengths, as previously described.
- FIG. 3 is a block diagram illustrating an example apparatus for generating a watermarked image signal, in accordance with examples of the present disclosure.
- the apparatus 330 includes a memory 338 to store a set of machine-readable instructions and a processor 336.
- the processor 336 may execute the set of machine-readable instructions to receive the watermarked image signal 332 and a reference image signal 334.
- the reference image signal 334 may include the image signal, without application of the watermark signal, in various examples.
- the reference image signal includes a different, but similar, type of image signal without application of a watermark signal.
- a similar type of image signal may include a similar scenery or subject in a photograph, such as two human portraits or two ocean scenes.
- receiving the watermarked image signal 332 may include receiving the image signal and the watermark signal, and the processor 336 may apply the watermark signal to the image signal at a first watermark strength among the plurality of varied watermark strengths to generate the watermarked image signal 332.
- the watermark signal is applied to the image signal by another processor and/or apparatus, and the watermarked image signal 332 is communicated to the apparatus 330.
- the processor 336 may identify a plurality of image artifacts in a plurality of regions of the watermarked image signal 332, and based on the identified plurality of image artifacts, generate a first map including a plurality of image quality values and the plurality of regions. In the first map, each of the plurality of regions correspond to one of the plurality of image quality values.
- the image artifacts may include luminance and contrast changes between the watermarked image signal 332 and the reference image signal 334. However examples are not so limited, and the image artifacts may include or be associated with various other defects and/or image quality attribute changes between the watermarked image signal 332 and the reference image signal 334.
- the plurality of image artifacts and plurality of image quality values may be determined using a deep learning model 340.
- the deep learning model 340 may include an IQA deep learning model which is trained using public datasets for IQA to evaluate features that may reduce an overall visual quality.
- the IQA deep learning model 340 may assess the watermarked image signal 332 against the reference image signal 334 for features associated with the plurality of image artifacts, such as luminance and contrast similarities and differences, between the image signals 332, 334.
- the scores of the image artifacts are regressed to provide the image quality values.
- the plurality of image quality values may be weighted. Different regions of the watermarked image signal 332 may have different levels of visual importance. Image artifacts and other impacts to the visual quality of the image in areas of low visual importance may be less noticeable than areas of higher visual importance.
- the processor 336 may estimate a plurality of saliency values for the plurality of regions, and weigh the plurality of image quality values by the plurality of saliency values.
- the first map includes the weighted plurality of image quality values.
- the saliency values are indicative of the visual importance of the region.
- a visual importance refers to or includes a level of visual attention the region receives from human viewers and which may be assessed using IQA, the public datasets, and/or the deep learning model 340.
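A minimal sketch of that weighting, assuming a per-region saliency map has already been estimated by some other model or heuristic: quality values are scaled by normalized saliency so that degradation in visually important regions carries more weight in the first map.

```python
import numpy as np

def weight_quality_by_saliency(quality: np.ndarray, saliency: np.ndarray) -> np.ndarray:
    """Weight per-region image quality values by per-region saliency values.
    Regions of low visual importance contribute smaller weighted values, so the
    artifacts they contain count for less when strengths are later assigned."""
    normalized = saliency / saliency.max()   # visual importance scaled to [0, 1]
    return quality * normalized
```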
- the processor 336 may associate the plurality of image quality values with a plurality of varied watermark strengths and generate, using the first map, a second map including the plurality of varied watermark strengths and the plurality of regions. In the second map, each of the plurality regions correspond to one of the plurality of varied watermark strengths.
- the varied watermark strengths may be applied to fixed intervals of the image quality values or varied intervals of the image quality values to map the varied watermark strengths to the image quality values in the first map, such as by using a linear distribution or non-linear distribution.
- the processor 336 may further apply the watermark signal to the image signal at the plurality of varied watermark strengths using the second map to generate a revised watermarked image signal 342.
- generating the revised watermarked image signal 342 includes generating a plurality of watermarked image signals, including the watermarked image signal 332, using the image signal and the watermark signal. Each of the plurality of watermarked image signals have the watermark signal applied to the image signal at a watermark strength of the plurality of varied watermark strengths.
- the processor 336 may combine the plurality of watermarked image signals into the revised watermarked image signal of the plurality of varied watermarked strengths. Although examples are not so limited and the watermark signal may be directly applied at the varied strengths.
- FIG. 4 is a flow diagram illustrating an example method of generating a watermarked image signal having varied watermark strengths, in accordance with examples of the present disclosure.
- an IQA deep learning model may be used to assess the visual quality of a watermarked image signal and to provide a plurality of image quality values as a first map 460.
- the plurality of image quality values are assessed to apply varied watermark strengths.
- the watermark signal is applied at a higher watermark strength in regions where the watermark signal is less noticeable and at a lower watermark strength in regions where the watermark signal is noticeable.
- the IQA deep learning model may provide for balance between the visual quality and watermark strength such that the watermark may be recovered.
- noticeability of the watermark signal refers to or includes the level of human perception of the visual impact that the watermark signal has on the image signal.
- the method may include receiving inputs of an image signal 450 and a plurality of input parameters 452.
- the input parameters 452 may include the range of watermark strengths and the watermark signal, and optionally other parameters which set the application of the watermark, such as the pixels per inch (PPI), the watermark pixels per inch (WPI), and/or other metrics.
- N watermarked image signal(s) is or are generated by applying the watermark signal to the image signal 450 based on the range of watermark strengths and, optionally, based on other input parameters 452.
- N is equal to one
- the watermarked image signal 456 is generated by applying the watermark signal to the image signal 450 at the highest watermark strength of the range.
- N is a plurality, and a plurality of watermarked image signals 455 are generated.
- N watermarked image signals are generated, where N is the number of watermark strengths in a set X and the set X includes the range of watermark strengths.
- a watermarked image signal is generated for each watermark strength value x ∈ X to generate the plurality of watermarked image signals 455.
- the range of watermark strengths is not limited to that illustrated by FIG. 4, and the input parameters 452 are not limited to those listed above.
- different types of watermark systems may be used to generate the watermarked image signal 456 or the plurality of watermarked image signals 455, such as a Digimarc system, among other types of watermark systems.
- the watermarked image signal 456 may be used to generate a first map, at 458.
- the watermarked image signal 456 may have the watermark signal applied at the highest strength of the range of watermark strengths.
- the watermarked image signal 456 may be assessed by an IQA technique which returns a metric correlated to a level of human perception of the watermark and the visual impact to the visual quality caused by the watermark strength, which is referred to herein as the “image quality values”.
- a deep learning model may be used that is trained by datasets to evaluate IQA.
- the IQA technique may allow for automatic and accurate computations of image quality values that correlate with an evaluation performed by HVS.
- the image quality values which are artificially generated by a machine, may be used to measure the visual impact the watermark has on the image, and serve as a base to select the watermark strengths to apply which may have less visual impact while allowing for recognition of the watermark by other circuitry.
- a specific example of an IQA deep learning model includes a Weighted Average Deep Image Quality Measure for full reference (FR) IQA (WaDIQaM-FR).
- the WaDIQaM-FR model is trained using public databases for IQA.
- the databases include distorted images and the original, non-distorted images.
- the distorted images are evaluated using the original images to define a plurality of features to extract that reduce the visual quality and are associated with the image artifacts, such as artifacts and chromatic noise.
- Each feature is evaluated and assigned a score of quality based on an HVS assessment, and the features of a respective region may be regressed to define the image quality score for the region.
- the distorted images in the dataset are evaluated by users through the HVS assessment. Image quality scores for the distorted images are calculated based on the evaluation by the users and used to train the IQA deep learning model.
- the output of the IQA deep learning model is the first map 460 that includes the image quality value for each region of the watermarked image signal 456.
- Each image quality value corresponds to the evaluation of an N × M region of the watermarked image signal 456.
- a fully constructed map may be composed by the IQA deep learning model or the IQA deep learning model may return one global metric.
- the global metric may be used to evaluate each region locally.
- the image quality values are stored in the first map 460 according to the positions or indices of the corresponding regions in the watermarked image signal 456.
- Examples are not limited to a WaDIQaM-FR model and may include other types of deep learning models.
- Other example deep learning models include other types of FR IQA deep learning models, reduced reference (RR) IQA deep learning models, and no reference (NR) IQA deep learning models.
- a DeepQA model may be used.
- the first map 460 may be used to define image quality value intervals and generate a second map 464 that includes assigned varied watermark strengths, at 462.
- Each image quality value of the first map 460 is assigned to a watermark strength that visually impacts the region of the image signal 450 less than or equal to the visual impact caused by the application of the watermark signal at the highest watermark strength of the range.
- the varied watermark strength may not change the visual quality of the region and, at best, improves the visual quality as compared to the watermarked image signal 456.
- the varied watermark strengths are stored in a second map 464.
- the first map 460 and/or the second map 464 may be refined to improve transitions between regions, such as color regions, segmentations or object classification.
- a refined second map 468 is generated to reduce junction lines between respective regions, as described above.
- Other refinements may be applied such as other forms of map processing and value distribution to improve the final image quality.
- the second map 464 and/or the refined second map 468 may be used to generate the revised watermarked image signal 472 by combining the N watermarked image signal(s), at 470.
- combining the N watermarked image signal may include applying the watermark signal at the varied watermark strengths defined in the second map 464 or the refined second map 468 to generate the refined watermarked image signal 472.
- the plurality of watermarked image signals 455 are combined based on the second map 464 or the refined second map 468 to generate the revised watermarked image signal 472.
- the revised watermarked image signal 472 may be composed of different portions of each of the plurality of watermarked image signals 455, where the strength assignment in each region complies with the watermark strength value and position identified in the second map 464 or in the refined second map 468.
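A sketch of this composition step, under the assumption that one watermarked image signal exists per strength and that the (refined) second map has already been expanded to pixel resolution: for every position, the output takes the pixel from the watermarked signal whose strength the map assigns to that location. The function and variable names are illustrative only.

```python
import numpy as np

def compose_revised_signal(watermarked_by_strength: dict[int, np.ndarray],
                           pixel_strength_map: np.ndarray) -> np.ndarray:
    """Combine a plurality of watermarked image signals (one per strength) into a
    revised watermarked image signal, taking each region from the signal whose
    strength matches the second map at that position."""
    revised = next(iter(watermarked_by_strength.values())).copy()
    for strength, signal in watermarked_by_strength.items():
        mask = pixel_strength_map == strength   # regions assigned this strength
        revised[mask] = signal[mask]
    return revised
```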
- Examples of the present disclosure are directed to methods, computing devices, and apparatus which may generate a watermarked image having a balance between perceived visual quality and recoverability of the watermark signal. The watermarking process varies the watermark strength on the spatial domain, and may be applied to any watermark system and any type of watermark signals.
- a deep learning model may be used to guide the application of the watermark strength, which allows for less image artifacts or noise perceived by a human and higher intensity of the watermark signal in regions of the image where the watermark signal is less noticeable according to the HVS.
- the visual impact of the watermark signal may be reduced while improving the recognition rate of the applied watermark signal in the image.
Abstract
An example method includes receiving a watermarked image signal including a watermark signal and an image signal, assigning an image quality value to each of a plurality of regions of the watermarked image signal based on a plurality of image artifacts, and assigning a plurality of varied watermark strengths to the plurality of regions based on the image quality value assigned to each region. The method further includes generating a revised watermarked image signal by applying the watermark signal to the image signal at the plurality of varied watermark strengths.
Description
WATERMARKED IMAGE SIGNAL WITH VARIED WATERMARK STRENGTHS
Background
[0001] Watermarking includes a process of embedding data into an image signal. The embedded data may be non-visible and is embedded by applying a watermark signal, which contains the data, to the image signal. The watermark signal may be invisible to the human eye while the data is recoverable through other circuitry, such as a camera.
Brief Description of the Drawings
[0002] FIG. 1 is a flow diagram illustrating an example method of generating a watermarked image signal with varied watermark strengths, in accordance with examples of the present disclosure.
[0003] FIG. 2 is a block diagram illustrating an example computing device including non-transitory machine-readable storage medium, in accordance with examples of the present disclosure.
[0004] FIG. 3 is a block diagram illustrating an example apparatus for generating a watermarked image signal, in accordance with examples of the present disclosure.
[0005] FIG. 4 is a flow diagram illustrating an example method of generating a watermarked image signal having varied watermark strengths, in accordance with examples of the present disclosure.
Detailed Description
[0006] In the following detailed description, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration, specific examples in which the disclosure may be practiced. It is to be understood that other examples may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. The following detailed description, therefore, is not to be taken in a limiting sense, and the scope of the present disclosure is defined by the appended claims. It is to be understood that features of the various examples described herein may be combined, in part or whole, with each other, unless specifically noted otherwise.
[0007] Digital watermarking is a process involving modification of an image signal to embed machine-readable data into the image signal. The image signal with the applied watermark signal may be processed to generate an image. The machine-readable data may be applied as a watermark signal that is non-visible or otherwise is imperceptible to a human, while being detectable through the use of other circuitry, such as by a camera on a mobile phone. The watermark signal may be applied to the image signal using a fixed set of parameters including a fixed watermark strength. As used herein, a watermark strength refers to and/or includes the signal strength of the watermark signal as applied to an image signal. Applying the watermark signal at the fixed watermark strength may reduce the visual quality of the image generated from the watermarked image signal in particular regions of the image signal. As a specific example, a watermark signal applied to human skin in an image may be more visually perceptible to a human as compared to the watermark signal applied to the background of the image.
[0008] Manual adjustment of the watermark strength in different regions of the image signal may be used to decrease the visual impact of the watermark on the image signal. For example, the generated image may be segmented and a plurality of users may manually set different watermark strengths, which is a time consuming process. Examples of the present disclosure are directed to a
watermarking process which applies a watermark signal to an image signal with varied watermark strengths based on a computer-generated assessment of image quality values of a watermarked version of the image signal, herein generally referred to as “a watermarked image signal”. The image quality values are assigned to different regions of the watermarked image signal based on image artifacts caused by the application of the watermark signal. Image artifacts, as used herein, refer to or include defects introduced to the image signal by the application of the watermark signal and which may include changes in image quality attributes, as further described herein.
[0009] In specific examples, a deep learning model is used to automatically assess the watermarked image signal for trained features indicative of the image quality values. The image quality values are mapped to a plurality of varied watermark strengths and associated with the regions of the watermarked image signal. The watermark signal is adaptively applied at the varied watermark strengths, which may provide for better watermark concealment due to a reduced visual impact on the visual quality at respective regions of the watermarked image while maintaining recoverability of data in the watermark.
[0010] FIG. 1 is a flow diagram illustrating an example method of generating a watermarked image signal with varied watermark strengths, in accordance with examples of the present disclosure. The watermark strengths may be varied, for example, to allow for a trade-off between perceptibility of the watermark signal in the image and recoverability of the data embedded by the watermark signal.
[0011] As noted above, a watermark signal refers to and/or includes a digital signal including machine-readable data to be embedded in an image signal. An image signal refers to and/or includes a digital signal which may be used to generate an image. The image signal may be printed or displayed to generate an image or an object with the image. The image signal may be used to generate a variety of different types of images. Example images include photographs, holograms, three-dimensional images, virtual reality or video images, and printed images such as three-dimensional objects, among other types of images.
[0012] At 102, the method includes receiving a watermarked image signal including a watermark signal and an image signal. The watermarked image signal may include the image signal with the watermark signal applied at a first watermark strength. In some examples, the watermarked image signal and the image signal, without the watermark applied, are received.
[0013] In other specific examples, the watermarked image signal and image signal are received, and the method further includes applying the watermark signal to the image signal at the first watermark strength among a range of watermark strengths via a watermark embedding process. An example watermark embedding process includes converting the machine-readable data into a watermark signal or otherwise receiving the watermark signal. The watermark signal is combined with the image signal, and optionally other signals, such as an orientation pattern or synchronization signal, to create the watermarked image signal. The process of combining the watermark signal with the image signal may be a linear or non-linear function. The watermark signal may be applied by modulating or altering signal samples in a spatial, frequency, or some other transform domain.
[0014] In some examples, the method may further include receiving input that identifies the range of watermark strengths. The first watermark strength may include the highest watermark strength of the range, although examples are not so limited. The lowest watermark strength of the range may be set, for example, based on recoverability of the machine-readable data of the watermark signal.
[0015] At 104, the method includes assigning an image quality value to each of a plurality of regions of the watermarked image signal based on a plurality of image artifacts in the watermarked image signal. The assessment may include or otherwise be based on a comparison of the watermarked image signal and the image signal. The plurality of image artifacts may include luminance and contrast changes between the watermarked image signal and a reference image signal, such as the image signal or another similar image.
[0016] The image quality value of each of the plurality of regions may be indicative of a visual impact of the watermark signal on the image signal. A visual impact, as used herein, refers to and/or includes a visually noticeable or perceptible change in the image caused by application of the watermark signal to the image signal and which may degrade the visual quality of the image generated from the watermarked image signal. A visual quality refers to and/or includes a level of accuracy of the image signal that is used to generate an image, and which may include a weighted combination of image quality attributes of the image. Example image quality attributes include noise, sharpness, dynamic range, color accuracy, uniformity, chromatic aberration, optical distortions, luminance, contrast, and flare. The application of the watermark signal may cause the image artifacts, such as various changes and/or impacts to image quality attributes including luminance, contrast, and noise. Changes which are noticeable or perceivable to the human eye may have a greater visual impact and may reduce the visual quality of the image.
[0017] The image quality value of each of the regions may be stored in a map that correlates each of the plurality of regions of the watermarked image signal to a corresponding image quality value. A higher image quality value may mean or indicate higher degradation of the visual quality or quality distance of the region of the watermarked image signal as compared to the image signal. A lower image quality value may mean or indicate lower degradation of the visual quality or quality distance of the region. Examples are not so limited, however; other examples may include a higher image quality value being indicative of lower degradation of the visual quality or quality distance of the region, and a lower image quality value being indicative of higher degradation of the visual quality or quality distance of the region.
[0018] The watermarked image signal may be assessed for the plurality of image artifacts using a deep learning model. Deep learning is a specialized area of machine learning and/or artificial intelligence that may be used in different areas, such as computer vision, speech recognition, and text translation. A deep learning model, as used herein, refers to or includes a trained machine learning model that has undergone a training process and may make inferences from received data. The deep learning model may be trained for an image quality assessment (IQA), sometimes herein referred to as an “IQA deep learning model”, and is used to define the image quality values. Example deep learning
models include neural networks, such as convolutional neural networks and deep neural networks.

[0019] The deep learning model may be trained according to known inputs and known outputs. The known inputs may include a plurality of images and the known outputs may include a score assigned to different regions of the plurality of images according to a human visual system (HVS) assessment. For example, the deep learning model may be trained with datasets, such as publicly available datasets, for IQA to evaluate a set of features that may reduce an overall visual quality, such as artifacts and chromatic noise. Example databases include the Laboratory for Image and Video Engineering (LIVE), Categorical Subjective Image Quality (CSIQ), and/or TID2013 databases for IQA. In specific examples, the deep learning model may be trained using different distorted images and original or undistorted images to define features to extract in a watermarked image signal when assessing for the image artifacts, such as luminance and contrast. The distorted images and original images may be evaluated by the deep learning model using the HVS assessment and a score is assigned to each image artifact based on the visual quality. The score is indicative of a human perception of the visual quality. As the deep learning model is trained with different types of image artifacts, the deep learning model has defined features to extract which are related to the image quality attributes and may be used to assess the watermarked image signal for noise and other image artifacts reflected in the training data.
[0020] The watermarked image signal and image signal may be input to the deep learning model to evaluate the watermarked image signal for the features indicative of the image quality attributes. The deep learning model may provide the image quality value for each of the plurality of regions, which may be weighted, in some examples. More specifically, the deep learning model may extract features in the watermarked image signal and classify the features as types of image artifacts by assigning a score to the extracted feature. As a region may be associated with a plurality of extracted features, the deep learning model may regress the mean and standard deviations of scores of the extracted features to provide the image quality value for each region.
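The following sketch shows one way the per-region scoring just described could be organized, assuming 32x32 pixel regions and a caller-supplied `score_region_features` callable that stands in for the trained IQA deep learning model; the 0.7/0.3 weights are placeholders for the model's learned regression head, not values from the disclosure.

```python
import numpy as np

def build_first_map(watermarked: np.ndarray, reference: np.ndarray,
                    score_region_features, region: int = 32) -> np.ndarray:
    """Build the first map: one image quality value per region of the
    watermarked image signal. `score_region_features` stands in for the
    trained IQA deep learning model and returns a 1-D array of per-feature
    scores for one region pair."""
    rows, cols = watermarked.shape[0] // region, watermarked.shape[1] // region
    quality_map = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            win = (slice(r * region, (r + 1) * region),
                   slice(c * region, (c + 1) * region))
            scores = score_region_features(watermarked[win], reference[win])
            # Regress the mean and standard deviation of the feature scores
            # into one image quality value (weights here are placeholders for
            # the model's learned regression head).
            quality_map[r, c] = 0.7 * np.mean(scores) + 0.3 * np.std(scores)
    return quality_map
```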
[0021] At 106, the method includes assigning a plurality of varied watermark strengths to the plurality of regions based on the image quality value assigned to each region. The plurality of varied watermark strengths may be within or include the range of watermark strengths.
[0022] The varied watermark strengths may allow for recovering the watermark signal embedded in the image while minimizing the impact on the visual quality at particular regions of the image signal. For example, the varied watermark strengths may preserve the visual quality of regions of the image signal where application of the watermark may cause a perceivable visual impact on the visual quality, and may preserve recoverability of the watermark signal in regions of the image signal where application of the watermark may cause a visual impact below a threshold, such as low and/or imperceptible impacts on the visual quality. Regions of the watermarked image signal exhibiting a visual impact from the watermark signal below the threshold may be assigned higher watermark strengths of the plurality of varied watermark strengths. Regions of the watermarked image signal exhibiting a visual impact above the threshold, such as a high visual impact from the watermark signal, may be assigned lower watermark strengths of the plurality of varied watermark strengths. The regions of the watermarked image signal may correspond to the same regions in the image signal, although the watermarked image signal has the embedded watermark signal. Accordingly, the regions are sometimes herein interchangeably referred to as “regions of the watermarked image signal” and “regions of the image signal”.
[0023] As a specific example, assigning the plurality of varied watermark strengths may include assigning a first watermark strength of the plurality in first regions of the plurality of regions having lower defects and assigning a second watermark strength of the plurality in second regions of the plurality of regions having higher defects with respect to the first regions. The first watermark strength is higher than the second watermark strength.
[0024] Assigning strengths may include associating the image quality value of each of the plurality of regions to intervals of the plurality of varied watermark strengths. For example, the plurality of varied watermark strengths may be
stored in a map that correlates each of the plurality of regions of the watermarked image signal to one of the plurality of varied watermark strengths.

[0025] As an example of assigning strengths, the image quality value assigned to each of a plurality of regions of the watermarked image signal may be normalized from a minimum value to a maximum value, and the normalized values are mapped to the plurality of varied watermark strengths over k intervals of the same size, which may be referred to as “an image-based linear distribution”. As another example, a Gaussian distribution is used to identify the k intervals of the watermark strengths, which may be of different sizes. For either of the examples, image quality value thresholds are selected and associated with different watermark strengths. Examples are not so limited, however, and a variety of methodologies may be used for selecting high watermark strengths for image quality values indicative of low degradation of the visual quality and low watermark strengths for image quality values indicative of high degradation of the visual quality.

[0026] An example image-based linear distribution for watermark strength selection is described below. For the linear distribution, the image quality values are normalized and k intervals of the image quality values of the same size are mapped to the watermark strengths. The k intervals are between the minimum image quality value, which is mapped to the maximum watermark strength, and the maximum image quality value, which is mapped to the minimum watermark strength. An image quality value of each region of the watermarked image signal is analyzed and used to compute a threshold for the watermark strength selection. To compute the threshold, a percentile is used to remove outliers of the image quality values. The 10th percentile may be the threshold for the maximum watermark strength, such that image quality values that are equal to or less than the 10th percentile of all image quality values receive the maximum watermark strength. Similarly, the 90th percentile may be the threshold for the minimum watermark strength. Image quality values that are equal to or greater than the 90th percentile receive the minimum watermark strength. For watermark strengths between the minimum and maximum watermark strengths, the distribution interval of the image quality values is subdivided equally by the number of intermediate strengths between the maximum and the minimum.

[0027] As a more specific image-based linear distribution example, a range of watermark strengths is set to six to ten. The image quality value assigned to each region is used to calculate the 10th and 90th percentiles. In the example, the 10th and 90th percentiles are 42 and 384, respectively. Regions of the watermarked image signal with an image quality value less than 42 are assigned a watermark strength of ten. Regions of the watermarked image signal with an image quality value greater than 384 are assigned a watermark strength of six. The image quality values between 42 and 384 are divided equally between the watermark strengths of seven, eight, and nine, resulting in 114 image quality values for each watermark strength of seven, eight, and nine.

[0028] Examples are not limited, however, to equally spaced intervals for the intermediate percentiles. Various examples are directed to a non-linear distribution, such as a Gaussian distribution. A Gaussian distribution may consider image quality values from a plurality of different image signals to obtain knowledge of image quality value variation and/or percentile-based threshold spacing for non-linear distribution ranges. A uniform watermark is applied to the plurality of different image signals at a first watermark strength of the range of varied watermark strengths and the plurality of watermarked image signals are assessed for image quality values. A map of the image quality values for the plurality of different watermarked image signals is generated and a Kernel Density Estimation (KDE) is used to fit the density distribution of the image quality values of the plurality of different images. The image quality values may behave as a Gaussian distribution, and a Gaussian kernel is used when applying the KDE. With a fitted function, the intervals of watermark strengths may be divided according to a selected strength. Additionally, as with the linear distribution, a percentile may be used to remove the outliers, such as removing five percent of each of the maximum and minimum image quality values.
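A minimal numpy sketch of the image-based linear distribution described above, using the 10th and 90th percentile cuts and equal-width intermediate intervals; the function name and the default strength range of six to ten mirror the example but are otherwise illustrative.

```python
import numpy as np

def assign_strengths_linear(quality_map: np.ndarray,
                            min_strength: int = 6,
                            max_strength: int = 10) -> np.ndarray:
    """Map image quality values to watermark strengths with the image-based
    linear distribution: the 10th/90th percentiles cap the maximum/minimum
    strengths, with equal-width intervals for the intermediate strengths."""
    lo, hi = np.percentile(quality_map, [10, 90])
    strengths = np.full(quality_map.shape, min_strength, dtype=int)
    strengths[quality_map <= lo] = max_strength   # low degradation -> strongest mark
    strengths[quality_map >= hi] = min_strength   # high degradation -> weakest mark
    between = (quality_map > lo) & (quality_map < hi)
    mids = np.arange(max_strength - 1, min_strength, -1)  # e.g. 9, 8, 7
    if mids.size:
        edges = np.linspace(lo, hi, mids.size + 1)
        idx = np.clip(np.digitize(quality_map, edges) - 1, 0, mids.size - 1)
        strengths[between] = mids[idx[between]]
    return strengths
```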
[0029] The division of the ranges of the image quality values may be based on percentiles. For example, a linear division of the percentile ranges may be applied to the KDE function, which results in smaller image quality value
ranges for watermark strengths where the density of the image quality values is relatively high and larger image quality value ranges for watermark strengths where the density of the image quality values is relatively low. The Gaussian distribution may allow for the watermark strengths to spread more evenly across the image as compared to the linear distribution.
[0030] As a specific Gaussian distribution example, a range of watermark strengths is set to six to seven. If a KDE has not already been estimated for a watermark strength of seven, a KDE is fitted to a plurality of different watermarked image signals, as described above. If the KDE has already been estimated, it may be reused. With the estimated KDE, the thresholds of the percentile values are calculated linearly. The percentile range from the 5th to the 95th percentile is divided between the watermark strengths of the range. In the example of watermark strengths of six and seven, the 50th percentile may be used to separate the watermark strengths of six and seven. As another example, a range of watermark strengths is set to five to seven. For application of watermark strengths of five to seven, the range between the 5th percentile and the 95th percentile is divided among the three watermark strengths, resulting in use of the 35th and 65th percentiles for the watermark strength selection. The image quality values that are less than the 35th percentile are assigned the watermark strength of five, while the image quality values between the 35th percentile and the 65th percentile are assigned the watermark strength of six and the image quality values greater than the 65th percentile are assigned the watermark strength of seven.
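A sketch of the Gaussian-distribution variant, under the assumption that scipy's `gaussian_kde` is an acceptable stand-in for the KDE fit; reading thresholds off resampled values with `np.percentile` is one convenient approximation rather than the disclosure's prescribed procedure. For three strengths this returns the 35th and 65th percentile thresholds, matching the example above; for two strengths it returns the 50th.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_strength_thresholds(pooled_quality_values: np.ndarray,
                            num_strengths: int) -> np.ndarray:
    """Fit a Gaussian KDE to image quality values pooled from several
    uniformly watermarked images, divide the 5th-95th percentile span
    linearly into `num_strengths` intervals, and return the interior
    image-quality-value thresholds."""
    kde = gaussian_kde(pooled_quality_values)
    samples = kde.resample(100_000).ravel()   # draw from the fitted density
    step = 90.0 / num_strengths               # e.g. three strengths -> 30-point steps
    cut_percentiles = 5.0 + step * np.arange(1, num_strengths)  # e.g. 35th, 65th
    return np.percentile(samples, cut_percentiles)
```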
[0031] In various examples, the varied watermark strengths may be refined and/or revised. For example, to reduce junction lines between regions, the method may include identifying adjacent regions of the plurality of regions that are associated with different watermark strengths of the plurality of varied watermark strengths, and adjusting the different watermark strengths of portions of the adjacent regions. In other examples, color regions may be identified and used to adjust the watermark strengths, as further described herein.
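One simple way such an adjustment could look, purely as an illustration (the disclosure leaves the exact adjustment open and speaks of adjusting portions of adjacent regions; for brevity this sketch blends whole region cells at the junction toward their neighborhood average):

```python
import numpy as np

def smooth_junctions(strength_map: np.ndarray) -> np.ndarray:
    """Average a region's strength with its 4-neighbourhood wherever any
    neighbour carries a different strength, softening junction lines."""
    padded = np.pad(strength_map.astype(np.float64), 1, mode="edge")
    neighbours = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                           padded[1:-1, :-2], padded[1:-1, 2:]])
    at_junction = (neighbours != strength_map).any(axis=0)
    blended = (strength_map + neighbours.mean(axis=0)) / 2.0
    return np.where(at_junction, blended, strength_map.astype(np.float64))
```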
[0032] At 108, the method further includes generating a revised watermarked image signal by applying the watermark signal to the image signal at the plurality of varied watermark strengths. The generation may include a single
application of the watermark signal at the plurality of varied watermark strengths or multiple applications. For the multiple applications, each application includes the watermark signal being applied at a different strength of the plurality of varied watermark strengths. The method may further include combining different portions of the multiple watermarked image signals to form the revised watermarked image signal, as further illustrated and described herein. In either example, the revised watermarked image signal has the watermark signal applied with a respectively higher watermark strength in regions where the watermark signal may not greatly affect the visual quality and applied with a respectively lower watermark strength in other regions where the watermark signal is more visually perceptible.
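For the multiple-application variant, the sketch below composes the revised watermarked image signal region by region, assuming `watermarked_by_strength` maps each integer strength to a full watermarked image signal, the strength map uses the same region grid, and the image dimensions are multiples of the region size; all names are illustrative.

```python
import numpy as np

def compose_from_strength_map(watermarked_by_strength: dict,
                              strength_map: np.ndarray,
                              region: int = 32) -> np.ndarray:
    """Assemble the revised watermarked image signal by copying each region
    from the signal that was embedded at that region's assigned strength."""
    revised = np.empty_like(next(iter(watermarked_by_strength.values())))
    rows, cols = strength_map.shape
    for r in range(rows):
        for c in range(cols):
            win = (slice(r * region, (r + 1) * region),
                   slice(c * region, (c + 1) * region))
            revised[win] = watermarked_by_strength[int(strength_map[r, c])][win]
    return revised
```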
[0033] FIG. 2 is a block diagram illustrating an example computing device that includes a non-transitory machine-readable storage medium, in accordance with examples of the present disclosure.

[0034] The computing device includes a processor 212 and a machine-readable storage medium 214 storing a set of machine-executable instructions 216, 218, 220 that are executable by the processor 212. The processor 212 is communicatively coupled to the machine-readable storage medium 214 through a communication path 211. The machine-readable storage medium 214 may, for example, include read-only memory (ROM), random-access memory (RAM), electrically erasable programmable read-only memory (EEPROM), Flash memory, a solid state drive, and/or discrete data register sets. The processor 212 may include a central processing unit (CPU) or another suitable processor.

[0035] Although FIG. 2 illustrates a single processor 212 and a single machine-readable storage medium 214, examples are not so limited and may be directed to an apparatus and/or multiple computing devices with multiple processors and multiple machine-readable storage mediums. The instructions may be distributed and stored across multiple machine-readable storage mediums and the instructions may be distributed and executed across multiple processors.

[0036] The processor 212 may execute instructions 216 to assess a plurality of image artifacts in a plurality of regions of a watermarked image signal. The processor 212 may assess the plurality of image artifacts by comparing the image signal to the watermarked image signal and identifying similarities and/or differences. The processor 212 executes instructions 218 to, based on the plurality of image artifacts, generate a first map that correlates an image quality value to each of the plurality of regions. The instructions 216, 218 may include use or application of a deep learning model to assess the plurality of image artifacts. As described above, the deep learning model is trained to extract features associated with the image artifacts in the watermarked image signal and to assign a score to each extracted feature. The scores of each region are regressed to generate the image quality value for each of the plurality of regions. The processor 212 further executes instructions 220 to generate a second map that correlates a plurality of varied watermark strengths to the plurality of regions based on the first map. The instructions 220 to generate the second map may be executed to associate the image quality value of each of the plurality of regions to intervals of the plurality of varied watermark strengths.

[0037] The processor 212 may further adjust the second map. The second map may have a lower resolution than the image signal, as the first map is generated by the deep learning model assessing the plurality of regions. The deep learning model may evaluate a region of the watermarked image signal that is greater than a 1x1 pixel region, such as a 32x32 pixel region. As the second map may have a lower resolution than the image signal, applying the watermark signal to the image signal at the varied watermark strengths in the second map may cause emergence of junction lines in the revised watermarked image signal between the regions of the watermarked image signal with the different watermark strengths. The junction lines may emerge, for example, when the region of the image signal is flat and/or where changes on image pixels are more perceptible around edges that split distinct colors.
[0038] In such examples, the processor 212 may execute instructions to refine the second map. As an example, the image signal is split into regions of color having a distribution of watermark strengths among the respective regions of a similar color in order to mitigate or avoid application of different watermark strengths in the same color region and allow for maintaining boundaries. In specific examples, refining the second map may include applying color
quantization to the image signal, quantizing each color channel to five levels to reduce the number of colors, and labeling each connected color region with the most frequently occurring watermark strength in the color region. The refined second map may thereby include adjustments of watermark strengths in the same color regions to the most frequently occurring watermark strength.
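A rough sketch of that color-region refinement, assuming a five-level-per-channel quantization, a crude strided downsample of the image to the strength map's resolution, and scipy's `ndimage.label` for connected components; the helper name and the downsampling shortcut are illustrative rather than the disclosed implementation.

```python
import numpy as np
from scipy.ndimage import label

def refine_by_color_regions(image: np.ndarray, strength_map: np.ndarray) -> np.ndarray:
    """Relabel each connected color region of the (downsampled, quantized)
    image with the most frequently occurring watermark strength inside it."""
    rows, cols = strength_map.shape
    ry, rx = image.shape[0] // rows, image.shape[1] // cols
    small = image[::ry, ::rx][:rows, :cols]                            # crude downsample
    quantized = (small.astype(np.float64) / 256.0 * 5).astype(int)     # 5 levels/channel
    channels = quantized.reshape(rows, cols, -1)
    color_ids = sum(channels[..., i] * (5 ** i) for i in range(channels.shape[-1]))
    refined = strength_map.copy()
    for color in np.unique(color_ids):
        components, count = label(color_ids == color)   # connected regions of one color
        for comp in range(1, count + 1):
            mask = components == comp
            values, counts = np.unique(strength_map[mask], return_counts=True)
            refined[mask] = values[np.argmax(counts)]   # majority strength wins
    return refined
```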
[0039] As another example, the processor 212 may execute instructions to identify a plurality of objects in the watermarked image signal and based on the objects, adjust the varied watermark strengths. For example, the processor 212 may identify a first object and a second object within the watermarked image signal, identify adjacent edges between the first object and the second object and identify regions of the plurality of regions associated with the adjacent edges. The processor 212 may adjust the watermark strengths of portions of the regions associated with the adjacent edges in the second map. The objects may be identified using, for example, another deep learning model, which is trained to identify objects in image signals based on publicly available datasets. Although the above describes refining the second map, examples are not so limited and may include an adjustment of the image quality value assigned to each region of the watermarked image signal in the first map.
[0040] In various examples, the processor 212 may output the second map to external circuitry for application of the watermark signal or may generate the revised watermarked image signal using the second map. For example, the processor 212 may apply the watermark signal to the image signal at the plurality of varied watermark strengths, as previously described.
[0041] FIG. 3 is a block diagram illustrating an example apparatus for generating a watermarked image signal, in accordance with examples of the present disclosure. The apparatus 330 includes a memory 338 to store a set of machine-readable instructions and a processor 336.
[0042] The processor 336 may execute the set of machine-readable instructions to receive the watermarked image signal 332 and a reference image signal 334. The reference image signal 334 may include the image signal, without application of the watermark signal, in various examples. In other examples, the
reference image signal includes a different, but similar, type of image signal without application of a watermark signal. As specific and non-limiting examples, a similar type of image signal may include a similar scenery or subject in a photograph, such as two human portraits or two ocean scenes. In some specific examples, receiving the watermarked image signal 332 may include receiving the image signal and the watermark signal, and the processor 336 may apply the watermark signal to the image signal at a first watermark strength among the plurality of varied watermark strengths to generate the watermarked image signal 332. In other examples, the watermark signal is applied to the image signal by another processor and/or apparatus, and the watermarked image signal 332 is communicated to the apparatus 330.
[0043] The processor 336 may identify a plurality of image artifacts in a plurality of regions of the watermarked image signal 332, and based on the identified plurality of image artifacts, generate a first map including a plurality of image quality values and the plurality of regions. In the first map, each of the plurality of regions corresponds to one of the plurality of image quality values. The image artifacts may include luminance and contrast changes between the watermarked image signal 332 and the reference image signal 334. However, examples are not so limited, and the image artifacts may include or be associated with various other defects and/or image quality attribute changes between the watermarked image signal 332 and the reference image signal 334.
[0044] The plurality of image artifacts and plurality of image quality values may be determined using a deep learning model 340. As noted above, the deep learning model 340 may include an IQA deep learning model which is trained using public datasets for IQA to evaluate features that may reduce an overall visual quality. The IQA deep learning model 340 may assess the watermarked image signal 332 against the reference image signal 334 for features associated with the plurality of image artifacts, such as luminance and contrast similarities and differences, between the image signals 332, 334. The scores of the image artifacts are regressed to provide the image quality values.
[0045] In some specific examples, the plurality of image quality values may be weighted. Different regions of the watermarked image signal 332 may have
different levels of visual importance. Image artifacts and other impacts to the visual quality of the image in areas of low visual importance may be less noticeable than in areas of higher visual importance. In various examples, the processor 336 may estimate a plurality of saliency values for the plurality of regions, and weigh the plurality of image quality values by the plurality of saliency values. In such examples, the first map includes the weighted plurality of image quality values. The saliency values are indicative of the visual importance of the region. As used herein, a visual importance refers to or includes a level of visual attention the region receives from human viewers and which may be assessed using IQA, the public datasets, and/or the deep learning model 340.
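A minimal sketch of that weighting, assuming the saliency map is already aligned with the region grid of the first map; simple multiplicative weighting is one plausible choice, not necessarily the disclosed one.

```python
import numpy as np

def weight_by_saliency(quality_map: np.ndarray, saliency_map: np.ndarray) -> np.ndarray:
    """Weigh each region's image quality value by its estimated saliency so
    that degradation in visually important regions counts more heavily."""
    saliency = saliency_map / saliency_map.max()   # normalize to [0, 1]
    return quality_map * saliency
```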
[0046] The processor 336 may associate the plurality of image quality values with a plurality of varied watermark strengths and generate, using the first map, a second map including the plurality of varied watermark strengths and the plurality of regions. In the second map, each of the plurality of regions corresponds to one of the plurality of varied watermark strengths. The varied watermark strengths may be applied to fixed intervals of the image quality values or varied intervals of the image quality values to map the varied watermark strengths to the image quality values in the first map, such as by using a linear distribution or non-linear distribution.
[0047] The processor 336 may further apply the watermark signal to the image signal at the plurality of varied watermark strengths using the second map to generate a revised watermarked image signal 342. As described above, in specific examples, generating the revised watermarked image signal 342 includes generating a plurality of watermarked image signals, including the watermarked image signal 332, using the image signal and the watermark signal. Each of the plurality of watermarked image signals has the watermark signal applied to the image signal at a watermark strength of the plurality of varied watermark strengths. The processor 336 may combine the plurality of watermarked image signals into the revised watermarked image signal having the plurality of varied watermark strengths. Examples are not so limited, however, and the watermark signal may be directly applied at the varied strengths.
[0048] FIG. 4 is a flow diagram illustrating an example method of generating a watermarked image signal having varied watermark strengths, in accordance with examples of the present disclosure.
[0049] As noted above, an IQA deep learning model may be used to assess the visual quality of a watermarked image signal and to provide a plurality of image quality values as a first map 460. The plurality of image quality values are assessed to apply varied watermark strengths. For example, the watermark signal is applied at a higher watermark strength in regions where the watermark signal is less noticeable and at a lower watermark strength in regions where the watermark signal is noticeable. The IQA deep learning model may provide for balance between the visual quality and watermark strength such that the watermark may be recovered. As used herein, noticeability of the watermark signal refers to or includes the level of human perception of the visual impact that the watermark signal has on the image signal.
[0050] As shown by FIG. 4, the method may include receiving inputs of an image signal 450 and a plurality of input parameters 452. The input parameters 452 may include the range of watermark strengths and the watermark signal, and optionally other parameters which set the application of the watermark, such as the pixels per inch (PPI), the watermark pixels per inch (WPI), and/or other metrics. At 454, using the inputs, N watermarked image signal(s) are generated by applying the watermark signal to the image signal 450 based on the range of watermark strengths and, optionally, based on other input parameters 452. In specific examples, N is equal to one, and the watermarked image signal 456 is generated by applying the watermark signal to the image signal 450 at the highest watermark strength of the range. In other examples, N is a plurality, and a plurality of watermarked image signals 455 are generated. For example, N watermarked image signals are generated, where N is the number of watermark strengths in a set X and the set X includes the range of watermark strengths. A watermarked image signal is generated for each watermark strength value x ∈ X to generate the plurality of watermarked image signals 455.
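A small sketch of generating one watermarked image signal per strength in the set X, using a simple linear spatial-domain embedder as a stand-in for whatever watermark system is actually used; the names and the 8-bit clipping are illustrative.

```python
import numpy as np

def watermark_at_each_strength(image: np.ndarray, watermark: np.ndarray,
                               strengths) -> dict:
    """Generate one watermarked image signal for each strength x in the set X
    (a linear spatial-domain combination stands in for the real embedder)."""
    signals = {}
    for x in strengths:
        marked = image.astype(np.float64) + float(x) * watermark.astype(np.float64)
        signals[int(x)] = np.clip(marked, 0, 255).astype(np.uint8)
    return signals
```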
[0051] The range of watermark strengths is not limited to that illustrated by FIG. 4, and the input parameters 452 are not limited to those listed above. In a specific example, different types of watermark systems may be used to generate the watermarked image signal 456 or the plurality of watermarked image signals 455, such as a Digimarc system, among other types of watermark systems.
[0052] The watermarked image signal 456 may be used to generate a first map, at 458. The watermarked image signal 456 may have the watermark signal applied at the highest strength of the range of watermark strengths. The watermarked image signal 456 may be assessed by an IQA technique which returns metrics correlated to a level of human perception of the watermark and the visual impact to the visual quality caused by the watermark strength, which metrics are referred to herein as the “image quality values”. As noted above, a deep learning model may be used that is trained by datasets to evaluate IQA. The IQA technique may allow for automatic and accurate computation of image quality values that correlate with an evaluation performed by the HVS. The image quality values, which are generated by a machine, may be used to measure the visual impact the watermark has on the image, and serve as a basis for selecting watermark strengths that may have less visual impact while allowing for recognition of the watermark by other circuitry.
[0053] A specific example of an IQA deep learning model includes a Weighted Average Deep Image Quality Measure for full reference (FR) IQA (WaDIQaM-FR). The WaDIQaM-FR model is trained using public databases for IQA. The databases include distorted images and the original, non-distorted images. The distorted images are evaluated against the original images to define a plurality of features to extract that reduce the visual quality, such as artifacts and chromatic noise. Each feature is evaluated and assigned a quality score based on an HVS assessment, and the features of a respective region may be regressed to define the image quality score for the region. More specifically, the distorted images in the dataset are evaluated by users through the HVS assessment. Image quality scores for the distorted
images are calculated based on the evaluation by the users and used to train the IQA deep learning model.
[0054] The output of the IQA deep learning model is the first map 460 that includes the image quality value for each region of the watermarked image signal 456. Each image quality value corresponds to the evaluation of an N x M region of the watermarked image signal 456. To generate the first map 460, a fully constructed map may be composed by the IQA deep learning model, or the IQA deep learning model may return one global metric. In the latter case, the global metric may be used to evaluate each region locally. The image quality values are stored in the first map 460 according to the positions or indices of the corresponding regions in the watermarked image signal 456.
[0055] Examples are not limited to a WaDIQaM-FR model and may include other types of deep learning models. Other example deep learning models include other types of FR IQA deep learning models, reduced reference (RR) IQA deep learning models, and no reference (NR) IQA deep learning models. As a specific and non-limiting example, a DeepQA model may be used.
[0056] The first map 460 may be used to define image quality value intervals and generate a second map 464 that includes the assigned varied watermark strengths, at 462. Each image quality value of the first map 460 is assigned to a watermark strength whose visual impact on the region of the image signal 450 is less than or equal to the visual impact caused by the application of the watermark signal at the highest watermark strength of the range. In various examples, at worst, the varied watermark strength may not change the visual quality of the region and, at best, improves the visual quality as compared to the watermarked image signal 456. The varied watermark strengths are stored in the second map 464. Furthermore, the first map 460 and/or the second map 464 may be refined to improve transitions between regions, such as by using color regions, segmentation, or object classification. For example, at 466, a refined second map 468 is generated to reduce junction lines between respective regions, as described above. Other refinements may be applied, such as other forms of map processing and value distribution, to improve the final image quality.
[0057] The second map 464 and/or the refined second map 468 may be used to generate the revised watermarked image signal 472 by combining the N watermarked image signal(s), at 470. In examples in which N is one, combining the N watermarked image signal may include applying the watermark signal at the varied watermark strengths defined in the second map 464 or the refined second map 468 to generate the revised watermarked image signal 472. In examples in which N is a plurality, the plurality of watermarked image signals 455 are combined based on the second map 464 or the refined second map 468 to generate the revised watermarked image signal 472. For example, the revised watermarked image signal 472 may be composed of different portions of each of the plurality of watermarked image signals 455, where the strength assignment in each region complies with the watermark strength value and position identified in the second map 464 or in the refined second map 468.

[0058] Examples of the present disclosure are directed to methods, computing devices, and apparatus which may generate a watermarked image having a balance between perceived visual quality and recoverability of the watermark signal. The watermarking process varies the watermark strength in the spatial domain, and may be applied to any watermark system and any type of watermark signal. A deep learning model may be used to guide the application of the watermark strength, which allows for fewer image artifacts or less noise perceived by a human and higher intensity of the watermark signal in regions of the image where the watermark signal is less noticeable according to the HVS. The visual impact of the watermark signal may be reduced while improving the recognition rate of the applied watermark signal in the image.
[0059] Although specific examples have been illustrated and described herein, a variety of alternate and/or equivalent implementations may be substituted for the specific examples shown and described without departing from the scope of the present disclosure. This application is intended to cover any variations of the specific examples discussed herein. Therefore, it is intended that this disclosure be limited by the claims and the equivalents thereof.
Claims
1. A method comprising: receiving a watermarked image signal including a watermark signal and an image signal; assigning an image quality value to each of a plurality of regions of the watermarked image signal based on a plurality of image artifacts in the watermarked image signal; assigning a plurality of varied watermark strengths to the plurality of regions based on the image quality value assigned to each region; and generating a revised watermarked image signal by applying the watermark signal to the image signal at the plurality of varied watermark strengths.
2. The method of claim 1, wherein receiving the watermarked image signal includes applying the watermark signal to the image signal at a first watermark strength among a range of watermark strengths, the first watermark strength including the highest watermark strength of the range, and the plurality of varied watermark strengths being within the range.

3. The method of claim 2, wherein assigning the image quality value to each of the plurality of regions includes using a deep learning model, and the method further includes receiving input that identifies the range of watermark strengths, wherein the lowest watermark strength of the range is set based on recoverability of machine-readable data of the watermark signal.

4. The method of claim 1, wherein assigning the plurality of varied watermark strengths includes assigning a first watermark strength of the plurality in first regions of the plurality of regions having lower defects and assigning a second watermark strength of the plurality in second regions of the plurality of regions having higher defects with respect to the first regions,
wherein the first watermark strength is higher than the second watermark strength.
5. The method of claim 1, wherein generating the revised watermarked image signal further includes: identifying adjacent regions of the plurality of regions that are associated with different watermark strengths of the plurality of varied watermark strengths; and adjusting the different watermark strengths of portions of the adjacent regions.

6. The method of claim 1, further including storing the image quality value of each of the plurality of regions in a map that correlates each of the plurality of regions of the watermarked image signal to the corresponding image quality value.

7. The method of claim 1, further including storing the plurality of varied watermark strengths in a map that correlates each of the plurality of regions of the watermarked image signal to one of the plurality of varied watermark strengths.
8. A non-transitory machine-readable storage medium storing instructions that when executed by a processor, cause the processor to: assess a plurality of image artifacts in a plurality of regions of a watermarked image signal, the watermarked image signal including a watermark signal and an image signal; based on the plurality of image artifacts, generate a first map that correlates an image quality value to each of the plurality of regions; and generate a second map that correlates a plurality of varied watermark strengths to the plurality of regions based on the first map.
9. The machine-readable medium of claim 8, further including instructions to apply the watermark signal to the image signal at the plurality of varied watermark strengths, the image quality value of each of the plurality of regions being indicative of a visual impact of the watermark signal on the image signal.
10. The machine-readable medium of claim 8, further including instructions to associate the image quality value of each of the plurality of regions to intervals of the plurality of varied watermark strengths.
11. The machine-readable medium of claim 8, further including instructions to assess the plurality of image artifacts in the plurality of regions of the watermarked image signal using a deep learning model trained to extract features associated with the plurality of image artifacts and to assign a score to each extracted feature.
12. An apparatus, comprising: memory to store a set of instructions; and a processor to execute the set of instructions to: receive a watermarked image signal including an image signal and a watermark signal, and in response: identify a plurality of image artifacts in a plurality of regions of the watermarked image signal, the plurality of image artifacts including luminance and contrast changes between the watermarked image signal and a reference image signal; and based on the identified plurality of image artifacts, generate a first map including a plurality of image quality values and the plurality of regions, wherein each of the plurality of regions corresponds to one of the plurality of image quality values in the first map; associate the plurality of image quality values with a plurality of varied watermark strengths and generate, using the first map, a second map including the plurality of varied watermark strengths and the plurality of regions,
wherein each of the plurality of regions corresponds to one of the plurality of varied watermark strengths in the second map; and apply the watermark signal to the image signal at the plurality of varied watermark strengths using the second map to generate a revised watermarked image signal.
13. The apparatus of claim 12, wherein the processor is to further execute the set of instructions to apply the watermark signal to the image signal at a first watermark strength among the plurality of varied watermark strengths to generate the watermarked image signal.
14. The apparatus of claim 12, wherein the processor is to further execute the set of instructions to: estimate a plurality of saliency values for the plurality of regions; and weigh the plurality of image quality values by the plurality of saliency values, the first map including the weighted plurality of image quality values.
15. The apparatus of claim 12, wherein the processor is to execute the set of instructions to generate the revised watermarked image signal to: generate a plurality of watermarked image signals, including the watermarked image signal, using the image signal and the watermark signal, each of the plurality of watermarked image signals having the watermark signal applied to the image signal at a watermark strength of the plurality of varied watermark strengths; and combine the plurality of watermarked image signals into the revised watermarked image signal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2020/028197 WO2021211105A1 (en) | 2020-04-15 | 2020-04-15 | Watermarked image signal with varied watermark strengths |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021211105A1 (en) | 2021-10-21 |
Family
ID=78084971
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2002033650A1 (en) * | 2000-10-18 | 2002-04-25 | Matsushita Electric Industrial Co., Ltd. | Human visual model for data hiding |
US20050025338A1 (en) * | 2002-11-04 | 2005-02-03 | Mediasec Technologies Gmbh | Apparatus and methods for improving detection of watermarks in content that has undergone a lossy transformation |
US20090257618A1 (en) * | 2004-12-09 | 2009-10-15 | Sony United Kingdom Limited | Data processing apparatus and method |
US20120074222A1 (en) * | 2006-01-23 | 2012-03-29 | Rhoads Geoffrey B | Document Processing Methods |
Legal Events

Code | Title | Description
---|---|---
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20931013; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE
122 | Ep: pct application non-entry in european phase | Ref document number: 20931013; Country of ref document: EP; Kind code of ref document: A1