CN110570343A - Image watermark embedding method and device based on self-adaptive feature point extraction - Google Patents

Image watermark embedding method and device based on self-adaptive feature point extraction

Info

Publication number
CN110570343A
CN110570343A (application CN201910748671.9A)
Authority
CN
China
Prior art keywords
complexity
feature points
image
watermark
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910748671.9A
Other languages
Chinese (zh)
Other versions
CN110570343B (en)
Inventor
袁小晨
李冕杰
朱红岷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201910748671.9A priority Critical patent/CN110570343B/en
Publication of CN110570343A publication Critical patent/CN110570343A/en
Application granted granted Critical
Publication of CN110570343B publication Critical patent/CN110570343B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0021Image watermarking
    • G06T1/005Robust watermarking, e.g. average attack or collusion attack resistant
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2201/00General purpose image data processing
    • G06T2201/005Image watermarking
    • G06T2201/0065Extraction of an embedded watermark; Reliable detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • G06T2207/20064Wavelet transform [DWT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention provides an image watermark embedding method based on self-adaptive feature point extraction, which comprises the following steps: acquiring an original image, and extracting global feature points from the original image; carrying out attack training on the original image to generate a trained image; segmenting the trained image into a plurality of segmentation regions by utilizing a simple linear iterative clustering algorithm; calculating the complexity of each segmentation region; when the complexity of a segmentation region exceeds a set threshold, adjusting the size of the segmentation region until its complexity is lower than the set threshold; extracting local feature points in the segmentation regions; performing adaptive matching on the local feature points and the global feature points to determine final feature points; and embedding the watermark into the host image. The watermark embedding method provided by the embodiment of the invention has better robustness.

Description

Image watermark embedding method and device based on self-adaptive feature point extraction
Technical Field
The invention belongs to the technical field of digital watermarks, and particularly relates to an image watermark embedding method and device based on self-adaptive feature point extraction.
Background
Today, digital media can be easily copied, transmitted and distributed, and watermarking technology has therefore become an effective way to protect copyright information. Digital image watermarking schemes can be divided into global image watermarking and local image watermarking, according to whether the watermark information is embedded into the entire image or into a designated local area. In addition, digital image watermarking schemes are classified into single watermarking and multiple watermarking.
In the local watermark scheme based on feature extraction, the feature extraction with high stability plays a decisive role. However, most of the existing feature extraction methods are used for extracting feature points globally, so that some unnecessary key points are caused. For example, the detected feature points are usually concentrated in a specific area, while the second most important feature points may be ignored.
In recent years, many watermarking techniques have been proposed. Chen et al. first proposed the concept of robust watermarking. They proposed a Quantization Index Modulation (QIM) method and a Spread Spectrum Transform Dither Modulation (STDM) method for watermark embedding and extraction. The STDM method does not quantize a particular coefficient of the original image; instead, it performs a projection transform on the obtained vector and then performs Dither Modulation (DM) on the data.
Disclosure of Invention
In view of this, the embodiments of the present invention provide a method and an apparatus for embedding a watermark.
The embodiment of the invention provides an image watermark embedding method based on self-adaptive feature point extraction, which comprises the following steps:
Acquiring an original image, and extracting global feature points from the original image;
Carrying out attack training on the original image to generate a trained image;
Segmenting the trained image into a plurality of segmentation areas by utilizing a simple linear iterative clustering algorithm;
Calculating the complexity of the segmentation region;
When the complexity of the segmentation region exceeds a set threshold, adjusting the size of the segmentation region until the complexity of the segmentation region is lower than the set threshold;
Extracting local feature points in the segmented region;
Performing adaptive matching on the local feature points and the global feature points to determine final feature points;
The watermark is embedded into the host image.
an embodiment of the present invention provides an apparatus, where the apparatus includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the above-mentioned watermark embedding method when executing the computer program.
The watermark embedding method and device provided by the embodiment of the invention have better robustness.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings based on these drawings without inventive labor.
Fig. 1 is a schematic flowchart of an image watermark embedding method based on adaptive feature point extraction according to an embodiment of the present invention;
Fig. 2 is a schematic flow chart of a method for feature point extraction according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a state of image segmentation according to an embodiment of the present invention;
Fig. 4 is a flowchart illustrating a method for regularizing a partition area according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating a method for adaptive feature point matching according to an embodiment of the present invention;
Fig. 6 is a schematic flowchart of another image watermark embedding method based on adaptive feature point extraction according to an embodiment of the present invention;
Fig. 7 is a schematic image of watermark embedding provided by an embodiment of the present invention;
fig. 8 is a schematic flowchart of another image watermark embedding method based on adaptive feature point extraction according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Fig. 1 is a schematic flowchart of an image watermark embedding method based on adaptive feature point extraction according to an embodiment of the present invention. The specific contents are as follows:
S110, an original image is obtained, and global feature points are extracted from the original image.
An image is a flat medium composed of graphics and the like. Specific images may include bitmap images and vector images, such as images in BMP or JPG format, or in SWF, CDR, AI and similar formats.
The feature points in the embodiment of the present invention refer to portions of an image that have certain features and serve as reference positions or directions for detecting a watermark, or as portions carrying watermark information. Specifically, they may be portions with features such as edges, corners, or texture regions. The global feature points of the embodiment of the invention are feature points extracted from the image as a whole.
S120, performing attack training on the original image to generate a trained image.
The attack in the embodiments of the present invention is also defined as an attack chain, including, for example, Joint Photographic Experts Group (JPEG) compression, scaling, and Gaussian filtering. Since different attacks have different effects on the image, feature points with higher robustness can be extracted.
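The attack-chain idea above can be sketched as follows. This is a minimal illustration under stated assumptions: the function names (`blur3x3`, `rescale_attack`, `attack_chain`) are not from the patent, a box blur stands in for true Gaussian filtering, JPEG compression is omitted, and images are plain 2D lists with even dimensions.

```python
# Hedged sketch of an "attack chain" in the spirit of the patent's attack
# training step. A real chain would use JPEG compression, arbitrary
# scaling and genuine Gaussian filtering on real image data.

def blur3x3(img):
    """Apply a crude 3x3 box blur (a stand-in for Gaussian filtering)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        n += 1
            out[y][x] = acc / n
    return out

def rescale_attack(img):
    """Downscale by 2 (pixel averaging) then upscale back (replication)."""
    h, w = len(img), len(img[0])
    small = [[(img[2 * y][2 * x] + img[2 * y][2 * x + 1] +
               img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) / 4.0
              for x in range(w // 2)] for y in range(h // 2)]
    return [[small[y // 2][x // 2] for x in range(w)] for y in range(h)]

def attack_chain(img):
    """Run the image through the chain of simulated attacks."""
    return rescale_attack(blur3x3(img))
```

Feature points that survive such a chain are, by construction, the more stable ones, which is the rationale the patent gives for training before segmentation.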
S130, segmenting the trained image into a plurality of segmentation areas by using a simple linear iterative clustering algorithm.
Simple Linear Iterative Clustering (SLIC) segments an image into irregular pixel blocks with a certain visual significance, formed by adjacent pixels with similar texture, color, brightness and other characteristics. Pixels are grouped by the similarity of features among them, and a small number of superpixels replace a large number of pixels to express the picture features. The plurality of segmentation regions described in the embodiments of the present invention specifically includes at least two segmentation regions.
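The grouping idea behind SLIC can be sketched as below. This is a heavily simplified, single-pass illustration and an assumption throughout: real SLIC initializes centres on a regular grid, restricts each search to a 2S x 2S window, and iterates centre updates; the function name `slic_like_labels` is illustrative.

```python
import math

# Hedged sketch of SLIC-style superpixel grouping: cluster centres on a
# regular grid, then assignment in a combined (intensity, x, y) feature
# space with the SLIC-style compactness weight m/S.

def slic_like_labels(img, grid=2, m=10.0):
    h, w = len(img), len(img[0])
    step_y, step_x = h // grid, w // grid
    # One centre per grid cell, sampled at the cell midpoint.
    centres = [(img[gy * step_y + step_y // 2][gx * step_x + step_x // 2],
                gy * step_y + step_y // 2, gx * step_x + step_x // 2)
               for gy in range(grid) for gx in range(grid)]
    s = max(step_x, step_y)
    labels = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            best, best_d = 0, float("inf")
            for k, (cv, cy, cx) in enumerate(centres):
                d_int = abs(img[y][x] - cv)          # intensity similarity
                d_xy = math.hypot(y - cy, x - cx)    # spatial proximity
                d = d_int + (m / s) * d_xy           # combined distance
                if d < best_d:
                    best_d, best = d, k
            labels[y][x] = best
    return labels
```

Even this crude version groups pixels by joint intensity and position, which is the property the patent relies on for forming perceptually coherent segmentation regions.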
Specifically, as shown in fig. 3, after an image provided by the embodiment of the present invention is subjected to SLIC segmentation, a plurality of segmented regions are formed.
In one specific embodiment, after the image is segmented by the SLIC algorithm, the trained image is divided into a plurality of irregular segmentation regions. Further, the irregular segmentation regions may be adjusted into regular segmentation regions; specifically, each irregular segmentation region may be converted into its circumscribed rectangle. This facilitates subsequent calculation and reduces the complexity of image computation.
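The circumscribed-rectangle regularization step can be sketched directly. A minimal sketch, assuming each irregular region is available as a set of (row, col) pixel coordinates; the function name `circumscribed_rect` is illustrative, not from the patent.

```python
# Hedged sketch of region regularization: each irregular SLIC region is
# replaced by the smallest axis-aligned rectangle that encloses it.

def circumscribed_rect(pixels):
    """Return (top, left, bottom, right) of the circumscribed rectangle
    of an irregular segmented region given as (row, col) pixel tuples."""
    rows = [p[0] for p in pixels]
    cols = [p[1] for p in pixels]
    return (min(rows), min(cols), max(rows), max(cols))
```

Working on rectangles rather than arbitrary pixel sets is what makes the later GLCM complexity computation and wavelet decomposition cheap.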
S140 calculates the complexity of the segmented region.
The complexity of the segmentation region is computed from the following gray-level co-occurrence matrix (GLCM) statistics:
Wherein ENT_dir measures the randomness of the segmentation region, CON_dir measures the local variation of the segmentation region, ENE_dir is the sum of the squared matrix elements of the segmentation region, HOM_dir measures the closeness of the matrix elements of the segmentation region to the diagonal, and COR_dir is the joint probability of occurrence of a given pixel pair of the segmentation region,
Wherein GLCM_dir is the matrix of gray-level relationships between pixels of the segmentation region and their neighboring pixels, and μ_x and σ_x denote the mean and standard deviation of the sums in each column of GLCM_dir.
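The five named statistics can be computed as sketched below. Assumptions are flagged explicitly: the patent's combining formula appears only as an equation image and is not reproduced, only one neighbor direction (horizontal) is used, and the correlation uses the standard GLCM definition with marginal means and variances rather than the patent's column-sum phrasing.

```python
import math

# Hedged sketch of the GLCM statistics behind the region complexity
# score: entropy (ENT), contrast (CON), energy (ENE), homogeneity (HOM)
# and correlation (COR) of the horizontal-neighbor co-occurrence matrix.

def glcm_stats(patch, levels):
    """Return (ENT, CON, ENE, HOM, COR) for a small gray-level patch."""
    glcm = [[0.0] * levels for _ in range(levels)]
    total = 0
    for row in patch:
        for a, b in zip(row, row[1:]):   # horizontal pixel pairs
            glcm[a][b] += 1.0
            total += 1
    for i in range(levels):              # normalize to joint probabilities
        for j in range(levels):
            glcm[i][j] /= total

    ent = con = ene = hom = 0.0
    mu_x = mu_y = 0.0
    for i in range(levels):
        for j in range(levels):
            p = glcm[i][j]
            if p > 0:
                ent -= p * math.log2(p)  # randomness
            con += p * (i - j) ** 2      # local variation
            ene += p * p                 # sum of squared elements
            hom += p / (1 + abs(i - j))  # closeness to the diagonal
            mu_x += i * p
            mu_y += j * p
    var_x = sum(glcm[i][j] * (i - mu_x) ** 2
                for i in range(levels) for j in range(levels))
    var_y = sum(glcm[i][j] * (j - mu_y) ** 2
                for i in range(levels) for j in range(levels))
    cor = 0.0
    if var_x > 0 and var_y > 0:
        cor = sum(glcm[i][j] * (i - mu_x) * (j - mu_y)
                  for i in range(levels)
                  for j in range(levels)) / math.sqrt(var_x * var_y)
    return ent, con, ene, hom, cor
```

A flat patch scores minimal entropy and contrast and maximal energy and homogeneity, which is exactly the low-complexity profile the patent's threshold test looks for.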
S150, when the complexity of the divided region exceeds a set threshold, adjusting the size of the divided region until the complexity of the divided region is lower than the set threshold.
Specifically, as shown in fig. 2, a schematic flow chart of extracting feature points according to an embodiment of the present invention is shown.
In an alternative embodiment, when the complexity of a segmentation region exceeds the set threshold, the simple linear iterative clustering algorithm is adjusted and the image is re-segmented until the complexity of every segmentation region is lower than the set threshold.
In another alternative embodiment, when the complexity of a segmentation region exceeds the set threshold, the regions whose complexity exceeds the threshold are further divided into smaller regions until the complexity of all segmentation regions is lower than the set threshold.
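The second strategy, recursive subdivision, can be sketched as follows. A minimal sketch under stated assumptions: regions are represented as bounding rectangles, `complexity` is an assumed callable (e.g. a GLCM-based score), and the names `split_until_simple` and `min_size` are illustrative; the patent does not specify the split pattern, so quadrant splitting is an assumption.

```python
# Hedged sketch: regions whose complexity exceeds the threshold are
# recursively split into quadrants until every region falls below the
# threshold (or becomes too small to split further).

def split_until_simple(rect, complexity, threshold, min_size=8):
    """rect = (top, left, bottom, right); return a flat list of rects
    whose complexity is below `threshold` or which cannot shrink."""
    top, left, bottom, right = rect
    too_small = (bottom - top) <= min_size or (right - left) <= min_size
    if complexity(rect) < threshold or too_small:
        return [rect]
    mid_r = (top + bottom) // 2
    mid_c = (left + right) // 2
    quads = [(top, left, mid_r, mid_c), (top, mid_c, mid_r, right),
             (mid_r, left, bottom, mid_c), (mid_r, mid_c, bottom, right)]
    out = []
    for q in quads:
        out.extend(split_until_simple(q, complexity, threshold, min_size))
    return out
```

The `min_size` guard is a practical safeguard so that highly textured areas cannot drive the recursion down to single pixels.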
S160 extracts local feature points in the segmented region.
Local feature points of the image are extracted in the segmentation regions whose complexity is lower than the set threshold, i.e., feature points representing the features of each segmentation region are extracted.
S170, performing adaptive matching on the local feature points and the global feature points to determine final feature points.
Specific embodiments are described below:
S510 calculates the Euclidean distances between every pair of feature points drawn from the local feature points and the global feature points.
S520 calculates the ratio of the closest Euclidean distance to the second-closest Euclidean distance.
S530, if the ratio of the closest Euclidean distance to the second-closest Euclidean distance is greater than the corresponding set maximum ratio, deletes the corresponding local feature point and global feature point.
The Euclidean distance is calculated between every pair of feature points drawn from the local feature point set and the global feature point set, and is expressed as the ratio of the closest Euclidean distance to the second-closest Euclidean distance. If this ratio is larger than the corresponding MaxRatio (maximum ratio), the corresponding local feature point and global feature point are deleted.
Wherein FP_AdtBlk_{i,k} denotes the value of the k-th dimension of the i-th feature point in the segment to be matched; FP_AdtBlk_{gb,k} denotes the value of the k-th dimension of the i-th feature point in the host image; and N denotes an N-dimensional feature descriptor.
Further, the embodiment of the present invention may also adaptively adjust MaxRatio to generate an appropriate matching result. Increasing MaxRatio produces more matches, while decreasing it filters out ambiguous matches. Therefore, MaxRatio can be adaptively adjusted according to the number of feature points required in each local block.
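The ratio-test matching described above can be sketched as follows. A minimal sketch: descriptors are N-dimensional lists, `max_ratio` plays the role of MaxRatio, and the adaptive retuning loop is reduced to a single parameter; the function names are illustrative.

```python
import math

# Hedged sketch of the adaptive ratio-test matching step: a local
# feature point is kept only when its nearest global neighbour is
# sufficiently closer than the second-nearest one.

def euclid(a, b):
    """Euclidean distance between two N-dimensional descriptors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_test_matches(local_fps, global_fps, max_ratio):
    """Return (local_index, global_index) pairs that pass the test."""
    matches = []
    for i, lf in enumerate(local_fps):
        dists = sorted((euclid(lf, gf), j)
                       for j, gf in enumerate(global_fps))
        if len(dists) >= 2 and \
           dists[0][0] / max(dists[1][0], 1e-12) <= max_ratio:
            matches.append((i, dists[0][1]))
    return matches
```

An adaptive caller could raise `max_ratio` when a block yields too few matches and lower it when it yields too many, mirroring the per-block adjustment the patent describes.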
S180 embeds the watermark into the host image.
Feature points with high robustness and stability are extracted using Adaptive Segmentation-based Feature Extraction (ASFE). Specifically, the feature points may be extracted from the above-described divided regions into which the image is divided based on the complexity, that is, the divided regions having the complexity smaller than the set threshold.
S810 selects a Luma component of the host image color space.
S820 locates the feature region to the position of the feature point in the Luma component.
S830 determines approximation coefficients of the feature region using the stationary wavelet transform.
S840 embeds the watermark in the approximate coefficient region.
The Luma component of the YCbCr color space of the host image is selected. Then, based on the extracted feature points, the feature regions are located in the Luma component. Thereafter, the SWT (Stationary Wavelet Transform) is applied to each feature region. Because the SWT is shift-invariant and highly robust to attacks, the corresponding approximation-coefficient region is selected for watermark embedding.
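Extracting the Luma plane can be sketched as below. The patent does not specify which YCbCr variant it uses, so the BT.601 weights here are an assumption, and the function names are illustrative.

```python
# Hedged sketch of selecting the Luma (Y) component of the YCbCr color
# space, using the common BT.601 luma weights (an assumption).

def luma(r, g, b):
    """BT.601 luma of one RGB pixel (components in 0..255)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def luma_plane(rgb_image):
    """Extract the Y plane from an image given as rows of (r, g, b)."""
    return [[luma(*px) for px in row] for row in rgb_image]
```

Embedding in the Luma plane is a common choice because luminance changes survive chroma subsampling and recompression better than chrominance changes.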
As shown in fig. 6, in S-STDM (SVD-based Spread Spectrum Transform Dither Modulation), Singular Value Decomposition (SVD) is first applied to the approximation coefficients. SVD has good concealment and resistance to geometric attacks, which helps improve the robustness of the subsequent watermark. After obtaining the diagonal elements of the diagonal matrix by SVD, we perform a projection transform on the obtained vector using S-STDM and then perform DM on the projected data. This approach improves robustness.
Fig. 6 shows the process of S-STDM, which modulates the watermark message into a vector using a DM quantizer. In fig. 6, U(AC) and V(AC) represent the orthogonal matrices decomposed from the approximation coefficients using equation (1). The diagonal elements β of S(AC) are then chosen as the watermark carrier using equation (2), as shown in fig. 6. Then, β is projected onto the expansion vector α using equation (3), thereby generating ξ. After the watermark bits are modulated by the DM quantizer in equation (4), the watermarked coefficients can be generated by equation (5).
AC = U(AC) S(AC) V*(AC)  (1)
Where U(AC) and V(AC) are the left and right singular vectors, respectively, and V*(AC) is the conjugate transpose of V(AC).
ξ = β^T α = [s_1 s_2 ... s_n] α  (3)
Where DM denotes the dither-modulation quantizer, Wm_i denotes the corresponding watermark bit to be embedded, and QS denotes the quantization step size.
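The projection and dither-modulation core of S-STDM can be sketched as follows. Assumptions, stated plainly: equations (2), (4) and (5) appear only as images in the patent, so the dither assignment (bit 0 → 0, bit 1 → QS/2) is the common STDM convention rather than the patent's exact formula, and all function names are illustrative.

```python
# Hedged sketch of the S-STDM core: the diagonal elements β of the SVD
# are projected onto a spreading vector α (equation (3)), and the scalar
# projection ξ is dither-modulated per watermark bit.

def project(beta, alpha):
    """xi = beta^T alpha (inner product of carrier and spreading vector)."""
    return sum(b * a for b, a in zip(beta, alpha))

def dm_quantize(xi, bit, qs):
    """Quantize xi to the dither lattice of the given watermark bit."""
    dither = 0.0 if bit == 0 else qs / 2.0
    return qs * round((xi - dither) / qs) + dither

def dm_detect(xi, qs):
    """Minimum-distance decoding: which bit lattice is xi closer to?"""
    d0 = abs(xi - dm_quantize(xi, 0, qs))
    d1 = abs(xi - dm_quantize(xi, 1, qs))
    return 0 if d0 <= d1 else 1
```

Because embedding and detection share the same lattices, a received ξ that has not drifted by more than QS/4 still decodes to the embedded bit, which is the robustness margin the quantization step controls.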
The embodiment of the invention also discloses a watermark extraction method, which specifically comprises the following steps:
The watermark extraction process is the reverse of the embedding process. ASFE is applied to the received image to extract feature points, while the Luma component of the YCbCr color space of the received image is extracted. The feature regions are located in the Luma component according to the extracted feature points. By applying the SWT in each feature region, we extract the watermark from the corresponding approximation coefficients AC'. Using the proposed S-STDM, the watermark is extracted accordingly.
In the extraction process of the S-STDM, similar to the embedding process, the approximation coefficient is first subjected to SVD processing, and then a diagonal element β is selected from the decomposed diagonal matrix by using formula (10) to extract watermark information.
During detection, using equation (3), β is projected onto the projection vector α to obtain ξ. The watermark bits 0 and 1 are then modulated using the DM quantizer to obtain DM_0 and DM_1, respectively. Finally, according to ξ and DM_0 or DM_1, the watermark bit Wm_i contained in the received image is estimated using equation (6).
The feature extraction method ASFE includes a Complexity-based Adaptive Segmentation (CAS) algorithm and feature point extraction using Speeded Up Robust Features (SURF). ASFE can extract feature points uniformly over the entire image and therefore does not produce unnecessary key points. After the local feature regions are extracted, each local region is decomposed into approximation coefficients and detail coefficients using the Stationary Wavelet Transform (SWT), taking advantage of its translation invariance. Next, watermark information is embedded into the diagonal elements of the diagonal matrix of the Singular Value Decomposition (SVD) using the proposed SVD-based Spread Spectrum Transform Dither Modulation (S-STDM) method. After watermark embedding, the watermarked image is reconstructed using the Inverse Singular Value Decomposition (ISVD) and the Inverse Stationary Wavelet Transform (ISWT). The proposed S-STDM embeds the watermark information after extracting the local feature regions and therefore has better robustness.
The embodiment of the invention also provides an electronic device, which comprises a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform any of the methods described above.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is a logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be suitably increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media may not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. An image watermark embedding method based on self-adaptive feature point extraction is characterized by comprising the following steps:
Acquiring an original image, and extracting global feature points from the original image;
Carrying out attack training on the original image to generate a trained image;
Segmenting the trained image into a plurality of segmentation areas by utilizing a simple linear iterative clustering algorithm;
Calculating the complexity of the segmentation region;
When the complexity of the segmentation region exceeds a set threshold, adjusting the size of the segmentation region until the complexity of the segmentation region is lower than the set threshold;
Extracting local feature points in the segmented region;
Performing adaptive matching on the local feature points and the global feature points to determine final feature points;
The watermark is embedded into the host image.
2. The method of claim 1, wherein the segmenting the trained image into a plurality of segmented regions using a simple linear iterative clustering algorithm comprises:
segmenting the trained image into a plurality of irregular segmentation areas by utilizing a simple linear iterative clustering algorithm;
And adjusting the irregular divided areas into regular divided areas.
3. The method according to claim 1, wherein the adjusting the plurality of irregular partition areas into regular partition areas is specifically:
and converting the irregular segmentation region into a regular segmentation region of a circumscribed rectangle.
4. The method according to any one of claims 1 to 3, wherein the adjusting the size of the partition when the complexity of the partition exceeds a set threshold until the complexity of the partition is lower than the set threshold specifically comprises:
and when the complexity of the segmentation region exceeds a set threshold, adjusting a simple linear iterative clustering algorithm, and re-segmenting the trained image until the complexity of the segmentation region is lower than the set threshold.
5. The method according to any one of claims 1 to 3, wherein the adjusting the size of the partition when the complexity of the partition exceeds a set threshold until the complexity of the partition is lower than the set threshold specifically comprises:
When the complexity of the divided areas exceeds a set threshold, further dividing the divided areas with the complexity exceeding the set threshold to form areas with smaller areas until the complexity of all the divided areas is lower than the set threshold.
6. The method of any of claims 1-3, wherein the complexity of the split region is:
Wherein ENT_dir is the randomness of said segmented regions, CON_dir measures the local variation of said segmented regions, ENE_dir is the sum of the squared matrix elements of the segmented regions, HOM_dir measures the closeness of the matrix elements of said segmented regions to the diagonal, and COR_dir is the joint probability of occurrence of a given pixel pair of the segmented regions,
Wherein GLCM_dir is the matrix of gray-level relationships between pixels of said segmented regions and their neighboring pixels, and μ_x and σ_x denote the mean and standard deviation of the sums in each column of GLCM_dir.
7. The method of any one of claims 1-3, wherein said adaptively matching the local feature points and the global feature points to determine final feature points comprises:
Calculating the Euclidean distance between every two feature points corresponding to the local feature points and the global feature points;
Calculating the ratio of the closest Euclidean distance to the second-closest Euclidean distance;
And deleting the corresponding local feature point and the global feature point if the ratio of the closest Euclidean distance to the second closest Euclidean distance is larger than the corresponding set maximum ratio.
8. The method according to any one of claims 1-3, wherein the embedding of the watermark into the host image comprises:
Selecting a Luma component of a host image color space;
locating the feature region to the position of the feature point in the Luma component;
Determining approximate coefficients of the characteristic region by utilizing stationary wavelet transform;
Embedding a watermark in the approximate coefficient region.
9. The method of claim 8, wherein the method further comprises:
Performing singular value decomposition on the approximation coefficients AC to obtain the region S(AC) for watermark embedding, wherein AC = U(AC) S(AC) V*(AC), U(AC) and V(AC) are the left and right singular vectors, respectively, and V*(AC) is the conjugate transpose;
Extracting the diagonal elements β of S(AC) as the watermark carrier, wherein
Projecting β onto the expanded vector α, thereby generating ξ, wherein
ξ = β^T α = [s_1 s_2 ... s_n] α;
Modulating the watermark, wherein
DM denotes the dither-modulation quantizer, Wm_i denotes the corresponding watermark bit to be embedded, and QS denotes the quantization step size;
Performing a corresponding calculation on the modulated coefficients to generate the watermarked coefficients, wherein the watermarked coefficients are
10. an electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
Wherein the processor is configured to perform any of the methods of claims 1-9.
CN201910748671.9A 2019-08-14 2019-08-14 Image watermark embedding method and device based on self-adaptive feature point extraction Active CN110570343B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910748671.9A CN110570343B (en) 2019-08-14 2019-08-14 Image watermark embedding method and device based on self-adaptive feature point extraction

Publications (2)

Publication Number Publication Date
CN110570343A true CN110570343A (en) 2019-12-13
CN110570343B CN110570343B (en) 2023-04-07

Family

ID=68775249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910748671.9A Active CN110570343B (en) 2019-08-14 2019-08-14 Image watermark embedding method and device based on self-adaptive feature point extraction

Country Status (1)

Country Link
CN (1) CN110570343B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113747061A (en) * 2021-08-25 2021-12-03 国网河北省电力有限公司衡水供电分公司 Image acquisition method, device, terminal and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6678389B1 (en) * 1998-12-29 2004-01-13 Kent Ridge Digital Labs Method and apparatus for embedding digital information in digital multimedia data
CN1967594A (en) * 2006-10-16 2007-05-23 北京大学 An adaptive method for extending, transforming and dithering modulation of watermarking
CN102024244A (en) * 2009-09-10 2011-04-20 北京大学 Method and device for embedding and detecting watermarks based on image characteristic region
CN102903075A (en) * 2012-10-15 2013-01-30 西安电子科技大学 Robust watermarking method based on image feature point global correction
CN102903071A (en) * 2011-07-27 2013-01-30 阿里巴巴集团控股有限公司 Watermark adding method and system as well as watermark identifying method and system
CN103854249A (en) * 2013-12-28 2014-06-11 辽宁师范大学 Digital image watermarking method based on local index torque characteristic
CN108711132A (en) * 2018-05-09 2018-10-26 上海理工大学 Digital watermark method based on Harris angle point resist geometric attacks

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
LI, MIANJIE AND XIAOCHEN YUAN: "Robust Feature Extraction Based Watermarking Method Using Spread Transform Dither Modulation", 《2017 INTERNATIONAL CONFERENCE ON MACHINE VISION AND INFORMATION TECHNOLOGY (CMVIT)》 *
LI, MIANJIE ET AL.: "Image segmentation-based robust feature extraction for color image watermarking", 《INTERNATIONAL CONFERENCE ON GRAPHIC AND IMAGE PROCESSING》 *
ZHANG, ZHENGWEI ET AL.: "Robust Image Watermarking Algorithm Based on DWT and SIFT", 《JOURNAL OF HEFEI UNIVERSITY OF TECHNOLOGY (NATURAL SCIENCE EDITION)》 *
YANG, JINLAO ET AL.: "Robust Image Watermarking Algorithm Based on Elliptical Feature Regions and Significant Bit-Plane Decomposition", 《PACKAGING ENGINEERING》 *
DONG, SUHUI ET AL.: "Adaptive Color Image Watermarking Algorithm Based on the YCoCg-R Color Space and Discrete Cosine Transform", 《PACKAGING ENGINEERING》 *
CHEN, QING ET AL.: "A New Anti-Geometric-Attack Watermarking Algorithm Based on SIFT Geometric Correction", 《PACKAGING ENGINEERING》 *

Also Published As

Publication number Publication date
CN110570343B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
US7068809B2 (en) Segmentation in digital watermarking
US20070280551A1 (en) Removing ringing and blocking artifacts from JPEG compressed document images
CN110232650B (en) Color image watermark embedding method, detection method and system
WO2006017848A1 (en) Robust hidden data extraction method for scaling attacks
JP2001148776A (en) Image processing unit and method and storage medium
JP2005051785A (en) Method and systems for embedding watermarks in digital data and detecting it
CN105512999B (en) A kind of color image holographic watermark method of double transformation
Pei et al. A novel image recovery algorithm for visible watermarked images
Yang et al. Efficient reversible data hiding algorithm based on gradient-based edge direction prediction
CN110910299B (en) Self-adaptive reversible information hiding method based on integer wavelet transform
Ma et al. Adaptive spread-transform dither modulation using a new perceptual model for color image watermarking
CN110570343B (en) Image watermark embedding method and device based on self-adaptive feature point extraction
CN113763224A (en) Image processing method and device
Maity et al. Genetic algorithms for optimality of data hiding in digital images
CN111065000B (en) Video watermark processing method, device and storage medium
CN111640052B (en) Robust high-capacity digital watermarking method based on mark code
CN115545998A (en) Blind watermark embedding and extracting method and device, electronic equipment and storage medium
Kim et al. Robust watermarking in curvelet domain for preserving cleanness of high-quality images
Kamble et al. Wavelet based digital image watermarking algorithm using fractal images
Bhattacharyya et al. A novel approach of video steganography using pmm
JP3809310B2 (en) Image processing apparatus and method, and storage medium
KR100397752B1 (en) Watermarking method using block based on wavelet transform
CN112767227B (en) Image watermarking method capable of resisting screen shooting
JP2001119558A (en) Image processor and method and storage medium
JP2001119561A (en) Image processor, image processing method and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant