CN118138852B - Audio digital watermark embedding method and device - Google Patents

Audio digital watermark embedding method and device

Info

Publication number
CN118138852B
Authority
CN
China
Prior art keywords
watermark
sequence
channel
information
model
Prior art date
Legal status
Active
Application number
CN202410560776.2A
Other languages
Chinese (zh)
Other versions
CN118138852A (en)
Inventor
刘俊
吕相东
常超
马春来
吴一尘
刘春生
沈培佳
牛钊
Current Assignee
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN202410560776.2A
Publication of CN118138852A
Application granted
Publication of CN118138852B
Legal status: Active
Anticipated expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/835 Generation of protective data, e.g. certificates
    • H04N21/8358 Generation of protective data, e.g. certificates involving watermark
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/018 Audio watermarking, i.e. embedding inaudible data in the audio signal
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/233 Processing of audio elementary streams
    • H04N21/2335 Processing of audio elementary streams involving reformatting operations of audio signals, e.g. by converting from one coding standard to another
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439 Processing of audio elementary streams
    • H04N21/4398 Processing of audio elementary streams involving reformatting operations of audio signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Computer Security & Cryptography (AREA)
  • Editing Of Facsimile Originals (AREA)

Abstract

The invention discloses an audio digital watermark embedding method and device. The method comprises the following steps: acquiring an audio file and its corresponding channel model information; determining picture watermark file information, watermark transformation model information and watermark embedding information based on the channel model information; generating an audio digital watermark information sequence based on the determined picture watermark file information and watermark transformation model information; and embedding the audio digital watermark information sequence into the audio file based on the watermark embedding information to obtain the audio digital watermark file. Because the invention embeds an image watermark in the digital audio file and determines the embedding method according to the channel model, it addresses the low fidelity and poor robustness of digital audio watermarks, preserves the quality of the audio file after the digital watermark is embedded, and ensures that the digital watermark remains effective.

Description

Audio digital watermark embedding method and device
Technical Field
The present invention relates to the field of digital watermarking technologies, and in particular, to a method and an apparatus for embedding an audio digital watermark.
Background
In the digital age, copyright protection of audio content has become a global issue, and audio watermarking has been widely used as an important means of copyright protection. Traditional digital audio watermarking mainly relies on frequency domain and time domain transform methods such as the FFT and DCT, and has shortcomings in both fidelity and robustness. Audio digital watermark embedding generally places no special requirements on the attributes and characteristics of an audio file; only the length of the audio file limits the capacity available for embedding the digital watermark. However, a digital watermark cannot be embedded into a piece of audio at arbitrary positions: only by processing the audio and embedding the digital watermark at suitable positions can the intended effect be achieved. If the digital watermark is embedded at an unsuitable position, the quality of the whole audio file suffers and the digital watermark loses its effect.
At present, digital audio watermarks suffer from low fidelity and poor robustness, and embedding a digital watermark can degrade the quality of the whole audio file, causing the digital watermark to lose its effect.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an audio digital watermark embedding method and device that can improve the quality and adaptability of audio digital watermark embedding.
In order to solve the above technical problem, a first aspect of an embodiment of the present invention discloses an audio digital watermark embedding method, which includes:
s1, acquiring an audio file and corresponding channel model information thereof;
s2, determining picture watermark file information, watermark transformation model information and watermark embedding information based on the channel model information;
s3, generating an audio digital watermark information sequence based on the determined picture watermark file information and watermark transformation model information;
s4, based on watermark embedding information, embedding the audio digital watermark information sequence into the audio file to obtain the audio digital watermark file.
The obtaining the channel model information corresponding to the audio file includes:
s11, determining a sending end and a receiving end of the audio file;
s12, generating a channel detection sequence by using the transmitting end, and transmitting the channel detection sequence to the receiving end through a transmission channel;
s13, receiving a receiving sequence corresponding to the channel detection sequence by using the receiving end;
s14, comparing and counting the receiving sequence and the channel detection sequence to obtain channel model information.
The comparing and counting of the receiving sequence and the channel detection sequence to obtain the channel model information comprises the following steps:
S141, representing the channel detection sequence as c = {c(i)} and the received sequence as r = {r(i)}, where i = 1, 2, ..., A1 represents the sequence number of an element in the sequence and A1 represents the sequence length;
S142, equally dividing the channel detection sequence and the receiving sequence into N1 sections to respectively obtain corresponding N1 subsequences; numbering the subsequences of the channel detection sequence and the receiving sequence according to the sequence of occurrence of the subsequences to obtain sequence number values of the subsequences;
S143, respectively calculating the sum of squares of the differences of all elements of the subsequences for the subsequences with the same sequence number of the channel detection sequence and the receiving sequence, wherein the calculation expression is as follows:
D(m) = Σ_{j=1..A1/N1} ( c(m, j) − r(m, j) )²
wherein c(m, j) represents the j-th element of the m-th subsequence of the channel detection sequence, r(m, j) represents the j-th element of the m-th subsequence of the received sequence, and D(m) represents the sum of squares of the differences of all elements of the m-th pair of subsequences;
S144, carrying out accumulation and variance processing on the sum of squares of the differences of all the subsequences to obtain the channel power statistical information; the channel power statistical information comprises a channel power difference value and a channel power variance value; the channel power difference value is obtained by accumulating the square sums of the differences of all the subsequences; the channel power variance value is obtained by solving the variance of the sum of squares of the differences of all the subsequences;
S145, respectively carrying out frequency domain transformation processing on the channel detection sequence and the receiving sequence to respectively obtain a channel detection frequency domain sequence and a receiving frequency domain sequence;
S146, equally dividing the channel detection frequency domain sequence and the receiving frequency domain sequence into N1 sections respectively to obtain corresponding N1 frequency domain subsequences respectively; numbering the frequency domain subsequences of the channel detection frequency domain sequence and the receiving frequency domain sequence according to the sequence of occurrence of the frequency domain subsequences respectively to obtain sequence number values of the frequency domain subsequences;
S147, for the frequency domain subsequences with the same sequence number of the channel detection frequency domain sequence and the received frequency domain sequence, respectively calculating the sum of squares of the differences of all elements of the frequency domain subsequences, wherein the calculation expression is as follows:
E(m) = Σ_{j=1..B1} ( C(m, j) − R(m, j) )²
wherein C(m, j) represents the j-th element of the m-th frequency domain subsequence of the channel detection frequency domain sequence, R(m, j) represents the j-th element of the m-th frequency domain subsequence of the received frequency domain sequence, E(m) represents the sum of squares of the differences of all elements of the m-th pair of frequency domain subsequences, and B1 is the number of elements contained in each frequency domain subsequence;
S148, carrying out accumulation and variance processing on the sum of squares of the differences of all the frequency domain subsequences to obtain the channel frequency response statistical information; the channel frequency response statistical information comprises a frequency response difference value and a frequency response variance value; the frequency response difference value is obtained by accumulating the square sums of the differences of all the frequency domain subsequences; the frequency response variance value is obtained by solving the variance of the sum of squares of the differences of all the frequency domain subsequences;
s149, processing the channel power statistical information and the channel frequency response statistical information by using a channel discrimination model to determine corresponding channel model information; the channel discrimination model comprises a numerical distribution range of channel power statistical information and channel frequency response statistical information of the first to fourth channel models; the channel model includes a first channel model, a second channel model, a third channel model, and a fourth channel model.
The processing of the channel power statistical information and the channel frequency response statistical information by using a channel discrimination model to determine the corresponding channel model information specifically comprises the following steps:
the channel discrimination model comprises a power variance value threshold and a frequency response variance value threshold;
When the channel power variance value is larger than a power variance value threshold and the frequency response variance value is larger than the frequency response variance value threshold, determining the channel model information as a first channel model;
when the channel power variance value is larger than a power variance value threshold and the frequency response variance value is smaller than the frequency response variance value threshold, determining the channel model information as a second channel model;
when the channel power variance value is smaller than a power variance value threshold and the frequency response variance value is smaller than the frequency response variance value threshold, determining the channel model information as a third channel model;
and when the channel power variance value is smaller than a power variance value threshold and the frequency response variance value is larger than the frequency response variance value threshold, determining the channel model information as a fourth channel model.
The determining picture watermark file information, watermark transformation model information and watermark embedding information based on the channel model information comprises:
s21, extracting a corresponding picture watermark file from a preset picture watermark file database according to the channel model information;
S22, determining corresponding watermark transformation model information according to the channel model information;
S23, carrying out transform domain parameter extraction processing on the receiving sequence and the channel detection sequence to obtain watermark transformation model parameter information;
s24, determining watermark embedding information according to the channel model information and the watermark transformation model information.
The determining corresponding watermark transformation model information according to the channel model information comprises the following steps:
determining a watermark transformation model corresponding to the first channel model as a first watermark transformation model; the computational expression of the first watermark transformation model is:
wherein the expression takes as input the k-th element of the signal sequence corresponding to the picture watermark file and the length of that signal sequence, and produces the element in the z-th row and x-th column of the transformed data matrix obtained by the first watermark transformation model; it further involves the transformation function corresponding to the first watermark transformation model, the value of that function at the indicated position, and the time domain transformation length and frequency domain transformation length of the first watermark transformation model;
determining a watermark transformation model corresponding to the second channel model as a second watermark transformation model; the computational expression of the second watermark transformation model is:
wherein the expression produces the element in the z-th row and x-th column of the transformed data matrix obtained by the second watermark transformation model, and involves the transformation function corresponding to the second watermark transformation model, the value of that function at the indicated position, and the time domain transformation length and frequency domain transformation length of the second watermark transformation model;
Determining a watermark transformation model corresponding to the third channel model as a third watermark transformation model; the computational expression of the third watermark transformation model is:
wherein the expression produces the element in the z-th row and x-th column of the transformed data matrix obtained by the third watermark transformation model, and involves the transformation function corresponding to the third watermark transformation model, the value of that function at the indicated position, the time domain transformation length and frequency domain transformation length of the third watermark transformation model, and the angle average value of the third watermark transformation model;
Determining a watermark transformation model corresponding to the fourth channel model as a fourth watermark transformation model; the calculation expression of the fourth watermark transformation model is as follows:
wherein the expression produces the element in the z-th row and x-th column of the transformed data matrix obtained by the fourth watermark transformation model, and involves the transformation function corresponding to the fourth watermark transformation model, the value of that function at the indicated position, the time domain transformation length of the fourth watermark transformation model, and the angle average value of the fourth watermark transformation model.
The step of extracting transform domain parameters of the receiving sequence and the channel detection sequence to obtain watermark transformation model parameter information comprises the following steps:
s231, using the watermark transformation model corresponding to the watermark transformation model information determined in the S22 as a transformation domain model;
S232, respectively carrying out transformation processing on the receiving sequence and the channel detection sequence by utilizing the transformation domain model to obtain a receiving transformation domain sequence and a channel detection transformation domain sequence;
s233, constructing a difference function by utilizing the difference value of the received transform domain sequence and the channel detection transform domain sequence;
S234, constructing and obtaining a model parameter information optimization model by utilizing the difference function; the objective function of the model parameter information optimization model is the minimization of the difference function, and its independent variables are the parameters of the transformation function corresponding to the watermark transformation model determined in step S22;
S235, solving the model parameter information optimization model to obtain optimal parameters; and determining the optimal parameters as watermark transformation model parameter information.
The embedding of the audio digital watermark information sequence into the audio file based on the watermark embedding information to obtain the audio digital watermark file comprises the following steps:
s41, performing discrete sampling on the audio file to obtain a discrete audio sequence;
S42, respectively carrying out uniform segmentation processing on the discrete audio sequence and the audio digital watermark information sequence to respectively obtain a plurality of discrete audio subsequences and a plurality of audio digital watermark information subsequences; the number of discrete audio subsequences and the number of audio digital watermark information subsequences are the same;
S43, inserting the corresponding audio digital watermark information subsequence into each discrete audio subsequence according to the embedding position and the position interval in the watermark embedding information;
S44, when the corresponding audio digital watermark information subsequences have been inserted into all the discrete audio subsequences, determining the obtained signal sequence to be the audio digital watermark file.
The second aspect of the embodiment of the invention discloses an audio digital watermark embedding device, which comprises:
A memory storing executable program code;
A processor coupled to the memory;
The processor calls the executable program codes stored in the memory to execute the audio digital watermark embedding method.
A third aspect of the embodiments of the present invention discloses a computer-readable storage medium storing computer instructions for executing the audio digital watermark embedding method when the computer instructions are invoked.
The beneficial effects of the invention are as follows:
1. Because the invention embeds an image watermark in the digital audio file and determines the embedding method according to the channel model, it addresses the low fidelity and poor robustness of digital audio watermarks, preserves the quality of the audio file after the digital watermark is embedded, and ensures that the digital watermark remains effective.
2. Before embedding the image watermark into the audio file, the invention first probes and models the transmission channel; once the channel model is obtained, the corresponding watermark transformation model is determined, thereby improving the reliability of audio file transmission.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a picture watermark file before embedding;
fig. 3 is a picture watermark file obtained by detection at a receiving end.
Detailed Description
For a better understanding of the present disclosure, an embodiment is presented herein.
FIG. 1 is a flow chart of the method of the present invention; FIG. 2 is a picture watermark file before embedding; fig. 3 is a picture watermark file obtained by detection at a receiving end.
To address the problems that existing digital audio watermarks have low fidelity and poor robustness, and that embedding the digital watermark degrades the quality of the whole audio file so that the watermark loses its effect, the invention discloses an audio digital watermark embedding method comprising the following steps:
s1, acquiring an audio file and corresponding channel model information thereof;
s2, determining picture watermark file information, watermark transformation model information and watermark embedding information based on the channel model information;
s3, generating an audio digital watermark information sequence based on the determined picture watermark file information and watermark transformation model information;
s4, based on watermark embedding information, embedding the audio digital watermark information sequence into the audio file to obtain the audio digital watermark file.
S5, adding the embedded characteristic information into the audio digital watermark file to obtain a transmission information sequence; the embedded characteristic information comprises watermark embedded information, watermark transformation model parameter information and audio digital watermark information subsequence number information;
S6, modulating the transmission information sequence to obtain a modulation signal; transmitting the modulated signal to a receiving end through a transmission channel;
And S7, detecting the received modulation signal by using a receiving end to obtain a received watermark sequence.
S8, judging the received watermark sequence and the audio digital watermark information sequence to obtain an audio digital watermark embedding judgment result.
The adding of the embedded characteristic information to the audio digital watermark file to obtain a transmission information sequence specifically means appending the embedded characteristic information to the front or the rear of the audio digital watermark file.
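As an illustration of step S5, the following sketch builds the transmission information sequence by placing the embedded characteristic information before or after the audio digital watermark file; the serialized layout of the characteristic information is an assumption, since the embodiment does not fix a concrete format (Python is used here purely for illustration).

```python
import numpy as np

def build_transmission_sequence(audio_watermark_file, embedded_feature_info, front=True):
    """Form the transmission information sequence of step S5.

    audio_watermark_file:  1-D array, the audio digital watermark file (output of S4).
    embedded_feature_info: 1-D array carrying the watermark embedding information, the
                           watermark transformation model parameter information and the
                           number of audio digital watermark information subsequences,
                           already serialized to samples (layout assumed).
    front:                 True adds the feature information before the file, False after.
    """
    audio_watermark_file = np.asarray(audio_watermark_file)
    embedded_feature_info = np.asarray(embedded_feature_info)
    if front:
        return np.concatenate([embedded_feature_info, audio_watermark_file])
    return np.concatenate([audio_watermark_file, embedded_feature_info])
```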
The obtaining the channel model information corresponding to the audio file includes:
s11, determining a sending end and a receiving end of the audio file;
s12, generating a channel detection sequence by using the transmitting end, and transmitting the channel detection sequence to the receiving end through a transmission channel;
s13, receiving a receiving sequence corresponding to the channel detection sequence by using the receiving end;
s14, comparing and counting the receiving sequence and the channel detection sequence to obtain channel model information.
The generating of a channel detection sequence by the transmitting end and transmitting it to the receiving end via a transmission channel includes:
The transmitting end carries out modulation and up-conversion processing on the channel detection sequence, and the obtained signal is transmitted to the receiving end through a transmission channel.
The receiving end is used for receiving the received sequence corresponding to the channel detection sequence, namely the receiving end demodulates and detects the received signal to obtain an information sequence, and the information sequence is used as the received sequence.
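For concreteness, a minimal simulation of the sounding exchange in S11 to S13 is sketched below. The pseudorandom ±1 detection sequence and the gain-plus-noise channel are assumptions standing in for the real modulation, up-conversion and detection chain, which the embodiment does not restrict.

```python
import numpy as np

def generate_detection_sequence(length, seed=0):
    """Pseudorandom +/-1 channel detection sequence produced at the transmitting end."""
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=length)

def simulate_transmission(detection_sequence, gain=0.9, noise_std=0.05, seed=1):
    """Stand-in for modulation, up-conversion, the physical channel and detection:
    the receiving end observes an attenuated, noisy copy of the detection sequence."""
    rng = np.random.default_rng(seed)
    return gain * detection_sequence + rng.normal(0.0, noise_std, size=detection_sequence.shape)

# Example: A1 = 1024 samples, later split into N1 subsequences for the statistics of S142.
detection_sequence = generate_detection_sequence(1024)
received_sequence = simulate_transmission(detection_sequence)
```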
The comparing and counting of the receiving sequence and the channel detection sequence to obtain the channel model information comprises the following steps:
S141, representing the channel detection sequence as c = {c(i)} and the received sequence as r = {r(i)}, where i = 1, 2, ..., A1 represents the sequence number of an element in the sequence and A1 represents the sequence length;
s142, equally dividing the channel detection sequence and the receiving sequence into N1 sections to respectively obtain corresponding N1 subsequences; numbering the subsequences of the channel detection sequence and the receiving sequence according to the sequence of occurrence of the subsequences to obtain sequence number values of the subsequences;
n1 should be chosen to ensure that A1/N1 is an integer.
For a channel sounding sequence, it includes 1 st to N1 st subsequences; for the reception sequence, it includes the 1 st to N1 st subsequences.
S143, respectively calculating the sum of squares of the differences of all elements of the subsequences for the subsequences with the same sequence number of the channel detection sequence and the receiving sequence, wherein the calculation expression is as follows:
D(m) = Σ_{j=1..A1/N1} ( c(m, j) − r(m, j) )²
wherein c(m, j) represents the j-th element of the m-th subsequence of the channel detection sequence, r(m, j) represents the j-th element of the m-th subsequence of the received sequence, and D(m) represents the sum of squares of the differences of all elements of the m-th pair of subsequences;
S144, carrying out accumulation and variance processing on the sum of squares of the differences of all the subsequences to obtain the channel power statistical information; the channel power statistical information comprises a channel power difference value and a channel power variance value; the channel power difference value is obtained by accumulating the square sums of the differences of all the subsequences; the channel power variance value is obtained by solving the variance of the sum of squares of the differences of all the subsequences.
S145, respectively carrying out frequency domain transformation processing on the channel detection sequence and the receiving sequence to respectively obtain a channel detection frequency domain sequence and a receiving frequency domain sequence;
S146, equally dividing the channel detection frequency domain sequence and the receiving frequency domain sequence into N1 sections respectively to obtain corresponding N1 frequency domain subsequences respectively; numbering the frequency domain subsequences of the channel detection frequency domain sequence and the receiving frequency domain sequence according to the sequence of occurrence of the frequency domain subsequences respectively to obtain sequence number values of the frequency domain subsequences;
S147, for the frequency domain subsequences with the same sequence number of the channel detection frequency domain sequence and the received frequency domain sequence, respectively calculating the sum of squares of the differences of all elements of the frequency domain subsequences, wherein the calculation expression is as follows:
E(m) = Σ_{j=1..B1} ( C(m, j) − R(m, j) )²
wherein C(m, j) represents the j-th element of the m-th frequency domain subsequence of the channel detection frequency domain sequence, R(m, j) represents the j-th element of the m-th frequency domain subsequence of the received frequency domain sequence, E(m) represents the sum of squares of the differences of all elements of the m-th pair of frequency domain subsequences, and B1 is the number of elements contained in each frequency domain subsequence;
s148, carrying out accumulation and variance processing on the sum of squares of the differences of all the frequency domain subsequences to obtain the channel frequency response statistical information; the channel frequency response statistical information comprises a frequency response difference value and a frequency response variance value; the frequency response difference value is obtained by accumulating the square sums of the differences of all the frequency domain subsequences; the frequency response variance value is obtained by solving the variance of the sum of squares of the differences of all the frequency domain subsequences.
S149, processing the channel power statistical information and the channel frequency response statistical information by using a channel discrimination model to determine corresponding channel model information; the channel discrimination model comprises a numerical distribution range of channel power statistical information and channel frequency response statistical information of the first to fourth channel models; the channel model includes a first channel model, a second channel model, a third channel model, and a fourth channel model.
The step S149 specifically includes:
the channel discrimination model comprises a power variance value threshold and a frequency response variance value threshold;
When the channel power variance value is larger than a power variance value threshold and the frequency response variance value is larger than the frequency response variance value threshold, determining the channel model information as a first channel model;
when the channel power variance value is larger than a power variance value threshold and the frequency response variance value is smaller than the frequency response variance value threshold, determining the channel model information as a second channel model;
when the channel power variance value is smaller than a power variance value threshold and the frequency response variance value is smaller than the frequency response variance value threshold, determining the channel model information as a third channel model;
and when the channel power variance value is smaller than a power variance value threshold and the frequency response variance value is larger than the frequency response variance value threshold, determining the channel model information as a fourth channel model.
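Steps S141 to S148 and the decision rules of S149 can be sketched as follows. The FFT magnitude used as the frequency domain transformation, the handling of values exactly equal to a threshold, and the threshold values themselves are assumptions; the per-subsequence sum of squared differences, its accumulation and its variance follow the steps above.

```python
import numpy as np

def channel_statistics(detection_sequence, received_sequence, n1):
    """Steps S142-S144 (time domain) and S145-S148 (frequency domain):
    per-subsequence sum of squared differences, then its sum and variance."""
    c = np.asarray(detection_sequence, dtype=float)
    r = np.asarray(received_sequence, dtype=float)
    assert len(c) == len(r) and len(c) % n1 == 0, "A1/N1 must be an integer"

    def sum_and_variance(a, b):
        diffs = (a.reshape(n1, -1) - b.reshape(n1, -1)) ** 2
        per_subseq = diffs.sum(axis=1)          # D(m) or E(m), m = 1..N1
        return per_subseq.sum(), per_subseq.var()

    power_diff, power_var = sum_and_variance(c, r)                 # channel power statistics
    freq_diff, freq_var = sum_and_variance(np.abs(np.fft.fft(c)),  # channel frequency response
                                           np.abs(np.fft.fft(r)))  # statistics (FFT assumed)
    return power_diff, power_var, freq_diff, freq_var

def discriminate_channel(power_var, freq_var, power_var_threshold, freq_var_threshold):
    """Threshold rules of S149: returns 1..4 for the first to fourth channel model.
    Values exactly on a threshold are resolved arbitrarily here."""
    if power_var > power_var_threshold and freq_var > freq_var_threshold:
        return 1
    if power_var > power_var_threshold and freq_var <= freq_var_threshold:
        return 2
    if power_var <= power_var_threshold and freq_var <= freq_var_threshold:
        return 3
    return 4
```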
The determining picture watermark file information, watermark transformation model information and watermark embedding information based on the channel model information comprises:
s21, extracting a corresponding picture watermark file from a preset picture watermark file database according to the channel model information;
S22, determining corresponding watermark transformation model information according to the channel model information;
S23, carrying out transform domain parameter extraction processing on the receiving sequence and the channel detection sequence to obtain watermark transformation model parameter information;
s24, determining watermark embedding information according to the channel model information and the watermark transformation model information.
The preset picture watermark file database comprises 4 classes of picture watermark files, respectively corresponding to the 4 channel models; in step S21, one picture watermark file is selected for extraction from the class of picture watermark files corresponding to the channel model indicated by the channel model information.
The determining watermark embedding information according to the channel model information and watermark transformation model information comprises the following steps:
determining the embedding position of the picture watermark file in the audio file according to the channel model information;
Specifically, the method comprises the following steps:
when the channel model information is a first channel model, determining that the embedding position of the picture watermark file in the audio file is a first position; the serial number value of the information sequence of the audio file at the first position is Y11;
when the channel model information is a second channel model, determining that the embedding position of the picture watermark file in the audio file is a second position; the serial number value of the information sequence of the audio file at the second position is Y12;
when the channel model information is a third channel model, determining that the embedding position of the picture watermark file in the audio file is a third position; the serial number value of the information sequence of the audio file at the third position is Y13;
When the channel model information is a fourth channel model, determining that the embedding position of the picture watermark file in the audio file is a fourth position; and the serial number value of the information sequence of the audio file at the fourth position is Y14.
The intervals among Y11, Y12, Y13 and Y14 should be greater than the length of the sequence corresponding to the segmented picture watermark file.
Determining the position interval of the picture watermark file after being embedded into the audio file according to the watermark transformation model information;
The determined embedding position and position interval together constitute the watermark embedding information.
Specifically, when the watermark transformation model information is a first watermark transformation model, determining that the position interval information is N11; when the watermark transformation model information is a second watermark transformation model, determining that the position interval information is N12; when the watermark transformation model information is a third watermark transformation model, determining that the position interval information is N13; and when the watermark transformation model information is a fourth watermark transformation model, determining that the position interval information is N14.
The embedding position refers to the position at which each segment of the picture watermark file is embedded into the audio file after the information sequences corresponding to the picture watermark file and the audio file have each been segmented.
The position interval information refers to the interval between the sequence numbers, within the corresponding information sequence of the audio file, at which the elements of each segment of the picture watermark file are embedded.
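A sketch of S21 and S24 as table lookups. The file names, the positions Y11 to Y14 and the intervals N11 to N14 are placeholder values; only the mapping structure (embedding position from the channel model, position interval from the watermark transformation model) comes from the text above.

```python
# Placeholder configuration keyed by channel model / transform model (1..4); the concrete
# files, positions Y11..Y14 and intervals N11..N14 are implementation choices.
PICTURE_WATERMARK_DB = {1: "wm_class1.png", 2: "wm_class2.png",
                        3: "wm_class3.png", 4: "wm_class4.png"}
EMBED_POSITION = {1: 100, 2: 400, 3: 700, 4: 1000}   # Y11..Y14, keyed by channel model
POSITION_INTERVAL = {1: 2, 2: 3, 3: 4, 4: 5}         # N11..N14, keyed by transform model

def determine_watermark_embedding_info(channel_model, transform_model):
    """Step S24: embedding position from the channel model,
    position interval from the watermark transformation model."""
    return {"picture_watermark_file": PICTURE_WATERMARK_DB[channel_model],
            "embed_position": EMBED_POSITION[channel_model],
            "position_interval": POSITION_INTERVAL[transform_model]}
```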
The determining corresponding watermark transformation model information according to the channel model information comprises the following steps:
determining a watermark transformation model corresponding to the first channel model as a first watermark transformation model; the computational expression of the first watermark transformation model is:
wherein the expression takes as input the k-th element of the signal sequence corresponding to the picture watermark file and the length of that signal sequence, and produces the element in the z-th row and x-th column of the transformed data matrix obtained by the first watermark transformation model; it further involves the transformation function corresponding to the first watermark transformation model, the value of that function at the indicated position, and the time domain transformation length and frequency domain transformation length of the first watermark transformation model;
determining a watermark transformation model corresponding to the second channel model as a second watermark transformation model; the computational expression of the second watermark transformation model is:
wherein the expression produces the element in the z-th row and x-th column of the transformed data matrix obtained by the second watermark transformation model, and involves the transformation function corresponding to the second watermark transformation model, the value of that function at the indicated position, and the time domain transformation length and frequency domain transformation length of the second watermark transformation model; the second watermark transformation model has stronger resistance to channel frequency offset.
Determining a watermark transformation model corresponding to the third channel model as a third watermark transformation model; the computational expression of the third watermark transformation model is:
wherein the expression produces the element in the z-th row and x-th column of the transformed data matrix obtained by the third watermark transformation model, and involves the transformation function corresponding to the third watermark transformation model, the value of that function at the indicated position, the time domain transformation length and frequency domain transformation length of the third watermark transformation model, and the angle average value of the third watermark transformation model.
Determining a watermark transformation model corresponding to the fourth channel model as a fourth watermark transformation model; the calculation expression of the fourth watermark transformation model is as follows:
wherein the expression produces the element in the z-th row and x-th column of the transformed data matrix obtained by the fourth watermark transformation model, and involves the transformation function corresponding to the fourth watermark transformation model, the value of that function at the indicated position, the time domain transformation length of the fourth watermark transformation model, and the angle average value of the fourth watermark transformation model.
In the calculation expressions of the above four watermark transformation models, when the sequence number of the signal sequence exceeds the sequence length, the index wraps around to the first element and the values are taken from there onward.
The signal sequence corresponding to the picture watermark file is a one-dimensional data sequence obtained by performing discrete sampling processing on the picture watermark file to obtain a watermark matrix and performing dimension reduction processing on the watermark matrix; the dimension reduction processing is to splice the matrix row by row or column by column.
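The dimension reduction described above is plain row-by-row (or column-by-column) flattening of the watermark matrix; a sketch follows, with the discrete sampling of the picture assumed to have already produced the matrix.

```python
import numpy as np

def picture_watermark_to_signal_sequence(watermark_matrix, by_rows=True):
    """Turn the discretely sampled watermark matrix into the one-dimensional
    signal sequence by splicing it row by row (or column by column)."""
    m = np.asarray(watermark_matrix)
    return m.flatten(order="C" if by_rows else "F")  # C: row by row, F: column by column

# Example: a 4x4 binary watermark matrix becomes a length-16 signal sequence.
signal_sequence = picture_watermark_to_signal_sequence(np.eye(4, dtype=int))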
The step of extracting transform domain parameters of the receiving sequence and the channel detection sequence to obtain watermark transformation model parameter information comprises the following steps:
s231, using the watermark transformation model corresponding to the watermark transformation model information determined in the S22 as a transformation domain model;
S232, respectively carrying out transformation processing on the receiving sequence and the channel detection sequence by utilizing the transformation domain model to obtain a receiving transformation domain sequence and a channel detection transformation domain sequence;
s233, constructing a difference function by utilizing the difference value of the received transform domain sequence and the channel detection transform domain sequence;
S234, constructing and obtaining a model parameter information optimization model by utilizing the difference function; the objective function of the model parameter information optimization model is the minimization of the difference function, and its independent variables are the parameters of the transformation function corresponding to the watermark transformation model determined in step S22;
S235, solving the model parameter information optimization model to obtain optimal parameters; and determining the optimal parameters as watermark transformation model parameter information.
The model parameter information optimization model can be solved by an optimization method or by an exhaustive search method.
The watermark transformation model parameter information is parameters of a transformation function corresponding to the watermark transformation model.
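Steps S231 to S235 can be sketched as an exhaustive search. The transform is left abstract as a caller-supplied function because the four transformation models are given by the embodiment's own expressions; the candidate parameter grid and the quadratic difference function are assumptions.

```python
import numpy as np

def extract_transform_parameters(received_sequence, detection_sequence,
                                 transform, candidate_params):
    """Exhaustive search of S234/S235: pick the transformation-function parameter
    that minimizes the difference between the two transform-domain sequences."""
    best_param, best_cost = None, np.inf
    for p in candidate_params:
        rx_domain = transform(received_sequence, p)        # receiving transform-domain sequence
        tx_domain = transform(detection_sequence, p)       # channel detection transform-domain sequence
        cost = np.sum(np.abs(rx_domain - tx_domain) ** 2)  # difference function (assumed quadratic)
        if cost < best_cost:
            best_param, best_cost = p, cost
    return best_param  # watermark transformation model parameter information
```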
The generating an audio digital watermark information sequence based on the determined picture watermark file information and watermark transformation model information comprises the following steps:
Transforming the picture watermark file information by using the watermark transformation model to obtain transformed data information;
And carrying out format conversion processing on the converted data information to obtain an audio digital watermark information sequence matched with the format of the audio file.
The transforming the picture watermark file information by using the watermark transformation model to obtain transformed data information comprises the following steps:
Determining a watermark transformation model by utilizing watermark transformation model parameter information and watermark transformation model information;
Performing discrete sampling processing on the picture watermark file information to obtain a corresponding signal sequence;
transforming the signal sequence by using a watermark transformation model to obtain transformed data information; the transformed data information is a two-dimensional matrix.
The watermark transformation model used is determined by utilizing watermark transformation model parameter information and watermark transformation model information, specifically, the expression of the transformation function of the watermark transformation model is determined by utilizing watermark transformation model parameter information, so that the specific expression of the watermark transformation model used is determined.
The converting the format of the transformed data information to obtain an audio digital watermark information sequence matched with the format of the audio file may be:
and performing dimension reduction processing on the two-dimensional matrix corresponding to the transformed data information to obtain an audio digital watermark information sequence matched with the format of the audio file. The dimension reduction process can be realized, for example, with the reshape command (or the colon operator) in Matlab.
The embedding of the audio digital watermark information sequence into the audio file based on the watermark embedding information to obtain the audio digital watermark file comprises the following steps:
s41, performing discrete sampling on the audio file to obtain a discrete audio sequence;
S42, respectively carrying out uniform segmentation processing on the discrete audio sequence and the audio digital watermark information sequence to respectively obtain a plurality of discrete audio subsequences and a plurality of audio digital watermark information subsequences; the number of discrete audio subsequences and the number of audio digital watermark information subsequences are the same;
S43, inserting the corresponding audio digital watermark information subsequence into each discrete audio subsequence according to the embedding position and the position interval in the watermark embedding information;
S43 includes, for example:
when the embedding position is 2 and the position interval is 2, the first discrete audio subsequence is [1,1,0,0,1,0,1,0,1,0,1] and the first audio digital watermark information subsequence is [1,0,1]; the subsequence after insertion is [1,1,0,0,0,0,1,1,1,0,1], where the elements at positions 2, 5 and 8 are the inserted audio digital watermark information subsequence, and the insertion replaces the original values at those sequence numbers (a code sketch of this rule follows step S44).
S44, when the corresponding audio digital watermark information subsequences have been inserted into all the discrete audio subsequences, determining the obtained signal sequence to be the audio digital watermark file.
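Reading the worked example above as "first watermark element at the embedding position, each further element separated by position-interval host samples, replacing the original values", S43 can be sketched as follows; this positional reading is an interpretation consistent with the example rather than a definition from the text.

```python
def embed_watermark_subsequence(audio_subseq, watermark_subseq, embed_position, interval):
    """Replace host samples at 1-based positions embed_position, embed_position+(interval+1), ...
    with the watermark elements, reproducing the worked example:
    embed([1,1,0,0,1,0,1,0,1,0,1], [1,0,1], 2, 2) -> [1,1,0,0,0,0,1,1,1,0,1]."""
    out = list(audio_subseq)
    for k, bit in enumerate(watermark_subseq):
        idx = (embed_position - 1) + k * (interval + 1)  # convert to 0-based index
        out[idx] = bit
    return out

assert embed_watermark_subsequence([1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1], [1, 0, 1], 2, 2) == \
       [1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1]
```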
The method for detecting the received modulated signal by the receiving end to obtain a received watermark sequence comprises the following steps:
S71, demodulating the received modulated signal by using a receiving end to obtain a receiving sequence;
s72, extracting embedded characteristic information from a receiving sequence, and deleting the embedded characteristic information from the receiving sequence to obtain a receiving data sequence;
S73, determining the number information of the audio digital watermark information subsequences in the embedded characteristic information, wherein the number information is the dividing number M1 of the received data sequence; uniformly dividing the received data sequence to obtain M1 received data subsequences;
S74, extracting corresponding watermark subsequences from each received data subsequence by utilizing the watermark embedding information;
s75, integrating watermark subsequences corresponding to all received data subsequences to obtain an extracted watermark sequence;
s76, constructing and obtaining a watermark inverse transformation model by utilizing watermark transformation model information and watermark transformation model parameter information;
and S77, carrying out transformation processing on the extracted watermark sequence by utilizing a watermark inverse transformation model to obtain a received watermark sequence.
The integration processing in S75 is to splice all watermark sub-sequences together to obtain a total sequence.
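The extraction of S74 and the integration of S75 mirror the insertion rule; a sketch under the same positional reading as the embedding sketch above:

```python
def extract_watermark_subsequence(data_subseq, watermark_length, embed_position, interval):
    """Step S74: read the watermark elements back from the embedding positions."""
    return [data_subseq[(embed_position - 1) + k * (interval + 1)]
            for k in range(watermark_length)]

def integrate_watermark(data_subsequences, watermark_length, embed_position, interval):
    """Step S75: splice the per-subsequence watermarks into the extracted watermark sequence."""
    extracted = []
    for subseq in data_subsequences:
        extracted.extend(extract_watermark_subsequence(subseq, watermark_length,
                                                       embed_position, interval))
    return extracted

# Round trip with the worked example from S43:
assert extract_watermark_subsequence([1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1], 3, 2, 2) == [1, 0, 1]
```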
The transforming the extracted watermark sequence by using the watermark inverse transformation model to obtain a received watermark sequence comprises the following steps:
Firstly, a matrix is constructed from the extracted watermark sequence, with dimensions matched to the transformation dimensions of the watermark inverse transformation model; the watermark inverse transformation model is then used to transform this matrix, thereby obtaining the received watermark sequence.
The construction of the watermark inverse transformation model by utilizing watermark transformation model information and watermark transformation model parameter information comprises the following steps:
When the watermark transformation model is a first watermark transformation model, constructing and obtaining a first watermark inverse transformation model, wherein the calculation expression is as follows:
wherein the expression produces the k-th element of the received watermark sequence from the element in the z-th row and x-th column of the matrix constructed from the extracted watermark sequence; it involves the number of rows and the number of columns of that matrix, the conjugate function of the transformation function corresponding to the first watermark transformation model, the value of that conjugate function at the indicated position, and the time domain transformation length and frequency domain transformation length of the first watermark inverse transformation model;
When the watermark transformation model is a second watermark transformation model, constructing and obtaining a second watermark inverse transformation model, wherein the calculation expression is as follows:
wherein the expression produces the k-th element of the received watermark sequence and involves the conjugate function of the transformation function corresponding to the second watermark transformation model, the value of that conjugate function at the indicated position, and the time domain transformation length and frequency domain transformation length of the second watermark inverse transformation model;
when the watermark transformation model is a third watermark transformation model, constructing and obtaining a third watermark inverse transformation model, wherein the calculation expression is as follows:
wherein the expression involves the conjugate function of the transformation function corresponding to the third watermark transformation model, the value of that conjugate function at the indicated position, the time domain transformation length and frequency domain transformation length of the third watermark inverse transformation model, and the angle average value of the third watermark inverse transformation model.
When the watermark transformation model is a fourth watermark transformation model, constructing and obtaining a fourth watermark inverse transformation model, wherein the calculation expression is as follows:
wherein the expression involves the conjugate function of the transformation function corresponding to the fourth watermark transformation model, the value of that conjugate function at the indicated position, the time domain transformation length of the fourth watermark inverse transformation model, and the angle average value of the fourth watermark inverse transformation model.
The parameter values of the conjugate function of the transformation function are determined by watermark transformation model parameter information.
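The four inverse models themselves are not reproduced here; the sketch below only illustrates the stated principle that the inverse uses the conjugate of the forward transformation function, with an ordinary DFT pair standing in for the embodiment's watermark transformation models.

```python
import numpy as np

def forward_transform(sequence):
    """Stand-in forward transform: DFT with kernel exp(-2j*pi*k*n/N)."""
    x = np.asarray(sequence, dtype=complex)
    n = len(x)
    k = np.arange(n)
    kernel = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return kernel @ x

def inverse_transform(coefficients):
    """Inverse built from the conjugate of the forward kernel (plus 1/N scaling)."""
    c = np.asarray(coefficients, dtype=complex)
    n = len(c)
    k = np.arange(n)
    kernel = np.conj(np.exp(-2j * np.pi * np.outer(k, k) / n))
    return (kernel @ c) / n

x = np.array([1.0, 0.0, 1.0, 1.0])
assert np.allclose(inverse_transform(forward_transform(x)), x)
```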
And judging the received watermark sequence and the audio digital watermark information sequence to obtain an audio digital watermark embedding judgment result:
S81, dividing the received watermark sequence and the audio digital watermark information sequence into M1 subsequences each, obtaining the first subsequences and the second subsequences respectively;
S82, constructing a first matrix by using the first subsequence as a row vector; constructing and obtaining a second matrix by using the second subsequence as a row vector; the dimensions of the first matrix and the second matrix are M1×M2;
s83, performing association calculation on the first matrix and the second matrix to obtain an association vector;
the association calculation has the following calculation expression:
wherein the expression involves the element in the i-th row and j-th column of the first matrix and the element in the i-th row and j-th column of the second matrix, and produces the i-th element of the association vector.
S84, carrying out normalized entropy calculation processing on the associated vector to obtain an uncertainty evaluation value;
The normalized entropy value calculation process has a calculation expression as follows:
wherein the expression involves the normalized values of the association vector and produces the uncertainty evaluation value.
S85, carrying out relative peak power calculation on the received watermark sequence to obtain a relative fluctuation evaluation value;
the relative peak power calculation is as follows:
wherein the relative fluctuation evaluation value is obtained as the maximum of the absolute values of all elements in the received watermark sequence divided by the average of the absolute values of all elements in the received watermark sequence.
S86, performing error calculation processing on the received watermark sequence and the audio digital watermark information sequence to obtain an error evaluation value;
The error calculation process has a calculation expression of:
wherein the result of the expression is the error evaluation value;
and S87, carrying out weighted calculation processing on the uncertainty evaluation value, the relative fluctuation evaluation value and the error evaluation value to obtain an audio digital watermark embedding evaluation result value.
The weighting calculation process may use weights of 0.25, 0.25 and 0.5 for the uncertainty evaluation value, the relative fluctuation evaluation value and the error evaluation value, respectively.
In steps S81 and S82, the corresponding matrices are constructed by taking the subsequences of the received watermark sequence and of the audio digital watermark information sequence as the row vectors of the matrices.
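A sketch of the evaluation in S83 to S87. The row-wise product sum used for the association calculation, the Shannon entropy used for the normalized entropy, and the disagreement rate used for the error evaluation are plausible stand-ins for expressions not reproduced above; only the peak-to-average ratio of S85 and the 0.25/0.25/0.5 weights are taken directly from the text.

```python
import numpy as np

def association_vector(first_matrix, second_matrix):
    """Stand-in for S83: row-wise association of the two M1 x M2 matrices."""
    a, b = np.asarray(first_matrix, float), np.asarray(second_matrix, float)
    return np.sum(a * b, axis=1)

def normalized_entropy(vector):
    """Stand-in for S84: normalize the association vector to a distribution,
    take its Shannon entropy, and scale it to [0, 1] (assumes M1 > 1)."""
    v = np.abs(np.asarray(vector, float))
    p = v / v.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(len(vector)))

def relative_peak_power(received_watermark):
    """S85: maximum absolute value over mean absolute value of the received watermark."""
    w = np.abs(np.asarray(received_watermark, float))
    return float(w.max() / w.mean())

def error_value(received_watermark, watermark_sequence):
    """Stand-in for S86: fraction of positions where the two sequences disagree."""
    r = np.asarray(received_watermark)
    s = np.asarray(watermark_sequence)
    return float(np.mean(r != s))

def embedding_evaluation(uncertainty, fluctuation, error, weights=(0.25, 0.25, 0.5)):
    """S87: weighted combination of the three evaluation values."""
    return weights[0] * uncertainty + weights[1] * fluctuation + weights[2] * error
```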
The second aspect of the embodiment of the invention discloses an audio digital watermark embedding device, which comprises:
A memory storing executable program code;
A processor coupled to the memory;
The processor calls the executable program codes stored in the memory to execute the audio digital watermark embedding method.
A third aspect of the embodiments of the present invention discloses a computer-readable storage medium storing computer instructions for executing the audio digital watermark embedding method when the computer instructions are invoked.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (7)

1. An audio digital watermark embedding method, comprising:
s1, acquiring an audio file and corresponding channel model information thereof;
the obtaining the channel model information corresponding to the audio file includes:
s11, determining a sending end and a receiving end of the audio file;
s12, generating a channel detection sequence by using the transmitting end, and transmitting the channel detection sequence to the receiving end through a transmission channel;
s13, receiving a receiving sequence corresponding to the channel detection sequence by using the receiving end;
S14, comparing and counting the receiving sequence and the channel detection sequence to obtain channel model information;
The comparing and counting of the receiving sequence and the channel detection sequence to obtain the channel model information comprises the following steps:
S141, representing the channel detection sequence as c = {c(i)} and the received sequence as r = {r(i)}, where i = 1, 2, ..., A1 represents the sequence number of an element in the sequence and A1 represents the sequence length;
s142, equally dividing the channel detection sequence and the receiving sequence into N1 sections to respectively obtain corresponding N1 subsequences; numbering the subsequences of the channel detection sequence and the receiving sequence according to the sequence of occurrence of the subsequences to obtain sequence number values of the subsequences;
S143, respectively calculating the sum of squares of the differences of all elements of the subsequences for the subsequences with the same sequence number of the channel detection sequence and the receiving sequence, wherein the calculation expression is as follows:
D(m) = Σ_{j=1..A1/N1} ( c(m, j) − r(m, j) )²
wherein c(m, j) represents the j-th element of the m-th subsequence of the channel detection sequence, r(m, j) represents the j-th element of the m-th subsequence of the received sequence, and D(m) represents the sum of squares of the differences of all elements of the m-th pair of subsequences;
S144, carrying out accumulation and variance processing on the sum of squares of the differences of all the subsequences to obtain channel power statistical information; the channel power statistical information comprises a channel power difference value and a channel power variance value; the channel power difference value is obtained by accumulating the square sums of the differences of all the subsequences; the channel power variance value is obtained by solving the variance of the sum of squares of the differences of all the subsequences;
S145, performing frequency-domain transformation on the channel detection sequence and the received sequence, respectively, to obtain a channel detection frequency-domain sequence and a received frequency-domain sequence;
S146, equally dividing the channel detection frequency-domain sequence and the received frequency-domain sequence into N1 segments each, obtaining N1 corresponding frequency-domain subsequences for each sequence; numbering the frequency-domain subsequences of the channel detection frequency-domain sequence and of the received frequency-domain sequence in their order of occurrence to obtain the index value of each frequency-domain subsequence;
S147, for the frequency-domain subsequences of the channel detection frequency-domain sequence and the received frequency-domain sequence having the same index j, calculating the sum of squared differences of all their elements, with the calculation expression:

F(j) = Σ_i [ S_j(i) − R_j(i) ]², the sum running over i = 1, 2, …, N2

wherein S_j(i) denotes the i-th element of the j-th frequency-domain subsequence of the channel detection frequency-domain sequence, R_j(i) denotes the i-th element of the j-th frequency-domain subsequence of the received frequency-domain sequence, F(j) denotes the sum of squared differences of all elements of the j-th frequency-domain subsequence, and N2 denotes the number of elements contained in each frequency-domain subsequence;
S148, performing accumulation and variance processing on the sums of squared differences F(j) of all the frequency-domain subsequences to obtain channel frequency response statistical information; the channel frequency response statistical information comprises a frequency response difference value and a frequency response variance value; the frequency response difference value is obtained by accumulating the sums of squared differences of all the frequency-domain subsequences, and the frequency response variance value is obtained by computing the variance of the sums of squared differences of all the frequency-domain subsequences;
S149, processing the channel power statistical information and the channel frequency response statistical information by using a channel discrimination model to determine the corresponding channel model information; the channel discrimination model comprises the numerical distribution ranges of the channel power statistical information and the channel frequency response statistical information for each of the first to fourth channel models; the channel models comprise a first channel model, a second channel model, a third channel model and a fourth channel model;
S2, determining picture watermark file information, watermark transformation model information and watermark embedding information based on the channel model information;
wherein determining the picture watermark file information, the watermark transformation model information and the watermark embedding information based on the channel model information comprises:
S21, extracting a corresponding picture watermark file from a preset picture watermark file database according to the channel model information;
S22, determining corresponding watermark transformation model information according to the channel model information;
S23, performing transform-domain parameter extraction processing on the received sequence and the channel detection sequence to obtain watermark transformation model parameter information;
S24, determining watermark embedding information according to the channel model information and the watermark transformation model information;
S3, generating an audio digital watermark information sequence based on the determined picture watermark file information and watermark transformation model information;
S4, embedding the audio digital watermark information sequence into the audio file based on the watermark embedding information to obtain an audio digital watermark file.
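For illustration only, the following Python sketch shows one way the channel statistics of steps S141 to S148 in claim 1 could be computed. It is not taken from the patent: the names channel_statistics, probe, received and n_segments are assumptions, and a magnitude FFT stands in for the unspecified frequency-domain transformation.

import numpy as np

def channel_statistics(probe, received, n_segments):
    # Per-subsequence sums of squared differences and the derived
    # accumulation/variance statistics (steps S141-S148). Illustrative only.
    probe = np.asarray(probe, dtype=float)
    received = np.asarray(received, dtype=float)

    def diff_stats(a, b):
        # Split both sequences into n_segments equal parts (S142/S146),
        # compute the sum of squared differences per pair (S143/S147),
        # then accumulate and take the variance (S144/S148).
        a_sub = np.array_split(a, n_segments)
        b_sub = np.array_split(b, n_segments)
        d = np.array([np.sum((x - y) ** 2) for x, y in zip(a_sub, b_sub)])
        return d.sum(), d.var()

    # Time domain: channel power difference value and channel power variance value.
    power_diff, power_var = diff_stats(probe, received)

    # Frequency domain: magnitude spectra are an assumed stand-in for the
    # frequency-domain transformation named in step S145.
    freq_diff, freq_var = diff_stats(np.abs(np.fft.rfft(probe)),
                                     np.abs(np.fft.rfft(received)))

    return {"power_diff": power_diff, "power_var": power_var,
            "freq_diff": freq_diff, "freq_var": freq_var}

The four returned values play the roles of the channel power difference value, channel power variance value, frequency response difference value and frequency response variance value consumed by the channel discrimination model of claim 2.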
2. The audio digital watermark embedding method according to claim 1, wherein said processing said channel power statistics and said channel frequency response statistics using a channel discrimination model to determine corresponding channel model information comprises:
the channel discrimination model comprises a power variance value threshold and a frequency response variance value threshold;
When the channel power variance value is larger than the power variance value threshold and the frequency response variance value is larger than the frequency response variance value threshold, determining the channel model information as the first channel model;
when the channel power variance value is larger than the power variance value threshold and the frequency response variance value is smaller than the frequency response variance value threshold, determining the channel model information as the second channel model;
when the channel power variance value is smaller than the power variance value threshold and the frequency response variance value is smaller than the frequency response variance value threshold, determining the channel model information as the third channel model;
and when the channel power variance value is smaller than the power variance value threshold and the frequency response variance value is larger than the frequency response variance value threshold, determining the channel model information as the fourth channel model.
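A minimal sketch of the threshold test in claim 2, assuming the variance values come from a routine such as the channel_statistics sketch above. The function name and any concrete threshold values are assumptions, and the behaviour when a statistic exactly equals its threshold is not specified by the claim, so it is resolved arbitrarily here.

def discriminate_channel(power_var, freq_var, power_var_threshold, freq_var_threshold):
    # Four-way discrimination following the strict inequalities of claim 2.
    if power_var > power_var_threshold and freq_var > freq_var_threshold:
        return "first channel model"
    if power_var > power_var_threshold and freq_var < freq_var_threshold:
        return "second channel model"
    if power_var < power_var_threshold and freq_var < freq_var_threshold:
        return "third channel model"
    if power_var < power_var_threshold and freq_var > freq_var_threshold:
        return "fourth channel model"
    # Exact equality with a threshold is not covered by the claim; fall back arbitrarily.
    return "third channel model"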
3. The audio digital watermark embedding method as claimed in claim 2, wherein said determining corresponding watermark transformation model information based on said channel model information comprises:
determining a watermark transformation model corresponding to the first channel model as a first watermark transformation model; the computational expression of the first watermark transformation model is:
wherein w(k) denotes the k-th element of the signal sequence corresponding to the picture watermark file, K denotes the length of the signal sequence corresponding to the picture watermark file, T1(z, x) denotes the element in the z-th row and x-th column of the matrix of transformed data information obtained by the first watermark transformation model, g1 denotes the transformation function corresponding to the first watermark transformation model, g1(z, x) denotes the value taken by g1 at position (z, x), and Z1 and X1 denote the time-domain transformation length and the frequency-domain transformation length of the first watermark transformation model, respectively;
determining a watermark transformation model corresponding to the second channel model as a second watermark transformation model; the computational expression of the second watermark transformation model is:
wherein T2(z, x) denotes the element in the z-th row and x-th column of the matrix of transformed data information obtained by the second watermark transformation model, g2 denotes the transformation function corresponding to the second watermark transformation model, g2(z, x) denotes the value taken by g2 at position (z, x), and Z2 and X2 denote the time-domain transformation length and the frequency-domain transformation length of the second watermark transformation model, respectively;
Determining a watermark transformation model corresponding to the third channel model as a third watermark transformation model; the computational expression of the third watermark transformation model is:
wherein T3(z, x) denotes the element in the z-th row and x-th column of the matrix of transformed data information obtained by the third watermark transformation model, g3 denotes the transformation function corresponding to the third watermark transformation model, g3(z, x) denotes the value taken by g3 at position (z, x), Z3 and X3 denote the time-domain transformation length and the frequency-domain transformation length of the third watermark transformation model, respectively, and θ3 denotes the angle average value of the third watermark transformation model;
Determining a watermark transformation model corresponding to the fourth channel model as a fourth watermark transformation model; the calculation expression of the fourth watermark transformation model is as follows:
wherein T4(z, x) denotes the element in the z-th row and x-th column of the matrix of transformed data information obtained by the fourth watermark transformation model, g4 denotes the transformation function corresponding to the fourth watermark transformation model, g4(z, x) denotes the value taken by g4 at position (z, x), Z4 denotes the time-domain transformation length of the fourth watermark transformation model, and θ4 denotes the angle average value of the fourth watermark transformation model.
4. The audio digital watermark embedding method as claimed in claim 2, wherein said performing transform-domain parameter extraction processing on said received sequence and said channel detection sequence to obtain watermark transformation model parameter information comprises:
S231, using the watermark transformation model corresponding to the watermark transformation model information determined in step S22 as a transform-domain model;
S232, transforming the received sequence and the channel detection sequence, respectively, by using the transform-domain model to obtain a received transform-domain sequence and a channel detection transform-domain sequence;
S233, constructing a difference function from the difference between the received transform-domain sequence and the channel detection transform-domain sequence;
S234, constructing a model parameter information optimization model from the difference function; the objective function of the model parameter information optimization model is the minimization of the difference function, and its independent variable is the transformation function corresponding to the watermark transformation model determined in step S22;
S235, solving the model parameter information optimization model to obtain the optimal parameters, and determining the optimal parameters as the watermark transformation model parameter information.
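As an illustration of steps S231 to S235 in claim 4, the sketch below fits transform parameters by minimizing the difference between the transform-domain versions of the received sequence and the channel detection sequence. The names fit_transform_parameters, transform and theta0 are assumptions; the actual transform-domain model is whatever watermark transformation model step S22 selects.

import numpy as np
from scipy.optimize import minimize

def fit_transform_parameters(probe, received, transform, theta0):
    # transform(seq, theta) is a placeholder for the transform-domain model
    # chosen in step S231; theta collects its adjustable parameters.
    probe = np.asarray(probe, dtype=float)
    received = np.asarray(received, dtype=float)

    def difference(theta):
        # S233: difference function between the two transform-domain sequences.
        return np.sum((transform(received, theta) - transform(probe, theta)) ** 2)

    # S234/S235: minimize the difference function over the transform parameters.
    result = minimize(difference, x0=np.asarray(theta0, dtype=float),
                      method="Nelder-Mead")
    return result.x  # optimal parameters, i.e. the watermark transformation model parameter information

The transform callback is deliberately left as a parameter because the patent defines four different watermark transformation models and selects among them by channel model.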
5. The audio digital watermark embedding method according to claim 2, wherein said embedding said audio digital watermark information sequence into said audio file based on watermark embedding information to obtain an audio digital watermark file, comprises:
S41, performing discrete sampling on the audio file to obtain a discrete audio sequence;
S42, uniformly segmenting the discrete audio sequence and the audio digital watermark information sequence, respectively, to obtain a plurality of discrete audio subsequences and a plurality of audio digital watermark information subsequences, the number of discrete audio subsequences being the same as the number of audio digital watermark information subsequences;
S43, inserting into each discrete audio subsequence the corresponding audio digital watermark information subsequence, according to the embedding position and the position interval in the watermark embedding information;
S44, when the corresponding audio digital watermark information subsequence has been inserted into every discrete audio subsequence, determining the resulting signal sequence as the audio digital watermark file.
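Purely as an illustration of steps S41 to S44 in claim 5, the sketch below interleaves each watermark subsequence into its audio subsequence at a fixed embedding position and interval. The sample-interleaving rule and the names embed_watermark, start and interval are assumptions; in the patent the rule is given by the watermark embedding information.

import numpy as np

def embed_watermark(audio, watermark_bits, n_segments, start, interval):
    # Segment both sequences into the same number of subsequences (S42).
    audio_segs = np.array_split(np.asarray(audio, dtype=float), n_segments)
    wm_segs = np.array_split(np.asarray(watermark_bits, dtype=float), n_segments)

    out_segs = []
    for a_seg, w_seg in zip(audio_segs, wm_segs):
        seg = list(a_seg)
        # S43: insert watermark samples starting at `start`, separated by
        # `interval` audio samples within the resulting segment.
        for offset, w in enumerate(w_seg):
            seg.insert(start + offset * (interval + 1), float(w))
        out_segs.append(np.asarray(seg))

    # S44: the concatenation of all processed segments is the watermarked signal.
    return np.concatenate(out_segs)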
6. An audio digital watermark embedding apparatus, characterized in that the apparatus comprises:
A memory storing executable program code;
A processor coupled to the memory;
The processor invokes the executable program code stored in the memory to perform the audio digital watermark embedding method as claimed in any one of claims 1 to 5.
7. A computer storage medium storing computer instructions for performing the audio digital watermark embedding method as claimed in any one of claims 1 to 5 when invoked by a computer.
CN202410560776.2A 2024-05-08 2024-05-08 Audio digital watermark embedding method and device Active CN118138852B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410560776.2A CN118138852B (en) 2024-05-08 2024-05-08 Audio digital watermark embedding method and device

Publications (2)

Publication Number Publication Date
CN118138852A (en) 2024-06-04
CN118138852B (en) 2024-07-09

Family

ID=91232079

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410560776.2A Active CN118138852B (en) 2024-05-08 2024-05-08 Audio digital watermark embedding method and device

Country Status (1)

Country Link
CN (1) CN118138852B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101271690A (en) * 2008-05-09 2008-09-24 中国人民解放军重庆通信学院 Audio spread-spectrum watermark processing method for protecting audio data
CN101807401A (en) * 2010-03-16 2010-08-18 上海交通大学 Discrete cosine transform (DCT)-based audio zero-watermark anti-noise detection method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2362386A1 (en) * 2010-02-26 2011-08-31 Fraunhofer-Gesellschaft zur Förderung der Angewandten Forschung e.V. Watermark generator, watermark decoder, method for providing a watermark signal in dependence on binary message data, method for providing binary message data in dependence on a watermarked signal and computer program using a two-dimensional bit spreading
CN102664013A (en) * 2012-04-18 2012-09-12 南京邮电大学 Audio digital watermark method of discrete cosine transform domain based on energy selection
JP6353402B2 (en) * 2015-05-12 2018-07-04 日本電信電話株式会社 Acoustic digital watermark system, digital watermark embedding apparatus, digital watermark reading apparatus, method and program thereof
CN110890930B (en) * 2018-09-10 2021-06-01 华为技术有限公司 Channel prediction method, related equipment and storage medium
US11244692B2 (en) * 2018-10-04 2022-02-08 Digital Voice Systems, Inc. Audio watermarking via correlation modification using an amplitude and a magnitude modification based on watermark data and to reduce distortion
CN109784006A (en) * 2019-01-04 2019-05-21 平安科技(深圳)有限公司 Watermark insertion and extracting method and terminal device
US20240203431A1 (en) * 2021-05-08 2024-06-20 Microsoft Technology Licensing, Llc Robust authentication of digital audio

Also Published As

Publication number Publication date
CN118138852A (en) 2024-06-04

Similar Documents

Publication Publication Date Title
CN101458810B (en) Vector map watermark method based on object property characteristic
JP5710604B2 (en) Combination of watermarking and fingerprinting
CN109118420B (en) Watermark identification model establishing and identifying method, device, medium and electronic equipment
US10217469B2 (en) Generation of a signature of a musical audio signal
CN103294667A (en) Method and system for tracing homologous image through watermark
CN101833650A (en) Video copy detection method based on contents
CN118070252B (en) PDF embedded font watermark embedding and extracting method and system
CN117251598A (en) Video retrieval method
CN114036467A (en) Block chain-based short video copyright protection method
CN108733843B (en) File detection method based on Hash algorithm and sample Hash library generation method
CN118138852B (en) Audio digital watermark embedding method and device
CN102194204B (en) Method and device for embedding and extracting reversible watermarking as well as method and device for recovering image
CN111738173B (en) Video clip detection method and device, electronic equipment and storage medium
CN106205627B (en) Digital audio reversible water mark algorithm based on side information prediction and histogram translation
CN116226435B (en) Cross-modal retrieval-based association matching method for remote sensing image and AIS information
CN118660210A (en) Audio digital watermark embedding evaluation method and device
CN118317166A (en) Method and device for transmitting audio digital watermark file
CN114630130B (en) Face-changing video tracing method and system based on deep learning
CN116563565A (en) Digital image tampering identification and source region and target region positioning method based on field adaptation, computer equipment and storage medium
CN106952211B (en) Compact image hashing method based on feature point projection
CN114241493A (en) Training method and training device for training data of amplification document analysis model
EP2682916A1 (en) Method for watermark decoding
Yue et al. Rights protection for trajectory streams
Raftopoulos et al. Region-Based Watermarking for Images
CN104715057A (en) Step-length-variable key frame extraction-based network video copy search method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant