CN106250729B - Song data processing method and equipment thereof - Google Patents
- Publication number: CN106250729B
- Application number: CN201610620768.8A
- Authority: CN (China)
- Prior art keywords: note, watermark, unit, target, data
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/10—Protecting distributed programs or content, e.g. vending or licensing of copyrighted material; Digital rights management [DRM]
- G06F21/16—Program or content traceability, e.g. by watermarking
- G06F21/106—Enforcing content protection by specific content processing
- G06F21/1063—Personalisation
Abstract
The embodiment of the invention discloses a song data processing method and a corresponding device. The method comprises the following steps: acquiring song reference data of a target song, the song reference data comprising a note sequence formed by arranging at least one note unit in time order, each note unit comprising a note position identifier, note time data and a note value; acquiring watermarking parameters associated with the target song, and calculating from them the note position identifiers of the target note units to which watermark data is to be added and the number of times the watermark data is to be added; and adding the watermark data after each target note unit, using the note time data and note value of that unit, according to its note position identifier and the number of watermark additions. The invention protects the security of the song reference data, enables classified management of that data, and improves the usability of the song-singing function.
Description
Technical Field
The invention relates to the field of computer technology, and in particular to a song data processing method and device.
Background
With the continuous development of computer technology, terminals such as mobile phones and tablet computers have become an indispensable part of daily life. The various applications installed on a terminal can meet different user needs, for example communication, gaming, and listening to music.
Existing music applications can download and play music files and also offer a song-singing (karaoke) function. To make that function interactive, the application relies on pre-recorded song reference data, which contains the time data and note value of every note of the current song; these are matched against the timing and note values of the user's singing to score the performance in real time. However, because the song reference data must be stored on the terminal in advance, it can easily be stolen, which compromises its security. Moreover, many other data types resemble the song reference data, for example lyric data and accompaniment data, so the data are easily confused, which degrades the song-singing function.
Disclosure of Invention
The embodiment of the invention provides a song data processing method and device that protect the security of song reference data, enable its classified management, and improve the usability of the song-singing function.
A first aspect of an embodiment of the present invention provides a song data processing method, which may include:
acquiring song reference data of a target song, wherein the song reference data comprises a note sequence formed by arranging at least one note unit in time order, and each note unit in the at least one note unit comprises a note position identifier, note time data and a note value;
acquiring watermarking parameters associated with the target song, and calculating, according to the watermarking parameters, the note position identifiers of the target note units to which watermark data is to be added and the number of watermark additions;
and adding the watermark data after the target note unit, using the note time data of the target note unit and the note value of the target note unit, according to the note position identifier of the target note unit and the number of watermark additions.
A second aspect of an embodiment of the present invention provides a song data processing apparatus, which may include:
a data acquisition unit, used for acquiring song reference data of a target song, wherein the song reference data comprises a note sequence formed by arranging at least one note unit in time order, and each note unit comprises a note position identifier, note time data and a note value;
a data calculation unit, used for acquiring watermarking parameters associated with the target song, and calculating, according to the watermarking parameters, the note position identifiers of the target note units to which watermark data is to be added and the number of watermark additions;
and a data adding unit, used for adding the watermark data after the target note unit, using the note time data of the target note unit and the note value of the target note unit, according to the note position identifier of the target note unit and the number of watermark additions.
In the embodiment of the invention, the note position identifier, note time data and note value of each note unit of the target song are acquired, and watermark data is added the calculated number of times at the note units indicated by the calculated note position identifiers in the song reference data. Embedding watermark data in the song reference data prevents it from being stolen, ensures its security, enables its classified management, and improves the usability of the song-singing function.
Drawings
To illustrate the embodiments of the present invention or the prior-art solutions more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a song data processing method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of another song data processing method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a song data processing apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of another song data processing apparatus provided by an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a data calculation unit according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a data adding unit according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of another song data processing apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the invention; all other embodiments obtained by those skilled in the art without creative effort shall fall within the protection scope of the present invention.
The song data processing method provided by the embodiment of the invention can be applied to scenarios in which a watermark is added to the song reference data of a song in a music application. For example: a song data processing device obtains the song reference data of a target song; the device obtains watermarking parameters associated with the target song and calculates, from these parameters, the note position identifiers of the target note units to which watermark data is to be added and the number of watermark additions; the device then adds the watermark data after each target note unit, using the note time data and note value of that unit, according to its note position identifier and the number of additions. Embedding watermark data in the song reference data prevents it from being stolen, ensures its security, enables its classified management, and improves the usability of the song-singing function.
The song data processing device in the embodiment of the invention may be a background service device of a music application; the music application may be a music player, a karaoke application, or the like. The song reference data may be reference text data used to score a user's singing in real time; specifically, it may be a note sequence formed by arranging at least one note unit in time order, where each note unit comprises a note position identifier, note time data and a note value.
The song data processing method provided by the embodiment of the invention will be described in detail with reference to fig. 1 and fig. 2.
Referring to fig. 1, a flow chart of a song data processing method according to an embodiment of the present invention is schematically shown. As shown in fig. 1, the method of an embodiment of the present invention may include the following steps S101-S103.
S101, acquiring song reference data of a target song;
specifically, the song data processing device may obtain the song selected by the current user in the music application, determine it as the target song, and look up its song reference data. The song reference data may include the note time data and note value of each note unit in at least one note unit; the note time data may include the note start time and note duration of each note in the target song, and the note value may specifically be the value of each note in the target song.
S102, acquiring watermarking parameters associated with the target song, and calculating, according to the watermarking parameters, the note position identifiers of the target note units to which watermark data is to be added and the number of watermark additions;
specifically, the song data processing device may obtain watermarking parameters associated with the target song. Preferably, the watermarking parameters include a watermark position parameter and a watermark count parameter: the position parameter is used to calculate the note position identifiers of the target note units in the song reference data at which watermark data is added, i.e. where the watermark data goes, and the count parameter is used to calculate the number of watermark additions at each target note unit, i.e. how many watermark units are added.
The song data processing device can calculate the note position identifiers of the target note units from the watermark position parameter, and the number of watermark additions from the watermark count parameter. Preferably, the device uses a preset note-unit formula for the former and a preset count formula for the latter.
S103, adding the watermark data after the target note unit, using the note time data of the target note unit and the note value of the target note unit, according to the note position identifier of the target note unit and the number of watermark additions;
specifically, the song data processing device may add the watermark data after the target note unit using the unit's note time data and note value, according to its note position identifier and the number of watermark additions. That is, the device adds watermark data at the calculated position, deriving each added unit's time data and note value from the target note unit; the watermark data is preferably a number of note units each lasting one unit of time.
In the embodiment of the invention, the note position identifier, note time data and note value of each note unit of the target song are acquired, and watermark data is added the calculated number of times at the note units indicated by the calculated note position identifiers in the song reference data. Embedding watermark data in the song reference data prevents it from being stolen, ensures its security, enables its classified management, and improves the usability of the song-singing function.
Referring to fig. 2, a flow chart of another song data processing method according to an embodiment of the present invention is schematically shown. As shown in fig. 2, the method of the embodiment of the present invention may include the following steps S201 to S211.
S201, acquiring song reference data of a target song;
specifically, the song data processing device may obtain the song selected by the current user in the music application, determine it as the target song, and look up its song reference data. The song reference data may include the note time data and note value of each note unit in at least one note unit; the note time data may include the note start time and note duration of each note in the target song, and the note value may specifically be the value of each note in the target song. For example, the song reference data may be as shown in Table 1:
| Note position identifier | Note start time (ms) | Note duration (ms) | Note value |
|---|---|---|---|
| … | … | … | … |
| 17 | 74078 | 235 | 69 |
| 18 | 74315 | 472 | 68 |
| 19 | 74789 | 235 | 69 |
| 20 | 75026 | 472 | 68 |
| 21 | 75500 | 116 | 66 |
| … | … | … | … |
As shown in Table 1, each row of the table is called a note unit and includes a note position identifier, a note start time (ms), a note duration (ms) and a note value. The song reference data of each song may be stored as such a table, or in other storage forms, such as documents or databases; all of these fall within the protection scope of the embodiments of the present invention.
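The patent does not fix an in-memory representation for these rows; as a minimal sketch (the class and field names are my own, not the patent's), the note units of Table 1 could be modeled like this:

```python
from dataclasses import dataclass

@dataclass
class NoteUnit:
    """One row of the song reference data: position identifier,
    note start time (ms), note duration (ms), and note value."""
    position: int
    start_ms: int
    duration_ms: int
    value: int

# The Table 1 fragment as a note sequence arranged in time order.
reference = [
    NoteUnit(17, 74078, 235, 69),
    NoteUnit(18, 74315, 472, 68),
    NoteUnit(19, 74789, 235, 69),
    NoteUnit(20, 75026, 472, 68),
    NoteUnit(21, 75500, 116, 66),
]
```

A document or database table with the same four columns would serve equally well, as the text notes.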
S202, mapping the song serial number (Identity, ID) of the target song to obtain a mapping parameter list;
specifically, the song data processing device may obtain the song ID of the target song and map it to obtain a mapping parameter list. For example, the song ID 13784 may yield the mapping parameter list {1, 3, 4, 7, 8} after mapping. Alternatively, the device may apply a preset mapping formula to the song ID to obtain the mapping parameter list.
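The mapping function itself is left open by the text (a preset mapping formula may be used). One mapping consistent with the worked example, 13784 yielding {1, 3, 4, 7, 8}, is taking the sorted distinct decimal digits of the ID; this choice is purely an illustrative assumption, not the patent's prescribed formula:

```python
def map_song_id(song_id: int) -> list[int]:
    """Map a song ID to a mapping parameter list.

    Assumption: sorted distinct decimal digits, which reproduces the
    worked example in the text (13784 -> [1, 3, 4, 7, 8])."""
    return sorted({int(d) for d in str(song_id)})
```

Any other deterministic mapping formula would serve the same purpose, as long as watermark embedding and later verification use the same one.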
S203, randomly acquiring a watermark position parameter and a watermark count parameter associated with the target song from the mapping parameter list;
specifically, the song data processing device may randomly obtain a watermark position parameter and a watermark count parameter associated with the target song from the mapping parameter list. There may be several positions at which watermark data is added, and different numbers of watermark units may be added at different positions, so both parameters may be sets of values. Continuing the example above: the randomly selected watermark position parameters are {3, 8} and the randomly selected watermark count parameters are {1, 7}. The count parameters may correspond to the position parameters one-to-one, or one count parameter may correspond to several position parameters, configured according to the actual watermarking requirements.
S204, calculating the note position identifiers of the target note units to which watermark data is to be added, according to the watermark position parameter;
specifically, the song data processing device may calculate the note position identifier of each target note unit from the watermark position parameter. Preferably, the device uses a preset note-unit formula, which may be f(x) = a·x^2 + b·x + c, where the parameters a, b and c can be defined arbitrarily by the operator and x is the watermark position parameter. Following the example above, assume the preset note-unit formula is f(x) = 2x + 1 (i.e. a = 0, b = 2, c = 1); with x ∈ {3, 8}, the note position identifiers of the two target note units are 7 and 17 respectively.
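The preset note-unit formula can be sketched as follows; the default coefficients are the worked example's values (a = 0, b = 2, c = 1), which the operator is free to change:

```python
def note_position(x: int, a: int = 0, b: int = 2, c: int = 1) -> int:
    """Preset note-unit formula f(x) = a*x^2 + b*x + c.

    a, b and c are operator-defined; the defaults reproduce the worked
    example f(x) = 2x + 1."""
    return a * x * x + b * x + c

# Watermark position parameters {3, 8} give target identifiers 7 and 17.
targets = [note_position(x) for x in (3, 8)]
```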
S205, calculating the number of watermark additions according to the watermark count parameter;
specifically, the song data processing device may calculate the number of watermark additions from the watermark count parameter. Preferably, the device uses a preset count formula, which may be f(y) = p·y^5 + q, where the parameters p and q can be defined arbitrarily by the operator and y is the watermark count parameter. Following the example above, assume the preset count formula is f(y) = y^5 + 9; with y = 1, the number of watermark additions is 10.
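The preset count formula can be sketched the same way; the defaults are the worked example's values (p = 1, q = 9), again operator-defined rather than fixed by the patent:

```python
def watermark_count(y: int, p: int = 1, q: int = 9) -> int:
    """Preset count formula f(y) = p*y^5 + q.

    p and q are operator-defined; the defaults reproduce the worked
    example f(y) = y^5 + 9 (so y = 1 gives 10)."""
    return p * y ** 5 + q
```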
It should be noted that, to ensure normal use of the song reference data, the song data processing device may further screen the obtained note position identifiers and limit the number of watermark additions.
S206, retaining the note position identifiers of target note units that satisfy the total number of note units in the note sequence;
specifically, the song data processing device retains the note position identifiers of target note units that satisfy the total number of note units of the at least one note unit in the note sequence. Preferably, it keeps only note position identifiers less than or equal to that total; that is, a calculated note position identifier cannot exceed the largest note position identifier among the at least one note unit.
S207, adjusting the number of watermark additions to lie within a preset count interval;
specifically, the song data processing device obtains a number of watermark additions that satisfies a preset count interval, for example [3, 5]. When the calculated number is smaller than the minimum of the interval, the device adjusts it to the minimum, e.g. a calculated value of 1 becomes 3; when it is larger than the maximum, the device adjusts it to the maximum, e.g. a calculated value of 10 becomes 5; when it already lies within the interval, e.g. 3, 4 or 5, it is used as calculated, without adjustment.
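The screening rules of S206 and S207 amount to a bounds check on the position identifier and a clamp of the count into the preset interval; a sketch using the example interval [3, 5] (function names are my own):

```python
def screen_position(pos: int, total_units: int) -> bool:
    """S206: keep a note position identifier only if it does not
    exceed the total number of note units in the note sequence."""
    return pos <= total_units

def clamp_count(n: int, lo: int = 3, hi: int = 5) -> int:
    """S207: force the number of watermark additions into the preset
    count interval [lo, hi]; the worked example uses [3, 5]."""
    return max(lo, min(n, hi))
```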
S208, acquiring the note unit following the target note unit, and inserting between them as many note units as the number of watermark additions;
specifically, the song data processing device may obtain the note unit following the target note unit and insert the watermark note units between the two. For example, if the note position identifier of the current target note unit is 17 and the calculated number of watermark additions is 3, then 3 note units are inserted between note unit 17 and the following note unit 18.
S209, setting the note time data and note values of the inserted note units according to the note time data and note values of the target note unit and the following note unit;
specifically, the song data processing device sets the note time data and note value of each inserted note unit from those of the target note unit and the following note unit. Preferably, the note start times of the inserted units are obtained by decreasing the start time of the following note unit by successive unit time amounts, the note duration of each inserted unit is set to the unit time amount, and the note value of each inserted unit is set to the note value of the target note unit (or, alternatively, of the following note unit). For example, assuming a unit time of 1 millisecond, Table 2 can be derived from Table 1:
| Note position identifier | Note start time (ms) | Note duration (ms) | Note value |
|---|---|---|---|
| … | … | … | … |
| 17 | 74078 | 232 | 69 |
|  | 74312 | 1 | 69 |
|  | 74313 | 1 | 69 |
|  | 74314 | 1 | 69 |
| 18 | 74315 | 472 | 68 |
| 19 | 74789 | 235 | 69 |
| 20 | 75026 | 472 | 68 |
| 21 | 75500 | 116 | 66 |
| … | … | … | … |
As shown in Table 2, 3 note units are inserted between note units 17 and 18. Their note start times are obtained by decreasing the start time of note unit 18 by successive unit amounts, giving 74314, 74313 and 74312; their note durations are each set to the unit time of 1 (millisecond); and their note values are set to the note value 69 of note unit 17 (they could, of course, equally be set to the note value 68 of note unit 18, which is not shown in Table 2).
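The construction of the inserted units in S208 and S209 can be sketched as follows; each tuple is (start time, duration, note value), and the start times step back from the following unit's start time by the unit amount (the function name is my own):

```python
def make_watermark_units(next_start: int, count: int, value: int,
                         unit_ms: int = 1):
    """Build `count` watermark note units placed just before `next_start`.

    Start times run next_start - count*unit_ms .. next_start - unit_ms,
    each with duration unit_ms and the target (or following) note's value."""
    return [(next_start - (count - k) * unit_ms, unit_ms, value)
            for k in range(count)]
```

For note unit 18 (start 74315), three units with value 69 come out as (74312, 1, 69), (74313, 1, 69) and (74314, 1, 69), matching Table 2.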
S210, adjusting the note time data of the target note unit;
specifically, after the watermark note units have been inserted, the song data processing device adjusts the note time data of the target note unit. Preferably, it shortens the note duration of the target note unit by the sum of the note durations of the inserted units: as shown in Table 2, the 3 inserted units have a total duration of 3 (milliseconds), so the note duration of note unit 17 is adjusted from 235 to 232.
S211, reordering the note position identifiers of the note units;
specifically, the song data processing device may adjust the note position identifiers of the note units, that is, renumber them after the watermark note units have been inserted, completing the addition of the watermark data.
After reordering, Table 3 is obtained from Table 2:
| Note position identifier | Note start time (ms) | Note duration (ms) | Note value |
|---|---|---|---|
| … | … | … | … |
| 17 | 74078 | 232 | 69 |
| 18 | 74312 | 1 | 69 |
| 19 | 74313 | 1 | 69 |
| 20 | 74314 | 1 | 69 |
| 21 | 74315 | 472 | 68 |
| 22 | 74789 | 235 | 69 |
| 23 | 75026 | 472 | 68 |
| 24 | 75500 | 116 | 66 |
| … | … | … | … |
The note position identifiers of the 3 newly inserted note units are 18, 19 and 20; the original note units with identifiers 18, 19, 20 and 21 become 21, 22, 23 and 24 after reordering, and so on.
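Steps S208 to S211 taken together (insert the watermark units, shorten the target's duration, renumber the identifiers) can be sketched end to end; rows are [position, start_ms, duration_ms, value] lists, and the function name is my own:

```python
def add_watermark(notes, target_pos, count, unit_ms=1):
    """Insert `count` unit-length watermark note units after the note
    unit with identifier `target_pos` (S208/S209), shorten the target's
    duration by the inserted total (S210), and renumber the position
    identifiers sequentially (S211)."""
    idx = next(i for i, n in enumerate(notes) if n[0] == target_pos)
    nxt = notes[idx + 1]
    marks = [[0, nxt[1] - (count - k) * unit_ms, unit_ms, notes[idx][3]]
             for k in range(count)]
    notes[idx][2] -= count * unit_ms          # S210: e.g. 235 -> 232 for unit 17
    out = notes[:idx + 1] + marks + notes[idx + 1:]
    base = out[0][0]
    for k, row in enumerate(out):             # S211: renumber 17, 18, 19, ...
        row[0] = base + k
    return out
```

Applied to the Table 1 fragment with target 17 and count 3, this reproduces the Table 3 rows exactly.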
In the embodiment of the invention, the note position identifier, note time data and note value of each note unit of the target song are acquired, and watermark data is added the calculated number of times at the note units indicated by the calculated note position identifiers in the song reference data. Embedding watermark data in the song reference data prevents it from being stolen, ensures its security, enables its classified management, and improves the usability of the song-singing function. In addition, screening the obtained note position identifiers and limiting the number of watermark additions ensures normal use of the song reference data.
A song data processing device according to an embodiment of the present invention is described in detail below with reference to fig. 3 to fig. 6. It should be noted that the devices shown in fig. 3 to fig. 6 are used to execute the methods of the embodiments shown in fig. 1 and fig. 2. For ease of explanation, only the parts relevant to the embodiments of the present invention are shown; for undisclosed technical details, please refer to the embodiments shown in fig. 1 and fig. 2.
Referring to fig. 3, a schematic structural diagram of a song data processing apparatus according to an embodiment of the present invention is provided. As shown in fig. 3, the song data processing apparatus 1 of the embodiment of the present invention may include: a data acquisition unit 11, a data calculation unit 12, and a data addition unit 13.
A data acquisition unit 11 for acquiring song reference data of a target song;
in a specific implementation, the data obtaining unit 11 may obtain a song selected by a current user in a music application, the data obtaining unit 11 determines the selected song as a target song, and searches and obtains song reference data of the target song, where the song reference data may include note time data of each note unit in at least one note unit and a note value of each note unit, and it is understood that the note time data may include a note start time and a note duration of each note in the target song, and the note value may specifically be a note value of each note in the target song.
The data calculation unit 12 is used for acquiring watermarking parameters associated with the target song, and calculating, according to the watermarking parameters, the note position identifiers of the target note units to which watermark data is to be added and the number of watermark additions;
in a specific implementation, the data calculation unit 12 may obtain watermarking parameters associated with the target song. Preferably, the watermarking parameters include a watermark position parameter and a watermark count parameter: the position parameter is used to calculate the note position identifiers of the target note units in the song reference data at which watermark data is added, i.e. where the watermark data goes, and the count parameter is used to calculate the number of watermark additions at each target note unit, i.e. how many watermark units are added.
The data calculation unit 12 may calculate the note position identifiers of the target note units from the watermark position parameter, and the number of watermark additions from the watermark count parameter. Preferably, it uses a preset note-unit formula for the former and a preset count formula for the latter.
A data adding unit 13, configured to add watermark data after the target note unit according to the note position identifier corresponding to the target note unit and the number of watermark additions, using the note time data and the note value of the target note unit;
In a specific implementation, the data adding unit 13 may add the watermark data after the target note unit according to the note time data and the note value at the position where the watermark data is added, repeating the addition the calculated number of times. The watermark data is preferably a plurality of note units added within a unit time.
In the embodiment of the present invention, the note position identifier, note time data and note value of each note unit in at least one note unit of the target song are acquired, and watermark data is added the corresponding number of times at the corresponding note units of the song reference data according to the calculated note position identifiers and numbers of watermark additions. Adding the watermark data to the song reference data prevents the song reference data from being stolen by lawbreakers, ensures the security of the song reference data, enables classified management of the song reference data, and improves the usability of the song singing and playing function.
Referring to fig. 4, a schematic structural diagram of another song data processing apparatus according to an embodiment of the present invention is provided. As shown in fig. 4, the song data processing apparatus 1 of the embodiment of the present invention may include: a data acquisition unit 11, a data calculation unit 12, a data addition unit 13, an identification acquisition unit 14, and a number-of-times acquisition unit 15.
A data acquisition unit 11 for acquiring song reference data of a target song;
In a specific implementation, the song data processing device may acquire a song selected by the current user in a music application, determine the selected song as the target song, and retrieve the song reference data of the target song. The song reference data may include note time data and a note value for each note unit in at least one note unit. It can be understood that the note time data may include the note start time and the note duration of each note in the target song, and the note value may specifically be the note value of each note in the target song. For example, the song reference data may be as shown in Table 1:
| Note position identifier | Note start time (ms) | Note duration (ms) | Note value |
| --- | --- | --- | --- |
| … | … | … | … |
| 17 | 74078 | 235 | 69 |
| 18 | 74315 | 472 | 68 |
| 19 | 74789 | 235 | 69 |
| 20 | 75026 | 472 | 68 |
| 21 | 75500 | 116 | 66 |
| … | … | … | … |
As shown in Table 1, each row of the table is called a note unit and includes a note position identifier, a note start time (ms), a note duration (ms) and a note value. The song reference data of each song may be stored as such a table, or in other storage forms, such as documents or databases, all of which fall within the protection scope of the embodiments of the present invention.
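To make the structure concrete, the table above can be modeled as an ordered list of note-unit records. The sketch below is illustrative only; the field names (`pos`, `onset_ms`, `dur_ms`, `value`) are assumptions, not part of the patent.

```python
# A minimal, hypothetical representation of the song reference data in
# Table 1: one dict per note unit, ordered by note start time.
note_units = [
    {"pos": 17, "onset_ms": 74078, "dur_ms": 235, "value": 69},
    {"pos": 18, "onset_ms": 74315, "dur_ms": 472, "value": 68},
    {"pos": 19, "onset_ms": 74789, "dur_ms": 235, "value": 69},
    {"pos": 20, "onset_ms": 75026, "dur_ms": 472, "value": 68},
    {"pos": 21, "onset_ms": 75500, "dur_ms": 116, "value": 66},
]

# The sequence is arranged in time order, so onsets never decrease.
assert all(a["onset_ms"] <= b["onset_ms"]
           for a, b in zip(note_units, note_units[1:]))
```

Any store that preserves this ordering (a table, a document, a database row set) serves equally well.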
The data calculation unit 12 is configured to acquire a watermark adding parameter associated with the target song, and to calculate, according to the watermark adding parameter, the note position identifier of the target note unit to which watermark data is to be added and the number of watermark additions;
In a specific implementation, the song data processing device may acquire a watermark adding parameter associated with the target song. Preferably, the watermark adding parameter may include a watermark position parameter and a watermark count parameter. The watermark position parameter is used to calculate the note position identifier of the target note unit in the song reference data to which watermark data is to be added, that is, the position where the watermark data is added; the watermark count parameter is used to calculate the number of watermark additions at each such position, that is, how many pieces of watermark data are added.
The song data processing device may calculate the note position identifier of the target note unit according to the watermark position parameter, and calculate the number of watermark additions according to the watermark count parameter. Preferably, the device may calculate the note position identifier from the watermark position parameter using a preset note unit formula, and calculate the number of watermark additions from the watermark count parameter using a preset count formula.
Specifically, please refer to fig. 5, which provides a schematic structural diagram of a data calculating unit according to an embodiment of the present invention. As shown in fig. 5, the data calculation unit 12 may include:
a list obtaining subunit 121, configured to perform mapping processing on the song ID of the target song, and obtain a mapping parameter list after the mapping processing;
In a specific implementation, the song data processing device may acquire the song ID of the target song and map it to obtain a mapping parameter list. For example, for the song ID 13784, the mapping parameter list 1, 3, 4, 7, 8 may be obtained after the mapping processing. Alternatively, the song data processing device may apply a preset mapping formula to the song ID to obtain the mapping parameter list.
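The text does not fix a concrete mapping formula, but the worked example (song ID 13784 → list 1, 3, 4, 7, 8) is consistent with taking the sorted set of decimal digits. The sketch below implements that reading as one plausible mapping; it is an assumption, not the claimed formula.

```python
def map_song_id(song_id: int) -> list[int]:
    # One plausible mapping (assumption): the sorted, de-duplicated
    # decimal digits of the song ID.
    return sorted({int(d) for d in str(song_id)})

# Reproduces the example from the text: 13784 -> [1, 3, 4, 7, 8]
assert map_song_id(13784) == [1, 3, 4, 7, 8]
```

Any deterministic mapping from song ID to a parameter list would serve the same role.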
A parameter obtaining subunit 122, configured to randomly acquire a watermark position parameter and a watermark count parameter associated with the target song from the mapping parameter list;
In a specific implementation, the song data processing device may randomly acquire a watermark position parameter and a watermark count parameter associated with the target song from the mapping parameter list. It can be understood that watermark data may be added at a plurality of positions, and different numbers of watermark data may be added at different positions, so both the watermark position parameter and the watermark count parameter may comprise a plurality of values. Continuing the above example, the randomly selected watermark position parameters are {3, 8} and the randomly selected watermark count parameters are {1, 7}. The count parameters may correspond one-to-one to the position parameters, or one count parameter may correspond to a plurality of position parameters, depending on the actual watermarking requirements.
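The random draw described above can be sketched as follows. The helper name is hypothetical, and the one-to-one pairing of position and count parameters is just one of the configurations the text allows.

```python
import random

def pick_watermark_params(mapping_list, n_positions=2):
    """Randomly draw position parameters and pair each with a count parameter."""
    position_params = random.sample(mapping_list, n_positions)
    # One count parameter per position parameter (one-to-one pairing);
    # a single count parameter shared by all positions is equally valid.
    count_params = [random.choice(mapping_list) for _ in position_params]
    return position_params, count_params

pos, cnt = pick_watermark_params([1, 3, 4, 7, 8])
assert len(pos) == 2 and len(cnt) == 2
assert all(v in [1, 3, 4, 7, 8] for v in pos + cnt)
```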
The identifier calculation subunit 123 is configured to calculate, according to the watermark position parameter, the note position identifier of the target note unit to which the watermark data is added;
In a specific implementation, the song data processing device may calculate the note position identifier of the target note unit according to the watermark position parameter. Preferably, the device may use a preset note unit formula of the form f(x) = ax² + bx + c, where the parameters a, b and c may be freely defined by the operator and x represents the watermark position parameter. Continuing the above example, assuming the preset note unit formula is f(x) = 2x + 1 and x = 3 and x = 8, the note position identifiers of the two target note units are 7 and 17, respectively.
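The position calculation under the example's coefficients (a = 0, b = 2, c = 1, so f(x) = 2x + 1) can be sketched as:

```python
def note_position(x: int, a: int = 0, b: int = 2, c: int = 1) -> int:
    # Preset note unit formula f(x) = a*x**2 + b*x + c; the default
    # coefficients reproduce the text's example f(x) = 2x + 1.
    return a * x**2 + b * x + c

# Position parameters {3, 8} map to target note positions 7 and 17.
assert [note_position(x) for x in (3, 8)] == [7, 17]
```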
A count calculation subunit 124, configured to calculate, according to the watermark count parameter, the number of watermark additions;
In a specific implementation, the song data processing device may calculate the number of watermark additions according to the watermark count parameter. Preferably, the device may use a preset count formula of the form f(y) = py⁵ + q, where the parameters p and q may be freely defined by the operator and y represents the watermark count parameter. Continuing the above example, assuming the preset count formula is f(y) = y⁵ + 9 and y = {1}, the resulting number of watermark additions is 10.
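Likewise, the count calculation with the example's coefficients (p = 1, q = 9, so f(y) = y⁵ + 9) can be sketched as:

```python
def watermark_count(y: int, p: int = 1, q: int = 9) -> int:
    # Preset count formula f(y) = p*y**5 + q; the default coefficients
    # reproduce the text's example f(y) = y**5 + 9.
    return p * y**5 + q

# Count parameter 1 yields 10 watermark additions.
assert watermark_count(1) == 10
```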
It should be noted that, in order to ensure normal use of the song reference data, the song data processing device may further screen the calculated note position identifiers and limit the number of watermark additions.
An identification acquisition unit 14, configured to acquire note position identifiers of target note units that satisfy the total number of note units of the note sequence;
In a specific implementation, the song data processing device acquires the note position identifiers of target note units that satisfy the total number of note units in the at least one note unit. Preferably, the device retains only note position identifiers less than or equal to that total, that is, a calculated note position identifier cannot exceed the maximum note position identifier in the at least one note unit.
A number-of-times acquisition unit 15, configured to acquire a number of watermark additions that satisfies a preset number interval;
In a specific implementation, the song data processing device acquires the number of watermark additions that satisfies a preset number interval. Preferably, a preset number interval, for example [3, 5], may be configured for the number of watermark additions. When the calculated number is smaller than the minimum of the interval, the device adjusts it to the minimum; for example, a calculated value of 1 is adjusted to 3. When the calculated number is greater than the maximum of the interval, the device adjusts it to the maximum; for example, a calculated value of 10 is adjusted to 5. When the calculated number lies within the interval, for example 3, 4 or 5, it is used directly without adjustment.
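The clamping rule just described reduces to restricting the computed count to the interval; the helper name and the interval [3, 5] are taken from the example, not mandated by the text.

```python
def clamp_watermark_count(n: int, lo: int = 3, hi: int = 5) -> int:
    # Counts below the interval are raised to its minimum, counts above
    # are lowered to its maximum, counts inside pass through unchanged.
    return max(lo, min(n, hi))

assert clamp_watermark_count(1) == 3    # below [3, 5] -> minimum
assert clamp_watermark_count(10) == 5   # above [3, 5] -> maximum
assert clamp_watermark_count(4) == 4    # inside -> unchanged
```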
A data adding unit 13, configured to add watermark data after the target note unit according to the note position identifier corresponding to the target note unit and the number of watermark additions, using the note time data and the note value of the target note unit;
In a specific implementation, the song data processing device may add the watermark data after the target note unit according to the note time data and the note value at the position where the watermark data is added, repeating the addition the calculated number of times. The watermark data is preferably a plurality of note units added within a unit time.
Specifically, please refer to fig. 6, which provides a schematic structural diagram of a data adding unit according to an embodiment of the present invention. As shown in fig. 6, the data adding unit 13 may include:
a note unit adding subunit 131, configured to obtain a note unit next to the target note unit, and add a note unit corresponding to the watermark adding frequency between the target note unit and the next note unit;
in a specific implementation, the song data processing device may obtain a note unit next to the target note unit, and add a note unit corresponding to the number of times of adding the watermark between the target note unit and the next note unit, for example: the note position of the currently calculated target note unit is identified as 17, and the calculated number of times of watermarking is 3, then 3 note units are added between the target note unit 17 and the next note unit 18.
A data setting subunit 132, configured to set the note time data and the note values of the added note units according to the note time data and note value of the target note unit and the note time data and note value of the next note unit, respectively;
In a specific implementation, the song data processing device may set the note time data and note value of each added note unit from those of the target note unit and the next note unit. Preferably, the device sets the note start time of each added note unit by decrementing the note start time of the next note unit in steps of a unit time amount, sets the note duration of each added note unit to the unit time amount, and sets the note value of each added note unit to the note value of the target note unit or the note value of the next note unit. For example, assuming a unit time of 1 millisecond, Table 2 can be derived from Table 1:
| Note position identifier | Note start time (ms) | Note duration (ms) | Note value |
| --- | --- | --- | --- |
| … | … | … | … |
| 17 | 74078 | 232 | 69 |
|  | 74312 | 1 | 69 |
|  | 74313 | 1 | 69 |
|  | 74314 | 1 | 69 |
| 18 | 74315 | 472 | 68 |
| 19 | 74789 | 235 | 69 |
| 20 | 75026 | 472 | 68 |
| 21 | 75500 | 116 | 66 |
| … | … | … | … |
As shown in Table 2, 3 note units are inserted between note units 17 and 18. Their note start times are set by decrementing the note start time of note unit 18 by the unit time amount, giving 74314, 74313 and 74312; their note durations are each set to 1 (millisecond), the unit time; and their note values are set to the note value 69 of note unit 17, although they could equally be set to the note value 68 of note unit 18 (not shown in Table 2).
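The construction of the inserted units follows directly from the rule above: onsets count down from the next unit's start time in unit-time steps, each inserted unit lasts one unit time, and the note value is copied from the target unit. Names in this sketch are illustrative.

```python
def make_watermark_units(next_onset_ms: int, count: int, note_value: int,
                         unit_ms: int = 1) -> list[dict]:
    # Onsets run from next_onset_ms - count*unit_ms up to
    # next_onset_ms - unit_ms, in ascending time order; every inserted
    # unit lasts one unit time and carries the copied note value.
    return [{"onset_ms": next_onset_ms - k * unit_ms,
             "dur_ms": unit_ms,
             "value": note_value}
            for k in range(count, 0, -1)]

# Next unit (position 18) starts at 74315; three inserted units copy the
# target unit's note value 69, as in Table 2.
units = make_watermark_units(74315, 3, 69)
assert [u["onset_ms"] for u in units] == [74312, 74313, 74314]
assert all(u["dur_ms"] == 1 and u["value"] == 69 for u in units)
```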
A data adjusting subunit 133, configured to adjust note time data of the target note unit;
In a specific implementation, after the note units have been added, the song data processing device needs to adjust the note time data of the target note unit. Preferably, the device adjusts the note duration of the target note unit by subtracting the sum of the note durations of the added note units. As shown in Table 2, the sum of the note durations of the 3 added note units is 3 (milliseconds), so the note duration of note unit 17 is adjusted from 235 to 232.
An identifier adjusting subunit 134, configured to adjust the note position identifiers of the note units;
In a specific implementation, the song data processing device may adjust the note position identifiers of the note units, that is, reorder the note position identifiers after the note units have been added, thereby completing the addition of the watermark data.
Table 3 can be formed from table 2 after reordering:
| Note position identifier | Note start time (ms) | Note duration (ms) | Note value |
| --- | --- | --- | --- |
| … | … | … | … |
| 17 | 74078 | 232 | 69 |
| 18 | 74312 | 1 | 69 |
| 19 | 74313 | 1 | 69 |
| 20 | 74314 | 1 | 69 |
| 21 | 74315 | 472 | 68 |
| 22 | 74789 | 235 | 69 |
| 23 | 75026 | 472 | 68 |
| 24 | 75500 | 116 | 66 |
| … | … | … | … |
The note position identifiers of the 3 newly added note units are 18, 19 and 20, respectively; the original note units whose identifiers were 18, 19, 20 and 21 become 21, 22, 23 and 24 after reordering, and so on.
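Putting the three steps together (insert the watermark units, shorten the target unit, renumber), a minimal sketch that reproduces the Table 1 → Table 3 transformation is shown below; the record layout and function name are illustrative assumptions.

```python
def add_watermark(units, target_pos, count, unit_ms=1):
    """Insert `count` watermark note units after the unit at `target_pos`."""
    i = next(k for k, u in enumerate(units) if u["pos"] == target_pos)
    target, nxt = units[i], units[i + 1]
    # Watermark units: onsets decremented from the next unit's onset,
    # unit-time durations, note value copied from the target unit.
    marks = [{"pos": None, "onset_ms": nxt["onset_ms"] - k * unit_ms,
              "dur_ms": unit_ms, "value": target["value"]}
             for k in range(count, 0, -1)]
    # Shorten the target unit by the total duration of the inserted units.
    target = dict(target, dur_ms=target["dur_ms"] - count * unit_ms)
    out = units[:i] + [target] + marks + units[i + 1:]
    base = out[0]["pos"]
    for offset, u in enumerate(out):   # reorder the position identifiers
        u["pos"] = base + offset
    return out

table1 = [
    {"pos": 17, "onset_ms": 74078, "dur_ms": 235, "value": 69},
    {"pos": 18, "onset_ms": 74315, "dur_ms": 472, "value": 68},
    {"pos": 19, "onset_ms": 74789, "dur_ms": 235, "value": 69},
    {"pos": 20, "onset_ms": 75026, "dur_ms": 472, "value": 68},
    {"pos": 21, "onset_ms": 75500, "dur_ms": 116, "value": 66},
]
table3 = add_watermark(table1, target_pos=17, count=3)
assert [u["pos"] for u in table3] == [17, 18, 19, 20, 21, 22, 23, 24]
assert [u["onset_ms"] for u in table3][:5] == [74078, 74312, 74313, 74314, 74315]
assert table3[0]["dur_ms"] == 232
```

The playback timeline is unchanged: the inserted units occupy only the final milliseconds previously covered by the target unit's duration.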
In the embodiment of the present invention, the note position identifier, note time data and note value of each note unit in at least one note unit of the target song are acquired, and watermark data is added the corresponding number of times at the corresponding note units of the song reference data according to the calculated note position identifiers and numbers of watermark additions. Adding the watermark data to the song reference data prevents the song reference data from being stolen by lawbreakers, ensures the security of the song reference data, enables classified management of the song reference data, and improves the usability of the song singing and playing function. The calculated note position identifiers are screened and the number of watermark additions is limited, which ensures the normal use of the song reference data.
Referring to fig. 7, a schematic structural diagram of another song data processing apparatus according to an embodiment of the present invention is provided. As shown in fig. 7, the song data processing apparatus 1000 may include: at least one processor 1001 (such as a CPU), at least one network interface 1004, a user interface 1003, a memory 1005 and at least one communication bus 1002. The communication bus 1002 is used to enable communication among these components. The user interface 1003 may include a display screen (Display) and a keyboard (Keyboard), and the optional user interface 1003 may further include standard wired and wireless interfaces. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory. The memory 1005 may optionally be at least one storage device located remotely from the processor 1001. As shown in fig. 7, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a data processing application program.
In the song data processing apparatus 1000 shown in fig. 7, the user interface 1003 is mainly used as an interface for receiving user input and acquiring data entered by the user, and the processor 1001 may be configured to invoke the data processing application stored in the memory 1005 and specifically perform the following operations:
acquiring song reference data of a target song, wherein the song reference data comprises a note sequence formed by arranging at least one note unit according to a time sequence, and each note unit in the at least one note unit comprises a note position identifier, note time data and a note value;
acquiring a watermark adding parameter associated with the target song, and calculating, according to the watermark adding parameter, the note position identifier corresponding to the target note unit to which watermark data is to be added and the number of watermark additions;
and adding watermark data after the target note unit according to the note position identifier corresponding to the target note unit and the number of watermark additions, using the note time data and the note value of the target note unit.
In one embodiment, the watermark adding parameter comprises a watermark position parameter and a watermark count parameter;
when the processor 1001 acquires the watermark adding parameter associated with the target song and calculates, according to the watermark adding parameter, the note position identifier corresponding to the target note unit to which watermark data is added and the number of watermark additions, the following operations are specifically performed:
mapping the song ID of the target song, and acquiring a mapping parameter list after the mapping;
randomly acquiring a watermark position parameter and a watermark count parameter associated with the target song from the mapping parameter list;
calculating, according to the watermark position parameter, the note position identifier of the target note unit to which the watermark data is added;
and calculating, according to the watermark count parameter, the number of watermark additions.
In one embodiment, before adding the watermark data after the target note unit according to the note position identifier corresponding to the target note unit and the number of watermark additions, using the note time data and the note value of the target note unit, the processor 1001 further performs the following operations:
acquiring note position identifiers of target note units that satisfy the total number of note units of the note sequence;
and acquiring a number of watermark additions that satisfies a preset number interval.
In one embodiment, when the processor 1001 executes the step of acquiring note position identifiers of target note units that satisfy the total number of note units of the note sequence, the following operations are specifically performed:
acquiring note position identifiers of target note units that are less than or equal to the total number of note units of the note sequence;
when the processor 1001 executes the step of acquiring a number of watermark additions that satisfies the preset number interval, the following operations are specifically performed:
when the number of watermark additions is smaller than the minimum value of the preset number interval, adjusting the number of watermark additions to the minimum value;
when the number of watermark additions is greater than the maximum value of the preset number interval, adjusting the number of watermark additions to the maximum value;
and when the number of watermark additions lies within the preset number interval, determining the number of watermark additions.
In one embodiment, when the processor 1001 adds the watermark data after the target note unit according to the note position identifier corresponding to the target note unit and the number of watermark additions, using the note time data and the note value of the target note unit, the following operations are specifically performed:
acquiring the note unit next to the target note unit, and adding, between the target note unit and the next note unit, as many note units as the number of watermark additions;
setting the note time data and the note values of the added note units according to the note time data and note value of the target note unit and the note time data and note value of the next note unit, respectively;
adjusting note time data of the target note unit;
and adjusting the note position identification of each note unit.
In one embodiment, the note time data for each note unit includes a note onset time and a note duration for each note unit;
when the processor 1001 sets the note time data and the note values of the added note units according to the note time data and note value of the target note unit and the note time data and note value of the next note unit, the following operations are specifically performed:
setting the note start time of each added note unit by decrementing the note start time of the next note unit in steps of a unit time amount, setting the note duration of each added note unit to the unit time amount, and setting the note value of each added note unit to the note value of the target note unit or the note value of the next note unit.
When the processor 1001 adjusts the note time data of the target note unit, the following operation is specifically performed:
adjusting the note duration of the target note unit according to the sum of the note durations of the added note units.
In the embodiment of the present invention, the note position identifier, note time data and note value of each note unit in at least one note unit of the target song are acquired, and watermark data is added the corresponding number of times at the corresponding note units of the song reference data according to the calculated note position identifiers and numbers of watermark additions. Adding the watermark data to the song reference data prevents the song reference data from being stolen by lawbreakers, ensures the security of the song reference data, enables classified management of the song reference data, and improves the usability of the song singing and playing function. The calculated note position identifiers are screened and the number of watermark additions is limited, which ensures the normal use of the song reference data.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above disclosure is merely a preferred embodiment of the present invention and certainly cannot be taken as limiting the scope of the claims; equivalent variations made in accordance with the claims of the present invention still fall within the scope of the invention.
Claims (14)
1. A song data processing method, comprising:
acquiring song reference data of a target song, wherein the song reference data comprises a note sequence formed by arranging at least one note unit according to a time sequence, and each note unit in the at least one note unit comprises a note position identifier, note time data and a note value;
acquiring a watermark adding parameter associated with the target song, and calculating, according to the watermark adding parameter, the note position identifier corresponding to the target note unit to which watermark data is to be added and the number of watermark additions;
acquiring note position identifiers of target note units that satisfy the total number of note units of the note sequence;
acquiring a number of watermark additions that satisfies a preset number interval;
and adding watermark data after the target note unit according to the note position identifier corresponding to the target note unit that satisfies the total number of note units of the note sequence and the number of watermark additions that satisfies the preset number interval, using the note time data and the note value of the target note unit.
2. The method according to claim 1, wherein the watermark adding parameter comprises a watermark position parameter and a watermark count parameter;
the acquiring a watermark adding parameter associated with the target song and calculating, according to the watermark adding parameter, the note position identifier corresponding to the target note unit to which watermark data is added and the number of watermark additions comprises:
mapping the song ID of the target song, and acquiring a mapping parameter list after the mapping;
randomly acquiring a watermark position parameter and a watermark count parameter associated with the target song from the mapping parameter list;
calculating, according to the watermark position parameter, the note position identifier of the target note unit to which the watermark data is added;
and calculating, according to the watermark count parameter, the number of watermark additions.
3. The method according to claim 1, wherein the acquiring note position identifiers of target note units that satisfy the total number of note units of the note sequence comprises:
acquiring note position identifiers of target note units that are less than or equal to the total number of note units of the note sequence;
and the acquiring a number of watermark additions that satisfies a preset number interval comprises:
when the number of watermark additions is smaller than the minimum value of the preset number interval, adjusting the number of watermark additions to the minimum value;
when the number of watermark additions is greater than the maximum value of the preset number interval, adjusting the number of watermark additions to the maximum value;
and when the number of watermark additions lies within the preset number interval, determining the number of watermark additions.
4. The method as claimed in claim 1, wherein said employing the note time data of the target note unit and the note value of the target note unit, and adding watermark data after the target note unit according to the note position identifier corresponding to the target note unit and the watermark adding number, comprises:
acquiring a next note unit of the target note unit, and adding a note unit corresponding to the watermark adding frequency between the target note unit and the next note unit;
respectively setting the note time data and the note value of each note unit corresponding to the watermark adding times according to the note time data of the target note unit, the note value of the target note unit, the note time data of the next note unit, and the note value of the next note unit;
adjusting the note time data of the target note unit;
and adjusting the note position identifier of each note unit.
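The insertion and renumbering steps of claim 4 can be sketched as follows, assuming each note unit is a plain dict with a hypothetical `position` key (the patent does not prescribe a data layout):

```python
def insert_watermark_units(sequence, target_pos, watermark_units):
    """Insert the watermark note units between the target note unit and the
    next note unit, then renumber every note position identifier."""
    new_seq = (sequence[:target_pos + 1]
               + watermark_units
               + sequence[target_pos + 1:])
    for pos, unit in enumerate(new_seq):
        unit["position"] = pos  # adjust the note position identifier of each unit
    return new_seq
```

Renumbering the whole sequence after the splice keeps every note position identifier consistent with its new index.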
5. The method of claim 4, wherein the note time data of each note unit comprises a note start time and a note duration of that note unit;
the respectively setting the note time data and the note value of each note unit corresponding to the watermark adding times according to the note time data of the target note unit, the note value of the target note unit, the note time data of the next note unit, and the note value of the next note unit comprises:
respectively setting the note start time of each note unit corresponding to the watermark adding times to a time obtained by decrementing the note start time of the next note unit by a unit time quantum, setting the note duration of each such note unit to the unit time quantum, and setting the note value of each such note unit to the note value of the target note unit or the note value of the next note unit.
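Concretely, for k watermark units, unit time quantum u, and next-note start time t, claim 5 stacks the units at t − k·u, …, t − u, each lasting u, so the last watermark unit ends exactly where the next note begins. A sketch (claim 5 allows either the target note's or the next note's value; this sketch copies the target's, and the field names are illustrative):

```python
def build_watermark_units(k, next_start, unit_quantum, target_value):
    """Build k watermark note units stacked back-to-back before the next note.

    The i-th unit starts (k - i) quanta before the next note's start time.
    """
    return [
        {
            "start": next_start - (k - i) * unit_quantum,
            "duration": unit_quantum,
            "value": target_value,  # claim 5 also permits the next note's value
        }
        for i in range(k)
    ]
```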
6. The method of claim 5, wherein said adjusting the note time data of the target note unit comprises:
and adjusting the note duration of the target note unit according to the sum of the note durations of the note units corresponding to the watermark adding times.
7. A song data processing apparatus characterized by comprising:
the data acquisition unit is used for acquiring song reference data of a target song, the song reference data comprises a note sequence formed by arranging at least one note unit in a time sequence, and each note unit in the at least one note unit comprises a note position identifier, note time data and a note value;
the data calculation unit is used for acquiring a watermark adding parameter associated with the target song and calculating a note position identifier corresponding to a target note unit added with the watermark data and the number of times of adding the watermark by the watermark data according to the watermark adding parameter;
an identification obtaining unit, configured to obtain note position identifications of the target note units that satisfy the total amount of note units of the note sequence;
the number obtaining unit is used for obtaining the number of times of adding the watermark in the interval meeting the preset number of times;
and the data adding unit is used for adding watermark data after the target note unit by using the note time data of the target note unit and the note value of the target note unit, according to the note position identifier corresponding to the target note unit that satisfies the total number of note units of the note sequence and the watermark adding times that satisfy the preset times interval.
8. The apparatus of claim 7, wherein the watermark adding parameters comprise a watermark position parameter and a watermark times parameter;
the data calculation unit includes:
a list obtaining subunit, configured to map the song ID of the target song and obtain the mapping parameter list produced by the mapping;
a parameter obtaining subunit, configured to randomly obtain, from the mapping parameter list, the watermark position parameter and the watermark times parameter associated with the target song;
an identifier calculating subunit, configured to calculate, according to the watermark position parameter, the note position identifier of the target note unit to which the watermark data is to be added;
and a times calculating subunit, configured to calculate the watermark adding times of the watermark data according to the watermark times parameter.
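Claims 2 and 8 derive the parameters by mapping the song ID to a parameter list and drawing a (position, times) pair from it at random. A hash-based mapping is one plausible realization; the patent does not specify the mapping function, so the function name, hash choice, and table layout here are all assumptions:

```python
import hashlib
import random

def get_watermark_params(song_id, parameter_lists):
    """Map a song ID to one of several parameter lists, then randomly pick a
    (watermark position parameter, watermark times parameter) pair from it."""
    digest = hashlib.sha256(song_id.encode("utf-8")).hexdigest()
    mapped_list = parameter_lists[int(digest, 16) % len(parameter_lists)]
    return random.choice(mapped_list)  # one (position_param, times_param) pair
```

Because the mapping is deterministic while the draw is random, the same song always consults the same parameter list but may receive different watermark placements across runs.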
9. The apparatus of claim 7, wherein the identification obtaining unit is specifically configured to obtain the note position identifier of the target note unit that is less than or equal to the total number of note units of the note sequence;
the number of times obtaining unit is specifically configured to:
when the watermark adding times are less than the minimum value of the preset times interval, adjust the watermark adding times to the minimum value;
when the watermark adding times are greater than the maximum value of the preset times interval, adjust the watermark adding times to the maximum value;
and when the watermark adding times fall within the preset times interval, keep the watermark adding times unchanged.
10. The apparatus according to claim 7, wherein the data adding unit includes:
a note unit adding subunit, configured to acquire the note unit next to the target note unit, and insert a number of note units equal to the watermark adding times between the target note unit and the next note unit;
a data setting subunit, configured to set, according to the note time data of the target note unit, the note value of the target note unit, the note time data of the next note unit, and the note value of the next note unit, the note time data of the note unit corresponding to the number of times of adding the watermark and the note value of the note unit corresponding to the number of times of adding the watermark, respectively;
a data adjusting subunit, configured to adjust note time data of the target note unit;
and an identifier adjusting subunit, configured to adjust the note position identifier of each note unit.
11. The apparatus of claim 10, wherein the note time data of each note unit comprises a note start time and a note duration of that note unit;
the data setting subunit is specifically configured to set the note start time of each note unit corresponding to the watermark adding times to a time obtained by decrementing the note start time of the next note unit by a unit time quantum, set the note duration of each such note unit to the unit time quantum, and set the note value of each such note unit to the note value of the target note unit or the note value of the next note unit.
12. The apparatus of claim 11, wherein the data adjusting subunit is specifically configured to adjust the note duration of the target note unit according to a sum of the note durations of the note units corresponding to the watermark adding times.
13. A song data processing apparatus, characterized in that the song data processing apparatus comprises a processor and a memory, wherein the memory is configured to store program code, and the processor is configured to invoke the program code to perform the song data processing method according to any one of claims 1 to 6.
14. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program comprising program instructions; the program instructions, when executed by a processor, cause the processor to perform a song data processing method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610620768.8A CN106250729B (en) | 2016-08-01 | 2016-08-01 | Song data processing method and equipment thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106250729A CN106250729A (en) | 2016-12-21 |
CN106250729B true CN106250729B (en) | 2020-05-26 |
Family
ID=57605736
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610620768.8A Active CN106250729B (en) | 2016-08-01 | 2016-08-01 | Song data processing method and equipment thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106250729B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1386276A (en) * | 2000-06-13 | 2002-12-18 | Sony Corporation | Data record medium, content data, record medium, data recording method and apparatus, and data reproducing method and apparatus |
KR20030016583A (en) * | 2001-08-21 | 2003-03-03 | MarkTek Co., Ltd. | Transmitting/receiving system using watermark as control signal and method thereof |
CN1539142A (en) * | 2000-02-01 | 2004-10-20 | Koninklijke Philips Electronics N.V. | Protecting content from illicit reproduction by proof of existence of complete data set |
CN1659579A (en) * | 2002-06-05 | 2005-08-24 | Sony Electronics Inc. | Method and apparatus to detect watermarks that are resistant to resizing, rotation and translation |
CN101038771A (en) * | 2006-03-18 | 2007-09-19 | Liaoning Normal University | Novel method of digital watermarking for protecting literary property of music works |
CN101206861A (en) * | 2007-12-25 | 2008-06-25 | Ningbo University | Method for embedding digital music production authentication information and method for authentication of said production |
CN101211562A (en) * | 2007-12-25 | 2008-07-02 | Ningbo University | Digital music works damage-free digital watermarking embedding and extraction method |
CN104412609A (en) * | 2012-07-05 | 2015-03-11 | LG Electronics Inc. | Method and apparatus for processing digital service signals |
CN104810022A (en) * | 2015-05-11 | 2015-07-29 | Northeast Normal University | Time-domain digital audio watermarking method based on audio breakpoint |
CN105741845A (en) * | 2016-03-30 | 2016-07-06 | Beijing QIYI Century Science & Technology Co., Ltd. | Audio watermark adding and detecting methods and devices |
- 2016-08-01: CN201610620768.8A granted as CN106250729B (Active)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102110057B1 (en) | Song confirmation method and device, storage medium | |
CN105528372B (en) | A kind of address search method and equipment | |
CN105005439B (en) | Icon management method, device and mobile terminal | |
CN103699530A (en) | Method and equipment for inputting texts in target application according to voice input information | |
CN105023559A (en) | Karaoke processing method and system | |
US9734828B2 (en) | Method and apparatus for detecting user ID changes | |
CN108256718B (en) | Policy service task allocation method and device, computer equipment and storage equipment | |
CN108519998B (en) | Problem guiding method and device based on knowledge graph | |
CN106055659B (en) | Lyric data matching method and equipment thereof | |
CN105138557A (en) | Music random play method and apparatus | |
CN103955490A (en) | Audio playing method and audio playing equipment | |
CN111190962A (en) | File synchronization method and device and local terminal | |
CN105047203A (en) | Audio processing method, device and terminal | |
CN110688518A (en) | Rhythm point determining method, device, equipment and storage medium | |
CN104615333B (en) | The group technology and play system of a kind of playback equipment | |
CN104978377A (en) | Multimedia data processing method, multimedia data processing device and terminal | |
CN106601268B (en) | Multimedia data processing method and device | |
CN101896876A (en) | Input system, portable terminal, data processing device, and input method | |
CN104978961B (en) | A kind of audio-frequency processing method, device and terminal | |
CN111932198B (en) | File auditing method and related products | |
WO2019100031A1 (en) | User interface and method based on sliding-scale cluster groups for precise look-alike modeling | |
CN106250729B (en) | Song data processing method and equipment thereof | |
CN109117622A (en) | A kind of identity identifying method based on audio-frequency fingerprint | |
CN112925711A (en) | Local joint debugging test method and related device | |
CN111027065B (en) | Leucavirus identification method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||