CN110909789A - Sound volume prediction method and device, electronic equipment and storage medium - Google Patents

Sound volume prediction method and device, electronic equipment and storage medium

Info

Publication number
CN110909789A
Authority
CN
China
Prior art keywords
volume data
sound volume
target
data
group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911140661.3A
Other languages
Chinese (zh)
Inventor
于广泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing second hand Artificial Intelligence Technology Co.,Ltd.
Original Assignee
Jingshuo Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingshuo Technology Beijing Co Ltd filed Critical Jingshuo Technology Beijing Co Ltd
Priority to CN201911140661.3A priority Critical patent/CN110909789A/en
Publication of CN110909789A publication Critical patent/CN110909789A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines


Abstract

The application provides a sound volume prediction method and device, electronic equipment and a storage medium, and relates to the technical field of data processing. In the application, first sound volume data of a target keyword at a preset time is acquired, and second sound volume data of a reference word having a preset relationship with the target keyword is acquired. Next, reference sound volume data is determined in the second sound volume data based on the first sound volume data. Then, the sound volume data of the target keyword at the target time is obtained based on the reference sound volume data. By the method, the problem of low accuracy of the existing sound volume prediction technology can be solved.

Description

Sound volume prediction method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a sound volume prediction method and apparatus, an electronic device, and a storage medium.
Background
As data processing technology develops, its range of application continues to widen. For example, trends in sound volume (e.g., the number of times a word appears on an internet platform) can be predicted based on existing data processing techniques.
The inventor has found through research that existing sound volume prediction techniques generally predict sound volume based only on the historical trend of the target keyword's own sound volume, so the accuracy of the prediction results is low.
Disclosure of Invention
In view of the above, an object of the present application is to provide a sound volume prediction method and apparatus, an electronic device, and a storage medium, so as to solve the problem of low accuracy in the existing sound volume prediction technology.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
a method of sound volume prediction, comprising:
acquiring first volume data of a target keyword at preset time, and acquiring second volume data of a reference word having a preset relation with the target keyword;
determining reference sound volume data in the second sound volume data based on the first sound volume data;
and obtaining the sound volume data of the target keyword at the target time based on the reference sound volume data.
In a preferred choice of the embodiment of the present application, in the sound volume prediction method, the step of obtaining second sound volume data of a reference word having a preset relationship with the target keyword includes:
obtaining at least one related word of the target keyword through a word vector, and taking each related word as a reference word having a preset relation with the target keyword;
second sound volume data of each of the reference words is obtained.
In a preferred option of an embodiment of the present application, in the sound volume prediction method, the step of determining reference sound volume data in the second sound volume data based on the first sound volume data includes:
calculating a correlation coefficient between each group of unit sound volume data and the first sound volume data for each group of unit sound volume data, wherein the second sound volume data comprises at least one group of unit sound volume data, and the time length of each group of unit sound volume data is the same as the length of the preset time;
in the at least one set of unit sound volume data, a set of unit sound volume data is determined as reference sound volume data based on the correlation coefficient.
In a preferred choice of the embodiment of the present application, in the sound volume prediction method, the step of calculating, for each set of unit sound volume data, a correlation coefficient between the set of unit sound volume data and the first sound volume data includes:
and for each group of unit sound volume data, calculating the correlation coefficient between the group of unit sound volume data and the first sound volume data based on a Pearson correlation coefficient calculation formula.
In a preferred option of the embodiment of the present application, in the sound volume prediction method, the step of obtaining the sound volume data of the target keyword at the target time based on the reference sound volume data includes:
acquiring target sound volume data adjacent to the reference sound volume data in the second sound volume data, wherein the formation time of the target sound volume data is after the formation time of the reference sound volume data in a time dimension;
and obtaining the sound volume data of the target keyword at the target time based on the target sound volume data, wherein the target time is positioned after the preset time in the time dimension.
In a preferred option of the embodiment of the present application, in the sound volume prediction method, the number of the reference words is multiple, the target sound volume data is multiple groups and corresponds to the multiple reference words one by one, and the step of obtaining the sound volume data of the target keyword at the target time by calculation based on the target sound volume data includes:
determining a weight coefficient of each group of the target volume data;
and calculating the sound volume data of the target keyword at the target time based on each group of target sound volume data and the weight coefficient of the group of target sound volume data.
In a preferred selection of the embodiment of the present application, in the sound volume prediction method, the reference sound volume data are multiple groups and correspond to multiple groups of the target sound volume data one to one, and the step of determining the weight coefficient of each group of the target sound volume data includes:
acquiring a similarity coefficient between the reference word corresponding to each group of target sound volume data and the target keyword, and calculating a ratio between each group of reference sound volume data and the first sound volume data;
and for each group of target sound volume data, calculating the weight coefficient of the group of target sound volume data based on the similarity coefficient of the group and the ratio corresponding to the reference sound volume data of the group.
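As a minimal sketch of this weighting scheme (assuming Python; the function name, the normalization of similarity coefficients, and the use of mean-volume ratios are illustrative assumptions, not the patent's exact formula), the sound volume of the target keyword at the target time may be combined from the groups of target sound volume data as follows:

```python
# Illustrative sketch of the weighted prediction step; the exact weighting
# formula is an assumption, not taken verbatim from the disclosure.
def predict_volume(target_groups, similarities, first_mean, reference_means):
    """Combine several groups of target sound volume data into one prediction.

    target_groups   -- one group of target sound volume data per reference word
    similarities    -- similarity coefficient of each reference word to the keyword
    first_mean      -- mean of the first sound volume data of the target keyword
    reference_means -- mean of each group of reference sound volume data
    """
    total_sim = sum(similarities)
    prediction = [0.0] * len(target_groups[0])
    for group, sim, ref_mean in zip(target_groups, similarities, reference_means):
        scale = first_mean / ref_mean   # ratio of first volume to reference volume
        weight = sim / total_sim        # normalized similarity coefficient
        for i, value in enumerate(group):
            prediction[i] += weight * scale * value
    return prediction
```

For instance, two reference words with equal similarity whose volume is twice the keyword's would each contribute half of their down-scaled target data to the prediction.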
An embodiment of the present application further provides a sound volume prediction apparatus, including:
the data acquisition module is used for acquiring first sound volume data of a target keyword at preset time and acquiring second sound volume data of a reference word having a preset relation with the target keyword;
a data determination module to determine reference sound volume data in the second sound volume data based on the first sound volume data;
and the data acquisition module is used for acquiring the sound volume data of the target keyword at the target time based on the reference sound volume data.
On the basis, an embodiment of the present application further provides an electronic device, including:
a memory for storing a computer program;
and the processor is connected with the memory and is used for executing the computer program to realize the sound volume prediction method.
On the basis of the above, embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed, implements the above-described sound volume prediction method.
According to the sound volume prediction method and apparatus, the electronic device, and the storage medium provided by the present application, reference sound volume data is determined in the second sound volume data of the reference word based on the first sound volume data of the target keyword, and the sound volume data of the target keyword is then obtained based on the reference sound volume data. In this way, the sound volume change of the target keyword can be determined based on the sound volume change of the reference word, so that the sound volume of the target keyword can be predicted. This improves upon the low prediction accuracy of existing sound volume prediction techniques, which predict sound volume based only on the historical trend of the target keyword's own sound volume, and therefore has high practical value.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
Fig. 1 is an application scene interaction diagram of an electronic device according to an embodiment of the present application.
Fig. 2 is a schematic flow chart of a sound volume prediction method according to an embodiment of the present application.
Fig. 3 is a flowchart illustrating sub-steps included in step S110 in fig. 2.
Fig. 4 is a flowchart illustrating the sub-steps included in step S120 in fig. 2.
Fig. 5 is a waveform diagram of sound volume data provided in an embodiment of the present application.
Fig. 6 is a flowchart illustrating sub-steps included in step S130 in fig. 2.
Fig. 7 is a block diagram schematically illustrating functional modules included in a sound volume prediction apparatus according to an embodiment of the present application.
Icon: 10-an electronic device; 12-a memory; 14-a processor; 100-a sound volume prediction means; 110-a data acquisition module; 120-a data determination module; 130-data acquisition module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
As shown in fig. 1, an electronic device 10 according to an embodiment of the present disclosure may include a memory 12 and a processor 14, where the memory 12 may have a sound volume prediction apparatus 100 disposed therein.
Wherein the memory 12 and the processor 14 are electrically connected, directly or indirectly, to realize data transmission or interaction. For example, they may be electrically connected to each other via one or more communication buses or signal lines. The sound volume prediction apparatus 100 includes at least one software function module that can be stored in the memory 12 in the form of software or firmware. The processor 14 is configured to execute an executable computer program stored in the memory 12, for example, the software function modules and computer programs included in the sound volume prediction apparatus 100, so as to implement the sound volume prediction method provided in the embodiment of the present application.
Alternatively, the memory 12 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The Processor 14 may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), a System on Chip (SoC), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components.
It will be appreciated that the configuration shown in fig. 1 is merely illustrative, and that the electronic device 10 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1, and may also include a communication unit for exchanging information with other devices, for example.
The specific type of the electronic device 10 is not limited, and may be selected according to actual application requirements, as long as the electronic device has a certain data processing capability.
For example, in an alternative example, the electronic device 10 includes a terminal device such as a mobile phone or a computer, a server device, and the like.
With reference to fig. 2, an embodiment of the present application further provides a sound volume prediction method applicable to the electronic device 10. Wherein the method steps defined by the flow related to the sound volume prediction method can be implemented by the electronic device 10. The specific process shown in fig. 2 will be described in detail below.
Step S110, obtaining first volume data of a target keyword at a preset time, and obtaining second volume data of a reference word having a preset relationship with the target keyword.
In this embodiment, when the sound volume prediction is required, a target keyword (an object of sound volume prediction) may be determined first. Then, first volume data of the target keyword at a preset time may be acquired, and second volume data of the reference word may be acquired.
Since the target keyword and the reference word have a certain preset relationship, the sound volume data of the target keyword can be predicted based on the sound volume data of the reference word with higher reliability.
Step S120, determining reference sound volume data in the second sound volume data based on the first sound volume data.
In this embodiment, after the first sound volume data and the second sound volume data are obtained based on step S110, reference sound volume data may be determined in the second sound volume data based on the first sound volume data. In this way, a certain relationship exists between the reference sound volume data and the first sound volume data.
Step S130, obtaining the volume data of the target keyword in the target time based on the reference volume data.
In this embodiment, after the reference sound volume data is determined based on step S120, prediction may be performed based on the reference sound volume data to obtain the sound volume data of the target keyword at the target time.
Based on the above method, the target keyword and the reference word have a preset relationship, and the reference sound volume data is determined in the second sound volume data of the reference word based on the first sound volume data of the target keyword. The sound volume change of the target keyword can therefore be determined based on the sound volume change of the reference word, thereby realizing prediction of the sound volume of the target keyword. This improves upon the low accuracy of the existing sound volume prediction technology, which predicts sound volume based only on the historical trend of the target keyword's own sound volume (for example, predicting the change trend at the current time from the change trend of a preceding period, or from the change trend at the same time in previous years), and therefore has high practical value.
It should be noted that, in step S110, the target keyword should be understood in a broad sense: it is not limited to a single word, and may also be a phrase or a combination of multiple words.
In detail, in a specific application example, the target keyword may be retinol, post-sun repair, eye fine lines, rosemary, Thermage, importing instrument, or another keyword.
The specific manner of obtaining the first volume data based on step S110 is not limited, and may be selected according to the actual application requirements.
For example, in an alternative example, in order to test the accuracy of the sound volume prediction method provided by the present application, sound volume data of the target keyword in a history period may be obtained, and the sound volume data may be used as the first sound volume data.
In detail, in a specific application example, if the current date is November 19, 2019, the preset time may be October 1, 2019 to October 25, 2019. In this way, the sound volume data of the target keyword from October 26, 2019 to November 19, 2019 can be predicted and then compared with the actual sound volume data of the target keyword over the same period, so as to determine the prediction accuracy.
For another example, in another alternative example, in order to predict the sound volume data of the target keyword after the current time period, the sound volume data of the target keyword in the current time period may be acquired, and the sound volume data may be used as the first sound volume data.
In detail, in a specific application example, if the current date is November 19, 2019, the preset time may be October 26, 2019 to November 19, 2019, so as to predict the sound volume data of the target keyword after November 19, 2019.
It is understood that, in the two examples, the time length of the preset time is not limited, and may be selected according to the actual application requirement.
For example, in an alternative example, the length of the preset time may be 20 days, 25 days, 30 days, 32 days, 50 days, etc., and may be determined according to actual needs.
Moreover, the specific manner of obtaining the second volume data based on step S110 is not limited, and may be selected according to the actual application requirement, for example, different selections may be performed based on different specific contents of the preset relationship.
For example, in an alternative example, the reference word may be a related word to the target keyword. Therefore, based on this example, in conjunction with fig. 3, step S110 may include step S111 and step S113, as described in detail below.
Step S111, at least one related word of the target keyword is obtained through the word vector, and each related word is respectively used as a reference word having a preset relation with the target keyword.
In this embodiment, after determining a target keyword that needs to be subjected to sound volume prediction, at least one related word of the target keyword may be obtained through a word vector, and each related word is respectively used as a reference word having a preset relationship with the target keyword.
In step S113, second volume data of each of the reference words is obtained.
In this embodiment, after obtaining at least one reference word based on step S111, second sound volume data of each reference word may be obtained, yielding at least one set of second sound volume data.
Optionally, the specific manner of executing step S111 to obtain the related word is not limited, and may be selected according to the actual application requirement.
For example, in an alternative example, each related word determined based on the word vector may be obtained and used as a reference word of the target keyword.
For another example, in another alternative example, after determining at least one related word based on the word vector, whether to use each related word as a reference word of the target keyword may be determined based on the similarity coefficient between that related word and the target keyword.
In detail, in a specific application example, a related word whose similarity coefficient reaches a certain threshold may be used as a reference word of the target keyword.
Based on the foregoing example, if the target keyword is retinol, the related words determined based on the word vector may include niacinamide, vitamin A, retinol, retinol palmitate, retinal, ceramide, kojic acid, vitamin B3, ferulic acid, and panthenol.
If the target keyword is a post-sun repair, the related words determined based on the word vector may include post-sun repair, post-sun repair mask, post-sun repair cream, whitening sun protection, post-sun skin, sun protection repair, calming the skin, post-sun care, and whitening the mask.
If the target keyword is eye fine lines, the related words determined based on the word vector may include periocular fine lines, eye wrinkles, small fine lines, dry lines, canthus fine lines, dry fine lines, eye lines, fine lines, periocular wrinkles, and under-eye dark circles.
If the target keyword is rosemary, the related words determined based on the word vector may include thyme, sage, lemongrass, parsley, basil, rosemary leaf, oregano, dill, and vanilla.
If the target keyword is Thermage, the related words determined based on the word vector may include thermal lifting, ultrasonic knife, radiofrequency lifting, face lifting, thread-embedding lifting, water laser needle, gold microneedle, deep blue radio frequency, sonic lifting, and photon rejuvenation.
If the target keyword is a importing instrument, the related words determined based on the word vector may include a beauty instrument, a face cleaning instrument, a massage instrument, refa, yaman, a face washing instrument, a beauty instrument, a gold bar, a face washing machine, and an ion importing instrument.
In step S111, the specific manner of obtaining the related words based on the word vector is not specifically limited; for example, the open-source tool word2vec, a neural network model, may be used.
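As an illustrative sketch of selecting related words from word vectors (assuming Python; the three-dimensional vectors below are invented toy values, not the output of a trained word2vec model), nearest neighbours under cosine similarity can serve as candidate reference words:

```python
import math

# Toy word vectors for illustration only; a real model would be trained
# on a large corpus and use hundreds of dimensions.
vectors = {
    "retinol":     [0.9, 0.1, 0.2],
    "vitamin A":   [0.8, 0.2, 0.1],
    "niacinamide": [0.7, 0.3, 0.2],
    "rosemary":    [0.1, 0.9, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def related_words(keyword, topn=2):
    """Return the topn words most similar to `keyword` as candidate reference words."""
    scores = [(w, cosine(vectors[keyword], v))
              for w, v in vectors.items() if w != keyword]
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return [w for w, _ in scores[:topn]]
```

With the toy vectors above, "vitamin A" and "niacinamide" rank closest to "retinol", while "rosemary" does not.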
Optionally, the specific manner of executing step S113 to obtain the second volume data is not limited, and may be selected according to the actual application requirement.
For example, in an alternative example, in order to improve the accuracy of sound volume prediction, all the historical sound volume data of each of the reference words may be acquired as the second sound volume data of the reference word.
For another example, in another alternative example, in order to avoid the problem that excessive data causes waste of computing resources and storage resources, historical sound volume data of each reference word in a certain period of time (such as the last year, two years, etc.) may be obtained as the second sound volume data of the reference word.
It should be noted that, in step S120, a specific manner for determining the reference volume data is not limited, and may be selected according to actual application requirements.
For example, in an alternative example, in order to ensure that the determined reference volume data has high referential property, in conjunction with fig. 4, step S120 may include step S121 and step S123, which are described in detail below.
Step S121, calculating a correlation coefficient between each set of unit volume data and the first volume data for each set of unit volume data.
In this embodiment, after obtaining the second sound volume data based on step S110, each set of unit sound volume data in the second sound volume data may be compared with the first sound volume data to obtain a corresponding correlation coefficient.
The second sound volume data comprises at least one group of unit sound volume data, and the time length of each group of unit sound volume data is the same as the length of the preset time. That is, the time length of the second sound volume data should be greater than or equal to the time length of the first sound volume data.
Step S123, determining a set of unit volume data as reference volume data based on the correlation coefficient in the at least one set of unit volume data.
In this embodiment, after the correlation coefficient of each set of the unit sound volume data is obtained based on step S121, it may be determined whether to determine the set of the unit sound volume data as the reference sound volume data based on the magnitude of the correlation coefficient.
Alternatively, the specific way of performing step S121 to calculate the correlation coefficient is not limited, and may be selected according to the actual application requirement.
For example, in an alternative example, a correlation coefficient may be calculated based on the average value of each set of the unit sound volume data and the first sound volume data.
For another example, in another alternative example, a correlation coefficient may be calculated based on the maximum value, the minimum value, and the average value of each set of the unit sound volume data and the first sound volume data.
For another example, in another alternative example, in order to ensure that the calculated correlation coefficient can accurately reflect the degree of similarity between each set of the unit volume data and the first volume data, step S121 may include the following steps:
and for each group of unit sound volume data, calculating the correlation coefficient between the group of unit sound volume data and the first sound volume data based on a Pearson correlation coefficient calculation formula.
Wherein the pearson correlation coefficient calculation formula may include:
ρ(X, Y) = E[(X − μX)(Y − μY)] / (σX · σY)

where ρ(X, Y) is the correlation coefficient, E denotes the average (expectation), X is the first sound volume data, Y is the unit sound volume data, μX and σX are the mean and standard deviation of the first sound volume data, and μY and σY are the mean and standard deviation of the unit sound volume data.
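The Pearson formula above can be sketched directly (assuming Python; a minimal implementation for illustration):

```python
import math

# Minimal sketch of the Pearson correlation coefficient between the first
# sound volume data x and one group of unit sound volume data y.
def pearson(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    # Numerator: sum of co-deviations; denominator: product of deviation norms.
    # The common factor 1/n cancels, so it is omitted from both.
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    std_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    std_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (std_x * std_y)
```

A perfectly proportional pair of series yields 1, and a perfectly opposed pair yields −1.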
The specific way of calculating the correlation coefficient of each group of the unit volume data is not limited, and can be selected according to the actual application requirements.
For example, in an alternative example, the second sound volume data may be grouped in advance, and then each group of the unit sound volume data and the first sound volume data are calculated at the same time, so as to obtain the correlation coefficient of the group of the unit sound volume data.
For another example, in another alternative example, each set of unit volume data and the first volume data may be sequentially calculated based on a sliding window manner, so as to obtain a correlation coefficient of the set of unit volume data.
In detail, in a specific application example, referring to fig. 5, if the target keyword is Thermage, the reference words may include photon rejuvenation, deep blue radio frequency, and gold microneedle; the preset time length may be 50 days, and the first sound volume data may be the sound volume data of Thermage from day 651 to day 700.
Based on the above example, for the reference word "photon rejuvenation", the first sound volume data may be calculated in turn against the sound volume data of "photon rejuvenation" for day 1 to day 50, day 2 to day 51, day 3 to day 52, and so on, up to day 650 to day 699 and day 651 to day 700.
For the reference word "deep blue radio frequency", the first sound volume data may be calculated in turn against the sound volume data of "deep blue radio frequency" for day 1 to day 50, day 2 to day 51, day 3 to day 52, and so on, up to day 651 to day 700.
For the reference word "gold microneedle", the first sound volume data may be calculated in turn against the sound volume data of "gold microneedle" for day 1 to day 50, day 2 to day 51, day 3 to day 52, and so on, up to day 651 to day 700.
Optionally, the specific manner of executing step S123 to determine the reference volume data is not limited, and may be selected according to the actual application requirement.
For example, in an alternative example, a set of unit sound volume data having a correlation coefficient within a preset range may be used as the reference sound volume data.
For another example, in another alternative example, the set of unit sound volume data having the largest correlation coefficient may be used as the reference sound volume data. In this way, the determined reference sound volume data has, within the second sound volume data, the greatest similarity to the first sound volume data.
It should be noted that, in step S130, a specific manner of obtaining the volume data of the target keyword at the target time is not limited, and may be selected according to an actual application requirement, for example, the predicted time periods are different, and the specific manner may be different.
For example, in an alternative example, target sound volume data that is not adjacent to the reference sound volume data may be obtained in the second sound volume data, and the sound volume data of the target keyword at the target time may be obtained based on the target sound volume data. In the time dimension, the formation time of the target sound volume data is after the formation time of the reference sound volume data.
For another example, in another alternative example, in conjunction with fig. 6, step S130 may include step S131 and step S133, which are described in detail below.
Step S131, in the second sound volume data, target sound volume data adjacent to the reference sound volume data is acquired.
In the present embodiment, after the reference sound volume data is determined based on step S120, the target sound volume data adjacent to the reference sound volume data may be acquired in the second sound volume data.
In the time dimension, the formation time of the target sound volume data is after the formation time of the reference sound volume data. For example, in the foregoing example regarding the reference word "photon rejuvenation", if the reference sound volume data is the sound volume data of "photon rejuvenation" for the 391st day to the 440th day, the sound volume data for the 441st day to the 490th day may be taken as the target sound volume data.
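Under this reading, once the best-matching window is known, selecting the adjacent target window is a simple slice. The sketch below is illustrative only: it uses a synthetic series and 0-based indices (the day numbers in the text are 1-based), with `best_start = 390` mirroring the "391st to 440th day" example.

```python
# Sketch (assumed reading of the patent's "adjacent" rule): the target volume
# data is the 50-day window that immediately follows the best-matching
# reference window. Indices are 0-based; day numbers in the text are 1-based.
window = 50
reference_series = list(range(700))  # stand-in volume series for a reference word

best_start = 390  # 0-based start of days 391-440, as in the example above

reference_window = reference_series[best_start:best_start + window]            # days 391-440
target_window = reference_series[best_start + window:best_start + 2 * window]  # days 441-490
```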
And step S133, obtaining the volume data of the target keyword at the target time based on the target volume data.
In this embodiment, after the target sound volume data is acquired based on step S131, the sound volume data of the target keyword at the target time may be calculated based on the target sound volume data.
Wherein, since the formation time of the target sound volume data is located after the formation time of the reference sound volume data in the time dimension, the target time is also located after the preset time in the time dimension.
Optionally, the specific manner of obtaining the volume data of the target keyword at the target time based on step S133 is not limited, and may be selected according to the actual application requirement.
For example, in an alternative example, the target volume data may be directly used as the volume data of the target keyword at the target time.
For another example, in another alternative example, when there are a plurality of reference words, the target sound volume data comprises a plurality of groups in one-to-one correspondence with the reference words, and step S133 may include the following sub-steps:
First, a weight coefficient of each group of target sound volume data may be determined; secondly, the sound volume data of the target keyword at the target time may be calculated based on each group of target sound volume data and the weight coefficient of that group.
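These two sub-steps might be sketched as follows. The reference words, window values, and weight coefficients below are made-up stand-ins, and the per-day weighted sum is one straightforward reading of the combination step, not the patent's verbatim formula.

```python
# Sketch (illustrative): combine several groups of target volume data into one
# prediction using per-group weight coefficients.
groups = {
    # hypothetical reference words with 3-day target windows (synthetic numbers)
    "word_a": [10.0, 12.0, 11.0],
    "word_b": [20.0, 18.0, 19.0],
}
weights = {"word_a": 0.7, "word_b": 0.3}  # assumed already normalized to sum to 1

days = len(next(iter(groups.values())))
prediction = [
    sum(weights[w] * groups[w][d] for w in groups)  # weighted sum per day
    for d in range(days)
]
# e.g. prediction[0] = 0.7 * 10.0 + 0.3 * 20.0 = 13.0
```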
That is, when there are a plurality of groups of target sound volume data, the prediction contribution of each reference word to the target keyword may differ, so the groups need to be combined with different weight coefficients.
The specific manner of determining the weight coefficient of each group of the target volume data is not limited, and can be selected according to the actual application requirements.
For example, in an alternative example, the weighting coefficient of the target volume data corresponding to each reference word may be determined directly based on the similarity coefficient of the reference word and the target keyword.
For another example, in another alternative example, the weighting coefficient of the target sound volume data corresponding to each group of reference sound volume data may also be determined directly based on the correlation coefficient between the group of reference sound volume data and the first sound volume data.
For another example, in another alternative example, the reference sound volume data comprises a plurality of groups in one-to-one correspondence with the plurality of groups of target sound volume data, and the weight coefficient of each group of target sound volume data may be further determined based on the following sub-steps:
Firstly, the similarity coefficient between the reference word corresponding to each group of target sound volume data and the target keyword may be obtained, and the ratio of each group of reference sound volume data to the first sound volume data may be calculated; secondly, for each group of target sound volume data, the weight coefficient of that group may be calculated based on its similarity coefficient and the ratio of the reference sound volume data corresponding to that group.
The specific way of obtaining the similarity coefficient is not limited, and can be selected according to the actual application requirements.
For example, in an alternative example, if the reference words are related words obtained based on word vectors, the similarity coefficient between each reference word and the target keyword may be obtained based on the word vectors.
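One common way to realize a word-vector similarity coefficient is cosine similarity; the sketch below assumes that choice (the patent does not spell out a formula) and uses made-up 4-dimensional vectors in place of real word embeddings.

```python
# Sketch (assumed): the similarity coefficient between a reference word and the
# target keyword taken as the cosine similarity of their word vectors.
import math

def cosine_similarity(u, v):
    # Dot product divided by the product of the vector norms.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

target_vec = [0.1, 0.3, 0.5, 0.2]       # made-up vector for the target keyword
reference_vec = [0.2, 0.25, 0.45, 0.1]  # made-up vector for one reference word

similarity = cosine_similarity(target_vec, reference_vec)
```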
Based on the foregoing example, if the target keyword is "retinol", the obtained similarity coefficients between each reference word and the target keyword can be tabulated; the table appears only as an image in the original publication and is omitted here.
Based on the foregoing example, if the target keyword is "post-sun repair", the obtained similarity coefficients between each reference word and the target keyword can be tabulated; the table appears only as an image in the original publication and is omitted here.
Based on the foregoing example, if the target keyword is "eye fine lines", the obtained similarity coefficients between each reference word and the target keyword can be tabulated; the table appears only as an image in the original publication and is omitted here.
In addition, when calculating the weight coefficient based on the above similarity coefficient and ratio, the ratio may first be normalized, since the ratio may not be a value in the range of 0 to 1.
The ratio can be normalized according to a normalization formula in which bk denotes a group of reference sound volume data and bi denotes the first sound volume data, yielding a normalized ratio; the formula itself appears only as an image in the original publication. After the ratio has been normalized in this way, the weight coefficient can be calculated from the similarity coefficient and the normalized ratio by a further formula, which likewise appears only as an image.
Further, considering that the weight coefficient obtained by the above formula may not be a value in the range of 0 to 1, the weight coefficient may be normalized based on the softmax function.
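The softmax normalization mentioned here can be sketched directly. The raw weight values below are hypothetical; softmax maps any real-valued coefficients to positive weights that sum to 1 while preserving their ordering.

```python
# Sketch of the softmax normalization mentioned in the text: map raw weight
# coefficients (which may fall outside [0, 1]) to positive weights summing to 1.
import math

def softmax(values):
    m = max(values)                      # subtract max for numerical stability
    exps = [math.exp(v - m) for v in values]
    s = sum(exps)
    return [e / s for e in exps]

raw_weights = [1.8, -0.4, 0.9]  # hypothetical per-group weight coefficients
normalized = softmax(raw_weights)
```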
With reference to fig. 7, the present embodiment further provides a sound volume prediction apparatus 100 applicable to the electronic device 10. The sound volume prediction apparatus 100 may include a data obtaining module 110, a data determining module 120, and a data obtaining module 130.
The data obtaining module 110 is configured to obtain first volume data of a target keyword at a preset time, and obtain second volume data of a reference word having a preset relationship with the target keyword. In this embodiment, the data obtaining module 110 may be configured to execute step S110 shown in fig. 2, and reference may be made to the foregoing description of step S110 for relevant contents of the data obtaining module 110.
The data determining module 120 is configured to determine reference sound volume data in the second sound volume data based on the first sound volume data. In this embodiment, the data determining module 120 may be configured to perform step S120 shown in fig. 2, and reference may be made to the foregoing description of step S120 for relevant contents of the data determining module 120.
The data obtaining module 130 is configured to obtain, based on the reference volume data, volume data of the target keyword at a target time. In this embodiment, the data obtaining module 130 may be configured to perform step S130 shown in fig. 2, and reference may be made to the foregoing description of step S130 for relevant contents of the data obtaining module 130.
Corresponding to the above sound volume prediction method, an embodiment of the present application further provides a computer-readable storage medium in which a computer program is stored; when run, the computer program executes the steps of the sound volume prediction method.
The steps executed when the computer program runs are not described in detail herein, and reference may be made to the explanation of the sound volume prediction method above.
In summary, according to the sound volume prediction method and apparatus, the electronic device, and the storage medium provided by the present application, the first sound volume data of the target keyword is used to determine corresponding reference sound volume data within the second sound volume data of the reference word, and the sound volume data of the target keyword is then obtained based on the reference sound volume data. In this way, the sound volume change of the target keyword can be determined from the sound volume change of the reference word, so that the sound volume of the target keyword can be predicted. This solves the problem in existing sound volume prediction techniques that predicting only from the historical trend of the target keyword's own sound volume yields low accuracy, and the method therefore has high practical value.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus and method embodiments described above are illustrative only, as the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, an electronic device, or a network device) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes. It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A method for predicting a sound volume, comprising:
acquiring first volume data of a target keyword at preset time, and acquiring second volume data of a reference word having a preset relation with the target keyword;
determining reference sound volume data in the second sound volume data based on the first sound volume data;
and obtaining the sound volume data of the target keyword at the target time based on the reference sound volume data.
2. The method of claim 1, wherein the step of obtaining the second sound volume data of the reference word having the predetermined relationship with the target keyword comprises:
obtaining at least one related word of the target keyword through a word vector, and taking each related word as a reference word having a preset relation with the target keyword;
and obtaining second sound volume data of each of the reference words.
3. The sound volume prediction method according to claim 1, wherein the step of determining reference sound volume data in the second sound volume data based on the first sound volume data comprises:
calculating a correlation coefficient between each group of unit sound volume data and the first sound volume data for each group of unit sound volume data, wherein the second sound volume data comprises at least one group of unit sound volume data, and the time length of each group of unit sound volume data is the same as the length of the preset time;
in the at least one set of unit sound volume data, a set of unit sound volume data is determined as reference sound volume data based on the correlation coefficient.
4. The sound volume prediction method according to claim 3, wherein the step of calculating, for each set of unit sound volume data, a correlation coefficient between the set of unit sound volume data and the first sound volume data includes:
calculating, for each group of unit sound volume data, the correlation coefficient between the group of unit sound volume data and the first sound volume data based on a Pearson correlation coefficient calculation formula.
5. The sound volume prediction method according to any one of claims 1 to 4, wherein the step of obtaining the sound volume data of the target keyword at the target time based on the reference sound volume data comprises:
acquiring target sound volume data adjacent to the reference sound volume data in the second sound volume data, wherein the formation time of the target sound volume data is after the formation time of the reference sound volume data in a time dimension;
and obtaining the sound volume data of the target keyword at the target time based on the target sound volume data, wherein the target time is positioned after the preset time in the time dimension.
6. The sound volume prediction method according to claim 5, wherein the reference words are plural, the target sound volume data are plural groups and correspond to the plural reference words one to one, and the step of obtaining the sound volume data of the target keyword at the target time by calculation based on the target sound volume data includes:
determining a weight coefficient of each group of the target volume data;
and calculating the sound volume data of the target keyword at the target time based on each group of target sound volume data and the weight coefficient of the group of target sound volume data.
7. The sound volume prediction method according to claim 6, wherein the reference sound volume data comprises a plurality of groups in one-to-one correspondence with the plurality of groups of target sound volume data, and the step of determining the weight coefficient of each group of the target sound volume data comprises:
acquiring a similarity coefficient between a reference word corresponding to each group of target volume data and the target keyword, and calculating a ratio of each group of reference volume data to the first volume data;
and for each group of target sound volume data, calculating a weight coefficient of the group of target sound volume data based on the similarity coefficient of the group and the ratio of the reference sound volume data corresponding to the group.
8. A sound volume prediction apparatus, comprising:
the data acquisition module is used for acquiring first sound volume data of a target keyword at preset time and acquiring second sound volume data of a reference word having a preset relation with the target keyword;
a data determination module to determine reference sound volume data in the second sound volume data based on the first sound volume data;
and the data acquisition module is used for acquiring the sound volume data of the target keyword at the target time based on the reference sound volume data.
9. An electronic device, comprising:
a memory for storing a computer program;
a processor coupled to the memory for executing the computer program to implement the sound volume prediction method of any one of claims 1-7.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed, implements the sound volume prediction method of any one of claims 1 to 7.
CN201911140661.3A 2019-11-20 2019-11-20 Sound volume prediction method and device, electronic equipment and storage medium Pending CN110909789A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911140661.3A CN110909789A (en) 2019-11-20 2019-11-20 Sound volume prediction method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911140661.3A CN110909789A (en) 2019-11-20 2019-11-20 Sound volume prediction method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110909789A true CN110909789A (en) 2020-03-24

Family

ID=69816808

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911140661.3A Pending CN110909789A (en) 2019-11-20 2019-11-20 Sound volume prediction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110909789A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112699666A (en) * 2020-12-29 2021-04-23 北京秒针人工智能科技有限公司 Method, system, equipment and storage medium for predicting keyword sound volume

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104933183A (en) * 2015-07-03 2015-09-23 重庆邮电大学 Inquiring term rewriting method merging term vector model and naive Bayes
CN105631009A (en) * 2015-12-25 2016-06-01 广州视源电子科技股份有限公司 Word vector similarity based retrieval method and system
CN110069558A (en) * 2019-03-18 2019-07-30 中科恒运股份有限公司 Data analysing method and terminal device based on deep learning
CN110110207A (en) * 2018-01-18 2019-08-09 北京搜狗科技发展有限公司 A kind of information recommendation method, device and electronic equipment

Similar Documents

Publication Publication Date Title
Ahmad et al. Discriminative feature learning for skin disease classification using deep convolutional neural network
CN110457432B (en) Interview scoring method, interview scoring device, interview scoring equipment and interview scoring storage medium
Wilcox et al. Comparing two independent groups via the lower and upper quantiles
EP3166105A1 (en) Neural network training apparatus and method, and speech recognition apparatus and method
Li et al. A hybrid method coupling empirical mode decomposition and a long short-term memory network to predict missing measured signal data of SHM systems
Robbins et al. Mean shift testing in correlated data
Lombardo et al. Rainfall downscaling in time: theoretical and empirical comparison between multifractal and Hurst-Kolmogorov discrete random cascades
CN106709318B (en) A kind of recognition methods of user equipment uniqueness, device and calculate equipment
CN109493979A (en) A kind of disease forecasting method and apparatus based on intelligent decision
Paluš From nonlinearity to causality: statistical testing and inference of physical mechanisms underlying complex dynamics
Lee Variable short-time Fourier transform for vibration signals with transients
Rea et al. Identification of changes in mean with regression trees: an application to market research
Khan et al. Moment tests for window length selection in singular spectrum analysis of short–and long–memory processes
Dudek et al. PARMA models with applications in R
Marwan et al. Trends in recurrence analysis of dynamical systems
An et al. Nonlinear prediction of condition parameter degradation trend for hydropower unit based on radial basis function interpolation and wavelet transform
CN114298997B (en) Fake picture detection method, fake picture detection device and storage medium
CN110909789A (en) Sound volume prediction method and device, electronic equipment and storage medium
Kim et al. Kernel ridge regression with lagged-dependent variable: Applications to prediction of internal bond strength in a medium density fiberboard process
Østergaard et al. Oscillating systems with cointegrated phase processes
Altamirano et al. Nonstationary multi-output Gaussian processes via harmonizable spectral mixtures
Soldati et al. The use of a priori information in ICA-based techniques for real-time fMRI: an evaluation of static/dynamic and spatial/temporal characteristics
CN113705792A (en) Personalized recommendation method, device, equipment and medium based on deep learning model
Carmack et al. Generalised correlated cross-validation
Li et al. Order detection for fMRI analysis: Joint estimation of downsampling depth and order by information theoretic criteria

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201217

Address after: A108, 1 / F, curling hall, winter training center, 68 Shijingshan Road, Shijingshan District, Beijing 100041

Applicant after: Beijing second hand Artificial Intelligence Technology Co.,Ltd.

Address before: Room 9014, 9 / F, building 3, yard 30, Shixing street, Shijingshan District, Beijing

Applicant before: ADMASTER TECHNOLOGY (BEIJING) Co.,Ltd.

TA01 Transfer of patent application right
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200324

WD01 Invention patent application deemed withdrawn after publication