CN111583901A - Intelligent weather forecast system of broadcasting station and weather forecast voice segmentation method - Google Patents


Info

Publication number
CN111583901A
CN111583901A CN202010253310.XA
Authority
CN
China
Prior art keywords
voice
data
pinyin
sentence
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010253310.XA
Other languages
Chinese (zh)
Other versions
CN111583901B (en)
Inventor
李广达
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Shengguang Information Technology Co ltd
Original Assignee
Hunan Shengguang Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Shengguang Information Technology Co ltd filed Critical Hunan Shengguang Information Technology Co ltd
Priority to CN202010253310.XA priority Critical patent/CN111583901B/en
Publication of CN111583901A publication Critical patent/CN111583901A/en
Application granted granted Critical
Publication of CN111583901B publication Critical patent/CN111583901B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01W METEOROLOGY
    • G01W 1/00 Meteorology
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 13/00 Speech synthesis; Text to speech systems
    • G10L 13/02 Methods for producing synthetic speech; Speech synthesisers
    • G10L 13/04 Details of speech synthesis systems, e.g. synthesiser structure or memory management
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 13/00 Speech synthesis; Text to speech systems
    • G10L 13/08 Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Environmental & Geological Engineering (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Atmospheric Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Ecology (AREA)
  • Environmental Sciences (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses an intelligent weather forecast system for a broadcasting station, comprising a recording unit, a voice segmentation unit, a voice database, an information reading unit, an information analysis unit, and a voice synthesis unit. The invention also discloses a weather forecast voice segmentation method for use in this system. When the segmented voice materials are synthesized, the resulting speech sounds soft rather than stiff, without the stutters and jarring transitions of abrupt voice changes, giving a better pronunciation effect that is closer to live broadcasting by a human announcer. By reading weather data from the internet and processing it for playback, the system broadcasts weather forecast voice in real time with accurate and timely data, reducing the errors of manual voice broadcasting, shortening the time it takes, and saving labor costs.

Description

Intelligent weather forecast system of broadcasting station and weather forecast voice segmentation method
Technical Field
The invention belongs to the field of voice playing processing, and particularly relates to an intelligent weather forecast system of a broadcasting station and a weather forecast voice segmentation method.
Background
At present, a broadcasting station's weather forecast must be recorded in advance by a voice actor according to local weather data, and can only be scheduled for broadcast after the weather data has been edited into an audio file. Because the weather changes every day, such broadcasts can never be fully current (the weather changes in the time between recording and airing); because staff must rest (non-working hours, legal holidays) and manual work carries a certain error rate, the final weather forecast can only report the temperature and approximate weather changes, with limited information content.
Disclosure of Invention
Aiming at the defects, the invention provides an intelligent weather forecast system of a broadcasting station and a weather forecast voice segmentation method.
In order to achieve this purpose, the invention provides the following technical scheme: an intelligent weather forecast system for a broadcasting station, comprising a recording unit for recording speech according to text sentence data to form voice data;
the voice segmentation unit is used for converting the character sentence data into pinyin sentence data, corresponding the pinyin sentence data to the voice data one by one, finding corresponding initial voice fragments and final voice fragments in the voice data by the initials and the finals in the pinyin sentence data, and analyzing the frequency of the initial voice fragments;
the voice segmentation unit comprises a frequency segmentation unit and is used for segmenting by taking the rear end of the consonant voice segment with dense frequency as a node according to an analysis result to form a voice material, and segmenting the pinyin sentence according to the voice material to form a corresponding pinyin sentence material;
the voice database is used for storing the voice materials, the pinyin sentence materials, and the music materials such as the head music, the tail music and the background music;
the information reading unit is used for reading weather data information through the Internet;
the information analysis unit is used for analyzing the weather data information of the information reading unit to form reading pinyin sentence data and splitting the reading pinyin sentence data into pinyin sentence materials which can be identified by the voice database;
and the voice synthesis unit is used for finding the voice materials corresponding to one by one from the voice database according to the pinyin sentence materials of the information analysis unit to synthesize and form playing voice.
Preferably, the voice segmentation unit further includes a silence segmentation unit, configured to segment the voice data using voice segments with a volume below 20 decibels as nodes.
Preferably, the speech synthesis unit further includes a sound mixing unit, configured to add the piece head music to the front end of the played speech, add the piece tail music and the background music to the rear end of the played speech, and then perform synthesis to form a forecast speech.
Preferably, the initial-consonant voice segments with dense frequency in the voice segmentation unit are those with frequency components above 8000 Hz.
A weather forecast voice segmentation method is suitable for an intelligent weather forecast system of a broadcasting station, and comprises the following steps:
step S101, recording according to the required text and sentence data to form voice data;
step S102, converting the text sentence data into pinyin sentence data;
step S103, finding out a corresponding pinyin voice segment in the voice data according to the pinyin in the pinyin sentence data;
step S104, finding out corresponding initial phonetic fragments and final phonetic fragments in the pinyin phonetic fragments according to the initial consonants and the final consonants in the pinyin;
step S105, when the frequency of the initial consonant voice segment is more than 8000 Hz, the rear end of the initial consonant voice segment is taken as a node to be segmented, and corresponding voice materials are formed;
step S106, segmenting the phonetic sentence data according to the voice material to form corresponding phonetic sentence material;
step S107, storing the phonetic sentence material and the corresponding voice material in the voice database.
Preferably, step S105 further includes taking voice with a volume below 20 decibels in the voice data as nodes and segmenting the voice data at these nodes to form the corresponding voice materials.
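The conversion in step S102 relies on a Chinese-to-pinyin dictionary. As a minimal sketch, not part of the patent, the following Python uses a tiny hand-made character table standing in for a real pinyin dictionary (a production system would need full coverage, including polyphonic characters); all names and the tone-mark omission are our assumptions:

```python
# Toy character-to-pinyin table; tone marks omitted for simplicity.
PINYIN = {
    "当": "dang", "前": "qian", "气": "qi", "温": "wen", "为": "wei",
    "十": "shi", "六": "liu", "摄": "she", "氏": "shi", "度": "du",
}

def to_pinyin_sentence(text: str) -> list[str]:
    """Convert a Chinese text sentence into pinyin syllables (cf. step S102),
    skipping punctuation and characters outside the toy table."""
    return [PINYIN[ch] for ch in text if ch in PINYIN]
```

For the example sentence used throughout the embodiments, `to_pinyin_sentence("当前气温为:十六摄氏度")` yields the ten syllables `dang qian qi wen wei shi liu she shi du`.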
Compared with the prior art, the invention has the following beneficial effects:
the intelligent weather forecast system of the broadcasting station records the text and sentence data to be played to form voice data through the recording unit, and the voice segmentation unit segments the voice data into voice materials to be stored in the voice database, so that the voice materials in the voice database are updated in real time; weather data information is acquired on the Internet through the information reading unit, the weather data information acquired by the information reading unit is analyzed by the information analyzing unit to form a pinyin statement material which can be identified by the voice database, the pinyin statement material analyzed by the information analyzing unit is synthesized by the voice synthesizing unit when the corresponding voice material is found in the voice database, and therefore the weather forecast voice can be played in real time, data is accurate and timely, errors caused by manual voice broadcasting are reduced, time for manual voice broadcasting is shortened, and labor cost is saved; the head music, the tail music and the background music are added into the played voice through the voice mixing unit, so that the played voice is more mild and comfortable in the listening process;
the weather forecast voice segmentation method provided by the invention has the advantages that by adopting a mode that the frequency of the initial consonant voice segment is more than 8000 Hz for segmenting the voice data, the voice can be softer and not stiff when the segmented voice material is synthesized, the phenomena of blocking and incongruity of voice mutation can not occur, the voice segmentation method has a better pronunciation effect, and is closer to the real-time manual playing of personnel.
Drawings
FIG. 1 is a block diagram of the basic structure of the intelligent weather forecast system of the broadcasting station of the present invention;
fig. 2 is a flowchart of a weather forecast speech segmentation method according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The technical solutions of the embodiments of the present invention can be combined, and the technical features of the embodiments can also be combined to form a new technical solution.
Referring to fig. 1, the present invention provides the following technical solution: an intelligent weather forecast system for a broadcasting station, comprising a recording unit 1 for performing manual recording according to text sentence data to form voice data, the text sentence data being Chinese sentences;
the voice segmentation unit 2 is used for converting the character sentence data into pinyin sentence data, corresponding the pinyin sentence data to the voice data one by one, finding corresponding initial voice fragments and final voice fragments in the voice data by the initials and the finals in the pinyin sentence data, and analyzing the frequency of the initial voice fragments;
the voice segmentation unit 2 comprises a frequency segmentation unit 21, which is used for performing segmentation by taking the rear end of the consonant voice segment with dense frequency as a node according to an analysis result to form a voice material, and performing segmentation on the pinyin sentence according to the voice material to form a corresponding pinyin sentence material;
the voice database 3 is used for storing the voice materials, the pinyin sentence materials, the head music, the tail music, the background music and other music materials; the voice materials comprise common voice materials and variable voice materials, and the pinyin sentence materials comprise common pinyin sentence materials and variable pinyin sentence materials;
an information reading unit 4 for reading weather data information through the internet; for example, acquiring real-time weather data of cities and counties in China from official data organizations such as a meteorological bureau, wherein the real-time weather data comprises real-time temperature, highest and lowest temperature, real-time humidity, real-time air quality, real-time weather state, predicted weather change within 24 hours and the like;
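As a sketch of how the information reading unit might normalize the fetched weather data, here is a hypothetical parser. The patent specifies no data format, so the JSON field names (`temp`, `low`, `aqi`, ...) and the function name are illustrative assumptions:

```python
import json

def parse_weather(raw_json: str) -> dict:
    """Normalize one city's raw weather payload into the fields the system
    broadcasts. All input field names are assumed, not from the patent."""
    raw = json.loads(raw_json)
    return {
        "temp_now": raw["temp"],                  # real-time temperature
        "temp_range": (raw["low"], raw["high"]),  # daily lowest / highest
        "humidity": raw["humidity"],              # real-time humidity
        "air_quality": raw["aqi"],                # real-time air quality
        "state": raw["state"],                    # e.g. "sunny", "light rain"
        "forecast_24h": raw.get("forecast", []),  # predicted 24 h changes
    }
```

The fetch itself (one HTTP request per city or county) is deliberately left out so the normalization step can be tested offline.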
the information analysis unit 5 is used for analyzing the weather data information of the information reading unit to form reading pinyin sentence data and splitting the reading pinyin sentence data into pinyin sentence materials which can be identified by the voice database 3;
and the voice synthesis unit 6 is used for finding the voice materials corresponding to each other from the voice database according to the pinyin sentence materials of the information analysis unit to synthesize the voice materials to form playing voice.
The voice segmentation unit 2 further includes a silence segmentation unit 22, configured to segment the voice data using voice segments with a volume below 20 decibels as nodes (e.g., pauses within a sentence, corresponding to commas, colons, and similar marks in the text sentence data). The voice segmentation unit 2 further comprises a selection unit 23, which divides the pinyin sentence data into a commonly used pinyin sentence part and a changed pinyin sentence part according to the application occasion. For voice data containing both a commonly used pinyin sentence part and a changed pinyin part, the silence segmentation unit 22 is selected first to segment the voice data: the voice data portion corresponding to the commonly used pinyin sentence part is segmented into commonly used voice materials, which are stored directly in the voice database 3 as voice materials without further segmentation, while the voice data portion corresponding to the changed pinyin part is segmented by the frequency segmentation unit 21.
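The silence segmentation idea can be sketched as follows: measure each frame's level in decibels and report frames falling below the threshold as candidate cut nodes. Interpreting the patent's "20 decibels" as a level above an assumed noise floor is our guess; the frame size, floor value, and function name are all illustrative:

```python
import numpy as np

def silence_nodes(samples: np.ndarray, rate: int,
                  threshold_db: float = 20.0, frame_ms: int = 25,
                  floor: float = 1e-4) -> list[int]:
    """Return start indices of frames whose RMS level, in dB above the
    assumed noise floor, is below threshold_db (candidate cut points)."""
    frame = max(1, int(rate * frame_ms / 1000))
    nodes = []
    for start in range(0, len(samples) - frame + 1, frame):
        chunk = samples[start:start + frame].astype(np.float64)
        rms = np.sqrt(np.mean(chunk ** 2)) + 1e-12
        level_db = 20.0 * np.log10(rms / floor)  # level relative to the floor
        if level_db < threshold_db:
            nodes.append(start)
    return nodes
```

A real segmenter would merge adjacent quiet frames into one node per pause; this sketch only flags them.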
The speech synthesis unit 6 further includes a sound mixing unit 61, configured to add the opening music to the front end of the played speech and the closing music and background music to the rear end, and then synthesize them to form the forecast speech. The speech synthesis unit 6 further includes a breath sound unit, configured to insert breath sounds at the silence segmentation nodes in the played speech when the number of synthesized initials and finals exceeds 40. The breath sounds are stored in the voice database 3.
The initial-consonant voice segments with dense frequency in the voice segmentation unit 2 are those with frequency components above 8000 Hz; initials of this kind include s, sh, q, x, and the like.
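One way to test whether an initial's segment is "frequency-dense" above 8000 Hz is to compare its spectral energy above the cutoff with its total energy. This FFT-based check is our sketch of the idea, not the patent's detector; the 20% energy ratio is an assumed tuning knob:

```python
import numpy as np

def has_dense_high_band(segment: np.ndarray, rate: int,
                        cutoff_hz: float = 8000.0, ratio: float = 0.2) -> bool:
    """True if at least `ratio` of the segment's spectral energy lies at or
    above cutoff_hz, as fricative initials like s/sh/q/x tend to have."""
    spectrum = np.abs(np.fft.rfft(segment.astype(np.float64))) ** 2
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / rate)
    total = spectrum.sum() + 1e-12
    return spectrum[freqs >= cutoff_hz].sum() / total >= ratio
```

On broadband hiss (fricative-like) the check fires; on a pure low-frequency tone (vowel-like) it does not.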
Referring to fig. 2, a method for voice segmentation of weather forecast is applicable to an intelligent weather forecast system of a broadcasting station, and a voice segmentation unit 2 in the intelligent weather forecast system of the broadcasting station performs processing of voice data by using the method, and the method includes:
step S101, recording according to the required text and sentence data to form voice data; the text sentence data is a Chinese character sentence; the recording is manual recording, and different people record voice data with different timbres;
step S102, converting the text sentence data into pinyin sentence data;
step S103, finding out a corresponding pinyin voice segment in the voice data according to the pinyin in the pinyin sentence data;
step S104, finding out corresponding initial phonetic fragments and final phonetic fragments in the pinyin phonetic fragments according to the initial consonants and the final consonants in the pinyin;
step S105, when the frequency of an initial-consonant voice segment is above 8000 Hz, the rear end of that segment is taken as a node for segmentation, forming the corresponding voice materials; initials whose voice segments have frequency components above 8000 Hz include s, sh, q, x, and the like;
step S106, segmenting the phonetic sentence data according to the voice material to form corresponding phonetic sentence material;
step S107, a voice database 3 is established, and the pinyin sentence materials and the corresponding voice materials are stored in the voice database 3.
Step S105 further includes taking voice with a volume below 20 decibels in the voice data as nodes and segmenting the voice data at these nodes to form the corresponding voice materials.
In the step S105, the pinyin voice segment may be divided by taking a voice with a voice volume below 20 db in the divided voice data as a node, so as to form a common pinyin voice segment and a variable pinyin voice segment; and secondly, detecting and judging the changed pinyin voice segment, and when the frequency of the initial consonant voice segment is more than 8000 Hz, segmenting by taking the rear end of the initial consonant voice segment as a node to form a corresponding voice material.
Example 1:
the working principle of the intelligent weather forecast system of the broadcasting station is as follows:
recording the commonly used text and sentence data (such as' the current temperature is sixteen ℃) in the weather forecast through the recording unit 1 to form voice data; the phonetic segmentation unit 2 converts textual statement data into phonetic statement data (e.g., "d ā n qi a n q ext and n w ei our sh and then d), the selection unit 23 divides the phonetic statement data into a commonly used phonetic statement portion (e.g.," d ā n qi a n q w and n w ei "), a changed phonetic statement portion (e.g.," sh li sh least), and selects the mute segmentation unit 21 to segment the phonetic data including the commonly used phonetic statement portion and the changed phonetic statement portion (e.g., "the colon is in the sentence and a transient pause occurs), divides the phonetic data portion corresponding to the commonly used phonetic statement portion into commonly used phonetic materials which can no longer be segmented, and directly stores the commonly used phonetic materials in the phonetic database 3 as phonetic materials corresponding to the commonly used phonetic materials (e.g.," i sh material) Mu) is stored in the speech database 3, the commonly used pinyin sentence material is used for coding the commonly used speech material (the coding can also adopt a Chinese character mode (such as the current temperature is) or a Chinese character plus initial consonant and final sound mode), and the query of the speech synthesis unit 6 is facilitated;
The voice data corresponding to the changed pinyin sentence part (e.g., "shí liù shè shì dù") is segmented by the frequency segmentation unit 21: when the frequency of an initial-consonant voice segment is above 8000 Hz, the rear end of that segment is taken as a node for segmentation, forming corresponding changed voice materials that are stored in the voice database 3. The changed pinyin sentence part, segmented to match the voice materials, is stored in the voice database 3 as changed pinyin sentence materials (e.g., "sh", "íliùsh", "èsh", "ìdù"), which serve as codes for the changed voice materials (the codes may also take the form of Chinese characters, or Chinese characters plus initials and finals), facilitating queries by the voice synthesis unit 6;
The information reading unit 4 acquires real-time weather data information (e.g., "the current temperature is: 16℃") through the internet and transmits it to the information analysis unit 5. The information analysis unit 5 analyzes the weather data information to form reading pinyin sentence data (e.g., "dāng qián qì wēn wéi: shí liù shè shì dù") and splits it into pinyin sentence materials that the voice database 3 can identify (e.g., "dāng qián qì wēn wéi", "sh", "íliùsh", "èsh", "ìdù"). When splitting, the information analysis unit 5 first splits at symbol positions (commas, colons, and the like); if the split reading pinyin sentence data contains a commonly used pinyin sentence part and a changed pinyin part, the commonly used part is not split further and forms commonly used pinyin sentence material, while the changed part is split at the rear end of each initial-consonant voice segment whose frequency is above 8000 Hz to form changed pinyin sentence materials;
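The splitting performed by the information analysis unit can be sketched as a greedy longest-match over the materials the database knows. This is our reading of the mechanism, with tone marks dropped and illustrative names:

```python
def split_to_materials(sentence: str, known: set[str]) -> list[str]:
    """Split a reading pinyin sentence (spaces and punctuation removed)
    into pinyin sentence materials, preferring the longest known match
    at each position."""
    ordered = sorted(known, key=len, reverse=True)
    out, i = [], 0
    while i < len(sentence):
        for material in ordered:
            if sentence.startswith(material, i):
                out.append(material)
                i += len(material)
                break
        else:
            raise ValueError(f"no known material matches at position {i}")
    return out
```

Trying longer materials first keeps a commonly used phrase such as "dangqianqiwenwei" as one piece instead of splitting it into syllables.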
The voice synthesis unit 6 finds the commonly used voice materials and changed voice materials corresponding one by one in the voice database 3 according to the commonly used pinyin sentence materials and changed pinyin sentence materials produced by the information analysis unit 5, and synthesizes them to form the playing voice. The sound mixing unit 61 finds the corresponding opening music, closing music, and background music in the voice database 3, loads them at the corresponding positions of the playing voice, and synthesizes them to form the forecast voice, which is then played.
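In sketch form, the synthesis and mixing steps reduce to concatenating the looked-up materials and overlaying the music tracks. The background gain value and the function name are assumptions, not figures from the patent:

```python
import numpy as np

def synthesize(materials: list[np.ndarray],
               head: np.ndarray, tail: np.ndarray,
               background: np.ndarray, bg_gain: float = 0.2) -> np.ndarray:
    """Concatenate voice materials, mix background music quietly under the
    speech, and wrap the result with opening and closing music."""
    speech = np.concatenate(materials).copy()
    n = min(len(speech), len(background))
    speech[:n] += bg_gain * background[:n]  # background at reduced gain
    return np.concatenate([head, speech, tail])
```

A production mixer would also match sample rates and apply short crossfades at material boundaries to avoid clicks; those details are omitted here.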
Throughout the whole process, the intelligent weather forecast system of the broadcasting station acquires data over the internet, so the data is accurate and timely. No manual operation is needed: the computer completes the work automatically, with a synthesis time of no more than 1 second. The synthesized voice covers real-time weather data across the country and weather changes within 24 hours, supports weather warnings and friendly reminders, remains available during non-working hours and legal holidays, and helps the broadcasting station save time and labor costs.
The working principle of the weather forecast voice segmentation method comprises the following steps:
in step S101, manual recording is performed according to the required text and sentence data (such as 'the current temperature is sixteen degrees centigrade') to form voice data, and then step S102 is performed;
in step S102, the text sentence data is converted into pinyin sentence data (e.g., "dāng qián qì wēn wéi: shí liù shè shì dù"), followed by step S103;
in step S103, the corresponding pinyin voice segments are found in the voice data according to the pinyin in the pinyin sentence data, with each pinyin matched one by one to its voice segment, and then step S104 is performed;
in step S104, finding out corresponding initial phonetic fragments and final phonetic fragments in the pinyin phonetic fragments according to the initial consonants and the final consonants in the pinyin, and then performing step S105;
in step S105, when the frequency of an initial-consonant voice segment is above 8000 Hz, the voice data is segmented with the rear end of that segment as a node, obtaining first voice data (e.g., "dāngq", "iánq", "ìwēnwéish", "íliùsh", "èsh", "ìdù"); further, voice with a volume below 20 decibels in the voice data is taken as a node, obtaining second voice data (e.g., "dāngq", "iánq", "ìwēnwéi", "sh", "íliùsh", "èsh", "ìdù"), which forms the corresponding voice materials;
In step S106, the pinyin sentence data is segmented according to the voice materials to form corresponding pinyin sentence materials (e.g., "dāngq", "iánq", "ìwēnwéi", "sh", "íliùsh", "èsh", "ìdù"); the pinyin sentence data is segmented using the voice materials' segmentation points as the standard, and then step S107 is performed;
in step S107, the voice database 3 is established, and the pinyin sentence materials and corresponding voice materials are stored in it; the pinyin sentence materials serve as the codes or code numbers in the voice database 3.
In another embodiment of the weather forecast voice segmentation method, step S105 is modified as follows. First, voice with a volume below 20 decibels in the voice data is taken as a node, and the voice data is segmented to form first voice data (e.g., "dāng qián qì wēn wéi", "shí liù shè shì dù"); the first voice data is sorted according to the usage environment into commonly used pinyin voice segments (e.g., "dāng qián qì wēn wéi") and changed pinyin voice segments (e.g., "shí liù shè shì dù"). Second, the changed pinyin voice segments are detected and judged: when the frequency of an initial-consonant voice segment is above 8000 Hz, the rear end of that segment is taken as a node for segmentation, and the second voice data formed after segmentation (e.g., "sh", "íliùsh", "èsh", "ìdù") forms the corresponding voice materials, which are stored in the voice database 3.
When synthesizing with the voice materials in the voice database 3, the text sentence data to be synthesized is first converted into pinyin sentence data, which is then split into pinyin sentence materials using initials whose voice segment frequency is above 8000 Hz (such as s, sh, q, x) and symbols (such as commas and colons) as nodes. The corresponding voice materials are found in the voice database 3 through the pinyin sentence materials and connected and synthesized to form the playing voice. Because the voice data is segmented into materials at the rear ends of such initial voice segments and at voice below 20 decibels, the spliced playing voice sounds soft rather than stiff, without the stutters and jarring transitions of abrupt voice changes, giving a better pronunciation effect that is closer to live broadcasting by a human announcer.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (6)

1. An intelligent weather forecast system for a broadcasting station, characterized in that it comprises:
The recording unit is used for recording according to the text and sentence data to form voice data;
the voice segmentation unit is used for converting the character sentence data into pinyin sentence data, corresponding the pinyin sentence data to the voice data one by one, finding corresponding initial voice fragments and final voice fragments in the voice data by the initials and the finals in the pinyin sentence data, and analyzing the frequency of the initial voice fragments;
the voice segmentation unit comprises a frequency segmentation unit and is used for segmenting by taking the rear end of the consonant voice segment with dense frequency as a node according to an analysis result to form a voice material, and segmenting the pinyin sentence according to the voice material to form a corresponding pinyin sentence material;
the voice database is used for storing the voice materials, the pinyin sentence materials, and the music materials such as the head music, the tail music and the background music;
the information reading unit is used for reading weather data information through the Internet;
the information analysis unit is used for analyzing the weather data information of the information reading unit to form reading pinyin sentence data and splitting the reading pinyin sentence data into pinyin sentence materials which can be identified by the voice database;
and the voice synthesis unit is used for finding the voice materials corresponding to one by one from the voice database according to the pinyin sentence materials of the information analysis unit to synthesize and form playing voice.
2. The broadcaster intelligent weather forecast system of claim 1, wherein: the voice segmentation unit also comprises a mute segmentation unit which is used for segmenting according to the voice segment with the volume below 20 decibels as a node.
3. The broadcaster intelligent weather forecast system of claim 1, wherein: the voice synthesis unit also comprises a voice mixing unit which is used for synthesizing the head music added at the front end of the played voice and the tail music and the background music added at the rear end of the played voice to form the forecast voice.
4. The broadcaster intelligent weather forecast system of claim 1, wherein: the consonant voice segments with dense frequency in the voice segmentation unit are the consonant voice segments with frequency more than 8000 Hz.
5. A weather forecast voice segmentation method, suitable for an intelligent weather forecast system of a broadcasting station, comprising the following steps:
step S101, recording according to the required text and sentence data to form voice data;
step S102, converting the text sentence data into pinyin sentence data;
step S103, finding out a corresponding pinyin voice segment in the voice data according to the pinyin in the pinyin sentence data;
step S104, finding out corresponding initial phonetic fragments and final phonetic fragments in the pinyin phonetic fragments according to the initial consonants and the final consonants in the pinyin;
step S105, when the frequency of the initial consonant voice segment is more than 8000 Hz, the rear end of the initial consonant voice segment is taken as a node to be segmented, and corresponding voice materials are formed;
step S106, segmenting the phonetic sentence data according to the voice material to form corresponding phonetic sentence material;
step S107, storing the phonetic sentence material and the corresponding voice material in the voice database.
6. The weather forecast voice segmentation method of claim 5, wherein: step S105 further comprises segmenting by taking voice segments in the voice data with a volume below 20 dB as nodes, so as to form the corresponding voice materials.
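The silence criterion of claim 6 (volume below 20 dB) can be sketched as frame-wise RMS thresholding: consecutive loud frames are kept together, and any quiet frame closes off the current piece. Since the patent does not state the dB reference level, the noise-floor reference and frame length below are assumptions:

```python
import numpy as np

def split_on_silence(samples, frame_len=1024, threshold_db=20.0):
    """Split audio wherever a frame's level falls below threshold_db.

    Level is RMS in dB above an assumed noise floor of 1e-5; the patent
    leaves the reference unspecified, so this choice is illustrative.
    """
    pieces, current = [], []
    for start in range(0, len(samples), frame_len):
        frame = samples[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2))
        level_db = 20 * np.log10(max(rms, 1e-12) / 1e-5)
        if level_db < threshold_db:
            # Quiet frame: this is a node; flush the accumulated piece.
            if current:
                pieces.append(np.concatenate(current))
                current = []
        else:
            current.append(frame)
    if current:
        pieces.append(np.concatenate(current))
    return pieces
```

Each returned piece corresponds to one candidate voice material between silence nodes.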
CN202010253310.XA 2020-04-02 2020-04-02 Intelligent weather forecast system of broadcasting station and weather forecast voice segmentation method Active CN111583901B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010253310.XA CN111583901B (en) 2020-04-02 2020-04-02 Intelligent weather forecast system of broadcasting station and weather forecast voice segmentation method

Publications (2)

Publication Number Publication Date
CN111583901A (en) 2020-08-25
CN111583901B (en) 2023-07-11

Family

ID=72126123

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010253310.XA Active CN111583901B (en) 2020-04-02 2020-04-02 Intelligent weather forecast system of broadcasting station and weather forecast voice segmentation method

Country Status (1)

Country Link
CN (1) CN111583901B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112802460A (en) * 2021-04-14 2021-05-14 中国科学院国家空间科学中心 Space environment forecasting system based on voice processing

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1126349A (en) * 1995-03-06 1996-07-10 郑元成 Semi-syllable method for continuously composing Chinese speech
CN1333501A (en) * 2001-07-20 2002-01-30 北京捷通华声语音技术有限公司 Dynamic Chinese speech synthesizing method
US20060155544A1 (en) * 2005-01-11 2006-07-13 Microsoft Corporation Defining atom units between phone and syllable for TTS systems
CN101261831A (en) * 2007-03-05 2008-09-10 凌阳科技股份有限公司 A phonetic symbol decomposition and its synthesis method
US20080270140A1 (en) * 2007-04-24 2008-10-30 Hertz Susan R System and method for hybrid speech synthesis
JP2009020264A (en) * 2007-07-11 2009-01-29 Hitachi Ltd Voice synthesis device and voice synthesis method, and program
US20100211393A1 (en) * 2007-05-08 2010-08-19 Masanori Kato Speech synthesis device, speech synthesis method, and speech synthesis program
CN104318920A (en) * 2014-10-07 2015-01-28 北京理工大学 Construction method of cross-syllable Chinese speech synthesis element with spectrum stable boundary
CN104967789A (en) * 2015-06-16 2015-10-07 福建省泉州市气象局 Automatic processing method and system for city window weather dubbing
CN105336321A (en) * 2015-09-25 2016-02-17 百度在线网络技术(北京)有限公司 Phonetic segmentation method and device for speech synthesis
JP2016218281A (en) * 2015-05-21 2016-12-22 日本電信電話株式会社 Voice synthesizer, method thereof, and program

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Perceptual and objective detection of discontinuties in concatenative speech synthesis", 《2001 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS,SPEECH,AND SIGNAL PROCESSING》 *
李永光 等: "基于小波变换的自动声/韵切分的研究", 《哈尔滨工程大学学报 》 *

Similar Documents

Publication Publication Date Title
CN108259965B (en) Video editing method and system
US8849669B2 (en) System for tuning synthesized speech
JP6083764B2 (en) Singing voice synthesis system and singing voice synthesis method
US7853452B2 (en) Interactive debugging and tuning of methods for CTTS voice building
Gauvain et al. Partitioning and transcription of broadcast news data.
CN101996627B (en) Speech processing apparatus, speech processing method and program
US20050102135A1 (en) Apparatus and method for automatic extraction of important events in audio signals
CN102122506A (en) Method for recognizing voice
CN106486128A (en) A kind of processing method and processing device of double-tone source audio data
US20080109225A1 (en) Speech Synthesis Device, Speech Synthesis Method, and Program
CN101185115A (en) Voice edition device, voice edition method, and voice edition program
CN108172211B (en) Adjustable waveform splicing system and method
CN112750421B (en) Singing voice synthesis method and device and readable storage medium
CN105895102A (en) Recording editing method and recording device
CN113299272B (en) Speech synthesis model training and speech synthesis method, equipment and storage medium
JP4324089B2 (en) Audio reproduction program, recording medium therefor, audio reproduction apparatus, and audio reproduction method
CN111583901A (en) Intelligent weather forecast system of broadcasting station and weather forecast voice segmentation method
CN113572977B (en) Video production method and device
CN105719641B (en) Sound method and apparatus are selected for waveform concatenation speech synthesis
JPH09146580A (en) Effect sound retrieving device
JPWO2008056604A1 (en) Audio recording system, audio recording method, and recording processing program
JP2006227363A (en) Device and program for generating dictionary for broadcast speech
Warcharasupat et al. Remastering Divide and Remaster: A Cinematic Audio Source Separation Dataset with Multilingual Support
CN111564153B (en) Intelligent broadcasting music program system of broadcasting station
JP2005070604A (en) Voice-labeling error detecting device, and method and program therefor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No.104, North building, No.10 Lanni Road, Tianxin District, Changsha City, Hunan Province

Applicant after: Hunan Shengguang Information Technology Co.,Ltd.

Address before: No.104, North building, No.10 Lanni Road, Tianxin District, Wuhan City, Hubei Province, 430000

Applicant before: Hunan Shengguang Information Technology Co.,Ltd.

CB02 Change of applicant information

Address after: No.104, North building, No.10 Lanni Road, Tianxin District, Changsha, Hunan 410000

Applicant after: Hunan Shengguang Technology Co.,Ltd.

Address before: No.104, North building, No.10 Lanni Road, Tianxin District, Changsha City, Hunan Province

Applicant before: Hunan Shengguang Information Technology Co.,Ltd.

GR01 Patent grant