CN111583901B - Intelligent weather forecast system of broadcasting station and weather forecast voice segmentation method - Google Patents


Info

Publication number
CN111583901B
CN111583901B
Authority
CN
China
Prior art keywords
voice
data
phonetic
sentence
materials
Prior art date
Legal status
Active
Application number
CN202010253310.XA
Other languages
Chinese (zh)
Other versions
CN111583901A (en)
Inventor
李广达
Current Assignee
Hunan Shengguang Technology Co ltd
Original Assignee
Hunan Shengguang Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hunan Shengguang Technology Co ltd filed Critical Hunan Shengguang Technology Co ltd
Priority to CN202010253310.XA priority Critical patent/CN111583901B/en
Publication of CN111583901A publication Critical patent/CN111583901A/en
Application granted granted Critical
Publication of CN111583901B publication Critical patent/CN111583901B/en


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01W METEOROLOGY
    • G01W 1/00 Meteorology
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 13/00 Speech synthesis; Text to speech systems
    • G10L 13/02 Methods for producing synthetic speech; Speech synthesisers
    • G10L 13/04 Details of speech synthesis systems, e.g. synthesiser structure or memory management
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 13/00 Speech synthesis; Text to speech systems
    • G10L 13/08 Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Environmental & Geological Engineering (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Atmospheric Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Ecology (AREA)
  • Environmental Sciences (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses an intelligent weather forecast system for a broadcasting station, comprising a recording unit, a voice segmentation unit, a voice database, an information reading unit, an information analysis unit and a voice synthesis unit. The invention also discloses a weather forecast voice segmentation method for use in the system. The voice data is segmented using points where the frequency of an initial-consonant voice segment exceeds 8000 Hz as segmentation nodes, so that speech synthesized from the segmented voice materials sounds softer and less stiff, avoids stutters and jarring abrupt changes, achieves a better sound effect, and more closely resembles live manual announcing. Weather data is read from the Internet and processed for broadcast, so the weather forecast can be played in real time with accurate, timely data, reducing both the errors and the time of manual voice broadcasting and saving labor cost.

Description

Intelligent weather forecast system of broadcasting station and weather forecast voice segmentation method
Technical Field
The invention belongs to the field of voice playing processing, and particularly relates to an intelligent weather forecast system of a broadcasting station and a weather forecast voice segmentation method.
Background
At present, a voice-over announcer must record weather forecasts in advance from local weather data and edit them into an audio file before the file can be scheduled for broadcast. Broadcasts therefore cannot be fully up to date (the weather changes during the delay); staff must rest (outside working hours and on legal holidays); manual operation carries a certain probability of error; and the final forecast can only report the temperature and an approximate weather trend, so its information content is small.
Disclosure of Invention
To address these defects, the invention provides an intelligent weather forecast system for a broadcasting station and a weather forecast voice segmentation method.
In order to achieve the above purpose, the present invention provides the following technical solution: an intelligent weather forecast system for a broadcasting station, comprising a recording unit for recording according to text sentence data to form voice data;
the voice segmentation unit is used for converting the text sentence data into phonetic sentence data, and enabling the phonetic sentence data to correspond to the voice data one by one, so that corresponding initial consonant voice fragments and corresponding final voice fragments are found in the voice data by the initial consonant and the final in the phonetic sentence data, and the frequency of the initial consonant voice fragments is analyzed;
the voice segmentation unit comprises a frequency segmentation unit and is used for segmenting the rear ends of the initial consonant voice fragments with dense frequencies to form voice materials according to analysis results, and segmenting the pinyin sentences according to the voice materials to form corresponding pinyin sentence materials;
the voice database is used for storing the voice materials, the pinyin sentence materials, and music materials such as head music, tail music, background music and the like;
the information reading unit is used for reading weather data information through the Internet;
the information analysis unit is used for analyzing the weather data information of the information reading unit to form read pinyin sentence data, and splitting the read pinyin sentence data into pinyin sentence materials which can be identified by the voice database;
and the voice synthesis unit is used for finding the voice materials corresponding to each other from the voice database according to the phonetic sentence materials of the information analysis unit to synthesize the voice materials to form play voice.
Preferably, the voice segmentation unit further comprises a silence segmentation unit for segmenting the voice data using voice segments whose volume is below 20 dB as nodes.
Preferably, the voice synthesis unit further comprises a mixing unit for adding the head music at the front end of the play voice and the tail music and background music at the rear end, then synthesizing them to form the forecast voice.
Preferably, the voice segments with dense frequencies in the voice segmentation unit are initial-consonant voice segments whose frequency is above 8000 Hz.
A weather forecast voice segmentation method is applicable to a broadcasting station intelligent weather forecast system, and comprises the following steps:
step S101, recording according to the required text sentence data to form voice data;
step S102, converting the text sentence data into phonetic sentence data;
step S103, finding out corresponding phonetic voice fragments in the voice data according to the phonetic in the phonetic sentence data;
step S104, finding out corresponding initial consonant voice fragments and final sound fragments in the pinyin voice fragments according to the initial consonants and final sounds in the pinyin;
step S105, when the frequency of the initial consonant voice segment is above 8000 Hz, cutting by taking the rear end of the initial consonant voice segment as a node to form a corresponding voice material;
step S106, the phonetic sentence data is segmented according to the voice material to form corresponding phonetic sentence materials;
step S107, the phonetic sentence material and the corresponding phonetic material are stored in the phonetic database.
Preferably, step S105 further includes using voices with a volume below 20 dB in the voice material as nodes and cutting the voice material at those nodes to form the corresponding voice materials.
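The cutting rule of steps S105 and S106 can be mimicked on the pinyin string itself: every occurrence of a high-frequency initial (s, sh, q, x) becomes a cut point at its rear end. The sketch below is an illustration under assumptions only; the patent cuts recorded audio, while this hypothetical `split_pinyin_sentence` helper mirrors the rule on toneless pinyin text.

```python
# Sketch of the S105/S106 cutting rule applied to a toneless pinyin string:
# cut immediately AFTER each high-frequency initial (s, sh, q, x).
# Hypothetical helper; the patent itself cuts the recorded audio, not text.
import re

# Longest alternative first so "sh" is not matched as "s" followed by "h".
HIGH_FREQ_INITIALS = re.compile(r"(sh|s|q|x)")

def split_pinyin_sentence(pinyin: str) -> list[str]:
    """Return pinyin sentence materials cut at the rear end of s/sh/q/x."""
    materials, start = [], 0
    for m in HIGH_FREQ_INITIALS.finditer(pinyin):
        materials.append(pinyin[start:m.end()])
        start = m.end()
    if start < len(pinyin):
        materials.append(pinyin[start:])
    return materials

# "dangqian qiwen wei shiliu sheshidu" written without spaces or tones:
print(split_pinyin_sentence("dangqianqiwenweishiliusheshidu"))
# → ['dangq', 'ianq', 'iwenweish', 'iliush', 'esh', 'idu']
```

The output reproduces the pattern of the pinyin sentence materials given in the embodiment (cuts after each "q" and "sh").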
Compared with the prior art, the invention has the following beneficial effects:
according to the intelligent weather forecast system of the broadcasting station, the word statement data to be played is recorded through the recording unit to form voice data, and the voice segmentation unit segments the voice data into voice materials to be stored in the voice database, so that the voice materials in the voice database are updated in real time; the weather data information is acquired on the Internet through the information reading unit, the information analysis unit analyzes the weather data information acquired by the information reading unit to form pinyin statement materials which can be identified by the voice database, and the voice synthesis unit analyzes the pinyin statement materials from the information analysis unit to find out corresponding voice materials in the voice database for synthesis, so that the weather forecast voice broadcasting can be played in real time, the data is accurate and timely, errors of manual voice broadcasting are reduced, meanwhile, the time of manual voice broadcasting is reduced, and the labor cost is saved; the sound mixing unit adds the head music, the tail music and the background music into the played voice, so that the played voice is milder and more comfortable in the listening process;
according to the weather forecast voice segmentation method, the voice data is segmented by adopting the mode that the frequency of the initial consonant voice segments is more than 8000 Hz as the segmentation node, so that voice is softer and not stiff when the segmented voice material is synthesized, the phenomenon of clamping and uncoordinated voice mutation is avoided, a better pronunciation effect is achieved, and the method is closer to real-time manual playing of personnel.
Drawings
FIG. 1 is a schematic block diagram of the basic structure of the intelligent weather forecast system of the broadcasting station of the present invention;
fig. 2 is a flowchart of a weather forecast voice segmentation method according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The technical schemes of the embodiments of the invention can be combined, and the technical features of the embodiments can also be combined to form a new technical scheme.
Referring to fig. 1, the present invention provides the following technical solution: an intelligent weather forecast system for a broadcasting station, comprising a recording unit 1 for manually recording according to text sentence data to form voice data, the text sentence data being Chinese sentences;
the voice segmentation unit 2 is used for converting the text sentence data into phonetic sentence data, and enabling the phonetic sentence data to correspond to the voice data one by one, so that corresponding initial consonant voice fragments and corresponding final vowel voice fragments are found in the voice data by the initial consonant and the final vowel in the phonetic sentence data, and analyzing the frequency of the initial consonant voice fragments;
the voice segmentation unit 2 comprises a frequency segmentation unit 21, which is used for segmenting the rear ends of the initial consonant voice fragments with dense frequencies to form voice materials according to analysis results, and segmenting the pinyin sentences according to the voice materials to form corresponding pinyin sentence materials;
a voice database 3 for storing the voice material and the pinyin sentence material, and music materials such as head music, tail music, background music, etc.; the phonetic materials comprise common phonetic materials and change phonetic materials, and the pinyin sentence materials comprise common pinyin sentence materials and change pinyin sentence materials;
an information reading unit 4 for reading weather data information through the internet; the method comprises the steps of obtaining real-time weather data of the county, the city and the county of the country from an official data organization such as a weather bureau and the like, wherein the real-time weather data comprise real-time temperature, highest and lowest temperature, real-time humidity, real-time air quality, real-time weather state, predicted weather change within 24 hours and the like;
the information analysis unit 5 is used for analyzing the weather data information of the information reading unit to form read pinyin sentence data, and splitting the read pinyin sentence data into pinyin sentence materials which can be identified by the voice database 3;
and the voice synthesis unit 6 is used for finding the voice materials corresponding to one from the voice database according to the phonetic sentence materials of the information analysis unit to synthesize the voice materials so as to form play voice.
The voice segmentation unit 2 further includes a silence segmentation unit 22, configured to segment the voice data using voice segments whose volume is below 20 dB as nodes (e.g. pauses within a sentence, corresponding to commas, colons, etc. in the text sentence data). The voice segmentation unit 2 further includes a selection unit 23, configured to divide the pinyin sentence data into a common pinyin sentence part and a changed pinyin sentence part according to the different use occasions. For voice data containing both a common and a changed pinyin sentence part, the silence segmentation unit 22 is applied first: the voice data portion corresponding to the common pinyin sentence part is cut off as a common voice material, which need not be segmented further and is stored directly in the voice database 3 as a voice material, while the voice data portion corresponding to the changed pinyin sentence part is segmented by the frequency segmentation unit 21.
The voice synthesis unit 6 further includes a mixing unit 61 for adding the head music at the front end of the play voice and the tail music and background music at the rear end, then synthesizing them to form the forecast voice. The voice synthesis unit 6 further includes a breathing sound unit for adding a breathing sound at the silence segmentation nodes of the synthesized play voice when the number of initials and finals in the play voice is greater than 40. The breathing sounds are stored in the voice database 3.
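The breathing-sound rule can be sketched as a small post-processing pass. This is an illustrative assumption only: the list-of-phrases representation, the `insert_breath` name and the `<breath>` placeholder are invented for the example; only the "more than 40 initials and finals" threshold and the insertion at pause nodes come from the description.

```python
# Illustrative sketch of the breathing-sound rule: when the play voice
# contains more than 40 initials and finals, a stored breath sound is
# inserted at each silence (pause) node between phrases.
def insert_breath(phrases: list[list[str]], unit_count: int,
                  breath: str = "<breath>", threshold: int = 40) -> list[str]:
    """phrases: voice materials grouped between silence nodes."""
    flat: list[str] = []
    for i, phrase in enumerate(phrases):
        flat.extend(phrase)
        # Add a breath at every pause, but only for long utterances.
        if unit_count > threshold and i < len(phrases) - 1:
            flat.append(breath)
    return flat

short = insert_breath([["dangq", "ianq"], ["sh", "idu"]], unit_count=12)
long_ = insert_breath([["dangq", "ianq"], ["sh", "idu"]], unit_count=50)
print(short)  # → ['dangq', 'ianq', 'sh', 'idu']
print(long_)  # → ['dangq', 'ianq', '<breath>', 'sh', 'idu']
```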
The voice segments with dense frequencies in the voice segmentation unit 2 are initial-consonant voice segments whose frequency is above 8000 Hz; the initials whose voice segments exceed 8000 Hz are s, sh, q and x.
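Whether an initial-consonant segment lies "above 8000 Hz" can be approximated from its zero-crossing rate, since fricatives such as s/sh/q/x concentrate energy at high frequencies. The estimator below is a hedged sketch: the patent does not specify its frequency-analysis method, and the function names and the sine-wave stand-ins for audio are assumptions.

```python
# Hedged sketch: estimate a segment's dominant frequency from zero
# crossings (each full cycle contributes two sign changes), then compare
# against the 8000 Hz cut-node threshold from the description.
import math

def zero_crossing_freq(samples: list[float], sample_rate: int) -> float:
    """Frequency estimate: crossings * rate / (2 * number of samples)."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0))
    return crossings * sample_rate / (2 * len(samples))

def is_cut_node(samples: list[float], sample_rate: int,
                threshold_hz: float = 8000.0) -> bool:
    return zero_crossing_freq(samples, sample_rate) > threshold_hz

rate = 44100
hiss = [math.sin(2 * math.pi * 10000 * n / rate) for n in range(441)]  # ~10 kHz
hum = [math.sin(2 * math.pi * 1000 * n / rate) for n in range(441)]    # ~1 kHz
print(is_cut_node(hiss, rate), is_cut_node(hum, rate))  # → True False
```

A real implementation would more likely inspect the FFT spectrum of the segment, but the zero-crossing estimate is enough to show the thresholding step.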
Referring to fig. 2, a method for voice segmentation of weather forecast is applicable to a broadcasting station intelligent weather forecast system, and a voice segmentation unit 2 in the broadcasting station intelligent weather forecast system processes voice data by adopting the method, the method comprises:
step S101, recording according to the required text sentence data to form voice data; the text sentence data is a Chinese character sentence; the recording is artificial recording, and different people record voice data with different tone colors;
step S102, converting the text sentence data into phonetic sentence data;
step S103, finding out corresponding phonetic voice fragments in the voice data according to the phonetic in the phonetic sentence data;
step S104, finding out corresponding initial consonant voice fragments and final sound fragments in the pinyin voice fragments according to the initial consonants and final sounds in the pinyin;
step S105, when the frequency of the initial consonant voice segment is above 8000 Hz, cutting by taking the rear end of the initial consonant voice segment as a node to form a corresponding voice material; the frequency of the initial consonant voice segment is 8000 Hz or more, and the initial consonant is s, sh, q, x;
step S106, the phonetic sentence data is segmented according to the voice material to form corresponding phonetic sentence materials;
step S107, a voice database 3 is established, and the phonetic sentence materials and the corresponding voice materials are stored in the voice database 3.
Step S105 further includes using voices with a volume below 20 dB in the voice data as nodes and cutting the voice data at those nodes to form the corresponding voice materials.
In step S105, the pinyin voice segments may first be divided into common pinyin voice segments and changed pinyin voice segments by using voices with a volume below 20 dB in the voice data as nodes and cutting the voice data there; the changed pinyin voice segments are then examined, and when the frequency of an initial-consonant voice segment is above 8000 Hz, the rear end of that segment is used as a cutting node to form the corresponding voice material.
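The two-stage rule of the improved step S105 can be mirrored on text: stage one cuts at silence nodes (punctuation stands in for volume below 20 dB), stage two cuts the changed parts after each high-frequency initial. The stand-in text representation and all names are illustrative assumptions, not the patent's audio processing.

```python
# Sketch of the two-stage cut: silence nodes first, then frequency nodes
# (s, sh, q, x) inside the changed parts only. Common parts stay whole.
import re

SILENCE_NODES = re.compile(r"[,:;，：；]\s*")      # punctuation ~ pauses < 20 dB
HIGH_FREQ_INITIALS = re.compile(r"(sh|s|q|x)")    # longest alternative first

def two_stage_split(pinyin: str, common_parts: set[str]) -> list[str]:
    materials: list[str] = []
    for part in filter(None, SILENCE_NODES.split(pinyin)):
        if part in common_parts:
            materials.append(part)        # common part stored whole
            continue
        start = 0                          # changed part: frequency cuts
        for m in HIGH_FREQ_INITIALS.finditer(part):
            materials.append(part[start:m.end()])
            start = m.end()
        if start < len(part):
            materials.append(part[start:])
    return materials

print(two_stage_split("dangqianqiwenwei: shiliusheshidu", {"dangqianqiwenwei"}))
# → ['dangqianqiwenwei', 'sh', 'iliush', 'esh', 'idu']
```

This reproduces the embodiment's split: the common part "dāngqián qìwēn wéi" survives intact, while the changed part is cut after each "sh".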
Example 1:
the working principle of the intelligent weather forecast system of the broadcasting station in the invention is as follows:
recording the text sentence data (such as' the current air temperature is sixteen ℃ C.) commonly used in weather forecast through a recording unit 1 to form voice data; the speech segmentation unit 2 converts the text sentence data into pinyin sentence data (e.g. "d ā n E.G. qi n q. Mu.w n w. Mu.i: the sh I is the sh d), the selection unit 23 divides the phonetic sentence data into a common phonetic sentence part (such as'd ā n-size qia n q-size w n w ei'), a changed phonetic sentence part (such as 'sh I is the sh d is the d), for the phonetic data comprising the common phonetic sentence part and the changed phonetic sentence part, the mute segmentation unit 21 is preferentially selected to segment the phonetic data (short pause occurs in the sentence at the colon), the phonetic data part corresponding to the common phonetic sentence part is segmented into the common phonetic material, the common phonetic material can be directly stored in the phonetic database 3 as the phonetic material, the common phonetic sentence part is stored in the common phonetic sentence part (such as' sh I is the d 'and the common phonetic sentence part), the common phonetic sentence material is used for encoding the common phonetic material (the Chinese character mode (such as' current) or the Chinese character and the vowel addition unit 6 is convenient to query;
when the frequency of the initial sound fragment is above 8000 Hz through the frequency segmentation unit 21, the back end of the initial sound fragment is used as a node to segment the sound data corresponding to the changed phonetic sentence part (such as 'sh', 'I Li' and 'E' and 'D' and the changed phonetic sentence part corresponding to the segmented changed phonetic sentence part is stored in the sound database 3, and the changed phonetic sentence material is used for encoding the changed phonetic material (the encoding can also adopt a Chinese character mode or a Chinese character and initial consonant adding mode (such as 'I six' and) and is convenient for the inquiry of the sound synthesis unit 6;
the information reading unit 4 obtains real-time weather data information (for example, "the current air temperature is 16 ℃) through the Internet, and transmits the weather data to the information analyzing unit 5; the information analysis unit 5 analyzes the weather data information to form read pinyin sentence data (such as'd ā n Gama' n q n Gama 'n w Ei: sh's sh'd' and split the read pinyin sentence data into pinyin sentence materials (such as'd ā n' qi n q Ei's n w Ei', 'sh' and 'i's sh'd' and split the read pinyin sentence data into varied speech materials with frequencies of more than 8000 hz voice fragments if the split read pinyin sentence data contains a common pinyin sentence part and a varied pinyin sentence part, and split the varied pinyin sentence part into varied speech materials by the information analysis unit 5;
the voice synthesis unit 6 finds out one-to-one corresponding common voice materials and changing voice materials in the voice database 3 according to the common pinyin sentence materials and the changing pinyin sentence materials analyzed by the information analysis unit 5, synthesizes the common voice materials and the changing voice materials, and forms play voice; the mixing unit 61 finds out the corresponding piece of head music, piece of tail music and background music from the voice database 3, loads the corresponding positions of the played voices and synthesizes the corresponding positions to form forecast voices for playing.
The intelligent weather forecast system of the broadcasting station acquires data over the Internet throughout the whole process, so the data is accurate and timely; no manual operation is needed, the computer completes everything automatically, and synthesis takes no more than 1 second. The final synthesized voice covers real-time weather data for all parts of the country and weather changes within 24 hours, provides weather warnings and comfort tips, and remains available outside working hours and on legal holidays, saving time and labor cost.
The working principle of the weather forecast voice segmentation method in the invention is as follows:
in step S101, manual recording is performed according to the required text sentence data (such as "the current air temperature is sixteen ℃), so as to form voice data, and then step S102 is performed;
in step S102, the text sentence data is converted into phonetic sentence data (e.g. "d ā n site qi n q. Mu.w n w. Mu.i: sh I sh d. Mu."), followed by step S103;
in step S103, corresponding pinyin voice segments are found in the voice data according to the pinyin in the pinyin sentence data, each pinyin is found to correspond to one of the pinyin voice segments, and step S104 is performed;
in step S104, corresponding initial consonant speech segments and final sound segments are found in the pinyin speech segments according to the initial consonants and final sounds in the pinyin, and then step S105 is performed;
in step S105, when the frequency of the initial consonant voice segment is above 8000 hz, the back end of the initial consonant voice segment is used as a node to segment the voice data, the first voice data (such as "d ā n #," i_nq "," i_w_nwysish "," i_li_sh "," i_w_h "", and the second voice data (such as "d ā n_q", "i_nq", "i_w_nwi", "sh_h) formed after segmentation is used as a node, and the second voice data (such as" d ā n_q "," i_w_nq "," sh "," i_li_d) formed after segmentation is further used as a voice material, and the second voice data segment after segmentation is used as a voice material;
in step S106, the phonetic sentence data is segmented according to the phonetic material, forming corresponding Pinyin sentence materials (e.g. "d ā n q", "i nq", "i w nwe ish", "i liu", "e sh", "i d); cutting the phonetic sentence data based on the cutting point of the phonetic material, and then performing step S107;
in step S107, a speech database 3 is established, and pinyin sentence materials and corresponding speech materials are stored in the speech database; wherein the phonetic sentence material phonetic material corresponds to the code or code number in the phonetic database 3.
In another embodiment of the weather forecast voice segmentation method, step S105 is improved: voices with a volume below 20 dB in the voice data are first used as nodes to cut the voice data, and the first voice data formed after cutting (e.g. "dāngqián qìwēn wéi" and "shíliù shèshìdù") are used directly as voice materials, arranged according to their different use environments into common pinyin voice segments (e.g. "dāngqián qìwēn wéi"), which are stored directly in the voice database 3, and changed pinyin voice segments (e.g. "shíliù shèshìdù"). The changed pinyin voice segments are then examined, and when the frequency of an initial-consonant voice segment is above 8000 Hz, the rear end of that segment is used as a node for cutting; the second voice data formed after cutting (e.g. "sh", "íliùsh", "èsh", "ìdù") form the corresponding voice materials and are stored in the voice database 3.
When synthesizing play voice from the voice materials in the voice database 3, the text sentence data to be synthesized is first converted into pinyin sentence data, which is then split into pinyin sentence materials using the initials whose voice segment frequency is above 8000 Hz (s, sh, q, x) and the punctuation marks (commas, colons, etc.) as nodes; the corresponding voice materials are found in the voice database 3 through the pinyin sentence materials and concatenated to form the play voice. Because the voice materials were cut at the rear ends of initial-consonant voice segments and at voices with a volume below 20 dB, the spliced play voice sounds softer and not stiff, does not stutter or jump jarringly, achieves a better pronunciation effect, and is closer to live manual announcing.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (4)

1. A broadcasting station intelligent weather forecast system is characterized in that: comprising
The recording unit is used for recording according to the text sentence data to form voice data;
the voice segmentation unit is used for converting the text sentence data into phonetic sentence data, and enabling the phonetic sentence data to correspond to the voice data one by one, so that corresponding initial consonant voice fragments and corresponding final voice fragments are found in the voice data by the initial consonant and the final in the phonetic sentence data, and the frequency of the initial consonant voice fragments is analyzed;
the voice segmentation unit comprises a frequency segmentation unit and is used for segmenting the rear ends of the initial consonant voice fragments with dense frequencies to form voice materials according to analysis results, and segmenting the pinyin sentences according to the voice materials to form corresponding pinyin sentence materials; the sound box voice fragments with dense frequency in the voice segmentation unit are sound box voice fragments with the frequency of more than 8000 hertz; the voice segmentation unit further comprises a mute segmentation unit, and is used for segmenting according to the voice fragments with the volume below 20 dB as nodes;
the voice database is used for storing the voice materials, the pinyin sentence materials, and music materials such as head music, tail music, background music and the like;
the information reading unit is used for reading weather data information through the Internet;
the information analysis unit is used for analyzing the weather data information of the information reading unit to form read pinyin sentence data, and splitting the read pinyin sentence data into pinyin sentence materials which can be identified by the voice database;
and the voice synthesis unit is used for finding the voice materials corresponding to each other from the voice database according to the phonetic sentence materials of the information analysis unit to synthesize the voice materials to form play voice.
2. The broadcaster intelligent weather forecast system of claim 1, wherein: the voice synthesis unit also comprises a sound mixing unit, and is used for synthesizing after the front end of the played voice is added with the head music, the rear end is added with the tail music and the background music, so as to form the forecast voice.
3. A weather forecast voice segmentation method, which is applicable to the intelligent weather forecast system of the broadcasting station as recited in any one of claims 1-2, and the method comprises the following steps:
step S101, recording according to the required text sentence data to form voice data;
step S102, converting the text sentence data into pinyin sentence data;
step S103, finding the corresponding pinyin voice fragments in the voice data according to the pinyin in the pinyin sentence data;
step S104, finding the corresponding initial-consonant voice fragments and final voice fragments within the pinyin voice fragments according to the initials and finals of the pinyin;
step S105, when the frequency of an initial-consonant voice fragment is above 8000 Hz, cutting by taking the rear end of that initial-consonant voice fragment as a node to form the corresponding voice material;
step S106, segmenting the pinyin sentence data according to the voice materials to form the corresponding pinyin sentence materials;
step S107, storing the pinyin sentence materials and the corresponding voice materials in the voice database.
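Steps S104-S106 can be sketched as follows: estimate each labeled initial-consonant fragment's dominant frequency with an FFT and, when it exceeds 8000 Hz, take the fragment's rear end as a cut node. The segment boundaries (sample offsets) are assumed to come from the alignment of step S103, which is not shown; the FFT peak-picking is one plausible reading of "frequency above 8000 Hz", not the patent's stated analysis.

```python
import numpy as np

def dominant_frequency(fragment, sample_rate):
    """Return the frequency (Hz) of the largest FFT magnitude peak."""
    spectrum = np.abs(np.fft.rfft(fragment))
    freqs = np.fft.rfftfreq(len(fragment), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum)])

def cut_nodes(audio, initial_segments, sample_rate, threshold_hz=8000.0):
    """Rear-end sample indices of initial-consonant fragments above the threshold."""
    nodes = []
    for start, end in initial_segments:
        if dominant_frequency(audio[start:end], sample_rate) > threshold_hz:
            nodes.append(end)
    return nodes

sr = 44100
t = np.arange(sr) / sr
# Synthetic "initial consonant": a 9 kHz tone; synthetic "final": a 200 Hz tone.
audio = np.concatenate([np.sin(2 * np.pi * 9000 * t[:4410]),
                        np.sin(2 * np.pi * 200 * t[:8820])])
print(cut_nodes(audio, [(0, 4410)], sr))  # [4410]
```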
4. The weather forecast voice segmentation method as claimed in claim 3, wherein: step S105 further comprises taking the voice with a volume below 20 dB in the voice data as nodes, and segmenting the voice data at those nodes to form the corresponding voice materials.
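The mute segmentation of claim 4 can be sketched as frame-level silence splitting: compute each frame's RMS level in dB and cut wherever it falls below 20 dB. The dB reference (here, relative to an amplitude of 1e-3) and the frame size are illustrative assumptions; the patent does not state what the 20 dB threshold is measured against.

```python
import numpy as np

def frame_db(frame, ref=1e-3):
    """RMS level of a frame in dB relative to the amplitude `ref`."""
    rms = np.sqrt(np.mean(frame ** 2)) + 1e-12
    return 20.0 * np.log10(rms / ref)

def split_on_silence(audio, frame_len=256, threshold_db=20.0):
    """Return (start, end) sample ranges of runs at or above the threshold."""
    n_frames = len(audio) // frame_len
    loud = [frame_db(audio[i * frame_len:(i + 1) * frame_len]) >= threshold_db
            for i in range(n_frames)]
    segments, start = [], None
    for i, is_loud in enumerate(loud):
        if is_loud and start is None:
            start = i * frame_len          # a non-silent run begins
        elif not is_loud and start is not None:
            segments.append((start, i * frame_len))  # run ends at a silent frame
            start = None
    if start is not None:
        segments.append((start, n_frames * frame_len))
    return segments

audio = np.concatenate([np.full(1024, 0.5),    # loud speech
                        np.full(1024, 1e-5),   # near-silence
                        np.full(1024, 0.5)])   # loud speech
print(split_on_silence(audio))  # [(0, 1024), (2048, 3072)]
```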
CN202010253310.XA 2020-04-02 2020-04-02 Intelligent weather forecast system of broadcasting station and weather forecast voice segmentation method Active CN111583901B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010253310.XA CN111583901B (en) 2020-04-02 2020-04-02 Intelligent weather forecast system of broadcasting station and weather forecast voice segmentation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010253310.XA CN111583901B (en) 2020-04-02 2020-04-02 Intelligent weather forecast system of broadcasting station and weather forecast voice segmentation method

Publications (2)

Publication Number Publication Date
CN111583901A CN111583901A (en) 2020-08-25
CN111583901B true CN111583901B (en) 2023-07-11

Family

ID=72126123

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010253310.XA Active CN111583901B (en) 2020-04-02 2020-04-02 Intelligent weather forecast system of broadcasting station and weather forecast voice segmentation method

Country Status (1)

Country Link
CN (1) CN111583901B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112802460B (en) * 2021-04-14 2021-10-19 中国科学院国家空间科学中心 Space environment forecasting system based on voice processing

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1126349A (en) * 1995-03-06 1996-07-10 郑元成 Semi-syllable method for continuously composing Chinese speech
JP2009020264A (en) * 2007-07-11 2009-01-29 Hitachi Ltd Voice synthesis device and voice synthesis method, and program
CN105336321A (en) * 2015-09-25 2016-02-17 百度在线网络技术(北京)有限公司 Phonetic segmentation method and device for speech synthesis
JP2016218281A (en) * 2015-05-21 2016-12-22 日本電信電話株式会社 Voice synthesizer, method thereof, and program

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1333501A (en) * 2001-07-20 2002-01-30 北京捷通华声语音技术有限公司 Dynamic Chinese speech synthesizing method
US7418389B2 (en) * 2005-01-11 2008-08-26 Microsoft Corporation Defining atom units between phone and syllable for TTS systems
CN101261831B (en) * 2007-03-05 2011-11-16 凌阳科技股份有限公司 A phonetic symbol decomposition and its synthesis method
US7953600B2 (en) * 2007-04-24 2011-05-31 Novaspeech Llc System and method for hybrid speech synthesis
JP5177135B2 (en) * 2007-05-08 2013-04-03 日本電気株式会社 Speech synthesis apparatus, speech synthesis method, and speech synthesis program
CN104318920A (en) * 2014-10-07 2015-01-28 北京理工大学 Construction method of cross-syllable Chinese speech synthesis element with spectrum stable boundary
CN104967789B (en) * 2015-06-16 2016-06-01 福建省泉州市气象局 The automatic processing method that city window weather is dubbed and system thereof

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1126349A (en) * 1995-03-06 1996-07-10 郑元成 Semi-syllable method for continuously composing Chinese speech
JP2009020264A (en) * 2007-07-11 2009-01-29 Hitachi Ltd Voice synthesis device and voice synthesis method, and program
JP2016218281A (en) * 2015-05-21 2016-12-22 日本電信電話株式会社 Voice synthesizer, method thereof, and program
CN105336321A (en) * 2015-09-25 2016-02-17 百度在线网络技术(北京)有限公司 Phonetic segmentation method and device for speech synthesis

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Perceptual and objective detection of discontinuities in concatenative speech synthesis. 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing. 2001, pp. 837-840. *
Research on automatic initial/final segmentation based on the wavelet transform; Li Yongguang et al.; Journal of Harbin Engineering University; Vol. 19, No. 3; pp. 75-80 *

Also Published As

Publication number Publication date
CN111583901A (en) 2020-08-25

Similar Documents

Publication Publication Date Title
Vroomen et al. Duration and intonation in emotional speech
Chu et al. Selecting non-uniform units from a very large corpus for concatenative speech synthesizer
CN101171624B (en) Speech synthesis device and speech synthesis method
CN110136687B (en) Voice training based cloned accent and rhyme method
US20200082805A1 (en) System and method for speech synthesis
US8775185B2 (en) Speech samples library for text-to-speech and methods and apparatus for generating and using same
CN103597543A (en) Semantic audio track mixer
CN111583901B (en) Intelligent weather forecast system of broadcasting station and weather forecast voice segmentation method
Heeringa et al. Norwegian dialects examined perceptually and acoustically
Toivanen et al. Emotions in [a]: a perceptual and acoustic study
Louw et al. A general-purpose IsiZulu speech synthesizer
von Coler et al. CMMSD: A data set for note-level segmentation of monophonic music
JPH09146580A (en) Effect sound retrieving device
CN105719641B (en) Sound method and apparatus are selected for waveform concatenation speech synthesis
JP4150645B2 (en) Audio labeling error detection device, audio labeling error detection method and program
CN111429878B (en) Self-adaptive voice synthesis method and device
CN109389969B (en) Corpus optimization method and apparatus
JP2536169B2 (en) Rule-based speech synthesizer
US6934680B2 (en) Method for generating a statistic for phone lengths and method for determining the length of individual phones for speech synthesis
EP1589524B1 (en) Method and device for speech synthesis
JP3374767B2 (en) Recording voice database method and apparatus for equalizing speech speed, and storage medium storing program for equalizing speech speed
JPH0863187A (en) Speech synthesizer
EP1640968A1 (en) Method and device for speech synthesis
CN117475991A (en) Method and device for converting text into audio and computer equipment
FI119859B (en) A method for producing speech synthesis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No.104, North building, No.10 Lanni Road, Tianxin District, Changsha City, Hunan Province

Applicant after: Hunan Shengguang Information Technology Co.,Ltd.

Address before: No.104, North building, No.10 Lanni Road, Tianxin District, Wuhan City, Hubei Province, 430000

Applicant before: Hunan Shengguang Information Technology Co.,Ltd.

CB02 Change of applicant information

Address after: No.104, North building, No.10 Lanni Road, Tianxin District, Changsha, Hunan 410000

Applicant after: Hunan Shengguang Technology Co.,Ltd.

Address before: No.104, North building, No.10 Lanni Road, Tianxin District, Changsha City, Hunan Province

Applicant before: Hunan Shengguang Information Technology Co.,Ltd.

GR01 Patent grant