CN110070893A - System, method and apparatus for performing sentiment analysis using infant crying - Google Patents

System, method and apparatus for performing sentiment analysis using infant crying

Info

Publication number
CN110070893A
CN110070893A CN201910227535.5A
Authority
CN
China
Prior art keywords
model
sound
infant crying
training data
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910227535.5A
Other languages
Chinese (zh)
Inventor
陈丹
徐滢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Pinguo Technology Co Ltd
Original Assignee
Chengdu Pinguo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Pinguo Technology Co Ltd filed Critical Chengdu Pinguo Technology Co Ltd
Priority to CN201910227535.5A priority Critical patent/CN110070893A/en
Publication of CN110070893A publication Critical patent/CN110070893A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G10L25/30Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state

Abstract

The invention belongs to the technical field of deep learning, and in particular relates to a system, method and apparatus for performing sentiment analysis using infant crying. A crying detection module trains on data samples to obtain a detection model, detects the input sound to be detected with the detection model, and judges whether the input sound is infant crying. A crying analysis module trains on data samples to obtain an analysis model and performs emotion classification on the detected crying with the analysis model. A model update module uploads sound segments labeled as misrecognized together with the corresponding class labels, along with the classification model currently used by the user; fine-tunes the uploaded classification model using the user-uploaded sound data and the original training data; and downloads the newly trained model to replace the original model. The invention has the advantages of portability, high recognition accuracy and wide applicability.

Description

System, method and apparatus for performing sentiment analysis using infant crying
Technical field
The invention belongs to the technical field of deep learning, and in particular relates to a system, method and apparatus for performing sentiment analysis using infant crying.
Background technique
At present, consumer options for infant monitoring are limited to hiring a nanny or having the baby nursed in a hospital. As society develops, problems such as rising labor costs have gradually emerged, which undoubtedly increases the burden on young parents who need to hire a nanny to care for their family. In addition, young parents today are increasingly busy outside the home and therefore lack time to look after their own babies. If the baby is left in the care of grandparents, their advancing age means the baby may be neglected: the baby may cry or fail to sleep without being attended to promptly, nobody may notice the baby kicking off its quilt at night, and discomfort may go unnoticed until morning.
ZL201310440063.4 discloses a baby monitor capable of recognizing infant crying and a crying recognition method, with the following technical solution: a baby monitor capable of recognizing infant crying comprises a main control module, a crying recognition module and an SMS sending module, wherein the main control module, after receiving information sent by the crying recognition module that the baby is crying, forwards that information to the SMS sending module. The crying recognition module, connected to the main control module, collects sound information from the surrounding environment in real time and processes it to distinguish the baby's crying from other sounds in the environment. The crying recognition module comprises a sound acquisition module, a sound signal analysis module and a sound determination module. The sound acquisition module collects environmental sound in real time; the sound signal analysis module first splits the collected sound into speech frames with a frame length of 100 milliseconds and a frame shift of 50 milliseconds, then applies a Hanning window, and finally performs a fast Fourier transform (FFT) on each frame, sending the processed sound information to the sound determination module. The sound determination module computes the ratio of the energy in the 1 kHz to 3 kHz band to the total energy of the frame, labels frames whose ratio exceeds 0.4 as crying frames, and examines 20 consecutive frames: when at least 10 of the 20 consecutive frames are crying frames, it judges that infant crying has been detected and sends the crying information to the main control module. The SMS sending module, connected to the main control module, receives from the main control module the information that the baby is crying.
Such prior art relies on bulky physical devices that are not portable, or must connect to a server for analysis and thus depend on the network. It also lacks feedback on the analysis results.
Summary of the invention
In view of this, the main purpose of the present invention is to provide a system, method and apparatus for performing sentiment analysis using infant crying, which have the advantages of portability, high recognition accuracy and wide applicability.
In order to achieve the above objectives, the technical scheme of the present invention is realized as follows:
As shown in Figure 1, a system for performing sentiment analysis using infant crying, the system comprising:
a crying detection module, which trains on data samples to obtain a detection model, detects the input sound to be detected with the detection model, and judges whether the input sound is infant crying;
a crying analysis module, which trains on data samples to obtain an analysis model and performs emotion classification on the detected infant crying with the analysis model;
a model update module, which uploads sound segments labeled as misrecognized together with the corresponding class labels, along with the classification model currently used by the user; fine-tunes the uploaded classification model using the user-uploaded sound data and the original training data; and downloads the newly trained model to replace the original model.
Further, the crying detection module comprises: a detection training module, which collects training data and trains a model capable of detecting infant crying; and a detection test module, which detects the input sound to be detected with the detection model and judges whether the input sound is infant crying.
Further, the crying analysis module comprises: an analysis training module, which trains on data samples to obtain an analysis model capable of analyzing infant crying; and an analysis test module, which performs emotion classification on the detected infant crying.
Further, the model update module comprises: a data upload module, which uploads sound segments labeled as misrecognized together with the corresponding class labels, along with the classification model currently used by the user; a model training module, which fine-tunes the uploaded classification model using the user-uploaded sound data and the original training data; and a new-model download module, which downloads the newly trained model and replaces the original model.
A method for performing sentiment analysis using infant crying, the method executing the following steps:
training on data samples to obtain a detection model, detecting the input sound to be detected with the detection model, and judging whether the input sound is infant crying;
training on data samples to obtain an analysis model, and performing emotion classification on the detected infant crying with the analysis model;
uploading sound segments labeled as misrecognized and the corresponding class labels, along with the classification model currently used by the user; fine-tuning the uploaded classification model using the user-uploaded sound data and the original training data; and downloading the newly trained model to replace the original model.
Further, the step of training on data samples to obtain a detection model, detecting the input sound with the detection model, and judging whether the input sound is infant crying executes the following steps:
collecting various environmental sounds as training data, and manually adding a classification label to each sound segment;
randomly dividing the training data into a training set and a test set;
sampling, randomly cutting and normalizing each sound segment in the training set so that the value of each sampled point lies in the range [-1, 1];
feeding the training data into a neural network to train the model;
acquiring the sound to be detected;
sampling, randomly cutting and normalizing the acquired sound to be detected so that the value of each sampled point lies in the range [-1, 1];
feeding the resulting multiple sound segments into the pre-trained neural network to obtain prediction results, applying voting to the prediction results, and taking the prediction result with the most votes as the final prediction result.
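The random train/test division among the steps above can be sketched as follows; this is a minimal illustration, and the 80/20 ratio, the fixed seed and the in-memory list of labeled segments are assumptions not specified by the invention:

```python
import random

def split_dataset(samples, train_ratio=0.8, seed=42):
    """Randomly divide labeled sound segments into a training set and a test set."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)  # deterministic shuffle for reproducibility
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]

# each sample is (sound_segment, class_label); the segments here are stand-ins
data = [([0.1] * 400, label) for label in range(10)]
train, test = split_dataset(data)
print(len(train), len(test))
```

With ten labeled samples and an 80/20 split, eight segments go to the training set and two to the test set; no sample is lost or duplicated.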
Further, the step of training on data samples to obtain an analysis model and performing emotion classification on the detected infant crying with the analysis model executes the following steps:
collecting crying of babies in different emotional states, and attaching class labels;
randomly dividing the training data into a training set and a test set;
preprocessing each sound segment in the training set, comprising: sampling, cutting with overlap, and normalizing the sound so that the value of each sampled point lies in the range [-1, 1];
feeding the training data into a neural network to train the model;
taking the crying detected by the crying detection part as input;
feeding the sound into the pre-trained neural network to obtain prediction results, applying voting to the prediction results, and taking the prediction result with the most votes as the final prediction result.
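The two-stage flow described above, where only sound confirmed as crying is passed on for emotion classification, can be sketched as follows; the stub detector, the placeholder decision rules and the emotion labels are illustrative stand-ins for the trained neural networks:

```python
def detect_crying(segment):
    """Stand-in for the detection model: True if the segment is taken to be crying."""
    return max(segment) > 0.5  # placeholder rule, not the invention's network

def classify_emotion(segment):
    """Stand-in for the analysis model: returns one of the emotion categories."""
    labels = ["hungry", "sleepy", "needs burping", "pain", "uncomfortable"]
    return labels[int(sum(abs(x) for x in segment)) % len(labels)]

def analyze(segments):
    # only segments detected as crying are passed to the emotion classifier
    return [classify_emotion(s) for s in segments if detect_crying(s)]

result = analyze([[0.9, 0.2], [0.1, 0.1], [0.7, 0.7]])
print(result)
```

The cascade design means the (quieter) second segment never reaches the emotion classifier, which mirrors the patent's separation of detection and analysis into distinct models.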
An apparatus for performing sentiment analysis using infant crying, the apparatus comprising a non-transitory computer-readable storage medium storing computer instructions comprising: a code segment for training on data samples to obtain a detection model, detecting the input sound to be detected with the detection model, and judging whether the input sound is infant crying; a code segment for training on data samples to obtain an analysis model and performing emotion classification on the detected infant crying with the analysis model; and a code segment for uploading sound segments labeled as misrecognized and the corresponding class labels, along with the classification model currently used by the user, fine-tuning the uploaded classification model using the user-uploaded sound data and the original training data, and downloading the newly trained model to replace the original model.
The system, method and apparatus for performing sentiment analysis using infant crying according to the present invention have the following beneficial effects: they can automatically detect a baby's crying and analyze the emotional cause of that crying; they can be installed directly on intelligent terminals such as smartphones and tablet computers, without additional hardware; both crying detection and analysis run locally (on the smartphone or tablet itself) without depending on the network; and the user can give feedback on the analysis results, so that misclassified sound segments are collected and periodically uploaded to the server to update the classifier, customizing a crying classifier exclusive to the baby and improving classification accuracy.
Detailed description of the invention
Fig. 1 is a schematic flow diagram of the system for performing sentiment analysis using infant crying according to the present invention.
Fig. 2 is a schematic structural diagram of the crying detection module of the system for performing sentiment analysis using infant crying according to the present invention.
Fig. 3 is a schematic structural diagram of the crying analysis module of the system for performing sentiment analysis using infant crying according to the present invention.
Fig. 4 is a schematic structural diagram of the model update module of the system for performing sentiment analysis using infant crying according to the present invention.
Fig. 5 is a schematic flow diagram of the method for performing sentiment analysis using infant crying according to the present invention.
Specific embodiment
The method of the present invention is described in further detail below with reference to the accompanying drawings and the embodiments of the present invention.
A system for performing sentiment analysis using infant crying, the system comprising:
a crying detection module, which trains on data samples to obtain a detection model, detects the input sound to be detected with the detection model, and judges whether the input sound is infant crying;
a crying analysis module, which trains on data samples to obtain an analysis model and performs emotion classification on the detected infant crying with the analysis model;
a model update module, which uploads sound segments labeled as misrecognized together with the corresponding class labels, along with the classification model currently used by the user; fine-tunes the uploaded classification model using the user-uploaded sound data and the original training data; and downloads the newly trained model to replace the original model.
Further, the crying detection module comprises: a detection training module, which collects training data and trains a model capable of detecting infant crying; and a detection test module, which detects the input sound to be detected with the detection model and judges whether the input sound is infant crying.
Further, the crying analysis module comprises: an analysis training module, which trains on data samples to obtain an analysis model capable of analyzing infant crying; and an analysis test module, which performs emotion classification on the detected infant crying.
Further, the model update module comprises: a data upload module, which uploads sound segments labeled as misrecognized together with the corresponding class labels, along with the classification model currently used by the user; a model training module, which fine-tunes the uploaded classification model using the user-uploaded sound data and the original training data; and a new-model download module, which downloads the newly trained model and replaces the original model.
The crying detection module operates as follows: various environmental sounds are collected as training data, and a classification label (K classes in total) is manually added to each sound segment (for example: infant crying, rain, wind, laughter, cat meowing, dog barking, footsteps, door sounds, etc.).
The training data is then randomly divided into a training set and a test set.
Each sound segment in the training set is preprocessed, including sampling (sampling frequency 16 kHz), random cutting (so that each segment has a length of 25 ms, i.e. 16000 × 0.025 = 400 sampled points), and normalization so that the value of each sampled point lies in the range [-1, 1]. In this way each original sound segment generates multiple equal-length segments. Finally N training samples {xi, yi} are generated, where xi is a sound vector (length 400, value range [-1, 1]) and yi is the label of the sound (value range [0, K-1]).
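The preprocessing just described (random 25 ms cuts from a 16 kHz recording, followed by normalization into [-1, 1]) can be sketched as follows; the function names, the per-segment peak normalization and the synthetic recording are illustrative assumptions:

```python
import random

SAMPLE_RATE = 16000
SEGMENT_LEN = int(SAMPLE_RATE * 0.025)  # 25 ms -> 400 sampled points

def normalize(segment):
    """Scale a segment so every sampled point lies in [-1, 1] (peak normalization)."""
    peak = max(abs(v) for v in segment)
    return [v / peak for v in segment] if peak else list(segment)

def random_cuts(sound, n_cuts, seed=0):
    """Cut n_cuts random equal-length 400-point segments from a longer recording."""
    rng = random.Random(seed)
    cuts = []
    for _ in range(n_cuts):
        start = rng.randrange(0, len(sound) - SEGMENT_LEN + 1)
        cuts.append(normalize(sound[start:start + SEGMENT_LEN]))
    return cuts

recording = [((i % 50) - 25) / 5.0 for i in range(SAMPLE_RATE)]  # 1 s of synthetic audio
segments = random_cuts(recording, n_cuts=3)
print(len(segments), len(segments[0]))
```

Each cut is the same length (400 points), so the resulting {xi} vectors all have the fixed dimensionality the network expects.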
The training data is fed into a neural network to train the model. The network structure is two one-dimensional convolution modules (each module comprising a one-dimensional convolutional layer, a ReLU layer and a pooling layer), three two-dimensional convolution modules (each comprising a two-dimensional convolutional layer, a ReLU layer and a pooling layer), and three fully connected layers. The loss function is the cross entropy. The calculation is as follows:
the output of the neural network is a K-dimensional vector [a0, a1, ..., aK-1], which is substituted into the softmax formula to obtain the K-dimensional vector S = [s0, s1, ..., sK-1], where si = exp(ai) / Σj exp(aj);
the computed S and the label vector y = [y0, y1, ..., yK-1] of the sample (where yi = 1 for i the class of the sample, and all other entries are 0) are substituted into the cross-entropy formula to obtain the loss L.
The cross-entropy formula is L = -Σi yi·log(si).
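The softmax and cross-entropy computation above can be written out numerically; this is a minimal sketch of the two formulas, not the invention's training code, and the example logits are arbitrary:

```python
import math

def softmax(logits):
    """Map a K-dimensional output vector [a0, ..., aK-1] to probabilities summing to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(a - m) for a in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, label_index):
    """Cross-entropy loss L = -sum_i y_i * log(s_i) for a one-hot label vector y."""
    return -math.log(probs[label_index])

logits = [2.0, 1.0, 0.1]   # example network output for K = 3 classes
probs = softmax(logits)
loss = cross_entropy(probs, label_index=0)
print(probs, loss)
```

Because the label vector is one-hot, the sum in the cross-entropy formula collapses to the single term -log(si) for the true class, which is what the function computes.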
Sound is acquired in real time by the intelligent terminal.
The acquired sound is preprocessed: sampling (sampling frequency 16 kHz), random cutting (so that each segment has a length of 25 ms, i.e. 16000 × 0.025 = 400 sampled points), and normalization (Data = Data*1.0/max(abs(Data))) so that the value of each sampled point lies in the range [-1, 1], yielding M sound segments.
The M acquired sound segments are fed into the pre-trained neural network to obtain M prediction results; voting is applied to the M results, and the prediction with the most votes is the final prediction result.
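The voting step above, which takes the most frequent of the M per-segment predictions as the final result, can be sketched as follows (the example labels are illustrative):

```python
from collections import Counter

def vote(predictions):
    """Majority vote over per-segment predictions; the most common label wins."""
    return Counter(predictions).most_common(1)[0][0]

# e.g. per-segment class predictions from the detection network over M = 5 segments
print(vote(["crying", "rain", "crying", "crying", "footsteps"]))
```

Voting over many short segments makes the final decision robust to an individual segment being misclassified.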
The crying analysis module operates as follows: crying of babies in different emotional states (hungry, sleepy, needing to burp, in pain, uncomfortable (wet diaper, too hot, etc.)) is collected and given class labels.
The training data is randomly divided into a training set and a test set.
Each sound segment in the training set is preprocessed, including sampling (sampling frequency 16 kHz), cutting with overlap (one cut every 10 ms, so that each segment has a length of 25 ms, i.e. 16000 × 0.025 = 400 sampled points), and normalization so that the value of each sampled point lies in the range [-1, 1]. In this way each original sound segment generates multiple equal-length segments. Finally N training samples {xi, yi} are generated, where xi is a sound vector (length 400, value range [-1, 1]) and yi is the label of the sound (value range [0, 4] for the five categories).
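The overlapped cutting just described (a 25 ms window advanced by a 10 ms hop at 16 kHz) can be sketched as follows; the silent stand-in recording is illustrative:

```python
SAMPLE_RATE = 16000
WINDOW = int(SAMPLE_RATE * 0.025)  # 400 sampled points (25 ms)
HOP = int(SAMPLE_RATE * 0.010)     # 160 sampled points (10 ms)

def overlapping_windows(sound, window=WINDOW, hop=HOP):
    """Cut a recording into equal-length windows that overlap (hop < window)."""
    return [sound[start:start + window]
            for start in range(0, len(sound) - window + 1, hop)]

recording = [0.0] * SAMPLE_RATE  # 1 second of audio as a stand-in
windows = overlapping_windows(recording)
print(len(windows), len(windows[0]))
```

Because the hop (10 ms) is smaller than the window (25 ms), adjacent segments share samples, so one second of audio yields far more training segments than non-overlapping cutting would.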
The training data is fed into a neural network to train the model. The network structure is two one-dimensional convolution modules (each module comprising a one-dimensional convolutional layer, a ReLU layer and a pooling layer), three two-dimensional convolution modules (each comprising two two-dimensional convolutional layers, a ReLU layer and a pooling layer), and three fully connected layers. The loss function is the cross entropy. The calculation is as follows:
the output of the neural network is a K-dimensional vector [a0, a1, ..., aK-1], which is substituted into the softmax formula to obtain the K-dimensional vector S = [s0, s1, ..., sK-1], where si = exp(ai) / Σj exp(aj);
the computed S and the label vector y = [y0, y1, ..., yK-1] of the sample (where yi = 1 for i the class of the sample, and all other entries are 0) are substituted into the cross-entropy formula L = -Σi yi·log(si) to obtain the loss L.
The crying detected by the crying detection part (the M sound segments obtained by preprocessing) is taken as input.
The M sound segments are fed into the pre-trained neural network to obtain M prediction results; voting is applied to the M results, and the prediction with the most votes is the final prediction result.
User feedback: the user can give feedback on the analysis result by clicking a Yes or No button; if No, the user can select the category she considers correct.
A method for performing sentiment analysis using infant crying, the method executing the following steps:
training on data samples to obtain a detection model, detecting the input sound to be detected with the detection model, and judging whether the input sound is infant crying;
training on data samples to obtain an analysis model, and performing emotion classification on the detected infant crying with the analysis model;
uploading sound segments labeled as misrecognized and the corresponding class labels, along with the classification model currently used by the user; fine-tuning the uploaded classification model using the user-uploaded sound data and the original training data; and downloading the newly trained model to replace the original model.
Further, the step of training on data samples to obtain a detection model, detecting the input sound with the detection model, and judging whether the input sound is infant crying executes the following steps:
collecting various environmental sounds as training data, and manually adding a classification label to each sound segment;
randomly dividing the training data into a training set and a test set;
sampling, randomly cutting and normalizing each sound segment in the training set so that the value of each sampled point lies in the range [-1, 1];
feeding the training data into a neural network to train the model;
acquiring the sound to be detected;
sampling, randomly cutting and normalizing the acquired sound to be detected so that the value of each sampled point lies in the range [-1, 1];
feeding the resulting multiple sound segments into the pre-trained neural network to obtain prediction results, applying voting to the prediction results, and taking the prediction result with the most votes as the final prediction result.
Further, the step of training on data samples to obtain an analysis model and performing emotion classification on the detected infant crying with the analysis model executes the following steps:
collecting crying of babies in different emotional states, and attaching class labels;
randomly dividing the training data into a training set and a test set;
preprocessing each sound segment in the training set, comprising: sampling, cutting with overlap, and normalizing the sound so that the value of each sampled point lies in the range [-1, 1];
feeding the training data into a neural network to train the model;
taking the crying detected by the crying detection part as input;
feeding the sound into the pre-trained neural network to obtain prediction results, applying voting to the prediction results, and taking the prediction result with the most votes as the final prediction result.
An apparatus for performing sentiment analysis using infant crying, the apparatus comprising a non-transitory computer-readable storage medium storing computer instructions comprising: a code segment for training on data samples to obtain a detection model, detecting the input sound to be detected with the detection model, and judging whether the input sound is infant crying; a code segment for training on data samples to obtain an analysis model and performing emotion classification on the detected infant crying with the analysis model; and a code segment for uploading sound segments labeled as misrecognized and the corresponding class labels, along with the classification model currently used by the user, fine-tuning the uploaded classification model using the user-uploaded sound data and the original training data, and downloading the newly trained model to replace the original model.
Those of ordinary skill in the art can clearly understand that, for convenience and brevity of description, the specific working process of the system described above and the related explanations may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
It should be noted that the system provided by the above embodiments is illustrated only by the division into the above functional modules. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the modules or steps in the embodiments of the present invention may be decomposed or recombined. For example, the modules of the above embodiments may be merged into one module, or further split into multiple submodules, to complete all or part of the functions described above. The names of the modules and steps involved in the embodiments of the present invention are only for distinguishing the modules or steps from one another, and are not to be construed as improper limitations of the present invention.
Those of ordinary skill in the art can clearly understand that, for convenience and brevity of description, the specific working process and related explanations of the storage device and processing device may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
Those skilled in the art should recognize that the modules and method steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. Programs corresponding to the software modules and method steps may be placed in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disks, removable disks, CD-ROM, or any other form of storage medium known in the technical field. To clearly illustrate the interchangeability of electronic hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are executed in electronic hardware or in software depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods for each specific application to implement the described functions, but such implementation should not be considered beyond the scope of the present invention.
The terms "first", "second", etc. are used to distinguish similar objects, and are not used to describe or indicate a specific order or precedence.
The term "comprising" or any other similar term is intended to cover a non-exclusive inclusion, such that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device.
The technical solution of the present invention has thus been described in connection with the preferred embodiments shown in the drawings. However, those skilled in the art will readily understand that the protection scope of the present invention is obviously not limited to these specific embodiments. Without departing from the principle of the present invention, those skilled in the art can make equivalent changes or replacements to the relevant technical features, and the technical solutions after such changes or replacements will fall within the protection scope of the present invention.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the protection scope of the present invention.

Claims (8)

1. A system for performing sentiment analysis using infant crying, characterized in that the system comprises:
a crying detection module, which trains on data samples to obtain a detection model, detects the input sound to be detected with the detection model, and judges whether the input sound is infant crying;
a crying analysis module, which trains on data samples to obtain an analysis model and performs emotion classification on the detected infant crying with the analysis model;
a model update module, which uploads sound segments labeled as misrecognized together with the corresponding class labels, along with the classification model currently used by the user; fine-tunes the uploaded classification model using the user-uploaded sound data and the original training data; and downloads the newly trained model to replace the original model.
2. The system for performing sentiment analysis using infant crying according to claim 1, characterized in that the crying detection module comprises: a detection training module, which collects training data and trains a model capable of detecting infant crying; and a detection test module, which detects the input sound to be detected with the detection model and judges whether the input sound is infant crying.
3. The system for performing sentiment analysis using infant crying according to claim 2, characterized in that the crying analysis module comprises: an analysis training module, which trains on data samples to obtain an analysis model capable of analyzing infant crying; and an analysis test module, which performs emotion classification on the detected infant crying.
4. The system for performing sentiment analysis using infant crying according to claim 3, characterized in that the model update module comprises: a data upload module, which uploads sound segments labeled as misrecognized together with the corresponding class labels, along with the classification model currently used by the user; a model training module, which fine-tunes the uploaded classification model using the user-uploaded sound data and the original training data; and a new-model download module, which downloads the newly trained model and replaces the original model.
5. A method for performing sentiment analysis using infant crying, characterized in that the method executes the following steps:
training on data samples to obtain a detection model, detecting the input sound to be detected with the detection model, and judging whether the input sound is infant crying;
training on data samples to obtain an analysis model, and performing emotion classification on the detected infant crying with the analysis model;
uploading sound segments labeled as misrecognized and the corresponding class labels, along with the classification model currently used by the user; fine-tuning the uploaded classification model using the user-uploaded sound data and the original training data; and downloading the newly trained model to replace the original model.
6. The method for sentiment analysis using vagitus as claimed in claim 5, characterized in that the step of training on data samples to obtain the detection model, detecting the input sound with the detection model, and judging whether the input sound is vagitus comprises:
collecting various ambient sounds as training data and manually adding a classification label to each sound segment;
randomly dividing the training data into a training set and a test set;
sampling, randomly cutting, and normalizing each sound segment in the training set so that the value of each sample point lies in the range [-1, 1];
feeding the training data into a neural network to train the model;
acquiring the sound to be detected;
sampling, randomly cutting, and normalizing the acquired sound so that the value of each sample point lies in the range [-1, 1];
feeding the multiple acquired sound segments into the pre-trained neural network to obtain prediction results, voting over the prediction results, and taking the prediction with the most votes as the final prediction.
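The preprocessing and voting steps recited in claim 6 can be sketched as follows. This is a minimal illustration under assumptions: function names, the peak-normalization scheme, and the fixed window length are not specified by the claim.

```python
# Hypothetical sketch of claim 6's preprocessing (normalize to [-1, 1],
# random fixed-length cut) and the majority vote over segment predictions.
import random
from collections import Counter

def normalize(segment):
    """Scale a sound segment so every sample lies in [-1, 1]."""
    peak = max(abs(s) for s in segment) or 1.0  # avoid division by zero
    return [s / peak for s in segment]

def random_cut(segment, length):
    """Take a window of `length` samples at a random offset,
    zero-padding segments that are too short."""
    if len(segment) <= length:
        return list(segment) + [0.0] * (length - len(segment))
    start = random.randrange(len(segment) - length + 1)
    return segment[start:start + length]

def vote(predictions):
    """Majority vote: the label predicted for the most segments wins."""
    return Counter(predictions).most_common(1)[0][0]
```

At inference time, each acquired segment would be normalized, cut, classified by the trained network, and the per-segment labels passed to `vote` to produce the final prediction.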
7. The method for sentiment analysis using vagitus as claimed in claim 6, characterized in that the step of training on data samples to obtain the analysis model and performing emotion classification on the detected vagitus with the analysis model comprises:
collecting the baby's crying under different emotional states and attaching class labels;
randomly dividing the training data into a training set and a test set;
preprocessing each sound segment in the training set, including sampling, cutting with overlap, and normalizing the sound so that the value of each sample point lies in the range [-1, 1];
feeding the training data into a neural network to train the model;
taking the crying detected by the vagitus detection part as input;
feeding the sound into the pre-trained neural network to obtain prediction results, voting over the prediction results, and taking the prediction with the most votes as the final prediction.
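The overlapping cut used in claim 7's preprocessing can be sketched as follows; window length and hop size are assumed parameters, since the claim does not specify them.

```python
# Hypothetical sketch of the overlapping cut in claim 7: windows of a fixed
# length whose start points are `hop` samples apart (hop < length overlaps).

def overlapping_windows(signal, length, hop):
    """Split `signal` into fixed-length windows with starts `hop` apart."""
    if length >= len(signal):
        return [list(signal)]  # too short: keep the whole segment
    return [signal[i:i + length]
            for i in range(0, len(signal) - length + 1, hop)]
```

Overlap means adjacent windows share samples, so one crying segment yields several training (or voting) examples instead of one.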
8. A device for sentiment analysis using vagitus, characterized in that the device comprises a non-transitory computer-readable storage medium storing computing instructions, comprising: a code segment for training on data samples to obtain a detection model, detecting an input sound with the detection model, and judging whether the input sound is vagitus; a code segment for training on data samples to obtain an analysis model and performing emotion classification on the detected vagitus with the analysis model; and a code segment for uploading the sound segments labeled as misrecognized together with their class labels and the classification model currently used by the user, fine-tuning the uploaded classification model on the user's sound data together with the original training data, and downloading the newly trained model to replace the original model.
CN201910227535.5A 2019-03-25 2019-03-25 A kind of system, method and apparatus carrying out sentiment analysis using vagitus Pending CN110070893A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910227535.5A CN110070893A (en) 2019-03-25 2019-03-25 A kind of system, method and apparatus carrying out sentiment analysis using vagitus

Publications (1)

Publication Number Publication Date
CN110070893A true CN110070893A (en) 2019-07-30

Family

ID=67366561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910227535.5A Pending CN110070893A (en) 2019-03-25 2019-03-25 A kind of system, method and apparatus carrying out sentiment analysis using vagitus

Country Status (1)

Country Link
CN (1) CN110070893A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103426438A (en) * 2012-05-25 2013-12-04 洪荣昭 Method and system for analyzing baby crying
CN104347066A (en) * 2013-08-09 2015-02-11 盛乐信息技术(上海)有限公司 Deep neural network-based baby cry identification method and system
CN106653001A (en) * 2016-11-17 2017-05-10 沈晓明 Baby crying identifying method and system
CN107122807A (en) * 2017-05-24 2017-09-01 努比亚技术有限公司 A kind of family's monitoring method, service end and computer-readable recording medium
CN107657963A (en) * 2016-07-25 2018-02-02 韦创科技有限公司 Sob identification system and sob discrimination method
CN107818779A (en) * 2017-09-15 2018-03-20 北京理工大学 A kind of infant's crying sound detection method, apparatus, equipment and medium
CN108305642A (en) * 2017-06-30 2018-07-20 腾讯科技(深圳)有限公司 The determination method and apparatus of emotion information
CN109065034A (en) * 2018-09-25 2018-12-21 河南理工大学 A kind of vagitus interpretation method based on sound characteristic identification
CN109243493A (en) * 2018-10-30 2019-01-18 南京工程学院 Based on the vagitus emotion identification method for improving long memory network in short-term
CN109509484A (en) * 2018-12-25 2019-03-22 科大讯飞股份有限公司 A kind of prediction technique and device of baby crying reason

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yang Ruqing et al.: "Intelligent Control Engineering" (《智能控制工程》), 31 January 2001 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113270115A (en) * 2020-02-17 2021-08-17 广东美的制冷设备有限公司 Infant monitoring device, infant monitoring method thereof, control device and storage medium
CN113270115B (en) * 2020-02-17 2023-04-11 广东美的制冷设备有限公司 Infant monitoring device, infant monitoring method thereof, control device and storage medium
CN111354375A (en) * 2020-02-25 2020-06-30 咪咕文化科技有限公司 Cry classification method, device, server and readable storage medium
CN111785300A (en) * 2020-06-12 2020-10-16 北京快鱼电子股份公司 Crying detection method and system based on deep neural network
CN111785300B (en) * 2020-06-12 2021-05-25 北京快鱼电子股份公司 Crying detection method and system based on deep neural network
CN114463937A (en) * 2022-03-07 2022-05-10 云知声智能科技股份有限公司 Infant monitoring method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110070893A (en) A kind of system, method and apparatus carrying out sentiment analysis using vagitus
CN108053838B (en) In conjunction with fraud recognition methods, device and the storage medium of audio analysis and video analysis
CN108009521B (en) Face image matching method, device, terminal and storage medium
San-Segundo et al. Feature extraction from smartphone inertial signals for human activity segmentation
US11355138B2 (en) Audio scene recognition using time series analysis
CN110853617B (en) Model training method, language identification method, device and equipment
CN103294199B (en) A kind of unvoiced information identifying system based on face's muscle signals
CN110570873A (en) voiceprint wake-up method and device, computer equipment and storage medium
CN109783798A (en) Method, apparatus, terminal and the storage medium of text information addition picture
CN104036776A (en) Speech emotion identification method applied to mobile terminal
CN108717852A (en) A kind of intelligent robot Semantic interaction system and method based on white light communication and the cognition of class brain
CN110972112B (en) Subway running direction determining method, device, terminal and storage medium
CN109394258A (en) A kind of classification method, device and the terminal device of lung's breath sound
CN108109331A (en) Monitoring method and monitoring system
CN111667818A (en) Method and device for training awakening model
CN107085717A (en) A kind of family's monitoring method, service end and computer-readable recording medium
CN109658921A (en) A kind of audio signal processing method, equipment and computer readable storage medium
Turan et al. Monitoring Infant's Emotional Cry in Domestic Environments Using the Capsule Network Architecture.
CN104123930A (en) Guttural identification method and device
CN110136726A (en) A kind of estimation method, device, system and the storage medium of voice gender
CN109602421A (en) Health monitor method, device and computer readable storage medium
CN114078472A (en) Training method and device for keyword calculation model with low false awakening rate
Mahmoud et al. Smart nursery for smart cities: Infant sound classification based on novel features and support vector classifier
Nguyen et al. A potential approach for emotion prediction using heart rate signals
Eyobu et al. A real-time sleeping position recognition system using IMU sensor motion data

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190730

RJ01 Rejection of invention patent application after publication