CN107564531A - Meeting minutes method, apparatus and computer device based on voiceprint features - Google Patents
- Publication number
- CN107564531A CN107564531A CN201710743944.1A CN201710743944A CN107564531A CN 107564531 A CN107564531 A CN 107564531A CN 201710743944 A CN201710743944 A CN 201710743944A CN 107564531 A CN107564531 A CN 107564531A
- Authority
- CN
- China
- Prior art keywords
- voiceprint
- voiceprint feature
- speech data
- label
- record
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Telephonic Communication Services (AREA)
Abstract
The present invention proposes a meeting minutes method, apparatus and computer device based on voiceprint features. The method includes: determining a first voiceprint feature corresponding to currently acquired speech data; judging whether the first voiceprint feature matches a previously determined second voiceprint feature; if they do not match, determining a first label corresponding to the first voiceprint feature; and annotating and recording the speech data with the first label. Speech data is thereby distinguished automatically according to voiceprint features and meeting minutes are generated, which saves time and cost, improves the accuracy and reliability of the minutes, and improves the user experience.
Description
Technical field
The present invention relates to the technical field of information management, and in particular to a meeting minutes method, apparatus and computer device based on voiceprint features.
Background technology
Existing methods of reporting and recording meetings typically use devices such as smartphones, video cameras, microphones and voice recorders to record audio and video of each speaker during the meeting. After the meeting, the person responsible for the minutes reviews and replays the recordings and annotates them to distinguish the voices of different participants and compile the minutes.
However, this way of taking minutes is time-consuming and requires dedicated personnel to recognize and distinguish the speakers, which wastes human resources, is costly, and gives a poor user experience.
Summary of the invention
The present invention is intended to solve, at least to some extent, one of the technical problems in the related art.
Therefore, the present invention proposes a meeting minutes method based on voiceprint features, which automatically distinguishes speech data according to voiceprint features and generates meeting minutes, saving time and cost, improving the accuracy and reliability of the minutes, and improving the user experience.
The present invention also proposes a meeting minutes apparatus based on voiceprint features.
The present invention also proposes a computer device.
The present invention also proposes a computer-readable storage medium.
An embodiment of the first aspect of the present invention proposes a meeting minutes method based on voiceprint features, including: determining a first voiceprint feature corresponding to currently acquired speech data; judging whether the first voiceprint feature matches a previously determined second voiceprint feature; if they do not match, determining a first label corresponding to the first voiceprint feature; and annotating and recording the speech data with the first label.
In the meeting minutes method based on voiceprint features of this embodiment, the first voiceprint feature corresponding to the currently acquired speech data is determined first; if the first voiceprint feature does not match any previously determined second voiceprint feature, a first label corresponding to the first voiceprint feature is determined, and the speech data is then annotated and recorded with the first label. Speech data is thereby distinguished automatically according to voiceprint features and meeting minutes are generated, which saves time and cost, improves the accuracy and reliability of the minutes, and improves the user experience.
An embodiment of the second aspect of the present invention proposes a meeting minutes apparatus based on voiceprint features, including: a first determining module, configured to determine a first voiceprint feature corresponding to currently acquired speech data; a judging module, configured to judge whether the first voiceprint feature matches a previously determined second voiceprint feature; a second determining module, configured to determine a first label corresponding to the first voiceprint feature when the first voiceprint feature does not match the previously determined second voiceprint feature; and a first recording module, configured to annotate and record the speech data with the first label.
In the meeting minutes apparatus based on voiceprint features of this embodiment, the first voiceprint feature corresponding to the currently acquired speech data is determined first; if the first voiceprint feature does not match any previously determined second voiceprint feature, a first label corresponding to the first voiceprint feature is determined, and the speech data is then annotated and recorded with the first label. Speech data is thereby distinguished automatically according to voiceprint features and meeting minutes are generated, which saves time and cost, improves the accuracy and reliability of the minutes, and improves the user experience.
An embodiment of the third aspect of the present invention proposes a computer device, including: a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the program, implements the meeting minutes method based on voiceprint features described in the first aspect.
An embodiment of the fourth aspect of the present invention proposes a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the meeting minutes method based on voiceprint features described in the first aspect.
Additional aspects and advantages of the present invention will be set forth in part in the following description, will partly become apparent from that description, or may be learned through practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments with reference to the accompanying drawings, in which:
Fig. 1 is a flowchart of a meeting minutes method based on voiceprint features according to one embodiment of the present invention;
Fig. 2 is a flowchart of a meeting minutes method based on voiceprint features according to another embodiment of the present invention;
Fig. 3 is a structural schematic diagram of a meeting minutes apparatus based on voiceprint features according to one embodiment of the present invention;
Fig. 4 is a structural schematic diagram of a meeting minutes apparatus based on voiceprint features according to another embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary; they are intended to explain the present invention and are not to be construed as limiting it.
Addressing the problems of the existing way of taking minutes, namely that it is time-consuming, requires dedicated personnel to recognize and distinguish speakers, wastes human resources, is costly, and gives a poor user experience, the embodiments of the present invention propose a meeting minutes method based on voiceprint features.
In the meeting minutes method based on voiceprint features provided in the embodiments of the present invention, the first voiceprint feature corresponding to the currently acquired speech data is determined first; if the first voiceprint feature does not match any previously determined second voiceprint feature, a first label corresponding to the first voiceprint feature is determined, and the speech data is then annotated and recorded with the first label. Speech data is thereby distinguished automatically according to voiceprint features and meeting minutes are generated, which saves time and cost, improves the accuracy and reliability of the minutes, and improves the user experience.
The meeting minutes method, apparatus and computer device based on voiceprint features according to the embodiments of the present invention are described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a meeting minutes method based on voiceprint features according to one embodiment of the present invention.
As shown in Fig. 1, the meeting minutes method based on voiceprint features includes the following steps.
Step 101: determine a first voiceprint feature corresponding to currently acquired speech data.
The meeting minutes method based on voiceprint features provided in this embodiment may be performed by the meeting minutes apparatus based on voiceprint features provided in an embodiment of the present invention, hereinafter referred to as the minutes apparatus. The minutes apparatus may be configured in any device such as a mobile phone or a computer to annotate speech data during a meeting and thereby generate meeting minutes.
Specifically, a voice input device such as a microphone may be preconfigured in the minutes apparatus or the device, so that during the meeting, each person's speech data can be acquired through the voice input device.
In a specific implementation, after the speech data of the current speaker is acquired, voiceprint recognition technology can be used to determine the first voiceprint feature corresponding to the currently acquired speech data.
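The patent does not specify how voiceprint features are represented or compared. As an illustrative sketch only, the following assumes each voiceprint feature is a fixed-length embedding vector (as produced by common speaker-recognition front ends) and compares two features by cosine similarity against a tuned threshold; the function names and the 0.85 threshold are hypothetical.

```python
import math

def cosine_similarity(u, v):
    # Similarity between two fixed-length voiceprint embeddings.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def is_same_speaker(feat_a, feat_b, threshold=0.85):
    # Treat two voiceprints as matching (same speaker) when the
    # similarity exceeds a tuned threshold; 0.85 is illustrative.
    return cosine_similarity(feat_a, feat_b) >= threshold
```

In practice the embeddings would come from a speaker-recognition model rather than being compared raw, and the threshold would be calibrated on enrollment data.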
Step 102: judge whether the first voiceprint feature matches a previously determined second voiceprint feature.
Here, the second voiceprint feature refers to any one of the previously determined voiceprint features.
Step 103: if they do not match, determine a first label corresponding to the first voiceprint feature.
Step 104: annotate and record the speech data with the first label.
It can be understood that, each time speech data is acquired, the voiceprint feature corresponding to that speech data can be determined. If this voiceprint feature differs from the voiceprint features corresponding to all speech data acquired before, that is, speech data with this voiceprint feature is acquired for the first time in the meeting, the speech data represents the first utterance of some participant. The newly acquired voiceprint feature can then be recorded and assigned a label, so that the speech data can be annotated and recorded with that label.
Accordingly, before step 102, the method may further include: after the meeting starts, when speech data corresponding to the second voiceprint feature is acquired for the first time, recording the second voiceprint feature.
Specifically, if the first voiceprint feature corresponding to the currently acquired speech data does not match any previously determined second voiceprint feature, that is, the currently acquired speech data is the first utterance of some participant, the first voiceprint feature can be recorded, and a first label corresponding to the first voiceprint feature can be determined, so that the speech data is annotated and recorded with the first label.
Here, the first label uniquely identifies speech data having the first voiceprint feature. Specifically, the first label can be set in any manner as needed.
For example, the first voiceprint feature acquired that does not match any previously determined voiceprint feature may be assigned the label "A", the second such voiceprint feature the label "B", the third such voiceprint feature the label "C", and so on, so that labels are obtained for all voiceprint features that do not match previously determined voiceprint features.
It should be noted that the first label may be a label of any form, such as Chinese characters, letters or pinyin, which is not limited here.
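The "A", "B", "C", ... scheme above can be sketched as a small allocator. This is an illustrative implementation, not part of the patent; the extension to "AA", "AB", ... after "Z" is my own assumption about how the sequence might continue.

```python
from string import ascii_uppercase

class LabelAllocator:
    """Hands out "A", "B", ..., "Z", then "AA", "AB", ... for each
    newly seen (unmatched) voiceprint feature."""

    def __init__(self):
        self.count = 0

    def next_label(self):
        n = self.count
        self.count += 1
        label = ""
        # Convert the counter to a bijective base-26 string.
        while True:
            label = ascii_uppercase[n % 26] + label
            n = n // 26 - 1
            if n < 0:
                break
        return label

alloc = LabelAllocator()
labels = [alloc.next_label() for _ in range(3)]  # ["A", "B", "C"]
```

Any other unique-label scheme (numbers, names entered later by the user) would serve equally well.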
In addition, if the first voiceprint feature corresponding to the currently acquired speech data is identical to the second voiceprint feature corresponding to some previously acquired speech data, it can be determined that the speech data corresponding to the first and second voiceprint features are utterances of the same participant. Since a second label was already assigned to the second voiceprint feature when that participant spoke for the first time, the currently acquired speech data can be annotated and recorded with the label corresponding to the second voiceprint feature.
That is, after step 102, the method may further include: if they match, annotating and recording the speech data with a second label corresponding to the second voiceprint feature.
Here, the second label uniquely identifies speech data having the second voiceprint feature. Specifically, the second label can be set in any manner as needed. It should be noted that the second label may be a label of any form, such as Chinese characters, letters or pinyin, which is not limited here.
As an example, assume that the label corresponding to voiceprint feature a has been determined to be "A" and the label corresponding to voiceprint feature b to be "B". If the voiceprint feature corresponding to the currently acquired speech data is c, then, since c does not match any previously determined voiceprint feature, a label "C" can be assigned to c, and c and its label "C" can be recorded, so that the currently acquired speech data is annotated and recorded with "C". If the voiceprint feature corresponding to the currently acquired speech data is a, then, since it matches the previously determined voiceprint feature a, the currently acquired speech data is annotated and recorded with "A", the label corresponding to a.
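The match-or-register loop of steps 101-104 and the worked example above can be sketched as follows. For clarity this sketch stands in for real voiceprint matching with exact equality on feature tokens ("a", "b", "c"), exactly as the example does; a real system would use a similarity test instead.

```python
def record_utterance(feature, registry, minutes, speech):
    # registry maps known voiceprint features to labels; an
    # unmatched feature is registered and given the next free
    # label (steps 102-104), a matched one reuses its label.
    if feature not in registry:
        labels = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        registry[feature] = labels[len(registry)]
    minutes.append((registry[feature], speech))
    return registry[feature]

# Replaying the example: a -> "A", b -> "B", then new feature c.
registry, minutes = {}, []
record_utterance("a", registry, minutes, "hello")
record_utterance("b", registry, minutes, "hi")
record_utterance("c", registry, minutes, "XXXX")   # new speaker, gets "C"
record_utterance("a", registry, minutes, "again")  # matches, reuses "A"
```

The registry starts empty, so no participant voiceprints need to be enrolled before the meeting begins.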
Specifically, with the meeting minutes method based on voiceprint features provided in this embodiment, speech data can be distinguished automatically according to voiceprint features during the meeting and meeting minutes can be generated, without collecting and storing the voiceprint features of the participants before the meeting begins. This saves time and cost, and the resulting minutes are more accurate and reliable than those produced by manual operation.
In addition, during a meeting, different participants usually speak at different times. In an embodiment of the present invention, the speech data can therefore also be annotated and recorded according to the time of the utterance.
That is, before step 104, the method may further include: determining current time information. Accordingly, step 104 may include: annotating and recording the speech data with the first label and the time information.
Here, the time information may be the real time at which the current speech data is acquired, or the elapsed time from some starting point set as needed. For example, if the current speech data is acquired at 08:08:08 on August 8, 2017, the current time information may be "2017-8-8 08:08:08"; alternatively, timing may start when the meeting begins, so that if the current speech data is acquired 10 minutes and 10 seconds after the meeting started, the current time information may be "00:10:10", and so on.
Specifically, after the current time information and the first label are determined, the speech data can be annotated and recorded with the first label and the time information.
For example, if speech data XXXX is acquired at 08:08:08 on August 8, 2017, and the voiceprint feature corresponding to the speech data has been assigned the label "A", the speech data can be annotated as: 2017-8-8 08:08:08, A says XXXX.
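The timestamped annotation format of this example can be reproduced with a small helper. The function name and the exact output format are illustrative assumptions matching the example in the text (note the example prints the date without zero-padding but the time with it).

```python
from datetime import datetime

def annotate_with_time(label, speech, when=None):
    # Formats one minutes entry like the example in the text:
    # "2017-8-8 08:08:08, A says XXXX". `when` defaults to now.
    when = when or datetime.now()
    return (f"{when.year}-{when.month}-{when.day} "
            f"{when:%H:%M:%S}, {label} says {speech}")

entry = annotate_with_time("A", "XXXX", datetime(2017, 8, 8, 8, 8, 8))
```

An elapsed-time variant ("00:10:10") would instead format the difference between the utterance time and the meeting start time.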
In the meeting minutes method based on voiceprint features of this embodiment, the first voiceprint feature corresponding to the currently acquired speech data is determined first; if the first voiceprint feature does not match any previously determined second voiceprint feature, a first label corresponding to the first voiceprint feature is determined, and the speech data is then annotated and recorded with the first label. Speech data is thereby distinguished automatically according to voiceprint features and meeting minutes are generated, which saves time and cost, improves the accuracy and reliability of the minutes, and improves the user experience.
From the above analysis, after the first voiceprint feature corresponding to the currently acquired speech data is determined, if the first voiceprint feature does not match any previously determined second voiceprint feature, a first label corresponding to the first voiceprint feature can be determined, and the speech data can then be annotated and recorded with the first label. In practice, when taking minutes, the speech data can also be converted into text information and recorded. This case is described in detail below with reference to Fig. 2.
Fig. 2 is a flowchart of a meeting minutes method based on voiceprint features according to another embodiment of the present invention.
As shown in Fig. 2, the method includes the following steps.
Step 201: determine a first voiceprint feature corresponding to currently acquired speech data.
Step 202: judge whether the first voiceprint feature matches a previously determined second voiceprint feature; if so, perform step 203; otherwise, perform step 205.
The specific implementation and principle of steps 201-202 can be found in the detailed description of steps 101-102 in the above embodiment and are not repeated here.
Step 203: convert the speech data into text information.
Step 204: annotate and record the speech data and the text information with a second label corresponding to the second voiceprint feature.
Step 205: determine a first label corresponding to the first voiceprint feature.
Step 206: convert the speech data into text information.
Step 207: annotate and record the speech data and the text information with the first label.
Specifically, if the first voiceprint feature corresponding to the currently acquired speech data does not match any previously determined second voiceprint feature, that is, the currently acquired speech data is the first utterance of some participant, the first voiceprint feature can be recorded and a first label corresponding to it can be determined, so that, after the speech data is converted into text information, the speech data and the text information are annotated and recorded with the first label.
If the first voiceprint feature corresponding to the currently acquired speech data is identical to the second voiceprint feature corresponding to some previously acquired speech data, the speech data corresponding to the first and second voiceprint features are utterances of the same participant, and a second label was already assigned to the second voiceprint feature when that participant spoke for the first time. After the speech data is converted into text information, the currently acquired speech data can therefore be annotated and recorded with the second label.
By converting the speech data into text information and recording it, the user can understand the content of the meeting directly from the text information, which takes less time than understanding the meeting content from the speech data alone.
It is understood that in conference process, the speech of personnel participating in the meeting may be longer, so as to a piece for the minutes of generation
It is tediously long, in embodiments of the present invention, the keyword of meeting after meeting adjourned, can also be generated, so that user passes through pass
Keyword, the quick content of the discussions for understanding meeting.
That is, after step 203 or step 206, the method may further include:
determining whether the text information includes a recorded character feature;
if so, updating the occurrence frequency of the character feature; and
after the meeting ends, generating the keywords of the minutes according to the occurrence frequencies of all recorded character features.
Specifically, character features can be extracted from the text information by means such as word segmentation and filtering out meaningless words.
In a specific implementation, each time the speech data is converted into text information, character features can be extracted from the text information. If a character feature extracted from the current text information has not appeared before, the character feature can be recorded and its occurrence frequency set to 1. If a character feature extracted from the current text information has appeared before, that is, the current text information includes a recorded character feature, the existing occurrence frequency of that character feature can be determined and incremented by 1.
As an example, assume that after the speech data is converted into text information, it is determined that the text information includes "computer" and that no previously obtained text information includes "computer"; then "computer" can be recorded and its occurrence frequency set to 1. If text information obtained later also includes "computer", the occurrence frequency of "computer" can be updated to 2.
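The record-or-increment logic above maps naturally onto a counter. The sketch below is illustrative; the tokenization and the stopword list (standing in for the "filtering out meaningless words" step) are assumptions, not part of the patent.

```python
from collections import Counter

def update_frequencies(frequencies, tokens,
                       stopwords=frozenset({"the", "a", "an"})):
    # Count each meaningful token from one transcribed utterance:
    # a token seen for the first time starts at 1, a known one is
    # incremented (the "computer" example: 1, then 2).
    for token in tokens:
        if token not in stopwords:
            frequencies[token] += 1
    return frequencies

freq = Counter()
update_frequencies(freq, ["the", "computer", "works"])
update_frequencies(freq, ["a", "computer"])  # "computer" now at 2
```

For Chinese text the tokens would come from a word-segmentation step rather than whitespace splitting.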
Further, in an embodiment of the present invention, an occurrence-frequency threshold for keywords can be preset, so that after the meeting ends, the character features whose frequencies exceed the preset threshold are determined to be the keywords of the meeting. For example, assume the preset occurrence-frequency threshold for keywords is 20, and after the meeting ends the occurrence frequency of character feature A is determined to be 10, that of character feature B to be 21, and that of character feature C to be 30. Since the occurrence frequencies of B and C exceed the preset threshold 20, B and C can be determined to be the keywords of the meeting.
Alternatively, the number of keywords can be preset, so that after the meeting ends, all recorded character features are sorted by occurrence frequency from high to low, and the top preset number of character features are determined to be the keywords of the meeting. For example, assume the preset number of keywords is 2, and after the meeting ends the occurrence frequency of character feature A is determined to be 10, that of B to be 21, that of C to be 30, and that of D to be 15. According to the sorted order, the two character features with the highest frequencies, B and C, can be determined to be the keywords of the meeting.
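The two keyword-selection strategies just described (frequency threshold, or top N by frequency) can be sketched directly; the function names are illustrative.

```python
def keywords_by_threshold(frequencies, threshold=20):
    # Keep every character feature whose count exceeds the
    # preset threshold (the B=21, C=30 > 20 example).
    return {w for w, n in frequencies.items() if n > threshold}

def keywords_top_n(frequencies, n=2):
    # Or keep the n most frequent character features.
    ranked = sorted(frequencies.items(), key=lambda kv: -kv[1])
    return [w for w, _ in ranked[:n]]

freq = {"A": 10, "B": 21, "C": 30, "D": 15}
```

Both strategies reproduce the worked examples in the text: B and C are selected either way.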
In addition, in an embodiment of the present invention, the location of the meeting, the meeting date, the names of the participants and the like can also be added to the minutes as needed. Accordingly, when the meeting starts or ends, the location of the meeting, the meeting date, the names of the participants and the like can be determined, so that after the minutes are generated, this information can be added to them.
It should be noted that the location of the meeting, the meeting date, the names of the participants and the like may be generated automatically during the meeting or entered manually by the user, which is not limited here.
In the meeting minutes method based on voiceprint features of this embodiment, after the first voiceprint feature corresponding to the currently acquired speech data is determined, it can be judged whether the first voiceprint feature matches a previously determined second voiceprint feature. If they match, the speech data can be converted into text information, and the speech data and the text information can be annotated and recorded with the second label corresponding to the second voiceprint feature; if they do not match, the first label corresponding to the first voiceprint feature can be determined, the speech data can be converted into text information, and the speech data and the text information can be annotated and recorded with the first label. Speech data is thereby distinguished automatically according to voiceprint features, meeting minutes are generated, and the speech data is converted into text for recording, which saves time and cost, improves the accuracy and reliability of the minutes, and improves the user experience.
Fig. 3 is a structural schematic diagram of a meeting minutes apparatus based on voiceprint features according to one embodiment of the present invention.
As shown in Fig. 3, the meeting minutes apparatus based on voiceprint features includes:
a first determining module 31, configured to determine a first voiceprint feature corresponding to currently acquired speech data;
a judging module 32, configured to judge whether the first voiceprint feature matches a previously determined second voiceprint feature;
a second determining module 33, configured to determine a first label corresponding to the first voiceprint feature when the first voiceprint feature does not match the previously determined second voiceprint feature; and
a first recording module 34, configured to annotate and record the speech data with the first label.
Specifically, the meeting minutes apparatus based on voiceprint features provided in this embodiment can perform the meeting minutes method based on voiceprint features provided in the embodiments of the present invention. The apparatus can be configured in any device such as a mobile phone or a computer to annotate speech data during a meeting and thereby generate meeting minutes.
It should be noted that the foregoing explanation of the embodiments of the meeting minutes method based on voiceprint features also applies to the meeting minutes apparatus based on voiceprint features of this embodiment and is not repeated here.
In the meeting minutes apparatus based on voiceprint features of this embodiment, the first voiceprint feature corresponding to the currently acquired speech data is determined first; if the first voiceprint feature does not match any previously determined second voiceprint feature, a first label corresponding to the first voiceprint feature is determined, and the speech data is then annotated and recorded with the first label. Speech data is thereby distinguished automatically according to voiceprint features and meeting minutes are generated, which saves time and cost, improves the accuracy and reliability of the minutes, and improves the user experience.
Fig. 4 is a structural schematic diagram of a meeting minutes apparatus based on voiceprint features according to another embodiment of the present invention.
As shown in Fig. 4, on the basis of Fig. 3, the meeting minutes apparatus based on voiceprint features further includes:
a second recording module 41, configured to annotate and record the speech data with a second label corresponding to the second voiceprint feature when the first voiceprint feature matches the previously determined second voiceprint feature;
a third recording module 42, configured to record the second voiceprint feature when speech data corresponding to the second voiceprint feature is acquired for the first time after the meeting starts; and
a third determining module 43, configured to determine current time information.
Accordingly, the above first recording module 34 is specifically configured to annotate and record the speech data with the first label and the time information.
In one possible implementation, the above first recording module 34 is further configured to: convert the speech data into text information; and annotate and record the speech data and the text information with the first label.
In another possible implementation, the apparatus further includes:
a fourth determining module 44, configured to determine whether the text information includes a recorded character feature;
an updating module 45, configured to update the occurrence frequency of the character feature when the text information includes the recorded character feature; and
a generating module 46, configured to generate the keywords of the meeting after the meeting ends according to the occurrence frequencies of all recorded character features.
It should be noted that the foregoing explanation of the embodiments of the meeting minutes method based on voiceprint features also applies to the meeting minutes apparatus based on voiceprint features of this embodiment and is not repeated here.
In the meeting minutes apparatus based on voiceprint features of this embodiment, the first voiceprint feature corresponding to the currently acquired speech data is determined first; if the first voiceprint feature does not match any previously determined second voiceprint feature, a first label corresponding to the first voiceprint feature is determined, and the speech data is then annotated and recorded with the first label. Speech data is thereby distinguished automatically according to voiceprint features and meeting minutes are generated, which saves time and cost, improves the accuracy and reliability of the minutes, and improves the user experience.
The present invention also provides a computer device, including: a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the program, implements the meeting minutes method based on voiceprint features described in the embodiments of the first aspect.
Here, the computer device may be any device such as a mobile phone or a computer, which is not limited here.
The present invention also provides a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the meeting minutes method based on voiceprint features in the embodiments of the foregoing first aspect.
The present invention also provides a computer program product, wherein, when the instructions in the computer program product are executed by a processor, the meeting minutes method based on voiceprint features in the embodiments of the foregoing first aspect is performed.
In the description of this specification, references to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples" and the like mean that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, where no contradiction arises, those skilled in the art may combine the different embodiments or examples described in this specification, and the features of those different embodiments or examples.
In addition, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "multiple" means at least two, for example two or three, unless specifically defined otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing the steps of a custom logic function or process. The scope of the preferred embodiments of the present invention includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in a flowchart or otherwise described herein may be considered, for example, an ordered list of executable instructions for implementing logic functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch instructions from an instruction execution system, apparatus, or device and execute them). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transport a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, because the program can be obtained electronically — for example by optically scanning the paper or other medium, then editing, interpreting, or otherwise processing it in a suitable manner if necessary — and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any of the following technologies well known in the art, or a combination thereof, may be used: a discrete logic circuit with logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments can be completed by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software function module. If the integrated module is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and are not to be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, replacements, and variations to the above embodiments within the scope of the present invention.
Claims (14)
- 1. A meeting minutes method based on voiceprint features, characterized by comprising: determining a first voiceprint feature corresponding to currently acquired speech data; judging whether the first voiceprint feature matches a recorded second voiceprint feature; if they do not match, determining a first mark corresponding to the first voiceprint feature; and labeling and recording the speech data with the first mark.
- 2. The method according to claim 1, characterized in that, after judging whether the first voiceprint feature matches the recorded second voiceprint feature, the method further comprises: if they match, labeling and recording the speech data with a second mark corresponding to the second voiceprint feature.
- 3. The method according to claim 1, characterized in that, before judging whether the first voiceprint feature matches the recorded second voiceprint feature, the method further comprises: after the meeting starts, when speech data corresponding to the second voiceprint feature is acquired for the first time, recording the second voiceprint feature.
- 4. The method according to any one of claims 1-3, characterized in that, before labeling and recording the speech data with the first mark, the method further comprises: determining current time information; wherein labeling and recording the speech data with the first mark comprises: labeling and recording the speech data with the first mark and the time information.
- 5. The method according to any one of claims 1-3, characterized in that labeling and recording the speech data with the first mark comprises: converting the speech data into text information; and labeling and recording the speech data and the text information with the first mark.
- 6. The method according to claim 5, characterized in that, after converting the speech data into text information, the method further comprises: determining whether the text information includes a recorded character feature; if so, updating the occurrence frequency of the character feature; and after the meeting ends, generating keywords of the meeting according to the occurrence frequencies of all recorded character features.
- 7. A meeting minutes device based on voiceprint features, characterized by comprising: a first determining module, configured to determine a first voiceprint feature corresponding to currently acquired speech data; a judging module, configured to judge whether the first voiceprint feature matches a recorded second voiceprint feature; a second determining module, configured to determine a first mark corresponding to the first voiceprint feature when the first voiceprint feature does not match the recorded second voiceprint feature; and a first recording module, configured to label and record the speech data with the first mark.
- 8. The device according to claim 7, characterized by further comprising: a second recording module, configured to label and record the speech data with a second mark corresponding to the second voiceprint feature when the first voiceprint feature matches the recorded second voiceprint feature.
- 9. The device according to claim 7, characterized by further comprising: a third recording module, configured to record the second voiceprint feature when speech data corresponding to the second voiceprint feature is acquired for the first time after the meeting starts.
- 10. The device according to any one of claims 7-9, characterized by further comprising: a third determining module, configured to determine current time information; wherein the first recording module is specifically configured to label and record the speech data with the first mark and the time information.
- 11. The device according to any one of claims 7-9, characterized in that the first recording module is further configured to: convert the speech data into text information; and label and record the speech data and the text information with the first mark.
- 12. The device according to claim 11, characterized by further comprising: a fourth determining module, configured to determine whether the text information includes a recorded character feature; an updating module, configured to update the occurrence frequency of the character feature when the text information includes the recorded character feature; and a generating module, configured to generate keywords of the meeting according to the occurrence frequencies of all recorded character features after the meeting ends.
- 13. A computer device, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the program, implements the meeting minutes method based on voiceprint features according to any one of claims 1-6.
- 14. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the meeting minutes method based on voiceprint features according to any one of claims 1-6.
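Claims 5 and 6 describe converting the speech data to text and, after the meeting ends, generating meeting keywords from the occurrence frequencies of recorded character features. A minimal sketch of that keyword step is below; the function name, the `vocabulary` of recorded character features, and `top_n` are illustrative assumptions, since the claims do not specify how the frequencies are ranked:

```python
from collections import Counter

def generate_keywords(texts, vocabulary, top_n=3):
    """Count occurrences of recorded character features (keywords) in the
    transcribed texts and return the most frequent ones after the meeting."""
    counts = Counter()
    for text in texts:
        for term in vocabulary:
            # Update the occurrence frequency of each recorded character feature.
            counts[term] += text.count(term)
    # Generate the meeting keywords from the accumulated frequencies.
    return [term for term, n in counts.most_common(top_n) if n > 0]
```

In practice `texts` would be the per-utterance text information produced by the speech-to-text step, and `vocabulary` the set of character features recorded during the meeting.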
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710743944.1A CN107564531A (en) | 2017-08-25 | 2017-08-25 | Minutes method, apparatus and computer equipment based on vocal print feature |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710743944.1A CN107564531A (en) | 2017-08-25 | 2017-08-25 | Minutes method, apparatus and computer equipment based on vocal print feature |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107564531A | 2018-01-09 |
Family
ID=60975951
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710743944.1A Pending CN107564531A (en) | 2017-08-25 | 2017-08-25 | Minutes method, apparatus and computer equipment based on vocal print feature |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107564531A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150249664A1 (en) * | 2012-09-11 | 2015-09-03 | Auraya Pty Ltd. | Voice Authentication System and Method |
CN105575391A (en) * | 2014-10-10 | 2016-05-11 | 阿里巴巴集团控股有限公司 | Voiceprint information management method, voiceprint information management device, identity authentication method, and identity authentication system |
CN105895102A (en) * | 2015-11-15 | 2016-08-24 | 乐视移动智能信息技术(北京)有限公司 | Recording editing method and recording device |
CN105895077A (en) * | 2015-11-15 | 2016-08-24 | 乐视移动智能信息技术(北京)有限公司 | Recording editing method and recording device |
CN106448683A (en) * | 2016-09-30 | 2017-02-22 | 珠海市魅族科技有限公司 | Method and device for viewing recording in multimedia files |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108447502A (en) * | 2018-03-09 | 2018-08-24 | 福州米鱼信息科技有限公司 | A kind of memo method and terminal based on voice messaging |
CN108447502B (en) * | 2018-03-09 | 2020-09-22 | 福州米鱼信息科技有限公司 | Memorandum method and terminal based on voice information |
CN108733649A (en) * | 2018-04-25 | 2018-11-02 | 北京华夏电通科技有限公司 | A kind of speech recognition text is inserted into the method, apparatus and system of notes document |
WO2019227579A1 (en) * | 2018-05-29 | 2019-12-05 | 平安科技(深圳)有限公司 | Conference information recording method and apparatus, computer device, and storage medium |
CN108922267A (en) * | 2018-07-12 | 2018-11-30 | 河南恩久信息科技有限公司 | A kind of intelligent voice system for wisdom classroom |
CN111063355A (en) * | 2018-10-16 | 2020-04-24 | 上海博泰悦臻网络技术服务有限公司 | Conference record generation method and recording terminal |
CN111667837A (en) * | 2019-02-21 | 2020-09-15 | 奇酷互联网络科技(深圳)有限公司 | Conference record acquisition method, intelligent terminal and device with storage function |
CN111859006A (en) * | 2019-04-17 | 2020-10-30 | 上海颐为网络科技有限公司 | Method, system, electronic device and storage medium for establishing voice entry tree |
CN110322869A (en) * | 2019-05-21 | 2019-10-11 | 平安科技(深圳)有限公司 | Meeting subangle color phoneme synthesizing method, device, computer equipment and storage medium |
CN111312260A (en) * | 2020-04-16 | 2020-06-19 | 厦门快商通科技股份有限公司 | Human voice separation method, device and equipment |
CN111583953A (en) * | 2020-04-30 | 2020-08-25 | 厦门快商通科技股份有限公司 | Voiceprint feature-based voice separation method, device and equipment |
CN111583932A (en) * | 2020-04-30 | 2020-08-25 | 厦门快商通科技股份有限公司 | Sound separation method, device and equipment based on human voice model |
WO2022142610A1 (en) * | 2020-12-28 | 2022-07-07 | 深圳壹账通智能科技有限公司 | Speech recording method and apparatus, computer device, and readable storage medium |
CN113421563A (en) * | 2021-06-21 | 2021-09-21 | 安徽听见科技有限公司 | Speaker labeling method, device, electronic equipment and storage medium |
CN113421563B (en) * | 2021-06-21 | 2024-05-28 | 安徽听见科技有限公司 | Speaker labeling method, speaker labeling device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107564531A (en) | Minutes method, apparatus and computer equipment based on vocal print feature | |
CN107818798B (en) | Customer service quality evaluation method, device, equipment and storage medium | |
US10706873B2 (en) | Real-time speaker state analytics platform | |
US11417343B2 (en) | Automatic speaker identification in calls using multiple speaker-identification parameters | |
US9070369B2 (en) | Real time generation of audio content summaries | |
CN107767869B (en) | Method and apparatus for providing voice service | |
US20180197548A1 (en) | System and method for diarization of speech, automated generation of transcripts, and automatic information extraction | |
US11475897B2 (en) | Method and apparatus for response using voice matching user category | |
CN108986826A (en) | Automatically generate method, electronic device and the readable storage medium storing program for executing of minutes | |
Mariooryad et al. | Building a naturalistic emotional speech corpus by retrieving expressive behaviors from existing speech corpora | |
CN109740077A (en) | Answer searching method, device and its relevant device based on semantic indexing | |
Triantafyllopoulos et al. | Deep speaker conditioning for speech emotion recognition | |
CN107679033A (en) | Text punctuate location recognition method and device | |
CN108091324A (en) | Tone recognition methods, device, electronic equipment and computer readable storage medium | |
JP6732703B2 (en) | Emotion interaction model learning device, emotion recognition device, emotion interaction model learning method, emotion recognition method, and program | |
CN109448704A (en) | Construction method, device, server and the storage medium of tone decoding figure | |
CN110890088A (en) | Voice information feedback method and device, computer equipment and storage medium | |
CN107767873A (en) | A kind of fast and accurately offline speech recognition equipment and method | |
CN110853621A (en) | Voice smoothing method and device, electronic equipment and computer storage medium | |
CN112053692A (en) | Speech recognition processing method, device and storage medium | |
CN110556098B (en) | Voice recognition result testing method and device, computer equipment and medium | |
CN112201253B (en) | Text marking method, text marking device, electronic equipment and computer readable storage medium | |
Pathak et al. | Recognizing emotions from speech | |
CN112667787A (en) | Intelligent response method, system and storage medium based on phonetics label | |
CN111583932A (en) | Sound separation method, device and equipment based on human voice model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20180109 |