CN107025913A - Recording method and terminal - Google Patents
Recording method and terminal
- Publication number
- CN107025913A (application number CN201610073408.0A)
- Authority
- CN
- China
- Prior art keywords
- mark
- time point
- user profile
- terminal
- audio
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
- G11B20/10527—Audio or video recording; Data buffering arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
- G11B20/10527—Audio or video recording; Data buffering arrangements
- G11B2020/10537—Audio or video recording
- G11B2020/10546—Audio or video recording specifically adapted for audio data
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Telephone Function (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
An embodiment of the invention discloses a recording method, including: during audio recording, obtaining an i-th piece of label information, the i-th label information including an i-th mark time point and an i-th mark identifier, where N ≥ i ≥ 1 and N ≥ 2; determining, according to a correspondence between the i-th mark identifier and preset mark identifiers and interviewee user information, the user information corresponding to the i-th mark identifier; and, when i ≠ N, saving the audio data between the i-th mark time point and the (i+1)-th mark time point as an i-th audio file matched with the user information corresponding to the i-th mark identifier. An embodiment of the invention further discloses a terminal.
Description
Technical field
The present invention relates to information processing technology in the field of electronic applications, and in particular to a recording method and terminal.
Background art
With the popularization of terminals, the terminal has become an electronic device that is indispensable in daily life and carried at all times, so that events around the user can be recorded at any moment. On some occasions, the terminal is also required to record information by way of audio recording, for example, for meeting minutes.
In the prior art, during a recording process, the terminal starts its recording settings and collects the voice information at the scene through a microphone to obtain audio data, so that at any later time the user can reproduce the voice information of the scene by playing back the audio data on the terminal. For example, a user can record the content of a meeting through the microphone and then reproduce the meeting content by playing back the audio data on the terminal, so as to facilitate interpretation of the records.
However, if the user needs to look up predetermined content in the recorded audio data, for example when several persons have spoken in a meeting, the user has to adjust the playback progress of the audio data and audition it in order to find the speech content of a particular person. Therefore, in the prior art, the operation of searching for predetermined audio content in audio data is comparatively laborious and not user-friendly, which degrades the user experience.
Summary of the invention
In order to solve the above technical problems, embodiments of the present invention are expected to provide a recording method and terminal that can record different audio for different users, embodying a user-friendly design and improving the intelligence of the terminal.
The technical solution of the invention is realized as follows:
An embodiment of the invention provides a recording method, including:
during audio recording, obtaining an i-th piece of label information, the i-th label information including an i-th mark time point and an i-th mark identifier, where N ≥ i ≥ 1 and N ≥ 2;
determining, according to a correspondence between the i-th mark identifier and preset mark identifiers and interviewee user information, the user information corresponding to the i-th mark identifier; and
when i ≠ N, saving the audio data between the i-th mark time point and the (i+1)-th mark time point as an i-th audio file matched with the user information corresponding to the i-th mark identifier.
In the above solution, after determining the user information corresponding to the i-th mark identifier according to the correspondence between the i-th mark identifier and the preset mark identifiers and user information, the method further includes:
when i = N, obtaining a recording end time point;
saving the audio data between the i-th mark time point and the recording end time point as the i-th audio file matched with the user information corresponding to the i-th mark identifier;
detecting whether, among the N audio files, there are at least two audio files matched with identical user information; and
if so, synthesizing the at least two audio files into one audio file, the one audio file being matched with the same user information as the at least two audio files.
In the above solution, after determining the user information corresponding to the i-th mark identifier according to the correspondence between the i-th mark identifier and the preset mark identifiers and user information, the method further includes:
converting the user information corresponding to the i-th mark identifier into i-th audio information;
inserting the i-th audio information on a first track at the i-th mark time point;
when i ≠ N, inserting the audio data between the i-th mark time point and the (i+1)-th mark time point on a second track;
when i = N, obtaining the recording end time point;
inserting the audio data between the i-th mark time point and the recording end time point on the second track; and
synthesizing the N pieces of audio information on the first track and the audio data on the second track into one synthesized recording file.
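The two-track scheme above can be sketched abstractly in Python: an announcement (the audio rendering of the user information) is placed on the first track at each mark time point, the speech segment between consecutive mark time points goes on the second track, and the synthesized recording is the time-ordered merge of both tracks. A real implementation would mix PCM buffers; the `Event` representation and the function names here are illustrative assumptions, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class Event:
    time: float        # seconds from the start of the recording
    track: int         # 1 = audio-information track, 2 = speech track
    payload: str       # announcement (track 1) or speech-segment label (track 2)

def synthesize(marks, end_time):
    """marks: list of (mark_time, user_info) in chronological order.
    Returns one merged, time-ordered event list standing in for the
    synthesized recording file."""
    events = []
    for i, (t, user) in enumerate(marks):
        # the i-th audio information is inserted on track 1 at the i-th mark time point
        events.append(Event(t, 1, f"announce:{user}"))
        # audio between this mark and the next (or the recording end) goes on track 2
        seg_end = marks[i + 1][0] if i + 1 < len(marks) else end_time
        events.append(Event(t, 2, f"speech[{t}-{seg_end}]"))
    return sorted(events, key=lambda e: (e.time, e.track))

timeline = synthesize([(0.0, "Zhang Ming"), (95.0, "Li Si")], end_time=180.0)
```

On playback of such a merged timeline, each speaker's name would be heard just before the corresponding speech segment, which is the point of the two-track design.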
In the above solution, before obtaining the i-th label information, the method further includes:
obtaining a preset interviewee user information database; and
determining the correspondence between the preset mark identifiers and the interviewee user information according to the preset interviewee user information database.
In the above solution, obtaining the i-th label information includes:
obtaining an i-th first operation, the first operation being used for determining the i-th label information;
obtaining the i-th mark identifier according to the i-th first operation; and
obtaining the occurrence time of the i-th first operation, the occurrence time of the i-th first operation being the i-th mark time point.
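The three acquisition steps above (first operation → mark identifier → occurrence time) can be sketched as follows. The class and method names are illustrative assumptions; the patent does not prescribe an implementation.

```python
class Recorder:
    """Minimal sketch of label-information acquisition during a recording."""
    def __init__(self, start_time):
        self.start_time = start_time  # moment the current recording began

    def on_first_operation(self, gesture_letters, now):
        # the i-th mark identifier is derived from the data of the i-th first operation
        mark_id = "".join(gesture_letters)
        # the occurrence time of the operation is the i-th mark time point,
        # measured as an offset from the start of the current recording
        mark_time = now - self.start_time
        return {"mark_id": mark_id, "mark_time": mark_time}

rec = Recorder(start_time=0.0)
label = rec.on_first_operation(["Z", "M"], now=1800.0)  # gesture 30 minutes in
```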
An embodiment of the invention provides a terminal, including:
an acquiring unit, configured to obtain an i-th piece of label information during audio recording, the i-th label information including an i-th mark time point and an i-th mark identifier, where N ≥ i ≥ 1 and N is a positive integer;
a determining unit, configured to determine the user information corresponding to the i-th mark identifier according to the correspondence between the i-th mark identifier obtained by the acquiring unit and the preset mark identifiers and interviewee user information; and
a storage unit, configured to, when i ≠ N, save the audio data between the i-th mark time point and the (i+1)-th mark time point obtained by the acquiring unit as an i-th audio file matched with the user information corresponding to the i-th mark identifier determined by the determining unit.
In the above terminal, the terminal further includes a detection unit and a synthesis unit;
the acquiring unit is further configured to obtain a recording end time point when i = N, after the determining unit determines the user information corresponding to the i-th mark identifier according to the correspondence between the i-th mark identifier and the preset mark identifiers and user information;
the storage unit is further configured to save the audio data between the i-th mark time point and the recording end time point obtained by the acquiring unit as the i-th audio file matched with the user information corresponding to the i-th mark identifier determined by the determining unit;
the detection unit is configured to detect whether, among the N audio files saved by the storage unit, there are at least two audio files matched with identical user information; and
the synthesis unit is configured to, if the detection unit detects that such files exist, synthesize the at least two audio files saved by the storage unit into one audio file, the one audio file being matched with the same user information as the at least two audio files.
In the above terminal, the terminal further includes a conversion unit and an insertion unit;
the conversion unit is configured to convert the user information corresponding to the i-th mark identifier determined by the determining unit into i-th audio information, after the determining unit determines the user information corresponding to the i-th mark identifier according to the correspondence between the i-th mark identifier and the preset mark identifiers and user information;
the insertion unit is configured to insert the i-th audio information converted by the conversion unit on a first track at the i-th mark time point obtained by the acquiring unit, and, when i ≠ N, to insert the audio data between the i-th mark time point and the (i+1)-th mark time point on a second track;
the acquiring unit is further configured to obtain the recording end time point when i = N;
the insertion unit is further configured to insert the audio data between the i-th mark time point and the recording end time point obtained by the acquiring unit on the second track; and
the synthesis unit is further configured to synthesize the N pieces of audio information inserted on the first track by the insertion unit and the audio data on the second track into one synthesized recording file.
In the above terminal, the acquiring unit is further configured to obtain a preset interviewee user information database before obtaining the i-th label information; and
the determining unit is further configured to determine the correspondence between the preset mark identifiers and the interviewee user information according to the preset interviewee user information database obtained by the acquiring unit.
In the above terminal, the acquiring unit is specifically configured to obtain an i-th first operation, the first operation being used for determining the i-th label information; to obtain the i-th mark identifier according to the i-th first operation; and to obtain the occurrence time of the i-th first operation, the occurrence time of the i-th first operation being the i-th mark time point.
Embodiments of the invention provide a recording method and terminal: during audio recording, an i-th piece of label information is obtained, the i-th label information including an i-th mark time point and an i-th mark identifier, where N ≥ i ≥ 1 and N ≥ 2; according to a correspondence between the i-th mark identifier and preset mark identifiers and interviewee user information, the user information corresponding to the i-th mark identifier is determined; and, when i ≠ N, the audio data between the i-th mark time point and the (i+1)-th mark time point is saved as an i-th audio file matched with the user information corresponding to the i-th mark identifier. With the above technical solution, because the terminal saves, during recording, the information of each speaker or interviewee (the user information) in correspondence with the mark associated with that speaker's speech content, the terminal can record different audio for different users, embodying a user-friendly design and improving the intelligence of the terminal.
Brief description of the drawings
Fig. 1 is a first flow chart of a recording method provided by an embodiment of the invention;
Fig. 2 is a second flow chart of a recording method provided by an embodiment of the invention;
Fig. 3 is a schematic diagram of a record label setting interface provided by an embodiment of the invention;
Fig. 4 is a third flow chart of a recording method provided by an embodiment of the invention;
Fig. 5 is a fourth flow chart of a recording method provided by an embodiment of the invention;
Fig. 6 is a fifth flow chart of a recording method provided by an embodiment of the invention;
Fig. 7 is a first structural diagram of a terminal provided by an embodiment of the invention;
Fig. 8 is a second structural diagram of a terminal provided by an embodiment of the invention;
Fig. 9 is a third structural diagram of a terminal provided by an embodiment of the invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the invention.
Embodiment one
An embodiment of the invention provides a recording method. As shown in Fig. 1, the method may include:
S101: during audio recording, obtaining an i-th piece of label information, the i-th label information including an i-th mark time point and an i-th mark identifier, where N ≥ i ≥ 1 and N ≥ 2.
It should be noted that the recording method provided by the embodiment of the invention is applied to situations in which multiple interviewees are recorded in a single recording session, or in which multiple persons speaking in a meeting are recorded, i.e., to meeting recording or multi-person interview recording.
Optionally, the terminal in the embodiment of the invention is an electronic device with a recording function, for example a recording pen, a smart phone, or a tablet computer. The terminal in the embodiment of the invention may receive the label information through a touch screen, may perform the corresponding mark setting on a setting interface, or may receive the label information through a sensor capable of sensing touch operations; the embodiment of the invention is not limited in this respect.
It should be noted that "i-th" in the embodiment of the invention refers to sequential order: the identification information obtained first is the 1st identification information, the identification information obtained second is the 2nd identification information, and so on.
In particular, because the embodiment of the invention is aimed at two or more speeches, N in the embodiment of the invention is at least 2, and the specific value of N is determined according to the actual speaking situation of the users.
Specifically, as shown in Fig. 2, the process by which the terminal obtains the i-th piece of identification information in the embodiment of the invention may include:
S1011: obtaining an i-th first operation, the first operation being used for determining the i-th label information.
After the terminal starts recording audio data, the user can input a mark identifier on the terminal's touch screen, induction region, or mark setting interface; that is, the terminal obtains the i-th first operation for determining the i-th label information.
Optionally, in the embodiment of the invention, the first operation may be a gesture or an input operation; the specific form of the first operation is not limited.
Exemplarily, the user may slide an i-th first gesture on the touch screen or induction region of the terminal, whereby the terminal obtains the i-th first operation (a gesture operation). Alternatively, the user may directly perform an i-th input operation on a terminal having a setting interface or setting buttons, so that the terminal obtains the i-th first operation (an input operation).
It should be noted that, because the embodiment of the invention is premised on recording the audio data of multiple users in a single recording session, multiple pieces of label information may need to be obtained, so that multiple first operations occur; and because one user may speak in different periods and thus be recorded by the terminal multiple times, the first operations obtained by the terminal, and the mark identifiers corresponding to them, may turn out to be identical. That is, the first operations may be the same as or different from one another, depending on what is actually obtained.
S1012: obtaining the i-th mark identifier according to the i-th first operation.
After the terminal obtains the i-th first operation, the terminal can obtain the operation data produced by the i-th first operation; this operation data is the i-th mark identifier.
Optionally, the mark identifier in the embodiment of the invention may be a figure, a symbol, a number, a letter, or the like; the embodiment of the invention is not limited in this respect.
Exemplarily, while user A is recording with a mobile phone, when the 2nd user speaks, user A slides a zig-zag gesture on the touch screen of the phone; the touch screen senses the gesture as the letter "Z", and here "Z" serves as the mark identifier. Similarly, as shown in Fig. 3, user A may also obtain the mark identifier "ZM" entered through an input operation on the mark setting interface of the phone.
It should be noted that the first operation in the embodiment of the invention is a specific gesture, action, or input used for marking, for example a specific letter gesture; only when the terminal obtains the specific gesture can it recognize the mark identifier from that gesture. If, during recording, the terminal receives an operation outside the set first-operation types, the terminal does not treat that operation as a mark. In this way, erroneous marks caused by accidental operations or unintended touches during recording are avoided.
Optionally, the terminal obtains the first operation within a preset time starting from the beginning of the operation; the preset time may be, for example, 30 seconds, and the specific value may be set according to actual settings; the embodiment of the invention is not limited in this respect.
It should be noted that the first operation in the embodiment of the invention may consist of two touch actions or inputs; the terminal therefore needs to collect the first operation within a preset time, for example within the preset time starting from the moment the first touch occurs. In this way, when the i-th first operation consists of two letter gestures, the terminal is prevented from mistaking it for two separate first operations.
Exemplarily, when user A records with a mobile phone, user A performs letter-image gesture operations on the touch screen of the phone; the phone extracts key points from the edge information of the letter gesture image to recognize the gesture, and may display the corresponding letter gesture image on the screen. For example, when "Zhang Ming" speaks, the user can input the letter gestures "Z" and "M" on the screen within 30 seconds, in that order. Because the 30-second preset time is short, i.e., the interval between the phone obtaining "Z" and "M" is short, the phone can judge from the letter gesture operations that "ZM" is one complete mark identifier.
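The windowing rule above — letter gestures arriving within the preset time form one complete identifier, while a longer gap starts a new one — can be sketched as follows. The function name and the list-of-tuples input format are illustrative assumptions.

```python
def group_gestures(gestures, window=30.0):
    """gestures: chronological list of (timestamp, letter).
    Letters arriving within `window` seconds of the group's first letter
    are joined into one complete mark identifier."""
    groups, current, start = [], [], None
    for t, letter in gestures:
        if current and t - start > window:
            # gap exceeded the preset time: close the previous identifier
            groups.append("".join(current))
            current = []
        if not current:
            start = t
        current.append(letter)
    if current:
        groups.append("".join(current))
    return groups

# "Z" then "M" within 30 s -> one identifier; a later pair forms another
ids = group_gestures([(0.0, "Z"), (5.0, "M"), (120.0, "L"), (125.0, "S")])
```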
S1013: obtaining the occurrence time of the i-th first operation, the occurrence time of the i-th first operation being the i-th mark time point.
When the terminal obtains the i-th first operation, it simultaneously obtains the occurrence time corresponding to the i-th first operation, namely the i-th mark time point.
It should be noted that the occurrence time of the i-th first operation in the embodiment of the invention takes the start of the current recording as its initial time; that is, the occurrence time of the i-th first operation is the difference between the moment the i-th first operation occurs and the initial time.
Exemplarily, user A starts recording with a mobile phone at 10:00; when the phone obtains the 3rd gesture operation at 10:30, the phone records the occurrence time of the 3rd gesture as 30 minutes.
S102: determining the user information corresponding to the i-th mark identifier according to the correspondence between the i-th mark identifier and the preset mark identifiers and interviewee user information.
After the terminal obtains the i-th piece of label information, because the terminal has obtained, before recording, the correspondence between the preset mark identifiers and the interviewee user information, the terminal can determine the user information corresponding to the i-th mark identifier according to the i-th mark identifier and that correspondence.
It should be noted that the user information in the embodiment of the invention may be the speaker's name, the user's avatar, or other information that can characterize the speaker's identity.
It should be noted that the specific process of obtaining the correspondence between the preset mark identifiers and the interviewee user information will be described in detail in subsequent embodiments.
Optionally, the correspondence between the preset mark identifiers and the interviewee user information may be a correspondence list between mark identifiers and user identity information, for example a correspondence list between mark identifiers and users' names or avatars.
Exemplarily, the correspondence between the preset mark identifiers and the interviewee user information may use the initials of a user's name as the mark identifier, with the mark identifier corresponding to that user's name. For example, a mark identifier of the form "ZM" corresponds to Zhang Ming.
Exemplarily, when user A records a meeting with a mobile phone and the i-th mark identifier obtained is "ZM", the phone finds "ZM - Zhang Ming" in the correspondence between the preset mark identifiers and the interviewee user information; the phone has then determined that the user information corresponding to the i-th mark identifier is Zhang Ming.
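The lookup in the example above amounts to resolving a mark identifier through the preset correspondence list; a minimal sketch, with the table contents and function name as illustrative assumptions:

```python
def user_for_mark(mark_id, correspondence):
    """Resolve the i-th mark identifier to the interviewee's user information;
    identifiers outside the preset correspondence fall back to a placeholder."""
    return correspondence.get(mark_id, "unknown speaker")

# preset correspondence: the initials of each name serve as the mark identifier
preset = {"ZM": "Zhang Ming", "LS": "Li Si", "ZS": "Zhang San"}
speaker = user_for_mark("ZM", preset)
```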
S103: when i ≠ N, saving the audio data between the i-th mark time point and the (i+1)-th mark time point as an i-th audio file matched with the user information corresponding to the i-th mark identifier.
After the terminal determines the user information corresponding to the i-th mark identifier according to the correspondence between the i-th mark identifier and the preset mark identifiers and interviewee user information, when i ≠ N, i.e., when the terminal obtains the (i+1)-th piece of label information, the terminal can save the audio data recorded between the i-th mark time point and the (i+1)-th mark time point as an i-th audio file matched with the user information corresponding to the i-th mark identifier.
Specifically, the terminal can cut out the audio data segment recorded between the i-th mark time point and the (i+1)-th mark time point, and save it as the i-th audio file named with the user information corresponding to the i-th mark identifier.
It should be noted that, when i ≠ N (the terminal obtains the (i+1)-th piece of label information), the recording has not yet ended; the terminal therefore first saves separately the audio data recorded between the i-th mark time point and the (i+1)-th mark time point obtained next. Meanwhile, the terminal also continues the normal recording of the user corresponding to the (i+1)-th mark identifier.
Exemplarily, while user A takes meeting minutes with a mobile phone, the phone proceeds in the order of consecutive mark time points: after the 2nd mark identifier has been obtained, the phone saves the first segment as the 1st audio file, and the 2nd mark time point then serves as the start mark point of the next segment. The title of the recording file is automatically saved, according to the correspondence between the preset mark identifiers and the interviewee user information, as the user information of the speaker (the name, e.g. "Zhang Ming"); when the user information in the embodiment of the invention also includes the user's avatar, the 1st audio file can also be displayed together with the speaker's avatar. If multiple pieces of label information follow, the segments are saved according to the same principle. Further, if the same person (identical label information) has spoken in different periods, the audio data can also be saved in segments in this way, or automatically saved under names such as "Zhang Ming-1", "Zhang Ming-2".
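The segmentation and naming rule above — each segment spans from one mark time point to the next, named after the speaker, with "-1", "-2" suffixes for repeat speakers — can be sketched as follows. Segments are represented as (start, end) pairs instead of real audio; all names are illustrative.

```python
from collections import defaultdict

def save_segments(marks, end_time):
    """marks: chronological list of (mark_time, user_info).
    Returns {file_name: (start, end)}; speakers with several segments
    get '-1', '-2', ... suffixes, a single-segment speaker keeps a plain name."""
    counts = defaultdict(int)
    files = {}
    for i, (start, user) in enumerate(marks):
        # the segment ends at the next mark time point, or at the recording end
        end = marks[i + 1][0] if i + 1 < len(marks) else end_time
        counts[user] += 1
        files[f"{user}-{counts[user]}"] = (start, end)
    # drop the suffix for anyone who spoke only once
    for user, n in counts.items():
        if n == 1:
            files[user] = files.pop(f"{user}-1")
    return files

files = save_segments([(0, "Zhang Ming"), (60, "Li Si"), (120, "Zhang Ming")], 200)
```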
Further, as shown in Fig. 4, after S102, the recording method provided by the embodiment of the invention further includes S104-S107, as follows:
S104: when i = N, obtaining a recording end time point.
After the terminal determines the user information corresponding to the i-th mark identifier according to the correspondence between the i-th mark identifier and the preset mark identifiers and interviewee user information, the i-th piece of label information obtained by the terminal may be the last one; the terminal will then receive no (i+1)-th piece of label information as the recording proceeds. Therefore, in the case of i = N, the terminal obtains the time point at which the recording ends.
S105: saving the audio data between the i-th mark time point and the recording end time point as the i-th audio file matched with the user information corresponding to the i-th mark identifier.
After the terminal obtains the recording end time point, the terminal can save the audio data between the i-th mark time point and the recording end time point (namely the audio data of the last speaker) as the i-th audio file matched with the user information corresponding to the i-th mark identifier.
S106: detecting whether, among the N audio files, there are at least two audio files matched with identical user information.
After the terminal finishes saving the N audio files, because the same speaker may have spoken several times at different time points, there may be multiple audio files corresponding to the same speaker; the terminal can therefore detect whether any of the user information corresponding to the N audio files or audio segments is identical.
S107: if so, synthesizing the at least two audio files into one audio file, the one audio file being matched with the same user information as the at least two audio files.
When the terminal detects that there are at least two audio files matched with the same user information, the terminal can combine the at least two audio files matched with that user information and synthesize them into one audio file.
Exemplarily, if the audio files saved separately by the terminal are "Zhang Ming-1", "Li Si", "Zhang San", and "Zhang Ming-2", the terminal splices the audio files corresponding to "Zhang Ming-1" and "Zhang Ming-2", synthesizes them into one audio file, and names that file "Zhang Ming". In this way, the speeches of the same speaker can be gathered together, which facilitates the user's later inquiry and arrangement.
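The splicing step in the example above can be sketched as follows; file contents are stand-in byte strings rather than real audio, and the "-1"/"-2" suffix convention follows the naming rule described earlier. Names and the suffix-detection heuristic are illustrative assumptions.

```python
def merge_by_user(files):
    """files: chronological list of (name, data). Files saved as 'Name-1',
    'Name-2', ... for the same speaker are spliced, in order, into one
    file under the plain name; singly-named files pass through unchanged."""
    merged = {}
    for name, data in files:
        # strip a trailing '-<digit>' suffix to recover the speaker's name
        base = name.rsplit("-", 1)[0] if name[-1].isdigit() and "-" in name else name
        merged[base] = merged.get(base, b"") + data
    return merged

merged = merge_by_user([("Zhang Ming-1", b"AA"), ("Li Si", b"BB"),
                        ("Zhang San", b"CC"), ("Zhang Ming-2", b"DD")])
```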
Further, as shown in Fig. 5, before S101, the recording method provided by the embodiment of the invention further includes S108-S109, as follows:
S108: obtaining a preset interviewee user information database.
S109: determining the correspondence between the preset mark identifiers and the interviewee user information according to the preset interviewee user information database.
It should be noted that, before the user takes meeting minutes or the recording work starts, the specific speakers to be recorded (the preset interviewee user information database) can first be obtained, and the mark identifier corresponding to each speaker is then set in the terminal.
Specifically, the preset interviewee user information database may be compiled manually by the user. The specific process by which the terminal determines the correspondence between the preset mark identifiers and the interviewee user information according to the preset interviewee user information database may be that the terminal sets the form of the mark identifiers and then associates them with the user information of each interviewee (speaker). The concrete implementation may be any information association scheme in the prior art; the embodiment of the invention is not limited in this respect.
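One way to build the correspondence from the preset database is sketched below, using the initials-of-name rule from the earlier example ("ZM" → Zhang Ming). The patent leaves the association scheme open, so this rule, like the function name, is an assumption for illustration only.

```python
def build_correspondence(interviewees):
    """Derive a mark identifier for each entry in the preset interviewee
    database by taking the initials of the (romanized) name; returns the
    identifier -> user-information correspondence table."""
    table = {}
    for name in interviewees:
        initials = "".join(part[0].upper() for part in name.split())
        table[initials] = name
    return table

table = build_correspondence(["Zhang Ming", "Li Si", "Zhang San"])
```

In practice two names can share initials, so a real terminal would need a tie-breaking rule or let the user edit the identifiers on the mark setting interface.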
It should be noted that one usage scenario of the recording method provided by the embodiment of the invention may be a terminal recording in the background with a dark screen, a terminal recording in the background with the screen lit, or a terminal without a screen recording. The user can select different input modes for the label information according to different usage scenarios, so that recording can be performed whether the terminal's screen is dark or lit. For example, when the terminal has no screen or the screen is locked while the terminal is still recording, the user can complete the marking action by a gesture operation directly in a predetermined induction zone on the screen; or, when the terminal has a screen that is lit and is recording in the background, the user can input the label information directly in the induction zone or on the setting interface.
With the recording method provided in this embodiment of the present invention, the terminal obtains an i-th piece of mark information during audio recording, where the i-th piece of mark information includes an i-th mark time point and an i-th mark identifier, N ≥ i ≥ 1 and N ≥ 2; determines, according to the i-th mark identifier and the correspondence between preset mark identifiers and interviewed-user information, the user information corresponding to the i-th mark identifier; and, when i ≠ N, saves the audio data between the i-th mark time point and the (i+1)-th mark time point as an i-th audio file matched with the user information corresponding to the i-th mark identifier. With this technical solution, because the terminal saves the information of each speaker or interviewee (the user information) together with the mark corresponding to that speaker's speech content during recording, the terminal can record separate audio for different users, which reflects user-oriented design and improves the intelligence of the terminal.
Embodiment two
An embodiment of the present invention provides a recording method; as shown in Fig. 6, the method may include:
S201: Acquire a preset interviewed-user information base.
S202: Determine, according to the preset interviewed-user information base, the correspondence between preset mark identifiers and interviewed-user information.
It should be noted that the recording method provided in this embodiment of the present invention applies to recording multiple interviewees in a single recording, or to recording multiple people speaking in one meeting, that is, to conference recording or multi-person interview recording.
Optionally, the terminal in this embodiment of the present invention is an electronic device with a recording function, for example a voice recorder, a smartphone, or a tablet computer. The terminal may receive mark information through a touch screen, may perform mark setting through a corresponding setting interface, or may receive mark information through a sensor capable of sensing touch operations; this is not limited in this embodiment of the present invention.
It should be noted that, before the user begins taking minutes or before the recording work starts, the terminal may first obtain the specific list of speakers to be recorded (the preset interviewed-user information base), and then set a corresponding mark identifier for each speaker.
Specifically, the preset interviewed-user information base may be compiled manually by the user. The process in which the terminal determines, according to the preset interviewed-user information base, the correspondence between preset mark identifiers and interviewed-user information may be that the terminal sets the form of each mark identifier and then associates it with the user information of each interviewee (speaker). The specific implementation may use any information-association scheme in the prior art, which is not limited in this embodiment of the present invention.
It should be noted that a usage scenario of the recording method provided in this embodiment of the present invention may be a terminal recording in the background with the screen dark, a terminal recording in the background with the screen lit, or a terminal without a screen, and so on. The user may select different input modes for the mark information according to the usage scenario, so that recording can proceed whether the terminal screen is dark or lit. For example, when the terminal has no screen or the screen is locked while the terminal is still recording, the user may complete the marking action directly with a gesture in a predetermined sensing area on the screen; or, when the terminal has a screen that is lit during background recording, the user may input the mark information directly in the sensing area or on a setting interface.
S203: Obtain an i-th piece of mark information during audio recording, where the i-th piece of mark information includes an i-th mark time point and an i-th mark identifier, N ≥ i ≥ 1, N ≥ 2.
It should be noted that the index i in this embodiment of the present invention follows the order of occurrence: the first piece of mark information obtained is the 1st piece of mark information, the second obtained is the 2nd, and so on.
In particular, because this embodiment of the present invention is directed at two or more speeches, N is at least 2, and the specific value of N is determined by how the users actually speak.
Specifically, as shown in Fig. 2, the process in which the terminal obtains the i-th piece of mark information in this embodiment of the present invention may include:
S1011: Obtain an i-th first operation, where the first operation is used to determine the i-th piece of mark information.
After the terminal starts recording audio data, the user may input a mark identifier on the terminal's touch screen, in a sensing area, or on a mark setting interface; that is, the terminal obtains the i-th first operation used to determine the i-th piece of mark information.
Optionally, in this embodiment of the present invention, the first operation may be a gesture or an input operation; the specific form of the first operation is not limited.
For example, the user may draw an i-th first gesture on the terminal's touch screen or sensing area, whereby the terminal obtains the i-th first operation (a gesture operation). Alternatively, the user may perform an i-th input operation directly on a terminal with a setting interface or setting buttons, so that the terminal obtains the i-th first operation (an input operation).
It should be noted that, because this embodiment of the present invention is premised on recording the audio data of multiple users in a single recording, multiple pieces of mark information may need to be obtained, and hence multiple first operations may be obtained. Moreover, because one user may speak in different periods and be recorded by the terminal, the first operations obtained by the terminal, and the mark identifiers corresponding to them, may turn out to be the same. That is, the first operations may be identical to or different from one another, depending on what is actually obtained.
S1012: Obtain the i-th mark identifier according to the i-th first operation.
After the terminal obtains the i-th first operation, because the i-th first operation lets the terminal obtain the operation data produced by that operation, the operation data obtained by the terminal is the i-th mark identifier.
Optionally, the mark identifier in this embodiment of the present invention may be a figure, a symbol, a digit, a letter, or the like, which is not limited in this embodiment of the present invention.
For example, while user A is recording with a mobile phone and the 2nd user is speaking, user A draws a zigzag gesture on the phone's touch screen, and the touch screen recognizes the zigzag gesture as a "Z"; here, "Z" serves as the 3rd mark identifier. Similarly, as shown in Fig. 3, user A may also, on the phone's mark setting interface, obtain the mark identifier "ZM" entered through an input operation.
It should be noted that the first operation in this embodiment of the present invention is a specific gesture, action, or input used for marking, for example a specific letter gesture; only when the terminal obtains the specific gesture can it recognize the mark identifier from that gesture. If, during recording, the terminal obtains an operation outside the set first-operation types, the terminal does not treat that operation as a mark. In this way, erroneous marks caused by misoperations or inadvertent touches during recording are avoided.
Optionally, the terminal gathers the first operation within a preset time starting from when the operation begins. The preset time may be 30 seconds or the like; the specific value may be set according to the actual configuration and is not limited in this embodiment of the present invention.
It should be noted that the first operation in this embodiment of the present invention may consist of two touch actions, multiple inputs, or the like; therefore, the terminal needs to gather the first operation within one preset time, for example within the preset time starting from the moment the first operation occurs. This avoids the situation where, when the i-th first operation consists of two letter gestures, the terminal mistakes it for two separate first operations.
For example, when user A records with a mobile phone, user A performs letter-image gesture operations on the phone's touch screen; the phone extracts key points from the edge information of the letter-gesture image to recognize the gesture, and may then display the corresponding letter-gesture image on the screen. For instance, when "Zhang Ming" speaks, the user may draw the letter gestures "Z" and "M" on the screen within 30 seconds to represent him, with "Z" and "M" input in sequence. It can be understood that, because the 30-second preset time is short, that is, the interval between the phone obtaining "Z" and "M" is short, the phone can judge "ZM" to be one complete mark identifier according to the letter-gesture operations.
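The grouping rule in this example can be sketched as follows; the function name, the stroke representation, and the 30-second window are illustrative assumptions, not the claimed implementation.

```python
# Sketch: successive letter gestures whose occurrence times fall within
# one preset window are merged into a single complete mark identifier;
# a gesture arriving after the window opens a new identifier.

PRESET_WINDOW = 30.0  # seconds (the example's assumed preset time)

def merge_strokes(strokes, window=PRESET_WINDOW):
    """strokes: list of (time_offset_seconds, letter), in order.
    Returns a list of (first_stroke_time, mark_identifier)."""
    marks, letters, start = [], [], None
    for t, letter in strokes:
        if start is None or t - start > window:
            if letters:
                marks.append((start, "".join(letters)))
            start, letters = t, [letter]
        else:
            letters.append(letter)
    if letters:
        marks.append((start, "".join(letters)))
    return marks

# "Z" then "M" five seconds apart are one identifier "ZM"
print(merge_strokes([(1800.0, "Z"), (1805.0, "M")]))  # [(1800.0, 'ZM')]
```

A stroke arriving more than 30 seconds after the window started would instead begin a new mark identifier, which matches the "mistaken for two first operations" concern the text raises.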
S1013: Obtain the occurrence time of the i-th first operation, where the occurrence time of the i-th first operation is the i-th mark time point.
When obtaining the i-th first operation, the terminal simultaneously obtains the occurrence time corresponding to the i-th first operation, that is, the i-th mark time point.
It should be noted that the occurrence time of the i-th first operation in this embodiment of the present invention takes the start of the current recording as its starting moment; that is, the occurrence time of the i-th first operation is the difference between the moment the i-th first operation occurs and the starting moment.
For example, user A starts recording with a mobile phone at 10:00, and the phone obtains the 3rd gesture operation at 10:30; the phone then records the occurrence time of the 3rd gesture as 30 minutes.
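The bookkeeping in this example reduces to a subtraction; the concrete times below are the ones from the example, and the variable names are illustrative.

```python
# Minimal illustration: the i-th mark time point is the occurrence moment
# of the i-th first operation minus the moment the recording started.

from datetime import datetime

recording_start = datetime(2016, 1, 1, 10, 0, 0)  # recording begins at 10:00
operation_at = datetime(2016, 1, 1, 10, 30, 0)    # 3rd gesture occurs at 10:30

mark_time_point = (operation_at - recording_start).total_seconds()
print(mark_time_point / 60)  # 30.0
```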
S204: Determine, according to the i-th mark identifier and the correspondence between preset mark identifiers and interviewed-user information, the user information corresponding to the i-th mark identifier.
After the terminal obtains the i-th piece of mark information, because the terminal has already obtained the correspondence between preset mark identifiers and interviewed-user information before recording, the terminal can determine the user information corresponding to the i-th mark identifier according to the i-th mark identifier and that correspondence.
It should be noted that the user information in this embodiment of the present invention may be information that characterizes a speaker's identity, such as the speaker's name or the user's avatar.
It should be noted that the specific process of obtaining the correspondence between preset mark identifiers and interviewed-user information is described in detail in a subsequent embodiment.
Optionally, the correspondence between preset mark identifiers and interviewee user information may be a correspondence list between mark identifiers and user identity information, for example a correspondence list between mark identifiers and users' names or avatars.
For example, the correspondence between preset mark identifiers and interviewee user information may use the initials of a user's name as the mark identifier, with the mark identifier corresponding to that user's name; for instance, a mark identifier of the form "ZM" corresponds to Zhang Ming.
For example, when user A records a conference with a mobile phone and obtains an i-th identifier "ZM", the phone finds "ZM-Zhang Ming" in the correspondence between preset mark identifiers and interviewee user information; the phone has thereby determined that the user information corresponding to the i-th mark identifier is Zhang Ming.
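The resolution step in S204 can be sketched as a simple lookup against the initials-style correspondence list described above; the mapping below is hard-coded sample data, not a real pinyin-initials converter.

```python
# Hedged sketch: a mark identifier such as "ZM" resolves to the user
# whose name initials match it, per the preset correspondence list.

correspondence = {
    "ZM": "Zhang Ming",
    "LH": "Li Hua",
}

def resolve_user(mark_id):
    """Return the user information matching the i-th mark identifier."""
    return correspondence.get(mark_id)  # None if the identifier is unknown

print(resolve_user("ZM"))  # Zhang Ming
```

An unrecognized identifier returning None corresponds to the earlier rule that operations outside the set first-operation types are not treated as marks.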
S205: Convert the user information corresponding to the i-th mark identifier into i-th audio information.
After determining the user information corresponding to the i-th mark identifier according to the i-th mark identifier and the correspondence between preset mark identifiers and interviewed-user information, because audio data can be played over separate left and right channels, the terminal may, during recording, first convert the user information corresponding to the i-th mark identifier into i-th audio information (speech).
For example, the terminal converts the user information "Zhang Ming" corresponding to the i-th mark identifier "ZM" into i-th audio information.
S206: Insert the i-th audio information on a first track at the i-th mark time point.
After the terminal converts the user information corresponding to the i-th mark identifier into the i-th audio information, because audio data can be played over separate left and right channels, the terminal may, during recording, insert the i-th audio information on the first track at the i-th mark time point.
For example, the terminal converts the user information "Zhang Ming" corresponding to the i-th mark identifier "ZM" into i-th audio information, and inserts the speech "Zhang Ming" on the left-channel track.
S207: When i ≠ N, insert the audio data between the i-th mark time point and the (i+1)-th mark time point on a second track.
After determining the user information corresponding to the i-th mark identifier according to the i-th mark identifier and the correspondence between preset mark identifiers and interviewed-user information, when i ≠ N and the terminal obtains the (i+1)-th piece of mark information, the terminal may insert the audio data recorded between the i-th mark time point and the (i+1)-th mark time point on the second track.
Specifically, the terminal may cut out the audio data segment recorded between the i-th mark time point and the (i+1)-th mark time point and insert it on the second track.
It should be noted that when i ≠ N (the terminal obtains an (i+1)-th piece of mark information), the recording has not yet ended; the terminal then saves the audio data recorded between the i-th mark time point and the (i+1)-th mark time point obtained next, and inserts it on the second track. Meanwhile, the terminal can continue the normal recording work for the user corresponding to the (i+1)-th mark identifier.
For example, while user A takes minutes with a mobile phone, the phone orders the two successive mark time points, and after the 2nd mark identifier is obtained, the phone inserts the 1st audio file on the track carrying the right channel.
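The cutting-out step in S207 can be sketched as index arithmetic over the recorded samples; the sample rate and the helper name are assumptions for illustration.

```python
# Sketch: when i != N, the audio recorded between the i-th and (i+1)-th
# mark time points is cut out as the i-th per-speaker segment. Audio is
# modeled as a flat list of samples.

SAMPLE_RATE = 8000  # samples per second (assumed)

def cut_segment(samples, mark_i, mark_next):
    """Return the samples between two mark time points given in seconds."""
    return samples[int(mark_i * SAMPLE_RATE):int(mark_next * SAMPLE_RATE)]

# Three seconds of dummy audio with marks at t = 1 s and t = 2 s
samples = list(range(3 * SAMPLE_RATE))
segment = cut_segment(samples, 1.0, 2.0)
print(len(segment) / SAMPLE_RATE)  # 1.0
```

The same slice, taken from the i-th mark time point to the recording-end time point, would serve the i = N case in S208-S209.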
S208: When i = N, obtain the recording-end time point.
After determining the user information corresponding to the i-th mark identifier according to the i-th mark identifier and the correspondence between preset mark identifiers and interviewed-user information, because the i-th piece of mark information the terminal has obtained may be the last one, the terminal will not receive an (i+1)-th piece of mark information as the recording continues; therefore, when i = N, the terminal obtains the time point at which the recording ends.
S209: Insert the audio data between the i-th mark time point and the recording-end time point on the second track.
After obtaining the recording-end time point, the terminal may insert the audio data between the i-th mark time point and the recording-end time point on the second track.
S210: Synthesize the N pieces of audio information on the first track and the audio data on the second track into one combined recording file.
After the terminal inserts the N pieces of audio information on the first track, inserts the audio data between the i-th mark time point and the (i+1)-th mark time point on the second track, and inserts the audio data between the i-th mark time point and the recording-end time point on the second track, the terminal synthesizes the N pieces of audio information on the first track and the audio data on the second track into one combined recording file.
Specifically, the terminal inserts, at each mark time point, the user information corresponding to each mark identifier into the first track of the recording, while inserting the recorded content into the second track.
It can be understood that, in this way, during recording the terminal inserts the user information corresponding to each mark identifier into the first track at each mark time point and inserts the speakers' recorded content into the second track; the combined recording file finally obtained by the terminal is one recording file whose two channels carry different audio data, and the terminal saves it.
Further, when playing back the file, the terminal first needs to judge whether a multi-channel device is needed to play the recorded file; if so, the left channel plays the audio information of track 1, and the right channel plays the speakers' recording (the audio data) normally. Specifically, track 1 and track 2 are separated so that track 1 corresponds to the left channel and track 2 corresponds to the right channel. For example, when the user inserts earphones to play the recording, at a marked moment the left earphone channel plays the voice information of the corresponding user information "Zhang" and "Ming", while the right channel plays speaker Zhang Ming's recorded content.
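The two-track combination in S210 can be sketched with the standard-library `wave` module: track 1 (the synthesized user-info announcements) is written to the left channel and track 2 (the recorded speech) to the right. The sample rate, bit depth, and placeholder sample values are assumptions; a real implementation would carry actual speech samples.

```python
# Illustrative sketch: combine an announcement track and a speech track
# into one stereo WAV, left channel = track 1, right channel = track 2.

import io
import struct
import wave

def synthesize_stereo(fileobj, left, right, rate=8000):
    """Write equal-length lists of 16-bit samples as one stereo WAV."""
    with wave.open(fileobj, "wb") as w:
        w.setnchannels(2)   # stereo: track 1 -> left, track 2 -> right
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        frames = b"".join(struct.pack("<hh", l, r) for l, r in zip(left, right))
        w.writeframes(frames)

announcement = [0] * 8000   # placeholder for the synthesized user info
speech = [1000] * 8000      # placeholder for the recorded content
buf = io.BytesIO()
synthesize_stereo(buf, announcement, speech)
buf.seek(0)
with wave.open(buf, "rb") as w:
    print(w.getnchannels(), w.getnframes())  # 2 8000
```

Playing this file through earphones would reproduce the behavior described above: the left channel carries the announcement at each marked moment while the right channel carries the speech.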
With the recording method provided in this embodiment of the present invention, the terminal obtains an i-th piece of mark information during audio recording, where the i-th piece of mark information includes an i-th mark time point and an i-th mark identifier, N ≥ i ≥ 1 and N ≥ 2; determines, according to the i-th mark identifier and the correspondence between preset mark identifiers and interviewed-user information, the user information corresponding to the i-th mark identifier; and, when i ≠ N, saves the audio data between the i-th mark time point and the (i+1)-th mark time point as an i-th audio file matched with the user information corresponding to the i-th mark identifier. With this technical solution, because the terminal saves the information of each speaker or interviewee (the user information) together with the mark corresponding to that speaker's speech content during recording, the terminal can record separate audio for different users, which reflects user-oriented design and improves the intelligence of the terminal.
Embodiment three
As shown in Fig. 7, an embodiment of the present invention provides a terminal 1, and the terminal 1 may include:
an acquiring unit 10, configured to obtain an i-th piece of mark information during audio recording, where the i-th piece of mark information includes an i-th mark time point and an i-th mark identifier, N ≥ i ≥ 1, N ≥ 2;
a determining unit 11, configured to determine, according to the i-th mark identifier obtained by the acquiring unit 10 and the correspondence between preset mark identifiers and interviewed-user information, the user information corresponding to the i-th mark identifier; and
a storage unit 12, configured to, when i ≠ N, save the audio data between the i-th mark time point and the (i+1)-th mark time point obtained by the acquiring unit 10 as an i-th audio file matched with the user information corresponding to the i-th mark identifier determined by the determining unit 11.
Optionally, as shown in Fig. 8, the terminal 1 further includes a detection unit 13 and a synthesis unit 14.
The acquiring unit 10 is further configured to obtain the recording-end time point when i = N, after the determining unit 11 determines, according to the i-th mark identifier and the correspondence between preset mark identifiers and user information, the user information corresponding to the i-th mark identifier.
The storage unit 12 is further configured to, when i = N, save the audio data between the i-th mark time point obtained by the acquiring unit 10 and the recording-end time point as an i-th audio file matched with the user information corresponding to the i-th mark identifier determined by the determining unit 11.
The detection unit 13 is configured to detect whether, among the N audio files saved by the storage unit 12, there are at least two audio files matched with identical user information.
The synthesis unit 14 is configured to, if the detection unit 13 detects that such files exist, synthesize the at least two audio files saved by the storage unit 12 into one audio file, where the one audio file is matched with the same user information as the at least two audio files.
Optionally, as shown in Fig. 9, the terminal 1 further includes a conversion unit 15 and an insertion unit 16.
The conversion unit 15 is configured to, after the determining unit 11 determines, according to the i-th mark identifier and the correspondence between preset mark identifiers and user information, the user information corresponding to the i-th mark identifier, convert the user information corresponding to the i-th mark identifier determined by the determining unit 11 into i-th audio information.
The insertion unit 16 is configured to insert the i-th audio information converted by the conversion unit 15 on a first track at the i-th mark time point obtained by the acquiring unit 10, and, when i ≠ N, insert the audio data between the i-th mark time point and the (i+1)-th mark time point on a second track.
The acquiring unit 10 is further configured to obtain the recording-end time point when i = N.
The insertion unit 16 is further configured to insert the audio data between the i-th mark time point obtained by the acquiring unit 10 and the recording-end time point on the second track.
The synthesis unit 14 is further configured to synthesize the N pieces of audio information on the first track obtained by the insertion unit 16 and the audio data on the second track into one combined recording file.
Optionally, the acquiring unit 10 is further configured to acquire a preset interviewed-user information base before the obtaining of the i-th piece of mark information.
The determining unit 11 is further configured to determine, according to the preset interviewed-user information base obtained by the acquiring unit 10, the correspondence between preset mark identifiers and interviewed-user information.
Optionally, the acquiring unit 10 is specifically configured to obtain an i-th first operation, where the first operation is used to determine the i-th piece of mark information; obtain the i-th mark identifier according to the i-th first operation; and obtain the occurrence time of the i-th first operation, where the occurrence time of the i-th first operation is the i-th mark time point.
Optionally, the terminal in this embodiment of the present invention is an electronic device with a recording function, for example a voice recorder, a smartphone, or a tablet computer. The terminal may receive mark information through a touch screen, may perform mark setting through a corresponding setting interface, or may receive mark information through a sensor capable of sensing touch operations; this is not limited in this embodiment of the present invention.
In practical applications, the acquiring unit 10, the determining unit 11, the detection unit 13, the synthesis unit 14, the conversion unit 15, and the insertion unit 16 may be implemented by a processor in the terminal 1, specifically a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), or the like; the storage unit 12 may be implemented by a memory, where the memory may be connected to the processor through a system bus, the memory is configured to store executable program code that includes computer operation instructions, and the memory may include a high-speed RAM memory and may also include a non-volatile memory, for example at least one disk memory.
With the terminal provided in this embodiment of the present invention, the terminal obtains an i-th piece of mark information during audio recording, where the i-th piece of mark information includes an i-th mark time point and an i-th mark identifier, N ≥ i ≥ 1, N ≥ 2; determines, according to the i-th mark identifier and the correspondence between preset mark identifiers and interviewed-user information, the user information corresponding to the i-th mark identifier; and, when i ≠ N, saves the audio data between the i-th mark time point and the (i+1)-th mark time point as an i-th audio file matched with the user information corresponding to the i-th mark identifier. With this technical solution, because the terminal saves the information of each speaker or interviewee (the user information) together with the mark corresponding to that speaker's speech content during recording, the terminal can record separate audio for different users, which reflects user-oriented design and improves the intelligence of the terminal.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk memories, optical memories, and the like) containing computer-usable program code.
The present invention is described with reference to the flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that computer program instructions can implement each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or the other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can guide a computer or another programmable data processing device to work in a specific way, so that the instructions stored in the computer-readable memory produce a manufacture including an instruction apparatus, and the instruction apparatus realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or the other programmable device to produce computer-implemented processing, and the instructions executed on the computer or the other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the protection scope of the present invention.
Claims (10)
1. A recording method, characterized by comprising:
obtaining an i-th piece of mark information during audio recording, wherein the i-th piece of mark information comprises an i-th mark time point and an i-th mark identifier, N ≥ i ≥ 1, N ≥ 2;
determining, according to the i-th mark identifier and a correspondence between preset mark identifiers and interviewed-user information, user information corresponding to the i-th mark identifier; and
when i ≠ N, saving audio data between the i-th mark time point and an (i+1)-th mark time point as an i-th audio file matched with the user information corresponding to the i-th mark identifier.
2. The method according to claim 1, characterized in that, after the determining, according to the i-th mark identifier and the correspondence between preset mark identifiers and user information, the user information corresponding to the i-th mark identifier, the method further comprises:
when i = N, obtaining a recording-end time point;
saving audio data between the i-th mark time point and the recording-end time point as an i-th audio file matched with the user information corresponding to the i-th mark identifier;
detecting whether, among N audio files, there are at least two audio files matched with identical user information; and
if so, synthesizing the at least two audio files into one audio file, wherein the one audio file is matched with the same user information as the at least two audio files.
3. The method according to claim 1 or 2, characterized in that, after the determining, according to the i-th mark identifier and the correspondence between preset mark identifiers and user information, the user information corresponding to the i-th mark identifier, the method further comprises:
converting the user information corresponding to the i-th mark identifier into i-th audio information;
inserting the i-th audio information on a first track at the i-th mark time point;
when i ≠ N, inserting the audio data between the i-th mark time point and the (i+1)-th mark time point on a second track;
when i = N, obtaining the recording-end time point;
inserting the audio data between the i-th mark time point and the recording-end time point on the second track; and
synthesizing N pieces of audio information on the first track and the audio data on the second track into one combined recording file.
4. The method according to claim 1, characterized in that, before the obtaining of the i-th piece of mark information, the method further comprises:
acquiring a preset interviewed-user information base; and
determining, according to the preset interviewed-user information base, the correspondence between preset mark identifiers and interviewed-user information.
5. The method according to claim 1 or 4, characterized in that the obtaining of the i-th piece of mark information comprises:
obtaining an i-th first operation, wherein the first operation is used to determine the i-th piece of mark information;
obtaining the i-th mark identifier according to the i-th first operation; and
obtaining an occurrence time of the i-th first operation, wherein the occurrence time of the i-th first operation is the i-th mark time point.
6. A terminal, characterized by comprising:
an obtaining unit, configured to obtain an i-th piece of label information during audio recording, the i-th piece of label information comprising an i-th mark time point and an i-th mark identifier, where N ≥ i ≥ 1 and N ≥ 2;
a determining unit, configured to determine the user information corresponding to the i-th mark identifier according to the i-th mark identifier obtained by the obtaining unit and the correspondence between preset mark identifiers and interviewee information;
a storage unit, configured to, when i ≠ N, save the audio data between the i-th mark time point and the (i+1)-th mark time point obtained by the obtaining unit as an i-th audio file matched with the user information corresponding to the i-th mark identifier determined by the determining unit.
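The storage step in claim 6 slices the recording at the mark time points and labels each slice with the user information looked up from its mark identifier. The sketch below models this with sample-offset time points and in-memory "files", which are illustrative assumptions; the final segment ending at the recording end time point corresponds to the i = N case handled in claim 7.

```python
# Sketch of claim 6: the audio data between consecutive mark time points
# is saved as the i-th audio file, matched with the user information
# corresponding to the i-th mark identifier.

def segment_recording(recording, marks, end, id_to_user):
    """marks: list of (time_point, mark_id) pairs, sorted by time.
    end: recording end offset; id_to_user: identifier -> user information."""
    files = []
    for i, (t, mark_id) in enumerate(marks):
        nxt = marks[i + 1][0] if i + 1 < len(marks) else end  # i = N: use end point
        files.append((id_to_user[mark_id], recording[t:nxt]))
    return files

rec = [0, 1, 2, 3, 4, 5]
print(segment_recording(rec, [(0, 1), (3, 2)], 6, {1: "Zhang", 2: "Li"}))
# [('Zhang', [0, 1, 2]), ('Li', [3, 4, 5])]
```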
7. The terminal according to claim 6, characterized in that the terminal further comprises a detection unit and a synthesis unit;
the obtaining unit is further configured to obtain a recording end time point when i = N, after the determining unit determines the user information corresponding to the i-th mark identifier according to the i-th mark identifier and the correspondence between preset mark identifiers and user information;
the storage unit is further configured to save the audio data between the i-th mark time point obtained by the obtaining unit and the recording end time point as an i-th audio file matched with the user information corresponding to the i-th mark identifier determined by the determining unit;
the detection unit is configured to detect whether at least two audio files matched with the same user information exist among the N audio files saved by the storage unit;
the synthesis unit is configured to, if the detection unit detects that such files exist, synthesize the at least two audio files saved by the storage unit into one audio file, the one audio file being matched with the same user information as the at least two audio files.
8. The terminal according to claim 6 or 7, characterized in that the terminal further comprises a conversion unit and an insertion unit;
the conversion unit is configured to, after the determining unit determines the user information corresponding to the i-th mark identifier according to the i-th mark identifier and the correspondence between preset mark identifiers and user information, convert the user information corresponding to the i-th mark identifier determined by the determining unit into an i-th piece of audio information;
the insertion unit is configured to insert the i-th piece of audio information converted by the conversion unit on a first track at the i-th mark time point obtained by the obtaining unit, and, when i ≠ N, insert the audio data between the i-th mark time point and the (i+1)-th mark time point on a second track;
the obtaining unit is further configured to obtain the recording end time point when i = N;
the insertion unit is further configured to insert the audio data between the i-th mark time point obtained by the obtaining unit and the recording end time point on the second track;
the synthesis unit is further configured to synthesize the N pieces of audio information inserted on the first track by the insertion unit and the audio data on the second track into one synthesized recording file.
9. The terminal according to claim 6, characterized in that:
the obtaining unit is further configured to obtain a preset interviewee information database before the i-th piece of label information is obtained;
the determining unit is further configured to determine the correspondence between preset mark identifiers and interviewee information according to the preset interviewee information database obtained by the obtaining unit.
10. The terminal according to claim 6 or 9, characterized in that:
the obtaining unit is specifically configured to obtain an i-th first operation, the first operation being used to determine the i-th piece of label information; obtain the i-th mark identifier according to the i-th first operation; and obtain the occurrence time of the i-th first operation, the occurrence time of the i-th first operation being the i-th mark time point.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610073408.0A CN107025913A (en) | 2016-02-02 | 2016-02-02 | Recording method and terminal |
PCT/CN2016/079919 WO2016197708A1 (en) | 2016-02-02 | 2016-04-21 | Recording method and terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610073408.0A CN107025913A (en) | 2016-02-02 | 2016-02-02 | Recording method and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107025913A true CN107025913A (en) | 2017-08-08 |
Family
ID=57503070
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610073408.0A Withdrawn CN107025913A (en) | 2016-02-02 | 2016-02-02 | Recording method and terminal |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107025913A (en) |
WO (1) | WO2016197708A1 (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108052578A (en) * | 2017-12-08 | 2018-05-18 | 上海星佑网络科技有限公司 | Method and apparatus for information processing |
CN111145803A (en) * | 2019-12-11 | 2020-05-12 | 秒针信息技术有限公司 | Voice information acquisition method and device, storage medium and electronic device |
CN111191754A (en) * | 2019-12-30 | 2020-05-22 | 秒针信息技术有限公司 | Voice acquisition method and device, electronic equipment and storage medium |
CN111191199A (en) * | 2019-12-11 | 2020-05-22 | 秒针信息技术有限公司 | Voice information acquisition method and device, storage medium and electronic device |
CN111198646A (en) * | 2019-12-11 | 2020-05-26 | 秒针信息技术有限公司 | Voice information acquisition method and device, storage medium and electronic device |
CN111224785A (en) * | 2019-12-19 | 2020-06-02 | 秒针信息技术有限公司 | Binding method and device of voice equipment, storage medium and electronic device |
CN112017655A (en) * | 2020-07-25 | 2020-12-01 | 云开智能(深圳)有限公司 | Intelligent voice recording playback method and system |
CN113055529A (en) * | 2021-03-29 | 2021-06-29 | 深圳市艾酷通信软件有限公司 | Recording control method and recording control device |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104657074A (en) * | 2015-01-27 | 2015-05-27 | 中兴通讯股份有限公司 | Method, device and mobile terminal for realizing sound recording |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101897774B1 (en) * | 2012-05-21 | 2018-09-12 | 엘지전자 주식회사 | Method and electronic device for easily searching for voice record |
CN104581351A (en) * | 2015-01-28 | 2015-04-29 | 上海与德通讯技术有限公司 | Audio/video recording method, audio/video playing method and electronic device |
CN105227744B (en) * | 2015-09-15 | 2019-01-18 | 广州三星通信技术研究有限公司 | Method and apparatus for recording call content in a communication terminal |
2016
- 2016-02-02 CN CN201610073408.0A patent/CN107025913A/en not_active Withdrawn
- 2016-04-21 WO PCT/CN2016/079919 patent/WO2016197708A1/en active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104657074A (en) * | 2015-01-27 | 2015-05-27 | 中兴通讯股份有限公司 | Method, device and mobile terminal for realizing sound recording |
CN104978145A (en) * | 2015-01-27 | 2015-10-14 | 中兴通讯股份有限公司 | Recording realization method and apparatus and mobile terminal |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108052578A (en) * | 2017-12-08 | 2018-05-18 | 上海星佑网络科技有限公司 | Method and apparatus for information processing |
CN108052578B (en) * | 2017-12-08 | 2020-07-28 | 上海星佑网络科技有限公司 | Method and apparatus for information processing |
CN111145803A (en) * | 2019-12-11 | 2020-05-12 | 秒针信息技术有限公司 | Voice information acquisition method and device, storage medium and electronic device |
CN111191199A (en) * | 2019-12-11 | 2020-05-22 | 秒针信息技术有限公司 | Voice information acquisition method and device, storage medium and electronic device |
CN111198646A (en) * | 2019-12-11 | 2020-05-26 | 秒针信息技术有限公司 | Voice information acquisition method and device, storage medium and electronic device |
CN111191199B (en) * | 2019-12-11 | 2021-11-16 | 秒针信息技术有限公司 | Voice information acquisition method and device, storage medium and electronic device |
CN111224785A (en) * | 2019-12-19 | 2020-06-02 | 秒针信息技术有限公司 | Binding method and device of voice equipment, storage medium and electronic device |
CN111191754A (en) * | 2019-12-30 | 2020-05-22 | 秒针信息技术有限公司 | Voice acquisition method and device, electronic equipment and storage medium |
CN111191754B (en) * | 2019-12-30 | 2023-10-27 | 秒针信息技术有限公司 | Voice acquisition method and device, electronic equipment and storage medium |
CN112017655A (en) * | 2020-07-25 | 2020-12-01 | 云开智能(深圳)有限公司 | Intelligent voice recording playback method and system |
CN113055529A (en) * | 2021-03-29 | 2021-06-29 | 深圳市艾酷通信软件有限公司 | Recording control method and recording control device |
Also Published As
Publication number | Publication date |
---|---|
WO2016197708A1 (en) | 2016-12-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107025913A (en) | Recording method and terminal | |
CN106024009B (en) | Audio processing method and device | |
CN103035247B (en) | Method and device for operating audio/video files based on voiceprint | |
JP4600828B2 (en) | Document association apparatus and document association method | |
CN104123115B (en) | Audio information processing method and electronic device | |
US20050182627A1 | Audio signal processing apparatus and audio signal processing method | |
US10409547B2 | Apparatus for recording audio information and method for controlling same | |
CN108305632A (en) | Method and system for forming a voice summary of a meeting | |
CN105975569A (en) | Voice processing method and terminal | |
WO2016119370A1 (en) | Method and device for implementing sound recording, and mobile terminal | |
CN106971723A (en) | Speech processing method and device, and device for speech processing | |
WO2017080239A1 (en) | Audio recording tagging method and recording device | |
CN105895102A (en) | Recording editing method and recording device | |
US20120035919A1 | Voice recording device and method thereof | |
CN104469487B (en) | Method and device for detecting scene switching points | |
JP6176041B2 (en) | Information processing apparatus and program | |
CN108665889A (en) | Speech endpoint detection method, apparatus, device and storage medium | |
CN113920560A (en) | Method, device and equipment for multi-modal speaker identification | |
US10347299B2 | Method to automate media stream curation utilizing speech and non-speech audio cue analysis | |
CN106782625A (en) | Audio processing method and device | |
US9412380B2 | Method for processing data and electronic device thereof | |
CN107452408B (en) | Audio playing method and device | |
CN109688430A (en) | Court trial file playback method, system and storage medium | |
JP7133367B2 (en) | Movie editing device, movie editing method, and movie editing program | |
WO2016197755A1 (en) | Audio data processing method and terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
WW01 | Invention patent application withdrawn after publication |
Application publication date: 2017-08-08 |