CN104202458A - Method and intelligent terminal for automatically storing contact information - Google Patents

Method and intelligent terminal for automatically storing contact information

Info

Publication number
CN104202458A
Authority
CN
China
Prior art keywords
user
contact information
voice feature
analysis engine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410444452.9A
Other languages
Chinese (zh)
Inventor
高伟
朱俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics China R&D Center, Samsung Electronics Co Ltd filed Critical Samsung Electronics China R&D Center
Priority to CN201410444452.9A priority Critical patent/CN104202458A/en
Publication of CN104202458A publication Critical patent/CN104202458A/en
Pending legal-status Critical Current

Landscapes

  • Telephonic Communication Services (AREA)

Abstract

The invention provides a method and an intelligent terminal for automatically storing contact information. The method includes: acquiring a recording of the user's call according to a user instruction during the call; selecting a voice analysis engine and a voice feature library according to the current network conditions and the user's configuration; preprocessing the call recording with the voice analysis engine to generate a plurality of voice segments; comparing, by the voice analysis engine, the voice segments with the voice feature library to obtain contact information; displaying the contact information and playing the voice segments that contain it; receiving a modification instruction input by the user, modifying the contact information accordingly, and storing the modified contact information; or receiving a confirmation instruction input by the user and storing the contact information. With the method, the intelligent terminal can store contact information automatically.

Description

Method and intelligent terminal for automatically storing contact information
Technical Field
The present invention relates to the field of intelligent terminals, and in particular to a method and an intelligent terminal for automatically storing contact information.
Background
Intelligent terminals have become an indispensable part of daily life and are an important tool for communicating with the outside world. In actual use, users want contact details to be added to the terminal's address book as simply and effectively as possible.
During a call, users often need to note down contact information. The usual approach at present is to record the information manually and then enter it into the intelligent terminal; current intelligent terminals cannot automatically store the contact information mentioned in the conversation.
Summary of the Invention
The present invention provides a method for automatically storing contact information, by which an intelligent terminal can automatically save the contact information mentioned during a call.
The present invention also provides an intelligent terminal that automatically stores the contact information mentioned during a call.
The technical solution of the present invention is achieved as follows:
A method for automatically storing contact information, comprising:
during a call, acquiring a recording of the user's call according to a user instruction;
selecting a voice analysis engine and a voice feature library according to the current network conditions and the user's configuration;
preprocessing, by the voice analysis engine, the call recording to generate a plurality of voice segments;
comparing, by the voice analysis engine, the voice segments with the data in the voice feature library to obtain contact information;
displaying the contact information and playing the voice segment that contains it; receiving a modification instruction input by the user, modifying the contact information accordingly, and storing the modified contact information; or receiving a confirmation instruction input by the user and storing the contact information.
A mobile terminal for automatically storing contact information, comprising:
a recording module, configured to acquire a recording of the user's call during the call according to a user instruction;
a selection module, configured to select a voice analysis engine and a voice feature library according to the current network conditions and the user's configuration;
a voice recognition module, configured to instruct the voice analysis engine to preprocess the call recording and generate a plurality of voice segments, and to instruct the voice analysis engine to compare the voice segments with the data in the voice feature library to obtain contact information;
a revision module, configured to display the contact information and play the voice segment that contains it; to receive a modification instruction input by the user, modify the contact information accordingly, and store the modified contact information; or to receive a confirmation instruction input by the user and store the contact information.
As can be seen, with the method and intelligent terminal proposed by the present invention, the call is recorded during the user's conversation, a suitable voice analysis engine and voice feature library are selected, and the contact information is extracted from the recording, so that contact information is stored automatically.
Brief Description of the Drawings
Fig. 1 is a flowchart of the method for automatically storing contact information proposed by the present invention;
Fig. 2 is a flowchart of acquiring the call recording;
Fig. 3 is a flowchart of selecting the voice analysis engine and loading the voice feature library;
Fig. 4 is a flowchart of voice preprocessing;
Fig. 5 is a flowchart of contact number matching;
Fig. 6 is a flowchart of creating contact information in a social application;
Fig. 7 is a flowchart of updating the feature library;
Fig. 8 is a schematic structural diagram of the intelligent terminal for automatically storing contact information proposed by the present invention.
Detailed Description
The present invention proposes a method for automatically storing contact information. As shown in the flowchart of Fig. 1, the method comprises:
Step 101: during a call, acquire a recording of the user's call according to a user instruction;
Step 102: select a voice analysis engine and a voice feature library according to the current network conditions and the user's configuration;
Step 103: the voice analysis engine preprocesses the call recording and generates a plurality of voice segments;
Step 104: the voice analysis engine compares the voice segments with the data in the voice feature library and obtains contact information;
Step 105: display the contact information and play the voice segment that contains it; receive a modification instruction input by the user, modify the contact information accordingly, and store the modified contact information; or receive a confirmation instruction input by the user and store the contact information.
Step 102 may be implemented as follows: when the current network is connected and the user has given permission, the cloud voice analysis engine and the online voice feature library are selected; when the network is not connected or the user has not given permission, the local offline voice analysis engine and the offline voice feature library are selected.
In step 102, the corresponding voice feature library may further be selected according to the user's location.
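As an illustration only, the selection rule of step 102 together with the optional location-based refinement can be sketched as follows; the function name choose_engine, its parameters, and the returned identifiers are assumptions made for this sketch and are not defined in the patent.

```python
from typing import Optional


def choose_engine(network_connected: bool, cloud_permitted: bool,
                  user_location: Optional[str] = None) -> dict:
    """Sketch of step 102: pick a voice analysis engine and feature library."""
    if network_connected and cloud_permitted:
        # network available and the user permits cloud analysis
        choice = {"engine": "cloud", "library": "online"}
    else:
        # no network or no permission: fall back to the local offline pair
        choice = {"engine": "local_offline", "library": "offline"}
    if user_location is not None:
        # optional refinement: also load a feature library for the
        # language or dialect used in the user's area
        choice["regional_library"] = "region:" + user_location
    return choice
```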
Step 103 may be implemented as follows: the call recording is divided into a plurality of frames according to a voice feature, where the voice feature is speech energy, spectrum, or zero-crossing rate. For every two adjacent frames, the content of the last T1 period of the former frame and the content of the first T2 period of the latter frame are analyzed for word continuity; if the continuity requirement is met, the two adjacent frames are merged into one voice segment; otherwise each frame is treated as a separate voice segment. T1 and T2 are predefined time periods.
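The merge rule of step 103 can be illustrated with the sketch below, which operates on the recognized text of each frame rather than on the audio signal; the continuity test is passed in as a callable, and fixed character windows stand in for the T1 and T2 periods. These simplifications are assumptions made for the sketch, not part of the patent.

```python
from typing import Callable, List


def merge_frames(frames: List[str],
                 is_continuous: Callable[[str, str], bool],
                 t1_tail: int = 10, t2_head: int = 10) -> List[str]:
    """Sketch of step 103: merge adjacent frames into voice segments."""
    segments: List[str] = []
    for frame in frames:
        if segments and is_continuous(segments[-1][-t1_tail:], frame[:t2_head]):
            segments[-1] += frame   # continuity requirement met: merge
        else:
            segments.append(frame)  # otherwise the frame starts a new segment
    return segments
```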
Step 104 may be implemented as follows: the voice segments are compared with the keywords in the voice feature library to identify contact information, and the format of the identified contact information is checked through a corresponding interface to obtain correctly formatted contact information. The keywords are continuous digits, organizations, addresses, or personal names.
The method may further include:
Step 106: convert the stored contact information into a format supported by a social application, call the interface provided by the social application, and store the converted contact information in the social application.
Step 107: upload the contact information modified by the user and the corresponding voice segments to a cloud server to refine the cloud voice analysis engine and the online voice feature library; or refine the local offline voice analysis engine and the offline voice feature library according to the contact information modified by the user and the corresponding voice segments.
Each step of the method is described below with reference to the drawings.
Fig. 2 is a flowchart of acquiring the call recording.
As shown in Fig. 2, when the user needs to record the call, the recording function of the intelligent terminal can be started through a specific switch; recording stops after the user taps a predefined stop button or hangs up, and the recording is saved under a specific file name in a specific directory. The call recording is the basic input for the rest of the process; a sketch of this step follows the list of possible triggers below.
The specific switch or button includes: a physical button of the intelligent terminal, a virtual button on the screen, a specific voice command, or a specific gesture command.
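A minimal sketch of the recording step of Fig. 2 is given below, assuming a time-stamped file name and a fixed directory; the class and method names are illustrative, since the patent only requires a trigger switch, a stop or hang-up event, and a specific directory and file name.

```python
import datetime
import pathlib


class CallRecorder:
    """Sketch of the recording step in Fig. 2 (assumed names and layout)."""

    def __init__(self, directory: str = "call_records"):
        self.directory = pathlib.Path(directory)
        self.directory.mkdir(exist_ok=True)
        self.active_file = None

    def start(self) -> pathlib.Path:
        # Triggered by a physical/virtual button, a voice command, or a gesture.
        name = datetime.datetime.now().strftime("call_%Y%m%d_%H%M%S.wav")
        self.active_file = self.directory / name
        return self.active_file

    def stop(self, audio: bytes) -> pathlib.Path:
        # Triggered by the stop button or by hanging up; saves the recording.
        assert self.active_file is not None, "start() must be called first"
        path, self.active_file = self.active_file, None
        path.write_bytes(audio)
        return path
```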
Fig. 3 is a flowchart of selecting the voice analysis engine and loading the voice feature library.
As shown in Fig. 3, a suitable voice analysis engine and voice feature library are selected by analyzing the current network conditions and the user's configuration. Specifically, when the intelligent terminal is connected to the network and the user has given permission, the cloud voice analysis engine and the online voice feature library are selected to obtain more accurate results; when the terminal is not connected to the network or the user has not given permission, the local offline voice analysis engine and the offline voice feature library are selected. In addition, to improve recognition of the voices of people already in the address book, the intelligent terminal can also load a local address-book voice recognition library.
When selecting the voice feature library, the user's location may also be considered: the terminal automatically acquires the user's location, looks up the language used in that area, and loads the corresponding voice feature library. The user may also select preferred voice feature libraries in the software configuration; such libraries can recognize the user's particular speech habits, meeting specific user requirements for extracting particular information from speech.
Fig. 4 is a flowchart of voice preprocessing.
As shown in Fig. 4, the voice analysis engine converts the call recording into voice segments that can be analyzed quickly, mainly through the following steps:
First, the voice analysis engine computes speech intervals from voice features such as speech energy, spectrum, and zero-crossing rate, and divides the recording into frames accordingly;
Then each frame is analyzed. When the text at the end of the current frame and the text at the beginning of the next frame are continuous, the two frames are merged into one voice segment; otherwise the current frame is treated as a voice segment on its own. These voice segments are used in the following steps.
Afterwards, the voice analysis engine compares the voice segments with the data in the voice feature library and extracts the contact information needed for the address book.
Fig. 5 is a flowchart of a simple contact-number match.
A voice segment is loaded first and compared with special keywords in the voice feature library, such as 'address', 'name', or 'ID card'. If such a keyword is present, the segment is compared with the corresponding feature library to extract the information. If not, the terminal checks whether the segment contains 11 consecutive digits; if so, it checks whether the digits match the prefix of a China Mobile telephone number, and if they do, the digits can simply be taken as the user's mobile phone number. Some content, such as ID card numbers or QQ numbers, can also be verified through a network interface to ensure correctness. Voice segments found to contain valid information are saved by the intelligent terminal so that the user can review them.
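The matching flow of Fig. 5 can be sketched on recognized text as follows. The keyword list, the prefix set, and the function name are assumptions for the sketch; the actual China Mobile number ranges are maintained by the operator and are not specified in the patent.

```python
import re

KEYWORDS = ("address", "name", "ID card")                     # assumed keyword set
MOBILE_PREFIXES = ("134", "135", "136", "137", "138", "139")  # illustrative only


def extract_contact_info(segment_text: str) -> dict:
    """Sketch of the matching flow in Fig. 5, applied to recognized text."""
    info: dict = {}
    for kw in KEYWORDS:
        if kw in segment_text:
            # in the patent this triggers comparison with the corresponding
            # feature library; here we only record the keyword hit
            info.setdefault("keywords", []).append(kw)
    for digits in re.findall(r"\d{11}", segment_text):
        if digits.startswith(MOBILE_PREFIXES):
            info["phone"] = digits  # treated as the contact's mobile number
    return info
```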
In addition, voice feature libraries can be added according to the user's selection, enabling recognition of specific content and improving the accuracy of recognizing a particular speech pattern.
After the recording has been analyzed, the recognized contact information is displayed to the user, and the corresponding voice clip can be played on the user's instruction. After listening, the user inputs a modification or confirmation instruction. If a modification instruction is received, the contact information is modified accordingly and the modified information is stored; if a confirmation instruction is received, the automatically recognized contact information is stored directly. When storing the contact information, an interface of the intelligent terminal can be called to create a new contact from the finally confirmed information and save it in the contact list.
The present invention can further store the finally recognized contact information in a social application. Fig. 6 is a flowchart of creating contact information in a social application.
As shown in Fig. 6, a data format conversion module converts the acquired contact information into the format supported by the social application, and then the interface provided by the social application is called to create the new contact in that application.
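A sketch of the data format conversion module of Fig. 6, assuming a hypothetical JSON import payload; a real social application defines its own schema and import interface, which are not specified in the patent.

```python
import json


def to_social_format(contact: dict, app: str = "ExampleSocialApp") -> str:
    """Sketch of Fig. 6: convert extracted contact data into an assumed
    JSON payload for a social application's import interface."""
    payload = {
        "displayName": contact.get("name", ""),
        "phoneNumbers": [contact["phone"]] if "phone" in contact else [],
        "address": contact.get("address", ""),
        "source": "call-recording",
    }
    return json.dumps({"app": app, "contact": payload}, ensure_ascii=False)

# The returned string would then be passed to the interface provided by the
# social application to create the new contact.
```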
As shown in Fig. 7, with the user's permission, recognition results that the user considered incorrect (that is, the user's modifications to the automatically recognized contact information) and the corresponding voice segments can be uploaded to the cloud server to refine the cloud voice analysis engine and the online voice feature library, helping the system improve accuracy and performance. Likewise, the local offline voice analysis engine and the offline voice feature library can be refined according to the contact information modified by the user and the corresponding voice segments, to improve the recognition rate.
The steps of the method proposed by the present invention have been introduced above. A specific embodiment is given below.
Embodiment 1:
During a phone call between user A and user B, A learns that B has the contact details of a third person, C, and asks B for them. A can select the 'start recording' button in the smartphone's menu to start the recording function. The smartphone records the conversation and stores it in a directory set by the program.
After the call ends, the program on the smartphone automatically scans the configured directory. When it finds the recording of the previous call, it pops up a window asking the user whether a new contact should be created from the call recording. If the user selects 'yes', the smartphone starts analyzing the recording.
Through an application interface, the smartphone checks whether it is currently connected to the network, and then checks the user's configuration to see whether the cloud voice feature library and voice analysis engine may be used. If the phone is connected and the user has authorized it, the smartphone connects to the cloud server. After receiving the voice analysis request, the cloud server reads the user's configuration and, based on the user's current location, loads the corresponding telephone-number feature library, address feature library, the voice feature library of the people in the user's address book, and any other voice feature libraries configured by the user. These feature libraries define patterns for specific vocabulary; for example, a string of 11 digits that follows the composition rules of China Mobile telephone numbers can simply be judged to be a telephone number, and 'No. N, B Street, City A' can be taken as an address if a corresponding place is found in the database.
The voice analysis engine on the cloud server analyzes features of the recording such as energy, spectrum, and zero-crossing rate to compute the speech intervals, and divides the recording into frames according to the interval lengths. Each frame is then recognized to obtain the corresponding text. For every two adjacent frames, a period T1 at the end of the former frame and a period T2 at the beginning of the latter frame (0 < T1, T2 < 5 s) are chosen and analyzed for word continuity; if the continuity requirement is met, the two adjacent frames are merged into one voice segment. In this way the recording is segmented intelligently.
Then information is extracted from each voice segment: the segments are compared with the different voice feature libraries to determine whether they contain information that matches a feature pattern; if so, the information is extracted and finally fed back to the user.
After the above steps, the smartphone displays the recognition results to the user and can also play the corresponding recording sections for confirmation. If the user is not satisfied with some of the information, it can be modified as needed. The user can also supplement the recognition results to meet personal needs.
After the user finishes modifying or supplementing the information, the user taps the import button, and the smartphone calls the contact-creation interface to create a new contact in the address book from the information.
The present invention also allows the user to create a contact from this information in a social application. Specifically, after the contact information is obtained, the user can choose from a menu to create the contact in a social application; according to the social application selected by the user, the data is automatically converted into the format supported by that application, and the interface provided by the application is used to create the new contact in the social application.
For people already in the address book, the smartphone updates the local address-book voice library according to the analysis results. The smartphone also reads the user's settings; if the user agrees to help improve the cloud service, the smartphone uploads the information the user considered in need of modification, together with the corresponding voice segments, to the cloud, to refine the voice analysis engine and the voice feature library.
The above embodiment describes a preferred implementation of the present invention and does not limit the present invention; any obvious substitution made without departing from the inventive concept falls within the protection scope of the present invention.
Accordingly, the present invention also proposes an intelligent terminal for automatically storing contact information. Fig. 8 is a schematic structural diagram of the apparatus, which comprises:
a recording module 801, configured to acquire a recording of the user's call during the call according to a user instruction;
a selection module 802, configured to select a voice analysis engine and a voice feature library according to the current network conditions and the user's configuration;
a voice recognition module 803, configured to instruct the voice analysis engine to preprocess the call recording and generate a plurality of voice segments, and to instruct the voice analysis engine to compare the voice segments with the data in the voice feature library to obtain contact information;
a revision module 804, configured to display the contact information and play the voice segment that contains it; to receive a modification instruction input by the user, modify the contact information accordingly, and store the modified contact information; or to receive a confirmation instruction input by the user and store the contact information.
In the above intelligent terminal, the selection module 802 may be configured to select the cloud voice analysis engine and the online voice feature library when the current network is connected and the user has given permission, and to select the local offline voice analysis engine and the offline voice feature library when the network is not connected or the user has not given permission.
The selection module 802 may further select the corresponding voice feature library according to the user's location.
In the above intelligent terminal, the voice analysis engine may preprocess the call recording and generate a plurality of voice segments as follows: the call recording is divided into a plurality of frames according to a voice feature, where the voice feature is speech energy, spectrum, or zero-crossing rate; for every two adjacent frames, the content of the last T1 period of the former frame and the content of the first T2 period of the latter frame are analyzed for word continuity; if the continuity requirement is met, the two adjacent frames are merged into one voice segment; otherwise each frame is treated as a separate voice segment, T1 and T2 being predefined time periods.
The voice analysis engine may compare the voice segments with the data in the voice feature library and obtain the contact information as follows: the voice segments are compared with the keywords in the voice feature library to identify contact information, and the format of the identified contact information is checked through a corresponding interface to obtain correctly formatted contact information, the keywords being continuous digits, organizations, addresses, or personal names.
The above intelligent terminal may further include:
a data formatting module 805, configured to convert the stored contact information into a format supported by a social application, call the interface provided by the social application, and store the converted contact information in the social application;
an improvement data upload module 806, configured to upload the contact information modified by the user and the corresponding voice segments to the cloud server to refine the cloud voice analysis engine and the online voice feature library, or to refine the local offline voice analysis engine and the offline voice feature library according to the contact information modified by the user and the corresponding voice segments.
In summary, the present invention replaces the traditional way of creating a contact by asking, noting down, and typing in the information with a method that extracts contact information directly from the call recording and creates the contact. The method comprises acquiring the call recording, selecting the voice analysis engine and loading the feature database, voice preprocessing, feature matching, adding and modifying information, creating the contact, data format conversion, and uploading improvement data. It avoids the trouble of needing paper notes or text messages when asking for someone's contact details: the user can simply talk, obtain the relevant information after the call, and easily generate the required contact.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (14)

1. A method for automatically storing contact information, characterized in that the method comprises:
during a call, acquiring a recording of the user's call according to a user instruction;
selecting a voice analysis engine and a voice feature library according to the current network conditions and the user's configuration;
preprocessing, by the voice analysis engine, the call recording to generate a plurality of voice segments;
comparing, by the voice analysis engine, the voice segments with the data in the voice feature library to obtain contact information;
displaying the contact information and playing the voice segment that contains it; receiving a modification instruction input by the user, modifying the contact information accordingly, and storing the modified contact information; or receiving a confirmation instruction input by the user and storing the contact information.
2. The method according to claim 1, characterized in that selecting the voice analysis engine and the voice feature library according to the current network conditions and the user's configuration comprises:
when the current network is connected and the user has given permission, selecting the cloud voice analysis engine and the online voice feature library; when the current network is not connected or the user has not given permission, selecting the local offline voice analysis engine and the offline voice feature library.
3. The method according to claim 1 or 2, characterized in that the corresponding voice feature library is further selected according to the user's location.
4. The method according to claim 1, characterized in that preprocessing the call recording by the voice analysis engine to generate a plurality of voice segments comprises:
dividing the call recording into a plurality of frames according to a voice feature, wherein the voice feature is speech energy, spectrum, or zero-crossing rate;
for every two adjacent frames, analyzing the content of the last T1 period of the former frame and the content of the first T2 period of the latter frame for word continuity; if the continuity requirement is met, merging the two adjacent frames into one voice segment; otherwise treating each frame as a separate voice segment; wherein T1 and T2 are predefined time periods.
5. The method according to claim 1, characterized in that comparing the voice segments with the data in the voice feature library by the voice analysis engine to obtain the contact information comprises:
comparing the voice segments with keywords in the voice feature library to identify contact information, and checking through a corresponding interface whether the contact information is correctly formatted, so as to obtain correctly formatted contact information; wherein the keywords are continuous digits, organizations, addresses, or personal names.
6. The method according to claim 1, 2, 4 or 5, characterized in that the method further comprises:
converting the stored contact information into a format supported by a social application, calling the interface provided by the social application, and storing the converted contact information in the social application.
7. The method according to claim 1, 2, 4 or 5, characterized in that the method further comprises:
uploading the contact information modified by the user and the corresponding voice segments to a cloud server to refine the cloud voice analysis engine and the online voice feature library; or refining the local offline voice analysis engine and the offline voice feature library according to the contact information modified by the user and the corresponding voice segments.
8. A mobile terminal for automatically storing contact information, characterized in that the mobile terminal comprises:
a recording module, configured to acquire a recording of the user's call during the call according to a user instruction;
a selection module, configured to select a voice analysis engine and a voice feature library according to the current network conditions and the user's configuration;
a voice recognition module, configured to instruct the voice analysis engine to preprocess the call recording and generate a plurality of voice segments, and to instruct the voice analysis engine to compare the voice segments with the data in the voice feature library to obtain contact information;
a revision module, configured to display the contact information and play the voice segment that contains it; to receive a modification instruction input by the user, modify the contact information accordingly, and store the modified contact information; or to receive a confirmation instruction input by the user and store the contact information.
9. The intelligent terminal according to claim 8, characterized in that the selection module is configured to select the cloud voice analysis engine and the online voice feature library when the current network is connected and the user has given permission, and to select the local offline voice analysis engine and the offline voice feature library when the current network is not connected or the user has not given permission.
10. The intelligent terminal according to claim 8 or 9, characterized in that the selection module further selects the corresponding voice feature library according to the user's location.
11. The intelligent terminal according to claim 8, characterized in that the voice analysis engine preprocesses the call recording and generates a plurality of voice segments by:
dividing the call recording into a plurality of frames according to a voice feature, wherein the voice feature is speech energy, spectrum, or zero-crossing rate;
for every two adjacent frames, analyzing the content of the last T1 period of the former frame and the content of the first T2 period of the latter frame for word continuity; if the continuity requirement is met, merging the two adjacent frames into one voice segment; otherwise treating each frame as a separate voice segment; wherein T1 and T2 are predefined time periods.
12. The intelligent terminal according to claim 8, characterized in that the voice analysis engine compares the voice segments with the data in the voice feature library and obtains the contact information by:
comparing the voice segments with keywords in the voice feature library to identify contact information, and checking through a corresponding interface whether the contact information is correctly formatted, so as to obtain correctly formatted contact information; wherein the keywords are continuous digits, organizations, addresses, or personal names.
13. The intelligent terminal according to claim 8, 9, 11 or 12, characterized in that the intelligent terminal further comprises:
a data formatting module, configured to convert the stored contact information into a format supported by a social application, call the interface provided by the social application, and store the converted contact information in the social application.
14. The intelligent terminal according to claim 8, 9, 11 or 12, characterized in that the intelligent terminal further comprises:
an improvement data upload module, configured to upload the contact information modified by the user and the corresponding voice segments to a cloud server to refine the cloud voice analysis engine and the online voice feature library, or to refine the local offline voice analysis engine and the offline voice feature library according to the contact information modified by the user and the corresponding voice segments.
CN201410444452.9A 2014-09-02 2014-09-02 Method and intelligent terminal for automatically storing contact information Pending CN104202458A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410444452.9A CN104202458A (en) 2014-09-02 2014-09-02 Method and intelligent terminal for automatically storing contact information

Publications (1)

Publication Number Publication Date
CN104202458A true CN104202458A (en) 2014-12-10

Family

ID=52087683

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410444452.9A Pending CN104202458A (en) 2014-09-02 2014-09-02 Method and intelligent terminal for automatically storing contact information

Country Status (1)

Country Link
CN (1) CN104202458A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040117179A1 (en) * 2002-12-13 2004-06-17 Senaka Balasuriya Method and apparatus for selective speech recognition
CN101513023A (en) * 2006-08-28 2009-08-19 索尼爱立信移动通讯有限公司 System and method for coordinating audiovisual content with contact list information
CN101277338A (en) * 2007-03-29 2008-10-01 西门子(中国)有限公司 Method for recording downstream voice signal of communication terminal as well as the communication terminal
CN103167121A (en) * 2012-07-05 2013-06-19 深圳市金立通信设备有限公司 System and method of rapidly saving contact during mobile phone call process

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016141697A1 (en) * 2015-03-06 2016-09-15 中兴通讯股份有限公司 Temporary contact storing method and device
CN104751847A (en) * 2015-03-31 2015-07-01 刘畅 Data acquisition method and system based on overprint recognition
WO2018058327A1 (en) * 2016-09-27 2018-04-05 中兴通讯股份有限公司 Method and device for processing contact information, and storage medium
CN106453936A (en) * 2016-10-31 2017-02-22 北京小米移动软件有限公司 Terminal control method and device
CN106708582A (en) * 2016-12-30 2017-05-24 珠海市魅族科技有限公司 Data storage method and device
CN111105544A (en) * 2019-12-31 2020-05-05 深圳市哈希树科技有限公司 Face recognition access control system of unmanned supermarket and control method thereof

Similar Documents

Publication Publication Date Title
CN104202458A (en) Method and intelligent terminal for automatically storing contact information
TWI711967B (en) Method, device and equipment for determining broadcast voice
US20160189713A1 (en) Apparatus and method for automatically creating and recording minutes of meeting
CN106896932A (en) A kind of candidate word recommends method and device
CN110377908B (en) Semantic understanding method, semantic understanding device, semantic understanding equipment and readable storage medium
CN104427292A (en) Method and device for extracting a conference summary
CN103000175A (en) Voice recognition method and mobile terminal
CN106302933B (en) Voice information processing method and terminal
CN105469789A (en) Voice information processing method and voice information processing terminal
CN109841210B (en) Intelligent control implementation method and device and computer readable storage medium
CN108132768A (en) The processing method of phonetic entry, terminal and network server
CN107885483B (en) Audio information verification method and device, storage medium and electronic equipment
CN110287364B (en) Voice search method, system, device and computer readable storage medium
US20130066634A1 (en) Automated Conversation Assistance
CN107615270A (en) A kind of man-machine interaction method and its device
CN107609047A (en) Using recommendation method, apparatus, mobile device and storage medium
US20160370959A1 (en) Method and device for updating input method system, computer storage medium, and device
CN113051362A (en) Data query method and device and server
CN103533155A (en) Method and an apparatus for recording and playing a user voice in a mobile terminal
US20220262339A1 (en) Audio processing method, apparatus, and device, and storage medium
KR101590078B1 (en) Apparatus and method for voice archiving
CN109410934A (en) A kind of more voice sound separation methods, system and intelligent terminal based on vocal print feature
CN111899859A (en) Surgical instrument counting method and device
WO2015188454A1 (en) Method and device for quickly accessing ivr menu
CN103546613A (en) Contact person recording method, contact person recording device and mobile terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20141210