CN105159979A - Good friend recommendation method and apparatus - Google Patents

Good friend recommendation method and apparatus

Info

Publication number
CN105159979A
Authority
CN
China
Prior art keywords
user
audio
information
comparison result
emotional information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510540954.6A
Other languages
Chinese (zh)
Inventor
杨婷婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN201510540954.6A priority Critical patent/CN105159979A/en
Publication of CN105159979A publication Critical patent/CN105159979A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/63Querying
    • G06F16/635Filtering based on additional data, e.g. user or group profiles
    • G06F16/636Filtering based on additional data, e.g. user or group profiles by using biological or physiological data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/683Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/685Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using automatically derived transcript of audio data, e.g. lyrics
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Library & Information Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Psychiatry (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Physiology (AREA)
  • Hospice & Palliative Care (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Child & Adolescent Psychology (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention is applicable to the field of information technology and provides a friend recommendation method and apparatus. The method comprises: obtaining audio information of a user, obtaining values of N types of sound parameters from the audio information, and obtaining N reference values corresponding to the N types of sound parameters, where N is an integer greater than or equal to 1; comparing the values of the N types of sound parameters with the N reference values respectively to obtain a first comparison result; determining emotion information of the user according to the first comparison result; and recommending friends to the user according to the emotion information of the user. Because the user's emotion is taken into account when friends are recommended, friends with the same emotion can be found for the user quickly, improving the flexibility of friend recommendation.

Description

Friend recommendation method and device
Technical field
The invention belongs to the field of information technology, and in particular relates to a friend recommendation method and device.
Background technology
When people are sad, they often wish to communicate with someone who has had the same experience or is in the same mood; when people are happy, they often wish to share their happiness with someone in the same mood. Communicating with someone in the opposite mood may make a sad person sadder and leave a happy person feeling deflated. However, existing friend recommendation approaches are inflexible and cannot recommend friends who share the user's current mood.
Summary of the invention
In view of this, embodiments of the present invention provide a friend recommendation method and device, to solve the problem that existing friend recommendation approaches are inflexible and cannot recommend friends who share the user's mood.
In a first aspect, an embodiment of the present invention provides a friend recommendation method, comprising:
obtaining audio information of a user, obtaining values of N types of sound parameters of the user from the audio information, and obtaining N reference values corresponding to the N types of sound parameters, where N is an integer greater than or equal to 1;
comparing the values of the N types of sound parameters with the N reference values respectively to obtain a first comparison result;
determining emotion information of the user according to the first comparison result;
recommending friends to the user according to the emotion information of the user.
In a second aspect, an embodiment of the present invention provides a friend recommendation device, comprising:
a first obtaining unit, configured to obtain audio information of a user, obtain values of N types of sound parameters of the user from the audio information, and obtain N reference values corresponding to the N types of sound parameters, where N is an integer greater than or equal to 1;
a first comparing unit, configured to compare the values of the N types of sound parameters with the N reference values respectively to obtain a first comparison result;
an emotion information determining unit, configured to determine the emotion information of the user according to the first comparison result;
a friend recommendation unit, configured to recommend friends to the user according to the emotion information of the user.
Compared with the prior art, the embodiments of the present invention have the following beneficial effect: by obtaining audio information of a user, obtaining values of N types of sound parameters from the audio information, determining the emotion information of the user from those parameters, and then recommending friends according to the emotion information, the user's mood is taken into account when recommending friends and friends with the same mood are found for the user quickly, thereby improving the flexibility of friend recommendation.
Accompanying drawing explanation
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required in the embodiments or the prior art are briefly introduced below. Apparently, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is an implementation flowchart of the friend recommendation method provided by an embodiment of the present invention;
Fig. 2 is an implementation flowchart of the friend recommendation method provided by another embodiment of the present invention;
Fig. 3 is an implementation flowchart of the friend recommendation method provided by another embodiment of the present invention;
Fig. 4 is an implementation flowchart of the friend recommendation method provided by another embodiment of the present invention;
Fig. 5 is an implementation flowchart of the friend recommendation method provided by another embodiment of the present invention;
Fig. 6 is a structural block diagram of the friend recommendation device provided by an embodiment of the present invention.
Embodiment
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it.
Fig. 1 shows an implementation flowchart of the friend recommendation method provided by an embodiment of the present invention; the details are as follows:
In step S101, obtain audio information of a user, obtain values of N types of sound parameters of the user from the audio information, and obtain N reference values corresponding to the N types of sound parameters, where N is an integer greater than or equal to 1.
It should be noted that the executing body of the embodiment of the present invention may be a mobile terminal such as a mobile phone or a tablet computer, or a wearable smart device such as a smart watch; no limitation is imposed here.
In the embodiment of the present invention, the audio information recorded by the user is captured through a microphone, the values of the N types of sound parameters of the user are obtained from the audio information, and the N reference values corresponding to the N types of sound parameters are read from a memory.
Preferably, the sound parameters include speech rate, average fundamental frequency, fundamental frequency range, sound intensity, and/or articulation clarity.
Here, the fundamental frequency refers to the frequency of the fundamental tone, i.e. the pitch of the voice.
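As an illustrative sketch (not part of the patent text), one of the preferred parameters above, the average fundamental frequency, could be estimated from a voiced audio frame by autocorrelation. The function name, frame length, and pitch-range bounds below are assumptions for illustration.

```python
import numpy as np

def estimate_f0(frame, sample_rate):
    """Estimate the fundamental frequency (Hz) of one voiced frame by
    locating the first autocorrelation peak within the speech pitch range."""
    frame = frame - np.mean(frame)
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    min_lag = sample_rate // 500   # upper pitch bound, ~500 Hz
    max_lag = sample_rate // 50    # lower pitch bound, ~50 Hz
    lag = min_lag + int(np.argmax(corr[min_lag:max_lag]))
    return sample_rate / lag

# A 220 Hz test tone stands in for a voiced speech frame.
sr = 16000
t = np.arange(2048) / sr
f0 = estimate_f0(np.sin(2 * np.pi * 220.0 * t), sr)
```

In practice the estimate would be averaged over all voiced frames of the recording to obtain the "average fundamental frequency" parameter, and a dedicated pitch tracker would be more robust than this minimal sketch.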
In step S102, compare the values of the N types of sound parameters with the N reference values respectively to obtain a first comparison result.
In step S103, determine the emotion information of the user according to the first comparison result.
Here, the types of the user's emotion information may include happy, angry, afraid, and/or sad; no limitation is imposed here.
As an embodiment of the present invention, determining the emotion information of the user according to the first comparison result is specifically: determining a weight for each type of emotion information according to the first comparison result, and taking the type of emotion information with the largest weight as the emotion information of the user.
For example, suppose the first type of sound parameter is speech rate and its corresponding reference value is a first reference value. A first interval, a second interval, a third interval, and a fourth interval are determined from the first reference value, where the mean of the first interval is greater than the mean of the second interval, the mean of the second interval is greater than the mean of the third interval, and the mean of the third interval is greater than the mean of the fourth interval. The first interval corresponds to the emotion happy, the second to angry, the third to afraid, and the fourth to sad. If the obtained value of the first type of sound parameter falls within the first interval, 1 is added to the weight of happy; if it falls within the second interval, 1 is added to the weight of angry; if within the third interval, 1 is added to the weight of afraid; and if within the fourth interval, 1 is added to the weight of sad.
As another example, suppose the second type of sound parameter is average fundamental frequency and its corresponding reference value is a second reference value. A fifth interval, a sixth interval, a seventh interval, and an eighth interval are determined from the second reference value, where the mean of the sixth interval is greater than the mean of the fifth interval, the mean of the seventh interval is greater than the mean of the fifth interval, and the mean of the fifth interval is greater than the mean of the eighth interval. The fifth interval corresponds to happy, the sixth to angry, the seventh to afraid, and the eighth to sad. If the obtained value of the second type of sound parameter falls within the fifth interval, 1 is added to the weight of happy; within the sixth interval, to angry; within the seventh interval, to afraid; within the eighth interval, to sad.
As another example, suppose the third type of sound parameter is fundamental frequency range and its corresponding reference value is a third reference value. A ninth interval, a tenth interval, an eleventh interval, and a twelfth interval are determined from the third reference value, where the spans of the ninth, tenth, and eleventh intervals are each greater than the span of the twelfth interval. The ninth interval corresponds to happy, the tenth to angry, the eleventh to afraid, and the twelfth to sad. If the obtained value of the third type of sound parameter falls within the ninth interval, 1 is added to the weight of happy; within the tenth interval, to angry; within the eleventh interval, to afraid; within the twelfth interval, to sad.
As another example, suppose the fourth type of sound parameter is sound intensity and its corresponding reference value is a fourth reference value. A thirteenth interval, a fourteenth interval, a fifteenth interval, and a sixteenth interval are determined from the fourth reference value, where the mean of the fourteenth interval is greater than the mean of the thirteenth interval, the mean of the thirteenth interval is greater than the mean of the fifteenth interval, and the mean of the fifteenth interval is greater than the mean of the sixteenth interval. The thirteenth interval corresponds to happy, the fourteenth to angry, the fifteenth to afraid, and the sixteenth to sad. If the obtained value of the fourth type of sound parameter falls within the thirteenth interval, 1 is added to the weight of happy; within the fourteenth interval, to angry; within the fifteenth interval, to afraid; within the sixteenth interval, to sad.
As another example, suppose the fifth type of sound parameter is articulation clarity and its corresponding reference value is a fifth reference value. A seventeenth interval, an eighteenth interval, a nineteenth interval, and a twentieth interval are determined from the fifth reference value, where the mean of the nineteenth interval is greater than the mean of the seventeenth interval, the mean of the seventeenth interval is greater than the mean of the eighteenth interval, and the mean of the eighteenth interval is greater than the mean of the twentieth interval. The seventeenth interval corresponds to happy, the eighteenth to angry, the nineteenth to afraid, and the twentieth to sad. If the obtained value of the fifth type of sound parameter falls within the seventeenth interval, 1 is added to the weight of happy; within the eighteenth interval, to angry; within the nineteenth interval, to afraid; within the twentieth interval, to sad.
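The interval-and-weight scheme of the examples above can be sketched as follows (an illustration, not part of the patent text); since the patent gives no numeric boundaries, the speech-rate intervals below are hypothetical values placed around an assumed reference value of 4.0 syllables per second.

```python
EMOTIONS = ("happy", "angry", "afraid", "sad")

def vote(weights, intervals, value):
    """Add one vote to the emotion whose interval contains the parameter value."""
    for emotion, (low, high) in intervals.items():
        if low <= value < high:
            weights[emotion] += 1
            break
    return weights

# Hypothetical speech-rate intervals (syllables/second), ordered as in the
# first example: the "happy" interval has the highest mean, "sad" the lowest.
speech_rate_intervals = {
    "happy": (5.0, 8.0),
    "angry": (4.0, 5.0),
    "afraid": (3.0, 4.0),
    "sad": (0.0, 3.0),
}
weights = dict.fromkeys(EMOTIONS, 0)
vote(weights, speech_rate_intervals, 5.6)    # fast speech casts a vote for "happy"
dominant = max(weights, key=weights.get)
```

With several parameters, `vote` would be called once per parameter against that parameter's own intervals, and the emotion with the largest accumulated weight becomes the user's emotion information.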
Optionally, the sound parameters further include voice quality and/or pitch contour (the fundamental frequency curve).
As an embodiment of the present invention, if breathiness or a wheezing quality is detected in the voice, 1 is added to each of the weights of happy and angry. If the pitch contour is detected to bend upward, 1 is added to the weight of happy; if a contour segment with higher sound intensity rises abruptly, 1 is added to the weight of angry; if the contour is relatively flat, 1 is added to the weight of afraid; and if the contour bends downward, 1 is added to the weight of sad.
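One rough way to approximate the pitch-contour heuristics above (an assumption of this sketch, not a method stated in the patent) is to classify the contour by the sign of a fitted slope; the slope thresholds are hypothetical.

```python
import numpy as np

def contour_trend(f0_curve):
    """Classify a pitch contour by the slope of a straight-line fit:
    clearly rising -> happy, clearly falling -> sad, roughly flat -> afraid."""
    slope = np.polyfit(np.arange(len(f0_curve)),
                       np.asarray(f0_curve, dtype=float), 1)[0]
    if slope > 1.0:        # Hz per frame; illustrative threshold
        return "happy"
    if slope < -1.0:
        return "sad"
    return "afraid"

rising = contour_trend([180, 190, 205, 225, 250])
```

A real detector would also need the abrupt-rise case for "angry", which requires joint inspection of pitch and intensity and is omitted here.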
In step S104, recommend friends to the user according to the emotion information of the user.
In the embodiment of the present invention, friends whose emotion information is the same as the user's are recommended. Recommending friends according to the user's emotion information comprises: recommending, from the local friend list, friends who have the same emotion information as the user; or obtaining, from a server, information about strangers who have the same emotion information as the user, generating friend recommendation information from the stranger information, and displaying the friend recommendation information to the user.
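The two recommendation paths just described can be sketched as below (an illustration, not part of the patent text); the function name and the list-of-pairs data shape are hypothetical.

```python
def recommend_friends(user_emotion, local_friends, server_strangers):
    """Recommend contacts sharing the user's emotion: matches from the
    local friend list take priority; otherwise fall back to strangers
    (here a pre-fetched list standing in for a server query)."""
    local = [name for name, emotion in local_friends if emotion == user_emotion]
    if local:
        return local
    return [name for name, emotion in server_strangers if emotion == user_emotion]

friends = [("Alice", "sad"), ("Bob", "happy")]
strangers = [("Carol", "happy"), ("Dave", "angry")]
picks = recommend_friends("happy", friends, strangers)
```

Here `picks` contains only "Bob", since a local friend in the same mood exists; with `user_emotion="angry"` the function would instead return the matching stranger.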
Fig. 2 shows an implementation flowchart of the friend recommendation method provided by another embodiment of the present invention. Referring to Fig. 2:
In step S201, obtain audio information of a user, obtain values of N types of sound parameters of the user from the audio information, and obtain N reference values corresponding to the N types of sound parameters, where N is an integer greater than or equal to 1;
In step S202, compare the values of the N types of sound parameters with the N reference values respectively to obtain a first comparison result;
In step S203, obtain the average heart rate value of the user within a specified time period, and obtain the baseline heart rate value of the user;
In step S204, compare the average heart rate value with the baseline heart rate value to obtain a second comparison result;
In step S205, determine the emotion information of the user according to the first comparison result and the second comparison result;
In step S206, recommend friends to the user according to the emotion information of the user.
As an embodiment of the present invention, determining the emotion information of the user according to the first comparison result and the second comparison result is specifically: determining a weight for each type of emotion information according to the first comparison result and the second comparison result, and taking the type of emotion information with the largest weight as the emotion information of the user.
Comparing the average heart rate value with the baseline heart rate value to obtain the second comparison result is specifically: determining a twenty-first interval, a twenty-second interval, a twenty-third interval, and a twenty-fourth interval from the baseline heart rate value, where the twenty-first interval corresponds to happy, the twenty-second to angry, the twenty-third to afraid, and the twenty-fourth to sad. If the average heart rate value falls within the twenty-first interval, 1 is added to the weight of happy; within the twenty-second interval, to angry; within the twenty-third interval, to afraid; within the twenty-fourth interval, to sad.
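A sketch of the heart-rate vote (an illustration, not part of the patent text): the four intervals are expressed as offsets from the user's baseline heart rate, with threshold values that are purely hypothetical.

```python
def heart_rate_vote(weights, avg_hr, baseline_hr):
    """Add one emotion vote from the second comparison result (average
    heart rate vs. the user's baseline). The offsets defining the four
    intervals are illustrative, not values taken from the patent."""
    delta = avg_hr - baseline_hr
    if delta > 20:
        weights["afraid"] += 1   # strongly elevated heart rate
    elif delta > 10:
        weights["angry"] += 1
    elif delta > 0:
        weights["happy"] += 1
    else:
        weights["sad"] += 1      # at or below baseline
    return weights

w = {"happy": 0, "angry": 0, "afraid": 0, "sad": 0}
heart_rate_vote(w, 95, 70)       # delta of 25 bpm votes "afraid"
```

This vote is simply summed with the audio-parameter votes before taking the maximum-weight emotion.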
Fig. 3 shows an implementation flowchart of the friend recommendation method provided by another embodiment of the present invention. Referring to Fig. 3:
In step S301, obtain audio information of a user, obtain values of N types of sound parameters of the user from the audio information, and obtain N reference values corresponding to the N types of sound parameters, where N is an integer greater than or equal to 1;
In step S302, compare the values of the N types of sound parameters with the N reference values respectively to obtain a first comparison result;
In step S303, obtain a keyword database, and perform speech recognition on the audio information according to the keyword database;
In step S304, if the audio information contains a speech segment matching at least one keyword in the keyword database, take the at least one keyword as a mood keyword of the user;
In step S305, determine the emotion information of the user according to the first comparison result and the mood keyword;
In step S306, recommend friends to the user according to the emotion information of the user.
As an embodiment of the present invention, determining the emotion information of the user according to the first comparison result and the mood keyword is specifically: determining a weight for each type of emotion information according to the first comparison result and the mood keyword, and taking the type of emotion information with the largest weight as the emotion information of the user. For example, if the mood keywords include "sad", 1 is added to the weight of sad.
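The keyword step can be sketched as follows (an illustration, not part of the patent text), with a recognized transcript standing in for the speech-recognition output; the keyword-to-emotion mapping is a hypothetical structure.

```python
def mood_keyword_votes(transcript, keyword_db):
    """Return the emotion votes implied by database keywords found in the
    recognized speech; keyword_db maps keyword -> emotion label."""
    return [emotion for keyword, emotion in keyword_db.items()
            if keyword in transcript]

db = {"miserable": "sad", "wonderful": "happy", "furious": "angry"}
hits = mood_keyword_votes("what a wonderful day", db)
```

Each returned label would add 1 to that emotion's weight alongside the votes from the sound parameters. A production version would match against recognized speech segments rather than a plain substring search.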
Fig. 4 shows an implementation flowchart of the friend recommendation method provided by another embodiment of the present invention. Referring to Fig. 4:
In step S401, obtain audio information of a user, obtain values of N types of sound parameters of the user from the audio information, and obtain N reference values corresponding to the N types of sound parameters, where N is an integer greater than or equal to 1;
In step S402, compare the values of the N types of sound parameters with the N reference values respectively to obtain a first comparison result;
In step S403, obtain the average heart rate value of the user within a specified time period, and obtain the baseline heart rate value of the user;
In step S404, compare the average heart rate value with the baseline heart rate value to obtain a second comparison result;
In step S405, obtain a keyword database, and perform speech recognition on the audio information according to the keyword database;
In step S406, if the audio information contains a speech segment matching at least one keyword in the keyword database, take the at least one keyword as a mood keyword of the user;
In step S407, determine the emotion information of the user according to the first comparison result, the second comparison result, and the mood keyword;
In step S408, recommend friends to the user according to the emotion information of the user.
As an embodiment of the present invention, determining the emotion information of the user according to the first comparison result, the second comparison result, and the mood keyword is specifically: determining a weight for each type of emotion information according to the first comparison result, the second comparison result, and the mood keyword, and taking the type of emotion information with the largest weight as the emotion information of the user.
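Combining the three sources of evidence reduces to summing their per-emotion weights and taking the maximum, which can be sketched as below (an illustration, not part of the patent text; the vote dictionaries are hypothetical inputs).

```python
def determine_emotion(*vote_sources):
    """Sum per-source emotion weights (e.g. from the first comparison
    result, the second comparison result, and the mood keywords) and
    return the emotion with the largest total weight."""
    total = {}
    for votes in vote_sources:
        for emotion, weight in votes.items():
            total[emotion] = total.get(emotion, 0) + weight
    return max(total, key=total.get)

audio_votes = {"happy": 2, "sad": 1}    # from the N sound parameters
heart_votes = {"happy": 1}              # from the heart-rate comparison
keyword_votes = {"sad": 1}              # from matched mood keywords
emotion = determine_emotion(audio_votes, heart_votes, keyword_votes)
```

Here "happy" wins with a total weight of 3 against 2 for "sad"; the patent does not specify a tie-breaking rule, so none is assumed here beyond `max`'s default.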
Fig. 5 shows an implementation flowchart of the friend recommendation method provided by another embodiment of the present invention. Referring to Fig. 5:
In step S501, upon receiving a friend recommendation request sent by a user, request the user to input audio information;
In step S502, obtain the audio information of the user, obtain values of N types of sound parameters of the user from the audio information, and obtain N reference values corresponding to the N types of sound parameters, where N is an integer greater than or equal to 1;
In step S503, compare the values of the N types of sound parameters with the N reference values respectively to obtain a first comparison result;
In step S504, obtain the average heart rate value of the user within a specified time period, and obtain the baseline heart rate value of the user;
In step S505, compare the average heart rate value with the baseline heart rate value to obtain a second comparison result;
In step S506, obtain a keyword database, and perform speech recognition on the audio information according to the keyword database;
In step S507, if the audio information contains a speech segment matching at least one keyword in the keyword database, take the at least one keyword as a mood keyword of the user;
In step S508, determine the emotion information of the user according to the first comparison result, the second comparison result, and the mood keyword;
In step S509, recommend friends to the user according to the emotion information of the user.
In the embodiment of the present invention, upon receiving the friend recommendation request sent by the user, the user is requested to input audio information; the recorded audio information is then analyzed to determine the emotion information of the user, so that friends are recommended to the user according to the emotion information.
It should be understood that in the embodiments of the present invention, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present invention.
By obtaining audio information of a user, obtaining values of N types of sound parameters from the audio information, determining the emotion information of the user from those parameters, and then recommending friends according to the emotion information, the embodiment of the present invention takes the user's mood into account when recommending friends and quickly finds friends with the same mood for the user, thereby improving the flexibility of friend recommendation.
Fig. 6 shows a structural block diagram of the friend recommendation device provided by an embodiment of the present invention; the device may be used to execute the friend recommendation methods shown in Figs. 1 to 5. For ease of description, only the parts relevant to the embodiment of the present invention are shown.
Referring to Fig. 6, the device comprises:
a first obtaining unit 61, configured to obtain audio information of a user, obtain values of N types of sound parameters of the user from the audio information, and obtain N reference values corresponding to the N types of sound parameters, where N is an integer greater than or equal to 1;
a first comparing unit 62, configured to compare the values of the N types of sound parameters with the N reference values respectively to obtain a first comparison result;
an emotion information determining unit 63, configured to determine the emotion information of the user according to the first comparison result;
a friend recommendation unit 64, configured to recommend friends to the user according to the emotion information of the user.
Preferably, the device further comprises:
a second obtaining unit 65, configured to obtain the average heart rate value of the user within a specified time period, and obtain the baseline heart rate value of the user;
a second comparing unit 66, configured to compare the average heart rate value with the baseline heart rate value to obtain a second comparison result;
the emotion information determining unit 63 being specifically configured to:
determine the emotion information of the user according to the first comparison result and the second comparison result.
Preferably, the device further comprises:
A voice recognition unit 67, configured to obtain a keyword database, and to perform speech recognition on the audio information according to the keyword database;
A mood keyword determining unit 68, configured to, if a speech fragment matching at least one keyword in the keyword database exists in the audio information, take the at least one keyword as the mood keyword of the user;
The emotional information determining unit 63 is then specifically configured to:
Determine the emotional information of the user according to the first comparison result and the mood keyword.
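The keyword branch can be illustrated as follows. The transcript is assumed to come from the speech recognition step, and both the database contents and the fusion rule are invented for illustration:

```python
# Hypothetical keyword database mapping mood keywords to an emotion label.
KEYWORD_DATABASE = {"happy": "positive", "great": "positive",
                    "tired": "negative", "annoyed": "negative"}

def find_mood_keywords(transcript):
    """Return the database keywords that occur in the recognized speech
    (mood keyword determining unit); a hit becomes the user's mood keyword."""
    words = transcript.lower().split()
    return [w for w in words if w in KEYWORD_DATABASE]

def determine_emotion_with_keywords(first_result, mood_keywords):
    """Toy fusion rule: an explicit mood keyword overrides the audio-only
    estimate; otherwise fall back to the first comparison result."""
    if mood_keywords:
        return KEYWORD_DATABASE[mood_keywords[0]]
    return "positive" if sum(first_result) > 0 else "negative"

mood = determine_emotion_with_keywords([1], find_mood_keywords("I feel tired today"))
```

Giving spoken keywords priority over acoustic cues is one plausible reading of "according to the first comparison result and the mood keyword", not the only one.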
Preferably, the audio parameters comprise speech rate, average fundamental frequency, fundamental frequency range, sound intensity and/or clarity.
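Two of the listed parameters can be computed from raw data in a simplified way. Real systems would use a dedicated pitch tracker and voice activity detection for the fundamental-frequency parameters, so this is only an illustrative sketch under stated assumptions:

```python
import math

def sound_intensity_db(samples):
    """Root-mean-square level of a waveform (samples assumed in [-1, 1]),
    expressed in decibels relative to full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

def speech_rate(syllable_count, duration_seconds):
    """Speech rate as syllables per second; the syllable count is assumed
    to be provided by an upstream recognizer."""
    return syllable_count / duration_seconds
```

The average fundamental frequency and its range would be estimated analogously, e.g. by averaging per-frame pitch values from an autocorrelation-based tracker.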
Preferably, the device further comprises:
A request unit 69, configured to ask the user to input audio information when a friend recommendation request sent by the user is received.
In this embodiment of the present invention, the audio information of the user is obtained, the values of N types of audio parameters of the user are obtained from the audio information, the emotional information of the user is determined according to the N types of audio parameters, and friends are then recommended to the user according to that emotional information. The user's mood is thus taken into account when recommending friends, so that friends with the same mood can be found for the user quickly, thereby improving the flexibility of friend recommendation.
Those of ordinary skill in the art will recognize that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled practitioners may implement the described functions differently for each particular application, but such implementations should not be considered to go beyond the scope of the present invention.
Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working processes of the device and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the device embodiments described above are merely schematic: the division into units is only a division by logical function, and other divisions are possible in actual implementation; multiple units may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the couplings, direct couplings, or communication connections shown or discussed may be implemented through certain interfaces, and the indirect couplings or communication connections between units may be electrical, mechanical, or of other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, may exist physically as separate units, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the essence of the technical solution of the present invention, or the part contributing to the prior art, may be embodied in the form of a software product. This computer software product is stored in a storage medium and includes instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a portable hard drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement readily conceivable by those familiar with the technical field, within the technical scope disclosed by the present invention, shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A friend recommendation method, characterized in that it comprises:
Obtaining audio information of a user, obtaining from the audio information values of N types of audio parameters of the user, and obtaining N reference values corresponding to the N types of audio parameters, wherein N is an integer greater than or equal to 1;
Comparing the values of the N types of audio parameters with the N reference values respectively, to obtain a first comparison result;
Determining emotional information of the user according to the first comparison result;
Recommending friends to the user according to the emotional information of the user.
2. the method for claim 1, is characterized in that, described determine the emotional information of described user according to described first comparison result before, described method also comprises:
Obtain the average heart rate value of described user at the appointed time section, and obtain the benchmark heart rate value of described user;
Described average heart rate value and described benchmark heart rate value are compared, obtains the second comparison result;
Describedly determine that the emotional information of described user is specially according to described first comparison result:
The emotional information of described user is determined according to described first comparison result and described second comparison result.
3. The method of claim 1, characterized in that, after the obtaining of the audio information of the user and before the determining of the emotional information of the user according to the first comparison result, the method further comprises:
Obtaining a keyword database, and performing speech recognition on the audio information according to the keyword database;
If a speech fragment matching at least one keyword in the keyword database exists in the audio information, taking the at least one keyword as a mood keyword of the user;
Wherein the determining of the emotional information of the user according to the first comparison result is specifically:
Determining the emotional information of the user according to the first comparison result and the mood keyword.
4. The method of claim 1, characterized in that the audio parameters comprise speech rate, average fundamental frequency, fundamental frequency range, sound intensity and/or clarity.
5. The method of claim 1, characterized in that, before the obtaining of the audio information of the user, the method further comprises:
When a friend recommendation request sent by the user is received, asking the user to input audio information.
6. A friend recommendation device, characterized in that it comprises:
A first acquiring unit, configured to obtain audio information of a user, obtain from the audio information values of N types of audio parameters of the user, and obtain N reference values corresponding to the N types of audio parameters, wherein N is an integer greater than or equal to 1;
A first comparing unit, configured to compare the values of the N types of audio parameters with the N reference values respectively, to obtain a first comparison result;
An emotional information determining unit, configured to determine emotional information of the user according to the first comparison result;
A friend recommendation unit, configured to recommend friends to the user according to the emotional information of the user.
7. The device of claim 6, characterized in that the device further comprises:
A second acquiring unit, configured to obtain an average heart rate of the user over a specified time period, and to obtain a baseline heart rate of the user;
A second comparing unit, configured to compare the average heart rate with the baseline heart rate, to obtain a second comparison result;
Wherein the emotional information determining unit is specifically configured to:
Determine the emotional information of the user according to the first comparison result and the second comparison result.
8. The device of claim 6, characterized in that the device further comprises:
A voice recognition unit, configured to obtain a keyword database, and to perform speech recognition on the audio information according to the keyword database;
A mood keyword determining unit, configured to, if a speech fragment matching at least one keyword in the keyword database exists in the audio information, take the at least one keyword as a mood keyword of the user;
Wherein the emotional information determining unit is specifically configured to:
Determine the emotional information of the user according to the first comparison result and the mood keyword.
9. The device of claim 6, characterized in that the audio parameters comprise speech rate, average fundamental frequency, fundamental frequency range, sound intensity and/or clarity.
10. The device of claim 6, characterized in that the device further comprises:
A request unit, configured to ask the user to input audio information when a friend recommendation request sent by the user is received.
CN201510540954.6A 2015-08-27 2015-08-27 Good friend recommendation method and apparatus Pending CN105159979A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510540954.6A CN105159979A (en) 2015-08-27 2015-08-27 Good friend recommendation method and apparatus


Publications (1)

Publication Number Publication Date
CN105159979A true CN105159979A (en) 2015-12-16

Family

ID=54800835

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510540954.6A Pending CN105159979A (en) 2015-08-27 2015-08-27 Good friend recommendation method and apparatus

Country Status (1)

Country Link
CN (1) CN105159979A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1637740A (en) * 2003-11-20 2005-07-13 阿鲁策株式会社 Conversation control apparatus, and conversation control method
JP2013058061A (en) * 2011-09-08 2013-03-28 Dainippon Printing Co Ltd Menu recommending server, menu recommending system, menu recommending method, program, and recording medium
CN103829958A (en) * 2014-02-19 2014-06-04 广东小天才科技有限公司 Method and device for monitoring moods of people
CN103941853A (en) * 2013-01-22 2014-07-23 三星电子株式会社 Electronic device for determining emotion of user and method for determining emotion of user
CN104811469A (en) * 2014-01-29 2015-07-29 北京三星通信技术研究有限公司 Mobile terminal and emotion sharing method and device thereof


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105656756A (en) * 2015-12-28 2016-06-08 百度在线网络技术(北京)有限公司 Friend recommendation method and device
CN105677722A (en) * 2015-12-29 2016-06-15 百度在线网络技术(北京)有限公司 Method and apparatus for recommending friends in social software
CN105683964A (en) * 2016-01-07 2016-06-15 马岩 Network social contact searching method and system
WO2017117786A1 (en) * 2016-01-07 2017-07-13 马岩 Social network search method and system
CN106341527A (en) * 2016-08-25 2017-01-18 珠海市魅族科技有限公司 Emotion adjustment method and mobile terminal
CN107679249A (en) * 2017-10-27 2018-02-09 上海掌门科技有限公司 Friend recommendation method and apparatus
WO2019080637A1 (en) * 2017-10-27 2019-05-02 上海掌门科技有限公司 Friend recommendation method and device
CN108010516A (en) * 2017-12-04 2018-05-08 广州势必可赢网络科技有限公司 A kind of semanteme independent voice mood characteristic recognition method and device
WO2019192148A1 (en) * 2018-04-03 2019-10-10 南京阿凡达机器人科技有限公司 Method and system for recommending social object to user
CN109087670A (en) * 2018-08-30 2018-12-25 西安闻泰电子科技有限公司 Mood analysis method, system, server and storage medium
CN110958172A (en) * 2018-09-26 2020-04-03 上海掌门科技有限公司 Method, device and computer storage medium for recommending friends
CN111984122A (en) * 2020-08-19 2020-11-24 北京鲸世科技有限公司 Electroencephalogram data matching method and system, storage medium and processor

Similar Documents

Publication Publication Date Title
CN105159979A (en) Good friend recommendation method and apparatus
US9189471B2 (en) Apparatus and method for recognizing emotion based on emotional segments
CN107393541B (en) Information verification method and device
CN105681546A (en) Voice processing method, device and terminal
CN103870553B (en) A kind of input resource supplying method and system
CN104239458A (en) Method and device for representing search results
CN104409080A (en) Voice end node detection method and device
EP3121772A1 (en) Common data repository for improving transactional efficiencies across one or more communication channels
CN104866985A (en) Express bill number identification method, device and system
CA2916896C (en) Method and apparatus for automating network data analysis of user's activities
CN107871279A (en) User ID authentication method and application server
CN104900236A (en) Audio signal processing
CN103235773A (en) Method and device for extracting text labels based on keywords
CN104994223A (en) Text message editing method and device
CN104952451A (en) Sound recording processing method and sound recording processing device
CN104598245A (en) Chatting method and device and mobile terminal
CN106304084B (en) Information processing method and device
JP2019211633A (en) Voice processing program, voice processing method and voice processing device
CN103905661A (en) Message forwarding method and cloud server
CN110990577A (en) Text classification method and device
CN104834450A (en) Function starting method and terminal
CN115374793A (en) Voice data processing method based on service scene recognition and related device
CN110557351A (en) Method and apparatus for generating information
CN110442615B (en) Resource information processing method and device, electronic equipment and storage medium
CN108111908A (en) Audio quality determines method, equipment and computer readable storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20151216

RJ01 Rejection of invention patent application after publication