CN108182098A - Reception language selection method, system and reception robot - Google Patents

Reception language selection method, system and reception robot

Info

Publication number
CN108182098A
CN108182098A (application CN201711282814.9A)
Authority
CN
China
Prior art keywords
user
language
information
speech
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711282814.9A
Other languages
Chinese (zh)
Inventor
刘雪楠
沈刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kngli Youlan Robot Technology Co Ltd
Original Assignee
Beijing Kngli Youlan Robot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kngli Youlan Robot Technology Co Ltd filed Critical Beijing Kngli Youlan Robot Technology Co Ltd
Priority to CN201711282814.9A priority Critical patent/CN108182098A/en
Publication of CN108182098A publication Critical patent/CN108182098A/en
Pending legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/0005Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/008Manipulators for service tasks

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a reception language selection method, a reception language selection system, and a reception robot. The method includes: sensing that a user is approaching; judging whether the user is a registered user, and if so selecting a language according to the user's registration information, otherwise creating and recording registration information for the user; judging whether user speech is received, and if so selecting a language according to the user speech, otherwise identifying the user's category; and selecting a language according to the user's category. The reception language selection method, the reception language selection system, and the reception robot using the method and/or system according to the invention can determine a user's language from the user's voice, and can classify the user by biological information and secondary attributes to infer the user's language, thereby improving the efficiency and prediction success rate of reception language selection and improving user experience.

Description

Reception language selection method, system and reception robot
Technical field
The present invention relates to the field of intelligent robotics, and more particularly to a reception language selection method capable of automatically discriminating a user's language, a reception language selection system, and a reception robot using the method and/or system.
Background technology
With the development of robot technology, robots have been applied in many fields. Existing robots fall into two classes: industrial robots and specialized robots. An industrial robot is a multi-joint manipulator or multi-degree-of-freedom machine oriented to the industrial field. Specialized robots are the various advanced robots, other than industrial robots, used in non-manufacturing industries and in the service of humans, including underwater robots, entertainment robots, military robots, agricultural robots, and the like. Service robots are frequently used in welcoming, guide, and advertising service occasions such as banks, shopping malls, restaurants, real-estate sales offices, and hotel receptions. Such service robots can be set to operating modes such as welcoming, inquiry, food delivery, checkout, and entertainment. A service robot has the characteristic of intelligently substituting for manpower and can, in addition, interact with people. Compared with a human waiter, a service robot can better please and attract customers or clients and bring them a completely new service experience, while saving the merchant labor cost. It can guarantee high-quality service even while working long hours, avoiding the decline in customer satisfaction that occurs when human staff grow tired over long shifts, and thus greatly improves work efficiency.
Most application places of current service robots — for example hotels handling foreign guests, airport duty-free shops, foreign institutions, and shopping malls — use a pure-Chinese operating system for service or reception. A minority of reception robots use two parallel systems, Chinese and English, and select between the two. However, owing to technical barriers, dual-system selection requires operations such as restarting the robot; the steps are complex and extremely time-consuming, causing great inconvenience to users in actual use. Moreover, because support for other languages is lacking, international visitors who speak other languages have no option at all.
One solution is a browser-based distributed system in which multiple reception robot terminals are used only to display data, while information processing, in particular language processing, is carried out in a server. The user can interact with the reception robot manually, for example by pressing buttons or clicking with a mouse, so as to select the reception language. Although this scheme solves the problem of language diversity, the user experience is poor and the initiative of the welcoming reception is low: when a user (such as a business person or a tour guide of a travel party) is busy, it is difficult for the user to divert attention to interacting actively with the reception robot, so the reception robot can only provide help information in the default Chinese or English, making it difficult to raise the service level of the application place.
Summary of the invention
Therefore, an object of the present invention is to automatically judge which language a user speaks, so that the reception robot can proactively select the language used for welcoming, thereby effectively improving user experience.
The present invention provides a reception robot language selection method, including: sensing that a user is approaching; judging whether the user is a registered user, and if so selecting a language according to the user's registration information, otherwise creating and recording registration information for the user; judging whether user speech is received, and if so selecting a language according to the user speech, otherwise identifying the user's category; and selecting a language according to the user's category.
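As a hedged sketch, the claimed decision flow can be expressed as a small dispatcher. This is a minimal illustration, not the patent's implementation; the registry shape and all function inputs (`registry`, `face_id`, `speech_lang`, `category_lang`) are this sketch's own assumptions standing in for the sensing and recognition modules.

```python
from typing import Optional

def select_reception_language(registry: dict,
                              face_id: Optional[str],
                              speech_lang: Optional[str],
                              category_lang: str) -> str:
    """Sketch of the claimed flow: a registered user gets the language from
    the registration info; an unregistered user gets a new record; otherwise
    heard speech wins, and user-category prediction is the fallback."""
    if face_id is not None and face_id in registry:
        return registry[face_id]["language"]             # registered user
    if face_id is not None:
        registry[face_id] = {"language": category_lang}  # create a new record
    if speech_lang is not None:
        return speech_lang                               # language heard from user
    return category_lang                                 # fall back to classification
```

A usage example: a registered face overrides everything, while an unknown silent user falls through to the category prediction.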
The step of identifying the user's category further comprises: roughly classifying the user according to biological information; and finely classifying the user according to secondary attributes.
The biological information includes skin-color information, face contour information, height, pace, movement speed, and static/walking posture of the limbs. Optionally, the secondary attributes include clothing style, identity markings on luggage or clothing carried by the user, the language of the operation interface of a user-portable device, and the language of the device name of the user-portable device. Optionally, the rough classification includes the yellow race, the white race, and the black race.
The user registration information includes the user's face recognition information, the user's identity information, and the user's language-use information.
After the step of selecting a language according to the user registration information and/or selecting a language according to the user speech and/or selecting a language according to the user's category, the method further comprises receiving user feedback and modifying the user registration information.
The present invention also provides a reception robot language selection system, including: a user proximity sensing module, for sensing that a user is approaching; a user information registration module, for recording user registration information; a user speech recognition module, for receiving user speech; and a processor configured to: judge whether the user is a registered user, and if so select a language according to the user registration information, otherwise create and record user registration information in the user information registration module; and judge whether user speech is received, and if so select a language according to the user speech, otherwise identify the user's category and select a language according to the category.
The processor is further configured to roughly classify the user according to biological information, and to finely classify the user according to secondary attributes.
The processor is further configured to receive user feedback and modify the user registration information.
The invention further provides a reception robot that selects the language used for receiving a user by the method according to any of the above.
The reception language selection method, the reception language selection system, and the reception robot using the method and/or system according to the present invention can determine a user's language from the user's voice, and can classify the user by biological information and secondary attributes to infer the user's language, thereby improving the efficiency and prediction success rate of reception language selection and improving user experience.
The above object of the present invention, and other objects not listed herein, are satisfied within the scope of the independent claims of the present application. Embodiments of the present invention are defined in the independent claims, with specific features defined in the dependent claims.
Description of the drawings
The technical solution of the present invention will be described in detail below with reference to the accompanying drawings, in which:
Fig. 1 shows a schematic diagram of a reception robot system according to an embodiment of the present invention;
Fig. 2 shows a flow chart of a reception robot language selection method according to a first embodiment of the present invention;
Fig. 3 shows a flow chart of a reception robot language selection method according to a second embodiment of the present invention;
Fig. 4 shows a flow chart of a reception robot language selection method according to a third embodiment of the present invention; and
Fig. 5 shows a block diagram of a reception robot language selection system according to an embodiment of the present invention.
Detailed description of embodiments
The features and technical effects of the technical solution of the present invention will be described in detail below with reference to the accompanying drawings and to schematic embodiments, disclosing a language selection method and system that can effectively improve user experience, and a reception robot using the method and/or system. It should be noted that similar reference numerals denote similar structures, and that the terms "first", "second", "upper", "lower", etc. used herein may modify various system components or method steps. Unless otherwise stated, these modifiers do not imply any spatial, sequential, or hierarchical relationship among the modified system components or method steps.
As shown in Fig. 1, the reception robot according to an embodiment of the present invention includes: a highly sensitive microphone 1 on the crown of the head, for collecting or receiving ambient sound or personnel voice information; a high-definition camera 2 on the forehead, for collecting topological information of the faces of persons to be received (for example, bone contours); fine sensors 3A and 3B at the eyes, for capturing facial details of persons (such as the iris, the retina, dynamic changes of the eyebrows or corners of the eyes, the degree of a smile reflected by the lips or teeth, and slight twitches of the ears or nose) so as to reflect biological or emotional information of the person; touch sensors on various parts of the robot, including a chin touch sensor 4, an abdomen touch sensor 7, a crown touch sensor 10, left-ear/right-ear touch sensors 12A/12B, a back-of-head touch sensor 13, left-shoulder/right-shoulder touch sensors 15A/15B, and a hip touch sensor 17 — these touch sensors identify tactile interaction with the user, improving the accuracy of user identity or emotion recognition, and provide force feedback from the user's limbs to modify the movement/rotation parameters of the robot body; a 3D depth camera 5 at the neck, for collecting depth-of-field information of the surrounding scene; a touch display screen 6 on the chest, for displaying simulated heartbeat or skin-color change information to improve the fidelity of the robot, or widened to traverse the entire chest (not shown) to display reception/query information or other video information to the user; a 2D laser radar 8 at the lower abdomen, for measuring the distance between the robot and the user or other moving objects in the scene, and for assisting in judging object height, movement speed, and static/walking posture of the limbs; omnidirectional wheels 9 at the feet, for driving the entire robot along a pre-stored path or a path selected by real-time judgment; loudspeakers 11A/11B at the ears, for transmitting voice and audio information to the user; an emergency stop switch 14 at the back, for stopping the movement or action of the robot in an emergency so as to improve safety; a power-on button 16 at the rear waist, for manually starting the operating system of the reception robot to provide reception and consultation services; a hand biosensor 18 at the hand, for collecting the user's fingerprints, measuring skin moisture (resistivity) or roughness, measuring the stress of a handshake with the user, and measuring the user's pulse or capillary oxygen content; a charging interface 19 on the side of the leg; and a power switch on the back of the leg.
Fig. 2 shows a flow chart of the reception robot language selection method according to the first embodiment of the present invention, and Fig. 5 shows a block diagram of the reception robot language selection system used by this embodiment.
First, the user proximity sensing module senses that a user is approaching. For example, sensor information is received through the highly sensitive microphone 1, the high-definition camera 2, the 3D depth camera 5, the 2D laser radar 8, or other proximity sensors (not shown; including, for example, a bio-electric field sensor, a magnetic field sensor, a chemical gas sensor, or a mechanical vibration sensor). If the sensor information exceeds a preset threshold (obtained from large-volume test data and pre-stored in a storage device (not shown) included in the selection system), it is judged that a person or user moving toward the reception robot is present within a certain distance or effective range of the robot (for example, 5 meters). If there is an approaching user, the reception robot system, and in particular the language selection system, is awakened; if it is judged that there is none, the reception robot remains in a standby or dormant state.
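The wake-up decision described above — compare each sensor reading against its pre-stored threshold and wake on any exceedance — can be sketched as follows. The sensor names and threshold values are illustrative assumptions, not values from the patent.

```python
def should_wake(readings: dict, thresholds: dict) -> bool:
    """Wake the language selection system if ANY available sensor reading
    exceeds its pre-stored threshold; missing sensors default to 0."""
    return any(readings.get(name, 0) > limit
               for name, limit in thresholds.items())
```

For instance, with a (hypothetical) microphone threshold of 45 dB, a 60 dB reading wakes the system while ambient 30 dB does not.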
Then, the user information registration module judges whether the user is a registered user; if so, the reception language is selected according to the user registration information, otherwise user registration information is created and recorded.
The user registration information includes the user's face recognition information, mainly comprising facial skeleton contour (topology) structure, iris/retina feature information, and facial surface feature information (such as eyebrow distribution, eyelash length/curvature, and the shape/position of spots or moles); this face recognition information is created and recorded by the reception robot when the user is recognized for the first time. The user registration information further includes the user's identity information, mainly comprising name, gender, height, native place/nationality, occupation, etc.; this identity information is provided actively by the user during subsequent feedback, or recorded automatically according to consumption records at the venue, shopping mall, hotel, etc. The user registration information further comprises the user's language-use information, including the user's mother tongue, first foreign language, second foreign language, dialect, etc.; this is entered actively by the user or recognized automatically by the system when the user is recognized for the first time, and can be modified during subsequent feedback or corrected automatically using consumption records.
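A registration record holding the three parts described above might be sketched like this; the field names, record shape, and the Chinese default fallback are this sketch's assumptions, not terminology fixed by the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UserRegistration:
    """One record: face recognition info, identity info, language-use info."""
    face_features: List[float]                  # contour / iris / surface features
    identity: dict                              # name, gender, nationality, ...
    languages: List[str] = field(default_factory=list)  # mother tongue first

    def preferred_language(self) -> str:
        # Mother tongue if known; otherwise an assumed site default.
        return self.languages[0] if self.languages else "Chinese"
```

A record created on first recognition may start with empty language info and be filled in later through feedback or consumption records.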
Specifically, in a preferred embodiment of the present invention, the user's facial information is collected by the high-definition camera 2 and the fine sensors 3A/3B and supplied to the processor of the system (not shown) for face recognition, and is compared with the face recognition information in the user registration information pre-stored in the user information registration module of the system storage.
In other preferred embodiments of the present invention, the identity of the user is recognized through the bio-electric field sensor, the chemical gas sensor, or the above-mentioned hand biosensor 18 (a characteristic bio-electric field, a unique odor, or the other hand-related biological features mentioned above), or the identity of a user carrying an identity tag is recognized by an RFID tag reader (not shown).
If the comparison determines that the user is a registered user, the user registration information is extracted from the system storage, and the language in which the reception robot greets, advises, and receives the user is determined according to the language-use information recorded in the user registration information.
If the comparison determines that the user is not a registered user, a user registration information record is created, including face recognition information, identity information, and language-use information.
Next, it is judged whether user speech information is received. If so (for example, the user is talking with surrounding people while walking toward the reception robot, or is on a mobile phone call), the reception language is selected according to the user's voice; if not (for example, the user is browsing a web page on a mobile phone or tablet, watching a video, or listening to music with earphones and does not actively make a sound), the user's category is further identified. Specifically, the highly sensitive microphone 1 on the crown of the reception robot collects or receives the voice information actively uttered by a user within a certain range of the robot, and the user speech recognition module analyzes the voice information to identify the language the user speaks, such as Chinese, Japanese, Korean, English, French, Spanish, or Arabic; the language selection system of the reception robot then interacts with the user in the recognized language, actively inquiring about the user's needs, answering consultation items, and so on.
Then, in the case where the user does not actively make a sound, the language selection system uses the non-audio information about the user acquired by the robot, combined with big-data statistical results, to predict the user's category, and selects, according to the category, the language the user is likely to speak fluently.
For example, for a group customer such as a travel party, the time at which the party is scheduled to check into the hotel or to visit can be combined with the party size recognized by the 3D depth camera 5 and the 2D laser radar 8 (and an optional thermal imager, not shown) to judge which entry in the list of scheduled groups the party corresponds to; the country or region of the group is then retrieved from the booking information so that the language of that country can be selected for greeting.
For example, for a tourist near an airport shopping center, the baggage tag wrapped around the tourist's suitcase can be captured by the high-definition camera 2 and the 3D depth camera 5, and the departure location on the tag can be recognized so as to judge the tourist's country or region and thus predict the language the tourist is likely to speak fluently.
As another example, for a user watching a video, the language of the video subtitles or comment "bullet screen" (Chinese, English, etc.) can be recognized through the high-definition camera 2, and the language corresponding to the subtitles or bullet screen is selected.
Fig. 3 shows a flow chart of a reception robot language selection method according to a second embodiment of the present invention. Compared with the first embodiment shown in Fig. 2, this method further refines the user classification performed by the user classification and recognition module; the preceding method steps are the same or similar and are not repeated.
Specifically, the user classification step further comprises first roughly classifying the user according to biological information, and then finely classifying the user according to secondary attributes.
In an embodiment of the present invention, the biological information includes skin-color information obtained by face recognition with the high-definition camera 2 (yellow, white, black), face contour information (wide/narrow face, high/low nose, square/pointed jaw, depth of the eye sockets, prominence of the cheekbones, etc.), and the height, pace, movement speed, and static/walking posture of the limbs measured by the 2D laser radar 8.
According to this biological information, users approaching the reception robot are roughly divided into three categories: the yellow race (predicted to possibly speak Chinese, Japanese, or Korean), the white race (predicted to possibly speak English, French, German, Spanish, Portuguese, or Arabic), and the black race (predicted to possibly speak English, French, German, Spanish, or Portuguese).
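The category-to-candidate-language mapping above can be written down directly as a lookup; the fallback pair for an unrecognized category is an assumption (the text elsewhere names Chinese and English as the default reception languages).

```python
# Candidate-language lists per rough category, copied from the text above.
CANDIDATES = {
    "yellow": ["Chinese", "Japanese", "Korean"],
    "white": ["English", "French", "German", "Spanish", "Portuguese", "Arabic"],
    "black": ["English", "French", "German", "Spanish", "Portuguese"],
}

def candidate_languages(rough_class: str) -> list:
    """Languages to be weighed in the later fine (secondary-attribute) step."""
    return CANDIDATES.get(rough_class, ["Chinese", "English"])  # assumed default
```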
In a preferred embodiment of the present invention, face recognition is performed based on geometric features.
First, the image data collected by the high-definition camera 2 is pre-processed. A variable-neighborhood averaging method is used to smooth the original image and eliminate most of the noise. Specifically, for an image g(x, y) (0 <= x < w, 0 <= y < h) of w by h pixels, an n×n window centered on the point (i, j) is taken as the neighborhood, and the average is output as the gray value g*(x, y) of the center pixel. In the prior art, n is usually simply chosen as a fixed odd number, such as 3 or 5, which suffices to eliminate noise in most applications. However, in scenes with limited illumination or low atmospheric visibility, image blur is aggravated. Therefore, in a preferred embodiment of the present invention, a smaller n1 (for example 3) is first used to compute the initial gray-value set {g*(x, y)}; then the gray value g*(x, y) at the point (i, j) is compared with the initial gray values g*(x, y)′ of neighboring points at distance m (for example 1 to 4) from (i, j). If the difference is within a threshold T, g*(x, y) is taken as the final gray value of (i, j); if the difference is greater than or equal to T, a larger n2 (for example 5 or 7, an odd number less than or equal to n1 + m) is used to recompute the few specific points with large gray fluctuation. In this way, a better smoothing effect is obtained with a smaller amount of computation, overcoming the problems of illumination and air cleanliness in hazy or dark environments. After this preliminary smoothing, the few isolated noise points still present in the image are further removed by two-dimensional median filtering. As before, the dimension n of the two-dimensional window is likewise variable: a smaller n1 is used for the first computation, then the gray values of points within distance m are compared; if the difference is within a threshold T2, the first computed value is kept, and if the difference is greater than or equal to T2, a larger n2 window is used to recompute the gray value. Scale transformation is then performed based on linear interpolation, and the gray-level distribution is normalized taking the mean gray value and variance into account. Further, edge detection and binarization are preferably performed based on the Canny method, finally yielding the set of images to be detected.
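A rough sketch of the variable-neighborhood averaging idea — small window first, larger window only where the local gray value fluctuates beyond the threshold T — is given below, under simplifying assumptions the patent does not specify: the neighbor comparison uses only the horizontal neighbor at distance m, and borders are edge-padded.

```python
import numpy as np

def adaptive_mean_smooth(img, n1=3, n2=5, m=1, T=20):
    """Variable-neighborhood mean smoothing: first pass with an n1 x n1
    window; pixels whose smoothed value differs from a neighbor's by >= T
    are replaced by the n2 x n2 window average (sketch, not the patent's
    exact procedure)."""
    img = img.astype(np.float64)

    def mean_filter(a, n):
        p = n // 2
        padded = np.pad(a, p, mode="edge")   # border handling: edge padding
        out = np.zeros_like(a)
        h, w = a.shape
        for i in range(h):
            for j in range(w):
                out[i, j] = padded[i:i + n, j:j + n].mean()
        return out

    g1 = mean_filter(img, n1)        # initial gray values, small window
    g2 = mean_filter(img, n2)        # larger-window values, used selectively
    shifted = np.roll(g1, -m, axis=1)        # neighbor at distance m
    diff = np.abs(g1 - shifted)
    return np.where(diff < T, g1, g2)        # recompute only fluctuating points
```

On a flat image every difference is zero, so the first-pass values are kept everywhere; only edges and isolated noise trigger the larger window.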
Then, face recognition or posture recognition (changes of the skeletal support frame) is performed based on the detected geometric feature points. In a preferred embodiment of the present invention, the facial features used for geometric-feature-based face recognition are the centers of the two eyes, the outer edge positions of the nose, the positions of the corners of the mouth, and constructed supplementary feature points; in particular, the present invention further uses the eye width, the spacing between the eye line and the nose baseline, and the length and width of the auricle to refine sub-groups within the same ethnic group (for example, the narrower eye width typical of Korean faces). When detecting the positions of the facial organs, the coordinates of the two eye centers are determined first; then, on the basis of the two eye positions and the proportions of the human face, windows such as the eyebrow window, eye window, nose window, mouth window, and auricle window are extracted and computed independently to detect the feature points, that is, to calculate the position and geometric parameters of each organ. Similarly, posture recognition uses the positions and widths of the head, shoulders, hips, ankles, and elbows, and their changes, to characterize the geometric feature points. To overcome the influence of slight changes in facial expression or walking posture on the recognition result, feature vectors invariant to size, rotation, and translation are constructed from the feature points as the basis of face recognition. Specifically, with (fix, fiy) the coordinates of feature point fi and dij = √((fix − fjx)² + (fiy − fjy)²) the distance between two points, the feature vector set is {d12/d7a, d34/d7a, d23/d7a, d14/d7a, d56/d7a, d1a/d7a, d2a/d7a, d3a/d7a, d4a/d7a, d5a/d7a, d6a/d7a, dba/d7a}. A neural network classifier, such as a Bagging neural network, is then trained on the feature vectors of all the training samples.
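The distance-ratio feature vector can be sketched as below. The point labels and the particular pairs listed are illustrative; d_7a is used as the normalizing distance as in the text, which makes the vector invariant to translation, rotation, and uniform scaling (distances are unchanged by the first two and the ratio cancels the third).

```python
import math

def feature_vector(points, ref=("7", "a")):
    """Scale/rotation/translation-invariant feature vector from labeled
    feature points, following the patent's d_ij / d_7a ratios.
    `points` maps a label to (x, y); the pair list is an illustrative subset."""
    def dist(i, j):
        (x1, y1), (x2, y2) = points[i], points[j]
        return math.hypot(x1 - x2, y1 - y2)

    d_ref = dist(*ref)                       # normalizing distance d_7a
    pairs = [("1", "2"), ("3", "4"), ("2", "3"), ("1", "4"), ("5", "6")]
    return [dist(i, j) / d_ref for i, j in pairs]
```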
Further, an adaptively weighted multi-classifier is used: subspace features are extracted from the geometric feature points recognized above, two single-classifier classification structures are obtained, and the final classification result is output using weighting-coefficient estimation. Further, based on the ethnic-group classification result, the reception robot's historical data (such as registered-user data and the data of users who have been received and have provided feedback) is combined to predict the language the user may select.
The secondary attributes include the user's clothing style (such as a Chinese Tang suit or cheongsam, a Japanese kimono, an Arabic robe, a Scottish kilt, an ethnic costume, etc.), identity markings on luggage or clothing carried by the user (such as a travel party's flag, or the group name on a uniform or on luggage), and the language of the operation interface of a user-portable device (mobile phone, laptop, tablet, etc.) or the language corresponding to the Bluetooth ID/device name of the user-portable device. The secondary attributes are likewise obtained by the high-definition camera 2, the fine sensors 3A/3B, and the 3D depth camera 5 in combination.
Through the recognition of the user's secondary attributes, the reception language selection system compares them against big data obtained from network statistics, predicts the probability that a user with the identified secondary attributes, within each of the above three rough categories, speaks each language fluently, and chooses the language the user is most likely to use (the highest-probability language) as the language in which the reception robot will interact with the user. Specifically: a user judged to be of the white race and wearing an Arabic robe selects Arabic; a user judged to be of the yellow race and wearing a kimono selects Japanese; a user of the white race wearing a Scottish kilt selects English; a user of the black race whose mobile device name contains French characters selects French; a user of the white race wearing a Boca Juniors team uniform selects Spanish; a user of the yellow race whose travel party name contains "Sichuan" selects the Sichuan dialect; and so on.
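A toy version of this highest-probability choice might look like the following. The cue names and weights are invented for illustration — the patent relies on network big-data statistics rather than fixed weights — but the structure (rough-category candidates, then attribute evidence, then argmax) follows the text.

```python
def choose_language(rough_class: str, cues: set) -> str:
    """Pick the highest-scoring candidate language for a rough category,
    boosted by secondary-attribute cues (illustrative weights)."""
    candidates = {
        "yellow": ["Chinese", "Japanese", "Korean"],
        "white": ["English", "French", "German", "Spanish", "Portuguese", "Arabic"],
        "black": ["English", "French", "German", "Spanish", "Portuguese"],
    }[rough_class]
    cue_votes = {"kimono": "Japanese", "arabic_robe": "Arabic",
                 "scottish_kilt": "English", "french_device_name": "French",
                 "boca_juniors_jersey": "Spanish"}
    scores = {lang: 1.0 for lang in candidates}   # uniform prior
    for cue in cues:
        lang = cue_votes.get(cue)
        if lang in scores:
            scores[lang] += 10.0                  # strong attribute evidence
    return max(scores, key=scores.get)
```

With no cues the first candidate (the assumed most common language for the category) wins; a single strong cue dominates, matching the examples in the text.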
Fig. 4 shows a flow chart of a reception robot language selection method according to a third embodiment of the present invention. Compared with the second embodiment shown in Fig. 3, this method adds a further step after language selection — receiving user feedback and modifying the user registration information; the preceding method steps are the same or similar and are not repeated.
Specifically, the user feedback module receives user feedback, for example: emotional information reflected in the user's facial details collected by the fine sensors 3A/3B (such as surprise or a smile), direct audible feedback from the user received through the highly sensitive microphone 1 (such as praise or an exclamation), touch feedback obtained through the touch screen 6 (such as selecting or swiping), and force feedback obtained through touch sensors such as the crown touch sensor 10 or the hand biosensor 18 (such as a handshake or a pat); this feedback information is sent to the user information registration module. When the feedback information is judged to be positive or affirmative, the language currently used by the reception language selection system is recorded in the language-use information included in the user registration information.
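The positive-feedback update — recording the currently used language into the user's language-use information — can be sketched as follows; the record shape is an assumption of this sketch.

```python
def apply_feedback(registration: dict, current_language: str,
                   positive: bool) -> dict:
    """On positive feedback, add the currently used language to the user's
    language-use list (idempotent; negative feedback leaves it unchanged)."""
    langs = registration.setdefault("languages", [])
    if positive and current_language not in langs:
        langs.append(current_language)
    return registration
```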
The reception language selection method, the reception language selection system, and the reception robot using the method and/or system according to the present invention can determine a user's language from the user's voice, and can classify the user by biological information and secondary attributes to infer the user's language, thereby improving the efficiency and prediction success rate of reception language selection and improving user experience.
Although the present invention has been described with reference to one or more exemplary embodiments, those skilled in the art will recognize that various suitable changes and equivalents can be made to the system and method without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the disclosed teachings without departing from the scope of the invention. Therefore, the invention is not intended to be limited to the specific embodiments disclosed as the best mode contemplated for carrying it out; rather, the disclosed system and method include all embodiments falling within the scope of the invention.

Claims (9)

1. A reception robot language selection method, comprising:
sensing the approach of a user;
judging whether the user is a registered user; if so, selecting a language according to the user's registration information; if not, creating and recording user registration information;
judging whether user speech is received; if so, selecting a language according to the user speech; if not, identifying the user's category;
selecting a language according to the user's category.
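The selection flow of claim 1 can be sketched as follows. The module interfaces (`lookup_user`, `receive_speech`, `classify_user`, etc.) and the `StubRobot` used to exercise them are hypothetical placeholders, not part of the disclosure:

```python
def select_language(robot, user_id=None):
    """Claimed flow: registration info first, then speech, then classification."""
    reg = robot.lookup_user(user_id)            # registered user?
    if reg is not None:
        return reg["used_language"]             # select per registration info
    robot.create_registration(user_id)          # create and record registration
    speech = robot.receive_speech(timeout=3.0)  # speech received?
    if speech is not None:
        return robot.identify_spoken_language(speech)
    category = robot.classify_user()            # fall back to classification
    return robot.language_for_category(category)

class StubRobot:
    """Trivial stand-in for the robot's modules, for illustration only."""
    def __init__(self, reg=None, speech=None, category="cat_A"):
        self._reg, self._speech, self._category = reg, speech, category
    def lookup_user(self, user_id): return self._reg
    def create_registration(self, user_id): pass
    def receive_speech(self, timeout): return self._speech
    def identify_spoken_language(self, speech): return speech
    def classify_user(self): return self._category
    def language_for_category(self, category): return {"cat_A": "zh"}[category]
```

With these stubs, a registered user gets the stored language, a speaking user gets the language identified from speech, and a silent unregistered user gets the language mapped from their category.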
2. The method according to claim 1, wherein the step of identifying the user's category further comprises:
roughly classifying the user according to biological features;
finely classifying the user according to secondary attributes.
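One way to read the two-stage classification of claim 2, with illustrative feature names and decision rules (the disclosure does not specify the mapping, so every rule below is an assumption):

```python
def rough_classify(bio):
    """Stage 1: coarse class from biological features (illustrative rule).

    The claimed features include skin color, facial contour, height, gait,
    movement speed, and posture; this sketch uses only skin color.
    """
    table = {"yellow": "asian", "white": "caucasian", "black": "african"}
    return table.get(bio.get("skin_color"), "unknown")

def fine_classify(rough, attrs):
    """Stage 2: refine with secondary attributes, e.g. the language of the
    user's portable-device interface or device name."""
    device_lang = attrs.get("device_language")
    return (rough, device_lang) if device_lang else (rough, None)

result = fine_classify(rough_classify({"skin_color": "yellow"}),
                       {"device_language": "zh"})
```

The coarse class narrows the candidate languages; the device language, when available, then picks one directly.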
3. The method according to claim 2, wherein the biological features include skin color information, facial contour information, height, gait, movement speed, and static/walking limb posture; optionally, the secondary attributes include clothing style, identifiers on the luggage or clothing carried by the user, the language of the operation interface of the user's portable device, and the language of the device name of the user's portable device; optionally, the rough classification distinguishes the yellow, white, and black races.
4. The method according to claim 1, wherein the user registration information includes the user's facial recognition information, identity information, and used-language information.
5. The method according to claim 1, wherein the step of selecting a language according to the user registration information, and/or selecting a language according to the user speech, and/or selecting a language according to the user's category, is further followed by receiving user feedback and modifying the user registration information.
6. A reception robot language selection system, comprising:
a user approach sensing module, for sensing the approach of a user;
a user information registration module, for recording user registration information;
a user speech recognition module, for receiving user speech;
a processor, configured to:
judge whether the user is a registered user; if so, select a language according to the user registration information; if not, create and record user registration information in the user information registration module;
judge whether user speech is received; if so, select a language according to the user speech; if not, identify the user's category and select a language according to the user's category.
7. The system according to claim 6, wherein the processor is further configured to roughly classify the user according to biological features and to finely classify the user according to secondary attributes.
8. The system according to claim 6, wherein the processor is further configured to receive user feedback and modify the user registration information.
9. A reception robot, which uses a language selected by the method according to any one of claims 1-5.
CN201711282814.9A 2017-12-07 2017-12-07 Receive speech selection method, system and reception robot Pending CN108182098A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711282814.9A CN108182098A (en) 2017-12-07 2017-12-07 Receive speech selection method, system and reception robot

Publications (1)

Publication Number Publication Date
CN108182098A true CN108182098A (en) 2018-06-19

Family

ID=62545801

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711282814.9A Pending CN108182098A (en) 2017-12-07 2017-12-07 Receive speech selection method, system and reception robot

Country Status (1)

Country Link
CN (1) CN108182098A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004198831A (en) * 2002-12-19 2004-07-15 Sony Corp Method, program, and recording medium for speech recognition
CN101618542A (en) * 2009-07-24 2010-01-06 塔米智能科技(北京)有限公司 System and method for welcoming guest by intelligent robot
CN105058393A (en) * 2015-08-17 2015-11-18 李泉生 Guest greeting robot
CN106295299A (en) * 2016-08-15 2017-01-04 歌尔股份有限公司 The user registering method of a kind of intelligent robot and device
CN106504743A (en) * 2016-11-14 2017-03-15 北京光年无限科技有限公司 A kind of interactive voice output intent and robot for intelligent robot
CN106649290A (en) * 2016-12-21 2017-05-10 上海木爷机器人技术有限公司 Speech translation method and system
CN106952648A (en) * 2017-02-17 2017-07-14 北京光年无限科技有限公司 A kind of output intent and robot for robot

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110363890A (en) * 2019-07-01 2019-10-22 上海雷盎云智能技术有限公司 Hotel occupancy control method, device and intelligent panel
WO2021006363A1 (en) * 2019-07-05 2021-01-14 엘지전자 주식회사 Robot for providing information service by using artificial intelligence, and operating method therefor
US11423877B2 (en) 2019-07-05 2022-08-23 Lg Electronics Inc. Robot for providing guidance service using artificial intelligence and method of operating the same
CN112835661A (en) * 2019-11-25 2021-05-25 奥迪股份公司 On-board auxiliary system, vehicle comprising same, and corresponding method and medium
CN111745672A (en) * 2020-06-28 2020-10-09 江苏工程职业技术学院 Grabbing robot and control method thereof
CN113743946A (en) * 2021-09-16 2021-12-03 中国银行股份有限公司 Business handling method and device, storage medium and electronic equipment
CN114172954A (en) * 2021-12-02 2022-03-11 上海景吾智能科技有限公司 Intelligent gateway system and method for dispatching robot active voice welcome
CN114172954B (en) * 2021-12-02 2024-05-14 上海景吾智能科技有限公司 Intelligent gateway system and method for dispatching robot to actively answer guest through voice
CN114179083A (en) * 2021-12-10 2022-03-15 北京云迹科技有限公司 Method and device for generating voice information of leading robot and leading robot
CN114179083B (en) * 2021-12-10 2024-03-15 北京云迹科技股份有限公司 Leading robot voice information generation method and device and leading robot

Similar Documents

Publication Publication Date Title
CN108182098A (en) Receive speech selection method, system and reception robot
CN108161933A (en) Interactive mode selection method, system and reception robot
US9224037B2 (en) Apparatus and method for controlling presentation of information toward human object
JP2023171650A (en) Systems and methods for identifying persons and/or identifying and quantifying pain, fatigue, mood and intent with protection of privacy
Zhang et al. RGB-D camera-based daily living activity recognition
JP4481663B2 (en) Motion recognition device, motion recognition method, device control device, and computer program
Chen et al. Robust activity recognition for aging society
US6188777B1 (en) Method and apparatus for personnel detection and tracking
JP4198951B2 (en) Group attribute estimation method and group attribute estimation apparatus
CN114502061A (en) Image-based automatic skin diagnosis using deep learning
CN108960937A (en) Advertisement sending method of the application based on eye movement tracer technique of AR intelligent glasses
CN108153169A (en) Guide to visitors mode switching method, system and guide to visitors robot
KR20160012902A (en) Method and device for playing advertisements based on associated information between audiences
Chen et al. C-face: Continuously reconstructing facial expressions by deep learning contours of the face with ear-mounted miniature cameras
CN108198159A (en) A kind of image processing method, mobile terminal and computer readable storage medium
JPWO2007043712A1 (en) Emotion evaluation method and emotion display method, and program, recording medium and system therefor
US20230058903A1 (en) Customer Behavioural System
US20090033622A1 (en) Smartscope/smartshelf
WO2017137952A1 (en) Intelligent chatting on digital communication network
WO2023056288A1 (en) Body dimensions from two-dimensional body images
JP2016076259A (en) Information processing apparatus, information processing method, and program
Chua et al. Vision-based hand grasping posture recognition in drinking activity
TW201918934A Intelligent image information and big data analysis system and method using deep learning technology by integrating video monitoring and image deep learning technology to provide facial expression recognition information and motion recognition information
US10019489B1 (en) Indirect feedback systems and methods
CN113921098A (en) Medical service evaluation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180619