CN110405767A - Intelligent exhibition hall guiding method, apparatus, device and storage medium - Google Patents

Intelligent exhibition hall guiding method, apparatus, device and storage medium

Info

Publication number
CN110405767A
CN110405767A (application CN201910709045.9A; granted publication CN110405767B)
Authority
CN
China
Prior art keywords
user
robot
physical robot
exhibition
mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910709045.9A
Other languages
Chinese (zh)
Other versions
CN110405767B (en)
Inventor
于夕畔
周楠楠
蔡杭
杨海军
徐倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WeBank Co Ltd
Priority to CN201910709045.9A
Publication of CN110405767A
Application granted
Publication of CN110405767B
Legal status: Active
Anticipated expiration

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1602: Programme controls characterised by the control system, structure, architecture
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664: Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00: Commerce
    • G06Q30/02: Marketing; Price estimation or determination; Fundraising
    • G06Q30/0281: Customer communication at a business location, e.g. providing product or service information, consulting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/166: Detection; Localisation; Normalisation using acquisition arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Accounting & Taxation (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Robotics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Marketing (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Game Theory and Decision Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Economics (AREA)
  • Remote Sensing (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses an intelligent exhibition hall guiding method, apparatus, device and storage medium. The method comprises: when a guiding instruction for guiding a user is detected, controlling a physical robot of the exhibition hall to enter a guiding state; detecting a behavior state of the user while in the guiding state; and switching the guiding mode of the physical robot according to the behavior state, wherein the guiding mode includes a following mode and a guidance mode. In this way the physical robot of the exhibition hall is not limited to the single mode of mechanically guiding the user through the exhibition, but also provides a following mode in which it passively follows the user. The physical robot can flexibly switch its guiding mode according to the user's behavior state, making the robot more intelligent and improving the viewing experience of visitors.

Description

Intelligent exhibition hall guiding method, apparatus, device and storage medium
Technical field
The present invention relates to the field of artificial intelligence, and more particularly to an intelligent exhibition hall guiding method, apparatus, device and storage medium.
Background art
At present, when visitors arrive at an enterprise exhibition hall, a museum exhibition hall or the like, guiding and explanation are mostly performed by professional docents. However, recruiting a certain number of docents raises operating costs, and docents need to spend considerable time learning professional knowledge and the operation of the interactive equipment, which raises time costs. With the spread of artificial intelligence, the concept of the intelligent exhibition hall has been proposed to solve the operating-cost problem of manual guiding and explanation: a robot replaces the docent, guiding visitors to each exhibit point and providing voice explanation. However, existing guide robots merely move mechanically to each exhibit point along a pre-planned route; they cannot guide flexibly according to the visitor's behavioral intention during the visit, so the guidance mode is single and not intelligent enough.
Summary of the invention
The main purpose of the present invention is to provide an intelligent exhibition hall guiding method, apparatus, device and computer-readable storage medium, aiming to solve the technical problem that the guidance mode of current intelligent exhibition hall robots is single and not intelligent enough.
To achieve the above object, the present invention provides an intelligent exhibition hall guiding method, comprising the steps of:
when a guiding instruction for guiding a user is detected, controlling a physical robot of the exhibition hall to enter a guiding state;
detecting a behavior state of the user while in the guiding state; and
switching the guiding mode of the physical robot according to the behavior state, wherein the guiding mode includes a following mode and a guidance mode.
Optionally, the behavior state includes a distance value between the user and the physical robot and a facial direction of the user, and the step of switching the guiding mode of the physical robot according to the behavior state includes:
when the guiding mode is the guidance mode, detecting whether the distance value is greater than a first preset distance value, and detecting whether the facial direction is facing away from the physical robot;
when the distance value is greater than the first preset distance value and the facial direction is facing away from the physical robot, switching the guiding mode to the following mode.
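The switching rule above can be sketched as a small decision function. This is a minimal illustrative sketch, not the patent's implementation; the function and constant names, and the 3 m example threshold mentioned later in the description, are assumptions.

```python
# Hypothetical sketch of the claimed rule: in guidance mode, switch to
# following mode when the user is farther than the first preset distance
# AND is facing away from the robot; otherwise keep the current mode.

GUIDANCE, FOLLOWING = "guidance", "following"
FIRST_PRESET_DISTANCE_M = 3.0  # example value used in the description

def next_mode(current_mode: str, distance_m: float, facing_robot: bool) -> str:
    """Return the guiding mode after evaluating one behavior-state sample."""
    if (current_mode == GUIDANCE
            and distance_m > FIRST_PRESET_DISTANCE_M
            and not facing_robot):
        return FOLLOWING
    return current_mode
```

Note that the condition is conjunctive: distance alone, or facing away alone, is not enough to trigger the switch.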
Optionally, the intelligent exhibition hall guiding method further includes:
in the guiding state, when the guiding mode is the guidance mode, determining a target exhibit point according to a preset guidance rule, and determining a guidance path from the current position of the physical robot to the target exhibit point;
driving the physical robot to move along the guidance path, guiding the user to the target exhibit point.
Optionally, the step of determining a target exhibit point according to a preset guidance rule includes:
obtaining interest feature data of the user;
selecting, from the exhibit points of the exhibition hall, exhibit points matching the interest feature data;
taking, among the selected exhibit points, the exhibit point that is closest to the current position and has not been visited by the user as the target exhibit point.
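The selection rule above (interest match, then nearest unvisited) can be sketched as follows. The data shapes, interest tags as sets, and function names are illustrative assumptions.

```python
# Hypothetical sketch of target-exhibit selection: among exhibit points
# whose tags overlap the user's interest features, pick the unvisited one
# nearest the robot's current position.
import math

def choose_target(exhibits, interests, visited, robot_pos):
    """exhibits: list of (name, (x, y), tags); returns the chosen name or None."""
    candidates = [
        (name, pos) for name, pos, tags in exhibits
        if name not in visited and interests & tags  # unvisited, interest match
    ]
    if not candidates:
        return None
    # Nearest candidate by straight-line distance to the robot.
    return min(candidates, key=lambda c: math.dist(robot_pos, c[1]))[0]
```

Returning `None` when no exhibit matches leaves room for a fallback rule (e.g. nearest unvisited regardless of interest), which the patent does not specify.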
Optionally, after the step of driving the physical robot to move along the guidance path and guiding the user to the target exhibit point, the method further includes:
after the physical robot reaches the target exhibit point, obtaining an explanation resource of the target exhibit point from a preset explanation resource library, and controlling the physical robot to output the explanation resource by voice; or,
sending a start instruction to a virtual robot at the target exhibit point, so that the virtual robot displays preset display content according to the start instruction, or interacts with the user.
Optionally, the intelligent exhibition hall guiding method further includes:
in the guiding state, when the guiding mode is the following mode, detecting the user position of the user;
determining a following path for moving from the current position of the physical robot to a nearby position, wherein the nearby position is at a second preset distance value from the user position;
driving the physical robot to move to the nearby position along the following path.
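The "nearby position" above can be sketched geometrically: a point at the second preset distance from the user, on the line from the user toward the robot, so the robot closes the gap without crowding the user. The 1.5 m value and the 2-D geometry are assumptions; the patent only requires the nearby position to be at the second preset distance from the user.

```python
# Hypothetical sketch of the following-mode target position.
import math

SECOND_PRESET_DISTANCE_M = 1.5  # assumed value for illustration

def nearby_position(robot_pos, user_pos, keep=SECOND_PRESET_DISTANCE_M):
    """Point at distance `keep` from the user, on the user->robot direction."""
    dx, dy = robot_pos[0] - user_pos[0], robot_pos[1] - user_pos[1]
    d = math.hypot(dx, dy)
    if d == 0:  # degenerate case: robot already at the user's position
        return user_pos
    return (user_pos[0] + dx / d * keep, user_pos[1] + dy / d * keep)
```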
Optionally, before the step of entering the guiding state when a guiding instruction for guiding a user is detected, the method further includes:
when the physical robot is in a non-guiding state, controlling a camera on the physical robot to perform face detection;
when facial information of a user is detected, controlling the physical robot to output a preset prompt voice to ask whether the user needs guiding;
after recognizing, from the speech input by the user, a confirmation instruction confirming the guiding, triggering the guiding instruction for guiding the user.
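The wake-up flow above (face detected, prompt voice, confirmation recognized) reduces to a small gate. The keyword-matching stand-in for speech recognition and all names here are assumptions; the patent only requires recognizing a confirmation instruction in the user's speech.

```python
# Hypothetical sketch of the non-guiding-state wake-up gate: trigger the
# guiding instruction only when a face is detected AND the user's speech
# contains a confirmation. Keyword matching stands in for a real ASR system.

CONFIRM_WORDS = {"yes", "ok", "sure", "please"}

def should_trigger_guiding(face_detected: bool, user_speech: str) -> bool:
    """Return True when the guiding instruction should be triggered."""
    if not face_detected:
        return False  # no user in front of the camera, stay idle
    words = set(user_speech.lower().split())
    return bool(words & CONFIRM_WORDS)
```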
In addition, to achieve the above object, the present invention also provides an intelligent exhibition hall guiding apparatus, comprising:
a control module, configured to control a physical robot of the exhibition hall to enter a guiding state when a guiding instruction for guiding a user is detected;
a detection module, configured to detect a behavior state of the user while in the guiding state;
a switching module, configured to switch the guiding mode of the physical robot according to the behavior state, wherein the guiding mode includes a following mode and a guidance mode.
In addition, to achieve the above object, the present invention also provides an intelligent exhibition hall guiding device, comprising a memory, a processor, and an intelligent exhibition hall guiding program stored on the memory and executable on the processor, wherein the intelligent exhibition hall guiding program, when executed by the processor, implements the steps of the intelligent exhibition hall guiding method described above.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium on which an intelligent exhibition hall guiding program is stored, wherein the intelligent exhibition hall guiding program, when executed by a processor, implements the steps of the intelligent exhibition hall guiding method described above.
In the present invention, when the physical robot detects a guiding instruction for guiding a user, it enters a guiding state; the behavior state of the user is detected in the guiding state; and the guiding mode of the physical robot is switched according to the behavior state, the guiding mode including a following mode and a guidance mode. Thus the physical robot of the exhibition hall is not limited to the single mode of mechanically guiding the user through the exhibition, but also has a following mode in which it passively follows the user. The robot can flexibly switch its guiding mode according to the user's behavior state, making it more intelligent and improving the viewing experience of visitors.
Brief description of the drawings
Fig. 1 is a schematic structural diagram of the hardware operating environment involved in embodiments of the present invention;
Fig. 2 is a schematic flowchart of a first embodiment of the intelligent exhibition hall guiding method of the present invention;
Fig. 3 is a schematic functional module diagram of a preferred embodiment of the intelligent exhibition hall guiding apparatus of the present invention.
The realization of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit the present invention.
The present invention provides an intelligent exhibition hall guiding device. Referring to Fig. 1, Fig. 1 is a schematic structural diagram of the hardware operating environment involved in embodiments of the present invention.
It should be noted that Fig. 1 may be the schematic structural diagram of the hardware operating environment of the intelligent exhibition hall guiding device. The intelligent exhibition hall guiding device of the embodiments of the present invention may be arranged in the physical robot, or may be a terminal device independent of the physical robot, such as a PC, a smartphone, a smart TV, a tablet computer or a portable computer, which remotely controls the physical robot through a communication connection with it. The physical robot may include a robot body, a moving device, a voice device, a camera, a ranging sensor and an obstacle sensor.
As shown in Fig. 1, the intelligent exhibition hall guiding device may include: a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005 and a communication bus 1002. The communication bus 1002 is used to realize connection and communication between these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include standard wired and wireless interfaces. The network interface 1004 may optionally include standard wired and wireless interfaces (such as a WI-FI interface). The memory 1005 may be a high-speed RAM memory, or may be a stable non-volatile memory such as a disk memory. Optionally, the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
Optionally, the intelligent exhibition hall guiding device may further include a camera, an RF (Radio Frequency) circuit, sensors, an audio circuit, a WiFi module, and the like. Those skilled in the art will understand that the structure of the intelligent exhibition hall guiding device shown in Fig. 1 does not constitute a limitation on the device, and the device may include more or fewer components than shown, or combine certain components, or have a different arrangement of components.
As shown in Fig. 1, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module and an intelligent exhibition hall guiding program.
In the intelligent exhibition hall guiding device shown in Fig. 1, the network interface 1004 is mainly used to connect to a background server and perform data communication with it; the user interface 1003 is mainly used to connect to a client (user terminal) and perform data communication with it; and the processor 1001 may be used to call the intelligent exhibition hall guiding program stored in the memory 1005 and perform the following operations:
when a guiding instruction for guiding a user is detected, controlling the physical robot of the exhibition hall to enter a guiding state;
detecting a behavior state of the user while in the guiding state; and
switching the guiding mode of the physical robot according to the behavior state, wherein the guiding mode includes a following mode and a guidance mode.
Further, the behavior state includes a distance value between the user and the physical robot and a facial direction of the user, and the step of switching the guiding mode of the physical robot according to the behavior state includes:
when the guiding mode is the guidance mode, detecting whether the distance value is greater than a first preset distance value, and detecting whether the facial direction is facing away from the physical robot;
when the distance value is greater than the first preset distance value and the facial direction is facing away from the physical robot, switching the guiding mode to the following mode.
Further, the processor 1001 may call the intelligent exhibition hall guiding program stored in the memory 1005 and also perform the following operations:
in the guiding state, when the guiding mode is the guidance mode, determining a target exhibit point according to a preset guidance rule, and determining a guidance path from the current position of the physical robot to the target exhibit point;
driving the physical robot to move along the guidance path, guiding the user to the target exhibit point.
Further, the step of determining a target exhibit point according to a preset guidance rule includes:
obtaining interest feature data of the user;
selecting, from the exhibit points of the exhibition hall, exhibit points matching the interest feature data;
taking, among the selected exhibit points, the exhibit point that is closest to the current position and has not been visited by the user as the target exhibit point.
Further, after the step of driving the physical robot to move along the guidance path and guiding the user to the target exhibit point, the processor 1001 may call the intelligent exhibition hall guiding program stored in the memory 1005 and also perform the following operations:
after the physical robot reaches the target exhibit point, obtaining an explanation resource of the target exhibit point from a preset explanation resource library, and controlling the physical robot to output the explanation resource by voice; or,
sending a start instruction to a virtual robot at the target exhibit point, so that the virtual robot displays preset display content according to the start instruction, or interacts with the user.
Further, the processor 1001 may call the intelligent exhibition hall guiding program stored in the memory 1005 and also perform the following operations:
in the guiding state, when the guiding mode is the following mode, detecting the user position of the user;
determining a following path for moving from the current position of the physical robot to a nearby position, wherein the nearby position is at a second preset distance value from the user position;
driving the physical robot to move to the nearby position along the following path.
Further, before the step of entering the guiding state when a guiding instruction for guiding a user is detected, the processor 1001 may call the intelligent exhibition hall guiding program stored in the memory 1005 and also perform the following operations:
when the physical robot is in a non-guiding state, controlling a camera on the physical robot to perform face detection;
when facial information of a user is detected, controlling the physical robot to output a preset prompt voice to ask whether the user needs guiding;
after recognizing, from the speech input by the user, a confirmation instruction confirming the guiding, triggering the guiding instruction for guiding the user.
Based on the above hardware structure, embodiments of the intelligent exhibition hall guiding method of the present invention are proposed.
Referring to Fig. 2, a first embodiment of the intelligent exhibition hall guiding method of the present invention provides an intelligent exhibition hall guiding method. It should be noted that although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from that herein. In this embodiment, the executing subject may be the physical robot, or a controller or remote controller of the physical robot; in the following embodiments, the physical robot is taken as the executing subject for illustration. The intelligent exhibition hall guiding method includes:
Step S10: when a guiding instruction for guiding a user is detected, controlling the physical robot of the exhibition hall to enter a guiding state.
In this embodiment, multiple physical robots may be set up in exhibition halls such as museum exhibition halls and enterprise exhibition halls to guide users (visitors). When a physical robot is in a non-guiding state, it may queue at the entrance of the exhibition hall or at a fixed position in the hall. When a user needs guiding, the user may trigger the guiding instruction of the physical robot by operating a control interface set on the robot, or by a voice instruction, thereby waking up the robot. After detecting the guiding instruction, the physical robot may obtain the identity information of the user to be guided and bind to that identity information, so as to guide that specific user. Specifically, the physical robot may perform face detection on the user through the camera arranged on it, and use the recognized facial features of the user as the user's identity information, so as to distinguish this user from other users.
When the physical robot detects the guiding instruction for guiding the user, it enters the guiding state.
Step S20: detecting the behavior state of the user while in the guiding state.
When the physical robot is in the guiding state, its guiding mode may be the guidance mode or the following mode. In the guidance mode, the physical robot actively guides the user; in the following mode, the physical robot passively follows the user. In the guiding state, the physical robot may detect the behavior state of the user so as to recognize the user's behavioral intention: for example, when the user does not know the locations of the exhibit points, the physical robot is needed to guide the user to each exhibit point; when the user wants to find exhibit points on their own, guidance by the physical robot is not needed.
The behavior state of the user may include the distance value between the user and the physical robot, the facial direction of the user, the position of the user, and so on. The physical robot may measure the distance value between the user and itself in various ways, for example through the ranging sensor arranged on it; the measurement method is not limited in this embodiment. The physical robot may detect the facial direction of the user through the cameras arranged on it; there may be multiple cameras, so that pictures of the environment 360 degrees around the robot can be captured. The physical robot detects whether the facial features of the user appear in the environment pictures: if so, it determines that the user is facing the robot; if not, that the user is facing away from the robot. The physical robot may detect the user's position in various ways: for example, it may locate its own current position through a positioning device on the robot and determine the user's position from the distance value between the user and the robot together with the user's direction relative to the robot; or a mobile terminal carried by the user may establish a Bluetooth connection with the robot and send the user's position to the robot in real time. The detection method of the user's position is not limited in this embodiment. Further, the physical robot may detect the behavior state of the user once every preset time interval, which may be configured in advance as needed, for example every 1 second.
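The first localization option above, deriving the user's position from the robot's own position plus the measured distance and the user's direction relative to the robot, is straightforward trigonometry. The angle convention (radians from the x-axis) and names are assumptions for illustration.

```python
# Hypothetical sketch of user localization from range and bearing: project
# the user's (x, y) from the robot's position, the ranging-sensor distance,
# and the camera-derived bearing of the user relative to the robot.
import math

def user_position(robot_pos, distance_m, bearing_rad):
    """Return the user's (x, y) given the robot's position, range and bearing."""
    x = robot_pos[0] + distance_m * math.cos(bearing_rad)
    y = robot_pos[1] + distance_m * math.sin(bearing_rad)
    return (x, y)
```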
Step S30: switching the guiding mode of the physical robot according to the behavior state, wherein the guiding mode includes a following mode and a guidance mode.
The physical robot analyzes the behavior state of the user, judges the user's behavioral intention, and switches its guiding mode according to that intention. Specifically, when the physical robot is in the guidance mode and judges from the user's behavior state that the user wants to visit on their own and does not need guidance, it switches the guidance mode to the following mode; when the physical robot is in the following mode and judges from the user's behavior state that the user needs guidance and does not want to visit on their own, it switches the following mode to the guidance mode.
Further, before switching, the robot may actively output a preset prompt voice asking whether the user needs to switch modes, recognize the speech input by the user, and judge whether the speech contains a confirmation instruction confirming the mode switch. If the confirmation instruction is recognized, the mode is switched; otherwise, the current mode is kept.
Further, a physical control or virtual control for mode switching may be set on the control interface of the physical robot, or the speech input by the user may be recognized to judge whether the user needs a mode switch. The user may switch the guiding mode of the physical robot at any time, either when waking up the robot or during the guiding process after the robot has entered the guiding state, by operating the control interface of the robot or by a voice instruction. After detecting a mode-switching setting instruction input by the user, the physical robot switches the guiding mode according to the setting instruction. It should be noted that if the physical robot does not detect a user setting of the guiding mode when entering the guiding state, it may actively remind the user to set the guiding mode by outputting a voice prompt, or set the guiding mode to a default mode, which may be configured in advance as the guidance mode or the following mode.
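The default-mode fallback described above can be sketched as a small helper. The choice of "guidance" as the preconfigured default is an assumption; the patent allows either mode as the default.

```python
# Hypothetical sketch of the default-mode fallback on entering the guiding
# state: use the user's explicit setting if present and valid, otherwise a
# preconfigured default mode.

DEFAULT_MODE = "guidance"  # could equally be preconfigured as "following"

def initial_mode(user_setting):
    """Mode on entering the guiding state; user_setting may be None."""
    if user_setting in ("guidance", "following"):
        return user_setting
    return DEFAULT_MODE
```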
In this embodiment, when the physical robot detects a guiding instruction for guiding a user, it enters a guiding state; the behavior state of the user is detected in the guiding state; and the guiding mode of the physical robot is switched according to the behavior state, the guiding mode including a following mode and a guidance mode. Thus the physical robot of the exhibition hall is not limited to the single mode of mechanically guiding the user through the exhibition, but also has a following mode in which it passively follows the user; it can flexibly switch its guiding mode according to the user's behavior state, making it more intelligent and improving the viewing experience of visitors.
Further, the behavior state includes the distance value between the user and the physical robot and the facial direction of the user, and step S30 includes:
Step S301: when the guiding mode is the guidance mode, detecting whether the distance value is greater than a first preset distance value, and detecting whether the facial direction is facing away from the physical robot.
When the guiding mode of the physical robot is the guidance mode, the physical robot detects whether the distance value between the user and the robot is greater than the first preset distance value, and detects whether the facial direction of the user is facing away from the robot. The first preset distance value may be configured in advance as needed, for example 3 meters.
Step S302: when the distance value is greater than the first preset distance value and the facial direction is facing away from the physical robot, switching the guiding mode to the following mode.
When the physical robot detects that the distance value is greater than the first preset distance value and the facial direction is facing away from the robot, it indicates that the user is far from the robot and has their back to it, which suggests that the user may want to view the exhibition according to their own wishes and does not want to follow the robot. The physical robot may then switch the guiding mode from the guidance mode to the following mode, so as to passively follow the user.
Further, the physical robot may also switch the guiding mode from the guidance mode to the following mode only when it detects that the state in which the distance value is greater than the first preset distance value and the facial direction is facing away from the robot has lasted for a certain duration (for example, 10 seconds), so as to judge more accurately the user's intention not to follow the robot, thereby further improving the visitor's viewing experience.
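The duration refinement above can be sketched as a small tracker that only reports the condition once it has held continuously for the preset time. The 10 s value comes from the description; the class and its timestamp-based API are illustrative assumptions.

```python
# Hypothetical sketch of the duration check: the "far and facing away"
# condition must hold continuously for HOLD_SECONDS before a switch fires.

HOLD_SECONDS = 10.0  # example duration from the description

class SustainedCondition:
    """Tracks how long a boolean condition has held continuously."""
    def __init__(self):
        self.since = None  # timestamp when the condition first became true

    def update(self, condition_true: bool, now: float) -> bool:
        """Feed one sample; return True once the condition has held long enough."""
        if not condition_true:
            self.since = None  # any interruption resets the timer
            return False
        if self.since is None:
            self.since = now
        return now - self.since >= HOLD_SECONDS
```

Resetting the timer on any interruption means brief glances back at the robot cancel the pending switch, which matches the goal of judging the user's intention more accurately.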
Further, when the physical robot is in the follow mode, it can monitor the user's location. If the user's position changes little over an extended period, for example by no more than 2 meters within 2 minutes, the user is lingering in one place. The physical robot can then switch the leading mode from the follow mode back to the guidance mode, or actively output a voice prompt asking whether the user wants to switch modes, and switch to the guidance mode once the user confirms.
In this embodiment, when the physical robot detects that the distance between the user and the robot exceeds the first preset distance value and the user's face is turned away from the robot, it switches the leading mode from the guidance mode to the follow mode. This intelligently recognizes the user's intention not to follow the robot and switches modes accordingly, making the robot's leading service more intelligent and improving the visitor's exhibition-viewing experience.
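The switching rule of steps S301 and S302, including the optional dwell time, can be sketched as a small state check. The threshold value, dwell duration, and method names below are illustrative assumptions, not the patent's implementation:

```python
FIRST_PRESET_DISTANCE_M = 3.0   # assumed first preset distance value
DWELL_SECONDS = 10.0            # assumed duration the facing-away state must persist

class LeadingModeSwitcher:
    """Sketch of the guidance-to-follow switch from steps S301/S302."""

    def __init__(self):
        self.mode = "guidance"
        self._away_since = None  # time at which the facing-away state began

    def update(self, distance_m, facing_away, now_s):
        """Feed one detection sample; returns the (possibly switched) mode."""
        if self.mode == "guidance":
            if distance_m > FIRST_PRESET_DISTANCE_M and facing_away:
                if self._away_since is None:
                    self._away_since = now_s
                elif now_s - self._away_since >= DWELL_SECONDS:
                    # user is far away and turned away long enough: follow
                    self.mode = "follow"
            else:
                self._away_since = None  # condition broken; reset the timer
        return self.mode
```

Requiring the condition to persist before switching (rather than switching on a single sample) avoids flapping between modes when the user briefly glances away.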
Further, based on the first embodiment described above, a second embodiment of the intelligent exhibition hall leading method of the present invention is provided. In this embodiment, the intelligent exhibition hall leading method further includes:
Step S40: in the leading state, when the leading mode is the guidance mode, determining a target exhibition point according to a preset guidance rule, and determining a guidance path from the current location of the physical robot to the target exhibition point;
In the leading state, when the leading mode is the guidance mode, the physical robot determines a target exhibition point from the exhibition points of the hall according to a preset guidance rule, and determines a guidance path from its current location to the target exhibition point.
The preset guidance rule can be configured in advance, for example to select, from the exhibition points, the point with the largest or smallest current visitor flow, or the point nearest to the physical robot's current location, as the target exhibition point. The physical robot, or a cloud server, stores a map of the exhibition hall containing the location of each exhibition point. The robot can locate itself with a built-in positioning device to obtain its current location in the hall, compute the distance from that location to each exhibition point, and select the nearest point that the user has not yet visited as the target. After the target exhibition point is determined, the robot plans the guidance path from its current location to the target according to the map: when the straight line from the current location to the target is unobstructed on the map, the guidance path is planned as a straight path, with its length computed from the map's scale; when there is an obstacle on the straight line, the guidance path can be planned as a curved path that avoids the obstacle.
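The straight-versus-curved decision above can be roughly illustrated as follows. The grid-cell map representation, scale value, and function name are assumptions for this sketch only:

```python
import math

MAP_SCALE_M_PER_UNIT = 0.05  # assumed map scale: metres per map unit

def plan_straight_path(start, goal, blocked_cells):
    """Return ('straight', length_m) when the segment from start to goal is
    clear on the map, else ('curved', None) to signal that an
    obstacle-avoiding path must be planned instead.
    `blocked_cells` is an assumed set of obstructed integer map cells."""
    steps = 100
    for i in range(steps + 1):
        t = i / steps
        x = start[0] + t * (goal[0] - start[0])
        y = start[1] + t * (goal[1] - start[1])
        if (round(x), round(y)) in blocked_cells:
            return ("curved", None)
    # segment is clear: path length in metres via the map scale
    length_m = math.dist(start, goal) * MAP_SCALE_M_PER_UNIT
    return ("straight", length_m)
```

A real implementation would plan the curved alternative with a grid planner such as A* over the same map; the sketch only shows the clearance test and the scale conversion mentioned in the text.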
Further, the step of determining the target exhibition point according to the preset guidance rule includes:
Step S401: obtaining interest feature data of the user;
When determining the target exhibition point, the physical robot can obtain the user's interest feature data. Specifically, the user may input his or her interest features when waking the robot or when setting off for the next exhibition point; alternatively, the robot may chat with or question the user on the way to the next exhibition point by outputting preset chat or question topics, and extract interest feature data from the recorded user speech. The interest feature data may be the types of exhibition points the user wants to visit, the names of exhibition points the user wants to visit, articles the user likes, and so on. In addition, while in the follow mode, the physical robot can record information about the exhibition points the user has visited and analyze that information to derive the user's interest features, for example deriving the user's preferred exhibition point types from the types of those points.
Step S402: selecting, from the exhibition points of the hall, exhibition points matching the interest feature data;
The information of each exhibition point is preset in the physical robot. After obtaining the user's interest feature data, the robot selects from the hall's exhibition points those matching the data. Specifically, the interest feature data is matched against the information of each exhibition point; for example, if the interest feature data consists of exhibition point types the user likes, the points whose type belongs to the liked types are matched.
Step S403: taking, among the selected exhibition points, the one nearest to the current location that the user has not yet visited as the target exhibition point.
Among the exhibition points matching the interest feature data, the physical robot takes the one nearest to its current location that the user has not yet visited as the target. In this embodiment, the robot determines the target exhibition point according to the user's interest feature data, personalizing the guidance mode so that the guide service better fits the needs of each user, thereby realizing the intelligence of the physical robot and further improving the visitor's exhibition-viewing experience.
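The selection in steps S401 to S403 amounts to a filter plus a nearest-neighbour pick. The exhibition-point data layout and field names below are assumptions for illustration:

```python
import math

def pick_target_point(points, robot_pos, liked_types, visited_names):
    """Steps S402/S403: keep points whose type matches the user's interest
    features, drop already-visited ones, then take the one nearest to the
    robot's current location. Returns None when nothing matches."""
    candidates = [p for p in points
                  if p["type"] in liked_types and p["name"] not in visited_names]
    if not candidates:
        return None
    return min(candidates, key=lambda p: math.dist(robot_pos, p["pos"]))
```

Returning None when no candidate remains gives the caller a natural place to fall back to a default rule, such as the visitor-flow rule mentioned earlier.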
Step S50: driving the physical robot to move along the guidance path, guiding the user to the target exhibition point.
After the guidance path is determined, the physical robot drives its locomotion device to move along the path, guiding the user to the target exhibition point. For example, if the target exhibition point is 20 meters from the robot and the guidance path runs 20 meters straight ahead, the robot drives its locomotion device 20 meters forward.
While moving along the guidance path, if the robot's obstacle sensor detects an obstacle ahead (such as another user), the guidance path can be re-planned to avoid a collision. After reaching the target exhibition point, the robot can stop there; after a preset dwell time it can actively output a voice prompt asking whether the user wants to go to the next exhibition point and, upon receiving the user's confirmation, determine the next target exhibition point and guidance path. Alternatively, if during the dwell time the user actively issues an instruction to go to the next exhibition point, the robot proceeds directly to determine the next target and path without further waiting. After guiding the user through an exhibition point, the robot can mark it as visited so that the user is not guided to the same point twice, further realizing the intelligence of the physical robot and improving the user experience.
In this embodiment, when the physical robot is in the guidance mode, the next target exhibition point and guidance path are planned dynamically, rather than mechanically guiding the user along a fixed path in a fixed visiting order, realizing the intelligence of the physical robot and improving the visitor's exhibition-viewing experience.
Further, after step S50, the method also includes:
Step S60: after the physical robot reaches the target exhibition point, obtaining the explanation resource of the target exhibition point from a preset explanation resource library, and controlling the physical robot to output the explanation resource by voice;
After reaching the target exhibition point, the robot can obtain the point's explanation resource from a resource library built into the robot, or, if the library is stored on a cloud server, request the resource from the server. Once the explanation resource is obtained, the robot controls its built-in voice device to output it by voice.
Step S70: sending a start instruction to the virtual robot of the target exhibition point, so that the virtual robot shows preset display content or interacts with the user according to the start instruction.
Alternatively, when the target exhibition point is equipped with a virtual robot for interacting with users or presenting graphic or video content, the physical robot can send a start instruction to the virtual robot; upon receiving it, the virtual robot shows the preset display content, such as video or image content configured in advance, or interacts with the user. After the physical robot finishes the voice explanation at the target exhibition point, or after the virtual robot finishes its display or interaction, the physical robot can proceed to determine the next target exhibition point.
In this embodiment, the physical robot delivers the explanation at each exhibition point, or the virtual robot presents content and interacts with the user, so that an enterprise need not spend time and money recruiting human guides, and the difficulty of scheduling guides when visitor numbers are high is resolved.
Further, based on the first or second embodiment described above, a third embodiment of the intelligent exhibition hall leading method of the present invention is provided. In this embodiment, the intelligent exhibition hall leading method further includes:
Step A10: in the leading state, when the leading mode is the follow mode, detecting the user location of the user;
In the leading state, when the leading mode is the follow mode, the physical robot detects the user's location. The detection method is the same as that of the user's position in the first embodiment and is not repeated here.
Step A20: determining a follow path from the current location of the physical robot to a close position, where the close position is at a second preset distance value from the user location;
The physical robot determines a close position relative to the user location, the close position being separated from the user location by a second preset distance value. The second preset distance value can be configured in advance, for example set to 1 meter, so that the robot keeps a 1-meter following distance in the follow mode. Specifically, the robot can take, on the line between its current location and the user location, the point at the second preset distance from the user location as the close position, and then determine a follow path from its current location to that close position. The follow path is determined similarly to the guidance path in the second embodiment and is not repeated here.
Step A30: driving the physical robot to move to the close position along the follow path.
After the follow path is determined, the physical robot drives its locomotion device to move to the close position along the path. Upon reaching the close position, the robot continues to detect the user's location and determines a new close position and follow path, thereby following the user as the user moves through the exhibition. In this embodiment, the physical robot passively follows the user in the follow mode, so that when the user wants to visit at his or her own pace, the robot can follow intelligently.
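The close-position geometry of step A20 is a point on the robot-user line, one following distance short of the user. A minimal 2-D sketch, with the distance value and names assumed:

```python
import math

SECOND_PRESET_DISTANCE_M = 1.0  # assumed second preset distance (following distance)

def close_position(robot_pos, user_pos, keep=SECOND_PRESET_DISTANCE_M):
    """Point on the segment from robot_pos to user_pos that lies `keep`
    metres from the user (step A20). If the robot is already within
    `keep` metres, it stays where it is."""
    dx = user_pos[0] - robot_pos[0]
    dy = user_pos[1] - robot_pos[1]
    d = math.hypot(dx, dy)
    if d <= keep:
        return robot_pos
    t = (d - keep) / d  # fraction of the way toward the user
    return (robot_pos[0] + t * dx, robot_pos[1] + t * dy)
```

Recomputing this target each time a new user location is detected, as the text describes, yields the continuous following behaviour.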
Further, when the physical robot in the follow mode detects that it has entered the viewing range of an exhibition point, it can, either actively or after receiving a user instruction, obtain the point's explanation resource and control its voice device to output it, providing an explanation service for the user; or it can send a start instruction to the point's virtual robot so that the virtual robot presents content or interacts with the user, improving the user's exhibition-viewing experience.
Further, before step S10, the method also includes:
Step A40: when the physical robot is in a non-leading state, controlling the camera on the physical robot to perform face detection;
When a physical robot of the exhibition hall is in the non-leading state, it waits in line at the hall entrance or at a fixed position in the hall, and the robot at the head of the queue can control its onboard camera to perform face detection.
Step A50: when facial information of a user is detected, controlling the physical robot to output a preset prompt voice asking whether the user needs to be led;
When a user's facial information is detected, that is, when a face has been detected, the voice device on the physical robot is controlled to output a preset prompt voice asking whether the user needs to be led.
Step A60: after speech input by the user is recognized as a confirmation instruction confirming the leading, triggering the leading instruction to lead the user.
The physical robot controls its speech recognition device to record the user's voice and checks whether it contains a confirmation instruction confirming the leading. If a confirmation instruction is recognized, the leading instruction for this user is triggered, and the detected facial feature information of the user is saved as, and bound to, the user's identity information in order to distinguish this user from others. If no confirmation instruction is recognized, face detection continues.
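The confirmation step A60 reduces to keyword spotting on the recognised speech plus binding the detected face features to a new identity. The keyword list and data shapes below are illustrative assumptions:

```python
CONFIRM_KEYWORDS = ("yes", "please lead", "guide me")  # assumed confirmation phrases

def handle_confirmation(transcript, face_features, known_users):
    """Step A60 sketch: if the recognised speech confirms the leading,
    bind the detected face features to a fresh user identity and trigger
    the leading instruction; otherwise keep detecting faces."""
    if any(kw in transcript.lower() for kw in CONFIRM_KEYWORDS):
        user_id = "user-%d" % (len(known_users) + 1)
        known_users[user_id] = face_features  # distinguish from other users
        return ("lead", user_id)
    return ("keep_detecting", None)
```

A production system would match face features against existing identities before minting a new one; the sketch only shows the trigger-and-bind flow the text describes.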
In this embodiment, the physical robot intelligently recognizes visiting users and actively asks whether they need to be led, so that the exhibition hall need not station staff to operate the robot for users, making the hall more intelligent.
Further, after receiving an end instruction terminating the leading, the physical robot can plan a return path from its current location to the robot holding area, realizing automatic dispatch of physical robots. The end instruction can be triggered through the control interface on the robot or by voice when the user no longer needs to be led, or triggered automatically after the robot has led the user through all the exhibition points.
In addition, an embodiment of the present invention also proposes an intelligent exhibition hall leading device. Referring to Fig. 3, the intelligent exhibition hall leading device includes:
a control module 10, configured to control a physical robot of the exhibition hall to enter a leading state after detecting a leading instruction to lead a user;
a detection module 20, configured to detect a behavior state of the user in the leading state;
a switching module 30, configured to switch a leading mode of the physical robot according to the behavior state, where the leading mode includes a follow mode and a guidance mode.
Further, the behavior state includes the distance value between the user and the physical robot and the facial direction of the user, and the switching module 30 includes:
a detection unit, configured to detect, when the leading mode is the guidance mode, whether the distance value is greater than a first preset distance value and whether the facial direction is facing away from the physical robot;
a switching unit, configured to switch the leading mode to the follow mode when the distance value is greater than the first preset distance value and the facial direction is facing away from the physical robot.
Further, the intelligent exhibition hall leading device also includes:
a determining module, configured to determine, in the leading state when the leading mode is the guidance mode, a target exhibition point according to a preset guidance rule, and a guidance path from the current location of the physical robot to the target exhibition point;
a drive module, configured to drive the physical robot to move along the guidance path, guiding the user to the target exhibition point.
Further, the determining module includes:
an obtaining unit, configured to obtain interest feature data of the user;
a selection unit, configured to select, from the exhibition points of the hall, exhibition points matching the interest feature data;
a determination unit, configured to take, among the selected exhibition points, the one nearest to the current location that the user has not yet visited as the target exhibition point.
Further, the intelligent exhibition hall leading device also includes:
an explanation module, configured to obtain, after the physical robot reaches the target exhibition point, the explanation resource of the target exhibition point from a preset explanation resource library, and to control the physical robot to output the explanation resource by voice; or,
a sending module, configured to send a start instruction to the virtual robot of the target exhibition point, so that the virtual robot shows preset display content or interacts with the user according to the start instruction.
Further, the detection module 20 is also configured to detect, in the leading state when the leading mode is the follow mode, the user location of the user;
the determining module is also configured to determine a follow path from the current location of the physical robot to a close position, where the close position is at a second preset distance value from the user location;
the drive module is also configured to drive the physical robot to move to the close position along the follow path.
Further, the control module is also configured to control, when the physical robot is in a non-leading state, the camera on the physical robot to perform face detection, and, when facial information of a user is detected, to control the physical robot to output a preset prompt voice asking whether the user needs to be led;
the intelligent exhibition hall leading device also includes:
a trigger module, configured to trigger the leading instruction to lead the user after speech input by the user is recognized as a confirmation instruction confirming the leading.
The specific embodiments of the intelligent exhibition hall leading device of the present invention are substantially the same as the embodiments of the intelligent exhibition hall leading method described above and are not repeated here.
In addition, an embodiment of the present invention also proposes a computer-readable storage medium on which an intelligent exhibition hall leading program is stored; when the program is executed by a processor, the steps of the intelligent exhibition hall leading method described above are implemented.
The specific embodiments of the intelligent exhibition hall leading equipment and the computer-readable storage medium of the present invention are substantially the same as the embodiments of the intelligent exhibition hall leading method described above and are not repeated here.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or system that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or system. In the absence of further limitation, an element qualified by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or system that includes the element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, and of course also by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product stored in a storage medium as described above (such as ROM/RAM, magnetic disk, or optical disc), including several instructions for causing a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, etc.) to execute the methods described in the embodiments of the present invention.
The above is only a preferred embodiment of the present invention and is not intended to limit the scope of the invention. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (10)

1. An intelligent exhibition hall leading method, characterized in that the intelligent exhibition hall leading method comprises:
after detecting a leading instruction to lead a user, controlling a physical robot of the exhibition hall to enter a leading state;
detecting a behavior state of the user in the leading state;
switching a leading mode of the physical robot according to the behavior state, wherein the leading mode comprises a follow mode and a guidance mode.
2. The intelligent exhibition hall leading method according to claim 1, characterized in that the behavior state comprises the distance value between the user and the physical robot and the facial direction of the user, and the step of switching the leading mode of the physical robot according to the behavior state comprises:
when the leading mode is the guidance mode, detecting whether the distance value is greater than a first preset distance value, and detecting whether the facial direction is facing away from the physical robot;
when the distance value is greater than the first preset distance value and the facial direction is facing away from the physical robot, switching the leading mode to the follow mode.
3. The intelligent exhibition hall leading method according to claim 1, characterized in that the intelligent exhibition hall leading method further comprises:
in the leading state, when the leading mode is the guidance mode, determining a target exhibition point according to a preset guidance rule, and a guidance path from the current location of the physical robot to the target exhibition point;
driving the physical robot to move along the guidance path, guiding the user to the target exhibition point.
4. The intelligent exhibition hall leading method according to claim 3, characterized in that the step of determining a target exhibition point according to a preset guidance rule comprises:
obtaining interest feature data of the user;
selecting, from the exhibition points of the hall, exhibition points matching the interest feature data;
taking, among the selected exhibition points, the one nearest to the current location that the user has not yet visited as the target exhibition point.
5. The intelligent exhibition hall leading method according to claim 3, characterized in that, after the step of driving the physical robot to move along the guidance path and guiding the user to the target exhibition point, the method further comprises:
after the physical robot reaches the target exhibition point, obtaining the explanation resource of the target exhibition point from a preset explanation resource library, and controlling the physical robot to output the explanation resource by voice; or,
sending a start instruction to the virtual robot of the target exhibition point, so that the virtual robot shows preset display content or interacts with the user according to the start instruction.
6. The intelligent exhibition hall leading method according to claim 1, characterized in that the intelligent exhibition hall leading method further comprises:
in the leading state, when the leading mode is the follow mode, detecting the user location of the user;
determining a follow path from the current location of the physical robot to a close position, wherein the close position is at a second preset distance value from the user location;
driving the physical robot to move to the close position along the follow path.
7. The intelligent exhibition hall leading method according to any one of claims 1 to 6, characterized in that, before the step of entering the leading state after detecting the leading instruction to lead the user, the method further comprises:
when the physical robot is in a non-leading state, controlling the camera on the physical robot to perform face detection;
when facial information of a user is detected, controlling the physical robot to output a preset prompt voice asking whether the user needs to be led;
after speech input by the user is recognized as a confirmation instruction confirming the leading, triggering the leading instruction to lead the user.
8. An intelligent exhibition hall leading device, characterized in that the intelligent exhibition hall leading device comprises:
a control module, configured to control a physical robot of the exhibition hall to enter a leading state after detecting a leading instruction to lead a user;
a detection module, configured to detect a behavior state of the user in the leading state;
a switching module, configured to switch a leading mode of the physical robot according to the behavior state, wherein the leading mode comprises a follow mode and a guidance mode.
9. Intelligent exhibition hall leading equipment, characterized in that the intelligent exhibition hall leading equipment comprises a memory, a processor, and an intelligent exhibition hall leading program stored in the memory and runnable on the processor, wherein the intelligent exhibition hall leading program, when executed by the processor, implements the steps of the intelligent exhibition hall leading method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that an intelligent exhibition hall leading program is stored on the computer-readable storage medium, and the intelligent exhibition hall leading program, when executed by a processor, implements the steps of the intelligent exhibition hall leading method according to any one of claims 1 to 7.
CN201910709045.9A 2019-08-01 2019-08-01 Leading method, device, equipment and storage medium for intelligent exhibition hall Active CN110405767B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910709045.9A CN110405767B (en) 2019-08-01 2019-08-01 Leading method, device, equipment and storage medium for intelligent exhibition hall

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910709045.9A CN110405767B (en) 2019-08-01 2019-08-01 Leading method, device, equipment and storage medium for intelligent exhibition hall

Publications (2)

Publication Number Publication Date
CN110405767A true CN110405767A (en) 2019-11-05
CN110405767B CN110405767B (en) 2022-06-17

Family

ID=68365286

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910709045.9A Active CN110405767B (en) 2019-08-01 2019-08-01 Leading method, device, equipment and storage medium for intelligent exhibition hall

Country Status (1)

Country Link
CN (1) CN110405767B (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111202330A (en) * 2020-01-07 2020-05-29 灵动科技(北京)有限公司 Self-driven system and method
CN111324129A (en) * 2020-03-19 2020-06-23 中国建设银行股份有限公司 Navigation method and device based on face recognition
CN111582983A (en) * 2020-05-07 2020-08-25 悠尼客(上海)企业管理有限公司 Personalized control method based on face recognition and customer behaviors
CN111694353A (en) * 2020-05-14 2020-09-22 特斯联科技集团有限公司 Guidance control method and device, storage medium and service robot
CN111820822A (en) * 2020-07-30 2020-10-27 睿住科技有限公司 Sweeping robot, illuminating method thereof and computer readable storage medium
CN112104965A (en) * 2020-11-09 2020-12-18 北京声智科技有限公司 Sound amplification method and sound amplification system
CN112104964A (en) * 2020-11-18 2020-12-18 北京声智科技有限公司 Control method and control system of following type sound amplification robot
CN112486165A (en) * 2020-10-22 2021-03-12 深圳优地科技有限公司 Robot guiding method, device, equipment and computer readable storage medium
CN113469844A (en) * 2021-07-02 2021-10-01 柒久园艺科技(北京)有限公司 Distributed exhibition room environment monitoring method and device, electronic equipment and storage medium
CN114003027A (en) * 2020-07-14 2022-02-01 本田技研工业株式会社 Mobile object control device, mobile object control method, and storage medium
CN114027869A (en) * 2020-10-29 2022-02-11 武汉联影医疗科技有限公司 Moving method of ultrasonic imaging apparatus, and medium
CN114199268A (en) * 2021-12-10 2022-03-18 北京云迹科技股份有限公司 Robot navigation and guidance method and device based on voice prompt and guidance robot
CN114193477A (en) * 2021-12-24 2022-03-18 上海擎朗智能科技有限公司 Position leading method, device, robot and storage medium
CN114407024A (en) * 2022-03-15 2022-04-29 上海擎朗智能科技有限公司 Position leading method, device, robot and storage medium
CN115709468A (en) * 2022-11-16 2023-02-24 京东方科技集团股份有限公司 Guide control method and device, electronic equipment and readable storage medium
WO2023159591A1 (en) * 2022-02-28 2023-08-31 京东方科技集团股份有限公司 Intelligent explanation system and method for exhibition scene
JP7478393B2 (en) 2020-10-05 2024-05-07 学校法人早稲田大学 Autonomous mobile robot, and its control device and control program

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1982141A2 (en) * 2006-02-10 2008-10-22 LKT GmbH Device and method for following the movement of a tool of a handling unit
TWM465954U (en) * 2013-06-19 2013-11-21 Kun-Yang Lin Device with following displacement capability
CN103608741A (en) * 2011-06-13 2014-02-26 微软公司 Tracking and following of moving objects by a mobile robot
CN105136136A (en) * 2015-09-07 2015-12-09 广东欧珀移动通信有限公司 Navigation method and terminal
CN106426180A (en) * 2016-11-24 2017-02-22 深圳市旗瀚云技术有限公司 Robot capable of carrying out intelligent following based on face tracking
US20170368691A1 (en) * 2016-06-27 2017-12-28 Dilili Labs, Inc. Mobile Robot Navigation
CN108121359A (en) * 2016-11-29 2018-06-05 沈阳新松机器人自动化股份有限公司 Shopping robot
CN108733080A (en) * 2017-12-28 2018-11-02 北京猎户星空科技有限公司 State switching method and device
CN108748172A (en) * 2018-05-29 2018-11-06 塔米智能科技(北京)有限公司 Robot welcome method, apparatus, equipment and medium
CN108858242A (en) * 2018-08-13 2018-11-23 合肥市徽马信息科技有限公司 Lead-the-way museum explanation and tour-guide robot
CN109015676A (en) * 2018-08-13 2018-12-18 范文捷 Robot for accompanying elderly people on outdoor walks
CN109190478A (en) * 2018-08-03 2019-01-11 北京猎户星空科技有限公司 Method and device for switching target object during focus following, and electronic device
CN109366504A (en) * 2018-12-17 2019-02-22 广州天高软件科技有限公司 Intelligent exhibition service robot system
CN109571499A (en) * 2018-12-25 2019-04-05 广州天高软件科技有限公司 Intelligent navigation guide robot and implementation method thereof

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111202330A (en) * 2020-01-07 2020-05-29 灵动科技(北京)有限公司 Self-driven system and method
WO2021139684A1 (en) * 2020-01-07 2021-07-15 灵动科技(北京)有限公司 Self-driven system and method
CN111324129A (en) * 2020-03-19 2020-06-23 中国建设银行股份有限公司 Navigation method and device based on face recognition
CN111582983A (en) * 2020-05-07 2020-08-25 悠尼客(上海)企业管理有限公司 Personalized control method based on face recognition and customer behaviors
CN111694353A (en) * 2020-05-14 2020-09-22 特斯联科技集团有限公司 Guidance control method and device, storage medium and service robot
CN114003027A (en) * 2020-07-14 2022-02-01 本田技研工业株式会社 Mobile object control device, mobile object control method, and storage medium
CN111820822A (en) * 2020-07-30 2020-10-27 睿住科技有限公司 Sweeping robot, illuminating method thereof and computer readable storage medium
CN111820822B (en) * 2020-07-30 2022-03-08 广东睿住智能科技有限公司 Sweeping robot, illuminating method thereof and computer readable storage medium
JP7478393B2 (en) 2020-10-05 2024-05-07 学校法人早稲田大学 Autonomous mobile robot, and its control device and control program
CN112486165A (en) * 2020-10-22 2021-03-12 深圳优地科技有限公司 Robot guiding method, device, equipment and computer readable storage medium
CN114027869A (en) * 2020-10-29 2022-02-11 武汉联影医疗科技有限公司 Moving method of ultrasonic imaging apparatus, and medium
CN112104965A (en) * 2020-11-09 2020-12-18 北京声智科技有限公司 Sound amplification method and sound amplification system
CN112104964B (en) * 2020-11-18 2022-03-11 北京声智科技有限公司 Control method and control system of following type sound amplification robot
CN112104964A (en) * 2020-11-18 2020-12-18 北京声智科技有限公司 Control method and control system of following type sound amplification robot
CN113469844A (en) * 2021-07-02 2021-10-01 柒久园艺科技(北京)有限公司 Distributed exhibition room environment monitoring method and device, electronic equipment and storage medium
CN113469844B (en) * 2021-07-02 2023-09-05 柒久园艺科技(北京)有限公司 Distributed exhibition hall environment monitoring method and device, electronic equipment and storage medium
CN114199268A (en) * 2021-12-10 2022-03-18 北京云迹科技股份有限公司 Robot navigation and guidance method and device based on voice prompt and guidance robot
CN114193477A (en) * 2021-12-24 2022-03-18 上海擎朗智能科技有限公司 Position leading method, device, robot and storage medium
CN114193477B (en) * 2021-12-24 2024-06-21 上海擎朗智能科技有限公司 Position leading method, device, robot and storage medium
WO2023159591A1 (en) * 2022-02-28 2023-08-31 京东方科技集团股份有限公司 Intelligent explanation system and method for exhibition scene
CN114407024A (en) * 2022-03-15 2022-04-29 上海擎朗智能科技有限公司 Position leading method, device, robot and storage medium
CN114407024B (en) * 2022-03-15 2024-04-26 上海擎朗智能科技有限公司 Position leading method, device, robot and storage medium
CN115709468A (en) * 2022-11-16 2023-02-24 京东方科技集团股份有限公司 Guide control method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
CN110405767B (en) 2022-06-17

Similar Documents

Publication Publication Date Title
CN110405767A (en) Intelligent exhibition room leads method, apparatus, equipment and storage medium
KR102171935B1 (en) Method for providing interior service based virtual reality
KR101170686B1 (en) Method for Guide Service for Person using Moving Robot
EP1873623A2 (en) User interface providing apparatus and method for portable terminal having touchpad
CN104869304A (en) Method of displaying focus and electronic device applying the same
CN113168279A (en) Image display device and method
KR101212057B1 (en) System and method for providing tour information using tour behaviour pattern prediction model of tourists
US20160212591A1 (en) Exhibition guide apparatus, exhibition display apparatus, mobile terminal, and exhibition guide method
CN108007459A (en) Navigation implementation method and device in building
KR20160101605A (en) Gesture input processing method and electronic device supporting the same
CN112486165B (en) Robot guiding method, apparatus, device, and computer-readable storage medium
CN106919676A (en) Method, device, server and system for recommending places in a map
CN117579791B (en) Information display system with image capturing function and information display method
CN109489182A (en) Swing-wind control method and apparatus for an air conditioning device, and air conditioning device
KR20210075484A (en) Method for inspecting facility and user terminal performing the same
CN117631907B (en) Information display apparatus having image pickup module and information display method
US10830593B2 (en) Cognitive fingerprinting for indoor location sensor networks
US20210303648A1 (en) Recommendation system and recommendation method
CN112396997B (en) Intelligent interactive system for shadow sand table
CN114012740A (en) Target location leading method and device based on robot and robot
CN110220530A (en) Navigation method and device, computer-readable storage medium and electronic equipment
KR102243576B1 (en) AR based guide service for exhibition
KR20220088262A (en) Tourist attraction guide method and apparatus using AR and VR images, and system therefor
TWM542190U (en) Augmented reality cloud and smart service searching system
CN111272133B (en) Four-wheel positioning control method and system, mobile terminal and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant