CN114131626A - Robot, service system and method - Google Patents
- Publication number: CN114131626A (application CN202111499432.8A)
- Authority: CN (China)
- Prior art keywords: data, module, service, robot, information
- Legal status: Pending
Classifications
- B25J11/00—Manipulators not otherwise provided for
- B25J11/0005—Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
- H04L67/12—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
Abstract
The invention discloses a robot, a service system and a service method. The robot comprises a control module and, connected to the control module, a robot body, a mapping and navigation module, an information acquisition module, an interaction module and a data transceiver module; the control module comprises a processor and a memory in which transaction data are stored. By applying robotics, digitalization and related technologies, the invention can effectively solve many pain points in this work. Using the robot for explanation and teaching, first, frees workers from tedious tasks; second, the novel form more readily arouses people's enthusiasm for learning; third, the robot provides a wealth of learning materials and is convenient to interact with; and finally, the robot's explanation, reception and inquiry data can be summarized and counted, facilitating data analysis.
Description
Technical Field
The invention relates to the technical field of intelligent service work, and in particular to a robot, a service system and a service method.
Background
There are a number of problems in organizing service activities. Leaders cannot learn the development situation, service feedback or various data in real time. Workers face numerous, tedious tasks; many staff serve part-time; and new workers do not yet know the business well. Grass-roots personnel generally show low enthusiasm and do not actively seek out the latest information. The general public finds the information hard to understand and lacks a suitable way to learn.
In the prior art, patent CN201921351238.3 discloses a display device comprising a frame body, a display window on the frame body, a writing desk and a tablet computer for display, integrating functions such as learning, form printing and form-filling guidance. It is a pure information display device without interaction, explanation, teaching or digital display capability. Patent CN201921471101.1 discloses an information display device comprising a back plate fixed to a wall surface, an information display plate fixed to the front of the back plate, and a date display component fixed to the front of the back plate; it is likewise a pure information display device without interaction, explanation, teaching or digital display capability. Patent CN202022523183.9 discloses a basic audio-visual device comprising an audio-visual device body with a display screen at its upper end and control keys below the display screen. This device, similar to an intelligent question-answering speaker, lacks mobile explanation and digital display capability.
Disclosure of Invention
Purpose of the invention: to overcome the defects of the prior art, the invention provides a robot, a service system and a service method that can move while explaining and displaying information, offer rich interaction modes and serve multiple functions.
The technical scheme is as follows: to achieve the above object, the robot of the present invention comprises a control module and, connected to the control module:
a robot body that is movable;
the mapping and navigation module is used for acquiring environmental data and the operation data of the robot body;
the information acquisition module is used for acquiring video, image and sound information;
the interaction module is used for carrying out information interaction with people; and
the data transceiving module is used for carrying out data transceiving interaction with an external system or a terminal;
the control module comprises a processor and a memory, wherein transaction data are stored in the memory.
Further, the mapping and navigation module comprises a laser radar, an ultrasonic sensor, an IMU and an odometer.
Furthermore, the information acquisition module comprises a video stream acquisition camera and a user information acquisition module.
Furthermore, the user information acquisition module comprises a face recognition camera and a voice recognition module.
Further, the interaction module comprises a microphone, a loudspeaker and a display screen.
The service system comprises the robot, a master control center and a plurality of electronic large screens, wherein the master control center can be in wireless communication with the robot, and the robot can be in linkage with the electronic large screens.
The service method is based on the service system and is implemented by a control module of the robot, and comprises the following steps:
collecting information of participating users through the information collection module;
receiving a service trigger signal, wherein the service trigger signal takes the form of a remote control instruction sent by the master control center or an instruction triggered by a user through the interaction module;
executing a service task corresponding to the service trigger signal according to the service trigger signal; the service tasks comprise a mobile explanation service, a consultation service, a content display service, a ceremony service and a reception service, wherein the mobile explanation service is a service of walking along a preset route while explaining, and the content display service is a service of outputting and displaying content in combination with the electronic large screen;
and generating the service information of the service task and archiving or synchronizing the service information to the cloud.
Further, when the service task is the mobile explanation service, the executing the service task corresponding to the service trigger signal specifically includes:
acquiring operation data through the mapping and navigation module;
controlling the robot body to navigate along the preset route according to the operation data and an environment map;
and performing a stationary explanation, a moving explanation, or movement along with a user according to the real-time position of the robot body.
Further, the method further comprises:
counting business data in a set time period, wherein the business data comprises question and answer data, reception data, the total number of active participants and learning data of each person;
and sending the service data to the master control center through the data transceiver module.
Further, the method further comprises:
collecting the identity data of the user through the information acquisition module;
inquiring user profile data according to the identity data;
composing information content according to the learning progress of the user in the user profile data;
and displaying the information content to the user through the interaction module and/or the electronic large screen while giving the corresponding explanation.
Beneficial effects: by applying robotics, digitalization and related technologies, the invention can effectively solve many pain points in this work. Using the robot for explanation and teaching, first, frees workers from tedious tasks; second, the novel form more readily arouses people's enthusiasm for learning; third, the robot provides a wealth of learning materials and is convenient to interact with; and finally, the robot's explanation, reception and inquiry data can be summarized and counted, facilitating data analysis.
Drawings
FIG. 1 is a schematic view of a robot;
FIG. 2 is a schematic diagram of the service system;
fig. 3 is a flow chart of a service method.
In the figure: 1-a robot; 11-a control module; 111-a processor; 112-a memory; 12-a robot body; 13-map building navigation module; 14-an information acquisition module; 141-video stream capturing camera; 15-an interaction module; 16-a data transceiver module; 2-a master control center; 3-electronic large screen.
Detailed Description
The present invention will be further described with reference to the accompanying drawings.
The robot 1 shown in fig. 1 includes a control module 11, and a robot body 12, a mapping and navigation module 13, an information acquisition module 14, an interaction module 15 and a data transceiver module 16 connected to the control module 11. The robot 1 can be used in exhibition halls, organization centers, community service workstations and the like.
The robot body 12 includes a plurality of driven traveling wheels and is movable. The mapping and navigation module 13 is configured to acquire environmental data and the operation data of the robot body 12; specifically, it includes a laser radar, an ultrasonic sensor, an IMU and an odometer. The control module 11 can construct an environment map from the data acquired by the mapping and navigation module 13 and control the robot body 12 to navigate based on the constructed map.
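As a rough illustration of what the control module might do with the odometer and IMU readings, the following sketch integrates them into a pose estimate by dead reckoning. This is a simplifying assumption: the patent does not specify the fusion algorithm, and all function names here are illustrative.

```python
import math

def update_pose(x, y, theta, distance, yaw_rate, dt):
    """One dead-reckoning step: advance the pose by the odometer distance
    along a heading updated from the IMU yaw rate. Hypothetical helper."""
    theta_new = theta + yaw_rate * dt           # integrate IMU yaw rate
    x_new = x + distance * math.cos(theta_new)  # advance along new heading
    y_new = y + distance * math.sin(theta_new)
    return x_new, y_new, theta_new

# Straight-line motion: heading stays 0, x advances by the odometer distance.
pose = (0.0, 0.0, 0.0)
for _ in range(5):
    pose = update_pose(*pose, distance=0.1, yaw_rate=0.0, dt=0.1)
```

In a real system this estimate would be corrected against the lidar map; the sketch only shows the raw integration step.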
The information acquisition module 14 is used for acquiring information such as video, images and sound. Specifically, it includes a video stream capture camera 141 and a user information collection module. The video stream capture camera 141 can record live video stream data for later processing, sorting and archiving. The user information collection module collects users' data and can include a face recognition camera, which captures face images for recognition to obtain user data. The face recognition camera is generally suitable when the number of users is small. When the user group is large, the user information collection module can further include a tag reading module: users wear smart badges carrying electronic tags, and by scanning the surrounding electronic tags the robot can count the number of participants and compile a user list. In addition, the user information collection module can include a voice recognition module, which can capture the information of questioners or respondents during question-and-answer sessions and, when on-site discipline is poor, extract a list of users violating discipline from the collected voice data.
The interaction module 15 is used for information interaction with people and comprises a microphone, a loudspeaker and a display screen. The microphone collects voice information and can serve as the data collection unit of the voice recognition module; the loudspeaker plays audio information; and the display screen presents image information to the user.
The data transceiver module 16 is used for data transceiving interaction with an external system or terminal.
The control module 11 includes a processor 111 and a memory 112, and transaction data are stored in the memory 112.
The invention also provides a service system comprising the robot 1, a master control center 2 and a plurality of electronic large screens 3, where the master control center 2 can communicate wirelessly with the robot 1 and the robot 1 can operate in linkage with the electronic large screens 3. The electronic large screens 3 display statistical information or explanation content; the robot 1 can communicate directly with an electronic large screen 3 to push images to it, and the master control center 2 can also schedule the robot 1 and the electronic large screens 3 in a unified way so that the content the robot 1 explains stays synchronized with what the screens display.
In the above service system there are multiple robots 1, distributed across exhibition halls and community service workstations nationwide. The master control center 2 can compile statistics on activity everywhere from the information each robot 1 feeds back, obtaining figures such as the number of robots online, the regions online, total question-and-answer data, total reception data and the total number of activity participants. The master control center 2 is equipped with a data display screen on which it can show these statistics. It is also responsible for issuing the latest transaction data (such as the outline of the latest conference content).
In addition, the service system further includes a wearable user unit, which may be a smart badge containing an electronic tag. The robot 1 can compile a list of people on site by scanning nearby electronic tags, determine from continuous scan data whether each user remains present, and count each user's learning duration.
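The presence tracking described above could be sketched as follows, assuming the badges are scanned at a fixed interval. The interval, badge IDs and data shapes are assumptions not given in the patent.

```python
SCAN_PERIOD_S = 30  # assumed scan interval; the patent does not specify one

def learning_durations(scan_rounds):
    """Estimate each badge wearer's learning duration from repeated scans.

    scan_rounds: list of sets, one per scan, each holding the badge IDs
    seen in that round. A wearer accrues one scan period for every round
    in which their badge appears. Sketch only.
    """
    seconds = {}
    for round_ids in scan_rounds:
        for badge_id in round_ids:
            seconds[badge_id] = seconds.get(badge_id, 0) + SCAN_PERIOD_S
    return seconds

# A01 is present in all three scans, A02 misses the middle one.
rounds = [{"A01", "A02"}, {"A01"}, {"A01", "A02"}]
durations = learning_durations(rounds)
```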
Preferably, the service system further includes a user terminal, where the user terminal may be in the form of a mobile phone or the like, and a user may interact with the robot 1 through the user terminal.
The invention also provides a service method, based on the above service system and implemented by the control module 11 of the robot 1, comprising the following steps S401-S404 (the step numbers do not limit the execution order):
step S401, collecting information of participating users through the information collection module;
in this step, the user information acquisition module is used to acquire the information of the user. When the user is a single person, the control module 11 acquires the information of the user through the face recognition camera; when the participating users are multiple persons and all persons, the control module 11 scans the electronic tags of the surrounding intelligent badges through the tag reading module, counts the number of the participating persons, and obtains the list of the participating persons. When the participating users are multiple people and include common people and the interaction of the activity content is more, the control module 11 displays the two-dimensional code through the display screen and establishes an online conference room, all participating users scan the two-dimensional code on the display screen through the user terminals in hands to establish connection with the robot 1, user information is input through the user terminals after code scanning, the participating users join the online conference room, and the control module 11 counts the number of the people who scan the code and the information of the participating users.
Step S402, receiving a service trigger signal, where the service trigger signal takes the form of a remote control instruction sent by the master control center 2 or an instruction triggered by a user through the interaction module 15;
step S403, according to the service trigger signal, executing a service task corresponding to the service trigger signal; the business tasks comprise mobile explanation business, consultation business, content display business, ceremony business and reception business, wherein the mobile explanation business is a service for walking along a preset route and explaining, and the content display business is a content output and display service combining the electronic large screen 3;
in this step, the mobile explanation service is generally used in an exhibition hall, and a plurality of explanation routes can be preset by a worker, and a stop point and corresponding explanation contents are set, and at this time, a plurality of electronic large screens 3 can be distributed at each position of the exhibition hall to assist the display robot 1 in explaining the contents. The counseling service and the contents presentation service can be used in various scenes. The ceremonial service may be in the form of an affidavit or the like. The reception service is generally used in an activity reception scene, and the robot 1 counts data such as activity reception information, human traffic and interaction times during reception.
When the service task includes an interactive segment, the control module 11 can interact with participants through the interaction module 15. When interacting through the interaction module 15 is difficult (for example, there are many participants, the site is noisy, and many participants are far from the robot 1), users can enter the online meeting room by scanning the code as described above and, through their user terminals, raise questions (as typed text or voice input) or answer questions posed by the robot 1. The robot 1 retrieves the questions raised by participants from the online meeting room, searches the memory 112 for related content, and outputs that content through the interaction module 15 and/or the electronic large screen 3.
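The question-handling loop just described (fetch a question from the online meeting room, search stored content, output an answer) might look like the following toy keyword lookup. Real retrieval from the memory 112 would be far richer; every name and string here is an assumption for illustration.

```python
def answer_question(question, knowledge_base):
    """Tiny stand-in for the robot's content lookup in step S403.

    knowledge_base maps keywords to stored answer content. The first
    keyword found among the question's words wins; otherwise the
    question is deferred to a human.
    """
    words = question.lower().split()
    for keyword, content in knowledge_base.items():
        if keyword in words:
            return content
    return "Let me forward this question to a staff member."

kb = {"route": "The visit route starts at Hall 1.", "hours": "Open 9:00-17:00."}
reply = answer_question("What is the route today?", kb)
```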
Step S404, generating the service information of the service task and archiving or synchronizing the service information to the cloud.
In this step, the service information includes data such as the number of participating persons, a list of persons, the type of service task, time, location, and interaction information (question and answer between the robot 1 and participating users).
When the service task is the mobile explanation service, executing the service task corresponding to the service trigger signal in step S403 specifically includes the following steps S501-S503:
step S501, collecting operation data through the map building navigation module 13;
in this step, the operation data includes environmental data (obstacles, pedestrians, environmental profile information, etc.) and robot operation data (odometer data and IMU data).
Step S502, controlling the robot body 12 to navigate along the preset route according to the running data and the environment map;
in this step, the environment map is an application environment map constructed by the manager operating the robot 1 to move and collect environment data, and the environment map is stored in the memory 112, and the environment map is a static map on which the robot autonomously navigates.
Step S503, performing a stationary explanation, a moving explanation, or movement along with the user according to the real-time position of the robot body 12.
In this step, preferably, both the smart badges and the robot 1 carry a positioning module, and each smart badge has a vibration module. The control module 11 can obtain the positions of all smart badges and of the robot 1, generate their distribution map, determine from the positions which badges lie outside a circle centered on the robot 1, and send those badges an instruction to vibrate, reminding users who have fallen behind to catch up with the group.
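The straggler reminder can be sketched as a distance test against a circle centered on the robot. The radius, coordinate units and the vibration command are all assumptions; the patent specifies none of them.

```python
import math

FOLLOW_RADIUS_M = 10.0  # assumed follow-up radius; not specified in the patent

def stragglers(robot_pos, badge_positions, radius=FOLLOW_RADIUS_M):
    """Return badge IDs lying outside a circle centred on the robot (S503).

    badge_positions: {badge_id: (x, y)} from the badges' positioning
    modules. The actual vibration command is only noted in a comment.
    """
    rx, ry = robot_pos
    out = []
    for badge_id, (bx, by) in badge_positions.items():
        if math.hypot(bx - rx, by - ry) > radius:
            out.append(badge_id)  # would trigger this badge's vibration module
    return sorted(out)

# A01 is 5 m away (inside the circle); A02 is 30 m away (a straggler).
behind = stragglers((0.0, 0.0), {"A01": (3.0, 4.0), "A02": (30.0, 0.0)})
```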
Preferably, the method further comprises the following steps S601-S602:
step S601, counting business data in a set time period, wherein the business data comprises question and answer data, reception data, the total number of people participating in activities and learning data of all people;
in this step, the service data is a summary of all service information in a set time period.
Step S602, sending the business data to the master control center 2 through the data transceiver module 16.
In this step, after receiving the business data, the master control center 2 can aggregate the data of all associated robots 1, compute statistics for the set time period such as the number of robots online, the regions online, total question-and-answer data, total reception data and the total number of activity participants, and display them on the data display screen.
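A minimal sketch of the center's aggregation step follows; the per-robot report fields are illustrative names matching the statistics the description lists, not a real schema from the patent.

```python
def summarize(robot_reports):
    """Aggregate per-robot business data into the center's statistics.

    robot_reports: list of dicts, one per robot, with illustrative
    per-period counts. Returns the totals the data display screen shows.
    """
    totals = {"online_robots": len(robot_reports), "qa_total": 0,
              "reception_total": 0, "participants_total": 0, "regions": set()}
    for report in robot_reports:
        totals["qa_total"] += report["qa"]
        totals["reception_total"] += report["reception"]
        totals["participants_total"] += report["participants"]
        totals["regions"].add(report["region"])
    return totals

stats = summarize([
    {"qa": 12, "reception": 3, "participants": 40, "region": "east"},
    {"qa": 5, "reception": 1, "participants": 15, "region": "west"},
])
```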
Preferably, the method further comprises the following steps S701-S704:
step S701, the identity data of the user is adopted through the information acquisition module 14;
step S702, inquiring user profile data according to the identity data;
step S703, making information content according to the learning progress of the user in the user profile data;
step S704, the information content is displayed to the user through the interactive module 15 and/or the electronic large screen 3 and is correspondingly explained.
Through these steps, customized content can be displayed according to each person's situation, providing a high degree of intelligence.
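One simple way such progress-based content selection could work is to present the first item of a curriculum the user has not yet completed. The profile schema and lesson names below are assumptions for illustration; the patent does not define them.

```python
LESSONS = ["intro", "history", "policy", "practice"]  # illustrative curriculum

def next_content(profile):
    """Choose what to present based on a user's learning progress (S701-S704).

    profile: {"completed": [...]} from the user profile data. The first
    lesson not yet completed is shown next; a finished user gets a recap.
    """
    done = set(profile.get("completed", []))
    for lesson in LESSONS:
        if lesson not in done:
            return lesson
    return "review"  # everything finished: offer a recap

content = next_content({"completed": ["intro", "history"]})
```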
The robot, service system and service method above combine robotics, digitalization and related technologies and can effectively solve many pain points in this work. Using the robot for explanation and teaching, first, frees workers from tedious tasks; second, the novel form more readily arouses people's enthusiasm for learning; third, the robot provides massive learning materials with which the public can interact by asking questions or touching the screen, simple and easy to operate; and finally, the robot's explanation, reception and inquiry data can be summarized and counted, facilitating data analysis.
The above description is only of the preferred embodiments of the present invention, and it should be noted that: it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the invention and these are intended to be within the scope of the invention.
Claims (10)
1. A robot, characterized in that it comprises a control module and, connected to the control module:
a robot body that is movable;
the mapping and navigation module is used for acquiring environmental data and the operation data of the robot body;
the information acquisition module is used for acquiring video, image and sound information;
the interaction module is used for carrying out information interaction with people; and
the data transceiving module is used for carrying out data transceiving interaction with an external system or a terminal;
the control module comprises a processor and a memory, wherein transaction data are stored in the memory.
2. The robot of claim 1, wherein the mapping and navigation module includes a laser radar, an ultrasonic sensor, an IMU and an odometer.
3. The robot of claim 1, wherein the information collection module comprises a video stream collection camera and a user information collection module.
4. The robot of claim 3, wherein the user information collection module comprises a face recognition camera and a voice recognition module.
5. A robot as claimed in claim 1, wherein the interaction module comprises a microphone, a loudspeaker and a display screen.
6. A service system, characterized by comprising the robot according to any one of claims 1 to 5, a master control center and a plurality of electronic large screens, wherein the master control center can communicate wirelessly with the robot, and the robot can operate in linkage with the electronic large screens.
7. A service method based on the service system according to claim 6, characterized in that it comprises:
collecting information of participating users through the information collection module;
receiving a service trigger signal; the form of the service triggering signal is a remote control instruction sent by the master control center or an instruction triggered by a user through the interaction module;
executing a service task corresponding to the service trigger signal according to the service trigger signal; the service tasks comprise a mobile explanation service, a consultation service, a content display service, a ceremony service and a reception service, wherein the mobile explanation service is a service of walking along a preset route while explaining, and the content display service is a service of outputting and displaying content in combination with the electronic large screen;
and generating the service information of the service task and archiving or synchronizing the service information to the cloud.
8. The service method according to claim 7, wherein, when the service task is the mobile explanation service, the executing the service task corresponding to the service trigger signal specifically comprises:
acquiring operation data through the mapping and navigation module;
controlling the robot body to navigate along the preset route according to the operation data and an environment map;
and performing a stationary explanation, a moving explanation, or movement along with a user according to the real-time position of the robot body.
9. The service method of claim 7, wherein the method further comprises:
counting business data in a set time period, wherein the business data comprises question and answer data, reception data, the total number of active participants and learning data of each person;
and sending the service data to the master control center through the data transceiver module.
10. The service method of claim 7, wherein the method further comprises:
collecting the identity data of the user through the information acquisition module;
inquiring user profile data according to the identity data;
composing information content according to the learning progress of the user in the user profile data;
and displaying the information content to the user through the interaction module and/or the electronic large screen while giving the corresponding explanation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111499432.8A CN114131626A (en) | 2021-12-09 | 2021-12-09 | Robot, service system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114131626A true CN114131626A (en) | 2022-03-04 |
Family
ID=80385404
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111499432.8A Pending CN114131626A (en) | 2021-12-09 | 2021-12-09 | Robot, service system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114131626A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008106832A1 (en) * | 2007-03-07 | 2008-09-12 | Jianrong Xu | Intelligent tour guide interpreting system and the working method thereof |
CN103699126A (en) * | 2013-12-23 | 2014-04-02 | 中国矿业大学 | Intelligent tour guide robot |
CN104967567A (en) * | 2015-04-24 | 2015-10-07 | 山大地纬软件股份有限公司 | Intelligent social insurance business consultation terminal and working method thereof |
CN108818569A (en) * | 2018-07-30 | 2018-11-16 | 浙江工业大学 | Intelligent robot system towards public service scene |
CN109571499A (en) * | 2018-12-25 | 2019-04-05 | 广州天高软件科技有限公司 | A kind of intelligent navigation leads robot and its implementation |
WO2019157633A1 (en) * | 2018-02-13 | 2019-08-22 | Nec Hong Kong Limited | Intelligent service terminal and platform system and methods thereof |
CN113370229A (en) * | 2021-06-08 | 2021-09-10 | 山东新一代信息产业技术研究院有限公司 | Exhibition hall intelligent explanation robot and implementation method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |