WO2020027406A1 - Mobile robot with artificial intelligence - Google Patents

Mobile robot with artificial intelligence

Info

Publication number
WO2020027406A1
Authority
WO
WIPO (PCT)
Prior art keywords
voice
mobile robot
feedback
input
unit
Prior art date
Application number
PCT/KR2019/005462
Other languages
English (en)
Korean (ko)
Inventor
조민규
Original Assignee
엘지전자 주식회사 (LG Electronics Inc.)
Priority date
Filing date
Publication date
Priority claimed from KR1020180140743A (patent KR102314538B1)
Application filed by 엘지전자 주식회사 (LG Electronics Inc.)
Priority to CN201980065085.2A (patent CN112805128A)
Priority to EP19844736.9A (patent EP3831548B1)
Publication of WO2020027406A1



Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 11/00: Manipulators not otherwise provided for
    • B25J 13/00: Controls for manipulators
    • B25J 19/00: Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J 19/06: Safety devices
    • B25J 9/00: Programme-controlled manipulators
    • B25J 9/16: Programme controls

Definitions

  • the present invention relates to a mobile robot and a control method thereof, and more particularly, to a mobile robot capable of providing information and services based on trained artificial intelligence, and a control method thereof.
  • Robots were developed for industrial use and have formed part of factory automation. Recently, the field of application of robots has expanded further: medical robots, aerospace robots, and the like have been developed, and home robots usable in ordinary homes have also been produced. Among these robots, a robot capable of traveling by itself is called a mobile robot.
  • a representative example of a mobile robot used at home is a robot cleaner, which is a device that cleans a corresponding area by suctioning dust or foreign matter while traveling around a certain area by itself.
  • the mobile robot, being capable of moving by itself, is free to move, and is provided with a plurality of sensors for avoiding obstacles and the like while traveling, so that it can travel while avoiding obstacles.
  • voice recognition technology is being applied to various devices, and research on methods of controlling a mobile robot using voice recognition technology is increasing.
  • prior art document 1 (Korean Patent Publication No. 10-2012-0114670, published on October 17, 2012) discloses a robot cleaner that has a speech recognition unit and recognizes a user's speech signal to execute a corresponding control command.
  • in the prior art, however, voice input is unidirectional from the user to the robot cleaner, so it remains merely an additional control input means alongside pressing a button or operating a remote controller. Accordingly, there is a problem in that the voice recognition function can offer the user little more than simple control, and cannot provide functions and services beyond serving as an additional control input means.
  • an object of the present invention is to provide an electronic device, such as a mobile robot, capable of interacting with a user through voice, going beyond using voice recognition merely as one control input means, and a method of controlling the same.
  • An object of the present invention is to vary the voice output of an electronic device such as a mobile robot according to usage time, frequency, pattern, and the like, thereby providing the user with anticipation and fun and improving the reliability of and preference for the product.
  • An object of the present invention is to provide an electronic device such as a mobile robot that can provide a variety of information and services to the user.
  • an electronic device such as a mobile robot according to an aspect of the present invention may utter a voice guiding predetermined information or a service to a user, thereby communicating and interacting with the user through voice.
  • an electronic device such as a mobile robot according to an aspect of the present invention provides the user with voice feedback in which the voice, tone, and the like of the mobile robot vary according to usage time, frequency, pattern, and the like, thereby providing anticipation and fun and improving the reliability of and preference for the product.
  • an electronic device such as a mobile robot according to an aspect of the present invention includes: an input unit for receiving a user's voice input; an audio output unit for outputting a feedback voice message corresponding to the voice input; a storage unit for storing the usage history of the mobile robot; and a control unit for controlling the feedback voice message to be output in a different voice according to the stored usage history, so that different voice feedback can be provided according to the usage history.
  • the controller selects the voice according to the usage time of the mobile robot, the task execution success rate of the mobile robot, or the number of successful task executions, so that, just by hearing the voice of the mobile robot, the user can intuitively grasp how long the mobile robot has been used, how often tasks are performed, or whether tasks have succeeded.
  • the controller may select, as the voice for outputting the feedback voice message, a voice set corresponding to the mission level reached by the usage history of the mobile robot among a plurality of preset mission levels.
  • the preset plurality of mission levels may each include an execution-frequency condition for a predetermined task.
  • the controller may select a voice of a higher age as the voice for outputting the feedback voice message as the mission level reached by the usage history of the mobile robot becomes higher. Accordingly, it is possible to provide voice feedback that grows according to the degree of mission achievement, as in the sketch below.
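  • A minimal sketch of how such mission-level voice selection could work, assuming levels defined by task execution counts; the thresholds and voice-set names are illustrative assumptions, not values from the patent:

```python
# Illustrative sketch: mission levels defined by an execution-count
# condition, each mapped to a voice set of increasing age.
MISSION_LEVELS = [  # (required task executions, voice set) - assumed values
    (0, "child_voice"),
    (50, "teen_voice"),
    (200, "adult_voice"),
]

def select_voice(task_execution_count: int) -> str:
    """Return the voice set for the highest mission level already reached."""
    voice = MISSION_LEVELS[0][1]
    for threshold, candidate in MISSION_LEVELS:
        if task_execution_count >= threshold:
            voice = candidate  # higher mission level -> voice of a higher age
    return voice

print(select_voice(120))  # -> "teen_voice"
```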
  • the control unit may provide different guide messages according to the usage history by varying the content of the feedback voice message for the same voice input according to the usage history of the mobile robot.
  • the controller may select the voice according to the execution frequency of a specific task of the mobile robot, thereby providing voice feedback that changes based on the usage history of that specific task.
  • the control unit may select a first voice if the execution frequency of a first task is greater than or equal to a reference value, and a second voice if the execution frequency of a second task is greater than or equal to the reference value.
  • the controller may select the voice according to a voice recognition result of the voice input, and may select the voice according to the usage history of a task included in the voice recognition result.
  • the mobile robot may perform the speech recognition process by itself or through a server.
  • an electronic device such as a mobile robot according to another aspect of the present invention includes an input unit for receiving a user's voice input, an audio output unit for outputting a feedback voice message corresponding to the voice input, and a controller configured to output the feedback voice message in a different voice according to a voice recognition result of the voice input, thereby providing different voice feedback according to the user's voice input.
  • an electronic device such as a mobile robot may include a storage unit in which a usage history is stored; the voice may be selected according to the usage history of a task included in the voice recognition result, and the content of the feedback voice message for the same voice input may be controlled according to the usage history.
  • the control unit may provide growing voice feedback by selecting a voice of a higher age as the voice for outputting the feedback voice message as the execution frequency of tasks included in the voice recognition result increases.
  • an electronic device such as a mobile robot according to another aspect of the present invention includes: an input unit for receiving a user's voice input; an audio output unit for outputting a feedback voice message corresponding to the voice input; a storage unit for storing a usage history; and a controller for controlling the feedback voice message to be output according to the learning level of the stored usage history, thereby providing different voice feedback according to the learning level of the artificial intelligence.
  • the control unit may provide voice feedback that grows gradually according to the learning level by selecting a voice of a higher age as the voice for outputting the feedback voice message as the learning level of the stored usage history becomes higher.
  • an electronic device such as a mobile robot may speak to the user by voice and may communicate and interact with the user through voice.
  • an evolving user experience may be provided.
  • the speech recognition may be performed by the electronic device such as a mobile robot by itself, by a server, or in stages by both, thereby enabling effective speech recognition.
  • electronic devices such as mobile robots may actively provide information and recommend services, functions, and the like, thereby increasing the user's reliability in, preference for, and utilization of the product.
  • FIG. 1 is a block diagram of a home appliance network system according to an embodiment of the present invention.
  • FIG. 2 is a perspective view showing a mobile robot according to an embodiment of the present invention.
  • FIG. 3 is a plan view of the mobile robot of FIG. 2.
  • FIG. 4 is a side view of the mobile robot of FIG. 2.
  • FIG. 5 is a block diagram showing a control relationship between major components of a mobile robot according to an embodiment of the present invention.
  • FIG. 6 is a view referred to for describing learning using product data according to an embodiment of the present invention.
  • FIG. 7 is an example of a simplified internal block diagram of a server according to an embodiment of the present invention.
  • FIG. 8 is a flowchart illustrating a control method of a mobile robot according to an embodiment of the present invention.
  • FIGS. 9 to 11 are views referred to for describing the control method of the mobile robot according to an embodiment of the present invention.
  • the suffixes "module" and "unit" for components used in the following description are given merely for ease of preparation of this specification and do not in themselves carry a particular meaning or role. Therefore, "module" and "unit" may be used interchangeably.
  • An electronic device is a device that enables voice interaction, such as voice recognition and voice utterance, and may correspond to various devices.
  • an electronic device may include an air conditioner (see 11 in FIG. 1), a mobile robot (see 22 in FIG. 1 and 100 in FIG. 2, etc.), a refrigerator (see 31 in FIG. 1), a washing machine (see 32 in FIG. 1), and the like.
  • the mobile robot 100 means a robot that can move by itself using wheels or the like, and may be a home helper robot, a robot cleaner, or the like.
  • FIG. 1 is a block diagram of a home appliance network system according to an embodiment of the present invention.
  • a home appliance network system may include a home appliance including a communication module, which may communicate with another device, the server 70, or connect to a network.
  • the home appliance may correspond to an air conditioner 10 having a communication module, a cleaner 20, a refrigerator 31, a washing machine 32, and the like.
  • the air conditioner 10 may include at least one of an air conditioner 11, an air cleaner 12 and 13, a humidifier 14, and a hood 15.
  • the cleaner 20 may be a vacuum cleaner 21, a robot cleaner 22, or the like.
  • the communication module included in the home appliances 10, 20, 31, and 32 may be a Wi-Fi communication module, but the present invention is not limited to a particular communication method.
  • the home appliances 10, 20, 31, and 32 may include other types of communication modules or may include a plurality of communication modules.
  • the home appliances 10, 20, 31, and 32 may include an NFC module, a Zigbee communication module, a Bluetooth™ communication module, and the like.
  • the home appliances 10, 20, 31, and 32 may be connected to a predetermined server 70 through a Wi-Fi communication module, and may support smart functions such as remote monitoring and remote control.
  • the home appliance network system may include a mobile terminal 50 such as a smart phone and a tablet PC.
  • the user may check information on the home appliances 10, 20, 31, and 32 in the home appliance network system or control the home appliances 10, 20, 31, and 32 through the portable terminal 50.
  • the home appliance network system may include a plurality of Internet of Things (IoT) devices (not shown).
  • the home appliance network system may include home appliances 10, 20, 31, and 32, portable terminal 50, and Internet of Things (IoT) devices.
  • the home appliance network system is not limited to a communication scheme constituting a network.
  • the home appliances 10, 20, 31, and 32, the portable terminal 50, and the Internet of Things (IoT) devices may be communicatively connected through the wired/wireless router 60.
  • devices in the home appliance network system may form a mesh topology in which they communicate with one another individually.
  • the home appliances 10, 20, 31, and 32 in the home appliance network system may communicate with the server 70 or the mobile terminal 50 via the wired/wireless router 60.
  • the home appliances 10, 20, 31, and 32 in the home appliance network system may communicate with the server 70 or the portable terminal 50 by Ethernet.
  • FIG. 2 is a perspective view illustrating a mobile robot according to an embodiment of the present invention
  • FIG. 3 is a plan view of the mobile robot of FIG. 2
  • FIG. 4 is a side view of the mobile robot of FIG. 2.
  • the mobile robot 100 may travel around a certain area by itself.
  • the mobile robot 100 may perform a function of cleaning the floor. Cleaning of the floor here includes suctioning dust (including foreign matter) from the floor or mopping the floor.
  • the mobile robot 100 includes a main body 110.
  • the main body 110 includes a cabinet forming an appearance.
  • the mobile robot 100 may include a suction unit 130 and a dust container 140 provided in the main body 110.
  • the mobile robot 100 includes an image acquisition unit 120 that detects information related to an environment around the mobile robot.
  • the mobile robot 100 includes a driving unit 160 for moving the main body.
  • the mobile robot 100 includes a control unit 181 for controlling the mobile robot 100.
  • the controller 181 is provided in the main body 110.
  • the driving unit 160 includes a wheel unit 111 for traveling of the mobile robot 100.
  • the wheel unit 111 is provided in the main body 110.
  • the mobile robot 100 may be moved forward, backward, left, and right, or rotated, by the wheel unit 111.
  • as the controller controls the driving of the wheel unit 111, the mobile robot 100 may autonomously travel over the floor.
  • the wheel unit 111 includes a main wheel 111a and a sub wheel 111b.
  • the main wheels 111a are provided at both sides of the main body 110 and are configured to be rotatable in one direction or the other according to a control signal from the controller. The main wheels 111a may be configured to be driven independently of each other; for example, each main wheel 111a may be driven by a different motor (see the differential-drive sketch below).
  • the sub wheel 111b supports the main body 110 together with the main wheel 111a, and is configured to assist driving of the mobile robot 100 by the main wheel 111a.
  • the sub wheel 111b may also be provided in the suction unit 130 described later.
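  • As a minimal sketch of how independently driven main wheels enable the forward, backward, turning, and in-place rotation described above, a differential-drive velocity mapping is shown below; the track width value is an illustrative assumption:

```python
# Illustrative differential-drive kinematics: map a desired body velocity
# to left/right main-wheel linear speeds driven by separate motors.
def wheel_speeds(linear_m_s: float, angular_rad_s: float,
                 track_m: float = 0.23):  # assumed distance between wheels
    left = linear_m_s - angular_rad_s * track_m / 2.0
    right = linear_m_s + angular_rad_s * track_m / 2.0
    return left, right

print(wheel_speeds(0.3, 0.0))  # straight ahead: both wheels at 0.3 m/s
print(wheel_speeds(0.0, 1.0))  # rotate in place: (-0.115, 0.115)
```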
  • the suction unit 130 may be disposed to protrude from the front side F of the main body 110.
  • the suction unit 130 is provided to suck air containing dust.
  • the suction unit 130 may have a form protruding from the front of the main body 110 to both left and right sides.
  • the front end of the suction unit 130 may be disposed in a position spaced forward from one side of the main body 110.
  • the left and right both ends of the suction unit 130 may be disposed at positions spaced apart from the main body 110 to the left and right sides.
  • the main body 110 is formed in a circular shape, and as both rear ends of the suction unit 130 protrude from the main body 110 to the left and right sides, respectively, empty spaces, that is, gaps, may be formed between the main body 110 and the suction unit 130.
  • the empty space is the space between the left and right ends of the main body 110 and the left and right ends of the suction unit 130, and has a shape recessed into the mobile robot 100.
  • the suction unit 130 may be detachably coupled to the main body 110.
  • the mop module (not shown) may be detachably coupled to the main body 110 in place of the separated suction unit 130.
  • the image acquisition unit 120 may be disposed in the main body 110.
  • the image acquisition unit 120 may be disposed in front of the main body 110.
  • the image acquisition unit 120 may be disposed to overlap the suction unit 130 in the vertical direction of the main body 110.
  • the image acquisition unit 120 may be disposed above the suction unit 130.
  • the image acquisition unit 120 may detect an obstacle around the mobile robot 100.
  • the image acquisition unit 120 may detect an obstacle or a feature in front of the suction unit 130, which is located at the front of the mobile robot 100, so that the robot does not collide with the obstacle.
  • the image acquisition unit 120 may further perform other sensing functions to be described later in addition to the sensing function.
  • the main body 110 may be provided with a dust container accommodating part (not shown).
  • a dust container 140, which separates and collects dust from the sucked air, is detachably coupled to the dust container accommodating part.
  • the dust container accommodating part may be formed at the rear side R of the main body 110. Part of the dust container 140 is accommodated in the dust container accommodating part, and the other part of the dust container 140 may protrude toward the rear R of the main body 110.
  • the dust container 140 has an inlet (not shown) through which air containing dust is introduced and an outlet (not shown) through which the air from which the dust has been separated is discharged.
  • the inlet and the outlet of the dust container 140 communicate with a first opening (not shown) and a second opening (not shown) formed in the inner wall of the dust container accommodating part when the dust container 140 is mounted on the dust container accommodating part.
  • a suction flow path for guiding air from the suction port of the suction unit 130 to the first opening is provided.
  • An exhaust flow path for guiding air from the second opening to an exhaust port (not shown) opened toward the outside is provided.
  • the air containing dust introduced through the suction unit 130 flows into the dust container 140 through the suction flow path inside the main body 110, and the air and the dust are separated from each other while passing through the filter or cyclone of the dust container 140. The dust is collected in the dust container 140, and the air is discharged from the dust container 140, passes through the exhaust flow path inside the main body 110, and is finally discharged to the outside through the exhaust port.
  • FIG. 5 is a block diagram showing a control relationship between major components of a mobile robot according to an embodiment of the present invention.
  • the mobile robot 100 includes a main body 110 and an image acquisition unit 120 that acquires an image around the main body 110.
  • the mobile robot 100 includes a driving unit 160 for moving the main body 110.
  • the driving unit 160 includes at least one wheel unit 111 for moving the main body 110.
  • the driving unit 160 includes a driving motor (not shown) connected to the wheel unit 111 to rotate the wheel unit 111.
  • the image acquisition unit 120 photographs a driving zone and may include a camera module.
  • the camera module may include a digital camera.
  • the digital camera includes at least one optical lens and an image sensor (eg, a CMOS image sensor) including a plurality of photodiodes (eg, pixels) formed by the light passing through the optical lens.
  • the camera module may further include a digital signal processor (DSP) that forms an image based on the signals output from the photodiodes.
  • the digital signal processor may generate not only a still image but also a moving image composed of frames of still images.
  • Multiple cameras may be installed for each part for photographing efficiency.
  • the image photographed by the camera may be used to recognize the kind of material, such as dust, hair, or flooring, present in the corresponding space, to determine whether to clean, or to check the cleaning time.
  • the camera may photograph a situation of an obstacle or a cleaning area existing on the front of the moving direction of the mobile robot 100.
  • the image acquisition unit 120 may acquire a plurality of images by continuously photographing the periphery of the main body 110, and the acquired plurality of images may be stored in the storage unit 105.
  • the mobile robot 100 may increase the accuracy of space recognition, position recognition, and obstacle recognition by using the plurality of images, or by selecting one or more images from the plurality of images and using the effective data.
  • the mobile robot 100 may include a sensor unit 170 including sensors for sensing various data related to the operation and state of the mobile robot.
  • the sensor unit 170 may include an obstacle detecting sensor detecting a front obstacle.
  • the sensor unit 170 may further include a cliff detection sensor for detecting the presence of a cliff on the floor in the driving zone, and a lower camera sensor for obtaining an image of the floor.
  • the obstacle detecting sensor may include an infrared sensor, an ultrasonic sensor, an RF sensor, a geomagnetic sensor, a position sensitive device (PSD) sensor, and the like.
  • the position and type of the sensors included in the obstacle detection sensor may vary depending on the type of the mobile robot, and the obstacle detection sensor may include a greater variety of sensors.
  • the sensor unit 170 may further include a motion detection sensor that detects the motion of the mobile robot 100 according to the driving of the main body 110 and outputs motion information.
  • a gyro sensor, a wheel sensor, an acceleration sensor, or the like may be used as the motion detection sensor.
  • the gyro sensor detects the rotation direction and the rotation angle when the mobile robot 100 moves according to the driving mode.
  • the gyro sensor detects the angular velocity of the mobile robot 100 and outputs a voltage value proportional to the angular velocity.
  • the controller 150 calculates the rotation direction and the rotation angle by using the voltage value output from the gyro sensor.
  • the wheel sensor is connected to the wheel unit 111 to sense the number of revolutions of the wheel.
  • the wheel sensor may be a rotary encoder.
  • the acceleration sensor detects a change in the speed of the mobile robot 100, for example, a change due to starting, stopping, changing direction, or colliding with an object.
  • the acceleration sensor may be built in the controller 150 to detect a speed change of the mobile robot 100.
  • the controller 150 may calculate a position change of the mobile robot 100 based on the motion information output from the motion detection sensor. This position is a relative position, which corresponds to the absolute position obtained using image information.
  • through such relative position recognition, the mobile robot can improve the performance of position recognition that uses image information and obstacle information, as in the dead-reckoning sketch below.
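  • A minimal dead-reckoning sketch combining the gyro sensor and wheel sensor outputs described above; the wheel radius and encoder resolution are illustrative assumptions:

```python
import math

WHEEL_RADIUS_M = 0.035  # assumed wheel radius
TICKS_PER_REV = 360     # assumed rotary-encoder resolution

def update_pose(x, y, heading, gyro_rate_rad_s, encoder_ticks, dt):
    """Integrate gyro angular velocity and wheel revolutions into a pose."""
    heading += gyro_rate_rad_s * dt  # rotation angle from angular velocity
    dist = 2 * math.pi * WHEEL_RADIUS_M * (encoder_ticks / TICKS_PER_REV)
    x += dist * math.cos(heading)
    y += dist * math.sin(heading)
    return x, y, heading  # relative position, later matched to the map
```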
  • the mobile robot 100 may include a power supply unit (not shown) having a rechargeable battery to supply power to the mobile robot.
  • the power supply unit supplies driving power and operating power to each component of the mobile robot 100, and when the remaining power is insufficient, the robot may receive power and be charged from a charging stand (not shown).
  • the mobile robot 100 may further include a battery detector (not shown) that detects a charging state of the battery and transmits a detection result to the controller 150.
  • the battery is connected to the battery detector so that the battery remaining amount and the charging state are transmitted to the controller 150.
  • the battery remaining amount may be displayed on the display 182 of the output unit 180.
  • the mobile robot 100 includes an input unit 125 for inputting on / off or various commands.
  • the input unit 125 may include a button, a dial, a touch screen, and the like.
  • the input unit 125 may include a microphone for receiving a user's voice command. Through the input unit 125, various control commands necessary for the overall operation of the mobile robot 100 may be input.
  • the mobile robot 100 may include an output unit 180 to display reservation information, a battery state, an operation mode, an operation state, an error state, etc. as an image or output a sound.
  • the output unit 180 may include a sound output unit 181 for outputting an audio signal.
  • the sound output unit 181 may output a warning sound or a voice message indicating an operation mode, an operation state, an error state, or the like, under the control of the controller 150.
  • the sound output unit 181 may convert an electrical signal from the controller 150 into an audio signal and output the audio signal.
  • to this end, a speaker or the like may be provided.
  • the output unit 180 may further include a display 182 that displays reservation information, a battery state, an operation mode, an operation state, an error state, and the like as an image.
  • the mobile robot 100 includes a controller 150 for processing and determining various information such as recognizing a current location, and a storage 105 for storing various data.
  • the mobile robot 100 may further include a communication unit 190 for transmitting and receiving data with an external terminal.
  • the external terminal has an application for controlling the mobile robot 100, displays a map of the driving zone to be cleaned by the mobile robot 100 by executing the application, and can designate a specific area on the map to be cleaned.
  • Examples of the external terminal may include a remote controller, a PDA, a laptop, a smartphone, a tablet, and the like, having an application for setting a map.
  • the external terminal may communicate with the mobile robot 100 to display a current location of the mobile robot together with a map, and information about a plurality of areas may be displayed. In addition, the external terminal updates and displays its position as the mobile robot travels.
  • the controller 150 controls the image acquisition unit 120, the input unit 125, the driving unit 160, the suction unit 130, and the like constituting the mobile robot 100, thereby controlling the overall operation of the mobile robot 100.
  • the controller 150 may process a voice input signal of the user received through the microphone of the input unit 125 and perform a voice recognition process.
  • the mobile robot 100 may include a voice recognition module that performs voice recognition inside or outside the controller 150.
  • simple voice recognition may be performed by the mobile robot 100 itself, and high-level voice recognition such as natural language processing may be performed by the server 70.
  • the storage unit 105 records various types of information necessary for the control of the mobile robot 100 and may include a volatile or nonvolatile recording medium.
  • the recording medium stores data that can be read by a microprocessor, and includes a hard disk drive (HDD), a solid state disk (SSD), a silicon disk drive (SDD), a ROM, a RAM, a CD-ROM, magnetic tape, a floppy disk, an optical data storage device, and the like.
  • the storage unit 105 may store a map for the driving zone.
  • the map may be input by an external terminal or a server that can exchange information with the mobile robot 100 through wired or wireless communication, or may be generated by the mobile robot 100 through self-learning.
  • the map may indicate the location of the rooms in the driving zone.
  • the current position of the mobile robot 100 may be displayed on the map, and the current position of the mobile robot 100 on the map may be updated during the driving process.
  • the external terminal stores the same map as the map stored in the storage unit 105.
  • the storage unit 105 may store cleaning history information. Such cleaning history information may be generated every time cleaning is performed.
  • the map of the driving zone stored in the storage unit 105 may be a navigation map used for driving during cleaning, a Simultaneous Localization and Mapping (SLAM) map used for location recognition, a learning map used for learning cleaning by storing information on obstacles and the like, a global location map used for global location recognition, or an obstacle recognition map recording information about recognized obstacles.
  • maps may be stored and managed in the storage unit 105 by use, but maps may not be clearly classified by use.
  • a plurality of pieces of information may be stored in one map to be used for at least two purposes.
  • the controller 150 may include a driving control module 151, a map generation module 152, a position recognition module 153, and an obstacle recognition module 154.
  • the driving control module 151 controls the driving of the mobile robot 100, and controls the driving of the driving unit 160 according to the driving setting.
  • the driving control module 151 may determine the driving path of the mobile robot 100 based on the operation of the driving unit 160. For example, the driving control module 151 may determine the current or past moving speed, the distance traveled, and the like of the mobile robot 100 based on the rotational speed of the wheel unit 111, and may update the position of the mobile robot 100 on the map based on the driving information thus identified.
  • the map generation module 152 may generate a map of the driving zone.
  • the map generation module 152 may generate a map by processing an image acquired through the image acquisition unit 120. That is, a cleaning map corresponding to the cleaning area can be created.
  • the map generation module 152 may recognize the global position by processing the image acquired through the image acquisition unit 120 at each position and associating it with the map.
  • the position recognition module 153 estimates and recognizes a current position.
  • the position recognition module 153 determines the position using the image information from the image acquisition unit 120 in connection with the map generation module 152, and can thus estimate and recognize the current position even when the position of the mobile robot 100 changes suddenly.
  • the location recognition module 153 may recognize the attributes of the current location; that is, the location recognition module 153 may recognize the space.
  • the mobile robot 100 may recognize its position during continuous driving through the position recognition module 153, and may also learn a map and estimate its current position through the map generation module 152 and the obstacle recognition module 154, without the position recognition module 153.
  • the image acquisition unit 120 acquires images around the mobile robot 100.
  • here, an image acquired by the image acquisition unit 120 is defined as an 'acquired image'.
  • the acquired image includes various features such as lights on the ceiling, edges, corners, blobs, and ridges.
  • the map generation module 152 detects a feature from each of the acquired images, and calculates a descriptor based on each feature point.
  • the map generation module 152 classifies at least one descriptor into a plurality of groups according to a predetermined sub-classification rule for each acquired image, based on descriptor information obtained from the acquired image of each position, and converts descriptors included in the same group into lower representative descriptors according to a predetermined sub-representation rule.
  • as another example, all descriptors collected from acquired images in a predetermined area may be classified into a plurality of groups according to the predetermined sub-classification rule, and descriptors included in the same group may each be converted into lower representative descriptors according to the predetermined sub-representation rule.
  • the map generation module 152 can obtain the feature distribution of each location through this process.
  • Each positional feature distribution can be represented by a histogram or an n-dimensional vector.
  • the map generation module 152 may estimate an unknown current position based on descriptors calculated from each feature point, without using the predetermined sub-classification rule and the predetermined sub-representation rule.
  • when the current position of the mobile robot 100 is unknown due to a position jump or the like, the current position may be estimated based on previously stored data such as descriptors or lower representative descriptors.
  • the mobile robot 100 obtains an acquired image through the image acquisition unit 120 at the unknown current position. Through the image, various features such as lights on the ceiling, edges, corners, blobs, and ridges are identified.
  • the position recognition module 153 detects features from the acquired image and calculates a descriptor.
  • based on at least one descriptor obtained from the acquired image of the unknown current position, the position recognition module 153 converts the data, according to a predetermined lower conversion rule, into information (a sub-recognition feature distribution) comparable with the position information to be compared (for example, the feature distribution of each position).
  • each position feature distribution may be compared with each recognition feature distribution to calculate a similarity. A similarity (probability) may be calculated for each position, and the position for which the greatest probability is calculated may be determined as the current position, as in the sketch below.
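  • A minimal sketch, assuming NumPy, of the feature-distribution comparison described above; it presumes descriptors have already been converted into lower representative descriptors (visual words), and all names are illustrative:

```python
import numpy as np

def feature_distribution(descriptors, words):
    """Histogram of nearest lower-representative-descriptor assignments."""
    dists = np.linalg.norm(descriptors[:, None, :] - words[None, :, :], axis=2)
    hist = np.bincount(dists.argmin(axis=1), minlength=len(words)).astype(float)
    return hist / hist.sum()  # normalized n-dimensional feature distribution

def most_likely_position(query_hist, stored_hists):
    """Index of the stored position whose distribution is most similar."""
    sims = [np.dot(query_hist, h) / (np.linalg.norm(query_hist) * np.linalg.norm(h))
            for h in stored_hists]
    return int(np.argmax(sims))  # greatest similarity (probability) wins
```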
  • the controller 150 may distinguish a driving zone and generate a map composed of a plurality of regions, or recognize a current position of the main body 110 based on a pre-stored map.
  • the controller 150 may transmit the generated map to an external terminal, a server, etc. through the communication unit 190. As described above, the controller 150 may store the map in the storage 105 when a map is received from an external terminal, a server, or the like.
  • the map may be divided into a plurality of cleaning areas, include connection paths connecting the plurality of areas, and include information about obstacles in the areas.
  • the controller 150 determines whether the position on the map matches the current position of the mobile robot.
  • the cleaning command may be input from a remote controller, an input unit, or an external terminal.
  • the controller 150 recognizes the current position, recovers the current position of the mobile robot 100, and, based on the current position, may control the driving unit 160 to move to the designated area.
  • the position recognition module 153 may analyze the acquired image input from the image acquisition unit 120 to estimate the current position based on the map.
  • the obstacle recognition module 154 or the map generation module 152 may also recognize the current position in the same manner.
  • the driving control module 151 calculates a driving route from the current position to the designated region and controls the driving unit 160 to move to the designated region.
  • the driving control module 151 may divide the entire driving zone into a plurality of areas according to the received cleaning pattern information, and set at least one area as a designated area.
  • the driving control module 151 may calculate the driving route according to the received cleaning pattern information, travel along the driving route, and perform cleaning.
  • the controller 150 may store the cleaning record in the storage unit 105 when cleaning of the set designated area is completed.
  • the controller 150 may transmit the operation state or cleaning state of the mobile robot 100 to the external terminal and the server at a predetermined cycle through the communication unit 190.
  • the external terminal displays the location of the mobile robot along with the map on the screen of the running application based on the received data, and outputs information on the cleaning state.
  • the mobile robot 100 moves in one direction until an obstacle or a wall surface is detected, and when the obstacle recognition module 154 recognizes an obstacle, the robot may determine a driving pattern, such as going straight or rotating, according to the attributes of the recognized obstacle.
  • the controller 150 may control to perform the avoidance driving in a different pattern based on the recognized property of the obstacle.
  • the controller 150 may control to avoid driving in different patterns according to the properties of obstacles such as non-hazardous obstacles (general obstacles), dangerous obstacles, and movable obstacles.
  • the controller 150 may control the robot to bypass a dangerous obstacle while securing a longer safety distance.
  • the controller 150 may control the robot to perform the avoidance driving corresponding to a general obstacle or the avoidance driving corresponding to a dangerous obstacle, and may control the robot to travel accordingly.
  • the mobile robot 100 may perform obstacle recognition and avoidance based on machine learning.
  • the controller 150 may include the obstacle recognition module 154, which recognizes obstacles learned in advance by machine learning from an input image, and the driving control module 151, which controls the driving of the driving unit 160 based on the attributes of the recognized obstacle.
  • although FIG. 5 illustrates an example in which the plurality of modules 151, 152, 153, and 154 are separately provided in the controller 150, the present invention is not limited thereto.
  • for example, the position recognition module 153 and the obstacle recognition module 154 may be integrated into one recognizer to constitute a single recognition module 155.
  • in this case, the recognizer may be trained using a learning technique such as machine learning, and the trained recognizer may recognize the attributes of areas, objects, and the like by classifying data input thereafter.
  • the map generation module 152, the position recognition module 153, and the obstacle recognition module 154 may be configured as one integrated module.
  • hereinafter, an embodiment in which the position recognition module 153 and the obstacle recognition module 154 are integrated into one recognizer and configured as a single recognition module 155 is described, but the modules may operate in the same manner even when provided separately.
  • the mobile robot 100 may include a recognition module 155 in which attributes of objects and spaces are learned by machine learning.
  • Machine learning means that a computer learns from data and solves a problem by itself, without a person directly instructing the computer with logic.
  • Deep learning is a machine learning technique based on artificial neural networks (ANN).
  • the artificial neural network may be implemented in software or in the form of hardware such as a chip.
  • the recognition module 155 may include an artificial neural network (ANN), in the form of software or hardware, in which the attributes of spaces and of objects such as obstacles have been learned.
  • the recognition module 155 may include a deep neural network (DNN), such as a convolutional neural network (CNN), a recurrent neural network (RNN), or a deep belief network (DBN), trained by deep learning.
  • the recognition module 155 may determine the attributes of spaces and objects included in the input image data based on the weights between the nodes included in the deep neural network (DNN).
  • the driving control module 151 may control the driving of the driving unit 160 based on the recognized space and the properties of the obstacle.
  • the recognition module 155 may recognize the attributes of spaces and obstacles included in a selected specific-viewpoint image based on data learned in advance by machine learning, as in the sketch below.
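  • A minimal sketch, assuming PyTorch, of a small CNN classifier of the kind the recognition module 155 could contain; the layer sizes and class labels are illustrative assumptions, not the patent's actual network:

```python
import torch
import torch.nn as nn

class ObstacleRecognizer(nn.Module):
    """Tiny CNN mapping a 64x64 RGB crop to obstacle-attribute scores."""
    def __init__(self, num_classes: int = 5):  # e.g. wall, cable, shoe, pet, person
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (N, 3, 64, 64)
        # class scores computed from the learned weights between nodes
        return self.classifier(self.features(x).flatten(1))

scores = ObstacleRecognizer()(torch.randn(1, 3, 64, 64))  # (1, 5) scores
```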
  • the storage unit 105 may store input data for determining space and object attributes, and data for training the deep neural network (DNN).
  • the storage unit 105 may store the original image obtained by the image acquisition unit 120 and the extracted images from which the predetermined region is extracted.
  • the storage 105 may store weights and biases forming the deep neural network (DNN).
  • weights and biases constituting the deep neural network structure may be stored in an embedded memory of the recognition module 155.
  • the recognition module 155 may perform a learning process using a predetermined image as training data whenever the image acquisition unit 120 acquires an image or extracts a partial region of an image, or may perform the learning process after a predetermined number of images or more have been acquired.
  • the mobile robot 100 may receive data related to machine learning from the predetermined server through the communication unit 190.
  • the mobile robot 100 may update the recognition module 155 based on data related to machine learning received from the predetermined server.
  • FIG. 6 is a view referred to for describing learning using product data according to an embodiment of the present invention.
  • product data obtained by the operation of a predetermined device such as a mobile robot 100 may be transmitted to the server 70.
  • the mobile robot 100 may transmit space-related, object-related, and usage-related data to the server 70.
  • the space- and object-related data may be data related to the recognition of spaces and objects recognized by the mobile robot 100, or image data of spaces and objects obtained by the image acquisition unit 120.
  • the usage-related data is data obtained according to the use of a predetermined product, for example, the mobile robot 100, and may include usage history data, sensing data obtained from the sensor unit 170, and the like.
  • the control unit 150 of the mobile robot 100, more specifically the recognition module 155, may be equipped with a deep neural network (DNN) structure such as a convolutional neural network (CNN).
  • the learned deep neural network (DNN) structure may receive input data for recognition, recognize the attributes of objects and spaces included in the input data, and output the result.
  • the learned deep neural network structure may also receive input data for recognition, and analyze and learn the usage-related data of the mobile robot 100 to recognize usage patterns, usage environments, and the like.
  • the space, object, and usage related data may be transmitted to the server 70 through the communication unit 190.
  • the server 70 may train a deep neural network (DNN) structure using training data and generate a configuration of learned weights.
  • the server 70 may then transmit the updated deep neural network (DNN) structure data to the mobile robot 100 so that it is updated, as in the sketch below.
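  • A minimal sketch, assuming PyTorch, of the server-side cycle: train the DNN on collected product data, then export the learned weights for transmission back to the mobile robot; the file name and data loader are illustrative assumptions:

```python
import torch

def train_and_export(model: torch.nn.Module, loader, epochs: int = 1) -> None:
    """Train on product data, then save weights to send to the robot."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:  # product data uploaded by robots
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()
    # updated DNN structure data, transmitted to the mobile robot 100
    torch.save(model.state_dict(), "dnn_update.pt")
```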
  • home appliance products such as mobile robot 100 may become smarter and provide an evolving user experience (UX) as they are used.
  • FIG. 7 is an example of a simplified internal block diagram of a server according to an embodiment of the present invention.
  • the server 70 may include a communication unit 720, a storage unit 730, a learning module 740, and a processor 710.
  • the processor 710 may control the overall operation of the server 70.
  • the server 70 may be a server operated by the manufacturer of home appliances such as the mobile robot 100, a server operated by a service provider, or a kind of cloud server.
  • the communication unit 720 may receive various data, such as status information and operation information, from a portable terminal, a home appliance such as the mobile robot 100, a gateway, or the like.
  • the communication unit 720 may transmit data corresponding to the received various information to a portable terminal, a home appliance such as the mobile robot 100, a gateway, or the like.
  • the communication unit 720 may include one or more communication modules, such as an internet module and a mobile communication module.
  • the storage unit 730 may store the received information and data for generating result information corresponding thereto.
  • the storage unit 730 may store data used for machine learning, result data, and the like.
  • the learning module 740 may serve as a learner of a home appliance such as the mobile robot 100.
  • the learning module 740 may include an artificial neural network, for example, a deep neural network (DNN) such as a convolutional neural network (CNN), a recurrent neural network (RNN), or a deep belief network (DBN), and may train the deep neural network.
  • after learning according to a setting, the processor 710 may control the artificial neural network structure of the home appliance, such as the mobile robot 100, to be updated to the learned artificial neural network structure.
  • the learning module 740 may receive input data for recognition, recognize a property of an object and a space included in the input data, and output the result.
  • the communication unit 720 may transmit the recognition result to the mobile robot 100.
  • the learning module 740 may analyze and learn usage-related data of the mobile robot 100 to recognize a usage pattern, a usage environment, and the like, and output the result.
  • the communication unit 720 may transmit the recognition result to the mobile robot 100.
  • home appliance products such as the mobile robot 100 may receive a recognition result from the server 70 and operate by using the received recognition result.
  • as the server 70 becomes smarter by learning from product data, an evolving user experience (UX) can be provided as the home appliance product is used.
  • the mobile robot 100 and the server 70 may also use external information.
  • the mobile robot 100 and the server 70 can provide an excellent user experience by comprehensively using internal information, such as spatial information, object information, and usage patterns obtained from a specific home appliance product such as the mobile robot 100, and external information obtained from other products or from other service servers linked to the server 70.
  • for example, the washing machine 32 may perform washing such that the washing finishes in accordance with the time the user arrives home.
  • the server 70 may perform voice recognition by receiving a voice input signal spoken by a user.
  • the server 70 may include a speech recognition module, and the speech recognition module may include an artificial neural network trained to perform speech recognition on input data and output a speech recognition result.
  • the server 70 may include a voice recognition server for voice recognition.
  • the voice recognition server may include a plurality of servers that share predetermined processes of the voice recognition procedure.
  • for example, the speech recognition server may include an automatic speech recognition (ASR) server that receives speech data and converts the received speech data into text data, and a natural language processing (NLP) server that receives the text data from the automatic speech recognition server and analyzes the received text data to determine the voice command.
  • the speech recognition server may further include a text-to-speech (TTS) server that converts the text speech recognition result output from the natural language processing server into speech data and transmits the speech data to another server or a home appliance; a sketch of this staged pipeline follows below.
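  • A minimal sketch of the staged ASR, NLP, and TTS pipeline described above; the endpoints and payload fields are illustrative assumptions, not an actual API:

```python
import requests  # assumed HTTP transport between the servers

def recognize_and_respond(speech: bytes) -> bytes:
    """ASR -> NLP -> TTS, returning speech data for the home appliance."""
    # 1) ASR server: speech data -> text data
    text = requests.post("http://asr.example/stt", data=speech).text
    # 2) NLP server: text data -> analyzed voice command and feedback text
    command = requests.post("http://nlp.example/parse", json={"text": text}).json()
    # 3) TTS server: feedback text -> speech data
    reply = requests.post("http://tts.example/speak",
                          json={"text": command["feedback"]})
    return reply.content  # sound file output by the sound output unit
```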
  • as the mobile robot 100 and/or the server 70 can perform voice recognition, a user's voice may be used as an input for controlling the mobile robot 100.
  • the mobile robot 100 may provide a variety of active control functions to the user by actively providing information or outputting a voice recommending a function or service.
  • FIGS. 9 to 11 are views referred to for describing a control method of a mobile robot according to an embodiment of the present invention.
  • the mobile robot 100 may store a usage history when performing a specific task (S810).
  • the mobile robot 100 may store the cleaning history information in the storage unit 105 after completing the cleaning of the driving zone once in a predetermined mode.
  • This cleaning history information may include information such as the cleaning mode and the cleaning success rate.
  • the mobile robot 100 may store interaction details with the user and data sensed by the sensor unit 170 as a usage history.
  • the usage history data stored in the storage unit 105 may be transmitted to the server 70 through the communication unit 190, and the server 70 may store product data, such as the usage history data received from the mobile robot 100, in the storage unit 730.
  • the artificial intelligence mounted on the controller 150 of the mobile robot 100 learns the usage history (S820).
  • the server 70 may analyze and learn product data obtained by the operation of a predetermined device such as the mobile robot 100.
  • the learning step S820 of the mobile robot 100 may be performed by receiving artificial-intelligence-related data, such as updated deep neural network (DNN) structure data, based on the learning performed in the server 70, and updating the stored artificial intelligence.
  • the mobile robot 100 may receive a user's voice input through at least one microphone provided in the input unit 125 (S830), and may recognize the received user's voice input (S840).
  • the voice recognition process (S840) for the voice input may be performed by the mobile robot 100 by itself, by the server 70, or in stages by the mobile robot 100 and the server 70, thereby enabling effective voice recognition.
  • the mobile robot 100 may include a voice recognition module inside or outside the controller 150 to recognize a user's voice by itself.
  • the mobile robot 100 may transmit data based on the voice input received through the communication unit 190 to the server 70, and may receive a voice recognition result corresponding to the voice input from the server 70.
  • for example, the mobile robot 100 may perform call-word (wake-word) recognition and simple keyword command recognition by itself, while the server 70 performs high-dimensional natural language speech recognition, as in the sketch below.
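  • A minimal sketch of this staged recognition, assuming a lightweight on-device recognizer for simple keyword commands and a server fallback for natural language; all names are illustrative:

```python
LOCAL_KEYWORDS = {"start", "stop", "charge", "home guard"}  # assumed commands

def recognize(audio: bytes, local_asr, server_asr) -> dict:
    """Recognize simple commands on-device; defer the rest to the server."""
    text = local_asr(audio)  # fast keyword/call-word recognizer on the robot
    if text in LOCAL_KEYWORDS:
        return {"source": "mobile_robot", "command": text}
    # high-dimensional natural-language recognition performed by the server
    return {"source": "server", "command": server_asr(audio)}
```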
  • the mobile robot 100 may perform an operation corresponding to the identified voice command of the user in response to receiving the user's voice command (S830).
  • the mobile robot 100 may output a feedback voice message through the sound output unit 181 (S850), together with performing the corresponding operation (S860).
  • the mobile robot 100 may include a text-to-speech (TTS) module in the sound output unit 181 and output a feedback voice message corresponding to the voice recognition result (S850).
  • the mobile robot 100 may store announcements (scripts) for outputting feedback voice messages in the storage unit 105, select one of the stored announcements according to the voice recognition result, and convert it into speech for output.
  • alternatively, the mobile robot 100 may output the feedback voice message after requesting and receiving the sound source file of a predetermined feedback voice message from the server 70.
  • according to the present invention, by providing voice feedback that grows, varying the voice, tone, and the like of the mobile robot according to usage time, frequency, pattern, and the like, it is possible to provide the user with anticipation and fun and to improve the reliability of and preference for the product.
  • to this end, the electronic device such as the mobile robot 100 according to an aspect of the present invention includes: the input unit 125 for receiving a user's voice input; the sound output unit 181 for outputting a feedback voice message corresponding to the voice input; the storage unit 105 storing the usage history of the electronic device such as the mobile robot 100; and the controller 150 controlling the feedback voice message to be output in a different voice according to the stored usage history.
  • the controller 150 may select a voice for the feedback voice message according to the usage time of the electronic device such as the mobile robot 100.
  • conventionally, voices output from various devices are provided as one kind of voice (tone or timbre) for all functions.
  • voice tone or tone
  • the same voice is provided regardless of operations such as meticulous cleaning, zigzag cleaning, concentrated cleaning, area classification cleaning, smart diagnosis, and home guard. That is, the same voice is output whether the user performs any function or feedback on any operation. This indicates that the robot cleaner is operated with a minimum guide function for informing the user of the product status and the working status of the robot cleaner, and the user has no expectation and utilization of this voice.
  • the voice and tone of the guide voice are changed according to the use history to provide different voice feedback to the user.
  • as the voice and tone of the notification voice change according to the usage history of the electronic device such as the mobile robot 100, the user may come to anticipate these changes.
  • the controller 150 may select a voice of a higher age as the voice for outputting the feedback voice message as the usage time of the electronic device such as the mobile robot 100 increases.
  • for example, an electronic device such as the mobile robot 100 may utter various voice guidance messages, including feedback voice messages, in a child's voice immediately after purchase and before it has been used for a predetermined time, and may switch to a voice of a higher age each time the usage time reaches a preset interval.
  • accordingly, just by hearing the voice of the electronic device such as the mobile robot 100, the user can gauge how much the device has been used without checking any separate data, and can feel that the device is growing.
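  • For illustration, voice selection by usage time might look like the following sketch; the hour thresholds are hypothetical, since the text only requires the voice to age up at preset intervals.

```python
# Hypothetical usage-time thresholds (hours) for aging up the voice.
VOICE_BY_USAGE_HOURS = [(0, "child"), (50, "student"), (200, "adult")]

def select_voice_by_usage(total_hours):
    """Pick the highest-age voice whose usage-time threshold has been reached."""
    voice = VOICE_BY_USAGE_HOURS[0][1]
    for threshold, name in VOICE_BY_USAGE_HOURS:
        if total_hours >= threshold:
            voice = name
    return voice

assert select_voice_by_usage(0) == "child"
assert select_voice_by_usage(75) == "student"
assert select_voice_by_usage(500) == "adult"
```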
  • the controller 150 may select the voice according to the task execution success rate, or the number of successful task executions, of the electronic device such as the mobile robot 100.
  • the controller 150 may select a voice for the feedback voice message in response to the cleaning success rate for the driving zone after the cleaning operation is performed.
  • the controller 150 may divide the driving zone into a plurality of zones and select a voice for the feedback voice message based on the success rate, or number of successes, of zone-classified cleaning in which a specific zone or zones are cleaned according to a predetermined setting.
  • the controller 150 may select a voice of a higher age and/or a higher tone as the task execution success rate, or the number of successful executions, of the mobile robot 100 increases.
  • the mission performance of the mobile robot 100 may be recorded so that the voice and the announcement can be varied according to the mission record.
  • the mission may be a charging station return count, cleaning completion count, home guard count, DNN object detection count, or the like.
  • the tone and announcement of the guide voice can be varied according to the mission record. For example, the voice may progress step by step from a child's voice to an adult's voice in response to the mission record.
  • the controller 150 may select, as the voice for outputting the feedback voice message, a voice set corresponding to the mission level that the usage history of the mobile robot 100 has reached among a plurality of preset mission levels.
  • each of the preset mission levels may include an execution-count condition for a predetermined task.
  • the lowest level 1 911 may be set to mission 1 921 having the lowest difficulty level.
  • mission 1 921 may be set to conditions of 10 general cleanings, 1 home guard, and generation of a map (drawing) of all or at least part of the driving zone.
  • the child voice 931 set corresponding to the level 1 911 may be used as the voice for outputting the feedback voice message. For example, after 10 general cleanings, 1 home guard, and map generation, the voice may be changed to the child voice 931 set corresponding to the level 1 911.
  • a level 2 912 that is one level higher than the level 1 911 may be set to a mission 2 922 having a higher difficulty level or a higher number of executions than the mission 1 921.
  • mission 2 922 may be set to 30 general cleanings, 10 home guards, and 10 zone-classified cleanings.
  • the controller 150 may select a voice of a higher age as the voice for outputting the feedback voice message as the mission level reached by the usage history of the mobile robot 100 is higher. Accordingly, it is possible to provide voice feedback that grows according to the degree of mission achievement.
  • the student (youth) voice 932 set corresponding to the level 2 912 may be used as the voice for outputting the feedback voice message.
  • the level 3 913, which is one level higher than the level 2 912, may be set to the mission 3 923 having a higher difficulty level or a higher number of executions than the mission 2 922.
  • mission 3 923 may be set to 150 general cleanings, 50 home guards, and 50 zone-classified cleanings.
  • an adult voice 933 which is set in correspondence with the level 3 913 may be used as the voice for outputting the feedback voice message.
  • the contents of each of the missions 921, 922, and 923 may vary.
  • the level 1 911 and the child voice 931 may be set as the default level and default voice. In this case, the child voice 931 is used at the beginning of the use of the mobile robot 100, and when mission 1 921 is achieved, the level is raised to the level 2 912 and the voice is changed to the student voice 932. One possible reading of this scheme is sketched below.
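```python
# Sketch of the mission-level scheme, assuming completing mission k advances
# the voice one step. The mission figures follow the text (10/1/map, 30/10/10,
# 150/50/50); the task counter names are hypothetical.
MISSIONS = [
    {"general_cleaning": 10,  "home_guard": 1,  "map_generated": 1},
    {"general_cleaning": 30,  "home_guard": 10, "zone_cleaning": 10},
    {"general_cleaning": 150, "home_guard": 50, "zone_cleaning": 50},
]
VOICES = ["child", "student", "adult", "adult"]  # voice per missions completed

def select_voice_by_mission(history):
    """Count consecutively completed missions and map that to a voice."""
    completed = 0
    for required in MISSIONS:
        if all(history.get(task, 0) >= count for task, count in required.items()):
            completed += 1
        else:
            break
    return VOICES[completed]

history = {"general_cleaning": 42, "home_guard": 12, "zone_cleaning": 5, "map_generated": 1}
print(select_voice_by_mission(history))  # 'student': mission 1 done, mission 2 not yet
```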
  • voice interactions such as recognition and speech utterance can be applied not only to the mobile robot 100 but also to various electronic devices, and the contents of each of the missions 921, 922, and 923 may be changed for each device.
  • for the air conditioner 11, the mission may be set based on execution counts for some of the tasks the air conditioner 11 can perform, such as 10 smart cares, 10 power-saving operations, 10 uses of the public hearing function, and 10 remote controls.
  • for the washing machine 32, the mission may be set based on execution counts for some of the tasks the washing machine 32 can perform, such as 10 smart cares, 10 remote controls, and 20 voice interactions.
  • the controller 150 may provide different guide messages according to the usage history by varying the content of the feedback voice message for the same voice input according to the usage history of the electronic device such as the mobile robot 100.
  • that is, not only may the voice be changed according to the usage history of the electronic device such as the mobile robot 100, but the feedback voice message corresponding to the same voice input may also be output with different content.
  • for example, as the usage history accumulates, a feedback voice message with more details may be output.
  • the feedback voice message may be output in a tone and content suitable for the voice selected according to the usage history of the mobile robot 100.
  • for example, the tone and announcement of the guide voice may be varied to suit the work content, such as a thick male voice for home guard and a mother-in-law's voice for scrubbing (mopping).
  • the controller 150 may provide voice feedback that changes based on the usage history of a specific task by selecting the voice according to the frequency with which the mobile robot performs that task. That is, the timbre and tone of the voice may be varied to fit the work content according to the frequency of the specific task.
  • the controller 150 may select a first voice when the mobile robot has performed a first task a reference number of times or more, and select a second voice when the mobile robot has performed a second task a reference number of times or more.
  • a thick male voice 1012 may be used as the voice for outputting the feedback voice message.
  • the mother-in-law voice 1022 may be used as the voice for outputting the feedback voice message.
  • the child voice 1032 can be used as the voice for outputting the feedback voice message.
  • a voice set corresponding to the most performed task may be used.
  • the voice set in response to a task included in or associated with the user voice command may be used based on the voice recognition result.
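  • A sketch of this task-frequency selection, assuming hypothetical task names and the voice pairings illustrated above (a thick male voice 1012, a mother-in-law voice 1022, a child voice 1032; the pairing of the child voice with a "play" task is an assumption):

```python
# Hypothetical mapping from tasks to voices, in the spirit of the examples.
VOICE_BY_TASK = {
    "home_guard": "thick_male_1012",
    "mopping": "mother_in_law_1022",
    "play": "child_1032",
}

def select_voice_for_command(history, command_task=None):
    """Prefer the voice tied to the task named in the recognized command;
    otherwise fall back to the voice of the most performed task."""
    if command_task in VOICE_BY_TASK:
        return VOICE_BY_TASK[command_task]
    most_performed = max(history, key=history.get, default=None)
    return VOICE_BY_TASK.get(most_performed, "default")

usage = {"home_guard": 15, "mopping": 4}
print(select_voice_for_command(usage))             # thick_male_1012 (most performed)
print(select_voice_for_command(usage, "mopping"))  # mother_in_law_1022 (command task)
```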
  • voice interactions such as recognition and speech utterance may be applied to various electronic devices, and the contents of each of the use conditions 1021, 1022, and 1023 may be changed for each device.
  • for the air conditioner 11, the use conditions may be set based on execution counts for some of the operations the air conditioner 11 can perform, such as 10 smart cares, 10 power-saving operations, 10 uses of the public hearing function, and 10 remote controls.
  • for the washing machine 32, the use conditions may be set based on execution counts for some of the operations the washing machine 32 can perform, such as 10 smart cares, 10 remote controls, and 20 voice interactions.
  • the mobile robot 100 may perform the voice recognition process by itself or through the server 70, and the controller 150 may select the voice for outputting the feedback voice message based on the voice recognition result of the voice input.
  • the controller 150 may select the voice according to the usage history of the task included in the voice recognition result.
  • an electronic device such as the mobile robot 100 according to an aspect of the present invention may include the input unit 125 for receiving a user's voice input, the sound output unit 181 for outputting a feedback voice message corresponding to the voice input, and the controller 150 configured to output the feedback voice message in a different voice according to the voice recognition result of the voice input, thereby providing different voice feedback according to the user's voice input.
  • the electronic device such as the mobile robot 100 may include a storage unit 105 in which a usage history of the electronic device is stored, and the voice can be selected according to that usage history.
  • for example, the controller 150 may check the usage history of the home guard task, and when the task has been performed 10 or more times but fewer than 20 times, a thick male voice set corresponding to that range may be used for outputting the feedback voice message.
  • the controller 150 may provide growing voice feedback by selecting a voice of a higher age for outputting the feedback voice message as the frequency of performing the task included in the voice recognition result increases.
  • the controller 150 may control the content of the feedback voice message for the same voice input differently according to the usage history of the mobile robot 100, as sketched below.
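```python
# Sketch of content variation for the same voice input; the wording and the
# count thresholds are hypothetical, since the text only requires the content
# to differ with the usage history.
def feedback_text(command, usage_count):
    """Return feedback for the same voice input at increasing detail levels."""
    if command != "start_cleaning":
        return "OK."
    if usage_count < 10:
        return "Starting cleaning."
    if usage_count < 100:
        return "Starting zigzag cleaning of the living room."
    return ("Starting zigzag cleaning of the living room; "
            "last time it took about 1 hour and 20 minutes.")
```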
  • the present invention can provide a means of indirectly conveying to the user information such as the driving accuracy and work success of the mobile robot 100, without the user having to check any data.
  • a notification may be provided by varying the tone or announcement of the voice provided to the user according to an indicator, such as a record count or a success rate, based on the driving record or work record.
  • the present invention can vary the voice tone and the announcement according to the functional characteristics of each function of the mobile robot 100.
  • an electronic device such as the mobile robot 100 according to an aspect of the present invention may include: the input unit 125 for receiving a user's voice input; the sound output unit 181 for outputting a feedback voice message corresponding to the voice input; a storage unit 105 that stores a usage history of the electronic device; and a controller 150 that controls the feedback voice message to be output according to a learning level based on the stored usage history.
  • the controller 150 may provide voice feedback that grows gradually with the learning level by selecting a voice of a higher age for outputting the feedback voice message as the learning level based on the stored usage history increases.
  • a voice guidance service may be provided by analyzing and learning internal and external information, such as spatial information, object information, and usage patterns, and by using different voices according to the learning level.
  • for example, the cleaning order may be adjusted as the robot's understanding of the space improves, and the voice may grow in response.
  • for example, after initial learning, the mobile robot 100 may give a voice briefing in a child's voice, such as "I learned the space and reduced the cleaning time. The entire cleaning took about 1 hour and 20 minutes in zigzag mode." Alternatively, this may be spoken as the feedback voice guide message in response to a voice input such as a user's cleaning command.
  • after about three months of further learning, if the level of spatial learning has improved, the mobile robot 100 may suggest, in a teenager's voice, a cleaning plan that takes into account information from and the behavior of other devices, for example: "Did you enjoy your gathering? The air purifier reports dust. If the dust settles after 30 minutes, shall I clean the living room?"
  • having mapped the space and the family members, the mobile robot 100 may ask, for example, "Shall I arrange the order so that the room is cleaned before 12 o'clock?"
  • when the furniture layout changes, the mobile robot 100 may announce the spatial changes in a voice in its 40s, for example: "The furniture layout has changed, so I learned the space again. The total cleaning time is now about 1 hour and 30 minutes."
  • likewise, the cleaning order can be adjusted through an understanding of the people in the home, and the voice can grow accordingly.
  • for example, after learning the behavior patterns of the people using the driving zone from video and audio data acquired over about one month, the mobile robot 100 may say, in a child's voice, "There seem to be three people in this family. I would like to know their names." Alternatively, this may be spoken as the feedback voice guide message in response to a voice input such as a user's cleaning command.
  • later, the mobile robot 100 may suggest, in a teenager's voice, a cleaning plan that takes user behavior patterns into account, for example: "Between 11 and 3 o'clock is a good time to clean like a whirlwind. Would you like to schedule a cleaning?"
  • with further learning, the mobile robot 100 may output voice guidance in a voice in its 20s, for example: "I have taken the family members' personalities into account. I can now understand a little more dialect. Speak to me often so I can learn more."
  • the mobile robot 100 may also announce user changes in a voice in its 40s, for example: "A new person has been identified. Would you like to add them as family? Please tell me their name."
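  • The growth scenario above can be summarized as a simple learning-level table, sketched below; the level numbers are hypothetical and the sample briefings paraphrase the examples in the text.

```python
# Learning level -> (voice age, sample briefing). Levels and wording are
# hypothetical; the briefings paraphrase the scenario above.
LEARNING_STAGES = [
    ("child", "I learned the space and reduced the cleaning time."),
    ("teen", "Shall I schedule cleaning for a convenient time?"),
    ("20s", "I can understand a little more dialect now."),
    ("40s", "A new person has been identified. Shall I add them as family?"),
]

def briefing_for(learning_level):
    """Select the voice and a sample briefing for the current learning level."""
    stage = min(max(learning_level, 0), len(LEARNING_STAGES) - 1)
    voice, text = LEARNING_STAGES[stage]
    return voice, text

print(briefing_for(0))  # ('child', 'I learned the space and reduced the cleaning time.')
```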
  • the mobile robot according to the present invention is not limited to the configurations and methods of the embodiments described above; rather, all or some of the embodiments may be selectively combined so that various modifications can be made.
  • the control method of the mobile robot may be implemented as processor-readable code on a processor-readable recording medium.
  • the processor-readable recording medium includes all kinds of recording devices that store data readable by a processor. Examples of the processor-readable recording medium include ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices, and it may also be implemented in the form of a carrier wave, such as transmission over the Internet.
  • the processor-readable recording medium can also be distributed over network coupled computer systems so that the processor-readable code is stored and executed in a distributed fashion.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

In one aspect, the invention relates to an artificial-intelligence mobile robot capable of providing different voice feedback according to a usage history, comprising: an input unit for receiving a voice input of a user; a sound output unit for outputting a feedback voice message corresponding to the voice input; a storage unit for storing a usage history of a mobile robot; and a controller for controlling the output of the feedback voice message, in different voices, according to the stored usage history.
PCT/KR2019/005462 2018-08-01 2019-05-08 Robot mobile à intelligence artificielle WO2020027406A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980065085.2A CN112805128A (zh) 2018-08-01 2019-05-08 人工智能移动机器人
EP19844736.9A EP3831548B1 (fr) 2018-08-01 2019-05-08 Robot mobile à intelligence artificielle

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201862713501P 2018-08-01 2018-08-01
US62/713,501 2018-08-01
KR1020180140743A KR102314538B1 (ko) 2018-08-01 2018-11-15 인공지능 이동 로봇
KR10-2018-0140743 2018-11-15

Publications (1)

Publication Number Publication Date
WO2020027406A1 true WO2020027406A1 (fr) 2020-02-06

Family

ID=69230707

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/005462 WO2020027406A1 (fr) 2018-08-01 2019-05-08 Robot mobile à intelligence artificielle

Country Status (1)

Country Link
WO (1) WO2020027406A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20040043422A (ko) * 2002-11-18 2004-05-24 삼성전자주식회사 수퍼 컴퓨터를 이용하는 홈로봇 및 이를 포함하는홈네트워크 시스템
KR20060032877A (ko) * 2004-10-13 2006-04-18 엘지전자 주식회사 로봇청소기를 이용한 음성인식 시스템 및 방법
KR20120114670A (ko) 2011-04-07 2012-10-17 엘지전자 주식회사 로봇 청소기 및 이의 제어 방법
KR20140072601A (ko) * 2012-12-05 2014-06-13 엘지전자 주식회사 로봇 청소기
KR20150014237A (ko) * 2013-07-29 2015-02-06 삼성전자주식회사 자동 청소 시스템, 청소 로봇 및 그 제어 방법
KR20180082242A (ko) * 2017-01-10 2018-07-18 엘지전자 주식회사 이동 로봇 및 그 제어 방법


Similar Documents

Publication Publication Date Title
WO2020045732A1 (fr) Procédé de commande de robot mobile
WO2021006556A1 (fr) Robot mobile et son procédé de commande
WO2020246643A1 (fr) Robot de service et procédé de service au client mettant en œuvre ledit robot de service
WO2018124682A2 (fr) Robot mobile et son procédé de commande
WO2021006677A2 (fr) Robot mobile faisant appel à l'intelligence artificielle et son procédé de commande
WO2018139865A1 (fr) Robot mobile
WO2020218652A1 (fr) Purificateur d'air
WO2019216578A1 (fr) Procédé et appareil d'exécution d'une fonction de nettoyage
WO2020246640A1 (fr) Dispositif d'intelligence artificielle pour déterminer l'emplacement d'un utilisateur et procédé associé
WO2021006542A1 (fr) Robot mobile faisant appel à l'intelligence artificielle et son procédé de commande
WO2021029457A1 (fr) Serveur d'intelligence artificielle et procédé permettant de fournir des informations à un utilisateur
WO2020246647A1 (fr) Dispositif d'intelligence artificielle permettant de gérer le fonctionnement d'un système d'intelligence artificielle, et son procédé
EP3684563A1 (fr) Robot mobile et son procédé de commande
WO2019004746A1 (fr) Procédé de fonctionnement de robot mobile
WO2018117616A1 (fr) Robot mobile
WO2019117576A1 (fr) Robot mobile et procédé de commande de robot mobile
WO2021006404A1 (fr) Serveur d'intelligence artificielle
WO2020145625A1 (fr) Dispositif d'intelligence artificielle et procédé de fonctionnement associé
EP3773111A1 (fr) Procédé et appareil d'exécution d'une fonction de nettoyage
WO2020241920A1 (fr) Dispositif d'intelligence artificielle pouvant commander un autre dispositif sur la base d'informations de dispositif
WO2020241951A1 (fr) Procédé d'apprentissage par intelligence artificielle et procédé de commande de robot l'utilisant
WO2020256160A1 (fr) Robot domestique à intelligence artificielle et procédé de commande dudit robot
WO2021172642A1 (fr) Dispositif d'intelligence artificielle permettant de fournir une fonction de commande de dispositif sur la base d'un interfonctionnement entre des dispositifs et procédé associé
WO2020251096A1 (fr) Robot à intelligence artificielle et procédé de fonctionnement associé
WO2020251101A1 (fr) Dispositif d'intelligence artificielle pour déterminer un trajet de déplacement d'un utilisateur, et procédé associé

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19844736

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019844736

Country of ref document: EP

Effective date: 20210301