WO2021010503A1 - Artificial intelligence air purification apparatus, control method therefor, and artificial intelligence apparatus connected thereto - Google Patents

Artificial intelligence air purification apparatus, control method therefor, and artificial intelligence apparatus connected thereto

Info

Publication number
WO2021010503A1
WO2021010503A1 PCT/KR2019/008675 KR2019008675W WO2021010503A1 WO 2021010503 A1 WO2021010503 A1 WO 2021010503A1 KR 2019008675 W KR2019008675 W KR 2019008675W WO 2021010503 A1 WO2021010503 A1 WO 2021010503A1
Authority
WO
WIPO (PCT)
Prior art keywords
air cleaning
cleaning device
processor
feature vector
image
Prior art date
Application number
PCT/KR2019/008675
Other languages
English (en)
Korean (ko)
Inventor
김진옥
황성목
Original Assignee
엘지전자 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 엘지전자 주식회사 filed Critical 엘지전자 주식회사
Priority to PCT/KR2019/008675
Publication of WO2021010503A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B01 PHYSICAL OR CHEMICAL PROCESSES OR APPARATUS IN GENERAL
    • B01D SEPARATION
    • B01D46/00 Filters or filtering processes specially modified for separating dispersed particles from gases or vapours
    • B01D46/42 Auxiliary equipment or operation thereof
    • B01D46/44 Auxiliary equipment or operation thereof controlling filtration
    • B01D46/46 Auxiliary equipment or operation thereof controlling filtration automatic
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments

Definitions

  • the present invention relates to an artificial intelligence-based air cleaning device and a device connected thereto.
  • the air purifier is a device that purifies the inhaled air and discharges the purified air.
  • contaminated air is drawn in by a fan so that dust and bacteria are collected by a filter, and various odors such as body odor can be deodorized.
  • depending on a specific action taken by a user, a substance that deteriorates the performance or life of the air cleaning device may be released.
  • oil vapor generated while a user is cooking, or excessive dust generated while cleaning, are examples.
  • accordingly, it may be desirable that the user not operate the air cleaning device when taking a specific action such as cooking or cleaning.
  • the problem to be solved by the present invention is to provide an artificial intelligence air cleaning device capable of automatically controlling the operation of an air cleaning device according to an action taken by a user or the like, and an artificial intelligence device connected thereto.
  • An air cleaning device according to an embodiment includes a camera for acquiring image data; an air cleaning unit including a fan motor and a filter module; and a processor that extracts at least one feature vector from the image data, obtains image description data for the image data based on the extracted at least one feature vector, generates a control command based on the obtained image description data, and controls the air cleaning unit based on the generated control command.
  • the processor may extract the at least one feature vector from the image data through a feature vector extraction module, and the feature vector extraction module may include a previously learned CNN-based encoder.
  • the at least one feature vector may include at least one feature point of brightness, saturation, illuminance, hue, noise level, blur level, and depth of the image data.
  • the processor may obtain the image description data from the at least one feature vector through an image description data acquisition module, and the image description data acquisition module may include an LSTM network model-based decoder.
  • the processor may set a parameter for at least one of a power source, an air volume, and a wind direction of the air cleaning device based on the image description data, and generate the control command including at least one set parameter.
  • the image description data includes a plurality of words, and the processor may set the at least one parameter based on at least one of components and meanings of the plurality of words.
  • the processor may set a parameter for at least one of power and air volume of the air cleaning device based on at least one of a subject and a predicate among the plurality of words.
  • the processor may set a parameter for a wind direction of the air cleaning device based on at least one of a preposition and an adjective among the plurality of words.
  • the processor may detect the subject and the predicate among the plurality of words based on the position and mutual relationship of each of the plurality of words, and, when the subject corresponds to a person, set a parameter for at least one of the power and air volume based on the category of the predicate.
  • the air cleaning device may further include an air quality detection sensor that senses air quality around the air cleaning device, and, when the subject does not correspond to a person, the processor may control the air cleaning unit based on the detection result of the air quality detection sensor.
  • the air cleaning device may further include a driving unit for movement.
  • the processor may set, based on the image description data, a parameter for at least one of power, air volume, wind direction, and driving of the air cleaning device, and generate the control command including the at least one set parameter.
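For illustration only, the parameters named above could be grouped into a single control command structure. The following Python sketch is not part of the disclosure; the field names and value ranges are assumptions.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical container for the operation control parameters named in the claims:
# power, air volume, wind direction and (for a movable device) driving.
@dataclass
class ControlCommand:
    power_on: Optional[bool] = None           # power parameter
    fan_speed: Optional[int] = None           # air volume parameter (e.g. 0-5)
    wind_direction_deg: Optional[int] = None  # wind direction parameter
    move_to: Optional[Tuple[float, float]] = None  # driving parameter (target position)

# Example: run at medium air volume and aim the outlet toward the detected user.
cmd = ControlCommand(power_on=True, fan_speed=3, wind_direction_deg=45)
```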
  • the artificial intelligence device may be connected to an air cleaning device.
  • the artificial intelligence device includes a communication unit for receiving image data from the air cleaning device, and a processor that extracts at least one feature vector from the received image data, obtains image description data for the image data based on the extracted at least one feature vector, generates a control command based on the obtained image description data, and transmits the generated control command to the air cleaning device.
  • a method of controlling an air cleaning device includes: acquiring image data through a camera included in the air cleaning device; extracting at least one feature vector from the obtained image data; obtaining image description data for the image data based on the extracted at least one feature vector; generating a control command for the air cleaning device based on the acquired image description data; and controlling an air cleaning unit included in the air cleaning device based on the generated control command.
  • the air cleaning device or the artificial intelligence device can provide a more effective air cleaning function that reflects the surrounding situation by controlling the air cleaning operation based on image description data acquired from an image around the air cleaning device.
  • the air cleaning device or artificial intelligence device can detect in advance, based on the behavior or state of a person detected from the image, the release of substances that reduce the life or performance of the air cleaning device, and stop its operation, thereby minimizing degradation of the life or performance of the air cleaning device.
  • FIG. 1 shows an AI device according to an embodiment of the present invention.
  • FIG. 2 shows an AI server according to an embodiment of the present invention.
  • FIG. 3 shows an AI system according to an embodiment of the present invention.
  • FIG. 4 is a view for explaining an air cleaning device and an artificial intelligence device according to an embodiment of the present invention.
  • FIG. 5 is a schematic block diagram of the air cleaning device shown in FIG. 4.
  • FIG. 6 is a flowchart for explaining the operation of the air cleaning device according to an embodiment of the present invention.
  • FIG. 7 is a view for explaining an operation of extracting a feature vector from an image by the air cleaning device.
  • FIG. 8 is a diagram illustrating an operation of acquiring image description data from a feature vector by the air cleaning device.
  • FIG. 9 is a flowchart for explaining in more detail an operation of the air cleaning device generating a control command for the air cleaning device from image description data.
  • FIG. 10 is a flowchart illustrating an example of an operation of setting at least one operation control parameter based on an element in a sentence of the image description data by the air cleaning device.
  • FIGS. 11 and 12 are exemplary diagrams related to the operation illustrated in FIG. 10.
  • FIG. 13 is a flowchart illustrating an example of an operation of setting at least one operation control parameter based on a constituent element in a sentence of the image description data by the air cleaning device.
  • FIG. 14 is an exemplary diagram related to the operation illustrated in FIG. 13.
  • FIG. 15 is a ladder diagram for explaining the operation of the air cleaning device and the artificial intelligence device according to an embodiment of the present invention.
  • Machine learning refers to the field of research into methodologies that define and solve various problems dealt with in the field of artificial intelligence.
  • Machine learning is also defined as an algorithm that improves the performance of a task through continuous experience.
  • An artificial neural network is a model used in machine learning, and may refer to an overall model with problem-solving capabilities, composed of artificial neurons (nodes) that form a network by combining synapses.
  • the artificial neural network may be defined by a connection pattern between neurons of different layers, a learning process for updating model parameters, and an activation function for generating an output value.
  • the artificial neural network may include an input layer, an output layer, and optionally one or more hidden layers. Each layer includes one or more neurons, and the artificial neural network may include neurons and synapses connecting neurons. In an artificial neural network, each neuron can output a function of an activation function for input signals, weights, and biases input through synapses.
  • Model parameters refer to parameters determined through learning, and include weights of synaptic connections and biases of neurons.
  • hyperparameters refer to parameters that must be set before learning in a machine learning algorithm, and include a learning rate, iteration count, mini-batch size, and initialization function.
  • the purpose of learning artificial neural networks can be seen as determining model parameters that minimize the loss function.
  • the loss function can be used as an index to determine an optimal model parameter in the learning process of the artificial neural network.
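As a reading aid for the terms above (model parameters, activation function, loss function, learning rate), the following minimal NumPy sketch shows one supervised gradient-descent step. All names and values are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

# Minimal single-layer network: the model parameters are the synaptic weights W and bias b.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 1))   # weights of synaptic connections
b = np.zeros(1)               # neuron bias
learning_rate = 0.01          # hyperparameter, set before learning

def sigmoid(z):               # activation function producing the output value
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    return sigmoid(x @ W + b)

def loss(y_pred, y_true):     # loss function used as the index to minimize
    return np.mean((y_pred - y_true) ** 2)

# One supervised-learning step with labeled training data (x, y_true):
# the parameters are updated in the direction that reduces the loss.
x = rng.normal(size=(4, 3))
y_true = rng.integers(0, 2, size=(4, 1)).astype(float)
y_pred = forward(x)
delta = 2 * (y_pred - y_true) * y_pred * (1 - y_pred)  # dLoss/dz
W -= learning_rate * (x.T @ delta) / len(x)
b -= learning_rate * np.mean(delta)
```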
  • Machine learning can be classified into supervised learning, unsupervised learning, and reinforcement learning according to the learning method.
  • Supervised learning refers to a method of training an artificial neural network when a label for the training data is given, where a label means the correct answer (or result value) that the artificial neural network should infer when the training data is input to it.
  • Unsupervised learning may refer to a method of training an artificial neural network in a state where a label for training data is not given.
  • Reinforcement learning may mean a learning method in which an agent defined in a certain environment learns to select an action or action sequence that maximizes the cumulative reward in each state.
  • machine learning implemented with a deep neural network (DNN) including a plurality of hidden layers is sometimes referred to as deep learning, and deep learning is a part of machine learning.
  • hereinafter, machine learning is used in a sense that includes deep learning.
  • a robot may refer to a machine that automatically processes or operates a task given by its own capabilities.
  • a robot having a function of recognizing the environment and performing an operation by self-determining may be referred to as an intelligent robot.
  • Robots can be classified into industrial, medical, household, military, etc. depending on the purpose or field of use.
  • the robot may be provided with a driving unit including an actuator or a motor to perform various physical operations such as moving a robot joint.
  • the movable robot includes a wheel, a brake, a propeller, etc. in a driving unit, and can travel on the ground or fly in the air through the driving unit.
  • Autonomous driving refers to self-driving technology
  • autonomous driving vehicle refers to a vehicle that is driven without a user's manipulation or with a user's minimal manipulation.
  • autonomous driving may include a technology that maintains a driving lane, a technology that automatically adjusts the speed such as adaptive cruise control, a technology that automatically drives along a specified route, and a technology that automatically sets a route when a destination is set.
  • the vehicle includes all of a vehicle having only an internal combustion engine, a hybrid vehicle including an internal combustion engine and an electric motor, and an electric vehicle including only an electric motor, and may include not only automobiles, but also trains and motorcycles.
  • the autonomous vehicle can be viewed as a robot having an autonomous driving function.
  • the extended reality collectively refers to Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR).
  • VR technology provides only CG images of real world objects or backgrounds
  • AR technology provides virtually created CG images on top of real object images
  • MR technology is a computer graphic technology that mixes and combines virtual objects with the real world.
  • MR technology is similar to AR technology in that it shows real and virtual objects together.
  • in AR technology, virtual objects are used in a form that complements real objects, whereas in MR technology, virtual objects and real objects are used with equal characteristics.
  • XR technology can be applied to a head-mounted display (HMD), a head-up display (HUD), mobile phones, tablet PCs, laptops, desktops, TVs, digital signage, and the like, and a device to which XR technology is applied may be referred to as an XR device.
  • FIG. 1 shows an AI device according to an embodiment of the present invention.
  • the AI device 100 may be implemented as a TV, a projector, a mobile phone, a smartphone, a desktop computer, a laptop computer, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation system, a tablet PC, a wearable device, a set-top box (STB), a DMB receiver, a radio, a washing machine, a refrigerator, digital signage, a robot, a vehicle, and the like.
  • the terminal 100 may include a communication unit 110, an input unit 120, a learning processor 130, a sensing unit 140, an output unit 150, a memory 170, and a processor 180.
  • the communication unit 110 may transmit and receive data with external devices such as other AI devices 100a to 100e or the AI server 200 using wired/wireless communication technology.
  • the communication unit 110 may transmit and receive sensor information, a user input, a learning model, and a control signal with external devices.
  • the communication technologies used by the communication unit 110 include Global System for Mobile communication (GSM), Code Division Multiple Access (CDMA), Long Term Evolution (LTE), 5G, Wireless LAN (WLAN), Wireless-Fidelity (Wi-Fi), Bluetooth, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), ZigBee, and Near Field Communication (NFC).
  • the input unit 120 may acquire various types of data.
  • the input unit 120 may include a camera for inputting an image signal, a microphone for receiving an audio signal, a user input unit for receiving information from a user, and the like.
  • when a camera or microphone is treated as a sensor, a signal obtained from the camera or microphone may be referred to as sensing data or sensor information.
  • the input unit 120 may acquire training data for model training and input data to be used when acquiring an output by using the training model.
  • the input unit 120 may obtain unprocessed input data, and in this case, the processor 180 or the learning processor 130 may extract an input feature as preprocessing of the input data.
  • the learning processor 130 may train a model composed of an artificial neural network using the training data.
  • the learned artificial neural network may be referred to as a learning model.
  • the learning model can be used to infer a result value for new input data other than the training data, and the inferred value can be used as a basis for a decision to perform a certain operation.
  • the learning processor 130 may perform AI processing together with the learning processor 240 of the AI server 200.
  • the learning processor 130 may include a memory integrated or implemented in the AI device 100.
  • the learning processor 130 may be implemented using the memory 170, an external memory directly coupled to the AI device 100, or a memory maintained in an external device.
  • the sensing unit 140 may acquire at least one of internal information of the AI device 100, information about the surrounding environment of the AI device 100, and user information by using various sensors.
  • the sensors included in the sensing unit 140 include a proximity sensor, an illuminance sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an RGB sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, an optical sensor, a microphone, a lidar, a radar, and the like.
  • the output unit 150 may generate output related to visual, auditory or tactile sense.
  • the output unit 150 may include a display unit that outputs visual information, a speaker that outputs auditory information, and a haptic module that outputs tactile information.
  • the memory 170 may store data supporting various functions of the AI device 100.
  • the memory 170 may store input data, training data, a learning model, and a learning history acquired from the input unit 120.
  • the processor 180 may determine at least one executable operation of the AI device 100 based on information determined or generated using a data analysis algorithm or a machine learning algorithm. Further, the processor 180 may perform the determined operation by controlling the components of the AI device 100.
  • the processor 180 may request, search, receive, or utilize data from the learning processor 130 or the memory 170, and may control the components of the AI device 100 to execute a predicted operation or an operation determined to be desirable among the at least one executable operation.
  • the processor 180 may generate a control signal for controlling the corresponding external device and transmit the generated control signal to the corresponding external device.
  • the processor 180 may obtain intention information for a user input, and determine a user's requirement based on the obtained intention information.
  • the processor 180 may obtain intention information corresponding to a user input by using at least one of a Speech To Text (STT) engine for converting a speech input into a character string or a Natural Language Processing (NLP) engine for obtaining intention information from natural language.
  • At this time, at least one or more of the STT engine and the NLP engine may be composed of an artificial neural network, at least partially trained according to a machine learning algorithm.
  • at least one of the STT engine or the NLP engine may be trained by the learning processor 130, trained by the learning processor 240 of the AI server 200, or trained by distributed processing thereof.
  • the processor 180 may collect history information including user feedback on the operation content or operation of the AI device 100 and store it in the memory 170 or the learning processor 130, or transmit it to an external device such as the AI server 200.
  • the collected history information can be used to update the learning model.
  • the processor 180 may control at least some of the components of the AI device 100 to drive an application program stored in the memory 170. Furthermore, the processor 180 may operate by combining two or more of the components included in the AI device 100 to drive the application program.
  • FIG. 2 shows an AI server according to an embodiment of the present invention.
  • the AI server 200 may refer to a device that trains an artificial neural network using a machine learning algorithm or uses the learned artificial neural network.
  • the AI server 200 may be composed of a plurality of servers to perform distributed processing, or may be defined as a 5G network.
  • the AI server 200 may be included as a part of the AI device 100 to perform at least part of AI processing together.
  • the AI server 200 may include a communication unit 210, a memory 230, a learning processor 240, and a processor 260.
  • the communication unit 210 may transmit and receive data with an external device such as the AI device 100.
  • the memory 230 may include a model storage unit 231.
  • the model storage unit 231 may store a model (or artificial neural network, 231a) being trained or trained through the learning processor 240.
  • the learning processor 240 may train the artificial neural network 231a using the training data.
  • the learning model of the artificial neural network may be used while mounted on the AI server 200, or may be mounted on and used by an external device such as the AI device 100.
  • the learning model can be implemented in hardware, software, or a combination of hardware and software. When part or all of the learning model is implemented in software, one or more instructions constituting the learning model may be stored in the memory 230.
  • the processor 260 may infer a result value for new input data using the learning model, and generate a response or a control command based on the inferred result value.
  • FIG. 3 shows an AI system according to an embodiment of the present invention.
  • in the AI system 1, at least one of an AI server 200, a robot 100a, an autonomous vehicle 100b, an XR device 100c, a smartphone 100d, or a home appliance 100e is connected to the cloud network 10.
  • the robot 100a to which the AI technology is applied, the autonomous vehicle 100b, the XR device 100c, the smartphone 100d, or the home appliance 100e may be referred to as the AI devices 100a to 100e.
  • the cloud network 10 may constitute a part of the cloud computing infrastructure or may mean a network that exists in the cloud computing infrastructure.
  • the cloud network 10 may be configured using a 3G network, a 4G or Long Term Evolution (LTE) network, or a 5G network.
  • the devices 100a to 100e and 200 constituting the AI system 1 may be connected to each other through the cloud network 10.
  • the devices 100a to 100e and 200 may communicate with each other through a base station, or may communicate with each other directly without passing through a base station.
  • the AI server 200 may include a server that performs AI processing and a server that performs an operation on big data.
  • the AI server 200 is connected through the cloud network 10 to at least one of the robot 100a, the autonomous vehicle 100b, the XR device 100c, the smartphone 100d, or the home appliance 100e, which are the AI devices constituting the AI system 1, and may help with at least part of the AI processing of the connected AI devices 100a to 100e.
  • the AI server 200 may train an artificial neural network according to a machine learning algorithm in place of the AI devices 100a to 100e, and may directly store the learning model or transmit it to the AI devices 100a to 100e.
  • the AI server 200 may receive input data from the AI devices 100a to 100e, infer a result value for the received input data using a learning model, generate a response or control command based on the inferred result value, and transmit it to the AI devices 100a to 100e.
  • alternatively, the AI devices 100a to 100e may directly infer a result value for input data using a learning model, and generate a response or a control command based on the inferred result value.
  • the AI devices 100a to 100e to which the above-described technology is applied will be described.
  • the AI devices 100a to 100e illustrated in FIG. 3 may be viewed as a specific example of the AI device 100 illustrated in FIG. 1.
  • the robot 100a is applied with AI technology and may be implemented as a guide robot, a transport robot, a cleaning robot, a wearable robot, an entertainment robot, a pet robot, an unmanned flying robot, and the like.
  • the robot 100a may include a robot control module for controlling an operation, and the robot control module may refer to a software module or a chip implementing the same as hardware.
  • the robot 100a may acquire status information of the robot 100a using sensor information acquired from various types of sensors, detect (recognize) the surrounding environment and objects, generate map data, determine a movement path and a travel plan, determine a response to user interaction, or determine an action.
  • the robot 100a may use sensor information obtained from at least one sensor from among a lidar, a radar, and a camera in order to determine a moving route and a driving plan.
  • the robot 100a may perform the above operations using a learning model composed of at least one artificial neural network.
  • the robot 100a may recognize a surrounding environment and an object using a learning model, and may determine an operation using the recognized surrounding environment information or object information.
  • the learning model may be directly learned by the robot 100a or learned by an external device such as the AI server 200.
  • the robot 100a may perform an operation by generating a result directly using the learning model, or may transmit sensor information to an external device such as the AI server 200 and perform the operation by receiving the result generated accordingly.
  • the robot 100a may determine a movement path and a driving plan using at least one of map data, object information detected from sensor information, or object information acquired from an external device, and may control the driving unit so that the robot 100a travels according to the determined movement path and driving plan.
  • the map data may include object identification information on various objects arranged in a space in which the robot 100a moves.
  • the map data may include object identification information on fixed objects such as walls and doors and movable objects such as flower pots and desks.
  • the object identification information may include a name, type, distance, and location.
  • the robot 100a may perform an operation or run by controlling a driving unit based on a user's control/interaction.
  • the robot 100a may acquire interaction intention information according to a user's motion or voice speech, and determine a response based on the obtained intention information to perform an operation.
  • the autonomous vehicle 100b may be implemented as a mobile robot, vehicle, or unmanned aerial vehicle by applying AI technology.
  • the autonomous driving vehicle 100b may include an autonomous driving control module for controlling an autonomous driving function, and the autonomous driving control module may refer to a software module or a chip implementing the same as hardware.
  • the autonomous driving control module may be included inside as a configuration of the autonomous driving vehicle 100b, but may be configured as separate hardware and connected to the exterior of the autonomous driving vehicle 100b.
  • the autonomous driving vehicle 100b may acquire state information of the autonomous driving vehicle 100b using sensor information obtained from various types of sensors, detect (recognize) surrounding environments and objects, generate map data, determine a travel route and a driving plan, or determine an action.
  • the autonomous vehicle 100b may use sensor information obtained from at least one sensor from among a lidar, a radar, and a camera, similar to the robot 100a, in order to determine a moving route and a driving plan.
  • the autonomous vehicle 100b may recognize an environment or an object in an area where the view is obscured or an area beyond a certain distance by receiving sensor information from external devices, or may receive information recognized directly by external devices.
  • the autonomous vehicle 100b may perform the above operations using a learning model composed of at least one artificial neural network.
  • the autonomous vehicle 100b may recognize a surrounding environment and an object using a learning model, and may determine a driving movement using the recognized surrounding environment information or object information.
  • the learning model may be directly learned by the autonomous vehicle 100b or learned by an external device such as the AI server 200.
  • the autonomous vehicle 100b may perform an operation by generating a result directly using the learning model, or may operate by transmitting sensor information to an external device such as the AI server 200 and receiving the result generated accordingly.
  • the autonomous vehicle 100b may determine a movement path and a driving plan using at least one of map data, object information detected from sensor information, or object information acquired from an external device, and may control the driving unit so that the autonomous vehicle 100b travels according to the determined movement path and driving plan.
  • the map data may include object identification information on various objects arranged in a space (eg, a road) in which the autonomous vehicle 100b travels.
  • the map data may include object identification information on fixed objects such as street lights, rocks, and buildings, and movable objects such as vehicles and pedestrians.
  • the object identification information may include a name, type, distance, and location.
  • the autonomous vehicle 100b may perform an operation or drive by controlling a driving unit based on a user's control/interaction.
  • the autonomous vehicle 100b may acquire interaction intention information according to a user's motion or voice speech, and determine a response based on the obtained intention information to perform the operation.
  • the XR device 100c, to which AI technology is applied, may be implemented as a head-mounted display (HMD), a head-up display (HUD) provided in a vehicle, a TV, a mobile phone, a smartphone, a computer, a wearable device, a home appliance, digital signage, a vehicle, a fixed robot, or a mobile robot.
  • the XR device 100c may analyze 3D point cloud data or image data acquired through various sensors or from an external device to generate location data and attribute data for 3D points, thereby acquiring information on surrounding spaces or real objects, and may render and output an XR object to be displayed.
  • the XR apparatus 100c may output an XR object including additional information on the recognized object in correspondence with the recognized object.
  • the XR device 100c may perform the above operations by using a learning model composed of at least one artificial neural network.
  • the XR device 100c may recognize a real object from 3D point cloud data or image data using a learning model, and may provide information corresponding to the recognized real object.
  • the learning model may be directly learned by the XR device 100c or learned by an external device such as the AI server 200.
  • the XR device 100c may perform an operation by generating a result directly using the learning model, or may transmit sensor information to an external device such as the AI server 200 and perform the operation by receiving the result generated accordingly.
  • the robot 100a may be implemented as a guide robot, a transport robot, a cleaning robot, a wearable robot, an entertainment robot, a pet robot, an unmanned flying robot, etc. by applying AI technology and autonomous driving technology.
  • the robot 100a to which AI technology and autonomous driving technology are applied may refer to a robot having an autonomous driving function or a robot 100a interacting with the autonomous driving vehicle 100b.
  • the robot 100a having an autonomous driving function may collectively refer to devices that move by themselves according to a given movement line without the user's control or by determining the movement line by themselves.
  • the robot 100a having an autonomous driving function and the autonomous driving vehicle 100b may use a common sensing method to determine one or more of a moving route or a driving plan.
  • the robot 100a having an autonomous driving function and the autonomous driving vehicle 100b may determine one or more of a movement route or a driving plan using information sensed through a lidar, a radar, and a camera.
  • the robot 100a interacting with the autonomous driving vehicle 100b exists separately from the autonomous driving vehicle 100b and may be linked to the autonomous driving function inside the autonomous driving vehicle 100b, or may perform an operation associated with the user on board the autonomous driving vehicle 100b.
  • the robot 100a interacting with the autonomous driving vehicle 100b may acquire sensor information on behalf of the autonomous driving vehicle 100b and provide it to the autonomous driving vehicle 100b, or may acquire sensor information, generate surrounding-environment or object information, and provide it to the autonomous driving vehicle 100b, thereby controlling or assisting the autonomous driving function of the autonomous driving vehicle 100b.
  • the robot 100a interacting with the autonomous vehicle 100b may monitor a user in the autonomous vehicle 100b or control the function of the autonomous vehicle 100b through interaction with the user.
  • the robot 100a may activate an autonomous driving function of the autonomous driving vehicle 100b or assist the control of a driving unit of the autonomous driving vehicle 100b.
  • the functions of the autonomous vehicle 100b controlled by the robot 100a may include not only an autonomous driving function, but also functions provided by a navigation system or an audio system provided inside the autonomous driving vehicle 100b.
  • the robot 100a interacting with the autonomous driving vehicle 100b may provide information or assist a function to the autonomous driving vehicle 100b from outside of the autonomous driving vehicle 100b.
  • like a smart traffic light, the robot 100a may provide traffic information including signal information to the autonomous vehicle 100b, or, like an automatic electric charger for an electric vehicle, it may interact with the autonomous driving vehicle 100b and automatically connect the electric charger to the charging port.
  • the robot 100a may be implemented as a guide robot, a transport robot, a cleaning robot, a wearable robot, an entertainment robot, a pet robot, an unmanned flying robot, a drone, etc., by applying AI technology and XR technology.
  • the robot 100a to which the XR technology is applied may refer to a robot that is an object of control/interaction in an XR image.
  • the robot 100a is distinguished from the XR device 100c and may be interlocked with each other.
  • when the robot 100a, which is the object of control/interaction in the XR image, acquires sensor information from sensors including a camera, the robot 100a or the XR device 100c may generate an XR image based on the sensor information, and the XR device 100c may output the generated XR image.
  • the robot 100a may operate based on a control signal input through the XR device 100c or a user's interaction.
  • the user can check the XR image corresponding to the viewpoint of the remotely linked robot 100a through an external device such as the XR device 100c, and through the interaction can adjust the autonomous driving path of the robot 100a, control its motion or driving, or check information on surrounding objects.
  • the autonomous vehicle 100b may be implemented as a mobile robot, a vehicle, or an unmanned aerial vehicle by applying AI technology and XR technology.
  • the autonomous driving vehicle 100b to which the XR technology is applied may refer to an autonomous driving vehicle including a means for providing an XR image, or an autonomous driving vehicle that is an object of control/interaction within the XR image.
  • the autonomous vehicle 100b, which is an object of control/interaction in the XR image is distinguished from the XR device 100c and may be interlocked with each other.
  • the autonomous vehicle 100b provided with a means for providing an XR image may acquire sensor information from sensors including a camera, and may output an XR image generated based on the acquired sensor information.
  • the autonomous vehicle 100b may provide an XR object corresponding to a real object or an object in a screen to the occupant by outputting an XR image with a HUD.
  • when the XR object is output to the HUD, at least a part of the XR object may be output so as to overlap the actual object toward which the occupant's gaze is directed.
  • when the XR object is output on a display provided inside the autonomous vehicle 100b, at least a part of the XR object may be output so as to overlap an object in the screen.
  • the autonomous vehicle 100b may output XR objects corresponding to objects such as lanes, other vehicles, traffic lights, traffic signs, motorcycles, pedestrians, and buildings.
  • when the autonomous driving vehicle 100b, which is the object of control/interaction in the XR image, acquires sensor information from sensors including a camera, the autonomous driving vehicle 100b or the XR device 100c may generate an XR image based on the sensor information, and the XR device 100c may output the generated XR image.
  • the autonomous vehicle 100b may operate based on a control signal input through an external device such as the XR device 100c or a user's interaction.
  • FIG. 4 is a view for explaining an air cleaning device and an artificial intelligence device according to an embodiment of the present invention.
  • the air cleaning device 400 is a device that purifies the inhaled air and discharges the purified air.
  • contaminated air is drawn in through a fan so that dust, bacteria, and the like are collected, and various odors can be deodorized.
  • the conventional air cleaning device 400 includes a sensor that detects a concentration of dust and a gas in the air, and may control driving based on a sensing value measured through the sensor.
  • the types of substances in the air that can be detected through the sensor may be somewhat limited. If a substance that degrades the performance or lifespan of the air cleaning device 400 is present in the air but is not detected, the performance or lifespan may deteriorate as the air cleaning device 400 continues to be driven.
  • on the other hand, if additional sensors are provided to detect such substances, the manufacturing cost may increase.
  • since the provision range of the purified air discharged from the air cleaning device 400 is limited, it may be desirable to provide the purified air to a region around the user.
  • the air cleaning device 400 according to an embodiment includes a camera 442, detects a surrounding situation from image data (still images and/or video) acquired through the camera 442, and can control its own driving depending on the detected situation. Through this, the air cleaning device 400 can provide an air cleaning function optimized for a surrounding situation such as a user's behavior, and can stop driving when a specific situation is detected, thereby preventing a reduction in life or performance.
  • the air cleaning device 400 may be connected to the artificial intelligence device 200a through a network.
  • the artificial intelligence device 200a may be a server operated by a manufacturer of the air cleaning device 400, but is not limited thereto.
  • the artificial intelligence device 200a may correspond to an example implementation of the AI server 200 described above in FIG. 2. Accordingly, the configuration and related contents of the AI server 200 described above in FIG. 2 may be similarly applied to the artificial intelligence device 200a.
  • the air cleaning device 400 may transmit image data acquired through the camera 442 to the artificial intelligence device 200a.
  • the artificial intelligence device 200a may control the driving of the air cleaning device 400 based on the acquired image data.
  • FIG. 5 is a schematic block diagram of the air cleaning device shown in FIG. 4.
  • the air cleaning device 400 may include a communication unit 410, an input unit 420, a learning processor 430, a sensing unit 440, an output unit 450, an air cleaning unit 460, a memory 470, and a processor 480.
  • the configurations shown in FIG. 5 are not essential for implementing the air cleaning device 400, and the air cleaning device 400 may include more or less components.
  • the air cleaning device 400 may correspond to an example of the AI device 100 described above in FIG. 1.
  • the contents of each of the configurations described above in FIG. 1 may be similarly applied to each of the corresponding configurations among the configurations of the air cleaning apparatus 400.
  • the communication unit 410 may include at least one communication module for connecting the air cleaning device 400 to a user's terminal or appliance.
  • An example of the communication technology supported by the at least one communication module is as described above with reference to FIG. 1.
  • the processor 480 may transmit state information or operation information of the air cleaning device 400 to a terminal or the like through the communication unit 410. Alternatively, the processor 480 may receive control information of the air cleaning device 400 from a terminal or the like through the communication unit 410.
  • the communication unit 410 may connect the air cleaning device 400 to the artificial intelligence device 200a described above in FIG. 4.
  • the processor 480 may transmit image data acquired through the camera 442 to the artificial intelligence device 200a, and receive a control command for the air cleaning device 400 from the artificial intelligence device 200a.
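Purely as an illustration of this exchange (image data out, control command back), the sketch below assumes a hypothetical HTTP endpoint on the artificial intelligence device 200a; the patent does not specify any transport protocol or message format.

```python
import requests  # assumed transport; the disclosure does not name a protocol

AI_DEVICE_URL = "http://ai-device.local/air-cleaner"  # hypothetical endpoint

def request_control_command(image_bytes: bytes) -> dict:
    """Send camera image data to the artificial intelligence device 200a and
    receive the control command generated for the air cleaning device 400."""
    resp = requests.post(
        f"{AI_DEVICE_URL}/image",
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
        timeout=5.0,
    )
    resp.raise_for_status()
    # e.g. {"power_on": true, "fan_speed": 3, "wind_direction_deg": 45}
    return resp.json()
```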
  • the input unit 420 may include at least one input means for inputting a predetermined signal or data to the air cleaning device 400 by a user's manipulation.
  • the sensing unit 440 may include at least one sensor that senses various information around the air cleaning device 400.
  • the sensing unit 440 may include a camera 442, a microphone 444, and an air quality sensor 446.
  • the camera 442 may acquire an image (still image and/or video) around the air cleaning device 400.
  • the microphone 444 may detect sounds (personal voices, sounds generated from objects, etc.) around the air cleaning device 400.
  • the air quality detection sensor 446 may include at least one sensor that detects the concentration of dust, the type and/or concentration of gas, temperature, humidity, etc. contained in the air around the air cleaning device 400.
  • the processor 480 may adjust the air volume or direction based on the detection result of the air quality sensor 446.
  • some of the components (eg, a camera, a microphone, etc.) included in the sensing unit 440 may function as the input unit 420.
  • the output unit 450 may include an output means for informing a user of various information (air volume, wind direction, driving mode, air quality information, etc.) related to the operation of the air cleaning device 400.
  • the output unit 450 may include a display or a light output unit as a graphic or text output unit, and may include a speaker or a buzzer as an audio output unit.
  • the air cleaning unit 460 may include components related to the air cleaning operation of the air cleaning device 400.
  • the air cleaner 460 may include a fan motor 462 for inhaling and discharging air, and a filter module 464 for purifying the inhaled air.
  • a fan (not shown) provided in the air cleaning device 400 may be rotated to generate a flow of air.
  • the processor 480 may adjust the air volume of the air cleaning device 400 by controlling the rotation speed of the fan motor 462.
  • the filter module 464 may include at least one filter for purifying the inhaled air.
  • the filter module 464 may include a dust collecting filter for collecting foreign matter or dust in the air, a deodorizing filter (for example, a photocatalytic filter or activated carbon) for removing odors by decomposing volatile organic compounds in the air, a carbon dioxide collecting filter, and the like.
  • the filter module 464 may further include a sterilization filter (UV lamp, UV LED, etc.) that sterilizes bacteria or microorganisms in the air.
  • the processor 480 may activate at least one of the filters included in the filter module 464 on the basis of a set driving mode among a plurality of driving modes related to the air cleaning operation.
  • the memory 470 includes control data for controlling the operation of components included in the air cleaning device 400, set values such as an operation mode input through the input unit 420, and whether an error occurs in the air cleaning device 400 Various types of information and data, such as data for determining, may be stored.
  • the memory 470 may store algorithms or program data related to each of the feature vector extraction module 482 and the image description data acquisition module 486 to be described later.
  • the memory 470 may include various storage devices such as ROM, RAM, EEPROM, flash drive, and hard drive.
  • the processor 480 may include at least one processor or a controller that controls the operation of the air cleaning device 400.
  • the processor 480 may include at least one CPU, an application processor (AP), a microcomputer, an integrated circuit, an application-specific integrated circuit (ASIC), and the like.
  • the processor 480 may control the overall operation of components included in the air cleaning device 400.
  • the processor 480 may include an ISP that generates image data by processing an image signal acquired through the camera 442, a display controller that controls the operation of the display 452, and the like.
  • the air cleaning apparatus 400 may obtain image description data describing features included in the image data from image data obtained through the camera 442.
  • the feature may refer to a user, an animal, an action of the user or animal, and other objects included in image data.
  • the air cleaning device 400 may include a feature vector extraction module 482 and an image description data acquisition module 486.
  • the feature vector extraction module 482 may extract at least one feature vector from image data.
  • the feature vector extraction module 482 may include an encoder based on a convolutional neural network (CNN).
  • the image description data acquisition module 486 may obtain image description data for the image data from at least one feature vector extracted by the feature vector extraction module 482.
  • the image description data may be provided in text form.
  • the image description data acquisition module 486 may include a long short-term memory (LSTM)-based decoder included in a recurrent neural network (RNN).
  • Each of the feature vector extraction module 482 and the image description data acquisition module 486 may be implemented by hardware, software, or a combination thereof.
  • the feature vector extraction module 482 and the image description data acquisition module 486 may be implemented in the artificial intelligence device 200a.
  • the air cleaning device 400 may be implemented as a movable air cleaning device capable of moving inside and outside the space in which it is disposed.
  • the air cleaning device 400 may include a driving unit 490 for driving.
  • the driving unit 490 may include a driving means such as a wheel, and a driving motor that provides power to the driving means.
  • FIG. 6 is a flowchart for explaining the operation of the air cleaning device according to an embodiment of the present invention.
  • FIG. 7 is a view for explaining an operation of extracting a feature vector from an image by the air cleaning device.
  • FIG. 8 is a diagram illustrating an operation of acquiring image description data from a feature vector by the air cleaning device.
  • the air cleaning device 400 may acquire an image (image data) through the camera 442 (S100) and extract at least one feature vector from the acquired image data (S110).
  • the processor 480 may acquire image data about the air cleaning device 400 through the camera 442.
  • the image data may be still image data, but is not limited thereto and may be moving image data.
  • the processor 480 may acquire the image data when the air cleaning device 400 is powered on. Alternatively, the processor 480 may periodically acquire the image data during or regardless of the driving of the air cleaning device 400.
  • the processor 480 may extract at least one feature vector from the acquired image data.
  • the processor 480 may input the image data to the feature vector extraction module 482.
  • the feature vector extraction module 482 may extract at least one feature vector from the input image data.
  • the feature vector extraction module 482 may include an extractor 483 for extracting at least one feature vector FV from image data IMAGE.
  • the extractor 483 may include an encoder based on a previously learned convolutional neural network (CNN).
  • the at least one feature vector FV may mean an image feature point representing a feature of the image data IMAGE.
  • the at least one feature vector FV may include at least one of brightness, saturation, illuminance, hue, noise level, blur level, frequency-based feature point, energy level, or depth as image feature points.
  • the frequency-based feature point may include at least one of an edge, a shape, or a texture.
  • the frequency-based feature point may be obtained through Fourier transformation in image data IMAGE.
  • the edge and shape may be obtained in a high frequency region, and a texture, etc. may be obtained in a low frequency region.
  • the frequency-based feature points may be extracted using an image pyramid network.
  • as feature points for color, the at least one feature vector FV may include red (R), green (G), and blue (B) according to the RGB model, hue, saturation, and value according to the HSV model, and luminance (Y) and color differences (Cb, Cr) according to the YCbCr (YUV) model.
  • the feature vector extraction module 482 may output a feature vector matrix L_FV obtained by arranging 485 the extracted at least one feature vector FV in a one-dimensional matrix form.
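As an illustrative sketch of a pre-trained CNN encoder whose output is flattened into a one-dimensional feature vector matrix L_FV, the following PyTorch code uses ResNet-50 as the backbone; the specific backbone, preprocessing, and dimensions are assumptions, not part of the disclosure.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pre-trained CNN used as the encoder of the feature vector extraction module 482.
# ResNet-50 is only an illustrative choice; the disclosure only requires a pre-trained CNN.
cnn = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
encoder = torch.nn.Sequential(*list(cnn.children())[:-1])  # drop the classification head
encoder.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_feature_vector(image_path: str) -> torch.Tensor:
    """Return the feature vector arranged as a one-dimensional matrix (L_FV)."""
    image = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        features = encoder(preprocess(image).unsqueeze(0))  # shape (1, 2048, 1, 1)
    return features.flatten()                               # 1-D feature vector matrix
```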
  • the air cleaning device 400 may acquire image description data by using at least one extracted feature vector (S120).
  • the processor 480 may obtain image description data through the image description data acquisition module 486 from at least one feature vector (or feature vector matrix) extracted from the image data.
  • the image description data may include text indicating characteristics of the image data.
  • the processor 480 may input the at least one feature vector (feature vector matrix L_FV) acquired through the feature vector extraction module 482 to the image description data acquisition module 486.
  • the image description data acquisition module 486 may include a decoder based on an LSTM network model, which is a kind of RNN.
  • an RNN, that is, a recurrent neural network, has a structure in which a current output value is obtained by using a previous output value and a current input value.
  • the LSTM network model may be implemented to obtain a current output value related to at least one previous output value by assigning a weight to how much influence each of the at least one previous output value has in predicting the current output value.
  • the image description data acquisition module 486 may provide a probability value for each of the output values LSTM_OUTPUT through the activation function softmax.
  • the total sum of the probability values of each of the output values LSTM_OUTPUT may be 1.
  • the output values Y1 to Y9 finally obtained may correspond to the output values having the highest probability.
  • the image description data acquisition module 486 may sequentially obtain current words that can be connected to previous words by using at least one previously acquired word and the currently input feature vector.
  • the image description data acquisition module 486 may finally acquire image description data (IMAGE_DESC) in a sentence form based on the acquired words.
  • the image description data acquisition module 486 may output image description data IMAGE_DESC from at least one feature vector (feature vector matrix L_FV).
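The following PyTorch sketch illustrates an LSTM-based decoder of the kind described above, which seeds its state with the feature vector and emits words one at a time through a softmax over a vocabulary. The class name, dimensions, vocabulary, and token indices are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class CaptionDecoder(nn.Module):
    """Sketch of the image description data acquisition module 486: an LSTM that
    turns the feature vector matrix L_FV into a word sequence (IMAGE_DESC)."""
    def __init__(self, feature_dim=2048, embed_dim=256, hidden_dim=512, vocab_size=10000):
        super().__init__()
        self.init_h = nn.Linear(feature_dim, hidden_dim)  # feature vector seeds the state
        self.init_c = nn.Linear(feature_dim, hidden_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTMCell(embed_dim, hidden_dim)
        self.fc = nn.Linear(hidden_dim, vocab_size)

    def generate(self, feature_vec, start_token=1, end_token=2, max_len=20):
        # feature_vec: shape (1, feature_dim), e.g. the flattened CNN output L_FV
        h, c = self.init_h(feature_vec), self.init_c(feature_vec)
        word, caption = torch.tensor([start_token]), []
        for _ in range(max_len):
            h, c = self.lstm(self.embed(word), (h, c))
            probs = torch.softmax(self.fc(h), dim=-1)  # probabilities sum to 1
            word = probs.argmax(dim=-1)                # keep the word with highest probability
            if word.item() == end_token:
                break
            caption.append(word.item())
        return caption                                 # indices of the output words Y1, Y2, ...
```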
  • the air cleaning device 400 may generate a control command for the air cleaning device 400 based on the acquired image description data (S130), and drive the air cleaning device 400 according to the generated control command (S140).
  • the processor 480 may generate a control command for controlling the on/off, air volume, and/or wind direction of the air cleaning device 400 based on the text of the acquired image description data.
  • the processor 480 may control the components based on the generated control command.
  • FIG. 9 is a flowchart for explaining in more detail an operation of the air cleaning device generating a control command for the air cleaning device from image description data.
  • the air cleaning device 400 may analyze a sentence corresponding to the text of the image description data (S200).
  • the processor 480 may analyze which component each of the words in the sentence corresponding to the text corresponds to.
  • the components are constituent elements forming a sentence, and may include a subject, a predicate, an object, a preposition, an adjective, and the like.
  • the processor 480 may analyze components of each of the words included in the sentence based on the basic structure of the language corresponding to the sentence or the definition or form of words included in the sentence.
  • the air cleaning device 400 may set at least one operation control parameter from the constituent elements (components) in the sentence (S210).
  • the operation control parameter may mean a parameter of items related to driving of the air cleaning device 400 such as power, air volume, wind direction, and driving.
  • the processor 480 may set operation control parameters of the air cleaning device 400 based on a component or definition of each word according to the analysis result of the sentence.
  • the processor 480 may set parameters related to the power and/or air volume of the air cleaning device 400 based on a subject and/or a predicate in a sentence. Further, the processor 480 may set parameters related to the wind direction and/or driving of the air cleaning device 400 based on prepositions and/or adjectives in the sentence.
  • the air cleaning device 400 may generate a control command including at least one set operation control parameter (S220) and control the driving of the air cleaning device 400 based on the generated control command (S230).
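The following is a minimal sketch of steps S200 to S210: the sentence of the image description data is split into rough components (subject, predicate, place expression) that can later be converted into operation control parameters. The keyword lists and the "-ing" heuristic for finding the predicate are illustrative assumptions; a real implementation would rely on proper analysis of the sentence structure as described above.

```python
# Minimal sketch of sentence component analysis (S200-S210).
# Word lists and heuristics are assumptions made for illustration only.
from typing import Optional

SUBJECT_WORDS = {"man", "woman", "boy", "girl", "cat", "dog"}
PLACE_WORDS = {"kitchen", "bed", "sofa", "left", "right"}

def analyze_components(sentence: str) -> dict:
    """Return a crude {subject, predicate, place} breakdown of the sentence."""
    words = [w.strip(".,").lower() for w in sentence.split()]
    subject: Optional[str] = next((w for w in words if w in SUBJECT_WORDS), None)
    # crude predicate guess: the first word ending in "-ing" (e.g. "cooking")
    predicate: Optional[str] = next((w for w in words if w.endswith("ing")), None)
    place: Optional[str] = next((w for w in words if w in PLACE_WORDS), None)
    return {"subject": subject, "predicate": predicate, "place": place}

print(analyze_components("A man is cooking a pot in the kitchen."))
# {'subject': 'man', 'predicate': 'cooking', 'place': 'kitchen'}
```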
  • FIGS. 10 to 14 are merely examples related to the operation of FIG. 9, and the method or standard for generating the control command may be variously modified according to a user or manufacturer's setting.
  • FIG. 10 is a flowchart illustrating an example of an operation of setting at least one operation control parameter based on an element in a sentence of the image description data by the air cleaning device.
  • the air cleaning device 400 may check a subject in a sentence corresponding to the text of the image description data (S300).
  • the processor 480 may detect a word corresponding to a subject in the sentence based on the position and mutual relationship of each word included in the sentence.
  • the word corresponding to the subject may include a noun or pronoun representing a person, an animal, or an object.
  • the air cleaning device 400 may check a predicate (verb, etc.) in the sentence (S320).
  • the processor 480 may check a predicate in the sentence in order to confirm the behavior or state of the person.
  • the processor 480 may determine the predicate by estimating the position of the predicate based on the language type of the sentence and then confirming the word at the estimated position.
  • the processor 480 may check the predicate by detecting a verb or the like corresponding to the predicate among words in the sentence.
  • when the subject is not identified, or the subject is not related to a person, the processor 480 may control the driving of the air cleaning device 400 based on a driving algorithm set for when the user is absent.
  • the processor 480 may not drive the air cleaning device 400 when the user is absent.
  • the processor 480 may drive the air cleaning device 400 based on the detection result of the air quality sensor 446 when the user is absent.
  • when the identified predicate is included in a first category, the air cleaning device 400 may set a parameter corresponding to power off (S340) and generate a control command including the set parameter (S220).
  • a predicate included in the first category may correspond to a case in which a substance that degrades the performance or life of the air cleaning device 400 is likely to be released while the person corresponding to the subject performs the action, or is in the state, corresponding to the predicate.
  • the memory 470 may store information on predicates included in the first category.
  • predicates such as “cook” and “clean” may be included in the first category. This is because when a person is cooking, harmful oil vapor may be discharged to the air cleaning device 400, and excessive dust may be discharged during cleaning.
  • the processor 480 may set a parameter corresponding to power off and generate a control command including the set parameter.
  • the processor 480 may turn off the power of the air cleaning device 400 based on the generated control command, thereby minimizing the intake, into the air cleaning device 400, of substances that degrade its performance or life.
  • the air cleaning device 400 may set a parameter corresponding to the first air volume (S360).
  • the predicate included in the second category may be a predicate corresponding to a dynamic action or state.
  • when the person corresponding to the subject performs an action or is in a state corresponding to such a dynamic predicate, dust or odor in the air may increase.
  • in this case, the processor 480 may set the parameter related to the air volume to 'strong' (or set a parameter to increase the air volume relative to the present level).
  • the air cleaning device 400 may set a parameter corresponding to the second air volume (S370).
  • a predicate not included in each of the first category and the second category may be a predicate corresponding to a static action or state.
  • when the person corresponding to the subject performs an action or is in a state corresponding to such a static predicate, dust or odor in the air may not increase, or may decrease.
  • in this case, the processor 480 may set the parameter related to the air volume to 'weak' (or set a parameter to reduce the air volume relative to the present level).
  • in this way, the air cleaning device 400 can be driven efficiently, and the degradation of its life or performance can be minimized, by setting the optimum air volume or stopping the driving of the air cleaning device 400 based on the characteristics of the predicate in the sentence included in the image description data.
  • the air cleaning device 400 may set a parameter for a discharge position and/or direction of air based on the sentence (S380).
  • the processor 480 may detect a position of a person based on a preposition, adjective, and/or noun included in the sentence, and may set a parameter for a wind direction so that air is discharged to an area including the detected position.
  • the processor 480 may set a driving-related parameter to move the air cleaning device 400 to an area adjacent to the sensed position.
  • the air cleaning device 400 may generate a control command including set parameters (S220).
  • the generated control command may include at least one parameter related to power, air volume, wind direction, and/or driving.
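The following is a minimal sketch of the decision flow of FIG. 10 as described above: a predicate in the first category switches the device off, a predicate in the second (dynamic) category selects a strong air volume, any other predicate selects a weak air volume, and a detected place sets the wind direction. The category word lists, parameter names, and the absence-mode fallback label are illustrative assumptions, not definitions from the present embodiment.

```python
# Sketch of mapping sentence components to operation control parameters (FIG. 10).
from typing import Optional, Dict

# Illustrative category word lists (assumptions, not defined by the embodiment)
FIRST_CATEGORY = {"cooking", "cleaning"}                 # may release oil vapor / dust
SECOND_CATEGORY = {"playing", "exercising", "dancing"}   # dynamic actions

def set_operation_parameters(subject: Optional[str],
                             predicate: Optional[str],
                             place: Optional[str]) -> Dict[str, str]:
    """Map sentence components to operation control parameters."""
    if subject is None:
        # no person identified: fall back to an absence driving algorithm
        return {"mode": "absence_algorithm"}
    if predicate in FIRST_CATEGORY:
        return {"power": "off"}                          # S340: avoid harmful intake
    params = {"power": "on"}
    if predicate in SECOND_CATEGORY:
        params["air_volume"] = "strong"                  # S360: first air volume
    else:
        params["air_volume"] = "weak"                    # S370: second air volume
    if place:
        params["wind_direction"] = place                 # S380: discharge toward the person
    return params

# Worked examples corresponding to FIGS. 11 and 12 (captions assumed as below):
print(set_operation_parameters("man", "cooking", "kitchen"))
# {'power': 'off'}
print(set_operation_parameters("man and woman", "sleeping", "bed"))
# {'power': 'on', 'air_volume': 'weak', 'wind_direction': 'bed'}
```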
  • FIGS. 11 and 12 are exemplary diagrams related to the operation illustrated in FIG. 10.
  • the processor 480 may acquire first image data IMAGE1 through the camera 442.
  • the processor 480 may obtain the first image description data IMAGE_DESC1 from the first image data IMAGE1 through the feature vector extraction module 482 and the image description data acquisition module 486.
  • the first image description data IMAGE_DESC1 may correspond to “A man is cooking a pot in the kitchen.”
  • the processor 480 may detect a subject (“A man”) among words included in the acquired first image description data IMAGE_DESC1.
  • the processor 480 may detect a predicate among the words. For example, the processor 480 may recognize a word (“is cooking”) corresponding to a predicate among the words, or recognize a word next to the subject based on the sentence structure of English as a predicate.
  • the processor 480 may confirm that the recognized predicate (“is cooking” or “cooking”) is included in the first category. In this case, the processor 480 may set a first parameter PARA1 corresponding to power off, and generate a first control command CMD1 including the set first parameter PARA1.
  • the processor 480 may not perform the air cleaning operation by turning off the power of the air cleaning device 400 based on the generated first control command CMD1.
  • the processor 480 may acquire the second image data IMAGE2 through the camera 442.
  • the processor 480 may obtain the second image description data IMAGE_DESC2 from the second image data IMAGE2 through the feature vector extraction module 482 and the image description data acquisition module 486.
  • the second image description data IMAGE_DESC2 may correspond to “A man and a woman are sleeping in the bed.”
  • the processor 480 may detect a subject (“A man and a woman”) among words included in the acquired second image description data IMAGE_DESC2.
  • the processor 480 may detect a predicate among the words. For example, the processor 480 may recognize a word (“are sleeping”) corresponding to a predicate among the words, or recognize a word next to the subject based on the sentence structure of English as a predicate.
  • the processor 480 may recognize that the recognized predicate (“are sleeping” or “sleeping”) is not included in each of the first category and the second category. In this case, the processor 480 may set the first parameter PARA1 corresponding to the air volume 'weak'.
  • the processor 480 may set the second parameter PARA2 related to the wind direction to correspond to the direction in which the bed is located, based on “in the bed” among the words included in the second image description data IMAGE_DESC2.
  • the processor 480 may generate a second control command CMD2 including the set first parameter PARA1 and the second parameter PARA2.
  • the processor 480 may control the fan motor 462 based on the generated second control command CMD2 and control the rotation mechanism (or the driving unit 490) related to the wind direction, thereby performing an air cleaning operation corresponding to the set parameters.
  • FIG. 13 is a flowchart illustrating an example of an operation of setting at least one operation control parameter based on a constituent element in a sentence of the image description data by the air cleaning device.
  • the air cleaning device 400 may check a subject in a sentence corresponding to the text of the image description data (S400).
  • the processor 480 may detect a word corresponding to a subject in the sentence based on the position and mutual relationship of each word included in the sentence.
  • the word corresponding to the subject may include a noun or pronoun representing a person, an animal, or an object.
  • the air cleaning device 400 may check a predicate (verb, etc.) in the sentence (S420).
  • the processor 480 may check the predicate in the sentence to confirm the behavior or state of the animal.
  • the processor 480 may determine the predicate by estimating the position of the predicate based on the language type of the sentence and then confirming the word at the estimated position.
  • the processor 480 may check the predicate by detecting a verb or the like corresponding to the predicate among words in the sentence.
  • the air cleaning device 400 may set a parameter corresponding to the first air volume (S440).
  • the predicate included in the second category may be a predicate corresponding to a dynamic action or state.
  • the processor 480 may set a parameter related to the air volume to 'strong' (or set a parameter to increase the air volume relative to the present level).
  • the air cleaning device 400 may set a parameter corresponding to the second air volume (S450).
  • a predicate not included in the second category may be a predicate corresponding to a static action or state. If the identified predicate is not included in the second category, the processor 480 may set a parameter related to the air volume to 'weak' (or set a parameter to reduce the air volume relative to the present level).
  • the air cleaning device 400 may set a parameter for a discharge position and/or direction of air based on the sentence (S460).
  • the processor 480 may detect the position of the animal based on a preposition, adjective, and/or noun included in the sentence, and may set a parameter for a wind direction so that air is discharged to an area including the detected position.
  • the processor 480 may set a driving-related parameter to move the air cleaning device 400 to an area adjacent to the sensed position.
  • the air cleaning device 400 may generate a control command including set parameters (S220).
  • the generated control command may include at least one parameter related to air volume, wind direction, and/or driving.
  • FIG. 14 is an exemplary diagram related to the operation illustrated in FIG. 13.
  • the processor 480 may acquire third image data IMAGE3 through the camera 442.
  • the processor 480 may obtain the third image description data IMAGE_DESC3 from the third image data IMAGE3 through the feature vector extraction module 482 and the image description data acquisition module 486.
  • the third image description data IMAGE_DESC3 may correspond to “On the right of the picture, a cat is playing with the ball.”
  • the processor 480 may detect a subject (“a cat”) among words included in the acquired third image description data IMAGE_DESC3.
  • the processor 480 may detect a predicate among the words. For example, the processor 480 may recognize a word (“is playing”) corresponding to a predicate among the words, or recognize a word next to the subject based on an English sentence structure as a predicate.
  • the processor 480 may recognize that the recognized predicate (“is playing” or “playing”) is included in the second category. In this case, the processor 480 may set the second parameter PARA2 corresponding to the air volume 'strong'.
  • the processor 480 may set the first parameter PARA1 related to the wind direction so as to correspond to the right area of the image data IMAGE3, based on “On the right of the picture” among the words included in the third image description data IMAGE_DESC3.
  • the processor 480 may generate a third control command CMD3 including the set first parameter PARA1 and the second parameter PARA2.
  • the processor 480 may control the fan motor 462 based on the generated third control command CMD3 and control the rotation mechanism (or the driving unit 490) related to the wind direction, thereby performing an air cleaning operation corresponding to the set parameters.
  • FIG. 15 is a ladder diagram for explaining the operation of the air cleaning device and the artificial intelligence device according to an embodiment of the present invention.
  • Some of the operations of the air cleaning device 400 described above in FIGS. 6 to 14 may be performed by the artificial intelligence device 200a connected to the air cleaning device 400.
  • the feature vector extraction module 482 and the image description data acquisition module 486 shown in FIG. 5 may be included in the artificial intelligence device 200a instead of the air cleaning device 400.
  • the processing performance and speed of the artificial intelligence device 200a may be superior to the processing performance and speed of the air cleaning device 400. Therefore, when the operation of generating the control command of the air cleaning device 400 from the image data is performed by the artificial intelligence device 200a as shown in FIG. 15, the speed or accuracy of generating the control command may increase compared with the case in which the air cleaning device 400 performs the operation itself.
  • the air cleaning device 400 may acquire an image (image data) through a camera 442 (S500) and transmit the acquired image data to the artificial intelligence device 200a (S510).
  • the processor 260 of the artificial intelligence device 200a may extract at least one feature vector from the received image data (S520), obtain image description data using the extracted feature vector (S530), and generate a control command of the air cleaning device 400 based on the acquired image description data (S540).
  • Steps S520 to S540 are similar to steps S110 to S130 of FIG. 6, and thus a description thereof will be omitted.
  • the artificial intelligence device 200a may transmit the generated control command to the air cleaning device 400 (S550).
  • the air cleaning device 400 may control the air cleaning unit 460 and/or the driving unit 490 based on the received control command (S560).
  • the processor 480 may control the air cleaning unit 460 and/or the driving unit 490, based on at least one parameter related to the air volume, wind direction, and/or driving included in the received control command, to perform an air cleaning operation.
  • alternatively, when the received control command includes a parameter corresponding to power off, the processor 480 may turn off the power of the air cleaning device 400.
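The following is a minimal in-process sketch of the ladder of FIG. 15, showing only the division of labor: the air cleaning device acquires an image (S500) and transmits it (S510), the artificial intelligence device generates a control command (S520 to S540) and returns it (S550), and the air cleaning device applies it (S560). The stub functions, the fixed placeholder caption, and the dictionary-based command format are illustrative assumptions; no real network transport, camera, or model is included.

```python
# Sketch of offloading control-command generation to the artificial intelligence device.
from typing import Callable, Dict

def ai_device_generate_command(image_data: bytes) -> Dict[str, str]:
    """Stand-in for S520-S540 on the artificial intelligence device 200a."""
    caption = "A man is cooking a pot in the kitchen."   # placeholder caption
    if "cooking" in caption or "cleaning" in caption:
        return {"power": "off"}
    return {"power": "on", "air_volume": "weak"}

def air_cleaner_run_once(capture_image: Callable[[], bytes]) -> None:
    image_data = capture_image()                          # S500: acquire image data
    command = ai_device_generate_command(image_data)      # S510/S550: send and receive
    if command.get("power") == "off":                     # S560: apply the command
        print("air cleaning stopped")
    else:
        print(f"air cleaning with parameters: {command}")

air_cleaner_run_once(lambda: b"raw-image-bytes")          # prints "air cleaning stopped"
```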
  • since the air cleaning device 400 or the artificial intelligence device 200a controls the air cleaning operation based on image description data obtained from images around the air cleaning device 400, a more effective air cleaning function reflecting the surrounding situation can be provided.
  • in addition, the air cleaning device 400 or the artificial intelligence device 200a may recognize in advance, based on the behavior or state of a person sensed from the image, that a substance reducing the life or performance of the air cleaning device 400 is about to be released, and stop the driving of the air cleaning device 400, thereby minimizing the deterioration of its life or performance.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Chemical & Material Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Chemical Kinetics & Catalysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)

Abstract

Un appareil d'épuration d'air selon un mode de réalisation de la présente invention comprend: une caméra pour obtenir des données d'image; une unité de purification d'air comprenant un moteur de ventilateur et un module de filtre; et un processeur qui extrait au moins un vecteur de caractéristique à partir des données d'image, obtient des données de description d'image pour les données d'image sur la base du ou des vecteurs de caractéristiques extraits, génère une instruction de commande sur la base des données de description d'image obtenues, et commande l'unité d'épuration d'air sur la base de l'instruction de commande générée.
PCT/KR2019/008675 2019-07-12 2019-07-12 Appareil d'épuration d'air à intelligence artificielle, son procédé de commande, et appareil d'intelligence artificielle connecté à celui-ci WO2021010503A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/KR2019/008675 WO2021010503A1 (fr) 2019-07-12 2019-07-12 Appareil d'épuration d'air à intelligence artificielle, son procédé de commande, et appareil d'intelligence artificielle connecté à celui-ci

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2019/008675 WO2021010503A1 (fr) 2019-07-12 2019-07-12 Appareil d'épuration d'air à intelligence artificielle, son procédé de commande, et appareil d'intelligence artificielle connecté à celui-ci

Publications (1)

Publication Number Publication Date
WO2021010503A1 true WO2021010503A1 (fr) 2021-01-21

Family

ID=74211003

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/008675 WO2021010503A1 (fr) 2019-07-12 2019-07-12 Appareil d'épuration d'air à intelligence artificielle, son procédé de commande, et appareil d'intelligence artificielle connecté à celui-ci

Country Status (1)

Country Link
WO (1) WO2021010503A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180028198A (ko) * 2016-09-08 2018-03-16 연세대학교 산학협력단 실시간 영상을 이용하여 위험 상황을 예측하기 위한 영상 처리 방법, 장치 및 그를 이용하여 위험 상황을 예측하는 방법, 서버
KR20180049471A (ko) * 2016-11-02 2018-05-11 엘지전자 주식회사 공기청정기 및 그 제어방법
KR20180057029A (ko) * 2016-11-21 2018-05-30 현대자동차주식회사 차량의 주행차로 추정 장치 및 방법
KR20190026519A (ko) * 2017-09-05 2019-03-13 엘지전자 주식회사 인공지능 공기조화기의 동작 방법
KR20190035376A (ko) * 2017-09-26 2019-04-03 엘지전자 주식회사 인공지능을 이용한 이동 로봇 및 이동 로봇의 제어방법

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115826627A (zh) * 2023-02-21 2023-03-21 白杨时代(北京)科技有限公司 一种编队指令的确定方法、系统、设备及存储介质
CN117469755A (zh) * 2023-11-22 2024-01-30 苏州兴亚净化工程有限公司 一种超净化过滤装置、方法、系统及介质

Similar Documents

Publication Publication Date Title
WO2020246643A1 (fr) Robot de service et procédé de service au client mettant en œuvre ledit robot de service
WO2018139865A1 (fr) Robot mobile
WO2021006404A1 (fr) Serveur d'intelligence artificielle
WO2020256195A1 (fr) Robot de gestion d'immeuble et procédé pour fournir un service à l'aide dudit robot
WO2020241929A1 (fr) Robot de nettoyage
WO2021015308A1 (fr) Robot et procédé de reconnaissance de mot de déclenchement associé
WO2020241920A1 (fr) Dispositif d'intelligence artificielle pouvant commander un autre dispositif sur la base d'informations de dispositif
WO2021091030A1 (fr) Appareil de cuisson à intelligence artificielle
WO2021029457A1 (fr) Serveur d'intelligence artificielle et procédé permettant de fournir des informations à un utilisateur
WO2020184748A1 (fr) Dispositif d'intelligence artificielle et procédé de commande d'un système d'arrêt automatique sur la base d'informations de trafic
WO2020251074A1 (fr) Robot à intelligence artificielle destiné à fournir une fonction de reconnaissance vocale et procédé de fonctionnement associé
WO2021025217A1 (fr) Serveur d'intelligence artificielle
WO2020262746A1 (fr) Appareil à base d'intelligence artificielle pour recommander un parcours de linge, et son procédé de commande
WO2019031825A1 (fr) Dispositif électronique et procédé de fonctionnement associé
WO2020246640A1 (fr) Dispositif d'intelligence artificielle pour déterminer l'emplacement d'un utilisateur et procédé associé
WO2020262721A1 (fr) Système de commande pour commander une pluralité de robots par l'intelligence artificielle
WO2020246647A1 (fr) Dispositif d'intelligence artificielle permettant de gérer le fonctionnement d'un système d'intelligence artificielle, et son procédé
WO2021020621A1 (fr) Agent de déplacement à intelligence artificielle
WO2020145625A1 (fr) Dispositif d'intelligence artificielle et procédé de fonctionnement associé
WO2020184746A1 (fr) Appareil d'intelligence artificielle permettant de commander un système d'arrêt automatique sur la base d'informations de conduite, et son procédé
WO2021172642A1 (fr) Dispositif d'intelligence artificielle permettant de fournir une fonction de commande de dispositif sur la base d'un interfonctionnement entre des dispositifs et procédé associé
WO2021010503A1 (fr) Appareil d'épuration d'air à intelligence artificielle, son procédé de commande, et appareil d'intelligence artificielle connecté à celui-ci
WO2020251086A1 (fr) Appareil de manipulation de linge à intelligence artificielle
WO2021206221A1 (fr) Appareil à intelligence artificielle utilisant une pluralité de couches de sortie et procédé pour celui-ci
WO2021215547A1 (fr) Dispositif et procédé de maison intelligente

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19938028

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19938028

Country of ref document: EP

Kind code of ref document: A1