WO2020071854A1 - Electronic apparatus and control method thereof - Google Patents
- Publication number
- WO2020071854A1 (PCT/KR2019/013040)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- external electronic
- electronic apparatus
- learning data
- artificial intelligence
- voice
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/02—Knowledge representation; Symbolic representation
Definitions
- the disclosure relates to an electronic apparatus and a control method. More particularly, the disclosure relates to a method of training an artificial intelligence model of at least one external electronic apparatus among a plurality of external electronic apparatuses based on learning data stored in the plurality of external electronic apparatuses.
- An artificial intelligence (AI) system is a computer system implementing intelligence of a human level, and is a system wherein a machine learns, determines, and becomes smarter by itself, unlike rule-based smart systems.
- An artificial intelligence system shows a more improved recognition rate as it is used more, and becomes capable of understanding user preferences more correctly. For this reason, rule-based smart systems are gradually being replaced by deep learning-based artificial intelligence systems.
- AI technology consists of machine learning (deep learning) and element technologies utilizing machine learning.
- Machine learning refers to an algorithm technology of classifying/learning the characteristics of input data by itself.
- an element technology refers to a technology utilizing a machine learning algorithm such as deep learning, and includes fields of technologies such as linguistic understanding, visual understanding, inference/prediction, knowledge representation, and operation control.
- Linguistic understanding refers to a technology of recognizing languages/characters of humans, and applying/processing them, and includes natural speech processing, machine translation, communication systems, queries and answers, voice recognition/synthesis, and the like.
- Visual understanding refers to a technology of recognizing an object in a similar manner to human vision, and processing the object, and includes recognition of an object, tracking of an object, search of an image, recognition of humans, understanding of a scene, understanding of a space, improvement of an image, and the like.
- Inference/prediction refers to a technology of determining information and then making logical inference and prediction, and includes knowledge/probability based inference, optimization prediction, preference based planning, recommendation, and the like.
- Knowledge representation refers to a technology of automatically processing information of human experiences into knowledge data, and includes knowledge construction (data generation/classification), knowledge management (data utilization), and the like.
- Operation control refers to a technology of controlling autonomous driving of vehicles and movements of robots, and includes movement control (navigation, collision, driving), operation control (behavior control), and the like.
- a user may have various types of user devices.
- various user devices, for example, smartphones, artificial intelligence speakers, digital TVs, refrigerators, etc., may include AI models.
- a problem has existed wherein the amount of learning data collected by some user devices is insufficient for training an AI model, or the level of training varies for each user device, and accordingly, the same level of performance cannot be maintained even when the same user uses different user devices.
- an aspect of the disclosure is to address the aforementioned problems, and relates to a method of sharing learning data among a plurality of external electronic apparatuses.
- a control method of an electronic apparatus for achieving the aforementioned purpose includes receiving, from a first external electronic apparatus and a second external electronic apparatus, a first artificial intelligence model and a second artificial intelligence model used by the first and second external electronic apparatuses, respectively, and a plurality of learning data stored in the first and second external electronic apparatuses, identifying first learning data, which corresponds to second learning data received from the second external electronic apparatus, among learning data received from the first external electronic apparatus, training the second artificial intelligence model used by the second external electronic apparatus based on the first learning data, and transmitting the trained second artificial intelligence model to the second external electronic apparatus.
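- As a concrete reading of this flow, the following is a minimal, runnable Python sketch. The dict-based learning data, the nearest-neighbour matching rule, and the one-weight least-mean-squares trainer are illustrative assumptions, not the patent's concrete implementation.

```python
# Hedged sketch of the claimed control flow: receive data, identify
# corresponding learning data, train the second model, then transmit it.

def find_matching_data(candidates, reference, threshold=1.0):
    """Identify first learning data: candidates whose input value lies
    close to any input value in the reference (second) learning data."""
    return [c for c in candidates
            if any(abs(c["input"] - r["input"]) <= threshold for r in reference)]

def train(model, data):
    """Stand-in trainer: least-mean-squares update of a single weight."""
    for sample in data:
        pred = model["w"] * sample["input"]
        model["w"] += 0.01 * (sample["label"] - pred) * sample["input"]
    return model

# Learning data "received" from the first and second external apparatuses.
data_1 = [{"input": 1.0, "label": 2.0}, {"input": 5.0, "label": 10.0}]
data_2 = [{"input": 1.2, "label": 2.4}]
model_2 = {"w": 0.0}                      # second artificial intelligence model

first = find_matching_data(data_1, data_2)   # identify first learning data
model_2 = train(model_2, first)              # train the second model
# ...the trained model_2 would then be transmitted back to the
# second external electronic apparatus.
```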
- the control method may further include receiving first and second characteristic information of the first and second external electronic apparatuses, respectively, converting, based on the first characteristic information of the first external electronic apparatus and the second characteristic information of the second external electronic apparatus, the first learning data into third learning data for training the second artificial intelligence model used by the second external electronic apparatus, and training the second artificial intelligence model used by the second external electronic apparatus based on the third learning data.
- At least one of an input value or a label value included in the second learning data may be compared with an input value and a label value included in the learning data received from the first external electronic apparatus and the first learning data may be acquired.
- the second artificial intelligence model may be an artificial intelligence model for voice recognition, and the plurality of learning data may include voice data, a label value of the voice data, and user information corresponding to the voice data.
- At least one of second voice data, a second label value of the second voice data, or second user information corresponding to the second voice data included in the second learning data may be compared with at least one of first voice data, a first label value of the first voice data, or first user information corresponding to the first voice data included in the learning data received from the first external electronic apparatus, and the first learning data may be acquired.
- the first and second characteristic information may include at least one of characteristic information related to voice inputters included in the first and second external electronic apparatuses, characteristic information related to noises input through the voice inputters of the first and second external electronic apparatuses, or characteristic information related to distances between a location where a user voice was generated and locations of the first and second external electronic apparatuses.
- voice data included in the first learning data may be converted by using a frequency filter.
- the first and second artificial intelligence models used by the first and second external electronic apparatuses, respectively, the plurality of learning data stored in the first and second external electronic apparatuses, and first and second characteristic information of the first and second external electronic apparatuses, respectively, may be received.
- an electronic apparatus includes a memory, a communicator, and a processor configured to receive, via the communicator from a first external electronic apparatus and a second external electronic apparatus, a first artificial intelligence model and a second artificial intelligence model used by the first and second external electronic apparatuses, respectively, and a plurality of learning data stored in the first and second external electronic apparatuses, identify first learning data, which corresponds to second learning data received from the second external electronic apparatus, among learning data received from the first external electronic apparatus, train the second artificial intelligence model used by the second external electronic apparatus based on the first learning data, and transmit, via the communicator, the trained second artificial intelligence model to the second external electronic apparatus.
- the processor may receive, via the communicator, first and second characteristic information of the first and second external electronic apparatuses, respectively, convert, based on the first characteristic information of the first external electronic apparatus and the second characteristic information of the second external electronic apparatus, the first learning data into third learning data for training the second artificial intelligence model used by the second external electronic apparatus, and train the second artificial intelligence model used by the second external electronic apparatus based on the third learning data.
- the processor may compare at least one of an input value or a label value included in the second learning data with an input value and a label value included in the learning data received from the first external electronic apparatus and acquire the first learning data.
- the second artificial intelligence model may be an artificial intelligence model for voice recognition, and the plurality of learning data may include voice data, a label value of the voice data, and user information corresponding to the voice data.
- the processor may compare at least one of second voice data, a second label value of the second voice data, or second user information corresponding to the second voice data included in the second learning data with at least one of first voice data, a first label value of the first voice data, or first user information corresponding to the first voice data included in the learning data received from the first external electronic apparatus, and acquire the first learning data.
- the first and second characteristic information may include at least one of characteristic information related to voice inputters included in the first and second external electronic apparatuses, characteristic information related to noises input through the voice inputters of the first and second external electronic apparatuses, or characteristic information related to distances between a location where a user voice was generated and locations of the first and second external electronic apparatuses.
- the processor may convert voice data included in the first learning data by using a frequency filter.
- the processor may, based on a predetermined time condition, receive, via the communicator, the first and second artificial intelligence models used by the first and second external electronic apparatuses, respectively, the plurality of learning data stored in the first and second external electronic apparatuses, and first and second characteristic information of the first and second external electronic apparatuses, respectively.
- an electronic apparatus includes a memory, a communicator, and a processor configured to receive, via the communicator from external electronic apparatuses, a plurality of learning data stored in the external electronic apparatuses, identify first learning data corresponding to second learning data stored in the electronic apparatus among the plurality of learning data received from the external electronic apparatuses, and train an artificial intelligence model used by the electronic apparatus based on the identified first learning data.
- the processor may receive, via the communicator, characteristic information of the external electronic apparatuses, convert, based on the characteristic information of the external electronic apparatuses and characteristic information of the electronic apparatus, the first learning data into third learning data for training the artificial intelligence model used by the electronic apparatus, and train the artificial intelligence model used by the electronic apparatus based on the third learning data.
- the processor may compare at least one of an input value or a label value included in the second learning data with an input value and a label value included in the plurality of learning data received from the external electronic apparatuses and identify the first learning data.
- the characteristic information of the electronic apparatus and the external electronic apparatuses may include at least one of characteristic information related to voice inputters of the electronic apparatus and the external electronic apparatuses, characteristic information related to noises input through the voice inputters of the electronic apparatus and the external electronic apparatuses, or characteristic information related to distances between a location where a user voice was generated and locations of the electronic apparatus and the external electronic apparatuses.
- an electronic apparatus can provide learning data suitable for an external electronic apparatus wherein personalized learning data is insufficient, and train an artificial intelligence model by using the provided learning data.
- FIG. 1 is a diagram schematically illustrating an embodiment of the disclosure.
- FIG. 2 is a block diagram illustrating a schematic configuration of an electronic apparatus according to an embodiment of the disclosure.
- FIG. 3 is a block diagram illustrating a detailed configuration of an electronic apparatus according to an embodiment of the disclosure.
- FIG. 4 is a diagram illustrating a data sharing method according to an embodiment of the disclosure.
- FIG. 5 is a diagram illustrating a data sharing method according to an embodiment of the disclosure.
- FIG. 6 is a diagram illustrating a method of converting learning data according to an embodiment of the disclosure.
- FIG. 7 is a diagram illustrating a method of acquiring learning data according to an embodiment of the disclosure.
- FIG. 8 is a diagram illustrating a method of sharing learning data according to an embodiment of the disclosure.
- FIG. 9 is a diagram illustrating a method for a second external electronic apparatus to convert learning data according to an embodiment of the disclosure.
- FIG. 10 is a flow chart illustrating a control method of an electronic apparatus according to an embodiment of the disclosure.
- the expressions “A or B,” “at least one of A and/or B,” or “one or more of A and/or B” and the like may include all possible combinations of the listed items.
- “A or B,” “at least one of A and B,” or “at least one of A or B” refer to all of the following cases: (1) including at least one A, (2) including at least one B, or (3) including at least one A and at least one B.
- the description that one element (e.g., a first element) is “directly coupled” or “directly connected” to another element (e.g., a second element) can be interpreted to mean that still another element (e.g., a third element) does not exist between the one element and the other element.
- the expression “configured to” used in the disclosure may be interchangeably used with other expressions such as “suitable for,” “having the capacity to,” “designed to,” “adapted to,” “made to,” and “capable of,” depending on cases.
- the term “configured to” does not necessarily mean that a device is “specifically designed to” in terms of hardware. Instead, under some circumstances, the expression “a device configured to” may mean that the device “is capable of” performing an operation together with another device or component.
- a sub-processor configured to perform A, B and C may mean a dedicated processor (e.g., an embedded processor) for performing the corresponding operations, or a generic-purpose processor (e.g., a central processing unit (CPU) or an application processor) that can perform the corresponding operations by executing one or more software programs stored in a memory device.
- An electronic apparatus may include, at least one of, for example, a smartphone, a tablet personal computer (PC), a mobile phone, a video phone, an e-book reader, a desktop PC, a laptop PC, a netbook computer, a workstation, a server, a personal digital assistant (PDA), a portable multimedia player (PMP), a moving picture experts group phase 1 or phase 2 (MPEG-1 or MPEG-2) audio layer 3 (MP3) player, a medical instrument, a camera, or a wearable device.
- a wearable device may include at least one of an accessory-type device (e.g., a watch, a ring, a bracelet, an ankle bracelet, a necklace, glasses, a contact lens, or a head-mounted-device (HMD)), a device integrated with fabrics or clothing (e.g., electronic clothing), a body-attached device (e.g., a skin pad or a tattoo), or an implantable circuit.
- an electronic apparatus may include at least one of, for example, a television, a digital versatile disk (DVD) player, an audio, a refrigerator, an air conditioner, a cleaner, an oven, a microwave oven, a washing machine, an air cleaner, a set top box, a home automation control panel, a security control panel, a media box (e.g., Samsung HomeSync™, Apple TV™, or Google TV™), a game console (e.g., Xbox™, PlayStation™), an electronic dictionary, an electronic key, a camcorder, or an electronic photo frame.
- an electronic apparatus may include at least one of various types of medical instruments (e.g., various types of portable medical measurement instruments (a blood glucose meter, a heart rate meter, a blood pressure meter, or a thermometer, etc.), magnetic resonance angiography (MRA), magnetic resonance imaging (MRI), computed tomography (CT), a photographing device, an ultrasonic instrument, etc.), a navigation device, a global navigation satellite system (GNSS), an event data recorder (EDR), a flight data recorder (FDR), a vehicle infotainment device, an electronic device for vessels (e.g., a navigation device for vessels, a gyrocompass, etc.), avionics, a security device, a head unit for a vehicle, an industrial or a household robot, a drone, an automated teller machine (ATM) of a financial institution, a point of sales (POS) of a store, or an Internet of things (IoT) device.
- the term "user” may refer to a person who uses an electronic apparatus or an apparatus using an electronic apparatus (e.g., an artificial intelligence electronic apparatus).
- a first artificial intelligence model may mean an artificial intelligence model used by a first external electronic apparatus or an artificial intelligence model received from a first external electronic apparatus
- a second artificial intelligence model may mean an artificial intelligence model used by a second external electronic apparatus or an artificial intelligence model received from a second external electronic apparatus.
- first learning data may mean learning data stored in a first external electronic apparatus
- second learning data may mean learning data stored in a second external electronic apparatus.
- FIG. 1 is a diagram schematically illustrating an embodiment of the disclosure.
- a plurality of external electronic apparatuses may communicate with an electronic apparatus 100.
- the plurality of external electronic apparatuses may include artificial intelligence models, and may be apparatuses for providing various services by using artificial intelligence models.
- artificial intelligence models used by the plurality of external electronic apparatuses may vary according to the purpose of each of the plurality of external electronic apparatuses.
- artificial intelligence models may vary, such as an artificial intelligence model for voice recognition, an artificial intelligence model for image analysis, etc.
- the electronic apparatus 100 is an apparatus for training artificial intelligence models used by the plurality of external electronic apparatuses.
- the electronic apparatus 100 may store artificial intelligence models used by the plurality of external electronic apparatuses, and data necessary for training the artificial intelligence models. Specifically, the electronic apparatus 100 may receive the artificial intelligence model of each of the plurality of external electronic apparatuses from the plurality of external electronic apparatuses, and store the models.
- the artificial intelligence model of each of the plurality of external electronic apparatuses may be an artificial intelligence model trained based on learning data of each of the plurality of external electronic apparatuses.
- the disclosure is not limited thereto, and the electronic apparatus 100 can receive an artificial intelligence model used by each of the plurality of external electronic apparatuses from an external server, and store the model.
- the electronic apparatus 100 may receive learning data from each of the plurality of external electronic apparatuses, and store the data.
- the electronic apparatus 100 may classify artificial intelligence models and learning data received from each of the plurality of external electronic apparatuses by each external electronic apparatus, and store them.
- the electronic apparatus 100 may train the artificial intelligence model used by each of the plurality of external electronic apparatuses based on the learning data received from each of the plurality of external electronic apparatuses. For example, the electronic apparatus 100 may train the first artificial intelligence model used by the first external electronic apparatus 200-1 based on the learning data received from the first to third external electronic apparatuses 200-1 to 200-3. By the same method, the electronic apparatus 100 may train the second artificial intelligence model used by the second external electronic apparatus 200-2 based on the learning data received from the first to third external electronic apparatuses 200-1 to 200-3. Also, by the same method, the electronic apparatus 100 may train the third artificial intelligence model used by the third external electronic apparatus 200-3 based on the learning data received from the first to third external electronic apparatuses 200-1 to 200-3.
- the plurality of external electronic apparatuses may share learning data stored in each of the apparatuses and construct personalized artificial intelligence models.
- for example, assume that the first external electronic apparatus 200-1 is an apparatus storing a large amount of learning data.
- in case the learning data stored in the second external electronic apparatus 200-2 and the third external electronic apparatus 200-3 is small compared to that of the first external electronic apparatus 200-1, the second external electronic apparatus 200-2 and the third external electronic apparatus 200-3 can train their artificial intelligence models by using the learning data received from the first external electronic apparatus 200-1, thereby solving the problem of insufficient learning data.
- artificial intelligence models sharing learning data may be artificial intelligence models performing similar functions.
- all of the first to third artificial intelligence models may be artificial intelligence models related to voice recognition.
- all of the first to third artificial intelligence models may be artificial intelligence models for image analysis.
- the disclosure is not limited thereto, and in case there is a need to share learning data, learning data can be shared among the plurality of external electronic apparatuses even when they use artificial intelligence models different from one another.
- the electronic apparatus 100 can train artificial intelligence models of the plurality of external electronic apparatuses by using an external server storing various learning data.
- training artificial intelligence models by using all of the learning data of the external server has the disadvantage that a large amount of resources is needed.
- the electronic apparatus 100 may select learning data similar to the learning data included in the first to third external electronic apparatuses 200-1 to 200-3 among the vast amount of learning data stored by the external server, and train artificial intelligence models based on the selected learning data.
- FIG. 2 is a block diagram illustrating a schematic configuration of an electronic apparatus according to an embodiment of the disclosure.
- the electronic apparatus 100 may include a memory 110, a communicator 120, and a processor 130.
- the memory 110 may store instructions or data related to at least one other different component of the electronic apparatus 100.
- the memory 110 may be implemented as a nonvolatile memory, a volatile memory, a flash memory, a hard disk drive (HDD), or a solid state drive (SSD), etc.
- the memory 110 may be accessed by the processor 130, and reading/recording/correction/deletion/update, etc. of data by the processor 130 may be performed.
- the term memory may include the memory 110, the read-only memory (ROM) (not shown) and random access memory (RAM) (not shown) inside the processor 130, or a memory card (not shown) mounted on the electronic apparatus 100 (e.g., a micro secure digital (SD) card, a memory stick).
- the memory 110 may store the artificial intelligence models used by the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2, received from the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2, and the learning data stored in the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2. Further, the memory 110 may store characteristic information of the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2.
- characteristic information of the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2 may include at least one of characteristic information related to the voice inputters included in the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2, characteristic information related to noises input through the voice inputters of the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2, or characteristic information related to the distance between a location wherein a user voice was generated and the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2.
- the communicator 120 is a component for performing communication with another electronic apparatus.
- communicative connection of the communicator 120 with another electronic apparatus may include communication through a third apparatus (e.g., a repeater, a hub, an access point, a server, a gateway, etc.).
- Wireless communication may include cellular communication using at least one of long term evolution (LTE), LTE Advance (LTE-A), code division multiple access (CDMA), wideband CDMA (WCDMA), a universal mobile telecommunications system (UMTS), Wireless Broadband (WiBro), or a Global System for Mobile Communications (GSM).
- wireless communication may include, for example, at least one of Wi-Fi, Bluetooth, Bluetooth low energy (BLE), Zigbee, near field communication (NFC), Magnetic Secure Transmission, radio frequency (RF), or a body area network (BAN).
- Wired communication may include, for example, at least one of a universal serial bus (USB), a high definition multimedia interface (HDMI), a recommended standard 232 (RS-232), power line communication, or a plain old telephone service (POTS).
- Networks wherein wireless communication or wired communication is performed may include at least one of a telecommunication network, for example, a computer network (e.g., a local area network (LAN) or a wide area network (WAN)), the Internet, or a telephone network.
- the communicator 120 may receive artificial intelligence models used by the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2 from the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2, and learning data stored in the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2.
- the processor 130 may be electrically connected with the memory 110, and control the overall operations and functions of the electronic apparatus 100.
- the processor 130 may identify first learning data corresponding to second learning data received from the second external electronic apparatus 200-2 among a plurality of learning data received from the first external electronic apparatus 200-1, and train the artificial intelligence model used by the second external electronic apparatus 200-2 based on the identified first learning data.
- the first external electronic apparatus 200-1 may be an external server including a vast amount of learning data
- the second external electronic apparatus 200-2 may be a user terminal.
- the processor 130 may receive characteristic information of each of the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2 from the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2.
- the electronic apparatus 100 may convert the first learning data into third learning data for training the second artificial intelligence model based on the characteristic information of the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2.
- learning data may be data for a user voice.
- user voices received by the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2 may vary according to characteristic information of each external electronic apparatus.
- for example, in case the first external electronic apparatus 200-1 is a user terminal such as a smartphone and the second external electronic apparatus 200-2 is a refrigerator, the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2 may be different from each other in the types of microphones included inside the apparatuses, the number of microphones, etc., and the distances from the starting point of a user voice to the external electronic apparatuses may also be different. Accordingly, the noise, frequency characteristics, etc. of the received user voices may differ between the apparatuses.
- the processor 130 may convert the first learning data into third learning data for training the second artificial intelligence model based on the characteristic information of the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2.
- the processor 130 may compare at least one of an input value or a label value included in the second learning data with an input value and a label value included in each of the plurality of learning data received from the first external electronic apparatus 200-1 and acquire first learning data.
- the electronic apparatus 100 may identify learning data for training the second artificial intelligence model, among the learning data of the first external electronic apparatus 200-1.
- the electronic apparatus 100 may identify learning data similar to the input value and the label value of the second learning data from the plurality of learning data of the first external electronic apparatus 200-1.
- the first and second learning data may include user voice data, a label value of the voice data, and user information corresponding to the voice data.
- an input value may be data related to an acoustic model
- a label value may be data related to a language model
- user information may be user identification information.
- an input value may be data such as the waveform of a user voice, and may include data such as intonation and tone
- a label value may be text data wherein a user voice was converted into a text.
- the electronic apparatus 100 may identify whether the waveforms of input values are similar, or whether the distribution of label values expressed in a vector format exists within a predetermined distance.
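- The two checks just described can be sketched as follows, under assumed representations: input values as 1-D numpy waveforms compared by cosine similarity, and label values as embedding vectors whose Euclidean distance must fall within a predetermined distance. Both thresholds are illustrative.

```python
import numpy as np

def waveforms_similar(x, y, threshold=0.9):
    """Treat two input waveforms as similar when their cosine
    similarity (over the overlapping length) clears a threshold."""
    n = min(len(x), len(y))
    x, y = np.asarray(x[:n], dtype=float), np.asarray(y[:n], dtype=float)
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-9)
    return cos >= threshold

def labels_within_distance(v, w, max_dist=0.5):
    """Treat two label values (vector format) as corresponding when
    their Euclidean distance lies within a predetermined distance."""
    return np.linalg.norm(np.asarray(v) - np.asarray(w)) <= max_dist
```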
- the processor 130 may convert voice data included in the first learning data by using a frequency filter. Specifically, the processor 130 may acquire a frequency filter that can generate voice data appropriate for the second external electronic apparatus 200-2 based on characteristic information of the first external electronic apparatus 200-1 and characteristic information of the second external electronic apparatus 200-2, and acquire third learning data by using the acquired filter.
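- A sketch of such a frequency-filter conversion is shown below, assuming 1-D numpy audio at a fixed sample rate. The fixed band edges stand in for a filter that would actually be derived from the characteristic information of the two apparatuses.

```python
import numpy as np

def apply_frequency_filter(voice, sample_rate, low_hz=300.0, high_hz=3400.0):
    """Band-pass a voice signal in the frequency domain via FFT."""
    spectrum = np.fft.rfft(voice)
    freqs = np.fft.rfftfreq(len(voice), d=1.0 / sample_rate)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    spectrum[~band] = 0.0                 # zero the out-of-band components
    return np.fft.irfft(spectrum, n=len(voice))

# e.g. converted = apply_frequency_filter(first_voice_data, sample_rate=16000)
```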
- the processor 130 may perform the aforementioned operations in case a predetermined time condition is satisfied.
- the processor 130 may receive learning data from the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2 around dawn and train the first artificial intelligence model and the second artificial intelligence model.
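- A minimal sketch of such a predetermined time condition, assuming a simple polling loop; treating 4 a.m. as "around dawn" is an example, as the disclosure does not fix a concrete schedule.

```python
import datetime
import time

def run_daily_sync(sync_fn, hour=4):
    """Invoke sync_fn (receive learning data and retrain) once a day
    when the predetermined time condition is met."""
    while True:
        if datetime.datetime.now().hour == hour:
            sync_fn()
            time.sleep(3600)   # move past the window before polling again
        time.sleep(60)
```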
- the first artificial intelligence model and the second artificial intelligence model may be artificial intelligence models for understanding of natural languages.
- learning data may be information on the purport of a user voice and a slot included in a result of voice recognition acquired through a voice recognition model.
- FIG. 3 is a block diagram illustrating a detailed configuration of an electronic apparatus according to an embodiment of the disclosure.
- the electronic apparatus 100 may further include an inputter 140, a display 150, and an audio outputter 160 in addition to the memory 110, the communicator 120, and the processor 130.
- the components are not limited to the aforementioned components, and some components may be added or omitted depending on needs.
- the inputter 140 is a component for receiving input of a user instruction.
- the inputter 140 may include a camera 141, a microphone 142, a touch panel 143, etc.
- the camera 141 is a component for acquiring image data around the electronic apparatus 100.
- the camera 141 may photograph a still image and a moving image.
- the camera 141 may include one or more image sensors (e.g., a front surface sensor or a back surface sensor), a lens, an image signal processor (ISP), or a flash (e.g., a light emitting diode (LED), a xenon lamp, etc.).
- the microphone 142 is a component for acquiring sounds around the electronic apparatus 100.
- the microphone 142 may receive input of an acoustic signal outside, and generate electronic voice information. Also, the microphone 142 may use various noise removal algorithms for removing noises generated in the process of receiving input of an acoustic signal outside.
- the touch panel 143 is a component for receiving various user inputs. The touch panel 143 may receive input of data by a user manipulation. Also, the touch panel 143 may be constituted while being combined with the display that will be described below. Meanwhile, it is obvious that the inputter 140 may include various components for receiving input of various data in addition to the camera 141, the microphone 142, and the touch panel 143.
- the aforementioned various components of the inputter 140 may be used in various forms.
- the electronic apparatus 100 may input a user voice input through the microphone 142 into the first artificial intelligence model received from the first external electronic apparatus 200-1 and output a result value, and transmit the value to the second external electronic apparatus 200-2. That is, the electronic apparatus 100 may not only share learning data between the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2, but also acquire a result value for an input value of an artificial intelligence model not included in each external electronic apparatus instead of the external electronic apparatuses, and transmit the result value.
- the display 150 is a component for outputting various images.
- the display 150 for providing various images may be implemented as display panels in various forms.
- the display panel may be implemented as various display technologies such as a liquid crystal display (LCD), organic light emitting diodes (OLEDs), an active-matrix organic light-emitting diode (AM-OLED), liquid crystal on silicon (LcoS), digital light processing (DLP), etc.
- the display 150 may be combined with at least one of the front surface area, the side surface area, or the back surface area of the electronic apparatus 100, in the form of a flexible display.
- the display 150 may display various setting information for sharing learning data among external electronic apparatuses. That is, sharing of learning data among external electronic apparatuses may be performed automatically, but may also be performed by a user instruction, and the display 150 may output various information for receiving input of a user instruction or inquiring about a user instruction.
- the audio outputter 160 is a component outputting various kinds of notification sounds or voice messages as well as various types of audio data for which various processing operations such as decoding or amplification, noise filtering, etc. were performed by an audio processor.
- the audio processor is a component performing processing of audio data. At the audio processor, various processing such as decoding or amplification, noise filtering, etc. of audio data may be performed. Audio data processed at the audio processor may be output to the audio outputter 160.
- the audio outputter may be implemented as a speaker, but this is merely an example, and the audio outputter may be implemented as an output terminal that can output audio data.
- the processor 130 controls the overall operations of the electronic apparatus 100, as described above.
- the processor 130 may include a RAM 131, a ROM 132, a main CPU 134, a graphic processor 133, first to nth interfaces 135-1 to 135-n, and a bus 136.
- the RAM 131, the ROM 132, the main CPU 134, the graphic processor 133, and the first to nth interfaces 135-1 to 135-n may be connected with one another through the bus 136.
- the ROM 132 stores a set of instructions, etc. for system booting.
- the main CPU 134 copies an operating system (O/S) stored in the memory into the RAM 131 according to the instructions stored in the ROM 132, and boots the system by executing the O/S.
- the main CPU 134 copies various types of application programs stored in the memory into the RAM 131, and performs various types of operations by executing the application programs copied into the RAM 131.
- the main CPU 134 accesses the memory 110, and performs booting by using the O/S stored in the memory 110. Then, the main CPU 134 performs various operations by using various programs, contents, data, etc. stored in the memory 110.
- the first to nth interfaces 135-1 to 135-n are connected with the aforementioned various components.
- One of the interfaces may be a network interface connected to an external electronic apparatus through a network.
- FIGS. 4 and 5 are diagrams illustrating a data sharing method according to various embodiments of the disclosure.
- FIG. 4 is a diagram illustrating a method of selecting learning data necessary for training the second artificial intelligence model in case the first external electronic apparatus 200-1 is an external server storing a vast amount of various learning data.
- FIG. 5 is a diagram illustrating a method of selecting learning data necessary for training the second artificial intelligence model in case the first external electronic apparatus 200-1 is a personal terminal storing personalized learning data.
- the first external electronic apparatus 200-1 may include the first artificial intelligence model and the first database
- the second external electronic apparatus 200-2 may include the second artificial intelligence model and the second database.
- the first database and the second database may store a plurality of learning data.
- a selection module may acquire, from the plurality of learning data of the first database, first learning data similar to second learning data (including an input value D2 Input and a label value D2 Label) among the plurality of learning data of the second database. Specifically, as illustrated in FIG. 4, the selection module may compare at least one of the input value D2 Input or the label value D2 Label of the second learning data with at least one of the input value or the label value of each of the plurality of learning data of the first database, and select similar data.
- the selection module may train the second artificial intelligence model by using, as an input, the first learning data including an input value D1 Input and a label value D1 Label similar to those of the second learning data.
- the electronic apparatus 100 may train the second artificial intelligence model by acquiring learning data similar to each of the plurality of learning data of the second database from the first database.
- the electronic apparatus 100 may use all of the plurality of learning data of the first database as learning data for training the second artificial intelligence model. That is, unlike the case in FIG. 4, in case both of the learning data stored in the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2 are learning data received from a user, the electronic apparatus 100 may train the second artificial intelligence model by using all of the plurality of learning data of the first database.
- FIG. 6 is a diagram illustrating a method of converting learning data according to an embodiment of the disclosure.
- as described above with reference to FIG. 4, the selection module may acquire, from the plurality of learning data of the first database, first learning data similar to the second learning data (including an input value D2 Input and a label value D2 Label) among the plurality of learning data of the second database.
- the selection module may acquire the first learning data including the input value D1 Input and the label value D1 Label similar to the second learning data, and a conversion module may acquire third learning data (an input value D1' Input and a label value D1' Label) based on the input value D1 Input, the input value D2 Input, and the label value D1 Label. That is, the third learning data, which is learning data appropriate for the second external electronic apparatus 200-2, may be acquired by converting the first learning data.
- the D1 Input and the D2 Input may be data related to a user voice received by each external electronic apparatus.
- the electronic apparatus 100 may add data for noises around the first external electronic apparatus to the second learning data (specifically, the D2 Input), and acquire third learning data (specifically, the D1' Input).
- the electronic apparatus 100 may acquire data for the ambient noises of the first external electronic apparatus 200-1 according to the usage environment of the first external electronic apparatus 200-1, and add the acquired data for the ambient noises of the first external electronic apparatus 200-1 to the D2 Input in the time domain.
- the electronic apparatus 100 may filter the D2 Input in the frequency domain by using the frequency filter for the ambient noise environment of the first external electronic apparatus 200-1.
- the electronic apparatus 100 may acquire data for a non-voice section included in the D1 Input by using voice activity detection (VAD), and add the data for the non-voice section to the D2 Input in the time domain.
- VAD is a technology of dividing an input user utterance into a portion where a user voice is included and a portion where a user voice is not included (a mute portion or a non-voice portion).
- the electronic apparatus 100 may acquire third learning data by using the non-voice portion, which carries the noise environment information of the first external electronic apparatus 200-1.
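- The VAD-based conversion might look like the sketch below: a frame-energy rule (an assumed stand-in for a real VAD model) picks out a non-voice section, and that noise-only section is added to the other apparatus's voice data in the time domain.

```python
import numpy as np

def extract_noise_section(signal, frame=400, energy_ratio=0.1):
    """Return one frame judged to be non-voice by a simple energy gate."""
    energies = [np.mean(signal[i:i + frame] ** 2)
                for i in range(0, len(signal) - frame + 1, frame)]
    if not energies:
        return np.zeros(frame)
    gate = energy_ratio * max(energies)
    quiet = [i for i, e in enumerate(energies) if e < gate]
    start = (quiet[0] if quiet else 0) * frame
    return signal[start:start + frame]

def add_noise_in_time_domain(voice, noise):
    """Tile the noise section to the voice length and add sample-wise."""
    reps = int(np.ceil(len(voice) / len(noise)))
    return voice + np.tile(noise, reps)[:len(voice)]
```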
- the electronic apparatus 100 may acquire third learning data based on characteristic information of the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2.
- the electronic apparatus 100 may acquire third learning data in consideration of the difference between the specification information of the first external electronic apparatus 200-1 and the specification information of the second external electronic apparatus 200-2.
- the electronic apparatus 100 may acquire third learning data by using the difference between the noise input into the first external electronic apparatus 200-1 and the noise input into the second external electronic apparatus 200-2.
- the electronic apparatus 100 may acquire third learning data based on the distance information between the location of the first external electronic apparatus 200-1 and a location wherein a user voice was generated and the distance information between the location of the second external electronic apparatus 200-2 and a location wherein a user voice was generated.
- a case can be assumed wherein a user voice is input into the first external electronic apparatus 200-1 from a short distance, and a user voice is input into the second external electronic apparatus 200-2 from a far distance.
- in consideration of the aforementioned matter, the electronic apparatus 100 may convert the third learning data based on the distance information between the location of the first external electronic apparatus 200-1 and the location wherein the user voice was generated, and the distance information between the location of the second external electronic apparatus 200-2 and the location wherein the user voice was generated.
- the electronic apparatus 100 may use a frequency filter.
- the electronic apparatus 100 may convert all of the plurality of learning data of the first database to suit the characteristic information of the second external electronic apparatus 200-2, as described in FIG. 5. That is, in FIG. 6, an embodiment where data similar to the second learning data is acquired, and the acquired learning data is changed was described, but the disclosure is not limited thereto, and the electronic apparatus 100 can convert all learning data of the first database, without a process of acquiring learning data similar to the second learning data.
- FIG. 7 is a diagram illustrating a method of acquiring learning data according to an embodiment of the disclosure.
- the first external electronic apparatus 200-1 is an electronic apparatus which has received learning data for users and has been trained for a long time, and may include an artificial intelligence model which has performed a large amount of learning on learning data related to a plurality of users.
- the second external electronic apparatus 200-2 is, for example, an electronic apparatus that a user newly purchased, and may include an artificial intelligence model which has a small amount of learning data for the user, and which is not personalized.
- the electronic apparatus 100 may input the D2 Input received from the second external electronic apparatus 200-2 into the first artificial intelligence model as an input value, and output a result value.
- the electronic apparatus 100 may output a label value D1 Label for the D2 Input and a probability value Prob regarding whether the D1 Label is a label value for the D2 Input.
- the selection module may train the second artificial intelligence model based on fourth learning data including the D2 Input and the D1 Label.
- the electronic apparatus 100 may acquire fourth learning data for the input value (D2 Input) of the second learning data by using the first artificial intelligence model, which has been trained extensively.
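- A sketch of this pseudo-labeling step: the D2 Input is run through the first, extensively trained model, and the predicted label (D1 Label) is kept only when the accompanying probability value clears a threshold. The scikit-learn-style predict_proba interface is an assumption about the model object.

```python
def make_fourth_learning_data(model_1, d2_inputs, min_prob=0.8):
    """Label second-apparatus inputs with the first model, keeping only
    confident predictions as fourth learning data."""
    fourth = []
    for x in d2_inputs:
        probs = model_1.predict_proba([x])[0]   # probability per label
        label = int(probs.argmax())             # candidate D1 Label
        if probs[label] >= min_prob:            # Prob check
            fourth.append({"input": x, "label": label})
    return fourth
```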
- FIG. 8 is a diagram illustrating a method of sharing learning data according to an embodiment of the disclosure.
- FIG. 9 is a diagram illustrating a method for a second external electronic apparatus to convert learning data according to an embodiment of the disclosure.
- the electronic apparatus 100 may train the second artificial intelligence model by using the first learning data having similar characteristics, and train the first artificial intelligence model by using the second learning data.
- the electronic apparatus 100 may acquire the first learning data, and acquire the second learning data having an identical or similar label to that of the first learning data from the plurality of learning data of the second database.
- the electronic apparatus 100 can acquire the second learning data, and acquire the first learning data having an identical or similar label to that of the second learning data from the plurality of learning data of the first database.
- a label in an artificial intelligence model for understanding of a language may be a label related to the intent and entity of a user voice for a result of voice recognition.
- the electronic apparatus 100 receives an artificial intelligence model and learning data of each of a plurality of external electronic apparatuses from the plurality of external electronic apparatuses and acquires learning data appropriate for another external electronic apparatus.
- the disclosure is not limited to the aforementioned embodiment, and the aforementioned functions and operations of the electronic apparatus 100 can be used in each of a plurality of external electronic apparatuses.
- the aforementioned functions and operations of the electronic apparatus 100 may be performed by the first external electronic apparatus 200-1 or the second external electronic apparatus 200-2.
- the second external electronic apparatus 200-2 may receive a plurality of learning data stored in the first external electronic apparatus 200-1, and convert the received learning data into learning data appropriate for the artificial intelligence model of the second external electronic apparatus 200-2.
- the second external electronic apparatus 200-2 may receive a plurality of learning data stored in the first external electronic apparatus 200-1, and identify first learning data corresponding to the second learning data stored in the second external electronic apparatus 200-2 among the plurality of learning data received from the first external electronic apparatus 200-1.
- the second external electronic apparatus 200-2 may compare at least one of an input value or a label value included in the second learning data with an input value and a label value included in each of the plurality of learning data received from the first external electronic apparatus 200-1, and identify first learning data.
- the second external electronic apparatus 200-2 may train the second artificial intelligence model used by the second external electronic apparatus 200-2 based on the identified first learning data.
- the second external electronic apparatus 200-2 may receive characteristic information of the first external electronic apparatus 200-1, and convert the first learning data into third learning data based on the received characteristic information.
- characteristic information may include at least one of characteristic information related to voice inputters included in the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2, characteristic information related to noises input through the voice inputters of the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2, or characteristic information related to the distance between a location wherein a user voice was generated and the first external electronic apparatus 200-1, and the distance between a location wherein a user voice was generated and the second external electronic apparatus 200-2.
- FIG. 10 is a flow chart illustrating a control method of an electronic apparatus according to an embodiment of the disclosure.
- the electronic apparatus 100 may receive an artificial intelligence model used by each of the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2 and a plurality of learning data stored in each of the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2 from the first external electronic apparatus and the second external electronic apparatus, at operation S1110. Further, the electronic apparatus 100 can receive characteristic information of the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2 and store the information.
- the electronic apparatus 100 may identify first learning data corresponding to the second learning data received from the second external electronic apparatus 200-2 among the plurality of learning data received from the first external electronic apparatus 200-1, at operation S1120. Specifically, the electronic apparatus 100 may compare the input value and the label value of the first learning data with the input value and the label value of the second learning data and identify first learning data.
- the electronic apparatus 100 may train the artificial intelligence model used by the second external electronic apparatus 200-2 based on the acquired first learning data, at operation S1130.
- the electronic apparatus 100 can train the second artificial intelligence model based on third learning data which is learning data converted from the first learning data to a form appropriate for the second external electronic apparatus 200-2.
- the electronic apparatus 100 may transmit the trained artificial intelligence model to the second external electronic apparatus 200-2, at operation S1140.
- the second external electronic apparatus 200-2 may output a personalized result based on the second artificial intelligence model trained based on various personalized learning data.
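Read end to end, operations S1110 to S1140 form a simple pipeline. The Python sketch below restates that flow under stated assumptions: `server`, `first_dev`, and `second_dev` are hypothetical objects, and every method name is invented for illustration rather than taken from the disclosure.

```python
def share_and_train(server, first_dev, second_dev):
    """Hypothetical orchestration of operations S1110 to S1140 on the
    electronic apparatus 100; all method names are assumptions."""
    # S1110: receive the models, the stored learning data, and characteristic information.
    model_1, data_1, char_1 = first_dev.upload()
    model_2, data_2, char_2 = second_dev.upload()

    # S1120: identify first learning data corresponding to the second learning data.
    first_learning_data = server.identify_corresponding(data_1, data_2)

    # Optionally convert it into third learning data suited to the second apparatus.
    third_learning_data = server.convert(first_learning_data, char_1, char_2)

    # S1130: train the second artificial intelligence model on the converted data.
    trained_model_2 = server.train(model_2, third_learning_data)

    # S1140: transmit the trained model back to the second external apparatus.
    second_dev.receive_model(trained_model_2)
```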
- The term "part" or "module" used in the disclosure includes a unit consisting of hardware, software, or firmware, and may be used interchangeably with terms such as logic, a logical block, a component, or a circuit.
- A "part" or "module" may be a component constituted as an integrated body, a minimum unit performing one or more functions, or a portion thereof.
- For example, a module may be implemented as an application-specific integrated circuit (ASIC).
- the various embodiments of the disclosure may be implemented as software including instructions stored in machine-readable storage media, which can be read by machines (e.g., computers).
- Here, machines refer to apparatuses that call instructions stored in a storage medium and can operate according to the called instructions, and may include an electronic apparatus according to the aforementioned embodiments (e.g., the electronic apparatus 100).
- In case an instruction is executed by a processor, the processor may perform a function corresponding to the instruction by itself, or by using other components under its control.
- An instruction may include a code that is generated by a compiler or a code that can be executed by an interpreter.
- a storage medium that is readable by machines may be provided in the form of a non-transitory storage medium.
- the term 'non-transitory' only means that a storage medium does not include signals, and is tangible, but does not indicate whether data is stored in the storage medium semi-permanently or temporarily.
- A computer program product refers to a product that can be traded between a seller and a buyer.
- A computer program product can be distributed on-line in the form of a storage medium that is readable by machines (e.g., a compact disc read only memory (CD-ROM)), or through an application store (e.g., Play Store™).
- In the case of on-line distribution, at least a portion of the computer program product may be stored at least temporarily in a storage medium such as the server of the manufacturer, the server of the application store, or the memory of the relay server, or may be generated temporarily.
- Each of the components according to the aforementioned various embodiments may consist of a single object or a plurality of objects. Also, some of the aforementioned sub-components may be omitted, or other sub-components may be further included in the various embodiments. Alternatively or additionally, some components (e.g., a module or a program) may be integrated into one object, and may perform the functions performed by each of the components before integration identically or in a similar manner. Operations performed by a module, a program, or other components according to the various embodiments may be executed sequentially, in parallel, repetitively, or heuristically. Alternatively, at least some of the operations may be executed in a different order or omitted, or other operations may be added.
Abstract
An electronic apparatus and a control method thereof are provided. The control method of the electronic apparatus includes receiving, from a first external electronic apparatus and a second external electronic apparatus, a first artificial intelligence model and a second artificial intelligence model used by the first and second external electronic apparatuses, respectively, and a plurality of learning data stored in the first and second external electronic apparatuses, identifying first learning data, which corresponds to second learning data received from the second external electronic apparatus, among learning data received from the first external electronic apparatus, training the second artificial intelligence model used by the second external electronic apparatus based on the first learning data, and transmitting the trained second artificial intelligence model to the second external electronic apparatus.
Description
The disclosure relates to an electronic apparatus and a control method. More particularly, the disclosure relates to a method of training an artificial intelligence model of at least one external electronic apparatus among a plurality of external electronic apparatuses based on learning data stored in the plurality of external electronic apparatuses.
Also, the disclosure relates to an artificial intelligence (AI) system simulating functions of a human brain such as cognition and determination by utilizing a machine learning algorithm, and applications thereof.
An artificial intelligence (AI) system is a computer system implementing intelligence of a human level, and is a system wherein a machine learns, determines, and becomes smarter by itself, unlike rule-based smart systems. An artificial intelligence system shows a more improved recognition rate as it is used more, and becomes capable of understanding user preferences more correctly. For this reason, rule-based smart systems are gradually being replaced by deep learning-based artificial intelligence systems.
AI technology consists of machine learning (deep learning) and element technologies utilizing machine learning.
Machine learning refers to an algorithm technology of classifying/learning the characteristics of input data by itself. Meanwhile, an element technology refers to a technology utilizing a machine learning algorithm such as deep learning, and includes fields of technologies such as linguistic understanding, visual understanding, inference/prediction, knowledge representation, and operation control.
Examples of various fields to which artificial intelligence technologies are applied are as follows. Linguistic understanding refers to a technology of recognizing languages/characters of humans, and applying/processing them, and includes natural speech processing, machine translation, communication systems, queries and answers, voice recognition/synthesis, and the like. Visual understanding refers to a technology of recognizing an object in a similar manner to human vision, and processing the object, and includes recognition of an object, tracking of an object, search of an image, recognition of humans, understanding of a scene, understanding of a space, improvement of an image, and the like. Inference/prediction refers to a technology of determining information and then making logical inference and prediction, and includes knowledge/probability based inference, optimization prediction, preference based planning, recommendation, and the like. Knowledge representation refers to a technology of automatically processing information of human experiences into knowledge data, and includes knowledge construction (data generation/classification), knowledge management (data utilization), and the like. Operation control refers to a technology of controlling autonomous driving of vehicles and movements of robots, and includes movement control (navigation, collision, driving), operation control (behavior control), and the like.
Meanwhile, recently, various methods for using an AI model on a device with limited resources, such as a user device, are being discussed. Further, a method for constructing a personalized AI model by training an AI model with data stored in a user device as learning data is also being discussed.
Recently, users own various types of user devices. Various user devices, for example, smartphones, artificial intelligence speakers, digital TVs, refrigerators, etc., may include AI models. However, a problem has existed in that the amount of learning data collected by some user devices is insufficient for training an AI model, or the level of training varies for each user device, and accordingly, even if the same user uses the devices, the same level of performance cannot be maintained.
The above information is presented as background information only to assist with an understanding of the disclosure. No determination has been made, and no assertion is made, as to whether any of the above might be applicable as prior art with regard to the disclosure.
Aspects of the disclosure are to address at least the above-mentioned problems and/or disadvantages and to provide at least the advantages described below. Accordingly, an aspect of the disclosure is to address the aforementioned problems, and relates to a method of sharing learning data among a plurality of external electronic apparatuses.
Additional aspects will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of the presented embodiments.
In accordance with an aspect of the disclosure, a control method of an electronic apparatus for achieving the aforementioned purpose is provided. The control method includes receiving, from a first external electronic apparatus and a second external electronic apparatus, a first artificial intelligence model and a second artificial intelligence model used by the first and second external electronic apparatuses, respectively, and a plurality of learning data stored in the first and second external electronic apparatuses, identifying first learning data, which corresponds to second learning data received from the second external electronic apparatus, among learning data received from the first external electronic apparatus, training the second artificial intelligence model used by the second external electronic apparatus based on the first learning data, and transmitting the trained second artificial intelligence model to the second external electronic apparatus.
The control method may further include receiving first and second characteristic information of the first and second external electronic apparatuses, respectively, based on the first characteristic information of the first external electronic apparatus and the second characteristic information of the second external electronic apparatus, converting the first learning data into third learning data to train the second artificial intelligence model used by the second external electronic apparatus, and training the second artificial intelligence model used by the second external electronic apparatus based on the third learning data.
Also, in the identifying, at least one of an input value or a label value included in the second learning data may be compared with an input value and a label value included in the learning data received from the first external electronic apparatus and the first learning data may be acquired.
The second artificial intelligence model may be an artificial intelligence model for voice recognition, and the plurality of learning data may include voice data, a label value of the voice data, and user information corresponding to the voice data.
Also, in the identifying, at least one of second voice data, a second label value of the second voice data, or second user information corresponding to the second voice data included in the second learning data may be compared with at least one of first voice data, a first label value of the first voice data, and first user information corresponding to the first voice data included in the learning data received from the first external electronic apparatus and the first learning data may be acquired.
The first and second characteristic information may include at least one of characteristic information related to voice inputters included in the first and second external electronic apparatuses, characteristic information related to noises input through the voice inputters of the first and second external electronic apparatuses, or characteristic information related to distances between a location where a user voice was generated and locations of the first and second external electronic apparatuses.
Meanwhile, in the converting, voice data included in the first learning data may be converted by using a frequency filter.
Also, in the receiving, based on a predetermined time condition, the first and second artificial intelligence models used by the first and second external electronic apparatuses, respectively, the plurality of learning data stored in the first and second external electronic apparatuses, and first and second characteristic information of the first and second external electronic apparatuses, respectively, may be received.
Meanwhile, an electronic apparatus according to an embodiment of the disclosure includes a memory, a communicator, and a processor configured to receive, via the communicator from a first external electronic apparatus and a second external electronic apparatus, a first artificial intelligence model and a second artificial intelligence model used by the first and second external electronic apparatuses, respectively, and a plurality of learning data stored in the first and second external electronic apparatuses, identify first learning data, which corresponds to second learning data received from the second external electronic apparatus, among learning data received from the first external electronic apparatus, train the second artificial intelligence model used by the second external electronic apparatus based on the first learning data, and transmit, via the communicator, the trained second artificial intelligence model to the second external electronic apparatus.
Here, the processor may receive, via the communicator, first and second characteristic information of the first and second external electronic apparatuses, respectively, based on the first characteristic information of the first external electronic apparatus and the second characteristic information of the second external electronic apparatus, convert the first learning data into third learning data to train the second artificial intelligence model used by the second external electronic apparatus, and train the second artificial intelligence model used by the second external electronic apparatus based on the third learning data.
Also, the processor may compare at least one of an input value or a label value included in the second learning data with an input value and a label value included in the learning data received from the first external electronic apparatus and acquire the first learning data.
The second artificial intelligence model may be an artificial intelligence model for voice recognition, and the plurality of learning data may include voice data, a label value of the voice data, and user information corresponding to the voice data.
Also, the processor may compare at least one of second voice data, a second label value of the second voice data, or second user information corresponding to the second voice data included in the second learning data with at least one of first voice data, a first label value of the first voice data, and first user information corresponding to the first voice data included in the learning data received from the first external electronic apparatus and acquire the first learning data.
The first and second characteristic information may include at least one of characteristic information related to voice inputters included in the first and second external electronic apparatuses, characteristic information related to noises input through the voice inputters of the first and second external electronic apparatuses, or characteristic information related to distances between a location where a user voice was generated and locations of the first and second external electronic apparatuses.
Also, the processor may convert voice data included in the first learning data by using a frequency filter.
The processor may, based on a predetermined time condition, receive, via the communicator, the first and second artificial intelligence models used by the first and second external electronic apparatuses, respectively, the plurality of learning data stored in the first and second external electronic apparatuses, and first and second characteristic information of the first and second external electronic apparatuses, respectively.
Meanwhile, an electronic apparatus according to an embodiment of the disclosure includes a memory, a communicator, and a processor configured to receive, via the communicator from external electronic apparatuses, a plurality of learning data stored in the external electronic apparatuses, identify first learning data corresponding to second learning data stored in the electronic apparatus among the plurality of learning data received from the external electronic apparatuses, and train an artificial intelligence model used by the electronic apparatus based on the identified first learning data.
The processor may receive, via the communicator, characteristic information of the external electronic apparatuses, based on the characteristic information of the external electronic apparatuses and characteristic information of the electronic apparatus, convert the first learning data into third learning data to train the artificial intelligence model used by the electronic apparatus, and train the artificial intelligence model used by the electronic apparatus based on the third learning data.
Also, the processor may compare at least one of an input value or a label value included in the second learning data with an input value and a label value included in the plurality of learning data received from the external electronic apparatuses and identify the first learning data.
The characteristic information of the electronic apparatus and the external electronic apparatuses may include at least one of characteristic information related to voice inputters of the electronic apparatus and the external electronic apparatuses, characteristic information related to noises input through the voice inputters of the electronic apparatus and the external electronic apparatuses, or characteristic information related to distances between a location where a user voice was generated and locations of the electronic apparatus and the external electronic apparatuses.
According to the aforementioned various embodiments of the disclosure, an electronic apparatus can provide learning data suitable for an external electronic apparatus wherein personalized learning data is insufficient, and train an artificial intelligence model by using the provided learning data.
Other aspects, advantages, and salient features of the disclosure will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, discloses various embodiments of the disclosure.
The above and other aspects, features, and advantages of certain embodiments of the disclosure will be more apparent from the following description taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a diagram schematically illustrating an embodiment of the disclosure;
FIG. 2 is a block diagram illustrating a schematic configuration of an electronic apparatus according to an embodiment of the disclosure;
FIG. 3 is a block diagram illustrating a detailed configuration of an electronic apparatus according to an embodiment of the disclosure;
FIG. 4 is a diagram illustrating a data sharing method according to an embodiment of the disclosure;
FIG. 5 is a diagram illustrating a data sharing method according to an embodiment of the disclosure;
FIG. 6 is a diagram illustrating a method of converting learning data according to an embodiment of the disclosure;
FIG. 7 is a diagram illustrating a method of acquiring learning data according to an embodiment of the disclosure;
FIG. 8 is a diagram illustrating a method of sharing learning data according to an embodiment of the disclosure;
FIG. 9 is a diagram illustrating a method for a second external electronic apparatus to convert learning data according to an embodiment of the disclosure; and
FIG. 10 is a flow chart illustrating a control method of an electronic apparatus according to an embodiment of the disclosure.
Throughout the drawings, like reference numerals will be understood to refer to like parts, components, and structures.
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of various embodiments of the disclosure as defined by the claims and their equivalents. It includes various specific details to assist in that understanding, but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the various embodiments described herein can be made without departing from the scope and spirit of the disclosure. In addition, descriptions of well-known functions and constructions may be omitted for clarity and conciseness.
The terms and words used in the following description and claims are not limited to the bibliographical meanings, but, are merely used by the inventor to enable a clear and consistent understanding of the disclosure. Accordingly, it should be apparent to those skilled in the art that the following description of various embodiments of the disclosure is provided for illustration purpose only and not for the purpose of limiting the disclosure as defined by the appended claims and their equivalents.
It is to be understood that the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to "a component surface" includes reference to one or more of such surfaces.
In the disclosure, expressions such as "have," "may have," "include," and "may include" should be construed as denoting that there are such characteristics (e.g., elements such as numerical values, functions, operations and components), and the terms are not intended to exclude the existence of additional characteristics.
Also, in the disclosure, the expressions "A or B," "at least one of A and/or B," or "one or more of A and/or B" and the like may include all possible combinations of the listed items. For example, "A or B," "at least one of A and B," or "at least one of A or B" refer to all of the following cases: (1) including at least one A, (2) including at least one B, or (3) including at least one A and at least one B.
Further, the expressions "first," "second," and the like used in the disclosure may be used to describe various elements regardless of any order and/or degree of importance. Also, such expressions are used only to distinguish one element from another element, and are not intended to limit the elements.
Also, the description in the disclosure that one element (e.g., a first element) is "(operatively or communicatively) coupled with/to" or "connected to" another element (e.g., a second element) should be interpreted to mean that the one element may be directly coupled to the another element, or the one element may be coupled to the another element through another element (e.g., a third element). In contrast, the description that one element (e.g., a first element) is "directly coupled" or "directly connected" to another element (e.g., a second element) can be interpreted to mean that another further element (e.g., a third element) does not exist between the one element and other element.
In addition, the expression "configured to" used in the disclosure may be interchangeably used with other expressions such as "suitable for," "having the capacity to," "designed to," "adapted to," "made to," and "capable of," depending on cases. Meanwhile, the term "configured to" does not necessarily mean that a device is "specifically designed to" in terms of hardware. Instead, under some circumstances, the expression "a device configured to" may mean that the device "is capable of" performing an operation together with another device or component. For example, the phrase "a sub-processor configured to perform A, B and C" may mean a dedicated processor (e.g., an embedded processor) for performing the corresponding operations, or a generic-purpose processor (e.g., a central processing unit (CPU) or an application processor) that can perform the corresponding operations by executing one or more software programs stored in a memory device.
An electronic apparatus according to various embodiments of the disclosure may include at least one of, for example, a smartphone, a tablet personal computer (PC), a mobile phone, a video phone, an e-book reader, a desktop PC, a laptop PC, a netbook computer, a workstation, a server, a personal digital assistant (PDA), a portable multimedia player (PMP), a moving picture experts group phase 1 or phase 2 (MPEG-1 or MPEG-2) audio layer 3 (MP3) player, a medical instrument, a camera, or a wearable device. Meanwhile, a wearable device may include at least one of an accessory-type device (e.g., a watch, a ring, a bracelet, an ankle bracelet, a necklace, glasses, a contact lens, or a head-mounted-device (HMD)), a device integrated with fabrics or clothing (e.g., electronic clothing), a body-attached device (e.g., a skin pad or a tattoo), or an implantable circuit. Also, in some embodiments, an electronic apparatus according to various embodiments of the disclosure may include at least one of, for example, a television, a digital versatile disk (DVD) player, an audio system, a refrigerator, an air conditioner, a cleaner, an oven, a microwave oven, a washing machine, an air cleaner, a set top box, a home automation control panel, a security control panel, a media box (e.g., Samsung HomeSync™, Apple TV™, or Google TV™), a game console (e.g., Xbox™, PlayStation™), an electronic dictionary, an electronic key, a camcorder, or an electronic photo frame.
According to another embodiment of the disclosure, an electronic apparatus may include at least one of various types of medical instruments (e.g., various types of portable medical measurement instruments (a blood glucose meter, a heart rate meter, a blood pressure meter, or a thermometer, etc.), magnetic resonance angiography (MRA), magnetic resonance imaging (MRI), computed tomography (CT), a photographing device, an ultrasonic instrument, etc.), a navigation device, a global navigation satellite system (GNSS), an event data recorder (EDR), a flight data recorder (FDR), a vehicle infotainment device, an electronic device for vessels (e.g., a navigation device for vessels, a gyrocompass, etc.), avionics, a security device, a head unit for a vehicle, an industrial or a household robot, a drone, an automated teller machine (ATM) of a financial institution, a point of sales (POS) of a store, or an Internet of things (IoT) device (e.g., a light bulb, various types of sensors, a sprinkler device, a fire alarm, a thermostat, a street light, a toaster, exercise equipment, a hot water tank, a heater, a boiler, etc.).
Also, in the disclosure, the term "user" may refer to a person who uses an electronic apparatus or an apparatus using an electronic apparatus (e.g., an artificial intelligence electronic apparatus).
Meanwhile, in the disclosure, a first artificial intelligence model may mean an artificial intelligence model used by a first external electronic apparatus or an artificial intelligence model received from a first external electronic apparatus, and a second artificial intelligence model may mean an artificial intelligence model used by a second external electronic apparatus or an artificial intelligence model received from a second external electronic apparatus.
Meanwhile, in the disclosure, first learning data may mean learning data stored in a first external electronic apparatus, and second learning data may mean learning data stored in a second external electronic apparatus.
Hereinafter, the disclosure will be described in more detail with reference to the accompanying drawings.
FIG. 1 is a diagram schematically illustrating an embodiment of the disclosure.
Referring to FIG. 1, a plurality of external electronic apparatuses (e.g., a first external electronic apparatus 200-1, a second external electronic apparatus 200-2, and a third external electronic apparatus 200-3) may communicate with an electronic apparatus 100. The plurality of external electronic apparatuses may include artificial intelligence models, and may be apparatuses for providing various services by using artificial intelligence models. Here, artificial intelligence models used by the plurality of external electronic apparatuses may vary according to the purpose of each of the plurality of external electronic apparatuses. For example, an artificial intelligence model may vary such as an artificial intelligence model for voice recognition, an artificial intelligence model for image analysis, etc. The electronic apparatus 100 is an apparatus for training artificial intelligence models used by the plurality of external electronic apparatuses.
Specifically, the electronic apparatus 100 may store artificial intelligence models used by the plurality of external electronic apparatuses, and data necessary for training the artificial intelligence models. Specifically, the electronic apparatus 100 may receive the artificial intelligence model of each of the plurality of external electronic apparatuses from the plurality of external electronic apparatuses, and store the models. Here, the artificial intelligence model of each of the plurality of external electronic apparatuses may be an artificial intelligence model trained based on learning data of each of the plurality of external electronic apparatuses. However, the disclosure is not limited thereto, and the electronic apparatus 100 can receive an artificial intelligence model used by each of the plurality of external electronic apparatuses from an external server, and store the model.
Meanwhile, the electronic apparatus 100 may receive learning data from each of the plurality of external electronic apparatuses, and store the data. Here, the electronic apparatus 100 may classify artificial intelligence models and learning data received from each of the plurality of external electronic apparatuses by each external electronic apparatus, and store them.
The electronic apparatus 100 may train the artificial intelligence model used by each of the plurality of external electronic apparatuses based on the learning data received from each of the plurality of external electronic apparatuses. For example, the electronic apparatus 100 may train the first artificial intelligence model used by the first external electronic apparatus 200-1 based on the learning data received from the first to third external electronic apparatuses 200-1 to 200-3. By the same method, the electronic apparatus 100 may train the second artificial intelligence model used by the second external electronic apparatus 200-2 based on the learning data received from the first to third external electronic apparatuses 200-1 to 200-3. Also, by the same method, the electronic apparatus 100 may train the third artificial intelligence model used by the third external electronic apparatus 200-3 based on the learning data received from the first to third external electronic apparatuses 200-1 to 200-3.
Through the aforementioned method, the plurality of external electronic apparatuses may share the learning data stored in each apparatus and construct personalized artificial intelligence models. For example, suppose the first external electronic apparatus 200-1 stores a large amount of learning data, while the amount of learning data stored in the second external electronic apparatus 200-2 and the third external electronic apparatus 200-3 is small in comparison. In this case, the second external electronic apparatus 200-2 and the third external electronic apparatus 200-3 can train their artificial intelligence models by using the learning data stored in the first external electronic apparatus 200-1, solving the problem of insufficient learning data.
Meanwhile, artificial intelligence models sharing learning data as mentioned above may be artificial intelligence models performing similar functions. For example, all of the first to third artificial intelligence models may be artificial intelligence models related to voice recognition. Alternatively, all of the first to third artificial intelligence models may be artificial intelligence models for image analysis. However, the disclosure is not limited thereto, and in case there is a need to share learning data, learning data stored by artificial intelligence models different from one another can be shared by the plurality of external electronic apparatuses.
Meanwhile, the electronic apparatus 100 can train artificial intelligence models of the plurality of external electronic apparatuses by using an external server storing various learning data. In this case, training artificial intelligence models by using all of the learning data of the external server has a disadvantage that a large amount of resources are needed. Accordingly, the electronic apparatus 100 may select learning data similar to the learning data included in the first to third external electronic apparatuses 200-1 to 200-3 among the vast amount of learning data stored by the external server, and train artificial intelligence models based on the selected learning data.
Hereinafter, various embodiments of the disclosure will be described based on the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2 among the plurality of external electronic apparatuses. Meanwhile, the technical idea of the disclosure can be applied to two or more external electronic apparatuses.
FIG. 2 is a block diagram illustrating a schematic configuration of an electronic apparatus according to an embodiment of the disclosure.
The electronic apparatus 100 may include a memory 110, a communicator 120, and a processor 130.
The memory 110 may store instructions or data related to at least one other different component of the electronic apparatus 100. In particular, the memory 110 may be implemented as a nonvolatile memory, a volatile memory, a flash memory, a hard disk drive (HDD), or a solid state drive (SSD), etc. The memory 110 may be accessed by the processor 130, and reading/recording/correction/deletion/update, etc. of data by the processor 130 may be performed. In the disclosure, the term memory may include the memory 110, the read-only memory (ROM) (not shown) and random access memory (RAM) (not shown) inside the processor 130, or a memory card (not shown) mounted on the electronic apparatus 100 (e.g., a micro secure digital (SD) card, a memory stick).
In addition, the memory 110 may receive artificial intelligence models used by the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2 from the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2, and learning data stored in the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2. Further, the memory 110 may receive information on characteristic information of the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2, and store the information. Here, characteristic information of the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2 may include at least one of characteristic information related to the voice inputters included in the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2, characteristic information related to noises input through the voice inputters of the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2, or characteristic information related to the distance between a location wherein a user voice was generated and the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2.
The communicator 120 is a component for performing communication with another electronic apparatus. Meanwhile, communicative connection of the communicator 120 with another electronic apparatus may include communication through a third apparatus (e.g., a repeater, a hub, an access point, a server, a gateway, etc.). Wireless communication may include cellular communication using at least one of long term evolution (LTE), LTE Advance (LTE-A), code division multiple access (CDMA), wideband CDMA (WCDMA), a universal mobile telecommunications system (UMTS), Wireless Broadband (WiBro), or a Global System for Mobile Communications (GSM). According to an embodiment, wireless communication may include, for example, at least one of Wi-Fi, Bluetooth, Bluetooth low energy (BLE), Zigbee, near field communication (NFC), Magnetic Secure Transmission, radio frequency (RF), or a body area network (BAN). Wired communication may include, for example, at least one of a universal serial bus (USB), a high definition multimedia interface (HDMI), a recommended standard 232 (RS-232), power line communication, or a plain old telephone service (POTS). Networks wherein wireless communication or wired communication is performed may include at least one of a telecommunication network, for example, a computer network (e.g., a local area network (LAN) or a wide area network (WAN)), Internet, or a telephone network.
Specifically, the communicator 120 may receive artificial intelligence models used by the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2 from the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2, and learning data stored in the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2.
The processor 130 may be electronically connected with the memory 110, and control the overall operations and functions of the electronic apparatus 100.
Specifically, the processor 130 may identify first learning data corresponding to second learning data received from the second external electronic apparatus 200-2 among a plurality of learning data received from the first external electronic apparatus 200-1, and train the artificial intelligence model used by the second external electronic apparatus 200-2 based on the identified first learning data. In this case, the first external electronic apparatus 200-1 may be an external server including a vast amount of learning data, and the second external electronic apparatus 200-2 may be a user terminal.
Meanwhile, the processor 130 may receive characteristic information of each of the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2 from the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2. The electronic apparatus 100 may convert the first learning data into third learning data for training the second artificial intelligence model based on the characteristic information of the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2.
As an example, in case the second artificial intelligence model is an artificial intelligence model for voice recognition, learning data may be data for a user voice. Here, user voices received by the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2 may vary according to characteristic information of each external electronic apparatus. For example, a case where the first external electronic apparatus 200-1 is a user terminal such as a smartphone, and the second external electronic apparatus 200-2 is a refrigerator may be assumed. The first external electronic apparatus 200-1 and the second external electronic apparatus 200-2 may differ in the types and number of microphones included inside the apparatuses, and the distances from the starting point of a user voice to the external electronic apparatuses may also differ. Accordingly, the noise, frequency characteristics, etc. included in a user voice received by the first external electronic apparatus 200-1 may be different from those included in a user voice received by the second external electronic apparatus 200-2. Thus, in case the electronic apparatus 100 is going to train the second artificial intelligence model by using the first learning data, there is a need to convert the first learning data to suit the characteristics of the second external electronic apparatus 200-2. Accordingly, the processor 130 may convert the first learning data into third learning data for training the second artificial intelligence model based on the characteristic information of the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2.
In the aforementioned embodiment, a case where an artificial intelligence model is for voice recognition was described. However, in the case of an artificial intelligence model performing a different function, the same technical idea can be applied depending on needs.
Meanwhile, the processor 130 may compare at least one of an input value or a label value included in the second learning data with an input value and a label value included in each of the plurality of learning data received from the first external electronic apparatus 200-1 and acquire first learning data. Specifically, in case the first external electronic apparatus 200-1 is an external server including a vast amount of learning data, the electronic apparatus 100 may identify learning data for training the second artificial intelligence model, among the learning data of the first external electronic apparatus 200-1. In this case, the electronic apparatus 100 may identify learning data similar to the input value and the label value of the second learning data from the plurality of learning data of the first external electronic apparatus 200-1.
As an example, in case the first and second artificial intelligence models are artificial intelligence models for voice recognition, the first and second learning data may include user voice data, a label value of the voice data, and user information corresponding to the voice data. Here, an input value may be data related to an acoustic model, and a label value may be data related to a language model, and user information may be user identification information. Specifically, an input value may be data such as the waveform of a user voice, and may include data such as intonation and tone, and a label value may be text data wherein a user voice was converted into a text. Also, for identifying learning data similar to the input value and the label value of the second learning data from the plurality of learning data of the first external electronic apparatus 200-1, the electronic apparatus 100 may identify whether waveforms of input values are similar, or distribution of label values expressed in a vector format exists within a predetermined distance.
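As a rough illustration of the two similarity checks just mentioned, the sketch below compares input-value waveforms by normalized cross-correlation and checks whether vector-format label values lie within a predetermined distance. Both metrics and both thresholds are assumptions; the disclosure does not fix a particular similarity measure.

```python
import numpy as np

def waveform_similarity(x, y):
    """Peak normalized cross-correlation between two voice waveforms,
    a simple stand-in for the 'similar waveform' check described above."""
    x = (np.asarray(x, dtype=float) - np.mean(x)) / (np.std(x) + 1e-8)
    y = (np.asarray(y, dtype=float) - np.mean(y)) / (np.std(y) + 1e-8)
    n = min(len(x), len(y))
    corr = np.correlate(x[:n], y[:n], mode="full") / n
    return float(np.max(np.abs(corr)))

def labels_within_distance(label_vec_a, label_vec_b, max_dist=1.0):
    """Check whether two label values, expressed as vectors (e.g., text
    embeddings), lie within a predetermined distance; the embedding and
    the threshold are assumptions for this sketch."""
    diff = np.asarray(label_vec_a, dtype=float) - np.asarray(label_vec_b, dtype=float)
    return float(np.linalg.norm(diff)) <= max_dist
```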
Meanwhile, the processor 130 may convert voice data included in the first learning data by using a frequency filter. Specifically, the processor 130 may acquire a frequency filter that can generate voice data appropriate for the second external electronic apparatus 200-2 based on characteristic information of the first external electronic apparatus 200-1 and characteristic information of the second external electronic apparatus 200-2, and acquire third learning data by using the acquired filter.
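The frequency-filter conversion might look like the single-frame equalization sketch below, which divides out the first apparatus's microphone response and imposes the second apparatus's. The per-bin response arrays and the whole scheme are assumptions, and a practical implementation would process the signal frame by frame (e.g., via an STFT).

```python
import numpy as np

def adapt_voice_data(first_voice, resp_first, resp_second, n_fft=1024):
    """Map voice data recorded under the first apparatus's microphone
    response onto the second apparatus's response (illustrative only).

    resp_first and resp_second are assumed per-bin magnitude responses of
    the two voice inputters, each of length n_fft // 2 + 1.
    """
    spectrum = np.fft.rfft(first_voice, n=n_fft)
    # Compensate the first device's coloration, then apply the second device's.
    eq = np.asarray(resp_second, dtype=float) / (np.asarray(resp_first, dtype=float) + 1e-8)
    return np.fft.irfft(spectrum * eq, n=n_fft)
```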
Also, the processor 130 may perform the aforementioned operations in case a predetermined time condition is satisfied. For example, the processor 130 may receive learning data from the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2 around dawn and train the first artificial intelligence model and the second artificial intelligence model.
Meanwhile, according to another embodiment of the disclosure, the first artificial intelligence model and the second artificial intelligence model may be artificial intelligence models for understanding of natural languages. In this case, learning data may be information on the purport of a user voice and a slot included in a result of voice recognition acquired through a voice recognition model.
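In that case, one learning example would bundle the recognized text with the purport (intent) of the user voice and the slots taken from the voice recognition result. The record below is purely illustrative; its field names are not taken from the disclosure.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class NLUExample:
    """Hypothetical learning example for a natural-language-understanding
    model; field names are assumptions for this sketch."""
    recognized_text: str   # text produced by the voice recognition model
    intent: str            # the purport of the user voice
    slots: Dict[str, str]  # slot name -> slot value

# Example: NLUExample("set an alarm for 7 am", "alarm.create", {"time": "7 am"})
```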
FIG. 3 is a block diagram illustrating a detailed configuration of an electronic apparatus according to an embodiment of the disclosure.
Referring to FIG. 3, the electronic apparatus 100 may further include an inputter 140, a display 150, and an audio outputter 160 in addition to the memory 110, the communicator 120, and the processor 130. However, the components are not limited to the aforementioned components, and some components may be added or omitted depending on needs.
The inputter 140 is a component for receiving input of a user instruction. Here, the inputter 140 may include a camera 141, a microphone 142, a touch panel 143, etc. The camera 141 is a component for acquiring image data around the electronic apparatus 100. The camera 141 may photograph a still image and a moving image. For example, the camera 141 may include one or more image sensors (e.g., a front surface sensor or a back surface sensor), a lens, an image signal processor (ISP), or a flash (e.g., a light emitting diode (LED), a xenon lamp, etc.). The microphone 142 is a component for acquiring sounds around the electronic apparatus 100. The microphone 142 may receive input of an acoustic signal outside, and generate electronic voice information. Also, the microphone 142 may use various noise removal algorithms for removing noises generated in the process of receiving input of an acoustic signal outside. The touch panel 143 is a component for receiving various user inputs. The touch panel 143 may receive input of data by a user manipulation. Also, the touch panel 143 may be constituted while being combined with the display that will be described below. Meanwhile, it is obvious that the inputter 140 may include various components for receiving input of various data in addition to the camera 141, the microphone 142, and the touch panel 143.
The aforementioned various components of the inputter 140 may be used in various forms. For example, in case the first external electronic apparatus 200-1 includes a voice recognition model but the second external electronic apparatus 200-2 does not, and the second external electronic apparatus 200-2 needs a voice recognition result, the electronic apparatus 100 may input a user voice received through the microphone 142 into the first artificial intelligence model received from the first external electronic apparatus 200-1, and transmit the output result value to the second external electronic apparatus 200-2. That is, the electronic apparatus 100 may not only share learning data between the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2, but also acquire, on behalf of an external electronic apparatus, a result value for an input value of an artificial intelligence model not included in that external electronic apparatus, and transmit the result value.
The display 150 is a component for outputting various images. The display 150 for providing various images may be implemented as display panels in various forms. For example, the display panel may be implemented as various display technologies such as a liquid crystal display (LCD), organic light emitting diodes (OLEDs), an active-matrix organic light-emitting diode (AM-OLED), liquid crystal on silicon (LcoS), digital light processing (DLP), etc. Also, the display 150 may be combined with at least one of the front surface area, the side surface area, or the back surface area of the electronic apparatus 100, in the form of a flexible display.
Specifically, the display 150 may display various setting information for sharing learning data among external electronic apparatuses. That is, sharing of learning data among external electronic apparatuses may be performed automatically, but may also be performed by a user instruction, and the display 150 may output various information for receiving input of a user instruction or inquiring about a user instruction.
The audio outputter 160 is a component outputting various kinds of notification sounds or voice messages as well as various types of audio data for which various processing operations such as decoding or amplification, noise filtering, etc. were performed by an audio processor. The audio processor is a component performing processing of audio data. At the audio processor, various processing such as decoding or amplification, noise filtering, etc. of audio data may be performed. Audio data processed at the audio processor may be output to the audio outputter 160. In particular, the audio outputter may be implemented as a speaker, but this is merely an example, and the audio outputter may be implemented as an output terminal that can output audio data.
The processor 130 controls the overall operations of the electronic apparatus 100, as described above. The processor 130 may include a RAM 131, a ROM 132, a main CPU 134, a graphic processor 133, first to nth interfaces 135-1 to 135-n, and a bus 136. Here, the RAM 131, the ROM 132, the main CPU 134, the graphic processor 133, and the first to nth interfaces 135-1 to 135-n may be connected with one another through the bus 136.
The ROM 132 stores a set of instructions, etc. for system booting. When a turn-on instruction is input and power is supplied, the main CPU 134 copies the operating system (O/S) stored in the memory 110 into the RAM 131 according to the instruction stored in the ROM 132, and boots the system by executing the O/S. When booting is completed, the main CPU 134 copies various types of application programs stored in the memory 110 into the RAM 131, and performs various types of operations by executing the application programs copied into the RAM 131.
Specifically, the main CPU 134 accesses the memory 110, and performs booting by using the O/S stored in the memory 110. Then, the main CPU 134 performs various operations by using various programs, contents, data, etc. stored in the memory 110.
The first to nth interfaces 135-1 to 135-n are connected with the aforementioned various components. One of the interfaces may be a network interface connected to an external electronic apparatus through a network.
Hereinafter, various embodiments according to the disclosure will be described with reference to FIGS. 4 to 8.
FIGS. 4 and 5 are diagrams illustrating a data sharing method according to various embodiments of the disclosure.
Specifically, FIG. 4 is a diagram illustrating a method of selecting learning data necessary for training the second artificial intelligence model in case the first external electronic apparatus 200-1 is an external server storing a vast amount of varied learning data, and FIG. 5 is a diagram illustrating a method of selecting learning data necessary for training the second artificial intelligence model in case the first external electronic apparatus 200-1 is a personal terminal storing personalized learning data.
Referring to FIG. 4, the first external electronic apparatus 200-1 may include the first artificial intelligence model and the first database, and the second external electronic apparatus 200-2 may include the second artificial intelligence model and the second database. Here, the first database and the second database may store a plurality of learning data.
First, a selection module may acquire the first learning data similar to the second learning data (including an input value D2 Input and a label value D2 Label) including an input value and a label value among the plurality of learning data of the second database from the plurality of learning data of the first database. Specifically, as illustrated in FIG. 4, the selection module may compare at least one of the input value D2 Input or the label value D2 Label of the second learning data with at least one of the input value or the label value of each of the plurality of learning data of the first database, and select similar data.
The selection module may train the second artificial intelligence model by using, as an input, the first learning data including an input value D1 Input and a label value D1 Label similar to the second learning data.
As in the aforementioned method, the electronic apparatus 100 may train the second artificial intelligence model by acquiring learning data similar to each of the plurality of learning data of the second database from the first database.
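A minimal version of this selection loop, assuming the first database is a list of (input value, label value) pairs and that the similarity functions and the threshold are supplied by the caller, might look as follows.

```python
def select_similar(first_db, d2_input, d2_label, input_sim, label_sim, thresh=0.8):
    """Sketch of the selection module in FIG. 4: scan the first database for
    learning data whose input value and label value are similar to D2 Input
    and D2 Label; the similarity functions and threshold are assumptions."""
    selected = []
    for d1_input, d1_label in first_db:  # each entry: (input value, label value)
        if input_sim(d1_input, d2_input) >= thresh and label_sim(d1_label, d2_label) >= thresh:
            selected.append((d1_input, d1_label))
    return selected
```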
Referring to FIG. 5, the electronic apparatus 100 may use all of the plurality of learning data of the first database as learning data for training the second artificial intelligence model. That is, unlike the case in FIG. 4, in case both of the learning data stored in the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2 are learning data received from a user, the electronic apparatus 100 may train the second artificial intelligence model by using all of the plurality of learning data of the first database.
FIG. 6 is a diagram illustrating a method of converting learning data according to an embodiment of the disclosure.
Referring to FIG. 6, the selection module may acquire the first learning data similar to the second learning data (including an input value D2 Input and a label value D2 Label) including an input value and a label value among the plurality of learning data of the second database from the plurality of learning data of the first database. Specifically, as illustrated in FIG. 4, the selection module may compare at least one of the input value D2 Input or the label value D2 Label of the second learning data with at least one of the input value or the label value of each of the plurality of learning data of the first database, and select similar data.
The selection module may acquire the first learning data including the input value D1 Input and the label value D1 Label similar to the second learning data, and a conversion module may acquire third learning data (an input value D1' Input and a label value D1' Label) based on the input value D1 Input, the input value D2 Input, and the label value D1 Label. That is, learning data appropriate for the second external electronic apparatus 200-2 may be acquired as the third learning data by converting the first learning data.
As an embodiment of conversion of learning data, in case the first learning data and the second learning data are data for training artificial intelligence models for voice recognition, the D1 Input and the D2 Input may be data related to a user voice received by each external electronic apparatus.
Specifically, the electronic apparatus 100 may add data for noises around the first external electronic apparatus to the second learning data (specifically, the D2 Input), and acquire third learning data (specifically, the D1' Input). For example, the electronic apparatus 100 may acquire data for the ambient noises of the first external electronic apparatus 200-1 according to the usage environment of the first external electronic apparatus 200-1, and add the acquired data for the ambient noises of the first external electronic apparatus 200-1 and the D2 Input in the time domain. As another example, the electronic apparatus 100 may filter the D2 Input in the frequency domain by using a frequency filter for the ambient noise environment of the first external electronic apparatus 200-1. As another example, the electronic apparatus 100 may acquire data for a non-voice section included in the D1 Input by using voice activity detection (VAD), and add the data for the non-voice section and the D2 Input in the time domain. Here, VAD is a technology of dividing an input user utterance into a portion where a user voice is included and a portion where a user voice is not included (a mute portion or a non-voice portion), and the electronic apparatus 100 may acquire third learning data by using the non-voice portion, which carries the noise environment information of the first external electronic apparatus 200-1.
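The time-domain addition and the VAD-based extraction of a non-voice section can be sketched as follows. The SNR-based scaling and the energy-threshold VAD are stand-ins chosen for illustration; the disclosure only states that the signals are added in the time domain and that VAD separates voice portions from non-voice portions.

```python
import numpy as np

def add_noise_time_domain(voice, noise, snr_db=10.0):
    """Mix an ambient-noise recording into a voice signal in the time domain;
    the SNR-based scaling is an illustrative choice."""
    voice = np.asarray(voice, dtype=float)
    noise = np.resize(np.asarray(noise, dtype=float), voice.shape)  # repeat/crop noise
    voice_pow = np.mean(voice ** 2) + 1e-12
    noise_pow = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(voice_pow / (noise_pow * 10 ** (snr_db / 10)))
    return voice + scale * noise

def extract_non_voice(signal, frame_len=400, energy_thresh=1e-4):
    """Crude energy-based VAD that keeps only low-energy (non-voice) frames,
    standing in for the VAD step described above; the threshold is an assumption."""
    signal = np.asarray(signal, dtype=float)
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    quiet = [f for f in frames if np.mean(f ** 2) < energy_thresh]
    return np.concatenate(quiet) if quiet else np.array([])
```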
As another example, the electronic apparatus 100 may acquire third learning data based on characteristic information of the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2.
Specifically, in case characteristic information of the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2 is specification information for the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2, the electronic apparatus 100 may acquire third learning data in consideration of the difference between the specification information of the first external electronic apparatus 200-1 and the specification information of the second external electronic apparatus 200-2.
Alternatively, in case characteristic information of the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2 is characteristic information related to noises input through the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2, the electronic apparatus 100 may acquire third learning data by using the difference between the noise input into the first external electronic apparatus 200-1 and the noise input into the second external electronic apparatus 200-2.
Alternatively, in case the characteristic information of the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2 is distance information between the location of each apparatus and a location wherein a user voice was generated, the electronic apparatus 100 may acquire third learning data based on the two pieces of distance information. For example, assume that a user voice is input into the first external electronic apparatus 200-1 from a short distance, and a user voice is input into the second external electronic apparatus 200-2 from a far distance. In general, when a user voice is input from a far distance, the low frequency portion of the audio signal received by an apparatus is emphasized, and when a user voice is input from a short distance, the high frequency portion is emphasized. Accordingly, to convert an audio signal corresponding to a user voice received by the first external electronic apparatus 200-1 into an audio signal appropriate for the second external electronic apparatus 200-2, signal processing such as attenuating the high frequency portion of the audio signal and boosting its low frequency portion is needed. Thus, in consideration of the aforementioned matter, the electronic apparatus 100 may acquire the third learning data based on the distance information of the first external electronic apparatus 200-1 and the distance information of the second external electronic apparatus 200-2. As described above, for converting the first learning data into the third learning data, the electronic apparatus 100 may use a frequency filter.
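The distance-based conversion can be illustrated with a simple spectral tilt: attenuate the high frequency portion and boost the low frequency portion so a near-field recording resembles a far-field one. In the sketch below, the crossover frequency and the gain values are illustrative assumptions, not parameters from the disclosure.

```python
import numpy as np

def near_to_far_field(audio: np.ndarray, sample_rate: int,
                      crossover_hz: float = 1000.0,
                      low_gain: float = 1.5, high_gain: float = 0.5) -> np.ndarray:
    """Tilt the spectrum: boost bins below the crossover, attenuate bins above it."""
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    gains = np.where(freqs < crossover_hz, low_gain, high_gain)
    return np.fft.irfft(spectrum * gains, n=len(audio))
```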
Meanwhile, the electronic apparatus 100 may convert all of the plurality of learning data of the first database to suit the characteristic information of the second external electronic apparatus 200-2, as described with reference to FIG. 5. That is, FIG. 6 describes an embodiment wherein data similar to the second learning data is acquired and the acquired learning data is converted, but the disclosure is not limited thereto, and the electronic apparatus 100 can convert all learning data of the first database without a process of acquiring learning data similar to the second learning data.
FIG. 7 is a diagram for illustrating a method of acquiring learning data according to an embodiment of the disclosure.
Referring to FIG. 7, the first external electronic apparatus 200-1 is an electronic apparatus that has received learning data from users and has been trained over a long period, and may include an artificial intelligence model trained extensively with learning data related to a plurality of users. The second external electronic apparatus 200-2 is, for example, an electronic apparatus that a user newly purchased, and may include an artificial intelligence model that has only a small amount of learning data for the user and is not yet personalized.
In this case, as illustrated in FIG. 7, the electronic apparatus 100 may input the D2 Input received from the second external electronic apparatus 200-2 into the first artificial intelligence model as an input value, and acquire a result value. Here, when the D2 Input is input into the first artificial intelligence model, the electronic apparatus 100 may acquire a label value D1 Label for the D2 Input and a probability value Prob indicating whether the D1 Label is the correct label value for the D2 Input. In case the probability value Prob is equal to or greater than a predetermined value, the selection module may train the second artificial intelligence model based on fourth learning data including the D2 Input and the D1 Label.
That is, in case the second external electronic apparatus 200-2 is a newly purchased electronic apparatus, or an electronic apparatus whose internal information has been initialized, etc., the output value and the label value for input data may not be accurate. Thus, the electronic apparatus 100 may acquire fourth learning data for the input value (D2 Input) of the second learning data by using the first artificial intelligence model, which has undergone extensive training.
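This amounts to confidence-thresholded pseudo-labeling. A minimal sketch, assuming `first_model` is any callable returning a (label, probability) pair; the names and the threshold are illustrative assumptions:

```python
def acquire_fourth_learning_data(first_model, d2_inputs, prob_threshold: float = 0.9):
    """Label each D2 Input with the well-trained first model; keep confident pairs only."""
    fourth = []
    for d2_input in d2_inputs:
        d1_label, prob = first_model(d2_input)
        if prob >= prob_threshold:  # keep only predictions the model is confident about
            fourth.append((d2_input, d1_label))
    return fourth
```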
FIG. 8 is a diagram illustrating a method of sharing learning data according to an embodiment of the disclosure. FIG. 9 is a diagram illustrating a method for a second external electronic apparatus to convert learning data according to an embodiment of the disclosure.
Referring to FIGS. 8 and 9, in case the first artificial intelligence model and the second artificial intelligence model are artificial intelligence models for natural language understanding, the electronic apparatus 100 may train the second artificial intelligence model by using the first learning data having similar characteristics, and train the first artificial intelligence model by using the second learning data.
Specifically, the electronic apparatus 100 may acquire the first learning data, and acquire, from the plurality of learning data of the second database, the second learning data having a label identical or similar to that of the first learning data. Alternatively, it is obvious that the electronic apparatus 100 can acquire the second learning data, and acquire, from the plurality of learning data of the first database, the first learning data having a label identical or similar to that of the second learning data. Here, a label in an artificial intelligence model for natural language understanding may be a label related to the intent and entity of a user voice with respect to a voice recognition result.
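For illustration, matching learning data by identical or similar labels might look like the following sketch, assuming each label is held as a dictionary with `intent` and `entities` keys (an assumed layout, not the disclosure's format):

```python
def labels_match(label_a: dict, label_b: dict) -> bool:
    """Two labels match when the intents agree and at least one entity is shared."""
    same_intent = label_a["intent"] == label_b["intent"]
    shared_entities = set(label_a["entities"]) & set(label_b["entities"])
    return same_intent and bool(shared_entities)

def find_shared_pairs(first_db: list, second_db: list) -> list:
    """Pair first-database and second-database entries with identical or similar labels."""
    return [(d1, d2) for d1 in first_db for d2 in second_db
            if labels_match(d1["label"], d2["label"])]
```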
Meanwhile, the aforementioned various embodiments were described based on the case wherein the electronic apparatus 100 receives an artificial intelligence model and learning data of each of a plurality of external electronic apparatuses from the plurality of external electronic apparatuses and acquires learning data appropriate for another external electronic apparatus. However, the disclosure is not limited thereto, and the aforementioned functions and operations of the electronic apparatus 100 can be performed by each of the plurality of external electronic apparatuses.
Specifically, as illustrated in FIG. 9, the aforementioned functions and operations of the electronic apparatus 100 may be performed by the first external electronic apparatus 200-1 or the second external electronic apparatus 200-2.
For example, the second external electronic apparatus 200-2 may receive a plurality of learning data stored in the first external electronic apparatus 200-1, and convert the received learning data into learning data appropriate for the artificial intelligence model of the second external electronic apparatus 200-2.
Specifically, the second external electronic apparatus 200-2 may receive a plurality of learning data stored in the first external electronic apparatus 200-1, and identify first learning data corresponding to the second learning data stored in the second external electronic apparatus 200-2 among the plurality of learning data received from the first external electronic apparatus 200-1.
Also, the second external electronic apparatus 200-2 may compare at least one of an input value or a label value included in the second learning data with an input value and a label value included in each of the plurality of learning data received from the first external electronic apparatus 200-1, and identify first learning data.
Then, the second external electronic apparatus 200-2 may train the second artificial intelligence model used by the second external electronic apparatus 200-2 based on the identified first learning data.
Specifically, the second external electronic apparatus 200-2 may receive characteristic information of the first external electronic apparatus 200-1, and convert the first learning data into third learning data based on the received characteristic information. Meanwhile, it is obvious that the same types of characteristic information and the same conversion methods as in the aforementioned various embodiments can be applied. That is, the characteristic information may include at least one of characteristic information related to voice inputters included in the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2, characteristic information related to noises input through the voice inputters of the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2, or characteristic information related to the distance between a location wherein a user voice was generated and the first external electronic apparatus 200-1, and the distance between a location wherein a user voice was generated and the second external electronic apparatus 200-2.
FIG. 10 is a flow chart for illustrating a control method of an electronic apparatus according to an embodiment of the disclosure.
Referring to FIG. 10, the electronic apparatus 100 may receive an artificial intelligence model used by each of the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2 and a plurality of learning data stored in each of the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2 from the first external electronic apparatus and the second external electronic apparatus, at operation S1110. Further, the electronic apparatus 100 can receive characteristic information of the first external electronic apparatus 200-1 and the second external electronic apparatus 200-2 and store the information.
Also, the electronic apparatus 100 may identify first learning data corresponding to the second learning data received from the second external electronic apparatus 200-2 among the plurality of learning data received from the first external electronic apparatus 200-1, at operation S1120. Specifically, the electronic apparatus 100 may compare the input value and the label value of the first learning data with the input value and the label value of the second learning data and identify first learning data.
Then, the electronic apparatus 100 may train the artificial intelligence model used by the second external electronic apparatus 200-2 based on the acquired first learning data, at operation S1130. Here, the electronic apparatus 100 can train the second artificial intelligence model based on third learning data which is learning data converted from the first learning data to a form appropriate for the second external electronic apparatus 200-2.
The electronic apparatus 100 may transmit the trained artificial intelligence model to the second external electronic apparatus 200-2, at operation S1140. Through the aforementioned process, the second external electronic apparatus 200-2 may output a personalized result based on the second artificial intelligence model trained based on various personalized learning data.
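For illustration, the overall flow of FIG. 10 could be orchestrated as in the following sketch; the device interfaces, method names, and the `identify_first_learning_data` helper are assumptions standing in for the communicator and the selection/conversion modules described above:

```python
def control_method(first_device, second_device, identify_first_learning_data):
    # S1110: receive the models and stored learning data from both external apparatuses
    model_1, db_1 = first_device.send_model_and_data()
    model_2, db_2 = second_device.send_model_and_data()
    # S1120: identify first learning data corresponding to the second learning data
    first_data = identify_first_learning_data(db_1, db_2)
    # S1130: train the second model (conversion into third learning data omitted here)
    model_2.train(first_data)
    # S1140: transmit the trained second model back to the second external apparatus
    second_device.receive_model(model_2)
```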
Meanwhile, the term "part" or "module" used in the disclosure includes a unit consisting of hardware, software, or firmware, and it may be interchangeably used with terms such as logic, a logical block, a component, or a circuit. Also, "a part" or "a module" may be a component consisting of an integrated body or a minimum unit performing one or more functions or a portion thereof. For example, a module may consist of an application-specific integrated circuit (ASIC).
The various embodiments of the disclosure may be implemented as software including instructions stored in machine-readable storage media, which can be read by machines (e.g., computers). The machines refer to apparatuses that call instructions stored in a storage medium, and can operate according to the called instructions, and the apparatuses may include an electronic apparatus according to the aforementioned embodiments (e.g., an electronic apparatus 100). In case an instruction is executed by a processor, the processor may perform a function corresponding to the instruction by itself, or by using other components under its control. An instruction may include a code that is generated or executed by a compiler or an interpreter. A storage medium that is readable by machines may be provided in the form of a non-transitory storage medium. Here, the term 'non-transitory' only means that a storage medium does not include signals, and is tangible, but does not indicate whether data is stored in the storage medium semi-permanently or temporarily.
Also, according to an embodiment of the disclosure, the methods according to the various embodiments described in the disclosure may be provided while being included in a computer program product. A computer program product refers to a product that can be traded between a seller and a buyer. A computer program product can be distributed on-line in the form of a storage medium that is readable by machines (e.g., a compact disc read only memory (CD-ROM)), or through an application store (e.g., Play Store™). In the case of on-line distribution, at least a portion of a computer program product may be stored at least temporarily in a storage medium such as the server of the manufacturer, the server of the application store, or the memory of the relay server, or may be generated temporarily.
Further, each of the components according to the aforementioned various embodiments (e.g., a module or a program) may consist of a singular object or a plurality of objects. Also, among the aforementioned corresponding sub components, some sub components may be omitted, or other sub components may be further included in the various embodiments. Generally or additionally, some components (e.g., a module or a program) may be integrated as an object, and perform the functions that were performed by each of the components before integration identically or in a similar manner. Operations performed by a module, a program, or other components according to the various embodiments may be executed sequentially, in parallel, repetitively, or heuristically. Or, at least some of the operations may be executed or omitted in a different order, or other operations may be added.
While the disclosure has been shown and described with reference to various embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents.
Claims (15)
- A control method of an electronic apparatus, the control method comprising:
  receiving, from a first external electronic apparatus and a second external electronic apparatus, a first artificial intelligence model and a second artificial intelligence model used by the first and second external electronic apparatuses, respectively, and a plurality of learning data stored in the first and second external electronic apparatuses;
  identifying first learning data, which corresponds to second learning data received from the second external electronic apparatus, among learning data received from the first external electronic apparatus;
  training the second artificial intelligence model used by the second external electronic apparatus based on the first learning data; and
  transmitting the trained second artificial intelligence model to the second external electronic apparatus.
- The control method of claim 1, further comprising:
  receiving first and second characteristic information of the first and second external electronic apparatuses, respectively;
  based on the first characteristic information of the first external electronic apparatus and the second characteristic information of the second external electronic apparatus, converting the first learning data into third learning data to train the second artificial intelligence model used by the second external electronic apparatus; and
  training the second artificial intelligence model used by the second external electronic apparatus based on the third learning data.
- The control method of claim 1, wherein the identifying of the first learning data comprises:
  comparing at least one of an input value or a label value included in the second learning data with an input value and a label value included in the learning data received from the first external electronic apparatus; and
  identifying the first learning data based on a result of the comparing.
- The control method of claim 2,
  wherein the second artificial intelligence model comprises an artificial intelligence model for voice recognition, and
  wherein the plurality of learning data includes voice data, a label value of the voice data, and user information corresponding to the voice data.
- The control method of claim 4, wherein the identifying of the first learning data comprises:
  comparing at least one of second voice data, a second label value of the second voice data, or second user information corresponding to the second voice data included in the second learning data with at least one of first voice data, a first label value of the first voice data, and first user information corresponding to the first voice data included in the learning data received from the first external electronic apparatus; and
  identifying the first learning data based on a result of the comparing.
- The control method of claim 4, wherein the first and second characteristic information includes at least one of characteristic information related to voice inputters included in the first and second external electronic apparatuses, characteristic information related to noises input through the voice inputters of the first and second external electronic apparatuses, or characteristic information related to distances between a location where a user voice was generated and locations of the first and second external electronic apparatuses.
- The control method of claim 4, wherein the converting of the first learning data into the third learning data comprises converting voice data included in the first learning data by using a frequency filter.
- The control method of claim 1, wherein the receiving of the first and second artificial intelligence models comprises:
  based on a predetermined time condition, receiving the first and second artificial intelligence models used by the first and second external electronic apparatuses, respectively;
  receiving the plurality of learning data from the first and second external electronic apparatuses; and
  receiving first and second characteristic information of the first and second external electronic apparatuses, respectively.
- An electronic apparatus comprising:
  a memory;
  a communicator; and
  a processor configured to:
    receive, via the communicator from a first external electronic apparatus and a second external electronic apparatus, a first artificial intelligence model and a second artificial intelligence model used by the first and second external electronic apparatuses, respectively, and a plurality of learning data stored in the first and second external electronic apparatuses,
    identify first learning data, which corresponds to second learning data received from the second external electronic apparatus, among learning data received from the first external electronic apparatus,
    train the second artificial intelligence model used by the second external electronic apparatus based on the first learning data, and
    transmit, via the communicator, the trained second artificial intelligence model to the second external electronic apparatus.
- The electronic apparatus of claim 9, wherein the processor is further configured to:
  receive, via the communicator, first and second characteristic information of the first and second external electronic apparatuses, respectively,
  based on the first characteristic information of the first external electronic apparatus and the second characteristic information of the second external electronic apparatus, convert the first learning data into third learning data to train the second artificial intelligence model used by the second external electronic apparatus, and
  train the second artificial intelligence model used by the second external electronic apparatus based on the third learning data.
- The electronic apparatus of claim 9, wherein the processor is further configured to:
  compare at least one of an input value or a label value included in the second learning data with an input value and a label value included in the learning data received from the first external electronic apparatus, and
  identify the first learning data based on a result of the comparing.
- The electronic apparatus of claim 10,
  wherein the second artificial intelligence model comprises an artificial intelligence model for voice recognition, and
  wherein the plurality of learning data includes voice data, a label value of the voice data, and user information corresponding to the voice data.
- The electronic apparatus of claim 12, wherein the processor is further configured to:
  compare at least one of second voice data, a second label value of the second voice data, or second user information corresponding to the second voice data included in the second learning data with at least one of first voice data, a first label value of the first voice data, and first user information corresponding to the first voice data included in the learning data received from the first external electronic apparatus, and
  identify the first learning data based on a result of the comparing.
- The electronic apparatus of claim 12, wherein the first and second characteristic information includes at least one of characteristic information related to voice inputters included in the first and second external electronic apparatuses, characteristic information related to noises input through the voice inputters of the first and second external electronic apparatuses, or characteristic information related to distances between a location where a user voice was generated and locations of the first and second external electronic apparatuses.
- An electronic apparatus comprising:
  a memory;
  a communicator; and
  a processor configured to:
    receive, via the communicator from external electronic apparatuses, a plurality of learning data stored in the external electronic apparatuses,
    identify first learning data corresponding to second learning data stored in the electronic apparatus among the plurality of learning data received from the external electronic apparatuses, and
    train an artificial intelligence model used by the electronic apparatus based on the identified first learning data.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201980065502.3A CN112789628A (en) | 2018-10-05 | 2019-10-04 | Electronic device and control method thereof |
EP19869924.1A EP3785180A4 (en) | 2018-10-05 | 2019-10-04 | Electronic apparatus and control method thereof |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020180119054A KR20200044173A (en) | 2018-10-05 | 2018-10-05 | Electronic apparatus and control method thereof |
KR10-2018-0119054 | 2018-10-05 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020071854A1 (en) | 2020-04-09 |
Family
ID=70051764
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2019/013040 WO2020071854A1 (en) | 2018-10-05 | 2019-10-04 | Electronic apparatus and control method thereof |
Country Status (5)
Country | Link |
---|---|
US (2) | US11586977B2 (en) |
EP (1) | EP3785180A4 (en) |
KR (1) | KR20200044173A (en) |
CN (1) | CN112789628A (en) |
WO (1) | WO2020071854A1 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6852141B2 (en) * | 2018-11-29 | 2021-03-31 | キヤノン株式会社 | Information processing device, imaging device, control method of information processing device, and program |
US11929079B2 (en) | 2020-10-27 | 2024-03-12 | Samsung Electronics Co., Ltd | Electronic device for managing user model and operating method thereof |
KR102493655B1 (en) * | 2020-12-01 | 2023-02-07 | 가천대학교 산학협력단 | Method for managing ai model training dataset |
KR20220121637A (en) * | 2021-02-25 | 2022-09-01 | 삼성전자주식회사 | Electronic device and operating method for the same |
WO2024063508A1 (en) * | 2022-09-19 | 2024-03-28 | 삼성전자 주식회사 | Electronic device, and method for providing operating state of plurality of devices |
WO2024155743A1 (en) * | 2023-01-18 | 2024-07-25 | Capital One Services, Llc | Systems and methods for maintaining bifurcated data management while labeling data for artificial intelligence model development |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4000828B2 (en) | 2001-11-06 | 2007-10-31 | 株式会社デンソー | Information system, electronic equipment, program |
US7533019B1 (en) | 2003-12-23 | 2009-05-12 | At&T Intellectual Property Ii, L.P. | System and method for unsupervised and active learning for automatic speech recognition |
KR101065188B1 (en) | 2009-07-24 | 2011-09-19 | 고려대학교 산학협력단 | Apparatus and method for speaker adaptation by evolutional learning, and speech recognition system using thereof |
US8843371B2 (en) | 2012-05-31 | 2014-09-23 | Elwha Llc | Speech recognition adaptation systems based on adaptation data |
US9514740B2 (en) | 2013-03-13 | 2016-12-06 | Nuance Communications, Inc. | Data shredding for speech recognition language model training under data retention restrictions |
US20150161986A1 (en) | 2013-12-09 | 2015-06-11 | Intel Corporation | Device-based personal speech recognition training |
KR102146462B1 (en) * | 2014-03-31 | 2020-08-20 | 삼성전자주식회사 | Speech recognition system and method |
US9293134B1 (en) | 2014-09-30 | 2016-03-22 | Amazon Technologies, Inc. | Source-specific speech interactions |
KR20180012639A (en) * | 2016-07-27 | 2018-02-06 | 삼성전자주식회사 | Voice recognition method, voice recognition device, apparatus comprising Voice recognition device, storage medium storing a program for performing the Voice recognition method, and method for making transformation model |
KR20180102871A (en) * | 2017-03-08 | 2018-09-18 | 엘지전자 주식회사 | Mobile terminal and vehicle control method of mobile terminal |
US10360214B2 (en) * | 2017-10-19 | 2019-07-23 | Pure Storage, Inc. | Ensuring reproducibility in an artificial intelligence infrastructure |
Application timeline:
- 2018-10-05: KR KR1020180119054A (published as KR20200044173A; not active, application discontinued)
- 2019-10-04: EP EP19869924.1A (EP3785180A4; active, pending)
- 2019-10-04: WO PCT/KR2019/013040 (WO2020071854A1; status unknown)
- 2019-10-04: CN CN201980065502.3A (CN112789628A; active, pending)
- 2019-10-04: US US16/593,589 (US11586977B2; active)
- 2023-01-31: US US18/162,218 (US11880754B2; active)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8996372B1 (en) | 2012-10-30 | 2015-03-31 | Amazon Technologies, Inc. | Using adaptation data with cloud-based speech recognition |
US20150193693A1 (en) * | 2014-01-06 | 2015-07-09 | Cisco Technology, Inc. | Learning model selection in a distributed network |
JP2017535857A (en) * | 2014-10-24 | 2017-11-30 | ナショナル・アイシーティ・オーストラリア・リミテッド | Learning with converted data |
US20160267380A1 (en) * | 2015-03-13 | 2016-09-15 | Nuance Communications, Inc. | Method and System for Training a Neural Network |
US20170161603A1 (en) * | 2015-06-08 | 2017-06-08 | Preferred Networks, Inc. | Learning device unit |
KR20180096473A (en) * | 2017-02-21 | 2018-08-29 | 한국과학기술원 | Knowledge Sharing Based Knowledge Transfer Method for Improving Quality of Knowledge and Apparatus Therefor |
Non-Patent Citations (2)
Title |
---|
See also references of EP3785180A4 |
VADIM MAZALOV ET AL.: "Writing on Clouds", INTELLIGENT COMPUTER MATHEMATICS, pages 402 - 416 |
Also Published As
Publication number | Publication date |
---|---|
EP3785180A4 (en) | 2021-10-13 |
US11586977B2 (en) | 2023-02-21 |
US20230177398A1 (en) | 2023-06-08 |
CN112789628A (en) | 2021-05-11 |
US11880754B2 (en) | 2024-01-23 |
EP3785180A1 (en) | 2021-03-03 |
US20200111025A1 (en) | 2020-04-09 |
KR20200044173A (en) | 2020-04-29 |
Similar Documents
Publication | Title |
---|---|
WO2020071854A1 (en) | Electronic apparatus and control method thereof |
WO2020166896A1 (en) | Electronic apparatus and controlling method thereof |
US11455830B2 (en) | Face recognition method and apparatus, electronic device, and storage medium |
WO2019182346A1 (en) | Electronic device for modulating user voice using artificial intelligence model and control method thereof |
WO2015167160A1 (en) | Command displaying method and command displaying device |
WO2020085796A1 (en) | Electronic device and method for controlling electronic device thereof |
EP3642838A1 (en) | Method for operating speech recognition service and electronic device and server for supporting the same |
WO2020091503A1 (en) | Electronic apparatus and control method thereof |
WO2020204655A1 (en) | System and method for context-enriched attentive memory network with global and local encoding for dialogue breakdown detection |
US10691402B2 (en) | Multimedia data processing method of electronic device and electronic device thereof |
WO2020027454A1 (en) | Multi-layered machine learning system to support ensemble learning |
WO2020040517A1 (en) | Electronic apparatus and control method thereof |
WO2021020810A1 (en) | Learning method of ai model and electronic apparatus |
WO2020180001A1 (en) | Electronic device and control method therefor |
WO2017171266A1 (en) | Diagnostic model generating method and diagnostic model generating apparatus therefor |
WO2021071110A1 (en) | Electronic apparatus and method for controlling electronic apparatus |
WO2023229305A1 (en) | System and method for context insertion for contrastive siamese network training |
WO2019164144A1 (en) | Electronic device and natural language generation method thereof |
WO2017034225A1 (en) | Electronic apparatus and method of transforming content thereof |
WO2022197136A1 (en) | System and method for enhancing machine learning model for audio/video understanding using gated multi-level attention and temporal adversarial training |
WO2020130383A1 (en) | Electronic device and method for controlling same |
WO2020050554A1 (en) | Electronic device and control method therefor |
WO2024029771A1 (en) | Method, apparatus and computer readable medium for generating clean speech signal using speech denoising networks based on speech and noise modeling |
WO2018164435A1 (en) | Electronic apparatus, method for controlling the same, and non-transitory computer readable recording medium |
WO2022131476A1 (en) | Electronic device for converting artificial intelligence model, and operation method thereof |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19869924; Country of ref document: EP; Kind code of ref document: A1 |
ENP | Entry into the national phase | Ref document number: 2019869924; Country of ref document: EP; Effective date: 20201123 |
NENP | Non-entry into the national phase | Ref country code: DE |