EP3698258A1 - Electronic apparatus and controlling method thereof - Google Patents

Electronic apparatus and controlling method thereof

Info

Publication number
EP3698258A1
Authority
EP
European Patent Office
Prior art keywords
text
illustration
illustrations
artificial intelligence
acquired
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
EP19768025.9A
Other languages
English (en)
French (fr)
Other versions
EP3698258A4 (de)
Inventor
Jooyoung Kim
Hyunwoo Lee
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority claimed from PCT/KR2019/002853 (WO2019177344A1)
Publication of EP3698258A1
Publication of EP3698258A4


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866 Retrieval using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • G06F16/53 Querying
    • G06F16/538 Presentation of query results
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/205 Parsing
    • G06F40/216 Parsing using statistical methods
    • G06F40/279 Recognition of textual entities
    • G06F40/284 Lexical analysis, e.g. tokenisation or collocates
    • G06F40/30 Semantic analysis
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/045 Combinations of networks
    • G06N3/047 Probabilistic or stochastic networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06N3/088 Non-supervised learning, e.g. competitive learning
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
    • G06Q10/00 Administration; Management
    • G06Q10/10 Office automation; Time management
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/07 User-to-user messaging characterised by the inclusion of specific contents
    • H04L51/10 Multimedia information

Definitions

  • The disclosure relates to an electronic apparatus and a controlling method thereof. More particularly, the disclosure relates to an electronic apparatus for generating an image associated with text and a controlling method thereof.
  • Apparatuses and methods consistent with the disclosure relate to an artificial intelligence (AI) system that mimics functions of a human brain, such as cognition, determination, and the like, using a machine learning algorithm and an application thereof.
  • Unlike an existing rule-based smart system, an artificial intelligence system is a system in which a machine learns, makes determinations, and becomes smarter by itself.
  • Artificial intelligence technology consists of machine learning (e.g., deep learning) and element technologies that utilize the machine learning. Machine learning is an algorithmic technology that classifies and learns the characteristics of input data by itself.
  • The element technology is a technology that mimics functions such as cognition and determination of the human brain by utilizing machine learning algorithms such as deep learning, and includes technical fields such as linguistic understanding, visual understanding, inference and prediction, knowledge representation, motion control, and the like.
  • Linguistic understanding is a technology for recognizing, applying, and processing human language/characters, and includes natural language processing, machine translation, dialogue systems, query response, voice recognition/synthesis, and the like.
  • Visual understanding is a technology for recognizing and processing objects as human vision does, and includes object recognition, object tracking, image search, human recognition, scene understanding, spatial understanding, image enhancement, and the like.
  • Inference and prediction is a technology for determining information and logically inferring and predicting from it, and includes knowledge/probability based inference, optimization prediction, preference based planning, recommendation, and the like.
  • Knowledge representation is a technology that automates the processing of human experience information into knowledge data, and includes knowledge building (data generation/classification), knowledge management (data utilization), and the like.
  • Motion control is a technology for controlling the autonomous driving of a vehicle and the motion of a robot, and includes movement control (navigation, collision avoidance, driving), operation control (behavior control), and the like.
  • Materials such as books, newspapers, advertisements, and presentations may be created by inserting illustrations together with text.
  • Conventionally, it took a long time to find desired illustrations because a user had to find illustrations suitable for the text one by one, and it was also difficult to unify the designs of the illustrations inserted into one material.
  • Accordingly, an aspect of the disclosure is to provide an electronic apparatus for generating an image associated with text using an artificial intelligence (AI) model, and a controlling method thereof.
  • a method of controlling an electronic apparatus includes acquiring a text based on a user input, determining a plurality of key terms from the acquired text, acquiring a plurality of first illustrations corresponding to the plurality of key terms, acquiring a second illustration by synthesizing at least two or more of the first illustrations from among the plurality of first illustrations, and outputting the acquired second illustration.
  • an electronic apparatus includes a memory configured to store one or more instructions, and at least one processor coupled to the memory, wherein the at least one processor is configured to execute the one or more instructions to acquire a text based on a user input, determine a plurality of key terms from the acquired text, acquire a plurality of first illustrations corresponding to the plurality of key terms, acquire a second illustration by synthesizing at least two or more of the first illustrations from among the plurality of first illustrations, and output the acquired second illustration.
  • FIG. 1 is a diagram for describing an illustration providing method according to an embodiment of the disclosure
  • FIG. 2A is a flow chart for describing a controlling method of an electronic apparatus according to an embodiment of the disclosure
  • FIG. 2B is a flow chart for describing a controlling method of an electronic apparatus according to an embodiment of the disclosure
  • FIG. 3 is a diagram illustrating an example of a learning method through a generative adversarial network (GAN) according to an embodiment of the disclosure
  • FIG. 4 is a diagram for describing an illustration search method using a database including illustrations matched with tag information according to an embodiment of the disclosure;
  • FIGS. 5, 6, 7, and 8 are diagrams for describing acquisition of a synthesized illustration in which a plurality of illustrations are synthesized according to various embodiments of the disclosure;
  • FIGS. 9, 10, and 11 are diagrams for describing provision of a plurality of synthesized illustrations which are synthesized in various combinations according to various embodiments of the disclosure;
  • FIG. 12 is a diagram for describing acquisition of an illustration associated with a text and corresponding to a design of a presentation image according to an embodiment of the disclosure;
  • FIGS. 13, 14, 15, and 16 are diagrams for describing a user interface for providing illustrations according to diverse embodiments of the disclosure.
  • FIGS. 17 and 18A are diagrams for describing embodiments in which an illustration generation function is applied to a messenger program according to various embodiments of the disclosure;
  • FIG. 18B is a diagram for describing an embodiment in which the illustration generation function is applied to a keyboard program according to an embodiment of the disclosure;
  • FIG. 19 is a block diagram for describing a configuration of an electronic apparatus according to an embodiment of the disclosure.
  • FIG. 20A is a flow chart of a network system using a recognition model according to various embodiments of the disclosure.
  • FIG. 20B is a flow chart of a network system using an artificial intelligence model according to an embodiment of the disclosure.
  • FIG. 20C is a configuration diagram of a network system according to an embodiment of the disclosure.
  • FIG. 21 is a block diagram for describing an electronic apparatus for learning and using a recognition model according to an embodiment of the disclosure.
  • FIGS. 22 and 23 are block diagrams for describing a learner and an analyzer according to various embodiments of the disclosure.
  • an expression “have”, “may have”, “include”, “may include”, or the like indicates an existence of a corresponding feature (for example, a numerical value, a function, an operation, a component such as a part, or the like), and does not exclude an existence of an additional feature.
  • an expression “A or B”, “at least one of A and/or B”, “one or more of A and/or B”, or the like, may include all possible combinations of items listed together.
  • “A or B”, “at least one of A and B”, or “at least one of A or B” may indicate all of 1) a case in which at least one A is included, 2) a case in which at least one B is included, or 3) a case in which both of at least one A and at least one B are included.
  • Expressions “first”, “second”, or the like, used in the disclosure may indicate various components regardless of a sequence and/or importance of the components, will be used only in order to distinguish one component from the other components, and do not limit the corresponding components.
  • a first user device and a second user device may indicate different user devices regardless of a sequence or importance thereof.
  • the first component may be named the second component and the second component may also be similarly named the first component, without departing from the scope of the disclosure.
  • module used in the disclosure is a term for referring to the component performing at least one function or operation, and such a component may be implemented in hardware or software or may be implemented in a combination of hardware and software.
  • a plurality of “modules”, “units”, “parts”, or the like may be integrated into at least one module or chip and may be implemented in at least one processor, except for a case in which they need to be each implemented in individual specific hardware.
  • When it is mentioned that any component (for example, a first component) is coupled with/to or is connected to another component (for example, a second component), it is to be understood that any component is directly coupled with/to another component or may be coupled with/to another component through the other component (for example, a third component).
  • On the other hand, when it is mentioned that any component (for example, a first component) is “directly coupled with/to” or “directly connected to” another component (for example, a second component), it is to be understood that the other component (for example, a third component) is not present between any component and another component.
  • An expression “configured (or set) to” used in the disclosure may be replaced by an expression “suitable for”, “having the capacity to”, “designed to”, “adapted to”, “made to”, or “capable of” depending on a situation.
  • a term “configured (or set) to” may not necessarily mean only “specifically designed to” in hardware. Instead, an expression “an apparatus configured to” may mean that the apparatus is “capable of” together with other apparatuses or components.
  • a “processor configured (or set) to perform A, B, and C” may mean a dedicated processor (for example, an embedded processor) for performing the corresponding operations or a generic-purpose processor (for example, a central processing unit (CPU) or an application processor) that may perform the corresponding operations by executing one or more software programs stored in a memory apparatus.
  • An electronic apparatus may include at least one of, for example, a smartphone, a tablet personal computer (PC), a mobile phone, an image phone, an e-book reader, a desktop PC, a laptop PC, a netbook computer, a workstation, a server, a personal digital assistant (PDA), a portable multimedia player (PMP), a moving picture experts group phase 1 or phase 2 (MPEG-1 or MPEG-2) audio layer-3 (MP3) player, a mobile medical device, a camera, or a wearable device.
  • The wearable device may include at least one of an accessory type wearable device (for example, a watch, a ring, a bracelet, an anklet, a necklace, glasses, a contact lens, or a head-mounted device (HMD)), a textile or clothing integral type wearable device (for example, electronic clothing), a body attachment type wearable device (for example, a skin pad or a tattoo), or a living body implantation type wearable device (for example, an implantable circuit).
  • the electronic apparatus may be a home appliance.
  • The home appliance may include at least one of, for example, a television (TV), a digital video disc (DVD) player, an audio system, a refrigerator, an air conditioner, a cleaner, an oven, a microwave oven, a washing machine, an air cleaner, a set-top box, a home automation control panel, a security control panel, a TV box (for example, HomeSync™ of Samsung Electronics Co., Ltd., Apple TV™, or Google TV™), a game console (for example, Xbox™ or PlayStation™), an electronic dictionary, an electronic key, a camcorder, or a digital photo frame.
  • The electronic apparatus may include at least one of various medical devices (for example, various portable medical measuring devices (such as a blood glucose meter, a heart rate meter, a blood pressure meter, a body temperature meter, or the like), a magnetic resonance angiography (MRA), a magnetic resonance imaging (MRI), a computed tomography (CT), a photographing device, an ultrasonic device, or the like), a navigation device, a global navigation satellite system (GNSS), an event data recorder (EDR), a flight data recorder (FDR), an automobile infotainment device, marine electronic equipment (for example, a marine navigation device, a gyro compass, or the like), avionics, a security device, an automobile head unit, an industrial or household robot, an automatic teller's machine of a financial institute, a point of sales (POS) of a shop, or Internet of things (IoT) devices (for example, a light bulb, various sensors, an electric or gas meter, or the like).
  • The electronic apparatus may include at least one of a portion of furniture or a building/structure, an electronic board, an electronic signature receiving device, a projector, or various meters (for example, water, electricity, gas, or electric wave meters).
  • The electronic apparatus may be one of the various apparatuses described above or a combination of two or more thereof.
  • An electronic apparatus according to some embodiments may be a flexible electronic apparatus.
  • the electronic apparatus according to the embodiments of the disclosure is not limited to the apparatuses described above, but may include new electronic apparatuses in accordance with the development of technologies.
  • FIG. 1 is a diagram for describing an illustration providing method according to an embodiment of the disclosure.
  • When a user enters a text 10, which is a presentation script, through a presentation program such as Microsoft PowerPoint™, an illustration 20 corresponding to the text 10 may be provided.
  • a meaning of the text 10 may be detected using an artificial intelligence technology and the illustration 20 that matches the meaning of the text 10 may be provided.
  • Such an illustration providing function may be provided as a plugin or an additional function (an add-in or an add-on) to presentation software such as Microsoft PowerPoint™, KeyNote™, or the like, or may also be provided as separate software.
  • the illustration providing function according to the disclosure may be applied not only to presentation materials but also to any field utilizing images suitable for texts such as books, newspapers, advertisements, magazines, electronic postcards, emails, instant messengers and the like.
  • The term “illustration” used in the disclosure may also be referred to by terms such as a pictogram, a flat icon, an international system of typographic picture education (ISOTYPE), an infographic, an image (video or still image), a picture, an emoticon, or the like.
  • Illustrations used in the disclosure may be directly created by a subject providing the service, or may be illustrations collected externally.
  • The subject providing the service should collect only illustrations for which copyright issues are resolved and utilize those illustrations in the service. If copyrighted illustrations are used to provide better quality illustrations, the subject providing the service should resolve the copyright issues. In this case, an additional charge may be received from the user.
  • The illustration providing function according to diverse embodiments of the disclosure may be implemented through an electronic apparatus.
  • a controlling method of an electronic apparatus according to an embodiment of the disclosure will be described with reference to FIGS. 2A and 2B.
  • FIGS. 2A and 2B are flow charts for describing a controlling method of an electronic apparatus according to an embodiment of the disclosure.
  • the electronic apparatus acquires a text based on a user input at operation S210.
  • the electronic apparatus may provide a presentation image, receive a text for the presentation image, and acquire the text for the presentation image based on the user input.
  • the presentation image may be a screen provided by executing presentation software, for example, a screen as illustrated in FIG. 1.
  • the presentation image may be displayed through a display embedded in the electronic apparatus, or may be displayed through an external display device connected to the electronic apparatus.
  • the text for the presentation image may also be referred to as a script, an announcement, or the like.
  • the text 10 may be input to a text input window provided on the screen on which the presentation image is displayed.
  • the electronic apparatus may receive the text for the presentation image through an input device.
  • the input device may include, for example, a keyboard, a touchpad, a mouse, a button, or the like.
  • the input device may be an external input device embedded in the electronic apparatus or connected to the electronic apparatus.
  • the user input may be a voice input according to an utterance of a user.
  • the electronic apparatus may acquire utterance information of the user by receiving the voice input of the user and analyzing the received voice input, and acquire a text corresponding to the acquired utterance information of the user.
  • the electronic apparatus determines (or identifies) a plurality of key terms (or key words) from the acquired text at operation S220. In addition, if the plurality of key terms is determined, the electronic apparatus acquires a plurality of first illustrations corresponding to the plurality of key terms at operation S230.
  • the electronic apparatus may input information and text for a design of a presentation image to a first artificial intelligence model learned by an artificial intelligence algorithm to thereby acquire a plurality of first illustrations associated with the text and corresponding to the design of the presentation image.
  • a text 10 “Great teamwork leads to success” may be input to the artificial intelligence model learned by the artificial intelligence algorithm to thereby acquire an illustration 20 associated with the text 10.
  • For example, the artificial intelligence model may be learned by a generative adversarial network (GAN).
  • The key concept of the GAN technology is that a generative model and a discriminative model are opposed to each other so that the performance of each is gradually improved.
  • FIG. 3 illustrates an example of a learning method through the GAN.
  • FIG. 3 is a diagram illustrating an example of a learning method through a generative adversarial network (GAN) according to an embodiment of the disclosure.
  • Referring to FIG. 3, the generative model 310 generates an arbitrary image (a fake image) from random noise, and the discriminative model 320 discriminates between a real image (or learned data) and the fake image generated by the generative model.
  • The generative model 310 is learned in a direction such that the discriminative model 320 gradually becomes unable to discriminate between the real image and the fake image, while the discriminative model 320 is learned in a direction that better discriminates between the real image and the fake image.
  • As a result, the generative model 310 may generate a fake image that is substantially similar to the real image.
  • the generative model 310 learned as described above may be utilized as the artificial intelligence model at operation S230.
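  • As a concrete, purely illustrative sketch of this adversarial training loop, the following PyTorch snippet opposes a toy generator and discriminator. The architectures, dimensions, and placeholder "real" data are assumptions for illustration, not the model disclosed herein.

```python
# Minimal GAN training sketch (illustrative only; the patent does not
# disclose a concrete architecture). All sizes are hypothetical.
import torch
import torch.nn as nn

NOISE_DIM, IMG_DIM = 64, 784  # e.g., 28x28 images, flattened

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # The discriminative model learns to tell real images from fakes.
    fake_images = generator(torch.randn(batch, NOISE_DIM)).detach()
    d_loss = (bce(discriminator(real_images), real_labels)
              + bce(discriminator(fake_images), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # The generative model learns to make the discriminator call its
    # fakes "real" -- the opposing direction described above.
    noise = torch.randn(batch, NOISE_DIM)
    g_loss = bce(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# Placeholder "real" data stands in for the learned illustration data.
for _ in range(100):
    train_step(torch.rand(32, IMG_DIM) * 2 - 1)
```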
  • At operation S230, at least one key term may be acquired from the text using the artificial intelligence model, and an illustration corresponding to the at least one acquired key term may be retrieved from a pre-stored database.
  • an artificial intelligence model for natural language processing may be provided, and such an artificial intelligence model may be used to perform morphological analysis, key term extraction, detection of meaning and association of keywords (e.g., detection of homonyms, background words, key terms, etc.).
  • FIG. 4 is a diagram for describing an illustration search method using a database including illustrations matched with tag information according to an embodiment of the disclosure.
  • For example, referring to FIG. 4, a database in which illustrations are matched to tag information may be provided.
  • The text may be entered into an artificial intelligence model for natural-language processing (NLP) to acquire ‘artificial intelligence’, ‘start-up’, and ‘increase’ as the key terms, and the illustrations matched to tag information including the key terms may be retrieved from the database.
  • When an entire sentence is input, the sentence may be divided into phrases/paragraphs to sequentially generate and provide illustrations corresponding to the phrases/paragraphs having the main meanings of the entire sentence.
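  • A minimal sketch of the tag-matched lookup described above follows; the in-memory database, the tags, and the stop-word filter standing in for the NLP key-term model are all hypothetical.

```python
# Hypothetical tag-matched illustration database. In the disclosure the
# key terms come from an NLP model; a plain stop-word filter stands in
# for it here purely for illustration.
ILLUSTRATION_DB = [
    {"file": "ai_robot.svg", "tags": {"artificial", "intelligence", "robot"}},
    {"file": "rocket.svg",   "tags": {"start-up", "launch"}},
    {"file": "chart.svg",    "tags": {"increase", "growth"}},
]

STOP_WORDS = {"the", "of", "in", "is", "are", "and", "a", "an", "to"}

def extract_key_terms(text: str) -> set[str]:
    return {w for w in text.lower().split() if w not in STOP_WORDS}

def search_illustrations(text: str) -> list[str]:
    terms = extract_key_terms(text)
    # An illustration matches when any of its tags contains a key term.
    return [entry["file"] for entry in ILLUSTRATION_DB
            if terms & entry["tags"]]

print(search_illustrations("start-up growth is on the increase"))
# -> ['rocket.svg', 'chart.svg']
```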
  • the electronic apparatus may input the text to the first artificial intelligence model to acquire a plurality of first illustrations associated with the text and having the same graphic effect as each other.
  • the electronic apparatus acquires a second illustration by synthesizing at least two or more first illustrations of the plurality of first illustrations at operation S240.
  • the electronic apparatus may input the information on the design of the presentation image and the plurality of first illustrations to the learned second artificial intelligence model to thereby acquire and output a second illustration modified so that at least two or more of the plurality of first illustrations correspond to the design of the presentation image.
  • the electronic apparatus may input the text to the artificial intelligence model to thereby acquire the plurality of first illustrations and to acquire the second illustration in which the plurality of first illustrations are synthesized as an illustration associated with the text. That is, a synthesized illustration in which several illustrations are synthesized may be provided.
  • the electronic apparatus may determine the plurality of key terms from the text, acquire the plurality of first illustrations corresponding to the plurality of key terms, and acquire the synthesized second illustration by disposing the plurality of first illustrations according to context of the plurality of key terms.
  • the electronic apparatus may acquire the second illustration by disposing and synthesizing the plurality of first illustrations according to the context of the plurality of key terms.
  • When the second illustration is acquired according to the above-described processes, the electronic apparatus outputs the acquired second illustration at operation S250. Specifically, the electronic apparatus may control the display to display the acquired second illustration, thereby outputting it through the display.
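  • Taken together, operations S210 to S250 can be sketched as the following self-contained pipeline; every helper here is a hypothetical stand-in for the artificial intelligence models described above.

```python
# End-to-end sketch of operations S210-S250. All helpers are assumed,
# simplified stand-ins for the disclosed AI models.
from dataclasses import dataclass

@dataclass
class Illustration:
    key_term: str
    role: str  # e.g., "background" or "center"

def determine_key_terms(text: str) -> list[str]:          # S220
    return [w for w in text.lower().split() if len(w) > 3]

def acquire_first_illustrations(terms: list[str]) -> list[Illustration]:  # S230
    # Treat the first term as the background word, the rest as centers.
    return [Illustration(t, "background" if i == 0 else "center")
            for i, t in enumerate(terms)]

def synthesize(firsts: list[Illustration]) -> str:        # S240
    # Background illustrations are placed behind center illustrations.
    ordered = sorted(firsts, key=lambda il: il.role != "background")
    return " + ".join(f"{il.key_term}({il.role})" for il in ordered)

def control_method(text: str) -> str:                     # S210-S250
    second = synthesize(acquire_first_illustrations(determine_key_terms(text)))
    print(second)                                          # S250: output
    return second

control_method("Great teamwork leads to success")
```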
  • the electronic apparatus may acquire a text based on a user input in a state in which a presentation image is not displayed.
  • the electronic apparatus may also display a presentation image, and acquire a text for the presentation image.
  • an embodiment of acquiring a text for a presentation image, and acquiring an illustration based on the acquired text will be described again, with reference to FIG. 2B.
  • overlapping descriptions will be omitted.
  • A presentation image is displayed at operation S210-1.
  • The presentation image is an image that is provided by executing presentation software; for example, it may be a screen as illustrated in FIG. 1.
  • When a presentation image is displayed, the electronic apparatus receives an input of a text for the presentation image at operation S220-1. For example, as illustrated in FIG. 1, the electronic apparatus may receive an input of a text 10 on a text input window provided on a screen displaying the presentation image.
  • the electronic apparatus acquires at least one illustration associated with the text by inputting the text to an artificial intelligence model learned by an artificial intelligence algorithm at operation S230-1.
  • For example, referring to FIG. 1, a text 10 “Great teamwork leads to success” may be input to the artificial intelligence model learned by the artificial intelligence algorithm to thereby acquire an illustration 20 associated with the text 10.
  • the electronic apparatus displays an illustration selected by a user among the acquired at least one illustration on the presentation image at operation S240-1.
  • FIGS. 5 to 8 are diagrams for describing an embodiment of the disclosure of acquiring a synthesized illustration in which a plurality of illustrations are synthesized.
  • the artificial intelligence model may be used to acquire “artificial intelligence”, “start-up”, “breakthrough”, and “increase” as the key terms and determine association between these key terms.
  • The association may be calculated as a numerical value (percentage) of the degree of association of the respective words.
  • a process of determining a context includes a process of determining a role of each key term in the sentence, for example, whether each key term is a word corresponding to a background, a word corresponding to a phenomenon/result, or a word corresponding to a center of the sentence.
  • a plurality of illustrations corresponding to the acquired key terms may be acquired.
  • the plurality of illustrations may be classified according to the association and the context of the key terms. For example, at least one illustration corresponding to the key terms corresponding to the background and at least one illustration corresponding to the key terms corresponding to the phenomenon/result may be classified.
  • the plurality of illustrations may be disposed and synthesized according to the association and the context of the key terms.
  • the illustration corresponding to the background word may be disposed behind other illustrations and may be set to have higher transparency than other illustrations.
  • the illustrations corresponding to the center word and the word representing the phenomenon/result may be set to have lower transparency than other illustrations, and may be expressed by a thick line.
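  • The layout rule just described (the background-word illustration behind and more transparent, the center-word illustration in front and opaque) might be sketched with Pillow as follows; the toolkit choice and the placeholder images are assumptions for illustration.

```python
# Sketch of the role-based composition rule. Pillow is an assumed
# toolkit; the patent does not prescribe one.
from PIL import Image

def compose(background_ill: Image.Image, center_ill: Image.Image,
            canvas_size=(800, 600)) -> Image.Image:
    canvas = Image.new("RGBA", canvas_size, (255, 255, 255, 0))

    # Background word: behind the others, with higher transparency.
    bg = background_ill.convert("RGBA").resize(canvas_size)
    alpha = bg.getchannel("A").point(lambda a: a // 3)  # ~33% opacity
    bg.putalpha(alpha)
    canvas.alpha_composite(bg)

    # Center word: in front, lower transparency (fully opaque here).
    fg = center_ill.convert("RGBA")
    x = (canvas_size[0] - fg.width) // 2
    y = (canvas_size[1] - fg.height) // 2
    canvas.alpha_composite(fg, (x, y))
    return canvas

# Usage with placeholder images standing in for retrieved illustrations:
result = compose(Image.new("RGBA", (400, 300), (0, 0, 255, 255)),
                 Image.new("RGBA", (200, 150), (255, 0, 0, 255)))
result.save("synthesized_illustration.png")
```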
  • the user may use the synthesized illustration as it is as illustrated in FIG. 8, or may also generate a new synthesized illustration by separately modifying the plurality of illustrations in the synthesized illustration as desired (modifying sizes, graphic effects, layout positions, and the like).
  • FIGS. 9 to 11 are diagrams for describing an embodiment of the disclosure of providing a plurality of synthesized illustrations which are synthesized in various combinations.
  • the key terms may be extracted using the artificial intelligence model, and the plurality of illustrations corresponding to each key term may be acquired.
  • the illustrations corresponding to the key term “artificial intelligence”, the illustrations corresponding to the key term “start-up”, and the illustrations corresponding to the key term “increase” may be acquired, respectively.
  • the illustrations of each key term may be configured in various combinations. In this case, using the artificial intelligence model, various combinations may be provided in consideration of similarity of the type of illustration and the type of the presentation image, the similarity between the illustrations, and the like.
  • various synthesized illustrations may be provided in a form of a recommendation list by disposing and synthesizing the illustrations of each combination based on the context of the key terms.
  • To this end, a first database constituted by templates for a layout of the illustrations defined according to the type of association between the words, and a second database constituted by templates for a layout of the illustrations defined according to the type of association between phrases/paragraphs, may be used.
  • The illustrations may be disposed by loading the templates from the databases.
  • the user may select and use a desired synthesized illustration from the recommendation list.
  • the user may also generate a new synthesized illustration by separately modifying the plurality of illustrations in the synthesized illustration as desired (modifying sizes, graphic effects, layout positions, and the like) instead of using the provided synthesized illustration as it is.
  • A weight may be assigned to the synthesized illustration selected by the user, that is, the combination selected by the user, and the artificial intelligence model may be re-learned using the weight. That is, reinforcement learning technology may be used.
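  • The combination-and-reweighting behavior might be sketched as follows; the candidate data is hypothetical, and a simple additive weight stands in for the reinforcement learning mentioned above.

```python
# Sketch of combining candidate illustrations per key term and boosting
# the weight of a user-selected combination. All data is hypothetical.
from itertools import product

candidates = {
    "artificial intelligence": ["ai_brain.svg", "ai_chip.svg"],
    "start-up": ["rocket.svg", "building.svg"],
    "increase": ["arrow_up.svg", "bar_chart.svg"],
}

# Every combination picks one illustration per key term.
combinations = list(product(*candidates.values()))
weights = {combo: 1.0 for combo in combinations}

def recommend(top_k: int = 3) -> list[tuple[str, ...]]:
    # Highest-weighted combinations form the recommendation list.
    return sorted(weights, key=weights.get, reverse=True)[:top_k]

def record_selection(combo: tuple[str, ...], reward: float = 0.5) -> None:
    # The user's pick is rewarded so it ranks higher next time.
    weights[combo] += reward

record_selection(("ai_brain.svg", "rocket.svg", "arrow_up.svg"))
print(recommend())
```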
  • the information on the design of the presentation image and the text may be input to the artificial intelligence model to thereby acquire at least one illustration associated with the text and corresponding to the design of the presentation image.
  • the information on the design of the presentation image may include information such as themes, background styles, colors, fonts, graphic effects, brightness, contrast, transparency, and the like of the presentation image, or a capture screen of an entirety of a current presentation image.
  • the artificial intelligence model may include a first artificial intelligence model that generates a basic form of the illustration and a second artificial intelligence model that modifies the illustration of the basic form to correspond to the design of the presentation image.
  • The basic form of the illustration may include a form to which color or a design effect is not applied, a line-only picture, a black-and-white picture, and the like. This will be described with reference to FIG. 12.
  • FIG. 12 is a diagram for describing an embodiment of the disclosure of acquiring an illustration associated with a text and corresponding to a design of a presentation image.
  • a first artificial intelligence model 1210 is a model that generates the illustration corresponding to the text, and is a model learned using the text and the image as learning data.
  • a second artificial intelligence model 1220 is a model that modifies the image to correspond to the design of the presentation image, and is a model learned using the information on the presentation image and the text as the learning data.
  • the information on the design of the presentation image may be information on themes, background styles, colors, fonts, graphic effects, brightness, contrast, transparency, and the like of the presentation image.
  • the second artificial intelligence model 1220 may modify an input image according to the design of the presentation image in relation to the theme, line style, line thickness, color, size, graphic effect, brightness, contrast, shape, layout, synthesis, and the like of the input image.
  • the second artificial intelligence model 1220 may list colors used in the design of the presentation image, calculate color theme information of the presentation image by using frequency, area, and the like of the colors as a weight, and color the illustration using the colors in the calculated color theme.
  • the second artificial intelligence model 1220 may define the style of the presentation image from design elements such as a line style, a line thickness, a curve frequency, an edge processing, and the like used in the design of the presentation image in addition to the color information, and may change the graphic effect of the illustration using the information.
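  • The color-theme step described in the preceding items might be sketched as follows, with pixel frequency standing in for the frequency/area weighting and Pillow as an assumed toolkit.

```python
# Sketch of the color-theme step: count the colors of the presentation
# image, take the most frequent ones as the theme, and tint a grayscale
# basic-form illustration with the dominant theme color.
from collections import Counter
from PIL import Image, ImageOps

def color_theme(presentation: Image.Image, n: int = 3) -> list[tuple]:
    # Frequency-weighted list of the presentation's dominant colors.
    pixels = list(presentation.convert("RGB").getdata())
    return [color for color, _ in Counter(pixels).most_common(n)]

def apply_theme(illustration: Image.Image, theme: list[tuple]) -> Image.Image:
    gray = illustration.convert("L")
    # Map dark tones to the dominant theme color, light tones to white.
    return ImageOps.colorize(gray, black=theme[0], white=(255, 255, 255))

# Usage with placeholder images:
slide = Image.new("RGB", (100, 100), (30, 60, 120))  # mock slide design
icon = Image.new("RGB", (64, 64), (0, 0, 0))         # mock basic-form icon
apply_theme(icon, color_theme(slide)).save("themed_illustration.png")
```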
  • In addition, the second artificial intelligence model may give dynamic motion or sound effects to the illustration. There may be movement in a certain part of the illustration, such as rotation, blinking, shaking, or repetition of increasing or decreasing beyond a certain size, and at the time of appearance of the illustration, a sound effect or short music that suitably matches the illustration may be provided together with the illustration.
  • At least one first illustration 1211 may be acquired by inputting the text for the presentation image to the first artificial intelligence model 1210. Because the first artificial intelligence model 1210 may perform the natural-language processing, the first artificial intelligence model 1210 may extract the key terms from the text and detect the meaning and the association of each key term.
  • the form of the first illustration 1211 may be generated according to the meaning of the key term, the first illustration 1211 may be formed by disposing and synthesizing the plurality of illustrations according to the association between the key terms and the meaning of the context, and the size, position, transparency, and the like of the plurality of illustrations may be determined according to the importance of the key terms (according to whether the keyword is a background word, a main word, or a sub-word).
  • At least one second illustration 1221, in which the at least one first illustration 1211 is modified to correspond to the design of the presentation image, may be acquired by inputting the information on the design of the presentation image and the at least one first illustration 1211 to the second artificial intelligence model 1220.
  • a design of a new illustration may be determined to be matched to a design of an existing generated illustration.
  • the graphic effect of the illustration may be automatically changed to be matched to the changed design.
  • graphic effects of other illustrations may be automatically changed in the same manner as the modified graphic effect.
  • The user may thus create presentation materials with a higher degree of design completeness.
  • the illustrations may be generated so that the designs between the illustrations are similar to each other.
  • the plurality of illustrations associated with the text and having the same graphic effect as each other may be acquired by inputting the text for the presentation image to the artificial intelligence model.
  • the graphic effect may include a shadow effect, a reflection effect, a neon sign effect, a stereoscopic effect, a three-dimensional rotation effect, and the like.
  • designs between illustrations acquired from one sentence/one paragraph and a sentence/paragraph designated by the user may be generated similar to each other, or designs between the entire illustrations of the same presentation material may be generated similar to each other.
  • the illustration selected by the user among one or more illustrations acquired according to the diverse embodiments described above is displayed on the presentation image at operation S240.
  • At least one illustration acquired according to the embodiments described above may be provided in some area in the screen on which the presentation image is displayed, and here, the selected illustration may be displayed on the presentation image.
  • the acquired illustration may be displayed on the presentation image without the selection of the user.
  • the illustration displayed on the presentation image may be edited by an additional user operation.
  • FIGS. 13 to 16 are diagrams for describing a user interface for providing illustrations according to diverse embodiments of the disclosure.
  • an illustration generation function may be included in presentation software.
  • a user interface (UI) 1320 for searching for an illustration may be displayed, and when the text is input into a text input area 1321 provided in the UI 1320 and a search 1323 is selected, a search result 1325 including at least one illustration associated with the text may be provided.
  • An illustration selected by the user among the illustrations included in the search result 1325 may be displayed on the presentation image 1330.
  • the user may display the illustration on the presentation image 1330 by an operation such as clicking, dragging and dropping, long touch, or the like using an input device such as a mouse or a touch pad.
  • FIGS. 14 and 15 are diagrams for describing a method for providing an illustration according to an embodiment of the disclosure.
  • an illustration generation button 1410 may be provided in a script input window 1400 provided on the screen provided by the presentation software.
  • When the user inputs the text to the script input window 1400 and selects the illustration generation button 1410, at least one illustration 1421 associated with the text may be displayed on the presentation image 1420.
  • At least one illustration 1531 associated with a designated text 1520 may be displayed on the presentation image 1530.
  • the illustration may be generated for each designated sentence.
  • FIG. 16 illustrates a method for providing an illustration according to an embodiment of the disclosure.
  • A menu 1640 may be displayed, and when the user selects an illustration generation item included in the menu 1640, the block-designated text is input to a text input area 1610 of a UI 1600 for searching for an illustration. Thereafter, when the user selects a search 1620, a search result 1630 including at least one illustration associated with the block-designated text may be provided.
  • In the search result 1630, several illustrations may be listed according to scores evaluated by the number of uses by other users, the degree of design matching, and the like.
  • the plurality of key terms may be extracted from the text, priorities of the plurality of key terms may be ranked, and information on the plurality of key terms and the priorities may be defined as a key term vector and may be input to the artificial intelligence model learned to generate the illustration to thereby generate the form of an illustration.
  • The sentence is input into the artificial intelligence model that performs the natural language processing, and is divided into units of phrases/paragraphs to detect the meanings of the corresponding phrases/paragraphs.
  • a relationship between the respective phrases/paragraphs (background/phenomenon, cause/result, contrast, assertion and evidence, etc.) is defined.
  • words of the sentence in the respective phrases/paragraphs are discriminated.
  • each word is separately prioritized in the meanings of the phrase/paragraph in which each word is included.
  • An association (subject and predicate, predicate and object, or subject, predicate, and object, etc.) between the N (e.g., two) main words which are prioritized in the phrase/paragraph, and the degree of connection between the words, are defined. For example, if a sentence “gradually growing start-up challenge” is input, core words such as ‘growing (1)’, ‘start-up (2)’, ‘challenge (3)’, and ‘gradually (4)’ may be extracted and prioritized.
  • the core words may be shaped into the illustrations from a concept of a small range.
  • In addition, a first database may be constituted by templates for a layout of the illustrations defined according to the type of association between the words, and a second database may be constituted by templates for a layout of the illustrations defined according to the type of association between phrases/paragraphs.
  • Illustrations that match the meaning of each word may be searched, and the illustration that matches with the highest probability may be selected.
  • A template is internally loaded and prepared according to the association of the words, and the illustration matched to each word is inserted into the template to generate primary illustrations.
  • the primary illustrations are inserted into the template loaded from the second database to generate a secondary illustration.
  • the secondary illustration as described above may be defined as a basic form.
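  • The two-stage template pipeline (primary illustrations from word-association templates, then a secondary illustration from a phrase-relation template) might be sketched as follows; the template databases, slot names, and relation types are hypothetical stand-ins for the layout templates described above.

```python
# Sketch of the two-stage template pipeline. Real templates would carry
# full layout geometry rather than the simple slot names assumed here.
WORD_TEMPLATES = {      # first database: word-association layouts
    "subject-predicate": ("left", "right"),
    "predicate-object": ("top", "bottom"),
}
PHRASE_TEMPLATES = {    # second database: phrase/paragraph relations
    "cause-result": ("panel", "connector", "panel"),
    "background-phenomenon": ("backdrop", "panel"),
}

def primary_illustration(association: str, matched: list[str]) -> dict:
    # Insert the illustration matched to each word into a template slot.
    return dict(zip(WORD_TEMPLATES[association], matched))

def secondary_illustration(relation: str, primaries: list[dict]) -> list:
    queue = list(primaries)
    # Panel slots receive primary illustrations; connectors get an arrow.
    return [queue.pop(0) if slot != "connector" else "->"
            for slot in PHRASE_TEMPLATES[relation]]

p1 = primary_illustration("subject-predicate", ["startup.svg", "grow.svg"])
p2 = primary_illustration("predicate-object", ["challenge.svg", "goal.svg"])
print(secondary_illustration("cause-result", [p1, p2]))  # the basic form
```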
  • a graphic effect on the basic form of the illustration is automatically changed using the basic form of the illustration and the design of the current presentation image.
  • For example, colors used in the design of the current presentation image may be listed, color theme information of the current presentation image may be calculated by using the frequency, area, and the like of the colors as a weight, and the illustration of the basic form may be colored using the colors in the calculated color theme.
  • a design of the current presentation image may be defined from design elements such as a line style, a line thickness, a curve frequency, an edge processing, and the like used in the design of the presentation image in addition to the color information, and the graphic effect of the illustration may be changed using the information.
  • the user may post-edit the illustration generated as described above.
  • The illustration may also be re-generated to be matched to a change in the design of the presentation image. The user may select each template or each searched primary illustration, and the selection of the user may be scored to perform reinforcement learning of the artificial intelligence model. In a template or illustration search using the reinforcement learning concept, results that users in general or individual users prefer may be gradually learned and shown.
  • FIGS. 17 and 18A are diagrams for describing embodiments in which an illustration generation function according to the disclosure is applied to a messenger program.
  • At least one emoticon associated with the input text may be generated and displayed.
  • the user may select a desired emoticon among the generated emoticons and send the selected emoticon to a conversation partner.
  • In addition to the emoticon, the user may generate an illustration that matches the text and send it to the other party.
  • a background image that matches the input text may be generated.
  • the text input to a text window may be inserted into the background image.
  • a position of the text may be changed by a user operation such as touch and drag.
  • a message in the form of an image in which the text is inserted into the background image may be sent to a conversation partner.
  • FIG. 18B is a diagram for describing an embodiment of the disclosure in which the illustration generation function is applied to a keyboard program.
  • the keyboard program may operate in conjunction with various other programs.
  • the keyboard program may operate in conjunction with a web browser program, a document creation program, a chatting program, a messenger program, or the like. That is, illustration information associated with the text input to the keyboard program may be acquired and may be transferred to the web browser program, the document creation program, the chatting program, or the messenger program.
  • FIG. 19 is a block diagram for describing a configuration of an electronic apparatus 100 according to an embodiment of the disclosure.
  • the electronic apparatus 100 is an apparatus capable of performing all or some of the operations of the embodiments described above with reference to FIGS. 1 to 18A.
  • the electronic apparatus 100 includes a memory 110 and a processor 120.
  • the memory 110 may include an internal memory or an external memory.
  • The internal memory may include at least one of, for example, a volatile memory (for example, a dynamic random access memory (DRAM), a static RAM (SRAM), a synchronous dynamic RAM (SDRAM), or the like), a non-volatile memory (for example, a one time programmable read only memory (OTPROM), a programmable ROM (PROM), an erasable and programmable ROM (EPROM), an electrically erasable and programmable ROM (EEPROM), a mask ROM, a flash ROM, or the like), a flash memory (for example, a NAND flash, a NOR flash, or the like), a hard drive, or a solid state drive (SSD).
  • the external memory may include a flash drive such as a compact flash (CF), a secure digital (SD), a micro secure digital (Micro-SD), a mini secure digital (Mini-SD), an extreme digital (xD), a multi-media card (MMC), a memory stick, or the like.
  • the external memory may be functionally and/or physically connected to the electronic apparatus 100 through various interfaces.
  • the memory 110 is accessed by the processor 120, and readout, writing, correction, deletion, update, and the like, of data in the memory 110 may be performed by the processor 120.
  • a term ‘memory’ includes the memory 110, a read only memory (ROM) in the processor 120, a random access memory (RAM), or a memory card (for example, a micro secure digital (SD) card or a memory stick) mounted in the electronic apparatus 100.
  • The memory 110 may store computer executable instructions for performing the control method according to the embodiments described above with reference to FIGS. 1 to 18A.
  • the memory 110 may store presentation software, messenger software, and the like.
  • the memory 110 may store the artificial intelligence models according to the embodiments described above with reference to FIGS. 1 to 18A.
  • the artificial intelligence model may be learned by an external server and may be provided to the electronic apparatus 100.
  • the electronic apparatus 100 may download the artificial intelligence model from the external server and store the artificial intelligence model in the memory 110, and may receive and store an updated artificial intelligence model from the external server when the artificial intelligence model is updated (or re-learned).
  • the electronic apparatus 100 may be connected to the external server through a local area network (LAN), an Internet network, or the like.
  • the memory 110 may store various databases such as a database constituted by illustrations to which tag information is matched, a database constituted by templates defining the form of layout of the illustrations according to an association of the words in the sentence, a database constituted by templates defining the form of layout of the illustrations according to an association between the phrases/paragraphs of the sentence, and the like.
  • the memory 110 may also be implemented as an external server of the electronic apparatus 100 such as a cloud server.
  • the processor 120 is a component for controlling an overall operation of the electronic apparatus 100.
  • The processor 120 may be implemented by, for example, a central processing unit (CPU), an application specific integrated circuit (ASIC), a system-on-a-chip (SoC), a microcomputer (MICOM), or the like.
  • the processor 120 may drive an operating system (OS) or an application program to control a plurality of hardware or software components connected to the processor 120 and perform various kinds of data processing and calculation.
  • the processor 120 may further include a graphic processing unit (GPU) and/or an image signal processor.
  • the processor 120 executes the computer executable instructions stored in the memory 110 to enable the electronic apparatus 100 to perform the functions according to all or some of the embodiments described in FIGS. 1 to 18A.
  • the processor 120 may acquire the text based on the user input by executing at least one or more instructions stored in the memory 110, determine the plurality of key terms from the acquired text, acquire a plurality of first illustrations corresponding to the plurality of key terms, acquire a second illustration by synthesizing at least two or more first illustrations of the plurality of first illustrations, and output the acquired second illustration.
  • the processor 120 may provide the presentation image, acquire at least one illustration associated with the text by inputting the text to the artificial intelligence model learned by the artificial intelligence algorithm when the text for the presentation image is input, and provide an illustration selected by the user among one or more acquired illustrations onto the presentation image.
  • the electronic apparatus 100 may use a personal assistant program, which is an artificial intelligence dedicated program (or an artificial intelligence agent), to acquire the illustrations associated with the text.
  • the personal assistant program is a dedicated program for providing an artificial intelligence based service, and may be executed by the processor 120.
  • the processor 120 may be a general-purpose processor or a separate AI-dedicated processor.
  • the electronic apparatus 100 itself includes a display, and the processor 120 may control the display to display various images.
  • the electronic apparatus 100 may be connected to an external display device to output an image signal to the external display device so that various images are displayed on the external display device.
  • the electronic apparatus 100 may be connected to the external display device by wire or wirelessly.
  • the electronic apparatus 100 may include at least one of a component input jack, a high-definition multimedia interface (HDMI) input port, a USB port, or ports such as red, green, and blue (RGB), digital visual interface (DVI), HDMI, DisplayPort (DP), and Thunderbolt, and may be connected to the external display device through such a port.
  • the electronic apparatus 100 may be connected to the external display device through communication methods such as wireless fidelity (WiFi), wireless display (WiDi), wireless HD (WiHD), wireless home digital interface (WHDI), Miracast, Wi-Fi Direct, Bluetooth (e.g., Bluetooth Classic), Bluetooth Low Energy, AirPlay, Zigbee, and the like.
  • the display included in the electronic apparatus 100 or the external display device connected to the electronic apparatus 100 may include, for example, a liquid crystal display (LCD), a light-emitting diode (LED) display, an organic light-emitting diode (OLED) display (e.g., an active-matrix organic light-emitting diode (AMOLED) or passive-matrix OLED (PMOLED) display), a microelectromechanical systems (MEMS) display, an electronic paper display, or a touchscreen.
  • the processor 120 “providing” images, illustrations, icons, and the like includes controlling an internal display of the electronic apparatus 100 to display the images or the illustrations through the internal display, or outputting image signals for the images, the illustrations, and the like to the external display device of the electronic apparatus 100.
  • the electronic apparatus 100 itself may include an input device and may receive various user inputs through the input device.
  • the input device may include, for example, a touch panel, a touch screen, a button, a sensor capable of receiving a motion input, a camera, or a microphone capable of receiving a voice input.
  • the electronic apparatus 100 may be connected to an external input device and receive various user inputs through the external input device.
  • the external input device may include a keyboard, a mouse, a remote controller, or the like.
  • the electronic apparatus 100 may be connected to the external input device by wire or wirelessly.
  • the electronic apparatus 100 may be connected to the external input device by wire through a USB port or the like.
  • the electronic apparatus 100 may be wirelessly connected to the external input device through communication methods such as infrared data association (IrDA), radio frequency identification (RFID), wireless fidelity (WiFi), Wi-Fi Direct, Bluetooth (e.g., Bluetooth Classic or Bluetooth Low Energy), Zigbee, and the like.
  • the electronic apparatus 100 may receive various user inputs, such as a text for generating an illustration and a user input for selecting an illustration, through the input device included in the electronic apparatus 100 itself or through the external input device.
  • the processor 120 may provide a screen provided with the text input window as illustrated in FIG. 1, and when the text is input to the text input window, the processor 120 may input the text to the artificial intelligence model to acquire at least one illustration associated with the text.
  • the processor 120 may provide the screen as illustrated in FIG. 13, and when an illustration generation menu 1310 is selected, the processor 120 may provide a UI 1320 for searching for an illustration.
  • the processor 120 may input the text to the artificial intelligence model to provide a search result 1325 including at least one illustration associated with the text.
  • the processor 120 may provide an illustration selected from the search result 1325 to a presentation image 1330.
  • the processor 120 may provide the screen as illustrated in FIG. 14, and when the text is input to a script input window 1400 and an illustration generation button 1410 is selected, the processor 120 may input the input text to the artificial intelligence model to provide at least one illustration 1421 associated with the text.
  • the processor 120 may provide the screen as illustrated in FIG. 15, and when a user input for designating the text and a user input selecting an illustration generation button 1510 are received, the processor 120 may input the designated text 1520 to the artificial intelligence model to provide at least one illustration 1531 associated with the text.
  • the processor 120 may designate a block of the text as illustrated in FIG. 16 and provide a menu 1640 when a specific user operation is input for the block-designated text; when an illustration generation item included in the menu 1640 is selected, the processor 120 may provide a UI for searching for an illustration in which the block-designated text is entered into a text input area 1610, and when a search 1620 is selected, may input the block-designated text to the artificial intelligence model to provide a search result 1630 including at least one illustration associated with the text.
  • the processor 120 may provide an illustration selected from the search result 1630 to the presentation image 1330.
  • the processor 120 may input the information on the design of the presentation image and the text to the artificial intelligence model to thereby acquire at least one illustration associated with the text and corresponding to the design of the presentation image.
  • the processor 120 may input the text to the first artificial intelligence model 1210 to acquire at least one first illustration 1211, and may input the information on the design of the presentation image and at least one first illustration 1211 to the second artificial intelligence model 1220 to acquire at least one second illustration 1221 modified so that at least one first illustration 1211 corresponds to the design of the presentation image.
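A minimal sketch of this two-stage arrangement follows, assuming stub functions first_model and second_model in place of the trained models 1210 and 1220; the function names and the palette-based design information are hypothetical.

```python
def first_model(text):
    # Hypothetical stage 1: text -> candidate first illustrations (stubbed).
    return ["chart.svg", "factory.svg"]

def second_model(design_info, first_illustrations):
    # Hypothetical stage 2: modify each first illustration so that it matches
    # the design (here, a color palette) of the presentation image.
    return [f"{ill}#palette={design_info['palette']}" for ill in first_illustrations]

design_info = {"palette": "pastel"}  # illustrative design information
print(second_model(design_info, first_model("factory output grew steadily")))
# -> ['chart.svg#palette=pastel', 'factory.svg#palette=pastel']
```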
  • the processor 120 may input the text to the artificial intelligence model to acquire the plurality of illustrations associated with the text and having the same graphic effect as each other.
  • the processor 120 may input the text to the artificial intelligence model to acquire a plurality of first illustrations and acquire a second illustration in which the plurality of first illustrations are synthesized as the illustration associated with the text.
  • the processor 120 may acquire the illustration in which the plurality of illustrations is synthesized using the artificial intelligence model as described with reference to FIGS. 5 to 11.
  • the memory 110 may store the database including the illustrations matched to the tag information as described in FIG. 4.
  • the processor 120 may input the text to the artificial intelligence model to acquire at least one key term from the text, and may search for an illustration corresponding to the at least one acquired key term in the database stored in the memory 110.
  • the database may also be stored in an external server of the electronic apparatus 100.
  • the processor 120 may re-learn the artificial intelligence model by applying feedback data including information on an illustration selected by the user among one or more illustrations acquired using the artificial intelligence model.
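As an illustrative sketch of collecting such feedback data, assuming hypothetical helpers record_feedback and build_relearning_data; the actual re-learning procedure is not specified here.

```python
feedback_log = []

def record_feedback(text, candidates, selected):
    # Keep which of the acquired illustrations the user actually selected.
    feedback_log.append({"text": text, "candidates": candidates, "selected": selected})

def build_relearning_data(log):
    # The selected illustration yields a positive (text, image) pair for
    # re-learning; the unselected candidates can serve as negatives.
    positives = [(e["text"], e["selected"]) for e in log]
    negatives = [(e["text"], c) for e in log for c in e["candidates"] if c != e["selected"]]
    return positives, negatives

record_feedback("electric car sales", ["car.png", "plug.png"], "plug.png")
print(build_relearning_data(feedback_log))
# -> ([('electric car sales', 'plug.png')], [('electric car sales', 'car.png')])
```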
  • the processor 120 may input the text entered into the UI provided by executing the messenger program to the artificial intelligence model to provide an emoticon associated with the text as described in FIG. 17, or to provide a background image as described in FIG. 18A, for example.
  • FIG. 20A is a flow chart of a network system using an artificial intelligence model according to diverse embodiments of the disclosure.
  • the network system using the artificial intelligence system may include a first component 2010a and a second component 2020a.
  • the first component 2010a may be an electronic apparatus such as a desktop, a smartphone, a tablet PC, or the like
  • the second component 2020a may be a server in which the artificial intelligence model, the database, and the like are stored.
  • the first component 2010a may be a general purpose processor and the second component 2020a may be an artificial intelligence dedicated processor.
  • the first component 2010a may be at least one application and the second component 2020a may be an operating system (OS).
  • the second component 2020a is a component that is more integrated, more dedicated, has less delay, has superior performance, or has more resources than the first component 2010a, and may be a component capable of processing the many calculations required at the time of generating, updating, or applying the model faster and more efficiently than the first component 2010a.
  • An interface for transmitting/receiving data between the first component 2010a and the second component 2020a may be defined.
  • an application program interface (API) having learning data to be applied to the model as an argument value (or an intermediate value or a transfer value) may be defined.
  • the API may be defined as a set of subroutines or functions that may be called for any processing of another protocol (e.g., a protocol defined in the second component 2020a) in any one protocol (e.g., a protocol defined in the first component 2010a). That is, an environment in which an operation of another protocol may be performed in any one protocol through the API may be provided.
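A hypothetical Python rendering of such an API subroutine follows, with the text and optional design information as argument (transfer) values; the function name, parameters, and stubbed body are assumptions, not the disclosed interface.

```python
from typing import Optional

def acquire_illustrations(text: str, design_info: Optional[dict] = None, top_k: int = 5):
    # Hypothetical API subroutine: the text (and optional design information)
    # is the argument/transfer value; illustration identifiers come back.
    # The stubbed body stands in for the processing of the other protocol.
    return [f"illustration_{i}" for i in range(top_k)]

print(acquire_illustrations("renewable energy adoption", {"theme": "dark"}, top_k=3))
# -> ['illustration_0', 'illustration_1', 'illustration_2']
```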
  • the first component 2010a may be input with a text at operation S2001a.
  • the first component 2010a may be input with the text through various input devices such as a keyboard, a touchscreen, and the like.
  • the first component 2010a may be input with a voice and convert the voice into a text.
  • the text may be a script for the presentation image or a text input to the text input window of the messenger program.
  • the first component 2010a may send the input text to the second component 2020a at operation S2003a.
  • the first component 2010a may be connected to the second component 2020a through a local area network (LAN) or an Internet network, or may be connected to the second component 2020a through a wireless communication (e.g., wireless communication such as GSM, UMTS, LTE, WiBRO, or the like) method.
  • the first component 2010a may send the input text as it is to the second component 2020a, or may perform a natural language processing on the input text and transmit it to the second component 2020a.
  • the first component 2010a may store an artificial intelligence model for performing the natural language processing.
  • the second component 2020a may input a received text to the artificial intelligence model to acquire at least one illustration associated with the text at operation S2005a.
  • the second component 2020a may store a database including various data necessary to generate the artificial intelligence model and the illustration.
  • the second component 2020a may perform the operation using the artificial intelligence model according to the diverse embodiments described above.
  • the second component 2020a may send at least one acquired illustration to the first component 2010a at operation S2007a.
  • the second component 2020a may send at least one acquired illustration to the first component 2010a in the form of an image file.
  • the second component 2020a may send information on a storage address (e.g., a URL address) of at least one acquired illustration to the first component 2010a.
  • the first component 2010a may provide the illustration received from the second component 2020a at operation S2009a.
  • the first component 2010a may display at least one received illustration through a display or an external display device included in the first component 2010a itself.
  • the user may select and use an illustration desired to be used from one or more displayed illustrations.
  • the illustration may be used to create the presentation image, and may be used as an emoticon, a background, etc. to be sent to a conversation partner in a messenger program.
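The sketch below walks the S2001a–S2009a exchange with stub functions; the component names, the two return variants (image files versus storage addresses), and the example URL are illustrative assumptions.

```python
def second_component(text, as_address=False):
    # S2005a: apply the (stubbed) artificial intelligence model to the text.
    acquired = ["sun.png", "panel.png"]
    if as_address:
        # S2007a, one variant: send only storage addresses of the results.
        return [f"https://example.com/illustrations/{name}" for name in acquired]
    return acquired  # S2007a, other variant: send the image files themselves

def first_component(text):
    # S2001a/S2003a: take the input text and send it to the second component.
    received = second_component(text, as_address=True)
    # S2009a: provide the received illustrations for the user to choose from.
    for item in received:
        print("display:", item)

first_component("solar power for households")
```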
  • the artificial intelligence model as described above may be a determination model learned based on an artificial intelligence algorithm, for example, a model based on a neural network.
  • the learned artificial intelligence model may be designed to simulate a human brain structure on a computer, and may include a plurality of network nodes having weights that simulate the neurons of a human neural network. The plurality of network nodes may each form a connection relationship to simulate the synaptic activity of neurons exchanging signals via synapses.
  • the learned artificial intelligence model may include, for example, a neural network model or a deep learning model developed from the neural network model.
  • the plurality of network nodes may exchange data according to a convolution connection relationship while being located at different depths (or layers).
  • Examples of the learned artificial intelligence model may include a deep neural network (DNN), a recurrent neural network (RNN), a bidirectional recurrent deep neural network (BRDNN), and the like, but are not limited thereto.
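Purely to make the model family concrete, here is a minimal recurrent text encoder, assuming PyTorch; the architecture, sizes, and names are arbitrary and are not the disclosed model.

```python
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    # A minimal recurrent model of the kind listed above; the sizes are
    # arbitrary, and the output is a sentence embedding that a retrieval
    # or generation head could consume.
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):
        x = self.embed(token_ids)
        _, h = self.rnn(x)
        return h[-1]  # final hidden state as the sentence representation

encoder = TextEncoder()
tokens = torch.randint(0, 10000, (1, 12))  # a 12-token dummy sentence
print(encoder(tokens).shape)  # torch.Size([1, 256])
```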
  • the first component 2010a may use a personal assistant program, which is an artificial intelligence dedicated program (or an artificial intelligence agent), to acquire the illustrations associated with the text described above.
  • the personal assistant program is a dedicated program for providing an artificial intelligence based service, and may be executed by an existing general purpose processor or a separate AI dedicated processor.
  • the artificial intelligence agent may be operated (or executed).
  • the artificial intelligence agent may send the text to the second component 2020a, and may provide at least one illustration received from the second component 2020a.
  • the artificial intelligence agent may also be operated.
  • the artificial intelligence agent may be in a pre-executed state before the predetermined user input is detected or the button included in the first component 2010a is selected.
  • the artificial intelligence agent of the first component 2010a may acquire the illustration based on the text.
  • the artificial intelligence agent may be in a standby state before the predetermined user input is detected or the button included in the first component 2010a is selected.
  • the standby state is a state in which a reception of a predefined user input is detected to control a start of an operation of the artificial intelligence agent.
  • the first component 2010a may operate the artificial intelligence agent and provide the illustration acquired based on the text.
  • the artificial intelligence agent may control the artificial intelligence model to acquire at least one illustration associated with the text.
  • the artificial intelligence agent may perform the operation of the second component 2020a described above.
  • FIG. 20B is a flow chart of a network system using an artificial intelligence model according to an embodiment of the disclosure.
  • a network system using an artificial intelligence system may include a first component 2010b, a second component 2020b, and a third component 2030b.
  • the first component 2010b may be an electronic apparatus such as a desktop, a smartphone, a tablet PC, or the like
  • the second component 2020b may be a server running presentation software such as Microsoft PowerPoint™, Keynote™, or the like
  • the third component 2030b may be a server in which an artificial intelligence model or the like that performs a natural language processing is stored.
  • An interface for transmitting/receiving data between the first component 2010b, the second component 2020b, and the third component 2030b may be defined.
  • the first component 2010b may be input with a text at operation S2001b.
  • the first component 2010b may be input with the text through various input devices such as a keyboard, a touchscreen, and the like.
  • the first component 2010b may be input with a voice and convert the voice into a text.
  • the first component 2010b may send the input text to the third component 2030b at operation S2003b.
  • the first component 2010b may be connected to the third component 2030b through a local area network (LAN) or an Internet network, or may be connected to the third component 2030b through a wireless communication (e.g., wireless communication such as GSM, UMTS, LTE, WiBRO, or the like) method.
  • the third component 2030b may input the received text to the artificial intelligence model to acquire at least one key term associated with the text and an association between the key terms at operation S2005b.
  • the third component 2030b may send the key term and the association between the key terms to the second component 2020b at operation S2007b.
  • the second component 2020b may generate a synthesized illustration using the received key terms and the association between the key terms at operation S2009b.
  • the second component 2020b may transfer the generated synthesized illustration to the first component 2010b at operation S2011b.
  • the first component 2010b may display at least one received illustration through a display or an external display device included in the first component 2010b itself.
  • the illustration may be used to create the presentation image, and may be used as an emoticon, a background, etc. to be sent to a conversation partner in a messenger program.
  • FIG. 20C is a configuration diagram of a network system according to an embodiment of the disclosure.
  • a network system using an artificial intelligence model may include a first component 2010c and a second component 2020c.
  • the first component 2010c may be an electronic apparatus such as a desktop, a smartphone, a tablet PC, or the like
  • the second component 2020c may be a server in which the artificial intelligence model, the database, and the like are stored.
  • the first component 2010c may include an inputter 2012c and an outputter 2014c.
  • the inputter 2012c may receive a text through an input device.
  • the input device may include, for example, a keyboard, a touchpad, a mouse, a button, or the like.
  • the input device may be embedded in the first component 2010c or may be an external input device connected to the first component 2010c.
  • the outputter 2014c may output an image through an output device. For example, the outputter 2014c may output an illustration based on information received from the second component 2020c through the output device.
  • the output device may include, for example, a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, a microelectromechanical systems (MEMS) display, an electronic paper display, or a touchscreen.
  • the output device may be embedded in the first component 2010c or may be an external output device connected to the first component 2010c.
  • the second component 2020c may include a natural language processor 2022c, a database 2026c, and an illustration generator 2024c.
  • the natural language processor 2022c may extract key terms from the text using the artificial intelligence model and detect an association and a context between the key terms.
  • the database 2026c may store illustrations matched to tag information. For example, the illustrations matched to the tag information including the key terms output from the natural language processor 2022c may be retrieved from the database.
  • the illustration generator 2024c may generate a synthesized illustration by combining a plurality of illustrations retrieved from the database 2026c based on the received key terms and the association between the key terms.
  • although the natural language processor 2022c and the illustration generator 2024c are illustrated as being included in one server, this is merely one example.
  • the natural language processor 2022c and the illustration generator 2024c may also be included in a separate server, and may also be included in the first component 2010c.
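A hypothetical sketch of the FIG. 20C pipeline follows, wiring a heuristic natural_language_processor to an illustration_generator over the TagDatabase sketch above (db); the adjacency-based association is a stand-in for the model's output.

```python
def natural_language_processor(text):
    # Heuristic stand-in: key terms plus a crude association (adjacency).
    terms = [w.strip(".,") for w in text.split() if len(w.strip(".,")) >= 3]
    associations = list(zip(terms, terms[1:]))
    return terms, associations

def illustration_generator(terms, associations, db):
    # Retrieve one illustration per key term from the tag database and
    # combine them; the associations could drive the layout, here simply
    # the left-to-right order of the terms.
    parts = []
    for term in terms:
        matches = db.find_by_tag(term)
        if matches:
            parts.append(matches[0].path)
    return " | ".join(parts)

terms, associations = natural_language_processor("a car beside a tree")
print(illustration_generator(terms, associations, db))  # -> car.png | tree.png
```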
  • FIG. 21 is a block diagram illustrating a configuration of an electronic apparatus for learning and using an artificial intelligence model according to an embodiment of the disclosure.
  • an electronic apparatus 2100 includes a learner 2110 and a determiner 2120.
  • the electronic apparatus 2100 of FIG. 21 may correspond to the electronic apparatus 100 of FIG. 19 and the second component 2020a of FIG. 20A.
  • the learner 2110 may generate and learn an artificial intelligence model having a reference for acquiring at least one image (an illustration, an emoticon, or the like) associated with a text using learning data.
  • the learner 2110 may generate an artificial intelligence model having a determination reference using collected learning data.
  • the learner 2110 may generate, learn, or re-learn the artificial intelligence model so as to acquire an image associated with the text by using the text and the image as the learning data.
  • the learner 2110 may generate, learn, or re-learn the artificial intelligence model for modifying the image so as to correspond to the design of the presentation by using information on the image and the design of the presentation as the learning data.
  • the determiner 2120 may acquire the image associated with the text by using predetermined data as input data of the learned artificial intelligence model.
  • the determiner 2120 may acquire the image associated with the text by using the text as the input data of the learned artificial intelligence model. As another example, the determiner 2120 may modify the image so as to correspond to the design of the presentation by using the information on the image and the design of the presentation as the input data of the artificial intelligence model.
  • At least a portion of the learner 2110 and at least a portion of the determiner 2120 may be implemented in a software module or manufactured in the form of at least one hardware chip and may be mounted in the electronic apparatus 100 or the second component 2020a.
  • at least one of the learner 2110 or the determiner 2120 may also be manufactured in the form of a dedicated hardware chip for artificial intelligence (AI), or may be manufactured as a portion of an existing general purpose processor (e.g., CPU or application processor) or a graphic dedicated processor (e.g., GPU) and mounted in a variety of electronic apparatuses.
  • the dedicated hardware chip for artificial intelligence is a dedicated processor specialized in a probability calculation, and has higher parallel processing performance than the general purpose processor of the related art, so it may quickly process calculation operations in an artificial intelligence field such as machine learning.
  • when the learner 2110 and the determiner 2120 are implemented as a software module (or a program module including instructions), the software module may be stored in a non-transitory computer readable medium.
  • the software module may be provided by an operating system (OS), or may be provided by a predetermined application. Alternatively, some of the software modules may be provided by the operating system (OS), and the remaining of the software modules may be provided by the predetermined application.
  • the learner 2110 and the determiner 2120 may also be mounted in one electronic apparatus, or may also be mounted in separate electronic apparatuses, respectively.
  • in this case, the model information constructed by the learner 2110 may be provided to the determiner 2120 by wire or wirelessly, and the data input to the determiner 2120 may also be provided to the learner 2110 as additional learning data.
  • FIG. 22 is a block diagram of the learner 2110 and the determiner 2120 according to diverse embodiments of the disclosure.
  • the learner 2110 may include a learning data acquirer 2110-1 and a model learner 2110-4.
  • the learner 2110 may selectively further include at least one of a learning data pre-processor 2110-2, a learning data selector 2110-3, or a model evaluator 2110-5.
  • the learning data acquirer 2110-1 may acquire learning data necessary for an artificial intelligence model for acquiring at least one image associated with a text. As an embodiment of the disclosure, the learning data acquirer 2110-1 may acquire information on the text, the image, the design of the presentation, and the like as the learning data. The learning data may be data collected or tested by the learner 2110 or a manufacturer of the learner 2110.
  • the model learner 2110-4 may learn the artificial intelligence model so as to have a reference of acquiring the image associated with the text, using the learning data.
  • the model learner 2110-4 may learn the artificial intelligence model through supervised learning using at least a portion of the learning data as the reference for acquiring the image associated with the text.
  • the model learner 2110-4 may learn the artificial intelligence model through unsupervised learning of finding the reference for acquiring the image associated with the text by self-learning using the learning data without any supervision, for example.
  • the model learner 2110-4 may learn the artificial intelligence model using a generative adversarial network (GAN) technology.
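For orientation only, here is a minimal GAN training step on toy 1-D feature vectors, assuming PyTorch; a real embodiment would generate illustration images, and every size and name below is an assumption.

```python
import torch
import torch.nn as nn

# Toy generator and discriminator over 1-D feature vectors; a real
# embodiment would generate illustration images instead.
G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
D = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(64, 8)  # stand-in for features of real illustrations
z = torch.randn(64, 16)    # noise input to the generator

# Discriminator step: push real toward 1 and generated toward 0.
fake = G(z).detach()
loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: try to make the discriminator label fakes as real.
loss_g = bce(D(G(z)), torch.ones(64, 1))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
print(float(loss_d), float(loss_g))
```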
  • the model learner 2110-4 may learn the artificial intelligence model through reinforcement learning using feedback as to whether a determination result according to the learning is correct, for example.
  • the model learner 2110-4 may learn the artificial intelligence model using a learning algorithm including, for example, error back-propagation or gradient descent.
  • the model learner 2110-4 may learn a selection reference about which learning data should be used to acquire the image associated with the text using the input data.
  • the model learner 2110-4 may determine an artificial intelligence model having a high relevance between the input learning data and the basic learning data as the artificial intelligence model to be learned.
  • the basic learning data may be pre-classified for each type of data
  • the artificial intelligence model may be pre-constructed for each type of data.
  • the basic learning data may be pre-classified by various references such as an area in which the learning data is generated, a time at which the learning data is generated, a size of the learning data, a genre of the learning data, a generator of the learning data, types of objects in the learning data, and the like.
  • the model learner 2110-4 may store the learned artificial intelligence model.
  • the model learner 2110-4 may store the learned artificial intelligence model in the memory 110 of the electronic apparatus 100 or the memory of the second component 2020.
  • the artificial intelligence model learned from a set of texts and images has learned the characteristics of the image forms corresponding to the contents that the texts mean.
  • the artificial intelligence model learned from a set of images and information on the designs of presentation images has learned which characteristics an image has for a given design of a presentation image.
  • the learner 2110 may further include the learning data pre-processor 2110-2 and the learning data selector 2110-3 to improve the determination result of the artificial intelligence model or to save resources or time required for the generation of the artificial intelligence model.
  • the learning data pre-processor 2110-2 may pre-process the acquired data so that the acquired data may be used for learning to acquire the image associated with the text.
  • the learning data pre-processor 2110-2 may process the acquired data into a predetermined format so that the model learner 2110-4 may use the acquired data to acquire the image associated with the text.
  • the learning data pre-processor 2110-2 may remove, from the input texts, words (e.g., adverbs, interjections, etc.) that are not needed when the artificial intelligence model provides a response.
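A toy sketch of such pre-processing follows, assuming a crude suffix heuristic in place of real part-of-speech tagging; the word lists are hypothetical.

```python
INTERJECTIONS = {"oh", "wow", "hmm", "uh"}  # illustrative word list only
ADVERB_SUFFIX = "ly"                        # crude heuristic, not real POS tagging

def preprocess(text):
    kept = []
    for word in text.lower().split():
        w = word.strip(".,!?")
        if w in INTERJECTIONS or w.endswith(ADVERB_SUFFIX):
            continue  # drop words the model does not need for the response
        kept.append(w)
    return " ".join(kept)

print(preprocess("Wow, sales grew really quickly this quarter."))
# -> "sales grew this quarter"
```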
  • the learning data selector 2110-3 may select data necessary for learning from the data acquired by the learning data acquirer 2110-1 or the data pre-processed by the learning data pre-processor 2110-2.
  • the selected learning data may be provided to the model learner 2110-4.
  • the learning data selector 2110-3 may select learning data necessary for learning among the acquired or pre-processed data, depending on a predetermined selection reference.
  • the learning data selector 2110-3 may also select the learning data according to a predetermined selection reference by learning by the model learner 2110-4.
  • the learner 2110 may further include a model evaluator 2110-5 to improve the determination result of the artificial intelligence model.
  • the model evaluator 2110-5 may input evaluation data to the artificial intelligence model, and when the determination result output for the evaluation data does not satisfy the predetermined reference, the model evaluator 2110-5 may cause the model learner 2110-4 to learn again.
  • the evaluation data may be predefined data for evaluating the artificial intelligence model.
  • for example, when the number or proportion of pieces of evaluation data for which the determination result is inaccurate exceeds a predetermined threshold value, the model evaluator 2110-5 may evaluate that the predetermined reference is not satisfied.
  • the model evaluator 2110-5 may evaluate whether each of the learned artificial intelligence models satisfies the predetermined reference, and determine a model satisfying the predetermined reference as a final artificial intelligence model. In this case, when there are a plurality of models satisfying the predetermined reference, the model evaluator 2110-5 may determine any one or a predetermined number of models previously set in descending order of evaluation score as the final artificial intelligence model.
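An illustrative sketch of evaluating candidate models against a predetermined reference and keeping the best follows; evaluate, select_final_model, and the toy data are hypothetical.

```python
def evaluate(model, evaluation_data):
    # Fraction of evaluation samples whose determination result is correct.
    correct = sum(1 for x, y in evaluation_data if model(x) == y)
    return correct / len(evaluation_data)

def select_final_model(models, evaluation_data, reference=0.9):
    # Keep only the models meeting the predetermined reference, then pick
    # the one with the highest evaluation score as the final model.
    scored = [(evaluate(m, evaluation_data), m) for m in models]
    passing = [(s, m) for s, m in scored if s >= reference]
    if not passing:
        return None  # below the reference: hand back for re-learning
    return max(passing, key=lambda sm: sm[0])[1]

data = [(0, 0), (1, 1), (2, 0)]
candidates = [lambda x: x % 2, lambda x: 0]
best = select_final_model(candidates, data, reference=0.6)
print(evaluate(best, data))  # -> 1.0
```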
  • the determiner 2120 may include an input data acquirer 2120-1 and a determination result provider 2120-4.
  • the determiner 2120 may selectively further include at least one of an input data pre-processor 2120-2, an input data selector 2120-3, or a model updater 2120-5.
  • the input data acquirer 2120-1 may acquire data necessary to acquire at least one image associated with the text.
  • the determination result provider 2120-4 may acquire at least one image associated with the text by applying the input data acquired by the input data acquirer 2120-1 to the learned artificial intelligence model as an input value.
  • the determination result provider 2120-4 may acquire the determination result by applying the data selected by the input data pre-processor 2120-2 or the input data selector 2120-3 to be described later to the artificial intelligence model as an input value.
  • the determination result provider 2120-4 may acquire at least one image associated with the text by applying the text acquired by the input data acquirer 2120-1 to the learned artificial intelligence model.
  • the determiner 2120 may further include the input data pre-processor 2120-2 and the input data selector 2120-3 to improve the determination result of the artificial intelligence model or to save resources or time for provision of the determination result.
  • the input data pre-processor 2120-2 may pre-process the acquired data so that the acquired data may be used to acquire at least one image associated with the text.
  • the input data pre-processor 2120-2 may process the acquired data into a predetermined format so that the determination result provider 2120-4 may use the acquired data to acquire at least one image associated with the text.
  • the input data selector 2120-3 may select data necessary for response provision from the data acquired by the input data acquirer 2120-1 or the data pre-processed by the input data pre-processor 2120-2.
  • the selected data may be provided to the determination result provider 2120-4.
  • the input data selector 2120-3 may select some or all of the acquired or pre-processed data, depending on a predetermined selection reference for response provision.
  • the input data selector 2120-3 may also select the data according to a predetermined selection reference by learning by the model learner 2110-4.
  • the model updater 2120-5 may control the artificial intelligence model to be updated based on the evaluation for the determination result provided by the determination result provider 2120-4.
  • the model updater 2120-5 may request the model learner 2110-4 to additionally learn or update the artificial intelligence model by providing the determination result provided by the determination result provider 2120-4 to the model learner 2110-4.
  • the model updater 2120-5 may re-learn the artificial intelligence model based on the feedback information according to the user input.
  • the presentation material creation method described in the above embodiments may be applied to any field requiring images that match a text, such as books, magazines, newspapers, advertisements, webpage production, and the like.
  • the diverse embodiments described above may be implemented in software, hardware, or a combination thereof.
  • the embodiments described in the disclosure may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, or electric units for performing other functions.
  • the embodiments such as procedures and functions described in the disclosure may be implemented by separate software modules. Each of the software modules may perform one or more functions and operations described in the specification.
  • the diverse embodiments of the disclosure may be implemented in software including instructions that may be stored in machine-readable storage media readable by a machine (e.g., a computer).
  • the machine is an apparatus that invokes the stored instructions from the storage medium and is operable according to the invoked instructions, and may include the electronic apparatus (e.g., the electronic apparatus 100) according to the disclosed embodiments.
  • the processor may perform a function corresponding to the instruction, either directly or using other components under the control of the processor.
  • the instruction may include a code generated or executed by a compiler or an interpreter.
  • the machine-readable storage medium may be provided in the form of a non-transitory storage medium.
  • the term ‘non-transitory’ means that the storage medium is tangible and does not include a signal, but does not distinguish whether data is stored semi-permanently or temporarily in the storage medium.
  • the method according to the diverse embodiments disclosed in the disclosure may be included and provided in a computer program product.
  • the computer program product may be traded as a product between a seller and a purchaser.
  • the computer program product may be distributed in the form of a storage medium (for example, a compact disc read only memory (CD-ROM)) that may be read by a machine, or online through an application store (for example, PlayStore™).
  • at least a portion of the computer program product may be at least temporarily stored in a storage medium such as a memory of a server of a manufacturer, a server of an application store, or a relay server, or be temporarily generated.
  • Each of the components (e.g., modules or programs) according to the diverse embodiments may include a single entity or a plurality of entities, and some of the sub-components described above may be omitted, or other sub-components may be further included in the diverse embodiments.
  • some components (e.g., modules or programs) may be integrated into one entity and may perform the same or similar functions performed by each respective component prior to the integration.
  • the operations performed by the module, the program, or other component, in accordance with the diverse embodiments may be performed in a sequential, parallel, iterative, or heuristic manner, or at least some operations may be executed in a different order or omitted, or other operations may be added.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Databases & Information Systems (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Business, Economics & Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Probability & Statistics with Applications (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Library & Information Science (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
EP19768025.9A 2018-03-12 2019-03-12 Elektronische vorrichtung und steuerungsverfahren dafür Ceased EP3698258A4 (de)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
KR20180028603 2018-03-12
KR1020190023901A KR20190118108A (ko) 2018-03-12 2019-02-28 전자 장치 및 그의 제어방법
PCT/KR2019/002853 WO2019177344A1 (en) 2018-03-12 2019-03-12 Electronic apparatus and controlling method thereof

Publications (2)

Publication Number Publication Date
EP3698258A1 true EP3698258A1 (de) 2020-08-26
EP3698258A4 EP3698258A4 (de) 2020-11-11

Family

ID=68424433

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19768025.9A Ceased EP3698258A4 (de) 2018-03-12 2019-03-12 Elektronische vorrichtung und steuerungsverfahren dafür

Country Status (3)

Country Link
EP (1) EP3698258A4 (de)
KR (1) KR20190118108A (de)
CN (1) CN111902812A (de)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024182438A1 (en) * 2023-03-01 2024-09-06 Snap Inc. Automatic image generation in an interaction system

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11545133B2 (en) 2020-10-12 2023-01-03 Google Llc On-device personalization of speech synthesis for training of speech model(s)
KR102284539B1 (ko) * 2020-11-30 2021-08-02 주식회사 애자일소다 머신러닝 기반 인공지능 모델 학습, 개발, 배포 및 운영 시스템과 이를 이용한 서비스 방법
KR102287407B1 (ko) * 2020-12-18 2021-08-06 영남대학교 산학협력단 이미지 생성을 위한 학습 장치 및 방법과 이미지 생성 장치 및 방법
KR102280028B1 (ko) * 2021-01-26 2021-07-21 주식회사 미디어코어시스템즈 빅데이터와 인공지능을 이용한 챗봇 기반 콘텐츠 관리 방법 및 장치
CN118781225A (zh) * 2023-04-04 2024-10-15 腾讯科技(深圳)有限公司 图像处理方法、装置、电子设备、及计算机可读存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3648577B2 (ja) * 1995-12-07 2005-05-18 カシオ計算機株式会社 画像処理装置
WO2013075316A1 (en) * 2011-11-24 2013-05-30 Microsoft Corporation Interactive multi-modal image search
US9710545B2 (en) * 2012-12-20 2017-07-18 Intel Corporation Method and apparatus for conducting context sensitive search with intelligent user interaction from within a media experience
CN108351890B (zh) * 2015-11-24 2022-04-12 三星电子株式会社 电子装置及其操作方法


Also Published As

Publication number Publication date
CN111902812A (zh) 2020-11-06
KR20190118108A (ko) 2019-10-17
EP3698258A4 (de) 2020-11-11

Similar Documents

Publication Publication Date Title
WO2019177344A1 (en) Electronic apparatus and controlling method thereof
EP3698258A1 (de) Elektronische vorrichtung und steuerungsverfahren dafür
WO2020067633A1 (en) Electronic device and method of obtaining emotion information
WO2019098573A1 (en) Electronic device and method for changing chatbot
WO2019083275A1 (ko) 관련 이미지를 검색하기 위한 전자 장치 및 이의 제어 방법
WO2019203488A1 (en) Electronic device and method for controlling the electronic device thereof
WO2019146942A1 (ko) 전자 장치 및 그의 제어방법
WO2019027258A1 (en) ELECTRONIC DEVICE AND METHOD FOR CONTROLLING THE ELECTRONIC DEVICE
WO2018117704A1 (en) Electronic apparatus and operation method thereof
WO2019027259A1 (en) APPARATUS AND METHOD FOR PROVIDING SUMMARY INFORMATION USING ARTIFICIAL INTELLIGENCE MODEL
WO2019143227A1 (en) Electronic device providing text-related image and method for operating the same
WO2020080834A1 (en) Electronic device and method for controlling the electronic device
WO2018117428A1 (en) Method and apparatus for filtering video
WO2019231130A1 (ko) 전자 장치 및 그의 제어방법
EP3602334A1 (de) Vorrichtung und verfahren zur bereitstellung von zusammengefassten informationen unter verwendung eines modells der künstlichen intelligenz
WO2016126007A1 (en) Method and device for searching for image
EP3820369A1 (de) Elektronische vorrichtung und verfahren zum erhalt von gefühlsinformationen
EP3539056A1 (de) Elektronische vorrichtung und betriebsverfahren dafür
EP3523710A1 (de) Vorrichtung und verfahren zur bereitstellung eines satzes auf der basis einer benutzereingabe
WO2019132410A1 (en) Electronic device and control method thereof
WO2018101671A1 (en) Apparatus and method for providing sentence based on user input
WO2018084581A1 (en) Method and apparatus for filtering a plurality of messages
WO2020096255A1 (en) Electronic apparatus and control method thereof
EP3596667A1 (de) Elektronische vorrichtung und verfahren zur steuerung der elektronischen vorrichtung
EP3869393B1 (de) Verfahren und vorrichtung zur bilderkennung, elektronische vorrichtung und medium

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20200521

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

A4 Supplementary search report drawn up and despatched

Effective date: 20201009

RIC1 Information provided on ipc code assigned before grant

Ipc: G06F 16/58 20190101AFI20201005BHEP

Ipc: G06F 16/538 20190101ALI20201005BHEP

Ipc: G06N 20/00 20190101ALI20201005BHEP

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20220117

REG Reference to a national code

Ref country code: DE

Ref legal event code: R003

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED

18R Application refused

Effective date: 20231215